input
stringlengths
6.82k
29k
Instruction: The impact of differing anxiety disorders on outcome following an acute coronary syndrome: time to start worrying? Abstracts: abstract_id: PUBMED:19691025 The impact of differing anxiety disorders on outcome following an acute coronary syndrome: time to start worrying? Background: Both depression and anxiety have been implicated as influencing survival following an acute coronary syndrome (ACS). Studies evaluating the contribution of anxiety have produced varying results, perhaps reflecting the use of dimensional self-report measures of state anxiety and failure to control for co-morbid depression. We sought to assess the impact of anxiety on outcome in ACS patients using DSM-IV diagnoses, in addition to self-report measures, controlling for effects of concurrent depressive diagnosis as well as medical and socio-demographic variables. Methods: Some 489 patients hospitalized with an ACS were assessed for lifetime and current DSM-IV anxiety disorders using both Composite International Diagnostic Interview (CIDI) decisions and such decisions complemented by clinical judgments of impairment. Patients were re-interviewed over the next 12 months to assess cardiac outcome (ACS readmission and cardiac mortality). Results: Univariate analyses revealed a trend for those with a lifetime history of agoraphobia to experience poorer cardiac outcome and for those with a lifetime diagnosis of generalized anxiety disorder (GAD) to experience a superior cardiac outcome. After controlling for post-ACS depression and key medical and demographic covariates, agoraphobia was a significant predictor of poorer cardiac outcome while the trend for those with a history of GAD to experience a superior cardiac outcome remained. Conclusions: Any impact of "anxiety" on post-ACS outcome appears to be influenced by the clinical sub-type. The seemingly paradoxical finding that GAD might improve outcome may reflect "apprehensive worrying" being constructive, by improving self-management of the individual's cardiac problems. abstract_id: PUBMED:21652086 GAD is good? Generalized anxiety disorder predicts a superior five-year outcome following an acute coronary syndrome. While differing anxiety disorders have been reported to have quite variable impact on outcome following an acute coronary syndrome (ACS), a recent study quantified generalized anxiety disorder (GAD) as having a distinctly negative impact. We examined anxiety disorder status at baseline for any differential five-year impact on cardiac outcome following initial hospitalization for an ACS in 489 subjects. Of those initially assessed, 89% were examined at a five-year review. There were non-significant trends for all non-GAD anxiety disorders to be associated with a worse cardiac outcome. Meeting GAD criteria (both at baseline assessment and over the subjects' lifetime) was associated with a superior five-year cardiac outcome, particularly in the sub-set of those experiencing GAD as their only anxiety disorder, and after controlling for depression and medical comorbidities. As our results are at distinct variance with two previous studies specifically examining the impact of GAD on outcome in cardiac patients, we consider methodological and other explanations. We conclude that, if our findings are valid, then they may more reflect GAD patients having a 'constructive worrying' capacity and therefore being more likely to seek help in response to less severe somatic symptoms and to also be more adherent with cardiac rehabilitation programs. abstract_id: PUBMED:23169864 Outcome quality of in-patient cardiac rehabilitation in elderly patients--identification of relevant parameters. Background: Outcome quality management requires the consecutive registration of defined variables. The aim was to identify relevant parameters in order to objectively assess the in-patient rehabilitation outcome. Methods: From February 2009 to June 2010 1253 patients (70.9 ± 7.0 years, 78.1% men) at 12 rehabilitation clinics were enrolled. Items concerning sociodemographic data, the impairment group (surgery, conservative/interventional treatment), cardiovascular risk factors, structural and functional parameters and subjective health were tested in respect of their measurability, sensitivity to change and their propensity to be influenced by rehabilitation. Results: The majority of patients (61.1%) were referred for rehabilitation after cardiac surgery, 38.9% after conservative or interventional treatment for an acute coronary syndrome. Functionally relevant comorbidities were seen in 49.2% (diabetes mellitus, stroke, peripheral artery disease, chronic obstructive lung disease). In three key areas 13 parameters were identified as being sensitive to change and subject to modification by rehabilitation: cardiovascular risk factors (blood pressure, low-density lipoprotein cholesterol, triglycerides), exercise capacity (resting heart rate, maximal exercise capacity, maximal walking distance, heart failure, angina pectoris) and subjective health (IRES-24 (indicators of rehabilitation status): pain, somatic health, psychological well-being and depression as well as anxiety on the Hospital Anxiety and Depression Scale). Conclusion: The outcome of in-patient rehabilitation in elderly patients can be comprehensively assessed by the identification of appropriate key areas, that is, cardiovascular risk factors, exercise capacity and subjective health. This may well serve as a benchmark for internal and external quality management. abstract_id: PUBMED:17595368 "The rust of life": impact of anxiety on cardiac patients. Anxiety has both functionally appropriate and inappropriate consequences. Among patients with cardiac disease, anxiety can be functionally appropriate when it prompts an individual to quickly seek treatment for acute cardiac signs and symptoms. But anxiety may have medical or psychological consequences when it is persistent or severe, including difficulty adhering to prescribed treatments and making recommended lifestyle changes, adoption of or failure to change risky behaviors, increased risk for acute cardiac events, and increased risk for in-hospital complications after admission for acute coronary syndrome. Yet, because anxiety is a universal emotion that is managed without consequence by many people, its importance is often ignored by healthcare providers. The impact of psychosocial (with a major emphasis on anxiety) and behavioral variables on biological outcomes was examined systematically. The research included (1) examination and comparison of the intensity of anxiety in international samples of various critically, acutely, and chronically ill cardiac patients; (2) determination of differences between men and women in the expression of anxiety; (3) investigation of factors predictive of anxiety levels, including perceived control; (4) studies of healthcare providers' knowledge of anxiety assessment and providers' practices in assessing and managing anxiety; and (5) determination of the impact of anxiety on clinical outcomes in cardiac patients. The goal of this program of research is to improve patients' outcomes on a widespread basis by placing anxiety in the forefront of clinical cardiac practice. abstract_id: PUBMED:7498109 The impact of negative emotions on prognosis following myocardial infarction: is it more than depression? This study examine the importance of major depression symptoms, history of major depression, anxiety, anger-in, anger-out, and perceived social support, measured in the hospital after a myocardial infarction (MI), in predicting cardiac events over the subsequent 12 months in a sample of 222 patients. Cardiac events included both recurrences of acute coronary syndromes (unstable angina admissions and survived and nonsurvived MI recurrences) and probable arrhythmic events (survived cardiac arrests and arrhythmic deaths). Major depression, depressive symptoms, anxiety, and history of major depression all significantly predicted cardiac events. Multivariate analyses showed that depressive symptoms, anxiety, and history of major depression each had an impact independent of each other, as well as of measures of cardiac disease severity. abstract_id: PUBMED:27443760 Work-related outcome after acute coronary syndrome: Implications of complex cardiac rehabilitation in occupational medicine. Objectives: Coronary heart disease is frequent in the working-age population. Traditional outcomes, such as mortality and hospital readmission, are useful for evaluating prognosis. Fit-for-work is an emerging outcome with clinical as well as socioeconomic significance. We describe the possible benefit of a cardiac rehabilitation (CR) program for return to work (RTW) after acute coronary syndrome (ACS). Material And Methods: We evaluated 204 patients with recent ACS. They were divided into 4 groups on the basis of their occupational work load: very light (VL), light (L), moderate (M), and heavy (H). Work-related outcomes were assessed with the Work Performance Scale (WPS) of the Functional Status Questionnaire and as "days missed from work" (DMW) in the previous 4 weeks. The variables considered for outcomes were percent ejection fraction, functional capacity expressed in metabolic equivalents (METs), and participation or non-participation in the CR program (CR+ and CR-). Results: One hundred thirty (66%) patients took part in the CR program. Total WPS scores for CR+ and CR- subgroups were VL group: 18±4 vs. 14±4 (p < 0.001), L group: 18±3 vs. 14±3 (p < 0.0001), M group: 19±3 vs. 16±3 (p < 0.003), and H group: 20±4 vs. 17±3 (p < 0.006). Fewer DMW were reported by the CR+ group. Conclusions: Non-participation in CR was a consistent cause of poorer work-related outcomes. Our findings indicate that CR and occupational counseling play a very important role in worker recovery and subsequent reintegration in the workplace, in particular among clerical workers. abstract_id: PUBMED:26764067 Impact of Bleeding on Quality of Life in Patients on DAPT: Insights From TRANSLATE-ACS. Background: Prolonged dual antiplatelet therapy (DAPT) is recommended after an acute myocardial infarction (AMI) to reduce ischemic events but is associated with increased rates of major and minor bleeding. Objectives: This study sought to determine the incidence of post-percutaneous coronary intervention (PCI) bleeding that occurs on contemporary DAPT and its impact on quality of life (QOL). Methods: We studied 9,290 AMI patients treated with PCI and discharged alive between April 2010 and September 2012. Post-discharge bleeding was categorized according to the Bleeding Academic Research Consortium (BARC) definition. The primary outcome was the 6-month Euro QOL-5 Dimension (EQ-5D) index score (a measure of health utility); a secondary outcome was the EQ-5D visual analog scale (VAS) at 6 months. Results: Of the 9,290 patients with AMI, bleeding events occurred as follows: any BARC bleeding: 24.2%; BARC 1: 9.1%; BARC 2: 13.8%; BARC 3: 1.1%; BARC 4: 0.03%; and BARC 5: 0%. Those who experienced any BARC bleeding had lower scores across all 5 EQ-5D domains (mobility, self-care, usual activities, pain, and anxiety), as well as lower EQ-5D VAS and EQ-5D index scores. After clinical risk adjustment, any BARC bleeding was independently associated with 6-month EQ-5D index score (p < 0.0001) and lower QOL (p < 0.001). Both the EQ-5D index and the VAS score declined in a stepwise fashion with increasing BARC severity. Conclusions: Among patients undergoing PCI for AMI, bleeding during follow-up was associated with worse 6-month utility and QOL. Although even minor bleeding was associated with impaired health status and QOL, the degree of impairment increased in a stepwise fashion with bleeding severity. abstract_id: PUBMED:28992998 Predictive value of depression and anxiety for long-term mortality: differences in outcome between acute coronary syndrome and stable angina pectoris. Background: Since the early 2000s the treatment of choice for an acute myocardial infarction has moved from thrombolytic therapy to primary PCI (pPCI). As a result, the majority of patients undergoing PCI shifted from stable angina pectoris (SA) to acute coronary syndrome (ACS). Additionally the previously observed association between depression and anxiety and long-term outcome in patients who underwent a PCI may have been changed. The main objective of this study was to investigate the predictive value of depression and anxiety for 10-year mortality, in a cohort with post-PCI patients treated for SA versus patients treated for ACS. Methods: This prospective single center cohort consists of a consecutive series of patients (n=528) treated with PCI. At 1month post-PCI, patients completed the Dutch version of the Hospital Anxiety and Depression Scale (HADS). Results: After adjustment for baseline characteristics depression was associated with higher 10-year mortality post-PCI (HR 1.58 95% Confidence Interval [95% CI] 1.04-2.40). In the ACS population no association between depression and 10-year mortality was found (HR 1.05 95% CI 0.62-1.79), in contrast to the SA population (HR 1.97 95% CI 1.09-3.57). After additional adjustment for anxiety, depression was no longer associated with higher mortality. Conclusions: Anxiety at baseline was associated with an increased 10-year mortality rate after PCI. Depression was also associated with higher 10-year mortality, however the association disappeared after additional adjustment for anxiety. This finding was more pronounced in patients presenting with SA as compared to those presenting with ACS, which might be a result of the increasing number of ACS patients treated with pPCI. abstract_id: PUBMED:21275296 Transradial percutaneous coronary interventions: indications, success rates & clinical outcome. Before ten years, radial artery was discovered as a useful vascular access site for percutaneous coronary procedures. It has the advantage of reduced access site complications but is associated with specific technical challenges in comparison with the transfemoral approach. Although earlier data from a meta-analysis indicated higher procedure failure rates with radial--as compared to femoral access (7.2 vs. 2.4%), more recent data from prospective multicenter studies and large meta analysis showed significantly better outcomes with radial access versus femoral access in contemporary, real-world clinical settings of percutaneous cardiovascular procedures (e.g. PREVAIL-, PRESTO-ACS-studies). This includes also challenging coronary procedures in acute coronary syndromes (NSTEMI and STEMI) where the radial access was associated with fewer bleeding complications leading to better long-term outcomes. Transradial procedure failures can sometimes be due to variation in radial artery anatomy (e.g. vessel diameter, anomalous branching patterns, tortuosity) or risk factors for radial spasms (e.g. smoking, anxiety, vessel diameter, age, gender). Postprocedural radial occlusions (0.6-1.2%) seems strongly be related to these anatomical variances, which possibly may be reduced by the use of smaller catheter, however 5 French lumen diameter guiding catheter include limitations regarding treating options in complex coronary lesion. In conclusion, the transradial access for coronary angiography and interventions is not only to enhance patients comfort, but shows significant better long-term results due to less bleeding complications as compared to the femoral access. abstract_id: PUBMED:31175425 No impact of an extensive social intervention program on return to work and quality of life after acute cardiac event: a cluster-randomized trial in patients with negative occupational prognosis. Objectives: To examine the effectiveness of extensive social therapy intervention during inpatient multi-component cardiac rehabilitation (CR) on return to work and quality of life in patients with low probability of work resumption after an acute cardiac event. Methods: Patients after acute cardiac event with negative subjective expectations about return to work or unemployment (n = 354) were included and randomized in clusters of 3-6 study participants. Clusters were randomized for social counseling and therapy led by a social worker, six sessions of 60 min each in 3 weeks, or control group (usual care: individual counseling meeting by request). The return to work (RTW) status and change in quality of life (QoL, short form 12: Physical and Mental Component Summary PCS and MCS) 12 months after discharge from inpatient CR were outcome measures. Results: The regression model for RTW showed no impact of the intervention (OR 1.1, 95% CI 0.6-2.1, P = 0.79; n = 263). Predictors were unemployment prior to CR as well as higher anxiety values at discharge from CR. Likewise, QoL was not improved by social therapy (linear mixed model: ΔPCS 0.3, 95% CI - 1.9 to 2.5; P = 0.77; n = 177; ΔMCS 0.7, 95% CI - 1.9 to 3.3; P = 0.58; n = 215). Conclusions: In comparison to usual care, an intensive program of social support for patients during inpatient cardiac rehabilitation after an acute cardiac event had no additional impact on either the rate of resuming work or quality of life. Answer: The impact of differing anxiety disorders on outcomes following an acute coronary syndrome (ACS) has been a subject of research, with studies indicating that the type of anxiety disorder may influence the prognosis of ACS patients. Univariate analyses have shown that patients with a lifetime history of agoraphobia tend to experience poorer cardiac outcomes, while those with a history of generalized anxiety disorder (GAD) may experience a superior cardiac outcome (PUBMED:19691025). This paradoxical finding suggests that the "apprehensive worrying" characteristic of GAD might be constructive, possibly leading to better self-management of cardiac issues. Further research supports the notion that GAD can predict a superior five-year outcome following an ACS. This study found that meeting GAD criteria was associated with better cardiac outcomes, especially in patients who experienced GAD as their only anxiety disorder, even after controlling for depression and medical comorbidities (PUBMED:21652086). The authors speculate that this may be due to GAD patients' tendency to seek help for somatic symptoms and adhere to cardiac rehabilitation programs. However, it is important to note that anxiety can also have negative consequences for cardiac patients. Anxiety can lead to difficulty adhering to treatments, risky behaviors, increased risk for acute cardiac events, and in-hospital complications after admission for ACS (PUBMED:17595368). Moreover, anxiety and depression have been shown to independently predict cardiac events post-MI, with each having an impact independent of cardiac disease severity (PUBMED:7498109). In the context of cardiac rehabilitation, key parameters such as cardiovascular risk factors, exercise capacity, and subjective health (including anxiety) have been identified as sensitive to change and modifiable through rehabilitation, which can comprehensively assess the outcome of in-patient rehabilitation in elderly patients (PUBMED:23169864). Overall, while anxiety disorders can have varying impacts on the outcome following an ACS, certain types like GAD may be associated with better long-term outcomes, potentially due to constructive worrying behaviors that lead to proactive health management. However, the presence of anxiety still requires careful management as it can also have detrimental effects on cardiac patient outcomes.
Instruction: Does technical failure of revascularization during coronary artery bypass grafting predict severity of poststernotomy mediastinitis? Abstracts: abstract_id: PUBMED:15544078 Does technical failure of revascularization during coronary artery bypass grafting predict severity of poststernotomy mediastinitis? Background And Aims: To assess the impact of unsuccessful revascularization in relation to poststernotomy mediastinitis (PSM), which affects long-term outcome after coronary artery bypass grafting (CABG). Material And Methods: An active approach for the follow-up of PSM involved a step by step treatment protocol of conventional surgery and plastic reconstructive surgery. 47 patients treated for PSM after CABG were identified and further evaluated. Complete revascularization was considered unsuccessful when technical hazards were reported during CABG. When PSM subsided after thorough debridement and sternal refixation without plastic reconstructive surgery, such as omentoplasty or muscle transposition, PSM was categorized as mild PSM. If treatment required plastic reconstructive surgery, PSM was categorized as severe PSM. Preoperative coronary artery angiographic status and success of revascularization were compared to postoperative outcome in relation to mild and severe PSM. Results: 36 patients suffered from mild PSM and 11 patients from severe PSM. Preoperative clinical status did not differ among patients. Two patients (4.3 %) died during hospitalization. The need for plastic reconstructive surgery was significant (p < 0.05) among patients with unsuccessful revascularization. 35 out of 41 patients (85 %) without problems of graft anastomosis during CABG (successful revascularization) were associated with mild PSM, whereas only 6 out of 41 patients (15 %) with successful revascularization during CABG required plastic reconstructive surgery (p < 0.05). Technical failure of graft anastomosis (3 cases) or poor outflow of internal thoracic artery (2 cases) were statistically associated with severe PSM. Conclusion: Technical failures of revascularization during CABG may delay recovery from PSM. abstract_id: PUBMED:18361878 Utility of omentoplasty for poststernotomy mediastinitis secondary to myocardial revascularization surgery Acute poststernotomy mediastinitis is a serious complication of cardiac surgery and is associated with high mortality. Conservative treatment with local debridement, irrigating-suction systems, and specific antibiotic therapy is sometimes inadequate. Omentoplasty is occasionally used for reconstruction and the treatment of various chest diseases. This useful procedure is most often indicated for suppurative processes and radionecrosis, to complement myoplasty, or to cover a chest wall prosthesis. When used to treat poststernotomy mediastinitis secondary to cardiac surgery, omentoplasty improves control of infection and prognosis. We describe 2 cases of poststernotomy mediastinitis secondary to cardiac revascularization surgery in which omentoplasty was an effective treatment. Excellent control of infection was achieved. abstract_id: PUBMED:10617044 Vacuum-assisted closure in the treatment of poststernotomy mediastinitis. A new technique using vacuum-assisted closure was successfully applied in 3 patients with poststernotomy mediastinitis. After surgical debridement, this vacuum-assisted closure technique has made it possible to avoid the need for secondary surgical closure (including direct secondary surgical closure and secondary surgical closure by use of vascularized muscle flaps). A healed stable sternotomy wound can be achieved using this new technique. abstract_id: PUBMED:17062271 Late presentation of poststernotomy mediastinitis 15 years after coronary artery bypass grafting. Poststernotomy mediastinitis is a relatively rare, but potentially fatal complication of cardiac surgery. Although the vast majority of cases present within 1 month of median sternotomy, there are some reports of presentations beyond 1 year. We report a rare case of mediastinitis presenting 15 years after coronary artery bypass grafting. abstract_id: PUBMED:2348106 Poststernotomy mediastinitis treated by omental transfer--successful management of 2 cases Two patients, who had developed poststernotomy mediastinitis, underwent sternal bone debridement and omental transfer. Their previous cardiac operations were bilateral internal mammary artery-coronary artery bypass and triple valve replacement, respectively. Although their infections had been sustained even with open wound packing and irrigation, complete healing was achieved by omental transposition in both cases. From these results, omental transposition seem to be a very effective method of treatment for this serious and life-threatening complication. abstract_id: PUBMED:15746733 Acute poststernotomy mediastinitis managed with debridement and closed-drainage aspiration: factors associated with death in the intensive care unit. Objective: The purpose of the study is to describe an intensive care unit's experience in the treatment of poststernotomy mediastinitis and to identify factors associated with intensive care unit death. Methods: Over a 10-year period, 316 consecutive patients with mediastinitis occurring less than 30 days after sternotomy were treated in a single unit. First-line therapy was closed-drainage aspiration with Redon catheters. Variables recorded, including patient demographics, underlying disease classification, clinical and biologic data available at intensive care unit admission and day 3, and their association with intensive care unit mortality, were subjected to multivariate analyses. Results: Intensive care unit mortality (20.3%) was significantly associated with 5 variables available at admission: age greater than 70 years (odds ratio, 2.70), operation other than coronary artery bypass grafting alone (odds ratio, 2.59), McCabe class 2/3 (odds ratio, 2.47), APACHE II score (odds ratio, 1.12 per point), and organ failure (odds ratio, 2.07). After introducing day 3 variables into the logistic regression model, independent risk factors for intensive care unit death were as follows: age greater than 70 years, operations other than coronary artery bypass grafting alone, McCabe class 2/3, APACHE II score, mechanical ventilation still required on day 3, and persistently positive bacteremia. For patients receiving mechanical ventilation for less than 3 days, mortality was very low (2.4%). In contrast, for patients receiving mechanical ventilation for 3 days or longer, mortality reached 52.8% and was associated with non-coronary artery bypass grafting cardiac surgery, persistently positive bacteremia, and underlying disease. Conclusions: In patients requiring intensive care for acute poststernotomy mediastinitis, age, type of cardiac surgery, underlying disease, and severity of illness at the time of intensive care unit admission were associated with intensive care unit death. Two additional factors (mechanical ventilation dependence and persistently positive bacteremia) were identified when the analyses were repeated with inclusion of day 3 patient characteristics. abstract_id: PUBMED:16528987 High-pressure suction drainage for poststernotomy mediastinitis We present a case of high-pressure suction drainage for poststernotomy mediastinitis. A 67-year-old man who underwent coronary artery bypass grafting because of angina pectoris was noted massive dirty exudate fluid (putrid secretions) from median sternal wound on the 6th postoperative day. The sternal wounds were completely reopened and all sternal wires removed. Thorough debridement was performed. The mediastinum was then washed out with warn normal saline. The polyurethane foam was shaped to fit the resulting sternal defect and placed within the cavity just below the skin edge. The suction tube was inserted inside the foam and the area was covered with the adhesive drape. After 45 days suction drainage the sternal wound was closed, after which he was discharged. abstract_id: PUBMED:12365283 Arterial revascularization of coronary vessels Unlabelled: ARTERIAL GRAFTS: Atherosclerosis is the major cause of late vein graft failure and has led to the use of arterial grafts. The use of the left internal thoracic artery (ITA) is associated with improved survival which has been proven in numerous studies. Recent studies have also demonstrated a survival benefit of double over single ITA graft use. Especially in younger patients bilateral ITA grafting is recommended. Surgical Technique: In terms of dissection of the graft, the technique of skeletizing the artery seems beneficial especially in diabetics to decrease the likelihood of mediastinitis. The radial artery is mainly used as a T graft in order to achieve complete arterial revascularization. Results: While patency rates seem superior as compared to vein grafts, a survival benefit has yet to be proven. Irrespective of graft type, in-situ-grafts have a superior patency as opposed to free grafts. As has long been demonstrated with vein grafts, patency of arterial grafts also depends on the size of the recipient vessel. Grafts to the distal circumflex and distal right coronary artery have worse patency rates than grafts to the LAD. Conflicting results exist concerning the gastroepiploic artery; its use can only be recommended if the vessel is of good caliber. All other arterial conduits are used infrequently and there are no data to draw any meaningful conclusion for routine use. Arterial revascularization on the beating heart is feasible and short and midterm follow-up data are encouraging. Conclusion: Our current policy is to perform complete arterial revascularization in all patients under the age of 70 using bilateral ITAs and radial arteries. abstract_id: PUBMED:11574218 Aggressive primary treatment for poststernotomy acute mediastinitis: our experience with omental- and muscle flaps surgery. Objective: The surgical treatment of poststernotomy acute mediastinitis remains challenging. After disappointing results with a conservative management of post coronary artery bypass grafting (CABG) mediastinitis, we shifted towards a more aggressive surgical management. Methods: From March 1993 until December 1999, 32 patients (6 female/26 male), 0.5% of the total sternotomy population, were operated for mediastinitis, defined as wound and sternal dehiscence with medistinal pus and positive culture. Mean age was 66 years (32-79 years). Twenty-two patients (75%) underwent CABG and 16 patients were in New York Heart Association (NYHA) class III/IV. Results: We performed an omentoplasty in 11 patients, a pectoralis muscle flap associated with an omentoplasty in 20 patients. One patient had a bilateral pectoralis myoplasty. The reconstruction surgery occurred at an average of 11 days (6-26) after primary surgery. Twelve patients had a previous surgical drainage (1-3 surgical procedures) of the mediastinum. Hospital mortality was nine patients (28%). Seven of these patients (77%) were in NYHA IV with inotropic support. Five patients had to be reoperated on: four patients had a bilateral myoplasty after omentoplasty, one patient had an omentoplasty after a unilateral myoplasty. Late epigastric hernia was seen in three patients, two patients had wound revision and one had a retroperitoneal drainage for pancreatitis. There were no early or late flap failures. Conclusion: In our experience, omental and pectoralis flaps for poststernotomy acute mediastinitis provides good outcome of our stable patients. We would be reluctant to use it as standard therapy in our unstable patients. abstract_id: PUBMED:17768577 Off-pump total myocardial revascularization in patients with left ventricular dysfunction. Objective: To assess off-pump myocardial revascularization in patients with significant left ventricular dysfunction. Methods: Four hundred and five patients with an ejection fraction less than 35% underwent myocardial revascularization without extracorporeal circulation. The procedure was performed with the aid of a suction stabilizer and the LIMA stitch. The distal anastomoses were performed first. Results: A total of 405 patients were evaluated whose mean age was 63.4 +/- 9.78 years. Two hundred and seventy-nine patients were men (68.8%). With regard to risk factors, 347 patients were hypertensive, 194 were smokers, 202 were dyslipidemic, and 134 had diabetes. Two hundred and sixty patients were classified as NYHA functional class III and IV. Twenty patients suffered from chronic renal disease and were under dialysis. Fifty-one underwent emergency surgery, and 33 had been previously operated on. The mean ejection fraction was 27.2 +/- 3.54%. The mean EuroSCORE was 8.46 +/- 4.41. The mean number of anastomoses performed was 3.03 +/- 1.54 per patient. Forty-nine patients (12%) needed an intra-aortic balloon inserted after induction of anesthesia, whereas 73 (18%) needed inotropic support during the perioperative period. As to complications, 2 patients (0.49%) had renal failure, 2 had mediastinitis (0.49%), 7 (1.7%) needed to be reoperated because of bleeding, 5 patients (1.2%) suffered acute myocardial infarction, and 70 patients (17.3%) experienced atrial fibrillation. Eighteen (4.4%) patients died. Conclusion: Based on the data above, we concluded that myocardial revascularization without extracorporeal circulation in patients with left ventricular dysfunction is a safe and effective technique, and an alternative for high-risk patients. Results obtained were better than those predicted by EuroSCORE. Answer: Yes, technical failure of revascularization during coronary artery bypass grafting (CABG) does predict the severity of poststernotomy mediastinitis (PSM). A study found that technical failures of revascularization during CABG, such as technical failure of graft anastomosis or poor outflow of the internal thoracic artery, were statistically associated with severe PSM. The need for plastic reconstructive surgery, which is indicative of severe PSM, was significantly higher among patients with unsuccessful revascularization. In contrast, successful revascularization during CABG was associated with mild PSM, with 85% of patients without problems of graft anastomosis experiencing mild PSM and only 15% requiring plastic reconstructive surgery (PUBMED:15544078).
Instruction: Outcomes in people after stroke attending an adapted cardiac rehabilitation exercise program: does time from stroke make a difference? Abstracts: abstract_id: PUBMED:24709146 Outcomes in people after stroke attending an adapted cardiac rehabilitation exercise program: does time from stroke make a difference? Background: Individuals referred to cardiac rehabilitation programs (CRPs) after stroke have demonstrated postprogram improvements in cardiovascular fitness (VO2peak). However, the effect of CRPs on other physiological/quality-of-life outcomes and effect of time from stroke on these results has not been investigated. The objectives of the present study are (1) to evaluate the effects of a CRP in participants with motor impairment after stroke and (2) to explore the effects of elapsed time from stroke on physiological/quality-of-life outcomes. Methods: The CRP included 24 weeks of resistance and aerobic training. Primary outcomes in 120 participants, 25.4±42.3 (mean±standard deviation) months after stroke, included 6-minute walk distance (6MWD), VO2peak, timed repeated sit-to-stand performance, and affected-side isometric knee extensor strength (IKES). Secondary measures included gait characteristics (cadence, step lengths, and symmetry), walking speed, balance (Berg Balance Scale), affected-side range of motion (ROM), elbow flexor and grip strength, anaerobic threshold, and perceptions of participation/social reintegration. Results: After adjusting for multiple comparisons, participants demonstrated significant improvements (all P<.001) in 6MWD (283.2±126.6 to 320.7±141.8 m), sit-to-stand performance (16.3±9.5 to 13.3±7.1 seconds), affected-side IKES (25.9±10.1 to 30.2±11 kg as a percentage of body mass), and VO2peak (15.2±4.5 to 17.2±4.9 mL·kg·min(-1)). Participants also demonstrated post-CRP improvements in secondary outcomes: anaerobic threshold, balance, affected-side hip/shoulder ROM, grip and isometric elbow flexor strength, participation, walking speed, cadence (all P<.001), and bilateral step lengths (P<.04). In a linear regression model, there was a negative association between the change in 6MWD and time from stroke (β=-42.1; P=.002) independent of baseline factors. Conclusions: A CRP yields improvements over multiple domains of recovery; however, those who start earlier demonstrate greater improvement in functional ambulation independent of baseline factors. These data support the use of adapted CRPs as a standard of care practice after conventional stroke rehabilitation. abstract_id: PUBMED:34929696 Exercise-Based Cardiac Rehabilitation Associates with Lower Major Adverse Cardiovascular Events in People with Stroke. Background: The risk of major adverse cardiovascular events is substantially increased following a stroke. Although exercise-based cardiac rehabilitation has been shown to improve prognosis following cardiac events, it is not part of routine care for people following a stroke. We therefore investigated the association between cardiac rehabilitation and major adverse cardiovascular events for people with stroke. Methods: This retrospective analysis was conducted on June 20, 2021, using anonymized data within TriNetX, a global federated health research network with access to electronic medical records from participating healthcare organizations, predominantly in the USA. All participants were aged ≥18 years with cerebrovascular disease and at least 2 years of follow-up. People with stroke and an electronic medical record of exercise-based cardiac rehabilitation were 1:1 propensity score matched to people with stroke but without cardiac rehabilitation using participant characteristics, comorbidities, cardiovascular procedures, and cardiovascular medications. Results: Of 836,923 people with stroke and 2-year follow-up, 2,909 met the inclusion for the exercise-based cardiac rehabilitation cohort. Following propensity score matching (n = 5,818), exercise-based cardiac rehabilitation associated with 53% lower odds of all-cause mortality (odds ratio 0.47, 95% confidence interval: 0.40-0.56), 12% lower odds of recurrent stroke (0.88, 0.79-0.98), and 36% lower odds of rehospitalization (0.64, 0.58-0.71), compared to controls. No significant association between cardiac rehabilitation and incident atrial fibrillation was observed. Conclusion: Exercise-based cardiac rehabilitation prescribed for people following a stroke associated with significantly lower odds of major adverse cardiovascular events at 2 years, compared to usual care. abstract_id: PUBMED:31588512 Eligibility, Enrollment, and Completion of Exercise-Based Cardiac Rehabilitation Following Stroke Rehabilitation: What Are the Barriers? Background: People after stroke benefit from comprehensive secondary prevention programs including cardiac rehabilitation (CR), yet there is little understanding of eligibility for exercise and barriers to use. Objective: The aim of this study was to examine eligibility for CR; enrollment, adherence, and completion; and factors affecting use. Design: This was a prospective study of 116 consecutive people enrolled in a single outpatient stroke rehabilitation (OSR) program located in Toronto, Ontario, Canada. Methods: Questionnaires were completed by treating physical therapists for consecutive participants receiving OSR and included reasons for CR ineligibility, reasons for declining participation, demographics, and functional level. CR eligibility criteria included the ability to walk ≥100 m (no time restriction) and the ability to exercise at home independently or with assistance. People with or without hemiplegic gait were eligible for adapted or traditional CR, respectively. Logistic regression analyses were used to examine factors associated with use indicators. Results: Of 116 participants receiving OSR, 82 (70.7%) were eligible for CR; 2 became eligible later. Sixty (71.4%) enrolled in CR and 49 (81.7%) completed CR, attending 87.1% (SD = 16.6%) of prescribed sessions. The primary reasons for ineligibility included being nonambulatory or having poor ambulation (52.9%; 18/34 patients) and having severe cognitive deficits and no home exercise support (20.6%; 7/34). Frequently cited reasons for declining CR were moving or travel out of country (17.2%; 5/29 reasons), lack of interest (13.8%; 4/29), transportation issues (10.3%; 3/29), and desiring a break from therapy (10.3%; 3/29). In a multivariate analysis, people who declined CR were more likely to be women, have poorer attendance at OSR, and not diabetic. Compared with traditional CR, stroke-adapted CR resulted in superior attendance (66.1% [SD = 22.9%] vs 87.1% [SD = 16.6%], respectively) and completion (66.7% vs 89.7%, respectively). The primary reasons for dropping out were medical (45%) and moving (27%). Limitations: Generalizability to other programs is limited, and other, unmeasured factors may have affected outcomes. Conclusions: An OSR-CR partnership provided an effective continuum of care, with approximately 75% of eligible people participating and more than 80% completing. However, just over 1 of 4 eligible people declined participation; therefore, strategies should target lack of interest, transportation, women, and people without diabetes. An alternative program model is needed for people who have severe ambulatory or cognitive deficits and no home exercise support. abstract_id: PUBMED:37087771 Sex differences and predictors of completion of a 6-month exercise-based cardiac rehabilitation program in 1,536 people following stroke. Objective: To retrospectively examine sex-differences and predictors of completion in consecutively-referred patients to a 6-month exercise-based cardiac rehabilitation program (CRP) from 2006 to 2017. Materials/methods: People with hemiplegic gait participated in stroke-adapted-CRP; otherwise, traditional-CRP. Reasons for non-completion were ascertained by interview. Regression-analyses were conducted to determine non-completion in all patients and women and men separately. Results: There were 1536 patients (30.3% women), mean age 64.5 ± 12.5 with 23% initiating the stroke-adapted-CRP. Overall, 75.1% completed the CRP (87.3% stroke-adapted-CRP vs 71.5% traditional-CRP; p < .001). There was no difference in completion between women and men (74.5% vs 75.4%; p=0.7), or in attendance to pre-scheduled sessions (p=0.6) or reasons for non-completion (p > .05, all). The only sex difference in completion by age (decade) occurred in those <41 years (59% women vs 85% men; p=.02). Baseline predictors of non-completion among all patients included not being enrolled in the stroke-adapted-CRP, lower V̇O2peak, smoking, diabetes (prescribed insulin) and depression but not sex (p=.5) or age (p=.15). Unique predictors in women vs men were younger age, lower V̇O2peak, smoking, diabetes (prescribed insulin), depression, and cancer diagnoses. Unique to men was having >1 stroke and diabetes (any anti-diabetes medication). The strongest predictor of non-completion among all models was not being enrolled in stroke-adapted-CRP. Conclusions: While there were no sex-differences in adherence to the CRP, women and men have mostly unique predictors of non-completion. Younger women are at greatest risk for non-completion. Practitioners should provide sex-specific, tailored strategies for enhancing completion with a focus on younger women and offering a stroke-adapted-CRP with close attention to those with diabetes. abstract_id: PUBMED:31764536 Pathophysiology of Exercise Intolerance and Its Treatment With Exercise-Based Cardiac Rehabilitation in Heart Failure With Preserved Ejection Fraction. Heart failure with preserved ejection fraction (HFpEF) is the fastest growing form of heart failure in the United States. The cardinal feature of HFpEF is reduced exercise tolerance (peak oxygen uptake, (Equation is included in full-text article.)O2peak) secondary to impaired cardiac, vascular, and skeletal muscle function. There are currently no evidence-based drug therapies to improve clinical outcomes in patients with HFpEF. In contrast, exercise training is a proven effective intervention for improving (Equation is included in full-text article.)O2peak, aerobic endurance, and quality of life in HFpEF patients. This brief review discusses the pathophysiology of exercise intolerance and the role of exercise training to improve (Equation is included in full-text article.)O2peak in clinically stable HFpEF patients. It also discusses the mechanisms responsible for the exercise training-mediated improvements in (Equation is included in full-text article.)O2peak in HFpEF. Finally, it provides evidence-based exercise prescription guidelines for cardiac rehabilitation specialists to assist them with safely implementing exercise-based cardiac rehabilitation programs for HFpEF patients. abstract_id: PUBMED:31177833 Exercise-based cardiac rehabilitation in patients with reduced left ventricular ejection fraction: The Cardiac Rehabilitation Outcome Study in Heart Failure (CROS-HF): A systematic review and meta-analysis. Background: In heart failure with reduced left ventricular ejection fraction (HFrEF) patients the effects of exercise-based cardiac rehabilitation on top of state-of-the-art pharmacological and device therapy on mortality, hospitalization, exercise capacity and quality-of-life are not well established. Design: The design of this study involved a structured review and meta-analysis. Methods: Evaluation of randomised controlled trials of exercise-based cardiac rehabilitation in HFrEF-patients with left ventricular ejection fraction ≤40% of any aetiology with a follow-up of ≥6 months published in 1999 or later. Results: Out of 12,229 abstracts, 25 randomised controlled trials including 4481 HFrEF-patients were included in the final evaluation. Heterogeneity in study population, study design and exercise-based cardiac rehabilitation-intervention was evident. No significant difference in the effect of exercise-based cardiac rehabilitation on mortality compared to control-group was found (hazard ratio 0.75, 95% confidence interval 0.39-1.41, four studies; 12-months follow-up: relative risk 1.29, 95% confidence interval 0.66-2.49, eight studies; six-months follow-up: relative risk 0.91, 95% confidence interval 0.26-3.16, seven studies). In addition there was no significant difference between the groups with respect to 'hospitalization-for-any-reason' (12-months follow-up: relative risk 0.79, 95% confidence interval 0.41-1.53, four studies), or 'hospitalization-due-to-heart-failure' (12-months follow-up: relative risk 0.59, 95% confidence interval 0.12-2.91, four studies; six-months follow-up: relative risk 0.84, 95% confidence interval 0.07-9.71, three studies). All studies show improvement of exercise capacity. Participation in exercise-based cardiac rehabilitation significantly improved quality-of-life as evaluated with the Kansas City Cardiomyopathy Questionnaire: (six-months follow-up: mean difference 1.94, 95% confidence interval 0.35-3.56, two studies), but no significant results emerged for quality-of-life measured by the Minnesota Living with Heart Failure Questionnaire (nine-months or more follow-up: mean difference -4.19, 95% confidence interval -10.51-2.12, seven studies; six-months follow-up: mean difference -5.97, 95% confidence interval -16.17-4.23, four studies). Conclusion: No association between exercise-based cardiac rehabilitation and mortality or hospitalisation could be observed in HFrEF patients but exercise-based cardiac rehabilitation is likely to improve exercise capacity and quality of life. abstract_id: PUBMED:35452920 Adapted cardiac rehabilitation for people with sub-acute, mild-to-moderate stroke: a mixed methods feasibility study. Objective: To determine the recruitment strategy, acceptability, adherence, outcome measures, and adverse events for a definitive study that will explore adapted cardiac rehabilitation (CR) for people post-stroke with mild-to-moderate severity stroke in the sub-acute stage of recovery. Design: Mixed methods feasibility study. Setting: Acute hospital setting, neurology outpatients and community hospitals. Participants: 32 participants with stroke (mean age: 64.4 years) of median National Institutes of Health Stroke Scale (NIHSS) score 2 (range: 0 to 6) within six months of stroke. Intervention: All participants attended six weeks, adapted CR within one to six months after a stroke. A combined class with people post cardiac event. Main Outcome Measures: Incremental shuttle walk test (ISWT), blood pressure, heart rate, weight, body mass index, quality of life, fatigue, anxiety and depression, tone, falls, stroke attitude and knowledge, physical activity (accelerometry) and functional ability. Qualitative: Interviews with participants, non-participants and people post-cardiac event. Focus groups with Stroke and CR teams. Results: 32 participants were recruited. The programme was acceptable to people with mild stroke (NIHSS<3) and people post cardiac events; 80% of classes attended, a mean of 9.6 classes, with six drop-outs. The ISWT was an acceptable outcome measure (for NIHSS<3) and most measures showed positive changes. There was one adverse event. Conclusion: A definitive study to determine the effect of six weeks of adapted CR on cardiorespiratory fitness (CRf) in people who have had a mild severity stroke (NIHSS<3) in the sub-acute phase of recovery, is feasible. Teams need specialist education and support. A more specialist service may be needed for people with a stroke severity defined by NIHSS>2. Clinical Trial Registration Number: ISRCTN14861846. abstract_id: PUBMED:34942234 Cardiac rehabilitation for heart transplant patients: Considerations for exercise training. Heart transplantation (HT) is the treatment of choice for eligible patients with end-stage chronic heart failure (HF). One-year survival world-wide is >85%. Many patients experience a reasonable functional ability post-HT, but episodes of acute rejection, as well as multiple co-morbidities such as hypertension, diabetes, chronic kidney disease and cardiac allograft vasculopathy are common. Immunosuppression with prednisone frequently results in increased body fat and skeletal muscle atrophy. Exercise capacity is below normal for most patients with a mean peak oxygen uptake (VO2) of approximately 60% of expected. HT recipients have abnormal exercise physiology findings related to surgical cardiac denervation, diastolic dysfunction, and the legacy of reduced skeletal muscle oxidative capacity and impaired vasodilatory ability resulting from pre-HT chronic HF. The heart rate response to exercise is blunted. Cardiac reinnervation resulting in partial normalization of the heart rate response to exercise occurs in approximately 40% of HT recipients months to years after HT. Supervised exercise training in cardiac rehabilitation (CR) programs is safe and is recommended by professional societies both before (pre-habilitation) and after HT. Exercise training does not require alteration in immunosuppressants. Exercise training in adults after HT improves peak VO2 and skeletal muscle strength. It has also been demonstrated to reduce the severity of cardiac allograft vasculopathy. In addition, CR exercise training is associated with reduced stroke risk, percutaneous coronary intervention, hospitalization for either acute rejection or HF, and death. There are only limited data for exercise training in the pediatric population. abstract_id: PUBMED:27369843 Impact of exercise-based cardiac rehabilitation on long-term clinical outcomes in patients with left main coronary artery stenosis. Background: Cardiac rehabilitation may reduce cardiovascular mortality and morbidity rates in patients with coronary artery disease. The specific relationship between cardiac rehabilitation participation and left main coronary artery stenosis is unknown. Design: Prospective registry analysis. Methods: Data from a registry of 3120 patients with left main coronary artery stenosis, collected between 1995 and 2010, were analyzed. A total of 596 patients (19.6%) had participated in a cardiac rehabilitation program. Results: The crude seven-year mortality, cardiovascular mortality, target-vessel revascularization and myocardial infarction rates were significantly lower in the cardiac rehabilitation than in the non-rehabilitation group, and the incidence of stroke tended to be lower in the cardiac rehabilitation group. Multivariate analysis showed that cardiac rehabilitation was associated with significantly lower risks of mortality (hazard ratio (HR), 0.70; 95% confidence interval (CI), 0.49-1.00; p = 0.05) and cardiovascular mortality (HR, 0.69; 95% CI, 0.48-0.97; p = 0.03). However, the adjusted HR of myocardial infarction, target-vessel revascularization and stroke did not differ significantly. Propensity score-matched analysis of 507 pairs showed that cardiac rehabilitation was associated with significant decreases in mortality (HR, 0.62; 95% CI, 0.43-0.89; p = 0.009) and cardiovascular mortality (HR, 0.54; 95% CI, 0.36-0.80; p = 0.002) but had no beneficial effects on myocardial infarction, target-vessel revascularization or stroke. Conclusions: Exercise-based cardiac rehabilitation participation was associated with significant decreases in mortality. Our findings show that cardiac rehabilitation programs should be implemented in standard management for patients with left main coronary artery stenosis. abstract_id: PUBMED:20529376 Feasibility and effects of adapted cardiac rehabilitation after stroke: a prospective trial. Background: Despite the cardiovascular etiology of stroke, exercise and risk factor modification programs akin to cardiac rehabilitation (CR) are not available. This study aimed to establish the feasibility of adapting a CR model for individuals with mild to moderate stroke disability. A secondary objective was to determine the program's effects on aerobic and walking capacity, and stroke risk factors. Methods: A repeated measures design was used with a 3-month baseline period and 6-month adapted CR intervention (n = 43, mean +/- SD age 65 +/- 12 years, 30 +/- 28 months post stroke). Feasibility was determined by the number of participants who completed the study, occurrence of adverse events and frequency, duration and intensity of exercise performed. To determine effectiveness of the program, outcomes measured included aerobic capacity (VO2peak, ventilatory threshold), 6-Minute Walk Test (6MWT) distance, and risk factors. Descriptive statistics characterized the classes attended and number and intensity of exercise sessions. Paired t-tests, one-factor repeated measures analyses of variance contrasts and chi-square analyses were used to compare changes over time. Results: Two participants withdrew during the baseline period. Of the remaining 41 participants who commenced the program, 38 (93%) completed all aspects. No serious adverse effects occurred. Post-intervention, VO2peak improved relative to the stable baseline period (P = 0.046) and the increase in ventilatory threshold approached significance (P = 0.062). Conclusions: CR is feasible after stroke and may be adapted to accommodate for those with a range of post-stroke disability. It is effective in increasing aerobic capacity. CR may be an untapped opportunity for stroke survivors to access programs of exercise and risk factor modification to lower future event risk. Trial Registration: ClinicalTrials.gov registration number: NCT01067495. Answer: Yes, time from stroke does make a difference in the outcomes for people attending an adapted cardiac rehabilitation exercise program. A study found that participants demonstrated significant improvements in various primary outcomes such as 6-minute walk distance (6MWD), VO2peak, timed repeated sit-to-stand performance, and affected-side isometric knee extensor strength (IKES) after completing a 24-week cardiac rehabilitation program (CRP). Secondary outcomes such as anaerobic threshold, balance, affected-side hip/shoulder range of motion (ROM), grip and isometric elbow flexor strength, participation, walking speed, and cadence also showed improvements. However, the study also found a negative association between the change in 6MWD and time from stroke, indicating that those who started the CRP earlier demonstrated greater improvement in functional ambulation independent of baseline factors (PUBMED:24709146). This finding is supported by the fact that exercise-based cardiac rehabilitation has been associated with significantly lower odds of major adverse cardiovascular events at 2 years compared to usual care for people following a stroke (PUBMED:34929696). Moreover, a prospective study of 116 consecutive people enrolled in an outpatient stroke rehabilitation program found that approximately 75% of eligible people participated in CR and more than 80% completed it. However, about one in four eligible people declined participation, suggesting that strategies should target lack of interest, transportation, women, and people without diabetes to improve participation rates (PUBMED:31588512). Furthermore, a retrospective examination of sex differences and predictors of completion in a 6-month exercise-based CRP found that while there were no sex differences in adherence to the CRP, women and men have mostly unique predictors of non-completion. The strongest predictor of non-completion among all models was not being enrolled in stroke-adapted-CRP (PUBMED:37087771). In conclusion, the time from stroke does influence the outcomes of stroke survivors attending an adapted cardiac rehabilitation exercise program, with earlier participation after stroke associated with greater improvements in functional outcomes.
Instruction: Do the incontinent patients improve their equality of life using a humidity detector device? Abstracts: abstract_id: PUBMED:12938580 Do the incontinent patients improve their equality of life using a humidity detector device? Background: To evaluate the effect of a humidity detector device on the quality of life of patients with urinary incontinence IU. Method: Quasi-experimental study: a series of ten cases followed for a month. The devices were placed and the questionnaires filled in before and after using it for at least ten hours a day during a month. Health related quality of life was assessed through the questionnaires for IU convalidated and adapted to our specific environment: Urogenital Inventory Distress (UDI) and Incontinence Impact Questionary (IIQ). An improvement of 58 points by a four option Likert scale was considered a positive impact in the quality of life (IIQ). The scores obtained in UDI and IIQ are described before and after use the device and paried T test and Wilcoxon sign test were carried out to compare the scores obtained in each instance. The capacity to detect a difference of 58 points on the UDI scale was calculated (minimum relevant difference). A binomial test was undertaken to ascertain a probability of achieving an increase in the above mentioned index which would exceed the clinical relevance threshold. Results: Average increase in IIQ improvement: x = 5.48 (Std Error = 20.43) 95% CI = -34.56 to 45.56. Average increase in UDI improvement: x = -11.87 (Std Error = 20.70) 95% IC = -52.45 to 28.70. The power of the analysis to detect as relevant a difference of a 588 point increase in IIQ 71.1% and probability of obtaining a relevant improvement in the questionnaire IIQ 10% (IC 95%) 0% to 39.4%. Conclusions: A negative impact in the Quality of Life due to frequent changes of incontinence pads. abstract_id: PUBMED:37374763 Humidity-Sensing Mattress for Long-Term Bedridden Patients with Incontinence-Associated Dermatitis. Designing new medical devices with advanced humidity sensors is of great significance for patients with incontinence-associated dermatitis (IAD). The primary goal of this study is to test the humidity-sensing mattress system for patients with IAD in clinical settings. The design of the mattress is set at 203 cm, with 10 × 3 sensors, dimensions of 19 × 32 cm, and a weighted bearing of 200 kg. The main sensors consist of a humidity-sensing film, a thin-film electrode (6 × 0.1 mm), and a glass substrate (500 nm). The sensitivity of the test mattress system showed that the resistance-humidity sensor was at a temperature of 35 °C (V0 = 30 V, V0 = 350 mV), with slope at 1.13 V/fF, f = 1 MHz, 20-90% RH, and a response time of 20 s at 2 μm. In addition, the humidity sensor reached 90% RH, with a response time of less than 10 s, a magnitude of 107-104 Ω, 1 mol%, CrO1.5, and FO1.5, respectively. This design is not only a simple, low-cost medical sensing device, but also opens a new pathway for developing humidity-sensing mattresses in the field of flexible sensors, wearable medical diagnostic devices, and health detection. abstract_id: PUBMED:28254445 New device for intermittent emptying of the bladder in female children and adolescents: A pilot study. Introduction: Urinary incontinence (UI) is a challenging problem for the urological community. Clean intermittent catheterization (CIC) is the most commonly used method to restore bladder emptying to the state close to the physiological condition. This procedure can cause negative aspects such as pain and possible urethral injury. In addition, there is a negative impact on self-image and decline in quality of patient's life. The aim of the present study was to evaluate the safety and efficacy of a new intraurethral self-retaining device (ISRD), in female children and adolescents, as an attractive alternative to CIC. Materials And Methods: A prospective clinical pilot study was performed, in a single-institution, including female children and adolescent patients with urinary incontinence secondary to myelomeningocele who were already in an intermittent bladder catheterization program. Assessments included the use of a visual analogue scale in diagnosis of UI, reported adverse events, and the King's Health Questionnaire (KHQ) to evaluate quality of life, which was answered by patients and the caregivers of younger patients before and 6 months after enrolment. Discussion: The device was efficient and well tolerated by most patients (84%). The ISRD significantly improved quality of life in children and adolescents (p<0.0001 for both) (Table). The safe insertion and removal of the ISRD can be considered an advantage compared with CIC as eventual urethral trauma is significantly minimized. It was found that one of the main advantages of the ISRD is the possibility of management as an outpatient procedure both for initial insertion and replacement. The autonomy of patients to perform the bladder emptying process by themselves indicates the importance of this study, especially for school life and social interaction. ISRD use showed a tendency to improve the QoL. Conclusions: This new bladder-draining device (ISRD) was effective and secure in terms of insertion technique, and improved QoL of patients with urinary retention and incontinence, according to domains evaluated in the KHQ. abstract_id: PUBMED:25189504 New device and new concept for treating nocturnal enuresis: preliminary results of a phase one study. Objective: This new device for nocturnal enuresis (NE) consists of a humidity sensor, which beyond activating the sound also triggers an electrical stimulus, contracts the pelvic floor muscles and closes the urethra, thereby interrupting the void. The aim of this study is to test if the theoretical principle described above is true and if the device used is safe. As a secondary endpoint, we studied the efficacy of this device in a small number of patients with NE. Material And Methods: The age of the patients ranged from 7 to 20 years old, with an average of 11 years. Two surface electrodes are placed at 3 and 9 o'clock on the perineum area. When the humidity sensor is activated it triggers an electrical circuit with a current frequency of 50 Hz. After 20 s of perineal contraction, if the patient does not turn down the device, a buzz sounds and the parents or the child will wake up. In this way, the child is taught to go to the toilet and void. All patients who used the device were also treated with behavior modifications. The definition of resolution of NE was 1 month of dry consecutive nights. Results: All patients had daily or almost daily NE (two failed with DDAVP and alarm treatment). Four patients had monosymptomatic NE and two had the non-monosymptomatic form. In five patients the device worked as expected and one patient continued wetting the bed and dropped out of the treatment. The five patients who kept using the device had the symptoms resolved completely. There were no recognizable side effects associated with this treatment. Conclusion: The presented device works as a conventional enuresis alarm with the addition of a pelvic floor contraction. This has the advantage of the children not wetting the bed during treatment. This principle was proved by this study and the device demonstrated itself to be safe. abstract_id: PUBMED:10074989 Effect of a vaginal device on quality of life with urinary stress incontinence. Objective: To assess the effect of a vaginal device (Continence Guard) on urine leakage and quality of life. Methods: Fifty-five women with stress incontinence participated in a 3-month study. They were assessed by the Incontinence Impact Questionnaire, two incontinence-related quality-of-life questions, a generic quality-of-life questionnaire (Short Form-36), two 24-hour home pad weighing tests, a 2-day voiding diary, uroflowmetry, urine cultures, and a questionnaire about subjective effectiveness of the device. Results: Forty-one (74.5%) women completed the study. Estimated on an intent-to-treat basis, the vaginal device was associated with subjective cure in 11 women (20%) and improvement in 27 (49%). The mean 24-hour pad test leakage and leakage episodes in the voiding diary decreased significantly. Fifty-eight percent of the 55 women enrolled wanted to continue using the device after 3 months. The quality of life measured by the Incontinence Impact Questionnaire showed highly significant improvement, and the results of the two incontinence-related quality of life questions also showed significant improvement. Responses to the Short Form-36 general health questionnaire showed no significant changes. Improvement on the Incontinence Impact Questionnaire correlated with improvements in incontinence, whereas the Short Form-36 scores were unchanged. Conclusion: Treatment with the Continence Guard significantly decreases leakage and improves quality of life in women with symptoms of urinary stress incontinence. An incontinence-specific, rather than a generic, quality-of-life questionnaire was important in assessing treatment outcomes. abstract_id: PUBMED:16806654 Gynaecological operations: do they improve sexual life? Aim of the study was to determine if gynaecological operations have an effect on sexual function using the current medlined literature. We performed a Medline search using the terms "sexual life/function after operative gynaecological treatment", "sexual life/function after operations for gynaecological problems", "sexual life/function after hysterectomy", "sexual life/function, incontinence" and "sexual life/function, pelvic organ prolapse". Reviews were excluded. We divided the operations into four groups of (1) combined prolapse and incontinence operations, (2) prolapse operations only, (3) incontinence operations only and (4) hysterectomy and compared pre-to postoperative sexual outcome. Thirty-six articles including 4534 patients were identified. Only 13 studies used a validated questionnaire. The other authors used self-designed and non-validated questionnaires or orally posed questions by the examiner to determine sexual function. Prolapse operations particularly posterior repair using levator plication seem to deteriorate sexual function, incontinence procedure have some worsening effect on sexual function and hysterectomy seems to improve sexual function with no differences between subtotal or total hysterectomy. Gynaecological operations do influence sexual function. However, little validated data are available to come to this conclusion. abstract_id: PUBMED:8940866 Use of a micturition-monitoring device in elderly inpatients More than 100,000 people suffer from functional urinary incontinence in Japan. To improve the quality of life of these people, we tested a device for monitoring micturition by means of a thin-layer membrane sensor in a diaper. The device was tested in elderly inpatients, and associated changes in the wordload of the nursing staff were also recorded. The device was beneficial because the precise micturition time was easily obtained and thus the daily pattern of micturition became obvious. With the use of this device, the time during which the patients wore wet diapers was shorter than with the conventional system of scheduled changes. However, the increase in the frequency of diaper changed doubled the workload of the nursing staff. In conclusion, improving the quality of life with regard to urinary incontinence entails an increase in the workload of the nursing staff; efficient distribution of that work may require some staff members to devote all their time to changing diapers. abstract_id: PUBMED:27926562 Long-term Experience of Magnetic Anal Sphincter Augmentation in Patients With Fecal Incontinence. Background: Magnetic anal sphincter augmentation is a novel technique for the treatment of patients with fecal incontinence. Objective: The current study reports the long-term effectiveness and safety of this new treatment modality. Design: This was a prospective multicenter pilot study. Settings: The study was performed at 4 clinical sites in Europe and the United States. Patients: The cohort included patients with severe fecal incontinence for ≥6 months who had previously failed conservative therapy and were implanted with a magnetic anal sphincter device between 2008 and 2011. Main Outcome Measures: Adverse events, symptom severity, quality of life, and bowel diary data were collected. Results: A total of 35 patients (34 women) underwent magnetic anal sphincter augmentation. The median length of follow-up was 5.0 years (range, 0-5.6 years), with 23 patients completing assessment at 5 years. Eight patients underwent a subsequent operation (7 device explantations) because of device failure or complications, 7 of which occurred in the first year. Therapeutic success rates, with patients who underwent device explantation or stoma creation counted as treatment failures, were 63% at year 1, 66% at year 3 and 53% at year 5. In patients who retained their device, the number of incontinent episodes per week and Cleveland Clinic incontinence scores significantly decreased from baseline, and there were significant improvements in all 4 scales of the Fecal Incontinence Quality of Life instrument. There were 30 adverse events reported in 20 patients, most commonly defecatory dysfunction (20%), pain (14%), erosion (11%), and infection (11%). Limitations: This study does not allow for comparison between surgical treatments and involves a limited number of patients. Conclusions: Magnetic anal sphincter augmentation provided excellent outcomes in patients who retained a functioning device at long-term follow-up. Protocols to reduce early complications will be important to improve overall results. abstract_id: PUBMED:32129447 Health-related quality of life in Japanese patients with prostate cancer following proton beam therapy: an institutional cohort study. Objective: Many treatment options have guaranteed long-term survival in patients with localized prostate cancer and health-related quality of life has become a greater concern for those patients. The purpose of this study was to reveal the health-related quality of life after proton beam therapy and to clarify the differences from other treatment modalities for prostate cancer. Methods: Between January 2011 and April 2016, 583 patients were enrolled in the study and health-related quality of life outcomes using the Expanded Prostate Cancer Index Composite questionnaire were evaluated and compared with previous research targeted at Japanese patients. Results: We found a significant decrease in the least square mean scores for urinary and bowel domains excluding the incontinence subscale after proton beam therapy (P < 0.0001) and recovery at a year following treatment. The scores for sexual function in patients without androgen deprivation therapy decreased each year after proton beam therapy (P < 0.0001). The scores for hormones in patients without androgen deprivation therapy remained high and those of patients with androgen deprivation therapy were lower before treatment but were comparable to those of non-androgen deprivation therapy patients at 2 years post-treatment. We found that the impact of radiotherapy including proton beam therapy on urinary condition and sexual function was lower than that of surgery. Conclusions: For the first time in Japan, we investigated health-related quality of life using Expanded Prostate Cancer Index Composite questionnaires in patients with prostate cancer after proton beam therapy and compared it with other treatment modalities. abstract_id: PUBMED:10850193 The effect of a vaginal device on urinary leakage and quality of life of women with stress urinary incontinence The aim of this study was to assess the effect of a vaginal device (Continence Guard) on urine leakage and its impact on quality of life (QoL). Fifty-five women participated in a three month study using the Continence Guard. QoL was assessed by an incontinence specific questionnaire (IIQ), two incontinence specific questions and the generic SF-36 health questionnaire. A total of 41 (74.5%) women completed the study. Use of the vaginal device was associated with subjective cure in 11 women (27%) and improvement in 27 (66%). The mean 24-hour pad test leakage decreased significantly. QoL measured by the IIQ and the two incontinence specific questions showed highly significant improvements. The SF-36 questionnaire showed no significant changes. In conclusion treatment with the Continence Guard significantly decreases leakage and improves QoL in women with the symptom of urinary stress incontinence. The SF-36 questionnaire was not sensitive enough to detect alterations in QoL in patients with stress urinary incontinence. Answer: The studies on the use of various devices for incontinence suggest mixed results regarding the improvement of quality of life (QoL) for patients. One study evaluating the effect of a humidity detector device on the QoL of patients with urinary incontinence found that there was a negative impact on QoL due to frequent changes of incontinence pads, indicating that the device did not improve QoL (PUBMED:12938580). However, this study's results are not conclusive as the power of the analysis to detect a relevant difference was 71.1%, and the probability of obtaining a relevant improvement in the questionnaire was only 10%. In contrast, other studies on different devices have shown improvements in QoL. For instance, a study on a new intraurethral self-retaining device (ISRD) for female children and adolescents with urinary incontinence reported significant improvements in QoL (PUBMED:28254445). Another study on a new device for treating nocturnal enuresis, which combines a humidity sensor with an electrical stimulus to contract pelvic floor muscles, found that the device was safe and effective, with symptoms resolved completely in five out of six patients (PUBMED:25189504). Additionally, a study on a vaginal device (Continence Guard) for women with stress incontinence showed significant decreases in leakage and improvements in QoL (PUBMED:10074989; PUBMED:10850193). While the study specifically addressing a humidity detector device (PUBMED:12938580) did not show an improvement in QoL, other studies with different devices for incontinence management have demonstrated positive outcomes. It is important to note that the effectiveness and impact on QoL may vary depending on the type of device, the patient population, and the specific design and functionality of the device. Therefore, while some incontinent patients may experience an improvement in their QoL with certain devices, others may not, and further research is needed to determine the most effective solutions for different types of incontinence.
Instruction: SPORT lumbar intervertebral disk herniation and back pain: does treatment, location, or morphology matter? Abstracts: abstract_id: PUBMED:18277876 SPORT lumbar intervertebral disk herniation and back pain: does treatment, location, or morphology matter? Study Design: Diskectomy candidates with at least 6 weeks of sciatica and confirmatory imaging were enrolled in a randomized or observational cohort. Objective: This study sought to determine: (1) whether diskectomy resulted in greater improvement in back pain than nonoperative treatment, and (2) whether herniation location and morphology affected back pain outcomes. Summary Of Background Data: Previous studies have reported that lumbar diskectomy is less successful for relief of back pain than leg pain and patients with central disc herniations or protrusions have worse outcomes. Methods: Patients underwent diskectomy or received "usual" nonoperative care. Data from the randomized cohort and observational cohort were combined in an as-treated analysis. Low back pain was recorded on a 0 to 6 point scale, and changes in low back pain were compared between the surgical and nonoperative treatment groups. The effects of herniation location and morphology on back pain outcomes were determined. Results: The combined analysis included 1191 patients with 775 undergoing surgery within 2 years, whereas 416 remained nonoperative. Overall, leg pain improved more than back pain in both treatment groups. Back pain improved in both surgical and nonoperative patients, but surgical patients improved significantly more (treatment effect favoring surgery -0.9 at 3 months, -0.5 at 2 years, P < 0.001). Patients who underwent surgery were more likely to report no back pain than nonoperative patients at each follow-up period (28.0% vs. 12.0% at 3 months, P < 0.001, 25.5% vs. 17.6% at 2 years, P = 0.009). At baseline, central herniations were associated with more severe back pain than more lateral herniations (4.3 vs. 3.9, P = 0.012). Patients with central herniations and protrusionshad a beneficial treatment effect from surgery similar to the overall surgical group. Conclusion: Diskectomy resulted in greater improvement in back pain than nonoperative treatment, and this difference was maintained at 2 years for all herniation locations and morphologies. abstract_id: PUBMED:30594668 Retrolisthesis and lumbar disc herniation: a postoperative assessment of outcomes at 8-year follow-up. Background Context: Lumbar disc herniation and retrolisthesis have been shown to be significant degenerative changes that can be associated with back pain. Current literature has shown evidence that retrolisthesis is associated with similar baseline function in patients with L5-S1 disc herniation, but worse postoperative outcomes 2 years after lumbar discectomy. However, literature comparing long-term postoperative outcomes at 8-year follow-up in patients with L5-S1 disc herniation with retrolisthesis is lacking. Purpose: The purpose of the present study is to compare long-term postoperative outcomes at 8-year follow-up in patients with retrolisthesis and L5-S1 disc herniations to patients with L5-S1 disc herniations without retrolisthesis. Study Design: Retrospective review of prospectively collected data from the Spine Patients Outcomes Research Trial (SPORT) database. Patient Sample: Sixty-five patients who underwent lumbar discectomy for L5-S1 disc herniations with 8-year follow-up from the SPORT. Outcome Measures: Short Form (SF)-36 bodily pain scale, SF-36 physical function scale, Oswestry Disability Index, Sciatica Bothersomeness Index, and reoperation rate. Methods: Baseline surgical parameters, length of stay, complication rates, reoperation rates, and outcome measures were recorded in the SPORT database. Follow-up data were collected at 6 weeks, 3 months, 6 months, 1 year, 2 years, 3 years, 4 years, 5 years, 6 years, 7 years, and 8 years. Retrolisthesis was defined as posterior subluxation ≥8%. Patients with and without retrolisthesis were compared using a mixed-effects model of longitudinal regression. Outcomes were calculated as time-weighted averages over 8 years. Reoperation rates were compared using the log-rank test based on time to first reoperation. Results: One hundred-twenty five patients met inclusion criteria for the present study, including 29 patients with retrolisthesis (23.3%) and 96 patients who did not have retrolisthesis (76.7%). The greatest difference in clinical outcome measures was found at 2 years postoperatively. This was the only point at which both the Short Form-36 Bodily Pain scale (SF-36 BP) and PF showed significant differences between the retrolisthesis and nonretrolisthesis group. At 3 years, SF-36 BP was significantly lower in patients with retrolisthesis (39.9 vs. 52, p=.046). At 8-year follow-up, the presence of retrolisthesis in patients undergoing L5-S1 discectomy was not associated with worse outcome measure scores based on the area under the curve analysis for any metric investigated (SF-36 BP 41.4 vs. 47.1, p=.18; SF-36 Physical Function scale 38.9 vs. 45.4, p=.12; Oswestry Disability Index -39.4 vs. -34.8, p=.23; -11.6 vs. -10.4, p=.25) or a difference in reoperation rate (retrolisthesis group 10%, nonretrolisthesis group 17%, p=.41). Conclusions: While retrolisthesis can contribute to low back pain and dysfunction in patients undergoing lumbar discectomy for L5-S1 herniated nucleus pulposus in early follow-up, no significant difference was found in postoperative outcomes after 3 years. Additionally, retrolisthesis was not found to be associated with different reoperation rates at 8 years postoperatively. abstract_id: PUBMED:28091825 Preoperative sport improves the outcome of lumbar disc surgery: a prospective monocentric cohort study. A lumbar disc herniation resulting in surgery may be an incisive event in a patient's everyday life. The patient's recovery after sequestrectomy may be influenced by several factors. There is evidence that regular physical activity can lower pain perception and improve the outcome after surgery. For this purpose, we hypothesized that patients performing regular sports prior to lumbar disc surgery might have less pain perception and disability thereafter. Fifty-two participants with a single lumbar disc herniation confirmed on MRI treated by a lumbar sequestrectomy were included in the trial. They were categorized into two groups based on their self-reported level of physical activity prior to surgery: group NS, no regular physical activity and group S, with regular physical activity. Further evaluation included a detailed medical history, a physical examination, and various questionnaires: Visual Analog Scale (VAS), Beck-Depression-Inventory (BDI), Oswestry Disability Index (ODI), Core Outcome Measure Index (COMI), and the EuroQoL-5Dimension (EQ- 5D). Surgery had an excellent overall improvement of pain and disability (p < 0.005). The ODI, COMI, and EQ-5D differed 6 months after intervention (p < 0.05) favoring the sports group. Leg and back pain on VAS was also significantly less in group B than in group A, 12 months after surgery (p < 0.05). Preoperative regular physical activity is an important influencing factor for the overall satisfaction and disability after lumbar disc surgery. The importance of sports may have been underestimated for surgical outcomes. abstract_id: PUBMED:12652411 Lumbar disc disease in adolescents Introduction: Intervertebral disc herniation is a very common neurosurgical pathology. Most commonly disc disease occurs in adults and series published in children are only few and each reports only few cases. A genetic factor has been considered but in none of our patients there was a family history of disc disease. Patients And Methods: We report ten patients with disc herniations in adolescents occurring among 1920 patients operated of herniated discs between 1988 and 1998. These herniations in adolescents were all related to some sport activities, and they occurred in the age range of 12 to 16 years. Symptoms were back pain and lower extremities pains of roots L5 or S1. All were studied with magnetic resonance imaging and in all patients the herniated disc was either in L4 L5 or L5 S1 intervertebral spaces. All disc were subligamental and 81.5% were posterolateral in position. Pathological studies of the herniated discs showed elastic consistency with high water content without degenerative changes. Results: After microscopic removal of the herniated discs pain disappeared and remained so in the follow up. It was not necessary to reoperate in any of these patients and the postoperative follow up was between 2 and 5 years. Conclusions: Herniated disc disease is very infrequent in adolescents and all recovered completely after surgery, and the localization is posterolateral and subligamental. abstract_id: PUBMED:36807736 Lumbar arthroplasty for treatment of primary or recurrent lumbar disc herniation. Purpose: Microdiscectomy is the current gold standard surgical treatment for primary lumbar disc herniations that fail non-surgical measures. Herniated nucleus pulposus is the manifestation of underlying discopathy that remains unaddressed with microdiscectomy. Therefore, risk remains of recurrent disc herniation, progression of the degenerative cascade, and on-going discogenic pain. Lumbar arthroplasty allows for complete discectomy, complete direct and indirect decompression of neural elements, restoration of alignment, restoration of foraminal height, and preservation of motion. In addition, arthroplasty avoids disruption of posterior elements and musculoligamentous stabilizers. The purpose of this study is to describe the feasibility of the use of lumbar arthroplasty in the treatment of patients with primary or recurrent disc herniations. In addition, we describe the clinical and peri-operative outcomes associated with this technique. Methods: All patients that underwent lumbar arthroplasty by a single surgeon at a single institution from 2015 to 2020 were reviewed. All patients with radiculopathy and pre-operative imaging demonstrating disc herniation that received lumbar arthroplasty were included in the study. In general, these patients were those with large disc herniations, advanced degenerative disc disease, and a clinical component of axial back pain. Patient-reported outcomes of VAS back, VAS leg, and ODI pre-operatively, at three months, one year, and at last follow-up were collected. Reoperation rate, patient satisfaction, and return to work were documented at last follow-up. Results: Twenty-four patients underwent lumbar arthroplasty during the study period. Twenty-two (91.6%) patients underwent lumbar total disc replacement (LTDR) for a primary disc herniation. Two patients (8.3%) underwent LTDR for a recurrent disc herniation after prior microdiscectomy. The mean age was 40 years. The mean pre-operative VAS leg and back pain were 9.2 and 8.9, respectively. The mean pre-operative ODI was 22.3. Mean VAS back and leg pain was 1.2 and 0.5 at three months post-operative. The mean VAS back and leg pain was 1.3 and 0.6 at one year post-operative. The mean ODI was 3.0 at one year post-operative. One patient (4.2%) underwent re-operation for migrated arthroplasty device which required repositioning. At last follow-up, 92% of patients were satisfied with their outcome and would undergo the same treatment again. The mean time for return-to-work was 4.8 weeks. After returning to work, 89% of patients required no further leave of absence for recurrent back or leg pain at last follow-up. Forty-four percent of patients were pain free at last follow-up. Conclusion: Most patients with lumbar disc herniations can avoid surgical intervention altogether. Of those that require surgical treatment, microdiscectomy may be appropriate for certain patients with preserved disc height and extruded fragments. In a subset of patients with lumbar disc herniation that require surgical treatment, lumbar total disc replacement is an effective option by performing complete discectomy, restoring disc height, restoring alignment, and preserving motion. The restoration of physiologic alignment and motion may result in durable outcomes for these patients. Longer follow-up and comparative and prospective trials are needed to determine how the outcomes of microdiscectomy may differ from lumbar total disc replacement in the treatment of primary or recurrent disc herniation. abstract_id: PUBMED:16463199 Misleading history of pain location in 51 patients with osteoporotic vertebral fractures. The aim of this study was to investigate associations between the location of osteoporotic vertebral fractures and the patient's localization of pain. Fifty-one consecutive patients (m 6, f 45; average age 74.8 years) with diagnosed osteoporotic vertebral fractures between T8 and L2 were included in the study. Exclusion criteria were fractures above T8 and below L2, spondylolisthesis, disc herniations, tumors, infections, and instability. Pain location was assessed by pain drawing, subdivided into thoracic, lumbar, and thoracic plus lumbar pain areas, and pain intensity using a 101 numeric rating scale. Furthermore, the onset of back pain and the lack or the indication of a trigger event at the onset of pain were documented. Only four of 20 patients with thoracic fractures reported thoracic pain, while the other 16 (80%) reported only lumbar pain. The location of the fracture and the patient's pain report were not related (Cohens Kappa=0.046; P=0.438). Patients with thoracic or lumbar osteoporotic fractures report pain mainly in the lumbosacrogluteal area. Therefore, the complaint of low back pain (LBP) in persons at risk for osteoporotic fractures may require both thoracic and lumbar X-rays. LBP patients with a suspect history of an osteoporotic vertebral fracture should also be given an X-ray of the thoracic and lumbar spine. Patients with a thoracic vertebral fracture had more severe pain than patients with a lumbar vertebral fracture. Onset not related to a fall or a false movement related to a significantly longer pain duration. abstract_id: PUBMED:37355169 Effect of Lumbar Discectomy or Lumbar Decompression on Axial Back Pain: Results of a Meta-Analysis. Objective: This meta-analysis evaluated the impact of lumbar disk herniation and lumbar spinal stenosis (LSS) on axial back pain and the extent of improvement of axial and radicular pain following lumbar decompression and discectomy surgery in patients with low back pain (LBP). Methods: A systematic search for published literature between January 2012 and January 2023 was made on PubMed, Google Scholar, and Cochrane library database on 31st January 2023. Original articles that included patients with lumbar disc herniation or LSS who underwent lumbar discectomy or lumbar decompression respectively were included in the study. Results: A total of 71 studies including 16,770 patients with LBP undergoing lumbar discectomy or decompression surgery were included in the meta-analysis. The pooled standard mean difference between postoperative and preoperative: Visual Analog Scale scores for leg pain was -5.14 with 95% confidence interval (CI): -6.59 to -3.69 (P-value = 0) and for back pain was -2.90 with 95% CI: -3.79 to -2.01 (P value = 0), Numerical pain Rating Scale for leg pain was -1.64 with 95% CI: -1.97 to -1.30 (P-value<0.01) and for back pain was -1.58 with 95% CI: -1.84 to -1.32 (P-value <0.01), Oswerty Disability Index score was -4.76 with 95% CI: -6.22 to -3.29 (P-value = 0) and the Japanese Orthopaedic Association score was 3.45 with 95% CI: 0.02 to 6.88 (P value 0) at follow-up. Conclusions: This meta-analysis provides evidence that lumbar discectomy and decompression are effective in improving axial LBP in patients with lumbar disk herniation and LSS. abstract_id: PUBMED:27890759 Transforaminal Percutaneous Endoscopic Lumbar Diskectomy with Percutaneous Epidural Neuroplasty in Lumbar Disk Herniation: Technical Note. Objective: Percutaneous endoscopic lumbar diskectomy and percutaneous epidural neuroplasty is a commonly used minimally invasive spinal surgery in lumbar disk herniation. But to our knowledge, there is no report about the results of percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty. We did this study to evaluate the effect of percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty in lumbar disk herniation. Methods: We report a series of cases about techniques and results of percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty in lumbar disk herniation, as well as introduce technical procedures and advantages. Results: Seven patients improved axial back pain with percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty and had no complications. The percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty has many advantages with removing lumbar disk herniation in procedures. Conclusions: Percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty is a safe and effective surgical method in lumbar disk herniation. abstract_id: PUBMED:32905720 Does Size Matter? An Analysis of the Effect of Lumbar Disc Herniation Size on the Success of Nonoperative Treatment. Study Design: Retrospective study. Objective: In this study, we examined whether the size of a lumbar disc herniation (LDH) is predictive of the need for surgical intervention within 2 years after obtaining an initial magnetic resonance imaging (MRI) scan. We hypothesized that a fragment that occupied a larger percentage of the spinal canal would not predict which patients failed conservative management. Methods: Using the ICD-10 code M51.26, we identified patients at a single academic institution, across the 2-year period from 2015 to 2016, who received a diagnosis of primary lumbar radicular pain, had MRI showing a disc herniation, and underwent at least 6 weeks of nonoperative management. Patients experiencing symptoms suggesting cauda equina syndrome and those with progressive motor neurological deficits were excluded from analysis, as were patients exhibiting "hard" disc herniations. Within the axial view of an MRI, the following measurements were made on AGFA-IMPACS for a given disc herniation: the length of both the canal and the herniated disc along the anterior-posterior axis, the average width of the disc within the canal; the total canal area, and the area of the disc herniation. Data analysis was conducted in SPSS and a 2-tailed reliability analysis using Cronbach's alpha as a measure of reliability was obtained. Results: A total of 368 patients met the inclusion and exclusion criteria for this study. Of these, 14 (3.8%) had L3-L4 herniations, 185 had L4-L5 herniations (50.3%), and 169 had L5-S1 herniations (45.9%). Overall, 336 (91.3%) patients did not undergo surgery within 1 year of the LDH diagnosis. Patients who did not receive surgery had an average herniation size that occupied 31.2% of the canal, whereas patients who received surgery had disc herniations that occupied 31.5% of the canal on average. A Cronbach's alpha of .992 was observed overall across interobserver measurements. After controlling for age, race, gender, and location of herniation through a logistic regression, it was found that the size of the herniation and the percentage of the canal that was occupied had no predictive value with regard to failure of conservative management, generating an odds ratio for surgery of 1.00. Conclusions: The percentage of the spinal canal occupied by a herniated disc does not predict which patients will fail nonoperative treatment and require surgery within 2 years after undergoing a lumbar spine MRI scan. abstract_id: PUBMED:2013776 Unilateral facetectomy approach for lateral lumbar disc herniation. Forty-one patients with herniated lumbar discs in a lateral location underwent unilateral complete facetectomy for removal of their disc herniation. The diagnosis was made by computerized tomography in all patients. The follow-up period varied between 4 and 60 months, with an average of 22.4 months. All patients underwent dynamic lumbar spine x-ray films with flexion and extension exposures at various times during their follow-up period. The results were excellent in 35 patients, good in three, and poor in three. One patient suffered spinal instability postoperatively and required lumbar fusion because of back pain. Unilateral facetectomy gives an excellent view of the affected nerve root and the herniated disc, and the risk of spinal instability is very low. Answer: The SPORT (Spine Patient Outcomes Research Trial) study on lumbar intervertebral disk herniation and back pain aimed to determine whether diskectomy resulted in greater improvement in back pain than nonoperative treatment and whether herniation location and morphology affected back pain outcomes. The study found that diskectomy did result in greater improvement in back pain than nonoperative treatment, and this difference was maintained at 2 years for all herniation locations and morphologies (PUBMED:18277876). Additionally, the study on retrolisthesis and lumbar disc herniation found that while retrolisthesis can contribute to low back pain and dysfunction in patients undergoing lumbar discectomy for L5-S1 herniated nucleus pulposus in early follow-up, no significant difference was found in postoperative outcomes after 3 years. Retrolisthesis was not found to be associated with different reoperation rates at 8 years postoperatively (PUBMED:30594668). Another study suggested that preoperative sports activity might influence the outcome after lumbar disc surgery, with patients performing regular sports prior to surgery experiencing less pain perception and disability thereafter (PUBMED:28091825). In adolescents, lumbar disc disease related to sports activities was found to be infrequent, but those affected recovered completely after surgery, with localization being posterolateral and subligamental (PUBMED:12652411). Lumbar arthroplasty was also explored as a treatment for primary or recurrent lumbar disc herniations, showing that it could be an effective option by performing complete discectomy, restoring disc height, restoring alignment, and preserving motion (PUBMED:36807736). The study on misleading history of pain location in patients with osteoporotic vertebral fractures highlighted the importance of considering both thoracic and lumbar X-rays in patients at risk for osteoporotic fractures who report low back pain (PUBMED:16463199). A meta-analysis evaluating the impact of lumbar disk herniation and lumbar spinal stenosis on axial back pain found that lumbar discectomy and decompression are effective in improving axial low back pain in patients with these conditions (PUBMED:37355169). Percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty was reported to be a safe and effective surgical method in lumbar disk herniation (PUBMED:27890759). Continuing from where I stopped: The study examining the effect of lumbar disc herniation size on the success of nonoperative treatment found that the size of the herniation and the percentage of the canal occupied did not predict which patients would fail nonoperative treatment and require surgery within 2 years after undergoing a lumbar spine MRI scan (PUBMED:32905720). Lastly, a study on the unilateral facetectomy approach for lateral lumbar disc herniation reported that this surgical technique provided an excellent view of the affected nerve root and the herniated disc, with a very low risk of spinal instability postoperatively (PUBMED:2013776). In summary, the SPORT study and subsequent research suggest that treatment (such as diskectomy or lumbar arthroplasty), location, and morphology of lumbar intervertebral disk herniation do matter in terms of back pain outcomes. Diskectomy has been shown to result in greater improvement in back pain than nonoperative treatment, and this is consistent across different herniation locations and morphologies. Preoperative physical activity levels and the presence of retrolisthesis may also influence outcomes, although the size of the herniation itself may not be predictive of the need for surgery. Additionally, minimally invasive techniques like percutaneous endoscopic lumbar diskectomy with percutaneous epidural neuroplasty have been found to be safe and effective.
Instruction: Can patients interpret health information? Abstracts: abstract_id: PUBMED:30512739 The adequate information of patients The adequate information of patients. Providing information to patients is an intangible duty for physicians. It must be confirmed by indisputable elements if a lawsuit is initiated, because practitioners have the burden of proof of this information since 1997 and since the May 4th 2002 law. Information is essential in building a trustful relationship between a doctor and his patient, to avoid any surprises if a non-exceptional complication were to occur. abstract_id: PUBMED:31143819 Identification of patients' rights to benefit from consumer health information services: A Delphi study. Introduction: Patients have different rights, one of which is their right to access health information. The aim of this study was to identify patients' rights to benefit from consumer health information services using a qualitative method. Materials And Methods: The research method was qualitative using a Delphi technique. The statistical population consisted of 12 specialists in the field of medical library and information science and researchers and healthcare professionals. Eight dimensions and 42 items of patients' rights were identified and were approved by Delphi panel. Results: Regarding patients' rights to benefit from consumer health information services, eight dimensions including the right to health knowledge, the right to access to health information, the professional behavior of medical librarians with patients, content richness, information seeking skills, awareness of new services and products, the ease of using health information centers, and the professional behavior of healthcare professionals with patients were identified and approved. Conclusion: Decreasing the gap between the health literacy of healthcare professionals and patients is one of the duties of medical librarians and health information professionals. Establishing of patient rights in the area of utilizing health information services is an important step in improving the quality of services received by patients. abstract_id: PUBMED:19387598 Use of the internet for health information by ENT patients Introduction: The internet is becoming increasingly more important as a source of information for patients, so that nowadays internet-based information must be more frequently considered in the dialogue with patients. Material And Methods: This study is based on an analysis of ENT patients' habits in acquiring information from the internet. The data were collected by a questionnaire for in-patients over a period of 9 months during 2006 and 2007. A total of 503 patients with different family backgrounds were interviewed and the answers analyzed with respect to the illness, age, gender, education and information acquisition from the internet. Results: The younger generation was found to use the internet in general more than older patients and there were no differences between males and females. Middle aged patients in particular used the internet for answering questions on health. Patients with complex elective surgery used the internet more often than patients undergoing simple elective surgery and patients with tumors and private patients used the internet more often than those with social health insurance. Among the questioned patients it was noticeable that a high proportion of those using internet to obtain information were well educated. Discussion: Despite the high quality of information available for ENT patients on the internet, information obtained from the treating physician is still preferred by many patients. Due to demographic changes an increased use of the internet for patient information is to be expected in the future. The correlation found in this study between education, insurance status and the use of the internet might play a role in a target group oriented improvement of information supply. abstract_id: PUBMED:36995251 Health literacy and consumer health information. The rigours of the past few years have demonstrated the importance of good health literacy levels with the imperative of being able to obtain and interpret information to maintain and improve one's health never more apparent. With this in mind, this issue is focused on consumer health information, the gender and population group differences that exist in information seeking behaviour, the challenges of understanding medical explanations and terminology, and existing criteria to assess and ultimately produce better consumer health information. abstract_id: PUBMED:23695812 Consumer health-care information technology Consumer health-care information technology is intended to improve patients' opportunities to gather information about their own health. Ideally, this will be achieved through an improved involvement of existing data bases and an improved communication of information to patients and to care providers, if desired by patients. Additionally, further interconnection of existing and new systems and pervasive system design may be used. All consumer health-care information technology services are optional and leave patients in control of their medical data at all times. This article reflects the current status of consumer health-care information technology research and suggests further research areas that should be addressed. abstract_id: PUBMED:31223119 The Potential of Blockchain Technology for Health Information Exchange: Experimental Study From Patients' Perspectives. Background: Nowadays, a number of mechanisms and tools are being used by health care organizations and physicians to electronically exchange the personal health information of patients. The main objectives of different methods of health information exchange (HIE) are to reduce health care costs, minimize medical errors, and improve the coordination of interorganizational information exchange across health care entities. The main challenges associated with the common HIE systems are privacy concerns, security risks, low visibility of system transparency, and lack of patient control. Blockchain technology is likely to disrupt the current information exchange models utilized in the health care industry. Objective: Little is known about patients' perceptions and attitudes toward the implementation of blockchain-enabled HIE networks, and it is still not clear if patients (as one of the main HIE stakeholders) are likely to opt in to the applications of this technology in HIE initiatives. Thus, this study aimed at exploring the core value of blockchain technology in the health care industry from health care consumers' views. Methods: To recognize the potential applications of blockchain technology in health care practices, we designed 16 information exchange scenarios for controlled Web-based experiments. Overall, 2013 respondents participated in 16 Web-based experiments. Each experiment described an information exchange condition characterized by 4 exchange mechanisms (ie, direct, lookup, patient-centered, and blockchain), 2 types of health information (ie, sensitive vs nonsensitive), and 2 types of privacy policy (weak vs strong). Results: The findings show that there are significant differences in patients' perceptions of various exchange mechanisms with regard to patient privacy concern, trust in competency and integrity, opt-in intention, and willingness to share information. Interestingly, participants hold a favorable attitude toward the implementation of blockchain-based exchange mechanisms for privacy protection, coordination, and information exchange purposes. This study proposed the potentials and limitations of a blockchain-based attempt in the HIE context. Conclusions: The results of this research should be of interest to both academics and practitioners. The findings propose potential limitations of a blockchain-based HIE that should be addressed by health care organizations to exchange personal health information in a secure and private manner. This study can contribute to the research in the blockchain area and enrich the literature on the use of blockchain in HIE efforts. Practitioners can also identify how to leverage the benefit of blockchain to promote HIE initiatives nationwide. abstract_id: PUBMED:31884311 The effects of data entry structure on patients' perceptions of information quality in Health Information Exchange (HIE). Background And Objective: To exchange patient health information using Health Information Exchange (HIE) projects, such information first should be collected thoroughly using an appropriate data entry interface that reinforces information quality (IQ). Assessment of the given data interface based on its structure level may give us a better understanding of patients' attitudes toward information-sharing efforts. The main objective of this study is to examine the effects of data structure on perceptions and attitudes of patients toward the quality of health information that may be shared through HIE networks. Materials And Methods: Eight experiments were conducted to examine the impact of different design of information collection interfaces (structured vs. unstructured) to record two types of health information (sensitive vs. non-sensitive) that can be used for two types of sharing purposes (health care vs. marketing). Results: Results show that the degree of data entry structure can significantly influence patients' perceptions of completeness, accuracy, psychological risk, accessibility of data, concise representation, and understandability of health information. Discussion: There is a connection between data entry interface design and patients' perceptions of the quality of health information used in HIE networks, which in turn, could lead to the development of best practices in interface design and data collection techniques. This may also improve interactions between patients and healthcare entities, enhance patients' attitudes toward data collection procedures and HIE, and help healthcare providers use complete and accurate databases. Conclusions: We propose that healthcare professionals can tailor data entry interfaces based on the sensitivity of medical data and the purpose of information exchange. abstract_id: PUBMED:17032638 Are physicians ready for patients with Internet-based health information? Background: An increasing number of patients bring Internet-based health information to medical consultations. However, little is known about how physicians experience, manage, and view these patients. Objective: This study aimed to advance the understanding of the effects of incorporating Internet-based health information into routine medical consultations from physicians' perspectives, using a qualitative approach. Methods: Six focus groups were conducted with 48 family physicians practising in Toronto. The data were analyzed using qualitative methods of content analysis and constant comparison, derived from grounded theory approach. Results: Three overarching themes were identified: (1) perceived reactions of patients, (2) physician burden, and (3) physician interpretation and contextualization of information. Physicians in our study generally perceived Internet-based health information as problematic when introduced by patients during medical consultations. They believed that Internet information often generated patient misinformation, leading to confusion, distress, or an inclination towards detrimental self-diagnosis and/or self-treatment. Physicians felt these influences added a new interpretive role to their clinical responsibilities. Although most of the physicians felt obliged to carry out this new responsibility, the additional role was often unwelcome. Despite identifying various reactions of patients to Internet-based health information, physicians in our study were unprepared to handle these patients. Conclusion: Effective initiatives at the level of the health care system are needed. The potential of Internet-based health information to lead to better physician-patient communication and patient outcomes could be facilitated by promoting physician acknowledgment of increasing use of the Internet among patients and by developing patient management guidelines and incentives for physicians. abstract_id: PUBMED:23766635 Patient health information materials in waiting rooms of family physicians: do patients care? Background: Patient health information materials (PHIMs), such as leaflets and posters are widely used by family physicians to reinforce or illustrate information, and to remind people of information received previously. This facilitates improved health-related knowledge and self-management by patients. Objective: This study assesses the use of PHIMs by patient. It also addresses their perception of the quality and the impact of PHIMs on the interaction with their physician, along with changes in health-related knowledge and self-management. Methods: QUESTIONNAIRE SURVEY AMONG PATIENTS OF FAMILY PRACTICES OF ONE TOWN IN BELGIUM, ASSESSING: (1) the extent to which patients read PHIMs in waiting rooms (leaflets and posters) and take them home, (2) the patients' perception of the impact of PHIMs on interaction with their physician, their change in health-related knowledge and self-management, and (3) the patients judgment of the quality of PHIMs. Results: We included 903 questionnaires taken from ten practices. Ninety-four percent of respondents stated they read PHIMs (leaflets), 45% took the leaflets home, and 78% indicated they understood the content of the leaflets. Nineteen percent of respondents reportedly discussed the content of the leaflets with their physician and 26% indicated that leaflets allowed them to ask fewer questions of their physician. Thirty-four percent indicated that leaflets had previously helped them to improve their health-related knowledge and self-management. Forty-two percent reportedly discussed the content of the leaflets with others. Patient characteristics are of significant influence on the perceived impact of PHIMS in physician interaction, health-related knowledge, and self-management. Conclusion: This study suggests that patients value health information materials in the waiting rooms of family physicians and that they perceive such materials as being helpful in improving patient-physician interaction, health-related knowledge, and self-management. abstract_id: PUBMED:18258672 Use of health-related information from the Internet by English-speaking patients. The aim of this research is to determine the kinds of health-related information that patients seek more often from websites written in English, and at which stages of the healthcare decisional process they use this information more intensively. A quantitative study was performed. Canadian English-speaking patients who have long-term diseases and who use the Internet completed an 18-item questionnaire online. Respondents were questioned about the categories of health-related websites they visit the most (scientific, general, commercial websites, or discussion groups) and the stages of the medical decisional process during which they use the information obtained (identification of possible treatments, treatment choice, and treatment application or follow-up). Results show that respondents use Internet information displayed in English mostly at the stages of identification of possible treatments (94.2%) and treatment application or follow-up (86%). At these two stages, patients look more often for information from scientific websites. Answer: Patients' ability to interpret health information can vary widely based on several factors, including their health literacy levels, the quality and clarity of the information provided, and the medium through which the information is accessed. Health literacy is crucial for patients to obtain, process, and understand basic health information and services needed to make appropriate health decisions (PUBMED:36995251). The rigors of recent years have underscored the importance of good health literacy, with the need to interpret information to maintain and improve one's health being more apparent than ever. However, there are challenges in understanding medical explanations and terminology, and there are gender and population group differences that exist in information-seeking behavior (PUBMED:36995251). The design of health information technology and the structure of data entry can significantly influence patients' perceptions of the completeness, accuracy, psychological risk, accessibility of data, concise representation, and understandability of health information (PUBMED:31884311). This suggests that with well-designed interfaces and structured information, patients may be better able to interpret health information. Additionally, the study of ENT patients' internet usage for health information found that despite the high quality of information available, many patients still prefer information obtained from their treating physician (PUBMED:19387598). This indicates that while patients may seek out health information, they may not always be confident in their ability to interpret it without professional guidance. Moreover, the study on physicians' readiness for patients with Internet-based health information revealed that physicians often perceive Internet-based health information as problematic when introduced by patients during medical consultations, leading to confusion and distress (PUBMED:17032638). This suggests that patients may struggle to interpret health information correctly and that physicians may need to help contextualize and clarify this information. In conclusion, while patients can interpret health information to varying degrees, their ability to do so effectively is influenced by their health literacy, the quality and presentation of the information, and their interactions with healthcare professionals. It is important for healthcare providers to support patients in this process by providing clear, accurate, and accessible information and by being prepared to discuss and clarify information that patients find online or through other sources.
Instruction: A comparison of 'visible' and 'invisible' users of amphetamine, cocaine and heroin: two distinct populations? Abstracts: abstract_id: PUBMED:9581005 A comparison of 'visible' and 'invisible' users of amphetamine, cocaine and heroin: two distinct populations? Aim: To compare the characteristics of heroin, cocaine and amphetamine users having no history of contact with services with those of a group in contact. Method: Multiple agency sampling and field work which included 'snowballing' using 'privileged access interviewers'. Each subject underwent a structured interview which included the Severity of Dependency Scale (SDS), and completed a confidential, self-report questionnaire. Setting: Three contrasting provincial urban locations. Participants: Five hundred and eighty-one regular users of the target drugs. Of these, 380 (65%) denied any contact with police or helping agencies in connection with drug use. Findings: Most zero-contact users (79%) expressed little or no concern about their drug use, and no wish for help or advice. They were much more likely to use stimulants only; less likely ever to inject any drug or, for those that did, to share equipment; less likely to use opioids, amphetamine or cocaine powder on a daily basis; more likely to use Ecstasy; and yielded significantly lower SDS scores for all target drugs save crack. Prevalence of crack use was lower, but the proportion of daily users was the same as in the contact group. Most (69%) contact users remained concerned about their drug use, but 58% expressed little or no confidence that local services could meet their needs. In both groups, SDS scores for cocaine powder were comparable to those for cannabis, LSD and Ecstasy. Of the 495 cannabis smokers identified (85% of the sample), 72% reported daily consumption. Conclusions: The findings are consistent with the hypothesis that 'visible' and 'invisible' drug users are distinct populations in terms of behavioral characteristics, vulnerability to compulsive use, and prevalence of drug-related problems or concern. Purchasers and providers with limited resources should concentrate on improving the range and quality of services for users already in contact rather than attempting to uncover invisible populations. On the basis of SDS scores, cocaine HCI seems to have a relatively modest addictive potential. abstract_id: PUBMED:17305688 Mechanisms of prescription drug diversion among drug-involved club- and street-based populations. Objective: Prescription drug diversion involves the unlawful channeling of regulated pharmaceuticals from legal sources to the illicit marketplace, and can occur along all points in the drug delivery process, from the original manufacturing site to the wholesale distributor, the physician's office, the retail pharmacy, or the patient. However, empirical data on diversion are limited. Method: In an attempt to develop a better understanding of how specific drug-using populations are diverting prescription opioids and other medications, or obtaining controlled drugs that have already been diverted, qualitative interviews and focus group data were collected on four separate populations of prescription drug abusers in Miami, Florida--club drug users, street-based illicit drug users, methadone maintenance patients, and HIV positive individuals who abuse and/or divert drugs. Results: Sources of abused prescription drugs cited by focus group participants were extremely diverse, including their physicians and pharmacists; parents and relatives; "doctor shopping"; leftover supplies following an illness or injury; personal visits to Mexico, South America and the Caribbean; prescriptions intended for the treatment of mental illness; direct sales on the street and in nightclubs; pharmacy and hospital theft; through friends or acquaintances; under-the-door apartment flyers advertising telephone numbers to call; and "stealing from grandma's medicine cabinet." Conclusion: While doctor shoppers, physicians and the Internet receive much of the attention regarding diversion, the data reported in this paper suggest that there are numerous active street markets involving patients, Medicaid recipients and pharmacies as well. In addition, there are other data which suggest that the contributions of residential burglaries, pharmacy robberies and thefts, and "sneak thefts" to the diversion problem may be understated. abstract_id: PUBMED:22211586 A comparison of rural and urban nonmedical prescription opioid users' lifetime and recent drug use. Background: Nonmedical prescription opioid use has emerged as a major public health concern. The growing burden of nonmedical prescription opioid use in America may have unique manifestations and consequences in rural areas, which to a large extent have yet to be explored. Objectives: To describe rural-urban differences among drug users in recent and lifetime use and age of onset of use for alcohol, heroin, OxyContin(®), oxycodone, hydrocodone, barbiturates, benzodiazepines, cocaine, crack, methamphetamine, amphetamine, marijuana, hallucinogens, and inhalants. Methods: A purposive sample of 212 prescription drug users was recruited from a rural Appalachian county (n = 101) and a major metropolitan area (n = 111) in Kentucky. Consenting participants were given an interviewer-administered questionnaire examining sociodemographics, psychiatric characteristics, and self-reported drug use. Results: Rural drug users had significantly earlier ages of onset for use of oxycodone, hydrocodone, benzodiazepines, cocaine, and crack. In age-, gender-, and race-adjusted logistic regression analyses, rural drug users had significantly higher odds of lifetime and recent use of methadone, OxyContin(®), and oxycodone. Rural drug users also had significantly higher odds of lifetime cocaine and crack use. However, urban participants as expected had significantly higher odds of recent crack use. Conclusions: The findings demonstrate that, in this sample, nonmedical prescription opioid use is dissimilar among rural and urban drug users. Additional research is needed to better understand the individual, social, and structural level factors contributing to the burden of nonmedical opioid use, particularly in rural populations, with the aim of developing tailored substance abuse treatment and prevention. abstract_id: PUBMED:30831411 Hair analysis of more than 140 families with drug consuming parents. Comparison between hair results from adults and their children. Background: Hair samples from children are frequently analyzed in order to characterize their endangerment in a drug using environment. However, the interpretation of the results remains difficult because of lacking data for comparison. In this study, hair samples from families with drug consuming parents were analyzed for illegal and selected medical drugs and the results were evaluated concerning a relationship between findings of parents and children depending on kind of drug, age and gender of children as well as maternal or paternal drug concentrations in hair. Methods: In an ongoing social supporting project for families with underage children and drug consuming parents, hair samples were analyzed since 2011 for methadone, opiates and opioid analgesics, cocaine, amphetamines, ecstasy, cannabinoids and benzodiazepines by LC-MS/MS with LOQs ≈ 0.01 ng/mg. From the data pool of more than 1300 individuals, 100 families with results for one or both parents and one to five children, 30 families with results only for both parents, and 11 families with results only for 2-4 children were selected. Fifty eight of these 141 families were repeatedly tested (altogether 251 family tests). Results: One to 5 drugs were detected in 239 (95.2%) of the family tests with highest occurrence of cocaine (79.7%) and THC (50.2%). According to the concentrations of the tested persons, the most probable drug users were the mother (25%), the father (24%), both parents (16%), or were not tested (30%). Within the families, there was an agreement of the detected drugs between parents and children of 47.8%, between both parents of 36.1%, and between children of 42.3%. For parents with hair concentrations in the typical range of regular drug use, the drug was detected in children hair with the following frequency: methadone 65.5%, heroin (6-AM) 63.6%, cocaine 92.1%, amphetamine 80%, MDMA 42.9% and THC 67.4% with higher percentage for younger children. The agreement for medical drugs (benzodiazepines 7.7%, synthetic opioids 8.7%, diphenhydramine 7.1%) was much lower suggesting voluntary administration or intake. Despite the strong variation of the data, clear trends were found that the child/parent drug concentration ratio decreases with increasing children age and is higher for boys than for girls. Conclusion: The comparison of hair results within families gives a deeper insight in the drug situation, often enables the identification of the drug user and is helpful for social and legal decisions to improve the conditions of the children. abstract_id: PUBMED:26896168 Assessing geographical differences in illicit drug consumption--A comparison of results from epidemiological and wastewater data in Germany and Switzerland. Background: Wastewater analysis is an innovative approach that allows monitoring illicit drug use at the community level. This study focused on investigating geographical differences in drug consumption by comparing epidemiological, crime and wastewater data. Methods: Wastewater samples were collected in 19 cities across Germany and Switzerland during one week, covering a population of approximately 8.1 million people. Self-report data and consumption offences for the investigated areas were used for comparison and to investigate differences between the indicators. Results: Good agreement between data sources was observed for cannabis and amphetamine-type stimulants, whereas substantial discrepancies were observed for cocaine. In Germany, an important distinction could be made between Berlin, Dortmund and Munich, where cocaine and particularly amphetamine were more prevalent, and Dresden, where methamphetamine consumption was clearly predominant. Cocaine consumption was relatively homogenous in the larger urban areas of Switzerland, although prevalence and offences data suggested a more heterogeneous picture. Conversely, marked regional differences in amphetamine and methamphetamine consumption could be highlighted. Conclusions: Combining the available data allowed for a better understanding of the geographical differences regarding prevalence, typology and amounts of substances consumed. For cannabis and amphetamine-type stimulants, the complementarity of survey, police and wastewater data could be highlighted, although notable differences could be identified when considering more stigmatised drugs (i.e. cocaine and heroin). Understanding illicit drug consumption at the national scale remains a difficult task, yet this research illustrates the added value of combining complementary data sources to obtain a more comprehensive and accurate picture of the situation. abstract_id: PUBMED:31730939 Comparison of several immunoassays used in drugs of abuse screening: Assessment against gold standard methods and calculation of measurement uncertainty. Background: Immunoassays provide simple, powerful and inexpensive screening methods for urine drug screening. Other substances and/or factors may interfere with the test and cause false or positive results. It is essential to understand the differences between methods to be able to evaluate their impact on the results. All evaluated immunoassays were assessed in comparison with GC-MS or LC-MS/MS, which are generally accepted as vigorous confirmation gold standard techniques. Methods: CEDIA, DRI, EIA and EMIT II Plus screening immunoassays were evaluated on Beckman-Coulter AU5810 analyser. All results were confirmed using GC-MS or LC-MS/MS methods. Measurement Uncertainty for immunoassays was calculated by using standard deviation multiplied by 1.96 to cover 95% confidence interval of tested samples. Results: No discrepancy was found between CEDIA and EMIT II Plus for cocaine, methadone, heroin, and benzodiazepines assays. No discrepancy was found between oxycodone DRI assay and Immunalysis enzyme immunoassay (EIA). Cannabinoids EMIT II Plus assay performed better than DRI assay. EMIT II Plus assays for amphetamine and ecstasy performed better than CEDIA amphetamine/ecstasy combined assay. Opiates EMIT II Plus assay performed better than CEDIA assay. Fentanyl Ark EIA method performed slightly better than the DRI method. Buprenorphine CEDIA second-generation assay performed better than CEDIA first-generation assay, EMIT II Plus assay and EMIT II Plus with added Beta-Glucuronidase assay. Measurement Uncertainty for immunoassays was calculated and tabulated. Conclusions: This study covered a fundamental gap in available knowledge by evaluating the performance of screening the current new generation of immunoassays methods for drugs of abuse in urine against gold standard methods. abstract_id: PUBMED:10716507 Comparison of sexual behaviors, unprotected sex, and substance use between two independent cohorts of gay and bisexual men. Objective: To compare demographic characteristics, sexual practices, unprotected receptive and insertive anal intercourse, substance use and rates of HIV-1 seroconversion between two prospective cohorts of HIV-negative men who have sex with men. Design: Comparative analysis of two independent cohorts. Methods: Between May 1995 and April 1996, 235 HIV-negative Vanguard Project (VP) participants were enrolled and between January and December 1985, 263 HIV-negative participants in the Vancouver Lymphadenopathy AIDS Study (VLAS) completed a follow-up visit. The VP participants were compared with VLAS participants with respect to self-reported demographic variables, sexual behaviors, unprotected sex, substance use and rates of HIV-1 seroconversion during follow-up. Results: In comparison with the VLAS participants the VP participants were younger (median age, 26 versus 34 years; P< 0.001), more likely to be non-Caucasian (75 versus 97%; P< 0.001), and were less likely to have attended university/college (35 versus 46%; P = 0.014). The VP participants reported a higher mean number of male sex partners in the previous year (15 versus 12; P= 0.026) and a higher mean number of regular partners (1.7 versus 0.6; P < 0.001). The VP participants were more likely to report engaging in receptive (92 versus 60%; P< 0.001) and insertive (90 versus 69%; P < 0.001) anal intercourse with regular partners and receptive anal intercourse with casual partners (62 versus 38%; P< 0.001). The VLAS participants were more likely to report never using condoms during insertive and receptive anal intercourse with both regular and casual partners. The VP participants were less likely to report using nitrite inhalants (34 versus 43%; P= 0.033), but more likely to report the use of cocaine (30 versus 8%; P< 0.001), LSD (21 versus 3%; P < 0.001), amphetamine (11 versus 1%; P< 0.001), heroin (3 versus 0%; P= 0.010) and methyldiamphetamine (17 versus 10%; P= 0.034). The VLAS participants were nine times more likely to report high-risk sexual behavior, after controlling for differences in age, ethnicity, substance use, and method of recruitment between cohort members. After adjustment for differences in demographics, sexual behaviors, and level of substance use, the risk ratio for seroconversion among VLAS participants remained significantly elevated compared with VP participants. Conclusion: These data provide evidence that men who have sex with men who were enrolled in the VP were more sexually active than their VLAS counterparts were 10 years ago as measured by self-reported numbers of regular and casual partners and frequency of anal intercourse with these partners. However, condom use appears to be significantly higher among VP participants, which has contributed to a lower rate of HIV-1 infection. abstract_id: PUBMED:7795497 The Severity of Dependence Scale (SDS): psychometric properties of the SDS in English and Australian samples of heroin, cocaine and amphetamine users. The Severity of Dependence Scale (SDS) was devised to provide a short, easily administered scale which can be used to measure the degree of dependence experienced by users of different types of drugs. The SDS contains five items, all of which are explicitly concerned with psychological components of dependence. These items are specifically concerned with impaired control over drug taking and with preoccupation and anxieties about drug use. The SDS was given to five samples of drug users in London and Sydney. The samples comprised users of heroin and users of cocaine in London, and users of amphetamines and methadone maintenance patients in Sydney. The SDS satisfies a number of criteria which indicate its suitability as a measure of dependence. All SDS items load significantly with a single factor, and the total SDS score was extremely highly correlated with the single factor score. The SDS score is related to behavioural patterns of drug taking that are, in themselves, indicators of dependence, such as dose, frequency of use, duration of use, daily use and degree of contact with other drug users; it also shows criterion validity in that drug users who have sought treatment at specialist and non-specialist agencies for drug problems have higher SDS scores than non-treatment samples. The psychometric properties of the scale were good in all five samples, despite being applied to primary users of different classes of drug, using different recruitment procedures in different cities in different countries. abstract_id: PUBMED:19374969 Prevalence of psychotic symptoms in substance users: a comparison across substances. Background: Psychotic symptoms (delusions and hallucinations) are reported to be increased among persons using illicit substances, but little is known about the comparative frequency with which the symptoms occur with abuse of different substances. To establish this, we interviewed individuals who had wide experience of commonly used drugs. Methods: Four hundred seventy-six intravenous drug users, crack-cocaine users, and heroin snorters recruited via street outreach were interviewed using the Composite International Diagnostic Interview-Substance Abuse Model to assess dependence on a number of substances including amphetamines, cannabis, cocaine, and opioids. As a part of this assessment, we assessed a history of delusions and hallucinations in the context of use of, or withdrawal from, these specific substances. Results: From 27.8% to 79.6% users of amphetamine, cannabis, cocaine, and opiates met Diagnostic and Statistical Manual of Mental Disorders, Third Edition, Revised, dependence for that specific substance. The prevalence of psychotic symptoms associated with each specific substance ranged from users with no diagnosis to users with severe dependence as follows: amphetamines (5.2%-100%), cannabis (12.4%-80.0%), cocaine (6.7%-80.7%), and opiates (6.7%-58.2%). The risk of psychotic symptoms increased for respondents who abused (odds ratio [OR], 12.2) or had mild (OR, 17.1), moderate (OR, 47.0), or severe dependence (OR, 114.0) on cocaine when compared to those who were users with no diagnosis. A similar pattern was evident in cannabis, opiate, and amphetamine users. Conclusions: Most users dependent on illicit substances experience psychotic symptoms in the context of use of, or withdrawal from, these substances. Psychotic symptoms increased with the severity of the substance use disorders for all 4 substances. These findings emphasize the importance of developing services to target this population as they are at a heightened risk for developing psychotic symptoms. abstract_id: PUBMED:35878579 Common opioids and stimulants in autopsy and DUID cases: A comparison of measured concentrations. Quantitative results from toxicological analyses of autopsy material are widely compared to ranges in reference works to determine if drug concentrations are in relevant levels for establishing intoxication. This study compares concentrations of commonly used opioids and stimulants from drug addict autopsies and driving under the influence of drugs (DUID) cases to supplement current knowledge of the possible span and overlaps of measured concentrations. The study included whole-blood results from forensic autopsies of drug addicts performed from 2015 to 2020 (n = 220) and DUID cases from 2015 to 2019 (n = 7088). The focus was on heroin/morphine, methadone, cocaine, amphetamine and MDMA concentrations because these drugs are commonly encountered in both fatal intoxications and DUID cases and the potential for abuse is well known. In the DUID group, the opioids heroin/morphine and methadone and the stimulants amphetamine and MDMA were often seen in concentrations above the reported lower comatose-fatal level whereas cocaine was almost always below. Thus, based on our data, the potential for false assessment of intoxication cases when comparing to reported comatose-fatal limits appears greatest on lower end concentrations of heroin/morphine, methadone, amphetamine and MDMA, whereas false assessment of cocaine appears less likely because most control cases are below reported comatose-fatal levels. Answer: The study described in abstract PUBMED:9581005 aimed to compare the characteristics of 'visible' and 'invisible' users of heroin, cocaine, and amphetamine. 'Visible' users are those who have had contact with police or helping agencies in connection with drug use, while 'invisible' users have not had such contact. The study involved structured interviews and self-report questionnaires with 581 regular users of the target drugs, 380 of whom denied any contact with services related to drug use. The findings suggest that 'visible' and 'invisible' drug users indeed represent two distinct populations. Most 'invisible' users (79%) had little or no concern about their drug use and did not seek help or advice. They were more likely to use stimulants only, less likely to inject drugs or share equipment, less likely to use opioids, amphetamine, or cocaine powder daily, more likely to use Ecstasy, and had lower Severity of Dependency Scale (SDS) scores for all target drugs except crack. In contrast, 'visible' users were more concerned about their drug use, but many lacked confidence in local services to meet their needs. The SDS scores for cocaine powder were comparable to those for cannabis, LSD, and Ecstasy, suggesting a relatively modest addictive potential for cocaine HCI. The study concluded that these two groups have different behavioral characteristics, vulnerability to compulsive use, and prevalence of drug-related problems or concerns. It recommended that service providers focus on improving services for users already in contact rather than attempting to uncover 'invisible' populations. The distinct differences between the two groups highlight the need for tailored approaches to address the specific needs and behaviors of each population.
Instruction: Are there sex biases in standardized tests of radiation oncology knowledge? Abstracts: abstract_id: PUBMED:9215846 Are there sex biases in standardized tests of radiation oncology knowledge? Purpose/objective: Recent studies have identified biases directed against women in standardized tests. We tested for the existence of such biases in the American College of Radiology (ACR) In-Training Examination in Radiation Oncology and the American Board of Radiology (ABR) Written Radiation Oncology Board Examination. Materials And Methods: Our request to the ABR to permit us to study performance on their examinations, as a function of sex, was refused. We obtained scores, through the cooperation of six academic radiation oncology departments, for residents-in-training taking the in-service examination and candidates taking the written board examination for the first time. Test results for 1984 to 1995 were blinded as to name, but not sex or institution of training. For the in-service examination, scores are reported as percentiles normalized to the year of training. The effect of multiple scores for the same resident was assessed using a repeated-measures analysis of variance. Residents were nested within each sex/institution combination and crossed with training year and calendar year. The effects of three factors (sex, institution, and year the examination was taken) on the results of the biology, physics, and clinical sections were evaluated with an analysis of variance. The interactions of sex with institution and year were included to determine the scope of the sex effect. For the board examination, scores are reported as percentiles, as well as an overall pass/ fail outcome. An analyses of variance was performed similar to that used for the in-service examination. In addition, Fisher's exact test and logistic regression were used to analyze overall outcome (pass/fail). Results: We obtained data for 79 residents (48 men and 31 women, 1.54:1) who took the in-service examinations 165 times. Sixty-two residents (41 men and 21 women, 1.95:1) had an initial sitting for the ABR written examination. On the in-service examination, for the biology, physics, and clinical subsections, calendar year, training year, and sex did not have a significant effect on examinees scores. Institution of training had a significant effect (P < .02) on the scores in biology and physics. The total in-service examination scores were not significantly influenced by calendar year, training year, or sex. Institution of training has a strong influence on overall score (P = .03) and the interaction of sex with training year is near significance level (P = .06). The power for our statistical tests ranged from 0.88 to 0.99. On the board examination, sex, institution of training, year the examination was taken, and interaction of sex with year or sex with institution of training did not have a significant effect on test scores. Pass rates were 90% for men versus 81% for women (P = .43). Conclusion: Sex did not significantly influence the results of the in-service examination or the written board examination. Institution of training is the strongest influence on the results of the in-service examination. abstract_id: PUBMED:27522049 PHYSICIANS' KNOWLEDGE ABOUT RADIATION DOSE AND POSSIBLE RISKS OF COMMON MEDICAL TESTS: A SURVEY IN IRAN. Recent data suggest that knowledge of radiation exposures among physicians is inadequate. This study, therefore, aimed to evaluate their knowledge of the radiation doses their patients received and awareness of associated biological risks of radiation exposure. A questionnaire in multiple-choice format consisted of four sections with a total of 10 questions based on the literature review. A total of 136 questionnaires were returned from 69 general practitioners and 67 physicians in various specialties from 10 different hospitals in the capital city of Tehran, Iran. Fifty-four percent of general practitioners and twenty-five percent of specialties declared that they are not aware of biological risks of radiation exposure. Fifty-six percent of physicians did not know the correct definition of absorbed dose. Only 33% of physicians knew the dose exposure of a chest X-ray and only 31% knew the approximate doses of various procedures relative to a chest X-ray. Forty-seven percent of physicians incorrectly distinguished the stochastic effects of radiation from the deterministic effects, and thirty-eight of physicians did not know the organs of the body that are most sensitive to ionizing radiation. Only 23.5% of physicians were aware of the one in 2000 risk of induction of fatal carcinoma from computed tomography of the abdomen. Seventy-nine percent of physicians incorrectly underestimated the contribution of nuclear and radiological tests in exposure of an average person. The mean score of the specialties trended toward being more accurate than general practitioners (4.18 ± 1.28 vs. 3.89 ± 1.46, respectively, from a potential accurate total score of 9), but these differences were not statistically significant. Among specialists, orthopedics had the highest scores. The present study demonstrated the limited knowledge of radiation exposures among general practitioners and specialists and a need to improve their knowledge by means of targeted training and re-education. abstract_id: PUBMED:28488259 The Impact of a Radiation Oncologist led Oncology Curriculum on Medical Student Knowledge. Medical students at our institution all take a pre-clinical oncology course as well as a clinical radiation oncology didactic session during their clinical curriculum. The objective of this analysis is to demonstrate the impact of the radiation oncology didactic on medical student knowledge of core oncology concepts. All students received a standardized didactic lecture introducing students to core concepts of general oncology and radiation. We administered an 18-question pretest and a posttest examining student knowledge between 2012 and 2015. Changes in student responses between pre-test and post-tests were analyzed to evaluate the effectiveness of the didactic session. Over the course of three years, 319 (64.4%) of 495 students who completed the Radiology block completed both the pre-test and post-test. The average student test grade improved from 62.0% on the pretest to 69.6% on the posttest (p < 0.001). By category, students increased their score from 81.4% to 89.8% (p < 0.001) in general oncology, from 59.9% to 69.9% (p < 0.001) in breast oncology, from 43.0% to 51.0% (p < 0.001) in prostate oncology, and from 71.3% to 75.7% (p = 0.003) in radiation oncology. Students showed increases in knowledge across general oncology, prostate oncology, breast oncology, and radiation oncology. abstract_id: PUBMED:31770832 Standardized treatment of chronic radiation-induced bowel injury Radiation-induced bowel injury is a common complication of radiation therapy for pelvic malignancy. Given the huge number of patients diagnosed with pelvic malignancy, the number of patients diagnosed with radiation-induced bowel injury increased year by year, which put a great burden on the clinical diagnosis and treatment of radiation-induced bowel injury. In particular, chronic radiation-induced bowel injury, which is manifested in the process of prolonged, repeated and progressive aggravation, seriously affects the physical and mental health of patients and makes clinical diagnosis and treatment difficult. However, due to insufficient attention and understanding from doctors and patients, standardized diagnosis and treatment of radiation-induced bowel injury still have a long way to go. Radiation-induced bowel injury is self-limited but irreversible. During diagnosis, we should pay attention to overall evaluation of the stage of disease based on clinical symptoms, endoscopic examination, imaging examination, pathology and nutritional risk. The treatment methods include health education, drug therapy, enema therapy, formalin local treatment, endoscopic treatment and surgical treatment, etc. The treatment decision-making should be based on clinical symptoms, endoscopic or imaging findings to alleviate the clinical symptoms of patients as the primary goal and to improve the long-term quality of life of patients as the ultimate goal. abstract_id: PUBMED:33436277 Learning From Cancer Survivors as Standardized Patients: Radiation Therapy Students' Perspective. Introduction/background: Healthcare programs have proven the benefits of standardized patients in educational activities, but limited scholarship exists on the involvement of cancer survivors in radiation therapy education. This innovative approach to simulation-based education in radiation therapy is the first to integrate cancer survivors as standardized patients in a high-fidelity environment. This research aims to examine the impact of cancer survivors on the students' learning during their simulation course. Methods: This qualitative research used a phenomenological approach to gather data and analyze the students' view of their interactions with cancer survivors in the simulation course. The researchers conducted three focus group discussions with five of the eight radiation therapy students using a set of preformulated questions. Results: The collected data were comparatively analyzed, and the research team identified four main themes: experience, teaching/coaching, patient-centeredness and gratitude. These represented the students' perspective on their experiences and the unique learning opportunities provided by the cancer survivors. Findings/discussion: The simulation activities with the cancer survivors provided a safe and realistic experience for the students. This research highlighted the influence of the cancer survivors' oncology journey on the skills development of the radiation therapy students. Conclusion: The students benefited from the authentic patient perspective of the cancer survivors which helped build their confidence and develop their skills prior to clinical placement. This research demonstrated that incorporating cancer survivors as standardized patients in radiation therapy simulation education has added significant value to the students' learning. abstract_id: PUBMED:37131156 Investigation of standardized training of radiation oncology residents for gynaecological tumours in China. Background: Radiotherapy standardized training (ST) has been conducted for 7 years in China. This investigation evaluated the difficulties of and need for ST of radiation oncology residents (RORs) for gynaecological tumours (GYN) in China. Methods: An anonymous online survey was conducted on the "Questionnaire Star" platform. The questionnaire contained 30 questions, including the basic information of the students, their knowledge of radiotherapy theory, training on GYN, the difficulties and needs they faced, and possible solutions. Results: A total of 469 valid questionnaires were collected, resulting in a valid response rate of 85.3%. During the ST, only 58-60% of RORs received training in GYN, with a median clinical rotation time of 2-3 months. Among the RORs surveyed, 50.1% knew the physical characteristics of brachytherapy (BRT), and 49.2% could choose the appropriate BRT for patients. At the end of ST, 75.3% were able to complete the target delineation in GYN independently, and 56% were able to complete the BRT operation independently. The scarcity of GYN patients, insufficient teaching awareness of superior doctors, and lack of interest are the main reasons why ST cannot meet the standard. Conclusion: In China, the ST of RORs in GYN should be strengthened, the teaching awareness of specialist trainers should be increased, and the curriculum should be optimized, especially the curriculum for specialist operation and a strict assessment system. abstract_id: PUBMED:34344273 Impact of Standardized Educational Intervention on Improvement in Analgesic Knowledge and Its Compliance among Patients of Advanced Cancer Patients Attending Palliative Clinic: Prospective Randomized Study. Patient education is a low-cost intervention that can help in improving the knowledge and compliance regarding analgesics. This study aimed to assess the effectiveness of the standardized educational intervention on knowledge regarding analgesic, its compliance, and barriers to compliance among advanced cancer patients attending palliative care clinic.In this randomized control trial, 100 advanced cancer patients with pain were randomly allocated to two groups. A standardized educational intervention of 10-15 minutes on cancer pain management and analgesics was delivered at clinic visit to the patients in the intervention group at baseline and the control group received usual care. Analgesic knowledge, analgesic compliance, and barriers to compliance were measured for each patient at baseline, 2 and 4 weeks follow up visit. The data was analyzed and p < 0.05 was considered significant.Analgesic knowledge (p = 0.001) and compliance (p = 0.001) improved significantly in the experimental group at 2 weeks and 4 weeks as compared to the control group. There was also a statistically significant decrease in barriers to compliance in the experimental group as compared to the control group. A significant correlation between the knowledge and compliance score at 4 weeks in the experimental group was found with a Karl Pearson correlation coefficient, r = 0.628 (p = 0.001).The standardized educational intervention was found to be effective in improving the analgesic knowledge and analgesic compliance among cancer patients with pain at 2 and 4 weeks follow up as compared to the usual care. abstract_id: PUBMED:31130979 Sex Difference of Radiation Response in Occupational and Accidental Exposure. Ionizing radiation is a well-established cause of deleterious effects on human health. Understanding the risks of radiation exposure is important for the development of protective measures and guidelines. Demographic factors such as age, sex, genetic susceptibility, comorbidities, and various other lifestyle factors influence the radiosensitivity of different subpopulations. Amongst these factors, the influence of sex differences on radiation sensitivity has been given very less attention. In fact, the International Commission on Radiological Protection (ICRP) has based its recommendations on a population average, rather than the data on the radiosensitivity of distinct subpopulations. In this study, we reviewed major human studies on the health risks of radiation exposure and showed that sex-related factors may potentially influence the long-term response to radiation exposure. Available data suggest that long-term radiosensitivity in women is higher than that in men who receive a comparable dose of radiation. The report on the biological effects of ionizing radiation (BEIR VII) published in 2006 by the National Academy of Sciences, United States emphasized that women may be at significantly greater risk of suffering and dying from radiation-induced cancer than men exposed to the same dose of radiation. We show that radiation effects are sex-specific, and long-term radiosensitivity in females is higher than that in males. We also discuss the radiation effects as a function of age. In the future, more systematic studies are needed to elucidate the sex differences in radiation responses across the life continuum - from preconception through childhood, adulthood, and old age - to ensure that boys and girls and men and women are equally protected across ages. abstract_id: PUBMED:33447187 Attitudes and knowledge of anesthesiology trainees to radiation exposure in a Tertiary care hospital. Background And Aims: Ionizing radiation procedures are indispensable in medical clinical practice. Exposure to radiation at any dose could have serious adverse effects. Anesthesiologists working in interventional radiology suites are at a higher risk of radiation exposure than other personnel. The aim of this study was to assess the knowledge and attitude of anesthesiology trainees towards the radiation hazards and current safety practices. Methods: This prospective cross-sectional survey was conducted at the department of anesthesiology at Aga Khan University. All anesthesiology trainees working in the department were given a 12-question paper-based survey after getting ethical review committee approval and informed consent. The questionnaire contained requests for personal demographic data and specific questions regarding radiation protection. Results: A total of 54 participants were included in this survey. Thirty-two (59.3%) were male, and 22 (40.7%) were female. The average year of experience working in anesthesia of the participants was 2.8 ± 1.65 years (range, one to eight years). Frequency of radiation exposure of 32 (59.3%) participants was 1-5 times per week. Approximately 68.5% (37/54) of participants believed they took adequate precautions for protection against radiation. Only 20.4% (11/54) used both a lead apron and a thyroid shield for prevention of radiation exposure. Most participants using the radiation shield or clothing (70.4%; 38/54) cited concerns about cancer. Conclusions: A lack of knowledge persists among anesthesiology trainees in our institute regarding the risks associated with ionizing radiation. This study also serves to highlight the need for anesthesiology trainees to protect themselves properly. Radiation dose, hazards, and protection strategies must be included in the basic curriculum of medical colleges. abstract_id: PUBMED:23374547 Knowledge of medical imaging radiation dose and risk among doctors. Introduction: The growth of computed tomography (CT) and nuclear medicine (NM) scans has revolutionised healthcare but also greatly increased population radiation doses. Overuse of diagnostic radiation is becoming a feature of medical practice, leading to possible unnecessary radiation exposures and lifetime-risks of developing cancer. Doctors across all medical specialties and experience levels were surveyed to determine their knowledge of radiation doses and potential risks associated with some diagnostic imaging. Methods: A survey relating to knowledge and understanding of medical imaging radiation was distributed to doctors at 14 major Queensland public hospitals, as well as fellows and trainees in radiology, emergency medicine and general practice. Results: From 608 valid responses, only 17.3% correctly estimated the radiation dose from CT scans and almost 1 in 10 incorrectly believed that CT radiation is not associated with any increased lifetime risk of developing cancer. There is a strong inverse relationship between a clinician's experience and their knowledge of CT radiation dose and risks, even among radiologists. More than a third (35.7%) of doctors incorrectly believed that typical NM imaging either does not use ionising radiation or emits doses equal to or less than a standard chest radiograph. Conclusion: Knowledge of CT and NM radiation doses is poor across all specialties, and there is a significant inverse relationship between experience and awareness of CT dose and risk. Despite having a poor understanding of these concepts, most doctors claim to consider them prior to requesting scans and when discussing potential risks with patients. Answer: According to the study reported in abstract PUBMED:9215846, there were no significant sex biases found in standardized tests of radiation oncology knowledge. The study analyzed the performance of residents-in-training and candidates taking the American College of Radiology (ACR) In-Training Examination in Radiation Oncology and the American Board of Radiology (ABR) Written Radiation Oncology Board Examination. The results indicated that sex did not significantly influence the results of the in-service examination or the written board examination. The institution of training was identified as the strongest influence on the results of the in-service examination.
Instruction: Do single and partnered women with gynecologic cancer differ in types and intensities of illness- and treatment-related psychosocial concerns? Abstracts: abstract_id: PUBMED:17719360 Do single and partnered women with gynecologic cancer differ in types and intensities of illness- and treatment-related psychosocial concerns? A pilot study. Objective: We compared the psychosocial and psychosexual concerns of single and partnered women with gynecologic cancer, since relationship status and psychosocial context are known to affect sexuality, a life domain commonly affected by this cancer. Method: A cross-sectional convenience sample of 49 women (68% response), with ovarian (n=31), endometrial (n=12), and cervical (n=6) cancer, responded to a 72-item self-report Cancer Concerns Questionnaire and additional psychosocial questionnaires. Results: Single (n=13) and partnered women (n=36) similarly reported prognosis as their highest concern, but single women (26% of the sample) reported that communication with the treatment team, treatment side effects, and prognosis were of greater salience to them than did partnered women. The latter group had greater sexuality and partner relationship concerns. Conclusion: These preliminary findings suggest that relationship status, whether partnered or single, influences current psychosocial concerns among women with gynecologic cancer, despite similar levels of illness- and treatment-related intrusions on important life domains. abstract_id: PUBMED:24290535 Understanding the impact of the treatment pathway upon the health-related quality of life of women with newly diagnosed endometrial cancer - a qualitative study. Purpose: (1) To determine the impact of treatment and recovery on the health-related quality of life (HRQoL) of endometrial cancer (EC) patients. (2) To explore how treatment types and delivery affect HRQoL and invite suggestions for improvement. Methods: Qualitative study. In-depth, semi-structured interviews at 3, 6, 9 or 12 months post-treatment were carried out with 22 women with stage IA to IVB EC who had undergone treatment at a tertiary referral centre for gynaecological cancers in Sheffield, UK. 21 were treated surgically and 4 received adjuvant treatment. Data were analysed using an inductive thematic approach. Results: Four dominant themes emerged regarding the treatment pathway: pre-treatment concerns, experience during treatment, post-treatment and survivorship issues. Expectations and understandings of EC and its treatment were often inaccurate. Proper explanations eased anxiety but were uncommon. Laparoscopic surgery was welcomed where offered but did not necessarily influence coping ability. Instead, women evaluated treatment impacts against their expectations. Treatments affected women's physical abilities, self-perception and relationships resulting in re-evaluation of lifestyle. Conclusions: The impact of treatment upon HRQoL for women with EC differs from other gynaecological cancers. Better information provision would enhance coping ability. Coping methods and expectations appear to influence HRQoL more than treatment modality. abstract_id: PUBMED:7834466 Preservation of function in genitourinary cancers: psychosexual and psychosocial issues. GU cancers are commonly associated with treatment-related sexual dysfunction, varying from mild to severe. For the clinical oncologist, it is important to be aware that sexual problems or sexual dysfunction may occur as a result of any aspect of cancer and cancer treatment. Sexual function is sensitive to the effects of trauma, both physical and emotional. This is particularly the case for patients whose cancer affects their genitals. Despite some commonalities of psychosocial and psychosexual issues in GU patients across diagnostic categories, GU patients do not present as one distinct, homogeneous group. GU neoplasms, with the exception of bladder cancer, affect mostly men. Men and women tend to differ in their ways of dealing with emotional distress and physical illness. While the man may typically do well using denial as a defense and may not want to discuss his feelings about his diagnosis and treatment, his partner may suffer if her way of coping is oriented toward communicating thoughts and feelings about the situation. Another important difference in this population is related to age and developmental differences. Testicular cancer typically is diagnosed in young men between the ages of 15 and 34 (46). The patients often still live with their parents or have just formed families of their own. In contrast, prostate cancer is diagnosed in older men, who are at a completely different stage in their lives when they have to deal with this challenge. Developmental stage at the time of diagnosis and treatment is an important variable in adaptation to cancer (47). Since GU malignancies and their treatments directly or indirectly affect sexual organs and sexual functioning, issues of sexual body image and identity as men or women are frequent concerns for GU patients. This is most obviously the case for men facing penectomy or orchiectomy. Similarly, women undergoing cystectomy with its simultaneous removal of uterus, ovaries, and parts of the vaginal wall face issues regarding their femininity as well as doubts about future sexual functioning. However, body image concerns are not limited to these two groups but may affect any cancer patient. The experience of having lost all hair due to chemotherapy, bearing of surgical scars, lost or altered functioning, and just the realization of having had cancer may change the way a patient feels about his/her body permanently.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:15653253 Women's experiences of information, psychological distress and worry after treatment for gynaecological cancer. Sensitive, appropriate patient information is considered to be an important element in the psychological support of patients. Specialist nurses are seen to have a key responsibility for this work. With regard to gynaecological cancer, evidence suggests that women do not get optimum psychological care. This study set out to explore women's experiences of information, psychological distress and worry after treatment for gynaecological cancer. The study was a survey (not an RCT) and 70 patients from two specialist gynaecological oncology centres were interviewed at the time of diagnosis/initial treatment and again at 6 months. The semi-structured schedule included recognised instruments to assess; sources of information, concerns, and psychological distress. Both initially and at 6 months there was evidence of a considerable burden of worry; over half the women had four or more significant concerns related to their illness experience. However, women who had initial support from a clinical nurse specialist at the time of diagnosis experienced a clinically significant reduction in their level of psychological distress 6 months from diagnosis. Hospital linked professional sources of information were well used at the time of diagnosis, but by 6 months many patients were using non-professional sources such as television, magazines and newspapers. This study suggests that support from a clinical nurse specialist may be able to assist psychological recovery. However, to be effective in this area nurses should be skilled and willing to assess the individual's need for help with information, and managing their worry. abstract_id: PUBMED:34149518 "Will It Affect Our Chances of Having Children?" and Feeling "Like a Ticking Bomb" -The Fertility Concerns and Fears of Cancer Progression and Recurrence in Cancer Treatment Decision-Making Among Young Women Diagnosed With Gynaecological or Breast Cancer. Objective: Cancer treatment decision making process is particularly fraught with challenges for young women because the treatment can affect their reproductive potential. Among many factors affecting the process, fears of cancer progression and recurrence can also be important psychological factors. Our aim is to apply Common-Sense Model and shared decision-making model to explore experiences of treatment decision-making women of reproductive age who were diagnosed with gynaecological or breast cancer and the influence of fertility issues and fears of cancer progression and recurrence. Method: We conducted telephone interviews with 24 women who were diagnosed with gynaecological or breast cancer aged 18-45, who finished active treatment within 5 years prior to study enrolment and had no known evidence of cancer recurrence at the time of participation. They were recruited from three NHS oncology clinics in Scotland and online outlets of cancer charities and support organisations. We analysed the data using Braun and Clarke's thematic analysis method as it allows for both inductive and deductive analyses. Results: We identified five main themes pertaining to treatment-related decision-making experiences and fertility issues and fear of progression and recurrence: Becoming aware of infertility as a potential consequence of cancer treatment; Balancing-prioritising cancer and fertility; Decisions about treatments; Evaluation of treatment decisions; and The consequences of treatments. Sub-themes have also been reported. Different factors such as whether the cancer is breast or gynaecological, physicians' willingness of discussing fertility, influence of others in decision-making, childbearing and relationship status as well as fear of cancer recurrence emerged as important. Conclusion: The importance of physicians directly addressing fertility preservation in the process of treatment decision-making and not treating it as an "add-on" was evident. Satisfaction with treatment decisions depended on both the quality of the process of decision making and its outcome. Fear of recurrence was present in different parts of the adaptation process from illness perceptions to post-treatment evaluation of decisions. Both Common-Sense Model and shared decision-making model were helpful in understanding and explaining young women's experience of treatment decision-making and fertility concerns. abstract_id: PUBMED:29430767 Sexual function, psychosocial adjustment to illness, and quality of life among Chinese gynaecological cancer survivors. Background: Disrupted sexual function is a prevalent and sustained side effect of gynaecological cancer and its related treatment. This problem may pose challenges to the survivors in the process of illness adjustment, leading to elevated psychological distress and impaired quality of life. However, care and interventions in this area have been neglected in most countries. Objectives: To investigate sexual function, psychosocial adjustment to illness, and quality of life among Chinese gynaecological cancer survivors in Hong Kong and to explore their associations. Methods: A cross-sectional design was adopted. Gynaecological cancer survivors were recruited from a gynaecological oncology outpatient clinic at a regional hospital in Hong Kong. Results: A total of 225 Chinese gynaecological cancer survivors were recruited. Their sexual function was found to be impaired. They had satisfactory performance in psychosocial adjustment to illness, but the worst domain was sexual relationships. Their quality of life was fair, with physical and social functioning performing best. Path analysis demonstrated that psychosocial adjustment to illness played a significant mediating role in the relationship between sexual function and quality of life among those who were married or had a regular sex partner. Conclusions: Impaired sexual function was prevalent among Chinese gynaecological cancer survivors, and psychosocial adjustment to illness mediates the relationship between sexual function and quality of life. In Chinese clinical settings without routine sexuality assessments, early sexual function and psychosocial adjustment assessment should be integrated into routine nursing practice. In addition, a culturally appropriate practice model should be developed to guide sexuality care delivery. abstract_id: PUBMED:21810573 Development of a teaching tool for women with a gynecologic malignancy undergoing minimally invasive robotic-assisted surgery. Women undergoing minimally invasive robotic-assisted surgery for a gynecologic malignancy have many questions and concerns related to the cancer diagnosis and surgery. The provision of information enhances coping with such illness-related challenges. A lack of print materials for these patients prompted the creation of a written teaching tool to improve informational support. A booklet was developed using guidelines for the design of effective patient education materials, including an iterative process of collaboration with healthcare providers and women who had undergone robotic-assisted surgery, as well as attention to readability. The 52-page booklet covers the trajectory of the woman's experience and includes the physical, psychosocial, and sexual aspects of recovery. abstract_id: PUBMED:18840207 Living with gynecologic cancer: experience of women and their partners. Purpose: To describe the experiences of Turkish women with gynecologic cancer and their partners. Design And Methods: Qualitative research using a descriptive phenomenological approach. Setting: Gynecologic oncology outpatient clinic of the Gulhane Military Medical Academy in Ankara, Turkey. Nineteen women with gynecologic cancer aged 43-70 years and 12 partners aged 46-73 years were recruited. Data were obtained through open-ended and in-depth, audio-taped interviews which lasted approximately 35-45 minutes each. Findings: Seven themes emerged from the participants' description of their experience: (a) experiences during the diagnosis period; (b) experiences during the treatment period; (c) the effect of cancer on family life; (d) changes in daily life; (e) coping methods and support sources; (f) the meaning of illness; and (g) experiencing the illness as a woman. Conclusions: Receiving the diagnosis of gynecologic cancer was remembered as a very dramatic experience. The women and partners reported many physical and psychosocial difficulties relating to the treatment period. Healthcare professionals must understand and recognize cancer patients' and their partners' experiences to provide appropriate holistic care. Clinical Relevance: Nurses can only provide effective and comprehensive nursing care to gynecologic cancer patients and their families if they fully understand the physical, emotional, social, and spiritual experiences in their lives. abstract_id: PUBMED:24476727 Linking cancer and intimate partner violence: the importance of screening women in the oncology setting. Millions of women in the United States experience physical abuse because of intimate partner violence (IPV) that results in injuries, social and family dysfunction, mental health disorders, chronic pain and illness, and death. Cancer causes a quarter of the deaths of women in the United States. When IPV and a cancer diagnosis intersect, a special population of women with unique needs is created. The purpose of the current study was to determine the rates of IPV and the types of cancer reported by women seeking services for IPV. Safety, community agency use, severity of violence, danger, psychological distress, post-traumatic stress disorder, self-efficacy, social support, pain, and marginality also were assessed. Three hundred abused women were interviewed in person to determine their health, safety, and functioning. Of the 300 women, eight reported receiving a cancer diagnosis, and most of those women had cervical cancer. The prevalence of cervical cancer reported by abused women was 10 times higher than the general population. Higher danger scores and risk for revictimization were reported. Increased awareness of the potential connection between IPV and cancer is needed, and evidence-based strategies that promote IPV screening in the oncology setting should be developed. abstract_id: PUBMED:31050081 Age-related supportive care needs of women with gynaecological cancer: A qualitative exploration. Women with gynaecological cancer (GC) experience significant morbidity with associated needs for support, not all of which are currently met by the current system. Types and levels of unmet needs vary across age and the care continuum. This study aimed to identify the shared and unique supportive care needs of younger and older GC patients and survivors to inform improved supportive care. Nineteen younger and ten older women, 3 months to 5 years post a GC diagnosis, were purposively recruited during active treatment, and at early and extended survivorship. Audiotaped and transcribed semi-structured interviews were thematically analysed to establish areas of needs. GC patients reported nine shared needs relating to support, isolation, uncertainty, information, asking questions, escape from illness, advocacy, loss and finding meaning. Younger patients reported unique needs related to the impact of treatment-induced menopause. There is a need for a systematic screening process to identify women who require and want additional help, to ensure appropriate and timely assistance or referrals are provided. Identification of needs will allow health professionals to provide relevant and timely information and support services, resulting in improved quality of life for women affected by GC. Answer: Yes, single and partnered women with gynecologic cancer do differ in types and intensities of illness and treatment-related psychosocial concerns. A pilot study comparing the psychosocial and psychosexual concerns of single and partnered women with gynecologic cancer found that while both groups reported prognosis as their highest concern, single women reported greater salience in communication with the treatment team, treatment side effects, and prognosis than did partnered women. In contrast, partnered women had greater concerns related to sexuality and partner relationships (PUBMED:17719360). This suggests that relationship status influences current psychosocial concerns among women with gynecologic cancer, despite similar levels of illness- and treatment-related intrusions on important life domains. Additionally, the impact of treatment on health-related quality of life (HRQoL) for women with endometrial cancer differs from other gynecological cancers, and coping methods and expectations appear to influence HRQoL more than treatment modality (PUBMED:24290535). Moreover, genitourinary (GU) cancers, which include gynecologic cancers, are commonly associated with treatment-related sexual dysfunction, and these issues can affect sexual body image and identity as men or women, with concerns about femininity and future sexual functioning being common among women (PUBMED:7834466). Furthermore, the provision of sensitive and appropriate patient information is considered important in the psychological support of patients, and specialist nurses are seen to have a key responsibility for this work. Women who had initial support from a clinical nurse specialist at the time of diagnosis experienced a clinically significant reduction in their level of psychological distress 6 months from diagnosis (PUBMED:15653253). Lastly, the importance of physicians directly addressing fertility preservation in the process of treatment decision-making and not treating it as an "add-on" was evident, as satisfaction with treatment decisions depended on both the quality of the process of decision making and its outcome, with fear of recurrence being present in different parts of the adaptation process (PUBMED:34149518).
Instruction: Comparison of outcomes after posterior spinal fusion for adolescent idiopathic and neuromuscular scoliosis: does the surgical first assistant's level of training matter? Abstracts: abstract_id: PUBMED:24480953 Comparison of outcomes after posterior spinal fusion for adolescent idiopathic and neuromuscular scoliosis: does the surgical first assistant's level of training matter? Study Design: This was a retrospective review of posterior spinal fusion surgical procedures in patients diagnosed with adolescent idiopathic scoliosis (AIS) or neuromuscular scoliosis (NMS). Objective: The purpose was to determine if the first assistant's training experience is associated with outcomes in AIS and NMS surgical procedures. Summary Of Background Data: A previous study found that patients with AIS undergoing posterior spinal fusion with 2 attendings had similar operating times, blood loss, and complication rates compared with those with a resident or fellow first assistant. NMS cases are more complex than AIS cases, but to our knowledge, no previous studies have examined the impact of the first assistant's level of training on NMS outcomes. Methods: This was a single-center retrospective review of 200 patients, 120 with AIS and 80 with NMS, undergoing primary posterior spinal fusion. Minimum follow-up was 2 years. For each diagnosis group, cases assisted by junior orthopedic residents were compared with those assisted by orthopedic fellows. Results: NMS cases were more complex and had higher complication rates than AIS cases (P < 0.05). AIS and NMS cases were similarly distributed among the fellow and junior resident groups (P = 0.63). AIS cases in the fellow and junior resident groups had similar operating times, estimated blood loss (EBL), complications, lengths of stay, and reoperation rates (P > 0.05). In NMS cases, the fellow group had shorter operating times (320 ± 73 min vs. 367 ± 104 min, P = 0.035) and greater percent correction at initial and 2-year follow-up (58 ± 15% vs. 42 ± 19%, P < 0.001). EBL, complications, lengths of stay, and reoperation rates were similar between the assistant groups in NMS cases (P > 0.05). Conclusion: NMS surgical procedures in which fellows serve as the first assistants were associated with shorter operating times and greater percent correction than surgical procedures with junior resident first assistants. Level Of Evidence: 3. abstract_id: PUBMED:28554809 Five-year outcomes of the First Distal Uninstrumented Vertebra after posterior fusion for Adolescent Idiopathic Scoliosis Lenke 1 or 2. Background: Tilt of the First Distal Uninstrumented Vertebra (FDUV) reflects changes in the main curve and compensatory lumbar curve after posterior fusion to treat thoracic Adolescent Idiopathic Scoliosis (AIS). Hypothesis: FDUV tilt 5 years or more post-fusion depends chiefly on reduction of the main curve and on other factors such as selection of the last instrumented vertebra. Material And Method: A multicenter retrospective cohort of 182 patients with Lenke 1 or 2 AIS treated with posterior instrumentation and followed up for a mean of 8 years and a minimum of 5 years was studied. The patients were divided into two groups based on whether tilt of the upper endplate of the FDUV was ≤5° or >5°at last follow-up. Variables associated with tilt were identified by multiple logistic regression. Results: Six variables were significantly associated with FDUVtilt: percentage of correction at last follow-up, correction loss, lumbar modifier B, number of instrumented vertebrae, inclusion within the instrumentation of the distal neutral vertebra, and inclusion within the instrumentation of the lowest vertebra intersected by the central sacral vertical line. Discussion And Conclusion: The main variables associated with FDUVtilt ≤5° were a final correction percentage ≥60% and absence of correction loss between the postoperative period and last follow-up. Given the stable reduction provided by contemporary instrumentations, we recommend selective thoracic fusion of Lenke 1 or 2 AIS with lumbar modifiers A, B, and C. The lowest instrumented vertebra should be either the neutral vertebra or the vertebra intersected by the central sacral vertical line if it is distal to the neutral vertebra. Level Of Evidence Iv: Retrospective multicenter study. abstract_id: PUBMED:38136084 Minimally Invasive Surgery for Posterior Spinal Instrumentation and Fusion in Adolescent Idiopathic Scoliosis: Current Status and Future Application. The posterior minimally invasive spine surgery (MISS) approach-or the paraspinal muscle approach-for posterior spinal fusion and segmental instrumentation in adolescent idiopathic scoliosis (AIS) was first reported in 2011. It is less invasive than the traditionally used open posterior midline approach, which is associated with significant morbidity, including denervation of the paraspinal muscles, significant blood loss, and a large midline skin incision. The literature suggests that the MISS approach, though technically challenging and with a longer operative time, provides similar levels of deformity correction, lower intraoperative blood loss, shorter hospital stays, better pain outcomes, and a faster return to sports than the open posterior midline approach. Correction maintenance and fusion rates also seem to be equivalent for both approaches. This narrative review presents the results of relevant publications reporting on spinal segmental instrumentation using pedicle screws and posterior spinal fusion as part of an MISS approach. It then compares them with the results of the traditional open posterior midline approach for treating AIS. It specifically examines perioperative morbidity and radiological and clinical outcomes with a minimal follow-up length of 2 years (range 2-9 years). abstract_id: PUBMED:33025389 Use of intraoperative navigation for posterior spinal fusion in adolescent idiopathic scoliosis surgery is safe to consider. Purpose: The use of image-guided stereotactic navigation is increasing in use in treating AIS; however, no studies have investigated perioperative outcomes and short-term adverse events compared with non-navigated procedures. The aim of the present study is to use a large national pediatric database to assess the rate of utilization of intraoperative navigation in pediatric patients undergoing posterior spinal fusion for adolescent idiopathic scoliosis (AIS) and to compare thirty-day outcomes of navigated vs. non-navigated surgery. Methods: The NSQIP-Pediatric database was queried for cases of posterior fusion for AIS. Patients were stratified by whether or not a concurrent code for stereotactic navigation was used (CPT 61,783). Year of procedure, demographics, comorbidities, operative variables and perioperative adverse outcomes were abstracted and assessed using univariate and multivariate analysis. Results: Overall, 12,739 non-navigated patients and 340 navigated patients were identified. The use of navigation increased from 0.5% of cases in 2012 to 5.2% of cases in 2018. Demographics, comorbidities, and number of levels fused did not differ between navigated and non-navigated patients. Navigated cases were on average 41 min longer than non-navigated cases (268.6 vs. 309.6 min p < 0.001), with 9.84% more cases requiring transfusion (65.0% vs 75.6%, p < 0.001). Hospital stay for navigated cases was an average of 0.4 days shorter (3.9 days vs 4.3 days, p = 0.001). On multivariate analysis, navigated cases had higher odds of prolonged surgery (OR = 2.13, p < 0.001) and lower odds of prolonged length of stay (OR = 0.28, p < 0.001). Conclusion: Although the use of navigation for AIS posterior fusion was associated with longer operative time, post-operative hospital stay was shorter and other perioperative adverse outcomes were not significantly different between groups. abstract_id: PUBMED:35597892 Surgeon specialty effect on early outcomes of elective posterior spinal fusion for adolescent idiopathic scoliosis: a propensity-matched analysis of 965 patients. Background And Objective: Comparative effectiveness research plays a vital role in health care delivery. Specialty training is one of these variables; surgeons who are trained in different specialties may have different outcomes performing the same procedure. The objective of this study was to investigate the impact of spine surgeon specialty (neurosurgery vs orthopedic surgery) on early perioperative outcome measures of elective posterior spinal fusion (PSF) for adolescent idiopathic scoliosis (AIS). Methods: This is a retrospective, 1:4 propensity score-matched cohort study. 5520 AIS patients were reviewed from ACS-NSQIP pediatric database. Propensity score matching was utilized. Results: Patients operated on by orthopedic surgeons were more likely to have shorter operation time (263 min vs 285 min), shorter total hospital stay (95 h vs 118 h), lower rate of return to operating room within the same admission (1.2% vs 3.8%), lower discharge rates after postoperative day 4 (23.8% vs 30.9%), and lower unplanned readmission rate (1.6% vs 4.1%), (p < 0.05). On the other hand, patients operated on by neurosurgeons had lower perioperative blood transfusion rate (62.1% vs 69.8%), (p < 0.05). Other outcome measures and mortality rates were not significantly different between the two cohorts. Conclusions: This retrospective study found significant differences in early perioperative outcomes of patients undergoing PSF for AIS by neurosurgeons and orthopedic surgeons. Further studies are recommended to corroborate this finding which may trigger changes in the educational curriculum for neurosurgery residents. abstract_id: PUBMED:36197603 Adolescent athletes return to sports rapidly after posterior spine fusion for idiopathic scoliosis: a prospective cohort study. Purpose: No consensus exists regarding the timing for return to sports after PSF for patients with AIS. Return-to-play protocols are based on expert opinion and vary widely. The purpose of this study was to determine how rapidly athletes return to baseline sports activity following posterior spinal fusion for adolescent idiopathic scoliosis. Methods: Athletes were consecutively enrolled. Inclusion criteria included competition at a junior varsity level or greater for ≥ 3 months yearly, major Cobb angle of 40-75°, age 10-18 years, and one year of follow-up. Athletes completed preoperative sports performance and Patient Reported Outcomes Measurement Information System (PROMIS) physical activity, pain interference, and depressive symptoms questionnaires. Self-assessments were repeated monthly until one year after PSF. Results: Twenty-six athletes were enrolled. The median time to return to sport was 2.7 months [range: 0.6-13 months]. At twelve months, 24 of 26 [90.1%; 95% CI 36.9-74.9%] athletes reported they had returned to the sport at their presurgical level of play. Participation in contact sports was associated with a longer return to sport relative to participation in non-contact/limited contact sports [Hazard Ratio: 0.37, 95% 95% CI 0.14-0.97, p = 0.0427]. Conditioning and flexibility were the most common barriers to return to sport. Conclusions: When released to unrestricted activity at 4-8 weeks, athletes rapidly return to baseline levels of sports performance, with over half achieving this metric by 3 months. abstract_id: PUBMED:36462724 How do I plan adolescent idiopathic scoliosis surgery? Systematization of a preoperative planning method. Preoperative planning is essential in adolescent idiopathic scoliosis (AIS) surgery to determine the fusion levels and to perform the procedure with greater precision and diligence. However, the protocolized performance of such planning is not as widespread among specialists in training. The aim of this article is to describe in detail the preoperative planning method for AIS used in a specialized pediatric and adolescent spine unit of a referral center, as well as the logistics and the tactics, supported a free semi-automatic digital measurement and planning software. Three representative cases of different vertebral deformities, treated by posterior spinal fusion after preoperative planning according to the method, are shown. This method is highly suitable for the trainee surgeon as it combines the advantages of traditional and modern methods, and is simple, low cost, accessible, reproducible and with an educational character. abstract_id: PUBMED:31732001 Comparison of the Coagulation Profile of Adolescent Idiopathic Scoliosis Patients Undergoing Posterior Spinal Fusion With and Without Tranexamic Acid. Study Design: Prospective, observational cohort study. Objective: To improve the understanding of coagulation and bleeding mechanisms during spinal deformity surgery. Summary Of Background Data: Fibrinolysis is the mechanism of bleeding for adolescent idiopathic scoliosis undergoing posterior spinal fusion. Antifibrinolytics have become popular; however, literature to support their use remains mixed. The mechanism of action has not been demonstrated. Methods: The coagulation profile of 88 adolescent idiopathic scoliosis patients undergoing posterior spinal fusion was analyzed. Standard coagulation laboratory investigations and thromboelastograms were drawn hourly through the case. Fifty-eight patients received no antifibrinolytic, whereas 30 patients received tranexamic acid by standardized protocol. The coagulation parameters, estimated blood loss, and transfusion requirements were compared in the two groups. Results: The two cohorts had no differences in demographic or surgical characteristics. Mean age was 13.6 years, 83% were female, a mean of 11.1 levels were fused, and the mean duration of surgery was 209 minutes. The tranexamic acid cohort did not demonstrate a decrease in blood loss. The transfusion rate, however, dropped from 47% in the non-tranexamic acid cohort to 23% in the tranexamic acid cohort (p = .03). Standard coagulation parameters did not differ between the groups. Fibrinolysis was diminished in the tranexamic acid cohort as measured by a Fibrinolysis score (mean maximum value 2.0 without tranexamic acid vs. 0.7 with tranexamic acid, p < .0001) and the lysis percent at 30 minutes by thromboelastogram (elevated to 3.9% without tranexamic acid vs. 1.2% with tranexamic acid at the 3-hour mark, p = .05). Conclusions: This study provides confirmation of antifibrinolytic activity during posterior spinal fusion for adolescent idiopathic scoliosis. The presented data of fibrinolysis are proposed as standard measurements for future work on controlling blood loss during scoliosis surgery. Level Of Evidence: Level 2, prospective comparative study. abstract_id: PUBMED:27558492 Comparison between harvesting and preserving the spinous process for adolescent idiopathic scoliosis. Background: Spinous process has been routinely resected during posterior fusion of adolescent idiopathic scoliosis for fusion bed preparation and local autologous bone graft supplement. However, spinous process serves as an important structure in posterior ligament complex and was the anchorage of paraspinal muscle groups. With the development of pedicle screws instrumentation and the potential fusion ability in children, the need for resecting spinous process in this procedure could be further investigated. The purpose of this study was to compare the fusion rates, surgical outcomes and complications between harvesting and preserving the spinous process in posterior fusion of adolescent idiopathic scoliosis. Methods: From January 2003 to December 2008, 104 consecutive adolescent idiopathic scoliosis patients underwent primary posterior fusion with local autologous bone grafts and following for a minimum of 24 months were reviewed. The patients were divided into a harvesting group (n = 61) with the spinous process harvested, and a preserving group (n = 43) with the spinous process preserved. Blood loss, radiographic assessments, and clinical outcomes were compared between the two groups. Results: There were no significant differences in duration of surgery and peri-operative blood transfusion between the two groups. However, blood loss was statistically greater (983 ± 446 ml vs. 824 ± 361 ml; p = 0.048) and duration of hospitalization was statistically longer (7.4 ± 1.0 days vs. 6.8 ± 0.8 days; p = 0.003) in the harvesting group. The pre- and post-operative structural curves, correction rates, sagittal profile and loss of corrections were similar in both groups. Based on radiographic evaluation, the incidences of pseudoarthrosis were similar in both groups (3/61 vs. 2/43; p = 0.95). The incidence of prescribing pain medication for back discomfort during follow-up was statistically higher in the harvesting group (16/61 vs. 4/43; p = 0.03). Conclusions: The surgical outcomes and fusion rates between harvesting and preserving the spinous process were comparable. Resecting the spinous process as local autologous bone graft may not be necessary in posterior fusion for adolescent idiopathic scoliosis patients. abstract_id: PUBMED:30123840 Long-term outcome of posterior spinal fusion for the correction of adolescent idiopathic scoliosis. Background: Adolescent idiopathic scoliosis (AIS) is the most common form of idiopathic scoliosis, and surgery is considered as one of the therapeutic options. However, it is associated with a variety of irreversible complications, in spite of the benefits it provides. Here, we evaluated the long-term outcome of posterior spinal fusion (PSF) of AIS to shed more light on the consequences of this surgery. Methods: In a cross-sectional study, a total of 42 AIS patients who underwent PSF surgery were radiographically and clinically inspected for the potential post-operative complications. Radiographic assessments included the device failure, union status, and vertebral tilt below the site of fusion. Clinical outcomes were evaluated using the Oswestry disability index (ODI) and visual analogue scale (VAS). Results: The mean age of the surgery was 14.4 ± 5.1 years. The mean follow-up of the patients was 5.6 ± 3.2 years. Complete union was observed in all patients, and no device failure was noticed. Pre- and post-operative vertebral tilt below the site of fusion were 11.12° ± 7.92° and 6.21° ± 5.73°, respectively (p < 0.001). The mean post-operative ODI was 16.7 ± 9.8. The mean post-operative VAS was 2.1 ± 0.7. ODI value was positively correlated with follow-up periods (p = 0.04, r = 0.471). New degenerative disc disease (DDD) was observed in 6 out of 37 (16%) patients. Conclusion: In spite of the efficacy and safety of PSF surgery of AIS, it might result in irreversible complications such as DDD. Moreover, the amount of post-operative disability might increase over the time and should be discussed with the patients. Answer: The study described in abstract PUBMED:24480953 investigated whether the level of training of the surgical first assistant impacts outcomes after posterior spinal fusion for adolescent idiopathic scoliosis (AIS) and neuromuscular scoliosis (NMS). The retrospective review included 200 patients, 120 with AIS and 80 with NMS, and compared cases assisted by junior orthopedic residents with those assisted by orthopedic fellows. The results showed that for AIS cases, there were no significant differences in operating times, estimated blood loss (EBL), complications, lengths of stay, and reoperation rates between the fellow and junior resident groups. However, in NMS cases, the fellow group had shorter operating times and greater percent correction at initial and 2-year follow-up compared to the junior resident group. Despite these differences, EBL, complications, lengths of stay, and reoperation rates were similar between the assistant groups in NMS cases. In conclusion, the study found that for NMS surgical procedures, having fellows serve as the first assistants was associated with shorter operating times and greater percent correction than procedures with junior resident first assistants. However, for AIS cases, the level of training of the first assistant did not significantly affect the outcomes measured in the study.
Instruction: Capsaicin-sensitive local sensory innervation is involved in pacing-induced preconditioning in rat hearts: role of nitric oxide and CGRP? Abstracts: abstract_id: PUBMED:9303573 Capsaicin-sensitive local sensory innervation is involved in pacing-induced preconditioning in rat hearts: role of nitric oxide and CGRP? Unlabelled: Among several mediators, nitric oxide (NO) and calcitonin gene-related peptide (CGRP) were suggested to be involved in the mechanism of preconditioning. We examined the possible role of the cardiac capsaicin-sensitive sensory innervation in pacing-induced preconditioning, as well as in the cardiac NO and CGRP content. Wistar rats were treated subcutaneously with capsaicin or its solvent in the sequence of 10, 30, and 50 mg/kg increasing single daily doses for 3 days to deplete neurotransmitters of the sensory innervation. Isolated hearts from both groups were then subjected to either preconditioning induced by three consecutive periods of pacing at 600 beats per minute for 5 min with 5 min interpacing periods, or time-matched non-preconditioning perfusion, followed by a 10-min coronary occlusion. NO content of left ventricular tissue samples was assayed by electron-spin resonance, and CGRP release was determined by radioimmunoassay. CGRP immunohistochemistry was also performed. In the non-preconditioned, solvent-treated group, coronary occlusion decreased cardiac output (CO) from 68.1 to 32.1 mL/min, increased left ventricular end-diastolic pressure (LVEDP) from 0.58 to 1.90 kPa, and resulted in 200 mU/min/g LDH release. Preconditioning significantly increased ischaemic CO to 42.9 mL/min (P < 0.05), decreased ischaemic LVEDP to 1.26 kPa (P < 0.05) and decreased LDH release to 47 mU/min/g (P < 0.05) in the solvent-treated group. Preconditioning did not confer protection in the capsaicin-pretreated group (ischaemic CO: 35.6 mL/min; LVEDP: 1.76 kPa; LDH 156 mU/min/g). Capsaicin-treatment markedly decreased cardiac NO content, CGRP release, and CGRP-immunoreactivity. Conclusions: (i) The presence of an intact local sensory innervation is a prerequisite to elicit pacing-induced preconditioning in the rat heart. (ii) A significant portion of cardiac basal NO content may be of neural origin. (iii) Release of NO and CGRP from capsaicin-sensitive nerves may be involved in the mechanism of pacing-induced preconditioning. abstract_id: PUBMED:15464099 Delayed preconditioning by cardiac ischemia involves endogenous calcitonin gene-related peptide via the nitric oxide pathway. Previous investigations have shown separately that calcitonin gene-related peptide (CGRP) or nitric oxide (NO) is involved in mediation of ischemic preconditioning. In the present study, we tested interactions of CGRP with NO in mediation of delayed preconditioning. In Sprague-Dawley rats, ischemia-reperfusion injury was induced by 45-min occlusion followed by 3-h reperfusion of coronary artery, and preconditioning was induced by four cycles of 3-min ischemia and 5-min reperfusion. Infarct size, plasma creatine kinase activity, the plasma level of NO and CGRP, and the expression of CGRP mRNA in dorsal root ganglion were measured. Pretreatment with preconditioning significantly reduced infarct size and the release of creatine kinase during reperfusion, and caused a significant increase in the expression of CGRP mRNA, concomitantly with an elevation in the plasma level of CGRP and NO. The effects of preconditioning were completely abolished by administration of L-nitroarginine methyl ester (L-NAME, 10 mg/kg, i.p.), an inhibitor of NO synthase. Pretreatment with capsaicin (50 mg/kg, s.c.), which depletes transmitters in capsaicin-sensitive sensory nerves, also blocked the cardioprotection of preconditioning and reduced the synthesis and release of CGRP, but did not affect the concentration of NO. The present results suggest the delayed protection afforded by ischemic preconditioning is also mediated by endogenous CGRP via the NO pathway in rat heart. abstract_id: PUBMED:11749870 Role of calcitonin gene-related peptide in nitric oxide-mediated myocardial delayed preconditioning induced by head stress. Aim: To study the role of calcitonin gene-related peptide (CGRP) in nitric oxide (NO)-mediated myocardial delayed preconditioning induced by heat stress. Methods: The isolated rat heart was perfused in a Langendorff model. Hearts for all groups were subjected to 4 h hypothermia (4 degrees C) and 40 min reperfusion (37 degrees C). In the hyperthermia-treated group, rats were subjected to whole-body hyperthermia (rectal 42 degrees C, 15 min) 24 h before the experiment. Heart rate, coronary flow, left ventricular pressure, and its derivative (+/- dp/dtmax) were recorded, and calcitonin gene-related peptide-like immunoreactivity (CGRP-LI) in plasma and the activity of creatine kinase (CK) in the coronary effluent were measured. Results: Pretreatment with hyperthermia significantly improved the recovery of cardiac protection, reduced the release of CK, and increased plasma concentrations of CGRP. Pretreatment with L-NAME, an inhibitor of NOS, or capsaicin, which selectively depleted sensory neurotransmitter content, abolished the protective effects and the increased level of CGRP elicited by hyperthermia. Conclusion: Endogenous NO is involved in the cardioprotection afforded by heat stress, and the beneficial effects of NO are mediated by CGRP in the rat. abstract_id: PUBMED:25460323 H2S-induced HCO3- secretion in the rat stomach--involvement of nitric oxide, prostaglandins, and capsaicin-sensitive sensory neurons. Hydrogen sulfide (H2S) is known to be an important gaseous mediator that affects various functions under physiological and pathological conditions. We examined the effects of NaHS, a H2S donor, on HCO3(-) secretion in rat stomachs and investigated the mechanism involved in this response. Under urethane anesthesia, rat stomachs were mounted on an ex vivo chamber and perfused with saline. Acid secretion had been inhibited by omeprazole. The secretion of HCO3(-) was measured at pH 7.0 using a pH-stat method and by the addition of 10 mM HCl. NaHS (0.5-10 mM) was perfused in the stomach for 5 min. Indomethacin or L-NAME was administered s.c. before NaHS treatment, while glibenclamide (a KATP channel blocker), ONO-8711 (an EP1 antagonist), or propargylglycine (a cystathionine γ-lyase inhibitor) was given i.p. before. The mucosal perfusion of NaHS dose-dependently increased the secretion of HCO3(-), and this effect was significantly attenuated by indomethacin, L-NAME, and sensory deafferentation, but not by glibenclamide or ONO-8711. The luminal output of nitric oxide, but not the mucosal production of prostaglandin E2, was increased by the perfusion of NaHS. Mucosal acidification stimulated HCO3(-) secretion, and this response was inhibited by sensory deafferentation, indomethacin, L-NAME, and ONO-8711, but not by propargylglycine. These results suggested that H2S increased HCO3(-) secretion in the stomach, and this effect was mediated by capsaicin-sensitive afferent neurons and dependent on nitric oxide and prostaglandins, but not ATP-sensitive K(+) channels. Further study is needed to define the role of endogenous H2S in the mechanism underlying acid-induced gastric HCO3(-) secretion. abstract_id: PUBMED:12183327 Endothelial nitric oxide modulates perivascular sensory neurotransmission in the rat isolated mesenteric arterial bed. 1. A possible role of nitric oxide (NO) as a modulator of capsaicin-sensitive sensory neurotransmission in blood vessels was investigated in the rat isolated mesenteric arterial bed. 2. Electrical field stimulation (EFS) of methoxamine-preconstricted mesenteric beds elicited frequency-dependent vasorelaxation mediated by capsaicin-sensitive sensory nerves. N(G)-nitro-L-arginine methyl ester (L-NAME, 10 and 300 microM) and 7-nitroindazole (7-NI, 100 microM), inhibitors of nitric oxide synthase (NOS), augmented sensory neurogenic vasorelaxation. D-NAME (300 microM), 6-aminoindazole (100 microM) and N(omega)-propyl-L-arginine (50 nM), a selective inhibitor of neuronal NOS, were without effect. The effect of 10 microM L-NAME was reversed by L-arginine (1 mM), the substrate for NOS. 3. L-NAME (300 microM) and 7-NI (100 microM) had no significant effect on vasorelaxations to calcitonin gene-related peptide (CGRP), the principal motor neurotransmitter of capsaicin-sensitive sensory nerves in rat mesenteric arteries, or to capsaicin, indicating a prejunctional action. The inhibitors of NOS had no effect on vasorelaxation to forskolin, but augmented vasorelaxation to sodium nitroprusside (SNP). 4. Removal of the endothelium augmented sensory neurogenic vasorelaxation, but did not affect vasorelaxation to CGRP, indicating a prejunctional action of endothelial NO. 5. In the absence of endothelium, L-NAME (300 microM) inhibited, and 7-NI (100 microM) caused no further augmentation of sensory neurotransmission. 6. SNP (100 nM), a nitric oxide donor, attenuated sensory neurogenic relaxations to EFS. 7. In rat isolated thoracic aortic rings, L-NAME (100 microM) and 7-NI (100 microM) attenuated concentration-dependent relaxations to acetylcholine. 8. These data show that NO modulates sensory neurotransmission evoked by EFS of the rat isolated mesenteric arterial bed, and that when NO synthesis is blocked sensory neurogenic relaxation is augmented. The source of NO is the vascular endothelium. abstract_id: PUBMED:8864557 Rapid nitric oxide- and prostaglandin-dependent release of calcitonin gene-related peptide (CGRP) triggered by endotoxin in rat mesenteric arterial bed. 1. Our objective was to determine whether endotoxin (ETX) could directly trigger the release of calcitonin gene-related peptide (CGRP) from perivascular sensory nerves in the isolated mesenteric arterial bed (MAB) of the rat and to determine whether nitric oxide (NO) and prostaglandins (PGs) are involved. 2. ETX caused time- and concentration-dependent release of CGRP, and as much as a 17 fold increase in CGRP levels in the perfusate at 10-15 min after the administration of ETX (50 micrograms ml-1). 3. CGRP-like immunoreactivity in the perfusate was shown to co-elute with synthetic rat CGRP by reverse-phase h.p.l.c. 4. Pretreatment of MAB with capsaicin or ruthenium red inhibited ETX-induced CGRP release by 90% and 71%, respectively. ETX-evoked CGRP release was decreased by 84% during Ca2(+)-free perfusion. 5. The release of CGRP evoked by ETX was enhanced by L-arginine by 43% and inhibited by N omega-nitro-L-arginine (L-NOARG) and methylene blue by 37% and 38%, respectively. L-Arginine reversed the effect of L-NOARG. 6. Indomethacin and ibuprofen also inhibited the ETX-induced CGRP release by 34% and 44%, respectively. No additive inhibition could be found when L-NOARG and indomethacin were concomitantly incubated. 7. The data suggest that ETX triggers the release of CGRP from capsaicin-sensitive sensory nerves innervating blood vessels. The ETX-induced CGRP release is dependent on extracellular Ca2+ influx and involves a ruthenium red-sensitive mechanism. Both NO and PGs appear to be involved in the ETX-induced release of CGRP in the rat mesenteric arterial bed. abstract_id: PUBMED:11834251 Involvement of alpha-calcitonin gene-related peptide in monophosphoryl lipid A-induced delayed preconditioning in rat hearts. Recent study has shown that monophosphoryl lipid A-induced delayed preconditioning enhanced preservation with cardioplegia and that the protective effects of monophosphoryl lipid A were related to stimulation of calcitonin gene-related peptide (CGRP) release. The purpose of the present study was to explore whether the elevated release of CGRP induced by monophosphoryl lipid A is secondary to stimulation of CGRP synthesis via the nitric oxide (NO) pathway and to characterize the isoform of CGRP. Sprague-Dawley rats were pretreated with monophosphoryl lipid A 24 h before the experiment, and then the left main coronary artery of rat hearts was subjected to 1 h occlusion followed by 3 h reperfusion. Infarct size, plasma creatine kinase activity, the plasma level of CGRP, and the expression of CGRP isoforms (alpha- and beta-CGRP) mRNA in lumbar dorsal root ganglia were measured. Pretreatment with monophosphoryl lipid A (500 microg/kg, i.p.) significantly reduced infarct size and creatine kinase release. Monophosphoryl lipid A caused a significant increase in the expression of alpha-CGRP mRNA, but not of beta-CGRP mRNA, concomitantly with an increase in plasma concentrations of CGRP, and the increased level of CGRP expression happened before stimulation of CGRP release. The effect of monophosphoryl lipid A was completely abolished by pretreatment with L-nitroarginine methyl ester (L-NAME, 10 mg/kg, i.p.), an inhibitor of NO synthase or capsaicin (50 mg/kg, s.c.), which selectively depletes transmitters in capsaicin-sensitive sensory nerves. The results suggest that the delayed cardioprotection afforded by monophosphoryl lipid A involves the synthesis and release of CGRP via the NO pathway, and that the protection is mainly mediated by the alpha-CGRP isoform. abstract_id: PUBMED:7762663 Interaction of endogenous nitric oxide and CGRP in sensory neuron-induced gastric vasodilation. Stimulation of capsaicin-sensitive sensory nerves induces gastric mucosal hyperemia, which is mediated in part by both calcitonin gene-related peptide (CGRP) and nitric oxide (NO). In the present study, we used in vivo microscopy in anesthetized rats to determine 1) whether these agents were released locally at the submucosal level and, if so, 2) whether CGRP dilates arterioles via release of endothelium-derived NO. Intragastric capsaicin (160 microM) dilated submucosal arterioles from 25 +/- 3 to 67 +/- 8 microns. The intragastric capsaicin-induced vasodilation was markedly reversed not only by intravenous administration of the NO synthesis inhibitor NG-nitro-L-arginine methyl ester (L-NAME) but also by submucosal suffusion of either L-NAME or the CGRP receptor antagonist human CGRP-(8-37). The latter findings indicate that both NO and CGRP are released locally at the submucosal level. Submucosal application of CGRP induced dose-dependent dilation of gastric submucosal arterioles, which was significantly attenuated by L-NAME. However, at the same degree of vasodilation (42 microns), the dilation induced with submucosal CGRP was much less attenuated by NO synthesis inhibition (-28%) compared with that induced with intragastric capsaicin (-79%). This indicates that endothelium-derived NO released by CGRP was not the only source of submucosal NO in the latter response. There must be another as yet undetermined source of submucosal NO, e.g., possibly nitroxidergic nerves. abstract_id: PUBMED:11557265 Delayed cardioprotection by intestinal preconditioning is mediated by calcitonin gene-related peptide. Previous studies have shown that nitric oxide and calcitonin gene-related peptide (CGRP) are involved in mediation of the delayed cardioprotection of ischemic or pharmacological preconditioning, and nitric oxide can evoke the release of CGRP. In the present study, we examined the role of CGRP in nitric oxide-mediated delayed cardioprotection by brief intestinal ischemia in rats. The serum concentration of creatine kinase and infarct size were measured after 45-min coronary artery occlusion and 180-min reperfusion. Ischemic preconditioning was induced by six cycles of 4-min ischemia and 4-min reperfusion of the small intestine. Pretreatment with intestinal ischemic preconditioning for 24, 48, or 72 h significantly reduced infarct size and creatine kinase release, and the effects of ischemic preconditioning were completely abolished by L-nitroarginine methyl ester (L-NAME, 10 mg/kg, i.p.), an inhibitor of nitric oxide synthase, or by pretreatment with capsaicin (50 mg/kg, s.c.), which selectively depletes transmitters in capsaicin-sensitive sensory nerves. Intestinal preconditioning caused a significant increase in plasma concentrations of CGRP, and the effect was also abolished by L-NAME or capsaicin. These results suggest that the delayed cardioprotection afforded by intestinal ischemic preconditioning is mediated by endogenous CGRP via the nitric oxide pathway. abstract_id: PUBMED:11744139 Improvement of preservation with cardioplegic solution by nitroglycerin-induced delayed preconditioning is mediated by calcitonin gene-related peptide. Improvement of preservation with cardioplegic solution by nitroglycerin-induced delayed preconditioning was studied in the isolated rat heart. The isolated rat heart was arrested using St. Thomas Hospital solution, and then reperfused with normothermic Krebs-Henseleit solution for 40 min after a 4-h hypothermic ischemic period. Heart rate, coronary flow, left ventricular pressure and the maximum value of the first derivatives of left ventricular pressure (+/-dp/dt(max)) were recorded, and plasma concentrations of CGRP-like immunoreactivity (CGRP-LI) and nitric oxide (NO), tumor necrosis factor-alpha (TNF-alpha) in myocardial tissues, and creatine kinase in coronary effluent were measured. Delayed preconditioning was induced by i.v. injection of nitroglycerin 24 h before the experiment. Nitroglycerin (60 microg/kg or 120 microg/kg) caused an improvement of cardiac function, a decrease in the release of creatine kinase in coronary effluent and a decrease in the content of TNF-alpha in myocardial tissues. Nitroglycerin significantly increased plasma concentrations of CGRP and NO. After pretreatment with capsaicin, which depletes neurotransmitters in sensory nerves, or methylene blue, a selective guanylate cyclase inhibitor, the protection and the elevated release of CGRP induced by nitroglycerin were abolished. The present study suggests that improvement of preservation with cardioplegic solution by nitroglycerin-induced delayed preconditioning is due to stimulation of CGRP release in the rat heart, and that the protection of CGRP-mediated nitroglycerin is related to inhibition of TNF-alpha production. Answer: Capsaicin-sensitive local sensory innervation plays a crucial role in pacing-induced preconditioning in rat hearts, and both nitric oxide (NO) and calcitonin gene-related peptide (CGRP) are implicated in this process. Studies have shown that the presence of an intact local sensory innervation is necessary to elicit pacing-induced preconditioning in the rat heart (PUBMED:9303573). This preconditioning is associated with improved cardiac output, reduced left ventricular end-diastolic pressure, and decreased lactate dehydrogenase release during ischemia, indicating a protective effect. However, when capsaicin is used to deplete neurotransmitters of the sensory innervation, this protective preconditioning effect is lost (PUBMED:9303573). Further research indicates that the cardioprotective effects of preconditioning are mediated by endogenous CGRP via the NO pathway. For instance, delayed protection afforded by ischemic preconditioning is associated with increased expression of CGRP mRNA, elevated plasma levels of CGRP and NO, and reduced infarct size and creatine kinase release. These effects are abolished by the administration of L-nitroarginine methyl ester (L-NAME), an inhibitor of NO synthase, and by capsaicin treatment, which depletes transmitters in capsaicin-sensitive sensory nerves (PUBMED:15464099; PUBMED:11749870). Moreover, studies involving heat stress-induced myocardial delayed preconditioning have demonstrated that endogenous NO is involved in the cardioprotection, and the beneficial effects of NO are mediated by CGRP (PUBMED:11749870). Similarly, monophosphoryl lipid A-induced delayed preconditioning involves the synthesis and release of CGRP via the NO pathway, with the alpha-CGRP isoform being primarily responsible for the protection (PUBMED:11834251). In summary, capsaicin-sensitive local sensory innervation is integral to pacing-induced preconditioning in rat hearts, with NO and CGRP playing pivotal roles in mediating this protective mechanism. The release of NO and CGRP from capsaicin-sensitive nerves appears to be involved in the mechanism of pacing-induced preconditioning (PUBMED:9303573; PUBMED:15464099; PUBMED:11749870; PUBMED:11834251).
Instruction: Functional overreaching: the key to peak performance during the taper? Abstracts: abstract_id: PUBMED:25134000 Functional overreaching: the key to peak performance during the taper? Purpose: The purpose of this study is to examine whether performance supercompensation during taper is maximized in endurance athletes after experiencing overreaching during an overload training (OT) period. Methods: Thirty-three trained male triathletes were assigned to either OT (n = 23) or normal training groups (n = 10, CTL) during 8 wk. Cycling performance and maximal oxygen uptake (V˙O2max) were measured after 1 wk of moderate training, a 3-wk period of OT, and then each week during 4-wk taper. Results: Eleven of the 23 subjects from the OT group were diagnosed as functionally overreached (F-OR) after the overload period (decreased performance with concomitant high perceived fatigue), whereas the 12 other subjects were only acutely fatigued (AF) (no decrease in performance). According to qualitative statistical analysis, the AF group demonstrated a small to large greater peak performance supercompensation than the F-OR group (2.6% ± 1.1%) and the CTL group (2.6% ± 1.6%). V˙O2max increased significantly from baseline at peak performance only in the CTL and AF groups. Of the peak performances, 60%, 83%, and 73% occurred within the two first weeks of taper in CTL, AF, and OR, respectively. Ten cases of infection were reported during the study with higher prevalence in F-OR (70%) than that in AF (20%) and CTL (10%). Conclusion: This study showed that 1) greater gains in performance and V˙O2max can be achieved when higher training load is prescribed before the taper but not in the presence of F-OR; 2) peak performance is not delayed during taper when heavy training loads are completed immediately prior; and 3) F-OR provides higher risk for training maladaptation, including increased infection risks. abstract_id: PUBMED:38450281 Effect of the pre-taper level of fatigue on the taper-induced changes in performance in elite swimmers. Introduction: In swimming, performance gains after tapering could be influenced by the pre-taper level of fatigue. Moreover, this level of fatigue could be associated with sleep. This study aimed to assess (1) the effect of tapering on performance according to the pre-taper level of fatigue in swimmers and (2) the association between sleep and pre-taper level of fatigue. Methods: Physiological, psychological and biomechanical profiles were evaluated in 26 elite swimmers on 2 occasions to estimate the pre-taper level of fatigue: at T0 and T1, scheduled respectively 10 and 3 weeks before the main competition. Sleep quantity and quality were also evaluated at T0 and T1. Race time was officially assessed at T0, T1 and during the main competition. The level of significance was set at p ≤ .05. Results: Fourteen swimmers (17 ± 2 years) were allocated to acute fatigue group (AF) and 12 swimmers (18 ± 2 years) to functional overreaching group (F-OR). From T1 to the main competition, performance was improved in AF (+1.80 ± 1.36%), while it was impaired in F-OR (-0.49 ± 1.58%, p < 0.05 vs. AF). Before taper period, total sleep time was lower in F-OR, as compared to AF. Conversely, the fragmentation index was higher in F-OR (p = .06). From wakefulness to sleep, body core temperature decreased in AF but not in F-OR. Discussion: Performance gain after tapering was higher in AF swimmers than in overreached. Moreover, pre-taper sleep was poorer in overreached swimmers, which could contribute to their different response to the same training load. This poorer sleep could be linked to a lower regulation of internal temperature. abstract_id: PUBMED:29635874 Prediction of taper performance using quasi static FE models: The influence of loading, taper clearance and trunnion length. The head-neck taper junction has been widely reported to corrode leading to adverse tissue reactions. Taper corrosion is a poorly understood phenomenon but has been associated with oxide layer damage and ingress of corrosive physiological fluids. Micromotion may damage the oxide layer; although little is understood about the prevailing stresses which cause this. The ingress of fluid around the joint space into the taper will depend on the taper contact position and the separation of the interfaces during loading. The current work reports on the effect of taper clearances and trunnion length on the taper surface stresses and the taper gap opening. These were determined for CoCr/Ti taper interfaces using FE under loading conditions including walking and stair climb as well as hip simulator load profiles. Shorter trunnions and stair climb loading were shown to generate the greatest taper gaps (82 µm) and also the largest surface stresses (1200 MPa) on the head taper. The largest taper gaps were associated with smaller taper contact areas. Clearances within ±0.1° had no effect on the taper gaps generated, as the tapers engaged over comparable lengths; the taper gap opening was dependent upon the taper engagement length rather than location (proximal or distal) of contact. The walking profile or variants applied by hip simulators, was insufficient to differentiate between taper designs and evaluate differences in the magnitudes of taper gaps. The use of more demanding activity such as stair climb during in vitro evaluations could provide better predictions of taper performance in vivo. © 2018 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 107B: 138-148, 2019. abstract_id: PUBMED:24421726 Identifying Optimal Overload and Taper in Elite Swimmers over Time. The aim of this exploratory study was to identify the most influential training designs during the final six weeks of training (F6T) before a major swimming event, taking into account athletes' evolution over several seasons. Fifteen female and 17 male elite swimmers were followed for one to nine F6T periods. The F6T was divided into two sub-periods of a three-week overload period (OP) and a three-week taper period (TP). The final time trial performance was recorded for each swimmer in his or her specialty at the end of both OP and TP. The change in performances (ΔP) between OP and TP was recorded. Training variables were derived from the weekly training volume at several intensity levels as a percentage of the individual maximal volume measured at each intensity level, and the individual total training load (TTL) was considered to be the mean of the loads at these seven intensity levels. Also, training patterns were identified from TTL in the three weeks of both OP and TP by cluster analysis. Mixed-model was used to analyse the longitudinal data. The training pattern during OP that was associated with the greatest improvement in performance was a training load peak followed by a linear slow decay (84 ± 17, 81 ± 22, and 80 ± 19 % of the maximal training load measured throughout the F6T period for each subject, Mean ± SD) (p < 0.05). During TP, a training load peak in the 1(st) week associated with a slow decay design (57 ± 26, 45 ± 24 and 38 ± 14%) led to higher ΔP (p < 0.05). From the 1(st) to 3(rd) season, the best results were characterized by maintenance of a medium training load from OP to TP. Progressively from the 4(th) season, high training loads during OP followed by a sharp decrease during TP were associated with higher ΔP. Key PointsDuring the overload training period, a medium training load peak in the first week followed by an exponential slow decay training load design was linked to highest performance improvement.During the taper period, a training load peak in the first week associated with a slow decay design led to higher performances.Over the course of the swimmers' athletic careers, better performances were obtained with an increase in training load during the overload period followed by a sharper decrease in the taper period.Training loads schedules during the final six weeks of training before a major swimming event and changes over time could be prescribed on the basis of the model results. abstract_id: PUBMED:28121198 Supercompensation Kinetics of Physical Qualities During a Taper in Team-Sport Athletes. Peaking for major competition is considered critical for maximizing team-sport performance. However, there is little scientific information available to guide coaches in prescribing efficient tapering strategies for team-sport players. Purpose: To monitor the changes in physical performance in elite team-sport players during a 3-wk taper after a preseason training camp. Methods: Ten male international rugby sevens players were tested before (Pre) and after (Post) a 4-wk preseason training camp focusing on high-intensity training and strength training with moderate loads and once each week during a subsequent 3-wk taper. During each testing session, midthigh-pull maximal strength, sprint-acceleration mechanical outputs, and performance, as well as repeated-sprint ability (RSA), were assessed. Results: At Post, no single peak performance was observed for maximal lower-limb force output and sprint performance, while RSA peaked for only 1 athlete. During the taper, 30-m-sprint time decreased almost certainly (-3.1% ± 0.9%, large), while maximal lower-limb strength and RSA, respectively, improved very likely (+7.7% ± 5.3%, small) and almost certainly (+9.0% ± 2.6%, moderate). Of the peak performances, 70%, 80%, and 80% occurred within the first 2 wk of taper for RSA, maximal force output, and sprint performance, respectively. Conclusions: These results show the sensitivity of physical qualities to tapering in rugby sevens players and suggest that an ~1- to 2-wk tapering time frame appears optimal to maximize the overall physical-performance response. abstract_id: PUBMED:17960506 Maximal power and performance during a swim taper. This study examined how altering training intensity during a taper impacts maximal mechanical power (Pmax), torque at power maximum (T), velocity at power maximum (V), and swim performance (m . sec (-1)). Using an arm ergometer with inertial loading, measurements of Pmax, T, and V were made for 7 consecutive weeks prior to the taper and during the taper in 7 female competitive collegiate swimmers. Subjects were tested over two consecutive years. Swim performance was obtained from 3 competitive meets; a conference meet (CM), the conference championship meet (CONF) and the national championship meet (NAT). A 50 to 60 % increase in the amount of "high-intensity training" during the taper of 2005 (High-Intensity Taper - HIT) resulted in Pmax values that were 8 to 14 % higher (40 to 60 Watts) at all but one time point when compared to the 2004 taper (Low-Intensity Taper - LIT). Swim performance was significantly worsened at the NAT following LIT. However, with the HIT, swim performance, Pmax, and T were maintained prior to and at NAT. A large reduction in high-intensity training during a taper reduces the length of time that Pmax, T, and swim performance can be maintained at peak levels. abstract_id: PUBMED:34651125 Longer Disciplined Tapers Improve Marathon Performance for Recreational Runners. For marathoners the taper refers to a period of reduced training load in the weeks before race-day. It helps runners to recover from the stresses of weeks of high-volume, high-intensity training to enhance race-day performance. The aim of this study was to analyse the taper strategies of recreational runners to determine whether particular forms of taper were more or less favorable to race-day performance. Methods: We analyzed the training activities of more than 158,000 recreational marathon runners to define tapers based on a decrease in training volume (weekly distance). We identified different types of taper based on a combination of duration (1-4 weeks of decreasing training) and discipline (strict tapers progressively decrease training in the weeks before the marathon, relaxed tapers do not) and we grouped runners based on their taper type to determine the popularity of different types of taper and their associated performance characteristics. Results: Kruskal-Wallis tests (H(7)≥ 521.11, p < 0.001), followed by posthoc Dunns tests with a Bonferroni correction, confirmed that strict tapers were associated with better marathon performance than relaxed tapers (p < 0.001) and that longer tapers of up to 3 weeks were associated with better performance than shorter tapers (p < 0.001). Results indicated that strict 3-week tapers were associated with superior marathon finish-time benefits (a median finish-time saving of 5 min 32.4 s or 2.6%) compared with a minimal taper (p < 0.001). We further found that female runners were associated with greater finish-time benefits than men, for a given taper type ( ≤ 3-weeks in duration), based on Mann Whitney U tests of significance with p < 0.001. Conclusion: The findings of this study for recreational runners are consistent with related studies on highly-trained athletes, where disciplined tapers were associated with comparable performance benefits. The findings also highlight how most recreational runners (64%) adopt less disciplined (2-week and 3-week) tapers and suggest that shifting to a more disciplined taper strategy could improve performance relative to the benefits of a less disciplined taper. abstract_id: PUBMED:30981609 Influence of flexural rigidity on micromotion at the head-stem taper interface of modular hip prostheses. Fretting corrosion as one reason for failure of modular hip prostheses has been associated with micromotion at the head taper junction. Historically the taper diameter was reduced to improve the range of motion of the hip joint. In combination with other developments, this was accompanied by increased observations of taper fretting, possibly due to the reduced flexural rigidity of smaller tapers. The aim of the study was to investigate how the flexural rigidity of tapers influences the amount of micromotion at the head taper junction. Three different stem and two different taper designs were manufactured. Experimental testing was performed using three different activity levels with peak loads representing walking, stair climbing and stumbling. The relative motion at the head-stem taper was measured in six degrees of freedom. Micromotion was obtained by subtraction of the elastic deformation derived from monoblock and finite element analysis. Less rigid tapers lead to increased micromotion between the head and stem, enlarging the risk of fretting corrosion. The influence of the stem design on micromotion is secondary to taper design. Manufacturers should consider stiffer taper designs to reduce micromotion within the head taper junction. abstract_id: PUBMED:31543826 Performance Development From Youth to Senior and Age of Peak Performance in Olympic Weightlifting. A total of 3,782 performance results for male and female weightlifters, ages 14-30 from 123 countries, from Youth, Junior, and Senior World Championships and Olympic Games 2013-2017 were used to estimate the age at peak performance in Olympic weightlifting and quantify performance development from adolescence to adulthood. The age at peak performance was estimated for men and women globally and for different geographic regions. Overall, male and female weightlifters achieve their peak performance in weightlifting at similar ages. The median peak age is 26.0 years (95% CI: 24.9, 27.1) for men and 25.0 years (95% CI: 23.9, 27.4) for women, at the 90th percentile of performances. The median peak age was 26.3 years for men (95% CI: 24.5, 29.6) and 26.4 years for women (95% CI: 24.5, 29.6), at the 50th percentile. It is a novel finding that the age at peak performance varies for male and female athletes from different geographic regions (Western Europe, Eastern Europe, Middle East, Far East, North- and South America). For some regions men reach peak performance at a younger age than women, while this relationship is reversed for other regions. A possible explanation could be that socio-economic factors influence the pool of available athletes and thus may under- or overestimate the true peak age. Unlike in track and field where the discipline might determine specific body types, weightlifters at all ages compete in body weight classes, enabling us to compare performance levels and annual rate of change for athletes of different body mass. We quantified increases in performance in Olympic weightlifting for male and female adolescents. Sex-specific differences arise during puberty, boys outperform girls, and there is a rapid increase in their performance levels before the further growth slows down. The largest annual rate of increase in the total weight lifted was achieved between 16 and 17 years of age for both sexes with lower body mass and between 21 and 22 years with higher body mass. Such new information may help to establish progression trajectories for young athletes. abstract_id: PUBMED:27478424 Use of a Nonexercise Estimate for Prestroke Peak Vo2 During the Acute Stroke Hospital Stay. Purpose: For individuals with acute stroke, it is difficult to conduct an exercise test to assess peak oxygen consumption (peak Vo2). Therefore, the purpose of this study was to use a clinically feasible tool for assessing prestroke peak Vo2 using a nonexercise estimation equation to test whether estimated prestroke peak Vo2 was related to the functional outcome measures at discharge from the hospital in individuals after an acute stroke. We hypothesized that the estimated prestroke peak Vo2 would be significantly related to discharge Physical Performance Test (PPT), 6-minute walk test (6MWT), and lower extremity Fugl-Meyer (LEFM) assessment. Methods: Estimated prestroke peak Vo2 was calculated using a previously validated prediction equation using the following variables: body mass index, age, sex, resting heart rate, and a self-reported measure of physical activity. Outcome measures were assessed 4 days after enrollment or immediately before discharge (whichever occurred first). Results: Thirty-four participants (mean age = 56.0, SD = 12.6 years; 20 men) with acute stroke were enrolled within 48 hours of admission. For all individuals, mean estimated prestroke peak Vo2 was 27.3 (SD = 7.4) mL·kg-1·min-1 and had a weak, nonsignificant relationship with the PPT (r = 0.19; P = .28), 6MWT (r = 0.10; P = .56), and LEFM (r = 0.32; P = .06). However, when considering sex, women, but not men, had a significant relationship with LEFM (r = 0.73; P = .005) and moderate but nonsignificant relationship with PPT (r = 0.53; P = .06) and 6MWT (r = 0.47; P = .10). Conclusions: Within 48 hours of stroke admission, we were able to administer a nonexercise equation to estimate prestroke peak Vo2. For the entire sample, functional measures conducted at discharge were not related to estimated prestroke peak Vo2. However, when considering sex, the relationship between prestroke Vo2 and the functional measures was strengthened. Answer: Functional overreaching (F-OR) is a state resulting from excessive overload during training, characterized by a temporary performance decrement and high perceived fatigue. The relationship between F-OR and peak performance during the taper phase in athletes is complex and has been the subject of various studies. Research on trained male triathletes found that those who were functionally overreached after an overload training period did not achieve greater gains in performance or maximal oxygen uptake (V˙O2max) during the subsequent taper compared to those who were acutely fatigued (AF) or followed normal training (CTL). In fact, the AF group demonstrated a greater peak performance supercompensation than the F-OR group, and a higher prevalence of infections was reported in the F-OR group, suggesting a higher risk for training maladaptation (PUBMED:25134000). Similarly, a study on elite swimmers showed that performance gains after tapering were higher in swimmers who were acutely fatigued before the taper compared to those who were functionally overreached. The study also found that pre-taper sleep was poorer in overreached swimmers, which could contribute to their different response to the same training load (PUBMED:38450281). Another study on elite swimmers identified that the most effective training pattern during the overload period leading to the greatest improvement in performance was a training load peak followed by a linear slow decay. During the taper period, a training load peak in the first week followed by a slow decay design led to higher performance improvements (PUBMED:24421726). In team-sport athletes, a 3-week taper after a preseason training camp resulted in significant improvements in physical performance, with the majority of peak performances occurring within the first two weeks of taper (PUBMED:28121198). In conclusion, the evidence suggests that while a certain level of fatigue may be necessary to induce supercompensation during the taper, functional overreaching does not appear to be the key to peak performance. Instead, a state of acute fatigue, without reaching the level of functional overreaching, seems to be more beneficial for achieving peak performance during the taper phase.
Instruction: Approach to radial nerve palsy caused by humerus shaft fracture: is primary exploration necessary? Abstracts: abstract_id: PUBMED:21915231 Primary Exploration of Radial Nerve is Not Required for Radial Nerve Palsy while Treating Humerus Shaft Fractures with Unreamed Humerus Nails (UHN). Background: Today, humerus nails have become the surgical method of choice in the treatment of humerus shaft fractures. Whether or not the radial nerve should be intraoperatively examined by default in case of primary paresis is currently under discussion. Patients And Methods: Clinical findings from 38 patients with humeral shaft fractures surgically treated with unreamed humerus nail (UHN) at the Department of Accident Surgery, University Clinics Bonn, Germany, between 2000 and 2003 were retrospectively assessed. Constant Score was applied for evaluation of functional results. Results: In 40% of patients, primary radial nerve paresis was present. This was especially common after high energy trauma (e.g. traffic accident) and significantly increased in fractures of the middle third. In 93% of cases, spontaneous remission of motor and sensory loss was observed. No iatrogenic radial nerve impairment occurred. Conclusion: Due to the high rates of spontaneous remissions of radial nerve palsy after treatment with UHN in humerus shaft fractures, primar exploration of the radial nerve does not appear to be necessary. abstract_id: PUBMED:27812778 Retrospective analysis of extra-articular distal humerus shaft fractures treated with the use of pre-contoured lateral column metaphyseal LCP by triceps-sparing posterolateral approach. Management of extra-articular distal humerus fractures presents a challenge to the treating surgeon due to the complex anatomy of the distal part of the humerus and complicated fracture morphology. Although surgical treatment has shown to provide a more stable reduction and alignment and predictable return to function, it has been associated with complications like iatrogenic radial nerve palsy, infection, non-union and Implant failure. We in the present series retrospectively analysed 20 patients with extra-articular distal humerus shaft fractures surgically treated using the extra-articular distal humeral locking plate approached by the triceps-sparing posterolateral approach. The outcome was assessed using the DASH score, range of motion at the elbow and the time to union. The mean time to radiographic fracture union was 12 weeks. abstract_id: PUBMED:33403064 Anteromedial Surface Plating for Midshaft Fracture Humerus Through an Anterolateral Approach - A Better Option than Anterolateral Plating. Introduction: Osteosynthesis by plate fixation of humeral shaft fractures as a gold standard for fracture fixation has been proven beyond doubt. However, during conventional anterolateral plating Radial nerve injury may occur which can be avoided by applying plate on the medial flat surface. The aim of this study was to evaluate the results of application of plate on the flat medial surface of humerus rather than the conventional anterolateral surface. Materials And Methods: This study was conducted between Oct 2010 to Dec 2015. One-hundred-fifty fracture shafts of the humerus were treated with the anteromedial plating through the anterolateral approach. Results: One-hundred-fifty patients with a fracture shaft of the humerus were treated with anteromedial plating. Twenty were female (mean ±SD,28 years±4.5) and 130 were male (mean ± SD, 38 years±5.6). One hundred and forty-eight out of 150 (98.6%) patients achieved union at 12 months. Two of three patients developed a superficial infection, both of which were treated successfully by antibiotics and one developed a deep infection, which was treated by wound debridement, prolonged antibiotics with the removal of the plate and subsequently by delayed plating and bone grafting. Conclusion: In the present study, we applied plate on the anteromedial flat surface of humerus using the anterolateral approach. It is an easier and quicker fixation as compared to anterolateral plating because later involved much more dissection than a medial application of the plate and this application of plate on a medial flat surface, does not required Radial nerve exposure and palsy post-operatively. The significant improvement in elbow flexion without brachialis dissection is also a potential benefit of this approach. Based on our results, we recommend the application of an anteromedial plate for treatment of midshaft fractures humerus. abstract_id: PUBMED:34227588 Triceps-Sparing Approach for Distal Humerus Fixation. Summary: The triceps-sparing approach to the posterior humerus affords broad exposure for fracture fixation with a minor risk of radial nerve palsy. By using a muscle-sparing interval, the approach facilitates reduction and fixation of humeral shaft and extra-articular distal humerus fractures without injury to the triceps. This video demonstrates the triceps-sparing approach used in the surgical treatment of a 23-year-old man with an extra-articular distal humerus fracture. abstract_id: PUBMED:33928860 Primary Radial Nerve Exploration Determined by Ultrasound in Pediatric Supracondylar Humerus Fracture: A Report of Two Cases. We report two pediatric cases of radial nerve palsy caused by supracondylar fracture of the humerus requiring nerve exploration. The children had comparable conditions, palsy types (complete motor and partial sensory) and radiographic findings. The fracture in the first case was managed conservatively with closed reduction and percutaneous pinning but, while the patient eventually recovered from the partial sensory palsy, her motor palsy remained unchanged. In the second case, ultrasound assessment of the nerve prior to fixation indicated that surgical exploration was needed as it revealed tethering on the edge of the proximal fracture fragment. The nerve was released during an open reduction and the patient subsequently recovered from both sensory and motor palsies. Ultrasonography proved essential in the initial clinical assessment by determining how to proceed. We recommend primary nerve exploration when ultrasound findings show entrapment or tethering of the radial nerve. abstract_id: PUBMED:31061585 Is early exploration of secondary radial nerve injury in patients with humerus shaft fracture justified? Background: Radial nerve is commonly injured with a fracture of the shaft of the humerus. Primary radial nerve injury occurs at the time of fracture, and secondary nerve injuries are seen after closed reduction or operative management. Management of these secondary nerve injuries is controversial regarding conservative approach verses early exploration. Materials And Methods: The PubMed, Science Direct, Scopus, and Google Scholar were used to find out relevant studies in the English language from October 2007 to October 2017. After a search of total 114 articles, we excluded 107 articles which did not meet our inclusion criteria, and only seven studies were thoroughly reviewed. Results: Among the seven studies, three studies by Wang JP, Wang X, and Reichert P have included only secondary radial nerve injuries. Studies by Noaman H, Gouse M, Schwab TR and Bhardwaj A have included both primary as well as secondary radial nerve injury cases. Four studies used a conservative strategy and late exploration was advocated only if no nerve recovery was found within three to five months. Three studies recommended early radial nerve exploration (within the first two weeks) in patients with secondary radial nerve injury. Conclusions: The pattern and duration of radial nerve recovery in secondary nerve injury was similar to that seen in primary radial nerve palsy. No advantage was seen in the early exploration of the radial nerve in most of these studies. If there is no misplaced instrumentation, macroscopic laceration of nerve or fracture displacement in the postoperative radiograph, secondary radial nerve injury can be treated as a primary radial nerve injury, and we recommend observation for a minimum of four to five months before exploration. abstract_id: PUBMED:27521146 Distal-third diaphyseal fractures of the humerus: choice of approach and surgical treatment. Purpose: Distal-third diaphyseal fractures of the humerus are often hard to treat due to location and pattern of the fractures, radial nerve injury, and quality of bone and age of patients. The aim of this retrospective study is to propose the best approach and the best surgical technique according to the pattern of extra-articular fracture of the distal humerus. Methods: We have treated 37 fractures of the distal humerus between January 2010 and July 2015 classified according to the AO classification. There were 2 open fractures. We treated all fractures with open reduction and internal fixation with plates and screws. In 20 cases, we performed a posterior midline triceps-splitting approach, with patients in prone decubitus position; in 2 cases, the triceps-splitting approach with the patients in supine decubitus position; in 3 cases, the olecranon osteotomy approach in prone decubitus position; and in 12 cases, the lateral approach in supine decubitus position. Results: Thirty cases had a medium follow-up of 6 months. We observe 2 post-operative radial nerve palsies healed in 5 months and 2 cases of non-union. The average time to union of remaining cases was 16 weeks (range 12-24). Elbow motility was complete in 25 cases, in 4 cases there was an extension loss of 5°, and in one case there was an extension loss of 10°. Conclusions: The use of plates allows an anatomical fracture reduction, a better control of alignment of humerus and, with a rigid fixation, an early elbow mobilization. The best approach and the best surgical technique depend on the pattern of the fracture of distal humerus. abstract_id: PUBMED:24009907 Modified combined approach for distal humerus shaft fracture: anterolateral and lateral bimodal approach. Background: Due to the anatomical nature of the radial nerve, dissection and attainment of an adequate operative field in mid to distal humerus fracture is dangerous and limited. We devised a combined anterolateral and lateral approach that ensures protection of the radial nerve. This is achieved by performing bimodal dissection of the proximal humerus anteriorly and the distal humerus laterally. Methods: Thirty-five consecutive patients were treated using a combined anterolateral and lateral approach for a minimum follow-up period of 24 months. We analyzed time to bony union, time to return to daily work, range of motion, elbow joint function as assessed by the Mayo elbow performance index and complications. Results: Radiologic bony union was observed at 11.2 weeks (range, 8 to 20 weeks) on average. Four cases of incomplete radial nerve palsy before surgery all recovered. Time to return to work was 10.2 weeks (range, 2 to 32 weeks) on average. The average range of motion of the elbow was 3.3° (range, 0° to 10°) of extension and 135.9° (range, 125° to 145°) of flexion. There were 21 cases of excellent and 13 cases of good or better recovery, comprising over 97.1% on the Mayo elbow performance index. There were no complications of radial nerve palsy, non-union, mal-union, or infection. Conclusions: Our a modified combined anterolateral and lateral approach is a clinically effective surgical method of achieving protection of the radial nerve and securing easy and firm internal fixation. abstract_id: PUBMED:27900704 Humerus shaft fracture complicated by radial nerve palsy: Is surgical exploration necessary? Fractures of the humerus shaft often are complicated by radial nerve palsy. Controversy still exists in the treatment that includes clinical observation and eventually late surgical exploration or early surgical exploration. Algorithms have been proposed to provide recommendations with regard to management of the injuries. However, advantages and disadvantages are associated with each of these algorithms. The aim of this study was to analyze the indications of each treatment options and facilitate the surgeon in choosing the conduct for each lesion, proposing our own algorithm. abstract_id: PUBMED:23827843 Management of radial nerve palsy following fractures of the humerus. Radial nerve palsy is the most common peripheral nerve injury following a humerus fracture, occurring in 2% to 17% of cases. Radial nerve palsies associated with closed humerus fractures have traditionally been treated with observation, with late exploration restricted to cases without spontaneous nerve recovery at 3 to 6 months. Advocates for early exploration believe that late exploration can result in increased muscular atrophy, motor endplate loss, compromised nerve recovery upon delayed repair, and significant interval loss of patient function and livelihood. In contrast, early exploration can hasten nerve injury characterization and repair, and facilitate early fracture stabilization and rehabilitation. Answer: The approach to radial nerve palsy caused by humerus shaft fractures varies, and the necessity of primary exploration is debated. Several studies suggest that primary exploration of the radial nerve may not be required in all cases. For instance, a study assessing humeral shaft fractures treated with unreamed humerus nails (UHN) found that 93% of cases with primary radial nerve palsy experienced spontaneous remission of motor and sensory loss, indicating that primary exploration of the radial nerve does not appear to be necessary (PUBMED:21915231). Similarly, another study concluded that the pattern and duration of radial nerve recovery in secondary nerve injury were similar to primary radial nerve palsy, and no advantage was seen in early exploration of the radial nerve in most cases. Observation for a minimum of four to five months before exploration was recommended (PUBMED:31061585). Other studies have focused on surgical techniques that minimize the risk of radial nerve injury. For example, anteromedial surface plating for midshaft humerus fractures through an anterolateral approach was found to be a better option than anterolateral plating, as it does not require radial nerve exposure and thus avoids post-operative radial nerve palsy (PUBMED:33403064). The triceps-sparing approach for distal humerus fixation also presents a minor risk of radial nerve palsy by using a muscle-sparing interval (PUBMED:34227588). However, there are circumstances where primary exploration may be indicated. A report on pediatric supracondylar humerus fractures suggested that primary nerve exploration should be considered when ultrasound findings show entrapment or tethering of the radial nerve (PUBMED:33928860). Additionally, some advocate for early exploration to facilitate early fracture stabilization and rehabilitation, and to avoid potential complications associated with delayed nerve repair (PUBMED:23827843). In conclusion, while primary exploration of the radial nerve is not universally deemed necessary for radial nerve palsy following humerus shaft fractures, certain cases, such as those with evidence of nerve entrapment or tethering, may benefit from early intervention. The decision should be individualized based on clinical assessment, imaging findings, and the specific circumstances of the injury.
Instruction: Colorectal tumor vascularity: quantitative assessment with multidetector CT--do tumor perfusion measurements reflect angiogenesis? Abstracts: abstract_id: PUBMED:18812560 Colorectal tumor vascularity: quantitative assessment with multidetector CT--do tumor perfusion measurements reflect angiogenesis? Purpose: To establish the relationships between quantitative perfusion computed tomography (CT) parameters-specifically, primary tumor blood flow, blood volume, transit time, and permeability surface-area product-and immunohistologic markers of angiogenesis in colorectal cancer. Materials And Methods: After institutional review board approval and informed patient consent were obtained for this prospective study, 23 patients (11 men, 12 women; mean age, 68.4 years; age range, 34.8-87.1 years) with colorectal adenocarcinoma underwent a 65-second perfusion CT examination, and tumor blood flow, blood volume, mean transit time, and permeability surface-area product were determined. After surgery, resected specimens were sectioned and stained immunohistochemically to identify CD34 for quantification of microvessel density (MVD), to identify smooth muscle actin for assessment of pericyte coverage index, to identify vascular endothelial growth factor (VEGF), and to identify glucose transporter protein (GLUT-1). Perfusion CT measurements were correlated with MVD, pericyte coverage index, VEGF expression, and GLUT-1 expression by using Pearson or Spearman rank correlation analysis, with significance assigned at the 5% level. Results: Mean blood flow, blood volume, transit time, and permeability surface-area product values were 72.1 mL/min/100 g of tissue +/- 28.4 (standard deviation), 6.2 mL/100 g of tissue +/- 1.4, 9.3 seconds +/- 3.9, and 13.9 mL/min/100 g of tissue +/- 3.2, respectively. Blood volume (r = 0.59, P = .002) and permeability surface-area product (r = 0.46, P = .03) correlated positively with MVD, but blood flow (r = 0.27, P = .22) and transit time (r = -0.18, P = .44) did not. There were no significant associations between any perfusion CT parameter and pericyte coverage index (r <or= 0.36, P > .05), VEGF score (rho <or= 0.30, P >or= .15), or GLUT-1 score (rho < 0.21, P >or= .33). Conclusion: Tumor permeability surface-area product and blood volume correlate positively with MVD and may reflect the microvascularity of colorectal tumors. abstract_id: PUBMED:24809317 Assessment of tumor grade and angiogenesis in colorectal cancer: whole-volume perfusion CT. Rationale And Objectives: The preoperative evaluation of tumor grading and angiogenesis has important clinical implications in the treatment and prognosis of patients with colorectal cancers (CRCs). The aim of the present study was to assess tumor perfusion with 256-slice computed tomography (CT) using whole-volume perfusion technology before surgery, and to investigate the differences in the perfusion parameters among tumor grades and the correlation between perfusion parameters and pathologic results in CRC. Materials And Methods: Thirty-seven patients with CRC confirmed by endoscopic pathology underwent whole-volume perfusion CT assessments with a 256-slice CT and surgery. Quantitative values for blood flow, blood volume, and time to peak were determined using commercial software. After surgery, resected specimens were analyzed immunohistochemically with CD105 antibodies for the quantification of microvessel density (MVD). The difference in CT perfusion parameters and MVD among different tumor differentiation grades was evaluated by the Student-Newman-Keuls test. The correlations between CT perfusion parameters and MVD were evaluated using the Pearson correlation analysis. Results: The mean blood flow was significantly different among well, moderately, and poorly differentiated groups (61.17 ± 17.97, 34.80 ± 13.06, and 22.24 ± 9.31 mL/minute/100 g, respectively; P < .05). The blood volume in the well-differentiated group was significantly higher than that in the moderately differentiated group (33.96 ± 24.81 vs. 16.93 ± 5.73 mL/100 g; P = .002) and that in the poorly differentiated group (33.96 ± 24.81 vs. 18.05 ± 6.01 mL/100 g; P = .009). The time to peak in the poorly differentiated group was significantly longer than that in the well-differentiated group (27.81 ± 11.95 vs. 17.60 ± 8.53 seconds; P = .016) and that in the moderately differentiated group (27.81 ± 11.95 vs. 18.94 ± 7.47 seconds; P = .028). There was no significant difference in the MVD among well, moderately, and poorly differentiated groups (33.47 ± 14.69, 28.89 ± 11.82, and 29.89 ± 11.02, respectively; P > .05). There was no significant correlation between CT perfusion parameters and MVD (r = 0.201, 0.295, and -0.178, respectively; P = .233, .076, and .292, respectively). Conclusions: CT whole-volume perfusion technology has the potential to evaluate pathologic differentiation grade of CRC before surgery. However, preoperative perfusion CT parameters do not reflect the MVD of CRC. abstract_id: PUBMED:22563293 Perfusion CT in colorectal cancer: comparison of perfusion parameters with tumor grade and microvessel density. Objective: The purpose of this study was to prospectively compare pre-operative computed tomography (CT) perfusion parameters with tumor grade from colorectal adenocarcinoma (CRC) and to correlate pre-operative CT perfusion parameters with microvessel density (MVD) to evaluate angiogenesis in CRC. Materials And Methods: Pre-operative perfusion CTs were performed with a 64-channel multidetector row CT in 27 patients (17 women and 10 men; age range 32-82 years) who were diagnosed with CRC involving the sigmoid and rectum between August 2006 and November 2007. All patients underwent surgery without pre-operative chemotherapy or radiation therapy. Dynamic perfusion CTs were performed for 65 seconds after intravenous injection of contrast medium (100 mL, 300 mg of iodine per mL, 5 mL/sec). Before surgery, blood flow (BF), blood volume, mean transit time (MTT), and permeability-surface area product were measured in the tumor. After surgery, one gastrointestinal pathologist evaluated tumor grade and performed immunohistochemical staining using CD 34 to determine MVD in each tumor. The Kruskal-Wallis test was used to compare CT perfusion parameters with tumor grade, and Pearson's correlation analysis was used to correlate CT perfusion parameters with MVD. Results: In 27 patients with CRC, tumor grading was as follows: well differentiated (n = 8); moderately differentiated (n = 15); and poorly differentiated (n = 4). BF was higher in moderately differentiated CRC than well differentiated and poorly differentiated CRCs (p = 0.14). MTT was shorter in moderately differentiated than well differentiated and poorly differentiated CRCs (p = 0.039). The MVD was greater in poorly differentiated than well differentiated and moderately differentiated CRCs (p = 0.034). There was no significant correlation between other perfusion parameters and tumor grade. There was no significant correlation between CT perfusion parameters and MVD. Conclusion: BF and MTT measurement by perfusion CT is effective in predicting moderately differentiated CRCs. However, perfusion CT is limited in distinguishing well differentiated and poorly differentiated CRCs. Pre-operative perfusion CT does not reflect the MVD of CRCs. abstract_id: PUBMED:19481397 Evaluation of angiogenesis in colorectal carcinoma with multidetector-row CT multislice perfusion imaging. To evaluate the correlation between 64 multidetector-row CT (64MDCT) perfusion imaging in colorectal carcinoma and microvessel density (MVD) and vascular endothelial growth factor (VEGF), 64MDCT perfusion imaging was performed in 33 patients with pathologically verified colorectal carcinoma. These images were analyzed with perfusion functional software, and time-density curves (TDC) were created for the region of interest (ROI) encompassing the tumor, the target artery and vein. The individual perfusion maps generated indicated blood flow (BF), blood volume (BV), mean transit time (MTT), and permeability-surface area product (PS). MVD and VEGF were evaluated by immunohistochemical staining with anti-CD34 and anti-VEGF, respectively. Correlations between MVD or VEGF with CT perfusion parameters and clinicopathological factors (Dukes' stages, invasion depth, and lymph node and liver metastasis) were also investigated. MVD in the colorectal carcinoma was 22.61+/-9.01 per x200 field. The scores obtained for VEGF expression were 4.15+/-1.09. VEGF staining was positive in 25 of 29 tumors (86.2%). There was no significant correlation between the presence of MVD, VEGF expression and clinicopathological factors (P>0.05). There was also no correlation between MVD, VEGF expression, and any dynamic CT parameters (P>0.05). The BV and MTT were significantly higher in tumors demonstrating serous coat invasion than in those without it (t=-2.63, -2.24, P=0.0137, 0.0331, respectively). BV was also significantly correlated with tumor size (r=0.41, P=0.02). Neither BF nor PS was correlated with clinicopathological factors. In conclusion, 64MDCT perfusion imaging, MVD, and VEGF may reflect angiogenic activity, but no significant correlation among these factors. abstract_id: PUBMED:22514101 Perfusion CT to assess angiogenesis in colon cancer: technical limitations and practical challenges. Objective: Perfusion CT may have the potential to quantify the degree of angiogenesis of solid tumours in vivo. This study aims to identify the practical and technical challenges inherent to the technique, and evaluate its feasibility in colorectal tumours. Methods: 51 patients from 2 institutions prospectively underwent a single perfusion CT on 2 different multidetector scanners. The patients were advised to breath-hold as long as possible, followed by shallow breathing, and were given intravenous buscopan to reduce movement. Numerous steps were explored to identify the challenges. Results: 43 patients successfully completed the perfusion CT as per protocol. Inability to detect the tumour (n=3), misplacement of dynamic sequence co-ordinates (n=2), failure of contrast injection (n=2) and displacement of tumour (n=1) were the reasons for failure. In 14 cases excessive respiratory motion displaced the tumour out of the scanning field along the temporal sequence, leading to erroneous data capture. In nine patients, minor displacements of the tumour were corrected by repositioning the region of interest (ROI) to its original position after reviewing each dynamic sequence slice. In 20 patients the tumour was stable, and data captured from the ROI were representative, and could have been analysed by commercially available Body Tumor Perfusion 3.0® software (GE Healthcare, Waukesha, WI). Hence all data were manually analysed by MATLAB® processing software (MathWorks, Cambridge, UK). Conclusion: Perfusion CT in tumours susceptible to motion during acquisition makes accurate data capture challenging and requires meticulous attention to detail. Motion correction software is essential if perfusion CT is to be used routinely in colorectal cancer. abstract_id: PUBMED:26421473 Predictive Significance of Tumor Grade Using 256-Slice CT Whole-Tumor Perfusion Imaging in Colorectal Adenocarcinoma. Rationale And Objectives: The preoperative assessment of tumor grade has important clinical implications for the treatment and prognosis of patients with colorectal adenocarcinomas. The purpose of this study is to investigate the predictive significance of colorectal adenocarcinoma grade using 256-slice whole-tumor computed tomography (CT) perfusion. Materials And Methods: Fifty-three patients with proven colorectal adenocarcinomas were enrolled. All of them underwent 256-slice whole-tumor CT perfusion. They were divided into two different subgroups according to postoperative pathological results: low grade and high grade. The Kruskal-Wallis test or one-way analysis of variance was used for comparison of CT perfusion parameters between different tumor grades. Multivariant correlation between pathologic tumor stage, histologic tumor differentiation, and whole-tumor CT perfusion parameters was evaluated by Spearman rank correlation coefficient. According to receiver operating characteristic (ROC) curves, perfusion parameters including blood flow (BF), peak enhancement index (PEI), blood volume (BV), and time to peak (TTP) of 53 patients were analyzed, and the sensitivity, specificity, and accuracy of these parameters in predicting tumor grade were calculated. Results: There were significant differences in BF and TTP between low-grade and high-grade tumors. According to the ROC curve, BF and TTP were of diagnostic significance, with the area under the curve values of 0.828 and 0.736, respectively. The diagnostic threshold of BF was 32.12 mL/min/100 g and that of TTP was 18.10 seconds. Conclusions: The CT perfusion parameters (BF, TTP) of first-pass 256-slice whole-tumor CT perfusion imaging can reflect tumor grade in colorectal adenocarcinoma. abstract_id: PUBMED:24434157 Perfusion CT imaging of colorectal cancer. Imaging plays an important role in the assessment of colorectal cancer, including diagnosis, staging, selection of treatment, assessment of treatment response, surveillance and investigation of suspected disease relapse. Anatomical imaging remains the mainstay for size measurement and structural evaluation; however, functional imaging techniques may provide additional insights into the tumour microenvironment. With dynamic contrast-enhanced CT techniques, iodinated contrast agent kinetics may inform on regional tumour perfusion, shunting and microvascular function and provide a surrogate measure of tumour hypoxia and angiogenesis. In colorectal cancer, this may be relevant for clinical practice in terms of tumour phenotyping, prognostication, selection of individualized treatment and therapy response assessment. abstract_id: PUBMED:19598297 Perfusion computed tomography in colorectal cancer: protocols, clinical applications and emerging trends. Perfusion computed tomography (CT) has emerged as a novel functional imaging technique with gradually increasing importance in the management of colorectal cancer (CRC). By providing the functional tumor microvasculature, it also helps the assessment of therapeutic response of anti-angiogenic drugs as it may reflect tumor angiogenesis. Perfusion CT has been applied in clinical practice to delineate inflammatory or neoplastic lymph nodes irrespective of their size, identify micro-metastases and to predict metastases in advance of their development. It is of increasing significance for preoperative adjuvant therapies and avoidance of unnecessary interventions. Despite controversies regarding the techniques employed, its validity and reproducibility, it can be advantageous in the management of CRCs in which the prognosis is dependent on preoperative staging. With recent advances in the perfusion CT techniques, and incorporation to other modalities like positron emission tomography, perfusion CT will be a novel tool in the overall management of CRCs. This article aims at reviewing the existing clinical applications and recent advances of perfusion CT with a reference to future development in the management of CRCs. abstract_id: PUBMED:25516645 Computed tomography perfusion imaging as a potential imaging biomarker of colorectal cancer. Neovascularization was reported to arise early in the adenoma-carcinoma sequence in colorectal cancer (CRC), and the importance of angiogenesis in cancer progression has been established. Computed tomography (CT) perfusion (CTP) based on high temporal resolution CT images enables evaluation of hemodynamics of tissue in vivo by modeling tracer kinetics. CTP has been reported to characterize tumor angiogenesis, and to be a sensitive marker for predicting recurrence or survival in CRC. In this review, we will discuss the biomarker value of CTP in the management of CRC patients. abstract_id: PUBMED:11220094 Improvement in diagnosis of liver metastases with the multi-detector CT The introduction of multidetector computed tomography in clinical routine creates a new dimension of increased spatial and temporal resolution in CT scanning. On the basis of the very short acquisition time, the whole liver can now be scanned within one breathhold. Multidetector-CT allows the examination of the liver with nearly isotropic data sets. This is the prerequisite for the optimal assessment of very small liver lesions in all planes and excellent enhancement of liver lesions in defined organic perfusion phases. The speed of multidetector-CT can either be used to reduce the time to cover a given volume, or to use narrower beam collimation to increase the resolution of details along the z-axis and to reduce volume averaging. We report our clinical experiences in hepatic multidetector-CT examinations and optimized examination protocols in the assessment of hepatic metastases. Multidetector-CT and the use of interactive multiplanar reconstructions improve the detection and characterization of liver metastases. However, new problems in data management may arise from the large amount of data generated by multidetector-CT. Answer: The relationship between quantitative perfusion computed tomography (CT) parameters and immunohistologic markers of angiogenesis in colorectal cancer has been investigated in several studies. The results, however, are somewhat mixed. One study found that tumor permeability surface-area product and blood volume correlate positively with microvessel density (MVD), suggesting that these perfusion CT measurements may reflect the microvascularity of colorectal tumors (PUBMED:18812560). However, this study did not find significant associations between perfusion CT parameters and other markers of angiogenesis such as pericyte coverage index, vascular endothelial growth factor (VEGF) score, or glucose transporter protein (GLUT-1) score. Another study using whole-volume perfusion CT technology found that CT perfusion parameters were able to evaluate the pathologic differentiation grade of colorectal cancer (CRC) before surgery, but these parameters did not reflect the MVD of CRC (PUBMED:24809317). Similarly, another study concluded that while blood flow (BF) and mean transit time (MTT) measurement by perfusion CT could be effective in predicting moderately differentiated CRCs, perfusion CT was limited in distinguishing well differentiated and poorly differentiated CRCs and did not reflect the MVD of CRCs (PUBMED:22563293). A study evaluating the correlation between 64 multidetector-row CT (64MDCT) perfusion imaging and MVD and VEGF in colorectal carcinoma found no significant correlation between MVD, VEGF expression, and any dynamic CT parameters (PUBMED:19481397). In summary, while some perfusion CT parameters may correlate with certain markers of angiogenesis such as MVD, the overall evidence suggests that tumor perfusion measurements do not consistently reflect angiogenesis across all studies and markers. The variability in findings may be due to technical limitations, practical challenges, and the complex nature of tumor angiogenesis (PUBMED:22514101, PUBMED:26421473, PUBMED:24434157, PUBMED:19598297, PUBMED:25516645, PUBMED:11220094).
Instruction: Is psychotherapy a reliable alternative to pharmacotherapy to promote the mental health of infertile women? Abstracts: abstract_id: PUBMED:18848381 Is psychotherapy a reliable alternative to pharmacotherapy to promote the mental health of infertile women? A randomized clinical trial. Objective: Women with fertility problems experience a higher prevalence of negative emotions than women without fertility problems. The goal of this study was to compare the effects of psychological intervention with psychotropic medication on the mental health improvement of depressed infertile women. Study Design: In a randomized controlled clinical trial, 89 depressed infertile women that they were recruited and divided into three groups in three groups: cognitive behavior therapy (CBT), antidepressant therapy, and a control group. Twenty-nine participants in the CBT method received 10 sessions on relaxation training, restructuring, and eliminating negative automatic thoughts and dysfunctional attitudes to infertility. Thirty participants in the pharmacotherapic group took 20mg fluoxetine daily for 90 days. Thirty control subjects did not receive any intervention. All participants completed the Beck Depression Inventory (BDI) and the General Health Questionnaire (GHQ) at the beginning and end of the study. Paired t-test, ANOVA, chi(2), and McNemar tests were used to analyze the data. Results: Fluoxetine significantly reduced the mean of three subscale scores of the GHQ anxiety (7.3+/-4.1 vs. 5.1+/-3.2), social function (7+/-2.8 vs. 4.3+/-2), and depression (7.8+/-5.2 vs. 4.4+/-2.2) but could not significantly change the mean score of psychosomatic signs. The CBT method effectively reduced the mean of all four GHQ subscales: anxiety (8+/-4 vs. 3.2+/-2), social function (7.2+/-2.6 vs. 4.7+/-2.5), depression (7.7+/-4.2 vs. 3.6+/-2.7), and psychosomatic signs (7.5+/-3.2 vs. 5.5+/-3.2). Also, both methods significantly reduced the total GHQ scores. Successful treatment of depression in three groups was fluoxetine group 50%, CBT 79.3%, and control 10%. The mean Beck scores among the groups at the beginning and end of study were, respectively: fluoxetine 23.2+/-8.6 versus 14.3+/-8.5 (p<0.001), CBT 20+/-7.9 versus 7.7+/-4.8 (p<0.001), and control 19.8+/-8.5 versus 19.7+/-8.4 (p=0.9). Although both fluoxetine and CBT significantly decreased the mean BDI scores more than the control group, the decrease in the CBT group was significantly greater than the fluoxetine group. Conclusion: Psychotherapy, such as group CBT, was superior to or at least as effective as pharmacotherapy to promote the well being of depressed infertile women. abstract_id: PUBMED:32309766 Is psychosexual therapy a reliable alternative to bupropion extended-release to promote the sexual function in infertile women? An RCT. Background: Various treatment methods are used to deal with sexual problems. Objective: This study was applied to answer the question of whether psychosexual therapy (PST) can be a reliable alternative to bupropion extended-release (BUP ER) to promote sexual function in infertile women. Materials And Methods: In this randomized clinical trial, 105 infertile women with sexual dysfunction were randomly allocated to three groups: PST, BUP ER, and a control group. The PST group participated in a total of eight 2-hr group sessions. In BUP ER group, 150 mg/day Bupropion ER was administered for eight weeks. The control group did not receive any interventions. The female sexual function index (FSFI) and a clinical interview were used to assess their sexual dysfunction. Results: The mean pre-to-post treatment scores of FSFI and its subscales increased significantly in PST and BUP ER groups (except in the subscale of sexual pain) (p = 0.0001, p = 0.0001). The changes in the subjects were not significant in the control group. After adjusting for the baseline values, the results remained significant for the mean FSFI (p = 0.0001), and its subscales between the groups. Compared to the control group, a significant increase was observed in the mean FSFI (p = 0.0001, p = 0.002) and its subscales in the PST group and in the BUP ER group (except in the subscale of sexual pain). Comparison of two intervention methods showed that PST had the better effect on the sexual function improving (p = 0.0001) and its subscales (exempting the subscale of orgasm) than BUP ER. Conclusion: PST can be considered not only a reliable alternative to pharmacotherapy; it also produces better results in terms of improving sexual function in infertile women. abstract_id: PUBMED:23052895 Mental disorders and female infertility Approximately 5-10% of women in the reproductive years are affected by infertility which is associated with depression, anxiety and disturbed eating behavior. Part of this association can be explained by the emotional stress resulting from infertility. As mental disorders, such as depressive disorder or eating disorders are also prospectively associated with infertility, a bidirectional relationship is assumed. A special relationship exists between mental disorders and the main causes of ovulatory infertility, hypothalamic amenorrhea and polycystic ovary disease. The results of pilot studies support the assumption that psychotherapy may constitute an important component of the treatment of infertility. abstract_id: PUBMED:11887609 Infertility and the role of psychotherapy. Infertility affects 6 million American women and their partners, about 10% of the reproductive age population. Recent improvements in medication, microsurgery, and assisted reproductive technology (ART) make pregnancy possible for more than half the couples pursuing treatment, yet infertility is more than a medical condition. Infertility touches all aspects of a person's life. It affects how individuals feel about themselves, their relationships, and their life perspective. Stress is only one of a myriad of emotional realities that couples facing infertility deal with, often for extended periods of time. In addition to ongoing stress, infertility creates issues of guilt, anxiety, tension within the relationship, and feelings of depression and isolation. Treating couples and individuals who are involved in the journey of dealing with infertility is an opportunity for clinical specialists in psychiatric mental health nursing. The expertise of clinical specialists both in providing psychotherapy services as well as providing a bridge of understanding of sophisticated medical and surgical procedures places them in a unique position among the various disciplines offering mental health services. This paper provides an overview of the psychological issues that play a role in psychotherapy with this population and suggests specific clinical interventions and potential role expansion for psychiatric clinical specialists. abstract_id: PUBMED:29264989 Mental state assessment of recipients in the IVF donor programs and psychotherapeutic methods of its correction. The management of infertility with assisted reproductive technologies (ART) could solve a medical problem but still leave psychosocial problems, associated with both long-term infertility and specific treatment, unaddressed in a number of cases. Evaluation of the mental state of recipients in the IVF donor programs and its dynamics when mid-term multimodal psychotherapy has been used. The study was conducted in the Family Medicine Center - an ART clinic of the city of Ekaterinburg, Russia between 2016 and 2017. Mental state assessment of 200 potential female candidates for the IVF programs was made with the following methods: figures test and color test of relations, a method for the self-assessment of the emotional state by Hans Eysenck, a method for assessing neuropsychic stress by Nemchin, an original questionnaire for patients of the ART clinic. In our study, it was revealed that women requiring donor oocytes with indirect motivation to have a child demonstrated predominantly a high and moderate level of anxiety, frustration, aggression, rigidity, and neuropsychic stress. The vast majority of women had partial compliance for treatment. With psychotherapeutic approach using mid-term multimodal psychotherapy, patients' compliance for treatment could be improved, resulting in a reduction in negative mental states and development of adequate motivation for oocyte donation in the IVF programs. abstract_id: PUBMED:29764413 Evidence for the use of complementary and alternative medicines during fertility treatment: a scoping review. Background: Complementary and alternative medicines (CAM) are sometimes used by individuals who desire to improve the outcomes of their fertility treatment and/or mental health during fertility treatment. However, there is little comprehensive information available that analyzes various CAM methods across treatment outcomes and includes information that is published in languages other than English. Method: This scoping review examines the evidence for 12 different CAM methods used to improve female and male fertility outcomes as well as their association with improving mental health outcomes during fertility treatment. Using predefined key words, online medical databases were searched for articles (n = 270). After exclusion criteria were applied, 148 articles were analyzed in terms of their level of evidence and the potential for methodological and author bias. Results: Surveying the literature on a range of techniques, this scoping review finds a lack of high quality evidence that complementary and alternative medicine (CAM) improves fertility or mental health outcomes for men or women. Acupuncture has the highest level of evidence for its use in improving male and female fertility outcomes although this evidence is inconclusive. Conclusion: Overall, the quality of the evidence across CAM methods was poor not only because of the use of research designs that do not yield conclusive results, but also because results were contradictory. There is a need for more research using strong methods such as randomized controlled trials to determine the effectiveness of CAM in relation to fertility treatment, and to help physicians and patients make evidence-based decisions about CAM use during fertility treatment. abstract_id: PUBMED:34753757 Infertility-related distress and clinical targets for psychotherapy: a qualitative study. Objectives: An estimated 30%-40% of women attending infertility tertiary care facilities experience clinically significant depression and anxiety. However, current psychological interventions for infertility are only modestly effective in this population. In this study, we aimed to identify the specific psychological components of infertility-related distress to assist in the development of a more targeted and effective therapeutic intervention. To our knowledge, this study is the first of its kind to include the views and opinions of mental health professionals who specialise in the field of infertility and the first to explore therapies currently used by mental health professionals. Design: A qualitative approach using semistructured individual interviews and focus group interviews with women who have experience with infertility and also mental health professionals specialising in the field of infertility. Thematic analysis was used to identify patterns and themes emerging from the data. Participants: Twenty-one women (aged 25-41 years) struggling to conceive for ≥12 months and 14 mental health professionals participated in semistructured interviews about the psychological challenges related to infertility. Results: Five themes, each divided into subthemes, emerged from the data and these were developed into a model of infertility-related distress. These five themes are: (1) anxiety, (2) mood disturbance, (3) threat to self-esteem, identity and purpose, (4) deterioration of the couple and (5) weakened support network. In addition, therapeutic techniques used by mental health professionals were identified. Conclusions: The results of this study suggest specific clinical targets that future interventions treating infertility-related distress should address. abstract_id: PUBMED:37653562 Interventions promoting mental health dimensions in infertile women: a systematic review. Background: Scientific developments have brought hope to infertile couples; however, the results are not always favorable. This makes women suffer psychological pressure. Therefore, previous studies have investigated the effectiveness of many psychological interventions but no research identified the most common psychological interventions. In this regard, the present review aimed to investigate different psychological interventions that promote mental health in infertile to identify the most frequent (common) ones. Methods: In the present study, the search was carried out using appropriate keywords Infertility, psychological interventions, mental health, stress, anxiety, depression and women in the Google Scholar، Magiran، SID، Pubmed، Scopus، Science Direct، ProQuest، Web of Science databases and One of the leading websites in health- WHO with Persian and English languages and two operators of "AND" and "OR" between 2000 and 2021. Results: First, 7319 articles were searched, 6948 articles of which were removed due to irrelevant subjects, and 31 articles were removed due to duplication. 340 abstracts were examined and the results of 60 articles were extracted. Two approaches (examining the type and content of intervention) were used to extract findings. The first approach indicated high diversity in psychological interventions, leading to the categorization of the interventions into 4 categories of cognitive behavioral therapy (CBT), mind-body interventions (MBI), stress management skills, and others. CBT and MBI and stress management skills were found as the most frequent promoting interventions for mental health in infertile women. The second approach indicated the differences in protocols (number of sessions and time of interventions). Conclusion: Despite differences in protocol of interventions under study, the results of all articles revealed the effectiveness of interventions in improving infertile women's mental health; therefore, it is suggested to apply the most common psychological interventions based on scientific evidence (CBT, MBI, and stress management skills) along with infertility treatment methods. The results will help the specialists, policy-makers, and planners to select and implement the most appropriate psychological interventions for infertile women. abstract_id: PUBMED:38093258 The social determinants of mental health disorders among women with infertility: a systematic review. Introduction: Infertility is associated with mental health disorders in women, even if a successful pregnancy resolves infertility. However, the link between social determinants of health (SDoH) and mental health in women with infertility is not well understood. We aimed to investigate the determinants thoroughly so that mental health screening and services can be tailored to suit women with infertility who are vulnerable to mental health disorders. Methodology: All observational studies that included women participants of reproductive age with infertility and assessed social determinants associated with mental health disorders were searched using a combination of keywords from MEDLINE, EMBASE, CINAHL, PsycINFO, Scopus, and Web of Science databases and published in English. Two reviewers conducted screening, data extraction, quality assessment and risk of bias. The protocol was registered on PROSPERO (number CRD42022343962). Results: The systematic review included 32 studies out of 3405 screened articles from January 1st 2010 to 16th October 2023. Compared to women without infertility, the prevalence of mental health disorders, including anxiety, depression, psychological distress, and stress, is high among women with infertility, with the severity being influenced by social determinants-those with higher education, employment, higher personal or family income, private health insurance, higher social support, stronger religious beliefs, and higher spiritual well-being reported better mental health outcomes. Conclusion: The study highlights the need for early detection, tailored interventions, and integrated and comprehensive support systems to address the mental health needs of women with infertility and improve their well-being. abstract_id: PUBMED:34053382 A systematic review and meta-analysis of the Endometriosis and Mental-Health Sequelae; The ELEMI Project. Background: It is important to evaluate sequalae for complex chronic health conditions such as endometriosis and mental health disorders. Endometriosis impacts 1 in 10 women. Mental health outcomes can be a primary determinant in many physical health conditions although this is an area not well researched particularly in women's health. This has been problematic for endometriosis patients in particular, who report mental health issues as well as other key comorbidities such as chronic pelvic pain and infertility. This could be partly due to the complexities associated with comprehensively exploring overlaps between physical and mental health disorders in the presence of multiple comorbidities and their potential mechanistic relationship. Methods: In this evidence synthesis, a systematic methodology and mixed-methods approaches were used to synthesize both qualitative and quantitative data to examine the prevalence of the overlapping sequalae between endometriosis and psychiatric symptoms and disorders. As part of this, an evidence synthesis protocol was developed which included a systematic review protocol that was published on PROSPERO (CRD42020181495). The aim was to identify and evaluate mental health reported outcomes and prevalence of symptoms and psychiatric disorders associated with endometriosis. Findings: A total of 34 papers were included in the systematic review and 15 were included in the meta-analysis. Anxiety and depression symptoms were the most commonly reported mental health outcomes while a pooled analysis also revealed high prevalence of chronic pelvic pain and dyspareunia. Interpretation: It is evident that small-scale cross-sectional studies have been conducted in a variety of settings to determine mental health outcomes among endometriosis patients. Further research is required to comprehensively evaluate the mental health sequalae with endometriosis. Answer: Yes, psychotherapy appears to be a reliable alternative to pharmacotherapy to promote the mental health of infertile women. A randomized clinical trial compared the effects of psychological intervention with psychotropic medication on the mental health improvement of depressed infertile women and found that psychotherapy, specifically cognitive behavior therapy (CBT), was superior to or at least as effective as pharmacotherapy in promoting the well-being of depressed infertile women. The study showed that CBT effectively reduced the mean of all four subscales of the General Health Questionnaire (GHQ) and significantly reduced the total GHQ scores, with a successful treatment rate of 79.3% compared to 50% in the fluoxetine (antidepressant) group (PUBMED:18848381). Additionally, another randomized clinical trial investigated whether psychosexual therapy (PST) could be a reliable alternative to bupropion extended-release (BUP ER) to promote sexual function in infertile women. The results indicated that PST not only serves as a reliable alternative to pharmacotherapy but also produces better results in terms of improving sexual function in infertile women (PUBMED:32309766). Furthermore, the literature suggests that mental disorders such as depressive disorder or eating disorders are prospectively associated with infertility, indicating a bidirectional relationship. Pilot studies support the assumption that psychotherapy may constitute an important component of the treatment of infertility (PUBMED:23052895). In summary, psychotherapy is a reliable alternative to pharmacotherapy for improving mental health in infertile women, with studies showing its effectiveness in reducing symptoms of depression and anxiety, as well as improving sexual function and overall well-being.
Instruction: Promoting repeat tobacco dependence treatment: are relapsed smokers interested? Abstracts: abstract_id: PUBMED:16610925 Promoting repeat tobacco dependence treatment: are relapsed smokers interested? Background: Promotion of repeat tobacco dependence treatment among relapsed smokers interested in "recycling" (repeat quit attempt) may be a promising approach to increase quit rates. Objective: To report relapsed smokers' interest in recycling and their treatment preferences. Study Design: Descriptive analysis of a population of relapsed smokers who were randomized to receive a recycling intervention strategy to increase tobacco dependence treatment rates, as part of a randomized controlled trial at 5 Veterans Affairs medical centers. Methods: Individuals prescribed a tobacco dependence medication in 2002 were eligible and were identified from the Department of Veterans Affairs Pharmacy Benefits Management database. Intervention group participants (n = 951) were contacted for a standardized telephone interview approximately 6 months after the prescription fill date to assess smoking status, interest in recycling, and treatment preferences. Bivariate analyses and generalized linear mixed-model regressions were used to describe outcomes. Results: The response rate to the intervention telephone call was 62% (586/951), at which 61% (357/586) of respondents had relapsed. Almost two thirds of relapsed smokers were interested in recycling within 30 days. Of these, 91% wanted behavioral or pharmacologic smoking cessation treatment, and 64% wanted behavioral and pharmacologic treatment. In multivariate analyses, independent predictors of interest in recycling within 30 days included black race, lower smoking level, and greater number of smoking-related medical conditions. Conclusion: Most smokers who attempt to quit but relapse want to quit again right away, and most are interested in receiving behavioral and pharmacologic treatment. abstract_id: PUBMED:36862604 Appeal of Tobacco Quitline Services Among Low-Income Smokers. Introduction: State tobacco quitlines are delivering cessation assistance through an increasingly diverse range of channels. However, offerings vary from state to state, many smokers are unaware of what is available, and it is not yet clear how much demand exists for different types of assistance. In particular, the demand for online and digital cessation interventions among low-income smokers, who bear a disproportionate burden of tobacco-related disease, is not well understood. Methods: We examined interest in using 13 tobacco quitline services in a racially diverse sample of 1,605 low-income smokers in 9 states who had called a 2-1-1 helpline and participated in an ongoing intervention trial from June 2020 through September 2022. We classified services as standard (used by ≥90% of state quitlines [eg, calls from a quit coach, nicotine replacement therapy, printed cessation booklets]) or nonstandard (mobile app, personalized web, personalized text, online chat with quit coach). Results: Interest in nonstandard services was high. Half or more of the sample reported being very or somewhat interested in a mobile app (65%), a personalized web program (59%), or chatting online with quit coaches (49%) to help them quit. In multivariable regression analyses, younger smokers were more interested than older smokers in digital and online cessation services, as were women and smokers with greater nicotine dependence. Conclusion: On average, participants were very interested in at least 3 different cessation services, suggesting that bundled or combination interventions might be designed to appeal to different groups of low-income smokers. Findings provide some initial hints about potential subgroups and the services they might use in a rapidly changing landscape of behavioral interventions for smoking cessation. abstract_id: PUBMED:35546937 Sex-Dependent Alterations of Regional Homogeneity in Cigarette Smokers. Biological sex may play a large role in cigarette use and cessation outcomes and neuroimaging studies have demonstrated that cigarette smoking is associated with sex-related differences in brain structure and function. However, less is known about sex-specific alterations in spontaneous brain activity in cigarette smokers. In this study, we investigated the sex-related effects of cigarette smoking on local spontaneous brain activity using regional homogeneity (ReHo) based on resting-state fMRI. Fifty-six smokers (24 females) and sixty-three (25 females) healthy non-smoking controls were recruited. Whole-brain voxelwise 2-way analysis of covariance of ReHo was performed to detect brain regions with sex-dependent alterations on the spontaneous brain activity. Compared to non-smokers, smokers exhibited significant ReHo differences in several brain regions, including the right medial orbitofrontal cortex extended to the ventral striatum/amygdala/parahippocampus, left precuneus, and bilateral cerebellum crus. Smoking and sex interaction analysis revealed that male smokers showed significantly lower ReHo in the right ventral striatum, left cerebellum crus1, and left fusiform gyrus compared to male non-smokers, whereas there are no significant differences between female smokers and non-smokers. Furthermore, the ReHo within the left cerebellum crus1 was negatively correlated with craving scores in male smokers but not in female smokers. Such sex-dependent differences in spontaneous brain activity lays a foundation for further understanding the neural pathophysiology of sex-specific effects of nicotine addiction and promoting more effective health management of quitting smoking. abstract_id: PUBMED:28910178 Interpersonal and Intrapersonal Differences among Adolescent Nonsmokers, Ex-Smokers, and Smokers. Background: A large body of research has traced tobacco dependence among adolescents to a series of intrapersonal and interpersonal factors. However, there are remaining questions regarding the differences on these factors related to tobacco use. Objectives: We sought to investigate intrapersonal and interpersonal differences among adolescent nonsmokers, ex-smokers, and smokers. Methods: We used data from a 3-year project designed to investigate and address tobacco dependence among 1071 students (Mage = 15.76, SD = 1.52; girls = 51.54%) who were recruited from 11 high schools. Participants, filling out a survey, provided information on tobacco use (nonsmoker, ex-smoker, and smoker), tobacco-related experiences (smoking-related risk perception, parental smoking, number of friends who smoke, resisting peer pressure to smoke), cognitive variables (metacognitive skills), and personality traits (disinhibition and impulsivity). Results: Results from a discriminant function analysis showed that smokers and ex-smokers reported more disinhibition, impulsivity, number of friends who smoke and less self-control under peer pressure to smoke compared to nonsmokers. Ex-smokers reported less metacognitive processes, more smoking-related risk perception and were less likely to have parents who smoke. Conclusions/Importance: Interventions and campaigns aimed to persuade adolescents to stop smoking should work to develop adaptive metacognitive skills and an accurate risk perception of tobacco use. abstract_id: PUBMED:27943592 Smokers and ex-smokers have shared differences in the neural substrates for potential monetary gains and losses. Despite an increased understanding of nicotine addiction, there is a scarcity of research comparing the neural correlates of non-drug reward between smokers and ex-smokers. Long-term changes in reward-related brain functioning for non-drug incentives may elucidate patterns of functioning that potentially contribute to ongoing smoking behaviour in current smokers. Similarly, examining the effects of previous chronic nicotine exposure during a period of extended abstinence may reveal whether there are neural correlates responsible for non-drug reward processing that are different from current smokers. The current study, therefore, sets out to examine the neural correlates of reward and loss anticipation, and their respective outcomes, in smokers, ex-smokers and matched controls using a monetary incentive delay task during functional magnetic resonance imaging. Here, we report that in the absence of any significant behavioural group differences, both smokers and ex-smokers showed a significantly greater activation change in the lateral orbitofrontal/anterior insular cortex compared with smokers when anticipating both potential monetary gains and losses. We further report that ex-smokers showed a significantly greater activation change in the ventral putamen compared with both controls and smokers and in the caudate compared with controls during the anticipation of potential monetary losses only. The results suggest that smoking may sensitize striato-orbitofrontal circuitry subserving motivational processes for loss avoidance and reward gain in nicotine addiction. abstract_id: PUBMED:36751870 Physical and mental health status of former smokers and non-smokers patients with bipolar disorder. Objectives: Up to 70% individuals with bipolar disorder (BD) are lifetime tobacco smokers, a major modifiable risk factor for morbidity. However, quitting smoking is rarely proposed to individuals with BD, mainly because of fear of unfavorable metabolic or psychiatric changes. Evaluating the physical and mental impact of tobacco cessation is primordial. The aim of this study was to characterize the psychiatric and nonpsychiatric correlates of tobacco smoking status (never- vs. current vs. former smokers) in individuals with BD. Methods: 3860 individuals with ascertained BD recruited in the network of Fondamental expert centers for BD between 2009 and 2020 were categorized into current, former, and never tobacco smokers. We compared the sociodemographic and clinical characteristics assessed by standard instruments (e.g., BD type, current symptoms load, and non-psychiatric morbidity-including anthropometric and biological data) of the three groups using multinomial regression logistic models. Corrections for multiple testing were applied. Results: Current smokers had higher depression, anxiety, and impulsivity levels than former and never-smokers, and also higher risk of comorbid substance use disorders with a gradient from never to former to current smokers-suggesting shared liability. Current smokers were at higher risk to have a metabolic syndrome than never-smokers, although this was only evidenced in cases, who were not using antipsychotics. Conclusions: Tobacco smoking was associated with high morbidity level. Strikingly, as in the general population, quitting smoking seemed associated with their return to the never-smokers' levels. Our findings strongly highlight the need to spread strategies to treat tobacco addiction in the BD population. abstract_id: PUBMED:36563479 Anxiety sensitivity and reasons for smoking among Black smokers. Black smokers demonstrate higher nicotine dependence and experience higher rates of smoking-related diseases and mortality relative to European American/White smokers. A potential factor relevant to race-specific smoking health disparities may be smoking motives (i.e., motivational basis of smoking). Yet, little research has been conducted to understand psychological factors that may be associated with specific smoking motives among Black smokers. To address this gap in the literature, the current study explored the association between anxiety sensitivity and smoking motives within a subset sample of Black smokers who were interested in participating in a smoking cessation trial (N = 105; 70.5 % male; Mage = 44.8 years, SD = 11.6 years). Hierarchical regression analyses indicated anxiety sensitivity was associated with smoking motives related to habit (β = 0.39, p <.001), negative affect reduction (β = 0.32, p <.001), stimulation (β = 0.31, p <.001), and sensorimotor manipulation (β = 0.26, p =.008). Limited support was found for the effect of motives on past quit attempt engagement. These results may broaden understanding of the psychological mechanisms related to smoking motivation among Black smokers and may inform future intervention efforts to reduce smoking motivation among this health disparities population. abstract_id: PUBMED:30428933 The effects of the European e-cigarette health warnings and comparative health messages on non-smokers' and smokers' risk perceptions and behavioural intentions. Background: Article 20 of the EU Tobacco Products Directive [TPD] stipulates that e-cigarette packets and refill products must carry a nicotine addiction health warning. Although previous studies conducted in North America have found that perceived harm, addictiveness and intention to use declined following exposure to e-cigarette health warnings, possible effects of the TPD health warnings on smokers and non-smokers has not been studied. This study will investigate the effects of the EU TPD e-cigarette health warnings and a comparative harm message (COMP; developed specifically for this study) on smokers' and non-smokers' perceptions of harm, addictiveness and social acceptability of e-cigarettes. Additionally, the potential effects of the TPD warnings and the COMP on smokers' intentions to purchase and use e-cigarettes will be explored. Methods/design: A sample of 2400 UK residents will be recruited in this experimental, randomised design, with Smoking status (Smoker vs. Non-smoker), TPD presence (TPD1 vs. TPD2 vs. No-TPD) and COMP presence (Presence vs. Absence) as between subjects independent variables, and Time (pre-post exposure of images) as a within subjects factor. Dependent variables comprise self-reported perceived harm, addictiveness, social acceptability, e-cigarettes' effectiveness, intentions to purchase and use e-cigarettes. Cigarette dependence, previous e-cigarette exposure, and baseline intentions to quit will be measured as covariates. Discussion: Health warnings, such as those implemented by the TPD, may help to prevent non-smokers from e-cigarettes use, but it is possible that they may inadvertently deter smokers from initiating use and substituting their tobacco smoking for e-cigarettes use if their content is deemed too negative. It is hoped that this study will help identify the most effective message or combination of messages that encourage use among smokers without promoting use among non-smokers. Trial Registration: ISRCTN registry ISRCTN76967031 ; date of registration: 23/10/18. abstract_id: PUBMED:34423746 Decisional Balance of Smoking and Planning to Quit Smoking: A Moderated Mediation Model among Chinese Male Smokers. Decisional balance in quitting smoking involves positive and negative attitudes toward smoking. This study explored the relationship between the decisional balance of smoking and planning to quit smoking, and examined the mediating role of willingness to quit smoking and the moderating role of social support. A questionnaire survey was conducted among 326 daily male smokers from China in 2016, including the decisional balance of smoking, willingness to quit smoking, social support, planning to quit, and nicotine dependence. The results showed: (1) willingness to quit smoking mediated the relationship between the decisional balance of smoking and planning to quit smoking; (2) the relationship between the decisional balance of smoking and planning to quit smoking was moderated by emotional support rather than instrumental support. Smokers with a higher level of emotional support for quitting smoking were more likely to progress to planning to quit smoking, while this relationship was not found among smokers with a lower level of emotional support. These findings suggested that smokers' willingness to quit smoking would be key to promoting their planning to quit. Meanwhile, the dynamic process of planning to quit smoking was moderated by emotional support from others. abstract_id: PUBMED:28339571 Depression Among Non-Daily Smokers Compared to Daily Smokers and Never-Smokers in the United States: An Emerging Problem. Introduction: Depression is strongly associated with daily smoking. Yet, little is known about the association between depression and non-daily smoking. The aim of this study was to investigate the prevalence of past-year depression and changes in past-year depression over time among non-daily smokers, compared to daily smokers and never-smokers, overall and stratified by age, gender, income, nicotine dependence, and cigarettes per day. Methods: Data were drawn from the National Household Survey on Drug Use (NSDUH), an annual cross-sectional study of persons aged 12 and over (total study population N = 496 805). The prevalence of past-year depression was examined annually among non-daily smokers, daily smokers, and never-smokers from 2005 to 2013 using linear trend analyses. Results: Past-year depression was common among 10.10% of non-daily smokers, common among 10.78% of daily smokers, and 5.51% of never-smokers in 2013. The prevalence of depression increased from 2005 to 2013 among non-daily smokers (9.06% vs. 10.10%; p = .034) while there was no significant change in depression over time among daily smokers. Increases in depression among non-daily smokers occurred for both men and women and appear most pronounced youth, those smoking fewer cigarettes, and those without nicotine dependence. Conclusions: The prevalence of depression among non-daily smokers was equivalent to daily smokers and nearly twice that among nonsmokers. Depression appears to be increasing over time in non-daily smokers especially among youth, those who smoke less, and those without nicotine dependence. More work on the mental health of non-daily smokers is needed as this is an increasing and understudied group. Implications: This is the first study to investigate changes in the prevalence of depression among non-daily smokers compared to daily smokers and never-smokers over the past decade in a nationally representative sample of the United States. The results suggest an increase in depression among non-daily smokers over time that did not similarly occur for daily smokers. Further, there were several subgroups of non-daily smokers among whom depression has increased more rapidly. This study suggests the need for more information about the relationship between depression and non-daily smoking including the impact of depression on quit attempts and outcomes. Answer: Yes, relapsed smokers are interested in promoting repeat tobacco dependence treatment. A study that assessed the interest of relapsed smokers in "recycling" (making another quit attempt) found that almost two-thirds of relapsed smokers were interested in recycling within 30 days. Of these individuals, 91% wanted behavioral or pharmacologic smoking cessation treatment, and 64% wanted both behavioral and pharmacologic treatment. Independent predictors of interest in recycling within 30 days included black race, lower smoking level, and a greater number of smoking-related medical conditions (PUBMED:16610925). This suggests that most smokers who attempt to quit but relapse are willing to try quitting again soon after, and the majority are interested in receiving comprehensive treatment to support their efforts.
Instruction: Are patients more satisfied when they receive a prescription? Abstracts: abstract_id: PUBMED:34931429 Prescription drug survey of elderly patients with degenerative musculoskeletal disorders. Aims: Elderly patients with musculoskeletal disorders are generally expected to receive many prescription drugs for non-musculoskeletal comorbidities and for alleviating chronic musculoskeletal pains. The aims of this study were to review the use of prescription drugs in elderly patients with elective surgeries for musculoskeletal disorders and to identify the factors associated with polypharmacy in elderly patients with musculoskeletal disorders. Methods: We retrospectively collected the clinical data of patients aged ≥65 years who underwent knee arthroplasty, total hip arthroplasty, or spinal surgery for lumbar or cervical degenerative disorders at our institution. The following data were evaluated: age, body mass index, sex, surgical site, prescription drugs used, American Society of Anesthesiologists physical status grade, and medical history, including hypertension, hyperlipidemia, diabetes, stroke, malignancy, and smoking. Polypharmacy was defined as the use of six or more drugs. Results: In the present study, 767 consecutive patients were evaluated retrospectively. The prevalence of polypharmacy was >50% in the elderly patients with musculoskeletal disorders. The mean numbers of total drugs and pain relief medications were significantly higher in the lumbar surgery group than in the other surgery groups. Multivariable analysis revealed that the factors associated with polypharmacy were lumbar surgery, hypertension, hyperlipidemia, diabetes, and malignancy. Conclusions: This is the first study to cross-sectionally review the drugs prescribed to patients with degenerative musculoskeletal disorders. It clearly identified the factors associated with polypharmacy in elderly patients with degenerative musculoskeletal disorders. Particular attention should be paid to polypharmacy in elderly patients with lumbar degenerative disorders. Geriatr Gerontol Int 2022; 22: 121-126. abstract_id: PUBMED:27559190 Utilization of Pharmaceutical Patient and Prescription Assistance Programs via a Pharmacy Department Patient Assistance Program for Indigent Cancer Patients. Background: With the advances in cancer treatments, mortality rates in the United States have been consistently falling but they are accompanied by substantial increases in the cost of cancer care. Patient and prescription assistance programs (PPAPs) are offered by pharmaceutical manufacturers to provide free medications to medically indigent patients. To assist the Cancer Care Center (CCC) at Nassau University Medical Center (NUMC) with drug costs for chemotherapies, the pharmacy department uses a patient assistance program (PAP) to obtain medications from the drug companies at no cost. Purpose: This study evaluates the impact of the PAP at a public hospital from which indigent cancer patients obtain assistance for chemotherapy. Methods: We followed all patients requiring assistance with chemotherapy who enrolled in the PAP from January 1, 2011 through December 31, 2012. Medications included both oral and parenteral chemotherapy drugs and antiemetics used in the outpatient clinic setting. Results: The program served 347 patients in 2011 and 579 patients in 2012. The total number of visits in the clinic over 24 months was 9,405. The total cost savings of the medications was $1,066,000 in 2011 and $1,715,538 in 2012. Conclusions: A pharmacy-based PAP to procure free medications from PPAPs for cancer patients has helped to defray the expense of providing care at NUMC, increased patients' compliance with chemo protocols, and allowed many patients to receive the treatment they otherwise would not be able to afford. The combination of PPAPs and PAP provides a safety net to ensure that indigent cancer patients receive needed prescription medications in the outpatient clinic setting. abstract_id: PUBMED:9323777 Are patients more satisfied when they receive a prescription? The effect of patient expectations in general practice. Objective: To test the hypotheses that patient expectations are a driving force in drug prescribing and that fulfilment of expectations is followed by higher satisfaction. Design: Pre- and post-consultation survey of patients; parallel doctor survey (matched pairs). Setting: Primary health care in Göttingen, a town of about 130,000 inhabitants in Germany. Subjects: Ten general practitioners and 185 randomly addressed patients. Main Outcome Measures: Patient expectations with respect to the result of the consultation; doctor's perception of patient expectations; agreement between patient and doctor; patient satisfaction. Results: Nearly half of the patients (86/185) expected a drug prescription from their doctor; 68% (125/185) received a prescription. The doctors recognized the expectation of a prescription in only 40.7% of the patients. A high percentage (82.6%) of patients expecting a drug were issued a prescription. Nearly all the patients (45/48) who expected a drug according to their doctor's judgement left the surgery with a prescription, and 58.4% of the remaining patients were prescribed a drug. There was no difference in satisfaction scores between patients whose expectations were or were not fulfilled. Conclusion: These results are in some contrast to the main hypotheses. As fulfilment of expectations was not associated with higher satisfaction, physicians need not necessarily worry that patients will change their doctor if he or she refuses a pharmacologically dubious prescription. abstract_id: PUBMED:30713999 Baseline mental status predicts happy patients after operative or non-operative treatment of adult spinal deformity. Background: The study is a retrospective review of a multi-institutional database, aiming to determine predictors of non-depressed, satisfied adult spinal deformity (ASD) patients with good self-image at 2-year follow-up (2Y). ASD significantly impacts a patients' psychological status. Following treatment, little is known about predictors of satisfied patients with high self-image and mental status. Methods: Inclusion: primary ASD pts >18 y/o with complete 2Y follow-up. Non-depressed [Short Form 36-mental component score (SF36-MCS) >42], satisfied patients (SRS22-satisfaction >3) with good self-image (SR22-self-image >3) at 2Y were isolated (happy). Happy and control patients were propensity-matched by baseline and 2Y leg pain, Charlson, frailty, and radiographic measures for the operative (OP) and non-operative cohorts (NOP). Health related quality of life (HRQL), surgical and radiographic metrics were compared. Regression models identified predictors of happy patients. Thresholds were calculated using area under the curve (AUC) and 95%CI. Results: Of 480 patients, 94 OP (happy: 47 vs. control: 47) and 92 NOP (46 each) reached inclusion. At baseline, groups had similar age, gender, Oswestry disability index (ODI) (OP: 39.13 vs. 37.49, NOP: 17.70 vs. 19.74) and SF36-physical component score (PCS) (OP: 33.51 vs. 35.04, NOP: 47.93 vs. 44.72). Despite similar (P>0.05) surgeries, length of stay (LOS), and radiographic outcomes between OP happy and control groups, happy had less peri-operative complications (31.9% vs. 57.4%, P=0.13), better 2Y ODI (17.77 vs. 29.98), SRS22 component, total, and SF36 scores (P<0.05). NOP happy patients also exhibited better 2Y ODI (13.24 vs. 22.09), SRS22 component, total, and SF36 scores (P<0.05). Baseline SRS-mental (OR: 2.199, AUC: 0.617, cutoff: 2.5) and ODI improvement (OR: 1.055, AUC: 0.717, cutoff: >12) predicted happy OP patients, while baseline SRS-self-image (OR: 5.195, AUC: 0.740, cutoff: 3.5) and ODI improvement (OR: 1.087, AUC: 0.683, cutoff: >9) predicted happy NOP patients. Conclusions: Baseline mental-status, self-image and ODI improvement significantly impact long-term happiness in ASD patients. Despite equivalent management and alignment outcomes, operative and non-operative happy patients had better 2Y disability scores. Management strategies aimed at improving baseline mental-status, perception-of-deformity, and maximizing ODI may optimize treatment outcomes. abstract_id: PUBMED:31534817 Exercise as a prescription for patients with various diseases. A growing understanding of the benefits of exercise over the past few decades has prompted researchers to take an interest in the possibilities of exercise therapy. Because each sport has its own set of characteristics and physiological complications that tend to occur during exercise training, the effects and underlying mechanisms of exercise remain unclear. Thus, the first step in probing the effects of exercise on different diseases is the selection of an optimal exercise protocol. This review summarizes the latest exercise prescription treatments for 26 different diseases: musculoskeletal system diseases (low back pain, tendon injury, osteoporosis, osteoarthritis, and hip fracture), metabolic system diseases (obesity, type 2 diabetes, type 1 diabetes, and nonalcoholic fatty liver disease), cardio-cerebral vascular system diseases (coronary artery disease, stroke, and chronic heart failure), nervous system diseases (Parkinson's disease, Huntington's disease, Alzheimer's disease, depression, and anxiety disorders), respiratory system diseases (chronic obstructive pulmonary disease, interstitial lung disease, and after lung transplantation), urinary system diseases (chronic kidney disease and after kidney transplantation), and cancers (breast cancer, colon cancer, prostate cancer, and lung cancer). Each exercise prescription is displayed in a corresponding table. The recommended type, intensity, and frequency of exercise prescriptions are summarized, and the effects of exercise therapy on the prevention and rehabilitation of different diseases are discussed. abstract_id: PUBMED:35479843 Comparing trained student peers versus paid actors as standardized patients for simulated patient prescription counseling. Background: Simulation can be a useful tool for teaching and assessing clinical skills, but can also be costly and faculty-time intensive. It is defined as a technique to create an activity to portray a real experience for purpose of practicing or evaluating. Simulations can use standardized patients (SPs), which can be paid actors (PASPs), staff and faculty, manikins, volunteers, or students from higher level cohorts, also known as advanced class standardized patients (ACSPs). Objective: The objective of this study was to conduct a multifaceted analysis comparing ACSPs and PASPs, based on student performance in the assessment, student preference of SP type, and SP performance as an actor. Methods: ACSPs and PASPs were used in a summative prescription counseling role play. For the evaluation, students counseled a SP about a new prescription medication and answered questions about taking an over-the-counter product with the new medication. The interaction was recorded and evaluated by faculty using a previously developed rubric. SP performance was evaluated by faculty using a separate rubric to determine how well the patient role was performed. A pre- and post-evaluation survey was completed by student pharmacists to gather student preferences about SPs and confidence in their counseling skills. Data were evaluated using a paired t-test. Results: One hundred sixty-seven student pharmacists completed the summative prescription counseling evaluation. Student pharmacists performed well overall with minimal differences between SP types. Students preferred PASPs to role play the patient but felt that the actor type did not affect their performance. Conclusions: ACSPs performed the role of the SP well for a summative prescription counseling session without impacting student performance compared to PASPs and with reduced cost. However, students preferred PASPs, and PASPs were better at role playing the patient. abstract_id: PUBMED:27426210 Prevalence and Detection of Prescription Opioid Misuse and Prescription Opioid Use Disorder Among Emergency Department Patients 50 Years of Age and Older: Performance of the Prescription Drug Use Questionnaire, Patient Version. Background: Despite increased concern about prescription opioid misuse among older adults, there is limited work examining the best means to identify misuse by older adults. The goal of this investigation was to examine the performance the Prescription Drug Use Questionnaire Patient Version (PDUQp), in detecting prescription opioid misuse and prescription opioid use disorders among adult emergency department (ED) patients aged 50 years and older. Methods: This was a cross-sectional study of a random sample of adult ED patients, aged 50 years and older. All participants were without cognitive impairment and reported prescription opioid use within the past 30 days. We evaluated the sensitivity, specificity, predictive values, and receiver operating characteristics of the PDUQp against a standard definition of opioid misuse and DSM-5 criteria for prescription opioid use disorder. Results: Overall, 112 participants completed the study; 74 were aged 50-64 years and 38 were aged 65 years and older. Over half of the participants satisfied DSM-5 criteria for prescription opioid use disorder, with slightly less participants reporting misuse. Overall, the respective sensitivity and specificity of the PDUQp was: 44% and 79% (Receiver Operating Characteristic [ROC] area under the curve: 0.61) for prescription opioid misuse, 38% and 81% (ROC area under the curve: 0.64) for the presence of any prescription opioid use disorder, and 56% and 75% (ROC area under the curve: 0.71) for moderate to severe prescription opioid use disorder. Conclusions: Based on this preliminary work, the PDUQp may be a viable instrument to screen for prescription opioid misuse and prescription opioid use disorder, but it likely requires modifications to optimize its predictive ability in adults over age 50 years. abstract_id: PUBMED:37118400 Measuring the crop water demand and satisfied degree using remote sensing data and machine learning method in monsoon climatic region, India. Supply of water is one of the most significant determinants of regional crop production and human food security. To promote sustainable management of agricultural water, the crop water requirement assessment (CropWRA) model was introduced as a tool for the assessment of satisfied degree of crop water requirements (CWR). Crop combination, water availability for agricultural production, water accessibility, and other indices were calculated considering the DEM, hydrological and climatic data, and crop properties for measuring the agricultural water requirement and satisfied degree in Bansloi River basin using the CropWRA model. Advanced machine learning model random forest was used to calculate the soil moisture considering the atmospheric variable, Landsat indices, and energy balance components for calculating the crop water satisfied degree and water requirement. The average crop water demand is 1.92 m, and it ranges from 1.58 to 2.26 m. The demand of crop water is more in the western part of the basin than the eastern part. The CropWSD (crop water satisfied degree) ranges from 17 to 116% due to variation in topography, river system, crop combination, land use, water uses, etc. The average crop water satisfied degree is 59%. About 71% of the total area is under 40% to 60% CropWSD level. CropWRA model can be applied for the sustainable water resource management, irrigation infrastructure development, and use of other modern technologies. abstract_id: PUBMED:19570631 Factors influencing the prescription or non-prescription of antibiotics by general practitioners Aim: The authors wanted to identify the factors associated with prescription or non-prescription of antibiotics by general practitioners (GPs), for three frequent infectious diseases: tonsillitis, acute bronchitis (AB) and exacerbations of chronic bronchitis (ECB). Patients And Methods: A cross-sectional study included 574 patients followed by 58 randomly selected GPs in Eastern France. Data on physicians, patients (clinical, social and environmental) and antibiotic treatment (type, motivations) was collected during the consultation via a standardized questionnaire. Results: Official recommendations for tonsillitis, AB and ECB were known by GPs in 100, 96 and 91% of the cases, respectively. An antibiotic was administered in 61% of tonsillitis cases, 64% of AB and 84% of ECB. The correlation between prescribing or not antibiotics and official recommendations was variable (tonsillitis: 81.3%, ECB 89.6%, AB: 36.5%). Clinical criteria were the main factors influencing antibiotic prescription. A rapid antigen detection test for streptococcus A was performed in 65% of tonsillitis cases (positive in 55%) increased conformity with recommendations to 93.7%. A second consultation, poor tolerance and patient demand were also significantly associated with a higher rate of antibiotic prescription. Conclusions: Recommendations were known, but semiologic definitions and nosologic limits varied from one physician to another. Recommendations must be based on routine medical practice surveys to include factors influencing prescription, even if non significant. Paraclinical tests help in therapeutic decision making and in the non-prescription of antibiotics but such tests need to be more commonly used. abstract_id: PUBMED:30466808 How many patients enter endometrial cancer surgery with psychotropic medication prescriptions, and how many receive a new prescription perioperatively? Objective: Psychotropic medications including antidepressants and anxiolytics are used to treat anxiety and depression in cancer patients; however, little is known about the prescription practices in endometrial cancer. This study aimed to determine the prevalence, type, dose, frequency and timing of psychotropic medications prescribed to endometrial cancer patients. A secondary aim was to study sociodemographic and clinical characteristics associated with receiving a psychotropic medication prescription. Methods: Secondary data analysis of an international, multicentre, prospective randomised controlled trial was conducted. Patients aged >18 years diagnosed with Stage I endometrial cancer were included. Logistic regression models were fitted to estimate the association of receiving psychotropic medications with patient's socio-demographic and clinical characteristics. Results: The overall prevalence of patients prescribed one or more psychotropic medications was 16.8% (n = 121/719) comprising antidepressants (12.6%, n = 91/719) and anxiolytics (5.8%, n = 42/719). The majority of patients (78.1%, n = 71/91) were already receiving antidepressants before cancer diagnosis, the remaining medications were newly prescribed perioperatively (21.9%, n = 20/91). Patients of younger age (18-50 years, OR (Odds Ratio): 2.61), who had hypertension (OR: 0.61), history of a previous cancer (OR: 1.96), and ≥2 comorbidities (2-3, OR: 2.97; 4-5, OR: 7.85; ≥6, OR: 9.13) were significantly (p < 0.05) more likely to receive a prescription of psychotropic medications. Conclusions: While one in eight patients already had psychotropic medications prescribed before surgery for early stage endometrial cancer, only few women received a new prescription after surgery. The overall prescription rates were similar to other patients with cancer, but higher than those observed in the general population, likely reflecting the comorbidity burden of patients who develop endometrial cancer. Qualitative data could be used in future research to explore the psychological and quality of life impacts of endometrial cancer. Answer: The satisfaction of patients when receiving a prescription is not straightforward and can depend on various factors, including patient expectations and the context of the medical condition being treated. According to a study that surveyed patients and doctors in primary health care, nearly half of the patients expected a drug prescription from their doctor, and a high percentage of those expecting a drug were issued a prescription. However, the study found that there was no difference in satisfaction scores between patients whose expectations were or were not fulfilled. This suggests that fulfillment of expectations for a prescription was not associated with higher satisfaction, indicating that physicians need not necessarily worry that patients will change their doctor if they refuse a pharmacologically dubious prescription (PUBMED:9323777). Therefore, based on the available abstract, it appears that patient satisfaction is not solely dependent on whether they receive a prescription, but rather on a complex interplay of expectations, the perceived necessity of the prescription, and possibly other factors related to the patient-doctor interaction and the patient's health condition.
Instruction: Is splenectomy more dangerous for massive spleens? Abstracts: abstract_id: PUBMED:9515524 Is splenectomy more dangerous for massive spleens? Background: Reports vary about whether risks are greater for removal of massive (> or = 1500 g) spleens than for smaller (< 1500 g) spleens. We sought to determine the hazards of splenectomy. Methods: We reviewed 223 consecutive adults with elective splenectomies for hematologic diseases. Morbidity and mortality rates were combined with published data to create a meta-analysis. Results: Patients with massive spleens are more likely to have postoperative complications (relative risk [RR] 2.1, 95% confidence interval [CI] 1.3 to 3.4; P = 0.003) and death (RR 4.7, 95% CI, 1.5 to 15.1; P = 0.01). However, when the investigation is restricted to comparable diagnoses, patients with massive spleens do not differ from those with smaller spleens regarding complications (RR 1.4, 95% CI, 0.8 to 2.7; P = 0.3) or mortality (RR 2.1, 95% CI, 0.5 to 9.7; P = 0.4). These observations are confirmed by metaanalysis. Furthermore, multivariate analysis indicts age as a critical risk of complications and death. Conclusions: Increased age and underlying illness are the predominant factors associated with morbidity and mortality following splenectomy for hematologic disease. Adjusting for age and diagnosis, spleen size is not a hazard. abstract_id: PUBMED:30720696 Laparoscopic Splenectomy Versus Open Splenectomy In Massive and Giant Spleens: Should we Update the 2008 EAES Guidelines? The objective of this study was to derive some useful parameters to define the feasibility of laparoscopic splenectomy (LS) in massive [spleen longitudinal diameter (SLD)>20 cm] and giant spleens (SLD>25 cm). Between December 1996 and May 2017, 175 patients underwent an elective splenectomy. A laparoscopic approach was used in 133 (76%) patients. Massive spleens were treated in 65 (37.1%) patients, of which 24 were treated laparoscopically. In this subset of massive spleens, the results of laparoscopic splenectomy in massive spleens (LSM) and open splenectomy in massive spleens (OSM) were compared. The clinical outcome of a subgroup of patients with giant spleens was also analyzed. The LSM group resulted in significant longer operative times (143±31 vs. 112±40 min; P=0.001), less blood loss (278±302 vs. 575±583 mL; P=0.007), and shorter hospital stay (6±3 vs. 9±4 d; P=0.004). No conversions were experienced in the LSM group, and the morbidity rate was similar in both the LSM and OSM groups (16.6% vs. 20%; P=0.75). When considering the subset of 9 LSM patients and 26 OSM patients with giant spleens, the same favorable tendency of the laparoscopic group as regards surgical conversion, blood loss, and hospital stay was maintained. The laparoscopic approach can be successfully proposed in the presence of massive splenomegaly also after a careful preoperative evaluation of the expected abdominal "working space." In experienced hands, LS is safe, feasible, and associated with better outcomes than open splenectomy for the treatment of massive and giant spleen, with a maximum SLD limit of 31 cm. abstract_id: PUBMED:21924403 Laparoscopic splenectomy: outcome and efficacy for massive and supramassive spleens. Background: Massive and supramassive splenomegaly are relative contraindications to pure laparoscopic splenectomy (LS). Methods: A retrospective review of adult patients was conducted for splenectomy occurring from 1999 to 2009. Massive and supramassive spleens were defined as craniocaudad length ≥ 17 cm or weight ≥ 600 g and craniocaudad length ≥ 22 cm or weight ≥ 1,600 g, respectively. Results: LS was done for 22 and open splenectomy for 21 patients, of which 12 and 14 were supramassive. Spleen weight and craniocaudad length were comparable. LS was associated with lower blood loss (308 vs 400 mL, P = .24), shorter length of stay (3 vs 4.5 days, P = .054), and similar morbidity (17% vs 14%). Two reoperations and 1 death occurred with open splenectomy. Operative times were longer for LS (195 vs 105 min, P = .008), while the conversion rate was 25%. Conclusions: In cases of massive and supramassive splenomegaly, better outcomes are accomplished with LS than open splenectomy, and are comparable to hand-assisted LS. abstract_id: PUBMED:26264084 Laparoscopic splenectomy is emerging 'gold standard' treatment even for massive spleens. Introduction: Since its first description by Delaitre and Maignien in 1991, laparoscopic splenectomy (LS) has evolved as treatment of choice for mild-to-moderately-enlarged spleens and for benign haematological disorders. LS is a challenge if massive spleens or malignant conditions necessitate treatment, but we report our method and its feasibility in this study. Methods: We undertook a retrospective study of prospectively collected data of all elective splenectomies carried out in our firm of upper gastrointestinal surgeons from June 2003 to June 2012. Only patients opting for elective LS were included in this study. Results: From June 2003 to June 2012, elective splenectomy was carried out in 80 patients. Sixty-seven patients underwent LS and 13 underwent open splenectomy (OS). In the LS group, there were 38 males and 29 females. Age ranged from 6 years to 82 years. Spleen size in the LS group ranged from ≤11 cm to 27.6 cm. Twelve patients had a spleen size of >20 cm. Weight ranged from 35 g to 2,400 g. Eighteen patients had a spleen weight of 600-1,600 g and eight had a spleen weight >1,600 g. Operating times were available for 56 patients. Mean operating time for massive spleens was 129.73 min. There was no conversion to OS. There were no major complications. Conclusions: With improved laparoscopic expertise and advancing technology, LS is safe and feasible even for massive spleens and splenic malignancies. It is the emerging 'gold standard' for all elective splenectomies and has very few contraindications. abstract_id: PUBMED:27695216 The role of laparoscopy in the identification and management of missing accessory spleens after primary splenectomy: A case report and literature review. We present a 7-year-old boy with recurrent thrombocytopenia after primary laparoscopic splenectomy for immune thrombocytopenia (ITP). Imaging modalities (ultrasound, computed tomography scan, and scintigraphy) revealed two accessory spleens while the subsequent second laparoscopy revealed 11, which were successfully removed. The relevant medical literature is reviewed, and the value of laparoscopy for chronic ITP is highlighted. abstract_id: PUBMED:16924082 Trends in laparoscopic splenectomy for massive splenomegaly. Hypothesis: During the past 10 years, expertise with minimally invasive techniques has grown, leading to an increase in successful laparoscopic splenectomy (LS) even in the setting of massive and supramassive spleens. Design: Retrospective series of patients who underwent splenectomy from November 1, 1995, to August 31, 2005. Setting: Academic tertiary care center. Patients: Adult patients who underwent elective splenectomy as their primary procedure (n = 111). Main Outcome Measures: Demographics, spleen size and weight, conversion from LS to open splenectomy, postoperative length of stay, and perioperative complications and mortality. Massive splenomegaly was defined as the spleen having a craniocaudal length greater than 17 cm or weight more than 600 g, and supramassive splenomegaly was defined as the spleen having a craniocaudal length greater than 22 cm or weight more than 1600 g. Results: Eighty-five (77%) of the 111 patients underwent LS. Of these 85 patients, 25 (29%) had massive or supramassive spleens. These accounted for 40% of LSs performed in 2004 and 50% in 2005. Despite this increase in giant spleens, the conversion rate for massive or supramassive spleens has declined from 33% prior to 1999 to 0% in 2004 and 2005. Since January 2004 at our institution, all of the massive or supramassive spleens have been removed with a laparoscopic approach. Patients with massive or supramassive spleens who underwent LS had no reoperations for bleeding or deaths and had a significantly shorter postoperative length of stay (mean postoperative length of stay, 3.8 days for patients who underwent LS vs 9.0 days for patients who underwent open splenectomy; P<.001). Conclusions: Despite conflicting reports regarding the safety of LS for massive splenomegaly, our data indicate that with increasing institutional experience, the laparoscopic approach is safe, shortens the length of stay, and improves mortality. abstract_id: PUBMED:25392624 Massive splenomegaly in children: laparoscopic versus open splenectomy. Background And Objectives: Laparoscopic splenectomy for massive splenomegaly is still a controversial procedure as compared with open splenectomy. We aimed to compare the feasibility of laparoscopic splenectomy versus open splenectomy for massive splenomegaly from different surgical aspects in children. Methods: The data of children aged <12 years with massive splenomegaly who underwent splenectomy for hematologic disorders were retrospectively reviewed in 2 pediatric surgery centers from June 2004 until July 2012. Results: The study included 32 patients, 12 who underwent laparoscopic splenectomy versus 20 who underwent open splenectomy. The mean ages were 8.5 years and 8 years in the laparoscopic splenectomy group and open splenectomy group, respectively. The mean operative time was 180 minutes for laparoscopic splenectomy and 120 minutes for open splenectomy. The conversion rate was 8%. The mean amount of intraoperative blood loss was 60 mL in the laparoscopic splenectomy group versus 110 mL in the open splenectomy group. Postoperative atelectasis developed in 2 cases in the open splenectomy group (10%) and 1 case in the laparoscopic splenectomy group (8%). Oral feeding postoperatively resumed at a mean of 7.5 hours in the laparoscopic splenectomy group versus 30 hours in the open splenectomy group. The mean hospital stay was 36 hours in the laparoscopic splenectomy group versus 96 hours in the open splenectomy group. Postoperative pain was less in the laparoscopic splenectomy group. Conclusion: Laparoscopic splenectomy for massive splenomegaly in children is safe and feasible. Although the operative time was significantly greater in the laparoscopic splenectomy group, laparoscopic splenectomy was associated with statistically significantly less pain, less blood loss, better recovery, and shorter hospital stay. Laparoscopic splenectomy for pediatric hematologic disorders should be the gold-standard approach regardless of the size of the spleen. abstract_id: PUBMED:11513782 Laparoscopic splenectomy for massive splenomegaly using a Lahey bag. Background: Although the recent development of hand-assisted laparoscopic surgery (HALS) has made the laparoscopic retraction of large spleens feasible, the laparoscopic removal of massively enlarged spleens (>1,000 g) remains a significant problem because these spleens do not fit into endoscopic bags. Consequently, in order to remove massive spleens either a large abdominal incision or morcellation of the spleen outside of an endoscopy bag is required. Methods: Two patients, with spleens weighing 2,510 g and 1,720 g, underwent laparoscopic splenectomy using a hand port to ensure safe retraction. The massive spleen was placed into a Lahey bag that was inserted into the abdomen through the hand port site. While in the Lahey bag, the spleen was removed piecemeal through the hand port site. Results: Both operations were completed laparoscopically without complications. The patients were discharged on postoperative day 2 and experienced minimal morbidity. Conclusions: The Lahey bag facilitates laparoscopic splenectomy for massive splenomegaly as even the most massive spleens will fit into a Lahey bag. A massive spleen may be removed piecemeal from the Lahey bag through the small hand port incision without risking a large abdominal incision, splenosis, or the insertion of a morcellator. abstract_id: PUBMED:11918887 Laparoscopic splenectomy for massive splenomegaly. Background: Laparoscopic splenectomy (LS) is the preferred operative approach for diseases involving normal-sized spleens. Our experience with laparoscopic splenectomy in the setting of massive splenomegaly is presented. Methods: A prospective review of patients undergoing LS for massive splenomegaly was conducted. Massive splenomegaly (MS) in adults was defined as a craniocaudal length >or=17 cm or a morcellated weight >or=600 g. In children, spleens measuring fourfold larger than normal for age were considered massive. Results: Forty-nine patients with MS were treated with LS. The most common primary diagnoses were lymphoma and leukemia. Mean splenic length was 20 cm (15 to 27 cm), with weights ranging from 600 to 4,750 g. Twelve patients with supermassive splenomegaly (length >22 cm) required a hand-assisted laparoscopic approach. There were no conversions to open surgery. Mean operating time was 171 minutes (90 to 369). Mean blood loss was 114 cc (<30 to 600 cc). Average length of stay was 2.3 days (1 to 16). Minor postoperative complications occurred in 3 patients. Conclusions: Laparoscopic splenectomy in the setting of splenomegaly is safe and appears to minimize perioperative morbidity. In patients with supermassive splenomegaly, a hand-assisted laparoscopic approach may be required. abstract_id: PUBMED:12632136 Hand-assisted laparoscopic splenectomy for giant spleens. Background: Laparoscopic splenectomy for massive splenomegaly is technically difficult, and the morcellated splenic tissue may be inadequate for histologic study. A hand-assisted technique may provide a technical advantage and allow removal of larger pieces of spleen. Methods: Patients who underwent hand-assisted laparoscopic splenectomy for massive splenomegaly were reviewed. Demographic information, operative data, and outcomes data were tabulated. Results: Sixteen patients met these criteria. Mean age was 56 years (range, 35-78 years). Operating time averaged 240 min (range, 165-360 min), and median blood loss was 425 cc (range, 100-1800 cc). There were no conversions to an open procedure. Mean weight of extracted spleens was 2008 g (range, 543-4090 g). Postoperative length of stay averaged 3.3 days (range, 2-7 days). There was one postoperative complication (6.25%) and no mortality. Conclusions: Hand-assisted laparoscopic splenectomy for massive splenomegaly is feasible and safe while preserving the recovery benefits of minimal access surgery. It provides an adequate specimen for histologic study. Answer: Splenectomy for massive spleens has been a subject of debate regarding its associated risks. The evidence suggests that while there are increased risks associated with the removal of massive spleens (defined as ≥1500 g), these risks are predominantly related to increased age and underlying illness rather than the size of the spleen itself. When adjusting for age and diagnosis, spleen size does not appear to be a significant hazard (PUBMED:9515524). Laparoscopic splenectomy (LS) has been increasingly used for the removal of massive and even giant spleens (spleen longitudinal diameter >20 cm for massive and >25 cm for giant spleens). Studies have shown that LS can be successfully performed in massive splenomegaly with careful preoperative evaluation and in experienced hands. It is associated with better outcomes than open splenectomy, such as longer operative times but less blood loss and shorter hospital stays (PUBMED:30720696, PUBMED:21924403, PUBMED:26264084). Moreover, with the advancement of minimally invasive techniques and growing expertise, LS has become safer and more feasible even for massive and supramassive spleens, leading to better outcomes compared to open splenectomy and comparable to hand-assisted LS (PUBMED:16924082). The conversion rate from LS to open splenectomy has declined with increasing institutional experience, and LS has been associated with shorter postoperative lengths of stay and improved mortality rates (PUBMED:16924082). In pediatric populations, laparoscopic splenectomy for massive splenomegaly has been shown to be safe and feasible, with less pain, less blood loss, better recovery, and shorter hospital stays compared to open splenectomy (PUBMED:25392624). Hand-assisted laparoscopic splenectomy has also been employed for giant spleens, providing a technical advantage and allowing for the removal of larger pieces of spleen without conversion to an open procedure (PUBMED:12632136). In conclusion, while splenectomy for massive spleens carries inherent risks, these risks are more closely associated with patient factors such as age and underlying illness rather than the size of the spleen. With the evolution of laparoscopic techniques and increased surgical expertise, LS has become a safer and more effective option for managing massive splenomegaly, potentially becoming the 'gold standard' for all elective splenectomies (PUBMED:26264084).
Instruction: Functional dyspepsia and dyspepsia associated with Helicobacter pylori infection: Do they have different clinical characteristics? Abstracts: abstract_id: PUBMED:27282295 Functional dyspepsia and dyspepsia associated with Helicobacter pylori infection: Do they have different clinical characteristics? Background: Helicobacter pylori causes motor, secretory, and inflammatory gastrointestinal disorders and therefore the term "functional" has been questioned when referring to dyspepsia associated with this bacterium. Patients with dyspepsia and Helicobacter pylori infection could have clinical characteristics that differentiate them a priori from those with true functional dyspepsia. Aims: To determine whether there are clinical differences between patients with functional dyspepsia and Helicobacter pylori-associated dyspepsia that enable their a priori identification and to know the prevalence of Helicobacter pylori infection in patients with functional dyspepsia. Patients And Methods: A total of 578 patients with dyspepsia with no significant lesions detectable through endoscopy were divided into 2 groups according to the presence of Helicobacter pylori. The clinical characteristics, medical history, comorbidities, and use of health resources were compared between the two groups. A sub-analysis pairing the groups by age and sex in a 1:1 ratio was carried out to reduce bias. Results: A total of 336 patients infected with Helicobacter pylori were compared with 242 non-infected patients. The prevalence of infection in the patients with dyspeptic symptoms and no endoscopically detectable lesions was 58%. The initial analysis showed that the cases with dyspepsia and Helicobacter pylori infection were more frequently associated with overweight, obesity, high blood pressure, diabetes mellitus, and metabolic syndrome, but the paired analysis nullified all these differences. Conclusions: The patients with dyspepsia infected with Helicobacter pylori had similar clinical characteristics to the non-infected patients and could not be differentiated a priori. The prevalence of Helicobacter pylori infection in patients with functional dyspepsia was 58% and increased with age. abstract_id: PUBMED:15904423 Analysis of clinical characteristics of dyspeptic symptoms in Shanghai patients. Objective: To improve the management of dyspepsia by analyzing the clinical characteristics of dyspeptic symptoms in patients from Shanghai. Methods: 782 patients with functional dyspepsia (FD) or organic dyspepsia (OD) completed a questionnaire about dyspepsia. The questionnaire asked participants to score 12 previously validated common upper abdominal symptoms. The clinical characteristics of dyspepsia including severe symptoms; and the relationship between symptoms and meals were then analyzed. Results: Among the 782 dyspeptic patients, 543 cases (69.4%) were classed as FD and 239 (30.6%) OD. The proportion of males was significantly higher in the OD group. There was no difference in average dyspepsia scores between the 2 dyspeptic groups (21.5 vs 20.4, P > 0.05), but the scores of 'stomach' pain and 'stomach' pain before meals were higher in OD patients than in FD patients (2.65 +/- 1.11 vs 2.16 +/- 0.92, 2.26 +/- 1.26 vs 1.79 +/- 0.92, P < 0.05). In 45.2% of the OD patients and 47.7% of the FD patients, respectively, the severity of symptoms was not related to meals. In subgroups of ulcer-like, dysmotility-like and unspecified dyspepsia, the proportion of patients with symptoms not related to meals was 59.6%, 50.9% and 35.2%, respectively. 2.5% (6/239) of OD patients presented with progressive dysphagia, compared with 2.8% (15/543) of FD patients who presented with intermittent dysphagia. Approximately 8.8% (21/239) of OD patients reported dramatic weight loss accompanied with other severe symptoms, compared with 5.9% (32/543) of FD patients who had no other severe symptoms. A shift in symptom subtypes during the follow-up period was found in 13.8% of FD patients. The infection rate of Helicobacter pylori was higher in the OD group than in the FD group (53.1%vs 42.2%, P < 0.01), but no difference was found among the three subgroups of FD patients (P > 0.05). Halitosis was more often found in dyspeptic patients with H. pylori infection (44.9%vs 17.0% in OD, 47.3%vs 25.4% in FD, P < 0.01). Conclusions: When dyspepsia patients present with 'stomach' pain or 'stomach' pain before meals, a diagnosis of OD should be considered. Intermittent dysphagia, weight loss not accompanied with other severe symptoms, and halitosis (more often seen in patients with H. pylori infection) might be regarded as the relatively unique symptoms of dyspepsia in some FD patients. In FD, we found that the severity of dyspepsia symptoms was not related to meals in half of the patients, and symptom subtypes might shift over time, this adds difficulty to the management of FD. abstract_id: PUBMED:35417865 CLINICAL, ENDOSCOPIC, AND HISTOLOGICAL CHARACTERISTICS OF HELICOBACTER PYLORI POSITIVE AND NEGATIVE ARMENIAN CHILDREN WITH RECURRENT ABDOMINAL PAIN AND/OR DYSPEPSIA. Recurrent abdominal pain (RAP) and dyspepsia are common complaints in children. These symptoms are often associated with Helicobacter pylori (Hp) infection. The aim of the present study was to prospectively analyze clinical, endoscopic, and histological characteristics of Hp+ and Hp- children with RAP and/or dyspepsia. Patients aged 2-18 years with RAP and/or dyspepsia, referred for an upper endoscopy to Arabkir Medical Center - Institute of Child and Adolescent Health (Arabkir MC-ICAH) from November 2015 to December 2017, were involved in the study. Histology was assessed according to the updated Sydney system. Gastric and duodenal specimens were stained by modified Giemsa staining for Hp infection. One antral biopsy was cultured in Hp selective media. 150 patients were included into the study: 70.7% Hp+, 29.3% Hp-. Nausea and vomiting were significantly more common in Hp+ patients (p<0.05). Gastric nodularity (p=0.02), erosions in the stomach (p=0.056), and duodenal erosions (p=0.019) were more common in Hp+. Chronic active (p=0.027) and non-active gastritis (p=0.002), cumulative findings of metaplasia/dysplasia/atrophy in the stomach (p=0.014) and chronic non-active duodenitis (p=0.016), were significantly more common in Hp+ patients. Hp infection prevalence is high in Armenian children with dyspepsia and/or RAP. Clinical symptoms, endoscopic findings, and histopathological findings were significantly different in Hp+ patients as compared to Hp- patients. abstract_id: PUBMED:23653938 Clinical characteristics of dyspepsia in patients with diabetes mellitus type 2 Aim: To estimate the prevalence of dyspepsia and to study its clinical manifestations and risk factors in patients with type 2 diabetes mellitus. Subjects And Methods: One hundred and seven patients with type 2 DM and 33 with functional dyspepsia were examined. A clinical and laboratory study and testing were made to identify the symptoms of dyspepsia; dyspepsia-associated factors were studied. Results: Dyspepsia was observed in 71.0% of the examined patients with type 2 DM. It may be attributed to organic gastrointestinal tract (GIT) diseases only in 42.3% of cases. In the type 2 DM patients, dyspepsia that could not be explained by organic GIT diseases was mainly manifested by a dyskinetic type while an ulcer-like type was prevalent in those with organic GIT diseases. In the patients with type 2 DM, dyspepsia that could not be accounted for GIT diseases was associated with the duration of carbohydrate metabolism disturbance, the presence of diabetic complications, Helicobacter pylori infection, and patient age. Some symptoms of dyspepsia (repletion and epigastric discomfort), which could not be explained by organic GIT diseases in patients with type 2 DM were associated with diabetic complications and carbohydrate metabolic parameters. Conclusion: Dyspepsia in type 2 DM was observed in 71% of cases; it can be due to organic GIT diseases in 42.3% and its association with digestive organ pathology was not revealed in 57.7%. abstract_id: PUBMED:25374243 Clinical characteristics and Helicobacter pylori status of gastric cancer in Thailand. Background: Gastric cancer is the second leading course of cancer death worldwide and H. pylori infection is an important risk factor for gastric cancer development. This study was design to evaluate the clinical, pathological features, survival rate and prevalence of H. pylori infection in gastric cancer in Thailand. Materials And Methods: Clinical information, histological features, endoscopic findings and H. pylori status were collected from gastric cancer patients from Thammasat university hospital during June 1996-December 2011. H. pylori infection was assessed by histological evaluation, rapid urease test and serological test. Clinical information, endoscopic findings and histopathology of all patients were recorded and compared between patients with active or non-active H. pylori infection. Results: A total of 100 gastric cancer patients (55 men and 45 women with mean age of 55±16.8 years) were enrolled in this study. Common presenting symptoms were dyspepsia (74%), weight loss (66%), anemia (63%) and anorexia (38%). Mean duration of symptoms prior to diagnosis was 98 days. Overall prevalence of H. pylori infection was 83% and active H. pylori infection was 40%. 1-year and 5-year survival rates were 43% and 0%. There was no significant difference between active H. pylori infection in different locations (proximal vs non-proximal: 47.1% vs 48.5%; P-value=0.9, OR=0.9; 95%CI=0.3-3.1) and histology of gastric cancer (diffuse type vs intestinal type: 47.4% vs 50%; P-value=0.8, OR=0.9, 95%CI=0.3-2.7). However, linitis plastica was significantly more common in non-active than active H. pylori infection (27.9% vs 0%; P-value<0.0001, OR=13.3, 95%CI=3.2-64.5). Moreover, gastric cancer stage 4 was higher in non-active than active H. pylori infection (93% vs 50%, P-value<0.001). Conclusions: Prevalence of H. pylori infection in Thai gastric cancer patients was high but active infection was low. Most gastric cancer patients presented in advance stage and had a grave prognosis. Screening for gastric cancer in high risk individuals might be an appropriate tool for early detection and improve the treatment outcome for this particular disease in Thailand. abstract_id: PUBMED:17092197 Gastric MALT lymphoma: clinical characteristics and prevalence of H. pylori infection in a series of 37 cases. Objective: to perform a retrospective review of the clinical characteristics and prevalence of H. pylori infection in patients with gastric MALT lymphoma diagnosed in our hospital during the last 15 years. Methods: patients with gastric MALT lymphoma diagnosed in our hospital during the last 15 years were retrospectively included. Demographic, clinic, analytic, endoscopic, and histological variables were reviewed. The extension study, the staging classification, and the presence of H. pylori infection were assessed. Results: thirty-seven patients with gastric MALT lymphoma were identified. Mean age was 61 years, with 62% of males. The most common presentation symptom was dyspepsia (76%), followed by digestive bleeding (11%) and constitutional syndrome (8%). At endoscopy, erosive lesions were identified in 41%, and proliferative or exophytic lesions in 43%. Most lymphomas were classified as low-grade (68%). The stage distribution was EI for 56%, EII for 13%, EIII for 3%, and EIV for 28%. The prevalence of H. pylori infection (histology in all cases, rapid urease test in 19%, and 13C-urea breath test in 24%) was 46%. When only low-grade lymphomas in stage EI were considered, H. pylori prevalence increased to 55%. When H. pylori infection was evaluated by 13C-urea breath testing (in addition to histology), the prevalence of H. pylori infection increased to 78%. Conclusions: it is probable that the reduced H. pylori prevalence found in some studies, as in ours, could be explained by false-negative results obtained when only one diagnostic method was used. Therefore, at least two (invasive) diagnostic methods should be performed. Furthermore, the performance of a non-invasive diagnostic method (such as a 13C-urea breath test) before the exclusion of H. pylori infection should be considered. abstract_id: PUBMED:26058116 Clinical case. Chronic gastritis associated with Helicobacter pylori and Epstein-Barr virus The article describes a clinical case of chronic gastritis associated with Helicobacter pylori infection and Epstein-Barr virus. The authors draw attention to the peculiarities of dyspepsia syndrome, on the characteristics of this disease in the endoscopic and morphological study. abstract_id: PUBMED:16472585 Association of the predominant symptom with clinical characteristics and pathophysiological mechanisms in functional dyspepsia. Background & Aims: Functional dyspepsia (FD) is considered a heterogeneous disorder with different pathophysiological mechanisms contributing to the symptom pattern. The Rome II committee proposed that subdividing patients with FD into groups with predominant pain versus discomfort might identify subgroups with homogeneous pathophysiological and clinical properties. The aim of this study was to analyze the relationship of predominant pain or discomfort with pathophysiological mechanisms and to evaluate whether considering individual predominant symptoms yields better results. Methods: Consecutive FD patients (n = 720; 489 women; mean age, 41.3 +/- 0.6 years) filled out a dyspepsia questionnaire and identified a single most bothersome symptom. We analyzed the association of this predominant symptom with demographic, clinical, and pathophysiological features (Helicobacter pylori status, gastric emptying in 592 patients, and gastric sensitivity and accommodation testing in 332 patients). Results: According to Rome II criteria, 22% were pain predominant and 78% discomfort predominant. Patients with predominant pain had a higher prevalence of hypersensitivity (44% vs 25%) and delayed gastric emptying was observed less frequently in these patients (16% vs 26%), but there was major overlap. Detailed analysis showed that any of 8 dyspeptic symptoms could be predominant. Predominant early satiety or vomiting was associated with significantly higher prevalences of weight loss (89% and 75%, respectively) and of acute onset (61% and 60%, respectively). Impaired accommodation was found in 79% of patients with predominant early satiety. The highest prevalence of delayed emptying was found in predominant fullness (38%) and of hypersensitivity in predominant pain (44%). Conclusions: Subdividing FD patient groups according to the predominant symptom does not reliably identify subgroups with a homogeneous underlying pathophysiological mechanism. abstract_id: PUBMED:20052349 Examination of geographical, clinical and intrahost variations in the 3' repeat region of cagA gene in Helicobacter pylori. The size variation of the cytoxin-associated protein (cagA), which is dependent on the 3' repeat region (3'RR) of the cagA gene, is known to play a crucial role in the pathogenesis of Helicobacter pylori infection. The present study evaluated the relationship between the 3'RR variation and the geographic distribution, clinical manifestations, and locations of colonization in the stomach. We evaluated the 3'RR of H. pylori isolates from 78 patients with gastric cancer, peptic ulcer, and non-ulcer dyspepsia from Japan, Hong Kong, India, and the United States and assessed the variations of 3'RR according to the geographical and clinical characteristics. Sixty eight (87.2%) patients had the same 650 bp band without geographical differences. The frequency of polymorphisms in the 3'RR did not differ when compared to the clinical manifestations (P=0.868). The length of 3'RR did not differ by location of colonization. In conclusion, the 3'RR variation of cagA gene is not associated with the geographical and clinical characteristics of the patients studied. abstract_id: PUBMED:24941349 Clinical expression of lymphocytic duodenosis in "mild enteropathy" celiac disease and in functional gastrointestinal syndromes. Objective: Abnormally high number of duodenal intraepithelial lymphocytes is frequently found in many conditions including mild enteropathy celiac disease (CD) and functional gastrointestinal syndromes, but is unclear whether lymphocytosis affects the clinical phenotype particularly in functional syndromes. Materials And Methods: We compared clinical characteristics of celiac patients with lymphocytic duodenosis and normal villous structure with those of patients with functional gastrointestinal syndromes with and without lymphocytic duodenosis. We retrospectively identified 3 cohorts among patients referred for suspected CD: (1) "CoelD", 135 patients (age 36 ± 14 years) with mild enteropathy CD; (2) "LymD", 245 patients (38 ± 12 years) with functional gastrointestinal syndromes and lymphocytic duodenosis; and (3) "NorD", 147 patients (37 ± 15 years) with functional syndromes and normal duodenal histology. Results: Prevalence of gastrointestinal symptoms was similar in the three cohorts, but prevalence of extra-intestinal manifestations (42% vs. 27% vs. 18%, p < 0.003) and of associated diseases (35% vs. 15% vs. 14%, p < 0.0001) was higher in "CoelD" than in "LymD" and "NorD", respectively. Prevalence of Helicobacter pylori infection was similar in the three cohorts. The proportion of patients with final diagnosis of irritable bowel syndrome-diarrhea (38% vs. 37%), dyspepsia (31% vs. 27%), functional pain (14% vs. 19%), and functional diarrhoea (14% vs. 11%) was virtually the same in the cohorts with (LymD) and without (NorD) lymphocytic duodenosis. Conclusions: Lymphocytic duodenosis has different clinical presentation in patients with mild enteropathy CD than those with functional gastrointestinal syndromes, and is not specific for any particular functional syndrome. Answer: The clinical characteristics of functional dyspepsia (FD) and dyspepsia associated with Helicobacter pylori infection may not be significantly different. A study found that patients with dyspepsia infected with Helicobacter pylori had similar clinical characteristics to non-infected patients and could not be differentiated a priori. The prevalence of Helicobacter pylori infection in patients with functional dyspepsia was 58% and increased with age (PUBMED:27282295). Another study reported that the infection rate of Helicobacter pylori was higher in the organic dyspepsia (OD) group than in the FD group, but no difference was found among the three subgroups of FD patients. Halitosis was more often found in dyspeptic patients with H. pylori infection (PUBMED:15904423). In children, clinical symptoms, endoscopic findings, and histopathological findings were significantly different in Hp+ patients compared to Hp- patients, with nausea and vomiting being more common in Hp+ patients (PUBMED:35417865). In patients with diabetes mellitus type 2, dyspepsia was observed in 71% of cases, and it could be due to organic gastrointestinal tract diseases in 42.3% (PUBMED:23653938). In Thailand, the prevalence of H. pylori infection in gastric cancer patients was high, but active infection was low. Most gastric cancer patients presented in an advanced stage and had a grave prognosis (PUBMED:25374243). In patients with gastric MALT lymphoma, the prevalence of H. pylori infection was 46%, and when only low-grade lymphomas in stage EI were considered, H. pylori prevalence increased to 55% (PUBMED:17092197). Subdividing FD patient groups according to the predominant symptom does not reliably identify subgroups with a homogeneous underlying pathophysiological mechanism (PUBMED:16472585). The 3' repeat region variation of the cagA gene in H. pylori is not associated with the geographical and clinical characteristics of the patients studied (PUBMED:20052349). Lastly, lymphocytic duodenosis has different clinical presentations in patients with mild enteropathy CD than those with functional gastrointestinal syndromes, and is not specific for any particular functional syndrome (PUBMED:24941349). In summary, In summary, while there are some associations between Helicobacter pylori infection and certain clinical characteristics in dyspepsia, the evidence suggests that it may be challenging to distinguish between functional dyspepsia and dyspepsia associated with H. pylori infection based solely on clinical characteristics. The prevalence of H. pylori is notable in various dyspeptic conditions, but the presence of the infection does not consistently result in a distinct clinical profile that can be easily differentiated from functional dyspepsia without the infection. Moreover, the presence of lymphocytic duodenosis in patients with functional gastrointestinal syndromes does not appear to be specific for any particular functional syndrome and presents differently in mild enteropathy celiac disease compared to functional gastrointestinal syndromes (PUBMED:24941349).
Instruction: Do clinical pathways for major vascular surgery improve outcomes and reduce cost? Abstracts: abstract_id: PUBMED:9279303 Do clinical pathways for major vascular surgery improve outcomes and reduce cost? Purpose: This study was performed to determine whether the implementation of clinical pathways for patients who undergo major vascular procedures in a community hospital would shorten the length of stay and reduce charges when compared with Medicare standards. Methods: Length of stay, hospital costs, and morbidity, mortality, and readmission rates for the four most common vascular diagnosis-related group (DRG) categories at our institution were compared with Medicare standards. The four categories were DRG 005 (extracranial vascular procedures), DRG 110 (aortic and renal procedures), DRG 478 (leg bypass with comorbidity), and DRG 479 (leg bypass without comorbidity). Between May 1, 1994, and June 30, 1996, 112 patients underwent carotid endarterectomy, 42 patients underwent aortic or renal procedures, and 130 patients underwent lower extremity bypass procedures (68% with comorbidity). Only Medicare patients were included because exact cost/reimbursement data were available. No admissions were excluded. Results: The average length of stay was 1.2 days for DRG 005, 6.9 days for DRG 110, and 3.2 and 2.1 days for DRGs 478 and 479, respectively. The average cost savings when compared with the Medicare reimbursement was $4338 for DRG 005, $7161 for DRG 110, $4108 for DRG 478, and $2313 for DRG 479. Readmission was necessary for 9% of peripheral bypass patients. Ten percent of patients in DRG 005 and 86% of patients in DRG 110 needed intensive care, whereas only 2% of patients who underwent complicated bypass procedures did. Ninety percent of carotid endarterectomy patients and 23% of leg bypass patients were discharged on the first postoperative day. There were two postoperative strokes (2%) after carotid surgery. Thirty-three percent of aortic/renal patients had complications that led to care outside the clinical pathway. Twenty-five percent of leg bypass patients required home care to treat open foot wounds. Total inpatient cost savings were $1,256,000 when compared with Medicare reimbursement. Conclusions: Clinical pathways significantly improve the length of stay and decrease inpatient charges for major vascular surgical procedures while maintaining high standards of care. Factors that favorably affected the length of stay and hospital charges were outpatient arteriography, same-day admission, early ambulation, physical therapy, home care, use of the intensive care unit on a selective basis, and early discharge. Factors that adversely affected these outcomes were emergency admission, inpatient arteriography, thrombolytic therapy, complications, and the need for dialysis or anticoagulation. abstract_id: PUBMED:11239381 Cost-effectiveness and cost-efficiency in vascular surgery. Medical costs have been an increasing focus of attention in medicine in general, particularly in the United States health care environment. Because vascular disease is most prevalent in the elderly, the forecasted growth of this population over the next several decades will put increasing strains on health care financing. Costs will undoubtedly be a major component of any system of outcomes measurement, and it is important for vascular surgeons and others who specialize in vascular disease to be familiar with the definitions and methodology and cost-effectiveness analysis. Similarly, improving cost efficiency within the practice of vascular surgery will be mandated. The purpose of this article is to review systems of cost-effectiveness analysis as they pertain to vascular surgery, as well as to define critical pathway models that have been used to improve cost-efficiency in vascular surgery. abstract_id: PUBMED:9258016 Role of nursing personnel in implementing clinical pathways and decreasing hospital costs for major vascular surgery. Collaboration with key nursing personnel is essential to successfully implement clinical pathways, establish a dedicated vascular wing, and achieve significant hospital cost savings while still maintaining quality care with low morbidity and mortality rates. Key nursing personnel met once a month during a 1-year period with vascular surgeons, hospital administrators, and health care advisors to plan strategies to develop clinical pathways and establish a dedicated vascular ward. The pathways were then implemented. We compared morbidity, mortality, readmission rates, adn hospital costs among two groups of patients admitted for major vascular surgery. Nursing personnel, attending staff, and surgical residents found that treating patients who had undergone major vascular surgery was more straightforward and efficient after clinical pathways and a dedicated vascular wing were established. In addition, there were no significant differences between the groups in terms of overall mortality or pulmonary, neurologic, or cardiac complications despite shorter hospital stay and decreased hospital costs. Also there were no significant differences in readmission rates within 30 days. abstract_id: PUBMED:8523599 Impact of clinical pathways on hospital costs and early outcome after major vascular surgery. Purpose: The purpose of this study was to determine whether major vascular surgery could be performed safely and with significant hospital cost savings by decreasing length of stay and implementation of vascular clinical pathways. Methods: Morbidity, mortality, readmission rates, same-day admissions, length of stay, and hospital costs were compared between patients who were electively admitted between September 1, 1992, and August 30, 1993 (group 1), and January 1 to December 31, 1994 (group 2), for extracranial, infrarenal abdominal aortic, and lower extremity arterial surgery. For group 2 patients, vascular critical pathways were instituted, a dedicated vascular ward was established, and outpatient preoperative arteriography and anesthesiology-cardiology evaluations were performed. Length-of-stay goals were 1 day for extracranial, 5 days for aortic, and 2 to 5 days for lower extremity surgery. Emergency admissions, inpatients referred for vascular surgery, patients transferred from other hospitals, and patients who required prolonged preoperative treatment were excluded. Results: With this strategy same-day admissions were significantly increased (80% [145/177] vs 6.2% [9/145]) (p < 0.0001), and average length of stay was significantly decreased (3.8 vs 8.8 days) (p < 0.0001) in group 2 versus group 1, respectively. There were no significant differences between group 1 and group 2 in terms of overall mortality rate (2.1% [3/145] vs 2.3% [4/177]), cardiac (3.4% [5/145] vs 4.0% [7/177]), pulmonary (4.1% [6/145] vs 1.7% [3/177]), or neurologic (1.4% [2/145] vs 0% [0/177]) complications, or readmission within 30 days (11.3% [16/142] vs 9.2% [16/173]) (p > 0.05). There were also no differences in morbidity or mortality rates when each type of surgery was compared. Annual hospital cost savings totalled $1,267,445. Conclusion: Same-day admission and early hospital discharge for patients undergoing elective major vascular surgery can result in significant hospital cost savings without apparent increase in morbidity or mortality rates. abstract_id: PUBMED:17382215 Deviation-based cost modeling: a novel model to evaluate the clinical and economic impact of clinical pathways. Background: Although clinical pathways were developed to streamline patient care cost efficiently, few have been put to rigorous financial test. This is important today, because payors demand clear solutions to the cost-quality puzzle. We describe a novel, objective, and versatile model that can evaluate and link the clinical and economic impacts of clinical pathways. Study Design: Outcomes for 209 consecutive patients undergoing high-acuity surgery (pancreaticoduodenectomy), before and after pathway implementation, were examined. Four grades of deviation (none to major) from the expected postoperative course were defined by merging length of stay with a validated classification scheme for complications. Deviation-based cost modeling (DBCM) links these deviations to actual total costs. Results: Clinical outcomes compared favorably with benchmark standards for pancreaticoduodenectomy. Despite increasing patient acuity, this new pathway shortened length of stay, reduced resource use, and decreased hospital costs. DBCM indicated that fewer deviations from the expected course occurred after pathway implementation. The impacts of complications were less severe and translated to an overall cost savings of $5,542 per patient. DBCM also revealed that as more patients migrated to the expected course within our standardized care path, 50% of overall cost savings ($2,780) was attributable to the pathway alone, and improvements in care over time (secular trends) accounted for the remainder. Conclusions: DBCM accurately determined the incremental contribution of clinical pathway implementation to cost savings beyond that of secular trends alone. In addition, this versatile model can be customized to other systems' improvements to reveal their true clinical and economic impacts. This is valuable when choices linking quality with cost must be made. abstract_id: PUBMED:9805697 Benchmarking the perioperative process: II. Introducing anesthesia clinical pathways to improve processes and outcomes and to reduce nursing labor intensity in ambulatory orthopedic surgery. Study Objectives: (1) To introduce anesthesia clinical pathways as a management tool to improve the quality of care; (2) to use the Procedural Times Glossary published by the Association of Anesthesia Clinical Directors (AACD) as a template for data collection and analysis; and (3) to determine the effects of anesthesia clinical pathways on surgical processes, outcomes, and costs in common ambulatory orthopedic surgery. Design: Hospital database and patient chart review of consecutive patients undergoing anterior cruciate ligament reconstruction (ACLR) during academic years (AY) 1995-1996 and 1996-1997. Patient data from AY 1995-1996, during which no intraoperative anesthesia clinical pathways existed, served as historical controls. Data from AY 1996-1997, during which intraoperative anesthesia clinical pathways were used, served as the treatment group. Regional anesthesia options were routinely offered to patients in the clinical pathway. Setting: Ambulatory surgery center in a teaching hospital. Measurements And Main Results: The records of 503 ASA physical status I and II patients were reviewed. 1996-1997 patients underwent clinical pathway anesthesia care in which the intraoperative and postoperative anesthesia process was standardized with respect to symptom management, drugs, and equipment used. 1995-1996 patients did not have a standardized intraoperative and postoperative anesthetic course with respect to the management of common symptoms or to specific drugs and supplies used. Intervals described in the AACD Procedural Times Glossary, anesthesia drug and supply costs, and patient outcome variables (postoperative nursing interventions required and unexpected admissions), as influenced by the use of the anesthesia clinical pathway, were measured. Clinical pathway anesthesia care of ACLR in 1996-1997, which actively incorporated regional anesthesia options, reduced pharmacy and materials cost variability; slightly increased turnover time; improved intraoperative anesthesia and surgical efficiency, recovery times, and unexpected admission rates; and decreased the number of required nursing interventions for common postoperative symptoms. Conclusions: Clinical pathway patient management systems in anesthesia care are likely to produce useful outcome data of current practice patterns when compared with historical controls. This management tool may be useful in simultaneously containing costs and improving process efficiency and patient outcomes. abstract_id: PUBMED:33819488 When More Is Better: Underused Advanced Imaging Exams That Can Improve Outcomes and Reduce Cost of Care. Appropriate use of resources is a tenet of care transformation efforts, with a national campaign to reduce low-value imaging. The next level of performance improvement is to bolster evidence-based screening, imaging surveillance, and diagnostic innovation, which can avert more costly, higher-risk elements of unnecessary care like emergent interventions. Clinical scenarios in which underused advanced imaging can improve outcomes and reduce total cost of care are reviewed, including abdominal aortic aneurysm surveillance, coronary artery disease diagnosis, and renal mass characterization. Reliable abdominal aortic aneurysm surveillance imaging reduces emergency surgery and can be driven by radiologists incorporating best practice standardized recommendations in imaging interpretations. Coronary computed tomography angiography in patients with stable and unstable chest pain can reduce downstream resource use while improving outcomes. Preoperative 99mTc-sestamibi single-photon emission computed tomography (SPECT) reliably distinguishes oncocytoma from renal cell carcinoma to obviate unnecessary nephrectomy. As technological advances in diagnostic, molecular, and interventional radiology improve our ability to detect and cure disease, analyses of cost effectiveness will be critical to radiology leadership and sustainability in the transition to a value-based reimbursement model. abstract_id: PUBMED:29606569 Preoperative dementia is associated with increased cost and complications after vascular surgery. Objective: Dementia represents a major risk factor for medical complications and has been linked to higher rates of complication after surgery. Given the systemic nature of vascular disease, medical comorbidities significantly increase cost and complications after vascular surgery. We hypothesize that the presence of dementia is an independent predictor of increased postoperative complications and higher health care costs after vascular surgery. Methods: The Vascular Quality Initiative database was queried for all patients undergoing vascular surgery at a single academic medical center from 2012 to 2017. All modules were included (open abdominal aortic aneurysm, suprainguinal bypass, lower extremity bypass, amputation, carotid endarterectomy, endovascular aortic aneurysm repair, thoracic endovascular aortic aneurysm repair, and peripheral endovascular intervention). An institutional clinical data repository was queried to identify patients with International Classification of Diseases, Ninth Revision diagnosis codes for dementia as well as total hospital cost and long-term survival using Social Security records from the Virginia Department of Health. Hierarchical logistic and linear regression models were fit to assess risk-adjusted predictors of any complication and inflation-adjusted cost. Kaplan-Meier and Cox proportional hazards models were used for survival analysis. Results: A total of 2318 patients underwent vascular surgery and were captured by the Vascular Quality Initiative during the past 5 years, with 88 (3.8%) having a diagnosis of dementia. Patients with dementia were older and had higher rates of medical comorbidities, and the most common procedure was major amputation. In addition, dementia patients had a significantly higher rate of any complication (52% vs 16%; P < .0001) and increased 90-day mortality (14% vs 4.8%; P = .0002). Furthermore, dementia was associated with significant resource utilization, including preoperative length of stay (LOS), postoperative LOS, intensive care unit LOS, and inflation-adjusted total hospital cost (all P < .0001). Hierarchical modeling demonstrated that dementia was the strongest preoperative predictor for any complication (odds ratio, 8.64; P < .0001) and had the largest risk-adjusted impact on total hospital cost ($22,069; P < .0001). Finally, survival analysis demonstrated that dementia is independently associated with reduced survival after vascular surgery (hazard ratio, 1.37; P = .018). Conclusions: This study demonstrated that dementia is one of the strongest predictors of any complication and increased hospital cost after vascular surgery. Given the high risk of clinical and financial maladies, patients with dementia should be carefully considered and counseled before undergoing vascular surgery. abstract_id: PUBMED:10164875 Clinical pathways can help manage managed care. Clinical pathways are designed to map the course of medical or surgical interventions and their expected outcomes. They take into account standards of care and the human and material resources needed to provide specific services. Clinical pathways promote cost-effective, high-quality care and thereby help to standardize practice protocols and coordinate care. Healthcare organizations can decrease lengths of stay, reduce costs, and win more managed care contracts by developing and implementing comprehensive clinical pathways. Pathways can also make carve-outs and other specialty contract services easier to manage. abstract_id: PUBMED:21123093 Clinical pathways improve hospital resource use in endocrine surgery. Background: Clinical pathways are increasingly adopted to streamline care after elective surgery. Here, we describe novel clinical pathways developed for endocrine operations (ie, unilateral thyroid lobectomy, total thyroidectomy, parathyroidectomy) and evaluate their effects on economic end points at a major academic hospital. Study Design: Length of stay (LOS), hospital charges, and hospital costs for 681 patients undergoing elective endocrine surgery during a 30-month period were compared between patients managed with or without a specific pathway. Hospital costs were subcategorized by cost center. The analysis arms were conducted concurrently to control for institutional effects and end points were adjusted for demographic factors and comorbidity. Results: Clinical pathways were observed to significantly reduce LOS, charges, and costs for endocrine procedures. LOS was reduced for thyroid lobectomy (nonpathway 1.6 days versus pathway 1.0; p < 0.001), total thyroidectomy (2.8 versus 1.1; p < 0.0001), and parathyroidectomy (1.6 versus 1.1; p < 0.001). Nonpathway patients were 6.2 times more likely to be admitted to the intensive care unit than pathway patients (p < 0.05). Clinical pathways reduced total charges from $21,941 to $17,313 for all cases (21% reduction; p < 0.0001), with 47% of savings attributable to reduced LOS. Significant improvements were observed for laboratory use (73% reduction; p < 0.0001) and nonroutine medication administration (73% reduction; p < 0.0001). The readmission rate within 72 hours of discharge was not significantly lower in the pathway group. Conclusions: Implementation of clinical pathways improves efficiency of care after elective endocrine surgery without adversely affecting safety or quality. Because these system measures optimize resource use, they represent an important component of high-volume subspecialty surgical services. Answer: Yes, clinical pathways for major vascular surgery have been shown to improve outcomes and reduce costs. Studies have demonstrated that the implementation of clinical pathways in patients undergoing major vascular procedures can significantly shorten the length of stay and reduce hospital charges compared with Medicare standards. For example, one study found that the average length of stay was reduced across various diagnosis-related group (DRG) categories, with corresponding average cost savings when compared with Medicare reimbursement. This resulted in total inpatient cost savings of $1,256,000 (PUBMED:9279303). Another study showed that same-day admission and early hospital discharge for patients undergoing elective major vascular surgery could result in significant hospital cost savings without an apparent increase in morbidity or mortality rates (PUBMED:8523599). Clinical pathways have been associated with improved efficiency in treating patients who have undergone major vascular surgery, with no significant differences in overall mortality or complications despite shorter hospital stays and decreased hospital costs (PUBMED:9258016). Additionally, the introduction of anesthesia clinical pathways in ambulatory orthopedic surgery improved surgical processes, outcomes, and costs (PUBMED:9805697). Furthermore, clinical pathways have been found to improve hospital resource use in other areas of surgery, such as endocrine surgery, by significantly reducing length of stay, hospital charges, and costs (PUBMED:21123093). The use of clinical pathways is also associated with improved cost-efficiency in vascular surgery (PUBMED:11239381). In summary, clinical pathways are effective tools for improving outcomes and reducing costs in major vascular surgery, as well as in other surgical disciplines. They help to standardize care, streamline patient management, and optimize resource utilization while maintaining high standards of care.
Instruction: Is malnutrition overdiagnosed in older hospitalized patients? Abstracts: abstract_id: PUBMED:32560480 Comparison of Three Nutritional Screening Tools with the New Glim Criteria for Malnutrition and Association with Sarcopenia in Hospitalized Older Patients. The integrated assessment of nutritional status and presence of sarcopenia would help improve clinical outcomes of in-hospital aged patients. We compared three common nutritional screening tools with the new Global Leadership Initiative on Malnutrition (GLIM) diagnostic criteria among hospitalized older patients. To this, 152 older patients were assessed consecutively at hospital admission by the Malnutrition Universal Screening Tool (MUST), the Subjective Global Assessment (SGA), and the Nutritional Risk Screening 2002 (NRS-2002). A 46% prevalence of malnutrition was reported according to GLIM. Sensitivity was 64%, 96% and 47%, and specificity was 82%, 15% and 76% with the MUST, SGA, and NRS-2002, respectively. The concordance with GLIM criteria was 89%, 53% and 62% for the MUST, SGA, and NRS-2002, respectively. All the screening tools had a moderate value to diagnose malnutrition. Moreover, patients at high nutritional risk by MUST were more likely to present with sarcopenia than those at low risk (OR 2.5, CI 1.3-3.6). To conclude, MUST is better than SGA and NRS-2002 at detecting malnutrition in hospitalized older patients diagnosed by the new GLIM criteria. Furthermore, hospitalized older patients at high risk of malnutrition according to MUST are at high risk of presenting with sarcopenia. Nutritional status should be determined by MUST in older patients at hospital admission, followed by both GLIM and the European Working Group on Sarcopenia in Older People (EWGSOP2) assessment. abstract_id: PUBMED:37764866 Refeeding Syndrome in Older Hospitalized Patients: Incidence, Management, and Outcomes. Refeeding syndrome (RFS) is a serious metabolic disturbance that manifests after reintroducing nutrition to severely malnourished individuals. Especially susceptible are older patients, due to higher malnutrition rates, although the incidence remains uncertain. Our study aimed to assess the occurrence and management of RFS in malnourished older hospitalized patients. This prospective study included 156 malnourished older patients, with malnutrition identified using the Mini Nutritional Assessment-Short Form. We evaluated critical biochemical parameters at admission and for ten days after starting nutritional therapy. Using the consensus evidence-based approach, we managed and evaluated RFS. We also tracked mortality and unexpected hospital readmissions for six months after discharge. The average patient age was 82.3 ± 7.5 years, with 69% female. Patients showed hypophosphatemia (23%), hypomagnesemia (31%), and hypokalemia (6%) on admission. Prior to nutritional replenishment, patients were classified as being at low (64%), high (30%), or very high risk (6%) for RFS. After nutritional therapy, 14% and 5% developed imminent and manifest RFS, respectively. There were no significant differences in six-month post-discharge mortality rates or unexpected hospital readmissions between patients with or without RFS. Despite adherence to guideline-recommended management, RFS can persist. No elevated mortality was noted in RFS patients, potentially due to early diagnosis and treatment. abstract_id: PUBMED:36286208 Key Factors and AI-Based Risk Prediction of Malnutrition in Hospitalized Older Women. The numerous consequences caused by malnutrition in hospitalized patients can worsen their quality of life. The aim of this study was to evaluate the prevalence of malnutrition on the elderly population, especially focusing on women, identify key factors and develop a malnutrition risk predictive model. The study group consisted of 493 older women admitted to the Asunción Klinika Hospital in the Basque Region (Spain). For this purpose, demographic, clinical, laboratory, and admission information was gathered. Correlations and multivariate analyses and the MNA-SF screening test-based risk of malnutrition were performed. Additionally, different predictive models designed using this information were compared. The estimated frequency of malnutrition among this population in the Basque Region (Spain) is 13.8%, while 41.8% is considered at risk of malnutrition, which is increased in women, with up to 16.4% with malnutrition and 47.5% at risk of malnutrition. Sixteen variables were used to develop a predictive model obtaining Area Under the Curve (AUC) values of 0.76. Elderly women assisted at home and with high scores of dependency were identified as a risk group, as well as patients admitted in internal medicine units, and in admissions with high severity. abstract_id: PUBMED:38280831 Prevalence and determinants of micronutrient deficiencies in malnourished older hospitalized patients. Background And Objective: Malnutrition and micronutrient deficiencies represent significant concerns in geriatric care, leading to adverse health outcomes in older adults. The study aimed to investigate the prevalence and determinants of micronutrient deficiencies in malnourished older hospitalized patients. Design And Setting: This prospective, observational study was conducted in a geriatric acute care unit. Participants: The study included 156 malnourished older adults. Measurements: Malnutrition was identified using the Mini Nutritional Assessment-Short Form. Micronutrient status was assessed through serum analysis of vitamins (A, B1, B6, B12, C, D, E, H, K, folic acid) and minerals (iron, zinc, copper, selenium) within 24 h post-admission. Results: The average patient age was 82.3 ± 7.5 years, with 69% female. The results revealed a high prevalence of micronutrient deficiencies, with 90% of patients exhibiting deficiencies in three or more micronutrients. Notably, every patient presented at least one micronutrient deficiency. Common deficiencies were found in vitamins C (75%), D (65%), H (61%), and K (45%), as well as folic acid (37%), iron (31%), zinc (36%) and selenium (35%). In binary regression analysis, the amount of previous weight loss was significantly associated with a higher prevalence of multiple (>2) micronutrient deficiencies (P = 0.045). Other variables such age (P = 0.449), gender (P = 0.252), BMI (P = 0.265) and MNA-SF score (P = 0.200) did not show any significant association with the prevalence multiple micronutrient deficiencies. Conclusion: The high prevalence of micronutrient deficiencies in malnourished older hospitalized patients underscore the urgent need for targeted interventions to address micronutrient deficiencies in this population, promoting their health status. abstract_id: PUBMED:38068326 Optimized Refeeding vs. Standard Care in Malnourished Older Hospitalized Patients: A Prospective, Non-Randomized Cluster-Controlled Study in Geriatric Acute Care. Malnutrition is a prevalent geriatric syndrome with adverse health outcomes. This study aimed to assess the effectiveness of an optimized protocol for treatment of malnutrition in older hospitalized patients. We conducted a prospective, non-randomized cluster-controlled study with 156 malnourished patients in the intervention and 73 in the control group, determined using the Mini Nutritional Assessment-Short-Form. The intervention group received individualized nutritional care, including electrolyte and micronutrients monitoring, while the control received standard care. We primarily focused on complications such as infections, falls, unplanned hospital readmissions, and mortality, and secondarily focused on functional status and mobility improvements. Post-discharge follow-ups occurred at 3 and 6 months. Our findings demonstrated that the intervention group (age 82.3 ± 7.5 y, 69% female), exhibited greater previous weight loss (11.5 kg vs. 4.7 kg), more cognitive impairment and a longer hospital stay (19 days vs. 15 days). Binary logistic regression showed no difference in primary endpoint outcomes between groups during hospitalization. At 3- and 6-month follow-ups, the control group exhibited fewer adverse outcomes, particularly falls and readmissions. Both groups showed in-hospital functional improvements, but only controls maintained post-discharge mobility gains. The study concludes that the nutritional intervention did not outperform standard care, potentially due to study limitations and high-quality standard care in control group geriatric departments. abstract_id: PUBMED:36185642 Validation of GLIM criteria on malnutrition in older Chinese inpatients. Objective: Malnutrition is a nutritional disorder and common syndrome that has a high incidence and is easily ignored in hospitalized older patients. It can lead to multiple poor prognoses, such as frailty. Early identification and correct evaluation of possible malnutrition and frailty are essential to improve clinical outcomes in older patients. Therefore, our objective was to explore the applicability and effectiveness of the Global Leadership Initiative on Malnutrition (GLIM) criteria for identifying malnutrition in older patients. Methods: In total, 223 participants aged ≥60 years were involved. Nutrition was evaluated using the Mini Nutritional Assessment-Full Form (MNA-FF) and GLIM criteria, which adopt a two-step procedure. The first step was to use three different methods for the screening of nutritional risk: the Nutrition Risk Screening 2002, the Mini Nutritional Assessment Short Form (MNA-SF), and the Malnutrition Universal Screening Tool. The second step was to link a combination of at least one phenotypical criterion and one etiological criterion to diagnose malnutrition. The Clinical Frailty Scale was used to assess frailty. Sensitivity, specificity, Youden index, kappa values, and positive and negative predictive values were used to evaluate the validity of the GLIM criteria. Logistic regression models were used to assess whether there was a correlation between malnutrition, as defined by the GLIM criteria, and frailty. Results: We found that 32.3-49.8% of our patient sample were at risk of malnutrition based on the GLIM diagnosis and using the three different screening tools; 19.3-27.8% of the patients were malnourished. GLIM criteria with MNA-SF as a diagnostic validation and MNA-FF as a reference showed high consistency (K = 0.629; p < 0.001), sensitivity (90.5%), and specificity (86.4%). Logistic regression analysis showed that malnutrition, using MNA-SF with the GLIM criteria, was relevant for a higher likelihood of frailty (OR = 1.887; 95% CI 1.184-2.589). Conclusions: The incidence of GLIM-defined malnutrition was 19.3-27.8% using different screening tools. The consistency between the GLIM criteria using the MNA-SF and the MNA methods was high. Malnutrition, as diagnosed by the GLIM criteria with MNA-SF, was significantly correlated with frailty. GLIM criteria with MNA-SF may be a more reliable malnutrition assessment process in older inpatients. abstract_id: PUBMED:38111316 "Eat Enough"-A nurse-led intervention to enhance hospitalized older adults' protein and energy nutrition. Purpose: To develop an intervention enhancing hospitalized older adults' nutrition. Methods: For the first time, a mixed-methods design with data triangulation was applied according to the six-step model of Corry et al. to elaborate on a complex nursing intervention in the form of a logic model. Patients who were aged ≥80 years and hospitalized for at least 5 days were included. Sample size for quantitative practice analysis was 135 older adults, whereas 22 older inpatients participated in interviews and observations for needs analysis and generated data for key principles. Findings: The intervention "Eat Enough" encompasses nursing team culture and comprises six actionable targets to deliver needs-based support and reach required protein and energy intake for hospitalized older adults by sensitizing nurses and the interprofessional team. Facilitating nutritional intake would be supported by an advanced practice nurse who considers the medical and nursing care plan and therapy. Conclusions: The intervention "Eat Enough" demonstrates that nurses play a key role in interprofessional teams to enhance older adults' nutrition in hospital. The pipeline model displays how the actionable targets can be achieved, and how awareness raising can influence the context-leading to raised calories and protein requirement coverages and shorter length of stay. Implications For Clinical Practice: By identifying risk factors of malnutrition and strengthening nurses' responsibilities, the intervention "Eat Enough" could significantly enhance nutrition among hospitalized older adults. However, the logic model should be tested and implemented in future research. abstract_id: PUBMED:33239871 Low Self-Perception of Malnutrition in Older Hospitalized Patients. Background: Studies focusing on self-perception of nutritional status in older hospitalized patients are lacking. We aimed to examine the self-perception of body weight and nutritional status among older hospitalized patients compared to their actual body weight and nutritional status based on medical assessment. Materials And Methods: This observational cross-sectional study investigated 197 older participants (mean age 82.2±6.8 years, 61% women) who were consecutively admitted to the geriatric acute care ward. Body weight status and nutritional status were assessed using WHO-BMI classification and Mini Nutritional Assessment-Short Form (MNA-SF), respectively. Self-perceived body weight status and nutritional status were assessed with a standardized questionnaire. A follow-up was performed with a short telephone interview after three months. Results: According to MNA-SF, 49% and 35% were at risk of malnutrition and malnourished, respectively. There was no agreement between self-perceived nutritional status and objective nutritional status according to MNA-SF (Kappa: 0.06). A slight agreement was found between subjective body weight status and objective body weight status according to WHO-BMI classification (Kappa: 0.19). A total of 184 patients completed the 3 months follow-up and additional 9 patients died during this time, of which 7 and 2 were malnourished and at risk of malnutrition according to MNA-SF, respectively. Of those who were malnourished and at risk of malnutrition based on MNA-SF and died during follow-up, 67.7% did not realize their malnutrition. Compared to the patients with normal nutritional status during hospitalization, malnourished patients based on MNA-SF had higher rates of unplanned hospital readmission and further weight loss and more often reported health deterioration and experienced death within three months after discharge. Conclusion: No agreement between self-perceived nutritional status and objective nutritional status among older hospitalized patients was found. Our study highlights the need to raise knowledge about the issue of malnutrition and increase awareness of health risks associated with malnutrition among older hospitalized patients. abstract_id: PUBMED:38202202 Evaluation of Multi-Frequency Bioelectrical Impedance Analysis against Dual-Energy X-ray Absorptiometry for Estimation of Low Muscle Mass in Older Hospitalized Patients. The accuracy of multi-frequency (MF) bioelectrical impedance analysis (BIA) to estimate low muscle mass in older hospitalized patients remains unclear. This study aimed to describe the ability of MF-BIA to identify low muscle mass as proposed by The Global Leadership Initiative on Malnutrition (GLIM) and The European Working Group on Sarcopenia in Older People (EWGSOP-2) and examine the association between muscle mass, dehydration, malnutrition, and poor appetite in older hospitalized patients. In this prospective exploratory cohort study, low muscle mass was estimated with MF-BIA against dual-energy X-ray absorptiometry (DXA) in 42 older hospitalized adults (≥65 years). The primary variable for muscle mass was appendicular skeletal muscle mass (ASM), and secondary variables were appendicular skeletal muscle mass index (ASMI) and fat-free mass index (FFMI). Cut-off values for low muscle mass were based on recommendations by GLIM and EWGSOP-2. MF-BIA was evaluated against DXA on the ability to estimate absolute values of muscle mass by mean bias, limits of agreement (LOA), and accuracy (5% and 10% levels). Agreement between MF-BIA and DXA to identify low muscle mass was evaluated with sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV). The association between muscle mass, dehydration, malnutrition, and poor appetite was visually examined with boxplots. MF-BIA overestimated absolute values of ASM with a mean bias of 0.63 kg (CI: -0.20:1.46, LOA: -4.61:5.87). Agreement between MF-BIA and DXA measures of ASM showed a sensitivity of 86%, specificity of 94%, PPV of 75% and NPV of 97%. Boxplots indicate that ASM is lower in patients with malnutrition. This was not observed in patients with poor appetite. We observed a tendency toward higher ASM in patients with dehydration. Estimation of absolute ASM values with MF-BIA should be interpreted with caution, but MF-BIA might identify low muscle mass in older hospitalized patients. abstract_id: PUBMED:31832278 Optimizing Nutrition Care for Pressure Injuries in Hospitalized Patients. Significance: It is estimated that up to 50% of hospitalized patients are malnourished. Malnutrition can lead to longer hospital stays, altered immune function, and impaired skin integrity and wound healing. Malnutrition has been found to be a significant factor influencing pressure injury (PI) risk and wound healing. While PI prevention requires multidimensional complex care using a variety of evidence-based strategies, hospitalized patients benefit from interventions that focus on improving oral nutrition to reduce PI risk and enhance wound healing. Unfortunately, malnutrition is often under-recognized and inadequately managed in hospitalized patients and this can lead to higher rates of complications such as PI. Recent Advances: Recent studies suggest that nutritional care has a major impact in PI prevention and management. Strategies, including early identification and management of malnutrition and provision of specially-formulated oral nutritional interventions to at-risk patients, optimization of electronic health record systems to allow for enhanced administration, monitoring, and evaluation of nutritional therapies, and implementation of protocol-based computerized decision support systems, have been reported to improve outcomes. Critical Issues: Unfortunately, there are gaps in the implementation of nutritional care in hospitals. Timely identification and management of malnutrition is needed to advance quality care for hospitalized patients and reduce malnutrition and associated PI. Future Directions: Further research on effective, evidence-based strategies for implementation of all stages of the nutrition care process is needed to reduce pressure injuries and malnutrition in hospitalized patients. Answer: The provided abstracts do not directly address the question of whether malnutrition is overdiagnosed in older hospitalized patients. Instead, they focus on the prevalence, assessment, management, and outcomes of malnutrition and related conditions such as sarcopenia and refeeding syndrome in this population. For instance, one study compared three common nutritional screening tools with the new Global Leadership Initiative on Malnutrition (GLIM) diagnostic criteria among hospitalized older patients and found that the Malnutrition Universal Screening Tool (MUST) had better sensitivity and specificity than the Subjective Global Assessment (SGA) and the Nutritional Risk Screening 2002 (NRS-2002) when compared to GLIM criteria (PUBMED:32560480). Another study highlighted the incidence and management of refeeding syndrome in malnourished older hospitalized patients, noting that despite adherence to guideline-recommended management, refeeding syndrome can persist (PUBMED:37764866). Additionally, a study on the prevalence of malnutrition in hospitalized older women in the Basque Region of Spain found that a significant proportion of the population was at risk of malnutrition or malnourished (PUBMED:36286208). Another study reported a high prevalence of micronutrient deficiencies in malnourished older hospitalized patients, with 90% exhibiting deficiencies in three or more micronutrients (PUBMED:38280831). One study evaluated an optimized protocol for the treatment of malnutrition in older hospitalized patients and found that it did not outperform standard care (PUBMED:38068326). Another study validated the GLIM criteria for malnutrition in older Chinese inpatients and found a significant correlation between malnutrition and frailty (PUBMED:36185642). Furthermore, an intervention study aimed at enhancing hospitalized older adults' protein and energy nutrition was developed, emphasizing the role of nurses in improving patient nutrition (PUBMED:38111316). Lastly, a study on self-perception of malnutrition in older hospitalized patients found that there was no agreement between self-perceived nutritional status and objective nutritional status, suggesting a lack of awareness among patients regarding their malnutrition status (PUBMED:33239871). In summary, the abstracts suggest that malnutrition is a prevalent and serious concern in older hospitalized patients, with various tools and interventions being used to assess and manage the condition. However, there is no clear evidence from these abstracts to suggest that malnutrition is overdiagnosed in this patient group. Instead, the challenge seems to be in accurately diagnosing and managing malnutrition to improve clinical outcomes.
Instruction: Cranioplasty after postinjury decompressive craniectomy: is timing of the essence? Abstracts: abstract_id: PUBMED:20699735 Cranioplasty after postinjury decompressive craniectomy: is timing of the essence? Background: The appropriate timing of cranioplasty after decompressive craniectomy for trauma is unknown. Potential benefits of delayed intervention (&gt;6 weeks) for reducing the risk of infection must be balanced by persistent altered cerebrospinal fluid dynamics leading to hydrocephalus. We reviewed our recent 5-year experience in an effort to improve patient throughput and develop a rational decision making plan. Methods: A 5-year query (2003-2007) of our level I neurotrauma database. From 2,400 head injuries, we performed a total of 350 craniotomies. Of the 350 patients who underwent craniotomy for trauma, 70 patients (20%) underwent decompressive craniectomy requiring cranioplasty. Timing of cranioplasty, cranioplasty material, postoperative infections, and incidence of hydrocephalus were evaluated with logistic regression to study potential associations between complications and timing, adjusted for risk factors. Results: No specific time frame was predictive of hydrocephalus or infection, and logistic regression failed to identify significant predictors among the collected variables. Conclusion: In our experience, the prior practice of delayed cranioplasty (3-6 months postdecompressive craniectomy), requiring repeat hospital admission, does not seem to lower postcranioplasty infection rates nor the need for cerebrospinal fluid diversion procedures. Our current practice emphasizes cranioplasty during the initial hospital admission, as soon as there is resolution on computed tomography scan of brain swelling outside of the cranial vault with concurrent clinical examination. This occurs as early as 2 weeks postcraniectomy and should lower the overall cost of care by eliminating the need for additional hospital admissions. abstract_id: PUBMED:24778913 Timing of cranioplasty after decompressive craniectomy for trauma. Background: The optimal timing of cranioplasty after decompressive craniectomy for trauma is unknown. The aim of this study was to determine if early cranioplasty after decompressive craniectomy for trauma reduces complications. Methods: Consecutive cases of patients who underwent autologous cranioplasty after decompressive craniectomy for trauma at a single Level I Trauma Center were studied in a retrospective 10 year data review. Associations of categorical variables were compared using Chi-square test or Fisher's exact test. Results: A total of 157 patients were divided into early (&lt;12 weeks; 78 patients) and late (≥12 weeks; 79 patients) cranioplasty cohorts. Baseline characteristics were similar between the two cohorts. Cranioplasty operative time was significantly shorter in the early (102 minutes) than the late (125 minutes) cranioplasty cohort (P = 0.0482). Overall complication rate in both cohorts was 35%. Infection rates were lower in the early (7.7%) than the late (14%) cranioplasty cohort as was bone graft resorption (15% early, 19% late), hydrocephalus rate (7.7% early, 1.3% late), and postoperative hematoma incidence (3.9% early, 1.3% late). However, these differences were not statistically significant. Patients &lt;18 years of age were at higher risk of bone graft resorption than patients ≥18 years of age (OR 3.32, 95% CI 1.25-8.81; P = 0.0162). Conclusions: After decompressive craniectomy for trauma, early (&lt;12 weeks) cranioplasty does not alter the incidence of complication rates. In patients &lt;18 years of age, early (&lt;12 weeks) cranioplasty increases the risk of bone resorption. Delaying cranioplasty (≥12 weeks) results in longer operative times and may increase costs. abstract_id: PUBMED:30610974 Decompressive Cranioplasty (Osteoplastic Hinged Craniectomy): A Novel Technique for Increased Intracranial Pressure-Initial Experience and Outcome. Background: We redesigned decompressive craniectomy and cranioplasty procedures to decrease the inherent risk of complications. This novel technique, called decompressive cranioplasty, not only may decrease the complication rate but also may improve the cosmetic result, obviate the need for artificial skull implant, and increase the decompressive volume compared with traditional craniectomy. Methods: In decompressive cranioplasty, the Agnes Fast craniotomy was adopted without cutting the temporalis muscle from the underlying bone flap. After opening the dura with or without removal of intracranial hematomas, duraplasty was performed with an intracranial pressure monitor inserted. Four miniplates were bent into a "Z" shape, and the vascularized bone flap was elevated approximately 1.2-1.5 cm above the outer cortex of the skull and fixed with the miniplates. Subsequent cranioplasty was done with a mini-incision on the miniplate sites and reshaping of the miniplate to align the outer cortex of the bone flap. Results: We successfully performed decompressive cranioplasty in 3 emergent cases-2 traumatic subdural hematomas and 1 malignant middle cerebral artery infarction. Postoperative brain computed tomography demonstrated adequate decompression in all cases. Cosmetic outcome was excellent, and there was no temporal hallowing. Mastication function was not affected. At 6-month follow-up, there was no bone flap shrinkage and no hydrocephalus. Conclusions: Decompressive cranioplasty is a safe and effective method in the management of patients with brain edema and intracranial hypertension. It is simple to perform and may reduce the morbidity associated with traditional decompressive craniectomy and subsequent cranioplasty. abstract_id: PUBMED:23133731 Long-term incidence and predicting factors of cranioplasty infection after decompressive craniectomy. Objective: The predictors of cranioplasty infection after decompressive craniectomy have not yet been fully characterized. The objective of the current study was to compare the long-term incidences of surgical site infection according to the graft material and cranioplasty timing after craniectomy, and to determine the associated factors of cranioplasty infection. Methods: A retrospective cohort study was conducted to assess graft infection in patients who underwent cranioplasty after decompressive craniectomy between 2001 and 2011 at a single-center. From a total of 197 eligible patients, 131 patients undergoing 134 cranioplasties were assessed for event-free survival according to graft material and cranioplasty timing after craniectomy. Kaplan-Meier survival analysis and Cox regression methods were employed, with cranioplasty infection identified as the primary outcome. Secondary outcomes were also evaluated, including autogenous bone resorption, epidural hematoma, subdural hematoma and brain contusion. Results: The median follow-up duration was 454 days (range 10 to 3900 days), during which 14 (10.7%) patients suffered cranioplasty infection. There was no significant difference between the two groups for event-free survival rate for cranioplasty infection with either a cryopreserved or artificial bone graft (p=0.074). Intergroup differences according to cranioplasty time after craniectomy were also not observed (p=0.083). Poor neurologic outcome at cranioplasty significantly affected the development of cranioplasty infection (hazard ratio 5.203, 95% CI 1.075 to 25.193, p=0.04). Conclusion: Neurologic status may influence cranioplasty infection after decompressive craniectomy. A further prospective study about predictors of cranioplasty infection including graft material and cranioplasty timing is necessary. abstract_id: PUBMED:27169073 Analysis of the Risk Factors Affecting the Surgical Site Infection after Cranioplasty Following Decompressive Craniectomy. Objective: The risk factors for surgical site infections (SSIs) after cranioplasty following decompressive craniectomy remain unclear. The goal of this study was to analyze the risk factors related to developing SSIs after cranioplasty and to suggest valuable predictors. Methods: A retrospective review was conducted of patients who underwent cranioplasty following decompressive craniectomy at our institution from January 2011 to December 2014, a total of 78 patients who underwent 78 cranioplasties. Univariate and multivariate logistic regression analyses were carried out to determine possible risk factors related to developing SSIs. We analyzed both patient-specific and surgery-specific factors. Results: The overall rate of SSIs was 9.0% (7/78). SSIs after cranioplasty were significantly related to being female, having the primary etiology of traumatic brain injury (TBI) and having had a bilateral cranioplasty in the univariate analysis. Multivariate logistic regression analysis showed that being female [odds ratio (OR) 5.98, p=0.000] and having had a bilateral cranioplasty (OR 4.00, p=0.001) significantly increased the risk of SSIs. Conclusion: Based on our data, cranioplasty following decompressive craniectomy is associated with a high incidence of SSI. Being female, having a primary etiology of TBI and having had a bilateral cranioplasty may be risk factors for surgical site infections after cranioplasty. abstract_id: PUBMED:31439481 Cranioplasty after decompressive craniectomy in hemorrhagic moyamoya disease. The objective of this study was to investigate cranioplasty after decompressive craniectomy in hemorrhagic moyamoya disease (MMD). We retrospectively reviewed seven patients with hemorrhagic MMD after decompressive craniectomy from January 1, 2010 to December 31, 2018. Clinical features, radiological findings, and outcomes were analyzed. The study included four females and three males, with a mean age at diagnosis of 38.7 years (range 19-56 years). The mean days between craniectomy and DSA was 255 days (with a range of 92-382 days). Three in seven patients had good spontaneous revascularization, while four patients had poor spontaneous revascularization. For patients with good spontaneous revascularization, the computer-assisted design titanium implants for cranioplasty was cut off on the lower edge of the bone window, and all three patients had no complications after cranioplasty and had relatively stable prognosis. For patients with poor spontaneous revascularization, only two patients received cranioplasty. And one in those had unfavorable outcome because of recurrent hemorrhage. In conclusion, spontaneous revascularization after decompressive craniectomy in MMD patients should be evaluated before cranioplasty, and postoperative treatment should be discussed individually. abstract_id: PUBMED:22154044 Acute autologous bone flap infection after cranioplasty for postinjury decompressive craniectomy. Background: Acute bone flap infection is a devastating complication after cranioplasty for postinjury decompressive craniectomy. We aim to identify the risk factors of autologous bone flap infection. Methods: We enrolled 151 patients undergoing 153 cranioplasties in the 4-year retrospective study. Autologous bones stored at -75°C were used in the cranioplasties. Acute bone flap infection was defined as the onset of infection ≤14 days after cranioplasty. The epidemiological data of patients and details of the cranioplasty procedure were recorded. Results: Acute bone flap infection was identified in five of the 153 cranioplasties, accounting for 3.3% of all episodes. Three of the 5 infected patients and five of 143 uninfected patients presented with dysfunction of subgaleal drainage comparatively, which was significantly different (p=0.001). Statistical analysis of the cranioplasty procedures and subsequent results of the two patient groups revealed the following significant findings: the duration of operation (p=0.03) and the length of hospital stay after cranioplasty (p&lt;0.001). Conclusions: Dysfunction of subgaleal drainage and long operative duration of cranioplasty are risk factors of acute autologous bone flap infection. Regarding the prolonged hospital stay in complicated patients, better surgical techniques should be implemented in order to eliminate the risks of infection. abstract_id: PUBMED:25284308 Autologous cranioplasty following decompressive craniectomy in the trauma setting. Background: Decompressive craniectomy (DC) is an option for the treatment of increased intracranial pressure resulting from an acute neurological insult, including insults caused by trauma. When the brain swelling has receded, the skull is reconstructed with a wide choice of materials, each with its own advantages and disadvantages in terms of cost, cosmetic appearance, biocompatibility, implant strength and complication rate. Autologous cranioplasty (AC), where the patient's own bone flap is stored and reutilised, is common in many countries. No outcome studies have, however, been published on this technique for traumatic injuries. Methods: A retrospective study was conducted including all AC operations performed following DC due to traumatic brain injury. All operations were performed in one institution during a 4-year time period. Results were analysed for complication rates. Results: 44 cases were included. The mean time from craniotomy to cranioplasty was 86 (95% CI: 63-109) days. Complications severe enough to warrant readmission or further surgery were found in 13 cases (30%). No statistically significant predictor of complication from cranioplasty was detected. The complication rate was similar to published data on cranioplasty using artificial prosthetic materials. Conclusions: AC in the trauma setting is a valid treatment option with a complication rate that seems no worse than other alternatives. abstract_id: PUBMED:35937601 Subgaleal Effusion and Brain Midline Shift After Cranioplasty: A Retrospective Study Between Polyetheretherketone Cranioplasty and Titanium Cranioplasty After Decompressive Craniectomy. Cranioplasty with polyetheretherketone (PEEK) has recently shown better cerebral protection performance, improved brain function, and aesthetic contour compared with titanium mesh. However, whether patients undergoing PEEK cranioplasty tend to develop subgaleal effusions remains elusive. This retrospective study included patients who underwent cranioplasty with PEEK implants or titanium mesh after decompressive craniectomy between July 2017 and July 2020. Patient information, including general information, location, size of the defect, subgaleal depth, and brain midline shift was collected and statistically analyzed. There were 130 cases of cranioplasty, including 35 with PEEK implants and 95 with a titanium mesh. Patients who underwent cranioplasty with a PEEK implant had a higher subgaleal effusion rate than those who underwent cranioplasty with titanium mesh (85.71% vs. 53.68%, P &lt; 0.001), while a midline shift &gt;5 mm was more frequently observed in the PEEK group than in the titanium group (20% vs. 6.3%, P = 0.021). The PEEK material was the only factor associated with subgaleal effusion after cranioplasty (OR 5.589, P = 0.002). Logistic regression analysis further showed that age was a protective factor against midline shift in the PEEK cranioplasty group (OR 0.837, P = 0.029). Patients who underwent cranioplasty with PEEK implants were more likely to develop severe subgaleal effusion and significant brain midline shifts than those with titanium mesh implants. abstract_id: PUBMED:30412161 Technical features and complications of cranioplasty in patients after decompressive craniectomy in the acute period of subarachnoid hemorrhage Decompressive craniectomy is used for arresting hypertension-dislocation syndrome developing in seriuos patients in the acute period of subarachnoid hemorrhage. After stabilization of the patient's neurological and somatic status, cranioplasty is performed for protective, cosmetic, and therapeutic purposes. The most common surgery in these patients is skull repair using an autologous bone graft. Before cranioplasty, the patient's bone is preserved in two ways: cryopreservation or subcutaneous implantation to the anterior abdominal wall area. Recently, there have been numerous reports of early and delayed complications of cranioplasty with autologous bone grafts. The use of artificial grafts may reduce the risk of postoperative complications compared to an autologous bone graft. Previously, 'freely' or 'manually' simulated biopolymers were used. At present, they are rarely used for repair of extensive defects due to a poor cosmetic result. However, the advent of stereolithographic modeling and computer modeling of artificial grafts has improved the cosmetic result of this surgery. The purpose of this study is to assess the risk of postoperative complications of cranioplasty as well as to define the criteria for choosing a cranioplasty technique. Answer: The timing of cranioplasty after decompressive craniectomy for trauma is a subject of debate, and the optimal timing for this procedure remains uncertain. Several studies have attempted to address this question, with varying conclusions. One study found that the prior practice of delayed cranioplasty (3-6 months post-decompressive craniectomy) did not seem to lower post-cranioplasty infection rates nor the need for cerebrospinal fluid diversion procedures. Their current practice emphasizes cranioplasty during the initial hospital admission, as soon as there is resolution on computed tomography scan of brain swelling outside of the cranial vault with concurrent clinical examination, which could occur as early as 2 weeks post-craniectomy (PUBMED:20699735). Another study concluded that early cranioplasty (<12 weeks) after decompressive craniectomy for trauma does not alter the incidence of complication rates. However, in patients under 18 years of age, early cranioplasty increases the risk of bone resorption. Delaying cranioplasty (≥12 weeks) results in longer operative times and may increase costs (PUBMED:24778913). A novel technique called decompressive cranioplasty was introduced to decrease the risk of complications and improve cosmetic results. This technique may reduce the morbidity associated with traditional decompressive craniectomy and subsequent cranioplasty (PUBMED:30610974). A retrospective cohort study indicated that neurologic status might influence cranioplasty infection after decompressive craniectomy, suggesting that further prospective studies are needed to explore predictors of cranioplasty infection, including graft material and timing (PUBMED:23133731). An analysis of risk factors for surgical site infections after cranioplasty following decompressive craniectomy identified being female, having the primary etiology of traumatic brain injury (TBI), and having had a bilateral cranioplasty as potential risk factors (PUBMED:27169073). In the context of hemorrhagic moyamoya disease, spontaneous revascularization after decompressive craniectomy should be evaluated before cranioplasty, and postoperative treatment should be discussed individually (PUBMED:31439481). One study identified dysfunction of subgaleal drainage and long operative duration of cranioplasty as risk factors for acute autologous bone flap infection (PUBMED:22154044).
Instruction: Are residents of downtown Toronto influenced by their urban neighbourhoods? Abstracts: abstract_id: PUBMED:22862839 Are residents of downtown Toronto influenced by their urban neighbourhoods? Using concept mapping to examine neighbourhood characteristics and their perceived impact on self-rated mental well-being. Background: There is ample evidence that residential neighbourhoods can influence mental well-being (MWB), with most studies relying on census or similar data to characterize communities. Few studies have actively investigated local residents' perceptions. Methods: Concept mapping was conducted with residents from five Toronto neighbourhoods representing low income and non-low income socio-economic groups. These residents participated in small groups and attended two sessions per neighbourhood. The first session (brainstorming) generated neighbourhood characteristics that residents felt influenced their MWB. A few weeks later, participants returned to sort these neighbourhood characteristics and rate their relative importance in affecting residents' 'good' and 'poor' MWB. The data from the sorting and rating groups were analyzed to generate conceptual maps of neighbourhood characteristics that influence MWB. Results: While agreement existed on factors influencing poor MWB (regardless of neighbourhood, income, gender and age), perceptions related to factors affecting good MWB were more varied. For example, women were more likely to rank physical beauty of their neighbourhood and range of services available as more important to good MWB, while men were more likely to cite free access to computers/internet and neighbourhood reputation as important. Low-income residents emphasized aesthetic attributes and public transportation as important to good MWB, while non-low-income residents rated crime, negative neighbourhood environment and social concerns as more important contributors to good MWB. Conclusion: These findings contribute to the emerging literature on neighbourhoods and MWB, and inform urban planning in a Canadian context. abstract_id: PUBMED:16324186 Mapping the social demography and location of HIV services across Toronto neighbourhoods. In this paper we map the location and distribution of HIV service providers across Toronto neighbourhoods. Our analysis identified an uneven distribution of services across Toronto and a number of communities that are less accessible to HIV-related services. We subsequently identified three neighbourhood-level characteristics of the populations living within these communities (i.e. concentrated economic disadvantage, concentrated immigration, and residential instability). Our findings suggest a significant overlap in the location of HIV service providers and the clustering of neighbourhood-level demographic and socioeconomic factors. Some inaccessible neighbourhoods overlap with clusters of neighbourhoods with higher levels of concentrated disadvantage, immigration and percentage of black Canadians. Accessible neighbourhoods are located within the downtown core of Toronto and overlap with clusters of highly dense, younger neighbourhoods (with a high proportion of 15- to 34-year-olds who are unmarried). Our findings point to the need for policy-makers to integrate spatial analytic techniques into their examination of the types of neighbourhoods, and subsequently the community members that live within those neighbourhoods, that are potentially underserved with respect to health and social services. abstract_id: PUBMED:32677410 Overweight and Obesity among New Urban Migrants in Chengdu and Kunming Objective: To determine the prevalence of overweight and obesity in new urban migrants. Methods: A questionnaire survey was conducted on 21 651 urban residents in the downtown and surrounding urban areas of Kunming and Chengdu selected through a multi-stage stratified cluster sampling method. Factors associated with overweight and obesity were identified using Chi-square tests and multivariate logistic regression models. Results: 11 626 people were enrolled. The new urban migrants (n=7 555) had 31.2% overweight and 6.7% obesity, compared with 24.9% and 4.3%, respectively, in the urban residents(n=4 071). Gender, age, educational, marital status and alcohol consumption were associated with overweight and obesity in the new urban migrants. Gender, age, education and alcohol consumption were predictors of overweight and obesity of the urban residents. Conclusions: Urban new immigrants are more likely to be overweight and obese than urban residents, although predictors for overweight and obesity remain similar. abstract_id: PUBMED:21174893 The impact of gentrification on ethnic neighbourhoods in Toronto: a case study of little Portugal. Despite extensive literature on the nature and impact of gentrification, there has been little consideration of the effects of gentrification on ethnic neighbourhoods. This study evaluates the negative and positive effects of gentrification on the Portuguese in west central Toronto. Details concerning the settlement patterns of the Portuguese, the characteristics of Portuguese residents and patterns of gentrification in inner-city Toronto were obtained from census data. Evaluations of neighbourhood change and attitudes of the residents towards gentrification were obtained from key informant and focus group interviews. The results suggest considerable ambivalence among the respondents, but most agreed that the long-term viability of Little Portugal as an immigrant reception area with a good supply of low-cost housing is in doubt. abstract_id: PUBMED:29323351 Ethnic differences in realising desires to leave urban neighbourhoods. Selective mobility into and out of urban neighbourhoods is one of the main driving forces of segregation. Earlier research has found group differences in who wants to leave or who leaves certain types of neighbourhoods. A factor that has received little attention so far is that some residents will have a desire to leave their neighbourhood, but are unable to do so. If there are differences between population groups in the realisation of desires to leave the neighbourhood, this might lead to involuntary segregation. This paper uses a unique combination of register data and survey data. We combine data from a large housing survey in the Netherlands (WoON) with longitudinal register data from the Netherlands (SSD) which contains individual-level information on residential mobility histories. This allows us to study whether households with a desire to leave their neighbourhood do realise this desire and which households are successful in leaving which neighbourhoods. A more thorough insight in who wants to leave which neighbourhoods but is unable to do so will contribute to a better understanding of selective mobility and segregation. We find that ethnic minorities and low-income households are less likely to realise a desire to leave their neighbourhood. We expected that ethnic minorities would be especially unsuccessful in realising desires to leave minority concentration neighbourhoods; however, for none of the ethnic groups we found an effect of neighbourhood ethnic composition on the realisation of desires to leave. abstract_id: PUBMED:36368380 Spatial effects of urban green infrastructure on instream water quality assessed by chemical and sensory indicators. Urban green infrastructure has been simulated effectively and economically to reduce volume and pollutants of stormwater runoffs but its spatial effects remain unclear. A snap sampling campaign was carried out for surface water quality in the downtown waterway network of a pilot sponge city (Suzhou) in China, dividing into 7 subwatersheds according to the digital elevation map. In total, 144 sampling points were investigated and measured for chemical quality of surface water while 68 out of the sampling points had a sensory evaluation questionnaire interview for water quality with 321 respondents, in whom the native residents scored a significant spatiality of water quality. The downtown waterway network had phosphorus-limited eutrophic surface water with total nitrogen worse than Class V of the national guidelines. Chemical and sensory evaluation indexes of surface water quality had significant spatial consistency (p &lt; 0.001). All types of green spaces (%) in subwatershed, especially along the urban waterway network (waterfront) and roadside, and in the 100 m riparian buffer zone, significantly influenced nutrient loads in surface water. Findings of the present study suggest that the 100 m riparian buffer zone would be priority areas and the waterfront and roadside should be the highly efficient spots for planning strategy on urban green infrastructure implementation to reduce nutrient loads in surface water and to improve urban landscape aesthetics. abstract_id: PUBMED:33223788 Relative Income and Subjective Well-being of Urban Residents in China. The purposes of this study are to examine subjective well-being and to test its correlation with social reference and self-expectations of urban residents in China. Data are obtained from the 2015 Chinese General Social Survey, and we focus on urban residents (N=4,857). Because the dependent variable is a sequence variable, the ordered logit model is used for data analysis. We find that absolute income remains an important factor in determining people's subjective well-being. We also find that when relative income factors are included, the correlation between absolute income and subjective well-being is reduced. This suggests that the subjective well-being of urban residents is influenced not only by absolute income itself, but also - more importantly - by relative income based on social comparison and self-expectations. These findings have implications for the formulation of social policies to improve citizens' happiness. abstract_id: PUBMED:35756210 Satisfaction Levels of Young Urban Residents With Public Sports Services in China During the COVID-19 Epidemic. The extensive changes in public and private life caused by the ongoing COVID-19 epidemic in China have created a "new normal." To understand urban residents' satisfaction with public sports services under these conditions and to identify factors that influence satisfaction, urban residents in several major Chinese cities were randomly sampled with an online questionnaire to measure their level of satisfaction with public sports services in China during the COVID-19 epidemic. With the response rate of 87.9%, 703 valid questionnaires out of 800 questionnaires distributed were analyzed. A structural equation model was constructed where health literacy and the residents' expectations were independent variables, residents' participation was a mediating variable, and residents' satisfaction was the dependent variable. Cronbach's α test and Kaiser-Meyer-Olkin test confirmed that the questionnaire was reliable and valid. Survey results suggest that young urban residents' participation in public sports services is influenced by personal health literacy and residents' expectations. Participation serves a mediating role between health literacy and residents' satisfaction, but not between residents' expectations and their satisfaction. Young urban residents' satisfaction with public sports services may be improved by increasing access to health information, improving hardware and software platforms to support those services, and catering the services to the interests of the population. abstract_id: PUBMED:27997577 Spatial Accessibility to Health Care Services: Identifying under-Serviced Neighbourhoods in Canadian Urban Areas. Background: Urban environments can influence many aspects of health and well-being and access to health care is one of them. Access to primary health care (PHC) in urban settings is a pressing research and policy issue in Canada. Most research on access to healthcare is focused on national and provincial levels in Canada; there is a need to advance current understanding to local scales such as neighbourhoods. Methods: This study examines spatial accessibility to family physicians using the Three-Step Floating Catchment Area (3SFCA) method to identify neighbourhoods with poor geographical access to PHC services and their spatial patterning across 14 Canadian urban settings. An index of spatial access to PHC services, representing an accessibility score (physicians-per-1000 population), was calculated for neighborhoods using a 3km road network distance. Information about primary health care providers (this definition does not include mobile services such as health buses or nurse practitioners or less distributed services such as emergency rooms) used in this research was gathered from publicly available and routinely updated sources (i.e. provincial colleges of physicians and surgeons). An integrated geocoding approach was used to establish PHC locations. Results: The results found that the three methods, Simple Ratio, Neighbourhood Simple Ratio, and 3SFCA that produce City level access scores are positively correlated with each other. Comparative analyses were performed both within and across urban settings to examine disparities in distributions of PHC services. It is found that neighbourhoods with poor accessibility scores in the main urban settings across Canada have further disadvantages in relation to population high health care needs. Conclusions: The results of this study show substantial variations in geographical accessibility to PHC services both within and among urban areas. This research enhances our understanding of spatial accessibility to health care services at the neighbourhood level. In particular, the results show that the low access neighbourhoods tend to be clustered in the neighbourhoods at the urban periphery and immediately surrounding the downtown area. abstract_id: PUBMED:18477748 Higher mortality in urban neighbourhoods in The Netherlands: who is at risk? Background: Urban residents have higher mortality risks than rural residents. These urban-rural differences might be more pronounced within certain demographic subpopulations. Aim: To determine urban-rural differences in all-cause and cause-specific mortality within specific demographic subpopulations of the Dutch population. Method: Mortality records with information on gender, age, marital status, region of origin and place of residence were available for 1995 through 2000. Neighbourhood data on address density and socioeconomic level were linked through postcode information. Variations in all-cause and cause-specific mortality between urban and rural neighbourhoods were estimated through Poisson regression. Additionally, analyses were stratified according to demographic subpopulation. Result: After adjustments for population composition, urban neighbourhoods have higher all-cause mortality risks than rural neighbourhoods (RR = 1.05; CI 1.04 to 1.05), but this pattern reverses after adjustment for neighbourhood socioeconomic level (RR = 0.98; CI 0.97 to 0.99). The beneficial effect of living in an urban environment applies particularly to individuals aged 10-40 years and 80 years and above, people who never married and residents from non-Western ethnic origins. The beneficial effect of urban residence for non-married people is related to their lower cancer and heart disease mortality. The beneficial effect of urban residence for people of non-Western ethnic origin is related to their lower cancer and suicide mortality. Conclusion: In The Netherlands, living in an urban environment is not consistently related to higher mortality risks. Young adults, elderly, single and non-Western residents, especially, benefit from living in an urban environment. The urban environment seems to offer these subgroups better opportunities for a healthy life. Answer: Yes, residents of downtown Toronto are influenced by their urban neighbourhoods. A concept mapping study conducted with residents from five Toronto neighbourhoods revealed that neighbourhood characteristics significantly impact residents' self-rated mental well-being (MWB). The study found that while there was agreement on factors influencing poor MWB across different demographics, perceptions related to factors affecting good MWB were more varied. Women, for instance, ranked the physical beauty of their neighbourhood and the range of services available as more important to good MWB, while men cited free access to computers/internet and neighbourhood reputation as important. Low-income residents emphasized aesthetic attributes and public transportation, whereas non-low-income residents rated crime, negative neighbourhood environment, and social concerns as more important contributors to good MWB (PUBMED:22862839). Furthermore, the distribution of HIV service providers across Toronto neighbourhoods is uneven, with some communities being less accessible to HIV-related services. This distribution correlates with neighbourhood-level characteristics such as economic disadvantage, immigration, and residential instability, suggesting that the location of services and neighbourhood demographics influence each other (PUBMED:16324186). Additionally, the impact of gentrification on ethnic neighbourhoods, such as Little Portugal in Toronto, has shown that gentrification can have both negative and positive effects on the communities. However, there is concern about the long-term viability of such neighbourhoods as immigrant reception areas with affordable housing (PUBMED:21174893). These findings indicate that urban neighbourhoods in downtown Toronto have a significant influence on the residents' well-being, access to services, and the overall character and sustainability of the communities.
Instruction: Olmesartan-associated enteropathy: new insights on the natural history? Abstracts: abstract_id: PUBMED:26291698 Olmesartan-associated enteropathy: new insights on the natural history? Report of two cases. Introduction: The association between olmesartan and an enteropathy histologically indistinguishable from untreated celiac disease has recently been described. However, pathogenetic mechanisms leading to villous atrophy, prevalence, natural history and genetic background of this condition have not yet been defined. Patients: We describe here two cases of olmesartan-associated enteropathy and discuss some aspects of the natural history of this condition. Results: In both patients, an infectious episode seems to have triggered the severe malabsorption syndrome which led them to hospitalization. High titer positive antinuclear antibodies with homogeneous pattern were found. Conclusions: Our reports add to a growing body of evidence suggesting that olmesartan-associated enteropathy should be considered in the presence of villous atrophy and negative celiac serology and in the diagnostic algorithm of non-responsive celiac disease. abstract_id: PUBMED:38450484 Association of microscopic colitis and enteropathy due to olmesartan. We report the case of a patient with severe chronic diarrhea. He was admitted on multiple occasions for this reason, with the cause remaining undetected. After obtaining a detailed medical history and performing several studies, the patient was diagnosed with microscopic colitis and enteropathy due to Olmesartan. In the literature, both diseases appear concurrently only in a few cases. Here we highlight the importance of conducting a comprehensive medical history and maintaining high clinical suspicion to avoid delays in the diagnosis of these uncommon pathologies, as well as unnecessary tests and empirical treatments. abstract_id: PUBMED:25350242 A new spruelike disease as a cause of severe diarrhea Unlabelled: History and presentation at admission: A 77-years old woman presented with an 8-week history of pronounced diarrhea and weight loss of 10 kg. Investigations: Infectious enteritis and chronic inflammatory bowel disease were excluded. Duodenal biopsies showed villous atrophy and 50 intraepithelial lymphocytes per 100 enterocytes, but antibody testing for celiac disease was negative. Treatment And Course: The diarrhea resolved within 2 weeks after withdrawing olmesartan, which she had received for hypertension for 5 years, without a gluten-free diet. 6 months later the histological abnormalities in the duodenum regressed markedly. A diagnosis of sprue-like enteropathy associated with olmesartan was made. Conclusion: The syndrome is probably rare and was first described in 2012. It should be suspected in patients on olmesartan treatment who have clinical and histological findings, but not serological markers of celiac disease. abstract_id: PUBMED:34556042 Severe spruelike enteropathy and collagenous colitis caused by olmesartan. Background: Olmesartan, which is an angiotensin II receptor blocker, reportedly causes spruelike enteropathy, with intestinal villous atrophy as its typical histopathological finding. Interestingly, collagenous and/or lymphocytic gastritis and colitis occur in some patients. We report the case of a 73-year-old Japanese man with a 2-month clinical history of severe diarrhea and weight loss. There were few reports in which spruelike enteropathy and collagenous colitis were both observed and could be followed up. Case Presentation: We report a case of a 73-year-old man with a 2-month clinical history of severe diarrhea and weight loss. He had taken olmesartan for hypertension treatment for 5 years. Endoscopic examination with biopsies revealed intestinal villous atrophy and collagenous colitis. Suspecting enteropathy caused by olmesartan, which was discontinued on admission because of hypotension, we continued to stop the drug. Within 3 weeks after olmesartan discontinuation, his clinical symptoms improved. After 3 months, follow-up endoscopy showed improvement of villous atrophy but not of the thickened collagen band of the colon. However, the mucosa normalized after 6 months, histologically confirming that the preexistent pathology was finally resolved. Conclusions: This report presents a case in which spruelike enteropathy and collagenous colitis were both observed and could be followed up. In unexplained cases of diarrhea, medication history should be reconfirmed and this disease should be considered a differential diagnosis. abstract_id: PUBMED:32850220 Olmesartan-Induced Spruelike Enteropathy: An Emerging Cause of Small Bowel Injury. Olmesartan-induced spruelike enteropathy is a rare clinical entity that is characterized by unexplained chronic diarrhea and weight loss. Prompt recognition of this adverse event may be challenging due to clinical and histologic similarities with other small intestinal pathologies. We hereby delineate the case of an elderly female with a 14-month clinical history of non-bloody diarrhea and weight loss. After extensive diagnostic workup and exclusion of probable etiologies, the patient was diagnosed with olmesartan-associated enteropathy. A dramatic clinical and histologic recuperation was achieved after discontinuation of olmesartan. This paper illustrates the overarching need for a detailed clinical history focusing on medication review in patients presenting with chronic diarrhea with no obvious cause. The spruelike enteropathy associated with olmesartan is an emerging cause of small bowel injury. Clinicians should maintain a high index of suspicion for this adverse drug reaction. Early and correct diagnosis carries paramount importance in sparing these patients from unnecessary diagnostic investigations and therapeutic delays. abstract_id: PUBMED:30870870 Persistent Diarrhea of an Elderly Lady - How to Combine Blood Pressure and Diarrhea History And Clinical Findings: A 72-year-old female presented with a therapy-resistant diarrhea. Examinations: In the case of negative stool cultures and inconspicuous radiological imaging, further endoscopic diagnostics were performed. Histological implicated the image of a celiac disease in the duodenum and lymphocytic colitis reaching into the terminal ileum. In the case of negative antibody detection for celiac disease, a medication side effect was considered by differential diagnosis. Treatment: When olmesartan was discontinued, she developed a rapid improvement of the symptoms. Conclusion And Diagnosis: For the angiotensin receptor antagonist olmesartan, the occurrence of a sprue-like enteropathy has rarely described. Microscopic colitis is an exception. abstract_id: PUBMED:31970030 Olmesartan-induced Enteropathy: A Rare Side Effect of a Common Medication. Ischemic heart disease and stroke are the leading causes of mortality worldwide according to the World Health Organization. Hypertension is a major factor in the development of these diseases. Olmesartan is an angiotensin II receptor blocker (ARB) indicated in the treatment of hypertension. There are several case reports describing sprue-like enteropathy caused by olmesartan. We report on a 72-year-old patient referred to our hospital for work-up of chronic diarrhoea, vomiting and weight-loss, and villous atrophy on intestinal biopsy. The patient's symptoms abated upon cessation of olmesartan. This case illustrates the need for a thorough medication history and regular review during work-up. We hope it will add to the current understanding of this rare phenomenon. abstract_id: PUBMED:34540405 Olmesartan-Induced Enteropathy: A Report of an Unusual Cause of Chronic Diarrhea. Olmesartan, an angiotensin II receptor blocker indicated in the treatment of hypertension, has been associ-ated with a seronegative sprue-like enteropathy that should be considered in the differential diagnosis of patients with unexplained chronic diarrhoea. It typically presents with severe chronic diarrhoea, considerable weight loss, and villous atrophy on biopsy and may be difficult to recognize because of its clinical and histological similarities to other clinical entities. Practically, discontinuation of the drug leads to dramatic recovery of the symptoms. We report a 76-year-old Caucasian female who was admitted to our hospital with complaints of chronic diarrhea and significant weight loss. Medical history was notable for hypertension being treated with olmesartan. Initially, investigation for all potential infectious causes and celiac disease was negative. Both upper and lower endoscopy was performed with duodenal biopsies revealing total villous atrophy and colonic biopsies showing lymphocytic colitis. In the presence of negative serology for celiac disease and after a thorough review of the patient's medications, olmesartan in-duced-enteropathy was the most possible diagnosis. Olmesartan was discontinued and the symptoms rapidly resolved. A follow-up done a few months later showed no recurrence of the symptoms. In olmesartan-associated enteropathy, discontinuation of olmesartan results in immediate clinical recovery. Although rare, it is considered an emerging and underdiagnosed enteropathy. This case report illustrates the need for a thorough medication history evaluation and regular review during workup. We aim to increase the awareness of olmesartan-induced enteropathy among clinicians and gastroenterologists. We hope it will add to the current literature and help to understand this rare phenomenon in order to avoid unnecessary testing. abstract_id: PUBMED:30279248 Olmesartan-induced enteropathy. Olmesartan-induced enteropathy (OIE) typically presents with a constellation of signs and symptoms including chronic diarrhoea, weight loss and villous atrophy on biopsy. We describe a 68-year-old Caucasian woman with a history of hypothyroidism and hypertension who presented to our hospital with recurrent episodes of acute intermittent diarrhoea, nausea, vomiting, renal failure and 15 lbs weight loss. After an extensive workup, she was diagnosed with possible OIE. Cessation of the offending drug resulted in improvement of clinical symptoms and also hospital admissions for severe diarrhoea reinforcing the diagnosis of OIE. Among the adverse effects of drug therapy, diarrhoea is a relatively frequent adverse event accounting for about 7%. This report serves as an addition to existing literature and to increase the awareness of olmesartan-induced sprue-like enteropathy among the primary care physicians and gastroenterologists. abstract_id: PUBMED:31549594 Olmesartan Associated Enteropathy: Usefulness of Video Capsule Endoscopy in a Case With Doubtful Upper Endoscopic/Histological Picture. Background: Olmesartan, an antihypertensive drug, may be associated with a severe "sprue-like enteropathy". Objectives: To report a case of Olmesartan enteropathy demonstrated by video capsule endoscopy distally from the second duodenum along with the whole small bowel before and after drug withdrawal. Case Presentation: A 81-years-old man was referred for asthenia, chronic watery diarrhea and anasarca (ascites, pleural effusion and edemas of superior and inferior limb). The only comorbidity was hypertension treated with Olmesartan. All causes of infective and inflammatory chronic diarrhea were investigated and excluded. Upper endoscopy was normal; histological examination of the second portion of the duodenum showed moderate and patchy infiltration of lymphocytes at mucosal and intra-epithelial level with intermittent partial villous atrophy. The possibility of adverse drug reaction, estimated by Naranjo scale, showed a score of 7, indicating a strong probability. Olmesartan was then withdrawn. However, because of severe clinical general condition, we preferred to corroborate our diagnostic work-up by a non-invasive investigation, i.e. video capsule endoscopy, which showed jejunal and ileal mucosal alterations (mosaic pattern, diffuse hyperemia, severe edema, consequent apparent reduced lumen, diffuse thickening of intestinal folds, multiple erosions, patchy lymphangectasia). After 14 days, the resolution of anasarcatic state and hydroelectrolytic imbalances was observed. Nine months later, small-bowel video-capsule demonstrated mild mucosal hyperaemia and mosaic pattern. Conclusion: Our case could give new insights in the field of Olmesartan associated enteropathy by highlighting the possibility of distally main lesion location and, therefore, the usefulness of video capsule endoscopy in the presence of doubtful diagnostic features. Answer: Olmesartan-associated enteropathy is a condition that has been increasingly recognized in the medical literature. It is characterized by severe malabsorption syndrome, chronic diarrhea, weight loss, and villous atrophy histologically similar to that seen in untreated celiac disease, but without the serological markers for celiac disease (PUBMED:26291698, PUBMED:25350242, PUBMED:32850220, PUBMED:31970030, PUBMED:34540405, PUBMED:30279248). The natural history of this condition is not yet fully understood, but several case reports have provided insights into its clinical presentation and course. In some reported cases, an infectious episode appeared to trigger the severe malabsorption syndrome leading to hospitalization, and high titer positive antinuclear antibodies with a homogeneous pattern were found (PUBMED:26291698). The condition has been observed in patients taking olmesartan for hypertension, and symptoms typically improve dramatically upon discontinuation of the drug (PUBMED:25350242, PUBMED:32850220, PUBMED:31970030, PUBMED:34540405, PUBMED:30279248). In one case, histological abnormalities in the duodenum regressed markedly 6 months after olmesartan withdrawal (PUBMED:25350242), and in another case, both spruelike enteropathy and collagenous colitis were observed, with mucosal normalization confirmed histologically after 6 months (PUBMED:34556042). The association of microscopic colitis with olmesartan-induced enteropathy has been reported, although it appears to be a rare occurrence (PUBMED:38450484, PUBMED:30870870). Video capsule endoscopy has been useful in diagnosing olmesartan enteropathy, especially when lesions are located distally in the small bowel and the upper endoscopic/histological picture is doubtful (PUBMED:31549594). Overall, olmesartan-associated enteropathy should be considered in patients with unexplained chronic diarrhea, weight loss, and villous atrophy on biopsy, especially in the context of negative celiac serology and a history of olmesartan use for hypertension. Early recognition and discontinuation of the drug are crucial for patient recovery and can prevent unnecessary diagnostic procedures and treatments.
Instruction: Stomach cancer mortality in two large cohorts of migrants from the Former Soviet Union to Israel and Germany: are there implications for prevention? Abstracts: abstract_id: PUBMED:19242359 Stomach cancer mortality in two large cohorts of migrants from the Former Soviet Union to Israel and Germany: are there implications for prevention? Objectives: Prevention and early detection are key elements for the reduction of stomach cancer mortality. To apply pertinent measures effectively, high-risk groups need to be identified. With this aim, we assessed stomach cancer mortality among migrants from the Former Soviet Union (FSU), a high-risk area, to Germany and Israel. Methods: We calculated standardized mortality ratios (SMRs) comparing stomach cancer mortality in two retrospective migrant cohorts from the FSU to Germany (n=34,393) and Israel (n=589,388) to that in the FSU and the host country. The study period ranges from 1990 to 2005 in Germany and from 1990 to 2003 in Israel. Vital status and cause of death were retrieved from municipal and state registries. To assess secular mortality trends, we calculated annual age-standardized mortality rates in the cohorts, the FSU, and the two host countries and conducted Poisson regression modeling. Results: SMRs (95% confidence intervals) for men in the German migrant cohort were 0.51 (0.36-0.70) compared with the FSU population and 1.44 (1.04-1.99) compared with the German population, respectively. For women, SMRs were 0.73 (0.49-1.03) compared with the FSU population and 1.40 (0.98-1.99) compared with the German population. SMRs for men in the Israeli migrant cohort were 0.49 (0.45-0.53) compared with the FSU population and 1.79 (1.65-1.94) compared with the Israeli population. SMRs for women in the Israeli cohort were 0.65 (0.59-0.72) compared with the FSU population and 1.82 (1.66-1.99) compared with the Israeli population. Poisson modeling showed a secular decrease in all populations with a time lag of 4-5 years between migrants and 'natives' in Germany and converging rates between migrants and the general population in Israel. Conclusion: Stomach cancer mortality in migrants from the FSU remains elevated after migration to Germany and Israel but is much lower than in the FSU. Due to a secular decline, it can be expected that mortality among migrants from the FSU reaches within a few years levels similar to those of the host countries today. Therefore, migrant-specific prevention and early detection measures cannot be recommended. Detailed risk factor profiles, however, need to be obtained through further studies. abstract_id: PUBMED:18400085 Chronic disease mortality associated with infectious agents: a comparative cohort study of migrants from the Former Soviet Union in Israel and Germany. Background: Prevalence of infectious diseases in migrant populations has been addressed in numerous studies. However, information is sparse on their mortality due to chronic diseases that are aetiologically associated with an infectious agent. This study investigates mortality related to infectious diseases with a specific focus on cancers of possibly infectious origin in voluntary migrants from the Former Soviet Union residing in Israel and in Germany. Methods: Both groups of migrants arrived from the Former Soviet Union in their destination countries between 1990 and 2001. Population-based data on migrants in Israel were obtained from the Israel Central Bureau of Statistics. Data for migrants in Germany were obtained from a representative sample of all migrants from the Former Soviet Union in Germany. Cause of death information was available until 2003 for the Israeli cohort and until 2005 for the German cohort. Standardized mortality ratios were calculated relative to the destination country for selected causes of death for which infectious agents may be causally involved. Multivariate Poisson regression was applied to assess differences in mortality by length of residence in the host country. Results: Both in Israel and in Germany these migrants have lower overall mortality than the population in their destination countries. However, they have significantly elevated mortality from viral hepatitis and from stomach and liver cancer when compared to the destination populations. Regression analysis shows that in Israel stomach cancer mortality is significantly higher among migrants at shorter durations of residence when compared to durations of more than nine years. Conclusion: Higher mortality from cancers associated with infection and from viral hepatitis among migrants from the Former Soviet Union might result from higher prevalence of infections which were acquired in earlier years of life. The results highlight new challenges posed by diseases of infectious origin in migrants and call attention to the link between communicable and non-communicable diseases. abstract_id: PUBMED:30254988 Cancer Incidence and Mortality Among Ethnic German Migrants From the Former Soviet Union. Germany is a country known for immigration. In 2015, 21% of the general population in Germany consisted of individuals with a migration background. This article focuses on cancer-specific incidence and mortality among one of the biggest migrant groups in Germany: the resettlers. Resettlers are ethnic Germans who mainly immigrated from the Russian federation and other countries of the former Soviet Union after its collapse in 1989. We investigated differences between resettlers and the general German population, regarding (i) incidence and mortality of malignant neoplasms, (ii) time trends of the corresponding incidence and mortality, and (iii) cancer stage at diagnosis. We provide data from two resettler cohorts covering an observation time of 20 years: one cohort on cancer incidence (N = 32,972), and another cohort on mortality (N = 59,390). Cancer-specific standardized incidence ratios (SIR) and standardized mortality ratios (SMR) for all malignant neoplasms combined and the most common cancer-sites were calculated between resettlers and the general German population. Time trend analyses using Poisson regression were performed to investigate the developments of SIRs and SMRs. To investigate differences in stage at diagnosis, logistic regression was performed, calculating Odds Ratios for condensed cancer stages. We observed higher incidence and mortality of stomach cancer [SIR (men) 1.62, 95%CI 1.17-2.19; SMR (men) 1.62, 95%CI 1.31-2.01; SIR (women) 1.32, 95%CI 0.86-1.94; SMR (women) 1.52, 95%CI 1.19-1.93] and higher mortality of lung cancer [SMR (men) 1.34, 95%CI 1.20-1.50] among resettlers compared to the general German population, but lower incidence and mortality of colorectal (both sexes), lung (women), prostate and female breast cancer. However, time trend analyses showed converging incidence risks of cause-specific incidence over time, whereas differences of mortality did not show changes over time. Results from logistic regression suggest that resettler men were more often diagnosed with advanced cancer stages compared to the Münster population. Our findings suggest that risk factor patterns of the most common cancer-sites among resettlers are similar to those observed within the Russian population. Such increases in prostate, colorectal and breast cancer incidence may be the consequence of improved detection measures, and/or the adaptation of resettlers to the German lifestyle. abstract_id: PUBMED:19543985 Cancer profile of migrants from the Former Soviet Union in Germany: incidence and mortality. This study compares cancer mortality and incidence of ethnic German migrants from the Former Soviet Union (FSU) in Germany. Data were obtained from two migrant cohorts residing in the federal state of North Rhine-Westphalia (NRW) (n = 34,393) and Saarland (n = 18,619). Vital status of the NRW cohort was ascertained through local population registries. Causes of death were obtained from the NRW statistical office or from local health offices. Cancer incidence of the Saarland cohort was derived from the Saarland cancer registry using record linkage. From 1990 to 2005, we observed 708 cancer deaths and 586 incident cancer cases. In males, both cancer incidence and cancer mortality were similar to the German population. Female cancer incidence and mortality were lower, the latter significantly. Site-specific standardized mortality and incidence ratios showed great variation in comparison to Germans and were remarkably similar to each other for most sites. Lung cancer was elevated among males, but lower among females. Stomach cancer was higher contrasting with lower ratios for prostate cancer, male colorectal cancer, and female breast cancer. Results confirm that FSU-migrants suffer from cancers, which may be prevented by prevention programs. Furthermore, we cannot conclude a different health-seeking behavior compared to Germans. abstract_id: PUBMED:9486461 Cancer incidence patterns (1972-91) among migrants from the Soviet Union to Israel. A proportional cancer incidence analysis is reported, based on 6,563 cases of cancer diagnosed in Israel between 1972 and 1991 among migrants from the former Soviet Union who arrived in Israel between 1972 and 1986. For digestive system cancers, there is evidence of a marked reduction in the risk of stomach cancer among the migrants with time since migration, and an increase in the proportion of colon cancer, although little change in cancer of the rectum. For most of the smoking-related cancers, there is little evidence of any meaningful change in proportions with time since immigration. For multiple myeloma, proportions decreased substantially in both men and women over the 20-year period. Among women, there is a small, statistically significant increase in breast cancer, and a marked decrease in cancer of the cervix. Among younger immigrants, the proportion of malignant melanoma has increased substantially since migration. A number of the changes in cancer patterns are consistent with various etiologic hypotheses including those based on possible dietary and other lifestyle changes among the migrants. abstract_id: PUBMED:19197401 Noncommunicable disease mortality and life expectancy in immigrants to Israel from the former Soviet Union: country of origin compared with host country. Objective: To assess the influence of country of origin effects and of adjustment and selection processes by comparing noncommunicable disease mortality and life expectancy among migrants to Israel from the former Soviet Union (FSU) with noncommunicable disease mortality and life expectancy among Israelis and the population of the Russian Federation. Methods: Data from 926,870 FSU-immigrants who migrated to Israel between 1990 and 2003 (study cohort) were analysed. Life expectancy was calculated for the study cohort, all Israelis, and the population of the Russian Federation. Age-standardized death rates were calculated for grouped causes of death. FSU immigrants were additionally compared with other Israelis and with inhabitants of the Russian Federation using cause-specific standardized mortality ratios (SMRs). Findings: Life expectancy at age 15 years in 2000-2003 was 61.0 years for male and 67.0 years for female FSU immigrants to Israel. Age-standardized death rates for FSU immigrants in Israel were similar to those of other Israelis and much lower than those of inhabitants of the Russian Federation. Relative to Israelis, the study cohort had a higher SMR for neoplasms, and particularly for stomach cancer. Mortality from brain cancer was higher when immigrants were compared to the Russian Federation (SMR: 1.71, 95% confidence interval, CI: 1.50-1.94 for males; SMR: 1.77, 95% CI: 1.56-2.02 for females), whereas mortality from stomach cancer was lower among immigrants relative to the Russian Federation (SMR: 0.43, 95% CI: 0.40-0.47 for males; SMR: 0.56, 95% CI: 0.52-0.61 for females). Mortality from external causes was lower among immigrants relative to the population of the Russian Federation (SMR: 0.20, 95% CI: 0.19-0.21 for males; SMR: 0.35, 95% CI: 0.33-0.37 for females) but significantly higher relative to other Israelis (SMR: 1.41, 95% CI: 1.35-1.47 for males; SMR: 1.08, 95% CI: 1.02-1.15). Conclusion: Noncommunicable disease mortality among FSU immigrants to Israel is lower than in the population of the Russian Federation. Mortality rates in FSU immigrants, particularly from circulatory diseases, have rapidly adjusted and have become similar to those of the destination country. However, immigrants from the FSU have considerably higher mortality than other Israelis from external causes and some noncommunicable diseases such as cancer. Mortality rates in these diaspora migrants show a mixed picture of rapid assimilation together with persistent country of origin effects, as well as the effects of adjustment hardships. abstract_id: PUBMED:24275258 Cancer incidence in ethnic German migrants from the Former Soviet Union in comparison to the host population. Aim: To investigate cancer incidence patterns among ethnic German migrants (Aussiedler) from the Former Soviet Union, a large migrant group in Germany, in comparison to autochthonous Saarland population over a 20 year observation period. Methods: Data were obtained from a cohort of Aussiedler residing in the federal state of Saarland (n=18,619). Cancer incidence and vital status were ascertained through record linkage with the Saarland Cancer Registry and local population registries. Results: During the follow up period from 1990 to 2009 we observed 638 incident diagnoses of malignant neoplasms (except non-melanoma skin cancer). The overall standardized incidence ratio (SIR) was 0.98 (95% confidence interval 0.92, 1.04). However, site-specific SIRs revealed great variation. Stomach cancer incidence was significantly higher among Aussiedler. Lung cancer was elevated for males, but lower among females. Additionally, diagnoses for colorectal cancer among males were significantly lower. Age-standardized rates (ASRs) over time show not all cancer rates of Aussiedler attenuate as expected to Saarland rates. For example, lung and prostate cancer incidence rates show increasing disparity from Saarland rates and female breast cancer incidence develops in parallel. Furthermore, ASR for overall cancer incidence of Aussiedler shows a yearly decrease (p=0.06) whereas Saarland rates remain stable. Discussion: Aussiedler incidence rates reflect incidence pattern observed in their countries of origin. abstract_id: PUBMED:33317154 The Incidence of Intestinal Gastric Cancer among Resettlers in Germany-Do Resettlers Remain at an Elevated Risk in Comparison to the General Population? Objective: Previous studies have shown that the incidence of gastric cancer (GC), and particularly intestinal GC, is higher among resettlers from the former Soviet Union (FSU) than in the general German population. Our aim was to investigate if the higher risk remains over time. Methods: GC cases between 1994 and 2013, in a cohort of 32,972 resettlers, were identified by the respective federal cancer registry. Age-standardized rates (ASRs) and standardized incidence ratios (SIRs) were analyzed in comparison to the general population for GC subtypes according to the Laurén classification. Additionally, the cohort was pooled with data from a second resettler cohort from Saarland to investigate time trends using negative binomial regression. Results: The incidence of intestinal GC was elevated among resettlers in comparison to the general population (SIR (men) 1.64, 95% CI: 1.09-2.37; SIR (women) 1.91, 95% CI: 1.15-2.98). The analysis with the pooled data confirmed an elevated SIR, which was stable over time. Conclusion: Resettlers' higher risk of developing intestinal GC does not attenuate towards the incidence in the general German population. Dietary and lifestyle patterns might amplify the risk of GC, and we believe that further investigation of risk behaviors is needed to better understand the development of disease pattern among migrants. abstract_id: PUBMED:29343242 Stage of cancer diagnoses among migrants from the former Soviet Union in comparison to the German population - are diagnoses among migrants delayed? Background: In this study, we compared stage at diagnosis, standardized incidence ratio (SIR) and standardized mortality ratio (SMR) of most frequent cancer diagnoses between re-settlers (Aussiedler) from the former Soviet Union and the general population in the Saarland in Germany to assess possible delays in diagnosis of cancer among this migrant group. Methods: Lung cancer, colorectal cancer, breast cancer, prostate cancer, malignant melanoma of the skin and stomach cancer diagnoses among a cohort of 18,619 re-settlers living in the Saarland between 1990 and 2009 were identified by the federal state's cancer registry. Vital status was available for the respective time-period and used to calculate SIR and SMR in comparison to the autochthonous population. Tumor stages were condensed into local and advanced stages. Odds ratios (OR) for an advanced tumor stage were modeled in dependence of re-settler-status and relevant covariates by logistic regression. Missing values were addressed in a sensitivity analysis. The influence of duration of stay in Germany on advanced stage diagnosis was analyzed among re-settlers. Results: SIR and SMR of lung and breast cancer were lower among female re-settlers, while SIR and SMR of colorectal and prostate cancer were lower among male re-settlers. SIR and SMR of stomach cancer were elevated among both sexes. Female re-settlers showed an elevated OR for being diagnosed with advanced stage breast cancer. Both male and female re-settlers showed an elevated OR when observing all six sites combined (OR among males 1.47, p = 0.04; OR among females 1.37, p = 0.05). The result of elevated ORs was supported in the sensitivity analysis. Finally, male re-settlers showed a weak association between duration of stay in Germany and reduced risk for advanced stage diagnosis. Conclusion: Re-settlers were more likely to be diagnosed at an advanced tumor stage. These findings are in line with previous research having shown unfavorable health care utilization of re-settlers. Overall, low mortality rates despite an increased risk of advanced stage at diagnosis argue for a sufficient follow-up care, comparable to the autochthonous population. abstract_id: PUBMED:27187801 Histologic types of gastric cancer among migrants from the former Soviet Union and the general population in Germany: what kind of prevention do we need? Objective: The incidence of gastric cancer (GC) is high among migrants from Eastern Europe and Asia, but a detailed picture of disease characteristics is missing. Our study examined the incidence of histologic types among resettlers from the former Soviet Union and the general population in Germany to draw conclusions on risk factors and possible prevention strategies. Methods: Between 1990 and 2009, all GC diagnoses among a cohort of 18 619 resettlers residing in the Saarland were identified in the Saarland Cancer Registry database. Age-standardized incidence rates (ASRs) of the entire Saarland population and standardized incidence ratios (SIRs) of resettlers compared with the Saarland population were calculated for types according to Laurén. In addition, ASRs and SIRs were modeled using Poisson's regression to investigate time trends. Results: The ASR of intestinal GC in the Saarland population decreased over time, whereas the ASR of diffuse GC remained unchanged. Resettlers' incidence of intestinal GC was elevated among men [SIR: 3.04, 95% confidence interval (CI): 2.05-4.50] and women (SIR: 2.78, 95% CI: 1.61-4.79), whereas diffuse GC was elevated only among women (SIR: 1.98, 95% CI: 1.07-3.69). No time trends for SIRs could be observed in regression analysis. Conclusion: Different trends of diffuse GC incidence in Germany and the USA underline the importance of environmental risk factors. The continuously elevated risk of GC among male resettlers is probably associated with risk factors affecting exclusively the intestinal type such as a low intake of fruit and vegetables and heavy alcohol consumption. Future prevention programs for resettlers should include dietary measures. Answer: The studies on stomach cancer mortality among migrants from the Former Soviet Union (FSU) to Israel and Germany reveal several important findings with implications for prevention. Stomach cancer mortality among these migrants remains elevated after migration to both Germany and Israel compared to the host country populations, but it is much lower than in the FSU (PUBMED:19242359). This suggests that while the risk decreases after migration, possibly due to changes in environmental factors or lifestyle, it does not align completely with the lower risk levels of the host populations. In Israel, stomach cancer mortality is significantly higher among migrants at shorter durations of residence compared to durations of more than nine years (PUBMED:18400085). This indicates that the longer migrants from the FSU live in Israel, the more their stomach cancer mortality rates approach those of the native population, possibly due to gradual lifestyle and dietary assimilation. Cancer incidence and mortality data among ethnic German migrants from the FSU in Germany show higher incidence and mortality of stomach cancer compared to the general German population (PUBMED:30254988). However, time trend analyses suggest that incidence risks of cause-specific incidence are converging over time, although mortality differences do not show changes over time (PUBMED:30254988). The consistently elevated risk of stomach cancer among FSU migrants, particularly the intestinal type, may be associated with risk factors such as low intake of fruit and vegetables and heavy alcohol consumption (PUBMED:27187801). This highlights the need for targeted prevention programs that include dietary measures for this population. In conclusion, the elevated stomach cancer mortality among FSU migrants to Germany and Israel, despite a secular decline, suggests that migrant-specific prevention and early detection measures cannot be recommended based on the current evidence (PUBMED:19242359). However, detailed risk factor profiles should be obtained through further studies to inform potential prevention strategies. The findings also underscore the importance of considering the length of residence in the host country when assessing cancer mortality risks among migrants (PUBMED:18400085).
Instruction: Can the Paula method facilitate co-contraction of the pelvic floor muscles? Abstracts: abstract_id: PUBMED:21104402 Can the Paula method facilitate co-contraction of the pelvic floor muscles? A 4D ultrasound study. Introduction And Hypothesis: The aim was to compare constriction of the levator hiatus (LH) and reduction of pelvic floor muscle (PFM) length during instruction of the Paula method (contraction of ring musculature of the mouth) and contraction of the PFM. Methods: Seventeen pregnant or postpartum women, mean age 28.6 (range 20-35) participated. A Voluson E8 ultrasound machine with 4-8 MHz curved array 3D/4D transducer (RAB 4-7l/obstetric) was used. Measurements were performed in the axial plane of minimal hiatal dimensions. Muscle length was calculated as circumference of the LH minus the suprapubic arch. Differences between groups were analyzed using Wilcoxon signed rank test. Significance level was set to 0.05. Results: There was a significant reduction of the LH area (p&lt;0.001) and muscle length (p&lt;0.001) during PFM contraction, but not during contraction according to the Paula method, p=0.51 and p=0.31, respectively. Conclusions: The Paula method did not facilitate PFM contraction. abstract_id: PUBMED:21104399 Effects of the Paula method in electromyographic activation of the pelvic floor: a comparative study. Introduction And Hypothesis: To determine if Paula method of circular muscles contraction e.g., those surrounding the eyes, mouth and fingers, could increase pelvic floor muscle (PFM) activity. Methods: Thirty-four healthy, nulliparous volunteers were included, with an average age of 28 (± 5.9) years and body mass index of 23.8 (± 3.3) kg/m(2). They were assigned to randomly perform the sequence: a PFM maximal voluntary contraction, a contraction of circular muscles, followed by a simultaneous combination of PFM and circular muscle contraction. The PFM were evaluated using surface electromyography. Results: PFM activity at baseline was 22.6 (± 10.9) μv. It was unchanged with Paula method contraction (p=0.322). There were also no differences between values of PFM maximal voluntary contraction alone, 99.8 (± 44.3) μv and in combination with the Paula method, 91.8 (± 35.3) μv (p=0.093). Conclusions: The Paula method seems not to increase PFM activity in nulliparous women. abstract_id: PUBMED:11494188 Contraction of the pelvic floor muscles during abdominal maneuvers. Objective: To determine whether voluntary abdominal muscle contraction is associated with pelvic floor muscle activity. Design: Pelvic floor muscle activity was recorded during contractions of the abdominal muscles at 3 different intensities in supine and standing positions. Setting: Research laboratory. Participants: Six women and 1 man with no histories of lower back pain. Intervention: Not applicable. Main Outcome Measures: Electromyographic activity of the pelvic floor muscles was recorded with surface electrodes inserted into the anus and vagina. These recordings were corroborated by measurements of anal and vaginal pressures. Gastric pressure was recorded in 2 subjects. Results: Pelvic floor muscle electromyography increased with contraction of the abdominal muscles. With strong abdominal contraction, pelvic floor muscle activity did not differ from that recorded during a maximal pelvic floor muscle effort. The pressure recordings confirmed these data. The increase in pressure recorded in the anus and vagina preceded the pressure in the abdomen. Conclusions: In healthy subjects, voluntary activity in the abdominal muscles results in increased pelvic floor muscle activity. The increase in pelvic floor pressure before the increase in the abdomen pressure indicates that this response is preprogrammed. Dysfunction of the pelvic floor muscles can result in urinary and fecal incontinence. Abdominal muscle training to rehabilitate those muscles may be useful in treating these conditions. abstract_id: PUBMED:25953072 Modeling the contraction of the pelvic floor muscles. We performed numerical simulation of voluntary contraction of the pelvic floor muscles to evaluate the resulting displacements of the organs and muscles. Structures were segmented in Magnetic Resonance (MR) images. Different material properties and constitutive models were attributed. The Finite Element Method was applied, and displacements were compared with dynamic MRI findings. Numerical simulation showed muscle magnitude displacement ranging from 0 to 7.9 mm, more evident in the posterior area. Accordingly, the anorectum moved more than the uterus and bladder. Dynamic MRI showed less 0.2 mm and 4.1 mm muscle dislocation in the anterior and cranial directions, respectively. Applications of this model include evaluating muscle impairment, subject-specific mesh implant planning, or effectiveness of rehabilitation. abstract_id: PUBMED:24361824 Reliability of the ultrasound measurements of abdominal muscles activity when activated with and without pelvic floor muscles contraction. Background: Synergistic co-activation of the abdominal and pelvic floor muscles (PFM) has been shown in literature. Some studies have assessed the reliability of ultrasound measures of the abdominal muscles. Objective: The aim of this study was to determine the reliability of ultrasound measurements of transverses abdominis (TrA) and obliquus internus (OI) muscles during different conditions (PFM contraction, abdominal hollowing manoeuvre (AHM) with and without PFM contraction) in participants with and without chronic low back pain (LBP). Methods: 21 participants (9 with LBP, 12 healthy) participated in the study. The reliability of thickness measurements at rest and during each condition and thickness changes and percentage of this changes at different conditions were assessed. Results: The results showed high reliability of the thickness measurement at rest and during each condition of TrA and OI muscles, moderate to substantial reliability for the thickness change and percentage of thickness change of TrA, and fair to moderate reliability of the thickness change and percentage of thickness change of OI in both groups. Conclusions: Ultrasound imaging can be used as a reliable method for assessment of abdominal muscle activity with and without PFM contraction. abstract_id: PUBMED:8963488 Manometric measurement of contraction capability of pelvic floor muscles The objective of our investigation was to introduce manometric assessment of the force of contraction of pelvic floor muscles as one of the examination methods for confirming the diagnosis of assessment of the degree of urinary incontinence in women; also to assess the relationship between MUCP (maximal urethral closure pressure), PWT (pad weight test) and results of manometric assessment. The trial comprised 10 healthy premenopausal and 10 postmenopausal women who did not suffer from incontinence. The authors investigated also 20 premenopausal women with the stress type of incontinence and 20 postmenopausal ones. In all manometric assessments with a WISAP apparatus were made. The maximal increase of pressure was recorded during contraction of the entire pelvic floor and when the pressure in the balloon was set at 50 and 75 mm Hg. During this assessment the authors recorded also a statistically significant difference between the group also a statistically significant difference between the group of continent and incontinent women before the menopause. No relationship between MUCP, PWT and the maximal contraction capacity of the pelvic floor muscles was found. The disadvantage of the manometric apparatus WISAP is the calibration of the apparatus in mm Hg (1 mm Hg = 13.5 cm water), i.e. the low sensitivity of the apparatus and the shape of the balloons which leads to irregular expression from the vagina during contraction of the pelvic floor muscles. From absolute values of the contraction force of the pelvic floor muscles we cannot draw any conclusions on urinary incontinence or its grade. The authors recommend this method only as a supplementary one, in particular to evaluate the results of exercise of the pelvic floor muscles. abstract_id: PUBMED:29733699 Ankle positions potentially facilitating greater maximal contraction of pelvic floor muscles: a systematic review and meta-analysis. Objectives: To evaluate the effect of ankle positions on pelvic floor muscles in women. Methods: Multiple databases were searched from inception-July 2017. Study quality was rated using the grading of recommendations, assessment, development, and evaluation system and the "threats to validity tool". Results: Four studies were eligible for inclusion. Meta-analysis revealed significantly greater resting activity of pelvic floor muscles in neutral ankle position (-1.36 (95% CI -2.30, -0.42) p = 0.004) and induced 15° dorsiflexion (-1.65 (95% CI -2.49, -0.81) p = 0.0001) compared to induced 15° plantar flexion. Significantly greater maximal voluntary contraction of pelvic floor was found in dorsiflexion compared to plantar flexion (-2.28 (95% CI -3.96, -0.60) p = 0.008). Meta-analyses revealed no significant differences between the neutral ankle position and 15° dorsiflexion for either resting activity (0.30 (95% CI -0.75, 1.35) p = 0.57) or maximal voluntary contraction (0.97 (95% CI -0.77, 2.72) p = 0.27). Conclusion: Pelvic floor muscle-training for women with urinary incontinence could be performed in standing with ankles in a neutral position or dorsiflexion to facilitate greater maximal pelvic floor muscle contraction. As urethral support requires resting contraction of pelvic floor muscles, decreased resting activity in plantar flexion identified in the meta-analysis indicates that high-heel wearers with urinary incontinence might potentially experience more leakage during exertion in a standing position. Implications for rehabilitation Pooled analyses revealed that maximal voluntary contraction of pelvic floor muscle is greater in induced ankle dorsiflexion than induced plantar flexion. As pelvic floor muscle strengthening involves achieving a greater maximal voluntary contraction, pelvic floor muscle training for women with stress urinary incontinence could be performed in standing either with ankles in a neutral position or dorsiflexion. Decreased resting activity in plantar flexion identified in the meta-analysis indicates that high-heel wearers with stress urinary incontinence might potentially experience more leakage during exertion in a standing position. Women with stress urinary incontinence should be advised to wear flat shoes instead of high-heels and should be cautioned about body posture and ankle positions assumed during exercise and daily activities. abstract_id: PUBMED:30592502 Can you train the pelvic floor muscles by contracting other related muscles? Aims: The aim of this study is to assess whether contraction of muscles other than the pelvic floor muscles (PFM) would be of sufficient magnitude to provide a "training" effect for the pelvic floor. Methods: Women were recruited via advertisement from a convenience sample of pelvic floor physiotherapists. A thin flexible array of pressure sensors (FemFit) was self-inserted into the vagina. Participants performed three maximum pelvic floor contractions and completed a randomized exercise protocol. Maximum pressures were determined for each sensor, for each exercise. Wilcoxon paired tests were used to ascertain the difference in pelvic floor muscle pressure between exercises, and between the pressures from the pelvic floor muscles sensors and those measuring abdominal pressure. Bonferroni correction α 0.005. Results: Data was obtained from nineteen participants. Mean age 43 ± 11 years, BMI 22.4 ± 3.2 kg m-2 . Mean PFM pressure for pelvic floor muscle contractions was 16.3 ± 12.2 mmHg. Pressure in the region of the pelvic floor was significantly higher during a targeted contraction compared to that for all other exercises, except for cough and curl-ups (P = 0.009, P = 0.013 respectively). PFM pressure was significantly higher than abdominal for PFMC, internal rotators and gluteals (P &lt; 0.001). Conclusion: Targeted PFM contractions develop higher pressures compared to abdominal pressure than any exercise tested in this study. The Femfit device was able to distinguish between abdominal and pelvic floor muscle pressures simultaneously. Exercising accessory muscles in an attempt to activate the pelvic floor sufficiently to illicit a training effect is not recommended. abstract_id: PUBMED:27564290 Training through gametherapy promotes coactivation of the pelvic floor and abdominal muscles in young women, nulliparous and continents. Introduction And Objectives: several studies have been investigated co-activation can enhance the effectveness of PFM training protocols allowing preventive and therapeutic goals in pelvic floor dysfunctions. The objective of the present study was to investigate if an abdominal-pelvic protocol of training (APT) using gametherapy would allow co-activation of PFM and transversus abdominis/oblique internal (TrA/OI) muscles. Patients And Methods: Twenty-five nulliparous, continent, young females, with median age 24.76 (±3.76) years were evaluated using digital palpation (DP) of PFM and surfasse electromyography of PFM and TrA/OI simultaneously, during maximal voluntary contraction (MVC), alternating PFM and TrA/OI contraction requests. All women participated on a supervised program of APT using gametherapy, that included exercises of pelvic mobilization associated to contraction of TrA/OI muscles oriented by virtual games, for 30 minutes, three times a week, in a total of 10 sessions. Electromyographic data were processed and analyzed by ANOVA - analysis of variance. Results: When MVC of TrA/OI was solicited, it was observed simultaneous increase of electromyographic activity of PFM (p=0.001) following ATP. However, EMG activity did not change significantly during MVC of PFM. Conclusion: Training using gametherapy allowed better co-activation of pelvic floor muscles in response to contraction of TrA, in young nulliparous and continent women. abstract_id: PUBMED:36961619 Pelvic floor and perineal muscles: a dynamic coordination between skeletal and smooth muscles on pelvic floor stabilization. The purpose of this review is to present our researches on the pelvic outlet muscles, including the pelvic floor and perineal muscles, which are responsible for urinary function, defecation, sexual function, and core stability, and to discuss the insights into the mechanism of pelvic floor stabilization based on the findings. Our studies are conducted using a combination of macroscopic examination, immunohistological analysis, 3D reconstruction, and imaging. Unlike most previous reports, this article describes not only on skeletal muscle but also on smooth muscle structures in the pelvic floor and perineum to encourage new understanding. The skeletal muscles of the pelvic outlet are continuous, which means that they share muscle bundles. They form three muscle slings that pass anterior and posterior to the anal canal, thus serving as the foundation of pelvic floor support. The smooth muscle of the pelvic outlet, in addition to forming the walls of the viscera, also extends in three dimensions. This continuous smooth muscle occupies the central region of the pelvic floor and perineum, thus revising the conventional understanding of the perineal body. At the interface between the levator ani and pelvic viscera, smooth muscle forms characteristic structures that transfer the lifting power of the levator ani to the pelvic viscera. The findings suggest new concepts of pelvic floor stabilization mechanisms, such as dynamic coordination between skeletal and smooth muscles. These two types of muscles possibly coordinate the direction and force of muscle contraction with each other. Answer: No, the Paula method does not appear to facilitate co-contraction of the pelvic floor muscles (PFM). A study using 4D ultrasound to compare the constriction of the levator hiatus and reduction of PFM length during instruction of the Paula method (contraction of ring musculature of the mouth) and contraction of the PFM found that there was a significant reduction in the levator hiatus area and muscle length during PFM contraction, but not during contraction according to the Paula method (PUBMED:21104402). Another study using electromyography to measure PFM activity found that the Paula method did not increase PFM activity in nulliparous women, with no significant differences in PFM activity during maximal voluntary contraction alone or in combination with the Paula method (PUBMED:21104399). These findings suggest that the Paula method, which involves contracting circular muscles such as those around the eyes, mouth, and fingers, does not effectively facilitate PFM contraction.
Instruction: Retroperitoneal approach for aortic surgery: is it worth it? Abstracts: abstract_id: PUBMED:34786910 Active surgical approach for retroperitoneal liposarcoma Objective: To assess the influence of active surgical approach on the long-term postoperative outcomes in patients with retroperitoneal liposarcoma. Material And Methods: A retrospective study included 190 patients with retroperitoneal liposarcoma. The effect of malignancy grade, adjuvant chemotherapy, number of separate tumor nodes in primary neoplasm and the first relapse, as well as the number of previous total resections on survival rate was analyzed. Results: Overall and relapse-free survival is significantly worse in patients with high-grade retroperitoneal liposarcoma (G2-3) compared to low-grade (G1) tumor (plog-rank=0.000). Multifocal growth of primary tumor (p=0.869; plog-rank=0.607) and multiple (&gt;1) separately located nodes in abdominal cavity and retroperitoneal space at the first relapse (plog-rank=0.158 to 0.985) did not significantly impair prognosis after total resection of all types of retroperitoneal liposarcoma regardless malignancy grade. Adjuvant chemotherapy does not significantly improve relapse-free survival. Overall survival was significantly higher in patients who underwent ≥4 previous total resections compared to 1 surgical treatment for all types of retroperitoneal liposarcoma regardless malignancy grade (p=0.000; plog-rank=0.001). Conclusion: The only potentially radical treatment for patients with retroperitoneal liposarcoma is surgery. We reported the advantages of active surgical approach for improvement of long-term outcomes in patients with retroperitoneal liposarcoma. abstract_id: PUBMED:1412286 Abdominal aortic surgery. Transperitoneal or retroperitoneal approach? During the past five years a retroperitoneal approach was used in 34 patients and the transabdominal route in 49 patients for treatment of abdominal aortosclerosis. Both groups were identical with respect to age, sex and preoperative morbidity. In patients with previous laparotomy the retroperitoneal approach was always chosen. There was no significant difference with regard to postoperative complications. The only significant difference between the two groups after the operation was a trend towards earlier defecation among the patients treated by the retroperitoneal approach. We can not demonstrate any important advantage or disadvantage of the retroperitoneal approach. The transabdominal approach provides an opportunity to examine the abdominal organs for malignancy. We found one asymptomatic gastric cancer. The retroperitoneal approach is particularly suitable in patients who have undergone laparotomy before for expected severe abdominal adhesions. abstract_id: PUBMED:34980150 Total laparoscopic resection by medial-retroperitoneal approach using virtual navigation: two case reports of primary retroperitoneal schwannoma. Background: We report two rare cases of retroperitoneal schwannoma completely resected by a laparoscopic medial-retroperitoneal approach aided by virtual navigation. Three-dimensional images have been used in liver and lung surgery, but there are few prior reports on retroperitoneal surgery. Case Presentation: These two case reports are of a 60-year-old man and a 40-year-old man with asymptomatic retroperitoneal schwannoma. In both cases, the tumors were located in the right renal hilum and were close to the duodenum, right ureter, and inferior vena cava. Simulation using three-dimensional images was performed before surgery, and a medial-retroperitoneal approach was performed to secure a wide surgical field. During the operation, we confirmed the location of the main feeder and the relationship between the tumor and organs with those shown on the three-dimensional images and performed total laparoscopic resection. Conclusion: The medial-retroperitoneal approach provides operative safety. Preoperative simulation and intraoperative navigation with three-dimensional images, which can be freely rotated and interactively visualized from any angle, are useful methods to enhance the surgeon's understanding of a patient's specific anatomy and are especially effective when resecting a retroperitoneal tumor that is located in an anatomically deep and complex location. abstract_id: PUBMED:24765496 Retroperitoneal approach for recurrent benign multicystic peritoneal mesothelioma. Benign multicystic peritoneal mesothelioma (BMPM) is an uncommon lesion. It presents as a lump in abdomen or a finding seen on imaging modalities. Surgery is the primary modality of treatment. However, it has a high recurrence rate; this results in adhesions and subsequent surgeries difficult. We present a case of recurrent BMPM in a female operated twice earlier in a rural centre. Imaging modalities showed majority of the lesion in paracolic and retroperitoneal region. Hence, retroperitoneal approach for surgery was taken. This avoided previous surgical adhesions. A brief case report on this novel approach and review of literature is presented. abstract_id: PUBMED:37884734 Safety and efficacy of laparoscopic transperitoneal versus retroperitoneal resection for benign retroperitoneal tumors: a retrospective cohort study. Background And Objective: Benign retroperitoneal tumors (BRTs) are clinically rare solid tumors. This study aimed to compare the safety and efficacy of laparoscopic transperitoneal versus retroperitoneal resection for BRTs. Methods: The clinical data of 43 patients who had pathologically confirmed BRTs and underwent laparoscopic resection in a single center from January 2019 to May 2022 were retrospectively analyzed. Patients were divided into two groups according to the surgical methods: the Transperitoneal approach group (n = 24) and the Retroperitoneal approach group (n = 19). The clinical characteristics and perioperative data between the two groups were compared. The baseline data and surgical variables were analyzed to determine the impact of different surgical approaches on the treatment outcomes of BRTs. Results: No significant difference was observed between the two groups in gender, age, body mass index, the American Society of Anesthesiologists score, presence of underlying diseases, tumor size, tumor position, operation duration, intraoperative hemorrhage, postoperative hospital stay, intestinal function recovery time, and postoperative complication rate. The conversion rate from laparoscopic to open surgery was significantly lower in the Transperitoneal approach group than in the Retroperitoneal approach group (1/24 vs. 5/19, χ2 = 4.333, P = 0.037). Tumor size was an independent influencing factor for the effect of surgery (odds ratio = 1.869, 95% confidence interval = 1.135-3.078, P = 0.014) and had a larger efficacy on the retroperitoneal group (odds ratio = 3.740, 95% confidence interval = 1.044-13.394, P = 0.043). Conclusion: The laparoscopic transperitoneal approach has the inherent advantages of anatomical hierarchies and surgical space, providing a better optical perspective of the targeted mass and improved bleeding control. This approach may have better efficacy than the retroperitoneal approach, especially in cases of a large tumor or when the tumor is located near important blood vessels. abstract_id: PUBMED:31143263 The Role of Combined Posterior and Anterolateral Retroperitoneal Approach in the Treatment of Posttraumatic Burst Lumbar Fractures. Context: Combined posterior and anterolateral retroperitoneal approach is very important for the treatment of unstable burst lumber fractures with retropulsed fragments. Aims: The aim of the study is to evaluate the role of combined posterior and anterolateral retroperitoneal approach in the treatment of unstable burst lumber fractures. Settings And Design: This is a retrospective clinical case series study. Patients And Methods: This study was conducted on 41 patients with unstable lumber burst fractures with retropulsed fragment. Frankel scale score and Denis pain score were used to evaluate the functional outcome. All patients were surgically treated using combined posterior and anterolateral retroperitoneal approach. They were followed for 1 year postoperatively. Statistical Analysis: Using SPSS version 21, data were presented as mean ± standard deviation, and percentage and paired sample and Wilcoxon signed-rank tests were used for data analysis. Results: the functional state of all patients improved after surgery. According to the Frankel and Denis pain scores, there was a significant improvement in patients' scores postoperatively compared to preoperative ones (P = 0.001). Visceral manifestations were present in 16 cases (36.6%) with complete improvement postoperatively except two cases. There is a significant improvement as regards pre- and postoperative regional kyphotic angle (9.12 ± 10.03) and vertebral body height (3.14 ± 0.37). Unintended durotomy occurred in six cases treated by stitching using absorbable sutures and fat graft. Wound infection was present in two cases treated by antibiotics and daily dressing. A solid fusion was achieved in all cases. Conclusions: Combined posterior and anterolateral retroperitoneal approach is feasible and effective in surgical exposure and treatment of unstable burst lumber fractures with retropulsed fragments. abstract_id: PUBMED:37741075 Laparoscopic excision of a retroperitoneal pseudocyst using a modified medial approach: A case report. Introduction: Retroperitoneal pseudocysts of non-pancreatic origin are rare. Although the laparoscopic approach has been used for their treatment, laparotomy remains the mainstream strategy for these lesions. Presentation Of Case: We report the case of an asymptomatic 51-year-old male patient who was incidentally diagnosed with a retroperitoneal pseudocyst. Computed tomography showed a 3 × 3 cm cystic lesion in the retroperitoneum, localized between the third part of the duodenum and the inferior vena cava. The patient underwent complete laparoscopic excision using a modified right-sided colonic resection procedure (modified medial approach), and histopathological examination revealed a non-pancreatic retroperitoneal pseudocyst. The patient was discharged without any complications, and no postoperative recurrence was detected. Discussion: Complete excision is of great significance in the treatment of retroperitoneal cysts. There have been few studies on the laparoscopic approach for retroperitoneal pseudocysts. A laparoscopic procedure is less invasive and helps identify the correct tissue planes to ensure complete resection and prevent inadvertent injury to adjacent vital structures. We detached the retroperitoneal lesion from the duodenum and safely removed it without damaging the surrounding organs by a modified medial approach. Conclusions: Retroperitoneal pseudocysts can be resected laparoscopically by identifying the correct tissue planes and adjacent vital structures. abstract_id: PUBMED:8512363 Giant retroperitoneal liposarcoma A case of giant retroperitoneal liposarcoma is described. Treatment was by radical surgery, using the anterior approach, with preservation of the renal unit. The diagnostic procedure is briefly discussed and the literature reviewed. abstract_id: PUBMED:29804624 Functional adrenal cortex preservation: A good reason for posterior retroperitoneal endoscopic approach. Introduction: Cortical-sparing adrenalectomy is a suitable treatment for hereditary and sporadic bilateral pheochromocytoma, in cases of low risk of malignancy, to reduce the possibility of adrenal insufficiency assuming the chance of local recurrence. The aim of the study is to analyze the functional results of partial adrenalectomy by retroperitoneal endoscopic approach in single-adrenal patients or patients requiring bilateral adrenalectomy. Methods: Prospective study between January 2015 and February 2016 including pheochromocytoma patients diagnosed with low risk of malignant mutations. All patients agreed to be included in the study. Experienced endocrine surgeons who have been trained in minimally invasive endocrine surgery performed the procedure using the same surgical technique. Demographic variables and clinical characteristics were collected, subsequently carrying out the descriptive analysis of the data. Results: A total of 6 patients were registered, four associated with MEN type 2 syndrome and two in the context of VHL syndrome. Retroperitoneoscopic resection was performed without laparoscopic or open conversion and no postoperative complications; the average hospital stay was 2.5 days. Preservation of the functional cortex without corticosteroids was achieved in 5 (83%) of out 6 cases with a follow-up of 26.2 ± 6 months. Today, these 5 patients have a preserved adrenal function without hormone replacement. Conclusions: Cortical-sparing adrenalectomy by the retroperitoneal endoscopic approach, in expert hands, is safe and feasible for the treatment of hereditary and sporadic pheochromocytoma in a context of low malignancy, making it possible to avoid the need for corticoid replacement in most cases. abstract_id: PUBMED:8273059 Retroperitoneal chylous cyst In pathology, cysts of the retroperitoneum are usually classified together with omental and mesenteric cysts. Because of risk of complications and malignancy these cysts should be extirpated. The frequency of complications and recurrence after surgery is significantly higher for retroperitoneal cysts than for the other types mentioned above. We describe a patient with a retroperitoneal chylous cyst of three litres. Correct diagnosis was made by CT-scan and chemical analysis of cyst fluid prior to surgery. The cyst was removed completely using an intercostal, retroperitoneal approach. The postoperative course was uneventful, and no evidence of recurrence was found after six months. Answer: The retroperitoneal approach for aortic surgery has been discussed in the context of abdominal aortosclerosis treatment. In a comparison between the retroperitoneal and transabdominal approaches, no significant difference in postoperative complications was observed. However, patients treated via the retroperitoneal approach showed a trend towards earlier defecation post-operation. The retroperitoneal approach was particularly favored in patients with previous laparotomies to avoid severe abdominal adhesions, although the study did not demonstrate any major advantages or disadvantages of this approach over the transabdominal route. The transabdominal approach does provide the opportunity to examine abdominal organs for malignancy, as evidenced by the incidental finding of an asymptomatic gastric cancer in one case (PUBMED:1412286). In the context of retroperitoneal liposarcoma, an active surgical approach, which likely includes retroperitoneal access, has been shown to improve long-term outcomes. The study suggests that surgery is the only potentially radical treatment for patients with retroperitoneal liposarcoma, and an active surgical approach is advantageous for improving long-term postoperative outcomes (PUBMED:34786910). In summary, while the retroperitoneal approach for aortic surgery does not show a clear superiority over the transabdominal approach in terms of postoperative complications, it may be beneficial in certain patient populations, such as those with previous abdominal surgeries. Additionally, in the context of retroperitoneal liposarcoma, an active surgical approach, which may include retroperitoneal access, is associated with better long-term outcomes. Therefore, the decision to use a retroperitoneal approach should be individualized based on the patient's history and the specific surgical context.
Instruction: Is combined androgen blockade with bicalutamide cost-effective compared with combined androgen blockade with flutamide? Abstracts: abstract_id: PUBMED:17956709 Combined androgen blockade in advanced prostate cancer: looking back to move forward. In 1989, Crawford and colleagues suggested that combined androgen blockade with castration plus antiandrogen therapy provided significantly improved survival compared with castration alone. Since then, some studies have supported these results, whereas others have not. To resolve this discrepancy, the Prostate Cancer Trialists' Collaborative Group conducted a metaanalysis of 27 randomized trials to evaluate whether combined androgen blockade has benefits compared with castration alone. The results published in 2000 showed that combined androgen blockade using a nonsteroidal antiandrogen treatment (nilutamide or flutamide) improved survival compared with castration alone, whereas combined androgen blockade using a steroidal antiandrogen agent (cyproterone acetate) reduced survival compared with castration alone. In 2004, an analysis was carried out to evaluate the nonsteroidal antiandrogen agent bicalutamide in the combined androgen blockade setting, by incorporating the data from a trial of combined androgen blockade with bicalutamide versus combined androgen blockade with flutamide with the Prostate Cancer Trialists' Collaborative Group metaanalysis data for combined androgen blockade with flutamide versus castration. This analysis showed that combined androgen blockade with bicalutamide was associated with a 20% reduction in the risk of death compared with castration alone. The survival benefit associated with combined androgen blockade using a nonsteroidal antiandrogen agent should be weighed against the potential for increased toxicity and expense compared with castration alone. Studies have shown that bicalutamide has a better tolerability profile than flutamide or nilutamide. Furthermore, cost-benefit analyses of combined androgen blockade with bicalutamide suggest it is a cost-effective option versus castration alone and versus combined androgen blockade with flutamide. In summary, the present evidence suggests that combined androgen blockade with a nonsteroidal antiandrogen agent should be a first-line therapy option in patients with advanced disease. abstract_id: PUBMED:16230148 Is combined androgen blockade with bicalutamide cost-effective compared with combined androgen blockade with flutamide? Objectives: To determine the cost-effectiveness of combined androgen blockade (CAB) with bicalutamide versus CAB with flutamide in men with Stage D2 prostate cancer. Both bicalutamide and flutamide are commonly used in CAB for prostate cancer. Although the cost of bicalutamide is more than that of flutamide, it is important that the efficacy, quality of life, and side effects are also considered when determining whether CAB with bicalutamide is a cost-effective option. Methods: A decision model was created to compare treatment strategies. Survival and side-effect information was based on a randomized trial that directly compared bicalutamide and flutamide. The costs and quality-of-life effects related to therapy were determined from published sources. Results: The incremental cost per quality-adjusted life year gained for bicalutamide versus flutamide was 22,000 dollars and 16,000 dollars at 5 and 10 years, respectively. If a quality adjustment was not included, the incremental cost-effectiveness ratio for CAB with bicalutamide compared with CAB with flutamide was even more favorable (20,000 dollars/life year gained at 5 years). One-way sensitivity analysis demonstrated that the cost-effectiveness estimates were most sensitive to drug costs and survival (baseline survival was not significantly different between therapies). Multi-way uncertainty analysis revealed that the median value of the incremental cost-effectiveness ratio at 5 years was 13,637 dollars/quality-adjusted life year when all the parameters were varied over a clinically reasonable range. Conclusions: Bicalutamide is cost-effective compared with flutamide when used for androgen blockade as part of CAB for men with advanced prostate cancer. abstract_id: PUBMED:16631454 Combined androgen blockade: an update. The use of combined androgen blockade therapy in prostate cancer management remains controversial. This article reviews the effect of the different non-steroid androgens in blocking androgen-independent activation of the androgen receptor in the androgen-depleted environment, and the potential benefit of bicalutamide in comparison to the first generation of anti-androgens (flutamide and nilutamide). An estimate of the benefit of combined therapy with bicalutamide suggests there is a high probability that bicalutamide 50 mg as combined therapy provides a survival advantage over castration alone. This treatment must be balanced against the potential for an increase in side-effects and a consequent adverse effect on the patient's quality of life. abstract_id: PUBMED:22189220 Monotherapy versus combined androgen blockade for advanced/metastatic prostate cancer In advanced/metastatic prostate cancer, a standard treatment is androgen deprivation therapy, either by surgical castration/LH-RH agonist monotherapy or by combined androgen blockade (CAB) with an antiandrogen. Clinical improvement and survival after CAB with an antiandrogen (instead of monotherapy) has been investigated for 20 years in many randomized clinical trials conducted primarily in Europe and America. However, there were both positive and negative results regarding the efficacy of CAB therapy. Therefore, CAB has neither been recommended as, nor has it become, a common therapy. But, in 2000, a meta-analysis-conducted Prostate Cancer Trialists Collaborative Group (PCTCG)showed the survival benefits of CAB with nonsteroidal antiandrogen (nilutamide and flutamide). Moreover, the J-Cap phase III trial in Japan suggested that CAB with bicalutamide significantly prolongs survival, which has led to the placement of CAB as the treatment of choice for advanced/metastatic prostate cancer. Neverthless, the benefit of CAB compared to monotherapy remains controversial because of the many issues involving survival, safety profiles, QOL, and cost-effectiveness. In this article, we discuss the feasibility of CAB for advanced/metastatic prostate cancer by reviewing the results of RCT, and introduce novel treatment modalities involving androgen and the androgen receptor, which are still under development. abstract_id: PUBMED:27123292 Clinical outcomes of anti-androgen withdrawal and subsequent alternative anti-androgen therapy for advanced prostate cancer following failure of initial maximum androgen blockade. The present study aimed to investigate the significance of anti-androgen withdrawal and/or subsequent alternative anti-androgen therapy in patients with advanced prostate cancer (PC) who relapsed after initial maximum androgen blockade (MAB). The present study evaluated the clinical outcomes of 272 consecutive advanced PC patients undergoing anti-androgen withdrawal and/or subsequent alternative anti-androgen therapy with flutamide following the failure of initial MAB using bicalutamide. With the exception of 41 patients (15.1%) who did not undergo anti-androgen withdrawal due to the characteristics of PC suggesting aggressive diseases, prostate-specific antigen (PSA) declined from the baseline value in 83 patients (35.9%), including 18 (7.8%) with PSA decline &gt;50%, but not in the remaining 148 (64.1%). No significant difference in the overall survival (OS) or cancer-specific survival (CSS) among the three groups was observed based on the response to anti-androgen withdrawal. Following the introduction of alternative anti-androgen therapy with flutamide, PSA decline was observed in 185 patients (68.0%), including 103 (37.9%) who achieved a PSA reduction of &gt;50%; however, the PSA level continued to elevate in the remaining 87 (32.0%). Furthermore, of the numerous factors examined, only the duration of the initial MAB therapy was shown to be significantly correlated with the PSA decline following alternative anti-androgen therapy. Multivariate analysis of several factors identified revealed that only PSA decline following alternative anti-androgen therapy was an independent predictor of CSS and OS. If initial MAB is effective, the introduction of alternative anti-androgen therapy may be considered; however, anti-androgen withdrawal should be omitted, irrespective of the characteristics of advanced PC. abstract_id: PUBMED:21091846 Combined androgen blockade for prostate cancer: review of efficacy, safety and cost-effectiveness. A standard treatment for advanced prostate cancer is androgen deprivation by surgical or medical castration. In theory, however, combined androgen blockade (CAB) with an antiandrogen plus castration should be more effective because castration alone does not completely eliminate androgens in the prostate. Therefore, a number of randomized clinical trials (RCT) were conducted in the 1990s to investigate the efficacy of CAB with an antiandrogen (nilutamide or flutamide) plus castration; however, there were both positive and negative results for the efficacy of CAB. The lack of data on safety, quality of life (QOL) and cost-effectiveness has been a hindrance to the adoption of CAB for the treatment of prostate cancer. Nevertheless, discussion on CAB for the treatment of prostate cancer has continued for over 20 years, which suggests that there remains some hope for this regimen. In the 2000s, clinical research on CAB with the antiandrogen bicalutamide commenced. CAB using this new antiandrogen was found to prolong overall survival (OS) in patients with prostate cancer, with favorable safety profiles and cost-effectiveness, without deteriorating QOL. In this article, we discuss the feasibility of CAB with bicalutamide for the treatment of prostate cancer by reviewing the theoretical background of CAB and then the results of RCT conducted in the 1990s when the usefulness of CAB was assessed. abstract_id: PUBMED:10587288 Which is the optimal antiandrogen for use in combined androgen blockade of advanced prostate cancer? The transition from a first- to second-generation antiandrogen. Many physicians use combined androgen blockade in the form of a luteinizing hormone-releasing hormone analog or bilateral orchiectomy in combination with a non-steroidal antiandrogen to offer patients a potentially more effective treatment than castration alone. Three non-steroidal anti-androgens are available in the US, i.e. flutamide (Eulexin), bicalutamide (Casodex) and nilutamide (Nilandron). Nilutamide offers patients no benefit over flutamide or bicalutamide and has the least favorable safety profile. Because of its short half-life, flutamide must be administered 3 times a day. Furthermore, flutamide therapy is associated with a relatively high incidence of diarrhea, often intolerable for some patients. Bicalutamide is available in a convenient one tablet, once-a-day dosing regimen, is at least as effective as flutamide and is better tolerated in terms of diarrhea. Therefore, bicalutamide would seem to represent an appropriate first choice in patients who are suitable candidates for combined androgen blockade. abstract_id: PUBMED:15882477 Combined androgen blockade: the case for bicalutamide. Combination therapy consists of castration plus an antiandrogen. Following medical or surgical castration, the androgen receptor can be activated by adrenal androgens, low levels of residual testosterone, and ligand-independent activators. The survival benefit of combination therapy compared with castration alone is one of the most studied questions in urology. Results from trials comparing combination therapy to castration alone are variable. A metaanalysis of 26 randomized trials indicated that the type of antiandrogen used is relevant. Combination therapy using nonsteroidal antiandrogens was associated with a statistically significant overall survival benefit. In contrast, combination therapy using steroidal antiandrogens was associated with reduced survival compared with castration alone. Bicalutamide 50 mg has a number of advantages compared with nilutamide and flutamide when used in combination with castration. These include an improved side-effect profile, once-daily dosing, more potent inhibition of androgen-independent activation of the androgen receptor through favorable interactions with nuclear coactivators and corepressors, and evidence for improved survival in one randomized trial. An analysis combining historic trial data suggests that bicalutamide 50 mg in addition to androgen deprivation may reduce the hazard ratio (HR) for prostate cancer mortality by 20% (HR, 0.80; 95% CI, 0.66-0.98). abstract_id: PUBMED:8977985 Maximum androgen blockade in 1996. Objectives: To review the current status of maximal androgen blockade (MAB) in the treatment of patients with prostate cancer. Introduction: Ten years after its launch, the concept of MAB remains open to many questions regarding its modalities and efficacy. Although more than 5,000 patients with metastatic prostate cancer have been entered in trials which have matured enough to be clinically meaningful, many uncertainties persist. Conclusions: It is now accepted that MAB with steroidal antiandrogens has limited, if any, efficacy. Nonsteroidal antiandrogens appear to be the drugs of choice if MAB is considered. Apart from differences in the nature of the side effects, today there appears to be no major difference between the various types of nonsteroidal antiandrogens (flutamide, nilutamide, bicalutamide). Overall, the impact of MAB on survival and progression-free survival is at best marginal when considering the results of the large meta-analysis published in July 1995. There is undoubtedly a trend in favor of MAB, which hopefully will be confirmed at the next meta-analysis. However, the real target of MAB (patients with minimal disease?) has yet to be defined more accurately as it becomes clear that many patients will not benefit from MAB and should be spared the side effects and cost of antiandrogens. A further point of concern is that continuous administration of any antiandrogen can induce mutations in the androgen receptor, transforming the antiandrogen into an agonist so that the first therapeutic action to be taken in case of progression would be the withdrawal of the antiandrogen. The so-called antiandrogen withdrawal syndrome is, however, less frequent than initially thought. abstract_id: PUBMED:31564004 Enzalutamide versus flutamide for castration-resistant prostate cancer after combined androgen blockade therapy with bicalutamide: the OCUU-CRPC study. Background: Before the androgen target therapy era, flutamide was widely used for castration-resistant prostate cancer in Japan. Enzalutamide is currently the recommended treatment; however, the efficacy and safety of enzalutamide and flutamide after combined androgen blockade therapy with bicalutamide, has not been compared. Methods: Patients with castration-resistant prostate cancer who received combined androgen blockade therapy with bicalutamide were randomly assigned to receive either enzalutamide or flutamide. The primary endpoint for efficacy was the 3-month prostate-specific antigen response rate. This trial is registered with ClinicalTrials.gov (NCT02346578) and the University hospital Medical Information Network (UMIN000016301). Results: Overall, 103 patients were enrolled. The 3- (80.8% vs. 35.3%; p &lt; 0.001) and 6-month (73.1% vs. 31.4%; p &lt; 0.001) prostate-specific antigen response rates were higher in the enzalutamide than in the flutamide group. The 3-month disease progression rates (radiographic or prostate-specific antigen progression) were 6.4% and 38.8% in the enzalutamide and flutamide groups, respectively [hazard ratio (HR): 0.16; 95% confidence interval (CI): 0.05-0.47; p &lt; 0.001]; the 6-month rates were 11.4% and 51.1%, respectively (HR 0.22; 95% CI 0.09-0.50; p &lt; 0.001). Enzalutamide provided superior prostate-specific antigen progression-free survival compared with flutamide (HR 0.29; 95% CI 0.15-0.54; p &lt; 0.001). Median time to prostate-specific antigen progression-free survival was not reached and was 6.6 months in the enzalutamide and flutamide groups, respectively. Conclusions: As an alternative anti-androgen therapy in patients with castration-resistant prostate cancer who fail bicalutamide-combined androgen blockade therapy, enzalutamide provides superior clinical outcomes compared with flutamide. Enzalutamide should be preferred over flutamide in these patients. Answer: Yes, combined androgen blockade with bicalutamide is considered cost-effective compared with combined androgen blockade with flutamide. A decision model comparing the two treatments found that the incremental cost per quality-adjusted life year gained for bicalutamide versus flutamide was $22,000 and $16,000 at 5 and 10 years, respectively. Without quality adjustment, the cost-effectiveness ratio was even more favorable for bicalutamide ($20,000/life year gained at 5 years). Sensitivity analysis showed that the cost-effectiveness estimates were most sensitive to drug costs and survival, with the median value of the incremental cost-effectiveness ratio at 5 years being $13,637/quality-adjusted life year when all parameters were varied over a clinically reasonable range. These findings suggest that bicalutamide is a cost-effective option when used for androgen blockade as part of combined androgen blockade for men with advanced prostate cancer (PUBMED:16230148). Additionally, studies have shown that bicalutamide has a better tolerability profile than flutamide or nilutamide, which can also contribute to its cost-effectiveness by potentially reducing the costs associated with managing side effects (PUBMED:17956709). Cost-benefit analyses of combined androgen blockade with bicalutamide suggest it is a cost-effective option versus castration alone and versus combined androgen blockade with flutamide (PUBMED:17956709). Therefore, the evidence supports the use of bicalutamide as a cost-effective option for combined androgen blockade in patients with advanced prostate cancer.
Instruction: Is xanthogranulomatous cholecystitis the most difficult for laparoscopic cholecystectomy? Abstracts: abstract_id: PUBMED:31341622 Outcomes of laparoscopic cholecystectomy for xanthogranulomatous cholecystitis. Xanthogranulomatous cholecystitis (XGC) is a rare inflammatory disease of the gallbladder with distinct histopathological characteristics. Laparoscopic cholecystectomy (LC) is currently the standard treatment for gallbladder disease. However, the outcomes of LC for XGC have not been completely investigated, due to the rarity of XGC. The present study aimed to assess the surgical outcomes of LC for XGC. Among 3,037 patients undergoing cholecystectomy between 2005 and 2017 at our institution, 58 patients (1.9%) were diagnosed with XGC based on histopathology. Of the patients, LC was performed in 38 (65.5%), and they were enrolled in the present study. The outcome of LC for XGC in the cases was assessed, and was compared with outcomes of LC for other diseases. The average operation time was 109±36 min, and average intraoperative blood loss was 58±85 ml. LC was converted to open cholecystectomy in 6 (15.8%) of the 38 cases. No operative mortality occurred. One patient developed postoperative complications greater than grade II in the Clavien-Dindo classification, and the mean postoperative hospital stay was 6.1±5.8 days. Based on previous reports and the nature of XGC itself, the outcomes reported herein of LC for XGC seemed acceptable. It should be also noted that LC for XGC exhibited a higher conversion rate compared with LC than other benign gallbladder diseases, implying that LC for XGC remains challenging. abstract_id: PUBMED:24761077 A comprehensive predictive scoring method for difficult laparoscopic cholecystectomy. Context: Laparoscopic cholecystectomy (LC) is the gold standard cholecystectomy. LC is the most common difficult laparoscopic surgery performed by surgeons today. The factors leading to difficult laparoscopic cholecystectomy can be predicted. Aims: To develop a scoring method that predicts difficult laparoscopic cholecystectomy. Settings And Design: Bidirectional prospective study in a medical college setup. Materials And Methods: Following approval from the institutional ethical committee, cases from the three associated hospitals in a medical college setup, were collected using a detailed proforma stating the parameters of difficulty in laparoscopic cholecystectomy. Study period was between May 10 and June 12. Preoperative, sonographic and intraoperative criteria were considered. Statistical Analysis Used: Chi Square test and Receiver Operater Curve (ROC) analysis. Results: Total 323 patients were included. On analysis, elderly patients, males, recurrent cholecystitis, obese patients, previous surgery, patients who needed preoperative Endoscopic retrograde cholangiopancreatography (ERCP), abnormal serum hepatic and pancreatic enzyme profiles, distended or contracted gall bladder, intra-peritoneal adhesions, structural anomalies or distortions and the presence of a cirrhotic liver on ultrasonography (USG) were identified as predictors of difficult LC. A scoring system tested against the same sample proved to be effective. A ROC analysis was done with area under receiver operator curve of 0.956. A score above 9 was considered difficult with sensitivity of 85% and specificity of 97.8%. Conclusions: This study demonstrates that a scoring system predicting the difficulty in LC is feasible. There is scope for further refinement to make the same less cumbersome and easier to handle. Further studies are warranted in this direction. abstract_id: PUBMED:28593410 Xanthogranulomatous cholecystitis: Is an initial laparoscopic approach feasible? Background: Xanthogranulomatous cholecystitis is an inflammatory disease with pathologically distinct characteristics such as accumulation of lipid-laden macrophages, fibrous tissue, and acute and chronic inflammatory cells. It often involves adjacent organs and mimics gallbladder cancer. The purpose of this study was to review the clinical findings of xanthogranulomatous cholecystitis and to determine the appropriate treatment plan. Methods: We retrospectively analyzed clinical demographics, operation records, and postoperative results of 31 patients with a pathological diagnosis of xanthogranulomatous cholecystitis who underwent surgery between January 2010 and 2015 at two university hospitals. Results: Xanthogranulomatous cholecystitis was found in 0.81% (31/3820) of cholecystectomy patients in our hospital over 5 years. The most frequently observed clinical symptom was abdominal pain (21 patients, 67.7%). Preoperative radiological studies showed cholelithiasis in 23 patients (74.2%), thickening of the gallbladder wall in 23 patients (74.2%), and suspicious cancer in 2 patients (6.5%), but there were no cases of gallbladder cancer accompanying xanthogranulomatous cholecystitis. Laparoscopic cholecystectomy was planned in all patients and was converted to open cholecystectomy in five patients. T-tube choledocholithotomy was needed in one patient due to common bile duct injury. Mean operation time was 149.2 min, and estimated blood loss was 270.1 mL. There were two patients with complications greater than Clavien-Dindo Classification grade III (CBD injury, pleural effusion). Conclusion: An initial laparoscopic approach is possible for xanthogranulomatous cholecystitis. However, it is troublesome and challenging, with significantly higher conversion and complication rates compared to standard laparoscopic cholecystectomy. abstract_id: PUBMED:24019688 Outcomes of Xanthogranulomatous cholecystitis in laparoscopic era: A retrospective Cohort study. Introduction: Xanthogranulomatous cholecystitis (XGC) is a rare variant of cholecystitis and reported incidence of XGC varies from different geographic region from 0.7% -9%. Most of the clinicians are not aware of the pathology and less evidence is available regarding the optimal treatment of this less common form of cholecystitis in the present era of laparoscopic surgery. Materials And Methods: A retrospective cohort study was conducted in a tertiary care university hospital from 1989 to 2009. Histopathologically confirmed XGC study patients (N=27) were compared with non-Xanthogranulomatous cholecystitis (NXGC) control group (N=27). The outcomes variables were operative time, complication rate and laparoscopic to open cholecystectomy conversion rate. The study group (XGC) was further divided in to three sub groups; group I open cholecystectomy (OC), laparoscopic cholecystectomy (LC) and laparoscopic converted to open cholecystectomy (LCO) for comparative analysis to identify the significant variables. Results: During the study period 6878 underwent cholecystectomy including open cholecystectomy in 2309 and laparoscopic cholecystectomy in 4569 patients. Histopathology confirmed xanthogranulomatous cholecystitis in 30 patients (0.43% of all cholecystectomies) and 27 patients qualified for the inclusion criterion. Gallbladder carcinoma was reported in 100 patients (1.45%) during the study period and no association was found with XGC. The mean age of patients with XGC was 49.8 year (range: 29-79), with male to female ratio of 1:3. The most common clinical features were abdominal pain and tenderness in right hypochondrium. Biliary colic and acute cholecystitis were the most common preoperative diagnosis. Ultrasonogram was performed in all patients and CT scan abdomen in 5 patients. In study population (XGC), 10 were patients in group I, 8 in group II and 9 in group III. Conversion rate from laparoscopy to open was 53 % (n=9), surgical site infection rate of 14.8% (n=4) and common bile duct injury occurred one patient in open cholecystectomy group (3.7%). Statistically significant differences between group I and group II were raised total leukocyte count: 10.6±3.05 vs. 7.05±1.8 (P-Value 0.02) and duration of surgery in minutes: 248.75±165 vs. 109±39.7 (P-Value 0.04). The differences between group III and group II were duration of surgery in minutes: 208.75±58 vs. 109±39.7 (P-Value 0.03) and duration of symptoms in days: 3±1.8 vs. 9.8±8.8 (P-Value 0.04). The mean hospital stay in group I was 9.7 days, group II 5.6 days and in group III 10.5 days. Two patients underwent extended cholecystectomy based on clinical suspicion of carcinoma. No mortality was observed in this study population. Duration of surgery was higher in XGC group as compared to controls (NXGC) (203±129 vs.128±4, p-value=0.008) and no statistically significant difference in incidence proportion of operative complication rate were observed among the group (25.9% vs. 14.8%, p-value=0.25. Laparoscopic surgery was introduced in 1994 and 17 patients underwent laparoscopic cholecystectomy and higher conversion rate from laparoscopic to open cholecystectomy was observed in 17 study group (XGC) as compared to 27 Control group (NXGC) 53%vs.3.3% with P-value of &lt; 0.023. Conclusion: XGC is a rare entity of cholecystitis and preoperative diagnosis is a challenging task. Difficult dissection was encountered in open as well in laparoscopic cholecystectomy with increased operation time. Laparoscopic cholecystectomy was carried out with high conversion rate to improve the safety of procedure. Per operative clinical suspicion of malignancy was high but no association of XGC was found with gallbladder carcinoma, therefore frozen section is recommended before embarking on radical surgery. abstract_id: PUBMED:34815866 Prediction of difficult laparoscopic cholecystectomy: An observational study. Background: Laparoscopic cholecystectomy (LC) is a gold standard treatment of symptomatic gallstone disease. Meanwhile, it is also a challenging procedure demanding excellent expertise for the best outcomes. Many times, difficult laparoscopic cholecystectomy is a nerve-wracking situation for surgeons. It endangers patients by causing potential injury to vital structures. Thus, we aimed to identify predictors for difficult LC. Methods: A retrospective cross-sectional review of surgical records was done. Patients who underwent laparoscopic cholecystectomy on an elective basis from July 2017 to June 2021 were included in the study. We divided our patients into two groups based on operative findings of difficult LC; difficult LC group and non-difficult LC group. We compared patient's demographics, predictors, and perioperative details and analyzed the data. Results: A total of 338 patients (82 males) with a median age of 47 years were studied. Total difficult LC was found in 52 patients (15.4%). The overall conversion rate was 8.9%. Logistic multivariable regression analysis revealed that; male gender (odds ratio (OR); 0.171, confidence interval (CI),(0.043-0.675), P; 0.012), past history of acute cholecystitis (OR; 0.038, CI; (0.005-0.309), P; 0.002), gall bladder wall thickness (≥4-5 mm) (OR; 0.074, CI; (0.008-0.666), P; 0.020), fibrotic gallbladder (OR; 166.6, CI; (7.946-3492), P; 0.001), and adhesion at Calot's triangle (OR; 0.021, CI (0.001-0.311), P; 0.005) were independent predictors of difficult LC. Conclusions: Gender (male), past history of acute cholecystitis, gallbladder wall thickness (≥4-5 mm), fibrotic gallbladder, and adhesion at Calot's triangle are significant predictors for difficult LC. Moreover, an awareness about reliable predictors for difficult LC would be helpful for an appropriate treatment plan and application of the resources to anticipate difficult LC. abstract_id: PUBMED:29615182 A Rare Pathology of Difficult Laparoscopic Cholecystectomy: Xanthogranulomatous Cholecystitis. Xanthogranulomatous cholecystitis (XGC) is a rare inflammatory condition of the gallbladder (GB). Preoperatively, it is a diagnostic challenge. One thousand and seventy nine laparoscopic cholecystectomy (LC) patients from September 2012 to June 2015 were retrospectively reviewed. Fourteen patients were identified with XGC. An analysis was carried out on this group of patients looking at postoperative pathology results, preoperative radiology, operative findings and outcome. The overall frequency of XGC was 1.3%. A thick walled GB was found in 11 patients on preoperative imaging. Retrospectively suggestive feature thick gallbladder walls with low attenuation mural nodules. No suspicion of cancer was demonstrated in the final histology or preoperatively. Five cases (36%) perforated intraoperatively. Although all were surgically challenging, no conversions to open were performed. The perforation risk could be higher. A difficult cholecystectomy should alert a surgeon to consider XGC as a diagnosis. Contrary to the popular belief, XGC was found to be difficult to differentiate from infection rather than malignancy. abstract_id: PUBMED:35112058 The management of xanthogranulomatous cholecystitis. Objectives: Xanthogranulomatous cholecystitis (XGC) is a rare variant of chronic cholecystitis. This rare pathology is characterized by severe and progressive fibrosis of the gallbladder wall as well as infiltration of fat-laden macrophages. Material And Methods: The final pathology report of 8213 cholecystectomies performed between 2011 and 2019 was evaluated retrospectively, and patients whose pathology result was reported as XGC were included in the study. Patients' demographic characteristics, pathology results, and surgical methods were evaluated. Logistic regression analysis was performed for risk factors on conversion to open cholecystectomy. Results: The rate of XGC among cholecystectomies was 0.91%. Mean age of the patients was 57.32 years. Laparoscopic cholecystectomy was applied to 92% (n: 69) of the patients. None of the patients had cancer suspicion in the preoperative period, but cancer suspicion was found in 10.6% of the patients during the operation. With the frozen test, unnecessary surgeries were prevented in these patients. Conversion rate to open cholecystectomy was found to be 26.09%. The most common reason for conversion to open cholecystectomy (66.7%) was intense fibrosis. Increased gallbladder wall thickness and acute cholecystitis were found to be statistically significant risk factors in ultrasonography (p &lt;0.05). Total complication rate in XGC cases was 3.9%. Conclusion: XGC is an extremely rare disease and is difficult to diagnose before cholecystectomy. Especially in preoperative USG, in cases with no suspicion of malignancy, but with suspected malignancy during the operation, histopathological examination with frozen method before extensive surgery may prevent unnecessary dissection and related morbidities. abstract_id: PUBMED:27746459 Pre-operative Prediction of Difficult Laparoscopic Cholecystectomy. Introduction: Laparoscopic cholecystectomy is one of the most common operation performed. Though LC have become safer and easier at times it can be difficult. Difficult cases can result in prolonged operative time, bleeding, bile spillage, conversion to open technique and bile duct injury resulting in unplanned prolonged hospital stay, increase in estimated cost to the patients and for the surgeon it leads to increased stress during operation and time pressure to complete the operative list. . Identification of difficult cases has potential advantages for surgeons, patients and their relatives. We aim to develop and validate a scoring system to predict difficult LC preoperatively. Methods: Prospective study. History, physical examination, abdominal ultrasound and biochemical parameters were included to develop a scoring system. Hundred patients undergoing LC were included and preoperative scores were calculated preoperatively to predict difficult LC which was compared with operative assessment. Results: Sensitivity and specificity of the preoperative scoring for difficult case was 53.8 % and 89.2 % respectively with PPV of 63.64 % and NPV of 84.62%. Only three parameters (history of acute cholecystitis, gall bladder wall thickness and contracted gall bladder) were statistically significant to predict difficult LC individually. Area under ROC curve was 0.779 (95 % CI, 0.657-0.883). Conclusions: Preoperative scoring system can be used to predict difficult LC. Surgeons can plan operation based on predicted difficulty. Patients and relatives can be counselled preoperatively for the possibility of difficult operation, prolonged hospital stay and increased cost in predicted difficult case. abstract_id: PUBMED:36590783 Difficult Laparoscopic Cholecystectomy Predictors and its Significance: Our Experience. Background: Laparoscopic cholecystectomy (LC) has become the procedure of choice for the management of symptomatic gallstone disease. In LC, the surgeons encountered difficulties with acutely inflamed or gangrenous gallbladder (GB), dense adhesions at Calot's triangle, fibrotic and contracted GB, and cholecystoenteric fistula. Depending on the difficulty faced during the surgery, the outcome of LC may vary from abandoning the procedure or partial cholecystectomy to conversion into open cholecystectomy. Complications related to biliary tract or adjoining structures or vessels may also occur. Our aim was to assess the different preoperative factors in patients of cholelithiasis and ascertain the validity of the scoring system devised by Randhawa and Pujahari in preoperatively predicting the difficult LC in our hospital scenario. Materials And Methods: This hospital-based observational study was conducted in the Department of General Surgery for a period of 2 years. All diagnosed cases of cholelithiasis admitted for elective LC during the study period in our hospital were included in the study. Results: In total, 154 patients, aged≥50 years, history of hospitalization for acute cholecystitis (AC), body mass index of 25 kg/m2 and more, abdominal scar, palpable GB, GB wall thickness ≥4 mm, pericholecystic collection, impacted stone found to be significant factors to predict difficult LC preoperatively. Endoscopic retrograde cholangiopancreatography and pancreatitis were found as independent risk factor for difficult LC. Conclusion: We recommend that the scoring system should be regularly used as a protocol for predicting difficulty levels preoperatively in LC. It can help to decide the surgical approach, counsel the patients, and reduce the complication rate, rate of conversion, and overall medical cost. The scoring system proposed by Randhawa and Pujahari is effective but has some lacunae. abstract_id: PUBMED:33425342 A prospective cohort study for prediction of difficult laparoscopic cholecystectomy. Introduction: Difficult laparoscopic cholecystectomy (DLC) is a stressful condition for surgeon which is followed by greater risk for various injuries (biliary, vascular etc.) Preoperative factors that are related to DLC are landmarks for surgeon to assess the possibilities for overcoming difficulties and making early decision about conversion to an open surgery. In prospective cohort study we evaluated and defined the importance and impact of preoperative parameters on difficulties encountered during surgery, defined DLC, predictors of DLC and index of DLC. Materials And Methods: All patients in the study were operated by the same surgeon. We defined the total duration of the operation as the time from insertion of Veress needle to the extraction of gallbladder (GB) and DLC as a laparoscopic cholecystectomy (LC) that lasted longer than the average duration of LC and the value of one standard deviation. Results: Multivariate logistic regression analysis identified five predictors significantly related to DLC: GB wall thickness &gt; 4 mm, GB fibrosis, leukocytosis ˃10 × 109 g/L, ˃ 5 pain attacks that lasted longer than 4 h and diabetes mellitus. The sensitivity of the generated index of DLC in our series is 81.8% and specificity 97.2%. Conclusion: Preoperative prediction of DLC is important for the surgeon, for his operating strategy, better organization of work in operating room, reduction of treatment expenses, as well as for the patient, for his timely information, giving a consent for an operation and a better psychological preparation for possible open cholecystectomy (OC). Answer: Xanthogranulomatous cholecystitis (XGC) is indeed recognized as a challenging condition for laparoscopic cholecystectomy (LC) due to its distinct pathological characteristics, such as severe fibrosis and infiltration of fat-laden macrophages, which can mimic gallbladder cancer and often involve adjacent organs (PUBMED:28593410, PUBMED:35112058). The difficulty in performing LC on patients with XGC is highlighted by several studies: 1. A study found that among 3,037 cholecystectomy patients, 58 (1.9%) were diagnosed with XGC. LC was attempted in 38 of these cases, with a conversion rate to open cholecystectomy of 15.8%, which is higher than for other benign gallbladder diseases, indicating that LC for XGC is more challenging (PUBMED:31341622). 2. Another study reported that XGC was found in 0.81% of cholecystectomy patients over 5 years, with a conversion rate from LC to open cholecystectomy in five patients, suggesting that an initial laparoscopic approach is possible but troublesome and challenging (PUBMED:28593410). 3. A retrospective cohort study indicated that XGC had a high conversion rate from laparoscopic to open cholecystectomy, with difficult dissection encountered in both open and laparoscopic approaches, leading to increased operation time (PUBMED:24019688). 4. One study observed that XGC was a rare inflammatory condition that posed a diagnostic challenge preoperatively and was associated with a higher risk of intraoperative perforation, although no conversions to open surgery were performed in their cohort (PUBMED:29615182). 5. The management of XGC was discussed in a study that found a conversion rate to open cholecystectomy of 26.09%, with intense fibrosis being the most common reason for conversion, indicating the difficulty of LC in XGC cases (PUBMED:35112058). While XGC is a significant predictor of difficult LC, it is not the only factor. Other predictors include male gender, past history of acute cholecystitis, gallbladder wall thickness, fibrotic gallbladder, and adhesions at Calot's triangle (PUBMED:34815866, PUBMED:36590783, PUBMED:33425342). Therefore, while XGC is a challenging condition for LC, it is not necessarily the most difficult, as other factors also contribute to the complexity of the procedure.
Instruction: Do male and female patients with chronic musculoskeletal pain differ in their pre-treatment expectations of rehabilitation outcome? Abstracts: abstract_id: PUBMED:21174053 Do male and female patients with chronic musculoskeletal pain differ in their pre-treatment expectations of rehabilitation outcome? Purpose: To analyse differences between males and females in expectations about rehabilitation outcome. Methods: Design: cross-sectional study. Setting: a general rehabilitation centre. Patients: n = 616, mean age 44 years, with chronic musculoskeletal pain, referred for multidisciplinary pain rehabilitation. All eligible patients in the period January 2005 to September 2009 were sent a questionnaire prior to or during the first two weeks of the treatment; the response was 630 out of 1105 (57%), of whom 14 patients did not give permission to use their data for research purposes. Main Outcome Measure: odds ratios. Results: Odds ratios for 21 items from a list of 25 expectations were not significant, odds ratios for 3 expectations were lower than 2 and odds ratio for 1 expectation was 4.0. Male patients were more likely than female patients to expect that the treatment would enable them to cope better with being a spouse and that the cause of their pain would be found. Female patients were more likely than male patients to expect that the treatment would enable them to better accept the fact that they could no longer do what they were able to do in the past, and that they would have fewer problems with household activities. Conclusion: There were no gender differences in pre-treatment expectations for 21 of 25 possible expectations, small differences for 3 expectations and a strong, clinically important difference for 1 expectation: female patients were more likely than male patients to expect that they would have fewer problems with household activities. abstract_id: PUBMED:28125978 Expectations of pain and functioning in patients with musculoskeletal disorders: a cross-sectional study. Background: Research has suggested that patient expectations are associated with treatment outcome and evolve along with patient communication within the musculoskeletal field. However, few studies have investigated if or how physical medicine and rehabilitation (PMR) consultations affect the attending patients' expectations regarding pain and functional improvement. Hence, the aims of the present study were to compare patient expectations regarding pain and functional improvement before and after a PMR consultation and to assess patient characteristics, including diagnosis, that could perhaps predict changes in expectations. Methods: The study design was cross-sectional. Eligible participants were first-time patients with neck/back or shoulder complaints who were referred to a PMR outpatient clinic between January and June 2013. Questionnaires (the Patient Shoulder Outcome Expectancies, or PSOE, questionnaire and a numeric rating scale, or NRS) focused on expectations regarding pain and functioning were completed immediately prior to and after a consultation with a PMR specialist. Results: In total, 257 patients were included. In total, 24% of the subjects expected a more positive outcome after the PMR consultation compared with before the consultation, while 10% of the subjects exhibited a negative change in expectations. Few patient characteristics other than sick leave were associated with changes in expectations; however, patients with shoulder complaints seemed to be more optimistic than patients with neck/back complaints. Conclusion: Expectations can be influenced by a single specialist consultation. Among clinical prognostic factors, only sick leave influenced the change expectations. However, patients with shoulder complaints seemed to be more optimistic than patients with neck/back complaints. Trial Registration: The study was approved by the Data Protection Office at Oslo University Hospital, 2012/2574. ISRCTN registration: 40963362 (registered retrospectively 12.12.2016). abstract_id: PUBMED:28339337 "I know what I want but I'm not sure how to get it"-Expectations of physiotherapy treatment of persons with persistent pain. Expectations of physiotherapy treatment of patients with persistent pain have been shown to influence treatment outcome and patient satisfaction, yet this is mostly explored and described in retrospective. The aim of the study was to explore and describe the expectations people with persistent pain have prior to physiotherapy treatment. Ten participants with persistent musculoskeletal pain from the back, neck, or shoulders were included in the study. Data were collected by interviews using a semi-structured interview guide and were analyzed with qualitative content analysis. The analysis resulted in one main category: "The multifaceted picture of expectations" and four categories: 1) Standing in the doorway: curious and uncertain; 2) Looking for respect, confirmation and knowledge; 3) Expecting treatment, regular training, and follow up; and 4) Having dreams, being realistic, or feeling resigned. The main category and the categories describe a multifaceted picture of the participants' expectations, gradually developed and eventually encompassing several aspects: good dialog and communication, the need to be confirmed as individuals, and getting an explanation for the pain. The results also show that the participants expected tailored training with frequent follow-ups and their expectations of outcome ranged from hope of the best possible results to realistic or resigned regarding pain relief and activity levels. abstract_id: PUBMED:25228388 Predictors of multidisciplinary treatment outcome in patients with chronic musculoskeletal pain. Purpose: The present study aimed to identify predictors of rehabilitation outcome for patients with chronic musculoskeletal pain (CMP) and psychological problems. Methods: A retrospective cohort study including 230 adult patients with CMP admitted for multidisciplinary pain rehabilitation. Potential predictors were patient characteristics, duration of complaints, baseline functioning, pain, personality, coping style, fear of movement, psychological distress and type of treatment. Outcome measures were physical functioning, mental health, pain and patient-reported effect. Multiple (logistic) regression models were used to identify predictors. Results: Patients who were more disabled and patients with more pain benefitted more from the rehabilitation treatment than less disabled patients or those with less pain. Age, work status, vitality, depression and coping style also predicted outcomes significantly. The models explained between 27 and 80% of the outcomes. There was an interaction between type of treatment, work status and the baseline pain score as regards the outcome in terms of pain. Conclusions: No strong predictors of treatment outcome were found other than the baseline scores of the respective outcome variables. More disabled patients and patients with more pain benefitted more from the rehabilitation program. Other predictors improved the prediction models slightly. Implications for Rehabilitation It remains challenging to correctly predict the outcome of treatment from patients' baseline sociodemographic and psychological characteristics; predictors other than baseline scores of the outcome variables are only slightly associated with treatment outcome. Patients with chronic musculoskeletal pain and poor physical functioning or mental health benefit most from pain rehabilitation. Older patients benefit less from a pain rehabilitation program than younger patients in terms of physical functioning. Pain reduction during a pain rehabilitation program is greatest in patients with high pain intensity who are not at work at the start of the rehabilitation program. Coping style influences the outcome of rehabilitation of patients with chronic musculoskeletal pain. abstract_id: PUBMED:30514271 Stakeholder expectations from the integration of chiropractic care into a rehabilitation setting: a qualitative study. Background: Few studies have investigated patient and provider expectations of chiropractic care, particularly in multidisciplinary settings. This qualitative study explored stakeholder expectations of adding a chiropractor to the healthcare team at a rehabilitation specialty hospital. Methods: The research methodology was an organizational case study with an inpatient facility for persons recovering from complex neurological conditions serving as the setting. Sixty stakeholders, including patients, families, hospital staff, and administrators, were interviewed or participated in focus groups in June 2015. Semi-structured questions guided the interview sessions which were digitally audiorecorded and transcribed. Data were entered into a qualitative software program to conduct content analysis using an iterative approach to identify key themes. Results: Expectations for the chiropractic program were mostly positive with themes consistently reported across stakeholder groups. The central domain, making progress, encompassed the organizational mission to empower patients to reach hospital discharge and return to life in the community. Higher order goals, characterized as achieving whole person healing, encompassed patients' quality of life, self-efficacy, and activities of daily living. Stakeholders expected the addition of chiropractic to help patients progress toward these goals by improving pain management and physical functioning. Pain management themes included pain intensity, medication use, and pain-related behaviors, while functional improvement themes included muscle tone, extremity function, and balance and mobility. In addition to these direct effects on clinical outcomes, stakeholders also expected indirect effects of chiropractic care on healthcare integration. This indirect effect was expected to increase patient participation in other providers' treatments leading to improved care for the patient across the team and facility-level outcomes such as decreased length of stay. Conclusions: Stakeholders expected the addition of chiropractic care to a rehabilitation specialty hospital to benefit patients through pain management and functional improvements leading to whole person healing. They also expected chiropractic to benefit the healthcare team by facilitating other therapies in pursuit of the hospital mission, that is, moving patients towards discharge. Understanding stakeholder expectations may allow providers to align current expectations with what may be reasonable, in an effort to achieve appropriate clinical outcomes and patient and staff satisfaction. abstract_id: PUBMED:31238588 Moderate and Stable Pain Reductions as a Result of Interdisciplinary Pain Rehabilitation-A Cohort Study from the Swedish Quality Registry for Pain Rehabilitation (SQRP). Few studies have investigated the real-life outcomes of interdisciplinary multimodal pain rehabilitation programs (IMMRP) for chronic pain. This study has four aims: investigate effect sizes (ES); analyse correlation patterns of outcome changes; define a multivariate outcome measure; and investigate whether the clinical self-reported presentation pre-IMMRP predicts the multivariate outcome. To this end, this study analysed chronic pain patients in specialist care included in the Swedish Quality Registry for Pain Rehabilitation for 22 outcomes (pain, psychological distress, participation, and health) on three occasions: pre-IMMRP, post-IMMRP, and 12-month follow-up. Moderate stable ES were demonstrated for pain intensity, interference in daily life, vitality, and health; most other outcomes showed small ES. Using a Multivariate Improvement Score (MIS), we identified three clusters. Cluster 1 had marked positive MIS and was associated with the overall worst situation pre-IMMRP. However, the pre-IMMRP situation could only predict 8% of the variation in MIS. Specialist care IMPRPs showed moderate ES for pain, interference, vitality, and health. Outcomes were best for patients with the worst clinical presentation pre-IMMRP. It was not possible to predict who would clinically benefit most from IMMRP. abstract_id: PUBMED:21351369 Depression and age as predictors of patient-reported outcome in a multidisciplinary rehabilitation programme for chronic musculoskeletal pain. Background: The recommended treatment for chronic musculoskeletal pain is multidisciplinary, with a cognitive approach. The aim of this study was to investigate health-related quality of life (HRQoL) outcome after a multidisciplinary treatment with a cognitive approach. Methods: A total of 131 subjects who participated in a multidisciplinary rehabilitation programme (2005-2008) were studied at baseline and after six months, using the Short Form Short Form 36-item Health Survey questionnaire (SF-36) as primary outcome (HRQoL), and the Hospital Anxiety and Depression Scale (HAD) and pain as secondary outcomes and possible baseline predictors for HRQoL. Results: Complete data were available for 97 subjects (85 women, mean age [SD] 44.6 [9.7] years). The SF-36 subscales physical function (PF), general health (GH), vitality (VT), social function (SF) and mental health (MH), the visual analogue scale for pain and the HAD improved significantly (p &lt; 0.05) at follow-up compared with baseline. A pre-treatment probable depression (HAD score ≥11) was associated with a favourable outcome of the SF-36 subscales PF (odds ratio [OR] 5.6; p = 0.01), VT (OR 4.3; p = 0.02) and MH (OR 3.6; p = 0.02). A probable anxiety (HAD score ≥11) was associated with a favourable outcome of PF (OR 2.6; p = 0.05). There was an even stronger association for younger subjects (20-45 years), with probable depression scores at baseline and a favourable HRQoL outcome at follow up. Conclusion: This multidisciplinary rehabilitation programme, using a non-pharmacological cognitive approach, seemed to yield a better outcome concerning HRQoL measures in younger subjects with higher depression scores at baseline. This information is important for clinics when tailoring a multidisciplinary rehabilitation programme for patients with musculoskeletal chronic pain. abstract_id: PUBMED:15319686 Work resumption or not after rehabilitation? A descriptive study from six social insurance offices. The objective of this study was to describe measures and outcomes of vocational rehabilitation at six local national insurance offices in the same county in Sweden. Data were collected from mainframe registers and other records at each office. There were great differences in sickness allowance, incapacity rate, selected rehabilitation measures and resuming work. The percentage of sick-listed people who received any rehabilitation measure differed from 1.2 to 8.7%. The gender distribution for the study population was 36% men and 64% women and the predominant diagnosis was musculoskeletal pain conditions, which was followed by psychiatric disorders. Outcomes varied from office B, which reported 58% fully fit after completed planned rehabilitation, to office C, which reported only 24% fully fit. The clear differences in outcome between the offices indicate that various rehabilitation measures differ in effectiveness. The rehabilitation measure 'investigation of working ability' was not linked to any great proportion of people resuming work, but showed a greater correspondence to full disability pension. There were also large differences in social and demographic factors in the different municipalities. The effect of these on the rehabilitation process requires further investigation. abstract_id: PUBMED:37777907 Role of Rehabilitation Exercise on Myofascial Pain Syndrome Causing Upper Back Pain. Upper back pain is as painful or troublesome as the pain in the lower back or the neck. Myofascial pain syndrome which is most common cause of upper back pain is characterized by localized musculoskeletal pain and tenderness in association with trigger points. The aim of the study was to correlate the improvement of myofascial pain syndrome patients with proper and timely physical therapy. This quasi experimental study was conducted in the department of Physical Medicine and Rehabilitation, Bangabandhu Sheikh Mujib Medical University (BSMMU), Bangladesh, from 1st January 2008 to 31st August 2008 to see the role of rehabilitation exercise on myofascial pain syndrome causing upper back pain. Sixty (60) patients of myofascial pain syndrome causing upper back pain were randomly assigned for treatment; out of which 23(38.33%) were male and 37(61.66%) were female. The male and female ratio was 1:1.6. The patients selected for the trial were divided into two groups: Group A and Group B. In group A (n=28) the patients were treated with thermotherapy- Microwave diathermy, non-steroidal anti inflammatory drugs and activities of daily living instructions and in Group B (n=32) with same interventions in addition to rehabilitation exercises. Treatment duration was 6 weeks. The difference of treatment improvement was statistically significant (p&lt;0.05) from 1st week up to 6th week. After complete course of treatment 67.86% patients in Group A and 78.13% patients in group B reported improvement. So rehabilitation exercises can be a valuable adjunct to other modalities of treatment of myofascial pain syndrome causing upper back pain. abstract_id: PUBMED:28786337 Early rehabilitation after hospital admission for road trauma using an in-reach multidisciplinary team: a randomised controlled trial. Objective: To investigate the impact of an in-reach rehabilitation team for patients admitted after road trauma. Design: Randomised control trial of usual care versus early involvement of in-reach rehabilitation team. Telephone follow-up was conducted by a blind assessor at three months for those with minor/moderate injuries and six months for serious/severe injuries. Setting: Four participating trauma services in New South Wales, Australia. Subjects: A total of 214 patients admitted during 2012-2015 with a length of stay of at least five days. Intervention: Provision of rehabilitation services in parallel with ward based therapy using an in-reach team for the intervention group. The control group could still access the ward based therapy (usual care). Main Measures: The primary outcome was acute length of stay. Secondary outcomes included percentage requiring inpatient rehabilitation, function (Functional Independence Measure and Timed Up and Go Test), psychological status (Depression Anxiety and Stress Score 21), pain (Orebro Musculoskeletal Pain Questionnaire) and quality of life (Short Form-12 v2). Results: Median length of stay in acute care was 13 days (IQR 8-21). The intervention group, compared to the control group, received more physiotherapy and occupational therapy sessions (median number of sessions 16.0 versus 11.5, P=0.003). However, acute length of stay did not differ between the intervention and control groups (median 15 vs 12 days, P=0.37). There were no significant differences observed in the secondary outcomes at hospital discharge and follow-up. Conclusion: No additional benefit was found from the routine use of acute rehabilitation teams for trauma patients over and above usual care. Answer: Yes, male and female patients with chronic musculoskeletal pain do differ in their pre-treatment expectations of rehabilitation outcomes. A study analyzing differences between males and females in expectations about rehabilitation outcome found that while there were no gender differences in pre-treatment expectations for 21 of 25 possible expectations, there were small differences for 3 expectations and a strong, clinically important difference for 1 expectation. Specifically, male patients were more likely than female patients to expect that the treatment would enable them to cope better with being a spouse and that the cause of their pain would be found. Conversely, female patients were more likely than male patients to expect that the treatment would enable them to better accept the fact that they could no longer do what they were able to do in the past, and that they would have fewer problems with household activities (PUBMED:21174053).
Instruction: Does implicit emotion regulation in binge eating disorder matter? Abstracts: abstract_id: PUBMED:26117164 Does implicit emotion regulation in binge eating disorder matter? Objective: To examine if implicit emotion regulation (occurring outside of awareness) is related to binge eating disorder (BED) symptomatology and explicit emotion regulation (occurring within awareness), and can be altered via intervention. Methods: Implicit emotion regulation was assessed via the Emotion Conflict Task (ECT) among a group of adults with BED. Study 1 correlated BED symptomatology and explicit emotion regulation with ECT performance at baseline (BL) and after receiving BED treatment (PT). Study 2 generated effect sizes comparing ECT performance at BL and PT with healthy (non-eating disordered) controls (HC). Results: Study 1 yielded significant correlations (p&lt;.05) between both BED symptomatology and explicit emotion regulation with ECT performance. Study 2 found that compared to BL ECT performance, PT shifted (d=-.27), closer to HC. Preliminary results suggest a) BED symptomatology and explicit emotion regulation are associated with ECT performance, and b) PT ECT performance normalized after BED treatment. Conclusions: Implicit emotion regulation may be a BED treatment mechanism because psychotherapy, directly or indirectly, decreased sensitivity to implicit emotional conflict. Further understanding implicit emotion regulation may refine conceptualizations and effective BED treatments. abstract_id: PUBMED:32896682 Emotion regulation flexibility and disordered eating. Research suggests that individuals with eating disorders use more putatively maladaptive emotion regulation strategies and fewer putatively adaptive strategies. However, there is growing theoretical and empirical support for the notion that the efficacy of emotion regulation strategies varies across situations. Thus, successful emotion regulation is characterized by the ability to flexibly choose between emotion regulation strategies in order to fit one's situational needs. Despite growing support for this conceptualization of successful emotion regulation, no research has investigated it in relation to disordered eating. Using the emotion regulation choice paradigm, this study investigated the association between emotion regulation flexibility and disordered eating. Women (N = 50) completed self-report questionnaires and a laboratory-based emotion regulation choice task to assess emotion regulation flexibility. Results indicated that lower emotion regulation flexibility was associated with more frequent purging and excessive exercise. Emotion regulation flexibility was not significantly associated with binge eating or overall eating disorder psychopathology. These findings suggest that individuals who engage in unhealthy compensatory behaviors exhibit reduced emotion regulation flexibility. If replicated in clinical samples, treatment for eating disorders characterized by compensatory behaviors may benefit from incorporating additional strategies to help people utilize emotion regulation strategies in a flexible, situationally appropriate manner. abstract_id: PUBMED:34629115 Examining emotion regulation in binge-eating disorder. Background: Inefficient mechanisms of emotional regulation appear essential in understanding the development and maintenance of binge-eating disorder (BED). Previous research focused mainly on a very limited emotion regulation strategies in BED, such as rumination, suppression, and positive reappraisal. Therefore, the aim of the study was to assess a wider range of emotional regulation strategies (i.e. acceptance, refocusing on planning, positive refocusing, positive reappraisal, putting into perspective, self-blame, other-blame, rumination, and catastrophizing), as well as associations between those strategies and binge-eating-related beliefs (negative, positive, and permissive), and clinical variables (eating disorders symptoms, both anxiety, depressive symptoms, and alexithymia). Methods: Women diagnosed with BED (n = 35) according to the DSM-5 criteria and healthy women (n = 41) aged 22-60 years were assessed using: the Eating Attitudes Test-26, the Eating Beliefs Questionnaire-18, the Hospital Anxiety and Depression Scale, the Toronto Alexithymia Scale-20, the Cognitive Emotion Regulation Questionnaire, and the Difficulties in Emotion Regulation Scale. Statistical analyses included: Student t - tests or Mann-Whitney U tests for testing group differences between BED and HC group, and Pearson's r coefficient or Spearman's rho for exploring associations between the emotion regulation difficulties and strategies, and clinical variables and binge-eating-related beliefs in both groups. Results: The BED group presented with a significantly higher level of emotion regulation difficulties such as: nonacceptance of emotional responses, lack of emotional clarity, difficulties engaging in goal-directed behavior, impulse control difficulties, and limited access to emotion regulation strategies compared to the healthy controls. Moreover, patients with BED were significantly more likely to use maladaptive strategies (rumination and self-blame) and less likely to use adaptive strategies (positive refocusing and putting into perspective). In the clinical group, various difficulties in emotion regulation difficulties occurred to be positively correlated with the level of alexithymia, and anxiety and depressive symptoms. Regarding emotion regulation strategies, self-blame and catastrophizing were positively related to anxiety symptoms, but solely catastrophizing was related to the severity of eating disorder psychopathology. Conclusions: Our results indicate an essential and still insufficiently understood role of emotional dysregulation in BED. An especially important construct in this context seems to be alexithymia, which was strongly related to the majority of emotion regulation difficulties. Therefore, it might be beneficial to pay special attention to this construct when planning therapeutic interventions, as well as to the maladaptive emotion regulation strategies self-blame and catastrophizing, which were significantly related to BED psychopathology. abstract_id: PUBMED:36975259 An Examination of the Relationships between Eating-Disorder Symptoms, Difficulties with Emotion Regulation, and Mental Health in People with Binge Eating Disorder. Eating disorders, such as binge eating disorder, are commonly associated with difficulties with emotion regulation and mental-health complications. However, the relationship between eating-disorder symptoms, difficulties with emotion regulation, and mental health in people with binge eating disorder is unclear. Thus, we investigated associations between eating-disorder symptoms, difficulties with emotion regulation, and mental health in 119 adults with binge eating disorder. Participants were assessed with the Eating Disorder Examination Questionnaire, Loss of Control over Eating Scale, Difficulties in Emotion Regulation Scale, Depression Anxiety and Stress Scale, and the 12-Item Short Form Survey at the pre-treatment phase of a randomized controlled trial. Structural-equation-modelling path analysis was used to investigate relationships between variables. We found that (1) eating-disorder behaviors had a direct association with depression, anxiety, and stress; (2) depression, psychological stress, difficulties with emotion regulation, and eating-disorder psychopathology had a direct association with mental-health-related quality of life; and (3) eating-disorder psychopathology/behaviors and stress had a direct association with difficulties with emotion regulation. Our findings show that depression, stress, difficulties with emotion regulation, and eating-disorder psychopathology were related in important ways to mental-health complications in people with binge eating disorder. abstract_id: PUBMED:35472237 Emotion regulation in binge eating disorder. A review of the literature on emotion regulation in binge eating disorder (BED) published both in English and Polish between 1990 and 2020. BED might be considered as an impulsive and compulsive disorder associated with altered reward sensitivity and food-related attentional bias. The growing body of research indicated that there were corticostriatal circuitry alterations in BED, comparable to those observed in substance abuse, including altered function of orbitofrontal, prefrontal and insular cortices with the striatum included. Negative emotions and deficits in their regulation play a significant role in BED. Processing of anger, anxiety and sadness appear to be particularly important in this disorder. Research results identified an increase in negative emotions preceding episodes of binge eating. However, there is still inconsistency when it comes to whether these episodes alleviate negative affect. Individuals with BED more often use non-adaptive emotion regulation strategies, such as rumination and suppression of negative sensations. Whereas adaptive ones, for instance, cognitive reappraisal, are used less often. Clinical implications, besides pharmacology, highlight the high effectiveness of enhanced cognitive behavioral therapy (CBT-E), dialectic-behavioral therapy (DBT) and psychodynamic therapy in the treatment of emotional dysregulation in BED. Further studies, including ecological momentary assessment (EMA), should focus on emotional changes related to the binge cycle and the identification of reinforcing factors of BED. abstract_id: PUBMED:31621002 Emotion regulation difficulties and dietary restraint independently predict binge eating among men. Purpose: Research on eating disorder symptoms among men is growing, but there still are few specific models of disordered eating. The aim of the current study is to expand the literature on male eating behavior by investigating how negative affect can lead to binge eating and overeating through possible mediators, i.e., emotion regulation difficulties and dietary restraint, both separately and together in an integrative model. Methods: Altogether 104 men (mean age 27.3, SD = 8.02; BMI 24.2, SD = 2.92) participated in the study using ecological momentary assessment to collect data on negative affect, the urge to restrict, and the occurrence of binge eating and overeating episodes. Self-report questionnaires were used to measure emotion regulation difficulties and eating disorder symptoms. Results: We found that both emotion regulation difficulties and the urge to restrict mediated the effect of negative affect on binge eating in separate models as well as in an integrated model. However, emotion regulation difficulties did not predict the urge to restrict in the joint model. These models did not predict overeating, indicating the pathological nature of binge eating. Conclusions: Our data suggest that negative emotions themselves do not lead to binge eating, but manifest through emotion regulation difficulties and dietary restraint. The results indicate that emotion regulation and restraint play a role in precipitating binge eating among men. However, the emotion regulation and restraint pathways seem to operate independently from one another. Our study also supports the differentiation of overeating and binge eating, as these seem to have different etiologies. Level Of Evidence: Level V, cross-sectional descriptive study. abstract_id: PUBMED:25530255 Emotion regulation model in binge eating disorder and obesity--a systematic review. Deficits in emotion regulation processes are a common and widely used explanation for the development and maintenance of binge eating disorder (BED). It is assumed that BED patients - as they have difficulty regulating their negative emotions - use binge eating to cope with these emotions and to find relief. However, the number of experimental studies investigating this assumption is scarce and the differentiation of obese individuals with and without BED regarding the emotion regulation model is not verified. We reviewed literature for experimental studies investigating the emotion regulation model in obese patients (OB) with and without BED. Our search resulted in 18 experimental studies examining the triggering effect of negative emotions for binge eating or its effects on subsequent relief. We found evidence indicating that negative emotion serves as a trigger for binge eating in the BED group unlike the obese group without BED. Considering the small number of studies, we found evidence for a (short-term) improvement of mood through food intake, irrespective of group. abstract_id: PUBMED:29101588 Differences in Emotion Regulation Difficulties Across Types of Eating Disorders During Adolescence. Emotion regulation appears to play a key role in eating disorders. However, prior attempts to associate specific emotion regulation abilities with specific types of eating disorders resulted in inconsistent findings. Moreover, far less is known about emotion regulation in eating disorders during adolescence, a critical period of emotional development. The current study addresses this gap, comparing emotion regulation characteristics between adolescents with restrictive types of eating disorders and those with binge eating or purging types of eating disorders. Ninety-eight adolescents with eating disorders (49 with restrictive and 49 with binge eating/purging eating disorders) completed a set of questionnaires including the Difficulties in Emotion Regulation Scale (DERS). The results revealed that binge eating/purging types of eating disorders were associated with greater difficulties in a variety of emotion regulation dimensions including impulse control, goal-directed behavior and access to effective emotion regulation strategies. Awareness and clarity of emotions were also worse in the binge eating/purging types of eating disorders, but this difference did not remain when comorbid psychopathology measures were controlled for. Moreover, the emotion regulation profile of adolescents with anorexia nervosa-binging/purging type was more similar to that of adolescents with bulimia nervosa than to that of adolescents with anorexia nervosa-restrictive type. While both restrictive and binge eating/purging eating disorders have been associated with emotion regulation difficulties, the current study shows that the presence of binge eating or purging episodes is linked with greater severity of emotion regulation deficits among adolescents with eating disorders. abstract_id: PUBMED:27252857 Difficulties in emotion regulation in patients with eating disorders. Background: A defining characteristic of eating disorders (EDs) is difficulty with emotion regulation (ER). Previous research indicates that ED subtypes demonstrate differing ER difficulties. Specifically, individuals with Anorexia Nervosa (AN) or Bulimia Nervosa (BN) show greater impairment in their ability to regulate emotions in areas such as achieving goals while upset, reacting impulsively to distress, and effectively using coping strategies, as compared to those with Binge Eating Disorder (BED). However, limited research includes the diagnostic category of Eating Disorder, Not Otherwise Specified (EDNOS). The aim of this study was to better understand ER difficulties for all ED diagnoses, especially EDNOS. It was hypothesized that patients with EDs will demonstrate similar ER difficulties as psychiatric patients without EDs and that patients with EDNOS will be similar in their total level of ER difficulties but will differ in their specific types of difficulties in ER as compared to patients with other EDs. Methods: Participants included 404 adults presenting to an inpatient psychiatric hospital. Psychiatric diagnoses, including EDs, were determined using the Structured Clinical Interview for DSM Disorders. Differences in specific and overall difficulties with ER were examined across psychiatric patients using the multidimensional Difficulties in Emotion Regulation Scale. Results: Results of this study indicate that individuals with EDs have greater ER difficulties in most domains of ER and that those with BED and EDNOS demonstrate the most significant differences in ER as compared to psychiatric patients without EDs. Additionally, it was found that ED subtypes typically did not differ in terms of specific difficulties in ER. One exception emerged indicating that individuals with BED demonstrated significantly greater difficulty on the Limited Access to Emotion Regulation Strategies subscale as compared to those with EDNOS. Conclusions: Researchers were able to clarify difficulties in ER across ED diagnoses. Results highlight the importance of providing ER skills training for patients with EDs, particularly those with BED and EDNOS, and give insight into the specific areas of ER that may be important for these patients to focus on throughout recovery. abstract_id: PUBMED:25516798 Change in emotion regulation during the course of treatment predicts binge abstinence in guided self-help dialectical behavior therapy for binge eating disorder. Background: Dialectical behavior therapy (DBT), which appears to be an effective treatment for binge eating disorder (BED), focuses on teaching emotion regulation skills. However, the role of improved emotion regulation in predicting treatment outcome in BED is uncertain. Methods: This secondary analysis explored whether change in self-reported emotion regulation (as measured by the Difficulties in Emotion Regulation Scale) during treatment was associated with abstinence from binge eating at post-treatment and 4-, 5-, and 6-month follow-up in individuals who received a guided self-help adaptation of DBT for BED. Participants were 60 community-based men and women with BED who received a self-help manual and six 20-minute support phone calls. Results: Greater improvement in self-reported emotion regulation between pre- and post-treatment predicted abstinence from binge eating at post-treatment, 4-, 5-, and 6-month follow-up. However, some follow-up results were no longer significant when imputed data was excluded, suggesting that the effect of emotion regulation on binge abstinence may be strongest at 4-month follow-up but decline across a longer duration of follow-up. Conclusions: This study provides preliminary support for the theoretical role played by improved emotion regulation in achieving binge eating abstinence. If this finding is replicated with larger samples, further research should identify specific techniques to help more individuals to effectively regulate their emotions over a longer duration. Answer: Yes, implicit emotion regulation in binge eating disorder (BED) does matter. Research has shown that implicit emotion regulation, which occurs outside of awareness, is related to BED symptomatology and explicit emotion regulation, which occurs within awareness. A study found significant correlations between BED symptomatology and explicit emotion regulation with performance on the Emotion Conflict Task (ECT), suggesting that BED symptomatology and explicit emotion regulation are associated with implicit emotion regulation performance. Furthermore, after receiving BED treatment, participants' ECT performance shifted closer to that of healthy controls, indicating that implicit emotion regulation may be a treatment mechanism for BED. Psychotherapy, whether directly or indirectly, decreased sensitivity to implicit emotional conflict, suggesting that understanding implicit emotion regulation could refine conceptualizations and treatments for BED (PUBMED:26117164). Additionally, difficulties in emotion regulation are a common characteristic of eating disorders, including BED. Individuals with BED often use maladaptive emotion regulation strategies and have difficulty accessing adaptive strategies. These difficulties are associated with clinical variables such as anxiety, depressive symptoms, and alexithymia, which is a strong indicator of emotion regulation difficulties. Therefore, addressing emotion regulation, particularly implicit emotion regulation, is crucial in the treatment of BED (PUBMED:34629115). Moreover, the literature suggests that negative emotions and deficits in their regulation play a significant role in BED, with individuals often using non-adaptive strategies like rumination and suppression. Treatment approaches such as cognitive behavioral therapy (CBT-E), dialectic-behavioral therapy (DBT), and psychodynamic therapy have been highlighted for their effectiveness in addressing emotional dysregulation in BED (PUBMED:35472237). In summary, implicit emotion regulation is an important factor in BED, and its alteration through treatment may contribute to the normalization of BED symptomatology and the improvement of explicit emotion regulation strategies.
Instruction: MR-guided vacuum-assisted breast biopsy: is it an essential technique? Abstracts: abstract_id: PUBMED:16341985 MR-guided vacuum assisted breast biopsy Objective: This prospective study was undertaken to determine the feasibility of MR-guided vacuum assisted breast biopsy with the Vacora-vacuum-biopsy system for histological evaluation of suspicious lesions in MR-mammography. Material And Methods: During 3 months MR-guided vacuum assisted breast biopsy was indicated in 12 patients with suspicious lesions in MR-mammography. Results: MR-guided vacuum assisted breast biopsy with the Vacora-vacuum-biopsy system could be performed in 9 of 12 patients. In 2 patients the lesions could not be identified at the time of the intervention. In one patient the intervention could not be performed due to obesity. Histopathology revealed benign lesions in 8 patients and malignancy in one patient. In one of the cases with benign histology, the biopsy specimen was not representative for the lesion. Conclusions: MR-guided vacuum assisted breast biopsy with Vacora-vacuum-biopsy is technical feasible can be performed with a low complication rate. abstract_id: PUBMED:36980315 Freehand 1.5T MR-Guided Vacuum-Assisted Breast Biopsy (MR-VABB): Contribution of Radiomics to the Differentiation of Benign and Malignant Lesions. Radiomics and artificial intelligence have been increasingly applied in breast MRI. However, the advantages of using radiomics to evaluate lesions amenable to MR-guided vacuum-assisted breast biopsy (MR-VABB) are unclear. This study includes patients scheduled for MR-VABB, corresponding to subjects with MRI-only visible lesions, i.e., with a negative second-look ultrasound. The first acquisition of the multiphase dynamic contrast-enhanced MRI (DCE-MRI) sequence was selected for image segmentation and radiomics analysis. A total of 80 patients with a mean age of 55.8 years ± 11.8 (SD) were included. The dataset was then split into a training set (50 patients) and a validation set (30 patients). Twenty out of the 30 patients with a positive histology for cancer were in the training set, while the remaining 10 patients with a positive histology were included in the test set. Logistic regression on the training set provided seven features with significant p values (&lt;0.05): (1) 'AverageIntensity', (2) 'Autocorrelation', (3) 'Contrast', (4) 'Compactness', (5) 'StandardDeviation', (6) 'MeanAbsoluteDeviation' and (7) 'InterquartileRange'. AUC values of 0.86 (95% C.I. 0.73-0.94) for the training set and 0.73 (95% C.I. 0.54-0.87) for the test set were obtained for the radiomics model. Radiological evaluation of the same lesions scheduled for MR-VABB had AUC values of 0.42 (95% C.I. 0.28-0.57) for the training set and 0.4 (0.23-0.59) for the test set. In this study, a radiomics logistic regression model applied to DCE-MRI images increased the diagnostic accuracy of standard radiological evaluation of MRI suspicious findings in women scheduled for MR-VABB. Confirming this performance in large multicentric trials would imply that using radiomics in the assessment of patients scheduled for MR-VABB has the potential to reduce the number of biopsies, in suspicious breast lesions where MR-VABB is required, with clear advantages for patients and healthcare resources. abstract_id: PUBMED:26984430 MR-guided vacuum-assisted breast biopsy of MRI-only lesions: a single center experience. Purpose: The purpose of this study was to compare three different biopsy devices on false-negative and underestimation rates in MR-guided, vacuum-assisted breast biopsy (VABB) of MRI-only lesions. Methods: This retrospective, single-center study was IRB-approved. Informed consent was waived. 467 consecutive patients underwent 487 MR-guided VABB using three different 8-10-gauge-VABB devices (Atec-9-gauge,A; Mammotome-8-gauge,M; Vacora-10-gauge,V). VABB data (lesion-type, size, biopsy device, histopathology) were compared to final diagnosis (surgery, n = 210 and follow-up, n = 277). Chi-square, and Kruskal-Wallis tests were applied. P values &lt; 0.05 were considered significant. Results: Final diagnosis was malignant in 104 (21.4 %), high risk in 64 (13.1 %) and benign in 319 (65.5 %) cases. Eleven of 328 (3.4 %) benign-rated lesions were false-negative (1/95, 1.1 %, A; 2/73, 2.7 %, M; 8/160 5.0 % V; P = 0.095). Eleven high-risk (11/77, 14.3 %) lesions proved to be malignant (3/26, 11.5 % A; 4/12, 33.3 % M; 4/39, 10.3 % V; P = 0.228). Five of 34 (14.7 %) DCIS were upgraded to invasive cancer (2/15, 13.3 %, A; 1/6, 16.6 % M; 2/13, 15.3 %, V; P = 0.977). Lesion size (P = 0.05) and type (mass vs. non-mass, P = 0.107) did not differ significantly. Conclusion: MR-guided VABB is an accurate method for diagnosis of MRI-only lesions. No significant differences on false-negative and underestimation rates were observed between three different biopsy devices. Key Points: • MR-guided VABB is an accurate procedure for the diagnosis of MRI-only lesions. • Similar false-negative and underestimation rates allow all three different MR-guided VABB devices for clinical application. • High-risk lesions should undergo surgery due to a substantial underestimation rate. • Agreement between MR-guided VABB and final diagnosis (benign/malignant) was 95.5% (465/487). abstract_id: PUBMED:35449697 Ultrasound-guided vacuum-assisted breast biopsy in the diagnosis of cancer recurrence at the surgical scar: a report of three cases. Aim Of The Study: Ultrasound-guided vacuum-assisted biopsy is being increasingly used in the diagnosis of breast lesions. The advantages of vacuum-assisted biopsy over core needle biopsy include large sample and higher diagnostic accuracy. Indications for ultrasound-guided vacuum-assisted biopsy include suspicious calcifications visible on ultrasound, architectural distortion, and very subtle or insinuating lesions. Case Description: We present three patients treated for breast cancer with breast-conserving surgery who developed suspicious findings on mammogram and MRI at or near the surgical scar. The findings were subtle, small, or atypical lesions on ultrasound. Ultrasound-guided vacuum-assisted biopsy was performed, and recurrence was diagnosed. The technique was advantageous due to real-time imaging, ability to control the path of the needle, obtaining multiple cores with a single skin puncture and single pass, supine position, no radiation, and no IV contrast. Conclusions: Ultrasound-guided vacuum-assisted biopsy should be considered in cases involving multiple suspicious findings at or near the surgical scar, with subtle or atypical sonographic correlates. Vacuum-assisted biopsy is indicated; yet ultrasound guidance is more comfortable, no radiation and no contrast. abstract_id: PUBMED:27861232 Safety and Efficacy of Magnetic Resonance-Guided Vacuum-Assisted Large-Volume Breast Biopsy (MR-Guided VALB). Objective: Magnetic resonance (MR)-guided vacuum-biopsy is technically demanding and may fail depending on target-lesion size or breast size, and location of lesions within the breast. We developed an MR-guided vacuum-assisted biopsy protocol that collects larger amounts of tissue, aiming at an at least partial or complete ablation of the target-lesion, just as it is intended during surgical (excisional) biopsy. Rationale is to avoid biopsy failures (false-negative results due to undersampling) by collecting larger amounts of tissue. We report on our experience with MR-guided vacuum-assisted large-volume breast biopsy (VALB) (MR-guided VALB) with regard to clinical success and complication rates. Materials: Institutional review board-approved analysis of 865 patients with 1414 MR imaging (MRI)-only breast lesions who underwent tissue sampling under MRI guidance. Magnetic resonance-guided VALB was performed on a 1.5 T-system with a 9G system. Per target lesion, we collected at least 24 samples, with the biopsy notch directed toward the position of the target until on postbiopsy control imaging the target lesion appeared completely or at least greatly removed. The standard-of-reference was established by at least 24-months follow-up (for benign biopsy results), or results of surgical histology (for malignant or borderline results). We investigated the technical success rates as a function of factors that usually interfere with MR-guided vacuum biopsy. Results: Target lesions were located in the central versus peripheral parts of the breast in 66.6% (941/1414) versus 33.6% (473/1414), occurred in large, intermediate, or small breasts in 22.7% (321/1414), 56.4% (797/1414), or 20.9% (296/1414), corresponded to nonmass enhancement (NME) versus mass enhancement (ME) in 64.0% (905/1414) vs. 36.0% (509/1414), with an average size of 23 mm for NME versus 9 mm for ME, respectively. Primary technical failures, that is, inability to reach the target lesion occurred in 0.2% of patients (2/865) and 0.1% of target lesions (2/1414). Successful biopsy, that is, an MR-guided VALB diagnosis matching with the standard of reference, was achieved in 99.5% (859/863) of patients and 99.7% (1408/1412) target lesions that had been amenable to MR-guided VALB. In 0.5% of patients (4/863) and 0.3% of target lesions (4/1412), a radiologic-pathologic mismatch suggested a false-negative biopsy, confirmed by secondary excisional biopsy. The likelihood of failure was independent of the lesion's location in the breast, breast size, target lesion size, or target lesion type (NME vs ME). None of the patients with benign MR-guided VALB diagnoses developed breast cancer at the biopsy site during follow-up of 2 years. None of the patients developed major complications. Conclusion: Magnetic resonance-guided VALB is a safe procedure that is associated with a high success rate (99.7%) that is independent of the size, type, or location of a target lesion, or the size of the breast, and is associated with a very low complication rate. abstract_id: PUBMED:29772934 Image-guided vacuum-assisted breast biopsy in the diagnosis of breast microcalcifications. Objective The present study was performed to assess the accuracy and clinical value of image-guided vacuum-assisted breast biopsy in the diagnosis of mammography-detected breast microcalcifications. Methods This prospective study involved 100 patients with suspicious mammography-detected microcalcifications who underwent image-guided vacuum-assisted breast biopsies from January 2013 to October 2016. Stereotactic vacuum-assisted breast biopsy (SVAB) was performed in 64 patients, and ultrasound (US)-guided vacuum-assisted breast biopsy (US-VAB) was performed in 36 patients in whom application of SVAB was difficult. The microcalcifications were detectable by mammography or US. The mean follow-up duration was 32 months (range, 9-57 months). The biopsy results were correlated with the pathological examination results. Results The positive predictive value of microcalcifications detected by mammography and US in the diagnosis of breast cancer was 37.0% (37/100) and 52.8% (19/36), respectively. The negative predictive value of microcalcifications detected by US in the diagnosis of breast cancer was 71.9% (46/64). No recurrence was observed in the 37 patients with breast cancer. No evidence of malignancy was found among the 63 patients with benign lesions. Conclusion Image-guided vacuum-assisted breast biopsy is accurate and minimally invasive. It can be used as a safe approach for diagnosis in patients with breast microcalcifications. abstract_id: PUBMED:33549383 Advances in Ultrasound-Guided Vacuum-Assisted Biopsy of Breast Microcalcifications. Microcalcification is one of the significant indications for or can even be the sole mammographic feature of breast cancer, especially occult breast cancer. Biopsy and pathologic examination are the most important methods used to identify the nature of suspicious microcalcifications. Stereotactic vacuum-assisted breast biopsy (S-VAB) is the most commonly used biopsy method for microcalcifications currently because of the high detection rate of mammography for microcalcifications. However, in recent years, several clinical studies have gradually found that ultrasound-guided vacuum-assisted breast biopsy (US-VAB) could be an alternative to S-VAB for microcalcifications to some extent, and has its own advantages of flexibility, real-time performance, comfort and high accessibility compared with mammography. An overview of US-VAB of microcalcifications is provided with respect to success rate, diagnostic accuracy, advantages and limitations. On the basis of numerous studies and clinical experience, US-VAB proved to be a valid alternative to S-VAB, with comparable diagnostic accuracy if the microcalcification foci could be detected by ultrasound. And for patients with ultrasound-invisible microcalcifications who are not suitable for or tolerable of S-VAB, US-VAB combined with mammography localization of microcalcifications can also be considered. abstract_id: PUBMED:23077445 Outcome of MRI-guided vacuum-assisted breast biopsy - initial experience at Institute of Oncology Ljubljana, Slovenia. Background: Like all breast imaging modalities MRI has limited specificity and the positive predictive value for lesions detected by MRI alone ranges between 15 and 50%. MRI guided procedures (needle biopsy, presurgical localisation) are mandatory for suspicious findings visible only at MRI, with potential influence on therapeutic decision. The aim of this retrospective study was to evaluate our initial clinical experience with MRI-guided vacuum-assisted breast biopsy as an alternative to surgical excision and to investigate the outcome of MRI-guided breast biopsy as a function of the MRI features of the lesions. PATIENTS AND METHODS.: In 14 women (median age 51 years) with 14 MRI-detected lesions, MRI-guided vacuum-assisted breast biopsy was performed. We evaluated the MRI findings that led to biopsy and we investigated the core and postoperative histology results and follow-up data. Results: The biopsy was technically successful in 14 (93%) of 15 women. Of 14 biopsies in 14 women, core histology revealed 6 malignant (6/14, 43%), 6 benign (6/14, 43%) and 2 high-risk (2/14, 14%) lesions. Among the 6 cancer 3 were invasive and 3 were ductal carcinoma in situ (DCIS). The probability of malignancy in our experience was higher for non-mass lesion type and for washout and plateau kinetics. Conclusions: Our initial experience confirms that MRI-guided vacuum-assisted biopsy is fast, safe and accurate alternative to surgical biopsy for breast lesions detected at MRI only. abstract_id: PUBMED:26753603 Comparison of needle aspiration and vacuum-assisted biopsy in the ultrasound-guided drainage of lactational breast abscesses. Purpose: The purpose of this study was to compare needle aspiration and vacuum-assistedbiopsy in the ultrasound-guided treatment of lactational breast abscesses. Methods: Between January 2005 and December 2014, a total of 74 patients presented withlactational breast abscesses. Thirty of these patients underwent treatment with antibioticsalone, while the remaining 44 lactating women with breast abscesses were treated withneedle aspiration (n=25) or vacuum-assisted biopsy (n=19). Age, duration of lactation, abscess diameter, pus culture results, the number of interventions, the healing time, and the cure rate were reviewed and compared between these two groups. The Student's t test and the chi-square test were used to compare the variables. Results: No significant difference was found in the cure rate between the needle aspirationgroup (22/25, 88%) and the vacuum-assisted biopsy group (18/19, 94.7%) (P=0.441). However, the mean healing time was significantly shorter in the vacuum-assisted biopsy group (6.7 days) than in the needle aspiration group (9.0 days) (P=0.001). Conclusion: Vacuum-assisted biopsy is a viable option for the management of lactational breast abscesses and was found to lead to a shorter healing time than needle aspiration. However, further study is necessary to establish the clinical efficacy of vacuum-assisted biopsy in the management of lactational breast abscesses. abstract_id: PUBMED:27355898 Underestimation Rate at MR Imaging-guided Vacuum-assisted Breast Biopsy: A Multi-Institutional Retrospective Study of 1509 Breast Biopsies. Purpose To assess the rate of underestimation of atypical ductal hyperplasia (ADH) and ductal carcinoma in situ (DCIS) at magnetic resonance (MR) imaging-guided vacuum-assisted breast biopsy and to explore the imaging, demographic, and histologic characteristics associated with lesion upgrade after surgery. Materials and Methods This retrospective study had institutional review board approval, and the need to obtain informed patient consent was waived. A total of 1509 MR imaging-guided vacuum-assisted biopsy procedures were performed in nine centers. A diagnosis of ADH was obtained after biopsy in 72 cases, and a diagnosis of DCIS was obtained in 118 cases. Pearson χ2 and Fisher tests were used to assess the association between demographic, MR imaging, and biopsy features and lesion upgrade. Univariate statistical analyses were performed, and each significant parameter was entered into a multivariate logistic regression analysis. Results Surgical excision was performed in 66 of the 72 ADH cases and in 117 of 118 DCIS cases. The ADH and DCIS underestimation rates were 25.8% (17 of 66) and 23.1% (27 of 117), respectively. Underestimation was 5.6-fold (odds ratio [OR] = 5.6; 95% confidence interval [CI]: 1.7, 18.3) and 3.6-fold (OR = 3.6; 95% CI: 1.2, 10) more likely in mass (n = 20 for ADH and n = 20 for DCIS) than in non-mass (n = 46 for ADH and n = 97 for DCIS), compared with nonunderestimation, in ADH and DCIS respectively. At multivariate analysis, the use of a 9- or 10-gauge needle versus a 7- or 8-gauge needle was also an independently associated with underestimation when a diagnosis of ADH was made at MR imaging-guided biopsy. No other parameters were associated with of ADH or DCIS upgrade at surgery. Conclusion The rates of underestimation in ADH and DCIS diagnosed at MR imaging-guided vacuum-assisted biopsy were high, at around 25%, and were significantly associated with the presence of a mass at MR imaging. © RSNA, 2016. Answer: MR-guided vacuum-assisted breast biopsy (MR-VABB) is a technique that has been shown to be feasible and can be performed with a low complication rate for the histological evaluation of suspicious lesions in MR-mammography (PUBMED:16341985). It is particularly useful for lesions that are visible only on MRI, which cannot be detected by other imaging modalities such as mammography or ultrasound (PUBMED:23077445). This makes MR-VABB an essential technique for certain cases where other biopsy methods may not be applicable. The technique has been reported to have high diagnostic accuracy and a low false-negative rate, making it a reliable method for diagnosing MRI-only lesions (PUBMED:26984430). Additionally, MR-VABB has been shown to have similar false-negative and underestimation rates across different biopsy devices, suggesting that it is a versatile technique that can be adapted to various clinical settings (PUBMED:36980315). Moreover, MR-VABB has been developed to collect larger amounts of tissue, aiming at partial or complete ablation of the target lesion, which helps to avoid false-negative results due to undersampling (PUBMED:27861232). This approach has been associated with a high success rate and is independent of the size, type, or location of a target lesion, or the size of the breast (PUBMED:35449697). The use of radiomics and artificial intelligence in conjunction with MR-VABB has also been explored, and it has been found to increase the diagnostic accuracy of standard radiological evaluation of MRI suspicious findings, potentially reducing the number of unnecessary biopsies (PUBMED:29772934). In summary, MR-guided vacuum-assisted breast biopsy is an essential technique for the accurate diagnosis of breast lesions that are only visible on MRI. It offers a high success rate, low complication rate, and the potential to reduce the number of unnecessary biopsies when combined with advanced analytical techniques such as radiomics (PUBMED:33549383).
Instruction: Specialising in radiology in Switzerland: still attractive for medical school graduates? Abstracts: abstract_id: PUBMED:21458185 Specialising in radiology in Switzerland: still attractive for medical school graduates? Purpose: To gain insight into the professional characteristics of radiologists in Switzerland and to determine how to enhance the attractiveness of radiology to medical graduates as a specialty. Materials And Methods: Data from 262 members of the Swiss Society of Radiology (m:f = 76:24%) obtained in a questionnaire survey were analysed regarding socio-demographic variables, working status, specialty, main fields of interest, career success, mentoring and reasons for the shortage of radiologists. Results: 35 (56.4%) female and 85 (45.5%) male radiologists were aged ≤ 45 years. 228 (87%) were board-certified; 44 (17.9%) had completed a sub-specialisation. Men worked part-time mostly just before retirement, while women worked part-time at a younger age. As reasons for specialty choice, the wide range of clinical work and the combination of technology and medicine were ranked highest. Women reported significantly less career success and support. To improve the attractiveness of radiology to graduates, radiology should be visible on medical school curricula. Conclusion: In Switzerland, more female radiologists work part-time than male ones, and there is less career success and support for women. In order to make radiology more attractive to medical graduates as a specialty, structured residency programmes and reliable gender-respecting career support are needed. abstract_id: PUBMED:37183141 International Medical Graduates in Radiology Residencies: Demographics, Performance, and Visa Issues. Rationale And Objectives: The number of international medical graduates (IMG) in radiology residencies has varied from year to year even as the number of candidates continues to grow. It is unclear from which countries the IMGs are arriving and what visas are being used to accommodate them. Materials And Methods: We sent a survey to 195 program directors (PD) in diagnostic radiology (DR) inquiring about the number and nationality of IMG residents in their program, their attitudes about IMG candidates, the performance of their IMG trainees, and the visas that are offered. Results: We received responses from 121 of 195 (62.1%) DR programs (121/149 =81.2% of actionable emails). 80/121 (66.1%) had at least one IMG in their DR residency program and the countries of origin included India (36), Iran (30), Saudi Arabia (24), Egypt (16), Canada (14), Brazil (14), and Pakistan (9), as the most common. While most programs (76/104, 73.1%) offered J1 visas, 23/99 (23.2%) provided H-1B visas to trainees. IMG DR residents overall performed as well as American graduates, with an equal number of PDs saying IMGs performed better and worse than American graduates. PDs' issues with IMGs centered on visas: (1) expense, (2) lack of familiarity, (3) Educational Commission for Foreign Medical Graduates regulations, and (4) time commitment in submitting paperwork. Conclusion: Most radiology IMG residents originate from India and Middle Eastern countries. Once enrolled, IMG residents perform similarly to US graduates. However, adding IMG candidates to the training program requires overcoming bureaucratic and monetary hurdles around visas. Summary Sentence: Most DR international medical graduate residents are from India or the Middle East. Although their performance is the same as American graduates in general, PDs note the monetary and bureaucratic hassles accompanying their recruitment. abstract_id: PUBMED:34975425 Analysis of Career-Advancement for Medical School Graduates During the COVID-19 Pandemic at a Chinese Teaching Hospital. The COVID-19 pandemic has led to widespread social and economic disruptions in the balance of labor market. Our study aims to analyze the career-advancement of medical school graduates during the COVID-19 pandemic and the associated influencing factors. We collected and compared the career-advancement data of medical school graduates at a Chinese teaching hospital from 2016 to 2020. A self-designed 20-element medical graduates employment questionnaire and a Chinese adaptation of the General self-efficacy scale were distributed by the Questionnaire Star platform. Univariate analysis (Pearson's Chi-square-test and Fisher's exact-test) and subsequent binary logistic regression were used. Findings demonstrated that the career-advancement rate of medical graduate students in 2020 is 71.3%, which is significantly lower than that for the preceding 4 years from 2016 to 2019 (p &lt; 0.001). Of the 251 employed medical school graduates, 159 (63.3%) have signed an employment agreement or contract, 83 (33.1%) are pursuing continued education domestically, and 9 (3.6%) have offers from foreign institutions. Univariate analysis revealed statistical differences of medical graduates' employment among various specialties, oral defense completion, job search start date, CV submission times, participation in a probationary period, and self-efficacy. Significant predictors for successful employment were early job search and self-efficacy by logistic regression model (χ2 = 12.719, p &lt; 0.001). Most medical graduates assumed that the COVID-19 pandemic had a major (40.6%) or moderate (48%) impact on career-advancement. The COVID-19 pandemic has profoundly impacted the career-advancement of medical school graduates in 2020. We should make adaptive changes to improve the career-advancement of medical graduates. abstract_id: PUBMED:33158699 A 15-Year Analysis of International Medical Graduates Matching Into Diagnostic Radiology Residency Programs in the United States. Rationale And Objectives: To analyze the trends in international medical graduates (IMGs) matching into diagnostic radiology residency programs in the United States (US). Materials And Methods: The National Resident Match Program data was accessed for years 2005-2020 and diagnostic radiology residency-specific data was extracted for US MD, osteopathic (DO), and IMG applicants. IMGs were categorized into US-citizen IMGs and non-US citizen IMGs per the National Residency Match Program. Variables collected for each year included the number of positions, number of applicants in each group, positions filled/unfilled, and fill rate of each group. Additional data for some years included USMLE Step 1 score, United States Medical Licensing Examination (USMLE) Step 2 clinical knowledge (CK) score, number of research experiences, number of abstracts/publications, and additional degrees obtained. Trends were analyzed using simple linear regression model and p value &lt;0.05 was considered significant. Results: The number of diagnostic radiology residency programs increased from 203 (2006) to 212 (2020). The total number of diagnostic radiology residency positions increased from 1011 (2006) to 1113 (2020), with the peak of 1145 in 2014. The overall "match rate," that is, proportion of positions filled to positions available, increased from 96.4% (2006) to 97.3% (2020), with a brief decline to 86.7% in 2015. Among the filled positions, the proportion filled by US medical school graduates significantly declined from 89.7% (2006) to 69.2% (2020) (p &lt; 0.001), and the proportion of positions filled by osteopathic seniors and graduates significantly increased from 2.2% (2006) to 15.1% (2020; p &lt; 0.001). The proportion of US IMGs increased from 3.2% (2006) to 5.4% (2020), while the proportion of non-US IMGs increased from 4.4% (2006) to 9.4% (2020), with overall IMG match rate increased significantly from 7.6% to 14.9% (p = 0.009). The mean Step 1 scores of US IMGs and non-US IMGs were 238 and 237.3, and the mean Step 2 CK scores were 241.67 and 241, respectively. Cumulatively over the study period, a total of 736 US IMGs and 1051 non-US IMGs have matched into diagnostic radiology residency. Conclusion: There is an increasing proportion of IMGs, especially the non-US citizen IMGs, matching into US radiology residency programs in the last decade. Diagnostic radiology remains a competitive specialty evidenced by average USMLE scores higher than national average and research experiences of matched candidates. abstract_id: PUBMED:9762968 The situation of diagnostic radiology training programs and their graduates in 1997. Objective: In light of concerns about the job market, the American College of Radiology studied the employment situation of 1997 graduates from diagnostic radiology training programs and the status and plans of these programs. Materials And Methods: In an April-May 1997 survey and in a December 1997 follow-up, the American College of Radiology asked a 50% random sample of diagnostic radiology residency directors about their programs and about the employment situation of their 1997 residency and fellowship graduates. Of those surveyed, 89% responded. We compared these findings with those from a similar 1996 survey. The test of statistical significance was p &lt; or = .05. Results: All diagnostic residency and fellowship graduates who wanted to work were employed within 6 months after graduation. Approximately 95% of graduates had positions that directors believed to reasonably match their training and personal employment goals. Outcomes were similar across all fellowship fields except nuclear medicine, a field in which graduates had greater difficulty finding jobs. The completed plus planned changes in program size will lead to a 13-14% reduction in the annual number of graduates. As in previous years, by late April to mid May 1997, 93% of beginning-year residency slots were filled. However, the percentage of beginning residents who are international medical graduates increased. In 1997, residency program directors were more optimistic about graduates' job prospects than in 1996, and there was a statistically significant increase from 1996 in the proportion of fellowship graduates, according to directors, who had found jobs that fit their goals and training. Conclusion: Unemployment continues to be low. The 1997 job market has improved over the 1996 job market, but job prospects in nuclear medicine continue to be more problematic than in other subspecialties. abstract_id: PUBMED:24120904 United States medical school graduate interest in radiology residency programs as depicted by online search tools. Recent media publications have indicated a tough job market in medical specialty positions for medical school graduates, specifically in the field of radiology. Internet search tools, such as Google Trends, have proved useful in the prediction of certain diseases on the basis of the search volume index for a specific term. The authors hypothesized that online search tools might be useful in the prediction of US medical school graduates' interest in residency positions in radiology. Google Trends indicated an increase over time in searches for "radiology salary" and a decrease over time in searches for "radiology residency." National Resident Matching Program results for diagnostic radiology showed an increase from 2004 to 2009 in the percentage of US graduates entering radiology but a dramatic drop from 2010 to 2013. This occurred even while the total number of US graduates active in the National Resident Matching Program increased. This finding might have been foretold on the basis of online query result trends. Online search data may be a useful insight into the interests of US medical school graduates and may be predictive of unfilled radiology residency positions and eventual increased shortages of community radiologists coming from US medical schools. abstract_id: PUBMED:16258575 Radiology knowledge in new medical graduates in New Zealand. Aims: To establish the level of knowledge of new medical graduates in New Zealand about common radiological investigations and to assess their ability to request most appropriate, cost-effective radiological investigations for common clinical scenarios. Methods: A test was developed and administered in Waikato, Christchurch, Rotorua, Auckland, and Dunedin hospitals during the first month of new house officer year (November 2002). Results: Sixty-two first year house officers participated; 59 were New Zealand medical graduates (Auckland: 24 and Otago: 35) and 3 were from overseas institutions. The mean score for questions that assessed about risks involved in common investigations, including radiation, was 47% (95% CI: 45%-49%). The mean score for selecting the appropriate clinical investigations was 53% (95% CI 52%-54%). Most significantly, only 42% (95% CI 38%-46%) of the respondents thought they had adequate radiology teaching to work as house officers. The following percentage of the respondents never observed the respective examination during their medical school training: barium enema 72% (95% CI: 60%-82%); IVU 75% (95% CI: 63%-87%); US scan 25% (95% CI: 16%-37%); CT scan 20% (95% CI: 11%-32%); angiogram 16% (95% CI: 9%-28%); MRI 42% (95% CI: 30%-54%). The mean score for the practical knowledge about common investigations was 50 (95% CI: 48%-52%). Conclusions: Medical students report that they have limited exposure to radiology teaching during their medical school training. The test results suggest that medical school training enabled them to commence their probationary year with a 'just safe' level of radiology knowledge and skill. abstract_id: PUBMED:8884249 Entry of US medical school graduates into family practice residencies: 1995-1996 and 3-year summary. This is the 15th report prepared by the American Academy of Family Physicians (AAFP) on the percentage of each US medical school's graduates entering family practice residency programs. Approximately 14.6% of the 16,124 graduates of US medical schools between July 1994 and June 1995 were first-year family practice residents in October 1995, compared with 13.4% in 1994 and 12.3% in 1993. This is the highest percentage since 1980-1981 (12.8%), when this series of studies began. Medical school graduates from publicly funded medical schools were almost twice as likely to be first-year family practice residents in October 1995 than were residents from privately funded schools, 18% compared with 9.7%. The West North Central region reported the highest percentage of medical school graduates who were first-year residents in family practice programs in October 1995 at 24.8%; the Middle Atlantic and New England regions continued with the lowest percentages at 7.6% and 9.3%, respectively. Approximately one in two medical school graduates (46.7%) entering a family practice residency program as first-year residents in October 1995 entered a program in the same state where they graduated from medical school. The percentages for each medical school have varied substantially from year to year since the AAFP began reporting this information. The average percentage for each medical school for the last 3 years is reported. In addition, the number and percentage of graduates from colleges of osteopathic medicine who entered Accreditation Council for Graduate Medical Education-accredited family practice residency programs are reported. abstract_id: PUBMED:24136695 Entry of US medical school graduates into family medicine residencies: 2012--2013. Background: Analyzing the US medical school origin of family medicine residents highlights schools, states, or regions that have higher entrance rates into family medicine. Methods: The American Academy of Family Physicians (AAFP) 2013 Residency Census has a 100% response rate and lists information for family medicine residents who entered training July 2012. MD graduates are verified through medical school registrars or the American Medical Association's Physicians Masterfile data. The American Association of Colleges of Osteopathic Medicine provides data on DO graduates. Three-year rolling averages of graduates entering family medicine are calculated for Liaison Committee of Medical Education (LCME)-accredited medical schools. Results: In July 2012, 3,523 first-year residents entered Accreditation Council for Graduation Medical Education (ACGME)-accredited family medicine residencies. Medical students from LCME-accredited schools account for less than half of the family medicine residents (46%). Public MD-granting medical schools graduate almost threefold more students into family medicine residencies than do private schools (1,101 versus 380). The Mountain, West North Central, and Pacific regions of the United States have the highest percentage of MD graduates (13.5%, 12.3%, and 11.4%, respectively) entering family medicine. Forty-five percent of MD medical students enter a family medicine residency in the state in which they attended medical school. Conclusions: LCME-accredited medical schools with lower percentages of graduates entering family medicine should examine the economic, environmental, and academic factors that may be causing low numbers of their students graduating and entering family medicine residencies. abstract_id: PUBMED:7258220 Performance of U.S. citizen-foreign medical graduates on certifying examinations in internal medicine. Increasing numbers of U.S. citizens are studying medicine abroad and returning for graduate medical education and practice. The numbers and performance of U.S. citizen and all other graduates of foreign medical schools on the certifying examination of the American Board of Internal Medicine were compared with those of graduates of U.S. medical schools for 1975-1980. Numbers of first-taker U.S. citizen foreign medical school graduates increased whereas numbers of all other foreign medical school graduates decreased during this period. In 1980, U.S. citizens represented 19 percent of all first-taker foreign medical school graduates. During the study period, annual examinations were passed by 79 to 82 percent of U.S. medical school graduates, 15 to 38 percent of U.S. citizen foreign medical school graduates, and 27 to 45 percent of all other foreign medical school graduates taking an examination for the first time. On each examination, U.S. citizen foreign medical school graduates did not do as well as all other foreign medical school graduates. Members of all groups met the same postdoctoral training requirements. Answer: Specializing in radiology in Switzerland appears to have both attractive and less appealing aspects for medical school graduates. According to the data from 262 members of the Swiss Society of Radiology, the specialty choice was highly regarded due to the wide range of clinical work and the combination of technology and medicine (PUBMED:21458185). However, there are concerns regarding the attractiveness of radiology to graduates. For instance, more female radiologists work part-time than male ones, and women report significantly less career success and support. To enhance the appeal of radiology to medical graduates, the study suggests that radiology should be more visible on medical school curricula, and there is a need for structured residency programs and reliable gender-respecting career support (PUBMED:21458185). The attractiveness of radiology as a specialty in Switzerland is influenced by various factors, including socio-demographic variables, working status, specialty, main fields of interest, career success, mentoring, and the perceived shortage of radiologists. The study indicates that to improve the attractiveness of radiology, there is a need for more visibility in medical school curricula and better support for career advancement, especially for female radiologists. Structured residency programs and reliable career support that respects gender differences are recommended as measures to make radiology more appealing to medical graduates (PUBMED:21458185).
Instruction: Does it matter who you see to help you stop smoking? Abstracts: abstract_id: PUBMED:34852182 Accessing specialist support to stop smoking in pregnancy: A qualitative study exploring engagement with UK-based stop smoking services. Objectives: Specialist stop smoking services can be effective for supporting women with smoking cessation during pregnancy, but uptake of these services is low. A novel theoretical approach was used for this research, aiming to identify barriers to and facilitators of engaging with specialist smoking cessation support using the Theoretical Domains Framework (TDF). Methods: Semi-structured interviews and a focus group (n = 28) were carried out with pregnant women who smoke/recently quit smoking, midwives and Stop Smoking in Pregnancy advisors from two local authority commissioned services in the UK. Inductive thematic analysis was used to code interview transcripts and deductive thematic analysis used to match emerging themes to TDF domains. Results: Themes corresponded to seven domains of the TDF: Knowledge: Knowledge of available services for pregnant smokers; Environmental context and resources: Uptake of referral to cessation services by pregnant smokers; Social Influences: Smoking norms and role of others on addressing smoking in pregnancy; Beliefs about Capabilities: Confidence in delivering and accepting pregnancy smoking cessation support; Beliefs about Consequences: Beliefs about risks of smoking in pregnancy and role of cessation services; Intentions: Intentions to quit smoking during pregnancy; Emotions: Fear of judgement from healthcare professionals for smoking in pregnancy. Conclusions: These novel findings help to specify factors associated with pregnant women's engagement, which are useful for underpinning service specification and design by public health commissioners and service providers. Addressing these factors could help to increase uptake of cessation services and reduce rates of smoking in pregnancy. abstract_id: PUBMED:19526972 "Smoke smarts": how to help patients to stop smoking? Smoking cessation advice must be part of each medical consultation, because smoking is a disease. Smokers wish to stop, and clinically tested drugs for smoking cessation exist. The success depends on the motivation of the patient and the ability of the doctor to behave as a coach, to improve the desire to stop even in patients heavily addicted to nicotine. In this situation it is proposed to proceed by steps, diminishing gradually the number of smoked cigarettes with the help of nicotine containing preparations. The patient tries to avoid the cigarette by boosting his serum nicotine level with the inhaler or the nicotine gum as often as possible and seeks to understand the reasons why he is a smoker. The drugs bupropion and vareniclin may then help the patient to reinforce their motivation to stop smoking completely. abstract_id: PUBMED:22571648 Does it matter who you see to help you stop smoking? Short-term quit rates across specialist stop smoking practitioners in England. Aims: A network of Stop Smoking Services has been set up within the National Health Service (NHS) in England. The services deliver a combination of behavioural support and medication. It is important to establish the degree of variability in quit rates attributable to differences between individual practitioners, to gauge the scope for improvement by training and professional support. The aim of the present analysis was to examine how far short-term quit rates depend on the practitioner delivering the intervention after adjusting for potential confounding variables. Design: Observational study using routinely collected data. Setting: Thirty-one NHS Stop Smoking Services in England. Participants: Data from 46,237 one-to-one treatment episodes (supported quit attempts) delivered by specialist practitioners. Measurements: Three-level logistic regression models were fitted for carbon monoxide (CO)-validated short-term (4-week) quit rates. Models adjusted for age, gender, exemption from prescription charges, medication and intervention setting for each treatment episode, number of clients for each practitioner and economic deprivation at the level of the Stop Smoking Service. Secondary analyses included (i) the heaviness-of-smoking index (HSI) as predictor and (ii) 4-week quit rates whether or not confirmed by CO. Findings: Differences between individual specialist practitioners explained 7.6% of the variance in CO-verified quit rates after adjusting for client demographics, intervention characteristics and practitioner and service variables (P &lt; 0.001). HSI had little impact on this figure; in quits not necessarily validated by CO, practitioners explained less variance. Conclusions: Individual stop smoking practitioners appear to differ to a significant degree in effectiveness. It is important to examine what underlies these differences in order to improve selection, training and professional development. abstract_id: PUBMED:2220510 How to help your patients stop smoking. Family physicians can effectively help their patients stop smoking by using brief intervention and simple office organizational procedures. Intervention involves asking all patients about smoking, advising smokers to stop, assisting with self-help materials, establishing a date for quitting, possibly prescribing nicotine gum and, finally, arranging for follow-up visits. Office procedures include selecting an office coordinator, ensuring a smoke-free office, establishing a mechanism to identify and monitor patients who smoke, and involving the office staff in intervention and follow-up. With this protocol, intervention is possible at every office visit. abstract_id: PUBMED:31193736 An exploration of the barriers to attendance at the English Stop Smoking Services. Introduction: Despite the availability of effective stop smoking assistance, most smokers do not utilise formal cessation programmes such as the English Stop Smoking Services (SSS). We modified the Treatment Barriers Questionnaire (TBQ), developed in the USA, and distributed it to a sample of English smokers to explore the most important barriers to the use of the SSS. Methods: Participants of Start2quit, a randomised controlled trial aiming to increase attendance at the SSS using tailored risk information and 'taster' sessions, who reported at follow-up that they had not attended the SSS, were asked to complete the TBQ; 672 (76.9% response rate) were retained for analysis. Principal Component Analysis (PCA) was conducted to examine the structure of the data. Multiple linear regressions were used to determine whether any participant characteristics were associated with particular barriers. Results: The most commonly endorsed items related to a lack of information on and a lack of confidence in the efficacy of the SSS. PCA yielded seven factors: Work and time constraints (Factor1); Smokers should quit on their own (Factor2); Nothing can help in quitting smoking(Factor3); Disinterest in quitting (Factor4); Lack of social support to attend (Factor5); Lack of privacy at programmes (Factor6); Lack of information and perceived availability (Factor7). Age was associated with Factors 1, 3 and 4, motivation to quit with Factors 2 and 4, and confidence in quitting with Factors 1, 2, and 3. Conclusions: The findings suggest that many barriers exist, and they vary according to smoker demographics and characteristics, pointing to the need for tailored recruitment strategies. Trial Registration: ISRCTN76561916. abstract_id: PUBMED:15574535 Telephone support for pregnant smokers who want to stop smoking. This article describes a statewide, proactive telephone peer-support system to help low-income pregnant women stop smoking. From October 1994 to December 2000, 948 pregnant smokers attending the Women, Infants, Children program accepted an offer to receive support by telephone from a woman ex-smoker who called weekly, biweekly, and then monthly to provide guidance, encouragement, and reinforcement for smoking cessation. This free service, funded by the Vermont Department of Health, resulted in 25% self-reported abstinence at last telephone contact when women never reached for telephone support were counted as smokers, and self-reported abstinence during pregnancy of 20% among participants who returned for their postpartum Women, Infants, Children visit. These findings suggest that proactive telephone peer support is a viable way to help pregnant women stop smoking during pregnancy. abstract_id: PUBMED:12592162 How can we help pregnant women stop smoking? Smoking is a behavior maintained and enhanced by nicotine-induced dependence. Despite awareness and knowledge of the associated health risks many smokers find it considerably difficult to quit. Nicotine withdrawal symptoms and undesirable effects such as depression and weight gain serve as justification for the numerous unsuccessful attempts in smoking cessation. Yet, we have now come to the end of empiricism. Treatments that have been shown to work exist and international evidence-based recommendations for cessation interventions have been established: brief advice, nicotine replacement therapy and behavioral and cognitive therapies. Measuring nicotine dependence using the Fagerström test helps to define the therapeutic strategy. Blood cotinine level can be measured during pregnancy. It is obvious that therapies can only work for smokers who are motivated to stop smoking. Before reaching the decision to quit, the smoker goes through a process during the course of which the role of information and advice from health professionals are paramount. When they became pregnant, young women are not necessary ready to quit. For many of them, pregnancy is sufficient motivation enabling them to stop smoking, but for thirty percent of them, probably the most dependent, it would be very difficult to stop without a specific program of help. abstract_id: PUBMED:2586327 Research in an antenatal clinic--the experience of the Nottingham Mothers' Stop Smoking Project. The Nottingham Mothers' Stop Smoking Project was extremely successful in helping women to stop smoking during pregnancy and in reinforcing non-smoking behaviour. This paper describes the experience of undertaking the action research project in the antenatal clinic from the points of view of the researchers, women and midwifery staff. The article outlines the practical problems encountered in attempting to adhere to a research protocol. The implementation of the project involving friendly encouragement, a carbon monoxide monitor and the offer of self-help groups is described within the context of the busy antenatal clinic setting. Pregnant women's experience of the project and the views of midwifery staff are also presented. The feasibility of incorporating such projects into everyday activity is discussed. abstract_id: PUBMED:27886140 English Stop-Smoking Services: One-Year Outcomes. The UK is a global leader in stop-smoking support-providing free behavioral support and cessation medication via stop smoking services (SSS) without charge to smokers. This study aimed to explore the client and service characteristics associated with abstinence 52 weeks after quitting. A prospective cohort study of 3057 SSS clients in nine different areas of England who began their quit attempt between March 2012 and March 2013 was conducted. Important determinants of long-term quitting were assessed through quit rates and multivariable logistic regression. Our results showed that the overall weighted carbon monoxide validated quit rate for clients at 52 weeks was 7.7% (95% confidence interval (CI) 6.6-9.0). The clients of advisors, whose main role was providing stop-smoking support, were more likely to quit long-term than advisors who had a generalist role in pharmacies or general practices (odds ratio (OR) 2.3 (95% CI 1.2-4.6)). Clients were more likely to achieve abstinence through group support than one-to-one support (OR 3.4 (95% CI 1.7-6.7)). Overall, one in thirteen people who set a quit date with the National Health Service (NHS) Stop-Smoking Service maintain abstinence for a year. Improving abstinence is likely to require a greater emphasis on providing specialist smoking cessation support. Results from this study suggest that over 18,000 premature deaths were prevented through longer-term smoking cessation achieved by smokers who accessed SSS in England from March 2012 to April 2013, but outcomes varied by client characteristic and the type of support provided. abstract_id: PUBMED:2228634 Essential elements of self-help/minimal intervention strategies for smoking cessation. Two decades of research suggest that self-help/minimal intervention strategies for smoking cessation may be the preferred means by which smokers stop and can produce success rates approximating those of more formal programs, at lower cost and with greater access to relevant populations. In order to make the best possible use of these self-help/minimal intervention approaches, the National Cancer Institute (NCI) supported a series of randomized, controlled intervention trials and, in June of 1988, convened an Expert Advisory Panel to address the question "What are the essential elements of self-help/minimal intervention strategies for smoking cessation?". The panel's recommendations were that: (1) Intervention efforts should focus on increasing smokers' motivations to make serious quit attempts; (2) Delivery of programs be broadened to include all smokers; (3) Programs be targeted to stages of cessation and specific populations; (4) All programs include (a) elements focused on health and social consequences of smoking, and (b) strategies and exercises aimed at quitting, maintenance of nonsmoking, relapse prevention, and recycling; (5) Materials and programs be made widely available rather than "fine tuning" existing programs or developing new ones; and (6) Programs make use of specific adjunctive strategies. In this way, a reacceleration of the decline in smoking prevalence may be realized in the 1990s and significantly contribute to the NCI's Year 2000 goals and the Surgeon General's aim of a smoke-free society. Answer: Yes, it does matter who you see to help you stop smoking. According to the research, individual stop smoking practitioners appear to differ significantly in effectiveness. An observational study using routinely collected data from 31 NHS Stop Smoking Services in England found that differences between individual specialist practitioners explained 7.6% of the variance in carbon monoxide (CO)-verified short-term (4-week) quit rates after adjusting for client demographics, intervention characteristics, and practitioner and service variables (PUBMED:22571648). This suggests that the specific practitioner delivering the intervention can have a notable impact on the success of a smoker's attempt to quit. Therefore, it is important to examine what underlies these differences in order to improve selection, training, and professional development of stop smoking practitioners.
Instruction: Is expansion thoracoplasty a safe procedure for mobility and growth potential of the spine? Abstracts: abstract_id: PUBMED:22706464 Is expansion thoracoplasty a safe procedure for mobility and growth potential of the spine? Spontaneous fusion after multiple chest distractions in young children. Background: Expansion thoracoplasty (ET) is claimed to be a spine-sparing procedure because of the fact that the spine is not exposed directly and intervention toward the spinal column is not performed. It is also recommended in cases of primary spine deformities without rib fusion/aplasia and when the primary problem is not in the thorax itself. The aim of this study was to report spontaneous spinal fusion after multiple thoracic distractions in patients with congenital thoracospinal deformities who have undergone ET in early childhood. Methods: In our clinic, between May 2003 and March 2010, ET treatment was started in 11 patients who had congenital spinal deformity along with thoracic asymmetry due to fused ribs. The lengthening procedures for 6 of them are still ongoing. This study includes 5 of these patients who had undergone 3-dimensional computed tomography (3D-CT) scans. Four of these patients underwent a CT scan as part of the graduation protocol and 1 was ordered a 3D-CT because a newly formed bony fusion of the ribs was observed in a lengthening procedure. Results: The mean age of the patients (4 F, 1 M) was 4.2 years (range, 2 to 8 y) at the time of the index operation. The mean number of lengthening procedures was 9.4 (range, 7 to 13). The mean follow-up period was 73 months (range, 60 to 96 mo). All 5 patients in the latest follow-up were evaluated by x-ray and 3D-CT scans of the whole spine. The CT scans confirmed spontaneous spinal fusion in the thoracic spine. Conclusions: We speculate that distraction-based, growth-friendly instrumentations are not risk-free for spinal health and motion, even when the instrumentation is not primarily on the spine. Level Of Evidence: Level IV. abstract_id: PUBMED:12637424 Growth of the thoracic spine in congenital scoliosis after expansion thoracoplasty. Background: Children with congenital thoracic scoliosis associated with fused ribs with a unilateral unsegmented bar adjacent to convex hemivertebrae will invariably have curve progression without treatment. Surgery has been thought to have a negligible growth-inhibition effect on the thoracic spine in such patients because it has been assumed that the concave side of the curve and the unilateral unsegmented bar do not grow, but we are unaware of any conclusive studies regarding this assumption. Methods: The changes in the length of the concave and convex sides, anterior and posterior vertebral edges, posterior arch, and unilateral unsegmented bars of the thoracic spine were measured in the twenty-one children with congenital scoliosis and fused ribs after expansion thoracoplasty had been carried out with use of a vertical, expandable titanium prosthetic rib. Three of these children had undergone posterior spinal fusion previously. Measurements were made with use of a three-dimensional software program that analyzed baseline and follow-up computed tomography scans. The technique was validated through measurement of the thorax of a small female adult cadaver. Results: The patients without spine fusion had an average age of 3.3 years at the time of the baseline computed tomography scan, and the average duration of follow-up was 4.2 years. On the average, these patients showed significant growth (p &lt; 0.0001) of the concave side of the thoracic spine (an increase in length of 7.9 mm/yr, or 7.1%/yr) and the convex side (8.3 mm/yr, or 6.4%/yr) compared with the baseline lengths. There was no significant difference in the increases in length (p = 0.38) between the concave and convex sides. Eleven patients with an unsegmented bar had an average 7.3% increase in the length of the bar (p &lt; 0.0001). In the three children with prior spinal fusion, the increase in length averaged only 4.6 mm/yr (3%/yr) on the concave side of the thoracic spine and 3.7 mm/yr (2.2%/yr) on the convex side; both increases were significant (p &lt; 0.0001). Conclusions: Longitudinal growth of the thoracic spine in a normal child has been estimated to be 0.6 cm/yr between the ages of five and nine years. After expansion thoracoplasty, growth of the thoracic spine was approximately 8 mm/yr in our series of children with congenital scoliosis and fused ribs. After expansion thoracoplasty, both the concave and the convex side of the thoracic spine and unilateral unsegmented bars appeared to grow in these patients. When a thorax is already foreshortened by congenital scoliosis, control of spine deformity with expansion thoracoplasty allows growth of the thoracic spine, and it is likely that the longer thorax provides additional volume for growth of the underlying lungs with probable clinical benefit. Level Of Evidence: Therapeutic study, Level IV (case series [no, or historical, control group]). See Instructions to Authors for a complete description of levels of evidence. abstract_id: PUBMED:20081510 Expansion thoracoplasty improves respiratory function in a rabbit model of postnatal pulmonary hypoplasia: a pilot study. Study Design: Using a rabbit model of postnatal pulmonary hypoplasia, we investigated how expansion thoracoplasty affected growth of the spine and lungs. Objective: By constricting the hemithorax in a growing rabbit to create postnatal pulmonary hypoplasia, we quantified how expansion thoracoplasty affects lung volume, respiratory function, alveolar morphology, and spine growth. Summary Of Background Data: In children with thoracic insufficiency syndrome, expansion thoracoplasty of the constricted hemithorax improves respiratory function and controls scoliosis. We hypothesize that expansion thoracoplasty of the constricted hemithorax improves pulmonary hypoplasia by allowing the lung to expand, improving respiratory function and stimulating lung growth. Methods: Postnatal pulmonary hypoplasia and scoliosis were induced in 5-week old rabbits by constricting left ribs 3 to 8. Expansion thoracoplasty through the fused ribs was performed at 10 weeks. These were compared with Disease rabbits allowed to grow with a constricted left hemithorax and Normal rabbits. Spine and thoracic deformity, right and left lung volumes were measured on reconstructed 3-dimensional computed tomography images and functional residual capacity measured by plethysmography. At maturity, lungs were excised and quantitative histology performed to measure alveolar air fraction and surface density. Results: Expansion thoracoplasty of the constricted left hemithorax improved scoliosis but increased left lung volumes only moderately compared with Disease rabbits. For Disease and Thoracoplasty Treated rabbits, a compensatory increase in the volume of the right lung maintained total lung volumes equivalent to Normal. Alveolar air space fraction was greater in Disease rabbits, suggestive of emphysema improved by expansion thoracoplasty. Capillaries adjacent to the alveoli were prominent in Thoracoplasty Treated rabbits. Conclusion: Expansion thoracoplasty reduces scoliosis and increases the volume of the constricted hemithorax, but the relative increase in the ipsilateral lung volume is small since compensatory hypertrophy of the contralateral lung also occurred. Expansion thoracoplasty may improve respiratory function by increasing alveolar capillaries and preventing emphysematous changes. abstract_id: PUBMED:14996922 Expansion thoracoplasty: the surgical technique of opening-wedge thoracostomy. Surgical technique. Background: Children with congenital thoracic scoliosis associated with fused ribs with a unilateral unsegmented bar adjacent to convex hemivertebrae will invariably have curve progression without treatment. Surgery has been thought to have a negligible growth-inhibition effect on the thoracic spine in such patients because it has been assumed that the concave side of the curve and the unilateral unsegmented bar do not grow, but we are unaware of any conclusive studies regarding this assumption. Methods: The changes in the length of the concave and convex sides, anterior and posterior vertebral edges, posterior arch, and unilateral unsegmented bars of the thoracic spine were measured in the twenty-one children with congenital scoliosis and fused ribs after expansion thoracoplasty had been carried out with use of a vertical, expandable titanium prosthetic rib. Three of these children had undergone posterior spinal fusion previously. Measurements were made with use of a three-dimensional software program that analyzed baseline and follow-up computed tomography scans. The technique was validated through measurement of the thorax of a small female adult cadaver. Results: The patients without spine fusion had an average age of 3.3 years at the time of the baseline computed tomography scan, and the average duration of follow-up was 4.2 years. On the average, these patients showed significant growth (p &lt; 0.0001) of the concave side of the thoracic spine (an increase in length of 7.9 mm/yr, or 7.1%/yr) and the convex side (8.3 mm/yr, or 6.4%/yr) compared with the baseline lengths. There was no significant difference in the increases in length (p = 0.38) between the concave and convex sides. Eleven patients with an unsegmented bar had an average 7.3% increase in the length of the bar (p &lt; 0.0001). In the three children with prior spinal fusion, the increase in length averaged only 4.6 mm/yr (3%/yr) on the concave side of the thoracic spine and 3.7 mm/yr (2.2%/yr) on the convex side; both increases were significant (p &lt; 0.0001). Conclusions: Longitudinal growth of the thoracic spine in a normal child has been estimated to be 0.6 cm/yr between the ages of five and nine years. After expansion thoracoplasty, growth of the thoracic spine was approximately 8 mm/yr in our series of children with congenital scoliosis and fused ribs. After expansion thoracoplasty, both the concave and the convex side of the thoracic spine and unilateral unsegmented bars appeared to grow in these patients. When a thorax is already foreshortened by congenital scoliosis, control of spine deformity with expansion thoracoplasty allows growth of the thoracic spine, and it is likely that the longer thorax provides additional volume for growth of the underlying lungs with probable clinical benefit. abstract_id: PUBMED:29985869 Expansion Thoracoplasty in Rabbit Model: Effect of Timing on Preserving Pulmonary Growth and Correcting Spine Deformity. Study Design: In a treatment-control animal study expansion thoracoplasty (ET) was performed in a juvenile rabbit model of thoracic insufficiency syndrome (TIS) and benefits to thoracic development and respiratory function quantified. Rabbits treated early versus late were compared to age-matched normal and disease control rabbits through to skeletal maturity. Objective: Evaluate (1) how ET changes the natural TIS disease trajectory and (2) how timing of ET affects changes in spine growth, lung growth, and respiratory mechanics. Summary Of Background Data: Pulmonary growth potential is thought to diminish with age; thus, early therapeutic intervention may increase pulmonary growth in children with TIS. However, no direct empirical evidence exists to support this treatment paradigm. Methods: Convex left scoliosis and resultant TIS was induced in 3-week-old rabbits via surgical rib tethering. We compare the efficacy of ET performed at 7 weeks and expanded at 11 weeks (early, n = 7) versus only at 11 weeks of age (late, n = 7) in preserving lung growth and respiratory function relative to normal (n = 8) and disease (n = 10) rabbits. Sequential computed tomography images and pulmonary function testing was performed to quantify spine curvature, lung growth, and respiratory volumes. At 28 weeks of age chest wall elastance was measured in vivo then acinar complexity analyzed histologically via radial alveolar counts. Results: ET performed early or late altered the predicted trajectory of spine deformity, pulmonary growth inhibition, and respiratory dysfunction seen in disease rabbits. Growth was not significantly different between early and late rabbits and post-treatment gains remained below those of age-matched normal rabbits. Chest wall elastance was impaired by ET and more so in early rabbits, there were no differences in pulmonary elastance. Conclusion: ET interrupted the natural progression of deformity and pulmonary hypoplasia associated with spine curvature in disease rabbits. However, growth benefits are only seen in cases of the most severe initial deformity and must be balanced against the further impairment to chest wall function associated with repetitive surgery. Level Of Evidence: N/A. abstract_id: PUBMED:20715440 Thoracoplasty Thoracoplasty was at 1st invented for the treatment of pulmonary tuberculosis during the days when no effective chemotherapeutic drugs for tuberculosis were available. Removal of some portions of the ribs by thoracoplasty deforms the chest wall and compresses tuberculous cavities. Since the introduction of potent anti-tuberculous drugs, thoracoplasty for pulmonary tuberculosis has been obsolete. Currently, thoracoplasty is mainly applied to reduction of the volume of the pleural space in the treatment of post-resectional space problems and in the treatment of thoracic empyema. Well-planned and safe resections of the affected ribs hold the keys to successful thoracoplasty. The procedure is performed alone or with muscle flaps or with omental flap, depending on the extent of space and the presence of bronchopleural fistula. Thoracic surgeons should know the current application of thoracoplasty. abstract_id: PUBMED:32812164 Does thoracoplasty adversely affect lung function in complex pediatric spine deformity? A 2-year follow-up review. Study Design: Retrospective review of prospective multi-center cohort. Objective: To investigate the impact of thoracoplasty on pulmonary function at 2-year follow-up among complex pediatric spine deformity patients. Complex pediatric spine deformities may be associated with significant rib prominence causing body image concerns. Surgical correction of spine deformity may include thoracoplasty to correct the rotational prominence. Some surgeons refrain from performing thoracoplasty due to its purported negative effect on pulmonary function. There is paucity of literature on the effect of thoracoplasty on pulmonary function at 2-year follow-up in pediatric patients with complex spine deformity. Methods: We reviewed data of 312 patients (&gt; 100°, with or without vertebral column resection (VCR)) or (&lt; 100° with VCR)) from an international multicenter database. Data of 106 patients with complete radiographic and pulmonary function test (PFT) assessment with a minimum of 2-year follow-up was analyzed. Paired t test was performed to compare pre-op and 2-year PFT results. PFT comparison was stratified based on thoracoplasty status (thoracoplasty: Group 1 vs. no thoracoplasty: Group 2). Results: 106 patients (61 patients Group 1 vs. 45 in Group 2). The average age and gender ratio were similar in both groups (p &gt; 0.05). Group 1 had significantly lower body mass index (BMI) compared to Group 2 (18.4 kg m-2 ± 2.8 vs. 19.9 kg m-2 ± 4.8, p = 0.0351). The average baseline coronal and sagittal Cobbs were larger for Group 1 relative to Group 2 (p &lt; 0.05). The distribution of deformity etiology and curve types, and apices were similar between the two groups (p &gt; 0.05). The rate of pre-op utilization of halo gravity traction (HGT) was 52.5% vs. 26.7% (p = 0.008), at an average duration of 103 days vs. 47 days, p = 0.0001. The rate of surgical osteotomies was similar in both groups. Estimated blood volume (EBV) loss was greater in Group 1 (63.1% vs. 43.1%, p = 0.0012). Post-op coronal and sagittal Cobb correction was similar in both groups. The incidence of post-op pulmonary complication was similar in both groups (8.2% vs. 8.9%, p = 0.899). Baseline and 2-year follow-up PFT did not differ significantly between and within the groups. Vertebral column resection (VCR) did not negatively affect PFT in both groups. Conclusion: Despite higher curve magnitudes in patients undergoing surgical correction and thoracoplasty for complex pediatric spine deformity, our findings revealed that thoracoplasty does not negatively affect pulmonary function at 2-year follow-up. abstract_id: PUBMED:21424837 Expansion thoracoplasty affects lung growth and morphology in a rabbit model: a pilot study. Background: Thoracic insufficiency syndrome represents a novel form of postnatal restrictive respiratory disease occurring in children with early-onset scoliosis and chest wall anomalies. Expansion thoracoplasty improves lung volumes in children with thoracic insufficiency syndrome; however, how it affects lung development is unknown. Questions/purposes: Using a rabbit model of thoracic insufficiency syndrome, we evaluated the effect of expansion thoracoplasty on the response of biologic mechanisms in the alveolar microstructure. Methods: Using archived material from a previous experiment, 10 4-week-old New Zealand rabbits were divided into three groups: normal (n = 3), disease (n = 3), and treated (n = 4). Left ribs four to eight were tethered in seven rabbits at age 5 weeks to induce hypoplasia of the left hemithorax (disease). At age 10 weeks, four of these rabbits were treated by expansion thoracoplasty (treated). At age 24 weeks, lungs were excised and processed. Alveolar density and parenchymal airspace were measured on histologic sections. Immunohistochemistry was performed for vascular endothelial growth factor receptor 2 (angiogenesis), KI-67 (cell proliferation), and RAM-11 (macrophages). Results: Alveolar walls were poorly perfused and airspace fraction was larger (emphysematous) in disease rabbits than normal or treated rabbits. Immunohistochemistry provided inconclusive evidence to support the concept that pulmonary hypoplasia is induced by thoracic insufficiency syndrome and controlled by expansion thoracoplasty. Conclusions: Treatment of thoracic insufficiency syndrome by expansion thoracoplasty may prevent emphysematous changes in the alveolar microstructure, thereby enhancing gas exchange. abstract_id: PUBMED:8202805 Internal thoracoplasty. A new procedure. Study Design: The efficacy of performing a thoracoplasty from within the thoracotomy during anterior surgery for scoliosis was investigated. Objectives: Patients were prospectively studied to determine the possible complications and morbidity of the procedure, and were compared to a similar group of patients that previously underwent same-day anterior and posterior procedures for scoliosis, but without thoracoplasty. Description of the technique is presented. Summary Of Background Data: The seven study patients had uneventful intra- and post-operative courses. For the posterior procedure (CD instrumentation), only morselized rib graft was used, obviating the need for iliac graft. Results: There was no greater rate or additional types of complications in the study group compared to the control group, except one additional day of thoracotomy tube retention. Conclusions: When same day anterior and posterior procedures are to be performed for scoliosis, internal thoracoplasty is indicated, as a source of autogenous bone and for cosmesis. abstract_id: PUBMED:19203739 Thoracic malformation with early-onset scoliosis: effect of serial VEPTR expansion thoracoplasty on lung growth and function in children. The effect on pulmonary function of serial VEPTR expansion thoracoplasty was studied longitudinally in anesthetized children with spondylothoracic dysplasia using a special mobile unit. The median age of 24 children at the start of surgery was 4.6 years (1.8-10.8) and most exhibited a moderate-to-severe restrictive lung defect. After a median of 3.2 years (1.0-6.5), their forced vital capacity (FVC) was found to have increased by an average of 11.1%/year. The rate of increase was greater in children who were younger than 6 years at the start of the study than in older children (14.5% versus 6.5%, p&lt;0.01). The average specific respiratory system compliance (C(rs)) was mildly-to-moderately decreased at the start, and over the study it decreased on average to 56% of the initial value in spite of clinically successful expansion thoracoplasty and lung growth, indicating increasing stiffness of the thorax with growth. Answer: Expansion thoracoplasty (ET) is a surgical procedure used to treat congenital thoracic scoliosis and other conditions associated with thoracic insufficiency syndrome. The safety of ET in terms of mobility and growth potential of the spine has been investigated in several studies. One study reported spontaneous spinal fusion in the thoracic spine after multiple thoracic distractions in patients who underwent ET in early childhood. This suggests that distraction-based, growth-friendly instrumentations may pose risks to spinal health and motion, even when the instrumentation is not primarily on the spine (PUBMED:22706464). However, another study found that after ET, both the concave and convex sides of the thoracic spine and unilateral unsegmented bars appeared to grow in patients with congenital scoliosis and fused ribs. This indicates that when a thorax is already foreshortened by congenital scoliosis, control of spine deformity with ET allows growth of the thoracic spine, which may provide additional volume for growth of the underlying lungs with probable clinical benefit (PUBMED:12637424, PUBMED:14996922). In a rabbit model of postnatal pulmonary hypoplasia, ET improved scoliosis and increased the volume of the constricted hemithorax. Although the relative increase in the ipsilateral lung volume was small, ET may improve respiratory function by increasing alveolar capillaries and preventing emphysematous changes (PUBMED:20081510). Another study in a rabbit model found that ET altered the predicted trajectory of spine deformity and pulmonary growth inhibition seen in disease rabbits. However, growth benefits were only seen in cases of the most severe initial deformity and must be balanced against the further impairment to chest wall function associated with repetitive surgery (PUBMED:29985869). Lastly, a retrospective review of a prospective multi-center cohort of complex pediatric spine deformity patients found that thoracoplasty did not negatively affect pulmonary function at 2-year follow-up, despite higher curve magnitudes in patients undergoing surgical correction and thoracoplasty (PUBMED:32812164). In conclusion, ET can be beneficial for the growth of the thoracic spine and may improve respiratory function, but it is not without risks. Spontaneous spinal fusion and potential impairment to chest wall function are concerns that need to be considered. The procedure should be tailored to individual cases, taking into account the severity of the deformity and the potential benefits versus risks to chest wall function. The procedure should be tailored to individual cases, taking into account the severity of the deformity and the potential benefits versus risks. It is important to note that while ET may interrupt the natural progression of deformity and pulmonary hypoplasia associated with spine curvature, the growth benefits must be balanced against the potential for increased chest wall stiffness and the impairment of chest wall function that may result from repetitive surgery (PUBMED:29985869). Additionally, in a pilot study using a rabbit model, ET was suggested to prevent emphysematous changes in the alveolar microstructure, which could enhance gas exchange and potentially benefit lung development (PUBMED:21424837). Overall, while ET has shown promise in allowing growth of the thoracic spine and improving respiratory function in some cases, it is not without potential risks to spinal mobility and health. Careful patient selection and monitoring are essential to maximize the benefits and minimize the risks associated with this procedure.
Instruction: Is There a Causal Relation between Maternal Acetaminophen Administration and ADHD? Abstracts: abstract_id: PUBMED:27295086 Is There a Causal Relation between Maternal Acetaminophen Administration and ADHD? Objective: Recent epidemiological studies reported an association between maternal intake of acetaminophen (APAP) and attention deficit hyperactivity disorder (ADHD) in their children. However, none of these studies demonstrated causality. Our objective was to determine whether exposure to APAP during pregnancy result in hyperkinetic dysfunctions in offspring, using a murine model. Material And Methods: Pregnant CD1 mice (N = 8/group) were allocated to receive by gavage either APAP (150 mg/kg/day, equivalent to the FDA-approved maximum human clinical dose), or 0.5% carboxymethylcellulose (control group), starting on embryonic day 7 until delivery. Maternal serum APAP and alanine transaminase (ALT) concentrations were determined by ELISA and kinetic colorimetric assays, respectively. Open field locomotor activity (LMA) in the 30-day old mouse offspring was quantified using Photobeam Activity System. Mouse offspring were then sacrificed, whole brains processed for magnetic resonance imaging (MRI; 11.7 Tesla magnet) and for neuronal quantification using Nissl stain. The association between APAP exposure and LMA in mouse offspring was analyzed using a mixed effects Poisson regression model that accounted for mouse offspring weight, gender, random selection, and testing time and day. We corrected for multiple comparisons and considered P&lt;0.008 as statistically significant. Results: Maternal serum APAP concentration peaked 30 minutes after gavage, reaching the expected mean of 117 μg/ml. Serum ALT concentrations were not different between groups. There were no significant differences in vertical (rearing), horizontal, or total locomotor activity between the two rodent offspring groups at the P level fixed to adjust for multiple testing. In addition, no differences were found in volumes of 29 brain areas of interest on MRI or in neuronal quantifications between the two groups. Conclusion: This study refutes that hypothesis that prenatal exposure to APAP causes hyperkinetic dysfunction in mouse offspring. Due to lack of accurate assessment of ADHD in murine models, our results should be taken with caution when compared to the reported clinical data. abstract_id: PUBMED:29970852 Maternal Biomarkers of Acetaminophen Use and Offspring Attention Deficit Hyperactivity Disorder. Previous studies have suggested a positive association between self-reported maternal acetaminophen use during pregnancy and risk of attention deficit hyperactivity disorder (ADHD) in offspring. We sought to examine the prospective association between maternal plasma biomarkers of acetaminophen intake and ADHD diagnosis in the offspring. This report analyzed 1180 children enrolled at birth and followed prospectively as part of the Boston Birth Cohort, including 188 with ADHD diagnosis based on electronic medical record review. Maternal biomarkers of acetaminophen intake were measured in plasma samples obtained within 1⁻3 days postpartum. Odds ratios for having ADHD diagnosis or other developmental disorders were estimated using multinomial logistic regression models, adjusting for pertinent covariables. Compared to neurotypical children, we observed significant positive dose-responsive associations with ADHD diagnosis for each maternal acetaminophen biomarker. These dose⁻responsive associations persisted after adjusting for indication of acetaminophen use and other pertinent covariates; and were specific to ADHD, rather than other neurodevelopmental disorders. In the stratified analyses, differential point estimates of the associations were observed across some strata of covariates. However, these differences were not statistically significant. Maternal acetaminophen biomarkers were specifically associated with increased risk of ADHD diagnosis in offspring. Additional clinical and mechanistic investigations are warranted. abstract_id: PUBMED:28428763 ADHD, Lifestyles and Comorbidities: A Call for an Holistic Perspective - from Medical to Societal Intervening Factors. The review examines Attention Deficit Hyperactivity Disorder (ADHD in its Child and Adult form) and its various presentations (Hyperactive Impulsive, Inattentive, and Combined) with a particular focus on environmental (incl. social factors), lifestyles and comorbidities. It is argued that ADHD is best understood in a holistic and interactive context and a vast empirical literature is presented to illustrate the point: Environmental factors include stress in general as well as exposure to toxins (phthalates, bisphenol A). Social factors are illustrated by effects of social deprivation and seduction to unhealthy lifestyles. Maternal lifestyle during pregnancy is pointed out (particularly her exposure to nicotine, alcohol, caffeine, and drugs, even seemingly benign medications like acetaminophen), which all tend to be related to ADHD. Family environment is discussed with respect to protective effect of (mainly authoritative and autocratic) parenting styles. Societal factors include mainly economic and political issues: income inequality and poverty (low SES is an ADHD risk factor) and a growing moral dilemma between a humanistic effort to globally spread the knowledge of ADHD and the medicalization and commercialization of the disorder. The second part of the review is devoted to ADHD related lifestyles and resulting comorbidities (e.g., food addiction and obesity, substance abuse, electronic media dependencies and conduct and personality disorders). Although ADHD is a neurodevelopmental disorder, its assessment and treatment are also linked to environmental, behavioral and social factors and their interactions. abstract_id: PUBMED:29084830 Prenatal Exposure to Acetaminophen and Risk of ADHD. Objectives: To estimate the association between maternal use of acetaminophen during pregnancy and of paternal use before pregnancy with attention-deficit/hyperactivity disorder (ADHD) in offspring while adjusting for familial risk for ADHD and indications of acetaminophen use. Methods: Diagnoses were obtained from the Norwegian Patient Registry for 112 973 offspring from the Norwegian Mother and Child Cohort Study, including 2246 with ADHD. We estimated hazard ratios (HRs) for an ADHD diagnosis by using Cox proportional hazard models. Results: After adjusting for maternal use of acetaminophen before pregnancy, familial risk for ADHD, and indications of acetaminophen use, we observed a modest association between any prenatal maternal use of acetaminophen in 1 (HR = 1.07; 95% confidence interval [CI] 0.96-1.19), 2 (HR = 1.22; 95% CI 1.07-1.38), and 3 trimesters (HR = 1.27; 95% CI 0.99-1.63). The HR for more than 29 days of maternal acetaminophen use was 2.20 (95% CI 1.50-3.24). Use for &lt;8 days was negatively associated with ADHD (HR = 0.90; 95% CI 0.81-1.00). Acetaminophen use for fever and infections for 22 to 28 days was associated with ADHD (HR = 6.15; 95% CI 1.71-22.05). Paternal and maternal use of acetaminophen were similarly associated with ADHD. Conclusions: Short-term maternal use of acetaminophen during pregnancy was negatively associated with ADHD in offspring. Long-term maternal use of acetaminophen during pregnancy was substantially associated with ADHD even after adjusting for indications of use, familial risk of ADHD, and other potential confounders. abstract_id: PUBMED:28785368 Long-term prenatal exposure to paracetamol is associated with DNA methylation differences in children diagnosed with ADHD. Background: Epidemiological studies have shown that long-term exposure to paracetamol during pregnancy is associated with attention-deficit/hyperactivity disorder (ADHD). The mechanism by which paracetamol may modulate the increased risk of developing ADHD is currently unknown. We have conducted an epigenome-wide association study (n = 384 cord blood samples) and investigated whether prenatal exposure to paracetamol is associated with DNA methylation in children diagnosed with ADHD. Results: Analyses identified significant differences in DNA methylation (n = 6211 CpGs) associated with prenatal exposure to paracetamol for more than 20 days in children diagnosed with ADHD compared to controls. In addition, these samples were differentially methylated compared to samples with ADHD exposed to paracetamol for less than 20 days (n = 2089 CpGs) and not exposed to paracetamol (n = 193 CpGs). Interestingly, several of the top genes ranked according to significance and effect size have been linked to ADHD, neural development, and neurotransmission. Gene ontology analysis revealed enrichment of pathways involved in oxidative stress, neurological processes, and the olfactory sensory system, which have previously been implicated in the etiology of ADHD. Conclusions: These initial findings suggest that in individuals susceptible to ADHD, prenatal long-term exposure to paracetamol is associated with DNA methylation differences compared to controls. abstract_id: PUBMED:37431475 Acetaminophen use during pregnancy and offspring attention deficit hyperactivity disorder - a longitudinal sibling control study. Background: Maternal acetaminophen use during pregnancy is associated with increased risk of ADHD in the child. This could reflect causal influence of acetaminophen on fetal neurodevelopment or could be due to confounding factors. The aim of the current study was to examine unmeasured familial confounding factors of this association. Methods: We used data from 26,613 children from 12,902 families participating in the prospective Norwegian Mother, Father, and Child Cohort Study (MoBa). The MoBa was linked to the Norwegian Medical Birth Register and the Norwegian Patient Registry. Siblings discordant for prenatal acetaminophen exposure were compared regarding risk of having an ADHD diagnosis. Results: Children exposed to acetaminophen up to 28 days during pregnancy did not have increased risk of receiving an ADHD diagnosis compared to unexposed children. The adjusted Hazard ratio (aHR) was 0.87 (95% C.I. = 0.70-1.08) for exposure 1 to 7 days, and 1.13 (95% C.I. = 0.82-1.49) for 8-28 days. Long-term exposure (29 days or more) was associated with a two-fold increase in risk of ADHD diagnosis (aHR = 2.02, 95% C.I = 1.17-3.25). In the sibling control model, the association between long-term acetaminophen use and ADHD in the child was aHR = 2.77 (95% C.I. = 1.48-5.05) at the between-family level, and aHR = 1.06 (95% C.I. = 0.51-2.05) at the within-family level. Conclusions: Both the exposed and the unexposed children of mothers with long-term use of acetaminophen in one of the pregnancies had increased risk of receiving an ADHD diagnosis. This indicates that the observed association between long-term acetaminophen use during pregnancy and ADHD in the child may at least partly be confounded by unobserved family factors. abstract_id: PUBMED:31916282 Acetaminophen use during pregnancy and the risk of attention deficit hyperactivity disorder: A causal association or bias? Background: The association between acetaminophen use during pregnancy and the development of attention deficit hyperactivity disorder (ADHD) in the offspring may be due to bias. Objectives: The primary objective was to assess the role of potential unmeasured confounding in the estimation of the association between acetaminophen use during pregnancy and the risk of ADHD, through bias analysis. The secondary objective was to assess the roles of selection bias and exposure misclassification. Data Sources: We searched MEDLINE, Embase, Scopus, and the Cochrane Library up to December 2018. Study Selection And Data Extraction: We included observational studies examining the association between acetaminophen use during pregnancy and the risk of ADHD. Synthesis: We meta-analysed data across studies, using random-effects model. We conducted a bias analysis to studies that did not adjust for important confounders, to explore systematic errors related to unmeasured confounding, selection bias, and exposure misclassification. Results: The search resulted in seven studies included in our meta-analysis. When adjusted estimates were pooled across all studies, the risk ratio (RR) for ADHD was 1.35 (95% confidence interval [CI] 1.25, 1.46; I2 = 48%). Sensitivity analysis for unmeasured confounding in this meta-analysis showed that a confounder of 1.69 on the RR scale would reduce to 10% the proportion of studies with a true effect size of RR &gt;1.10. Unmeasured confounding bias analysis decreased the point estimate in five of the seven studies and increased in two studies, suggesting that the observed association could be confounded by parental ADHD. Unadjusted and bias-corrected risk ratios (bcRRs) were: RR = 1.34, bcRR = 1.13; RR = 1.51, bcRR = 1.17; RR = 1.63, bcRR = 1.38; RR = 1.44, bcRR = 1.17; RR = 1.16, bcRR = 1.18; RR = 1.25, bcRR = 1.05; and RR = 0.99, bcRR = 1.18. Conclusions: Bias analysis suggests that the previously reported association between acetaminophen use during pregnancy and an increased risk of ADHD in the offspring may be due to unmeasured confounding. Our ability to conclude a causal association is limited. abstract_id: PUBMED:34679367 Perinatal Acetaminophen Exposure and Childhood Attention-Deficit/Hyperactivity Disorder (ADHD): Exploring the Role of Umbilical Cord Plasma Metabolites in Oxidative Stress Pathways. Oxidative stress mechanisms may explain associations between perinatal acetaminophen exposure and childhood attention-deficit hyperactivity disorder (ADHD). We investigated whether the changes in umbilical cord plasma amino acids needed to synthesize the antioxidant glutathione and in the oxidative stress biomarker 8-hydroxy-deoxyguanosine may explain the association between cord plasma acetaminophen and ADHD in the Boston Birth Cohort (BBC). Mother-child dyads were followed at the Boston Medical Center between 1998 and 2018. Cord plasma analytes were measured from archived samples collected at birth. Physician diagnoses of childhood ADHD were obtained from medical records. The final sample consisted of 568 participants (child mean age [SD]: 9.3 [3.5] years, 315 (52.8%) male, 248 (43.7%) ADHD, 320 (56.3%) neurotypical development). Cord unmetabolized acetaminophen was positively correlated with methionine (R = 0.33, p &lt; 0.001), serine (R = 0.30, p &lt; 0.001), glycine (R = 0.34, p &lt; 0.001), and glutamate (R = 0.16, p &lt; 0.001). Children with cord acetaminophen levels &gt;50th percentile appeared to have higher risk of ADHD for each increase in cord 8-hydroxy-deoxyguanosine level. Adjusting for covariates, increasing cord methionine, glycine, serine, and 8-hydroxy-deoxyguanosine were associated with significantly higher odds for childhood ADHD. Cord methionine statistically mediated 22.1% (natural indirect effect logOR = 0.167, SE = 0.071, p = 0.019) and glycine mediated 22.0% (natural indirect effect logOR = 0.166, SE = 0.078, p = 0.032) of the association between cord acetaminophen &gt;50th percentile with ADHD. Our findings provide some clues, but additional investigation into oxidative stress pathways and the association of acetaminophen exposure and childhood ADHD is warranted. abstract_id: PUBMED:30923825 Use of Negative Control Exposure Analysis to Evaluate Confounding: An Example of Acetaminophen Exposure and Attention-Deficit/Hyperactivity Disorder in Nurses' Health Study II. Frequent maternal use of acetaminophen in pregnancy has been linked to attention-deficit/hyperactivity disorder (ADHD) in children, but concerns regarding uncontrolled confounding remain. In this article, we illustrate use of the negative control exposure (NCE) approach to evaluate uncontrolled confounding bias in observational studies on pregnancy drug safety and explain the causal assumptions behind the method. We conducted an NCE analysis and evaluated the associations between maternal acetaminophen use during different exposure periods and ADHD among 8,856 children born in 1993-2005 to women enrolled in the Nurses' Health Study II cohort. Information on regular maternal acetaminophen use was collected prospectively in biennial questionnaires. A total of 721 children (8.1%) in the cohort had been diagnosed with ADHD as reported by the mothers. Our NCE analysis suggested that only acetaminophen use at the time of pregnancy was associated with childhood ADHD (odds ratio = 1.34, 95% confidence interval: 1.05, 1.72), and the effect estimates for the 2 NCE periods (about 4 years before and 4 years after the pregnancy) were null. Our findings corroborate those of prior reports suggesting that prenatal acetaminophen exposure may influence neurodevelopment. The lack of an association between acetaminophen use in the pre- and postpregnancy exposure periods and ADHD provides assurance that uncontrolled time-invariant factors do not explain this association. abstract_id: PUBMED:31042271 Association of Maternal Neurodevelopmental Risk Alleles With Early-Life Exposures. Importance: Early-life exposures, such as prenatal maternal lifestyle, illnesses, nutritional deficiencies, toxin levels, and adverse birth events, have long been considered potential risk factors for neurodevelopmental disorders in offspring. However, maternal genetic factors could be confounding the association between early-life exposures and neurodevelopmental outcomes in offspring, which makes inferring a causal relationship problematic. Objective: To test whether maternal polygenic risk scores (PRSs) for neurodevelopmental disorders were associated with early-life exposures previously linked to the disorders. Design, Setting, And Participants: In this UK population-based cohort study, 7921 mothers with genotype data from the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent testing for association of maternal PRS for attention-deficit/hyperactivity disorder (ADHD PRS), autism spectrum disorder (ASD PRS), and schizophrenia (SCZ PRS) with 32 early-life exposures. ALSPAC data collection began September 6, 1990, and is ongoing. Data were analyzed for the current study from April 1 to September 1, 2018. Exposures: Maternal ADHD PRS, ASD PRS, and SCZ PRS were calculated using discovery effect size estimates from the largest available genome-wide association study and a significance threshold of P &lt; .05. Main Outcomes And Measures: Outcomes measured included questionnaire data on maternal lifestyle and behavior (eg, smoking, alcohol consumption, body mass index, and maternal age), maternal use of nutritional supplements and medications in pregnancy (eg, acetaminophen, iron, zinc, folic acid, and vitamins), maternal illnesses (eg, diabetes, hypertension, rheumatism, psoriasis, and depression), and perinatal factors (eg, birth weight, preterm birth, and cesarean delivery). Results: Maternal PRSs were available from 7921 mothers (mean [SD] age, 28.5 [4.8] years). The ADHD PRS was associated with multiple prenatal factors, including infections (odds ratio [OR], 1.11; 95% CI, 1.04-1.18), use of acetaminophen during late pregnancy (OR, 1.11; 95% CI, 1.04-1.18), lower blood levels of mercury (β coefficient, -0.06; 95% CI, -0.11 to -0.02), and higher blood levels of cadmium (β coefficient, 0.07; 95% CI, 0.05-0.09). Little evidence of associations between ASD PRS or SCZ PRS and prenatal factors or of association between any of the PRSs and adverse birth events was found. Sensitivity analyses revealed consistent results. Conclusions And Relevance: These findings suggest that maternal risk alleles for neurodevelopmental disorders, primarily ADHD, are associated with some pregnancy-related exposures. These findings highlight the need to carefully account for potential genetic confounding and triangulate evidence from different approaches when assessing the effects of prenatal exposures on neurodevelopmental disorders in offspring. Answer: The question of whether there is a causal relationship between maternal acetaminophen administration and ADHD in offspring has been the subject of several studies, with varying results. One study using a murine model did not find evidence that prenatal exposure to acetaminophen causes hyperkinetic dysfunction in mouse offspring, suggesting that there may not be a direct causal relationship (PUBMED:27295086). However, this study also noted that due to the lack of accurate assessment of ADHD in murine models, the results should be taken with caution when compared to clinical data. In contrast, other studies have reported associations between maternal acetaminophen use during pregnancy and an increased risk of ADHD in offspring. For instance, a study found significant positive dose-responsive associations with ADHD diagnosis for each maternal acetaminophen biomarker, even after adjusting for various covariates (PUBMED:29970852). Another study reported that long-term maternal use of acetaminophen during pregnancy was substantially associated with ADHD after adjusting for indications of use and familial risk of ADHD (PUBMED:29084830). Further research has suggested that long-term prenatal exposure to paracetamol is associated with DNA methylation differences in children diagnosed with ADHD, indicating a potential epigenetic mechanism (PUBMED:28785368). Additionally, a longitudinal sibling control study indicated that the observed association between long-term acetaminophen use during pregnancy and ADHD in the child may be confounded by unobserved family factors (PUBMED:37431475). A bias analysis of several studies suggested that the association between acetaminophen use during pregnancy and an increased risk of ADHD could be due to unmeasured confounding, indicating that a causal association is not definitive (PUBMED:31916282). Another study exploring oxidative stress pathways found that umbilical cord plasma metabolites related to oxidative stress may partially explain the association between cord plasma acetaminophen and ADHD (PUBMED:34679367). Lastly, a study using negative control exposure analysis suggested that only acetaminophen use during pregnancy was associated with childhood ADHD, providing some assurance that uncontrolled time-invariant factors do not explain this association (PUBMED:30923825).
Instruction: Mediastinal small cell carcinoma: a unique clinical entity? Abstracts: abstract_id: PUBMED:26329297 Mediastinal small cell carcinoma: a unique clinical entity? Purpose: Mediastinal small cell carcinoma (MSCC) is a rare tumor with limited published literature. In view of diagnostic confusion pertaining to this tumor, we investigated its origin, clinical features, management and survival. Methods: Clinical data of MSCC patients were retrospectively reviewed. Eligible patients showed pathologically proven small cell carcinoma (SCC) with the primary lesions confined to the mediastinum. Survival information was collected through follow-up studies. Results: Among 25 MSCC patients identified, 22 were classified to have limited disease (LD), while 3 were with extensive disease (ED). The 5 patients (20%) underwent surgery and 20 patients (80%) underwent non-surgical treatment. The 4 patients with LD MSCC received chemotherapy alone, while 13 of them received chemoradiotherapy. Overall median survival time (MST) of all patients was 22 months, and the 1-, 3- and 5-year overall survival rates were 67.4, 16.8, and 8.4%, respectively. The MST of LD and ED patients separately was 23 and 8 months, respectively, with significant difference (P = 0.005). But, the MST of patients who received surgical and non-surgical treatment was 25 and 21 months, respectively, with no significant difference (P = 0.757). The MST of LD patients receiving chemotherapy and chemoradiotherapy was 12 and 29 months, respectively, but somehow did not show significant difference (P = 0.482). Conclusions: Our data suggested that MSCC may be a separate clinical entity like extrapulmonary small cell carcinomas (EPSCCs). Despite, multimodal treatment is currently the main treatment option, but for patients with LD MSCC, chemoradiotherapy is recommended to be preferred treatment modality. abstract_id: PUBMED:18979286 Extrapulmonary small cell carcinoma localized in lymph nodes: is it a different clinical entity? Background: Extrapulmonary small cell carcinomas (EPSCC) can clinically progress differently depending on the primary site of disease involvement. This review is focused on patients with small cell carcinoma (SmCC) exclusively localized in a lymph node or in multiple lymph nodes without any evidence of a primary tumor in any other organ. Methods: We searched the period 1980 to 2007 in the PubMed database and identified 11 publications in the English language presenting at least one case of SmCC. In total 28 individual patients were included in the present study. They were scrutinized in terms of epidemiology, clinical presentation, staging, pathology, etiology, treatment and prognosis. Results: Characteristics such as age, gender and smoking were similar to those seen in other EPSCCs. Median survival was not reached (42+, range, 9.1 to 100 months). The survival rate was found to be 79% at 3 years. Seventy-seven percent of the patients had limited stage disease. These patients completely responded to surgical therapy, chemotherapy, radiotherapy or to a combination of these treatments. Seventy-one percent of the patients with limited stage SmCC localized in lymph nodes were recurrence-free during the study periods. Discussion: Our review patient group with SmCC localized in lymph nodes exhibited an excellent clinical behavior and survival results when compared to other patients with pulmonary and non-pulmonary SmCCs. SmCCs localized in lymph nodes may be a separate clinical entity. abstract_id: PUBMED:35466749 SMARCA4-deficient Undifferentiated Uterine Sarcoma: Clinicopathological Features of an Emerging Entity. SMARCA4-deficient undifferentiated uterine sarcoma is a recently described molecularly defined entity among the subset of aggressive undifferentiated uterine tumors. Mutation in the SMARCA4 gene is a key driver alteration, as also seen in small cell carcinoma of ovary, hypercalcemic type (SCCOHT) and thoracic undifferentiated carcinoma. Limited number of cases of SMARCA4-deficient undifferentiated uterine sarcoma has been reported in literature. We hereby describe a case of this distinct entity in a 52-year-old woman. Histomorphological examination showed sheets of monomorphic epithelioid cells with a variable proportion of cells displaying rhabdoid features, brisk mitotic activity, and lymphovascular invasion. A panel of immunohistochemical markers was required to exclude the differential diagnoses. The tumor was microsatellite stable. Loss of SMARCA4 expression and intact expression of INI1 in tumor cells by immunohistochemistry (IHC) confirmed the diagnosis of SMARCA4- deficient undifferentiated uterine sarcoma. The patient had a rapidly progressive clinical course. abstract_id: PUBMED:7492823 Desmoplastic small round cell tumors in adults: a new entity Desmoplastic small round cell tumor is a recently recognized clinicopathologic entity with distinctive morphologic and immunohistochemical features. Moreover, specific cytogenetic and molecular characteristics have been described. We report here five new adult cases. The median age was 22 years. The clinical presentation was intra-abdominal in four patients and para-testicular in one patient. In patients with intra-abdominal disease, the initial surgery consisted of suboptimal debulking in three patients and only biopsies in the fourth. The patient with para-testicular tumor underwent an orchiectomy. Chemotherapy including cisplatin, etoposide, cyclophosphamide and doxorubicin or epirubicin (PAVEP/PEVEP regimen) was delivered after initial surgery in patients with intra-abdominal presentation or at relapse in the patient with para-testicular tumor. Only one persistent complete response was obtained and no salvage treatment was active in the other patients who died of progressive disease. A review of the literature concerning the clinicopathologic, cytogenetic and molecular features of this new entity is exposed. abstract_id: PUBMED:22301491 HPV-associated neuroendocrine carcinoma of the oropharynx: a rare new entity with potentially aggressive clinical behavior. High-grade neuroendocrine carcinoma of the head and neck is an aggressive neoplasm which rarely arises in the oropharynx. Here we report a series of 8 oropharyngeal neuroendocrine carcinomas associated with both human papillomavirus (HPV) infection and tobacco exposure. The tumor occurred predominantly in male patients (6 of 8) at a mean age of 59 years. Histologically, these cases were all classified as poorly differentiated neuroendocrine carcinoma (small cell carcinoma) with high mitotic activity [mean 53.3 mitoses per 10 HPF], necrosis, high nuclear-to-cytoplasmic ratio, and nuclear molding. One case also exhibited a moderately differentiated component, and one other case had a component of squamous cell carcinoma with basaloid features. Neuroendocrine differentiation was confirmed by immunoreactivity for synaptophysin and/or chromogranin A in all cases. P63 staining was negative, except in 1 case. Seven of the 8 cases showed strong and diffuse p16 expression, a surrogate marker for high-risk HPV infection. HPV infection was confirmed in 6 of these 7 cases by HPV in situ hybridization and/or polymerase chain reaction analysis. HPV subtypes 16, 18, and 33 were identified in 1 case each by polymerase chain reaction testing. Six of the 7 patients for whom clinical history was available presented with advanced disease (4 with regional lymph node metastases, 1 with distant metastases, and 1 with distant and locoregional metastases). Disease recurred in 5 of the 6 patients with available clinical follow-up, with 3 developing distant metastases to brain, bones, lung, pleura, adrenal glands, and pancreas. These 3 cases were all from the HPV-positive group. In summary, neuroendocrine carcinoma of the oropharynx represents a rare novel HPV-associated entity with high-grade histologic features and aggressive clinical behavior. abstract_id: PUBMED:16861964 Gastric large cell neuroendocrine carcinomas: a distinct clinicopathologic entity. The current histologic classifications of gastric cancers define only carcinoids and small cell carcinomas in the neuroendocrine (NE) category. This study aimed to characterize the histologic and clinical features of high-grade gastric NE carcinomas of nonsmall cell type, tentatively named large cell neuroendocrine carcinoma (LCNEC). Tumors with histologic features suspicious of NE differentiation were selected by a histologic review of 2835 resected gastric cancers, and those with a NE phenotype in &gt; 50% and 1% to approximately 50% tumor cells assessed by expressing chromogranin A and/or synaptophysin were defined as LCNEC and adenocarcinoma with neuroendocrine differentiation (ACNED), respectively. One hundred ninety-nine tumors were selected and of the 109 positive for chromogranin A and/or synaptophysin, 42 and 44 met the criteria for LCNEC and ACNED, respectively. Generally, LCNECs demonstrated less predominant NE morphology than carcinoids, and could be roughly divided into solid (30 cases), tubular (7 cases), and scirrhous (5 cases) subtypes with reference to their main growth pattern. The prognosis of LCNECs was significantly worse than that of conventional adenocarcinomas (P &lt; 0.0001). Thus, this study shows that the spectrum of gastric NE tumors is broader than has previously been recognized and LCNEC is not only a distinct histopathologic entity, but also a distinct clinical entity. Furthermore, the prognosis of ACNEDs was also significantly worse than that of adenocarcinomas (P &lt; 0.0001), and some ACNEDs might actually have been LCNECs, and survival analysis showed that &gt; 20% positivity of NE markers could be enough to characterize LCNEC, as long as light microscopic NE morphology was present in the tumor. abstract_id: PUBMED:17197916 Nonsmall cell lung carcinoma with neuroendocrine differentiation--an entity of no clinical or prognostic significance. The existence of non-small cell lung carcinoma with neuroendocrine differentiation as a distinct entity and its relevance for prognostic and treatment purposes is controversial. This study assesses the frequency and biologic and prognostic significance of neuroendocrine (NE) expression of synaptophysin (SNP), chromogranin (Ch), and neural cell adhesion molecule (N-CAM) using tissue microarray (TMA) and immunohistochemistry. Six hundred nine nonsmall cell lung carcinomas (NSCLCs) were reviewed for subclassification. TMA blocks were made using duplicate 0.6-mm-diameter tissue cores and slides stained with SNP, Ch, and N-CAM. Immunoreactivity was considered if 1% or more of tumor cells were positive. Hematoxylin and eosin-stained sections were subclassified as: 243 adenocarcinoma (ACA), 272 squamous cell carcinoma (SCC), 35 large cell carcinoma, 32 non-small cell carcinoma NOS, and 6 other (carcinosarcoma, giant cell carcinoma). Positivity for either marker was identified in 13.6% of NSCLC (76/558). NSCLC showed reactivity for Ch in 0.4% of cases (2/524), for SNP in 7.5% of cases (39/521) and for N-CAM in 8.6% of cases (44/511), whereas only 0.2% of cases (1/517) showed coexpression of SNP and Ch and none of all 3 markers. The assessment of NE differentiation in NSCLC is unnecessary and expensive and is of no clinical or prognostic significance. SNP or N-CAM stains a small minority of NSCLC, whereas Ch immunoreactivity is less common. Positivity for any 2 NE markers is rare. SNP is more likely to be expressed in adenocarcinoma (P=0.01) and N-CAM in squamous-cell carcinoma (P=0.008). Otherwise there was no correlation between immunoreactivity and tumor morphology. Disease specific and overall survival is not influenced by NE differentiation and therefore non-small cell lung carcinoma with neuroendocrine differentiation should not be a subclass distinct from the other NSCLC. abstract_id: PUBMED:26960647 Synchronous existence of granular cell tumor and small cell carcinoma of lung: An unusual entity. Granular cell tumor (GCT) is a rare benign mesenchymal tumor that uncommonly occurs in the lung and tracheobronchial tree. Small cell carcinoma of lung is a centrally located malignant neoplasm that commonly occurs in elderly smokers. Concomitant existence of both the neoplasm in lung is extremely rare with only one reported case in the literature. Few rare combinations of GCT with other primary bronchogenic carcinomas have also been reported. Clinical symptoms depend upon the site and size of the tumor. Definitive diagnosis is by histopathological and proper immunohistochemical analysis. Identification of this entity is important as treatment requires individual therapy protocols that depend on the presence of metastasis, location of the tumors, and type of bronchogenic carcinoma. abstract_id: PUBMED:37203324 Small Cell Carcinoma of the Rectum-An Unexpected Diagnosis: Current Treatment Options for a Rare and Aggressive Entity. Rectal small cell carcinoma is a rare and aggressive cancer subtype for which a consensus of optimal treatment has not yet been reached. This cancer presents a difficult surgical problem, and thus, the mainstay of treatment tends to mirror that of small cell carcinoma of the lung (chemotherapy, radiation therapy, and immune modulators). This brief report highlights current treatment options available for this rare and difficult entity. There is a significant need for large-center clinical trials and prospective studies to help determine the best treatment regimen to effectively care for patients with small cell carcinoma of the rectum. abstract_id: PUBMED:10483278 Askin's tumor--a rare entity. Case report with references to the literature Askin's Tumour (synonym: primitive neuroectodermal tumour) is a rare neoplasm of the chest wall. In 1979 Askin and Rosai described an unique clinicopathological entity, characterised as a malignant small-cell tumour of the soft tissues of the chest wall in childhood and adolescence. We report on a case of a 28-year old male who had a massive tumour in the left thorax with association to the chest wall. The clinical symptoms were dyspnoea and increasing chest pain. The tumour was revealed as a sarcoma of the lung by CT-guided fine needle aspiration. A pneumectomy of the left lung with partial resection of ribs IV. and V. was performed. Postoperative histology revealed an Askin's tumour of the chest wall with infiltration of the lung. After surgical treatment in our hospital we transferred the patient to an oncological centre for adjuvant chemotherapy. As a consequence of aggressive growth of tumour therapy should be performed in oncological centres in clinical studies. The treatment includes radical surgical resection, neoadjuvant and adjuvant chemotherapy plus radiation. In our case the primary resection was performed because of increasing symptoms. It is difficult to establish an accurate preoperative diagnosis of Askin's tumour. Microscopy and immunohistological stain of the specific marker--neuron-specific enolase--are essential. Multimodal treatment allows a long-term survival, but often the prognosis is infaust. Answer: Mediastinal small cell carcinoma (MSCC) appears to be a unique clinical entity, as suggested by the data from a retrospective review of clinical data of MSCC patients. The study found that MSCC may be distinct like other extrapulmonary small cell carcinomas (EPSCCs). The majority of patients (88%) were classified with limited disease (LD), and the overall median survival time (MST) was 22 months, with 1-, 3-, and 5-year overall survival rates of 67.4%, 16.8%, and 8.4%, respectively. Patients with LD had a longer MST (23 months) compared to those with extensive disease (ED) (8 months). The study recommended chemoradiotherapy as the preferred treatment modality for patients with LD MSCC (PUBMED:26329297). This finding aligns with the notion that certain subtypes of small cell carcinomas localized in specific areas, such as lymph nodes, may exhibit unique clinical behaviors and survival outcomes, potentially qualifying them as separate clinical entities. For example, small cell carcinoma localized in lymph nodes has been reported to show excellent clinical behavior and survival results compared to other pulmonary and non-pulmonary small cell carcinomas, with a 3-year survival rate of 79% and a high percentage of patients being recurrence-free during the study periods (PUBMED:18979286). The concept of unique clinical entities is further supported by the identification of other distinct clinicopathologic entities, such as SMARCA4-deficient undifferentiated uterine sarcoma (PUBMED:35466749), desmoplastic small round cell tumors (PUBMED:7492823), HPV-associated neuroendocrine carcinoma of the oropharynx (PUBMED:22301491), gastric large cell neuroendocrine carcinomas (PUBMED:16861964), and Askin's tumor (PUBMED:10483278). Each of these entities has specific clinical, histological, and sometimes molecular characteristics that set them apart from more common forms of cancer. In summary, the evidence suggests that MSCC may indeed be considered a unique clinical entity, with distinct clinical features, management strategies, and survival outcomes compared to other forms of small cell carcinoma.
Instruction: Can medical treatment be the first-line treatment for most ectopic pregnancies? Abstracts: abstract_id: PUBMED:20005640 Can medical treatment be the first-line treatment for most ectopic pregnancies? A series of 202 patients Objectives: To assess the efficacy and safety of a liberal policy of medical treatment for ectopic pregnancies (EUP). Material And Methods: Retrospective study of EUP that received active management between January 1999 and December 2003. Patients with rupture or near-rupture and those who refused medical treatment received surgical treatment. Other patients were treated by methotrexate. Results: Two hundred and two EUP were managed; 26 % were treated surgically, 74 % medically. The success rate of medical treatment was 83 %. A ss-hCG threshold of 2526 IU/l was selected. Under this level, the success rate was 90.7 % compared with 68 % when it was higher (p=0.001). Cardiac activity tripled the failure rate. Morbidity with medical treatment consisted of an augmentation in transaminases in 18.8 % and one case of severe dermatosis (0.7 %). Almost half of the patients treated medically required hospitalizations. Conclusion: The extension of medical treatment in our population to 74 % of all EUP yielded a high success rate, 83 %. abstract_id: PUBMED:24008873 Medical treatment of ectopic pregnancy Background: The ectopic pregnancy (EP) is a public health problem and its frequency has doubled in most industrialized countries in 20 years. aim: To evaluate the effectiveness of medical treatment of ectopic pregnancy with methotrexate (MTX) intramuscularly (IM). Methods: prospective study supported between October 2006 and December 2010. The selected patients received methotrexate IM (1 mg per kg). The monitoring was based on: the kinetics of plasma HCG, clinical examination and ultrasound. A second injection was performed if hCG on day 4 was increased by more than 25% or J7&gt; the initial rate. Healing corresponded to obtain a zero rate of HCG in a stable manner. Results: We used the first-line medical treatment in 122 patients. The average age of patients was 31.94 years. A haematosalpinx was found in 87.70% of cases. The initial rate of HCG plasma varied between 40 IU/ml and 4088 IU / ml, with an average of 805.88 m IU / ml. The primary success rate obtained after a single injection of intramuscular MTX was 67%. The high success rate obtained after two injections of MTX was 27%. The overall success rate after 1 or 2 injections of MTX was 82%. 17 patients underwent surgical treatment after a first injection of methotrexate. 5 patients underwent surgery after receiving two doses of methotrexate. The period of normalization of plasma levels of h CG was 24 days on average, with extremes ranging from 4 to 43 days for 67 patients cured after a single injection of MTX. This period was 33 days on average, with extremes ranging from 8 to 62 days for patients healed after two injections of MTX. Conclusion: Medical treatment applied to 38% of ectopic pregnancies diagnosed in our department is effective in 82% of cases if the inclusion criteria are strictly adhered to. Successful treatment is limited by patient compliance and demanding nature of monitoring. abstract_id: PUBMED:33654437 Suction Curettage and Foley Balloon as a First-Line Treatment Option for Caesarean Scar Pregnancy and Reproductive Outcomes. Background: Caesarean scar pregnancy (CSP) is associated with various serious pregnancy complications. This study presents the outcomes of patients treated with suction curettage and Foley balloon as the first-line treatment for CSP as well as their future outcomes in terms of successful pregnancy. Methods: Of the 44 patients diagnosed with CSP between January 2015 and April 2019, 42 who provided consent for the termination of pregnancy and who simultaneously underwent the transabdominal ultrasound-guided suction curettage + Foley balloon treatment were included in the study. These patients were then contacted and interviewed to collect data concerning their post-treatment number of pregnancies, number of miscarriages, number of live or dead births, mode of delivery, delivery time as well as whether any abnormal placental invasion or uterine ruptures developed. Results: Transabdominal ultrasound-guided suction curettage + Foley balloon was simultaneously performed as the first-line treatment in 42 patients with CSP. In two of these cases, wherein post-treatment level of β-hCG reached a plateau, single-dose systemic methotrexate was administered. Emergency surgical intervention, hysterectomy, massive blood transfusion and additional systemic methotrexate administration were not required. Twenty-six of 42 patients could be contacted. 18/26 were trying to conceive. 6/18 patients had secondary infertility, and 12/18 patients managed to conceive. 8/12 had caesarean delivery at full term. 1/12 was 16-week pregnant, 1/12 had tubal ectopic pregnancy and 2/12 had first-trimester abortus. Conclusion: When administered as the first-line treatment for CSP, the suction curettage + Foley balloon treatment is a highly successful, cheap and easily performed minimally invasive method that requires only a short hospital stay, making it comfortable for patients. Compared with other uterine-sparing methods, it does not harm fertility and has positive effects on patients' future fertility outcomes. abstract_id: PUBMED:37937363 Interstitial pregnancy management: A multicentric analysis of 98 patients from the FRANCOGENT group comparing surgery and medical treatment. Objective: To evaluate the outcomes associated with each therapeutic option for patients diagnosed with interstitial pregnancy (IP). Methods: We conducted a multicentric retrospective cohort study within the departments of Gynecology and Obstetrics involved in the Francogent research group. Women treated for an interstitial pregnancy between January 2008 to December 2019 were included. Three therapeutic options were evaluated: surgical treatment (ST); in situ methotrexate combined with systemic methotrexate (IS-MTX); and systemic methotrexate (IM-MTX). Success of first-line treatment was defined by hCG negativation (&lt;5I U/L). Secondary outcomes included the need for secondary surgical procedure, secondary medical treatment, emergency surgery, postoperative complications, duration of hospitalization, and delay before hCG negativation. Results: A total of 98 patients were managed for IP: 42 (42.9%) patients had IM-MTX; 34 (34.7%) had IS-MTX; and 22 (22.4%) had ST. First-line treatment was successful in all patients of the ST group (22/22, 100%), in 31% of patients within the IM-MTX group (13/42) and 70.6% (24/34) in the IS-MTX group. The sole parameter associated with the risk of treatment failure was the mode of methotrexate administration. The size of the gestational sac or the presence of fetal heartbeat was not associated with decreased medical treatment (IS or IM-MTX) efficiency. Conclusion: Either ST or IS-MTX are good options for IP treatment associated with high success rates. A single-dose regimen of IM-MTX is less efficient than IS-MTX or ST. Symptomatic patients with severity criteria should always undergo emergency surgery. IP remains a high-risk condition that should be managed, whenever possible, in referral centers to potentialize the chances of favorable outcomes. abstract_id: PUBMED:9417466 Medical treatment exclusively for cervical pregnancy with in situ methotrexate We present a case report of cervical pregnancy with medical treatment. Medical treatment consisted in injection of methotrexate (50 mg) into the pregnancy, on the first, third and seventh day. Ultrasound and Doppler give important information for follow up. The pregnancy totally resolved and the patient did not need any further treatment. abstract_id: PUBMED:25889882 Local Methotrexate Injection as the First-line Treatment for Cesarean Scar Pregnancy: Review of the Literature. The objective of this study was to determine the outcome of using ultrasound-guided local methotrexate injection as the first-line treatment of cesarean scar pregnancy (CSP). A literature review was performed on all eligible reports using this modality as the first-line treatment of CSP. Relevant publications were obtained from the PubMed electronic database from inception to December 2014. Ninety-six cases from 95 women reported in 17 articles were reviewed. The success rate was 73.9% after a single local methotrexate injection. An accumulated success rate of 88.5% could be achieved after additional local or intramuscular methotrexate administration. Eleven cases (11.5%) failed methotrexate treatment and required surgical interventions. Except for women with serum human chorionic gonadotropin levels higher than 100 000 IU/L, ultrasound-guided local methotrexate injection could be considered as a first-line treatment modality for CSP. abstract_id: PUBMED:25338827 Effectiveness of medical treatment with methotrexate for interstitial pregnancy. Background: In the last three decades, systemic methotrexate (MTX) has become widely accepted as the primary treatment for unruptured tubal pregnancy. This has prompted investigations into the use of MTX in the management of interstitial pregnancy. Aim: To determine the effectiveness of methotrexate for treatment of interstitial pregnancy. Methods: We retrospectively reviewed 17 haemodynamically stable women admitted to our tertiary, university-affiliated medical centre with an intact interstitial pregnancy from January 2003 through February 2013. First-line treatment consisted of intramuscular MTX 1 mg/kg/day x4 alternating with folinic acid 0.1 mg/kg or, in cases of low beta-human chorionic gonadotropin (β-hCG) and no embryonic pole, one dose of 50 mg/m(2) body surface area. Uterine artery MTX injection was administered on the ipsilateral side of the pregnancy followed by uterine artery embolisation as second-line treatment. Failure criteria were &lt;15% decrease in β-hCG or sonographic evidence of pregnancy development. Second-line failures were referred for surgery. Results: Twelve women (70.5%) underwent successful first-line treatment and 5 (29.5%) required second-line treatment. Baseline characteristics were similar except for a higher rate of two previous ectopic pregnancies in the first-line failures (80% versus 8.3%, P = 0.01). Two women eventually required surgery, both with two previous ectopic pregnancies. Of the three women with fetal cardiac activity on admission, two had successful first-line treatment and one required surgery. Conclusion: Systemic MTX is effective for treatment of intact interstitial pregnancy in haemodynamically stable women. The success rate is increased with uterine artery MTX injection as second-line treatment. abstract_id: PUBMED:8677411 Current indications for medical treatment of ectopic pregnancy Today, the early diagnosis of ectopic pregnancy (EP) is made possible by the assay of plasmatic hCG and the improvement of trans-vaginal sonography. These different tests lead to diagnose the EP before the ruptured stage. Therefore, the medical treatment is today a logical alternative to the surgical one. The medical treatment is mainly based upon the use of methotrexate (MTX). The logical indications for EP medical treatment are: early EP without peritoneum effusion, with hCG &lt; or = 5000 mUI/ml, without embryonic cardiac activity at sonography. The medical therapy by intramuscular injection of MTX at 1 mg/kg gives a success rate of 91.8% similar to the success rate of intra tubal injection of MTX under sonographic control. abstract_id: PUBMED:9341620 Medical treatment of ectopic pregnancy with methotrexate. Objective: To review our experience with low-dose IM methotrexate for the medical management of ectopic pregnancy (EP). Design: Retrospective chart review. Setting: Magee-Womens Hospital, Pittsburgh, Pennsylvania. Patient(s): The first 50 women treated by the resident service in whom EP was diagnosed and treated with methotrexate. Intervention(s): Intramuscular methotrexate, 50 mg/m2. Serum beta-hCG was evaluated 4 and 7 days after treatment and then weekly thereafter. The dose was repeated if the beta-hCG level did not drop &gt; or = 15% between days 4 and 7 or if a plateau or rise was noted during weekly follow-up evaluation. Surgery was performed if significant abdominal pain occurred in the presence of hemodynamic instability or signs of peritoneal irritation on physical examination. Main Outcome Measure(s): Resolution of the EP without surgical intervention. Result(s): Two patients were lost to follow-up and one was treated without a certain diagnosis of EP. Forty-three of the remaining 47 women (91.5%; 95% confidence interval, 83.5%, 99.5%) were treated successfully with methotrexate. Of these, 36 women were treated with a single dose, and 7 required a second dose. Four women were treated surgically after medical management failed. The time from initiation of treatment to cure in women who were treated successfully was 25 +/- 15 days (mean +/- SD). Thirteen patients (27.7%) made additional visits to the emergency department because of increased abdominal pain. Conclusion(s): As medical therapy for EP becomes common practice, familiarity with its side effects may lead to greater success rates. The decision to abandon medical treatment and proceed with surgery should be based on defined guidelines, such as the development of peritoneal signs, decreasing hemoglobin levels, or hemodynamic instability. abstract_id: PUBMED:36592889 Efficacy of suction curettage as the first-line treatment of cesarean scar pregnancy: A retrospective study. Objective: There is still no consensus on a safe and efficient treatment modality for cesarean scar pregnancy (CSP), which is known to cause severe complications, such as life-threatening hemorrhage. Suction curettage (SC) has been used as the first-line treatment for CSP with controversial outcomes. In this context, the objective of this study is to analyze the efficacy of SC in the treatment of CSP. Methods: The sample of this retrospective study consisted of 64 CSP patients treated using SC between 2012 and 2022. Patients' demographic and clinical variables, including the thickness of the myometrium at the lower uterine segment between the urinary bladder and cesarean scar, were obtained from their medical records. The study's primary outcome was determined as the success rate of SC. Accordingly, the patients were categorized into two groups: successful SC (Group 1) and unsuccessful SC (Group 2). Results: The success rate of SC was determined as 78.1%. The number of previous cesarean deliveries, gestational age, baseline beta-human chorionic gonadotropin (β-hCG) values, and endometrial thickness was significantly higher in Group 2 (p&lt;0.05 for all), whereas the fetal cardiac activity and absence of an embryonic pole were significantly higher in Group 2 (p = 0.001 and p = 0.004, respectively). There was no significant difference between the groups in the thickness of the myometrium at the lower uterine segment (p = 0.890). The hemoglobin levels decreased significantly after SC in both Groups 1 and 2 (p&lt;0.001 and p = 0.009, respectively). There was no significant difference between the groups in preoperative and postoperative hemoglobin values and the decrease in hemoglobin levels (p&gt;0.05). Conclusion: The study findings did not indicate any significant correlation between myometrial thickness at the lower uterine segment and the efficacy of SC in CSP patients. On the other hand, the number of cesarean deliveries, gestational age, baseline β-hCG values, endometrium thickness, fetal cardiac activity, and embryonic pole may be used to predict the outcome of SC in the treatment of CSP. Answer: Yes, medical treatment can be the first-line treatment for most ectopic pregnancies, as evidenced by several studies. A retrospective study of 202 ectopic pregnancies managed between 1999 and 2003 found that 74% were treated medically with methotrexate, achieving an 83% success rate. The success rate was even higher (90.7%) when the β-hCG threshold was below 2526 IU/l, compared to 68% when it was higher. Cardiac activity in the ectopic pregnancy tripled the failure rate of medical treatment, and some side effects were noted, such as an increase in transaminases and one case of severe dermatosis (PUBMED:20005640). Another prospective study from October 2006 to December 2010 reported an 82% overall success rate for medical treatment with methotrexate in 122 patients, with a primary success rate of 67% after a single injection and a high success rate of 27% after two injections (PUBMED:24008873). For cesarean scar pregnancies (CSP), suction curettage and Foley balloon treatment as the first-line option showed high success rates, did not harm fertility, and had positive effects on patients' future fertility outcomes (PUBMED:33654437). Similarly, local methotrexate injection as the first-line treatment for CSP had a success rate of 73.9% after a single injection, with an accumulated success rate of 88.5% after additional methotrexate administration (PUBMED:25889882). In the case of interstitial pregnancies, a multicentric analysis comparing surgery and medical treatment found that in situ methotrexate combined with systemic methotrexate or surgery were good options with high success rates, while a single-dose regimen of systemic methotrexate was less efficient (PUBMED:37937363). Overall, these studies suggest that medical treatment with methotrexate is an effective first-line option for most ectopic pregnancies, provided that certain criteria are met, such as the absence of rupture, low β-hCG levels, and no embryonic cardiac activity. However, patient compliance and close monitoring are crucial for the success of medical management (PUBMED:8677411, PUBMED:9341620, PUBMED:36592889).
Instruction: Are language and social communication intact in children with congenital visual impairment at school age? Abstracts: abstract_id: PUBMED:20025621 Are language and social communication intact in children with congenital visual impairment at school age? Background: Development of children with congenital visual impairment (VI) has been associated with vulnerable socio-communicative outcomes often bearing striking similarities to those of sighted children with autism.(1) To date, very little is known about language and social communication in children with VI of normal intelligence. Methods: We examined the presentation of language and social communication of 15 children with VI and normal-range verbal intelligence, age 6-12 years, using a standardised language assessment and parental reports of everyday social and communicative behaviours. Their profiles were compared to those of typically developing sighted children of similar age and verbal ability. Results: Compared to their sighted peers, and relative to their own good and potentially superior structural language skills, children with VI showed significantly poorer use of language for social purposes. Pragmatic language weaknesses were a part of a broader socio-communicative profile of difficulties, present in a substantial proportion of these children and consistent with the pattern found in sighted children with autism. Conclusions: There are ongoing socio-communicative and pragmatic language difficulties in children with congenital VI at school age, despite their good intellectual abilities and advanced linguistic skills. Further research is required to unpack the underlying causes and factors maintaining this vulnerability in such children. abstract_id: PUBMED:26809155 Language and communication development in preschool children with visual impairment: A systematic review. Background: Language and communication difficulties of young children with visual impairment (VI) are ascribed to intellectual disability, multiple disabilities and autism spectrum disorder (ASD) rather than their sensory impairment. Consequently, the communication difficulties of children with VI may have been underestimated and undertreated. Objectives: This report aims to critically appraise recent peer reviewed literature relating to communication and language development in children with VI. Method: A systematic search of the literature (2003–2013) was completed using the PRISMA guidelines, and primary and secondary search phrases. Nine publications were reviewed in terms of the strength of recent evidence. Thematic analysis was used to describe the early language and communication characteristics of children with VI. Results: All the selected articles (n = 9) were from developed countries and participants from seven of the studies had congenital VI. Five of the studies received an evidence level rating of III while four articles were rated as IIb. Two main themes emerged from the studies: early intervention, and multiple disabilities and ASD. Language and communication development is affected by VI, especially in the early stages of development. Speech-language therapists should therefore be included in early intervention for children with VI. Conclusion: Recent evidence on the early language and communication difficulties of children with VI exists, but children in developing countries with acquired VI appear to not be investigated. The identified language and communication developmental characteristics may assist speech-language therapists to build a knowledge base for participation in early intervention for young children with VI and their families. abstract_id: PUBMED:24165364 Story discourse and use of mental state language between mothers and school-aged children with and without visual impairment. Background: Lack of sight compromises insight into other people's mental states. Little is known about the role of maternal language in assisting the development of mental state language in children with visual impairment (VI). Aims: To investigate mental state language strategies of mothers of school-aged children with VI and to compare these with mothers of comparable children with typically developing vision. To investigate whether the characteristics of mother-child discourse were associated with the child's socio-communicative competence. Methods & Procedures: Mother-child discourse with twelve 6-12-year-old children with VI was coded during a shared book-reading narrative and compared with 14 typically sighted children matched in age and verbal ability. Outcomes & Results: Mothers of children with VI elaborated more and made significantly more references to story characters' mental states and descriptive elaborations than mothers of sighted children. Mental state elaborations of mothers in the VI group related positively with the level produced by their children, with the association remaining after mothers' overall verbosity and children's developmental levels were controlled for. Frequency of maternal elaborations, including their mental state language, was related to socio-communicative competence of children with VI. Conclusions & Implications: The findings offer insights into the potential contribution of maternal verbal scaffolding to mentalistic language and social-communicative competences of children with VI. abstract_id: PUBMED:33592150 Children With Cortical Visual Impairment and Complex Communication Needs: Identifying Gaps Between Needs and Current Practice. Purpose This scoping study sought to establish a baseline for how well the needs of children with cortical visual impairment (CVI) who use augmentative and alternative communication (AAC) are currently aligned with the services available to them. CVI is the most common cause of visual impairment in children today, and AAC methods rely heavily on vision. Yet, the prevalence of CVI in children who use AAC methods is not yet known, and there is virtually no research concerning use of AAC with children with CVI. Our overarching goals were to identify barriers and suggest priorities for improving outcomes for these children. Method Surveys were distributed anonymously online to professionals from multiple disciplines in different school-based settings and to parents of children with CVI who use AAC. Results School-based professionals identified many barriers, including a lack of knowledge and skills about CVI and about AAC, limited access to training and experts, and concerns about services being delivered in isolated silos with limited time allotted for interprofessional collaboration and planning. Parent reported that their children (Mage = 11 years) continued to rely predominantly on body-based forms of communication and lacked access to symbolic language, which, in AAC, is primarily visually based. Conclusions The barriers to services identified by school-based professionals indicate a need to develop and disseminate reliable information about CVI and AAC, both at a preservice and in-service level and, while doing so, to respect the diversity of stakeholders who need this information, including parents. Future research on what types of AAC approaches support the development of language and communication skills for children with CVI is essential. abstract_id: PUBMED:31236365 Visual problems: a review of prevalence studies on visual impairment in school-age children. Childhood visual impairment (VI) have a significant impact on the educational achievement, career choices and social life of affected individual, and in children, is mainly due to either preventable or treatable causes. Reliable data on the prevalence and causes of VI in children will guide the development of a systematic vision screening program for its early detection and successful treatment of possible causes. The purpose of this literature review is to summarize the available data on prevalence and causes of VI in school-age children from various regions globally. A discussion on the major findings highlighting the definition criteria, classifications and limitations for further studies is also presented. abstract_id: PUBMED:28756186 Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment. Communication with visual signals, like facial expression, is important in early social development, but the question if these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. abstract_id: PUBMED:22460107 Prevention of communication disorders--screening pre-school and school-age children for problems with hearing, vision and speech: European Consensus Statement. Background: Communication is an integral part of human behaviour. Communication disorders are associated mainly with impairment in hearing, vision, and/or speech, which influences the ability to receive, comprehend, produce, and express verbal, nonverbal, and graphic information. When unrecognized and unmanaged, these very often "invisible" conditions can have a significant detrimental effect on a child's development, affecting educational, social, and psychological progress. Material/methods: A panel of experts discussed the screening of pre-school and school-age children for problems with hearing, vision, and speech during the 10th Congress of the European Federation of Audiology Societies (EFAS), held in Warsaw, Poland, on 22 June, 2011. Results: The European Consensus Statement on Hearing, Vision, and Speech Screening in Pre-School and School-Age Children was the result of the scientific discussions. It was endorsed by experts in audiology, otolaryngology, phoniatry, ophthalmology, and speech language pathology from throughout Europe. Key elements of the consensus, as described herein, are: 1) defining the role of screening programmes in the identification and treatment of communication disorders; 2) identifying the target population; 3) promoting general awareness about the consequences of communication disorders; 4) recognizing the need for a quality control system in screening programmes; 5) encouraging cooperation among European countries to provide a high level of public health services for the prevention, identification, and treatment of communication disorders. Conclusions: The European Consensus Statement on Hearing, Vision, and Speech Screening in Pre-School and School-Age Children will encourage the appropriate authorities of the various countries involved to initiate screening for communication disorders in pre-school and school-age children. abstract_id: PUBMED:31807106 Prevalence And Associated Factors Of Visual Impairment Among School-Age Children In Bahir Dar City, Northwest Ethiopia. Introduction: Visual impairment (VI) is one of the major public health problems in the world. It is highly prevalent among children in sub-Saharan countries, including Ethiopia. Worldwide, the magnitude of VI among school-age children is 1%-10%. However, there was limited information regarding the prevalence and associated factors of VI among school-age children in the study area, which is essential to plan and implement appropriate interventions. Objective: The aim of this study was to determine the prevalence and associated factors of VI among school-age children livin g in Bahir Dar city, northwest Ethiopia. Methods: A community-based cross-sectional study was done on a sample of 632 school-age children selected by multistage sampling in Bahir Dar from April 30 to May 15, 2018. Data were collected through interviews and physical examinations. Face-to-face interviews were done with a pretested semistructured questionnaire. Physical examinations were done with visual acuity measures and assessment of ocular pathology by optometrists. Data were entered into Epi Info 7 and exported to and analyzed with SPSS 20. Binary logistic regression was fitted, and variables with P&lt;0.05 in the multivariate model were considered statistically significant. Results: A total of 601 study subjects were included in this study, giving a response rate of 95.2%. The median age was 13 (IQR 11-16) years, and 303 (50.3%) were male. Prevalence of VI was 52 (8.7%, 95% CI 6.2%-10.7%). In multivariate analysis, prematurity [AOR 2.8 (95% CI 1.19-6.83)], admission to a neonatal intensive-care unit (AOR 5.5, 95% CI 2.01-15.15), having a parent with VI (AOR 1.8, 95% CI 0.13-0.97), watching television from &lt;2 m (AOR 8.7, 95% CI 1.49-18.24), and mobile-phone exposure &gt;4 hours per day (AOR 1.6, 95% CI 1.32-4.45) were factors significantly associated with VI. Conclusion: The prevalence of VI among school-age children in Bahir Dar was significant. Premature birth, admission to a neonatal intensive-care unit, having a parent with VI, watching television from &lt;2 m, and mobile exposure &gt;4 hours per day were significantly associated. abstract_id: PUBMED:21166670 Developing a schedule to identify social communication difficulties and autism spectrum disorder in young children with visual impairment. Available observational tools used in the identification of social communication difficulties and diagnosis of autism spectrum disorder (ASD) rely partly on visual behaviours and therefore may not be valid in children with visual impairment. A pilot observational instrument, the Visual Impairment and Social Communication Schedule (VISS), was developed to aid in identifying social communication difficulties and ASD in young children with visual impairment affected by congenital disorders of the peripheral visual system (disorders of the globe, retina, and anterior optic nerve). The VISS was administered to 23 consecutive children (age range 1 y 9 mo-6 y 11 mo, mean 4 y 1 mo [SD 1.6]; 12 males, 11 females) with visual impairment (nine with severe and 14 with profound visual impairment). Item analysis was carried out by fit of the items to the Rasch model. Validity of the VISS was explored by comparison with the Childhood Autism Rating Scale (CARS) score, and the clinical ASD diagnosis (n=9). Correlation between the VISS and CARS total scores was highly significant (Spearman's rho=-0.89; p=0.01). Below threshold rating on the VISS (score of 35) showed good agreement with the clinical ASD diagnosis (sensitivity 89%, specificity 100%). This preliminary study shows the VISS to be a promising schedule to aid the identification of ASD in young children with visual impairment. abstract_id: PUBMED:34730745 What factors interfere with the performance of preschool children in the language subtest of Bayley-III? Purpose: to verify if the performance of pre-school children born prematurely and at term in the Bayley-III language subtest differs and to identify whether variables gestational age, birth weight, socioeconomic level, and maternal education are determinant in the outcome of language development. Methods: Descriptive cross-sectional case-control study in which 36 pre-school children born prematurely and 27 born at term were evaluated concerning language development by the Bayley III subtest. Preschoolers between 18 and 36 months of chronological age were considered; with no syndromes or genetic, sensory, neurological, auditory, or visual impairments; and had not previously undergone speech therapy. Mann-Whitney, Fisher's Exact, and binary logistic regression tests were used for statistical analysis. Results: the groups' performance did not differ either by the composite score (p = .701) or by the classification based on the percentile (p = .225). Gestational age, birth weight, and socioeconomic status did not influence the outcome of language development. However, maternal education was significant (p = .014) in the binary logistic regression model, suggesting that the mother having studied until basic education increases the chance of having a child underperforming in the Bayley III language subtest 6.31 times. Conclusion: there was no difference between the groups in the Bayley-III language subtest and only maternal education influenced the outcome of language development. Answer: Language and social communication are not fully intact in children with congenital visual impairment (VI) at school age. Studies have shown that despite having normal-range verbal intelligence and potentially superior structural language skills, children with congenital VI exhibit significantly poorer use of language for social purposes compared to their sighted peers. These pragmatic language weaknesses are part of a broader socio-communicative profile of difficulties that are consistent with the pattern found in sighted children with autism (PUBMED:20025621). Furthermore, research indicates that language and communication development is affected by VI, particularly in the early stages of development. This suggests that speech-language therapists should be included in early intervention for children with VI to address these difficulties (PUBMED:26809155). Another study found that mothers of children with VI tend to elaborate more and make more references to story characters' mental states than mothers of sighted children. The frequency of maternal elaborations, including mental state language, was related to the socio-communicative competence of children with VI (PUBMED:24165364). Additionally, children with VI have been reported to show an increased rate of autistic-related behaviors, pragmatic language deficits, as well as peer relationship and emotional problems (PUBMED:28756186). These findings collectively suggest that vision plays a crucial role in the typical development of social processing across modalities, and the absence or impairment of vision from birth can lead to ongoing socio-communicative and pragmatic language difficulties in school-aged children. Therefore, while children with congenital VI may have good intellectual abilities and advanced linguistic skills, their language and social communication are not entirely intact, and they face unique challenges that require targeted interventions and support.
Instruction: Can methicillin-resistant Staphylococcus aureus be found in an ambulance fleet? Abstracts: abstract_id: PUBMED:17454817 Can methicillin-resistant Staphylococcus aureus be found in an ambulance fleet? Objective: To perform an initial screening study of methicillin-resistant Staphylococcus aureus (MRSA) contamination in an ambulance fleet. Methods: This was a cross-sectional study of MRSA contamination in an ambulance fleet operating in the western United States in June 2006. Five specific areas within each of 21 ambulances (n = 105) were tested for MRSA contamination using dacron swabs moistened with a 0.85% sterile saline solution. These samples were then plated onto a screening media of mannitol salt agar containing 6.5% NaCl and 4 mcg/mL oxacillin. Results: Thirteen samples isolated from 10 of the 21 ambulances (47.6%) in the sample group tested positive for MRSA. Conclusions: The results of this preliminary study suggest that ambulances operating in the emergency medical services (EMS) system may have a significant degree of MRSA contamination and may represent an important reservoir in the transmission of potentially serious infections to patients. abstract_id: PUBMED:18791336 Staphylococcus aureus prevalence among hospitalized patients OBJECTIVE. To determine the prevalence of Staphylococcus aureus strains among hospitalized patients at the beginning of their hospitalization and during their treatment and the resistance of strains to antibiotics, and to evaluate epidemiologic characteristics of these strains. PATIENTS AND METHODS. Sixty-one patients treated at the Department of Cardiac, Thoracic and Vascular Surgery were examined. Identification of Staphylococcus aureus strains was performed using plasmacoagulase and DNase tests. The resistance of Staphylococcus aureus to antibiotics, beta-lactamase production, phagotypes, and phagogroups were determined. The isolated Staphylococcus aureus strains were tested for resistance to methicillin by performing disc diffusion method using commercial discs (Oxoid) (methicillin 5 microg per disk and oxacillin 1 microg per disk). RESULTS. A total of 297 Staphylococcus aureus strains were isolated. On the first day of hospitalization, the prevalence rate of Staphylococcus aureus strains among patients was 67.3%, and it statistically significantly increased to 91.8% on days 7-10 of hospitalization (P&lt;0.05). During hospitalization, patients were colonized with Staphylococcus aureus strains resistant to cephalothin (17.6% of patients, P&lt;0.05), cefazolin (14.6%, P&lt;0.05), tetracycline (15.0%, P&lt;0.05), gentamicin (37.7%, P&lt;0.001), doxycycline (30.7%, P&lt;0.001), and tobramycin (10.6%, P&gt;0.05). Three patients (4.9%) were colonized with methicillin-resistant Staphylococcus aureus strains, belonging to phage group II phage type 3A and phage group III phage types 83A and 77; 22.6-25.5% of Staphylococcus aureus strains were nontypable. During hospitalization, the prevalence rate of phage group II Staphylococcus aureus strains decreased from 39.6% to 5.7% (P&lt;0.05) and the prevalence rate of phage group III Staphylococcus aureus strains increased to 29.5% (P&lt;0.001). CONCLUSIONS. Although our understanding of Staphylococcus aureus is increasing, well-designed community-based studies with adequate risk factor analysis are required to elucidate further the epidemiology of Staphylococcus aureus and methicillin-resistant Staphylococcus aureus. Surveillance of methicillin-resistant Staphylococcus aureus provides relevant information on the extent of the methicillin-resistant Staphylococcus aureus epidemic, identifies priorities for infection control and the need for adjustments in antimicrobial drug policy, and guides intervention programs. abstract_id: PUBMED:28424995 The emergence of vancomycin-resistant Staphylococcus aureus in an intensive care unit in Kerman, Iran. Methicillin-resistant Staphylococcus aureus (MRSA) is a global threat to public health. This study is the first report of the emergence of vancomycin-resistant MRSA in Kerman, Iran. During a period of 15 months, a total of 205 clinical isolates of S. aureus were collected from three university hospitals affiliated with the Kerman University of Medical Science, Kerman, Iran. Screening of methicillin and vancomycin resistance was carried out by phenotypic methods. The resistance and virulence genes of vancomycin-resistant isolates were detected by polymerase chain reaction (PCR). Staphylococcal cassette chromosome mec (SCCmec) and spa typing were used for molecular typing of vancomycin-resistant isolates. Two S. aureus isolates were considered vancomycin-resistant by phenotypic and genotypic methods. Both isolates showed a minimum inhibitory concentration (MIC) ≥ 64 µg/ml and belonged to SCCmec III and spa type t030. Finding vancomycin-resistant S. aureus (VRSA) isolates represents a serious problem. More stringent infection control policies are recommended to prevent transmission of such life-threatening isolates in the hospital setting. abstract_id: PUBMED:30866024 Staphylococcus aureus Bacteraemia - an Interdisciplinary Challenge Background: Staphylococcus aureus is the second-most-common pathogen among bloodstream infections. Due to a high hospital mortality rate (15 - 40%), frequent complications and recurrences the clinical management of Staphylococcus aureus bacteremia (SAB) is distinct from bacteremia from other pathogens. Methods: A literature search was performed using PubMed. Guidelines and best practice expert recommendations were included. Results: The detection of S. aureus in blood culture should always be considered clinically relevant. The drug of choice for treatment of a bloodstream infection with methicillin-sensitive S. aureus is flucloxacillin (8 - 12 g i. v./d). In a bloodstream infection with methicillin-resistant S. aureus (MRSA) vancomycin or daptomycin (the latter not in pneumonic focus) are recommended. Follow-up blood cultures, source identification, including transoesophageal echocardiography in patients with risk profile, and rapid source control are important measures. The duration of therapy is at least 14 days with uncomplicated SAB, whereby the entire therapy should be carried out intravenously. In case of complicated SAB, a total therapy duration of at least 4 - 6 weeks is recommended. Adherence to this set of measures can reduce mortality by up to 50%. Conclusion: SAB is associated with high morbidity and mortality. Clinical management is complex. By adhering to diagnostic and therapeutic measures, the prognosis can be improved. abstract_id: PUBMED:34508427 Methicillin-resistant Staphylococcus aureus in Nepal. Staphylococcus aureus is both a frequent commensal and a leading cause of endocarditis, bacteremia, osteomyelitis and skin and soft tissue infections and device-related infections. We performed this minireview to summarize the prevalence of Staphylococcus aureus among clinical samples and estimate the proportion of methicillin-resistant Staphylococcus aureus. The prevalence of Staphylococcus aureus among clinical isolates in Nepal is 34.5%. On average, the proportion of multi-drug resistance in Staphylococcus aureus is 57.1%. Methicillin-resistant Staphylococcus aureus accounts for a total of 41.7%. Inducible clindamycin resistance was detected in about 35% of the isolates. A regular antimicrobial resistance surveillance mechanism is necessary to mitigate the development of resistance among organisms and further spread of superbugs like methicillin-resistance Staphylococcus aureus. abstract_id: PUBMED:9791595 Methicillin-resistant Staphylococcus aureus in Dakar This study was carried out at the Fann University Hospital Center in Dakar, Senegal between 1994 and 1996. The purpose was to assess the prevalence of methicillin-resistant Staphylococcus aureus and to propose alternative treatments. A total of 149 stains of Staphylococcus aureus were isolated from a variety of clinical specimens. Sensitivity to various antibiotics was tested by the disc diffusion technique (anti-biogram). Resistance to methicillin was evaluated by the oxacillin disc diffusion technique on Mueller-Hinton agar containing 5% NaCl. The prevalence of methicillin resistance was 66.4%. All strains of Staphylococcus aureus were sensitive to vancomycin. Other highly effective antibiotics included fucidic acid (94%), aminoglycosides (91%), cotrimoxazol (89.6%), and norfloxacin (84.5%). Most strains (70.6%) presented a wide profile against macrolides and related groups. The LSa phenotype (resistance to lincosamines and streptogramines) was the predominant resistant phenotype. The results of this study indicate that the prevalence of methicillin-resistant Staphylococcus aureus is increasing in Senegal. Since these strains respond well to chloramphenicol and cotrimoxazole, clinicians are advised to use these drugs as an alternative first-line treatment. abstract_id: PUBMED:21240183 Methicillin resistant Staphylococcus aureus - an overview. Staphylococcus aureus strains those are resistant to methicillin are referred to as Methicillin resistant Staphylococcus aureus. These express mecA gene to produce altered penicillin binding protein. At present Methicillin resistant Staphylococcus aureus has been increasing as a serious nosocomial and community pathogen having the property of multi drug resistant. Humans are the natural reservoir for Staphylococcus aureus and asymptomatic colonization is far more common than infection. Many hospitals of different country of the world including Bangladesh are struggling with increasing number of this versatile pathogen. Early and specific diagnosis is important to ensure a favourable outcome. In this paper we attempted to explore history, prevalence, transmission, risk factors, pathogenicity, laboratory diagnosis, prevention and control of Methicillin resistant Staphylococcus aureus as a critical review to provide some new upgrade regarding this super bug. abstract_id: PUBMED:35320234 Methicillin resistant Staphylococcus aureus: A brief review of virulence and resistance. Staphylococcus aureus is a common gram-positive human pathogen involved in both community-acquired and nosocomial infections ranging from localised superficial lesions to food poisoning and fatal systemic infections owing to its impressive array of virulence factors responsible for attaching, colonising, invading, and avoiding host immune system. The discovery of antibiotics effectively checked the once deadly infections. However, resistance started soon after their discovery and the first methicillin-resistant strain of staphylococcus aureus was reported in the early 1960s. The most important attribute of methicillin-resistant staphylococcus aureus is its acquisition of mecA gene coding for penicillin-binding protein-2a that blocks inhibitory action on peptidoglycan cross-linking. Methicillin-resistant staphylococcus aureus presents a serious global healthcare concern being responsible for prolonged hospital stays and increased mortality. The precise information of virulence factors and resistant traits of methicillin-resistant staphylococcus aureus and their interplay in a community is key to minimize the intermixing of resistant and susceptible pathogens in the community. abstract_id: PUBMED:19472781 Methicillin resistant Staphylococcus aureus: phenotypic and genotypic studies Background: Staphylococcus aureus is a human opportunistic pathogen. Its important pathogenicity and the increasingly rate of resistance to methicillin are the main causes of morbidity and mortality. Aim: In order to evaluate the epidemiologic situation of Methicillin Resistant S. aureus (MRSA) at Charles Nicolle hospital. Methods: A four years retrospective study (January 1999-December 2002) was conducted. Results: 65 non redundant MRSA isolates were collected. Identification was based on morphology, culture and biochemical characters. Antibiotic susceptibility was determined by disk diffusion method. Resistance to methicillin was confirmed by mec A PCR. Molecular typing was performed by Random Amplified Polymorphic DNA using ERIC-IR. Despite a perfect biotypic similarity between strains, ERIC-IR PCR revealed 7 genotypes. Conclusion: The combination of phenotypic methods and RAPD fingerprinting were easy to perform routinely for MRSA typing. However, phylogenetic relationship between strains needs more investigations. abstract_id: PUBMED:1341138 Staphylococcus aureus in the hospital milieu This study concern the distribution and sensibility towards some antibiotics of 318 Staphylococcus aureus strains isolated in bacteriological samples coming from la Rabta hospital units. The study of distribution shows that 75% of these strains are isolated in bacteriological samples coming from surgery and intensive care unit dermatology unit, and ear nose and throat unit. The bacteriological samples are matter, blood, and intensive care materials. The study of sensibility found a high frequency of methicillin resistant Staphylococcus aureus. These strains are also resistant towards macrolide and lincosamide. MLSB resistance is the predominant phenotypic aspect. It was also observed that all of Staphylococcus aureus gentamicin resistant strains were also resistant toward methicillin. At last is seems that pritinamicin, sulfamethoxazole-trimethoprim, and ofloxacin, are good alternatives for treatment of staphylococcus aureus infections diseases. Answer: Yes, methicillin-resistant Staphylococcus aureus (MRSA) can be found in an ambulance fleet. A cross-sectional study of MRSA contamination in an ambulance fleet operating in the western United States in June 2006 found that 13 samples isolated from 10 of the 21 ambulances (47.6%) in the sample group tested positive for MRSA (PUBMED:17454817). This suggests that ambulances may represent an important reservoir in the transmission of potentially serious infections to patients.
Instruction: Can we trust the new generation of infrared tympanic thermometers in clinical practice? Abstracts: abstract_id: PUBMED:22860884 Can we trust the new generation of infrared tympanic thermometers in clinical practice? Aims And Objectives: To explore the reliability and validity of the new generation of infrared tympanic thermometers, comparing with rectal and core temperature, and to decide their applicability to clinical practice. Background: Digital contact thermometers for rectal measurements and infrared tympanic thermometers are the most common way to measure patients' temperature. Previous studies of the infrared tympanic thermometers revealed misdiagnosis, and validity of early models was questioned. Design: Reliability and validity study. Methods: Temperature was measured with two infrared tympanic thermometers brands in both ears and compared with rectal temperature twice a day at the ward (n = 200). At the intensive care unit, patients (n = 42) underwent the same measurement procedures every fourth hour for 24 hours. In addition, core temperature was measured. Statistical analyses included descriptive and mixed models analyses. Results: Ward: Infrared tympanic thermometers measured the temperature lower than the rectal temperature. Descriptive statistics indicate higher variation in temperature measurements made in the ear. No statistically significant difference in temperature was found for left ear vs. right ear. Intensive care unit: The mean rectal temperature was higher than the mean core and ear temperature. Mixed models analyses of the temperatures at the ward and the intensive care unit showed the same overall trends, but with less discrepancy between the two infrared tympanic thermometers brands, compared with the rectal temperature. Only rectal temperature measurements differed significantly from the core temperature. Conclusion: Our study shows good reliability using the new generation of infrared tympanic thermometers. We found good agreement between core and infrared tympanic thermometers at the intensive care unit, but the measuring inaccuracy for infrared tympanic thermometers was greater than expected. Relevance To Clinical Practice: The new generation of infrared tympanic thermometers may be used in clinical practice, but it is important to do repeatedly measurements if there is discrepancy between the temperature and the observation of the patient condition. abstract_id: PUBMED:24127699 Accuracy of tympanic and forehead thermometers in private paediatric practice. Aim: To compare infrared tympanic and infrared contact forehead thermometer measurements with traditional rectal digital thermometers. Methods: A total of 254 children (137 girls) aged one to 24 months (median 7 months) consulting a private paediatric practice because of fever were prospectively recruited. Body temperature was measured using the three different devices. Results: The median and interquartile range for rectal, tympanic and forehead thermometers were 37.6 (37.1-38.4)°C, 37.5 (37.0-38.1)°C and 37.5 (37.1-37.9)°C, respectively (p &lt; 0.01). The limits of agreement in the Bland-Altman plots were -0.73 to +1.04°C for the tympanic thermometer and -1.18 to +1.64°C for the forehead thermometer. The specificity of both the tympanic and forehead thermometers for detecting fever above 38°C was good, but sensitivity was low. Forehead measurements were susceptible to the use of a radiant warmer. Conclusion: Both the tympanic and forehead devices recorded lower temperatures than the rectal thermometers. The limits of agreement were particularly wide for the forehead thermometer and considerable for the tympanic thermometer. In the absence of valid alternatives, because of the ease to use and little degree of discomfort, tympanic thermometers can still be used with some reservations. Forehead thermometers should not be used in paediatric practice. abstract_id: PUBMED:10110257 Laboratory and hospital testing of new infrared tympanic thermometers. A patented approach to infrared thermometry based on the use of a standard pyrosensor has resulted in the development of two new infrared tympanic thermometers, one for professional use, the other for home use. Both were tested to evaluate accuracy in the laboratory and to evaluate equivalence to standards, correlation to standards, and precision in human subjects. Accuracy was found to be well within ASTM standards on both models. Mean ear temperatures were 0.2 degrees C below oral and 0.7 degrees C below bladder temperature. Correlations between ear and oral and ear and bladder temperatures were r = .77 to .84. Repeatability in the same ear was very high at r = .95 (left) and .97 (right). Reproducibility between left and right ear ranged from r = .89 to .92. abstract_id: PUBMED:22319287 Evaluation of performance and uncertainty of infrared tympanic thermometers. Infrared tympanic thermometers (ITTs) are easy to use and have a quick response time. They are widely used for temperature measurement of the human body. The accuracy and uncertainty of measurement is the importance performance indicator for these meters. The performance of two infrared tympanic thermometers, Braun THT-3020 and OMRON MC-510, were evaluated in this study. The cell of a temperature calibrator was modified to serve as the standard temperature of the blackbody. The errors of measurement for the two meters were reduced by the calibration equation. The predictive values could meet the requirements of the ASTM standard. The sources of uncertainty include the standard deviations of replication at fixed temperature or the predicted values of calibration equation, reference standard values and resolution. The uncertainty analysis shows that the uncertainty of calibration equation is the main source for combined uncertainty. Ambient temperature did not have the significant effects on the measured performance. The calibration equations could improve the accuracy of ITTs. However, these equations did not improve the uncertainty of ITTs. abstract_id: PUBMED:20736400 Accuracy of tympanic and infrared skin thermometers in children. Background: Rectal measurement is considered a gold standard in many healthcare systems for body temperature measurement in children. Although this method has several disadvantages, an ideal alternative thermometer has not yet been introduced. However tympanic and infrared skin thermometers are potential alternatives. Methods: A prospective cohort study was performed including 100 children between 0 and 18 years of age admitted to the general paediatric ward of Spaarne Hospital in The Netherlands between January and March 2009. The objectives of this study are to evaluate the accuracy of tympanic and two types of infrared skin thermometers (Beurer and Thermofocus) compared to rectal measurement and furthermore to evaluate the influence of different variables on temperature measurements. Results: Compared to rectal measurement (37.56°C), the mean temperatures of the tympanic (37.29°C), Beurer (36.79°C) and Thermofocus (37.30°C) thermometers differed significantly (p&lt;0.001). Mean and SD of differences between rectal temperature and temperature measured with these alternative devices varied significantly (p&lt;0.001). Sensitivity, specificity, positive and negative predictive values for detecting rectal fever measured with the tympanic, Beurer and Thermofocus thermometers are unacceptable, especially for the Beurer thermometer. This difference in temperature between rectal and the alternative thermometers remained after stratification on gender, age, skin colour and otoscopic abnormalities. Conclusions: In this study the authors demonstrated that the tympanic, Beurer and Thermofocus thermometers cannot reliably predict rectal temperature. Therefore the authors do not advise replacement of rectal measurement as the gold standard for detecting fever in children by one of these devices. When rectal measurement is not used, the infrared skin thermometers appear to perform less well than tympanic measurements. abstract_id: PUBMED:9290138 An assessment of infrared tympanic thermometers for body temperature measurement. This article provides an experimental assessment of three commercially available clinical thermometers, using different thermal infrared sensors. This kind of thermometer measures body temperature by detecting infrared radiation from the tympanic membrane. These thermometers are growing in popularity thanks to their simplicity of use, rapid response and minimal distress to the patient. The purpose of the laboratory tests presented here was to assess the effect of varying ambient temperature and varying simulated patient temperature on the performance of the three infrared tympanic thermometers. abstract_id: PUBMED:9633875 The use of infrared ear thermometers in pediatric and family practice offices. Objective: To describe the use of infrared (IR) ear thermometers in pediatric and family practice offices. Methods: The authors mailed a questionnaire to 350 randomly selected members of the American Academy of Pediatrics and to 355 randomly selected members of the American Academy of Family Physicians. Results: Of respondents in clinical practice, 78% had used IR ear thermometers at least once in the past; 65% of pediatricians and 64% of family practice physicians were current users. Seventeen percent of pediatric offices and 18% of family practice offices that had used IR ear thermometers had discontinued use, most citing inaccuracy or lack of staff trust in the device. Pediatric offices were less likely than family practice offices to use the device in well neonates and sick neonates and more likely to use it in sick children. Advantages cited included rapid readings, ease of use, and accuracy. Seventy-five percent of current users reported at least one problem, including low readings and lack of staff trust. Conclusions: IR ear thermometers are widely used in pediatric and family practice offices. Some offices limit use of these devices to older children and adults, and most of the offices surveyed report using other devices as a check on the accuracy of IR thermometers. Statements by professional organizations that provide user guidelines and establish appropriate age cut-offs would be helpful. abstract_id: PUBMED:15373157 Reviewing the effectiveness of tympanic thermometers. Tympanic thermometers have become a popular alternative to glass mercury thermometers for recording patients' temperatures. This article reviews studies that have been performed to audit their use in clinical practice. It examines the accuracy of tympanic thermometers and discusses ways to improve the use of this instrument and the implications for nursing practice. abstract_id: PUBMED:23663659 Accuracy of tympanic temperature measurement using an infrared tympanic membrane thermometer. Background: During investigation and diagnosis of patients, accurate temperature measurement is of great importance. The advantages of tympanic membrane thermometry are speed (temperature reading available within seconds), safety, and ease of use. The aim of this study was to compare the accuracy of infrared tympanic thermometers in comparison to mercury thermometers in measurement of body temperature. Methods: Axillary and tympanic temperature was measured simultaneously in consecutive patients using mercury glass and infrared tympanic thermometers at Omdurman Hospital, Sudan during October 2012. Results: In total, temperature was measured in 174 patients, 95 of whom (54.6%) were male. The mean (SD) patient age and weight was 33.18 (25.07) years and 52.13 (69.85) kg. There was no significant difference in mean (SD) temperature measurement between mercury and infrared tympanic membrane thermometers, 37.29°C (0.91) versus 37.38°C (0.95), P = 0.373, respectively. There was a significant positive correlation between axillary and tympanic body temperature measurements (r = 0.697, P &lt; 0.001). The mean difference between the two readings (with limits of agreements) was - 0.093 (-0.20; 0.02) °C. Conclusion: In this study, tympanic membrane thermometry is as reliable and accurate as axillary mercury glass thermometry. Thus, tympanic thermometry can be used in clinical practice, especially in the emergency setting, where ease of use and speed of obtaining the temperature reading are important. abstract_id: PUBMED:8955971 A comparison of four infrared tympanic thermometers with tympanic membrane temperatures measured by thermocouples. Purpose: To compare measurements made with four infrared tympanic thermometers (Genius, Thermopit, Quickthermo, and Thermoscan) with those recorded from thermocouples positioned in the contralateral ear. Methods: Four tympanic thermometers were evaluated in 50 healthy volunteers (12 female and 38 male). Temperatures were measured, in random order, at the right tympanic membrane four times and the highest temperature was considered to be the true value measured by each thermometer. The control temperature was measured at the left tympanic membrane using Mon-a-Therm thermocouples. Results: The tympanic membrane temperature measured by Genius correlated best with the Mon-a-therm measurement (TM) (r = 0.74). The tympanic membrane temperatures measured by Thermopit, Quickthermo, and Thermoscan correlated moderately with TM (r = 0.56, 0.63, and 0.58, respectively). Mean differences between TM and each temperature (TG, TTP, TQ, and TTS) were -0.3, 0.73, 0.42, and -0.3 degrees C, respectively. Likewise standard deviations were 0.33, 0.37, 0.35, and 0.35. Conclusion: We conclude that all but the Thermopit (TTP) are similarly useful for the management of patients during anaesthesia. Answer: Based on the provided abstracts, the new generation of infrared tympanic thermometers shows good reliability and may be used in clinical practice, but with some considerations. A study found good agreement between core and infrared tympanic thermometers at the intensive care unit, although the measuring inaccuracy for infrared tympanic thermometers was greater than expected (PUBMED:22860884). Another study indicated that tympanic thermometers can still be used with some reservations due to their ease of use and little discomfort, despite recording lower temperatures than rectal thermometers (PUBMED:24127699). Laboratory and hospital testing of new infrared tympanic thermometers showed that accuracy was within ASTM standards, with high repeatability and reproducibility between left and right ear measurements (PUBMED:10110257). However, another study concluded that tympanic, Beurer, and Thermofocus thermometers cannot reliably predict rectal temperature, and therefore should not replace rectal measurement as the gold standard for detecting fever in children (PUBMED:20736400). The performance of two infrared tympanic thermometers was evaluated, and while calibration equations could improve accuracy, they did not improve the uncertainty of the measurements (PUBMED:22319287). Infrared tympanic thermometers are widely used in pediatric and family practice offices, but some offices limit their use to older children and adults due to concerns about accuracy and staff trust (PUBMED:9633875). A review of studies auditing the use of tympanic thermometers in clinical practice examined their accuracy and discussed ways to improve their use (PUBMED:15373157). Another study found that tympanic membrane thermometry is as reliable and accurate as axillary mercury glass thermometry, suggesting it can be used in clinical practice, especially in emergency settings (PUBMED:23663659). In conclusion, while the new generation of infrared tympanic thermometers is generally considered reliable and may be used in clinical practice, it is important to be aware of their limitations and to perform repeated measurements if there is a discrepancy between the temperature and the patient's condition (PUBMED:22860884). Additionally, other methods such as rectal temperature measurement may still be preferred in certain populations, like children, to ensure accuracy in detecting fever (PUBMED:20736400).
Instruction: Atretic parietal cephaloceles revisited: an enlarging clinical and imaging spectrum? Abstracts: abstract_id: PUBMED:25610208 Recurrent atretic parietal cephalocele in adult and radiological findings. Atretic parietal cephaloceles are an extremely rare and occult form of congenital herniations that involve the meninges, remnants of glial cells or central nervous system structures. They are detected as subscalp lesions that are covered by skin. In the literature, atretic parietal cephaloceles have been reported in children in 59 cases and in two cases in adults. We present here a case of a recurrent atretic parietal cephalocele that we recently observed in an adult. This case indicates that an atretic parietal cephalocele should always be considered in the differential diagnosis of subscalp lesions. Radiologic diagnosis is a lifesaving measure that can be undertaken before an operation. Additionally, regular follow-up magnetic resonance imaging is recommended for each case with remnant lesions due to the slow growth of these masses. abstract_id: PUBMED:34345467 The spectrum of venous anomalies associated with atretic parietal cephaloceles: A literature review. Background: Parietal atretic cephalocele (PAC) is a small, subscalp lesion with underlying extracranial meningeal, neural, and glial tissues. In this paper, we analyze the related literature on the continuum of PAC-associated venous anomalies and report an exemplary case. Methods: The PubMed Medline database was searched using the following search algorithm: (Atretic encephalocele) OR (Rudimentary meningocele,) OR (Atypical meningocele) OR (Meningocele manqué) OR (Meningeal heterotopia). Only papers detailing the venous anomalies associated with PACs have been included. Results: A total of 30 papers in our search documented PAC-associated venous abnormalities. The overall number of cases reported was 68 (including our exemplary case). The most frequently identified associated venous anomaly was the presence of a "fenestrated superior sagittal sinus" recorded in 48.5% of cases (n = 33), followed closely by "persistent falcine sinus" in 47% (n = 32) and vertical embryonic positioning of the straight sinus (SS) in 44% (n = 30). The complete absence of a SS was reported in 39.7% (n = 27) and various anomalies of the Galenic system were reported in 26.8% of cases (n = 12). Conclusion: Although benign in nature, PACs are often a marker for the presence of complex and variable cerebral venous malformations, requiring extensive preoperative imaging workup for both the superficial and deep venous systems to obtain an accurate understanding of the anatomy of the venous system and guide surgical planning. abstract_id: PUBMED:9576676 Atretic parietal cephaloceles revisited: an enlarging clinical and imaging spectrum? Purpose: We describe imaging features that are clues to the diagnosis of atretic cephaloceles and discuss clinical findings and a possible mechanism by which these lesions develop. Methods: Eight children (five girls and three boys) ranging in age from 1 day to 3 years 4 months with midline subscalp lesions underwent radiologic examination with CT or MR imaging. In all cases, the lesions were surgically excised and subjected to pathologic examination. Imaging studies and medical records were reviewed retrospectively. Results: Six of eight children had vertical embryonic positioning of the straight sinus with a prominent superior cerebellar cistern. A "spinning-top" configuration of the tentorial incisura, a "cigar-shaped" CSF tract within the interhemispheric fissure, fenestration of the superior sagittal sinus, and "peaking" of the tentorium were associated findings helpful in making this diagnosis. Two of the eight children had findings indistinguishable from focal dermoid, six were developmentally normal, one had mild motor delay, and one died at the age of 3 years. Pathologic examination revealed glial, meningeal (arachnoid), fibrous, and dermal elements. Conclusion: Characteristic findings on MR images and CT scans provide clues to the diagnosis of atretic cephalocele. However, even in the presence of abnormal imaging findings, these children may be developmentally normal. abstract_id: PUBMED:29235085 Recurrence of atretic parietal cephalocele in adult: a case report and review of literature. Common presentation of atretic parietal cephalocele is mostly seen in infants and young children. It is a palpable midline parietal soft tissue mass which is thought to represent involuted true cephalocele (meningocele or encephalocele) connected to dura mater via a fibrous stalk. Atretic parietal cephalocele is associated with increased incidence of intracranial anomalies.. Parietal cephaloceles comprise about 1% of all cerebrospinal congenital malformations and 10% of cephaloceles. We report here the case of an atretic parietal cephalocele with no associated brain malformations in adult. abstract_id: PUBMED:25963151 Parietal atretic cephalocele: Associated cerebral anomalies identified by CT and MR imaging. We describe a case of atretic cephalocele (AC) characterized by the presence of various cerebral anomalies of different midline structures. In our patient the presence of a parietal AC was associated with an embryonic position of the straight sinus, fenestration of the superior sagittal sinus, an abnormal insertion of the cerebellar tentorium with prominence of the superior cerebellar cistern and a septum pellucidum cyst. These findings, associated with AC, could lead to a worse prognosis with regard to neurodevelopmental milestones. This suggests that even if AC is a benign lesion, a complete evaluation of the brain structures should always be performed in these young patients. abstract_id: PUBMED:3418387 Parietal cephalocele: clinical importance of its atretic form and associated malformations. In this study of atretic cephaloceles, the authors have considered the pedunculated or sessile type of cephalocele and also small nonpedunculated scalp defects developing in the vertex midline. Parietal cephaloceles were found in 15 infants (10 boys and five girls), and accounted for 37.5% of all cephaloceles. They consisted of four encephaloceles, six meningoceles, and five atretic cephaloceles. The clinical and morphological characteristics of parietal cephaloceles were investigated and compared with those arising at other locations. Parietal cephaloceles carried a much less favorable prognosis than those in the occipital region, regardless of the type of cephalocele; they were associated with cerebral malformations more frequently and were more severe than occipital cephaloceles. Grave congenital anomalies were found in 87% of patients with parietal cephalocele, and only two patients (neither of whom had any other malformation) attained normal development. Brain malformations were closely related to the site from which the cephalocele issued, and dorsal cyst malformation was found in eight patients with parietal cephalocele. Two types of atretic cephaloceles were found, each in a different location. The first type was an alopecic lesion occurring in the parietal midline; all five patients with this type had dorsal cyst malformations and none developed normally. The second type was a nodular lesion developing at the occipital midline, not associated with cerebral anomalies; all five patients with this type showed normal development. The pathogenesis of atretic cephaloceles and their associated intracranial malformations are discussed. abstract_id: PUBMED:12391465 Adult atretic parietal cephalocele. Atretic parietal cephalocele is a congenital anomaly of childhood with poor prognostic value. The authors report an adult with associated brain malformations (interhemispheric cyst, partial corpus callosum agenesis) - but, interestingly, without any clinical symptoms related to the entity. abstract_id: PUBMED:27461273 Atretic cephaloceles: a comprehensive analysis of historical cohort. Purpose: This study aims to analyze the historical cohort of patients with atretic cephaloceles (ACs) comprehensively, since the literature regarding ACs is limited by small case series or case reports and to report a new venous anomaly concomitant with a parietal AC in our exemplary case. Methods: After a PubMed/Medline search with "atretic cephalocele" in April 2016, we retrieved 42 articles, 41 of which were relevant with the topic. Full texts of accessible articles and abstracts of inaccessible articles were read comprehensively. References of retrieved articles were manually searched. Additional 29 articles reporting the cases of atretic, rudimentary, occult, or sequestered cephaloceles were accessed, since these terms have been used interchangeably in the literature. Results: Median age and mean age of the patients were 1 and 6.69 years, respectively. Female/male ratio was 1.03:1. Atretic cephaloceles were located in different regions, in decreasing order of frequency: parietal (55 %; n = 105), occipital (37 %; n = 71), parieto-occipital (n = 8), frontal (n = 4), asterion (n = 1), and sincipital (n = 1). Concomitant central nervous system (CNS) anomalies were present in 73.3 % of patients with parietal atretic cephaloceles and in 32.4 % of patients with occipital atretic cephaloceles (p &lt; 0.0001). We realized that all patients with worse outcomes had concomitant CNS anomalies (100 %), whereas only 48 % of those with good clinical outcomes had concomitant anomalies (p = 0.0248). Conclusions: Atretic cephaloceles present as nodular extrusions on the parietal and occipital regions. Concomitant CNS anomalies, which impair the clinical outcomes, are more common in patients with parietal ACs. Observation-only approach is enough in patients with asymptomatic ACs. However, life-long follow-ups are required, because of recurrence after the surgery. abstract_id: PUBMED:37181183 Atretic cephaloceles with different imaging phenotypes - Case series with review of literature. Atretic cephaloceles (ACs) are congenital skull defects with herniation of rudimentary intracranial structures through the defect and associated with persistent falcine sinus or embryonic positioning of straight sinus. We describe five cases of ACs, out of which only one had embryonic straight sinus. Three cases had other intracranial malformations such as hypoplasia of corpus callosum, dysplastic tectum in one child and parieto-occipital polymicrogyria with falcotentorial dehiscence in the other, and frontal horn deformity and cortical dysplasia in the third. The prognosis of AC depends on the coexistent intracranial abnormalities and this highlights the role of magnetic resonance imaging in diagnosing the other associated anomalies for prediction of prognosis and planning of necessary surgical management. abstract_id: PUBMED:21741191 Atretic parietal cephalocele associated with sinus pericranii: embryological consideration. We report a case of atretic parietal cephalocele with a persistent parietal falcine sinus and partial absence of the straight sinus. The direct puncture angiographic study demonstrated that there was a major venous channel through the parietal skull defect. From the embryological point of view, this association may provide us with possible pathoetiologic evidence of congenital sinus pericranii is one of the alternative venous drainage pathways to compensate venous outflow in the cases of intracranial developmental anomalies. Answer: Atretic parietal cephaloceles (APCs) are a rare form of congenital herniation involving meninges, glial cells, or central nervous system structures, typically presenting as subscalp lesions covered by skin. Although primarily reported in children, there have been cases in adults, including recurrent APCs (PUBMED:25610208; PUBMED:29235085). APCs are often associated with venous anomalies, such as fenestrated superior sagittal sinus, persistent falcine sinus, and vertical embryonic positioning of the straight sinus, among others (PUBMED:34345467). These venous malformations necessitate thorough preoperative imaging to guide surgical planning. Imaging features such as a "spinning-top" configuration of the tentorial incisura, a "cigar-shaped" CSF tract, and fenestration of the superior sagittal sinus can help diagnose APCs. Despite abnormal imaging findings, some children with APCs may be developmentally normal (PUBMED:9576676). However, APCs are associated with a higher incidence of intracranial anomalies, and their presence could indicate a worse prognosis for neurodevelopmental milestones (PUBMED:25963151; PUBMED:3418387). The literature suggests that APCs can be associated with various cerebral anomalies, such as interhemispheric cysts and partial corpus callosum agenesis, even in asymptomatic adults (PUBMED:12391465). A comprehensive analysis of historical cohorts of patients with APCs indicates that concomitant central nervous system anomalies are more common in patients with parietal APCs compared to occipital ones, and these anomalies are linked to worse clinical outcomes (PUBMED:27461273). Different imaging phenotypes of APCs have been reported, with some cases exhibiting additional intracranial malformations like corpus callosum hypoplasia, dysplastic tectum, polymicrogyria, and cortical dysplasia. The prognosis of APCs depends on these coexistent intracranial abnormalities, highlighting the importance of magnetic resonance imaging in assessing prognosis and planning surgical management (PUBMED:37181183). In summary, APCs present an enlarging clinical and imaging spectrum, with a range of associated venous and cerebral anomalies that can impact prognosis and treatment strategies.
Instruction: Improvement in survival of metastatic colorectal cancer: are the benefits of clinical trials reproduced in population-based studies? Abstracts: abstract_id: PUBMED:23642328 Improvement in survival of metastatic colorectal cancer: are the benefits of clinical trials reproduced in population-based studies? Aim Of The Study: To describe trends in survival of non-resectable metastatic colorectal cancer (MCRC) over a 34-year period in a French population-based registry taking into account major advances in medical therapy. Patients And Methods: 3804 patients with non-resectable metastatic colorectal cancer diagnosed between 1976 and 2009 were included. Three periods (1976-96, 1997-2004 and 2005-09) were considered. Results: The proportion of patients receiving chemotherapy dramatically increased from 19% to 57% between the first two periods, then increased steadily thereafter reaching 59% during the last period (p&lt;0.001). Median relative survival increased from 5.9 months during the 1976-96 period to 10.2 months during the 1997-2004 period but, despite the availability of targeted therapies, remained at 9.5 months during the 2005-09 period. During the last study period, less than 10% of elderly patients received targeted therapies compared to more than 40% for younger patients. Their median relative survival was 5.0 months compared to 15.6 months in younger patients. Conclusion: There was an improvement in survival in relation with the increased use of more effective medical treatment. However, at a population-based level, patients are not all treated equally and most of them, especially the elderly, do not benefit from the most up-to-date treatment options. abstract_id: PUBMED:28268936 Assessing the population representativeness of colorectal cancer treatment clinical trials. The generalizability (external validity) of clinical trials has long been a concern for both clinical research community as well as the general public. Results of trials that do not represent the target population may not be applicable to the broader patient population. In this study, we used a previously published metric Generalizability Index for Study Traits (GIST) to assess the population representativeness of colorectal cancer (CRC) treatment trials. Our analysis showed that the quantitative eligibility criteria of CRC trials are in general not restrictive. However, the qualitative eligibility criteria in these trials are with moderate or strict restrictions, which may impact their population representativeness of the real-world patient population. abstract_id: PUBMED:21365628 Prognostic web-based models for stage II and III colon cancer: A population and clinical trials-based validation of numeracy and adjuvant! online. Background: Numeracy and Adjuvant! are 2 web-based calculators that are used widely to estimate the prognosis and potential benefit of adjuvant 5-fluorouracil (5-FU)-based therapy for patients with stage II and III colon cancer. In this study, the authors compared the predicted survival estimates from these models with the actual observed estimates in independent datasets that were derived from a population cohort and from clinical trials. Methods: The population cohort was derived from the British Columbia Colorectal Cancer Outcomes Unit database, which identified referred patients with stage II and III colon cancer from 1995 to 1996 and from 1999 to 2003. Patients who were enrolled in North Central Cancer Trials Group (NCCTG) trials NCCTG 94651 and NCCTG 914653 were included in the trials dataset. Patient and disease data were used to predict 5-year relapse-free and overall survival using both tools. Results: In the population-based dataset (N = 2033), Adjuvant! offered more reliable predictions of prognosis for patients who underwent surgery alone, but it had reliability similar to that of Numeracy for predicting the prognosis for patients who received adjuvant 5-FU. Both models tended to overestimate survival for patients with stage II disease who received 5-FU. In the trials dataset of patients who underwent and received 5-FU (N = 1729), Numeracy and Adjuvant! demonstrated similar performance and improved correctness. Conclusions: This independent validation analysis demonstrated that both Numeracy and Adjuvant! had similar predictive performance and acceptable reliability for patients with stage III disease. Survival outcomes of patients with stage II colon cancer who received adjuvant 5-FU were slightly lower than estimated by either model. abstract_id: PUBMED:26831330 Patient representatives' views on patient information in clinical cancer trials. Background: Patient enrolment into clinical trials is based on oral information and informed consent, which includes an information sheet and a consent certificate. The written information should be complete, but at the same time risks being so complex that it may be questioned if a fully informed consent is possible to provide. We explored patient representatives' views and perceptions on the written trial information used in clinical cancer trials. Methods: Written patient information leaflets used in four clinical trials for colorectal cancer were used for the study. The trials included phase I-III trials, randomized and non-randomized trials that evaluated chemotherapy/targeted therapy in the neoadjuvant, adjuvant and palliative settings. Data were collected through focus groups and were analysed using inductive content analysis. Results: Two major themes emerged: emotional responses and cognitive responses. Subthemes related to the former included individual preferences and perceptions of effect, while subthemes related to the latter were comprehensibility and layout. Based on these observations the patient representatives provided suggestions for improvement, which largely included development of future simplified and more attractive informed consent forms. Conclusions: The emotional and cognitive responses to written patient information reported by patient representatives provides a basis for revised formats in future trials and add to the body of information that support use of plain language, structured text and illustrations to improve the informed consent process and thereby patient enrolment into clinical trials. abstract_id: PUBMED:15998615 An Internet-based cancer clinical trials matching resource. Background: Many patients are now accessing the Internet to obtain cancer clinical trials information. However, services offering clinical trials recruitment information have not been well defined. Objectives: This study describes one of the first Web-based cancer clinical trials matching resources and the demographics of users who were successfully matched. Methods: OncoLink is the Internet-based educational resource managed by the University of Pennsylvania Cancer Center (UPCC) and serves between 1 and 2 million pages per month to over 385000 unique IP addresses. OncoLink launched one of the first clinical trials matching resources on the Internet that allowed patients to enter demographic data through a secure connection and be matched to clinical trials. For patients with matches to potential trials, appointments were facilitated with the principal investigators. Results: While we did not keep track of patients who could not be matched, 627 patients who submitted online applications between January 2002 and April 2003 were successfully matched for potential enrollment in clinical trials. The mean age of the patient population was 56 years (range 18-88 years). Males represented 60% of the patient population, and over 90% of users were Caucasian. Most of the applications were from patients with colorectal cancer (13%), lung cancer (14%), melanoma (10%), and non-Hodgkin's lymphoma (9%). Conclusions: This report shows that a significant number of patients are willing to use the Internet for enrolling in clinical trials. Care must be taken to reach patients from a variety of socioeconomic and racial backgrounds. This Internet resource helps to facilitate a consultation with a cancer patient who is prescreened and motivated to enroll in clinical trials. abstract_id: PUBMED:32134686 Developing an FHIR-Based Computational Pipeline for Automatic Population of Case Report Forms for Colorectal Cancer Clinical Trials Using Electronic Health Records. Purpose: The Fast Healthcare Interoperability Resources (FHIR) is emerging as a next-generation standards framework developed by HL7 for exchanging electronic health care data. The modeling capability of FHIR in standardizing cancer data has been gaining increasing attention by the cancer research informatics community. However, few studies have been conducted to examine the capability of FHIR in electronic data capture (EDC) applications for effective cancer clinical trials. The objective of this study was to design, develop, and evaluate an FHIR-based method that enables the automation of the case report forms (CRFs) population for cancer clinical trials using real-world electronic health records (EHRs). Materials And Methods: We developed an FHIR-based computational pipeline of EDC with a case study for modeling colorectal cancer trials. We first leveraged an existing FHIR-based cancer profile to represent EHR data of patients with colorectal cancer, and then we used the FHIR Questionnaire and QuestionnaireResponse resources to represent the CRFs and their data population. To test the accuracy of and overall quality of the computational pipeline, we used synoptic reports of 287 Mayo Clinic patients with colorectal cancer from 2013 to 2019 with standard measures of precision, recall, and F1 score. Results: Using the computational pipeline, a total of 1,037 synoptic reports were successfully converted as the instances of the FHIR-based cancer profile. The average accuracy for converting all data elements (excluding tumor perforation) of the cancer profile was 0.99, using 200 randomly selected records. The average F1 score for populating nine questions of the CRFs in a real-world colorectal cancer trial was 0.95, using 100 randomly selected records. Conclusion: We demonstrated that it is feasible to populate CRFs with EHR data in an automated manner with satisfactory performance. The outcome of the study provides helpful insight into future directions in implementing FHIR-based EDC applications for modern cancer clinical trials. abstract_id: PUBMED:27096270 The organization of clinical trials for oncology at IRCCS Istituto Nazionale Tumori "Fondazione G. Pascale" Napoli and the impact of the OECI accreditation process. The Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Istituto Nazionale Tumori "Fondazione G. Pascale" (INT-Pascale) is the largest Clinical Care and Research Cancer Center in Southern Italy. The mission is prevention, diagnosis, and care of cancer and innovative research in oncology. In 2013, INT-Pascale joined the Organisation of European Cancer Institutes (OECI) accreditation and classification project along with other Italian IRCCS cancer centers. One of the major OECI requirements that a cancer center must fulfill in order to achieve and maintain OECI certification is a strong emphasis in translational and clinical research: increasing the number of patients enrolled in clinical trials, establishing easily accessible databases for operators, and informing all possible stakeholders, including patients. A characterizing theme of INT-Pascale is a strong commitment to clinical experimental studies. In the 2007-2014 period, 440 clinical trials were activated at INT-Pascale; in this period, the number of clinical trials and observational studies has had an increment achieving in 2014, respectively, the share of 60 clinical trials and 35 observational studies activated. Optimization of clinical trials management and dissemination of the clinical research culture at INT-Pascale are main objectives to be achieved through several actions and procedures being implemented as a component of the OECI improvement plan. Participation in the OECI program has represented an important challenge to improve quality and processes related to promoting, prioritizing, and monitoring clinical trials at INT-Pascale. abstract_id: PUBMED:24772337 Should we move beyond VEGF inhibition in metastatic colorectal cancer? Lessons from early phase clinical trials. Data from recent clinical trials utilizing bevacizumab or other anti-VEGF agents in patients with metastatic colorectal cancer (mCRC) show improvements in progression-free survival (PFS) but modest, if any, improvements in overall survival (OS). Despite modest improvements, use of bevacizumab beyond first and second progression is routinely done in clinical practice. Recently, the CORRECT trial using regorafenib, a multi-kinase inhibitor with VEGF inhibitory properties, reported modest improvements in PFS and OS when compared to placebo, leading to FDA approval in the third-line setting. Prior to regorafenib, heavily pre-treated patients were often enrolled onto early phase clinical trials with many of these studies reporting efficacy amongst patients with mCRC; however, a collective efficacy analysis of mCRC patients enrolled into early phase clinical trials stratified by class of agents and their mechanism of action has not been done. To assess this, we performed an analysis of efficacy and stratified these findings based on VEGF inhibition versus non-VEGF inhibition in mCRC patients enrolled onto phase I trials at our institution from 3/2004-9/2012. Similar to many reported clinical studies, our data showed that VEGF inhibitors have a statistically significant improvement in PFS when compared to non-VEGF targeting agents; however, no differences in OS were observed between these two different classes of agents. We were not able to identify predictive biomarkers that correlate with efficacy of VEGF inhibitors. This should be further explored in prospective studies in order to identify active agents in this heavily pre-treated population that improve efficacy while minimizing cost and toxicity. abstract_id: PUBMED:12454117 Early detection of toxicity and adjustment of ongoing clinical trials: the history and performance of the North Central Cancer Treatment Group's real-time toxicity monitoring program. Prospective clinical trials are the gold standard for evidence-based methodology used to support changes in the practice of medicine. Clinical researchers, regulatory agencies, payers, and the public embrace the conduct of phase I, II, and III clinical trials as integral to improving patient care. The National Cancer Institute (NCI) funds a number of cooperative oncology groups to conduct such clinical trials in the United States. In order to protect enrolling patients, the NCI requires expedited reporting to allow rapid identification of severe side effects on NCI-sponsored clinical trials. However, chemotherapy drugs frequently cause predictable side effects, the rapid reporting of which would potentially overwhelm the system. This article describes the development and documents the performance of a real-time toxicity reporting system implemented by the North Central Cancer Treatment Group. The goal of this system is to supplement the currently required NCI adverse event monitoring procedures and to permit study teams to identify the need to modify ongoing clinical trials. The system has proven its value in the monitoring of phase II and III trials, including trial N9741, a three-arm, phase III, advanced colorectal cancer chemotherapy study exploring combinations of irinotecan, oxaliplatin, and fluorouracil. We believe the methods described present opportunities for improving patient safety in clinical research. abstract_id: PUBMED:18278696 Evidence-based medicine in oncology: do the results of trials reflect clinical reality? Evidence-based medicine (ebm) is the answer to the postulate to grade the basis of scientific knowledge in medical care and to protect it against proceedings of unjustifiable arbitrariness. The ranking of controlled clinical trials, the evaluation of publications, meta-analyses, and references to "levels of evidence" in medical guidelines are well established. This is not inconsistent with the fact that many diagnostic and therapeutic measures are not evidence-based and that, even in reputable scientific journals, marketing intentions come into conflict with evidence-based facts. The demand for implementing ebm is furthermore an unsustainable ethical pretension as long as ebm itself is not evidence-based. In many cases better results from ebm are not supported by outcome studies. Health services research which, amongst others, evaluates implementation of study results under everyday conditions should be seen as an essential part of ebm. In oncology, cancer registries contribute to this type of transparency. Cancer registries show to what extent ebm is established as an encouraging future programme for the daily cancer health-care delivery and whether ebm exists as a barely realisable parallel world of promising controlled clinical trials. Answer: The improvement in survival of metastatic colorectal cancer (MCRC) observed in clinical trials does not seem to be fully reproduced in population-based studies. A study analyzing data from a French population-based registry over a 34-year period found that while there was an improvement in survival in relation to the increased use of more effective medical treatment, at a population level, not all patients are treated equally. Most notably, the elderly do not benefit from the most up-to-date treatment options, which includes targeted therapies. The median relative survival for elderly patients was significantly lower than for younger patients (5.0 months compared to 15.6 months) (PUBMED:23642328). This discrepancy between clinical trial results and real-world outcomes may be partly due to the representativeness of clinical trials. A study assessing the population representativeness of colorectal cancer (CRC) treatment trials using the Generalizability Index for Study Traits (GIST) found that while the quantitative eligibility criteria of CRC trials are generally not restrictive, the qualitative eligibility criteria often have moderate or strict restrictions. This could impact the population representativeness of the real-world patient population, suggesting that the benefits observed in clinical trials may not be fully generalizable to the broader patient population (PUBMED:28268936). Additionally, web-based models like Numeracy and Adjuvant! Online, which are used to estimate prognosis and potential benefit of adjuvant therapy for patients with stage II and III colon cancer, have shown to overestimate survival for patients with stage II disease who received 5-FU in a population-based dataset. This indicates that the predictive performance of these models may not always align with actual observed outcomes in the general population (PUBMED:21365628). In conclusion, while clinical trials have shown improvements in survival for patients with MCRC, these benefits are not fully realized in a population-based context, particularly among certain groups like the elderly. The representativeness of clinical trials and the generalizability of their results to the broader patient population remain concerns that can affect the translation of clinical trial benefits to real-world settings.
Instruction: Low-grade gliomas: do changes in rCBV measurements at longitudinal perfusion-weighted MR imaging predict malignant transformation? Abstracts: abstract_id: PUBMED:18372467 Low-grade gliomas: do changes in rCBV measurements at longitudinal perfusion-weighted MR imaging predict malignant transformation? Purpose: To prospectively perform longitudinal magnetic resonance (MR) perfusion imaging of conservatively treated low-grade gliomas to determine whether relative cerebral blood volume (rCBV) changes precede malignant transformation as defined by conventional MR imaging and clinical criteria. Materials And Methods: All patients gave written informed consent for this institutional ethics committee-approved study. Thirteen patients (seven men, six women; age range, 29-69 years) with biopsy-proved low-grade glioma treated only with antiepileptic drugs were examined longitudinally with susceptibility-weighted perfusion, T2-weighted, fluid-attenuated inversion recovery, and high-dose contrast material-enhanced T1-weighted MR imaging at 6-month intervals to date or until malignant transformation was diagnosed. Student t tests were used to determine differences in rCBV values between "transformers" and "nontransformers" at defined time points throughout study follow-up. Results: Seven patients showed progression to high-grade tumors between 6 and 36 months (mean, 22.3 months), and disease in six patients remained stable over a period of 12-36 months (mean, 23 months). Transformers had a slightly (but not statistically significantly) higher group mean rCBV than nontransformers at the point of study entry (1.93 vs 1.31). In nontransformers, the rCBV remained relatively stable and increased to only 1.52 over a mean follow-up of 23 months. In contrast, transformers showed a continuous increase in rCBV up to the point of transformation, when contrast enhancement became apparent on T1-weighted images. The group mean rCBV was 5.36 at transformation but also showed a significant increase from the initial study at 12 months (3.14, P = .022) and at 6 months (3.65, P = .049) before transformation. Rates of rCBV change between two successive time points were also significantly higher in transformers than in nontransformers. Conclusion: In transforming low-grade glioma, susceptibility-weighted MR perfusion imaging can demonstrate significant increases in rCBV up to 12 months before contrast enhancement is apparent on T1-weighted MR images. abstract_id: PUBMED:24578243 Comparison of 18F-FET PET and perfusion-weighted MR imaging: a PET/MR imaging hybrid study in patients with brain tumors. Unlabelled: PET using O-(2-(18)F-fluoroethyl)-L-tyrosine ((18)F-FET) provides important diagnostic information in addition to that from conventional MR imaging on tumor extent and activity of cerebral gliomas. Recent studies suggest that perfusion-weighted MR imaging (PWI), especially maps of regional cerebral blood volume (rCBV), may provide similar diagnostic information. In this study, we directly compared (18)F-FET PET and PWI in patients with brain tumors. Methods: Fifty-six patients with gliomas were investigated using static (18)F-FET PET and PWI. For comparison, 8 patients with meningiomas were included. We generated a set of tumor and reference volumes of interest (VOIs) based on morphologic MR imaging and transferred these VOIs to the corresponding (18)F-FET PET scans and PWI maps. From these VOIs, tumor-to-brain ratios (TBR) were calculated, and normalized histograms were generated for (18)F-FET PET and rCBV maps. Furthermore, in rCBV maps and in (18)F-FET PET scans, tumor volumes, their spatial congruence, and the distance between the local hot spots were assessed. Results: For patients with glioma, TBR was significantly higher in (18)F-FET PET than in rCBV maps (TBR, 2.28 ± 0.99 vs. 1.62 ± 1.13; P &lt; 0.001). Histogram analysis of the VOIs revealed that (18)F-FET scans could clearly separate tumor from background. In contrast, deriving this information from rCBV maps was difficult. Tumor volumes were significantly larger in (18)F-FET PET than in rCBV maps (tumor volume, 24.3 ± 26.5 cm(3) vs. 8.9 ± 13.9 cm(3); P &lt; 0.001). Accordingly, spatial overlap of both imaging parameters was poor (congruence, 11.0%), and mean distance between the local hot spots was 25.4 ± 16.1 mm. In meningioma patients, TBR was higher in rCBV maps than in (18)F-FET PET (TBR, 5.33 ± 2.63 vs. 2.37 ± 0.32; P &lt; 0.001) whereas tumor volumes were comparable. Conclusion: In patients with cerebral glioma, tumor imaging with (18)F-FET PET and rCBV yields different information. (18)F-FET PET shows considerably higher TBRs and larger tumor volumes than rCBV maps. The spatial congruence of both parameters is poor. The locations of the local hot spots differ considerably. Taken together, our data show that metabolically active tumor tissue of gliomas as depicted by amino acid PET is not reflected by rCBV as measured with PWI. abstract_id: PUBMED:29184763 Usefulness of enhancement-perfusion mismatch in differentiation of CNS lymphomas from other enhancing malignant tumors of the brain. Background: Surgical planning and treatment options for primary or secondary central nervous system lymphomas (PCNSL or SCNSL) are different from other enhancing malignant lesions such as glioblastoma multiforme (GBM), anaplastic gliomas and metastases; so, it is critical to distinguish them preoperatively. We hypothesized that enhancement-perfusion (E-P) mismatch on dynamic susceptibility weighted magnetic resonance (DSC-MR) perfusion imaging which corresponds to low mean relative cerebral blood volume (mean rCBV) in an enhancing portion of the tumor should allow differentiation of CNS lymphomas from other enhancing malignant lesions. Methods: We retrospectively reviewed pre-treatment MRI exams, including DSC-MR perfusion images of 15 lymphoma patients. As a control group, pre-treatment DSC-MR perfusion images of biopsy proven 18 GBMs (group II), 13 metastases (group III), and 10 anaplastic enhancing gliomas (group IV) patients were also reviewed. Region of interests (ROIs) were placed around the most enhancing part of tumor on contrast-enhanced T1WI axial images and images were transferred onto co-registered DSC perfusion maps to obtain CBV in all 4 groups. The mean and maximum relative CBV values were obtained. Statistical analysis was performed on SPSS software and significance of the results between the groups was done with Mann-Whitney test, whereas optimal thresholds for tumor differentiation were done by receiver operating characteristic (ROC) analysis. Results: The enhancing component of CNS lymphomas were found to have significantly lower mean rCBV compared to enhancing component of GBM (1.2 versus 4.3; P&lt;0.001), metastasis (1.2 versus 2.7; P&lt;0.001), and anaplastic enhancing gliomas (1.2 versus 2.4; P&lt;0.001). Maximum rCBV of enhancing component of lymphoma were significantly lower than GBM (3.1 versus 6.5; P&lt;0.001) and metastasis (3.1 versus 4.9; P&lt;0.013), and not significantly lower than anaplastic enhancing gliomas (3.9 versus 4.2; P&lt;0.08). On the basis of ROC analysis, mean rCBV provided the best threshold [area under the curve (AUC) =0.92] and had better accuracy in differentiating malignant lesions. Conclusions: E-P mismatch in DSC perfusion MR, i.e., low mean rCBV in an enhancing portion of the tumor is strongly suggestive of lymphoma and should allow differentiation of CNS lymphoma from other enhancing malignant lesions. abstract_id: PUBMED:22870830 Differentiation between benign and malignant meningiomas using diffusion and perfusion MR imaging. The purpose of the study was to investigate the utility of diffusion-weighted (DW) and perfusion-weighted (PW) MR imaging to differentiate benign and malignant meningiomas which were verified histologically. A total of 29 patients with meningiomas (22 benign and 7 malignant) underwent conventional, perfusion and diffusion MR imaging. Conventional MRI revealed the ordinary signs of tumor. The absolute apparent diffusion coefficient (ADC) and normalized ADC (NADC) ratio, maximal relative cerebral blood volume (rCBV) and the corresponding relative mean time to enhance (rMTE) in both tumour parenchyma and peritumoral edema were measured. The mean ADC of malignant meningiomas (0.64±0.12 x 10(-3) mm(2)/s) was significantly lower compared with benign meningiomas (0.89±0.09 x 10(-3) mm(2)/s; P&lt;.0001). Mean NADC ratio in the malignant group (0.93±0.18) was also significantly lower than in the benign group (1.29±0.12; P&lt;.0001). The mean maximal rCBV values of benign and malignant meningiomas were 7.18±4.12 and 6.01±3.89, in the tumour parenchyma, and 1,07±0,95 and 3,85±1,41 in the peritumoral edema. The mean rMTE values were 1.14±0.23 and 1.24±0.27, in the tumour, and 0.88±0.24 and 1.21±0.31, in the peritumoral edema. The differences in rCBV and rMTE values between benign and malignant meningiomas were not significant (P&gt;0.05) in the tumour parenchyma, but both were significant (p&lt;0.05) in the peritumoral edema. Diffusion and perfusion MR imaging are useful in the preoperative differentiation between benign and malignant meningiomas. abstract_id: PUBMED:37848641 Assessment of Brain Tumour Perfusion Using Early-Phase 18F-FET PET: Comparison with Perfusion-Weighted MRI. Purpose: Morphological imaging using MRI is essential for brain tumour diagnostics. Dynamic susceptibility contrast (DSC) perfusion-weighted MRI (PWI), as well as amino acid PET, may provide additional information in ambiguous cases. Since PWI is often unavailable in patients referred for amino acid PET, we explored whether maps of relative cerebral blood volume (rCBV) in brain tumours can be extracted from the early phase of PET using O-(2-18F-fluoroethyl)-L-tyrosine (18F-FET). Procedure: Using a hybrid brain PET/MRI scanner, PWI and dynamic 18F-FET PET were performed in 33 patients with cerebral glioma and four patients with highly vascularized meningioma. The time interval from 0 to 2 min p.i. was selected to best reflect the blood pool phase in 18F-FET PET. For each patient, maps of MR-rCBV, early 18F-FET PET (0-2 min p.i.) and late 18F-FET PET (20-40 min p.i.) were generated and coregistered. Volumes of interest were placed on the tumour (VOI-TU) and normal-appearing brain (VOI-REF). The correlation between tumour-to-brain ratios (TBR) of the different parameters was analysed. In addition, three independent observers evaluated MR-rCBV and early 18F-FET maps (18F-FET-rCBV) for concordance in signal intensity, tumour extent and intratumoural distribution. Results: TBRs calculated from MR-rCBV and 18F-FET-rCBV showed a significant correlation (r = 0.89, p &lt; 0.001), while there was no correlation between late 18F-FET PET and MR-rCBV (r = 0.24, p = 0.16) and 18F-FET-rCBV (r = 0.27, p = 0.11). Visual rating yielded widely agreeing findings or only minor differences between MR-rCBV maps and 18F-FET-rCBV maps in 93 % of the tumours (range of three independent raters 91-94%, kappa among raters 0.78-1.0). Conclusion: Early 18F-FET maps (0-2 min p.i.) in gliomas provide similar information to MR-rCBV maps and may be helpful when PWI is not possible or available. Further studies in gliomas are needed to evaluate whether 18F-FET-rCBV provides the same clinical information as MR-rCBV. abstract_id: PUBMED:27003139 Grading diffuse gliomas without intense contrast enhancement by amide proton transfer MR imaging: comparisons with diffusion- and perfusion-weighted imaging. Objectives: To investigate whether amide proton transfer (APT) MR imaging can differentiate high-grade gliomas (HGGs) from low-grade gliomas (LGGs) among gliomas without intense contrast enhancement (CE). Methods: This retrospective study evaluated 34 patients (22 males, 12 females; age 36.0 ± 11.3 years) including 20 with LGGs and 14 with HGGs, all scanned on a 3T MR scanner. Only tumours without intense CE were included. Two neuroradiologists independently performed histogram analyses to measure the 90th-percentile (APT90) and mean (APTmean) of the tumours' APT signals. The apparent diffusion coefficient (ADC) and relative cerebral blood volume (rCBV) were also measured. The parameters were compared between the groups with Student's t-test. Diagnostic performance was evaluated with receiver operating characteristic (ROC) analysis. Results: The APT90 (2.80 ± 0.59 % in LGGs, 3.72 ± 0.89 in HGGs, P = 0.001) and APTmean (1.87 ± 0.49 % in LGGs, 2.70 ± 0.58 in HGGs, P = 0.0001) were significantly larger in the HGGs compared to the LGGs. The ADC and rCBV values were not significantly different between the groups. Both the APT90 and APTmean showed medium diagnostic performance in this discrimination. Conclusions: APT imaging is useful in discriminating HGGs from LGGs among diffuse gliomas without intense CE. Key Points: • Amide proton transfer (APT) imaging helps in grading non-enhancing gliomas • High-grade gliomas showed higher APT signal than low-grade gliomas • APT imaging showed better diagnostic performance than diffusion- and perfusion-weighted imaging. abstract_id: PUBMED:19727562 Predictive value of multimodality MRI using conventional, perfusion, and spectroscopy MR in anaplastic transformation of low-grade oligodendrogliomas. The aim of our study was to evaluate the role of proton magnetic resonance (MR) spectroscopy and MR perfusion in the follow-up of low-grade gliomas, since conventional MR imaging (MRI) is not reliable in detecting the passage from a low- to high-grade tumor. Twenty-one patients with a World Health Organisation (WHO) grade II glioma were followed up using proton MR spectroscopy, perfusion, and conventional MRIs. Follow-up MRIs had been performed at the third month of evolution and then twice a year, with an average of five MR studies per patient. Five out of the 21 patients had an anaplastic transformation. A choline to creatine ratio (choline/creatine ratio) above 2.4 is associated with an 83% risk of a malignant transformation in an average delay of 15.4 months. The choline/creatine ratio at this threshold was more efficient than perfusion MR in detecting the anaplastic transformation, with sensitivity of 80% and specificity of 94%. An increased choline/creatine ratio seemed to occur an average 15 months before the elevation of relative cerebral blood volume (rCBV). The mean annual growth of low-grade glioma was 3.65 mm. A growth rate higher than 3 mm per year was also correlated with greater risk of anaplastic transformation. Proton magnetic resonance spectroscopy should be recommended in the follow-up of low-grade gliomas since the choline/creatine ratio can predict anaplastic transformation before perfusion abnormalities, with high positive predictive value of 83%. abstract_id: PUBMED:14625221 Glioma grading: sensitivity, specificity, and predictive values of perfusion MR imaging and proton MR spectroscopic imaging compared with conventional MR imaging. Background And Purpose: Sensitivity, positive predictive value (PPV), and negative predictive value (NPV) of conventional MR imaging in predicting glioma grade are not high. Relative cerebral blood volume (rCBV) measurements derived from perfusion MR imaging and metabolite ratios from proton MR spectroscopy are useful in predicting glioma grade. We evaluated the sensitivity, specificity, PPV, and NPV of perfusion MR imaging and MR spectroscopy compared with conventional MR imaging in grading primary gliomas. Methods: One hundred sixty patients with a primary cerebral glioma underwent conventional MR imaging, dynamic contrast-enhanced T2*-weighted perfusion MR imaging, and proton MR spectroscopy. Gliomas were graded as low or high based on conventional MR imaging findings. The rCBV measurements were obtained from regions of maximum perfusion. Metabolite ratios (choline [Cho]/creatine [Cr], Cho/N-acetylaspartate [NAA], and NAA/Cr) were measured at a TE of 144 ms. Tumor grade determined with the three methods was then compared with that from histopathologic grading. Logistic regression and receiver operating characteristic analyses were performed to determine optimum thresholds for tumor grading. Sensitivity, specificity, PPV, and NPV for identifying high-grade gliomas were also calculated. Results: Sensitivity, specificity, PPV, and NPV for determining a high-grade glioma with conventional MR imaging were 72.5%, 65.0%, 86.1%, and 44.1%, respectively. Statistical analysis demonstrated a threshold value of 1.75 for rCBV to provide sensitivity, specificity, PPV, and NPV of 95.0%, 57.5%, 87.0%, and 79.3%, respectively. Threshold values of 1.08 and 1.56 for Cho/Cr and 0.75 and 1.60 for Cho/NAA provided the minimum C2 and C1 errors, respectively, for determining a high-grade glioma. The combination of rCBV, Cho/Cr, and Cho/NAA resulted in sensitivity, specificity, PPV, and NPV of 93.3%, 60.0%, 87.5%, and 75.0%, respectively. Significant differences were noted in the rCBV and Cho/Cr, Cho/NAA, and NAA/Cr ratios between low- and high-grade gliomas (P &lt;.0001,.0121,.001, and.0038, respectively). Conclusion: The rCBV measurements and metabolite ratios both individually and in combination can increase the sensitivity and PPV when compared with conventional MR imaging alone in determining glioma grade. The rCBV measurements had the most superior diagnostic performance (either with or without metabolite ratios) in predicting glioma grade. Threshold values can be used in a clinical setting to evaluate tumors preoperatively for histologic grade and provide a means for guiding treatment and predicting postoperative patient outcome. abstract_id: PUBMED:15709124 Differential chemosensitivity of tumor components in a malignant oligodendroglioma: assessment with diffusion-weighted, perfusion-weighted, and serial volumetric MR imaging. We report the case of malignant oligodendroglioma in a 36-year-old man who underwent diffusion-weighted, perfusion-weighted, and volumetric MR imaging before and after PCV (procarbazine, CCNU, vincristine) chemotherapy. The tumor regions exhibiting a low apparent diffusion coefficient and increased relative cerebral blood volume showed a marked response to chemotherapy and dramatic decrease in volume, whereas the remaining tumor regions showed little change. Diffusion-weighted and perfusion-weighted MR imaging may be helpful in predicting chemosensitivity of glial tumors. abstract_id: PUBMED:24199813 Advanced MR imaging techniques in the evaluation of nonenhancing gliomas: perfusion-weighted imaging compared with proton magnetic resonance spectroscopy and tumor grade. A significant number of nonenhancing (NE) gliomas are reported to be malignant. The purpose of this study was to compare the value of advanced MR imaging techniques, including T2*-dynamic susceptibility contrast PWI (DSC-PWI) and proton magnetic resonance spectroscopy ((1)HMRS) in the evaluation of NE gliomas. Twenty patients with NE gliomas underwent MRI including DSC-PWI and (1)HMRS. The relative CBV (rCBV) measurements were obtained from regions of maximum perfusion. The peak ratios of choline/creatine (Cho/Cr) and myo-inositol/creatine (mIns/Cr) were measured at a TE of 30 ms. Demographic features, tumor volumes, and PWI- and (1)HMRS-derived measures were compared between low-grade gliomas (LGGs) and high-grade gliomas (HGGs). In addition, the association of initial rCBV ratio with tumor progression was evaluated in LGGs. No significant difference was noted in age, sex or tumor size between LGGs and HGGs. Cho/Cr ratios were significantly higher in HGGs (1.7±0.63) than in LGGs (1.2±0.38). The receiver operating characteristic analysis demonstrated that a Cho/Cr ratio with a cutoff value of 1.3 could differentiate between LGG and HGG with a specificity of 100% and a sensitivity of 71.4%. There was no significant difference in the rCBV ratio and the mIns/Cr ratio between LGG and HGG. However, higher rCBV ratios were observed with more rapid progressions in LGGs. The results imply that Cho/Cr ratios are useful in distinguishing NE LGG from HGG and can be helpful in preoperative grading and biopsy guidance. On the other hand, rCBV ratios do not help in the distinction. Answer: Yes, changes in relative cerebral blood volume (rCBV) measurements at longitudinal perfusion-weighted MR imaging can predict malignant transformation in low-grade gliomas. A study that prospectively performed longitudinal MR perfusion imaging of conservatively treated low-grade gliomas found that in patients whose gliomas transformed into high-grade tumors, there was a continuous increase in rCBV up to the point of transformation. This increase in rCBV was significant and could be observed up to 12 months before contrast enhancement became apparent on T1-weighted MR images. Transformers had a higher rate of rCBV change between successive time points compared to non-transformers, indicating that rCBV changes can precede and predict malignant transformation (PUBMED:18372467).
Instruction: Does heart failure therapy differ according to patient sex? Abstracts: abstract_id: PUBMED:17551967 Does heart failure therapy differ according to patient sex? Objectives: To assess differences in clinical characteristics, treatment and outcome between men and women with heart failure (HF) treated at a multidisciplinary HF unit. All patients had their first unit visit between August 2001 and April 2004. Patients: We studied 350 patients, 256 men, with a mean age of 65 +/- 10.6 years. In order to assess the pharmacological intervention more homogeneously, the analysis was made at one year of follow-up. Results: Women were significantly older than men (69 +/- 8.8 years vs. 63.6 +/- 10.9 years, p &lt; 0.001). Significant differences were found in the HF etiology and in co-morbidities. A higher proportion of men were treated with ACEI (83% vs. 68%, p &lt; 0.001) while more women received ARB (18% vs. 8%, p = 0.006), resulting in a similar percentage of patients receiving either of these two drugs (men 91% vs. women 87%). No significant differences were observed in the percentage of patients receiving beta-blockers, loop diuretics, spironolactone, anticoagulants, amiodarone, nitrates or statins. More women received digoxin (39% vs. 22%, p = 0.001) and more men aspirin (41% vs. 31%, p = 0.004). Carvedilol doses were higher in men (29.4 +/- 18.6 vs. 23.8 +/- 16.4, p = 0.03), ACEI doses were similar between sexes, and furosemide doses were higher in women (66 mg +/- 26.2 vs. 56 mg +/- 26.2, p &lt; 0.05). Mortality at 1 year after treatment analysis was similar between sexes (10.4% men vs. 10.5% women). Conclusions: Despite significant differences in age, etiology and co-morbidities, differences in treatment between men and women treated at a multidisciplinary HF unit were small. Mortality at 1 year after treatment analysis was similar for both sexes. abstract_id: PUBMED:33170080 Long-Term Survival After Implantable Cardiac Defibrillator Therapy According to Sex: A Propensity Matched Study. Background: Whether the sex factor influences the benefit of the implantable cardioverter-defibrillator (ICD) for the prevention of sudden death remains a subject of debate. Using a prospective registry, we sought to analyze the survival and time to first ICD therapy according to sex. Materials and Methods: Retrospective analysis of a prospective cohort of patients undergoing an ICD implant from 2008 to 2019. Data about time to first appropriate therapy, type of therapy administered, and incidence and causes of mortality were collected. Results: Among 756 ICD patients, 150 (19.8%) were women. Women were younger (51 ± 15 years vs. 61 ± 14 years; p &lt; 0.001) and showed a lower rate of ischemic cardiomyopathy (23% vs. 54%; p &lt; 0.001) and atrial fibrillation (12% vs. 19%; p = 0.05). Women had higher left ventricular ejection fraction (39% ± 17% vs. 35% ± 13%) and showed more frequently left bundle branch block (39% vs. 28%, p = 0.027). The rate of primary prevention (68% vs. 59.6%; p = 0.058) and cardiac resynchronization therapy (27% vs. 19%, p = 0.02) were higher in women. After a median follow-up of 46 months (3382 patient-years), the incidence of both the primary combined endpoint of mortality/transplant (20% vs. 29%; logrank = 0.031) and ICD therapies (27% vs. 34%; p = 0.138) were lower in women. According to the propensity score-matching analysis, no differences were observed between both sexes with respect to the incidence of mortality/transplant (24.8% vs. 28.6%; logrank = 0.88), ICD therapies (28% vs. 27%; logrank = 0.17), and main cause of death (heart failure [HF]). Conclusions: The clinical characteristics at the moment of ICD implant are different between sexes. After adjusting them, both sexes equally benefit from the ICD. HF is the main cause of mortality both in men and women. abstract_id: PUBMED:11253263 Congestive heart failure medications: is there a rationale for sex-specific therapy? The etiology and physiology of symptomatic heart failure may differ by sex. Prognostic markers for outcomes of CHF may also have sex-specific sensitivity. Insufficient numbers of women have been enrolled in most large interventional survival studies of pharmacologic therapy for CHF to test for sex-specific efficacy or responses. There are some data that demonstrate sex-specific lack of efficacy and increase in side effects of therapy. Given that the number of women admitted to hospitals with CHF each year exceeds the number of men, there is an obvious need to specifically and definitively evaluate the pharmacologic therapy of CHF in women. abstract_id: PUBMED:7758181 Sex differences in patient acceptance of cardiac transplant candidacy. Background: The overwhelming majority of cardiac transplant recipients are men. This can be partially explained by the earlier age at which heart failure develops in men. However, an underrepresentation of women also may reflect physician referral or selection biases or differences in patients' access to or acceptance of heart transplantation. Methods And Results: We investigated whether sex bias occurred in the transplant candidate selection process at a single cardiac transplant center. We prospectively evaluated 386 individuals &lt; 70 years of age (295 men, 91 women) referred for management of moderate to severe heart failure and/or cardiac transplant evaluation. Age, race, sex, heart failure type, New York Heart Association class, left ventricular ejection fraction, peak exercise oxygen consumption, disease duration, resting hemodynamic measurements, comorbidity index score, health insurance coverage, and estimated household income were recorded. For patients not accepted for transplantation, the reason for rejection was also obtained. Univariable and multivariable (logistic regression) analyses were performed comparing men and women and patients accepted and those not accepted for cardiac transplantation. Female sex was independently associated with rejection for cardiac transplantation (odds ratio, 2.57; P = .01). However, the reason for rejection (odds ratio, 2.57; P = .01). However, the reason for rejection was more likely to be patient self-refusal for women than for men (29% versus 9%), and female sex was independently associated with patient self-refusal (odds ratio, 4.68; P = .003). When patients who refused transplant were reclassified as accepted for transplant, female sex was no longer associated with nonacceptance. However, lower patient income was associated with nonacceptance for transplant. Conclusions: We found no evidence of sex bias in the selection of cardiac transplant recipients at our center. These findings suggest that the underrepresentation of women among cardiac transplant recipients may result, in part, from a sex difference in treatment preference, with a decreased willingness of women to undergo transplantation. The reasons for the difference in acceptance rates between men and women need to be elucidated. abstract_id: PUBMED:26659647 Toward Sex-Specific Guidelines for Cardiac Resynchronization Therapy? An important treatment for patients with heart failure is cardiac resynchronization therapy (CRT). Even though only 20% of women were included in clinical trials for CRT, a benefit has been shown in recent studies for subgroups of women compared to their male counterparts. Given this low inclusion rate of women in clinical studies, professional society guideline-based CRT recommendations, such as those by the American College of Cardiology Foundation (ACCF)/American Heart Association (AHA)/Heart Rhythm Society (HRS), may not truly represent the best treatment for women, especially since most of the reports that showed this greater benefit in women were published after the latest guidelines. Despite having research and multiple publications regarding sex-specific heart failure outcomes and response to CRT, the ACCF/AHA/HRS guidelines have not yet been updated to account for the recent information regarding the differences in benefit for women and men with similar patient characteristics. This review discusses the physiology behind CRT, sex-specific characteristics of heart failure, and cardiac electrophysiology and summarizes the current sex-specific literature to encourage consideration of CRT guidelines for women and men separately. abstract_id: PUBMED:30648465 Sex Differences in Acute Complications of Cardiac Implantable Electronic Devices: Implications for Patient Safety. Background To date, limited population-level studies have examined the impact of sex on the acute complications of cardiac implantable electronic devices ( CIED) , including permanent pacemakers, implantable cardioverter defibrillators, and cardiac resynchronization therapy devices. Methods and Results We studied all patients aged &gt;18 years from 2010 to 2015 who were a resident of Australia or New Zealand, undergoing a new permanent pacemaker, implantable cardioverter defibrillator, or cardiac resynchronization therapy implant. Standardized variables were collected including patient demographic characteristics, primary and secondary diagnoses, procedures performed and discharge status. Diagnoses and procedures were coded as per the International Classification of Diseases, Tenth Revision ( ICD-10) and the Australian Classification of Health Interventions. The primary end point was the incidence of major CIED -related complications in-hospital or within 90 days of discharge, with the effect of sex evaluated using multiple logistic regression. A total of 81 304 new CIED (61 658 permanent pacemakers, 12 097 implantable cardioverter defibrillators, 7574 cardiac resynchronization therapy) implants were included (38% women). Overall, 8.5% of women and 8.0% of men experienced a CIED complication ( P=0.008). Differences between women and men remained significant after adjustment for age, procedural acuity, and comorbidities (odds ratio 1.10, 95% CI: 1.04-1.16, P&lt;0.001). Differences in CIED complication rates were primarily driven by excess rate of in-hospital pleural drainage (1.2% women versus 0.6% men, P&lt;0.001; adjusted odds ratio 1.86, 95% CI: 1.59-2.17, P&lt;0.001) and pericardial drainage (0.3% women versus 0.1% men, P&lt;0.001; adjusted odds ratio 2.17, 95% CI: 1.48-3.18, P&lt;0.001). Conclusions Women are at higher risk of acute CIED complications. Improvements in implant technique and technologies are required to minimize the risk of implant-related complications in women. abstract_id: PUBMED:35175846 Sex-Specific Cardiovascular Risks of Cancer and Its Therapies. In both cardiovascular disease and cancer, there are established sex-based differences in prevalence and outcomes. Males and females may also differ in terms of risk of cardiotoxicity following cancer therapy, including heart failure, cardiomyopathy, atherosclerosis, thromboembolism, arrhythmias, and myocarditis. Here, we describe sex-based differences in the epidemiology and pathophysiology of cardiotoxicity associated with anthracyclines, hematopoietic stem cell transplant (HCT), hormone therapy and immune therapy. Relative to males, the risk of anthracycline-induced cardiotoxicity is higher in prepubertal females, lower in premenopausal females, and similar in postmenopausal females. For autologous hematopoietic cell transplant, several studies suggest an increased risk of late heart failure in female lymphoma patients, but sex-based differences have not been shown for allogeneic hematopoietic cell transplant. Hormone therapies including GnRH (gonadotropin-releasing hormone) modulators, androgen receptor antagonists, selective estrogen receptor modulators, and aromatase inhibitors are associated with cardiotoxicity, including arrhythmia and venous thromboembolism. However, sex-based differences have not yet been elucidated. Evaluation of sex differences in cardiotoxicity related to immune therapy is limited, in part, due to low participation of females in relevant clinical trials. However, some studies suggest that females are at increased risk of immune checkpoint inhibitor myocarditis, although this has not been consistently demonstrated. For each of the aforementioned cancer therapies, we consider sex-based differences according to cardiotoxicity management. We identify knowledge gaps to guide future mechanistic and prospective clinical studies. Furthering our understanding of sex-based differences in cancer therapy cardiotoxicity can advance the development of targeted preventive and therapeutic cardioprotective strategies. abstract_id: PUBMED:29759781 Sex-Specific Response to Cardiac Resynchronization Therapy: Effect of Left Ventricular Size and QRS Duration in Left Bundle Branch Block. Objectives: In this study, the authors sought to assess the impact of body and heart size on sex-specific cardiac resynchronization therapy (CRT) response rate, according to QRS duration (QRSd) as a continuum. Background: Effects of CRT differ between sexes for any given QRSd. Methods: New York Heart Association functional class III/IV patients with nonischemic cardiomyopathy and "true" left bundle branch block (LBBB) were evaluated. Left ventricular mass (LVM) and end-diastolic volume were measured echocardiographically. Positive response was defined by left ventricular ejection fraction (LVEF) improvement post-CRT. Results: Among 130 patients (LVEF 19 ± 7.1%; QRSd 165 ± 20 ms; 55% female), CRT improved LVEF to 32 ± 14% (p &lt; 0.001) during a median 2 years follow-up. Positive responses occurred in 103 of 130 (79%) (78% when QRSd &lt;150 ms vs. 80% when QRSd ≥150 ms; p = 0.8). Body surface area (BSA), QRSd, and LVM were lower in women, but QRSd/LVM ratio greater (p &lt; 0.0001). Sexes did not differ for pharmacotherapy and comorbidities, but female CRT response was greater: 90% (65 of 72) versus 66% (38 of 58) in males (p &lt; 0.001). With QRSd as a continuum, the overall CRT-response relationship showed a progressive increase to plateau between 150 and 170 ms, then a decrease. Sex-specific differences were conspicuous: among females, a peak effect was observed between 135 and 150 ms, thereafter a decline, with the male response rate lower, but with a gradual increase as QRSd lengthened. Sex-specific differences were unaltered by BSA, but resolved with integration of LVM or end-diastolic volume. Conclusions: Sex differences in the QRSd-response relationship among CRT patients with LBBB were unexplained by application of strict LBBB criteria or by BSA, but resolved by QRSd normalization for heart size using LV mass or volume. abstract_id: PUBMED:36262368 Sex and Gender Differences in Heart Failure. Heart failure (HF) phenotypes differ according to sex. HF preserved ejection fraction (EF) has a greater prevalence in women and HF reduced EF (HFrEF) in men. Women with HF survive longer than men and have a lower risk of sudden death. Ischemia is the most prominent cause in men, whereas hypertension and diabetes contribute to a greater extent in women. Women with HF have a greater stiffness of the smaller left ventricle and a higher EF than men. This higher stiffness of women's hearts may be based on an increase in fibrosis at old age. In younger women estrogen reduces collagen production in female cardiac fibroblasts, but stimulates it in males. Lipid and energy metabolism is better maintained in female than in male stressed hearts. Pulse pressure is a key determinant of outcome in HF women but not in men. Takotsubo and peripartum cardiomyopathy are rare diseases affecting predominantly or exclusively women. Sudden cardiac arrest affects more men than women, but women are less adequately treated. New findings in HF therapy indicate that women with HFrEF need lower doses of beta-blockers and angiotensin-converting enzyme inhibitors than men for optimal effects. The combined neprilysin inhibitor/angiotensin II receptor blockers sacubitril-valsartan led to a significant reduction in event rate versus valsartan in women, which was not observed in men. Unfortunately, only less than 10% of recent randomized controlled trial report effects and adverse drug reactions for women and men separately. More research on sex differences in pathophysiology and therapy of HF is needed. abstract_id: PUBMED:29314424 The interaction of sex, height, and QRS duration on the effects of cardiac resynchronization therapy on morbidity and mortality: an individual-patient data meta-analysis. Aims: To explore possible associations that may explain the greater benefit from cardiac resynchronization therapy (CRT) reported amongst women. Methods And Results: In an individual-patient data meta-analysis of five randomized controlled trials, all-cause mortality and the composite of all-cause mortality or first hospitalization for heart failure (HF) were compared among 794 women and 2702 men assigned to CRT or a control group. Multivariable analyses were performed to assess the impact of sex, QRS duration, HF aetiology, left ventricular end-diastolic diameter (LVEDD), and height on outcome. Women were shorter, had smaller LVEDD, more often left bundle branch block, and less often ischaemic heart disease, but QRS duration was similar between sexes. Women tended to obtain greater benefit from CRT but sex was not an independent predictor of either outcome. For all-cause mortality, QRS duration was the only independent predictor of CRT benefit. For the composite outcome, height and QRS duration, but not sex, were independent predictors of CRT benefit. Further analysis suggested increasing benefit with increasing QRS duration amongst shorter patients, of whom a great proportion were women. Conclusions: In this individual-patient data meta-analysis, CRT benefit was greater in shorter patients, which may explain reports of enhanced CRT benefit among women. Further analyses are required to determine whether recommendations on the QRS threshold for CRT should be adjusted for height. (ClinicalTrials.gov numbers: NCT00170300, NCT00271154, NCT00251251). Answer: Heart failure therapy can differ according to patient sex, as evidenced by several studies that have explored this issue. Women with heart failure tend to be older than men and have different etiologies and comorbidities associated with their heart failure. For example, women were found to be significantly older and had different heart failure etiologies compared to men, with a higher proportion of men treated with ACE inhibitors and more women receiving ARBs and digoxin (PUBMED:17551967). Despite these differences, mortality at one year after treatment was similar between sexes. In terms of implantable cardiac defibrillator (ICD) therapy, women and men showed different clinical characteristics at the moment of ICD implant, but after adjusting for these differences, both sexes equally benefited from the ICD (PUBMED:33170080). However, women are at higher risk of acute complications related to cardiac implantable electronic devices (CIED), suggesting a need for improvements in implant technique and technologies to minimize the risk of implant-related complications in women (PUBMED:30648465). Sex-specific responses to cardiac resynchronization therapy (CRT) have also been noted, with women showing a greater benefit from CRT in certain subgroups compared to men (PUBMED:26659647). This could be due to differences in heart size, as sex differences in the QRS duration-response relationship among CRT patients with left bundle branch block were resolved by QRS duration normalization for heart size using left ventricular mass or volume (PUBMED:29759781). Furthermore, heart failure phenotypes differ according to sex, with HF preserved ejection fraction being more prevalent in women and HF reduced ejection fraction in men. Women with heart failure survive longer than men and have a lower risk of sudden death. New findings in heart failure therapy indicate that women with HFrEF may need lower doses of beta-blockers and angiotensin-converting enzyme inhibitors than men for optimal effects (PUBMED:36262368). Overall, while there are sex-specific differences in heart failure etiology, comorbidities, and responses to certain therapies, the fundamental approach to heart failure therapy may be similar across sexes. However, these differences highlight the importance of considering sex as a factor in heart failure treatment and the potential need for sex-specific guidelines or adjustments in therapy (PUBMED:11253263, PUBMED:7758181, PUBMED:35175846).
Instruction: Motility abnormalities in esophageal body in GERD: are they truly related to reflux? Abstracts: abstract_id: PUBMED:15718863 Motility abnormalities in esophageal body in GERD: are they truly related to reflux? Background: Esophageal motility abnormalities have been observed in patients with gastroesophageal reflux disease. Goals: The aim of the present study was to determine if esophageal motor disorders in patients with a positive response to the omeprazole test are related to the existence of reflux or they are concomitant findings. Study: A 24-hour pH monitoring and a stationary manometry were performed on 128 patients: 49 of them had normal manometry, 31 hypotensive lower esophageal sphincter, 29 motor disorder in esophageal body, and 19 hypotensive lower esophageal sphincter and motor disorder in esophageal body. Results: We found an association between the presence of abnormal reflux and motor disorder in esophageal body (chi test; P &lt; 0.05). However, ineffective esophageal motility was the disorder most strongly related to reflux, whereas the hypercontractile disorders were not clearly attributed to it. Conclusions: Esophageal manometric abnormalities should be considered cautiously before considering a motor disorder as a consequence of abnormal reflux. abstract_id: PUBMED:24868489 Esophageal motility abnormalities in gastroesophageal reflux disease. Esophageal motility abnormalities are among the main factors implicated in the pathogenesis of gastroesophageal reflux disease. The recent introduction in clinical and research practice of novel esophageal testing has markedly improved our understanding of the mechanisms contributing to the development of gastroesophageal reflux disease, allowing a better management of patients with this disorder. In this context, the present article intends to provide an overview of the current literature about esophageal motility dysfunctions in patients with gastroesophageal reflux disease. Esophageal manometry, by recording intraluminal pressure, represents the gold standard to diagnose esophageal motility abnormalities. In particular, using novel techniques, such as high resolution manometry with or without concurrent intraluminal impedance monitoring, transient lower esophageal sphincter (LES) relaxations, hypotensive LES, ineffective esophageal peristalsis and bolus transit abnormalities have been better defined and strongly implicated in gastroesophageal reflux disease development. Overall, recent findings suggest that esophageal motility abnormalities are increasingly prevalent with increasing severity of reflux disease, from non-erosive reflux disease to erosive reflux disease and Barrett's esophagus. Characterizing esophageal dysmotility among different subgroups of patients with reflux disease may represent a fundamental approach to properly diagnose these patients and, thus, to set up the best therapeutic management. Currently, surgery represents the only reliable way to restore the esophagogastric junction integrity and to reduce transient LES relaxations that are considered to be the predominant mechanism by which gastric contents can enter the esophagus. On that ground, more in depth future studies assessing the pathogenetic role of dysmotility in patients with reflux disease are warranted. abstract_id: PUBMED:26675026 Esophageal motility abnormalities and their impact on esophageal acid exposure in patients with gastroesophageal reflux disease Objective: To identify the characteristics of esophageal motility abnormalities in patients with gastroesophageal reflux disease (GERD) and its influence on esophageal acid exposure. Methods: Patients with typical reflux symptoms and diagnosis of reflux esophagitis (RE) or non-erosive reflux disease (NERD), and healthy subject were enrolled in this prospective controlled study. The esophageal manometry and esophageal 26 hours pH monitoring were performed. GERD patients were divided into 3 groups according to their esophageal motility abnormalities: ① low lower esophageal sphincter pressure (LESP) group, ② ineffective esophageal motivation (IEM) group, ③ IEM and low LESP group. Esophageal acid exposure was analyzed among different groups. Results: A total of 27 GERD patients (15 RE, 12 NERD) and 10 healthy subjects were enrolled in this study. The esophageal motility abnormalities in GERD patients mainly presented as the decrease of LESP and distal esophageal body pressure. The proportion of 3 kinds of esophageal motility abnormalities has significant difference between RE and NERD patients (P = 0.017). In IEM and low LESP group, all patients suffered from RE, with the total number of acid exposure, the total acid exposure time and the acid exposure time in fasting higher than those in low LESP group [98.0 (63.3, 282.8) times vs 41.0 (25.0, 82.0) times, P = 0.029; 11.7% (4.1%, 30.0%) vs 2.2% (1.4%, 9.6%), P = 0.045; 2.6% (0.9%, 4.9%) vs 0.0 (0.0, 1.2%), P = 0.015]. Conclusions: Esophageal motility abnormalities in GERD patients are characterized as low LESP and IEM in distal esophagus. The coexistence of low LESP and IEM exacerbates esophageal acid exposure, which might explain the mechanism of esophageal mucosal injury in RE patients. abstract_id: PUBMED:36908189 Characteristics of reflux and gastric electrical activity in gastroesophageal reflux disease with ineffective esophageal motility. Objectives: The impact of ineffective esophageal motility (IEM) on gastroesophageal reflux disease (GERD) remains unknown, and abnormal esophageal motility often coexists with abnormal gastric motility. We aimed to investigate the role of IEM in GERD and its relationship with gastric electrical activity. Methods: Patients diagnosed as GERD based on GERD-questionnaire score ≥8 in our hospital from January 2020 to June 2022 were included. All patients underwent 24-h multichannel intraluminal impedance-pH monitoring, high-resolution manometry, and electrogastrogram and were categorized into the normal esophageal motility (NEM) and IEM groups, respectively. Reflux characteristics and gastric electric activity were compared between the two groups, and the correlation between gastric electric activity and reflux was analyzed. Results: Acid exposure time, total reflux episodes, and DeMeester score in the IEM group were higher than those in the NEM group. Distal mean nocturnal baseline impedance was significantly lower in the IEM group. Compared with the NEM group, the power ratio (PR) of fundus, antrum and pylorus and premeal and postmeal normal wave ratio of antrum were significantly lower in IEM. The total reflux episodes were negatively correlated with the PR of fundus and pylorus, and the DeMeester score was negatively correlated with the PR of corpus and pylorus. Conclusions: IEM may lead to increased reflux, resulting in esophageal mucosal damage. There may be consistency between abnormal esophageal motility and gastric motility. abstract_id: PUBMED:20400913 Esophageal motility in children with suspected gastroesophageal reflux disease. Background And Objectives: Motility abnormalities in adults with gastroesophageal reflux disease (GERD) include nontransmitted contractions, reduced distal esophageal contraction amplitude, and simultaneous contractions. Information on esophageal body motility in children with GERD is scarce. Our aim was to study esophageal motility patterns in children with GERD, taking into account the presence of anatomical abnormalities and neurological impairment, the effect of age on esophageal motility in GERD, and the relation between esophageal manometry and GERD severity parameters (acid exposure and presence of esophagitis). Patients And Methods: Consecutive children referred for severe GER(D) symptoms underwent a barium swallow, upper endoscopy, pH monitoring, and stationary water-perfused esophageal manometry. Results: Mean lower esophageal sphincter (LES) pressure and velocity of propagation in the proximal and distal esophagus decreased with age in this group of children with GERD (all P &lt; 0.01). Severely disturbed esophageal motility was found in children with previous esophageal atresia. Patients with psychomotor retardation had significantly lower occurrence of peristaltic waves (94% +/- 21% vs 79% +/- 38%; P = 0.001), distal propagation velocity (0.8 +/- 0.4 vs 0.6 +/- 0.5 cm/s; P = 0.05), and distal contraction duration (3.1 +/- 0.8 vs 3.4 +/- 1 seconds; P = 0.05). None of the manometric characteristics differed between patients with normal or abnormal esophageal pH monitoring or with or without erosive esophagitis. Conclusions: LES pressure and esophageal velocity decreased with increasing age. Esophageal manometry is severely impaired in children with esophageal atresia and psychomotor retardation. No specific esophageal motor abnormalities related to the presence of endoscopic esophagitis or abnormal pH monitoring were found. abstract_id: PUBMED:23178615 Esophageal motility pattern and gastro-esophageal reflux in chronic obstructive pulmonary disease. Background/aims: The association of esophageal motility and gastroesophageal reflux symptoms with respiratory symptoms is not well established in patients with chronic obstructive pulmonary disease (COPD). The aim of this work is to study the abnormalities of esophageal function in COPD patients and study its relation to smoking index, body mass index and indices of hyperinflation. Methodology: This study included 40 male COPD patients and 10 healthy controls. The patients and controls were subjected to spirometry, body plethysmography, esophageal manometry and 24hr pH-metry. Results: Upper gastrointestinal symptoms were found in 55% of patients, hypotensive upper esophageal sphincter pressure in 65% of patients and hypotensive lower esophageal sphincter pressure in 52.5% of patients. Pathological acid reflux was found in 35% of patients. The severity of GERD increased with increased age, smoking index and body mass index, p&lt;0.005. There was negative correlation between LESP and UESP compared with indices of hyperinflation, p&lt;0.001. Conclusions: There was high prevalence of esophageal motility disorders in COPD patients, LESP and UESP were significantly negatively correlated to indices of hyperinflation. There was a high prevalence of GERD in COPD patents especially elderly, severe stage of COPD, high smoking index and high body mass index (BMI). abstract_id: PUBMED:30486573 Esophageal motility and reflux characteristics in gastroesophageal reflux disease patients with or without extra-esophageal symptoms Objective: To investigate esophageal motility and reflux characteristics in gastroesophageal reflux disease (GERD) with and without extra-esophageal symptoms by high-resolution manometry and 24 h esophageal multichannel intraluminal impedance combined pH (MII-pH) monitoring. Methods: From February 2016 to June 2017, GERD patients with and without extra-esophageal symptoms were enrolled in this prospective controlled study. Esophageal HRM and 24 h MII-pH monitoring were performed. The differences in esophageal motility and reflux parameters were further analyzed between 30 GERD patients with extra-esophageal symptoms and 30 simple GERD patients. Results: The GERD symptom scores didn't show statistical difference between two groups. The GERD symptom scores didn't show statistical difference between two groups. The relaxation pressure of lower esophageal sphincter(LES), the integrated relaxation pressure, and the recovery time of upper esophageal sphincter(UES) of GERD patients with extra-esophageal symptoms were all lower than those of patients without extra-esophageal symptoms [(15±7) vs (21±11)mmHg, (8±3) vs (10±5)mmHg, (388±168) vs (492±170)ms, 1 mmHg=0.133 kPa], and the differences were statistically significant(all P&lt;0.05). The main classification of esophageal motility type of GERD patients with extra-esophageal symptoms was mild esophageal motility disorders (27%, 8/30), and the occurrence of ineffective esophageal motivation was 20% (6/30), which were similar with simple GERD patients. The proximal reflux percentages of weak acid reflux and nonacid reflux, abnormal nonacid reflux in GERD patients with extra-esophageal symptoms were significantly higher than those in simple GERD patients [84.6% (73.2%, 100.0%) vs 75.0% (60.0%, 87.65%), P=0.048; 90.0% (45.8%, 100.0%) vs 0(0, 100.0%), P=0.017; 46.7% vs 3.3%, P=0.03]. Conclusions: The pathogenesis of GERD with extra-esophageal symptoms may be different from typical GERD. Increase of proximal esophageal reflux and abnormal nonacid reflux may all participate in the mechanisms of GERD with extra-esophageal symptoms. abstract_id: PUBMED:11202308 Esophageal motility impairment--the cause or consequence of gastroesophageal reflux disease? There are at least three, well-known, different groups of motor activity disturbances of the upper part of alimentary tract which can induce the development of gastroesophageal reflux disease (GERD) and enlarge the risk of excessive exposure of the esophageal mucous membrane on gastric juice and/or biliary contents. Most important is insufficiency of the lower esophageal sphincter (LES), which causes gastroesophageal reflux at 50-60% of patients suffering from GERD. Other reasons include impairment of stomach function (increase of intra-gastric pressure, late emptying and/or hypersecretion), and impairment of esophageal clearance. The question: does motility impairment of the esophagus occur primary or secondary to the gastroesophageal reflux, is still not enough clarified. Motor activity of the esophagus before and after the antireflux operation was prospectively assessed in 57 patients. Motility of the esophagus was determined by estimation: the efficacy of LES, general motor activity of the body of the esophagus and motor activity of the body during the reflux episodes, basing on 24-h manometry. Comparison of general pre- and postoperative data revealed significantly positive influence of Nissen-Rossetti fundoplication on improvement of motor activity of the esophagus, but the results differed in relation to the height of the measurements. Moreover comparison of the data during gastroesophageal reflux episodes revealed negative changes of the manometric parameters in the upper and middle esophagus. We conclude that post-operative improvement of esophageal motility confirms the secondary dysfunction in the peristalsis, connected with pathological reflux. However, lack of the complete normalization in motor activity after operation suggests that disorder may partially occur as primary impairment of motor activity or as the secondary dysfunction, which is fixed in the course of GERD. abstract_id: PUBMED:9834362 Laparoscopic Toupet fundoplication for gastroesophageal reflux disease with poor esophageal body motility. Impaired esophageal body motility is a complication of chronic gastroesophageal reflux disease (GERD). In patients with this disease, a 360-degree fundoplication may result in severe postoperative dysphagia. Forty-six patients with GERD who had a weak lower esophageal sphincter pressure and a positive acid reflux score associated with impaired esophageal body peristalsis in the distal esophagus (amplitude &lt;30 mm Hg and &gt;10% simultaneous or interrupted waves) were selected to undergo laparoscopic Toupet fundoplication. They were compared with 16 similar patients with poor esophageal body function who underwent Nissen fundoplication. The patients who underwent Toupet fundoplication had less dysphagia than those who had the Nissen procedure (9% vs.44%; P=0.0041). Twenty-four-hour ambulatory pH monitoring and esophageal manometry were repeated in 31 Toupet patients 6 months after surgery. Percentage of time of esophageal exposure to pH &lt;4.0, DeMeester reflux score, lower esophageal pressure, intra-abdominal length, vector volume, and distal esophageal amplitude all improved significantly after surgery. Ninety-one percent of patients were free of reflux symptoms. The laparoscopic Toupet fundoplication provides an effective antireflux barrier according to manometric, pH, and symptom criteria. It avoids potential postoperative dysphagia in patients with weak esophageal peristalsis and results in improved esophageal body function 6 months after surgery. abstract_id: PUBMED:23903620 What is the real impairment on esophageal motility in patients with gastroesophageal reflux disease? Context: Impairment of esophageal motility is a common finding in patients with gastroesophageal reflux disease (GERD) as reduced lower esophageal sphincter (LES) basal pressure. A very low LES pressure might facilitate the occurrence of more gastroesophageal reflux whereas abnormal esophageal peristalsis may contribute to impaired esophageal clearance after reflux. Objective: Evaluate the esophageal motor function of the lower esophageal sphincter and esophageal body in the various forms of gastroesophageal reflux disease. Methods: The manometrics records of 268 patients, who had evaluation of the esophageal motility as part of the diagnostic gastroesophageal reflux disease were split into four groups, as follows: 33 patients who had no esophagitis; 92 patients who had erosive esophagitis; 101 patients who had short Barrett's esophagus and 42 patients who had long Barrett's esophagus. Results: The group who had long Barrett's esophagus showed smaller mean LES pressure and higher percentage of marked LES hypotonia; in the distal segment of the esophageal body the this group showed higher percentage of marked hypocontractility of the distal segment (&lt;30 mm Hg); this same group showed higher percentage of esophageal motility disorders. Conclusions: The most intense esophageal motility disorders and lower pressure of lower esophageal sphincter were noted in the group with long Barrett's esophagus. Those with reflux esophagitis and short Barrett's esophagus had esophageal motility impairment, intermediate among patients with esophagitis and long Barrett's esophagus. Patients with typical symptoms of gastroesophageal reflux but without esophagitis by endoscopy study showed no impairment of esophageal motility. Answer: The relationship between motility abnormalities in the esophageal body and gastroesophageal reflux disease (GERD) has been a subject of investigation, and the evidence suggests a complex interplay. One study found an association between the presence of abnormal reflux and motor disorder in the esophageal body, with ineffective esophageal motility (IEM) being the disorder most strongly related to reflux. However, hypercontractile disorders were not clearly attributed to reflux, leading to the conclusion that esophageal manometric abnormalities should be considered cautiously before considering a motor disorder as a consequence of abnormal reflux (PUBMED:15718863). Further research has indicated that esophageal motility abnormalities are increasingly prevalent with increasing severity of reflux disease, from non-erosive reflux disease (NERD) to erosive reflux disease and Barrett's esophagus. Novel esophageal testing techniques, such as high-resolution manometry with or without concurrent intraluminal impedance monitoring, have better defined transient lower esophageal sphincter (LES) relaxations, hypotensive LES, ineffective esophageal peristalsis, and bolus transit abnormalities as being strongly implicated in the development of GERD (PUBMED:24868489). Another study showed that GERD patients with esophageal motility abnormalities, characterized by low LES pressure (LESP) and IEM in the distal esophagus, experienced exacerbated esophageal acid exposure. The coexistence of low LESP and IEM was particularly associated with esophageal mucosal injury in patients with reflux esophagitis (RE) (PUBMED:26675026). Additionally, it has been observed that IEM may lead to increased reflux, resulting in esophageal mucosal damage, and that there may be a consistency between abnormal esophageal motility and gastric motility (PUBMED:36908189). In conclusion, while there is evidence to suggest that motility abnormalities in the esophageal body are related to GERD, the relationship is not straightforward. Some motility disorders, particularly IEM, are strongly associated with reflux, while others may not be directly caused by it. The severity of reflux disease appears to correlate with the prevalence of esophageal motility abnormalities, and the presence of such abnormalities can exacerbate the effects of acid exposure on the esophageal mucosa.
Instruction: Is the general practice laboratory medically and economically valuable? Abstracts: abstract_id: PUBMED:11416974 Is the general practice laboratory medically and economically valuable? Unlabelled: Between 1998 and 2000 we evaluated the office-based laboratory activities of general practitioners. The aim was to clarify whether there is a medical and economic benefit of these activities. Methods: The study was performed in four parts: I. A cross-sectional study with a random sample of general practitioners of the German and French speaking part of Switzerland. II. A prospective evaluation of the office-based laboratory activities of 56 GP's. III. A cross-sectional study of the preference of 837 patients in 52 of GP's offices. IV. A consensus panel with nine experts using the RAND method. Results: 1999 there were 55.4 Million laboratory tests ordered by GP's (excl. pediaters) of which 78.9% were analysed in the office-based laboratory. The probability of a second visit is reduced by 60%, if all of the tests could be performed in the office-based laboratory. 85% of the patients appreciate the possibility to discuss the test results within the same consulation. In the consensus panel, 43 tests were proposed of which only bicarbonate, chloride and urea were assessed as not useful for the office based laboratory. Conclusion: The office-based laboratory is a well embodied institution in Switzerland. It's predominant advantage is the possibility of point of care testing. It allows a quick management of the patient and avoids unnecessary second consultations. abstract_id: PUBMED:26488038 Managing medically unexplained illness in general practice. Background: Patients with medically unexplained symptoms commonly present to general practice and experience significant disability. Many have a history of trauma, which complicates the therapeutic relationship between doctor and patient. Because diagnosis is an expected outcome of a medical interaction, doctors and patients can feel frustrated and lost without one. Objective: This article provides practical management strategies that general practitioners (GPs) can use when patients present with medically unexplained symptoms. Discussion: Three types of common presentations are discussed. Enigmatic illnesses occur when the doctor and patient believe that a bio-medical disease is likely, but a diagnosis is not forthcoming. Contested illnesses occur when a patient is committed to a diagnosis the doctor does not accept. Chaotic illnesses occur when symptoms are over-determined; there are many possible diagnoses, but none fully explain the complex web of distress the patient experiences. Common strategies for managing medically unexplained symptoms are discussed, and specific approaches to each presentation are outlined. abstract_id: PUBMED:31392313 Explanations for medically unexplained symptoms: a qualitative study on GPs in daily practice consultations. Background: General practice is the centre of care for patients with medically unexplained symptoms (MUS). Providing explanations for MUS, i.e. making sense of symptoms, is considered to be an important part of care for MUS patients. However, little is known how general practitioners (GPs) do this in daily practice. Objective: This study aimed to explore how GPs explain MUS to their patients during daily general practice consultations. Methods: A thematic content analysis was performed of how GPs explained MUS to their patients based on 39 general practice consultations involving patients with MUS. Results: GP provided explanations in nearly all consultations with MUS patients. Seven categories of explanation components emerged from the data: defining symptoms, stating causality, mentioning contributing factors, describing mechanisms, excluding explanations, discussing the severity of symptoms and normalizing symptoms. No pattern of how GPs constructed explanations with the various categories was observed. In general, explanations were communicated as a possibility and in a patient-specific way; however, they were not very detailed. Conclusion: Although explanations for MUS are provided in most MUS consultations, there seems room for improving the explanations given in these consultations. Further studies on the effectiveness of explanations and on the interaction between patients and GP in constructing these explanations are required in order to make MUS explanations more suitable in daily primary care practice. abstract_id: PUBMED:21219525 Is it economically viable to employ the nurse practitioner in general practice? Aims: This article provides insight into the potential economic viability of nurse practitioner employment in Dutch general practices. Background: General practitioners face the challenging task of finding the most efficient and effective mix of professionals in general practice to accommodate future care demands within scarce health care budgets. To enable informed decision-making about skill mix issues, economic information is needed. Design: Discursive paper. Method: A descriptive and explorative design was chosen to study the economic viability of nurse practitioner employment in general practice. The conditions under which the nurse practitioner is able to earn back his/her own cost of employment were identified. Preferences and expectations of general practitioners and health insurers about nurse practitioner reimbursement were made transparent. Results: Although general practitioners and health insurers acknowledge the importance of the nurse practitioner in accommodating primary care demands, they have polarised views about reimbursement. The employment of nurse practitioners is seldom economically viable in current practices. It requires a reallocation of (80% of) the general practitioner's freed up time towards practice growth (12% number of patients). Conclusion: The economic viability of the nurse practitioner has proven difficult to achieve in every day health care practice. This study provided insight into the complex interaction of the (cost) parameters that result in economic viability and feeds a further discussion about the content of the nurse practitioner role in general practice based on optimal quality of care vs. efficiency. Relevance To Clinical Practice: Effective and efficient health care can only be provided if the actual care needs of a population provide the basis for deciding which mix of professionals is best equipped to deal with the changing and increasing demand of care. A macro-level intervention is needed to help a broad-scale introduction of the nurse practitioner in general practice. abstract_id: PUBMED:16632487 Persistent presentation of medically unexplained symptoms in general practice. Objectives: To estimate the prevalence of persistent presentation of medically unexplained physical symptoms (MUPS) in general practice. To assess socio-demographic characteristics, health status and use of health services of patients who frequently present MUPS, compared with reference groups. Design: One-year, nationwide, representative survey of morbidity in general practice in The Netherlands, including 400,000 enlisted patients in 104 general practices. Results: Of all patients (age: 18 years and older), 2.45% who visited their GP at least once a year, presented at least 4 times in 1 year with symptoms which are commonly considered medically unexplained without getting a medical diagnosis during that period, that might explain the symptoms. These patients are significantly older, more often female, less educated, more frequently unemployed and more frequently from a non-Western origin than 'average' patients or patients with a medical diagnosis. Conclusion: Although 25-50% of all reasons for visit to a GP concern symptoms that are not medically explained, the frequent presentation of such symptoms is much more rare. Frequent attendance because of MUPS is most common among elderly women with a lower socio-economic status. abstract_id: PUBMED:21740334 Quality assurance of laboratory work and clinical use of laboratory tests in general practice in norway: a survey. Objective: Virtually all the general practices in Norway participate in the Norwegian Quality Improvement of Laboratory Services in Primary Care, NOKLUS. In order to assess and develop NOKLUS's services, it was decided to carry out an investigation in the largest participating group, general practices. Design: In autumn 2008 a questionnaire was sent to all Norwegian general practices asking for feedback on different aspects of NOKLUS's main services: contact with medical laboratory technologists, sending of control materials, use and maintenance of practice-specific laboratory binders, courses, and testing of laboratory equipment. In addition, attitudes were elicited towards possible new services directed at assessing other technical equipment and clinical use of tests. Results: Responses were received from 1290 of 1552 practices (83%). The great majority thought that the frequency of sending out control material should continue as at present, and they were pleased with the feedback reports and follow-up by the laboratory technologists in the counties. Even after many years of practical experience, there is still a need to update laboratory knowledge through visits to practices, courses, and written information. Practices also wanted quality assurance of blood pressure meters and spirometers, and many doctors wanted feedback on their use of laboratory tests. Conclusion: Services regarding quality assurance of point-of-care tests, guidance, and courses should be continued. Quality assurance of other technical equipment and of the doctor's clinical use of laboratory tests should be established as part of comprehensive quality assurance. abstract_id: PUBMED:2283933 Laboratory testing in, or for, general practice. Recent technological advances make it possible for general practitioners to do many laboratory tests in the doctor's office. This has advantages and limitations, and a new balance must be found between testing within and outside laboratories. General practitioners will need to decide which tests to do within the practice, and laboratory staff have opportunities to become more involved in the needs and problems of general practice, so that they are able to advise on when and how tests are best performed. abstract_id: PUBMED:12747264 Laboratory examinations performed in rural practice Graduates from Jichi Medical School are obligated to work at rural clinics or hospitals, where most of them are the only medical doctor in the house. To understand how these graduates actually use laboratory examinations, what examinations they found most important in their practice, and when they were confident of their laboratory techniques, we designed a questionnaire to address these questions. Many respondents reported that their institutions had electrocardiographs, abdominal and/or cardiac ultrasonographs, urinalysis test paper, and portable blood glucose meters, and more than half of them reported having used these instruments without assistance in emergency situations. Moreover, a majority of the respondents said that they considered it important that a physician is able to use these instruments without the help of other staff members. Proficiency in many laboratory techniques was obtained and physicians were confident during their first postgraduate clinical practice. These responses clearly show the importance and usefulness of covering examination techniques and the principles of laboratory medicine in medical education and the first postgraduate clinical practice. abstract_id: PUBMED:25868476 Best vaccination practice and medically attended injection site events following deltoid intramuscular injection. Analysis of medically attended injection site events data provides a vehicle to appreciate the inadequacies of vaccination practice for deltoid intramuscular injection and to develop best practice procedures. These data can be divided into 3 groups; nerve palsies, musculoskeletal injuries and cutaneous reactions and reflect inappropriate site of injection, needle over or under penetration, local sepsis and vascular complications. The aim of this review is to formulate best vaccination practice procedures for deltoid intramuscular injection of vaccines through the collation and analysis of medically attended injection site events. abstract_id: PUBMED:7973197 Survey of office laboratory tests in general practice. Objective: To describe the types of laboratory test done in general practice, and to investigate whether practice characteristics were associated with the implementation of certain tests or instruments. Setting: A questionnaire was mailed to an 8% random sample of Norwegian GPs. Results: 85% responded; and we obtained data from 175 office laboratories. Medical secretaries did most of the analytical work. The "basic" repertoire consisted of haemoglobin, ESR, glucose; urine test strips, microscopy, culture, and HCG; and faecal occult blood. Group practices had a larger repertoire than had solo practices. Overall, 34% of the practices had urine strips with fields for leucocytes and nitrite, 26% had a cell counter, and 17% did clinical chemistry. A large number of different test methods and instruments were in use, but in most cases one or two brands had a market share of more than 50%. Logistic regression showed that employing a medical laboratory technician was associated with a large analytical repertoire, and the use of cell counters and clinical chemistry analyzers. The GP's form of payment (fixed or fee-for-service) did not influence the repertoire. Conclusions: The diversity of test methods and instruments makes quality assurance difficult, and recommendations are overdue. Answer: Yes, the general practice laboratory is considered to be medically and economically valuable. The office-based laboratory activities of general practitioners in Switzerland have been found to have a predominant advantage in the possibility of point-of-care testing, which allows for quick management of the patient and avoids unnecessary second consultations. The study found that the probability of a second visit is reduced by 60% if all of the tests could be performed in the office-based laboratory, and 85% of patients appreciate the possibility to discuss the test results within the same consultation. Only a few tests were assessed as not useful for the office-based laboratory, indicating that the majority of tests are considered valuable (PUBMED:11416974). Moreover, the general practice laboratory contributes to the quality assurance of laboratory work and clinical use of laboratory tests. Services regarding quality assurance of point-of-care tests, guidance, and courses are important and should be continued, and there is a desire for quality assurance of other technical equipment and of the doctor's clinical use of laboratory tests as part of comprehensive quality assurance (PUBMED:21740334). However, the economic viability of employing nurse practitioners in general practice, which could be related to the laboratory services they provide, has been found to be challenging under current practices. It requires a reallocation of the general practitioner's time towards practice growth to be economically viable (PUBMED:21219525). In summary, general practice laboratories provide medical value by enabling quick patient management and reducing the need for multiple consultations, and they are integral to the quality assurance of laboratory work. However, the economic viability of expanding certain roles within general practice, such as nurse practitioners, may require further adjustments to practice structures and reimbursement models.
Instruction: Laparoscopic nephrectomy: new standard of care? Abstracts: abstract_id: PUBMED:23436090 Laparoscopic left hemihepatectomy a consideration for acceptance as standard of care. Introduction: Since the inception of laparoscopic liver surgery, the left-lateral sectionectomy has become the standard of care for resection of lesions located in segments II and III. However, few centers employee laparoscopic left hemihepatectomy on a routine basis. This study evaluated the safety and efficacy of the laparoscopic left hemihepatectomy as a standard of care. Methods: An international database of 1,620 laparoscopic liver resections was established and outcomes analyzed comparing the laparoscopic left lateral sectionectomy (L lat) to laparoscopic left hemihepatectomy (LH). All data are presented as mean ± standard deviation. Results: A total of 222 laparoscopic L lat and 82 LH were identified. The L lat group compared with LH group had a higher incidence of cirrhosis (27 vs. 21 %; p = 0.003) and cancer (48 vs. 35 %; p = 0.043). Tumors were larger in the LH group (7.09 ± 4.2 vs. 4.89 ± 3.1 cm; p = 0.001). Operating time for LH was longer than L lat (3.9 ± 2.3 vs. 2.9 ± 1.4 h; p &lt; 0.001). Operative blood loss was higher in LH (306 vs. 198 cc; p = 0.003). Patient morbidity (20 vs. 18 %; p = 0.765) was equivalent with a longer length of stay (7.1 ± 5.1 vs. 2.5 ± 2.3 days; p &lt; 0.001) for LH. Patient mortality and tumor recurrence were equivalent. Conclusions: Laparoscopic left hemihepatectomy is a more technically challenging and often time-consuming procedure than a left-lateral sectionectomy. This international multi-institutional confirmed that intraoperative blood loss, complications, and conversions are more than acceptable for laparoscopic left hemihepatectomy in expert hands. Postoperative morbidity and mortality rates together with adequate surgical margins and long-term recurrence are not compromised by the laparoscopic approach. abstract_id: PUBMED:12704892 Laparoscopic cholecystectomy--surgical standard in cholelithiasis Laparoscopic surgery showed a dramatic development in the last years of the 20th century. From the beginning laparoscopic cholecystectomy (LCCE) has been the pacemaker of this development. Today laparoscopic cholecystectomy is the first choice for treatment of cholecystolithiasis in nearly all surgical clinics. Therefore laparoscopic cholecystectomy is the most common part of minimal invasive technique. LCCE is the golden standard in therapy of gallstones, more than 90% of cholecystectomies in specialized clinics are done laparoscopically. It is an established, evidence based operation today. Open cholecystectomy is left for special indications only. A problem of LCCE is the occult carcinoma of the gallbladder. In histological proven carcinoma of the gallbladder LCCE is the adequate operation only for Tis and T1 carcinoma. In T2 and T3 carcinoma a radical oncologic resection with lymph node dissection should be performed. Due to the poor prognosis T4 tumors should be left with laparoscopic biopsy only. abstract_id: PUBMED:28254504 Robotic-Assisted Laparoscopic Trachelectomy: A Standard Technique. Study Objective: To describe a standard reproducible technique for simple robotic- assisted laparoscopic trachelectomy. Design: Step-by-step demonstration of the technique using videos and pictures (Canadian Task Force classification level 3). Setting: The incidence of trachelectomy after supracervical hysterectomy has been increasing, given the popularity of supracervical hysterectomy. The most common indication for trachelectomy is symptomatic cyclic bleeding. Trachelectomy can be performed vaginally as well as laparoscopically with or without robotic assistance. This video demonstrates a standard reproducible technique for robotic-assisted trachelectomy in 2 women who underwent previous supracervical hysterectomy with a retained cervix. Interventions: Robotic-assisted laparoscopic trachelectomy using a standard technique is demonstrated in 2 different examples. The steps include opening the vesicocervical peritoneum, creating a bladder flap and pushing the bladder down beyond the vaginal cuff, opening the remnant of the broad ligament and delineating the course of cervical branch of the uterine artery and ureter bilaterally, inserting a uterine manipulator with a colpotomizer under vision, performing desiccation of the cervical branches of the uterine artery, and performing colpotomy and closure of the vaginal cuff. Conclusion: Robotic-assisted laparoscopic trachelectomy is a safe and simple procedure that should be part of all general gynecologists' armamentarium. The described technique is reproducible and efficient and can be adopted for all cases requiring simple trachelectomy. abstract_id: PUBMED:24701492 Pain management after laparoscopic cholecystectomy-a randomized prospective trial of low pressure and standard pressure pneumoperitoneum. Background: Abdominal pain and shoulder tip pain after laparoscopic cholecystectomy are distressing for the patient. Various causes of this pain are peritoneal stretching and diaphragmatic irritation by high intra-abdominal pressure caused by pneumoperitoneum . We designed a study to compare the post operative pain after laparoscopic cholecystectomy at low pressure (7-8 mm of Hg) and standard pressure technique (12-14 mm of Hg). Aim : To compare the effect of low pressure and standard pressure pneumoperitoneum in post laparoscopic cholecystectomy pain . Further to study the safety of low pressure pneumoperitoneum in laparoscopic cholecystectomy. Settings And Design: A prospective randomised double blind study. Materials And Methods: A prospective randomised double blind study was done in 100 ASA grade I &amp; II patients. They were divided into two groups -50 each. Group A patients underwent laparoscopic cholecystectomy with low pressure pneumoperitoneum (7-8 mm Hg) while group B underwent laparoscopic cholecystectomy with standard pressure pneumoperitoneum (12-13 mm Hg). Both the groups were compared for pain intensity, analgesic requirement and complications. Statistical Analysis: Demographic data and intraoperative complications were analysed using chi-square test. Frequency of pain, intensity of pain and analgesics consumption was compared by applying ANOVA test. Results: Post-operative pain score was significantly less in low pressure group as compared to standard pressure group. Number of patients requiring rescue analgesic doses was more in standard pressure group . This was statistically significant. Also total analgesic consumption was more in standard pressure group. There was no difference in intraoperative complications. Conclusion: This study demonstrates the use of simple expedient of reducing the pressure of pneumoperitoneum to 8 mm results in reduction in both intensity and frequency of post-operative pain and hence early recovery and better outcome.This study also shows that low pressure technique is safe with comparable rate of intraoperative complications. abstract_id: PUBMED:38366263 A Prospective Multicenter Standard of Care Study of Outpatient Laparoscopic Sleeve Gastrectomy. A global shift is occurring as hospital procedures move to ambulatory surgical settings. Surgeons have performed outpatient sleeve gastrectomy (SG) in bariatric surgery since 2010. However, prospective trials are needed to ensure its safety before widespread adoption. Purpose: The study aimed to present a comprehensive report on the prospective data collection of 30-day outcomes of outpatient primary laparoscopic SG (LSG). This trial seeks to assess whether outpatient LSG is non-inferior to hospital-based surgery in selected patients who meet the outpatient surgery criteria set by the American Society for Metabolic and Bariatric Surgery. Materials And Methods: This study is funded by the Society of American Gastrointestinal and Endoscopic Surgeons and has been approved by the Advarra Institutional Review Board (Pro00055990). Cognizant of the necessity for a prospective approach, data collection commenced after patients underwent primary LSG procedures, spanning from August 2021 to September 2022, at six medical centers across the USA. Data centralization was facilitated through ArborMetrix. Each center has its own enhanced recovery protocols, and no attempt was made to standardize the protocols. Results: The analysis included 365 patients with a mean preoperative BMI of 43.7 ± 5.7 kg/m2. Rates for 30-day complications, reoperations, readmissions, emergency department visits, and urgent care visits were low: 1.6%, .5%, .2%, .2%, and 0%, respectively. Two patients (0.5%) experienced grade IIIb complications. There were no mortalities or leaks reported. Conclusion: The prospective cohort study suggests that same-day discharge following LSG seems safe in highly selected patients at experienced US centers. abstract_id: PUBMED:27473800 Laparoscopic Sleeve Gastrectomy: Surgical Technique and Perioperative Care. Laparoscopic sleeve gastrectomy is a bariatric surgical procedure created for long-term weight loss and improvement of metabolic syndrome abnormalities in morbidly obese patients. It has been shown to demonstrate durable results compared with other bariatric procedures. This article aims to describe the surgical technique and perioperative care for patients undergoing laparoscopic sleeve gastrectomy based on current scientific literature and best standard practices. abstract_id: PUBMED:27957025 Day Care Laparoscopic Cholecystectomy: Next Standard of Care for Gall Stone Disease. Background: To access the feasibility, safety and success of day care laparoscopic cholecystectomy in a tertiary center in India. Methods: This is a retrospective analysis of prospectively collected data between 2004 and 2009 from a tertiary center in north India. All patients of symptomatic gallstone diseases having age less than 70 years, American Society of Anesthesiologists (ASA) grade I and grade II, living within 20 Kilometers of the hospital, availability of a responsible adult care taker at home, access to a telephone and a means of transportation to hospital if needed, underwent laparoscopic cholecystectomy under the care of the two participating surgeons, were considered for day care laparoscopic cholecystectomy. Clinical and operative data were recorded prospectively. All patients were discharged 6 to 8 hours after surgery with the advice to contact the surgical team over phone whenever necessary or on the day after discharge. Results: A total of 602 laparoscopic cholecystectomies were performed over a period of 6years, among them 309 (51.32%) were operated on day care basis. Nine patients in day care procedure group had conversion to open procedure (5 due to distorted anatomy of calot's triangle, 2 due to common bile duct stones, 1 due to bile duct injury and 1 due to bleeding from cystic artery stump). One patient had myocardial infarction and 3 had nausea and vomiting which failed to resolve by intravenous ondensteron and all these (13) patients (4.20%) needed unplanned admission to the hospital. Two hundred and ninety-six patients (95.79%) were discharged on same day. Conclusions: In conclusion day care laparoscopic cholecystectomy is feasible, safe and equally effective in selected patients in Indian setup. abstract_id: PUBMED:31988859 Comparative Analysis of Hemodynamic Changes and Shoulder Tip Pain Under Standard Pressure Versus Low-pressure Pneumoperitoneum in Laparoscopic Cholecystectomy. Background: Laparoscopic cholecystectomy is the gold standard procedure for cholelithiasis. Pneumoperitoneum is created using carbon dioxide (CO2) which is usually maintained at a range of 12-14 mm Hg. An emerging trend has been the use of low-pressure pneumoperitoneum in the range of 7-10 mm Hg in an attempt to lower the impact of pneumoperitoneum on the human physiology while providing adequate working space. Our study proposes to compare the effects of low-pressure pneumoperitoneum with the use of standard pressure of pneumoperitoneum. Aims And Objective: To compare and analyze various factors like blood pressure, heart rate, end-tidal CO2 and postoperative shoulder tip pain in cases undergoing laparoscopic cholecystectomy using standard pressure versus low pressure. Materials And Methods: This is a prospective randomized study carried out at Santosh Medical College and Hospitals, Ghaziabad from September 2017 to December 2018. This study included 60 patients of cholelithiasis which were divided into two groups of 30 patients each. Group I was offered laparoscopic cholecystectomy under standard pressure pneumoperitoneum and group II underwent laparoscopic cholecystectomy using low-pressure pneumoperitoneum. Patients in each group were evaluated for various intraoperative physiological changes and postoperative shoulder tip pain. Observations And Results: Cholelithiasis is commonly seen in middle-aged females. There is no significant difference in duration of surgery between the two groups. However, various factors like systolic blood pressure, heart rate, end-tidal CO2 were significantly better in the low-pressure group. Postoperative shoulder tip pain (measured by VAS scoring system) was significantly less in the low-pressure group during the first 24 hours. Conclusion: Laparoscopic cholecystectomy under low-pressure pneumoperitoneum causes minimal physiological changes and less postoperative shoulder tip pain. How To Cite This Article: Goel A, Gupta S, et al. Comparative Analysis of Hemodynamic Changes and Shoulder Tip Pain Under Standard Pressure Versus Low-pressure Pneumoperitoneum in Laparoscopic Cholecystectomy. Euroasian J Hepatogastroenterol 2019;9(1): 5-8. abstract_id: PUBMED:24783176 Single incision laparoscopic cholecystectomy using Konyang Standard Method. Purpose: Single incision laparoscopic cholecystectomy (SILC) is a minimally invasive surgery that is growing rapidly among surgical procedures. However, there is no standard method for SILC. Therefore, we evaluated the adequacy and feasibility of SILC using Konyang Standard Method. Methods: We retrospectively reviewed our series of 307 SILCs performed between April 2010 and August 2012. Initially we excluded the patients who were more than 70 years old, had cardiologic or pulmonologic problems and complications of acute cholecystitis. After 50 cases, we did not apply the exclusion criteria. We performed SILC by Konyang Standard Method using three-trocar single port (hand-made) and long articulated instruments. Results: Three hundred and seven patients underwent SILC. Male were 131 patients and female were 176 patients. Mean age was 51.6 ± 13.7 years old and mean body mass index was 24.8 ± 3.6 kg/m(2). Ninety-three patients had histories of previous abdominal operation. Patient's pathologies included: chronic cholecystitis (247 cases), acute cholecystitis (30 cases), gall bladder (GB) polyps (24 cases), and GB empyema (6 cases). Mean operating time was 53.1 ± 25.4 minutes and mean hospital stay was 2.9 ± 3.4 days. There were four cases of 3-4 ports conversion due to cystic artery bleeding. Complications occurred in 5 cases including wound infection (2 cases), bile duct injury (1 case), duodenal perforation (1 case), and umbilical hernia (1 case). Conclusion: SILC using Konyang Standard Method is safe and feasible. Therefore, our standard procedure can be applied to almost all benign GB disease. abstract_id: PUBMED:16568257 Therapeutic splitting as standard treatment for cholelithiasis At the moment, therapeutic splitting is still regarded by the vast majority of surgeons as the gold standard for stones in the common bile duct. Endoscopic clearance of the duct certainly is much less invasive than open exploration. However, this does not apply when compared with laparoscopic stone removal. Both are equivalent in respect to stone clearance rates, but the laparoscopic techniques protect patients from the long-term sequelae of endoscopic papillotomy. This can be important particularly for younger patients. Laparoscopic bile duct exploration is cost-effective and safe. Special experience in laparoscopic surgical techniques, however, is mandatory. Thus, surgeons should intensify their training in laparoscopic bile duct exploration in order to increase the acceptance of these techniques. Answer: The provided abstracts do not directly address the status of laparoscopic nephrectomy as a standard of care. However, they do discuss the acceptance and standardization of various laparoscopic procedures in different surgical fields. For instance, laparoscopic left hemihepatectomy has been evaluated for safety and efficacy and is considered acceptable in expert hands, with postoperative morbidity and mortality rates not compromised by the laparoscopic approach (PUBMED:23436090). Similarly, laparoscopic cholecystectomy is established as the gold standard for the treatment of cholecystolithiasis, with over 90% of cholecystectomies in specialized clinics being performed laparoscopically (PUBMED:12704892). Robotic-assisted laparoscopic trachelectomy is also described as a safe and simple procedure that should be part of all general gynecologists' armamentarium (PUBMED:28254504). Furthermore, studies have been conducted to improve the outcomes of laparoscopic procedures, such as using low-pressure pneumoperitoneum during laparoscopic cholecystectomy to reduce postoperative pain (PUBMED:24701492, PUBMED:31988859). Outpatient laparoscopic sleeve gastrectomy has been performed safely in selected patients, suggesting a shift towards ambulatory surgical settings for certain procedures (PUBMED:38366263). Day care laparoscopic cholecystectomy has also been shown to be feasible and safe in selected patients (PUBMED:27957025). While these abstracts highlight the trend towards minimally invasive laparoscopic techniques becoming standard care in various surgical procedures, they do not provide specific information on laparoscopic nephrectomy. To determine if laparoscopic nephrectomy is the new standard of care, one would need to review literature specifically focused on the outcomes, safety, efficacy, and acceptance of laparoscopic nephrectomy in the urological surgical community.
Instruction: Does assisted hatching improve implantation rates after in vitro fertilization or intracytoplasmic sperm injection in all patients? Abstracts: abstract_id: PUBMED:8825162 Does assisted hatching improve implantation rates after in vitro fertilization or intracytoplasmic sperm injection in all patients? A prospective randomized study. Purpose: Preliminary data from some research centers indicate that assisted hatching might be of value to increase embryo implantation rate in the human, at least in selected cases. It is not clear, however, whether this technique would be of benefit for all patients undergoing an embryo transfer. We therefore performed a prospective randomized study to evaluate the effect of assisted hatching on the implantation rate in our in vitro fertilization (IVF)/intracytoplasmic sperm injection (ICSI) program. Methods: In total, 120 couples undergoing an embryo transfer were randomized between two groups: in one group no assisted hatching was performed (AH-), whereas in the other group the embryos selected for transfer were subjected to partial zona dissection (PZD) immediately prior to the transfer (AH+). Using a computer-generated minimization procedure, patients were allocated to one of the two groups according to four pre-selected criteria: the number of embryos transferred, the cumulative score of transferred embryos, the age of the patient, and the use of ICSI. Results: Pregnancy and implantation rates in the AH+ and AH- groups were, respectively, 42.1 versus 38.1% and 17.9 versus 17.1%. Conclusions: From our data we conclude that assisted hatching through partial zona dissection prior to embryo transfer does not improve pregnancy and embryo implantation rates in unselected patients undergoing IVF or ICSI. abstract_id: PUBMED:8981127 Enhancement of outcome from intracytoplasmic sperm injection: does co-culture or assisted hatching improve implantation rates? In two separate prospectively randomized trials, intracytoplasmic sperm injection (ICSI) cycles were studied in a controlled manner to monitor the effects of either bovine oviductal epithelial cell co-culture (n = 119) or assisted hatching by zona drilling (n = 100). In the first study, immediately following ICSI, all eggs were placed directly either onto partial monolayers of bovine oviductal cells or into regular culture medium. Although the embryo developmental rate was apparently compromised in part by the presence of the co-culture cells, ultimately there were no significant differences in either the viable pregnancy rate (31.6% co-culture versus 29.0% control) or the embryonic implantation rate (11.4% co-culture versus 13.6% control). Assisted hatching also had no significant impact on ICSI cycle outcome in terms of either the viable pregnancy rate (30.0% assisted hatching versus 32.0% control) or the embryonic implantation rate (8.5% assisted hatching versus 13.5% control). However, in female patients aged &gt; or = 35 years, assisted hatching appeared to convey a marginally significant benefit in terms of both the viable pregnancy rate (35.5% assisted hatching versus 11.1% control) and the embryonic implantation rate (10.3% assisted hatching versus 3.1% control). It seems that the overall improvement of ICSI cycle outcome cannot be achieved by the general application of either co-culture or assisted hatching. Nevertheless, it is possible that there remain specific patient groups that might benefit from selected use of either of these modalities. abstract_id: PUBMED:15258534 Assisted hatching. Assisted hatching entails the opening or thinning of the zona pellucida before embryo transfer in order to improve the results of in vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI). The technique can be performed mechanically, chemically or with a laser beam. A piezoelectric method has also been described. Meta-analyses of randomised trials have shown that assisted hatching increases the clinical pregnancy, implantation and on-going pregnancy rates in patients with poor prognosis for IVF and ICSI, particularly those with repeated implantation failure. The technique is not without risks, and has been associated with an increased incidence of monozygotic twinning. Nevertheless, it remains an invaluable tool in assisted reproductive technology. abstract_id: PUBMED:16895643 Effects of assisted hatching method and age on implantation rates of IVF and ICSI. The objective of this study was to investigate whether a change in assisted hatching (AH) technique from total to partial penetration of the zona pellucida improved the outcome of IVF and intracytoplasmic sperm injection cycles where AH was indicated. This was an observational study conducted from the beginning of January 2000 to the end of April 2005. Total AH was performed in 312 cycles, while partial AH was performed in 592 cycles. In women of all ages, implantation, clinical pregnancy and live birth rates were higher in the partial AH group than in the total AH group (12.6 versus 7.2%, P = 0.0001; 22.3 versus 15.7%, P = 0.02; 18.2 versus 12.5%, P = 0.03 respectively). The benefit of partial AH was most marked in women under 38 years old (i.e. the recurrent implantation failure group). The authors conclude that partial AH is associated with higher implantation and pregnancy rates than total AH, especially in women under 38 years old who suffer from recurrent implantation failure. abstract_id: PUBMED:15823228 Implantation failures: success of assisted hatching with quarter-laser zona thinning. Implantation failure after IVF is one of the factors associated with a reduced chance of pregnancy for some patients. Assisted hatching methodologies are designed to facilitate the embryo's escape from the zona pellucida, and this strategy has been suggested as a means of improving pregnancy rates in patients with previous implantation failure. The aim of this prospective and randomized study was to evaluate the efficacy of quarter-laser zona thinning assisted hatching (qLZT-AH) in improving the implantation of embryos in patients with previous implantation failure. A total of 150 patients with a history of previous implantation failure were treated with intracytoplasmic sperm injection, and allocated into two groups: group 1, only one previous implantation failure, and group 2, repeated implantation failures. The patients in each group were randomized at the time of embryo transfer into a control group (no qLZT-AH) or experimental group where qLZT-AH was performed. For patients with repeated implantation failures, the implantation rate in those who received laser-thinned embryos was significantly higher (P = 0.02) than in those whose embryos were not laser-thinned (10.9 and 2.6% respectively). However, this difference was not observed in patients who presented with only one previous implantation failure. The data demonstrate that qLZT-AH is an effective strategy for improving the implantation of embryos in patients with repeated implantation failures. abstract_id: PUBMED:33730422 Assisted hatching on assisted conception (in vitro fertilisation (IVF) and intracytoplasmic sperm injection (ICSI)). Background: Failure of implantation and conception may result from inability of the blastocyst to escape from its outer coat, which is known as the zona pellucida. Artificial disruption of this coat is known as assisted hatching and has been proposed as a method for improving the success of assisted conception by facilitating embryo implantation. Objectives: To determine effects of assisted hatching (AH) of embryos derived from assisted conception on live birth and multiple pregnancy rates. SEARCH METHODS: We searched the Cochrane Gynaecology and Fertility Group Specialised Register (until May 2020), the Cochrane Central Register of Controlled Trials (CENTRAL; until May 2020), in the Cochrane Library; MEDLINE (1966 to May 2020); and Embase (1980 to May 2020). We also searched trial registers for ongoing and registered trials (http://www.clinicaltrials.gov - a service of the US National Institutes of Health; http://www.who.int/trialsearch/Default.aspx - The World Health Organization International Trials Registry Platform search portal) (May 2020). Selection Criteria: Two review authors identified and independently screened trials. We included randomised controlled trials (RCTs) of AH (mechanical, chemical, or laser disruption of the zona pellucida before embryo replacement) versus no AH that reported live birth or clinical pregnancy data. Data Collection And Analysis: We used standard methodological procedures recommended by Cochrane. Two review authors independently performed quality assessments and data extraction. Main Results: We included 39 RCTs (7249 women). All reported clinical pregnancy data, including 2486 clinical pregnancies. Only 14 studies reported live birth data, with 834 live birth events. The quality of evidence ranged from very low to low. The main limitations were serious risk of bias associated with poor reporting of study methods, inconsistency, imprecision, and publication bias. Five trials are currently ongoing. We are uncertain whether assisted hatching improved live birth rates compared to no assisted hatching (odds ratio (OR) 1.09, 95% confidence interval (CI) 0.92 to 1.29; 14 RCTs, N = 2849; I² = 20%; low-quality evidence). This analysis suggests that if the live birth rate in women not using assisted hatching is about 28%, the rate in those using assisted hatching will be between 27% and 34%. Analysis of multiple pregnancy rates per woman showed that in women who were randomised to AH compared with women randomised to no AH, there may have been a slight increase in multiple pregnancy rates (OR 1.38, 95% CI 1.13 to 1.68; 18 RCTs, N = 4308; I² = 48%; low-quality evidence). This suggests that if the multiple pregnancy rate in women not using assisted hatching is about 9%, the rate in those using assisted hatching will be between 10% and 14%. When all of the included studies (39) are pooled, the clinical pregnancy rate in women who underwent AH may improve slightly in comparison to no AH (OR 1.20, 95% CI 1.09 to 1.33; 39 RCTs, N = 7249; I² = 55%; low-quality evidence). However, when a random-effects model is used due to high heterogeneity, there may be little to no difference in clinical pregnancy rate (P = 0.04). All 14 RCTs that reported live birth rates also reported clinical pregnancy rates, and analysis of these studies illustrates that AH may make little to no difference in clinical pregnancy rates when compared to no AH (OR 1.07, 95% CI 0.92 to 1.25; 14 RCTs, N = 2848; I² = 45%). We are uncertain about whether AH affects miscarriage rates due to the quality of the evidence (OR 1.13, 95% CI 0.82 to 1.56; 17 RCTs, N = 2810; I² = 0%; very low-quality evidence). Authors' Conclusions: This update suggests that we are uncertain of the effects of assisted hatching (AH) on live birth rates. AH may lead to increased risk of multiple pregnancy. The risks of complications associated with multiple pregnancy may be increased without evidence to demonstrate an increase in live birth rate, warranting careful consideration of the routine use of AH for couples undergoing in vitro fertilisation (IVF) or intracytoplasmic sperm injection (ICSI). AH may offer a slightly increased chance of achieving a clinical pregnancy, but data quality was of low grade. We are uncertain about whether AH influences miscarriage rates. abstract_id: PUBMED:25286785 Monozygotic twinning after in vitro fertilization/intracytoplasmic sperm injection treatment is not related to advanced maternal age, intracytoplasmic sperm injection, assisted hatching, or blastocyst transfer. Objective: To evaluate the effect of assisted reproductive techniques on the incidence of monozygotic twins (MZT) and the associated pregnancy outcomes. Materials And Methods: This was a retrospective study of all in vitro fertilization (IVF)/intracytoplasmic sperm injection (ICSI) cycles with MZT pregnancies in our center from January 2001 to December 2011. The diagnosis of MZT pregnancies with their respective placental configurations was based on the results of ultrasonographic examinations performed during either the first or second trimester. The treatment characteristics and outcomes of each IVF cycle were recorded and stored in a computer database. Results: A total of 17 cycles with MZT pregnancies were identified, resulting in an overall incidence of MZT of 1.3%. The incidence of MZT for women aged &lt;35 years and ≥35 years were 1.5% and 0.8%, respectively (p = 0.319). The incidence was not significantly different between ICSI and non-ICSI cycles (1.4% vs. 1.0%; p = 0.620). In addition, the incidence was not increased in the assisted hatching (AH) group compared to those without AH (0.9% vs. 2.1%; p = 0.103). Finally, cycles with embryo transfer at the blastocyst stage had an MZT incidence that was not significantly different from those transferred at the cleavage stage (1.4% vs. 1.3%, respectively; p = 1.000). The incidence of each type of chorionicity, dichorionic-diamniotic, monochorionic-diamniotic, and monochorionic-monoamniotic, was 33.3%, 46.7%, and 20.0%, respectively. A total of 11 of 39 (28%) monozygotic babies and 16 of 19 (84%) coexisting heterozygotic babies were born alive. Conclusion: Until definite conclusions are drawn from larger trials, patients receiving IVF should not be overly concerned about the increase in MZT risk when proceeding to various assisted reproductive procedures (i.e., ICSI, AH, and blastocyst transfer). However, there is some evidence that the incidence of monochorionic-monoamniotic twins may be significantly increased after IVF/ICSI cycles. Patients should be informed about the possible obstetric complications regarding this rare type of MZT. abstract_id: PUBMED:24453510 Comparison between Cleavage Stage versus Blastocyst Stage Embryo Transfer in an Egyptian Cohort Undergoing in vitro Fertilization: A Possible Role for Laser Assisted Hatching. Background: Extended in vitro embryo culture and blastocyst transfer have emerged as essential components of the advanced reproductive technology armamentarium, permitting selection of more advanced embryos considered best suited for transfer. Aim Of Study: The aim of this study was to compare between cleavage stage and blastocyst stage embryo transfer in patients undergoing intracytoplasmic sperm injection, and to assess the role of assisted hatching technique in patients undergoing blastocyst transfer. Patients And Methods: This study was carried out on two groups. Group I: 110 patients who underwent 120 cycles of intracytoplasmic sperm injection with day 2-3 embryo transfer-for unexplained infertility or male factor within the previous 3 years. Their data obtained retrospectively from medical records. Group II: 46 age matched infertile female patients undergoing 51 intracytoplasmic sperm injection cycles for similar causes. Patients in Group II were further subdivided into 2 equal subgroups; Group IIa (23 patients), which had laser assisted hatching and Group IIb (23 patients), which did not have assisted hatching. All patients had an infertility workup including basal hormonal profile, pelvic ultrasound, hysterosalpingogram and/or laparoscope and semen analysis of the patient's partner. All patients underwent controlled ovarian hyperstimulation: Using long protocol of ovulation induction. Laser assisted hatching was done for blastocysts of 23 patients. Results: Comparison between both groups as regards the reproductive outcome showed a significant difference in pregnancy and implantation rates, both being higher in group II (P &lt; 0.05) Comparison between both subgroups as regards the reproductive outcome showed a highly significant difference in pregnancy and implantation rates, both being higher in Group IIa (P &lt; 0.01). There was also a significantly higher rate of multiple pregnancies among Group IIa (P &lt; 0.05). Conclusion: Blastocyst transfer is a successful and improved alternative for patients with multiple failed in vitro fertilization attempts, associated with a significant increase in pregnancy and implantation rates. Furthermore, laser assisted hatching increases implantation and clinical pregnancy rates. abstract_id: PUBMED:32466993 Evaluation of fertilization, usable blastocyst development and sustained implantation rates according to intracytoplasmic sperm injection operator experience. Research Question: Is intracytoplasmic sperm injection (ICSI) operator experience associated with fertilization, usable blastocyst development and sustained implantation rates (SIR) when at least two embryologists carry out ICSI for a single cohort of oocytes? Design: A retrospective cohort study of all IVF/ICSI cycles at a single large infertility centre between 2008 and 2018. Cycles were included if a cohort of oocytes was split between two embryologists for ICSI. The embryologist's experience of ICSI was used to evaluate laboratory and clinical outcomes overall and by pairs of inseminating embryologists. Logistic regression, analysis of variance and Kruskal-Wallis testing were used where appropriate. Results: Analysis of 14,362 ICSI procedures showed an association between least ICSI experience and lower mean fertilization rates (P &lt; 0.0001), higher odds of failed fertilization (adjusted OR 4.3; P &lt; 0.0001) and lower number of fertilization 'wins' per cohort (P &lt; 0.0001). Usable blastocyst development rates (number of usable blastocysts/number of two pronuclear zygotes) were not associated with ICSI embryologist experience (P = 0.44), but the odds of obtaining no usable blastocysts were higher (adjusted OR 1.4; P &lt; 0.0001) and the proportion of usable blastocyst 'wins' was lower (P = 0.0001) when embryologists with the least experience carried out ICSI. Increased ICSI experience was associated with higher mean SIR (P &lt; 0.0001). Laboratory and clinical outcomes were similar among embryologists once 1000 ICSI cycles and above were carried out. Conclusions: Increased ICSI operator experience is associated with higher fertilization rates, SIR and a lower likelihood of failed fertilization and usable blastocyst development. Splitting a single oocyte cohort between more than one embryologist for ICSI is a quality-control measure that can be implemented. abstract_id: PUBMED:25918590 The outcomes of intracytoplasmic sperm injection and laser assisted hatching in women undergoing in vitro fertilization are affected by the cause of infertility. Background: We sought to determine the association between factors that affected clini- cal pregnancy and live birth rates in patients who underwent in vitro fertilization (IVF) and received intracytoplasmic sperm injection (ICSI) and/or laser assisted hatching (LAH), or neither. Materials And Methods: In this retrospective cohort study, the records of women who underwent IVF with or without ICSI and/or LAH at the Far Eastern Memorial Hospital, Taipei, Taiwan between January 2007 and December 2010 were reviewed. We divided patients into four groups: 1. those that did not receive ICSI or LAH, 2. those that received ICSI only, 3. those that received LAH only and 4. those that received both ICSI and LAH. Univariate and multivariate analyses were performed to determine factors associated with clinical pregnancy rate and live birth rate in each group. Results: A total of 375 women were included in the analysis. Oocyte number (OR=1.07) affected the live birth rate in patients that did not receive either ICSI or LAH. Mater- nal age (OR=0.89) and embryo transfer (ET) number (OR=1.59) affected the rate in those that received ICSI only. Female infertility factors other than tubal affected the rate (OR=5.92) in patients that received both ICSI and LAH. No factors were found to affect the live birth rate in patients that received LAH only. Conclusion: Oocyte number, maternal age and ET number and female infertility fac- tors other than tubal affected the live birth rate in patients that did not receive ICSI or LAH, those that received ICSI only, and those that received both ICSI and LAH, respectively. No factors affected the live birth rate in patients that received LAH only. These data might assist in advising patients on the appropriateness of ICSI and LAH after failed IVF. Answer: Assisted hatching does not universally improve implantation rates after in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) in all patients. A prospective randomized study concluded that assisted hatching through partial zona dissection prior to embryo transfer does not improve pregnancy and embryo implantation rates in unselected patients undergoing IVF or ICSI (PUBMED:8825162). Similarly, another study found that assisted hatching had no significant impact on ICSI cycle outcome in terms of either the viable pregnancy rate or the embryonic implantation rate, although it appeared to convey a marginally significant benefit in female patients aged ≥ 35 years (PUBMED:8981127). Meta-analyses of randomized trials have shown that assisted hatching increases clinical pregnancy, implantation, and on-going pregnancy rates in patients with poor prognosis for IVF and ICSI, particularly those with repeated implantation failure (PUBMED:15258534). An observational study indicated that partial assisted hatching is associated with higher implantation and pregnancy rates than total assisted hatching, especially in women under 38 years old who suffer from recurrent implantation failure (PUBMED:16895643). For patients with repeated implantation failures, quarter-laser zona thinning assisted hatching (qLZT-AH) has been shown to significantly improve implantation rates (PUBMED:15823228). However, a Cochrane review suggests that the effects of assisted hatching on live birth rates are uncertain and may lead to an increased risk of multiple pregnancy, indicating that the routine use of assisted hatching should be carefully considered (PUBMED:33730422). In conclusion, while assisted hatching may benefit specific patient groups, such as older women or those with repeated implantation failures, it does not improve implantation rates across all patients undergoing IVF or ICSI. The decision to use assisted hatching should be individualized based on patient characteristics and clinical history.
Instruction: Is screening once at age 65 an effective method for detection of abdominal aortic aneurysms? Abstracts: abstract_id: PUBMED:8790524 Is screening once at age 65 an effective method for detection of abdominal aortic aneurysms? Objective: To evaluate whether a single screening at 65 or screening at 65 and 70 are the methods of choice to detect the majority of abdominal aortic aneurysms (AAA) suitable for treatment. Setting: District general hospital, district general practice and community. Methods: Data were collated from four studies (a pilot, a control trial, and two cohorts) of abdominal ultrasound screening of an invited group of 11 666 men and women aged 65-80 to derive age related prevalence, invitation acceptance rates, and operative mortality. Deaths from AAA rupture were obtained from district registrars' returns. Results: The prevalence of AAA increased with age being greater overall in men (6.8%) than in women (1.2%). From ages 65 to 80 the prevalence in men increased from 5.4% to 10.4%, and in women from 0.6% to 2.1%. In men and women 4% of patients dying from AAA rupture were aged 65 or under and 85% of deaths occurred over the age of 70. Acceptance rates for screening decreased between 65 and 80 from 80.5% to 66.2% in men and from 72.7% to 58.3% in women. In elective or symptomatic patients operative mortality was zero up to the age of 70, increasing to 7.1% in older patients. Conclusions: Men should be screened at age 65 to obtain a high acceptance rate and to identify those with AAA most likely to benefit from treatment. The prevalence is relatively low at this age so an additional screen at age 70 is recommended, which would pick up a further 3.7% of patients with AAA, at which age intervention is likely to be of benefit. abstract_id: PUBMED:8652992 Abdominal ultrasound as a screening method Abdominal ultrasound is increasingly used as part of the initial patient evaluation, without a specific indication. However, such an indiscriminate use of abdominal ultrasound is still controversial. The review of available literature on the value of abdominal ultrasound in clinical screening suggests the following conclusions: 1) The primary screening examination of asymptomatic persons leads to clinically relevant findings in less than 0.5% of cases. However, approximately 50% of the persons examined have abnormal findings without clinical relevance. This high frequency of abnormal findings may cause high costs due to unnecessary follow-up examinations. 2) A sonographic screening of asymptomatic persons may, however, be useful for specific indications in preselected individuals. This has been demonstrated for the detection of abdominal aortic aneurysm in the age group over 65 years. 3) Routine abdominal ultrasound in patients with a known internal disease appears to be useful even in the absence of a specific indication. This 'secondary screening' yields unexpected findings which turn out to be relevant for therapeutic decisions or for the final diagnosis in 6-25% of the cases. Routine abdominal ultrasound of all patients with internal disease may thus be a valuable extension of the initial patient evaluation. abstract_id: PUBMED:11397028 The long-term benefits of a single scan for abdominal aortic aneurysm (AAA) at age 65. Aim: To investigate the efficacy of a single ultrasonic scan at age 65 to identify patients at risk from ruptured abdominal aortic aneurysm (AAA). Method: A total of 6058 men aged 64-81 were recruited to a randomised trial, and 3000 were invited to attend a single screening test. An additional population of 1011 men was offered screening as they reached age 65. If a normal aorta was identified in this sub-group, further scans were offered at two-yearly intervals. Follow up and treatment of those identified as having an aortic dilatation of 3 cm or greater was undertaken. All subject groups were monitored for deaths occurring over the study period, and date and cause of death were recorded. Results: A total of 2212 men attended screening in the randomised trial; the overall compliance was 74%, and prevalence of AAA was 7.7%. Compliance decreased, and prevalence increased, with age. Mortality from ruptured AAA was reduced by 68% at 5 years (screened group compared to the age-matched control population), and by 42% in the study arm (screened and refusers) compared with controls. The benefit persisted at ten years (53% and 21% respectively). Of the uncontrolled sample of 1011 men offered a scan at age 65, 681 attended and 649 of these were found to have a normal aortic diameter; re-screening demonstrated new aneurysm development in 4% over ten years. The aortic diameters of the new AAAs were under 4 cm and would therefore have a low risk of rupture.1 Mortality from rupture in all those with an initially normal aortic diameter was low, at 1 case per 1000 scans over ten years. Conclusion: Screening once for AAA at age 65 can identify the majority of AAA that are of clinical significance and can identify a large population at low risk from rupture who do not require surveillance. This policy has been effective when combined with selective treatment in reducing the risk of rupture for ten years in those who attend the screening programme. abstract_id: PUBMED:9123627 Screening for abdominal aortic aneurysm In spite of increasing number of elective resections of abdominal aortic aneurysms (AAA) the mortality or ruptured AAA is increasing. The advantages of elective operations are obvious; the lethality is 2-6% while the lethality of ruptured AAA is 75-95%. However, AAA seldom causes symptoms before rupture. Ultrasonographic screening for AAA takes 10 minutes per scan, and the sensitivity and specificity are high. Ultrasonographic screening for AAA is a reliable, safe and inexpensive method for screening, and screening for AAA is discussed worldwide. One point four percent of deaths among men from 65 to 80 year of age are caused by ruptured AAA. Screening men over 65 for AAA can theoretically prevent a substantial number of deaths. Our calculations predict one prevented AAA-death per 200-300 scans for a cost of about 4000 DKK per saved year of life. However, cost-benefit analyses are based on uncertain assumptions concerning prevalence, incidence and risk of rupture. Therefore a randomized trial screening of 65-73 year old males is taking place in the County of Viborg in Denmark. abstract_id: PUBMED:27364735 Extending Abdominal Aortic Aneurysm Detection to Older Age Groups: Preliminary Results from the Liège Screening Programme. Background: There is evident benefit in terms of reduced aneurysm-related mortality from screening programs of abdominal aortic aneurysm (AAA) in men aged 65 years and more. Recent studies in the United Kingdom and Sweden have shown a decline of the prevalence of AAA in the general population. Current screening policies (e.g., men aged 65-74 years), however, do not account for aging and increased life expectancy of Western populations. This study investigated AAA detection by extending the target population to older age groups (75-85 years). Methods: AAA screening was conducted in the County of Chaudfontaine (Liège, Belgium) on the population of elderly (n = 3,054). The participation rate was 36%. The 1,101 participants (722 men aged 65-85 years and 379 women aged 74-85 years) were examined by ultrasound scan. AAA was defined as an infrarenal aortic outer-outer diameter of at least 3 cm. Demographics, clinical parameters, and risk factors were also recorded. AAA prevalence was estimated, and patients with and without AAA were compared by logistic regression. Results: The overall AAA prevalence was 3.6% (n = 40). In female participants, AAA prevalence was low (1.3%). In men, it amounted 2.7% in the 65-74 age group but rose to 7.3% in the age-extended group (75-85 years). Further in addition to age, height, current smoking, history of coronary artery disease, hypercholesterolemia, peripheral artery disease of the lower limbs, and varicose veins were significantly associated with the presence of AAA. Conclusions: These preliminary findings, based on a representative sample of the elderly population of the Liège region, support the idea that current AAA screening policies should be updated to cover an increasingly aging population. The presence of varicose veins as a potential risk factor for AAA should also be considered during screening. abstract_id: PUBMED:19017393 Is population screening for abdominal aortic aneurysm cost-effective? Background: Ruptured abdominal aortic aneurysm (AAA) is responsible for 1-2% of all male deaths over the age of 65 years. Early detection of AAA and elective surgery can reduce the mortality risk associated with AAA. However, many patients will not be diagnosed with AAA and have therefore an increased death risk due to the untreated AAA. It has been suggested that population screening for AAA in elderly males is effective and cost-effective. The purpose of this study was to perform a systematic review of published cost-effectiveness analyses of screening elderly men for AAA. Methods: We performed a systematic search for economic evaluations in NHSEED, EconLit, Medline, Cochrane, Embase, Cinahl and two Scandinavian HTA data bases (DACEHTA and SBU). All identified studies were read in full and each study was systematically assessed according to international guidelines for critical assessment of economic evaluations in health care. Results: The search identified 16 cost-effectiveness studies. Most studies considered only short term cost consequences. The studies seemed to employ a number of "optimistic" assumptions in favour of AAA screening, and included only few sensitivity analyses that assessed less optimistic assumptions. Conclusion: Further analyses of cost-effectiveness of AAA screening are recommended. abstract_id: PUBMED:19946430 Ultrasonographic screening for the detection of abdominal aortic aneurysms. Background: The prevalence of abdominal aortic aneurysms (AAA) with a maximal diameter of 3 cm or more is age-dependent; among persons over age 65, it lies between 4% and 8% in men and between 0.5% and 1.5% in women. About 10% of all AAAs have a maximum diameter of 5 cm or more. The prognosis of ruptured AAA (rAAA) is dismal, with an overall mortality of at least 80%. Ultrasonography of the abdominal aorta is a safe and technically simple method of detecting AAAs. Methods: Evaluation of population-based, randomized studies of ultrasonographic screening for the detection of AAA, based on a selective review of the literature. Results: A meta-analysis of four randomized controlled studies showed that ultrasonographic screening was associated with a significant lowering of AAA-related mortality in men aged 65 to 80 after it had been performed for 3-5 years (risk reduction 44%, odds ratio [OR] 0.56, 95% confidence interval [CI] 0.44-0.72) and after it had been performed for 7-15 years (risk reduction 53%, OR 0.47, 95% CI 0.25-0.90). AAA screening was also associated with a significant lowering of the overall mortality after 7-15 years, but not in the first 5 years. Ultrasonographic screening led to a significant increase in the number of elective AAA operations performed and to a 50% reduction of the number of emergency operations for rAAA. Conclusion: Ultrasonographic screening for AAA is a technically simple diagnostic test that is associated with a major reduction of AAA-related mortality. In view of the higher prevalence of AAA among the elderly, it is recommended that all men aged 65 or older and all men and women with a family history of AAA should be systematically screened. A national ultrasound screening program should be urgently implemented in Germany in order to bring about a major reduction in AAA-associated mortality. abstract_id: PUBMED:25294290 No benefit of screening for abdominal aortic aneurysm among 70- instead of 65-year-old men. Aim: Screening 65-year-old men for abdominal aortic aneurysm (AAA) reduces mortality from ruptured AAA (rAAA). Lower than expected prevalence of AAA is now found, why screening at a higher age and rescreening has been discussed. Our aim was to determine if screening at 70 years of age, instead of 65, increases clinical effectiveness. Methods: Seven thousand nine hundred fifty-one and 5623 previously un-screened 65- and 70-year old men were invited to ultrasound screening. Results: The attendance rate was 85.7% and 84.0%, P&lt;0.01, for the 65- and 70-year old men respectively. The screening-detected prevalence did not differ, being 1.9% and 2.3%, P=0.15, respectively, probably due to the fact that 23.5% and 37.4% of all known AAA among 65- and 70-year-old men, were detected by other means prior to screening, P&lt;0.01. However, the total known prevalence differed between the age-groups, being at least 2.1% and 3.0% respectively, P&lt;0.001. Conclusion: The screening-detected AAA-prevalence did not differ between 65- and 70-year-old men, due to the greater number of AAA known prior to screening among 70- compared to 65-year-old men. Screening men at 70 instead of 65 years of age would not result in detection of substantially more previously unknown AAA, thus not preventing rAAA and consequently not more saved life-years. Further, data also indicates that it is questionable if re-screening the 65-year-old male population after five years would generate any important clinical effect. abstract_id: PUBMED:30654619 Screening Program of Abdominal Aortic Aneurysm. In Europe, the prevalence of abdominal aortic aneurysms (AAAs) in the elderly population (≥65 year old) has declined in the past decades to &lt;4%. Aneurysmal degeneration of the aorta is a serious and potentially life-threatening vascular disease. Abdominal aortic aneurysms typically develop subclinically and often only become symptomatic when complicated by impending rupture. Most AAAs are discovered incidentally while investigating for an unrelated pathology. Ruptured AAA is the tenth leading cause of death in Belgium (0.32% of all deaths in 2014). Health-care providers have emphasized the importance of early detection of AAA and elective repair when the rupture risk outweighs operative risk (usual diameter threshold of 55 mm). Routine AAA screening programs, consisting of a single abdominal ultrasonography at the age of 65 years, aim to reduce the number of AAA-related deaths. Does population-based ultrasound screening for AAA achieve its objective and is it cost-effective? This literature review tries to answer these challenging questions. abstract_id: PUBMED:26826057 Age at detection of abdominal aortic aneurysms in siblings of patients with abdominal aortic aneurysms. Objective: Few countries offer organized screening of siblings of patients with abdominal aortic aneurysms (AAAs), although a hereditary trait is well known to exist. Male relatives, but not female, are invited within the population-based screening programs for elderly men in Sweden. Evidence regarding the optimal age to screen siblings is scarce. The aim of this study was to describe the age at detection in siblings found with AAAs. Methods: All patients treated for AAAs in two Swedish counties were screened for siblings. Consenting siblings aged 80 and younger were examined (N = 529) with ultrasound and were interviewed per protocol. Results: In the cohort of 529 siblings to AAA patients, 53 siblings were diagnosed with AAAs (sisters 16/276 [5.8%] and brothers 37/253 [14.6%]). The prevalence of AAAs in the siblings 65 years of age or younger was 16/207 (7.7%). One-third of the siblings found with AAAs were young (16/53 [30%]). Among the young siblings with AAAs, 8/16 (50%) had an aneurysm larger than 50 mm or were already surgically treated. The prevalence of AAAs in siblings older than 65 years of age was 37/322 (12%). Conclusions: The AAA prevalence in this sibling cohort is strikingly high compared to the prevalence in the population (in Sweden, 1.4%-2.2% in 65-year-old men). The young ages among diagnosed siblings reinforce that male siblings of AAA patients should be screened before age 65 (before the population-based program) and that structured programs for female siblings are called for. Answer: Screening once at age 65 for abdominal aortic aneurysms (AAA) appears to be an effective method for detection. A study found that a single ultrasonic scan at age 65 can identify the majority of AAAs that are of clinical significance and can identify a large population at low risk from rupture who do not require surveillance. This policy, combined with selective treatment, has been effective in reducing the risk of rupture for ten years in those who attend the screening programme (PUBMED:11397028). Another study supports the idea that current AAA screening policies should be updated to cover an increasingly aging population, suggesting that extending the target population to older age groups could be beneficial (PUBMED:27364735). However, the effectiveness of screening at age 65 may be influenced by several factors. The prevalence of AAA increases with age, being greater overall in men than in women, and acceptance rates for screening decrease between ages 65 and 80. Therefore, while men should be screened at age 65 to obtain a high acceptance rate and to identify those most likely to benefit from treatment, an additional screen at age 70 is recommended to pick up further cases of AAA (PUBMED:8790524). Additionally, a study found that the screening-detected AAA prevalence did not differ significantly between 65- and 70-year-old men, suggesting that screening at 70 instead of 65 years of age would not result in the detection of substantially more previously unknown AAAs (PUBMED:25294290). In summary, screening at age 65 is effective but may be optimized by considering an additional screening at age 70 or adjusting the screening age based on the changing prevalence of AAA in the population and the potential benefits of detecting AAAs in older age groups.
Instruction: Can the delivery method influence lower urinary tract symptoms triggered by the first pregnancy? Abstracts: abstract_id: PUBMED:22555033 Can the delivery method influence lower urinary tract symptoms triggered by the first pregnancy? Introduction And Objectives: The increase of the intensity of urinary symptoms in late pregnancy and postpartum has been well documented by several authors, but their causes remain uncertain, partly because of its probable multifactor origin. There are also controversies whether the etiology of lower urinary tract symptoms during pregnancy is the same as postpartum and whether the method of delivery could influence the risk of onset of urinary symptoms. This study aimed to evaluate the urinary symptoms triggered during pregnancy and its evolution in the late puerperium, correlating them with the delivery method. Materials And Methods: A longitudinal study was conducted, which included 75 primigravidae women, classified according to method of delivery as: (VD) vaginal delivery with right mediolateral episiotomy (n = 28); (CS) elective caesarean section (n = 26); and (EC) emergency caesarean section (n = 21). Urinary symptoms were assessed in the last trimester of pregnancy and at 45 days (± 10) of puerperium with validated versions for Portuguese language of the following questionnaires: International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form (ICIQ-UI SF) and the International Consultation on Incontinence Questionnaire Overactive Bladder (ICIQ-OAB). Results: It was observed that frequency, urgency, nocturia and urge incontinence, triggered during pregnancy, decreased significantly in the postpartum period, regardless of the delivery method (p = 0.0001). However, symptoms related to urinary loss due to stress persisted after vaginal delivery (p = 0.0001). Conclusions: Urgency, frequency and nocturia triggered during pregnancy tend to disappear in the late postpartum period, regardless of the delivery method, but the symptoms related to urinary loss due to stress tend to persist in late postpartum period after vaginal delivery. abstract_id: PUBMED:16831150 Bothersome lower urinary tract symptoms 1 year after first delivery: prevalence and the effect of childbirth. Objective: To assess the severity of both stress urinary incontinence (SUI) and overactive bladder (OAB) symptoms during and after the first pregnancy, using a self-reported health-related quality-of-life questionnaire, and to assess the effect of pregnancy and childbirth on bothersome lower urinary tract symptoms (LUTS) persisting at 1 year after the first childbirth. Patients And Methods: In a prospective cohort study, 344 women completed four self-reported questionnaires. Urogenital symptoms were assessed with the Urogenital Distress Inventory (UDI), assessing if a urogenital symptom is present and the amount of bother it causes, measured on a 4-point Likert scale, i.e. 'not at all', 'slightly', 'moderately' and 'greatly bothered'. Bothersome LUTS were defined as reporting moderate or great bother from the symptom, and as not bothersome if it was absent or present with none or only a slight degree of self-reported bother. In the analysis we used three of the five subscales from the UDI; UI, OAB and obstructive voiding, where each subscale has a range of 0 (no symptom) to 100 (all symptoms present with the highest degree of bother). Results: Of the 344 women, 83 (24.2%) reported having a moderate to greatly bothersome frequency symptom at 36 weeks of gestation. After childbirth there was a statistically significant decline in the prevalence of bothersome frequency to 38 (9.6%) women (P &lt; 0.001). Bothersome SUI was present in 53 (15.4%) women at 36 weeks of gestation, and in 36 (10.5%) at 1 year after childbirth. Fifty-eight (16.9%) women reported having moderate to greatly bothersome urge UI (UUI) and at 1 year after childbirth, 51 (14.8%) were still bothered by it. After univariate and multivariate analysis, the predictive factors for the presence of bothersome SUI were greater maternal age (32.5 vs 30.3 years old at delivery) and the presence of bothersome SUI at 12 weeks of gestation. Bothersome UUI was significantly associated with a lower educational level (odds ratio 0.08, 95% confidence interval 0.02-0.36). Women after a Caesarean delivery had more bothersome UUI and women after a spontaneous vaginal delivery developed more bothersome SUI (neither statistically significant, possibly because there were too few samples). During pregnancy, all UDI subscale scores increased significantly and after childbirth all scores decreased significantly vs 36 weeks of gestation. However, the score on the UI subscale remained significantly higher at 1 year after birth than at 12 weeks of gestation, whereas the scores on the OAB and obstructive voiding subscales were lower at 1 year after birth than at 12 weeks of gestation. Nevertheless, the scores for UI and obstructive voiding were low, indicating little bother. Conclusion: Most women are not bothered by their LUTS after their first delivery. As the prevalence of bothersome symptoms was highest at 36 weeks of gestation, they are probably part of a normal pregnancy. However, OAB symptoms can be perceived as bothersome. Physiotherapy and bladder training can be offered to women with bothersome LUTS. Bothersome SUI in early pregnancy and a greater maternal age were predictive of bothersome SUI at 1 year after first childbirth. A Caesarean delivery seemed to be protective for bothersome SUI at 1 year after birth, but bothersome UUI was more prevalent after a Caesarean than a vaginal delivery. More research with a larger sample is needed to allow definite statements about the effect of the mode of delivery and bothersome UI symptoms. abstract_id: PUBMED:22777581 Clinical impact of and contributing factors to urinary incontinence in women 5 years after first delivery. Introduction And Hypothesis: This study was conducted to investigate the prevalence of and contributing factors to urinary incontinence (UI) in women 5 years after their first birth and to evaluate the associations of UI with delivery mode and quality of life. Methods: Between 2005 July and 2006 March, primiparous women who delivered at term in a tertiary hospital were recruited into this cohort study. Immediately postpartum, the women completed a structured urogynecological questionnaire regarding lower urinary tract symptoms. Then the same urogynecological questionnaire, the Incontinence Impact Questionnaire (IIQ-7), and the Urinary Distress Inventory (UDI-6) were mailed to them 5 years later to follow up on UI. Three hundred and twelve women responding to the mailed questionnaires were included in the analyses. Results: The prevalence 5 years after first delivery of stress (SUI) and urge (UUI) UI were 43.6 % and 19.2 %, respectively. Women with UI during their first pregnancy were more likely to develop UI 5 years postpartum than those without it; women who delivered their first child vaginally had a greater incidence of UI than those having cesarean birth; UUI in women following cesarean delivery more negatively impacted emotional health than it did following vaginal birth, whereas the impact of SUI did not significantly differ between delivery groups. Conclusions: UI during the first pregnancy and vaginal delivery in primiparous women may predict an increased risk of having UI 5 years after delivery. UUI adversely affected women's emotional health, especially in those undergoing cesarean section. abstract_id: PUBMED:30644569 Lower urinary tract symptoms 7 years after the first delivery: Correlation to the mode of delivery. Objective: To investigate the extent of lower urinary tract symptoms (LUTS) during pregnancy and 7 years after birth in primipara and their association with delivery mode history and other factors. Methods: A longitudinal cohort study was conducted in 12 hospitals representative of the diversity of the seven administrative regions of China. Of 10 098 women recruited at the third trimester, 3862 (38.25%) responded at 1 year, and 1115 (11.04%) responded at 7 years. A modified Chinese Bristol Female Lower Urinary Tract Symptoms questionnaire was administered. No intervention was implemented. Results: Among the participants, 75.4% reported nocturia during gestation, followed by frequency (58.4%), which gradually decreased after delivery. The prevalence of urge urinary incontinence (UUI) increased from 5.7% during pregnancy to 22.8% at 1 year after delivery and 10.2% at 7 years postpartum. The prevalence of stress urinary incontinence (SUI) increased from 29.9% during pregnancy to 46.4% at 1 year postpartum and 23.7% at 7 years after delivery. The odds of storage LUTS and voiding LUTS 7 years after delivery were lower for the cesarean section (CS) group than for the vaginal delivery (VD) group (OR = 0.49, 95%CI, 0.36-0.68; OR = 0.25, 95%CI, 0.09-0.70). The most frequent moderately to severely bothersome LUTS after VD was SUI (19.18%), while that for CS was UUI (19.18%). Conclusion: The incidence of storage LUTS was higher than that of voiding LUTS both during pregnancy and postpartum. The most frequently reported LUTS during pregnancy were nocturia and frequency, while the most frequent LUTS after delivery were urgency and SUI. CS was associated with a decreased prevalence of both storage and voiding LUTS after delivery. abstract_id: PUBMED:11835420 The risk of lower urinary tract symptoms five years after the first delivery. Aim Of The Study: To estimate the prevalence and 5-year incidence of lower urinary tract symptoms (LUTS) after the first delivery and to evaluate the impact of pregnancy per se and delivery per se on long-lasting symptoms. Materials And Methods: A longitudinal cohort study of 305 primiparae questioned a few days, 3 months, and 5 years after their delivery. The questionnaire used was tested and validated, and the questions were formulated according to the definitions of the International Continence Society (ICS). Maternal, obstetric, and neonatal data concerning every delivery and objective data concerning surgeries during the observation period were obtained from the records. From the sample of 278 women (91%) who responded 5 years after their first delivery, three subpopulations were defined: 1) women without initial LUTS before or during the first pregnancy or during the puerperal period, 2) women with onset of LUTS during the first pregnancy, and 3) women with onset of LUTS during the first puerperium. The risk of LUTS 5 years after the first delivery was examined using bivariate analyses. The obstetric variables in the bivariate tests with a significant association with long-lasting urinary incontinence were entered into a multivariate logistic regression. Results: The prevalence of stress and urge incontinence 5 years after first delivery was 30% and 15%, respectively, whereas the 5-year incidence was 19% and 11%, respectively. The prevalence of urgency, diurnal frequency, and nocturia 5 years after the first delivery was 18%, 24%, and 2%, respectively, whereas the 5-year incidence was 15%, 20%, and 0.5%, respectively. The prevalence of all LUTS except nocturia increased significantly during the 5 years of observation. The risk of long-lasting stress and urge incontinence was related to the onset and duration of the symptom after the first pregnancy and delivery in a dose-response-like manner. Vacuum extraction at the first delivery was used significantly more often in the group of women with onset of stress incontinence during the first puerperium, whereas an episiotomy at the first delivery was performed significantly more often in the group of women with onset of stress incontinence in the 5 years of observation. The prevalence of urgency and diurnal frequency 5 years after the first delivery was not increased in women with symptom onset during the first pregnancy or puerperium compared with those without such symptoms. The frequency of nocturia 5 years after the first delivery was too low for statistical analysis. Conclusion: The first pregnancy and delivery may result in stress and urge incontinence 5 years later. Women with stress and urge incontinence 3 months after the first delivery have a very high risk of long-lasting symptoms. An episiotomy or a vacuum extraction at the first delivery seems to increase the risk. Subsequent childbearing or surgery seems without significant contribution. Long-lasting urgency, diurnal frequency, or nocturia cannot be predicted from onset during the first pregnancy or puerperium. abstract_id: PUBMED:19852893 Urinary incontinence during 12 years after delivery Introduction: First pregnancy and delivery often result in urinary incontinence. The aim of this 12-year cohort study was to estimate the impact of incontinence and the need for treatment. Material And Methods: A total of 305 women were questioned about lower urinary tract symptoms a few days, three months, five years and 12 years after their first delivery. Results: A total of 242 women (79%) answered a questionnaire 12 years after their first delivery. Urinary incontinence increased significantly (p &lt; 0.05) during the 12-year period to a total of 114 symptomatic women (47%); 25 of 102 women had stress incontinence (SI) and seven of 49 women had urge incontinence (UI) for at least 12 years. Three of 114 incontinent women (2.6%) had received treatment, while 162 women had exercised the pelvic floor the previous three months. None underwent SI surgery. Eighteen incontinent women wanted treatment; six of whom had SI alone and 12 of whom had both SI and UI. Among the 18 SI women ten had daily symptoms, while nine experienced social or hygienic inconvenience. Among the 86 incontinent women who did not want treatment, 51 had SI alone, 24 had both SI and UI, and 11 women had UI alone. Among the 75 SI women, five had daily symptoms, while ten experienced social or hygienic inconvenience. Conclusion: Urinary incontinence is prevalent 12 years after first delivery, though few receive treatment. Incontinence severity is associated with a desire for treatment. abstract_id: PUBMED:11147740 Lower urinary tract symptoms 5 years after the first delivery. The aim of the study was to estimate the prevalence and 5-year incidence of lower urinary tract symptoms after the first delivery. A total of 278 primiparae were questioned about lower urinary tract symptoms after their first pregnancy and puerperium, and again 5 years later. The prevalence of such symptoms in the study population increased significantly during the 5 years of observation. The prevalence of stress or urge incontinence 5 years after first delivery was significantly higher in women with onset during the first pregnancy or 1st puerperium than in those without incontinence before or during that period. The prevalence of urgency or frequent voiding 5 years after first delivery was not increased in the women with onset during that time compared to those without such symptoms. Stress or urge incontinence during the first pregnancy and puerperium predicts an increased risk of having the symptom 5 years later. Urgency and diurnal frequent voiding cannot be predicted from onset during that period. abstract_id: PUBMED:19142516 Lower urinary tract symptoms three years after delivery: a prospective study Purpose: to evaluate the frequency of lower urinary tract symptoms (LUTS), three years after delivery in women previously interviewed at the third gestation trimester, and to compare the gestation and delivery impact on LUTS, analyzing the social and hygienic discomfort associated with micturition complaints. Methods: analytical prospective study. In 2003, 340 pregnant women were selected in the pre-natal outpatient unit, and asked to answer a pre-tested questionnaire about LUTS and obstetric data. Three years after delivery, it was possible to get in touch by telephone with 120 of the 340 women who had been interviewed in the first study. They answered a second questionnaire about obstetric data, LUTS and its social impact. LUTS have been divided into stress urinary incontinence (SUI) and irritative urinary symptoms (IUS). McNemar's and chi-square tests were used for statistical analysis (p&lt;0.05). Results: SUI and nocturia have occurred in 57.5 and 80% of the pregnant women and the appearance of those symptoms after delivery, in 13.7 and 16.7%, respectively. Urge urinary incontinence has been significantly more frequent after delivery (30.5%) than in gestation (20.8%). Only 35.6% of the women with IUS presented social discomfort, but this rate has gone up to 91.4% in women with IUS associated with SUI. Conclusions: gestation, more than delivery, was associated with the appearance of SUI and nocturia, while the urge urinary incontinence was significantly higher after delivery. Most of the women have mentioned that SUI causes social problems. abstract_id: PUBMED:28593365 Urinary symptoms after bladder flap at the time of primary cesarean delivery: a randomized controlled trial (RTC). Introduction And Hypothesis: The aim of this study was to evaluate urinary symptoms in the postpartum period after omission of the bladder flap at the time of primary cesarean delivery. Methods: This was a single-blind parallel-group randomized comparison (bladder flap, no bladder flap) in women scheduled for a primary cesarean delivery at 37 weeks gestation or later. The primary outcome was urinary symptom scores at 6-8 weeks postpartum. Secondary outcomes included comparisons of preoperative and postoperative pelvic floor symptom scores and the proportions of symptom bother responses between the study groups. Results: A total 43 women were available for analysis. Randomization was as follows: omission of the bladder flap (n = 22) and bladder flap (n = 21). Demographic characteristics and baseline pelvic floor symptom scores were similar between the groups. The primary outcome, urinary symptom scores at 6-8 weeks postpartum, did not differ significantly between the groups, but urinary symptom bother was significantly higher in women who received a bladder flap. Pelvic floor symptom scores improved significantly following delivery. Conclusions: Urinary symptom scores as measured by the UDI-6 did not differ between women randomized to bladder flap or omission of the bladder flap, but the proportion of women with urinary symptom bother was significantly higher among those who received a bladder flap. abstract_id: PUBMED:18639264 Incidence and remission of lower urinary tract symptoms during 12 years after the first delivery: a cohort study. Purpose: We estimated the incidence and remission of lower urinary tract symptoms during the 12 years following the first pregnancy and delivery. Materials And Methods: In a cohort study 242 primiparae were questioned about lower urinary tract symptoms 3 months, 5 years and 12 years after the first delivery. Results: From 3 months to 5 years after first delivery the incidence of stress urinary incontinence, urge urinary incontinence, urgency, diurnal frequency and nocturia was 56 of 213 cases (26.3%), 33 of 219 (15.1%), 33 of 206 (16.0%), 49 of 219 (22.4%) and 6 of 225 (2.7%), respectively. Remission of stress urinary incontinence, urge urinary incontinence, urgency, diurnal frequency and nocturia during the same period occurred in 1 of 11 cases (9.1%), 2 of 5 (40%), 2 of 5 (40%), 4 of 7 (57.1%) and 1 of 1 (100%), respectively. From 5 to 12 years after the first delivery the incidence of stress urinary incontinence, urge urinary incontinence, urgency, diurnal frequency and nocturia was 40 of 158 cases (25.3%), 25 of 188 (13.3%), 24 of 175 (13.7%), 40 of 174 (23.0%) and 13 of 220 (5.9%), respectively. Remission of stress urinary incontinence, urge urinary incontinence, urgency, diurnal frequency and nocturia during the same period occurred in 14 of 66 cases (21.2%), 13 of 36 (36.1%), 22 of 36 (61.1%), 14 of 52 (26.9%) and 4 of 6 (66.7%), respectively. Of 62 women with pure stress urinary incontinence during the first pregnancy and puerperium 20 (32.2%) had pure stress urinary incontinence, 3 (4.8%) had pure urge urinary incontinence and 15 (24.2%) had mixed urinary incontinence 12 years later. Of 13 women with pure urge urinary incontinence during the first pregnancy and puerperium 3 (23.1%) had pure urge urinary incontinence, 2 (15.4%) had pure stress urinary incontinence and 3 (23.1%) had mixed urinary incontinence 12 years later. The overall prevalence of lower urinary tract symptoms 12 years after the first delivery increased significantly. Conclusions: The incidence and remission of lower urinary tract symptoms after the first pregnancy and delivery fluctuate and the types of urinary incontinence may interchange, while the overall prevalence of lower urinary tract symptoms increases in the long term. Answer: Yes, the delivery method can influence lower urinary tract symptoms (LUTS) triggered by the first pregnancy. Several studies have investigated the relationship between delivery method and the development or persistence of LUTS postpartum. One study found that urinary symptoms such as frequency, urgency, nocturia, and urge incontinence, which were triggered during pregnancy, decreased significantly in the postpartum period regardless of the delivery method. However, symptoms related to urinary loss due to stress persisted after vaginal delivery (PUBMED:22555033). Another study reported that women after a Caesarean delivery had more bothersome urge urinary incontinence (UUI), while women after a spontaneous vaginal delivery developed more bothersome stress urinary incontinence (SUI), although the differences were not statistically significant, possibly due to the small sample size. It was also noted that bothersome SUI in early pregnancy and greater maternal age were predictive of bothersome SUI at 1 year after first childbirth (PUBMED:16831150). Further research indicated that UI during the first pregnancy and vaginal delivery in primiparous women may predict an increased risk of having UI 5 years after delivery. UUI following cesarean delivery more negatively impacted emotional health than it did following vaginal birth (PUBMED:22777581). Another study found that the odds of storage LUTS and voiding LUTS 7 years after delivery were lower for the cesarean section group than for the vaginal delivery group. The most frequent moderately to severely bothersome LUTS after vaginal delivery was SUI, while that for cesarean section was UUI (PUBMED:30644569). Overall, these studies suggest that the method of delivery can have an impact on the type and persistence of LUTS experienced by women postpartum, with vaginal delivery being more associated with stress-related urinary symptoms and cesarean delivery with urge-related symptoms. However, the exact relationship can be complex and multifactorial, and further research with larger sample sizes may be needed to make definitive statements about the effect of the mode of delivery on bothersome urinary symptoms (PUBMED:16831150).
Instruction: Does prior percutaneous coronary intervention adversely affect early and mid-term survival after coronary artery surgery? Abstracts: abstract_id: PUBMED:25436206 The effect of prior percutaneous coronary intervention on the immediate and late outcome after coronary artery bypass grafting: systematic review and meta-analysis. Introduction: A number of studies reported on a possible increased risk of morbidity and mortality after coronary artery bypass grafting in patients with prior percutaneous coronary intervention. Methods: A systematic review and meta-analysis of studies comparing the outcome of patients undergoing coronary surgery with or without prior percutaneous coronary intervention was performed. Only studies reporting results of adjusted analysis and excluding acute percutaneous coronary intervention failures were included in this meta-analysis. Results: Literature search yielded nine studies reporting on 68,645 patients who underwent coronary surgery. Of them, 8,358 (12.2%) had a prior percutaneous coronary intervention. Patients without prior percutaneous coronary intervention were significantly older (p=0.002), had significantly higher prevalence of left main stenosis (p=0.005) and three-vessel disease (p&lt;0.0001). Prior percutaneous coronary intervention was associated with higher risk of resternotomy for bleeding (p=0.04) and dialysis (p=0.003). Thirty-day/in-hospital mortality was significantly higher in patients with prior percutaneous coronary intervention (pooled rate: 2.7% vs 2.0%, risk ratio 1.39, 95% confidence interval 1.06-1.84, p=0.02) as confirmed also by generic inverse variance analysis (risk ratio 1.47, 95% confidence interval 1.12-1.93, p=0.005). Prior percutaneous coronary intervention did not affect late outcome (five studies included, risk ratio 1.07, 95% confidence interval 0.90-1.28, p=0.43). Conclusions: Prior percutaneous coronary intervention seems to be associated with an increased risk of immediate postoperative morbidity and mortality after coronary surgery, but does not affect late mortality. These results are not conclusive and need to be confirmed by studies of better quality evaluating the impact of indication, timing, type of stents, amount of treated vessels and number of previous percutaneous coronary interventions. abstract_id: PUBMED:19695544 Does prior percutaneous coronary intervention adversely affect early and mid-term survival after coronary artery surgery? Objectives: To determine the association between previous percutaneous coronary intervention (PCI) and results after coronary artery bypass graft surgery (CABG). Background: Increasing numbers of patients undergoing CABG have previously undergone PCI. Methods: We analyzed consecutive first-time isolated CABG procedures within the Australasian Society of Cardiac and Thoracic Surgeons Database from June 2001 to May 2008. Logistic regression and propensity score analyses were used to assess the risk-adjusted impact of prior PCI on in-hospital mortality and major adverse cardiac events. Cox regression model was used to assess the effect of prior PCI on mid-term survival. Results: Of 13,184 patients who underwent CABG, 11,727 had no prior PCI and 1,457 had prior PCI. Mean follow-up was 3.3 +/- 2.1 years. Patients without prior PCI had a higher EuroSCORE value (4.4 +/- 3.3 vs. 3.6 +/- 3.0, p &lt; 0.001), were older, and more likely to have left main stem stenosis and recent myocardial infarction. There was no difference in unadjusted in-hospital mortality (1.65% vs. 1.55%, p = 0.78) or major adverse cardiac events (3.0% vs. 3.0%, p = 0.99) between patients with or without prior PCI. After adjustment, prior PCI was not a predictor of in-hospital (odds ratio: 1.22, 95% confidence interval [CI]: 0.76 to 2.0, p = 0.41) or mid-term mortality at 6-year follow-up (hazard ratio: 0.94, 95% CI: 0.75 to 1.18, p = 0.62). Conclusions: In this large registry study, prior PCI was not associated with increased short- or mid-term mortality after CABG. Good outcomes can be obtained in the group of patients undergoing CABG who have had previous PCI. abstract_id: PUBMED:25919046 Survival difference between coronary bypass surgery and percutaneous coronary intervention. Objectives: To assess whether the previously observed lower death rate with coronary artery bypass surgery compared with percutaneous coronary intervention in subsets of patients with coronary artery disease persists in more recent years. Design: Retrospective study from Feiring Heart Clinic database of survival in 17739 patients followed for 5 years after coronary revascularization. The cohorts treated in 1999-2005 and 2006-2011 were compared using Cox regression and propensity score analyses. Results: Cox regression and propensity score analyses revealed no difference in survival in either time period in one- and two-vessel diseases. In three-vessel disease, the hazard ratios between bypass surgery and percutaneous intervention were 0.62 (95% confidence interval [CI]: 0.53-0.71, p &lt; 0.001) and 0.59 (95% CI: 0.47-0.73, p &lt; 0.001), respectively, in the two time periods, indicating persistent higher survival with bypass surgery. Conclusions: The previously observed lower death rate of coronary artery bypass surgery compared with percutaneous intervention in patients with three-vessel disease is persistent in more recent years and indicates that bypass surgery still should be the standard treatment for these patients. abstract_id: PUBMED:25499479 Coronary artery surgery versus percutaneous coronary intervention in octogenarians: long-term results. Background: The aim of this study was to compare 7-year rates of all-cause death, cardiac death, myocardial infarction, target vessel revascularization, and stroke in a large cohort of octogenarians with left main coronary artery or multivessel disease, treated with coronary artery bypass grafting or percutaneous coronary intervention. Methods: Two propensity score-matched cohorts of patients undergoing revascularization procedures at regional public and private centers of Emilia-Romagna, Italy, from July 2002 to December 2008 were used to compare long-term outcomes of percutaneous coronary intervention (947 patients) and coronary artery bypass grafting (441 patients). Results: There were no significant differences between groups in 30-day mortality. In the follow-up the overall and the matched percutaneous coronary intervention population experienced significantly worse outcomes in terms of cardiac mortality, myocardial infarction, and target vessel revascularization. No difference was found for stroke between treatment groups. Percutaneous coronary intervention was an independent predictor of increased death at long-term follow-up. The subgroups in which coronary artery bypass grafting reduced more clearly the risk of death were age 80 to 85 years, previous myocardial infarction, history of cardiac heart failure, chronic renal failure, peripheral vascular disease, and patients with three-vessel disease associated with the left main coronary artery. Conclusions: In this real-world setting, surgical coronary revascularization remains the standard of care for patients with left main or multivessel disease. The long-term outcomes of current percutaneous coronary intervention technology in octogenarians are yet to be determined with adequately powered prospective randomized studies. abstract_id: PUBMED:30371287 Prognostic Impact of Multiple Prior Percutaneous Coronary Interventions in Patients Undergoing Coronary Artery Bypass Grafting. Background Multiple percutaneous coronary interventions ( PCIs ) are considered determinant of poor outcome in patients undergoing coronary artery bypass grafting ( CABG ), but scarce data exist to substantiate this. Methods and Results Patients who underwent CABG without history of prior PCI or with PCI performed &gt;30 days before surgery were selected for the present analysis from the prospective, multicenter E-CABG (European Multicenter Study on Coronary Artery Bypass Grafting) registry. Out of 6563 patients with data on preoperative SYNTAX (Synergy between PCI With Taxus and Cardiac Surgery) score, 1181 patients (18.0%) had undergone PCI &gt;30 days before CABG . Of these, 11.6% underwent a single PCI , 4.4% 2 PCI s, and 2.1% ≥3 PCI s. PCI of a single main coronary vessel was performed in 11.3%, of 2 main vessels in 4.9%, and of 3 main vessels in 1.6% of patients. Multivariable analysis showed that differences in early mortality and other outcomes were not significantly different in the study cohorts. The adjusted hospital/30-day mortality rate was 1.8% in patients without history of prior PCI , 1.9% in those with a history of 1 PCI , 1.4% after 2 PCI s, and 2.5% after ≥3 PCI s (adjusted P=0.8). The adjusted hospital/30-day mortality rate was 2.0% in those who had undergone PCI of 1 main coronary vessel, 1.3% after PCI of 2 main vessels, and 3.1% after PCI of 3 main coronary vessels (adjusted P=0.6). Conclusions Multiple prior PCI s are not associated with increased risk of early adverse events in patients undergoing isolated CABG . The present results are conditional to survival after PCI and should not be viewed as a support for a policy of multiple PCI as opposed to earlier CABG . Clinical Trial Registration URL : http://www.Clinicaltrials.gov . Unique identifier: NCT 02319083. abstract_id: PUBMED:32653282 Minimally invasive coronary bypass versus percutaneous coronary intervention for isolated complex stenosis of the left anterior descending coronary artery. Objective: Debate continues as to the optimal minimally invasive treatment modality for complex disease of the left anterior descending coronary artery, with advocates for both robotic-assisted minimally invasive direct coronary artery bypass and percutaneous coronary intervention with a drug-eluting stent. We analyzed the midterm outcomes of patients with isolated left anterior descending disease, revascularized by minimally invasive direct coronary artery bypass or drug-eluting stent percutaneous coronary intervention, focusing on those with complex lesion anatomy. Methods: A retrospective review was undertaken of all patients who underwent coronary revascularization between January 2008 and December 2016. From this population, 158 propensity-matched pairs of patients were generated from 158 individuals who underwent minimally invasive direct coronary artery bypass for isolated complex left anterior descending disease and from 373 patients who underwent percutaneous coronary intervention using a second-generation drug-eluting stent. Midterm survival and incidence of repeat left anterior descending intervention were analyzed for both patient groups. Results: Overall 9-year survival was not significantly different between patient groups both before and after propensity matching. Midterm mortality in the matched minimally invasive direct coronary artery bypass group was low, irrespective of patient risk profile. By contrast, advanced age (hazard ratio, 1.10; P = .012) and obesity (hazard ratio, 1.09; P = .044) predicted increased late death after drug-eluting stent percutaneous coronary intervention among matched patients. Patients who underwent minimally invasive direct coronary artery bypass were significantly less likely to require repeat left anterior descending revascularization than those who had percutaneous coronary intervention, both before and after propensity matching. Smaller stent diameter in drug-eluting stent percutaneous coronary intervention was associated with increased left anterior descending reintervention (hazard ratio, 3.53; P = .005). Conclusions: In patients with complex disease of the left anterior descending artery, both minimally invasive direct coronary artery bypass and percutaneous coronary intervention are associated with similar excellent intermediate-term survival, although reintervention requirements are lower after surgery. abstract_id: PUBMED:26277472 Impact of off-pump coronary artery bypass grafting on long-term percutaneous coronary interventions. Objectives: The debate regarding the advantages and limitations of off-pump versus on-pump coronary artery bypass grafting (CABG) has yet to be resolved. This study was designed to compare the impact of surgical technique on long-term mortality and subsequent revascularization. Methods: The Predicting Long-Term Outcomes After Isolated Coronary Artery Bypass Surgery (PRIORITY) project was designed to evaluate the long-term outcomes of 2 large, prospective multicenter cohort studies on CABG conducted in Italy between 2002 and 2004 and in 2007 and 2008. Clinical data on isolated CABG were compiled from 2 administrative databases. Results: The study population consisted of 11,021 patients who underwent isolated CABG (27.2% off-pump CABG). Surgical strategy did not affect in-hospital mortality. Multivariate logistic regression demonstrated that on-pump CABG was the only factor that protected from in-hospital percutaneous coronary intervention after surgery (odds ratio, 0.61). Although unadjusted long-term survival was significantly worse for off-pump CABG, adjustment did not confirm off-pump CABG as a risk factor for mortality (hazard ratio, 0.96; 95% confidence interval, 0.87-1.06). The on-pump CABG group had a significantly lower hospitalization for subsequent percutaneous coronary intervention, a finding confirmed even with adjustment for confounding factors (hazard ratio, 0.70; 95% confidence interval, 0.62-0.80; P &lt; .001). Off-pump CABG thus carried a 42% higher risk for subsequent percutaneous coronary intervention than on-pump CABG. The incidence of repeat CABG was similar between groups. Conclusions: This study demonstrated that off-pump OPCAB did not affect short- and long-term mortality, but it was a significant risk factor for rehospitalization for percutaneous coronary intervention. abstract_id: PUBMED:34410028 Revascularization by percutaneous coronary intervention in a case of left main coronary artery atresia. Left main coronary artery atresia (LMCAA) is a rare congenital anomaly of the coronary artery that may have an unfavorable prognosis if left untreated. Surgical revascularization by either coronary artery bypass grafting or coronary reconstruction osteoplasty is unanimously recognized as the treatment of choice for this condition. Here, we first report a case of LMCAA treated with revascularization by percutaneous coronary intervention (PCI) through a retrograde approach. Intravascular ultrasound revealed an unusual coronary structure for the left main artery. PCI appears to be a feasible and effective alternative to surgery for the treatment of LMCAA, particularly if a minimally invasive method is desired. abstract_id: PUBMED:38038010 Direct Oral Anticoagulants use in Patients with Stable Coronary Artery Disease, Acute Coronary Syndrome or Undergoing Percutaneous Coronary Intervention. The investigation for the optimal anticoagulation strategy for patients with stable coronary artery disease, acute coronary syndromes, and undergoing percutaneous coronary intervention constitutes a great challenge for physicians and is a field of extensive research. Although aspirin is commonly recommended as a protective measure for all patients with coronary artery disease and dual antiplatelet therapy for those undergoing procedures, such as percutaneous coronary intervention or coronary artery bypass graft surgery, the risk of recurrent cardiovascular events remains significant. In this context, the shortcomings associated with the use of vitamin K antagonists have led to the assessment of direct oral anticoagulants as promising alternatives. This review will explore and provide a comprehensive analysis of the existing data regarding the use of direct oral anticoagulants in patients with stable coronary artery disease or acute coronary syndrome, as well as their effectiveness in those undergoing percutaneous coronary intervention or coronary artery bypass graft surgery. abstract_id: PUBMED:28213668 Effect of Percutaneous Coronary Intervention on Survival in Patients with Stable Ischemic Heart Disease. Purpose Of Review: This study aims to determine if percutaneous coronary intervention (PCI) does improve survival in stable ischemic heart disease (SIHD). Recent Findings: The International Study of Comparative Health Effectiveness with Medical and Invasive Approaches (ISCHEMIA) trial will evaluate patients with moderate to severe ischemia and will be the largest randomized trial of an initial management strategy of coronary revascularization (percutaneous or surgical) versus optimal medical therapy alone for SIHD. Although the ISCHEMIA trial may show a benefit with upfront coronary revascularization in this high-risk population, cardiac events after PCI are largely caused by plaque rupture in segments outside of the original stented segment. Furthermore, given the robust data from prior randomized trials, which showed no survival benefit with PCI, and the likelihood that the highest risk patients in ISCHEMIA will be treated with surgery, it is unlikely that the ISCHEMIA trial will show a survival benefit particular to PCI. Recent Findings: Although PCI relieves symptoms, the evidence base indicates that it does not prolong survival in SIHD. Answer: The impact of prior percutaneous coronary intervention (PCI) on early and mid-term survival after coronary artery bypass grafting (CABG) has been explored in several studies with varying results. One systematic review and meta-analysis found that prior PCI was associated with higher risk of immediate postoperative morbidity and mortality after coronary surgery, but did not affect late mortality (PUBMED:25436206). However, another large registry study concluded that prior PCI was not associated with increased short- or mid-term mortality after CABG, suggesting that good outcomes can be obtained in patients undergoing CABG who have had previous PCI (PUBMED:19695544). Further research indicated that multiple prior PCIs are not associated with increased risk of early adverse events in patients undergoing isolated CABG (PUBMED:30371287). This suggests that the number of previous PCIs does not necessarily worsen the prognosis for patients who later undergo CABG. In summary, while some evidence suggests an increased risk of immediate postoperative complications and mortality in patients with prior PCI undergoing CABG, other studies have found no significant impact on early and mid-term survival. These discrepancies highlight the need for more high-quality research to conclusively determine the effects of prior PCI on CABG outcomes.
Instruction: RET germline mutation in codon 791 in a family representing 3 generations from age 5 to age 70 years: should thyroidectomy be performed? Abstracts: abstract_id: PUBMED:15251615 RET germline mutation in codon 791 in a family representing 3 generations from age 5 to age 70 years: should thyroidectomy be performed? Objective: To describe a kindred with a rare RET germline mutation in codon 791 and discuss potential management strategies. Methods: We present clinical and biochemical data as well as results of mutation analysis in our study subjects and provide an overview of related published reports. Results: Multiple endocrine neoplasia type 2 (MEN 2) is a familial cancer syndrome characterized by the development of medullary thyroid carcinoma (MTC), pheochromocytoma, and parathyroid hyperplasia or adenoma. Germline mutations in RET are responsible for this autosomal dominant syndrome. Familial MTC is a variant of MEN 2A and can be caused by RET mutations in codon 791. Deaths from gene carriers with mutations in these codons have not yet been reported. In general, gene carriers with these RET mutations have late-onset MTC. Because only a few kindreds with this specific mutation have been identified and no long-term follow-up data are available, management of these patients can be a challenge. We illustrate the difficulties with decisions about not only when to perform thyroidectomy in these patients but also whether thyroidectomy should even be considered in such gene carriers with a benign course. Our reported kindred included four carriers with a codon 791 RET germline mutation, one of whom had the rare concomitant occurrence of acromegaly and MEN 2A. The 70-year-old mother had acromegaly and hyperparathyroidism but normal serum calcitonin levels and normal findings on thyroid ultrasound examination. She refused pentagastrin testing and any surgical intervention. The 37-year-old daughter had hypothyroidism, a small thyroid gland, and negative results of pentagastrin stimulation testing of calcitonin. The 18-year-old grandson also had a negative pentagastrin test result and normal thyroid ultrasound findings. The 5-year-old granddaughter had normal results of thyroid ultrasonography. In all patients, we recommended thyroidectomy. Conclusion: Prospective studies are needed to clarify which patients with codon 791 RET germline mutation should undergo thyroidectomy. abstract_id: PUBMED:16053382 Penetrance and clinical manifestations of non-hotspot germline RET mutation, C630R, in a family with medullary thyroid carcinoma. Germline mutations in specific hot spot-codons of the RET proto-oncogene are associated with multiple endocrine neoplasia type 2 (MEN 2). Clinical RET gene testing has been routine for the last 10 years in some countries. In Argentina, RET testing excluding MEN 2B was always reported with a mutation at codon 634, with one exception: we described a novel mutation T &gt; C transition at codon 630 (C630R), the family to which we extend the study in the present report. This family comprised 29 members in four generations including 6 individuals affected with medullary thyroid cancer (MTC), positive for the C630R mutation and normal adrenaline/ noradrenaline and ionized calcium/parathyroid hormone levels. Two asymptomatic mutation carriers aged 5 and 11 years underwent total thyroidectomy. The histopathologic examination showed C-cell hyperplasia and microcarcinoma foci, while preoperative basal calcitonins were normal for both. Our report emphasizes the importance of testing for non-hot spot RET mutations in apparently mutation negative MEN 2 families. Furthermore, it would appear that C630R mirrors C634R in penetrance (100% in this family) and in early age of onset of MTC, although paradoxically, no pheochromocytomas and hyperparathyroidism have developed. In addition to recommending RET testing before 5 years of age; we also can postulate that codon 630 may be the key point along the extracellular domain, important in the tissue-specific penetrance. abstract_id: PUBMED:24617864 Prevalence by age and predictors of medullary thyroid cancer in patients with lower risk germline RET proto-oncogene mutations. Background: Age-related risk of medullary thyroid carcinoma (MTC) development in presymptomatic carriers of lower risk germline RET mutations is uncertain; such data may aid counseling patients regarding timing of thyroidectomy. Methods: From an institutional database and an exhaustive literature review, we identified 679 patients with American Thyroid Association (ATA) level A or B mutations who were identified because of family screening (index cases of MTC were excluded to minimize selection bias). We evaluated age at thyroidectomy or last evaluation if no thyroidectomy, preoperative calcitonin level (elevated or not), the mutated codon, and outcome (MTC vs. no MTC after thyroidectomy or no clinical evidence of MTC if thyroid intact). Data were used to estimate the cumulative prevalence of MTC and/or assess likelihood of MTC stratified by codon. After exclusion of cases with missing data or small representation, 503 patients with mutations in codons 533, 609, 611, 618, 620, 791, and 804 were analyzed. Results: 236 patients had MTC. Cumulative prevalence and median time to MTC varied by codon and within ATA risk levels (p&lt;0.0001). Patients with a codon 620 mutation were 2.8-6.9 times more likely to have MTC than other level B mutation carriers, and 5.1-21.7 times more likely than level A mutation carriers included in our focus population. The youngest median time to MTC was 19 years for codon 620 and the oldest was 56 years for codon 611. Cumulative prevalence of MTC by age 20 was 10% or lower for codons 533, 609, 611, 791, and 804. By age 50, it ranged from 18% for codon 791 to 95% for codon 620. An elevated preoperative calcitonin level strongly predicted MTC on final pathology, though false-negative rates varied by codon (p&lt;0.0001). Positive predictive values ranged from 76% to 100% by codon with an overall positive predictive value of 87% across codons. Conclusions: This study offers a better understanding of the age-related development of MTC in lower risk RET mutation carriers, provides evidence of further distinctions between lower risk mutations within ATA subgroups, and clarifies the clinical significance of codon 791 mutations. The data support individualized "codon-based" management approaches coupled with clinical data such as calcitonin levels. abstract_id: PUBMED:11386462 A family of multiple endocrine neoplasia type 2A with the RET proto-oncogene mutation in codon 618 (Cys--&gt;Arg). Multiple endocrine neoplasia type 2 (MEN-2) is a hereditary syndrome characterized by medullary thyroid carcinoma (MTC), pheochromocytoma and hyperplasia or adenoma of the parathyroid gland with hyperparathyroidism. Recent genetic studies have identified the presence of germline missense mutations in the RET proto-oncogene in almost 100% of MEN-2 patients. We report here three generations of one MEN-2 family with rare missense mutation at codon 618 (Cys--&gt;Arg) of the RET proto-oncogene. The first patient was surgically treated at the age of 63 years but died of bone metastasis. His two children (29-year-old daughter and 25-year-old son) were treated surgically for MTC and neck lymph node metastasis. Germline mutations of the RET proto-oncogene of these three MTC patients and two children of the 29-year-old daughter (9-year-old female and 7-year-old male) were examined. Three MTC patients and the 9-year-old female possessed the mutation. The phenotype of the family with this rare point mutation of the RET proto-oncogene is reported. abstract_id: PUBMED:9360560 Germline dinucleotide mutation in codon 883 of the RET proto-oncogene in multiple endocrine neoplasia type 2B without codon 918 mutation. The autosomal dominant multiple endocrine neoplasia type 2 syndromes (MEN 2) comprise three clinically distinct entities, MEN 2A, familial medullary thyroid carcinoma and MEN 2B, which share a common clinical feature: medullary thyroid carcinoma (MTC). MEN 2B is considered to have the most aggressive form of MTC. Therefore, early detection of MEN 2B in order to prevent potentially lethal MTC is important. More than 95% of all MEN 2B cases are caused by germline mutation at codon 918 (M918T) in exon 16 of the RET proto-oncogene. In this study, we demonstrate the presence of germline codon 883 mutation (A883F) in 2 of 3 unrelated MEN 2B cases without codon 918 mutation. Our data demonstrate a novel etiologic event which may have roles in predisposition to MEN 2B when present in the germline and in the pathogenesis of sporadic MTC when somatic. abstract_id: PUBMED:34441382 Late-Onset Medullary Thyroid Cancer in a Patient with a Germline RET Codon C634R Mutation. Background: Multiple endocrine neoplasia type 2A (MEN2A) is a rare, hereditary syndrome resulting from a germline mutation in the RET proto-oncogene and characterized primarily by medullary thyroid cancer (MTC), pheochromocytoma (PHEO), and hyperparathyroidism. Types of RET mutation have been associated with age at onset, clinical outcomes of MTC, and the penetrance of other components. Patients classified as 'high-risk' by the American Thyroid Association (ATA), based on the aggressiveness of MTC and the penetrance of other components, are recommended to undergo early prophylactic thyroidectomy at age ≤ 5 years and to be screened for PHEO at age ≥ 11 years. Patients with RET codon C634R mutations have been classified as high-risk. Case presentation: The present study describes a 71-year-old woman newly diagnosed with hereditary MTC related to a RET C634R germline mutation. Her basal serum calcitonin level was high, but there was no evidence of distant metastases. Surgery revealed bilateral MTC with two metastatic lymph nodes. Because microscopic resection was incomplete and extranodal extension was observed, the patient underwent adjuvant external beam radiotherapy. Response to therapy was excellent. Follow-up after 1.5 years showed no evidence of disease or other manifestations of MEN2A. Conclusion: Despite RET C634R carriers being classified as high-risk by the ATA, this patient did not present with either distant MTC or PHEO until her seventies. To our knowledge, only one other patient has shown a similar late identification of a RET C634R mutation, but MTC could not be diagnosed because the patient was lost to follow-up. Further research is required to develop optimal protocols that could allow patients requiring prophylactic thyroidectomy to be differentiated from those who can be monitored closely without early surgery. abstract_id: PUBMED:12193298 Papillary thyroid carcinoma in patients with RET proto-oncogene germline mutation. The occurrence of papillary thyroid carcinoma in patients with RET germline mutations has been described in only eight cases since 1993. We report three women with a RET germline mutation in exon 13 and 14, affecting codon 790, 791, and 804, respectively, who underwent prophylactic thyroidectomy at the age of 29, 39, and 24 years, respectively. Histologic examination revealed C-cell hyperplasia and a small medullary thyroid carcinoma in the first patient and no pathologic changes of the C-cells in either of the other patients. However, all patients had papillary thyroid carcinoma (PTC). Concerning the frequency of PTC in patients with RET germline mutations who underwent surgery at our center (n = 104), it was found in 9.1% of all patients with RET mutation in codon 790, 791, and 804 (n = 33) but in none of the 104 patients with RET germline mutations not affecting codon 790, 791, or 804 (p = 0.0015). Our data and the data from the literature suggest a possible pathogenesis of PTC caused by exon 13 and 14 RET mutations that affect the intracellular domain of the encoded protein. Further investigation is necessary to confirm a potential pathogenetic role of exon 13 and 14 RET mutations with regard to the development of PTC. abstract_id: PUBMED:12409662 Various penetrance of familial medullary thyroid carcinoma in patients with RET protooncogene codon 790/791 germline mutations. Objective: To describe a genotype-phenotype correlation in MEN2 families with germline mutations of codons 790/791 and discuss options for the therapeutic management of gene carriers. Summary Background Data: Heredity of MEN2 syndromes is caused by a heterozygous germline mutation in the protooncogene. Rare mutations of codons 790/791 associated with incomplete penetrant MEN2A/FMTC phenotype were reported in five families, contraindicating the prophylactic thyroidectomy for the genetically affected children. Methods: Forty-five patients with a putative sporadic MTC were screened for germline mutations by direct DNA sequencing. Family members of identified index cases underwent genetic analysis. Gene carriers were examined clinically and biochemically, and all gene carriers underwent prophylactic thyroidectomy. Results: Five index patients were identified, four of whom harbored mutations in codons 790/791 and one in codon 634. In the kindreds, four L790F carriers and one Y791F carrier were detected. The thyroid gland histology of L790F carriers revealed medullary thyroid carcinoma in two patients (aged 29 and 50 years) and C-cell hyperplasia in two additional patients (aged 9 and 16 years). The Y791F carrier had a normal histology. Conclusions: Codon 790/791 mutations had diverse penetrance. Whereas prophylactic thyroidectomy in children is a justifiable approach for codon 790 mutation carriers, the indication for thyroidectomy should depend on the clinical course of codon 791 carriers. abstract_id: PUBMED:18248648 Difference in development of medullary thyroid carcinoma among carriers of RET mutations in codons 790 and 791. Objectives: Hereditary medullary thyroid carcinoma (MTC) is caused by germ-line mutations in the RET proto-oncogene. Our study addresses the difference in development of MTC between rare mutations in RET codons 790, 791 and 804. Design: We evaluated tumour stage, calcitonin levels, biochemical cure rates and associated endocrinopathies in 153 German/Austrian patients with RET 790 (n = 47), 791 (n = 56) and 804 mutations (n = 50), divided into index- and screening groups. Results: Age at diagnosis in index-patients did not differ significantly among the three codon groups (medians of 57, 61 and 53 years). Tumour stage at diagnosis was significantly less advanced with codon 791 (n = 22) than 790 (n = 16) and 804 (n = 16) mutations (P = 0.001). In screening patients, age at diagnosis did not differ significantly among the three groups (medians 19, 24 and 32 years). Tumour stage at diagnosis was also significantly less advanced with codon 791 (n = 34) than 790 (n = 31) and 804 (n = 34) (P = 0.032). Preoperative basal calcitonin levels were significantly lower in codon 791 carriers compared to codon 790 carriers, and cure rates were significantly higher in both index (75%vs. 31%; P = 0.03) and screening patients (100%vs. 75%; P = 0.015). Additional endocrinopathies were observed only with codon 791 carriers (four pheochromocytomas and two hyperparathyroidism). Conclusion: There is a significant difference in MTC development with less extensive C-cell disease, higher cure rate and more frequent additional endocrinopathies in carriers of RET codon 791 mutations compared with carriers of codons 790 and 804 mutations. This information should be considered when age of prophylactic thyroidectomy is discussed. abstract_id: PUBMED:37223679 Case Report: Lung adenocarcinoma associated with germline ERCC2 frameshift mutation. Family history is an established risk factor for lung cancer. Previous studies have found that germline genetic alterations, such as those in EGFR, BRCA1, BRCA2, CHEK2, CDKN2A, HER2, MET, NBN, PARK2, RET, TERT, TP53, and YAP1, are associated with an increased risk of developing lung cancer. The study reports the first of a lung adenocarcinoma proband with germline ERCC2 frameshift mutation c.1849dup (p. A617Gfs*32). Her family cancer history review demonstrated that her two healthy sisters, a brother with lung cancer, and three healthy cousins were positive for ERCC2 frameshift mutation, which might contribute to increased cancer risk. Our study highlights the necessity of performing comprehensive genomic profiling in discovering rare genetic alterations, early cancer screening, and monitoring for patients with family cancer history. Answer: The decision to perform thyroidectomy in patients with RET germline mutations, particularly in codon 791, is complex and requires careful consideration of various factors. The RET germline mutation in codon 791 is associated with multiple endocrine neoplasia type 2 (MEN 2), which is characterized by the development of medullary thyroid carcinoma (MTC), pheochromocytoma, and parathyroid hyperplasia or adenoma (PUBMED:15251615). However, the clinical course of patients with this specific mutation can be benign, and the onset of MTC tends to be late (PUBMED:15251615). In the described family, the 70-year-old mother had normal serum calcitonin levels and normal thyroid ultrasound findings, while the 37-year-old daughter and the 18-year-old grandson had negative pentagastrin stimulation testing of calcitonin and normal thyroid ultrasound findings. The 5-year-old granddaughter also had normal thyroid ultrasonography results. Despite these findings, thyroidectomy was recommended for all patients (PUBMED:15251615). The penetrance and clinical manifestations of RET mutations can vary, and some mutations, such as C630R, have shown high penetrance and early onset of MTC in other families (PUBMED:16053382). However, for codon 791, the cumulative prevalence of MTC by age 50 ranges from 18%, indicating a lower risk compared to other mutations (PUBMED:24617864). An elevated preoperative calcitonin level is a strong predictor of MTC, but false-negative rates can vary by codon (PUBMED:24617864). Given the variability in clinical presentation and the potential for a benign course in some patients with codon 791 mutations, the decision to perform thyroidectomy should be individualized. Prospective studies are needed to clarify which patients with codon 791 RET germline mutations should undergo thyroidectomy (PUBMED:15251615). It is important to consider the age of the patient, the presence of clinical symptoms, biochemical test results, and the potential risks and benefits of surgery. Close monitoring and individualized "codon-based" management approaches, coupled with clinical data such as calcitonin levels, may be appropriate for some patients (PUBMED:24617864).
Instruction: Nasal bone hypoplasia: is it a common finding amongst chromosomally normal fetuses of southern Chinese women? Abstracts: abstract_id: PUBMED:12937342 Nasal bone hypoplasia: is it a common finding amongst chromosomally normal fetuses of southern Chinese women? Objective: To assess the incidence of nasal hypoplasia among fetuses of southern Chinese women. Methods: This prospective observational study was conducted at Princess Margaret Hospital, Hong Kong. All pregnant women referred for chromosomal studies before 14 weeks were assessed. The crown-rump length, nuchal translucency thickness, and nasal bone were assessed between 11 and 14 weeks. Results: A total of 143 women were recruited. In 119 fetuses, facial profiles were successfully obtained and the presence or absence of nasal bones was assessed. One hundred and fourteen fetuses had normal karyotypes. The rate of absence nasal bone when assessed between 11-14 weeks was 0.88% in the fetuses with a normal karyotype. During the same period, 2 out of 3 fetuses with Down syndrome had absence of nasal bones. A nasal bone was visualized in one fetus with Edward syndrome and the other with Turner syndrome (45 X). Conclusion: The incidence of nasal hypoplasia is not increased among fetuses of southern Chinese women. Assessment of ossification of the nasal bone can be used to supplement first-trimester screening in Chinese women. abstract_id: PUBMED:12389656 Fetal nasal bone length in chromosomally normal and abnormal fetuses at 11-14 weeks of gestation. Objective: To determine the value of measuring fetal nasal bone length at 11-14 weeks of gestation in screening for chromosomal defects. Methods: The fetal profile was examined and the nasal bone length was measured in 1092 fetuses immediately before chorionic villous sampling for karyotyping at 11-14 weeks of gestation. Results: The median gestation was 12 (11-14) weeks. The fetal profile was successfully examined in all cases. The fetal karyotype was normal in 955 pregnancies and abnormal in 137, including 79 cases of trisomy 21. In the chromosomally normal group, the fetal nasal bone length increased significantly with crown-rump length (CRL) from a mean of 1.3 mm at a CRL of 45 mm to 2.1 mm at a CRL of 84 mm. In 54 of the 79 (68.4%) cases of trisomy 21, the nasal bone was absent. In the 25 cases with present nasal bone, the nasal bone length for the CRL was not significantly different from normal. Similarly, there were no significant differences from normal in the nasal bone length of fetuses with other chromosomal defects. Conclusions: At 11-14 weeks of gestation, the nasal bone length of chromosomally abnormal fetuses is not significantly different from normal. abstract_id: PUBMED:15027007 Likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan. Objective: To update the likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan. Methods: Ultrasound examination of the fetal profile was carried out and the presence or absence of the nasal bone was noted immediately before karyotyping in 5918 fetuses at 11 to 13+6 weeks. Logistic regression analysis was used to examine the effect of maternal ethnic origin and fetal crown-rump length (CRL) and nuchal translucency (NT) on the incidence of absent nasal bone in the chromosomally normal and trisomy 21 fetuses. Results: The fetal profile was successfully examined in 5851 (98.9%) cases. In 5223/5851 cases the fetal karyotype was normal and in 628 cases it was abnormal. In the chromosomally normal group the incidence of absent nasal bone was related first to the ethnic origin of the mother, being 2.2% for Caucasians, 9.0% for Afro-Caribbeans and 5.0% for Asians; second to fetal CRL, being 4.7% for CRL of 45-54 mm, 3.4% for CRL of 55-64 mm, 1.4% for CRL of 65-74 mm and 1% for CRL of 75-84 mm; and third to NT, being 1.6% for NT &lt; or = 95th centile, 2.7% for NT &gt; 95th centile-3.4 mm, 5.4% for NT 3.5-4.4 mm, 6% for NT 4.5-5.4 mm and 15% for NT &gt; or = 5.5 mm. In the chromosomally abnormal group there was absent nasal bone in 229/333 (68.8%) cases with trisomy 21 and in 95/295 (32.2%) cases with other chromosomal defects. Logistic regression analysis demonstrated that in the chromosomally normal fetuses significant independent prediction of the likelihood of absent nasal bone was provided by CRL, NT and Afro-Caribbean ethnic group, and in the trisomy 21 fetuses by CRL and NT. The likelihood ratio for trisomy 21 for absent nasal bone was derived by dividing the likelihood in trisomy 21 by that in normal fetuses. Conclusion: At the 11-14-week scan the incidence of absent nasal bone is related to the presence or absence of chromosomal defects, CRL, NT and ethnic origin. abstract_id: PUBMED:14558019 Ultrasound evaluation of fetal nasal bone at 11 to 14 weeks in a consecutive series of 1906 fetuses. Objective: The aim of this study is to evaluate the significance of nasal bone ossification as a marker for trisomy 21 at 11 to 14 weeks' gestation in an unselected obstetric population referred to our Centre. Methods: A total of 1906 consecutive fetuses undergoing nuchal translucency scan at 11 to 14 weeks' gestation were evaluated for the presence of hypoplasia/absence of nasal bone. The data obtained were correlated with fetal karyotype. Results: A successful view of the fetal profile was obtained in 1752 fetuses (91.9%). The nasal bone was hypoplastic/absent in 12 of 19 fetuses with chromosomal abnormalities. There were 10 cases of trisomy 21, in 8 of which hypoplastic/absent nasal bone was observed. Furthermore, absence of nasal bone was recorded in 24 of 1733 chromosomally normal fetuses. Conclusions: Nasal bone evaluation may improve the detection of trisomy 21 in the first trimester in an unselected obstetric population. Although numerically limited, our experience confirms that delayed nasal bone ossification (hypoplasia/absence of nasal bone) is rarely observed in chromosomally normal fetuses (1.4%). An appropriate training of operators is mandatory in order to achieve an acceptable performance. abstract_id: PUBMED:17357344 Efficiency of measuring nasal bone as an ultrasound marker of Down syndrome in 11th to 13th+6 week of pregnancy Objectives: The aim of this study is to evaluate the significance of nasal bone as a marker for trisomy 21 in the group of women that underwent invasive procedures in our center at 11 to 14 weeks' gestation. Methods: The data of 181 women who had undergone the invasive procedures were evaluated for the presence or absence of nasal bone retrospectively and were correlated with fetal karyotype. Results: A successful view of the fetal profile was obtained in 135 fetuses. The nasal bone was absent in 5 of 8 fetuses with trisomy 21 and in 3 of 4 fetuses with trisomy 18. In the group of chromosomally normal fetuses the nasal bone was absent in 4 of the 123 cases. The false positive rate of our screening program dropped from 4.5% to 2.5% after the introduction of the nasal bone evaluation into our risk calculation model for trisomy 21. Conclusion: Nasal bone evaluation improved the detection of trisomy 21 in the first trimester in our screening program and reduced the need for invasive procedures in our department. Absence of the nasal bone showed a sensitivity of 63% for a 5% false positive rate for trisomy 21 in our study. It is rarely observed in chromosomally normal fetuses (2.5%). An appropriate training is mandatory in order to achieve acceptable results. abstract_id: PUBMED:12528155 Nasal bone hypoplasia in trisomy 21 at 15-22 weeks' gestation. Objective: To investigate the potential value of ultrasound examination of the fetal profile for present/hypoplastic fetal nasal bone at 15-22 weeks' gestation as a marker for trisomy 21. Methods: This was an observational ultrasound study in 1046 singleton pregnancies undergoing amniocentesis for fetal karyotyping at 15-22 (median, 17) weeks' gestation. Immediately before amniocentesis the fetal profile was examined to determine if the nasal bone was present or hypoplastic (absent or shorter than 2.5 mm). The incidence of nasal hypoplasia in the trisomy 21 and the chromosomally normal fetuses was determined and the likelihood ratio for trisomy 21 for nasal hypoplasia was calculated. Results: All fetuses were successfully examined for the presence of the nasal bone. The nasal bone was hypoplastic in 21/34 (61.8%) fetuses with trisomy 21, in 12/982 (1.2%) chromosomally normal fetuses and in 1/30 (3.3%) fetuses with other chromosomal defects. In 3/21 (14.3%) trisomy 21 fetuses with nasal hypoplasia there were no other abnormal ultrasound findings. In the chromosomally normal group hypoplastic nasal bone was found in 0.5% of Caucasians and in 8.8% of Afro-Caribbeans. The likelihood ratio for trisomy 21 for hypoplastic nasal bone was 50.5 (95% CI 27.1-92.7) and for present nasal bone it was 0.38 (95% CI 0.24-0.56). Conclusion: Nasal bone hypoplasia at the 15-22-week scan is associated with a high risk for trisomy 21 and it is a highly sensitive and specific marker for this chromosomal abnormality. abstract_id: PUBMED:18599011 Fetal nasal bone status in Chinese women undergoing first-trimester screening for trisomy 21. Objective: The objective of the study was to determine the likelihood ratio for trisomy 21 in Chinese fetuses with absent nasal bone (NB) at 11-14 weeks of gestation. Study Design: The NB was classified as present or absent in 8101 chromosomally/phenotypically normal or trisomy 21 pregnancies. The effect of crown rump length (CRL), nuchal translucency (NT), maternal age, and fetal karyotype on the incidence of absent NB was examined using logistic regression. Results: The NB was successfully examined in 7925 cases (97.8%). The incidence of absent NB in unaffected and trisomy 21 cases were 2.1% (164/7899) and 50% (13/26), respectively. CRL (odds ratio [OR], 0.9; 95% confidence interval [CI], 0.87-0.92), NT (OR, 1.7; 95% CI, 1.4-2.3), and trisomy 21 (OR, 22.3; 95% CI, 8.1-61) were significant independent predictors of likelihood of absent NB. Positive likelihood ratios (LR) for trisomy 21 in Chinese were lower than that of Caucasians for the equivalent CRL and NT measurements. Conclusion: Estimated risk for trisomy 21 in Chinese incorporating nasal bone status should use LRs derived in Chinese rather than Caucasians. abstract_id: PUBMED:14770395 Ultrasonographic measurement of fetal nasal bone in a low-risk population at 19-22 gestational weeks. Objective: To determine the potential value of sonographic measurement of fetal nasal bone at 19-22 weeks' gestation in screening for trisomy 21 in a low-risk population. Methods: The fetal nasal bone was measured in a mid-sagittal view in 2035 fetuses at 19-22 weeks' gestation. A reference range was constructed and the measurements in fetuses with trisomy 21 were compared to the normal group. Results: The fetal profile was successfully examined in 1913/2035 (94%) fetuses. The mean nasal bone length increased linearly with gestation from 6.2 mm at 19 weeks to 6.8 mm at 22 weeks. Nasal bone hypoplasia, defined by absence of the bone or a measurement below the 2.5th centile, was observed in 34/1899 (1.8%) chromosomally normal fetuses (1.8%), in 5/5 fetuses with trisomy 21 and in 0/9 fetuses with other chromosomal defects. Conclusion: At 19-22 weeks' gestation, nasal bone hypoplasia is observed in a high proportion of trisomy 21 fetuses and in less than 2% of chromosomally normal fetuses. abstract_id: PUBMED:16724362 First-trimester examination of fetal nasal bone in the Chinese population. Objectives: Absence of the nasal bone in fetuses with trisomy 21 is a potential ultrasound marker for Down syndrome from the first trimester onwards. Racial differences in fetal nasal bone length have been reported. There was no reference range for the first-trimester fetal nasal bone length (NBL) in the Chinese population. Methods: From January 2002 to February 2005, we investigated the reference range of NBL in 2169 singleton fetuses whose parents were Chinese with normal pregnancy outcome. Ultrasound measurements of NBL were performed on a midsagittal plane at 12-14 weeks' gestation. Results: The nasal bone length measurement showed a significant increase with gestational age (GA) (p &lt; 0.05). A linear relationship between nasal bone length and gestational age (NBL = -3.3462 x GA + 0.627, R(2) = 0.30, p &lt; 0.001) as well as a linear relationship between nasal bone length and crown-rump length (CRL) (NBL = 0.3741 x CRL + 0.284, R(2) = 0.30, P &lt; 0.001) was established. Conclusion: The measurement of nasal bone length was feasible in the first trimester. The reference range of NBL in normal Chinese fetuses in the first-trimester of pregnancy was established. This formed a basis for further study on the use of fetal nasal bone measurement in the screening for aneuploidy in the Chinese population. abstract_id: PUBMED:11728540 Absence of nasal bone in fetuses with trisomy 21 at 11-14 weeks of gestation: an observational study. Background: Prenatal diagnosis of trisomy 21 requires an invasive test in women regarded as being at high risk after screening. At present there are four screening tests, and for a 5% false-positive rate, the sensitivities are about 30% for maternal age alone, 60-70% for maternal age and second-trimester maternal serum biochemical testing, 75% for maternal age and first-trimester fetal nuchal translucency scanning, and 85% for maternal age with fetal nuchal translucency and maternal serum biochemistry at 11-14 weeks. In this study, we examined the possible improvement in screening for trisomy 21 by examining the fetal nasal bone with ultrasound at 11-14 weeks of gestation. Methods: We did an ultrasound examination of the fetal profile in 701 fetuses at 11-14 weeks' gestation immediately before karyotyping for a possible chromosomal abnormality detected by maternal age and fetal nuchal translucency screening. The presence or absence of a nasal bone was noted. Findings: The fetal profile was successfully examined in all cases. The nasal bone was absent in 43 of 59 (73%) trisomy 21 fetuses and in three of 603 (0.5%) chromosomally normal fetuses. The likelihood ratio for trisomy 21 was 146 (95% CI 50-434) for absent nasal bone and 0.27 (0.18-0.40) for present nasal bone. In screening for trisomy 21, by a combination of maternal age and fetal nuchal translucency, we estimated that inclusion of examination of the fetal profile for the presence or absence of nasal bone could increase the sensitivity to 85% and decrease the false-positive rate to about 1%. Interpretation: In screening for trisomy 21, examination of the fetal nasal bone could result in major reduction in the need for invasive testing and a substantial increase in sensitivity. Answer: Nasal bone hypoplasia is not a common finding amongst chromosomally normal fetuses of southern Chinese women. A study conducted at Princess Margaret Hospital, Hong Kong, found that the rate of absence of nasal bone when assessed between 11-14 weeks was only 0.88% in fetuses with a normal karyotype (PUBMED:12937342). This suggests that the incidence of nasal hypoplasia is not increased among fetuses of southern Chinese women, and assessment of ossification of the nasal bone can be used to supplement first-trimester screening in Chinese women.
Instruction: Is there a survival benefit to adjuvant radiotherapy in high-risk surgical stage I endometrial cancer? Abstracts: abstract_id: PUBMED:12217745 Is there a survival benefit to adjuvant radiotherapy in high-risk surgical stage I endometrial cancer? Objective: The aim of this study was to examine the effects of therapeutic modalities on survival of stage I endometrial cancer and also to evaluate the surgical morbidity and the prognostic importance of surgicopathological variables. Methods: A hundred and ninety-six stage I endometrial cancer patients treated at Hacettepe University Hospital between 1982 and 1997 were included. After initial diagnosis all patients underwent surgical procedures including peritoneal cytology, infracolic omentectomy, total abdominal hysterectomy, bilateral salphingoopherectomy, and complete pelvic-paraaortic lymphadenectomy. The mean age at initial diagnosis was 56 years (SD = 9.9 years) and the patients were followed 3-18 years (median, 8 years). All patients had endometrioid carcinoma. Stage IC and/or grade 3 tumors were considered high-risk factors and by this definition 147 (75%) patients were low risk and 49 (25%) patients were high risk. Forty-nine percent of high-risk patients received adjuvant radiotherapy compared with 3.5% of patients in the low-risk group. Results: The 10-year disease-free and overall survival rates of the entire group were 97 and 98%, respectively. Ten-year overall survival rate for the low-risk group was 100% compared with 94% for patients with high-risk features (P = 0.002). The 10-year disease-free survival rate in the high-risk group was 96% for 24 patients who received adjuvant radiotherapy versus 92% for 25 patients who did not receive adjuvant therapy (P = 0.53). Only high grade was a significant predictor of poor survival (P = 0.0004). Overall surgical morbidity rate was 8.1% without mortality related to surgery. Conclusions: Surgical staging achieved excellent survival for stage I endometrial cancer patients without incurring untoward morbidity and mortality. No survival advantage of adjuvant radiotherapy was detected even for high-risk patients, so we suggest the use of radiotherapy may be reserved for relapse. abstract_id: PUBMED:23691474 Adjuvant Pelvic Radiotherapy vs. Sequential Chemoradiotherapy for High-Risk Stage I-II Endometrial Carcinoma. Objective: To explore if the addition of adjuvant chemotherapy with paclitaxel and carboplatin to radiotherapy confers an advantage for overall survival (OAS), and progression free survival (PFS); to assess the incidence of relapses over standard pelvic radiotherapy; and to evaluate the related toxicity in high-risk stage I-II endometrial carcinoma. Methods: Medical records were reviewed to identify high-risk stage I-II endometrial carcinoma cases treated in the Clinical Oncology and Nuclear Medicine department between 2002 and 2008 with adjuvant radiotherapy alone (arm I) (57 patients) or with sequential carboplatin (AUC5-6) and paclitaxel (135-175 mg/m(2)) with radiotherapy (arm II) (51 patients). Radiotherapy was performed through the four-field box technique at doses of 45-50 Gy (1.8 Gy/day × 5 days/week). Results: The toxicity was manageable and predominantly hematologic with a grade 3 neutropenia and thrombocytopenia in 9.8% and 6% of the patients in arm I and arm II, respectively, without febrile neutropenia. All patients experienced hair loss. Chemoradiotherapy arm was associated with a lower incidence rate of relapse (9.8% vs. 22.7%). After a median follow-up period of 48 months, the 5-year OAS and PFS rates for chemoradiotherapy-treated patients were significantly more favorable than those who did not receive chemotherapy (P=0.02 and 0.03, respectively). In arm I, the OAS and PFS rates were 73.7% and 66.7% compared with those in arm II, whose rates were 90.2% and 84.3%. Conclusions: Adjuvant chemoradiation with paclitaxel and carboplatin improved the survival rates and decreased the recurrence rates in patients with high-risk stage I-II endometrial carcinoma. Chemotherapy was associated with an acceptable rate of toxicity. However, a prospective study with a larger number of patients is needed to define a standard adjuvant treatment for high-risk stage I-II endometrial carcinoma. abstract_id: PUBMED:28012794 Improved overall survival with adjuvant radiotherapy for high-intermediate and high risk Stage I endometrial cancer. Background/purpose: To perform a large analysis of Stage I endometrioid-type endometrial cancer patients to determine the impact of adjuvant radiotherapy (ART) on survival. Material/methods: 132,976 FIGO Stage I endometrioid-type endometrial cancer patients treated surgically were identified within the National Cancer Database (NCDB) comprising Commission on Cancer facilities in the United States. Patients were categorized as observation (OBS) or ART (vaginal brachytherapy, external beam radiotherapy, or both). Univariable generalized linear mixed effects models were used to estimate the odds of receiving ART, and a multivariable frailty survival model was used to estimate the instantaneous hazard of death for those receiving OBS versus ART. Due to the presence of a significant interaction, these estimates were stratified by PORTEC-based low, low-intermediate, high-intermediate, and high risk groups. Results: 104,645 (79%) underwent OBS while 28,331 (21%) received ART. Of those receiving ART, 12,913 (46%) received VBT alone, 12,857 (45%) received EBRT alone, and 2561 (9%) received EBRT+VBT. On univariable analysis, increasing stage/myometrial invasion, higher grade, older age, presence of lymphovascular space invasion, and larger tumor size predicted poorer survival (all p&lt;0.01). On multivariable analysis, patients at high-intermediate risk and high risk experienced improved survival with ART with a hazard ratio of 0.796 (95% CI: 0.731-0.867; p&lt;0.001) and 0.783 (95% CI: 0.693-0.885; p&lt;0.001), respectively. There was no survival benefit for ART among patients at low or low-intermediate risk. Conclusions: In Stage I high-intermediate and high risk endometrioid-type endometrial cancer patients, ART significantly improves overall survival. abstract_id: PUBMED:35277934 Adjuvant radiotherapy shows benefit in selected stage I uterine sarcoma: A risk scoring system based on a population analysis. Background: The potential therapeutic benefit of adjuvant radiotherapy for patients with stage I uterine sarcoma has not been clear. In this study, we aimed to develop a risk scoring model to select the subgroup of patients with stage I uterine sarcoma who might benefit from adjuvant radiotherapy. Methods: Patients with stage I uterine sarcoma from the Surveillance, Epidemiology, and End Results program from 2010 to 2014 were retrospectively included in this analysis. Cox proportional hazards models were performed to identify risk factors. Results: A total of 947 stage I uterine sarcoma patients were included. The 5-year disease-specific survival (DSS) of the overall cohort was 75.81%. Multivariate analysis identified stage (p = 0.013), tumor grade (p &lt;0.001) and histology (p = 0.043) as independent prognostic factors for DSS, and these factors were used to generate the risk scoring model. The low-risk group presented a better DSS than the high-risk group (95.51% vs. 49.88%, p &lt; 0.001). The addition of radiotherapy to surgery significantly increased the DSS in the high-risk group compared with surgery alone (78.06% vs. 46.88%, p = 0.022), but no significant survival benefit was observed in the low-risk group (98.36% vs. 100%, p = 0.766). Conclusions: Our risk scoring model based on stage, tumor grade, and histology predicted the outcome of patients with stage I uterine sarcoma cancer. This system may help to select stage I uterine sarcoma cancer patients who might benefit from adjuvant radiotherapy. abstract_id: PUBMED:34756697 Incorporation of vaginal brachytherapy to external beam radiotherapy in adjuvant therapy for high-risk early-stage cervical cancer: A comparative study. Purpose: To examine trends, characteristics, and outcomes related to addition of vaginal brachytherapy (VBT) to external beam radiotherapy (EBRT) for adjuvant radiotherapy in high-risk early-stage cervical cancer. Methods And Materials: This comparative study is a retrospective observational analysis of the National Cancer Institutes' Surveillance, Epidemiology, and End Results Program. Surgically treated women with stage T1-2 cervical cancer who had high-risk factors (nodal metastasis and/or parametrial invasion) and received adjuvant radiotherapy from 2000 to 2018 were examined. Propensity score inverse probability of treatment weighting was used to assess the survival estimates for addition of VBT use. Results: Among 2470 women with high-risk factors receiving EBRT, 760 (30.8%) had additional VBT. During the study period, there was an increasing trend of VBT use from 27.4% to 36.1% (p&lt; 0.001). In a multivariable analysis, year of diagnosis and high-risk tumor factors: parametrial involvement, large tumor size, and use of chemotherapy remained independent characteristics associated with VBT use (all, p&lt; 0.05). In propensity score-weighted models, VBT use with EBRT and EBRT alone had comparable overall survival (5-year rates 73.8% vs. 77.4%, hazard ratio [HR] 1.07, 95% confidence interval [CI] 0.92-1.25). Nonsignificant association was also observed in squamous or nonsquamous tumors, young or old age, low or high nodal ratio, chemotherapy use, and simple or radical hysterectomy (all, p&gt; 0.05). Lastly, the addition of VBT was not associated with cervical cancer-specific survival (subdistribution-HR 1.15, 95% CI 0.94-1.41). Conclusions: Utilization of VBT with EBRT for adjuvant radiotherapy in high-risk early-stage cervical cancer is increasing in the United States. Addition of VBT was associated with neither overall survival nor cancer-specific survival. abstract_id: PUBMED:33437305 Improved disease-free survival with adjuvant radiotherapy in early-stage endometrial cancer: 10-year outcome analysis. Purpose: The purpose of this study was to evaluate long-term treatment outcomes and prognostic factors affecting survival of patients with early-stage endometrial carcinoma. Material And Methods: Data of 311 patients with FIGO stage I-II endometrial cancer, curatively treated at two different tertiary centers between June 2001 and December 2016 were retrospectively reviewed. The patients had primary surgery, 74 (24%) received no further treatment, 4 (1%) obtained chemotherapy only, 234 (75%) received radiotherapy, and 24 (7%) received both. Results: Median follow-up time was 102 (range, 3-205) months. During this period, 68 (21.9%) patients died. 5-year and 10-year disease-free survival (DFS) were 76% and 74.3%, respectively. In multivariate analysis, lower uterine segment invasion positivity and no adjuvant radiotherapy were determined as independent unfavorable prognostic factors for DFS. The 5-year and 10-year disease-specific survival (DSS) were 86.8% and 82.2%, respectively. For DSS, high-grade, lymphovascular space invasion positivity, stage II, ≥ 65 age, and no adjuvant radiotherapy were found to be independent unfavorable prognostic markers. Conclusions: The findings of our cohort have confirmed the importance of adjuvant radiotherapy on long-term early-stage endometrial carcinoma outcome. abstract_id: PUBMED:23359621 High-risk endometrial cancer may be benefit from adjuvant radiotherapy plus chemotherapy. Objective: To present patterns of practice and outcomes in the adjuvant treatment of intermediate- and high-risk endometrial cancer. Methods: Retrospective data on 224 women with intermediate-risk and high-risk endometrial cancer from 1999 to 2006 were reviewed. All patients underwent surgical staging. Patterns of adjuvant treatment, consisting of pelvic radiotherapy, chemotherapy, and radiotherapy plus chemotherapy, were assessed. The 3- and 5-year disease-specific survival (DSS) rates were calculated using the Kaplan-Meier method. Results: The difference in 5-year DSS rate was statistically significant between adjuvant group and non-adjuvant group (80.65% vs. 63.80%, P=0.040). In 110 high-risk patients who underwent adjuvant treatment, both 5-year DSS rate and recurrent rate were significantly different in combined radiotherapy and chemotherapy group compared with radiotherapy alone and chemotherapy alone groups (DSS rate, P=0.049; recurrent rate, P=0.047). In 83 intermediate-risk women who underwent adjuvant treatment, there was no significant difference in 5-year DSS rate and recurrence rate among the combined radiotherapy and chemotherapy, radiotherapy alone and chemotherapy alone groups (DSS rate, P=0.776; recurrent rate, P=0.937). Conclusions: Adjuvant radiotherapy plus chemotherapy is associated with a higher 5-year DSS rate and lower recurrence rate compared with radiotherapy alone and chemotherapy alone in high-risk endometrial cancer patients. Patients with intermediate-risk endometrial cancer may be not likely to benefit from adjuvant combined radiotherapy and chemotherapy. abstract_id: PUBMED:23810000 The benefit of adjuvant chemotherapy combined with postoperative radiotherapy for endometrial cancer: a meta-analysis. The objective of our study was to determine whether adjuvant chemotherapy combined with postoperative radiotherapy would have benefits for the disease-free survival and overall survival in patients with high-risk endometrial cancer. Electronic searches for studies of adjuvant chemotherapy combined with postoperative radiotherapy in endometrial cancer patients between March 1971 and March 2012 were made on MEDLINE, SCOPUS, and the Cochrane library. Articles with more than 4 stars on the Newcastle-Ottawa scale or a score of more than 4 on the modified Jadad scale were included. A meta-analysis was performed, and pooled hazard ratios (HR) of progression-free survival (PFS) and overall survival (OS) between patients whose adjuvant chemotherapy was combined with radiotherapy (the CTx+RTx group) and patients with adjuvant radiotherapy only (the RTx group) were derived from the fixed effect model or random effect model. Three observational studies and 3 randomized clinical trials (RCTs) were included in the final analysis. Subgroup analysis for FIGO stage showed that the CTx+RTx group had a more significant survival benefit compared to that of the RTx group in advanced stage endometrial cancer (OS HR 0.53, 95% CI 0.36-0.80; PFS HR 0.54, 95% CI 0.37-0.77), but no significant benefit in early stage endometrial cancer (OS HR 0.96, 95% CI 0.70-1.32; PFS HR 1.00, 95% CI 0.39-2.58). This meta-analysis suggests that adjuvant chemotherapy combined with postoperative radiotherapy could probably reduce disease progression and overall death in patients with advanced-stage disease. In order to examine whether the multimodal treatment has benefit in high-risk endometrial cancer, we need further large-scale RCTs. abstract_id: PUBMED:30295094 Adjuvant therapy for high-risk endometrial cancer: recent evidence and future directions. Introduction: Although the majority of women with endometrial cancer have a favorable prognosis due to early symptoms, 15-20% have high-risk disease features and are at increased risk of recurrence. In order to improve prognosis for these patients, several trials have compared chemotherapy (CT), radiotherapy (RT) or the combination of CTRT. Areas covered: This review focuses on the current evidence on adjuvant treatment for women with high-risk endometrial cancer and future perspectives. Expert commentary: For stage I-II high-risk endometrial cancer, external beam radiotherapy ensured good local control and no significant benefit in progression-free or overall survival was found with the addition of chemotherapy in 2 recent randomized trials. For women with stage III disease, the combination of chemotherapy and radiotherapy improved progression-free survival with a non-significant improvement of overall survival. Adjuvant chemotherapy alone resulted in higher rates of pelvic and para-aortic recurrence. More toxicity and reduced quality of life were found during and after adjuvant CTRT. It is essential to discuss the benefits and disadvantages of chemotherapy and radiotherapy with individual patients for shared decision-making. Translational research is ongoing to further characterize individual tumors, identify sensitivity to (immuno)therapies and find new treatment targets to improve outcomes. abstract_id: PUBMED:26401108 Is adjuvant radiotherapy necessary for FIGO stage 1a grade 2 endometrial endometrioid adenocarcinoma? Objective: The impact of adjuvant radiotherapy on the rates of survival and local recurrence was analyzed in patients diagnosed with International Federation of Gynecology and Obstetrics (FIGO) stage 1a grade 2 endometrial endometrioid adenocarcinoma. Material And Methods: Medical records of 82 patients diagnosed and treated for FIGO stage 1a grade 2 endometrial endometrioid adenocarcinoma were reviewed retrospectively. A group of 59 patients who received postoperative radiotherapy was compared with a control group of 23 subjects treated without adjuvant radiotherapy; the duration of survival as well as the local recurrence and metastasis rates were evaluated in both groups. Results: The analysis of patient data has revealed the rate of local recurrence as 4.3% vs. 1.7% (p=0.485), the rate of distant metastasis as 4.3% vs. 6.9% (p=1.000), and the mean survival time as 83.6±38.7 vs. 81.5±37.5 months (p=0.828) in the adjuvant radiotherapy and control groups, respectively. Conclusion: In the presented study, adjuvant radiotherapy failed to improve the overall survival of the patients in the low-risk group (stage 1a grade 2). With the addition of the significant risk of radiation toxicity, it is highly probable that these patients will not benefit from postoperative radiotherapy. Close observation should be performed following the primary surgery in this patient group. Nevertheless, it should also be considered that adjuvant radiotherapy is a very effective treatment modality for the recovery of patients with vaginal relapse. Answer: The survival benefit of adjuvant radiotherapy in high-risk surgical stage I endometrial cancer is a subject of ongoing research and debate. According to the studies provided, the evidence is mixed, with some studies suggesting a benefit and others not. One study found no survival advantage of adjuvant radiotherapy even for high-risk patients, suggesting that the use of radiotherapy may be reserved for relapse (PUBMED:12217745). In contrast, another study reported that adjuvant chemoradiation with paclitaxel and carboplatin improved survival rates and decreased recurrence rates in patients with high-risk stage I-II endometrial carcinoma (PUBMED:23691474). A large analysis of Stage I endometrioid-type endometrial cancer patients indicated that adjuvant radiotherapy (ART) significantly improves overall survival for patients at high-intermediate and high risk (PUBMED:28012794). Similarly, another study suggested that adjuvant radiotherapy shows benefit in selected stage I uterine sarcoma, with a risk scoring system based on a population analysis (PUBMED:35277934). However, a study on early-stage cervical cancer, which is not directly about endometrial cancer but may offer insights into adjuvant therapy for gynecological cancers, found that the addition of vaginal brachytherapy (VBT) to external beam radiotherapy (EBRT) was associated with neither overall survival nor cancer-specific survival (PUBMED:34756697). Another study emphasized the importance of adjuvant radiotherapy on long-term early-stage endometrial carcinoma outcome (PUBMED:33437305), while a different study suggested that high-risk endometrial cancer may benefit from adjuvant radiotherapy plus chemotherapy (PUBMED:23359621). A meta-analysis indicated that adjuvant chemotherapy combined with postoperative radiotherapy could probably reduce disease progression and overall death in patients with advanced-stage disease (PUBMED:23810000). Lastly, a review article highlighted that for stage I-II high-risk endometrial cancer, external beam radiotherapy ensured good local control, and no significant benefit in progression-free or overall survival was found with the addition of chemotherapy in two recent randomized trials (PUBMED:30295094). In summary, while some studies suggest that adjuvant radiotherapy may improve survival in high-risk stage I endometrial cancer, especially when combined with chemotherapy, others do not find a significant benefit. The decision to use adjuvant radiotherapy should be individualized based on specific patient risk factors and discussed thoroughly with the patient.
Instruction: Do perceptions of vulnerability and worry mediate the effects of a smoking cessation intervention for women attending for a routine cervical smear test? Abstracts: abstract_id: PUBMED:19290718 Do perceptions of vulnerability and worry mediate the effects of a smoking cessation intervention for women attending for a routine cervical smear test? An experimental study. Objective: Numerous correlational studies have examined whether perceptions of vulnerability or worry are better predictors of health-related behavior. The aim of this experimental study was to explore some of the potential causal relationships involved: Are the effects of a brief smoking cessation intervention (for women attending for cervical smear tests) on intention to stop smoking mediated by perceived vulnerability or worry about cervical cancer? Design: A mediation analysis of an experimental study. Main Outcome Measures: Perceived vulnerability to and worry about cervical cancer, and intention to stop smoking in the next month. Results: Questionnaires were completed by 172 (71%) women at 2-week follow-up. Compared with women in the control group, those in the intervention group had higher perceptions of vulnerability, worry, and intention to stop smoking. Personal vulnerability (p &lt; .01) and comparative vulnerability (p &lt; .05) were significant mediators of the relationship between study group and intention to stop smoking. Worry about cervical cancer was not related to intention. Conclusion: Worry may be a less important construct in relation to disease prevention behaviors such as smoking cessation. More experimental studies comparing different behaviors are needed to determine the causal relationship between worry and outcomes. abstract_id: PUBMED:35449007 Perceptions and attitude of women of Luderitz, Namibia on Pap smear and cervical cancer prevention. Background: Cervical cancer is one of the leading malignancies globally and has taken third place in Namibia amongst women aged 15-44 years. Infection with the Human Immunodeficiency Virus (HIV) has been proven to increase women's susceptibility to developing cervical carcinoma. Sadly, Namibia carries a twin burden of HIV and cervical cancer. Namibians are aware of HIV/AIDS, but remain poorly informed about cervical cancer. Furthermore, among those who are aware of the disease, low utilisation of screening tests have been reported. Objective: The purpose was to explore perceptions and attitudes held by women about cervical cancer, reasons for low uptake of Pap smear testing amongst those who are aware of the malignancy as well as unearth motivation factors that has fuelled women to go for screening. Methodology: A descriptive, cross-sectional study was conducted using convenience sampling as a sampling technique. The survey instrument used was a self-administered questionnaire. It consisted of both closed and open ended questions. A total of 136 women were surveyed. Results And Conclusion: The level of awareness for cervical cancer (92.6%) and Pap smear (93.4%) were high. Most were able to identify that Pap smear test is used for screening for pre-cancerous lesions. However, knowledge about the impact of a HIV positive status along with co infection with HPV as the leading causes for progression of invasive cervical carcinoma was not well known. Knowledge about the other risk factors such as multiple sexual partners (39.7%), early sex debut (34.9%) and smoking was poorly demonstrated. This suggests that a high awareness level does not necessarily translate into having a good perception or understanding of a disease. A good attitude towards screening was observed although less than half of the study population reported ever having a test done. abstract_id: PUBMED:15917009 Increasing women's intentions to stop smoking following an abnormal cervical smear test result. Background: The aim of this study is to assess the impact of two leaflets upon intentions to stop smoking in the next month in women receiving abnormal cervical smear test results. Methods: 330 women smokers who received an abnormal smear test result were randomly allocated to one of three groups: Group 1: sent a leaflet about the threat of cervical cancer from smoking and the benefits to cervical health in stopping, containing a detailed explanation of how smoking harms the cervix; Group 2: sent the same leaflet but without the detailed explanation; Group 3: no leaflet. Questionnaires were completed 1 month later. Results: Women in Group 1 had higher intentions to stop smoking compared with Group 3. This effect was mediated by perceived coherence of the link between smoking and cervical cancer and moderated by self-efficacy (confidence): only women with high levels of self-efficacy had higher intentions to stop smoking. Conclusion: Explaining the link between smoking and cervical cancer increases intentions to stop smoking in women receiving abnormal cervical smear test results. Further studies are needed to determine how self-efficacy can be increased and whether, in addition to the detailed leaflet, this increases rates of smoking cessation. abstract_id: PUBMED:9572218 Promoting smoking cessation among women who seek cervical cancer screening. Objective: To determine whether 1) women smokers are aware that smoking increases their risk for cervical cancer; 2) awareness of risk factors and its impact on motivation to quit smoking vary by age; and 3) providers are advising smokers to quit smoking during routine gynecologic visits. Methods: Women aged 18 years and older who had a Papanicolaou test in the prior month were identified from a managed care cytology database. Four weeks after receiving their test result, women were called to complete a health behavior survey in which their smoking status, sexual history, knowledge of risk factors for cervical cancer, and motivation to quit smoking were assessed. Results: Only 49% of the 613 smokers surveyed were aware that smoking increased their risk of cervical cancer. Women aged 34 years and younger were significantly more likely to agree that smoking increased their risk for cervical cancer than women aged 45 years or older (54% versus 40%). Concern that smoking might cause cervical cancer or other cancers was endorsed highly by women as a motivator for cessation, particularly for women smokers aged 18-26 (mean 8.1 and 7.6 on 10-point scales, respectively). About 50% of the women reported that their provider had asked about smoking and had encouraged cessation. Only about 30% of smokers were aware of well-established risk factors for cervical cancer such as the number of lifetime sexual partners and not using a condom. Conclusion: Routine gynecologic screening visits represent an important and underutilized clinical opportunity to intervene with young women smokers. abstract_id: PUBMED:22296432 A review of smoking cessation services in women attending colposcopy clinics in Scotland. The most common reason for women being referred to colposcopy clinics is an abnormal smear suggesting pre-cancerous change within the cervix. It has been demonstrated that in women with low-grade lesions, smoking cessation led to a reduction in size of the cervical lesion over a 6-month period. Smoking is also recognised as an independent risk factor for treatment failure of cervical intraepithelial neoplasia. As 'stop smoking' services have become an integral part of the colposcopy clinic at St John's Hospital, Livingston, a national survey was undertaken. The aim of the survey was to establish what smoking cessation information, advice and support is available in colposcopy clinics throughout Scotland, and if staff had attended standardised training on raising the issue of smoking. The study demonstrated that 52% of colposcopy clinics in Scotland always establish smoking status but only 37% of clinics discuss the benefits of smoking cessation in relation to abnormal cervical smears. There was inaccurate and inconsistent written and verbal advice given on the benefits of cessation and whether women should cut down or stop smoking completely. The majority of staff had not attended standardised training on how to raise the issue of smoking and very few clinics have established referral pathways for smoking cessation support. abstract_id: PUBMED:30510096 Smoking cessation advice after cervical screening: a qualitative interview study of acceptability in Dutch primary care. Background: Cervical cancer screening in general practice could be a routine and opportune moment to advise females who smoke to stop smoking. Aim: The aims of this study were to investigate the attitudes of females who smoke to receiving advice about stopping smoking after cervical screening and to identify factors associated with the acceptability of this advice. Design And Setting: This qualitative interview study was conducted with Dutch females who smoked, had undergone cervical screening, and were aged 30 to 60 years. Interviews were performed between December 2016 and September 2017. Method: In this study 15 participants were interviewed and transcripts were analysed using thematic analysis. Results: Females who smoke were ambivalent (positive or sceptical) about being advised to stop smoking after they had undergone cervical screening. An explanation of why smoking behaviour is addressed by the practice assistant performing the smear, and making females feel at ease during the smear test, were found to be factors that might influence acceptability of such advice. Although a personal and non-judgemental approach to discussing smoking was considered essential, participants expressed different preferences regarding the form and content of cessation support. This was reflected by the variations in knowledge about smoking cessation support, previous experiences of cessation attempts, and received cessation advice or support. Conclusion: Study participants had mixed opinions about being given advice about smoking cessation after their cervical smear test and differed in their preferences for the type of support for smoking cessation. An interactive approach might improve how well a smoking cessation intervention is received by females who smoke and cater to their individual needs and preferences. abstract_id: PUBMED:21170293 Pregnant women's responses to a tailored smoking cessation intervention: turning hopelessness into competence. Background: Cognitive behavioral interventions consisting of brief counseling and the provision of self-help material designed for pregnancy have been documented as effective smoking cessation interventions for pregnant women. However, there is a need to understand how such interventions are perceived by the targeted group. Aim: To understand the cognitive, emotional, and behavioral responses of pregnant women to a clinic-based smoking cessation intervention. Methods: In-depth interviews with women attending four antenatal clinics in Cape Town, South Africa, who were exposed to a smoking intervention delivered by midwives and peer counselors. Women were purposively selected to represent a variation in smoking behavior. Thirteen women were interviewed at their first antenatal visit and 10 were followed up and reinterviewed later in their pregnancies. A content analysis approach was used, which resulted in categories and themes describing women's experiences, thoughts, and feelings about the intervention. Results: Five women quit, five had cut down, and three could not be traced for follow-up. All informants perceived the intervention positively. Four main themes captured the intervention's role in influencing women's smoking behavior. The process started with 'understanding their reality,' which led to 'embracing change' and 'deciding to hold nothing back,' which created a basis for 'turning hopelessness into a feeling of competence.' Conclusion: The intervention succeeded in shifting women from feeling pessimistic about ever quitting to feeling encouraged to try and quit. Informants rated the social support they received very highly and expressed the need for the intervention to become a routine component of clinic services. abstract_id: PUBMED:24966773 Dental patient's knowledge and perceptions about the effects of smoking and role of dentists in smoking cessation activities. Objectives: The objective of the following study is to assess the knowledge of patients about the consequences of smoking on the general and oral health, to analyze the patient's perceptions about the role of dentists in smoking prevention, counseling and cessation and to analyze the willingness of smokers to quit smoking following dentist's advice. Materials And Methods: A descriptive study was conducted in a sample of 486 dental patients visiting Vokkaligara Sangha Dental College and Hospital, Bengaluruin the month of August 2011 using a self-administered questionnaire. Descriptive statistics and Chi-square test was performed to analyze data. Results: The patients had good knowledge about the effects of smoking on general and oral health. Statistically significant difference was observed in the knowledge of smokers and non-smokers about the effects of smoking on ageing (P = 0.008) and oral cancer (P = 0.0012). Patients had a positive perception about the role of dentists in smoking cessation activities. Smokers exhibited a willingness to quit if suggested by the dentist. Conclusion: Patients perceive that dentists can play an important part in smoking cessation activities. Hence, dentists should not hesitate to give smoking cessation advice to their patients and grasp this opportunity to improve the general and oral health of the community. abstract_id: PUBMED:17484195 The impact of knowledge, perceived barriers and perceptions of risk on attendance for a routine cervical smear. Objectives: Cervical cancer is one of the most preventable malignant tumours, and 90% of cases can be identified and treated in its early stages in a simple outpatient procedure. In spite of this, many new cases of cervical cancer are diagnosed each year. The present study examines the impact of knowledge, perceived risk, past experience and perceived barriers on attendance for a routine cervical smear test. Methods: The study is based on a sample of over 1,000 women from the Irish Cervical Screening Programme (ICSP) database who were sent a letter of invitation to attend for a smear test. Results: The results show that women have relatively poor levels of knowledge about cervical cancer and screening. Significant barriers to attendance included: the perception of having a cervical smear test as time consuming (P &lt; 0.01), causing greater distress (P &lt; 0.01) and being more afraid of the test (P &lt; 0.05). Women who attended for a smear perceived themselves at greater risk of cervical cancer than non-attenders. Conclusions: Barriers to attendance should be addressed in programmes that focus on the benefits outweighing the costs of attending for a cervical smear test. abstract_id: PUBMED:33518358 A Feasibility Study of Smoking Cessation Utilizing an Exercise Intervention among Black Women: 'Quit and Fit'. Background: Women who engage in higher levels of exercise while trying to quit smoking have been shown to be less likely to relapse and to sustain their smoking abstinence longer. This study sought to examine the benefits of exercise for improving smoking cessation among Black women. Methods: We evaluated the feasibility of a 12-week smoking and exercise intervention, Quit and Fit, tailored for Black women. All participants (intervention and control) received 12 weeks of smoking cessation counseling via telephone and 9 weeks of nicotine lozenges. Participants who were randomly assigned to the intervention condition were also assigned to a 12-week exercise group. Results: Thirty-eight women were enrolled and 27 completed a 12-week follow-up assessment. Women from the intervention group were more likely to complete the 12-week follow-up assessment compared to participants in the control group (80% vs. 61%). Overall, 7 of the 38 participants (18%) were abstinent at 12 weeks (biochemically verified by expired carbon monoxide). Among the 25 women who completed the 12-week follow-up, abstinence was reported in 33% of the intervention group and 20% of the control group. Using an intent-to-treat approach, 25% of women in the intervention group were abstinent at 12 weeks (n = 5), compared to 11.1% for the control group (n = 2). These differences were not statistically significant. Conclusions: The overall retention rate was 71% (27/38) at 12 weeks with higher among the intervention group (16/20; 80%) compared to the control group (11/18; 61%). The study demonstrates that it is feasible to retain African-American women in a short-term study of smoking cessation and exercise. Answer: Yes, perceptions of vulnerability do mediate the effects of a smoking cessation intervention for women attending for a routine cervical smear test. An experimental study found that compared with women in the control group, those in the intervention group had higher perceptions of vulnerability, worry, and intention to stop smoking. Personal vulnerability and comparative vulnerability were significant mediators of the relationship between study group and intention to stop smoking. However, worry about cervical cancer was not related to intention (PUBMED:19290718). This suggests that while worry may not be a significant factor, perceptions of vulnerability can influence the effectiveness of smoking cessation interventions in the context of cervical cancer prevention.
Instruction: Cardiac function in types II and III spinal muscular atrophy: should we change standards of care? Abstracts: abstract_id: PUBMED:25539139 Cardiac function in types II and III spinal muscular atrophy: should we change standards of care? Objective: In the last years, there has been increasing evidence of cardiac involvement in spinal muscular atrophy (SMA). Autonomic dysfunction has been reported in animal models and in several patients with types I and III SMA, these findings raising the question whether heart rate should be routinely investigated in all SMA patients. The aim of our study was to detect possible signs of autonomic dysfunction and, more generally, of cardiac involvement in types II and III SMA. Patients And Methods: We retrospectively reviewed 24-hour electrocardiography (ECG) in 157 types II and III SMA patients (age range, 2-74 years). Of them, 82 also had echocardiography. Results: None of the patients had signs of bradycardia, atrial fibrillation, or the other previously reported rhythm disturbances regardless of the age at examination or the type of SMA. Echocardiography was also normal. There were no signs of congenital cardiac defects with the exception of one patient with a history of ventricular septal defects. Conclusions: Our results suggest that cardiac abnormalities are not common in type II and type III SMA. These findings provide no evidence to support a more accurate cardiac surveillance or changes in the existing standards of care. abstract_id: PUBMED:35707038 Real-World Data on Access to Standards of Care for People With Spinal Muscular Atrophy in the UK. Spinal Muscular Atrophy (SMA) is characterized by muscle atrophy and weakness and has an incidence of 1:11. 000 live births which projects an estimated population in the UK of 650-1,300 affected patients. Standards of Care (SoC) were updated in 2017 and they have been widely adopted as a reference for implementation of care in SMA across the globe. The effectiveness of implementation and adherence to these standards across different countries is unclear. The aim of this study is to describe the experience of individuals with SMA regarding their care in the UK. An online anonymised survey was sent out via patient organizations, the UK SMA Patient Registry, professional networks, and social media to reach across the UK. The survey captured demographic profile, professionals involved in a patient's care, Interventions and access to mobility aids and home adaptations. Participants responded about their access to services and to rate how important each professional and intervention was for their health and wellbeing. One hundred and twenty-eight responses were collected with a median age of 34 years (1-81). Seventy-three percent of participants were adults and 60% men. Overall good access to neurologist (&gt;90%) but limited to nurse specialist (48%) and physiotherapist (57%). Good access to respiratory support was reported but limited for interventions for positioning and bracing and exercise. This survey highlights that access to certain professionals for people with SMA is limited in the UK. Striking differences were noted between pediatric and adult populations. Limited access to care were regularly reported, with half of the study population consistently not accessing full multidisciplinary care. Access to interventions for contracture management were recorded to have significant limitations. Mobility aids and home adaptations are widely available and were also reported as the most valued interventions. Access to nutritional support or speech and language therapy appears only to be available for a small proportion of the participants. Access to respiratory care was good especially in severe forms of SMA. We found pockets of good practice in the UK that align with the SoC. However, access is not equal for adults and children and access to certain professionals is significantly limited. abstract_id: PUBMED:14629498 Pulmonary function and scoliosis in children with spinal muscular atrophy types II and III. Aims: The objectives were to evaluate the clinical course of spinal muscular atrophy (SMA) types II and III patients necessitating scoliosis surgery at the National University Hospital, Singapore. Methods: A retrospective review of SMA types II and III patients, born over a 10-year period between 1983 and 1992, was conducted. Results: There were eight patients: four with SMA type II and four with SMA type III. The mean age at scoliosis surgery was 9 years 7 months (range 7 years 6 months-12 years 4 months). The mean preoperative Cobb angle was 65.4 degrees (range 43-90 degrees ) and the mean postoperative Cobb angle was 22.6 degrees (range 12-45 degrees ), with a mean correction of 64.8% (range 47.7-77.8%). The decline in percentage predicted forced vital capacity (FVC) was 7.7% (95% CI: 12.4% to 3.0%) per year preoperatively and this was reduced to 3.8% (95% CI: 5.8% to 1.9%) per year postoperatively. The mean length of preoperative and postoperative lung function follow-up was 6.3 months (range 0.03-31 months) and 44 months (range 0-110 months), respectively. Conclusions: This study suggests that pulmonary function in SMA types II and III continues to decline after scoliosis surgery, though the rate of decline is less marked. Overall, the combined results from this study and all other previously published studies are conflicting in regard to the effect of scoliosis surgery on pulmonary function in SMA types II and III, though half of the studies (3 of 6) did demonstrate a continued decline in lung function postoperatively. This decline in pulmonary function despite spinal stabilization is likely secondary to the progressive neuromuscular weakness of the disease. abstract_id: PUBMED:33043451 Cardiac findings in pediatric patients with spinal muscular atrophy types 2 and 3. Background: It is unclear whether the heart is affected in pediatric patients with milder forms of spinal muscular atrophy (SMA). Therefore, we aimed to determine the presence of any cardiac abnormalities in these patients. Methods: We conducted a cross-sectional study of children and adolescents with SMA types 2 and 3 between July 2018 and July 2019. All patients underwent a comprehensive cardiac evaluation, including history-taking, physical examination, electrocardiography, echocardiography, measurement of cardiac biomarkers (cardiac troponin T [cTnT] and N-terminal pro-brain natriuretic peptide [NT-proBNP]), and 24-hour Holter monitoring. Results: In total, 42 patients were enrolled (27 and 15 with SMA type 2 and 3, respectively). No patient had structural heart disease, except for one with mitral valve prolapse. None had signs of ventricular dysfunction on echocardiography. Both cTnT and NT-proBNP levels were normal in all patients. Electrocardiography showed sinus tachycardia in seven patients (16.7%), and prolonged P-R interval in one (2.4%). Holter monitoring detected benign ventricular arrhythmias in two patients (4.8%), and rare supraventricular premature beats in one. The mean 24-hour heart rate was elevated in six patients (14.3%), whereas both the minimum 24-hour heart rate and the maximum R-R interval were increased in 23 (54.8%). Discussion: The prevalence of cardiac disease in pediatric patients with SMA types 2 and 3 is low; however, these patients may have increased resting heart rates. A complete cardiac history and physical examination are a useful screen. Additional cardiac investigations may be performed as needed. abstract_id: PUBMED:27807779 The effect of scoliosis surgery on pulmonary function in spinal muscular atrophy type II patients. Purpose: Various results of the previous literature related to surgical effect on pulmonary function of spinal muscular atrophy (SMA) patients might be due to different SMA type, different fusion level and technique. The aim of this study was to determine the value of scoliosis surgery for SMA type II patients with regard to pulmonary function, under the same fusion level, fusion technique and average long-term follow-up. Methods: Ten SMA II patients who underwent spinal correction procedures from 1993 to 2010 were identified. Data on clinical features and pulmonary function, including forced vital capacity (FVC) and forced expiratory volume in 1st second (FEV1), were collected. The data on pulmonary function were divided into preoperative, postoperative short-term (0-5 years), mid-term (5-10 years), and long-term (&gt;10 years). Statistical comparisons were made using the Wilcoxon test for pulmonary function and body weight analysis. Questions were answered by parents on how surgery influenced the frequency of respiratory infection and the ability to sit at school. Results: The average length of postoperative pulmonary function follow-up was 12.3 years (range 4.9-15.9 years). There was no significant difference in FVC or FEV1 between preoperative and each postoperative period. However, a significant decline from mid-term to long-term was observed (p = 0.028). Body weight increased significantly in all postoperative periods and was moderately correlated to pulmonary function (r = 0.526 for FVC). The answers to the questionnaire revealed that 80% of the patients had obvious improvement in the frequency of respiratory infection and 100% were tolerable sitting at school. Conclusions: Surgical correction for scoliosis in SMA II patients results in pulmonary function being maintained during long-term follow-up. In addition, the advantages of surgery also include body weight gain, better sitting tolerance, and reduced frequency of respiratory infection. abstract_id: PUBMED:28399889 Cardiac pathology in spinal muscular atrophy: a systematic review. Background: Hereditary proximal spinal muscular atrophy (SMA) is a severe neuromuscular disease of childhood caused by homozygous loss of function of the survival motor neuron (SMN) 1 gene. The presence of a second, nearly identical SMN gene (SMN2) in the human genome ensures production of residual levels of the ubiquitously expressed SMN protein. Alpha-motor neurons in the ventral horns of the spinal cord are most vulnerable to reduced SMN concentrations but the development or function of other tissues may also be affected, and cardiovascular abnormalities have frequently been reported both in patients and SMA mouse models. Methods: We systematically reviewed reported cardiac pathology in relation to SMN deficiency. To investigate the relevance of the possible association in more detail, we used clinical classification systems to characterize structural cardiac defects and arrhythmias. Conclusions: Seventy-two studies with a total of 264 SMA patients with reported cardiac pathology were identified, along with 14 publications on SMA mouse models with abnormalities of the heart. Structural cardiac pathology, mainly septal defects and abnormalities of the cardiac outflow tract, was reported predominantly in the most severely affected patients (i.e. SMA type 1). Cardiac rhythm disorders were most frequently reported in patients with milder SMA types (e.g. SMA type 3). All included studies lacked control groups and a standardized approach for cardiac evaluation. The convergence to specific abnormalities of cardiac structure and function may indicate vulnerability of specific cell types or developmental processes relevant for cardiogenesis. Future studies would benefit from a controlled and standardized approach for cardiac evaluation in patients with SMA. abstract_id: PUBMED:36441395 Nusinersen for spinal muscular atrophy types II and III: a retrospective single-center study in South Korea. Background: This study investigated the efficacy and safety of nusinersen, an antisense oligonucleotide, in patients with spinal muscular atrophy (SMA) types II (OMIM: 253,550) or III (OMIM: 253,400), including those with severe scoliosis or requiring respiratory support via mechanical ventilation. Methods: Data from 40 patients with genetically confirmed SMA who were treated with nusinersen at our institute from March 2019 to April 2022 were retrospectively analyzed. Of these, 30 patients with an age of onset &lt; 3 years and not on permanent ventilation were selected. Clinical and genetic characteristics were investigated, and motor function was evaluated based on the Hammersmith Functional Motor Scale-Expanded (HFMSE) score. Results: The mean age of symptom onset was 1.2 years. Most patients were diagnosed with SMA type II (27/30, 90%). Nusinersen was administered via computed tomography-guided or direct intrathecal injection in 87% (26/30) and 13% (4/30) of the patients, respectively. At the 6-, 14-, 22-, and 26-month follow-ups, 72%, 71%, 88%, and 86% of patients showed motor improvement, respectively, with mean changes in HFMSE scores of 2.10, 2.88, 4.21, and 5.29, respectively. Multivariable analysis showed that the use of noninvasive ventilation was associated with poorer outcomes of motor function. Conclusions: Patients with SMA type II or III who received nusinersen treatment showed significant improvement in motor function. A longer treatment duration led to a higher number of patients with improved motor function. No significant side effects of nusinersen were observed. Patients with SMA, even those with severe scoliosis or on respiratory support, can be safely treated using nusinersen. abstract_id: PUBMED:32006461 Drug treatment for spinal muscular atrophy types II and III. Background: Spinal muscular atrophy (SMA) is caused by a homozygous deletion of the survival motor neuron 1 (SMN1) gene on chromosome 5, or a heterozygous deletion in combination with a (point) mutation in the second SMN1 allele. This results in degeneration of anterior horn cells, which leads to progressive muscle weakness. Children with SMA type II do not develop the ability to walk without support and have a shortened life expectancy, whereas children with SMA type III develop the ability to walk and have a normal life expectancy. This is an update of a review first published in 2009 and previously updated in 2011. Objectives: To evaluate if drug treatment is able to slow or arrest the disease progression of SMA types II and III, and to assess if such therapy can be given safely. Search Methods: We searched the Cochrane Neuromuscular Specialised Register, CENTRAL, MEDLINE, Embase, and ISI Web of Science conference proceedings in October 2018. In October 2018, we also searched two trials registries to identify unpublished trials. Selection Criteria: We sought all randomised or quasi-randomised trials that examined the efficacy of drug treatment for SMA types II and III. Participants had to fulfil the clinical criteria and have a homozygous deletion or hemizygous deletion in combination with a point mutation in the second allele of the SMN1 gene (5q11.2-13.2) confirmed by genetic analysis. The primary outcome measure was change in disability score within one year after the onset of treatment. Secondary outcome measures within one year after the onset of treatment were change in muscle strength, ability to stand or walk, change in quality of life, time from the start of treatment until death or full-time ventilation and adverse events attributable to treatment during the trial period. Treatment strategies involving SMN1-replacement with viral vectors are out of the scope of this review, but a summary is given in Appendix 1. Drug treatment for SMA type I is the topic of a separate Cochrane Review. Data Collection And Analysis: We followed standard Cochrane methodology. Main Results: The review authors found 10 randomised, placebo-controlled trials of treatments for SMA types II and III for inclusion in this review, with 717 participants. We added four of the trials at this update. The trials investigated creatine (55 participants), gabapentin (84 participants), hydroxyurea (57 participants), nusinersen (126 participants), olesoxime (165 participants), phenylbutyrate (107 participants), somatotropin (20 participants), thyrotropin-releasing hormone (TRH) (nine participants), valproic acid (33 participants), and combination therapy with valproic acid and acetyl-L-carnitine (ALC) (61 participants). Treatment duration was from three to 24 months. None of the studies investigated the same treatment and none was completely free of bias. All studies had adequate blinding, sequence generation and reporting of primary outcomes. Based on moderate-certainty evidence, intrathecal nusinersen improved motor function (disability) in children with SMA type II, with a 3.7-point improvement in the nusinersen group on the Hammersmith Functional Motor Scale Expanded (HFMSE; range of possible scores 0 to 66), compared to a 1.9-point decline on the HFMSE in the sham procedure group (P &lt; 0.01; n = 126). On all motor function scales used, higher scores indicate better function. Based on moderate-certainty evidence from two studies, the following interventions had no clinically important effect on motor function scores in SMA types II or III (or both) in comparison to placebo: creatine (median change 1 higher, 95% confidence interval (CI) -1 to 2; on the Gross Motor Function Measure (GMFM), scale 0 to 264; n = 40); and combination therapy with valproic acid and carnitine (mean difference (MD) 0.64, 95% CI -1.1 to 2.38; on the Modified Hammersmith Functional Motor Scale (MHFMS), scale 0 to 40; n = 61). Based on low-certainty evidence from other single studies, the following interventions had no clinically important effect on motor function scores in SMA types II or III (or both) in comparison to placebo: gabapentin (median change 0 in the gabapentin group and -2 in the placebo group on the SMA Functional Rating Scale (SMAFRS), scale 0 to 50; n = 66); hydroxyurea (MD -1.88, 95% CI -3.89 to 0.13 on the GMFM, scale 0 to 264; n = 57), phenylbutyrate (MD -0.13, 95% CI -0.84 to 0.58 on the Hammersmith Functional Motor Scale (HFMS) scale 0 to 40; n = 90) and monotherapy of valproic acid (MD 0.06, 95% CI -1.32 to 1.44 on SMAFRS, scale 0 to 50; n = 31). Very low-certainty evidence suggested that the following interventions had little or no effect on motor function: olesoxime (MD 2, 95% -0.25 to 4.25 on the Motor Function Measure (MFM) D1 + D2, scale 0 to 75; n = 160) and somatotropin (median change at 3 months 0.25 higher, 95% CI -1 to 2.5 on the HFMSE, scale 0 to 66; n = 19). One small TRH trial did not report effects on motor function and the certainty of evidence for other outcomes from this trial were low or very low. Results of nine completed trials investigating 4-aminopyridine, acetyl-L-carnitine, CK-2127107, hydroxyurea, pyridostigmine, riluzole, RO6885247/RG7800, salbutamol and valproic acid were awaited and not available for analysis at the time of writing. Various trials and studies investigating treatment strategies other than nusinersen (e.g. SMN2-augmentation by small molecules), are currently ongoing. Authors' Conclusions: Nusinersen improves motor function in SMA type II, based on moderate-certainty evidence. Creatine, gabapentin, hydroxyurea, phenylbutyrate, valproic acid and the combination of valproic acid and ALC probably have no clinically important effect on motor function in SMA types II or III (or both) based on low-certainty evidence, and olesoxime and somatropin may also have little to no clinically important effect but evidence was of very low-certainty. One trial of TRH did not measure motor function. abstract_id: PUBMED:27241662 Parents' Experiences and Wishes at End of Life in Children with Spinal Muscular Atrophy Types I and II. Objective: To explore experiences and wishes of bereaved parents concerning end-of-life care for their child with severe spinal muscular atrophy. Study Design: A follow-up survey was conducted in 2013 on parents of deceased Swedish children who were born between 2000 and 2010 and later diagnosed with spinal muscular atrophy type I or II (n = 48). The questions used in this study covered location of death (LoD), support from health care staff, and parents' wishes and concerns about their child's end-of-life care. Results: One-half of those who had wishes about LoD (16/32) wanted their child to die at home, rather than at the hospital. All of those who wanted the child to die at the hospital had their wishes fulfilled. Among those who wanted the child to die at home, 10 of 16 got their wish. Among parents who talked with a physician about how they wanted their child to pass away (n = 26), all but 2 had their wishes fulfilled. Thirty-six parents (75%) reported that their child had siblings: 12 reported that the sibling was too young for professional psychological support, and only 4 of the remaining 24 siblings received such support after the death of their brother or sister. Conclusions: Parents' communication with the physician about their wishes and concerns regarding their child's end-of-life care and preferred LoD contributed to their wishes being fulfilled. The wish of hospital death was fulfilled more often than the wish of home deaths. A vast majority of siblings did not receive psychological support after death of their brother or sister. abstract_id: PUBMED:27890489 Spinal Muscular Atrophy, types I and II: What are the differences in body composition and resting energy expenditure? Background & Aims: Different neuromuscular functional domains in types I and II Spinal Muscular Atrophy (SMAI and SMAII) could lead to differences in body composition (BC) and resting energy expenditure (REE). Their identification could provide the key to defining appropriate strategies in clinical dietary management, but data comparing SMAI and SMAII in terms of BC and REE are not yet available. We measured total and regional fat (FM), lean (LBM), mineral (BMC) masses, body water (total, intra- and extra-cellular, TBW, ICW, ECW) and REE in a sample of SMAI and II children, matched for age and sex, and also adjusting for body size to compare these features of the two SMA phenotypes. Methods: 15 SMAI and 15 SMAII children, (M/F = 9/6 vs 9/6, age 3.6 ± 1.9 vs 3.5 ± 1.8 years, p = 0.99), confirmed genetically, were measured as follows: Anthropometric measurements [Body Weight (BW), Supine Length (SL), Arm Length (AL), Femur Length (FL), Tibia Length (TL)], Dual x-ray Energy Absorptiometry (DEXA) [total and segmental FM, LBM, FFM, and BMC], Bioelectrical impedance (BIA) [TBW, ICW, ECW] and Indirect Calorimetry (REE, respiratory quotients) were collected by the same trained dietician. BW, SL and Body Mass Index (BMI) Z-scores were calculated according to CDC Growth Charts (2000). Results: SMA children had high percentages of FM and a lower percentage of TBW and ECW compared to the respective reference values for sex and age, whereas the BMC percentages did not differ, even splitting the two phenotypes. SMA I children had a lower BW and BMI-Z score compared to children with SMA II, but similar total and segmental FM. On the contrary, total FFM and LBM were significantly lower in SMAI (7290.0 ± 1729.1 g vs 8410.1 ± 1508.4 g; 6971.8 ± 1637.1 g vs 8041.7 ± 1427.7 g, p = 0.039, p = 0.037, respectively), particularly at the trunk level. Arm BMC also resulted significantly lower in SMAI. The measured REE values were similar (684 ± 143 kcal/day vs 703 ± 122 Kcal/day p = 0.707) whereas REE per FFM unit was higher in SMA I children than in SMA II (95 ± 12 kcal/FFMkg vs 84 ± 11 kcal/FFMkg p = 0.017). Conclusions: This study has shown that BW and BMI Z-score measurements alone can be misleading in assessing nutritional status, particularly in SMAI. The differences between SMAI and II in total and regional BC are related only to FFM, LBM and BMC, and seem to be more linked to the magnitude of neurofunctional impairment rather than to the nutritional status derangement. SMA I and SMA II children can have different energy requirements in relation to their specific BC and hypermetabolism of FFM. Based on these results, our recommendation is to use direct BC and REE measurements in the nutritional care process until SMA-specific predictive equations become available. Answer: Based on the available abstracts, there is evidence suggesting that cardiac abnormalities are not common in type II and type III Spinal Muscular Atrophy (SMA). A study that retrospectively reviewed 24-hour electrocardiography (ECG) in 157 types II and III SMA patients found no signs of bradycardia, atrial fibrillation, or other previously reported rhythm disturbances, and echocardiography was also normal in the majority of patients. Only one patient with a history of ventricular septal defects was reported to have congenital cardiac defects. These findings suggest that there is no evidence to support a need for more accurate cardiac surveillance or changes in the existing standards of care for cardiac function in these patients (PUBMED:25539139). Another cross-sectional study of children and adolescents with SMA types 2 and 3 found no structural heart disease, except for one patient with mitral valve prolapse. There were no signs of ventricular dysfunction on echocardiography, and cardiac biomarkers were normal in all patients. While some patients showed signs of increased resting heart rates, the prevalence of cardiac disease in pediatric patients with SMA types 2 and 3 was low. The study concluded that a complete cardiac history and physical examination are useful screens, and additional cardiac investigations may be performed as needed (PUBMED:33043451). A systematic review also reported that structural cardiac pathology, mainly septal defects and abnormalities of the cardiac outflow tract, was reported predominantly in the most severely affected patients (i.e., SMA type 1). Cardiac rhythm disorders were most frequently reported in patients with milder SMA types (e.g., SMA type 3). However, the review noted that all included studies lacked control groups and a standardized approach for cardiac evaluation (PUBMED:28399889). In summary, the current evidence from these studies does not support a need to change the standards of care specifically regarding cardiac function surveillance for patients with SMA types II and III. However, individual patient circumstances may warrant additional cardiac investigations, and it is important for healthcare providers to remain vigilant for any potential cardiac issues in this patient population.
Instruction: Does pancreaticosplenectomy contribute to better survival? Abstracts: abstract_id: PUBMED:12239961 Does pancreaticosplenectomy contribute to better survival? Background/aims: This study was conducted to clarify the impact of pancreaticosplenectomy on the prognosis of patients with gastric carcinoma. Methodology: Two hundred and seventy-two patients who underwent total gastrectomy with distal pancreatectomy and splenectomy were retrospectively reviewed. Results: Lymph node metastases at the splenic hilum (#10) and along the splenic artery (#11) occurred in 12.4% and 19.2% of cases, respectively. The 5-year survival rate of those without metastasis at #10 was 62.8%. Once nodal metastasis occurred, the prognosis became very poor; only 18.2% in those with a single positive node and 15.4% of those with two or more positive nodes at this location survived 5 years. Similar trends in survival were observed with respect to nodes at #11. When stratified by nodal status as currently determined by microscopic examination, pancreaticosplenectomy saved 4.5% of patients with positive nodes, but was insufficient in 17.3% of cases and was not necessary in the 78.2% of cases who were node negative at these locations. Conclusions: The data indicate that pancreaticosplenectomy can save some patients with positive nodes in these regions; however, the small survival benefit does not provide a basis for the general application of this highly morbid procedure. To further evaluate these results in a randomized study, selection of a subset of patients who are likely to have metastasis is the key. abstract_id: PUBMED:10027374 No survival benefit from combined pancreaticosplenectomy and total gastrectomy for gastric cancer. Background: It remains unknown whether extended surgery improves the survival rate of patients with gastric cancer. This study was a review of combined pancreaticosplenectomy with total gastrectomy for gastric cancer. Methods: The clinicopathological details of 190 patients who underwent combined pancreaticosplenectomy with total gastrectomy between 1969 and 1996 were compared retrospectively with those of 206 patients who underwent total gastrectomy with splenectomy alone. Results: There was no statistical difference in the survival rate when pancreaticosplenectomy was added to total gastrectomy. The mortality rate was similar, but the morbidity rate was higher in patients who had pancreaticosplenectomy. Six patients who had direct pancreatic invasion and two thought to have involved lymph nodes along the splenic artery survived for more than 5 years after operation. In the latter group, the metastatic lymph nodes along the splenic artery could have been resected without pancreatectomy. Conclusion: Since a combined pancreaticosplenectomy adds no survival advantage to total gastrectomy for gastric cancer, this procedure should no longer be regarded as routine. abstract_id: PUBMED:36891436 Splenectomy and Distal Pancreaticosplenectomy: Surgical Anatomy and Technique Specific to Advanced Ovarian Cancer. Ovarian cancer is one of the most aggressive malignancies in women and usually presents at an advanced stage. Complete tumor debulking and platinum sensitivity are the two important determinants of survival in ovarian cancer. Upper abdominal surgery with bowel resections and peritonectomy are usually needed to achieve optimal cytoreduction. Splenic disease in the form of diaphragmatic peritoneal disease or omental caking at the splenic hilum is not infrequent. Around 1-2% of these require distal pancreaticosplenectomy (DPS) and the decision to perform DPS versus splenectomy should be made early in the intraoperative period to prevent unnecessary hilar dissection and bleeding. We hereby describe the surgical anatomy of the spleen and pancreas and point of technique of splenectomy and DPS specific to advanced ovarian cancers. abstract_id: PUBMED:10206294 Total gastrectomy with simultaneous pancreaticosplenectomy or splenectomy in patients with advanced gastric carcinoma. A splenectomy or distal pancreaticosplenectomy is often performed simultaneously with total gastrectomy in the treatment of gastric carcinoma to facilitate dissection of the lymph nodes around the splenic artery and splenic hilus. However, the negative impact of splenectomy and pancreaticosplenectomy has also been reported. A retrospective analysis was performed to evaluate the outcomes of distal pancreaticosplenectomy and total gastrectomy, splenectomy and total gastrectomy, and gastrectomy alone in the patients with advanced gastric carcinoma without distant metastasis. Prognostic factors were examined. No significant differences existed in 5-year survival in the patients who underwent gastrectomy with splenectomy, gastrectomy with distal pancreaticosplenectomy, or gastrectomy alone. Neither splenectomy, nor distal pancreaticosplenectomy were prognostic factors. However, distal pancreaticosplenectomy was an independent predictor of pancreatic fistula. In conclusion, the addition of distal pancreaticosplenectomy or splenectomy to total gastrectomy for gastric cancer increases the risk of severe complications, but does not improve survival. abstract_id: PUBMED:10827328 Is there a benefit of pancreaticosplenectomy with gastrectomy for advanced gastric cancer? Background: In Japan, wide resection with extended lymph node dissection has been performed for advanced cancer with good prognosis. Pancreaticosplenectomy with gastrectomy is performed to facilitate dissection of the lymph nodes around the splenic artery. We attempted to evaluate the effects of pancreaticosplenectomy and splenectomy with gastrectomy for advanced gastric cancer. Methods: Gastric cancer patients underwent splenectomy with gastrectomy (78 cases), pancreaticosplenectomy with gastrectomy (105 cases), or gastrectomy alone (1,755 cases). Survival rates were compared among the three groups for each factor of the depth of invasion, stage, and curability. Results: There were no significant differences among the three groups. Pancreaticosplenectomy or splenectomy with gastrectomy to dissect lymph nodes does not improve survival but is associated with severe complications. Conclusions: The spleen should be resected when a patient has clearly positive node metastasis around the splenic hilus and artery, and pancreaticosplenectomy be performed when the cancer lesion invades the pancreas. abstract_id: PUBMED:11462954 Indications for pancreaticosplenectomy in advanced gastric cancer. Background/aims: The lymph nodes along the splenic artery (No. 11) and at the splenic hilum (No. 10) are classified in group 2 (n2) in the Japanese Classification of Gastric Carcinoma. Pancreaticosplenectomy is performed to achieve complete D2 dissection, but its efficacy remains controversial. To clarify the indications for pancreaticosplenectomy in gastric cancer, surgical results were investigated. Methodology: This study investigated 111 gastric cancer patients who underwent potentially curative total gastrectomy with pancreaticosplenectomy accompanied by D2 or more extensive lymph node dissection. The rate of lymph node metastasis and the number of Nos. 10 and 11 lymph nodes that contained metastases were ascertained from several histopathological findings. Predictive factors for metastasis in lymph nodes Nos. 10 and 11 and prognostic factors for survival were calculated and compared using the univariate and Cox proportional hazard regression model. Results: Lymph node metastasis to No. 10 or 11 was observed in 19 patients. Of these, 8 (42.1%) had paraaortic lymph node metastases. The average number of metastatic lymph nodes in the 19 patients was 19.4 +/- 19.2. The location of the primary tumor and the number of metastatic lymph nodes were correlated to lymph node metastasis to Nos. 10 and 11. Of the regional lymph nodes, the right paracardial lymph nodes and those along the short gastric vessels frequently metastasized to No. 10 or 11. The 5-year survival rate of patients with metastases in lymph nodes No. 10 or 11 was 23.8% and that with No. 16 metastases was 24.5%, whereas that in n2 without metastasis in No. 10 or 11 was 41.4%. The independent prognostic factor was the number of metastatic lymph nodes. Of the postoperative complications, pancreatic fistula was observed in 43 patients (38.7%) and followed by anastomotic leakage in 6 (5.4%). Conclusions: Pancreaticosplenectomy is indicated in patients with advanced gastric cancer in the upper third or the whole of the stomach and with lymph node metastasis at right paracardial or along the short gastric vessels. To obtain good surgical results, pancreaticosplenectomy with paraaortic lymph node dissection (D3) should be carried out in patients with as few metastatic lymph nodes as possible. abstract_id: PUBMED:32074797 Feasibility, safety and long-term efficacy of laparoscopic total gastrectomy combined with distal pancreaticosplenectomy for T4b gastric cancer Objective: To explore the feasibility, safety and long-term efficacy of laparoscopic total gastrectomy combined with distal pancreaticosplenectomy for the treatment of T4b gastric cancer. Methods: A retrospective cohort study was performed. Clinical data of consecutive patients with T4b gastric cancer invading pancreatic tail undergoing laparoscopic or open total gastrectomy combined with distal pancreaticosplenectomy from January 2010 to December 2014 were analyzed retrospectively. Enrollment criteria: (1) primary gastric cancer confirmed by pathology as T4b adenocarcinoma; (2) chest+abdominal+pelvic enhanced CT indicated cancer invading pancreatic tail without distant metastasis, and R0 resection was evaluated as feasible before operation; (3) physical status was ECOG score 0 to 2, and was tolerant to operation. Patients with peritoneal implant metastasis and tumor invasion of other organs during operation, or changes in surgical methods for other reasons were excluded. All the operations were performed by the same surgical team, which had the experiences of more than 100 cases of laparoscopic and 100 cases of open radical gastrectomy with D2 lymph node dissection. The choice of surgical procedure was discussed by the surgeon and the patient, and decided according to the patient's intension. Patients were divided into the laparoscopic group and open group according to the surgical method. Intraoperative and perioperative findings were compared between the two groups. The 3-year disease-free survival rate were analyzed with Kaplan-Meier survival curve and compared by using log-rank test. Results: A total of 37 consecutive patients were enrolled, including 21 in the laparoscopic group and 16 in the open group, and no one receiving laparoscopic procedure was converted to open surgery. The baseline data of two groups were comparable (all P&gt;0.05). Compared with the open group, the laparoscopic group had significantly longer operation time [(264.0±35.1) minutes vs. (226.6±49.9) minutes, t=2.685, P=0.011], significantly less intraoperative blood loss [(65.7±37.4) ml vs. (182.2±94.6) ml, t=-4.658, P&lt;0.001], significantly shorter time to postoperative flatus [(2.8±0.7) days vs. (4.1±0.7) days, t=-5.776, P&lt;0.001] and significantly shorter postoperative hospital stay [(13.3±2.8) days vs. (16.6±4.3) days, t=-2.822, P=0.008]. Morbidity of postoperative complications, including anastomotic leakage, pancreatic fistula, abdominal abscess, intraperitoneal hemorrhage and duodenal stump leakage, in two groups was similar [19.0% (4/21) vs. 4/16, P=0.705]. There were no cases of anastomotic bleeding or stenosis. The 30-day postoperative mortality was 0 in the laparoscopic group and 1/16 in the open group, respectively (P=0.432). The 3-year disease-free survival rates were 38.1% and 37.5% in the laparoscopic and open group, respectively (P=0.751). Conclusion: Laparoscopic total gastrectomy combined with distal pancreaticosplenectomy performed by experienced surgeons for T4b gastric cancer is safe and effective. abstract_id: PUBMED:37466897 Inframesocolic main pancreatic vessels-first approach for minimally invasive radical antegrade modular pancreaticosplenectomy (RAMPS): technical description and first experience. Radical modular antegrade pancreaticosplenectomy (RAMPS) improves posterior tumor-free margins during resections of pancreatic neoplasia involving the body or tail. However, minimally invasive RAMPS is technically challenging and has been reported seldom. We present for the first time a minimally invasive RAMPS technique with an innovative approach providing early dissection and control of the main peripancreatic vessels from an inframesocolic embryonal window, suitable for laparoscopy and robotics. Minimally invasive RAMPS with inframesocolic main pancreatic vessels-first approach was performed at the Tricase Hospital (Italy) from May 2017 to April 2022 in 11 consecutive patients with neoplastic lesions of the pancreas (8 laparoscopic RAMPS and 3 robotic RAMPS). Among the laparoscopic cases, 1 included a portal vein tangential resection and 1 a celiac artery resection (modified Appleby procedure). There were no conversions, no Clavien-Dindo complications &gt; 2, all resections' margins were tumor free, and no 90-day mortality. abstract_id: PUBMED:15011889 Postoperative morbidity/mortality and survival rates after total gastrectomy, with splenectomy/pancreaticosplenectomy for patients with advanced gastric cancer. Background/aims: Indications for splenectomy in patients with proximal and middle gastric cancer remain controversial. We investigated characteristic findings in patients with lymph node metastasis to the splenic hilus and the indication of splenectomy with total gastectomy for T2 and T3 advanced gastric cancer. Methodology: Two hundred and forty-one Japanese patients underwent curative operations for T2 and T3 advanced gastric cancer. Results: The mortality rates were similar, but the morbidity rate for patients who underwent pancreaticosplenectomy was higher than for patients who underwent either total gastrectomy alone or with splenectomy (p&lt;0.007). The rates in cases of lymph node metastasis at the depth of tumor invasion within the subserosa and serosa (T3) were 1.7% and 17.5%, respectively (p&lt;0.003). Lymph node metastasis to the splenic hilus was also evident in patients with T3 or T2 advanced gastric cancer with multiple lymph nodes metastasis (more than 7 nodes). The 10-year survival rates for patients who underwent total gastrectomy alone, with splenectomy, and with pancreaticosplenectomy in T3 advanced gastric cancers were 25%, 42% and 32%, respectively (p=0.184). Conclusions: Based on these data, the addition of distal pancreaticosplenectomy to total gastrectomy in patients with T2 and T3 advanced gastric cancer increased the risk of complications. Nevertheless, we recommend that total gastrectomy with splenectomy should be done for patients with T3 advanced gastric cancers [and T2 advanced gastric cancer with multiple lymph nodes metastasis (more than 7 nodes)], recognizing the lymph node metastasis to the splenic hilus. abstract_id: PUBMED:31142071 Gastrectomy with either wedge resection of the pancreas or pancreaticosplenectomy for adenocarcinoma of the esophagogastric junction invading the pancreas: a comparison study Objective: To compare the outcomes of gastrectomy with either wedge resection of the pancreas or pancreaticosplenectomy for adenocarcinoma of the esophagogastric junction (AEG) invading pancreas. Methods: From May 2005 to December 2015, a total of 64 patients with AEG invading pancreas underwent gastrectomy with either wedge resection of pancreas (n=25) or pancreaticosplenectomy (n=39) at Department of Thoracic Surgery, Fourth Hospital, Hebei Medical University. There were 53 males and 11 females, with a mean age of 60.2 years (range: 39 to 77 years). According to the AJCC esophageal cancer staging system, 8(th) edition, there were 27 patients in phase T4N0M0, 18 in phase T4bN1M0, 9 in phase T4bN2M0 and 10 in phase T4bN3M0. Follow-up was carried out every 6 months. The t-test for the measurement data and the χ(2) test, Fisher exact test or Wilcoxon ran-sum test for the enumeration data were used between the two groups. Survival curves were generated using the Kaplan-Meier method, and compared using the Log-rank test. Multivariate analysis was undertaken using the Cox proportional hazard model (forward stepwise regression). Results: In 39 patients who underwent pancreaticosplenectomy, incision infection occurred in 5 patients, anastomotic leak, peritoneal infection, lung infarction each occurred in 1 patient. There was 1 respiratory failure and 1 peritoneal infection in 25 patients undergoing wedge resection of the pancreas. There were no significant difference in the incidence of postoperative complications between the 2 groups (8/39 vs. 2/25, P=0.292), and no postoperative death in the study. Fifty-seven patients were followed up, with a follow-up rate of 89.1%. The 5-year overall survival rate was 32.3% in patients who underwent simultaneous gastrectomy and pancreaticosplenectomy, compared to 0 in those who underwent gastrectomy and wedge resection of the pancreas (χ(2)=4.484, P=0.034). The 5-year overall survival rate for patients who undergoing adjuvant chemotherapy was 32.3%, compared to 17.2% in whom underwent surgery alone (χ(2)=4.186, P=0.041). Conclusions: Survival benefit from R0 resection by simultaneous gastrectomy and pancreaticosplenectomy for AEG invading the pancreas can be achieved. Adjuvant chemotherapy is necessary for these patients. Answer: The evidence from the provided abstracts suggests that pancreaticosplenectomy does not consistently contribute to better survival rates for patients with gastric cancer. Several studies found no significant survival benefit from adding pancreaticosplenectomy to total gastrectomy for gastric cancer (PUBMED:10027374, PUBMED:10206294, PUBMED:10827328, PUBMED:15011889). In fact, some studies reported that the procedure could lead to higher morbidity rates without improving survival (PUBMED:10027374, PUBMED:10206294, PUBMED:15011889). One study indicated that pancreaticosplenectomy might be beneficial for a subset of patients with positive nodes in specific regions (PUBMED:12239961), but the overall small survival benefit does not justify the general application of this highly morbid procedure. Another study suggested that pancreaticosplenectomy is indicated for patients with advanced gastric cancer in certain parts of the stomach and with lymph node metastasis at specific locations (PUBMED:11462954). However, the procedure should be carried out with as few metastatic lymph nodes as possible to obtain good surgical results. In the context of advanced ovarian cancer, splenectomy and distal pancreaticosplenectomy are sometimes necessary to achieve optimal cytoreduction, but the decision to perform these procedures should be made early in the intraoperative period (PUBMED:36891436). For T4b gastric cancer, one study found that laparoscopic total gastrectomy combined with distal pancreaticosplenectomy was safe and effective, with similar disease-free survival rates compared to open surgery (PUBMED:32074797). However, this study focused on the feasibility and safety of the laparoscopic approach rather than the direct impact on survival. In summary, while there may be specific cases where pancreaticosplenectomy is indicated, the general consensus across the studies is that the procedure does not provide a significant survival advantage for gastric cancer patients and is associated with higher morbidity rates. Therefore, it should not be regarded as routine and its use should be carefully considered based on individual patient circumstances (PUBMED:10027374, PUBMED:10206294, PUBMED:10827328, PUBMED:11462954, PUBMED:12239961, PUBMED:15011889, PUBMED:32074797).
Instruction: Are generalized and localization-related epilepsies genetically distinct? Abstracts: abstract_id: PUBMED:9520007 Are generalized and localization-related epilepsies genetically distinct? Background: Whether the genetic influences are distinct for generalized and localization-related epilepsies or whether some susceptibility genes raise the risk for both types of epilepsy is uncertain. Objective: To evaluate genetic heterogeneity in epilepsy. Methods: We used Cox proportional hazards analysis to compute rate ratios (RRs) for generalized and localization-related idiopathic or cryptogenic epilepsy in the first-degree relatives of 1498 adult probands with idiopathic or cryptogenic epilepsy ascertained from voluntary organizations. The reference group comprised the first-degree relatives of 362 probands from the same study with postnatal symptomatic epilepsy in whom the genetic contributions appear to be minimal. Results: In the parents and siblings, the risk for all idiopathic or cryptogenic epilepsy was greater if the proband's epilepsy was generalized than if it was localization-related (RR, 4.7 vs 2.4). However, in the parents and siblings of each group of probands, the increased risk was not restricted to the same type of epilepsy as in the proband. The results differed in offspring, with a greater risk for all types of epilepsy if the proband's epilepsy was localization-related than if it was generalized (RR, 4.2 vs 1.6) and a greater risk for localization-related epilepsy than for generalized epilepsy (RR, 7.8 vs 1.8) if the proband's epilepsy was localization-related. Conclusions: These findings in parents and siblings suggest that some susceptibility genotypes raise the risk for both generalized and localization-related epilepsies but are more common in persons affected with generalized epilepsy. The different findings in offspring may reflect a different influence on susceptibility in that subgroup. abstract_id: PUBMED:33098311 Generalized, focal, and combined epilepsies in families: New evidence for distinct genetic factors. Objective: To determine the roles of shared and distinct genetic influences on generalized and focal epilepsy operating in individuals who manifest features of both types (combined epilepsies), and in families manifesting both generalized and focal epilepsies in separate individuals (mixed families). Methods: We analyzed the deeply phenotyped Epi4K cohort of multiplex families (≥3 affected individuals per family) using methods that quantify the aggregation of phenotypes within families and the relatedness of individuals with different phenotypes within family pedigrees. Results: The cohort included 281 families containing 1021 individuals with generalized (n = 484), focal (304), combined (51), or unclassified (182) epilepsies. The odds of combined epilepsy was higher in relatives of participants with combined epilepsy than in relatives of those with other epilepsy types (odds ratio [OR] 5.2, 95% confidence interval [CI] 1.7-16.1, P = .004). Individuals with combined epilepsy co-occurred in families more often than expected by chance (P = .03). Within mixed families, individuals with each type of epilepsy were more closely related to relatives with the same type than to relatives with other types (P &lt; .001). Significance: These findings suggest that distinct genetic influences underlie the recently recognized entity of combined epilepsies, just as generalized epilepsies and focal epilepsies each have distinct genetic influences. Mixed families may in part reflect chance co-occurrence of these distinct genetic influences. These conclusions have important implications for molecular genetic studies aimed at identifying genetic determinants of the epilepsies. abstract_id: PUBMED:12919389 Evidence for distinct genetic influences on generalized and localization-related epilepsy. Purpose: Determining the existence of syndrome-specific genetic factors in epilepsy is essential for phenotype definition in genetic linkage studies, and informs research on basic mechanisms. Analysis of concordance of epilepsy syndromes in families has been used to assess shared versus distinct genetic influences on generalized epilepsy (GE) and localization-related epilepsy (LRE). However, it is unclear how the results should be interpreted in relation to specific genetic hypotheses. Methods: To assess evidence for distinct genetic influences on GE and LRE, we examined concordance of GE and LRE in 63 families containing multiple individuals with idiopathic or cryptogenic epilepsy, drawn from the Epilepsy Family Study of Columbia University. To control for the number of concordant families expected by chance, we used a permutation test to compare the observed number with the number expected from the distribution of individuals with GE and LRE in the study families. Results: Of the families, 62% were concordant for epilepsy type, and 38% were discordant. In all analyses, the proportion of concordant families was significantly greater than expected. Conclusions: This suggests that some genetic influences predispose specifically to either GE or LRE. Because of the ascertainment bias resulting from the selection of families containing multiple individuals with epilepsy, we could not test whether there are also shared genetic influences on these two epilepsy subtypes. Population-based studies will be needed to explore these results further. abstract_id: PUBMED:35706911 Genetic generalized epilepsy and generalized onset seizures with focal evolution (GOFE). "Generalized Onset with Focal Evolution" (GOFE) is an underrecognized seizure type defined by an evolution from generalized onset to focal activity during the same ictal event. We aimed to discuss electroclinical aspects of GOFE and to emphasize its link with Genetic Generalized Epilepsy (GGE). Patients were identified retrospectively over 10 years, using the video-EEG data base from the Epilepsy Unit of Strasbourg University Hospital. GOFE was defined, as previously reported, from an EEG point of view with an evolution from generalized onset to focal activity during the same ictal event. Three male patients with GOFE were identified among 51 patients with recorded tonic-clonic seizures. Ages at onset of seizures were 13, 20 and 22 years. Focal clinical features (motor asymmetric phenomenology) could be identified. EEG showed generalized interictal discharges with focal evolution of various localization. Four seizures were recorded characterized by 2-3 s of generalized abnormalities followed by focal (parieto-occipital or frontal) discharges. There were initially uncontrolled seizures with lamotrigine, but all patients reported a good outcome with valproate monotherapy. We emphasize that GOFE presents many similarities with GGE. Recognition of the GOFE entity could bring a therapeutic interest avoiding misdiagnosis of focal epilepsy and consequently inappropriate use of narrow spectrum anti-seizure medicine. abstract_id: PUBMED:35329938 Distinct Functional Cortico-Striato-Thalamo-Cerebellar Networks in Genetic Generalized and Focal Epilepsies with Generalized Tonic-Clonic Seizures. This study aimed to delineate cortico-striato-thalamo-cerebellar network profiles based on static and dynamic connectivity analysis in genetic generalized and focal epilepsies with generalized tonic-clonic seizures, and to evaluate its potential for distinguishing these two epilepsy syndromes. A total of 342 individuals participated in the study (114 patients with genetic generalized epilepsy with generalized tonic-clonic seizures (GE-GTCS), and 114 age- and sex-matched patients with focal epilepsy with focal to bilateral tonic-clonic seizure (FE-FBTS), 114 healthy controls). Resting-state fMRI data were examined through static and dynamic functional connectivity (dFC) analyses, constructing cortico-striato-thalamo-cerebellar networks. Network patterns were compared between groups, and were correlated to epilepsy duration. A pattern-learning algorithm was applied to network features for classifying both epilepsy syndromes. FE-FBTS and GE-GTCS both presented with altered functional connectivity in subregions of the motor/premotor and somatosensory networks. Among these two groups, the connectivity within the cerebellum increased in the static, while the dFC variability decreased; conversely, the connectivity of the thalamus decreased in FE-FBTS and increased in GE-GTCS in the static state. Connectivity differences between patient groups were mainly located in the thalamus and cerebellum, and correlated with epilepsy duration. Support vector machine (SVM) classification had accuracies of 66.67%, 68.42%, and 77.19% when using static, dynamic, and combined approaches to categorize GE-GTCS and FE-GTCS. Network features with high discriminative ability predominated in the thalamic and cerebellar connectivities. The network embedding of the thalamus and cerebellum likely plays an important differential role in GE-GTCS and FE-FBTS, and could serve as an imaging biomarker for differential diagnosis. abstract_id: PUBMED:36190316 "Generalized-to-focal" epilepsy: stereotactic EEG and high-frequency oscillation patterns Objective: We aimed to clarify the pathophysiology of epilepsy involving seizures with apparently generalized onset, progressing to focal ictal rhythm through stereotactic EEG (SEEG) implantation, recording, stimulation and high-frequency oscillation (HFO) analysis. Methods: We identified two patients with seizures with bilateral electrographic onset evolving to focal ictal rhythm, who underwent SEEG implantation. Patients had pre-surgical epilepsy work-up, including prolonged video scalp EEG, brain MRI, PET, ictal/interictal SPECT, MEG, and EEG-fMRI prior to SEEG implantation. Results: Both patients had childhood-onset seizures involving behavioural arrest and left versive head and eye deviation, evolving to bilateral tonic-clonic convulsions. Seizures were electrographically preceded by diffuse, bilateral 3-Hz activity resembling absence seizures. Both had suspected focal lesions based on neuroimaging, including 3T MRI and voxel-based post-processing in one patient. Electrode stimulation did not elicit any habitual electroclinical seizures. HFO analysis showed bilateral focal regions with high fast-ripple rates. Significance: “Generalized-to-focal” seizures may occur due to a diffuse, bilateral epileptic network, however, both patients showed ictal evolution from a generalized pattern to a single dominant focus which may explain why the focal aspect of their seizures had a consistent clinical semiology. Patients such as these may have a unique form of generalized epilepsy, but focal/multifocal cerebral abnormalities are also a possibility. abstract_id: PUBMED:24938654 Focal abnormalities in idiopathic generalized epilepsy: a critical review of the literature. Conventionally, epilepsy is dichotomized into distinct "focal" and "generalized" categories. However, many studies have reported so-called focal features among patients with idiopathic generalized epilepsy (IGE) in the domains of semiology, electroencephalography, neuropsychology, neuropathology, and neuroimaging. We sought to review such features and clinical implications. A Web of Science database search was conducted to identify relevant publications. Our search yielded 145 papers describing focal features involving different domains in IGE, with 117 papers analyzed after excluding abstracts and case reports. Focal semiologic features are commonly seen in IGE. There are conflicting data from studies in the domains of electroencephalography, neuroimaging, and neuropathology. Studies on neuropsychology are suggestive of frontal lobe functional deficits in juvenile myoclonic epilepsy. Most advanced neuroimaging studies demonstrate the involvement of both the thalamus and the cortex during generalized spike-wave discharges (GSWDs). A few electroencephalographic and neuroimaging studies indicate that the cortex precedes the thalamus at the onset of GSWD. Focal features may contribute to misdiagnosis of IGE as focal epilepsy. However there are methodologic limitations in the studies that affect the results. abstract_id: PUBMED:26619379 Generalized onset seizures with focal evolution (GOFE) - A unique seizure type in the setting of generalized epilepsy. Purpose: We report clinical and electrographic features of generalized onset seizures with focal evolution (GOFE) and present arguments for the inclusion of this seizure type in the seizure classification. Methods: The adult and pediatric Epilepsy Monitoring Unit databases at Vanderbilt Medical Center and Children's Hospital were screened to identify generalized onset seizures with focal evolution. We reviewed medical records for epilepsy characteristics, epilepsy risk factors, MRI abnormalities, neurologic examination, antiepileptic medications before and after diagnosis, and response to medications. We also reviewed ictal and interictal EEG tracings, as well as video-recorded semiology. Results: Ten patients were identified, 7 males and 3 females. All of the patients developed generalized epilepsy in childhood or adolescence (ages 3-15years). Generalized onset seizures with focal evolution developed years after onset in 9 patients, with a semiology concerning for focal seizures or nonepileptic events. Ictal discharges had a generalized onset on EEG, described as either generalized spike-and-wave and/or polyspike-and-wave discharges, or generalized fast activity. This electrographic activity then evolved to focal rhythmic activity most commonly localized to one temporal or frontal region; five patients had multiple seizures evolving to focal activity in different regions of both hemispheres. The predominant interictal epileptiform activity included generalized spike-and-wave and/or polyspike-and-wave discharges in all patients. Taking into consideration all clinical and EEG data, six patients were classified with genetic (idiopathic) generalized epilepsy, and four were classified with structural/metabolic (symptomatic) generalized epilepsy. All of the patients had modifications to their medications following discharge, with three becoming seizure-free and five responding with &gt;50% reduction in seizure frequency. Conclusion: Generalized onset seizures may occasionally have focal evolution with semiology suggestive of focal seizures, leading to a misdiagnosis of focal onset. This unique seizure type may occur with genetic as well as structural/metabolic forms of epilepsy. The identification of this seizure type may help clinicians choose appropriate medications, avoiding narrow spectrum agents known to aggravate generalized onset seizures. abstract_id: PUBMED:34571366 Distinct gene-set burden patterns underlie common generalized and focal epilepsies. Background: Analyses of few gene-sets in epilepsy showed a potential to unravel key disease associations. We set out to investigate the burden of ultra-rare variants (URVs) in a comprehensive range of biologically informed gene-sets presumed to be implicated in epileptogenesis. Methods: The burden of 12 URV types in 92 gene-sets was compared between cases and controls using whole exome sequencing data from individuals of European descent with developmental and epileptic encephalopathies (DEE, n = 1,003), genetic generalized epilepsy (GGE, n = 3,064), or non-acquired focal epilepsy (NAFE, n = 3,522), collected by the Epi25 Collaborative, compared to 3,962 ancestry-matched controls. Findings: Missense URVs in highly constrained regions were enriched in neuron-specific and developmental genes, whereas genes not expressed in brain were not affected. GGE featured a higher burden in gene-sets derived from inhibitory vs. excitatory neurons or associated receptors, whereas the opposite was found for NAFE, and DEE featured a burden in both. Top-ranked susceptibility genes from recent genome-wide association studies (GWAS) and gene-sets derived from generalized vs. focal epilepsies revealed specific enrichment patterns of URVs in GGE vs. NAFE. Interpretation: Missense URVs affecting highly constrained sites differentially impact genes expressed in inhibitory vs. excitatory pathways in generalized vs. focal epilepsies. The excess of URVs in top-ranked GWAS risk-genes suggests a convergence of rare deleterious and common risk-variants in the pathogenesis of generalized and focal epilepsies. Funding: DFG Research Unit FOR-2715 (Germany), FNR (Luxembourg), NHGRI (US), NHLBI (US), DAAD (Germany). abstract_id: PUBMED:31882201 Focal EEG abnormalities and focal ictal semiology in generalized epilepsy. In clinical practice, the diagnosis of focal vs generalized epilepsy dictates the management of the patient. The distinction between generalized and focal epilepsy is at times imperfect and some epilepsies have features that fall in between these two extremes. An example is the occurrence of focal interictal and focal ictal abnormalities in generalized epilepsies. As a part of the special issue on "The epileptogenic zone in pediatric epilepsy surgery", this focused narrative review will discuss different focal abnormalities seen in generalized epilepsy. An overlap of focal and generalized epileptiform abnormalities may support a continuum between focal and generalized epilepsy. When evaluating patients in the "gray zone", other factors such as ictal semiology, neuroimaging, genetic testing and functional deficits may need to be considered to reach an accurate diagnosis. Being aware of possible occurrence of focal clinical and EEG features in generalized epilepsy will help clinicians select more preferred AED (s), avoiding potential iatrogenic side effects and inappropriate consideration for epilepsy surgery. Answer: The evidence from the provided abstracts suggests that generalized and localization-related epilepsies have both shared and distinct genetic influences. Some studies have found that there are genetic influences that predispose specifically to either generalized epilepsy (GE) or localization-related epilepsy (LRE), indicating that they may be genetically distinct to some extent (PUBMED:12919389). For example, in the study of first-degree relatives of probands with idiopathic or cryptogenic epilepsy, there was an increased risk for all types of epilepsy, not restricted to the same type as in the proband, suggesting that some susceptibility genotypes raise the risk for both generalized and localization-related epilepsies (PUBMED:9520007). However, other studies have identified distinct genetic factors for generalized, focal, and combined epilepsies, suggesting that these epilepsy types may have unique genetic underpinnings. For instance, the Epi4K cohort study found that individuals with combined epilepsy (features of both generalized and focal) were more likely to have relatives with the same type of epilepsy, and within mixed families, individuals with each type of epilepsy were more closely related to relatives with the same type than to relatives with other types (PUBMED:33098311). This indicates that there may be distinct genetic factors at play for different types of epilepsies. Additionally, the concept of "generalized onset with focal evolution" (GOFE) and "generalized-to-focal" seizures has been discussed, which further complicates the dichotomy between generalized and focal epilepsies. These seizure types suggest that there may be a spectrum of genetic influences contributing to the manifestation of epilepsy, with some patients exhibiting both generalized and focal features (PUBMED:35706911, PUBMED:36190316). Furthermore, studies have shown that there are distinct functional networks in genetic generalized and focal epilepsies, as evidenced by differences in cortico-striato-thalamo-cerebellar network connectivity (PUBMED:35329938). This supports the idea that there are different pathophysiological mechanisms, which could be underpinned by distinct genetic factors. In summary, while there is evidence for some shared genetic susceptibility between generalized and localization-related epilepsies, there is also substantial evidence supporting the existence of distinct genetic factors for these epilepsy types. The presence of combined epilepsies and seizures with both generalized and focal features suggests a complex genetic landscape that may not be fully captured by a simple dichotomy between generalized and localization-related epilepsies.
Instruction: Elapsed time between teaching and evaluation: does it matter? Abstracts: abstract_id: PUBMED:18820495 Elapsed time between teaching and evaluation: does it matter? Background: Web-based course evaluation systems offer the potential advantage of timely evaluations. The authors examined whether elapsed time between teaching and student evaluation of teaching impacts preclinical courses' quality ratings. Method: The overall relationship of elapsed time with evaluation rating was explored with regression and ANOVA. Time between teaching event and evaluation was categorized by weeks. Within-teaching-events means and variances in evaluations related to elapsed weeks were compared using repeated-measures ANOVA. Results: With more elapsed weeks, quality mean ratings increased (P &lt; .001) and variability decreased (P &lt; .001); effect sizes were small (average effect size = 0.06). Trends were similar in regression analysis and for data aggregated by event. Conclusions: Summaries of event quality are negligibly impacted by evaluation timing. Future studies should examine the impact of other Web-based evaluation features on evaluation. abstract_id: PUBMED:35064529 Time Elapsed from Onset of Pediatric Convulsive Status Epilepticus to Antiepileptic Administration-An Experience of Single Institute. Data regarding time elapsed from the onset of pediatric convulsive status epilepticus (CSE) to antiepileptic (AED) administration remains scarce after the adoption of standard treatment-guidelines in Indian healthcare settings. A prospective observational analysis was performed on 52 children presenting to an urban, academic tertiary care teaching hospital diagnosed with CSE and evolving to refractory CSE (RCSE). Time frames of AED administration were compared to the adopted 'Status Epilepticus Management Protocol'. Fifty-two patients [36 (69.2%) male] were enrolled, with a median age of 4.1 y. After CSE onset, the median (p25-p75) time until the administration of the first-line, second-line, and third-line therapy phases of AED doses were 30 (25-37) min, 68 (48-79) min, and 105 (100-135) min, respectively. The second dose of non-BZD AED was administered at a median (p25-p75) of 90 (71-95) min. Twenty-six (50%) patients received at least one continuous infusion. The time elapsed from CSE onset to AED administration and escalation from one class to another was delayed. abstract_id: PUBMED:29852314 Time-elapsed synchrotron-light microstructural imaging of femoral neck fracture. Time-elapsed micro-computed-tomography (μCT) imaging allows studying bone micromechanics. However, no study has yet performed time-elapsed μCT imaging of human femoral neck fractures. We developed a protocol for time-elapsed synchrotron μCT imaging of the microstructure in the entire proximal femur, while inducing clinically-relevant femoral neck fractures. Three human cadaver femora (females, age: 75-80 years) were used. The specimen-specific force to be applied at each load step was based on the specimens' strength estimated a priori using finite-element analysis of clinical CT images. A radio-transparent compressive stage was designed for loading the specimens while recording the applied load during synchrotron μCT scanning. The total μCT scanning field of view was 146 mm wide and 131 mm high, at 29.81 µm isotropic pixel size. Specimens were first scanned unloaded, then under incremental load steps, each equal to 25% of the estimated specimens' strength, and ultimately after fracture. Fracture occurred after 4-5 time-elapsed load steps, displaying sub-capital fracturing of the femoral neck, in agreement with finite-element predictions. Time-elapsed μCT images, co-registered to those of the intact specimen, displayed the proximal femur microstructure under progressive deformation up to fracture. The images showed (1) a spatially heterogeneous deformation localized in the proximal femoral head; (2) a predominantly elastic recovery, after load removal, of the diaphyseal and trochanteric regions and; (3) post-fracture residual displacements, mainly localized in the fractured region. The time-elapsed μCT imaging protocol developed and the high resolution images generated, made publicly available, may spur further research into human femur micromechanics and fracture. abstract_id: PUBMED:26747514 Time-elapsed screw insertion with microCT imaging. Time-elapsed analysis of bone is an innovative technique that uses sequential image data to analyze bone mechanics under a given loading regime. This paper presents the development of a novel device capable of performing step-wise screw insertion into excised bone specimens, within the microCT environment, whilst simultaneously recording insertion torque, compression under the screw head and rotation angle. The system is computer controlled and screw insertion is performed in incremental steps of insertion torque. A series of screw insertion tests to failure were performed (n=21) to establish a relationship between the torque at head contact and stripping torque (R(2)=0.89). The test-device was then used to perform step-wise screw insertion, stopping at intervals of 20%, 40%, 60% and 80% between screw head contact and screw stripping. Image data-sets were acquired at each of these time-points as well as at head contact and post-failure. Examination of the image data revealed the trabecular deformation as a result of increased insertion torque was restricted to within 1mm of the outer diameter of the screw thread. Minimal deformation occurred prior to the step between the 80% time-point and post-failure. The device presented has allowed, for the first time, visualization of the micro-mechanical response in the peri-implant bone with increased tightening torque. Further testing on more samples is expected to increase our understanding of the effects of increased tightening torque at the micro-structural level, and the failure mechanisms of trabeculae. abstract_id: PUBMED:26005296 Estimation of time elapsed since the death from identification of morphological and histological time-related changes in dental pulp: An observational study from porcine teeth. Aim: Putrefaction of the human body with its rate and stages of the various changes occurring in this entire process have been explored widely by the forensic medicine experts to estimate the time elapsed since death. However, experimental data reported in literature pertaining to rates of putrefaction of the dental pulp retrieved from jaws of the dead is scarce. This study makes an attempt to find out the series of various changes which occur during the process of putrefaction of the dental pulp in a coastal environment like that of Southern India. An attempt has also been made to estimate the time elapsed since the death by assessing the duration for which dental pulp remains microscopically intact. Materials And Methods: Three different study setups at different times, followed one by other were created. In each setup, 10 specimens of porcine jaws with teeth were buried in surface soil and 10 specimens in subsurface soil. Dental pulp was retrieved at an interval of every 24 h to see for the various changes. All the environmental parameters including average daily rainfall precipitation, temperature, soil humidity, soil temperature, and soil pH were recorded. Results: A specific series of morphological changes in terms of changes in size, color, consistency, and odor; and a sequence of histological changes were observed from both surface and subsurface samples. Conclusion: Dental pulp buried in a coastal environment goes through a specific series of morphological and histological changes which can be interpreted up to 144 h from burial, after which pulp ceases to exist. abstract_id: PUBMED:31378752 Effect of elapsed time after air abrasion on bond strength of luting agent to CAD/CAM resin blocks. This experiment assessed the effect of elapsed time between air abrasion and bonding on tensile bond strength (TBS) between computer-aided design/computer-aided manufacturing (CAD/CAM) composite resin blocks and luting agents. Specimens were divided into eight groups classified by block type (Estelite Block or HC Block), elapsed time after air abrasion (none [D0] or 1 week [D7]), luting agent type (Estecem [ESC] or Rely X Ultimate [RLU]), and polymerization condition (chemical cured [CC] or light cured [LC]). In the CC+ESC group, TBS was significantly higher at D0 than at D7. There was also a significant difference between blocks in all groups except the LC+ESC group. Scanning electron microscopy and atomic force microscopy were used to observe the surfaces of blocks after air abrasion and indicated that blocks absorbed water in air with elapsed time, which affected TBS between CAD/CAM composite resin blocks and luting agents. abstract_id: PUBMED:34225105 Relationship between oral cancer stage and elapsed time from the onset of signs and symptoms to diagnosis and treatment. Background: Oral cancer includes a variety of diagnoses of malignancies that manifest in the oral tissues. Prognosis and treatment depend on the site of involvement, the time of diagnosis, and the stage of the tumor. Early diagnosis of oral mucosal lesions facilitates the early detection of cancer, which is a key step for treatment. The purpose of this study was to investigate the relationship between delayed referral of patients with oral cancer and disease progression at the time of diagnosis. Materials And Methods: In this cross-sectional study, data were collected from 108 patients with a definitive diagnosis of oral cancer by a standardized questionnaire. Data were analyzed by descriptive statistics, including mean, standard deviation, frequency, frequency percentage, and inferential statistics, including logistic regression analysis. P&lt;0.05 was considered to be statistically significant. Results: The mean time of referral to a therapist was 17.73 ± 22.80 weeks, with 53 (49.1%) patients having a delay of more than 10 weeks. Age, education level, smoking, disease stage, N class, M class, and type of cancer were not significantly associated with the incidence of delay (P&gt;0.05). Patients whose first signs were significant bleeding and/or unrecovered ulceration showed a significant reduction in the delay time (OR = 0.024 and P = 0.038). Conclusion: There was no significant relationship between the disease progression in oral cancer and the time elapsed from the onset of symptoms to diagnosis and treatment. Hence, it seems necessary to take appropriate measures to enhance public awareness of oral cancer and its symptoms. abstract_id: PUBMED:34474155 Medial entorhinal cortex lesions produce delay-dependent disruptions in memory for elapsed time. Our memory for time is a fundamental ability that we use to judge the duration of events, put our experiences into a temporal context, and decide when to initiate actions. The medial entorhinal cortex (MEC), with its direct projections to the hippocampus, has been proposed to be the key source of temporal information for hippocampal time cells. However, the behavioral relevance of such temporal firing patterns remains unclear, as most of the paradigms used for the study of temporal processing and time cells are either spatial tasks or tasks for which MEC function is not required. In this study, we asked whether the MEC is necessary for rats to perform a time duration discrimination task (TDD), in which rats were trained to discriminate between 10-s and 20-s delay intervals. After reaching a 90% performance criterion, the rats were assigned to receive an excitotoxic MEC-lesion or sham-lesion surgery. We found that after recovering from surgery, rats with MEC lesions were impaired on the TDD task in comparison to rats with sham lesions, failing to return to criterion performance. Their impairment, however, was specific to the longer, 20-s delay trials. These results indicate that time processing is dependent on MEC neural computations only for delays that exceed 10 s, perhaps because long-term memory resources are needed to keep track of longer time intervals. abstract_id: PUBMED:33539780 Recording of elapsed time and temporal information about biological events using Cas9. DNA has not been utilized to record temporal information, although DNA has been used to record biological information and to compute mathematical problems. Here, we found that indel generation by Cas9 and guide RNA can occur at steady rates, in contrast to typical dynamic biological reactions, and the accumulated indel frequency can be a function of time. By measuring indel frequencies, we developed a method for recording and measuring absolute time periods over hours to weeks in mammalian cells. These time-recordings were conducted in several cell types, with different promoters and delivery vectors for Cas9, and in both cultured cells and cells of living mice. As applications, we recorded the duration of chemical exposure and the lengths of elapsed time since the onset of biological events (e.g., heat exposure and inflammation). We propose that our systems could serve as synthetic "DNA clocks." abstract_id: PUBMED:3612866 Paramedic perception of elapsed field time. An independent observer was placed with city paramedic teams to: a) record exactly how long emergency field procedures take; and b) determine the paramedic's perception of elapsed field time. One hundred eighteen runs were monitored; 33% required advanced life support skills, and 36% were for trauma. The response time (mean +/- S.E.M.) averaged 5.4 +/- 0.26 min, scene time 11.2 +/- 0.54 min, and transport time 9.5 +/- 0.86 min. The average time required to perform scene procedures ranged from 1.5 +/- 0.5 min for defibrillation to 5.0 +/- 0.47 min to start an IV. Paramedic perception of elapsed time for the entire run varied from the actual time by an average absolute value of 20%, with the greatest errors occurring in scene time estimates. Paramedics tended to overestimate elapsed time during short runs, and underestimate time on long runs. This report provides realistic field procedure times for urban paramedics. Distortion of time perception in the field environment emphasizes the need for strict medical control. Answer: The question of whether the elapsed time between teaching and evaluation matters has been explored in the context of web-based course evaluation systems. A study examined the impact of the time interval between teaching events and student evaluations on the quality ratings of preclinical courses. The results indicated that with more elapsed weeks, the mean quality ratings increased and the variability decreased, although the effect sizes were small (average effect size = 0.06). The study concluded that the timing of evaluations negligibly impacted the summaries of event quality, suggesting that elapsed time between teaching and evaluation does not significantly affect the evaluation outcomes (PUBMED:18820495).
Instruction: Carotid artery plaque classification: does contrast enhancement play a significant role? Abstracts: abstract_id: PUBMED:22555579 Carotid artery plaque classification: does contrast enhancement play a significant role? Background And Purpose: Previous articles have demonstrated that carotid artery plaques may have enhancement after administration of contrast material. The purpose of this study was to evaluate the effect of enhancement in carotid artery classification. Materials And Methods: Three hundred eighty consecutive patients (235 men, 145 women; median age 64, age range 32-87 years) were analyzed using a multidetector row CT scanner. Examinations were performed before and after CM administration. Carotid artery plaques with a attenuation value &lt;60 HU were considered fatty, those from 60-130 HU were considered mixed, and those &gt;130 HU were considered calcified. χ(2) tests, Student t tests, and Cohen analyses were performed. Results: Before CM administration, we observed 226 calcified, 175 mixed, and 206 fatty plaques; after CM administration, 229 calcified, 213 mixed, and 165 fatty plaques were observed. A statistically significant difference between these 2 groups was observed (P = .016). We found that 19.9% of fatty plaques become mixed (n = 41), whereas 1.7% of the mixed plaques become calcified (n = 3). All calcified plaques remained in the same category. Fatty plaques that changed type showed a larger enhancement compared with those that remained in the same class (P = .001). Cohen analyses showed very good agreement between observers before (κ = 0.834) and after contrast material administration (κ = 0.86). Conclusions: The results of this study indicate that the carotid artery plaques (fatty and mixed) significantly change according to whether analysis is performed before or after administration of contrast material. abstract_id: PUBMED:23910043 Enhancement patterns in the fibro cellular tissue in different kinds of plaques of the internal carotid artery. Background: The differentiation between stable and vulnerable plaques in the internal carotid artery (ICA) remains a matter of interest. With the implementation of contrast agent in magnetic resonance imaging (MRI) a more detailed plaque characterization is possible. The study at hand focuses on enhancement patterns of fibro cellular tissue in different kinds of plaques in the ICA. Methods: Between May 2011 and December 2012, 49 patients (39 male) with &gt;50% stenosis of the ICA were consecutively enrolled. In 10 patients with bilateral ICA stenosis, both plaques were included for analysis. We performed a classification of plaques according to Cai and observed 11 type 4-5 plaques, 15 type 6 plaques and 33 type 8 plaques. MRI was performed on a 3T whole body MR system. The standard 12 channel head coil was combined with the neck extension coil and two bilateral 7 cm loop coils. Post-contrast T1w images were subtracted from pre contrast images to identify late enhancement in fibro cellular tissue. Enhancement patterns were allocated as intraluminal, intraplaque and vasa vasorum enhancement in different types of plaques. Results: Fibro cellular tissue always exhibited a higher contrast enhancement compared to the sternocleidomastoid muscle. This reflects a higher grade of vascularization of the fibrocellular tissue. Contrast enhancement was present irrespective of the plaque type. In detail, intraluminal, intraplaque and vasa vasorum enhancement were observed in all types of plaques. Even type 8 plaques, according to the classification of Cai, had a significant contrast enhancement, though supposed to be with low inflammatory activity. Conclusion: Type 8 plaques might not be as stable as postulated. Whether the relevant uptake of contrast agent is due to the fibrous tissue or reflects the inflammatory activity of the plaque should be matter of further investigations. abstract_id: PUBMED:22923447 Clinical and histological significance of gadolinium enhancement in carotid atherosclerotic plaque. Background And Purpose: Although the ability of MRI to investigate carotid plaque composition is well established, the mechanism and the significance of plaque gadolinium (Gd) enhancement remain unknown. We evaluated clinical and histological significance of Gd enhancement of carotid plaque in patients undergoing endarterectomy for carotid stenosis. Methods: Sixty-nine patients scheduled for a carotid endarterectomy prospectively underwent a 3-T MRI. Carotid plaque enhancement was assessed on T1-weighted images performed before and 5 minutes after Gd injection. Enhancement was recorded according to its localization. Histological analysis was performed of the entire plaque and of the area with matched contrast enhancement on MR images. Results: Gd enhancement was observed in 59% patients. Three types of carotid plaques were identified depending on enhancement location (shoulder region, shoulder and fibrous cap, and central in the plaque). Fibrous cap rupture, intraplaque hemorrhage, and plaque Gd enhancement was significantly more frequent in symptomatic than in asymptomatic patients (P=0.043, P&lt;0.0001, and P=0.034, respectively). After histological analysis, Gd enhancement was significantly associated with vulnerable plaque (American Heart Association VI, P=0.006), neovascularization (P&lt;0.0001), macrophages (P=0.030), and loose fibrosis (P&lt;0.0001). Prevalence of neovessels, macrophages, and loose fibrosis in the area of Gd enhancement was 97%, 87%, and 80%, respectively, and was different depending on the enhancement location in the plaque. Fibrous cap status and composition were different depending on the type of plaque. Conclusions: Gd enhancement of carotid plaque is associated with vulnerable plaque phenotypes and related to an inflammatory process. abstract_id: PUBMED:24157737 Correlation between fissured fibrous cap and contrast enhancement: preliminary results with the use of CTA and histologic validation. Background And Purpose: Previous studies demonstrated that carotid plaques analyzed by CTA can show contrast plaque enhancement. The purpose of this preliminary work was to evaluate the possible association between the fissured fibrous cap and contrast plaque enhancement. Materials And Methods: Forty-seven consecutive (men = 25; average age = 66.8 ± 9 years) symptomatic patients studied by use of a multidetector row CT scanner were prospectively analyzed. CTA was performed before and after contrast and radiation doses were recorded; analysis of contrast plaque enhancement was performed. Patients underwent carotid endarterectomy en bloc; histologic sections were prepared and evaluated for fissured fibrous cap and microvessel attenuation. The Mann-Whitney test was performed to evaluate the differences between the 2 groups. A multiple logistic regression analysis was performed to assess the effect of fissured fibrous cap and microvessel attenuation on contrast plaque enhancement. Receiver operating characteristic curve and area under the curve were also calculated. Results: Twelve patients had fissured fibrous cap. In 92% (11/12) of fissured fibrous cap-positive plaques, we found contrast plaque enhancement, whereas in 69% (24/35) of the plaques without fissured fibrous cap contrast plaque enhancement was found. The Mann-Whitney test showed a statistically significant difference between the contrast enhancement in plaques with fissured fibrous cap (Hounsfield units = 22.6) and without fissured fibrous cap (Hounsfield units = 12.9) (P = .011). On the regression analysis, both fissured fibrous cap and neovascularization were associated with contrast plaque enhancement (P = .0366 and P = .0001). The receiver operating characteristic curve confirmed an association between fissured fibrous cap and contrast plaque enhancement with an area under the curve of 0.749 (P = .005). Conclusions: The presence of fissured fibrous cap is associated with contrast plaque enhancement. Histologic analysis showed that the presence of fissured fibrous cap is associated with a larger contrast plaque enhancement compared with the contrast plaque enhancement of plaques without fissured fibrous cap. abstract_id: PUBMED:8733963 Vascular MR contrast enhancement in cerebrovascular disease. Purpose: To determine the significance of vascular enhancement in stroke patients with and without permanent neurologic deficit. Methods: We prospectively studied two groups of patients with spin-echo MR imaging before and after injection of gadopentetate dimeglumine. In the patients in group 1 (12 women, 22 men; age range, 32 to 76 years), who had permanent neurologic deficit caused by recent ischemic brain infarction, we obtained 3 to 13 serial MR images during follow-up examination. Group 2 consisted of 26 patients (14 women, 12 men; age range, 54 to 81 years) with transient neurologic deficit caused by angiographically proved high-grade stenosis or occlusion of the internal carotid artery. Results: Vascular enhancement was present in 59% of patients in group 1 and in 65% of patients of group 2. In group 1, the frequency of vascular enhancement declined steadily over several weeks, but it was still present in single cases even after 3 months. Vascular enhancement correlated positively with the extent of brain infarction in group 1 and with the degree of carotid stenosis in group 2. Conclusion: Vascular enhancement as shown by MR imaging may herald ischemic brain infarction and could persist over several weeks in areas that show collateral flow after infarction has occurred. abstract_id: PUBMED:10082353 Lumen reduction measurements of the internal carotid artery before and after Levovist enhancement: reproducibility and agreement with angiography. Our aim was to assess reproducibility of three different lumen reduction measuring methods--North American Symptomatic Carotid Endarterectomy Trial, European Carotid Surgery Trial, and common carotid--using power Doppler and color Doppler sonography before and after Levovist enhancement. We included 20 symptomatic patients with mild or severe carotid disease. North American Symptomatic Carotid Endarterectomy Trial, European Carotid Surgery Trial, and common carotid measurements on longitudinal views and European Carotid Surgery Trial measurements on transverse views were performed. Examinations were repeated and the results compared to assess reproducibility of measurements. Correlation with angiography was obtained by calculating Pearson correlation coefficients. Reproducibility was significantly better (P &lt; 0.05) for European Carotid Surgery Trial and common carotid measurements (95% limits of agreement between -10% to 10% and -19% to 17%) as compared to North American Symptomatic Carotid Endarterectomy Trial measurements (95% limits of agreement between -11% to 21% and -21% to 23%). Variability of measurements after enhancement increased slightly (not significant) for both power and color Doppler sonography. Additionally, European Carotid Surgery Trial measurements, using nonenhanced power Doppler or color Doppler sonography, did not correlate significantly with angiography, whereas North American Symptomatic Carotid Endarterectomy Trial and common carotid measurements correlated well with angiography, particularly in power Doppler mode after enhancement (r = 0.88 and r = 0.82, respectively). We conclude that for lumen reduction measurements of the internal carotid artery with power and color Doppler sonography, the common carotid method is the only method that is reproducible and has good correlation with angiography, which slightly improves after Levovist enhancement. abstract_id: PUBMED:10208108 Diagnostic benefit of echocontrast enhancement for the insufficient transtemporal bone window. Echocontrast agents (ECA) are known to improve transcranial color-coded duplex (TCCD) imaging, but its diagnostic benefit in the routine clinical setting has not clearly been defined. The authors investigated the diagnostic benefit of ECA application in 54 patients with insufficient transtemporal bone window, consecutively referred to their ultrasound laboratory. According to the precontrast imaging quality, patients were assigned to three categories: A, no intracranial structures or vessel segments visible on B-mode imaging and TCCD (n = 5); and intracranial structures visible on B-mode imaging and vessel segments less than 5 mm in length (B, n = 21), or larger than 5 mm in length (C, n = 28) visible on TCCD. The effect of the echocontrast enhancement was assessed with respect to signal enhancement, imaging quality, and diagnostic confidence. In 49 out of 54 patients (91%), a significant improvement of the imaging quality was noted, enabling 43 (80%) neurovascular diagnoses of sufficient diagnostic confidence. The diagnostic ECA effect was strongly dependent on the precontrast imaging quality: upon echoenhancement, a satisfactory image quality was obtained in none of the patients of category A, as opposed to 16 (76%) and 27 (96%) patients of categories B and C, respectively. In summary, in 80% of our consecutive patient series with insufficient transtemporal bone window, application of ECA allowed for a conclusive TCCD study. Properties of the transtemporal precontrast scans are strongly predictive of the diagnostic benefit and should be taken into the decisive consideration. abstract_id: PUBMED:19182087 Arterial wall enhancement overlying carotid plaque on CT angiography correlates with symptoms in patients with high grade stenosis. Background And Purpose: The degree of internal carotid artery (ICA) stenosis is an established primary risk factor for embolic stroke. Recent publications suggest that inflammatory features may also play an important role. Our purpose was to correlate acute neurological symptoms with either carotid artery wall enhancement or plaque calcification, on axial CT angiographic source images (CTA-SI) of patients with severe (&gt;or=70%) ICA stenosis. Methods: 75 consecutive patients with &gt;or=70% ICA stenosis on CTA-SI were identified. Each case was classified as symptomatic (n=37) or asymptomatic (n=38), and as having either calcified or noncalcified plaque. The latter group was stratified into those with versus without arterial wall enhancement, measured in absolute and relative Hounsfield Units (HU). Results: Calcified plaque was present in 39% (15/38) of the symptomatic patients and in 62% (23/37) of the asymptomatic patients (P=0.065). Of the 37 patients without calcified plaque, carotid wall enhancement was observed in 83% (19/23) of the symptomatic, but only in 57% (8/14) of the asymptomatic patients (P=0.041). When the "calcified plaque" and "no carotid wall enhancement" groups were pooled, versus the "carotid wall enhancement" group, enhancement was more likely in symptomatic patients (OR 3.625, CI 95% 1.3229 to 9.93, P=0.01 Fisher Exact test). Conclusions: In patients with severe ICA stenosis, additional stratification of stroke risk may be possible based on the presence of carotid wall enhancement on CTA-SI. Patients with carotid wall enhancement are more likely to be symptomatic, compared to those with either calcified plaque or no enhancement. abstract_id: PUBMED:18776788 Carotid Doppler: low velocity as a sign of significant disease. Hemodynamically significant stenosis of the internal carotid artery (ICA) is usually diagnosed by elevated velocities in a region of luminal narrowing. Many other significant diagnoses can be made based upon lower-than-normal velocities. Analysis of the combinations of low ICA velocity, abnormal ICA waveform, and abnormal velocities and waveforms in ipsilateral or contralateral ICA, common carotid artery, and vertebral arteries can allow diagnosis of significant vascular or valvular disease from the heart to the brain. abstract_id: PUBMED:27824664 Is Carotid Plaque Contrast Enhancement on MRI Predictive for Cerebral or Cardiovascular Events? A Prospective Cohort Study. Objective: The aim of this study was to correlate carotid plaque contrast enhancement (CPCE) to onset of cerebral/cardiovascular events (CCVE) in patients with atherosclerotic carotid disease. Methods: The ethics committee approved this prospective study. Patients with carotid artery stenosis underwent magnetic resonance angiography before/after injection of 0.1 mmol/kg of gadobenate dimeglumine. Carotid plaque contrast enhancement was graded as follows: 0, no CPCE; 1, 1 single enhancement focus; 2, 2 or more foci. Results: Seventy-seven patients (71 ± 9 years) had a stenosis degree: 34 mild, 16 moderate, 27 severe at the right side, and 36, 15, and 25 at the left side. Carotid plaque contrast enhancement was 0 in 30 patients, 1 in 26, 2 in 11 at the right, and 37, 19, and 13 at the left. Forty-seven CCVE occurred after magnetic resonance imaging, correlated to both stenosis degree (P = 0.006) and CPCE (P = 0.032). Excluding surgery/stenting, the correlation held only for CPCE (P = 0.017). Of 49 patients showing CPCE, 5 (10%) reported CCVE; of 21 patients without CPCE, none reported CCVE (P = 0.129). Conclusions: The absence of CPCE seems to be a negative predictor for CCVE. Answer: Yes, contrast enhancement plays a significant role in carotid artery plaque classification. Studies have shown that the administration of contrast material can significantly change the classification of carotid artery plaques. For instance, one study found that after contrast material administration, a percentage of fatty plaques were reclassified as mixed, and some mixed plaques became calcified, although all calcified plaques remained in the same category. This indicates that contrast material can affect the attenuation values of plaques on CT scans, leading to changes in their classification (PUBMED:22555579). Moreover, contrast enhancement patterns in fibrocellular tissue within different kinds of plaques have been observed irrespective of the plaque type, suggesting that even plaques thought to be stable with low inflammatory activity (type 8 plaques) showed significant contrast enhancement. This challenges the assumption that these plaques are stable and suggests that contrast enhancement may be related to the inflammatory activity of the plaque (PUBMED:23910043). Additionally, gadolinium (Gd) enhancement of carotid plaques on MRI has been associated with vulnerable plaque phenotypes and is related to an inflammatory process. The study found that Gd enhancement was significantly associated with histological features of vulnerable plaques, such as neovascularization, macrophage presence, and loose fibrosis (PUBMED:22923447). Furthermore, a preliminary study evaluating the association between fissured fibrous caps and contrast plaque enhancement found that plaques with fissured fibrous caps showed larger contrast enhancement compared to plaques without such caps. This suggests that contrast enhancement on CTA can be associated with features indicative of plaque vulnerability (PUBMED:24157737). In summary, contrast enhancement is a valuable tool in the classification and assessment of carotid artery plaques, as it can reveal changes in plaque composition and characteristics that are associated with plaque vulnerability and the risk of cerebrovascular events.
Instruction: Sentinel node mapping in Marjolin's ulcers: is it feasible? Abstracts: abstract_id: PUBMED:25786077 Sentinel node mapping in Marjolin's ulcers: is it feasible? Introduction: This study evaluated the feasibility of sentinel node biopsy in Marjolin's ulcers using peritumoral injection of the radiotracer and blue dye. Materials And Methods: Ten patients with Marjolin's ulcers (5 male and 5 females) ranging in age from 39-65 years, and who were candidates for surgical removal of their tumors, were included in this study. The patients received 2 mCi technetium 99m (99mTc) antimony sulfide colloid in 2 divided doses subcutaneously in the peritumoral region. Immediately after anesthesia induction, 1 cc patent blue V in 2 divided doses was also injected in the same fashion as the radiotracer. Lymphatic mapping and sentinel node biopsy was performed using a gamma probe. Any hot and/or blue lymph nodes were harvested as a sentinel node. Primary lesions were then excised and wounds grafted with a splitthickness skin graft. Results: A sentinel node could be identified on the lymphoscintigraphy images in only 2 patients. Sentinel node mapping was successful during surgery in these 2 patients. None of the harvested sentinel nodes were blue. Conclusion: Lymphatic mapping and sentinel node biopsy do not seem feasible in Marjolin's ulcers located in areas of extensive burn scarring due to a low detection rate. Larger studies are needed to validate the results of this study. abstract_id: PUBMED:15273464 Sentinel lymph node biopsy identifies occult nodal metastases in patients with Marjolin's ulcer. Since Marjolin's description, the management of burn scar carcinoma has remained controversial. A multitude of options and recommendations exist for the management of both primary lesions and regional nodal metastasis. This work reviews six cases of Marjolin's ulcer staged using sentinel lymph node biopsy. All primary lesions were confirmed to be squamous cell carcinoma and occurred a median of 29.5 years after burn. No patient had clinically detectable lymphadenopathy. In all cases, preoperative lymphoscintigraphy successfully identified a single draining regional nodal basin. Subsequent intraoperative lymphatic mapping/sentinel lymph node (SLN) biopsy was successful in five of six cases (83%). A successful intraoperative lymphatic mapping/SLN biopsy was defined as the identification of blue (uptake of isosulfan blue dye) or "hot" (uptake of radiolabeled sulfur colloid as measured with a handheld gamma counter) node(s) and subsequent excision. Four of five SLN biopsies identified previously occult nodal metastasis. SLN biopsy represents a minimally invasive and accurate staging procedure for Marjolin's ulcer. abstract_id: PUBMED:26622492 Clinical characteristics and therapeutic analysis of 51 patients with Marjolin's ulcers. Marjolin's ulcers, which are epidermoid carcinomas arising on non-healing scar tissue, may be of various pathological types, including squamous cell carcinoma. The pathogenesis of squamous cell carcinoma arising in an ulcer differs from that of the primary cutaneous squamous cell carcinoma. This squamous cell carcinoma is aggressive in nature, and has a high rate of metastasis. Between January 2001 and September 2013, 51 patients with Marjolin's ulcers were admitted to the Departments of Plastic Surgery of the Affiliated Foshan Hospital and the Second Affiliated Hospital of Sun Yat-sen University. The ulcers included 43 cases of squamous cell carcinoma, six of melanoma, one of basal cell carcinoma and one of epithelioid sarcoma. The clinical data of these patients were retrospectively analyzed. Patients were followed until mortality. Among the patients with squamous cell carcinoma, 30.23% exhibited sentinel lymph node metastasis and 11.63% had distant metastasis. Among the patients with melanoma, 66.67% had sentinel lymph node metastasis and 33.33% had distant metastasis. Sentinel lymph node metastasis was successfully detected in 11 patients with Marjolin's ulcer using 18F-fluorodeoxyglucose positron emission tomography-computed tomography and B-mode ultrasound guided biopsy. Squamous cell carcinoma was often treated by extended resection and skin grafting or skin flap repair. Patients with deep, aggressive squamous cell carcinoma of an extremity and sentinel lymph node metastasis underwent amputation and lymph node dissection. This treatment was also used for melanoma type Marjolin's ulcers. abstract_id: PUBMED:28852347 Marjolin's Ulcer of the Tibia With Pelvic Lymph Node Metastasis. Marjolin's ulcer, a form of squamous cell carcinoma, is a rare complication of a chronic wound infection. To date, there have not been any detailed reports of lymphatic spread of Marjolin's ulcer from an extremity. This report describes a 44-year old male with a history of an open tibia fracture at age 15, who presented with chronic osteomyelitis, increasing bloody wound drainage, and an enlarging vascular exophytic masses in the region of drainage. Biopsy of the mass showed squamous cell carcinoma. Initial staging with computed tomography (CT) was negative for metastatic disease. Six months after undergoing a below-knee amputation (BKA), the patient developed an enlarged lymph node in the left inguinal area. His re-staging CT and biopsy of inguinal node confirmed the presence of metastatic squamous cell carcinoma. The patient was subsequently treated with inguinal node dissection and adjuvant radiation therapy. At 2.5 years follow-up, the patient remains disease-free. Our case illustrates the importance suspecting squamous cell carcinoma in patients with chronic infections and diligent follow up for patients with Marjolin's ulcers. abstract_id: PUBMED:25325060 Marjolin's ulcers in the post-burned lesions and scars. Marjolin's ulcer (MU) represents malignant degeneration that typically ensues over a period of time in the post-burned lesions and scars or any other chronic wound. This review highlights various facets of the presentation and management of MUs that originate from post-burned lesions. The incidence of MUs in such lesions is reported to be 0.77%-2%. This malignancy characteristically develops in the areas of full thickness skin burns that had been allowed for weeks to months to heal spontaneously by secondary intention, or burn wounds which never healed completely over years and the unstable post-burned scars. In the majority of cases, the MU is a squamous cell carcinoma (SCC). The MUs contribute to an overall 2% of all SCCs and 0.03% of all basal cell carcinomas of the skin. Clinically MUs present in two major morphologic forms. The commoner form is the flat, indurated, ulcerative variety while the less common form is the exophytic papillary variety. Lower limbs represent the most frequently affected body parts. Surgical resection of the primary tumor with 2-4 cm horizontal clearance margin, nodal clearance and radiotherapy constitute the cornerstones of effective oncologic management. Despite best efforts, the overall mortality is reported to be 21%. abstract_id: PUBMED:6850540 Marjolin's ulcer. The LSU experience. Skin cancers originating in areas of chronic injury (Marjolin's ulcers) are thought to behave in a more aggressive fashion than those due to other causes. The initial surgical treatment, especially the management of the regional lymph nodes, remains controversial. This review of 37 patients with Marjolin's ulcer evaluated the effectiveness of local surgical treatment at our center. Three of 18 patients treated with amputation developed recurrences and died of metastatic disease. Five of 16 patients treated by wide excision developed recurrences, two dying with metastatic disease, the remaining three apparently cured by additional surgery. Only six (17.6%) of the patients treated by "local" modalities, i.e., wide excision or amputation, developed subsequent regional node metastases. Recurrences following local excision tend to be local and can usually be managed by re-excision or amputation. It can be inferred from this study that prophylactic node dissections are not required in most patients with Marjolin's ulcers. abstract_id: PUBMED:16967681 Marjolin's ulcer arising from a laparostomy scar. Marjolin's ulcer is the malignant transformation of a scar, usually as a squamous cell carcinoma. An uncommon presentation form is from a laparostomy scar. A 49-year-old patient that had a laparostomy during the treatment of a necrohemorrhagic pancreatitis in 1987 complained 13 years later of a 20-cm ulcer on the laparostomy scar. A resection of the abdominal wall including the ulcer and a segmental transverse colectomy were performed because of infiltration by an invasive squamous cell carcinoma. Ten months later, axillary lymphadenectomy was performed because of lymph node metastasis. Currently, the patient is free of disease. Lymph node infiltration is frequent in squamous cell carcinoma on Marjolin's ulcer and survival is not good. Prophylaxis of this disease includes meticulous care of wounds, with early skin grafts when required and treatment of infections. abstract_id: PUBMED:32805009 Current Trends in Management of Marjolin's Ulcer: A Systematic Review. Marjolin's ulcer (MU) is a rare, aggressive entity with frequent delay in diagnosis for a variety of regions. Although well described and classically taught in medical school, aspects of its treatment remain ill-defined and controversial. A systematic review was performed according to PRISMA guidelines to identify studies discussing patients who underwent surgical treatment of MU. A total of 31 papers, reporting on 1016 patients, were included. Burns were the most common etiology of malignant degeneration (68%), followed by trauma. The lower extremity was most affected (51%) and Squamous Cell Carcinoma (SCC) was found in 94% of cases, with the majority being well differentiated. Basal cell carcinoma and melenoma composed a minority of cases. Melanoma occurred more frequently in previously skin grafted wounds and had a higher rate of metastases than SCC. Most patients did not have associated regional or distant metastases present at diagnosis. Wide local excision (71%) was performed in most cases, unless amputation was indicated for severe disease or bone involvement. Lymphadenectomy and sentinel lymph node biopsy were variably reported, with conflicting evidence on the efficacy. Lymphadenectomy was most commonly indicated for known lymph node involvement. In cases of metastatic disease chemotherapy and radiation were used in conjunction with surgical treatment. Despite numerous articles on this topic, controversy remains in the management of MU. Early diagnosis of suspicious chronic wounds and prompt surgical intervention remains imperative to its treatment. abstract_id: PUBMED:20661151 Current concepts in the management of Marjolin's ulcers: outcomes from a standardized treatment protocol in 16 cases. Marjolin's ulcer is a malignant lesion observed in chronic wounds and in areas where the integrity of the skin is compromised because of any one of several reasons. The aim of this study was to define etiology, topography, and histopathology for Marjolin's ulcer and its surgical management. Sixteen cases were diagnosed and treated as Marjolin's ulcers. The mean age was 57.1 years (range, 32-85 years) and 15 of the patients (93.75%) had history of ulcer of 30 years or more. In 10 cases (62.5%), Marjolin's ulcer occurred after a flame burn and in 6 cases (37.5%) after a scalding burn injury. In six cases (37.5%), there was history of chronic trauma due to contact with the clothing. Primary lesions were at the leg, gluteal region, thigh, scalp, trunk, and hand in four (25%), three (18.7%), two (12.5%), two (12.5%), three (18.7%), and two (12.5%) cases, respectively. Amputation was carried out to treat two cases, and repair of the defects with partial-thickness skin grafting was performed in 14 cases after tumor resection on the skin. Superficial inguinal lymph node dissections were performed in four cases with tumor in the thigh and a positive inguinal lymph node. Squamous cell carcinoma was diagnosed in 14 cases (87.5%) and basal cell carcinoma in two cases (12.5%) postoperatively. The authors conclude that diagnosis and surgical planning based on the recent literature must be carried out even more intensively to improve the prognosis of Marjolin's ulcer. abstract_id: PUBMED:9601323 Marjolin's ulcer and chronic burn scarring. Marjolin's ulcer is a term used to describe squamous cell carcinomas which develop in chronic wounds. These carcinomas may also develop at the site of long-standing irritation, such as unstable burn scars. Development times for burn scar carcinomas of more than 30 years have been noted. This evaluation describes the treatment of 10 patients with burn scar carcinomas who have been treated using wide excision and closure of the defect with skin grafts or flaps, plus regional lymph node dissection if required. Results indicate a mean carcinoma development time of 26 years. Local recurrence occurred in only one patient. Answer: Sentinel node mapping in Marjolin's ulcers has been explored, but its feasibility remains uncertain and may depend on various factors. In one study, sentinel node mapping was attempted in ten patients with Marjolin's ulcers using peritumoral injection of a radiotracer and blue dye. However, a sentinel node could only be identified on lymphoscintigraphy images in two patients, and none of the harvested sentinel nodes were blue. The authors concluded that lymphatic mapping and sentinel node biopsy do not seem feasible in Marjolin's ulcers located in areas of extensive burn scarring due to a low detection rate, suggesting that larger studies are needed to validate the results (PUBMED:25786077). In contrast, another study reviewed six cases of Marjolin's ulcer staged using sentinel lymph node biopsy. Preoperative lymphoscintigraphy successfully identified a single draining regional nodal basin in all cases, and intraoperative lymphatic mapping/sentinel lymph node biopsy was successful in five of six cases (83%). Four of the five successful sentinel lymph node biopsies identified previously occult nodal metastasis, indicating that sentinel lymph node biopsy can be a minimally invasive and accurate staging procedure for Marjolin's ulcer (PUBMED:15273464). Another study involving 51 patients with Marjolin's ulcers, including various pathological types, found that sentinel lymph node metastasis was successfully detected in 11 patients using 18F-fluorodeoxyglucose positron emission tomography-computed tomography and B-mode ultrasound-guided biopsy (PUBMED:26622492). These findings suggest that while sentinel node mapping in Marjolin's ulcers may be challenging, particularly in areas with extensive burn scarring, it can be successful in some cases and may help identify occult nodal metastases. However, the variability in success rates and the need for larger studies to validate the feasibility of this technique are noted in the literature.
Instruction: Are ATP sensitive potassium channels involved in the pronounced antiarrhythmic effects of preconditioning? Abstracts: abstract_id: PUBMED:8324798 Are ATP sensitive potassium channels involved in the pronounced antiarrhythmic effects of preconditioning? Objective: The aim was to determine whether the antiarrhythmic effects of preconditioning are modified by blockade of K+ATP channels with glibenclamide in a model (anaesthetised dogs) in which this procedure has previously been shown to prevent the effects of preconditioning in reducing myocardial infarct size. Methods: 10 mongrel dogs were preconditioned by two 5 min occlusions of the left anterior descending coronary artery, separated by a 20 min reperfusion period, and then subjected, 20 min later, to a prolonged (25 min) occlusion and to subsequent reperfusion. In another 10 dogs glibenclamide (300 micrograms.kg-1) was given by intravenous injection both after the first preconditioning stimulus and before the prolonged occlusion. Control dogs (25) were subjected to a 25 min occlusion followed by reperfusion; five of these dogs also received glibenclamide. Results: Preconditioning reduced the severity of ventricular arrhythmias, epicardial ST segment elevation, and the degree of inhomogeneity of conduction. The antiarrhythmic effect of preconditioning was attenuated by glibenclamide (twice as many ventricular premature beats and more episodes of ventricular tachycardia) but there was no modification of preconditioning induced reduction in ventricular fibrillation either during ischaemia or during reperfusion, or on survival (0% in controls; 50% in preconditioned dogs with or without glibenclamide). Glibenclamide did, however, prevent the effects of preconditioning on the inhomogeneity of conduction and, less markedly, on epicardial ST segment elevation. Conclusions: In a similar model to that in which it has previously been shown that glibenclamide prevents the effect of preconditioning in reducing myocardial infarct size (suggesting involvement of K+ATP channels), the most pronounced antiarrhythmic effects of preconditioning (reduction in ventricular fibrillation; increase in survival) were not modified by glibenclamide. This, and other evidence, suggests that the mechanisms of the protective effect of preconditioning in reducing the severity of arrhythmias and on infarct size are not the same. abstract_id: PUBMED:15221348 Antiarrhythmic effect of ischemic preconditioning during low-flow ischemia. The role of bradykinin and sarcolemmal versus mitochondrial ATP-sensitive K(+) channels. Short episodes of ischemia (ischemic preconditioning) protect the heart against ventricular arrhythmias during zero-flow ischemia and reperfusion. However, in clinics, many episodes of ischemia present a residual flow (low-flow ischemia). Here we examined whether ischemic preconditioning protects against ventricular arrhythmias during and after a low-flow ischemia and, if so, by what mechanism(s). Isolated rat hearts were subjected to 60 min of low-flow ischemia (12% residual coronary flow) followed by 60 min of reperfusion. Ischemic preconditioning was induced by two cycles of 5 min of zero-flow ischemia followed by 5 and 15 min of reperfusion, respectively. Arrhythmias were evaluated as numbers of ventricular premature beats (VPBs) as well as incidences of ventricular tachycardia (VT) and ventricular fibrillation (VF) during low-flow ischemia and reperfusion. Ischemic preconditioning significantly reduced the number of VPBs and the incidence of VT and of VF during low-flow ischemia. This antiarrhythmic effect of preconditioning was abolished by HOE 140 (100 nM), a bradykinin B(2) receptor blocker. Similar to preconditioning, exogenous bradykinin (10 nM) reduced the number of VPBs and the incidence of VT and of VF during low-flow ischemia. Furthermore, the antiarrhythmic effects of both ischemic preconditioning and bradykinin were abolished by glibenclamide (1 microM), a non-specific blocker of ATP-sensitive K(+) (K(ATP)) channels. Finally, the antiarrhythmic effects of both ischemic preconditioning and bradykinin were abolished by HMR 1098 (10 microM), a sarcolemmal K(ATP) channel blocker but not by 5-hydroxydecanoate (100 microM), a mitochondrial K(ATP) channel blocker. In conclusion, ischemic preconditioning protects against ventricular arrhythmias induced by low-flow ischemia, and this protection involves activation of bradykinin B(2) receptors and subsequent opening of sarcolemmal but not of mitochondrial K(ATP) channels. abstract_id: PUBMED:28793823 Role of ATP-Sensitive Potassium Channels in Remote Ischemic Preconditioning Induced Tissue Protection. Remote ischemic preconditioning (RIPC) is an innovative treatment strategy that alleviates ischemia-reperfusion injury, whereby short episodes of regional ischemia and reperfusion delivered to remote organs including hind limb, kidney and intestine, and so on provide protection to the heart. The RIPC is known to reduce infarct size, serum levels of cardiac enzymes, and myocardial dysfunction in various animal species as well as in patients. There have been a large number of studies suggesting that the ATP-sensitive potassium channels (KATP channel) play a significant role as a mediator or end effector in RIPC. The present review discusses the role of KATP channels and possible mechanisms in RIPC-induced cardioprotection. abstract_id: PUBMED:23789436 Receptor and signalling mechanisms of antiarrhythmic effects of ischemic pre-conditioning It has been established that ischemic preconditioning (IP) exerts significant antiarrhythmic effects, as revealed in experiments both in vivo and in vitro. Consequently, processes arising within the myocardium play a key role in adaptive tolerance to ischemia/reperfusion. Preconditioning enhances cardiac electrical stability both in animals and humans. The antiarrhythmic effect of preconditioning is transient, with enhanced tolerance to ischemia-reperfusion triggered arrhythmogenesis dissipating 2-3 after the IP stimulus. The basis of the antiarrhythmic and cardioprotective effects of IP may differ. Preconditioning improves conduction of the cardiac electrical impulse, thereby preventing occurrence of re-entrant arrhythmias. NO-synthase and peroxynitrite play an important role in evolution of the antiarrhythmic effects of IP. Furthermore, intracellular Ca2+ may be a trigger of improved cardiac electrical stability after IP. It has been established that G(i/o)-protein coupled receptors are not involved in antiarrhythmic effects of IP, whereas bradykinin B2 and alpha1 adrenergic receptor activities are involved in IP-dependent improvements in cardiac electrical stability. Adenosine receptors contribute only partially to these effects. In terms of signalling mechanisms, protein kinase C appears essential to the antiarrhythmic effects of IP, whereas PI3-kinase and cyclooxygenase do not appear to be significantly involved. It has also been established that cardiac mast cells are involved in IP effects. Some data indicate that increased cardiac electrical stability with preconditioning depends upon mitoK(ATP) channel opening. Other data provide evidence that antiarrhythmic effects of preconditioning depends upon sarcK(ATP) channel opening. Some data indicate that an increase in electrical stability of heart after preconditioning depends upon mitoK(ATP) channel opening. Other data are evidence that antiarrhythmic effect of preconditioning depends upon sarCK(ATP) channel opening. Further work is needed to fully delineate the mechanistic basis of antiarrhythmic effects of IP. abstract_id: PUBMED:15907927 K(ATP) channels and preconditioning: a re-examination of the role of mitochondrial K(ATP) channels and an overview of alternative mechanisms. Preconditioning by one or several brief periods of ischemia activates an endogenous cardioprotective program that increases the resistance of cardiomyocytes to injury by subsequent prolonged periods of ischemia. Ischemic preconditioning can be mimicked by K(+) channel openers and various other substances, a phenomenon termed pharmacological preconditioning. Initially, ischemic preconditioning has been ascribed to the opening of ATP-sensitive K(+) channels at the surface membrane of cardiomyocytes. Since 1997, numerous publications have implicated mitochondrial ATP-sensitive K(+) channels (mK(ATP)) as a major trigger and/or end effector of preconditioning. Diazoxide has been suggested to be a specific activator of mK(ATP) channels, and the substituted fatty acid 5-hydroxydecanoate (5-HD) has been suggested to be a specific inhibitor. However, diazoxide and 5-HD have multiple K(+)-channel-independent actions, and the experimental evidence for an obligatory role of mK(ATP) channels in preconditioning, or even their existence, remains inconclusive. In contrast, surface K(ATP) channels have been well characterized, and we summarize the evidence suggesting that they make a major contribution to preconditioning. We also discuss a number of other factors involved in preconditioning: (1) generation of reactive oxygen species, (2) impairment of fatty acid metabolism, and (3) opening of the mitochondrial permeability transition pore. In the light of these emerging concepts, we critically re-examine the evidence for and against a role of mK(ATP) channels in ischemic and pharmacological preconditioning. abstract_id: PUBMED:12429584 The role of mitochondrial K(ATP) channels in antiarrhythmic effects of ischaemic preconditioning in dogs. 1. In the canine a single brief (5 min) coronary artery occlusion protects the myocardium against the severe ventricular arrhythmias and reduces the ischaemic changes that result from a subsequent, more prolonged (25 min) occlusion. The main purpose of the present study was to examine whether mitochondrial K(ATP) channels are involved in this protection. 2. In chloralose-urethane anaesthetized dogs, preconditioning (PC) was induced by a single 5 min period occlusion of the left anterior descending (LAD) coronary artery, 20 min prior to a 25 min occlusion of the same artery. In some of these PC dogs 5-hydroxydecanoate (5-HD; 150 micro g kg(-1) min(-1) by intracoronary infusion) was given over a period of 30 min either before, or after PC. In other dogs the mitochondrial K(ATP) channel opener diazoxide (1 mg kg(-1); i.c.) was given, either alone or in the presence of 5-HD. Control dogs (infused with saline) were simply subjected to a 25 min occlusion and reperfusion. 3. Compared to controls, both PC and diazoxide significantly reduced the number of ventricular premature beats (VPBs; 295+/-67 to 89+/-28 and 19+/-11, respectively; P&lt;0.05), the number of episodes of ventricular tachycardia (VT; 8.3+/-4.2 to 1.6+/-0.9 and 0.2+/-0.1; P&lt;0.05) and the incidences of VT (100 to 43 and 33%; P&lt;0.05) and ventricular fibrilation (VF; 60 to 0 and 17%; P&lt;0.05) during the 25 min occlusion of the LAD. Further, 43% of the PC dogs and 58% of the diazoxide treated dogs survived the combined ischaemia-reperfusion insult (cp. 0% in the controls; P&lt;0.05). The protection afforded by PC and diazoxide was abolished by 5-HD, especially when it was given prior to the PC occlusion. In the presence of 5-HD, three out of 10 dogs fibrillated during the PC occlusion and another three dogs died following reperfusion. Furthermore, there were no survivors in this group from the prolonged ischaemia/reperfusion insult. 5-HD given after PC only attenuated the antiarrhythmic protection. 4. Opening of mitoK(ATP) channels prior to ischaemia by preconditioning and diazoxide protects the myocardium against ischaemia and reperfusion-induced arrhythmias. This protection is abolished if the opening of these channels is prevented by the prior administration of 5-HD but only attenuated if 5-HD is given after preconditioning. The results indicate that opening of mitoK(ATP) channels prior to ischaemia is mandatory for protection against ischaemia and reperfusion-induced arrhythmias. abstract_id: PUBMED:21179767 Mechanisms of cardioprotection induced by preconditioning after activation of MITOK(ATP) channel Aim: To determine mechanisms of cardioprotection induced by combination angiotensin-converting enzyme inhibitors (ACEI) with subthreshold preconditioning after activation of mitochondrial ATP-sensitive potassium (mitoK(ATP)) channel. Methods: The Langendorff model of isolated rat heart was used. The time of the onset of uncoupling, the activities of sarcolemmal Na+/K+ -ATPase and Ca2+/Mg2+ -ATPase were measured. Results: The subthreshold preconditioning (2 min of ischemia and 10 min reperfusion) or captopril (an ACEI) alone did not protect hearts against injury of sustained ischemia. However combination captopril with subthreshold preconditioning increased LVDP. Pretreatment hearts with mitoK(ATP) channel inhibitor 5-HD abolished the protection effect. Combination captopril with subthreshold preconditioning delayed the onset of uncoupling, and enhanced the activities of sarcolemmal Na+/K+ ATPase and Ca2+/Mg2+ -ATPase in ischemia/reperfusion hearts. But 5-HD cancelled these cardioprotection effects. Conclusion: Combination ACEI with subthreshold preconditioning delays the onset of cellular uncoupling induced by acute ischemia, and promotes the stability of sarcolemmal ion channels, in which activation of the mitoK(ATP) channels may be involved. abstract_id: PUBMED:10471006 The role of ATP-sensitive potassium channels in the mechanism of ischemic preconditioning. We clarified the role of K(ATP) channels in the mechanism of ischemic preconditioning by using K(ATP) channel opener, nicorandil, and K(ATP) channel inhibitor, glibenclamide. Forty anesthetized dogs were divided into five groups: (a) control (C), (b) ischemic preconditioning (PC), (c) intravenous infusion of nicorandil before PC (Ni), (d) glibenclamide pretreated with PC (Gl + PC), and (e) glibenclamide pretreated with Ni (Gl + Ni). All groups were followed by 60-min ischemia and 60-min reperfusion and analyzed by the biochemical procedures. At the end of 60-min reperfusion, percentage of segment shortening in C indicated paradoxic bulging. This value was significantly recovered in PC and Ni, but it was still negative in Gl + PC and Gl + Ni. Ca2+ -adenosine triphosphatase (ATPase) activity of sarcoplasmic reticulum (SR) was significantly decreased in C. In PC and Ni, this activity was significantly maintained; however, in Gl + PC and Gl + Ni, it was similar to that in C. State III respiration of mitochondria showed similarity to the changes in SR. These results indicated that the K(ATP) channel opener enhanced the effects of ischemic preconditioning, and its blockade abolished these phenomena. We conclude that the ATP-sensitive potassium channel may play one of key roles in the mechanisms of ischemic preconditioning in the dog model. abstract_id: PUBMED:16728880 The antiarrhythmic effect and clinical consequences of ischemic preconditioning. Potentially hazardous short ischemic episodes increase the tolerance of myocardium to ischemia paradoxically. This condition decreases the infarct area markedly caused by a longer duration of coronary occlusion. This phenomenon is known as 'ischemic preconditioning' and its powerful cardioprotective effect has been shown in experimental and clinical studies. Ischemic preconditioning decreases cardiac mortality markedly by preventing the development of left ventricular dysfunction and ventricular and supraventricular arrhythmias after acute myocardial infarction. Ischemia-induced opening of ATP-sensitive potassium channels and synthesis of stress proteins via activation of adenosine, bradykinin and prostaglandin receptors seem to be the possible mechanisms. By understanding the underlying mechanisms of ischemic preconditioning, it may be possible to develop new pharmacologic agents that cause ischemic preconditioning with antiischemic and antiarrhythmic properties without causing myocardial ischemia. abstract_id: PUBMED:11334869 Late preconditioning in rat retina: involvement of adenosine and ATP-sensitive K(+) channel. To determine whether stimulation of adenosine receptors and opening of ATP-sensitive K(+) channels were involved in the protective effect of late preconditioning in the rat retina, rats were subjected to 60 min of retinal ischemia, and ischemic preconditioning was achieved by applying 5 min of ischemia 24 h before 60 min of ischemia. In non-preconditioned rats, cell loss in the ganglion cell layer and thinning of the inner plexiform and inner nuclear layer were observed 7 days after 60 min of ischemia. Ischemic preconditioning completely prevented the retinal tissue damage and 8-phenyltheophylline or 5-hydroxydecanoate reduced the protective effect of ischemic preconditioning. Therefore, stimulation of adenosine receptors and opening of ATP-sensitive K(+) channels might be involved in the mechanism of histological protection by late preconditioning in the retina. Answer: Yes, ATP-sensitive potassium channels (K_ATP channels) are involved in the pronounced antiarrhythmic effects of preconditioning. Several studies have demonstrated that the activation of these channels plays a significant role in the cardioprotective effects of ischemic preconditioning, which includes reducing the severity of arrhythmias during ischemic events. For instance, a study involving anaesthetized dogs showed that the antiarrhythmic effect of preconditioning was attenuated by glibenclamide, a K_ATP channel blocker, suggesting the involvement of these channels in the antiarrhythmic effects of preconditioning (PUBMED:8324798). Another study in isolated rat hearts found that ischemic preconditioning's protection against ventricular arrhythmias during low-flow ischemia involved the activation of bradykinin B2 receptors and subsequent opening of sarcolemmal K_ATP channels, but not mitochondrial K_ATP channels (PUBMED:15221348). Furthermore, remote ischemic preconditioning (RIPC), which involves delivering short episodes of ischemia and reperfusion to remote organs, has been shown to reduce myocardial dysfunction and infarct size, with ATP-sensitive potassium channels playing a significant role as mediators or end effectors in this process (PUBMED:28793823). Additional research has indicated that the antiarrhythmic effects of ischemic preconditioning may depend on the opening of mitochondrial K_ATP channels (mitoK_ATP) or sarcolemmal K_ATP channels (sarcK_ATP), although further work is needed to fully delineate the mechanistic basis of these effects (PUBMED:23789436). Moreover, the role of mitochondrial K_ATP channels in the antiarrhythmic effects of ischemic preconditioning has been supported by studies showing that the opening of these channels prior to ischemia by preconditioning and diazoxide protects the myocardium against ischemia and reperfusion-induced arrhythmias, and this protection is abolished by 5-hydroxydecanoate (5-HD), a mitochondrial K_ATP channel blocker (PUBMED:12429584). In summary, the evidence strongly suggests that ATP-sensitive potassium channels are involved in the antiarrhythmic effects of preconditioning, with both sarcolemmal and mitochondrial K_ATP channels contributing to the cardioprotective mechanisms.
Instruction: Should stable UNOS Status 2 patients be transplanted? Abstracts: abstract_id: PUBMED:15701434 Should stable UNOS Status 2 patients be transplanted? Background: Improved outcomes with contemporary medical therapy in patients with advanced heart failure brings into question the survival advantage of transplantation for patients in stable United Network for Organ Sharing (UNOS) Status 2. Methods: Between January 1999 and June 2001, a total of 7,539 adult patients were listed for heart transplantation. Of those, 4,255 (56.4%) patients were listed as UNOS Status 2. Using a competing risk method, we computed probabilities of events while on the waiting list. Additionally, we used a time-dependent proportional hazards model to determine predictors of death before and after transplantation. Results: Demographics included age &gt;60 (72%), female sex (23%), ischemic causes for transplantation (49%), white race (85%), and median time on the waiting list (544 days). Laboratory and hemodynamic values included mean serum albumin of 3.9 g/dl, serum creatinine of 1.4 mg/dl, mean pulmonary artery pressure of 28 mm Hg, mean pulmonary capillary wedge pressure of 19 mm Hg, and mean cardiac output of 4.5 liter/min. Final outcomes on the waiting list for patients initially listed as UNOS Status 2 were transplantation (48%), removal from the list (11.5%), death (11.4%), and continued listing (29%). At 30 months after transplantation, survival was 81% for patients undergoing transplantation as Status 1A, 77% as Status 1B, and 83% as Status 2, and showed no difference among groups. At 365 days, survival analysis showed no difference for patients listed and undergoing transplantation as UNOS Status 2 compared with those still waiting as Status 2. Conclusion: In the current era of advances in medical and surgical therapies for heart failure, we found no survival benefit of cardiac transplantation at 1 year for patients initially listed as UNOS Status 2. abstract_id: PUBMED:34786769 Impact of UNOS allocation policy changes on utilization and outcomes of patients bridged to heart transplant with intra-aortic balloon pump. Objective: Intra-aortic balloon pump (IABP) support may improve the hemodynamic profiles of patients in cardiogenic shock and bridge patients to heart transplant. In 2018, the United Network for Organ Sharing (UNOS) introduced new heart allocation criteria that increased the waitlist status of patients with IABPs to Status 2. This study assesses the impact of this change on IABP use and outcomes of patients with IABPs. Methods: We queried the UNOS database for first adult heart transplant candidates with IABPs listed or transplanted before and after the UNOS policy changes (October 18, 2016-October 17, 2018, or October 18, 2018-September 4, 2020). We compared post-transplant survival and waitlist outcomes using Kaplan-Meier and Fine-Gray analyses. Results: Two thousand three hundred fifty-eight patients met inclusion criteria. Utilization of IABPs for hemodynamic support increased by 338% in the two years after the policy change. Patients with IABPs listed after the policy change were more likely to receive a transplant and were transplanted more quickly (p &lt; .001). Posttransplant survival was comparable before and after the policy change (p = .056), but non-transplanted patients were more likely to be delisted post-policy change (p &lt; .001). Conclusion: The UNOS allocation criteria have benefited patients bridged with an IABP, given the higher transplant rate and shorter time to transplant. abstract_id: PUBMED:16893757 Should UNOS Status 2 patients undergo transplantation? Background: With recent improvements in medical and device therapy, the benefit of cardiac transplantation for UNOS Status 2 patients has been questioned. No randomized trial has been performed to compare transplantation versus contemporary medical therapy. Methods: Between January 1996 and December 2003, 203 patients were listed at our institution for heart transplantation as UNOS Status 2. We performed a retrospective review to determine outcomes in these patients. Results: Demographics of this cohort revealed a mean age of 52 years, female sex in 28%, and ischemic etiology in 47%. Eighty-one patients (40%) had an implantable cardiac defibrillator. A total of 64 patients (32%) had to be upgraded in their UNOS status, with 9 requiring a left ventricular assist device. Of the entire group, 95 (47%) underwent transplantation at a mean time of 303 days, 45 (22%) died while waiting at a mean time of 397 days, and 24 (12%) were removed from the waiting list due to deterioration in medical condition such that transplantation was no longer an option. The remaining patients continue to wait or have been removed from consideration due to improved condition. Survival at 1- and 3-years postlisting was 94% and 87% for patients who received transplants compared to 81% and 57% for patients who did not receive transplants (P &lt; .01). Conclusion: A significant number of patients listed as Status 2 are upgraded in UNOS status or die while on the waiting list. Early and midterm survival is significantly better with transplantation. Identification of variables associated with deterioration may allow for better risk stratification in the future. At this point, transplantation offers the best outcome. abstract_id: PUBMED:30885883 Karnofsky Performance Status Following Liver Transplantation in Patients With Multiple Organ Failures and Probable Acute-on-Chronic Liver Failure. Background & Aims: Little is known about outcomes of patients who underwent liver transplantation for acute on chronic liver failure (ACLF) and multiple organ failures. We compared Karnofsky Performance Status (KPS) before and after liver transplantation among patients with different numbers of organ failures and probable ACLF. Methods: We performed a retrospective cohort study of adults who underwent liver transplantation within 30 days of listing with the United Network for Organ Sharing (UNOS) network from January 1, 2006, through September 30, 2016. We determined the prevalence of organ failures using a modified version of the Chronic Liver Failure-Sequential Organ Failure Assessment scale and collected KPS scores at the time of transplantation and at intervals of 3 to 12 months after liver transplantation. Multivariate analyses were performed to adjust for confounders including UNOS region. Results: At the time of liver transplantation, 2838 patients had no organ failure, 2944 had 1 to 2 organ failures, and 1342 patients had 3 or more organ failures. KPS scores following liver transplantation improved significantly in all groups; scores ranged from 81 in patients with no organ failure to 72 in patients with 5 to 6 organ failures. Excellent performance status (KPS score, ≥80) by 1 year after transplantation was achieved by 60% of patients with 5 to 6 organ failures, 64% to 66% of patients with 3 to 4 organ failures, and 70% to 71% of patients with 1 to 2 organ failures, compared with 72.5% of patients without organ failure. Patients with 1 to 4 organ failure were more likely to achieve KPS scores of 80 or more than patients without organ failure, after we adjusted for other covariates and UNOS region. In addition, black patients were less likely, and patients with alcoholic cirrhosis were more likely, to have KPS scores of 80 or more after liver transplantation. Conclusions: In a retrospective cohort study of patients with probable ACLF who underwent liver transplantation within 30 days of listing with the UNOS network, 60% to 66% of patients with 3 or more organ failures achieved excellent performance 3 to 12 months later. abstract_id: PUBMED:32760368 Carbapenemase-Producing Klebsiella pneumoniae From Transplanted Patients in Brazil: Phylogeny, Resistome, Virulome and Mobile Genetic Elements Harboring blaKPC-2 or blaNDM-1. Objectives: Carbapenemase-producing Klebsiella pneumoniae (CP-Kp) is a major cause of infections in transplanted patients and has been associated with high mortality rates in this group. There is a lack of information about the Brazilian structure population of CP-Kp isolated from transplanted patients. By whole-genome sequencing (WGS), we analyzed phylogeny, resistome, virulome of CP-Kp isolates, and the structure of plasmids encoding blaKPC-2 and blaNDM-1 genes. Methods: One K. pneumoniae isolated from each selected transplanted patient colonized or infected by CP-Kp over a 16-month period in a hospital complex in Porto Alegre (Brazil) was submitted for WGS. The total number of strains sequenced was 80. The hospital complex in Porto Alegre comprised seven different hospitals. High-resolution SNP typing, core genome multilocus sequence typing (cgMLST), resistance and virulence genes inference, and plasmid reconstruction were performed in 80 CP-Kp. Results: The mortality rate of CP-Kp colonized or infected transplanted inpatients was 21.3% (17/80). Four CP-Kp epidemic clones were described: ST11/KPC-2, ST16/KPC-2, and ST15/NDM-1, all responsible for interhospital outbreaks; and ST437/KPC-2 affecting a single hospital. The average number of acquired resistance and virulence genes was 9 (range = 2-14) and 27 (range = 6-36), respectively. Two plasmids carrying the blaKPC-2 were constructed and belonged to IncN and IncM types. Additionally, an IncFIB plasmid carrying the blaNDM-1 was described. Conclusion: We detected intrahospital and interhospital spread of mobile structures and international K. pneumoniae clones as ST11, ST16, and ST15 among transplanted patients, which carry a significant range of acquired resistance and virulence genes and keep spreading across the world. abstract_id: PUBMED:31704164 Impact of stable angina on health status and quality of life perception of currently treated patients. The BRIDGE 2 survey. Objective: to explore 1) the perception of stable angina (SA) - impact on quality of life (QoL) and current condition related to SA; 2) SA burden - symptoms and frequency of anginal episodes; 3) impairment attributable to SA - limitations in daily activities and impact on work; 4) characteristics that might affect the patients' perception." Method: a proprietary questionnaire was administered on-line to SA patients selected using a purpose-built screening program from general population panels collaborating with IQVIA in Italy, Germany, Spain, and the UK. Exploratory analyses were performed: descriptive statistics on the total sample and different stratifications (gender, age class, time since diagnosis) were provided; we used Chi-square tests to compare subgroups. Results: of more than 25,000 subjects who accessed the survey, 268 were eligible and completed the questionnaire: mean age was 61 years and women accounted for 30%. Despite being treated, about 40% of patients reported that SA impacted "completely" or "very much" their QoL, 10% rated their condition as "not good", and 45.1% stated that they felt "Fair". The majority of patients were still symptomatic and many of them perceived that SA had a major impact on their working life. Women, younger patients and those with a more recent diagnosis reported a worse self-assessment of their condition, QoL and symptom burden. Conclusions: the results of our survey provide new insights on how patients with SA perceived their own health status and suggest that any patient with SA deserves a more detailed and accurate evaluation by their physicians. abstract_id: PUBMED:35184312 Trends and outcomes following total artificial heart as bridge to transplant from the UNOS database. Introduction: Bridge to transplantation (BTT) with a SynCardia Total Artificial Heart (TAH) has been gaining momentum as a therapy for patients with biventricular heart failure. Recent transplant waitlist and posttransplant outcomes with this strategy have not been comprehensively characterized. We reviewed the United Network for Organ Sharing (UNOS) database to examine BTT outcomes for the TAH system since approval. Methods: Adult patients listed for heart transplantation in the UNOS system between 2004 and 2020 who underwent BTT therapy with a TAH were included in the study. Trends in utilization of TAH compared with other durable mechanical support strategies were examined. The primary outcome was 1-year survival following heart transplantation following BTT with TAH. Secondary outcomes included waitlist deterioration and risk factors for waitlist or posttransplant mortality. Results: During the study 433 total patients underwent TAH implant as BTT therapy; 236 (54.4%) were listed with the TAH, while the remaining patients were upgraded to TAH support while on the waitlist. Waitlist mortality was 7.4%, with 375 patients (86.6%) ultimately being transplanted. Age, cerebrovascular disease, functional status, and ventilator dependence were risk factors for waitlist mortality. One-year survival following successful BTT was 80%. Risk factors for mortality following BTT included age, body mass index, and underlying diagnosis. Conclusions: Patients undergoing BTT with TAH demonstrate acceptable waitlist survival and good 1-year survival. While utilization initially increased as a BTT therapy, there has been a plateau in relative utilization. Individual patient and transplantation center factors deserve further investigation to determine the ideal population for this therapy. abstract_id: PUBMED:28976161 Approaches to the choice of antianginal therapy of elderly patients with stable angina pectoris depending on psycho-emotional status The aim of the research was to assess the condition and dynamics of the psychoemotional status in elderly patients with stable angina in different variants of antianginal therapy with beta-blockers or ivabradine. Various dynamics of anginal syndrome depending on the severity of manifestations of a depressive episode was revealed. It is proved that the assessment of the psychological status can appear perspective for differentiated selection of antianginal therapy, since the appointment beta-blockers to patients with a low-depressive syndrome can degrade the performance of depression, whereas the appointment of ivabradin improves the mental condition and can be recommended when identifying a major depressive syndrome. abstract_id: PUBMED:37778527 Non-inferior outcomes in lower urgency patients transplanted with extended criteria donor hearts. Background: Recent work has suggested that outcomes among heart transplant patients listed at the lower-urgency (United Network for Organ Sharing Status 4 or 6) status may not be significantly impacted by donor comorbidities. The purpose of this study was to investigate outcomes of extended criteria donors (ECD) in lower versus higher urgency patients undergoing heart transplantation. Methods: The United Network for Organ Sharing (UNOS) database was queried for all adult patients undergoing heart transplantation from October 18, 2018 through December 31, 2021. Patients were stratified by degree of urgency (higher urgency: UNOS 1 or 2 vs lower urgency: UNOS 4 or 6) and receipt of ECD hearts, as defined by donor hearts failing to meet established acceptable use criteria. Outcomes were compared using propensity score matched cohorts. Results: Among 9,160 patients included, 2,320 (25.4%) were low urgency. ECD hearts were used in 35.5% of higher urgency (HU) patients and 39.2% of lower urgency (LU) patients. While ECD hearts had an impact on survival among high-urgency patients (p &lt; 0.01), there was no difference in 1- and 2-year survival (p &gt; 0.05) found among low urgency patients receiving ECD versus standard hearts. Neither ECDs nor individual ECD criteria were independently associated with mortality in low urgency patients (p &gt; 0.05). Conclusions: Post-transplant outcomes among low urgency patients are not adversely affected by receipt of ECD vs. standard hearts. Expanding the available donor pool by optimizing use of ECDs in this population may increase transplant frequency, decrease waitlist morbidity, and improve postoperative outcomes for the transplant community at large. abstract_id: PUBMED:12822631 Does the pretransplant UNOS status modify the short- and long-term cardiac transplant prognosis? Background: We compared the morbidity and mortality rates of patients who had urgent heart transplantation or transplantation after bridging with a ventricular assist device, with the rates of patients whose clinical stability allowed them to wait at home. Methods: From March 1985 to December 2000, 404 patients underwent heart transplantation in a single center. There were 273 patients with UNOS status 2 (US 2), 103 patients with UNOS Status 1A (US 1A), and 28 patients with UNOS Status 1B (US 1B). We compared the groups retrospectively with respect to pretransplantation status and operative results. Results: Despite more severely impaired hemodynamics and a significantly higher preoperative infection rate in US 1A and 1B patients, there were no statistically significant differences in survival rates among the three groups. Donor sex and age, cytomegalovirus and toxoplasmosis, mismatch rate, ischemic time, method of myocardial protection, and operative technique did not differ statistically among the three groups. Length of intensive care unit stay, postoperative morbidity, first year postoperative rejection rate, and graft occlusive vascular disease rate were statistically similar among the three groups. Although pretransplantation cancer assessment was less complete in US 1A and 1B than in US 2 patients, the late-cancer rate was not statistically different among the three groups. Conclusions: These data suggest that urgently transplanted patients have both early and long term morbidity and mortality similar to those of patients waiting for transplantation at home or with a ventricular assist device. Answer: The question of whether stable UNOS Status 2 patients should be transplanted is complex and has been the subject of several studies. According to the research, there are differing perspectives on the survival benefit of transplantation for these patients. One study found no survival benefit of cardiac transplantation at 1 year for patients initially listed as UNOS Status 2 in the current era of advances in medical and surgical therapies for heart failure (PUBMED:15701434). This suggests that with contemporary medical therapy, the survival advantage of transplantation for stable UNOS Status 2 patients may not be as clear-cut as previously thought. However, another study reported that early and midterm survival is significantly better with transplantation for UNOS Status 2 patients. It was noted that a significant number of patients listed as Status 2 are upgraded in UNOS status or die while on the waiting list, indicating that transplantation offers the best outcome for these patients (PUBMED:16893757). The impact of UNOS allocation policy changes on the utilization and outcomes of patients bridged to heart transplant with intra-aortic balloon pump (IABP) showed that the new heart allocation criteria have benefited patients bridged with an IABP, given the higher transplant rate and shorter time to transplant (PUBMED:34786769). In the context of liver transplantation for patients with multiple organ failures and probable acute-on-chronic liver failure (ACLF), it was found that a significant percentage of patients achieved excellent performance status after transplantation, even among those with multiple organ failures (PUBMED:30885883). This suggests that transplantation can lead to positive outcomes even in patients with severe illness. For heart transplant patients listed at lower urgency status, recent work suggested that outcomes may not be significantly impacted by donor comorbidities, and post-transplant outcomes among low urgency patients are not adversely affected by receipt of extended criteria donor (ECD) versus standard hearts (PUBMED:37778527). In summary, the decision to transplant stable UNOS Status 2 patients should be individualized, taking into account the potential benefits and risks, as well as the availability of organs and the patient's overall health status. While some studies suggest no clear survival benefit at 1 year, others indicate better early and midterm survival with transplantation. Additionally, policy changes and the use of ECD hearts may influence outcomes and should be considered in the decision-making process.
Instruction: Is home-based pelvic floor muscle training effective in treatment of urinary incontinence after birth in primiparous women? Abstracts: abstract_id: PUBMED:23672520 Is home-based pelvic floor muscle training effective in treatment of urinary incontinence after birth in primiparous women? A randomized controlled trial. Objective: To assess the effect of pelvic floor muscle training (PFMT) on pelvic floor muscle strength and urinary incontinence (UI) in primiparous women who underwent a home training program between three and 9 months after delivery. Design: Randomized controlled trial. Population: One hundred primiparous women were consecutively recruited from four different antenatal clinics in the urban area of Stockholm, Sweden. Women with UI who had undergone normal term singleton vaginal delivery, 10-16 weeks postpartum were randomly allocated to either intervention or control group. Methods: Maximally voluntary contraction (MVC) and endurance were measured with a perionometer. The Oxford grading scale was used to manually estimate the strength of the pelvic floor muscle and self-reported symptoms of UI was registered through the Bristol Female Lower Urinary Tract Symptoms Module (ICIQ FLUTS) questionnaire. Main Outcome Measures: Maximally voluntary contraction of the pelvic floor muscle measured with a perionometer. Results: Maximally voluntary contraction increased significantly in both groups between baseline and follow up (p &lt; 0.05). The median MVC in cmHg for the intervention and control group was 16.2 and 12.1 at baseline and 26.0 and 18.2 at follow up, respectively. The median endurance, in seconds, for the intervention and control group was 9.6 and 12.0 at baseline and 26.7 and 23.4 at follow up, respectively. Pelvic floor muscle strength measured with the Oxford Scale increased significantly in both groups between baseline and follow up (p &lt; 0.05). Conclusion: The results indicate that home-based PFMT is effective. However, written training instructions were as efficient as home-based training with follow up visits every sixth week. abstract_id: PUBMED:33495013 Early postpartum biofeedback assisted pelvic floor muscle training in primiparous women with second degree perineal laceration: Effect on sexual function and lower urinary tract symptoms. Objective: To evaluate the short-term effect of routine early postpartum electromyographic biofeedback assisted pelvic floor muscle training on sexual function and lower urinary tract symptoms. Materials And Methods: From December 2016 to November 2017, primiparous women with vaginal delivery, who experienced non-extended second-degree perineal laceration were invited to participate. Seventy-five participants were assigned into a pelvic floor muscle training (PFMT) group or control group. Women in the PFMT group received supervised biofeedback-assisted pelvic floor muscle training at the 1st week and 4th week postpartum. Exercises were performed at home with the same protocol until 6 weeks postpartum. The Pelvic Organ Prolapse Urinary Incontinence Sexual Questionnaire (PISQ-12) and the Urinary Distress Inventory short form questionnaire (UDI-6) were used to evaluate sexual function and lower urinary tract symptoms respectively at immediate postpartum, 6 weeks, 3 months, and 6 months postpartum. Results: Forty-five women (23 in PFMT group,22 in control group) completed all questionnaires at 6 months postpartum. For overall sexual function and the three sexual functional domains, no statistically significant difference was found in PISQ scores from baseline to 6 weeks, 3 months, and 6 months postpartum between the PFMT and control groups. For postpartum lower urinary tract symptoms, all symptoms gradually improved over time for both groups without a statistically significant difference between groups. Conclusion: Our study showed that supervised biofeedback-assisted pelvic floor muscle training started routinely at one week postpartum did not provide additional improvement in postpartum sexual function and lower urinary tract symptoms. abstract_id: PUBMED:25921509 Pelvic floor muscle training and pelvic floor disorders in women Our goal is to provide an update on the results of pelvic floor rehabilitation in the treatment of urinary incontinence and genital prolapse symptoms. Pelvic floor muscle training allows a reduction of urinary incontinence symptoms. Pelvic floor muscle contractions supervised by a healthcare professional allow cure in half cases of stress urinary incontinence. Viewing this contraction through biofeedback improves outcomes, but this effect could also be due by a more intensive and prolonged program with the physiotherapist. The place of electrostimulation remains unclear. The results obtained with vaginal cones are similar to pelvic floor muscle training with or without biofeedback or electrostimulation. It is not known whether pelvic floor muscle training has an effect after one year. In case of stress urinary incontinence, supervised pelvic floor muscle training avoids surgery in half of the cases at 1-year follow-up. Pelvic floor muscle training is the first-line treatment of post-partum urinary incontinence. Its preventive effect is uncertain. Pelvic floor muscle training may reduce the symptoms associated with genital prolapse. In conclusion, pelvic floor rehabilitation supervised by a physiotherapist is an effective short-term treatment to reduce the symptoms of urinary incontinence or pelvic organ prolapse. abstract_id: PUBMED:23129397 Pelvic floor muscle training program increases muscular contractility during first pregnancy and postpartum: electromyographic study. Aims: The aim of this study was to evaluate the effect of a training program over both pelvic floor muscles contractility and urinary symptoms in primigravid pregnant and postpartum primiparous women. Patients And Methods: A clinical, prospective and blinded trial was conducted with 33 women divided into three groups: (G1) 13 primigravid pregnant women; (G2) 10 postpartum primiparous women (49.3 ± 5.84 days), after vaginal delivery with right mediolateral episiotomy; (G3) 10 postpartum primiparous women (46.3 ± 3.6 days), after cesarean section delivery. The evaluation was carried out using digital palpation (Modified Oxford Grading Scale), pelvic floor electromyography and, for the investigation of urinary symptoms, validated questionnaires (International Consultation on Incontinence Questionnaire-short form-ICIQ-UI SF and International Consultation on Incontinence Questionnaire Overactive Bladder-ICIQ-OAB). The protocol consisted of 10 individual sessions carried out by the physiotherapist through home visits, three times a week, with 60 min duration each. The statistical analysis was performed using ANOVA and Spearman's correlation coefficient. Results: The pelvic floor muscle contractility increased after the training program (P = 0.0001) for all groups. Decreases in the scores of both ICIQ-UI SF (P = 0.009) and ICIQ-OAB (P = 0.0003) were also observed after training. Conclusion: Pelvic floor muscle training is an effective means for the increase in its own contractility in both primigravid pregnant and primiparous postpartum women, accompanied with a concomitant decrease in urinary symptoms. abstract_id: PUBMED:36951973 Does multiple-component intensive pelvic floor muscle training decrease muscle fatigue and symptoms in women with urinary incontinence? Introduction And Hypothesis: A multiple-component intensive pelvic floor muscle training (MCI-PFMT) protocol was developed as a neurophysiological-based rehabilitation model to improve neuroplasticity. This study aimed to investigate the effects of the MCI-PFMT protocol on muscle fatigue and symptoms in women with urinary incontinence. Methods: This randomized controlled trial included 49 female patients with mixed urinary incontinence. Participants were divided into the MCI-PFMT group and the control group. The MCI-PFMT group performed supervised intensive pelvic floor muscle training, while the control group received bladder training and standard pelvic floor muscle training as a home program. Both training sessions were conducted 5 days a week for a single week. Participants' symptoms were evaluated with questionnaires, bladder diary, and pad tests. Superficial electromyography, ultrasonography, and the PERFECT scale were used to evaluate pelvic floor and abdominal muscle functions. Results: In the post-treatment evaluation, symptoms were decreased in both groups, with a significant decrease in the MCI-PFMT group (p &lt; 0.05). While average and peak work values of pelvic floor muscles, transversus abdominus, and internal oblique muscles increased in both groups, maximum voluntary contraction values of these muscles decreased (p &lt; 0.05). A 12.7% decrease was observed in the maximum voluntary contraction values of pelvic floor muscles in the control group, while a 9.6% decrease was observed in the MCI-PFMT group. Conclusions: The MCI-PFMT protocol can lead to pelvic floor and abdominal muscle fatigue. However, it may be effective at decreasing symptoms in women with urinary incontinence. Additional studies on this issue are needed. abstract_id: PUBMED:30704907 Pelvic floor muscle training versus no treatment, or inactive control treatments, for urinary incontinence in women: a cochrane systematic review abridged republication. Background: Pelvic floor muscle training is the most commonly used physical therapy treatment for women with urinary incontinence. Objectives: To assess the effects of Pelvic floor muscle training for women with urinary incontinence in comparison to a control treatment and to summarize relevant economic findings. Methods: Cochrane Incontinence Group Specialized Register (February 12, 2018). Selection Criteria: Randomized or quasi-randomized trials in women with stress, urgency or mixed urinary incontinence (symptoms, signs, or urodynamic). Data Collection And Analysis: Trials were independently assessed by at least two reviewers authors and subgrouped by urinary incontinence type. Quality of evidence was assessed by adopting the Grading of Recommendations, Assessment, Development and Evaluation approach. Results: The review included thirty-one trials involving 1817 women from 14 countries. Overall, trials were small to moderate size, and many were at moderate risk of bias. There was considerable variation in the intervention's content and duration. Based on data available, we can be confident that Pelvic floor muscle training can cure or improve symptoms of stress and all other types of urinary incontinence. It may reduce the number of leakage episodes and the quantity of leakage, while improving reported symptoms and quality of life. Women were more satisfied with Pelvic floor muscle training, while those in control groups were more likely to seek further treatment. Long-term effectiveness and cost-effectiveness of Pelvic floor muscle training needs to be further researched. Conclusions: The addition of ten new trials did not change the essential findings of the earlier review, suggesting that Pelvic floor muscle training could be included in first-line conservative management of women with urinary incontinence. abstract_id: PUBMED:33125272 Effectiveness of pelvic floor muscle and abdominal training in women with stress urinary incontinence. The aim of this study was to investigate the effectiveness of pelvic floor muscle training and abdominal training in women with stress urinary incontinence. The study included 64 female patients (aged 18 to 49 years) with stress urinary incontinence. Patients were randomly allocated into the pelvic floor muscle training (PFMT) or pelvic floor muscle plus abdominal training (PFMT+AT) groups. Clinical data included stress test results, pelvic floor activity measurements, and Urinary Distress Inventory, and Incontinence Impact Questionnaire responses. The increase in the pelvic floor muscle activity (from the 0th to the 4th week, from the 4th to the 8th week, and from the 0th to the 8th week) was significantly greater for the PFMT+AT group than for the PFMT group (p &lt; 0.05). The negativity rate of the stress test at the 4th week was significantly higher for the PFMT+AT group (93.7%) than for the PFMT group (53.1%) (p &lt; 0.001). Women with stress urinary incontinence benefit more from pelvic floor muscle training plus abdominal training than from pelvic floor muscle training alone in terms of increasing their pelvic floor muscle activity and quality of life, and they also experience an earlier recovery. abstract_id: PUBMED:25142280 Does pelvic floor muscle training abolish symptoms of urinary incontinence? A randomized controlled trial. Objective: To determine whether symptoms of urinary incontinence is reduced by pelvic floor muscle training, to determine whether urinary incontinence can be totally eliminated by strengthening the pelvic floor muscle to grade 5 on the Oxford scale. Design: Prospective randomized controlled clinical trial. Setting: Outpatient urogynecology department. Subjects: One hundred thirty cases with stress and mixed urinary incontinence. Intervention: All participants were randomly allocated to the pelvic floor muscle training group or control group. A 12-week home based exercise program, prescribed individually, was performed by the pelvic floor muscle training group. Main Measures: Urinary incontinence symptoms (Incontinence Impact Questionnaire-7, Urogenital Distress Inventory-6, bladder diary, stop test and pad test) were assessed, and the pelvic floor muscle strength was measured for (PERFECT testing, perineometric and ultrasound) all participants before and after 12 weeks of treatment. Results: The pelvic floor muscle training group had significant improvement in their symptoms of urinary incontinence (P=0.001) and an increase in pelvic floor muscle strength (P=0.001, by the dependent t test) compared with the control group. All the symptoms of urinary incontinence were significantly decreased in the patients that had reached pelvic floor muscle strength of grade 5 and continued the pelvic floor muscle training (P&lt;0.05). Conclusion: The study demonstrated that pelvic floor muscle training is effective in reducing the symptoms of stress and mixed urinary incontinence and in increasing pelvic floor muscle strength. abstract_id: PUBMED:29844662 The impact of pelvic floor muscle training on the quality of life of women with urinary incontinence: a systematic literature review. Purpose: The purpose of this review was to assess the effectiveness of pelvic floor muscle training (PFMT) in the treatment of urinary incontinence (UI) in women, with a particular focus on the impact of this form of therapy on the patients' quality of life (QoL). Methods: The following electronic databases were searched: PubMed, Embase, and Cochrane Library (articles only in English, 1990-2017). Search terms were as follows: urinary incontinence, pelvic floor muscle training, pelvic floor exercises, quality of life. Systematic review methods were based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. Results: The assessment of the impact of PFMT on the QoL of women with UI was conducted among 2,394 women in 24 selected studies. After the end of treatment, the majority of patients in the experimental groups noted a statistically significant improvement in QoL. Conclusion: The results of this literature review demonstrate that PFMT is an effective treatment for UI in women. PFMT significantly improves the QoL of women with UI, which is an important determinant of their physical, mental, and social functioning. abstract_id: PUBMED:33195869 Effects of pelvic floor muscle training in pregnant women. Objective: The aim is to verify the effectiveness of the pelvic floor muscle training (PFMT) program in pregnant women, by analyzing the amount of urine leakage. Design: Experimental study. Location: Care units in the Vila Real district, Portugal. Participants: Forty-three pregnant women divided into an experimental group (EG) (n = 22) and a control group (CG) (n = 21). Interventions: Both groups were evaluated in a predelivery stage (initial) and 6 months after delivery (final). The EG was given a PFMT exercise protocol with a duration of 6 weeks, applicable in classes and at home, another PFMT protocol for 9 weeks. Main Measurements: The strength of the pelvic floor muscle was measured by the Oxford Grading Scale and the amount of urine was assessed with a Pad test. Quality of life was measured by the King's Health Questionnaire and self-efficacy by the Broome Pelvic Muscle Self-Efficacy Scale. Results: The loss of urine, significantly reduced from 0.86 ± 0.83 to 0.50 ± 0.67 in the EG (P = .021), whereas in the CG no significant change was observed. The degree of muscle contraction increased by 4.82 ± 0.39 in EG and 3.95 ± 0.67 in CG. The quality of life observed significant improvements in both groups (P &lt; .05). As for self-efficacy, for the EG it improved significantly - initial (P = .001), final (P = .031), and for the CG the values remained the same. Conclusions: This PFMT protocol reduced urinary incontinence in pregnant women. The program allowed significant improvement in the quantity of urinary leakage and an increase in the strength of the pelvic floor muscle. Answer: Yes, home-based pelvic floor muscle training (PFMT) is effective in the treatment of urinary incontinence after birth in primiparous women. A randomized controlled trial showed that home-based PFMT led to a significant increase in maximally voluntary contraction and endurance of the pelvic floor muscle, as well as an increase in pelvic floor muscle strength measured with the Oxford Scale in both intervention and control groups. The study concluded that home-based PFMT is effective, and written training instructions were as efficient as home-based training with follow-up visits every sixth week (PUBMED:23672520). Additionally, another study found that pelvic floor muscle training is an effective means for increasing pelvic floor muscle contractility in both primigravid pregnant and primiparous postpartum women, accompanied by a concomitant decrease in urinary symptoms (PUBMED:23129397). Furthermore, a systematic review indicated that PFMT significantly improves the quality of life of women with urinary incontinence, which is an important determinant of their physical, mental, and social functioning (PUBMED:29844662). Moreover, a study that included a PFMT program for pregnant women found that the program reduced urinary incontinence, allowed significant improvement in the quantity of urinary leakage, and increased the strength of the pelvic floor muscle (PUBMED:33195869). Another study demonstrated that PFMT is effective in reducing the symptoms of stress and mixed urinary incontinence and in increasing pelvic floor muscle strength (PUBMED:25142280). In summary, the evidence from these studies supports the effectiveness of home-based PFMT in treating urinary incontinence in primiparous women postpartum.
Instruction: Fine needle aspiration biopsy of azoospermic testes. Could it replace histologic biopsy? Abstracts: abstract_id: PUBMED:11127754 Fine needle aspiration biopsy of azoospermic testes. Could it replace histologic biopsy? Objective: To evaluate the possibility of replacing histologic biopsy of azoospermic testes by fine needle aspiration (FNA) for diagnostic and management purposes. Study Design: Twenty-seven patients were examined under general anesthesia, and 53 testes were biopsied by FNA. Fifty-four testes were biopsied for histologic examination. Histology and cytology of each testis were compared to assess the discrepancy or concordance between the diagnoses. Results: Twenty-seven testes showed complete concordance of cytology and histology, and 13 testes showed a mild degree of discrepancy. A severe degree of discrepancy was seen in 13 testes. In the majority of discrepant cases, FNA biopsy diagnosis was more sensitive in detecting evidence of full or advanced maturation. In one case of discrepancy, the histologic finding was Sertoli cells only, and cytology revealed focal full maturation. In vitro fertilization using sperm from this case was successful. Conclusion: FNA biopsy of testes in azoospermia is a fast, reliable and minimally traumatic method. The prospects for utilizing FNA biopsy material in assisted and microassisted fertilization are promising. abstract_id: PUBMED:28508192 Ultrasound-guided fine needle aspiration versus core needle biopsy: comparison of post-biopsy hematoma rates and risk factors. Purpose: To compare post-biopsy hematoma rates between ultrasound guided-fine needle aspiration and ultrasound guided-core needle biopsy, and to investigate risk factors for post-biopsy hematoma. Methods: A total of 5304 thyroid nodules which underwent ultrasound guided biopsy were included in this retrospective study. We compared clinical and US features between patients with and without post-biopsy hematoma. Associations between these features and post-biopsy hematoma were analyzed. Results: Post-biopsy hematoma rate was 0.8% (43/5121) for ultrasound guided-fine needle aspiration and 4.9% (9/183) for ultrasound guided-core needle biopsy (P &lt; 0.001). For ultrasound guided-fine needle aspiration, gender, age, size, presence of vascularity, and suspicious US features were not associated with post-biopsy hematoma according to experience level. Post-biopsy hematoma occurred significantly more with ultrasound guided-core needle biopsy (9/179, 5.0%) than with ultrasound guided-fine needle aspiration (9/1138, 0.8%) (P &lt; 0.001) in experienced performers and ultrasound guided-core needle biopsy was the only significant risk factor for post-biopsy hematoma (adjusted Odds Ratio, 6.458, P &lt; 0.001). Conclusion: Post-biopsy hematoma occurred significantly more in ultrasound guided-core needle biopsy than in ultrasound guided-fine needle aspiration and ultrasound guided-core needle biopsy was the only independent factor of post-biopsy hematoma in thyroid nodules. abstract_id: PUBMED:24277901 Comparison of sonography with sonographically guided fine-needle aspiration biopsy and core-needle biopsy for initial axillary staging of breast cancer. Objectives: The purpose of this study was to determine the roles of sonography and sonographically guided fine-needle aspiration biopsy and core-needle biopsy for initial axillary staging of breast cancer. Methods: Of 220 patients with breast cancer who underwent preoperative or prechemotherapy sonography for axillary staging, 52 patients who underwent sonographically guided fine-needle aspiration biopsy and core-needle biopsy for cortical thickening or a compressed hilum of lymph nodes on sonography were prospectively enrolled. Sonography and fine-needle aspiration biopsy/core-needle biopsy findings were compared with final pathologic results from sentinel lymph node biopsy or axillary lymph node dissection. Results: Forty-eight patients met the final study criteria; we excluded 4 who had received primary systemic chemotherapy and showed negative fine-needle aspiration biopsy/core-needle biopsy results and negative final postoperative pathologic results. The positive predictive value of axillary sonography was 54%. The sensitivity and specificity of fine-needle aspiration biopsy were 73% and 100%, respectively, and those of core-needle biopsy were 77% and 100%. Results did not differ significantly between sonographically guided core-needle biopsy and fine-needle aspiration biopsy. The complication rates of fine-needle aspiration biopsy and core-needle biopsy were both 4%, and fine-needle aspiration biopsy and core-needle biopsy cost $180 and $350, respectively. Conclusions: Both sonographically guided fine-needle aspiration biopsy and core-needle biopsy were useful for axillary staging of breast cancer with high sensitivity. However, fine-needle aspiration biopsy is recommended based on the advantages of low cost and minimal invasiveness. abstract_id: PUBMED:28589181 Reliability of fine needle aspiration biopsy in large thyroid nodules. Objective: Fine needle aspiration biopsy provides one of the most important data that determines the treatment algorithm of thyroid nodules. Nevertheless, the reliability of fine needle aspiration biopsy is controversial in large nodules. The aim of this study was to evaluate the adequacy of fine needle aspiration biopsy in thyroid nodules that are four cm or greater. Material And Methods: We retrospectively examined 219 patients files who underwent thyroidectomy for thyroid nodules that were greater than four centimeter between May 2007 and December 2012. Seventy-four patients with hyperthyroidism, and 18 patients without preoperative fine needle aspiration cytology were excluded from the study. Histopathologic results after thyroidectomy were compared with preoperative cytology results, and sensitivity and specificity rates were calculated. Results: False-negativity, sensitivity and specificity rates of fine needle aspiration biopsy of thyroid nodules were found to be 9.7%, 55.5%, and 85%, respectively. Within any nodule of the 127 patients, 28 (22.0%) had thyroid cancer. However, when only nodules of at least 4 cm were evaluated, thyroid cancer was detected in 22 (17.3%) patients. Conclusion: In this study, fine needle aspiration biopsy of large thyroid nodules was found to have a high false-negativity rate. The limitations of fine-needle aspiration biopsy should be taken into consideration in treatment planning of thyroid nodules larger than four centimeters. abstract_id: PUBMED:31448027 The importance of fine needle aspiration biopsy in the diagnosis of parotid tumors This study aimed to determine the diagnostic value of fine needle aspiration biopsy (FNAB) and of its contribution to the management of parotid tumors. We conducted a retrospective study of 47 patients who had undergone parotidectomy as well as preoperative fine needle aspiration biopsy. The study highlighted that eighty one percent of patients had a benign tumor while 19% of patients had a malignant tumor. The sensitivity and specificity of FNAB were 78% and 92% respectively. Parotid tumors were correctly classified as malignant or benign in 89% of cases, overall accuracy was 64.4%. FNAB is a reliable examination providing preoperative informations about the treatment plan and the postoperative course. abstract_id: PUBMED:24551435 Combination of aspiration and non-aspiration fine needle biopsy for cytological diagnosis of thyroid nodules. Background: Good cytological sample is very important for the cytological diagnosis of thyroid nodules. The aim of this study was to evaluate the adequacy of prepared samples by the combination of aspiration and non- aspiration fine needle biopsy. Methods: In this descriptive - analytical study, sampling was done simultaneously for each patient in fine needle aspiration and non-aspiration biopsy. The sufficiency of samples was studied using Mair Scoring System. Wilcoxon Signed Rank test was used for the data analysis. Results: Three hundred two cases (289 females, 13 males) with the mean age of 43.83±12.9 years were evaluated. Inadequate samples were 31 (10.3%) in fine needle aspiration, 40 (13.2%) in non-aspiration and 13 cases (4.3%) by using two methods together (p=0.0001). The average total score was 6.00±2.17 in fine needle aspiration and 5.76±2.26 in non-aspiration method (p=0.08), and 6.6±1.98 in the combination of the two methods (p&lt;0001 comparing with one method alone). Conclusion: The results show that using both methods simultaneously in each nodule considerably increases the efficiency of samples for cytological diagnosis. abstract_id: PUBMED:15214827 Comparison of needle core biopsy and fine-needle aspiration for diagnostic accuracy in musculoskeletal lesions. Context: Needle core biopsy has been reported to be the choice of biopsy for musculoskeletal tumors. Fine-needle aspiration, on the other hand, has been widely accepted for nonmusculoskeletal tumors, but is only used in selected medical centers for musculoskeletal tumors. While fine-needle aspiration appears to have advantages to needle core biopsy in the aspects of simplicity and cost, the diagnostic accuracy should be the most critical parameter in determining the choice of biopsy. However, few studies comparing the diagnostic accuracy of these 2 biopsy methods have been performed. Objective: This study was designed to compare the diagnostic accuracy of fine-needle aspiration and needle core biopsy in musculoskeletal tumors. Design: Prospective study was performed in patients aged 10 years or older. Diagnostic accuracy was compared in 50 consecutive concurrent needle core biopsies and fine-needle aspirations of musculoskeletal lesions. Results: For primary musculoskeletal lesions, fine-needle aspiration achieved a diagnostic accuracy rate of 88% for nature of lesion, 64% for specific diagnosis, 78% for histologic grading, and 74% for histologic typing. Needle core biopsy achieved an accuracy rate of 93% for nature of lesions, 83% for specific diagnosis, 83% for histologic grading, and 90% for histologic typing. Both biopsy methods have a higher diagnostic accuracy rate for high-grade tumors than for low-grade or benign lesions in determining the nature, specific diagnosis, and histologic grading. Conclusions: The needle core biopsy has a higher diagnostic accuracy than fine-needle aspiration in all aspects, including determining the nature of the tumor, establishing the histologic type and grade, and achieving a specific diagnosis. abstract_id: PUBMED:24371424 Iris melanocytoma in child diagnosed by fine needle aspiration biopsy. We report a case of large iris melanocytoma in a child diagnosed by fine needle aspiration biopsy. In this interventional case report, cytologic features typical of melanocytoma were obtained by fine needle aspiration biopsy (FNAB). FNAB can be used in difficult diagnostic cases if a good sample is obtained, this technique has an accuracy of more than 99% in tumors larger than 3 mm; however, false-negative and false-positive results may be obtained. Its risk of local spread is very small, an advantage over incisional biopsy. The most common complication is intralesional hemorrhage and hyphema. abstract_id: PUBMED:12521423 Fine needle aspiration cytology compared with open biopsy histology for the diagnosis of azoospermia. This was a comparative study to determine the diagnostic value of fine needle aspiration of the testis compared with open testicular biopsy in azoospermic men. A total of 34 infertile, azoospermic patients aged from 26-54 years underwent concurrent testicular fine needle aspiration (FNA) and biopsy. The testis was aspirated at three separated sites (upper, middle and lower pole) using a 20 ml syringe and 21-G butterfly needle. A testicular biopsy was taken from the same sites. Samples obtained from FNA were air-dried and stained with Romanowsky-May-Grunwald-Giemsa method. Sections obtained from testicular biopsy were stained with haematoxylin and eosin (H&amp;E). The patient's history, semen analyses, hormonal profile and testicular volume were analysed, and in addition to the histological results azoospermia was classified into obstructive and non-obstructive. A good correlation between cytological smears and histological sections was found in 65 of 68 testes (95.6%). Normal spermatogenesis was diagnosed in seven patients (20.6%), hypospermatogenesis in nine (26.5%), late maturation arrest in five (14.7%), early maturation arrest in three (8.8%) and Sertoli only-cell in 10 (29.4%) patients. Discordance between cytology and histology was observed in three of 68 testes (4.4%) and was related to inadequacy of FNA. Testicular fine needle aspiration is a simple, minimally-invasive procedure that can diagnose accurately testicular function. In addition to the testicular volume and hormonal status, it can differentiate the obstructive from non-obstructive azoospermia. In cases of non-obstructive azoospermia, multiple passes with a fine needle can be performed instead of open testicular biopsy for sperm extraction if present. abstract_id: PUBMED:34176413 Impact of initial biopsy type on the time to final diagnostic biopsy in patients with follicular lymphoma and suspected histologic transformation. Diagnosis of histologic transformation (HT) of follicular lymphoma (FL) requires tissue biopsy. While surgical biopsy represents the gold standard, less invasive procedures such as fine-needle aspiration biopsy (FNAB) and core needle biopsy (CNB) are frequently performed. In this retrospective multi-institutional study including 269 patients with FL and suspected HT, the median time from initial clinical suspicion to final diagnostic biopsy was similar whether the workup began with FNAB, CNB, or surgical biopsy (4, 9, and 6 days, respectively; p=.27), despite more subsequent biopsies performed following initial FNAB. Periprocedural complications were uniformly minimal. Biopsy-proven HT was more common in the initial surgery group and in workups including positron emission tomography/computed tomography (PET/CT). Our findings, derived from US academic centers with specialized procedural and pathology expertise, suggest that FNAB, CNB, and surgical biopsy are all viable initial diagnostic procedures that can inform clinical decision-making in select FL patients with suspected HT. Answer: The study described in abstract PUBMED:11127754 aimed to evaluate the possibility of replacing histologic biopsy of azoospermic testes with fine needle aspiration (FNA) for diagnostic and management purposes. The study involved 27 patients and 53 testes biopsied by FNA, with 54 testes biopsied for histologic examination. The results showed that 27 testes had complete concordance between cytology and histology, while 13 testes showed a mild degree of discrepancy, and a severe degree of discrepancy was seen in 13 testes. In most discrepant cases, FNA biopsy diagnosis was more sensitive in detecting evidence of full or advanced maturation. In one case, histology showed Sertoli cells only, but cytology revealed focal full maturation, and in vitro fertilization using sperm from this case was successful. The conclusion of the study was that FNA biopsy of testes in azoospermia is a fast, reliable, and minimally traumatic method, with promising prospects for utilizing FNA biopsy material in assisted and microassisted fertilization. Based on this study, FNA biopsy has shown potential as a less invasive and reliable alternative to histologic biopsy for the evaluation of azoospermic testes, particularly in the context of assisted reproductive technologies. However, the study also noted discrepancies between FNA and histologic findings in some cases, which suggests that while FNA may be a promising tool, it may not completely replace histologic biopsy in all cases. Further research and validation would be necessary to fully establish FNA as a replacement for histologic biopsy in the diagnostic and management of azoospermic testes.
Instruction: Can secondary school students' self-reported measures of height and weight be trusted? Abstracts: abstract_id: PUBMED:16601105 Can secondary school students' self-reported measures of height and weight be trusted? An effect size approach. Background: Self-reported measures of height and weight are a cost-effective alternative to direct measures in large-scale studies. This study was designed to examine the accuracy of adolescent students' self-reported height and weight taking into consideration the magnitude of the differences. Methods: Self-reported height and weight were taken from 300 secondary public schools students. Participants' actual height and weight were subsequently verified. Body mass index (BMI; kg/m2) was calculated separately from reported and from actual measures. Adolescents' whose measured BMI was above the 85th percentile were characterized as 'at risk for overweight/obese'. Results: There was no gender effect on the discrepancy between reported and actual measures. Overall adolescents significantly underestimated their weight and BMI. Although correlation coefficients were high, eta-square (eta2) values indicate large bias for weight (0.36) and BMI (0.31). 'At risk for overweight/obese' individuals underestimated their weight and BMI to a greater extent than their 'normal weight' counterparts. Conclusions: The magnitude of the discrepancies call into question the accuracy of self-reported weight and consequently the estimated BMI. Correlation coefficients did not provide any valuable information about the discrepancy between the self-reported and actual measures. A better understanding of the validity of self-reported height and weight could be reached if interpretation of the results is based on both statistical significance and magnitude of the differences. abstract_id: PUBMED:15185275 Predictors of the accuracy of self-reported height and weight in adolescent female school students. Objective: The authors investigated predictors of the accuracy of self-reported values of body weight and height in adolescent females. Method: Self-reported and measured weight and height values were obtained for 683 school students aged 11-18 years. Predictors of accuracy were determined for self-reported weight, height, and body mass index (BMI; based on self-reported values). Results: Self-reported weight was underestimated. This was most likely for students with higher BMI values and actual weight. Younger, early pubertal and premenarcheal students were most likely to underestimate their height whereas older, postmenarcheal (more than 3 years) students were most likely to overestimate their height. The more exercise reported, the more accurate the height estimation. This translates to a greater underestimation of BMI as adolescents become older, have a longer duration since menarche, and exercise more. Discussion: When accurate information is required, clinicians should measure height and weight, particularly if the adolescent female is perimenarcheal or appears to differ from population averages for height and weight. abstract_id: PUBMED:12667732 Reliability and validity of self-reported height and weight among high school students. Purpose: To assess the reliability and validity of self-reported height and weight, and variables calculated from these values, in a diverse sample of adolescents. Methods: A convenience sample of students (n = 4619) in grades 9 through 12 reported their height and weight on two questionnaires administered approximately 2 weeks apart. Using a standard protocol, a subsample of these students (n = 2032) also were weighed and had their height measured following completion of the first questionnaire. Results: Self-reported heights at Time 1 and Time 2 were highly correlated, and the mean difference between height at Time 1 and Time 2 was small. Results were similar for self-reported weight at Time 1 and Time 2 and body mass index (BMI) calculated from these values. Although self-reported values of height, weight, and BMI were highly correlated with their measured values, on average, students overreported their height by 2.7 inches and underreported their weight by 3.5 pounds. Resulting BMI values were an average of 2.6 kg/m(2) lower when based on self-reported vs. measured values. The percentages of students classified as "overweight" or "at risk for overweight" were therefore lower when based on self-reported rather than on measured values. White students were more likely than those in other race/ethnic groups to overreport their height, and the tendency to overreport height increased by grade. Female students were more likely than male students to underreport their weight. Conclusions: Self-reported height, weight, and BMI calculated from these values were highly reliable but were discrepant from measured height, weight, and BMIs calculated from measured values. BMIs based on self-reported height and weight values therefore underestimate the prevalence of overweight in adolescent populations. abstract_id: PUBMED:29379548 The validity of self-reported vs. measured body weight and height and the effect of self-perception. Introduction: The objective was to assess the validity of self-reported body weight and height and the possible influence of self-perception of body mass index (BMI) status on the actual BMI during the adolescent period. Material And Methods: This cross sectional study was conducted on 3918 high school students. Accurate BMI perception occurred when the student's self-perception of their BMI status did not differ from their actual BMI based on measured height and weight. Agreement between the measured and self-reported body height and weight and BMI values was determined using the Bland-Altman metod. To determine the effects of "a good level of agreement", hierarchical logistic regression models were used. Results: Among male students who reported their BMI in the normal region, 2.8% were measured as overweight while 0.6% of them were measured as obese. For females in the same group, these percentages were 1.3% and 0.4% respectively. Among male students who perceived their BMI in the normal region, 8.5% were measured as overweight while 0.4% of them were measured as obese. For females these percentages were 25.6% and 1.8% respectively. According to logistic regression analysis, residence and accurate BMI perception were significantly associated with "good agreement" (p ≤ 0.001). Conclusions: The results of this study demonstrated that in determining obesity and overweight statuses, non-accurate weight perception is a potential risk for students. abstract_id: PUBMED:30314261 Self-Reported vs. Measured Height, Weight, and BMI in Young Adults. Self-reported height and weight, if accurate, provide a simple and economical method to track changes in body weight over time. Literature suggests adults tend to under-report their own weight and that the gap between self-reported weight and actual weight increases with obesity. This study investigates the extent of discrepancy in self-reported height, weight, and subsequent Body Mass Index (BMI) versus actual measurements in young adults. Physically measured and self-reported height and weight were taken from 1562 students. Male students marginally overestimated height, while females were closer to target. Males, on average, closely self-reported weight. Self-reported anthropometrics remained statistically correlated to actual measures in both sexes. Categorical variables of calculated BMI from both self-reported and actual height and weight resulted in significant agreement for both sexes. Researcher measured BMI (via anthropometric height and weight) and sex were both found to have association with self-reported weight while only sex was related to height difference. Regression examining weight difference and BMI was significant, specifically with a negative slope indicating increased BMI led to increased underestimation of weight in both sexes. This study suggests self-reported anthropometric measurements in young adults can be used to calculate BMI for weight classification purposes. Further investigation is needed to better assess self-reported vs measured height and weight discrepancies across populations. abstract_id: PUBMED:37451556 School-level self-reported versus objective measurements of body mass index in public high school students. Population-level surveillance of student weight status (particularly monitoring students with a body mass index (BMI) ≥95th percentile) remains of public health interest. However, there is mounting concern about objectively measuring student BMI in schools. Using data from the nation's largest school district, we determined how closely students' self-reported BMI approximates objectively-measured BMI, aggregated at the school level, to inform decision-making related to school BMI measurement practices. Using non-matched data from n = 82,543 students with objective height/weight data and n = 7676 with self-reported height/weight from 84 New York City high schools (88% non-white and 75% free or reduced-price meal-eligible enrollment), we compared school-level mean differences in height, weight, BMI, and proportion of students by weight status, between objective and self-reported measures. At the school-level, the self-reported measurement significantly underestimated weight (-1.38 kg; 95% CI: -1.999, -0.758) and BMI (-0.38 kg/m2; 95% CI: -0.574, -0.183) compared to the objective measurement. Based on the objective measurement, 12.1% of students were classified as having obesity and 6.3% as having severe obesity (per CDC definition); the self-report data yielded 2.5 (95% CI: -1.964, -0.174) and 1.4 (95% CI: -2.176, -0.595) percentage point underestimates in students with obesity and severe obesity, respectively. This translates to 13% of students with obesity and 21% of students with severe obesity being misclassified if using self-reported BMI. School-level high school students' self-reported data underestimate the prevalence of students with obesity and severe obesity and is particularly poor at identifying highest-risk students based on BMI percentile. abstract_id: PUBMED:22742593 Accuracy of self-reported height and weight in children aged 6 to 11 years. The purpose of this study was to determine the ability of first-, third-, and fifth-graders to accurately self-report height and weight. Self-reported and measured values for height and weight were recorded for 487 students. The ability to self-report a reasonable value for height and weight improved with grade level, but children in all 3 grade levels significantly underreported their height and weight. Only fifth-graders accurately self-reported their weight; therefore, using self-reported height and weight to determine the prevalence of overweight and obesity for elementary school-aged children is not recommended. abstract_id: PUBMED:28905662 Self-Reported Height and Weight in Oceanian School-Going Adolescents and Factors Associated With Errors. We examined the accuracy of self-reported weight and height in New Caledonian school-going adolescents. Self-reported and measured height and weight data were collected from 665 adolescents of New Caledonia and were then compared. Multivariable logistic regressions identified the factors associated with inaccurate self-reports. Sensitivity and specificity of self-reported body mass index values to detect overweight or obesity were evaluated. Self-reported weight was significantly lower than measured weight (boys, -3.56 kg; girls, -3.13 kg). Similar results were found for height (boys, -2.51 cm; girls, -3.23 cm). Multiple regression analyses indicated that the difference between self-reported and measured height was significantly associated with ethnicity and pubertal status. Inaccurate self-reported weight was associated with socioeconomic status, place of residence, body-size perception and weight status. Screening accuracy of self-reported body mass index was low, particularly in the Melanesian subgroup. These findings should be considered when overweight is estimated in the Melanesian adolescent population at individual scale. abstract_id: PUBMED:17096824 Comparison of self-reported and measured height and weight in eighth-grade students. The goal of this study was to evaluate the relationships between self-reported and measured height, weight, and body mass index (BMI) in a sample of eighth-grade students. The study population consisted of eighth-grade students in eastern North Carolina who completed a cross-sectional survey, self-reported their height and weight, and had their height and weight measured (N = 416). Fifty-nine percent of the sample was male; 42% African American, 46% white, and 12% other races. Mean self-reported weight (62.9 kg) was significantly lower than mean measured weight (64.4 kg). Mean self-reported BMI (22.8 kg/m2) was significantly lower than mean measured BMI (23.3 kg/m2). Race and BMI category were significantly associated with reporting errors. Specifically, African American and white students were significantly less likely to under-report their height compared to other race students. African American students were more likely to underestimate their weight compared to other race students. BMI was more likely to be underestimated in African American and white students compared to other race students. Students who were at risk for overweight and those that were overweight were more likely to underestimate their weight and BMI than students who were normal weight. Approximately 17% of students were misclassified in BMI categories when self-reported data were used. The results indicate that eighth-grade students significantly underestimate their weight, but on average provide valid estimates of their height. Race and measured BMI category influence this discrepancy. School-based research that addresses the prevalence of obesity in adolescents should utilize measured height and weight when feasible. abstract_id: PUBMED:26060545 Validity of self-reported height and weight in elderly Poles. Background/objectives: In nutritional epidemiology, collecting self-reported respondent height and weight is a simpler procedure of data collection than taking measurements. The aim of this study was to compare self-reported and measured height and weight and to evaluate the possibility of using self-reported estimates in the assessment of nutritional status of elderly Poles aged 65 + years. Subjects/methods: The research was carried out in elderly Poles aged 65 + years. Respondents were chosen using a quota sampling. The total sample numbered 394 participants and the sub-sample involved 102 participants. Self-reported weight (non-corrected self-reported weight; non-cSrW) and height estimates (non-corrected self-reported height; non-cSrH) were collected. The measurements of weight (measured weight; mW) and height (measured height; mH) were taken. Using multiple regression equations, the corrected self-reported weight (cSrW) and height (cSrH) estimates were calculated. Results: Non-cSrH was higher than mH in men on average by 2.4 cm and in women on average by 2.3 cm. In comparison to mW, non-cSrW was higher in men on average by 0.7 kg, while in women no significant difference was found (mean difference of 0.4 kg). In comparison to mBMI, non-cSrBMI was lower on average by 0.6 kg/m(2) in men and 0.7 kg/m(2) in women. No differences were observed in overweight and obesity incidence when determined by mBMI (68% and 19%, respectively), non-cSrBMI (62% and 14%, respectively), cSrBMI (70% and 22%, respectively) and pcSrBMI (67% and 18%, respectively). Conclusions: Since the results showed that the estimated self-reported heights, weights and BMI were accurate, the assessment of overweight and obesity incidence was accurate as well. The use of self-reported height and weight in the nutritional status assessment of elderly Poles on a population level is therefore recommended. On an individual level, the use of regression equations is recommended to correct self-reported height, particularly in women. Answer: Secondary school students' self-reported measures of height and weight have been found to be inaccurate in several studies. Adolescents tend to significantly underestimate their weight and BMI, with the discrepancies being larger for those who are 'at risk for overweight/obese' (PUBMED:16601105). The accuracy of self-reported values is influenced by factors such as BMI, actual weight, pubertal status, and exercise habits, with older, postmenarcheal students more likely to overestimate their height and those who exercise more providing more accurate height estimations (PUBMED:15185275). Self-reported height and weight are highly correlated with measured values, but on average, students overreport their height and underreport their weight, leading to an underestimation of BMI and the prevalence of overweight (PUBMED:12667732). The validity of self-reported measures is also affected by self-perception of BMI status, with significant discrepancies between perceived and actual BMI, especially among those who inaccurately perceive their weight status (PUBMED:29379548). While self-reported anthropometrics are statistically correlated with actual measures, the extent of discrepancy varies, with increased BMI leading to increased underestimation of weight in both sexes (PUBMED:30314261). At the school level, self-reported data significantly underestimate the prevalence of students with obesity and severe obesity, particularly failing to identify highest-risk students (PUBMED:37451556). In younger populations, such as children aged 6 to 11 years, self-reported height and weight are even less reliable, with significant underreporting observed across all grade levels (PUBMED:22742593). Similarly, in Oceanian school-going adolescents, self-reported weight and height were significantly lower than measured values, with multiple factors influencing the accuracy of self-reports (PUBMED:28905662). In summary, secondary school students' self-reported measures of height and weight are often unreliable and tend to underestimate their actual weight and BMI. Therefore, when accurate information is required, direct measurements are recommended, particularly for adolescents who appear to differ from population averages for height and weight or are at risk for overweight/obesity.
Instruction: The high yield of 1-year colonoscopy after resection: is it the handoff? Abstracts: abstract_id: PUBMED:19688387 The high yield of 1-year colonoscopy after resection: is it the handoff? Background: This study aimed to determine the diagnostic yield of colonoscopy 1 year after colorectal cancer (CRC) resection based on whether the index colonoscopy was performed by the operating surgeon. Methods: All patients undergoing surgery for colorectal cancer by two colon and rectal surgeons at a university hospital from 1991 to 2005 were identified from the tumor registry. Those patients with a complete preoperative colonoscopy by any physician and a 1-year follow-up examination by the operating surgeon were selected for the study population. Family history of colorectal cancer, tumor location, endoscopist, presence of synchronous neoplasms, and findings of 1-year colonoscopy were recorded. Fisher's exact test was used to compare the probability of finding any adenoma, advanced adenoma, or invasive cancer based on the index endoscopist. Results: Of the 719 patients who underwent resection during the study period, 432 met the inclusion criteria. The index colonoscopy for 117 of these patients (27.1%) was performed by one of the two surgeons. Overall, 10 patients (2.3%) had a "new" cancer diagnosed at 1 year, and 1 patient (0.2%) had a local recurrence. Patients whose index colonoscopy was performed by their operating surgeon appeared less likely to have an advanced lesion found at 1 year (5.1% vs 11.4%; p = 0.06). The index colonoscopy for 9 of the 10 of cancers found at 1 year was not performed by the operating surgeon. Conclusions: Colonoscopy 1 year after CRC resection is clearly justified. An index colonoscopy by the operating surgeon eliminates a "handoff" and may diminish the incidence of high-risk lesions at 1 year. abstract_id: PUBMED:30802606 Yield of Surveillance Colonoscopies 1 Year After Curative Surgical Colorectal Cancer Resections. Background & Aims: Endoscopic surveillance after curative colorectal cancer (CRC) resection is routine. However, there is controversy whether the 1-year interval between preoperative and postoperative colonoscopy is justified owing to improved colonoscopy standards. We aimed to assess the yield of surveillance colonoscopies 1 year after CRC surgery. Methods: We performed a retrospective cohort study of 572 patients (54.9% male; mean age, 66.2 ± 9.9 y), who underwent curative surgical resection of a first CRC from June 2013 through April 2016 in the Northwest region of The Netherlands. Patients were included if a complete clearing colonoscopy was performed before surgery and the interval between the preoperative and postoperative colonoscopy was 12 months (range, 6-20 mo), conforming to Dutch guidelines. The primary outcome of the study was the yield of CRC at the surveillance colonoscopy performed 1 year after curative resection. A secondary outcome was the yield of advanced neoplasia. Results: After a mean surveillance interval of 13.7 months (±2.8 mo), 10 of 572 patients (1.7%; 95% CI, 0.7%-2.8%) received a diagnosis of CRC. Of these, 5 CRCs were apparently metachronous cancers (3 were stage III or IV) and 5 were recurrences at the anastomosis (1 was stage IV). In 11.4% of patients (95% CI, 8.9%-13.8%), advanced neoplasia was detected at the 1-year follow-up colonoscopy. Synchronous advanced neoplasia at baseline colonoscopy was a risk factor for detection of advanced neoplasia at the follow-up colonoscopy (odds ratio, 2.2; 95% CI, 1.3-3.8; P ≤ .01). Conclusions: Despite high colonoscopy quality, the yield of CRC at surveillance colonoscopy 1 year after CRC resection was 1.7%. These were metachronous CRCs and recurrences, often of advanced stage. The high yield justifies the recommendation of a 1-year surveillance interval after surgical CRC resection. abstract_id: PUBMED:26048253 Yield of second surveillance colonoscopy to predict adenomas with high-risk characteristics. Background And Aims: The yield of surveillance colonoscopies for patients with a history of polyps is well established for first surveillance, but limited for second surveillance. The aim of this study was to evaluate the proportion of high-risk adenomas at second surveillance colonoscopy based on findings of previous colonoscopies. Methods: This retrospective cohort study was conducted in a tertiary hospital and patients who had undergone three colonoscopies were included. Based on the findings at index colonoscopy, patients were categorized into three groups: high-risk adenoma (n=252), low-risk adenoma (n=158) or no-adenoma (n=318). Findings of subsequent high-risk adenoma, low-risk adenoma and no adenoma at surveillance colonoscopies were documented in each group. Results: Among patients with high-risk adenoma at index and first surveillance colonoscopies, significantly higher rates of high-risk findings were found at second surveillance, compared with patients who had low-risk or no-adenoma at index colonoscopy and high-risk adenoma at first surveillance colonoscopy (58%, 33% and 10%, respectively, p&lt;0.001). Conclusions: Both index colonoscopy and first surveillance high-risk adenoma have an impact on incidence high-risk findings at second surveillance colonoscopy and these subjects need close surveillance. abstract_id: PUBMED:18938663 Diagnostic yield of colonoscopy based on symptoms. Objective: There are only a few data on the diagnostic yield of colonoscopy in different symptoms. The aim of this study was to assess the outcome of colonoscopy in patients with various gastrointestinal symptoms and to estimate the relation between the findings and the presenting symptoms. Material And Methods: 1121 consecutive colonoscopies were registered during 1 year. Asymptomatic subjects and patients with known inflammatory bowel disease (IBD) were excluded, leaving 767 eligible for the study. Symptoms, findings and clinical judgement about their relation were recorded. Results: In patients with bleeding symptoms (n=405), serious colonic pathology--cancers and adenomas &gt;1 cm, IBD and angiodysplasia--was found in 54 (13.3%), 83 (20.5%) and 20 (4.9%) patients, respectively; 162 (40%) patients had findings that could be related to the symptom. In 173 subjects with non-bloody diarrhoea, the diagnostic yield was 31.2%, i.e. mostly IBD and microscopic colitis. In 189 subjects with other gastrointestinal symptoms, the diagnostic yield was 13.2%. Serious colonic pathology was found in 8 of 362 (2.2%) subjects examined because of non-bleeding symptoms. Conclusion: The diagnostic yield of colonoscopy is high in patients with bleeding symptoms or diarrhoea, while the prevalence of significant findings is equal to a screening population in patients with other symptoms. abstract_id: PUBMED:31825933 Diagnostic Yield Variation with Colonoscopy among Pediatric Endoscopists. Background: The primary aim of our study was to determine provider variation in diagnostic yield in a pediatric endoscopy center. Secondary aims were to examine ileal intubation rates as well as procedural complications at the provider level. Methods: A retrospective review of sequential pediatric patients who underwent a colonoscopy, completed by June 2018, determined the rates of endoscopically abnormal (EA) and isolated histologically abnormal (IHA) colonoscopies; the overall diagnostic yield was the combination of EA and IHA. Results: In total, 374 charts were reviewed. This study found high variability in diagnostic yield among the 16 clinicians ranging from as low as 22% to as high as 86% (p = 0.11) with an overall diagnostic yield of 48% for colonoscopy; excluding follow-up colonoscopies, the diagnostic yield was 42%. Abnormal calprotectin and abnormal blood tests were associated with higher diagnostic yields of 83 and 65%, respectively, compared with symptoms such as abdominal pain, diarrhea, and rectal bleeding which had yields of 39, 43, and 61%. Ileal intubation rates averaged 90% (range -63-100%, p = 0.06). In patients with a normal colon, there were 21 (6%) patients with an EA ileum and an additional 16 (4%) with an IHA ileum. Prep quality was rated excellent, good, or average in 97%. In addition, there was a low rate of serious complications (1 of 374). Conclusions: This study highlights the individual variability in diagnostic yield and ileal intubation rates in a pediatric gastroenterology practice. Goals for pediatric endoscopy could include ileal intubation rates of &gt;90% and provider diagnostic yields of &gt;40%. abstract_id: PUBMED:33187921 The diagnostic yield of colonoscopy in hospitalized patients. An observational multicenter prospective study. Background: Colonoscopy demands a considerable amount of resources, and little is known about its diagnostic yield among inpatients. Aims: To assess indications, diagnostic yield and findings of colonoscopy for inpatients, and to identify risk factors for relevant findings and cancer. Methods: Multicentre, prospective, observational study including 12 hospitals. Consecutive adult inpatients undergoing colonoscopy were evaluated from February through November 2019. Results: 1,302 inpatients underwent colonoscopy. Diagnostic yield for relevant findings and cancer was 586 (45%) and 112 (8.6%), respectively. Adequate colon cleansing was achieved in 896 (68.8%) patients. Split-dose/same-day regimen was adopted in 847 (65%) patients. Factors associated to relevant findings were age ≥70 years (RR 1.32), male gender (RR 1.11), blood loss (RR 1.22) and adequate cleansing (RR 1.63). Age ≥70 years (RR 2.08), no previous colonoscopy (RR 2.69) and split-dose/same-day regimen (RR 1.59) significantly increased cancer detection. Implementing adequate cleansing and split-dose/same-day regimen in all patients would increase the diagnostic yield for any relevant findings and cancer from 43% to 70% and from 6% to 10%, respectively. Conclusion: Relevant colorectal diseases and cancer were frequent among inpatients. Factors associated with detection of relevant findings were identified. Adequate colon cleansing and split-dose/same-day regimen significantly increased colonoscopy diagnostic yield. abstract_id: PUBMED:7872270 Colonoscopy: a review of its yield for cancers and adenomas by indication. Colonoscopy for bleeding indications (positive fecal occult blood test, emergent or nonemergent rectal bleeding, melena with a negative upper endoscopy and iron deficiency anemia) has a substantial yield for cancers (1 per 9 to 13 colonoscopies), although slide rehydration of fecal occult blood tests decreases the yield (1 per 45 colonoscopies). Prospective studies indicate that nonbleeding colonic symptoms have a substantially lower yield for cancer than bleeding symptoms (1 per 109 colonoscopies). Patients with indications for screening colonoscopy with a relatively high yield of cancer are those with Lynch syndrome (1 per 39 colonoscopies) and males more than 60 yr old (1 per 64 colonoscopies). Perioperative colonoscopy in persons undergoing colorectal cancer resection has a high yield for synchronous cancer (2-3%). An initial examination in persons with long-standing ulcerative colitis has a high yield for cancer (12%). Surveillance colonoscopy after cancer resection has an intermediate yield for anastomotic cancer (1 per 74 procedures) and metachronous cancers (1 per 82 colonoscopies), although this number may overestimate the yield of metachronous cancer. Postpolypectomy surveillance and ulcerative colitis surveillance colonoscopy have relatively low yields for cancer (1 per 317 and 360 colonoscopies, respectively). However, postpolypectomy surveillance colonoscopy, in combination with initial clearing colonoscopy, has been proven to be almost entirely effective in preventing colorectal cancer death. Further, cancer yields for postpolypectomy surveillance should improve with implementation of new surveillance guidelines, with little or no impact on mortality. The effectiveness of ulcerative colitis surveillance is less certain. Referral of patients with low-grade dysplasia for colectomy would improve the value and effectiveness of surveillance colonoscopy in ulcerative colitis. Adenoma yields at colonoscopy are relatively independent of indication, as evidenced by the high yield of adenomas in screening colonoscopy studies. Demographic factors, including increasing age and male gender, are important predictors of adenomas at initial colonoscopy. Age, male gender, and multiple and large adenomas at initial examination are predictors of adenomas at subsequent postpolypectomy surveillance. Persons with ureterosigmoidostomies can be screened by interval flexible sigmoidoscopy and do not require screening colonoscopy. A history of breast cancer does not predict an improved yield of colonoscopic screening compared to average-risk persons. abstract_id: PUBMED:12362097 The yield of colonoscopy in average-risk patients with non-specific colonic symptoms. Objectives: The need for full colonoscopies in average-risk patients with non-specific colonic symptoms is controversial. We aimed to evaluate: (1) the yield of full colonoscopy; (2) the prevalence of proximal neoplasia in these patients; (3) the yield if any of doing full colonoscopies to diagnose proximal lesions in patients in whom the distal colon was clear; (4) the significance of this yield with respect to age. Design: This is a retrospective analysis to assess the value of open access colonoscopy. Patients And Methods: All patients who underwent a colonoscopy in our Endoscopy Unit during January 1996 to December 1999 were assessed (n = 3357). Results: We analysed 945 patients with average risk and non-specific colonic symptoms (significant risk factors excluded). The overall yield of adenomas was 5.8%. The yield of distal adenomas in patients &gt; or= 50 years of age was 8.2% (37 out of 450) versus 0.2% in the 50 years group (one out of 495; = 0.0001). The proximal adenoma yield in &gt; or= 50 year olds was 3.8% (17 out of 495) versus 0.2% in &lt; 50 year olds (one out of 495) (P = 0.0001). Conclusions: In a cohort of average-risk patients with non-specific colonic symptoms attending an "open access" colonoscopy clinic, the yield for proximal adenomas is small in the &lt; 50 years group. In patients aged &lt; 50 years, distal colonic examination is all that is required, whereas a full colonoscopy may be justified in patients &gt; or = 50 years old. abstract_id: PUBMED:23798797 Indications and findings at colonoscopy in Ilorin, Nigeria. Background: Colonoscopy is a safe and effective means of visual inspection of the large bowel from the distal rectum to the caecum. It may be carried out for diagnostic and or therapeutic reasons. There is a paucity of data on this procedure in Nigeria. We, therefore, determined the indications, findings, and diagnostic yield in Nigerians at colonoscopy. Materials And Methods: This was a hospital-based cross-sectional study carried out at the Endoscopy unit of Crescent hospital, Ilorin from January 2010 to May, 2012. The endoscopy register was reviewed, and the biodata, indications and colonoscopic findings were recorded on a pro forma. Results: A total of 103 patients had colonoscopy. Seventy (68.0%) were males while 33 (32.0%) were females. The indications for colonoscopy were rectal bleeding 41 (39.8%), suspected colon cancer 32 (31.1%), chronic constipation and chronic diarrhoea nine each (8.7%), abdominal/anal pain five (4.9%), suspected anorectal cancer and enterocutaneous fistula two each (1.9%), faecal incontinence, occult gastrointestinal bleeding, post-colostomy for Hirschsprung disease one each (1.0%). Endoscopic findings were normal findings 21 (20.4%), diverticulosis 17 (16.5%), polyps 16 (15.5%), haemorrhoids 16 (15.5%), anorectal cancer 13 (12.6%), angiodysplasia 12 (11.7%), colon cancer eight (7.8%), colitis 7 (6.8%), anorectal ulcer 4 (3.9%), anal warts two (1.9%), anal fissure, caecal tumour, faecal impaction and proctitis one each (1.0%). The diagnostic yield was 79.6%. Conclusions: The commonest indication for colonoscopy was rectal bleeding, while the most frequent pathology was diverticulosis. The diagnostic yield was high. abstract_id: PUBMED:11221158 The yield for colorectal cancer and adenoma by indication at colonoscopy. Colonoscopy is an integral part of the clinician armamentarium in the diagnosis of colorectal cancer and its precursor, the adenoma. Polypoid lesions when identified can be excised at colonoscopy and in turn reduce the risk of colorectal cancer. We prospectively evaluated the yield of colorectal cancer and adenomatous polyps by indication for colonoscopy over a one-year period. A total of 375 colonoscopies were carried out. The more common indications of colonoscopy were rectal bleeding, abdominal pain, surveillance of colorectal cancer and altered bowel habit. The highest yield for cancer was for rectal bleeding with 12.5% while surveillance of patients with a history of polyps yielded the highest percentage of new polyps. We conclude that rectal bleeding as an indication for colonoscopy yielded the highest number of cancers. Answer: The high yield of 1-year colonoscopy after colorectal cancer (CRC) resection may be influenced by the "handoff" between the endoscopist who performs the index colonoscopy and the operating surgeon. A study aimed to determine the diagnostic yield of colonoscopy 1 year after CRC resection based on whether the index colonoscopy was performed by the operating surgeon. The results indicated that patients whose index colonoscopy was performed by their operating surgeon appeared less likely to have an advanced lesion found at 1 year (5.1% vs 11.4%; p = 0.06). Moreover, the index colonoscopy for 9 of the 10 cancers found at 1 year was not performed by the operating surgeon, suggesting that an index colonoscopy by the operating surgeon may diminish the incidence of high-risk lesions at 1 year (PUBMED:19688387). This finding is consistent with the general justification for performing colonoscopy 1 year after CRC resection, as it has been shown to have a high diagnostic yield. Another study reported that despite high colonoscopy quality, the yield of CRC at surveillance colonoscopy 1 year after CRC resection was 1.7%, including both metachronous CRCs and recurrences, often of advanced stage. This high yield justifies the recommendation of a 1-year surveillance interval after surgical CRC resection (PUBMED:30802606). In conclusion, the high yield of 1-year colonoscopy after CRC resection may be influenced by the continuity of care when the operating surgeon performs both the index and follow-up colonoscopies, potentially reducing the incidence of high-risk lesions at 1 year. This supports the practice of having the operating surgeon involved in both preoperative and postoperative colonoscopies to ensure thorough surveillance and possibly better outcomes for CRC patients.
Instruction: Are there gender differences in self-reported smoking practices? Abstracts: abstract_id: PUBMED:24917440 Validity of the self-reported smoking status of schizophrenia patients, taking gender-related differences into consideration. Abstract Objective. It is unclear whether self-reported smoking status reflects the true smoking status in schizophrenia patients. This study aims to evaluate the consistency in smoking status in schizophrenia patients, as determined using self-reported questionnaires and objective measurements, while considering gender-related differences. Methods. The study involved 158 Japanese schizophrenia patients (79 men and 79 women), who were diagnosed according to DSM-IV-TR criteria. The patients answered questions about their smoking status and daily cigarette consumption. The carbon monoxide (CO) level in the expired air was measured to confirm the self-reported answers. We descriptively analysed the data and assessed the proportion of results showing consistency between the self-reported smoking status and measured CO levels. Results. The self-reported smoking status was consistent with the objective measurement results in both genders. Among the various cut-off CO levels considered, 7 ppm was found to be the optimal level in both men and women. Conclusions. Our study showed that self-reporting is a reliable method for evaluating the smoking status of schizophrenia patients, regardless of gender. However, for precise assessment on the basis of CO levels in expired air, lower cut-off points may be better than the recommended ones. abstract_id: PUBMED:31308965 Self-reported risky sexual practices among adolescents and young adults in Botswana. Background: Adolescents and young adults account for more than one-third of incident Human Immunodeficiency Virus (HIV) infections globally. Understanding sexual practices of this high-risk group is critical in designing HIV targeted prevention programming. Objectives: To describe self-reported risky sexual practices of adolescents and young adults aged 16-24 years from 30 Botswana communities. Methods: Cross-sectional, self-reported age at sexual debut; number of sexual partners; condom and alcohol use during sex; intergenerational sex; and transactional sex data were collected. Modified Poisson estimating equations were used to obtain univariate and multivariate-adjusted prevalence ratios (PR) and 95% confidence intervals (CI) comparing engagement in different sexual practices according to gender, accounting for the clustered design of the study. Results: Among the 3380 participants, 2311 reported being sexually active with more females reporting being sexually active compared to males (65% vs. 35%, respectively; p &lt; 0.0001). In univariate analyses, female participants were more likely to report inconsistent condom use (PR 1.61; 95% CI 1.44-1.80), intergenerational sex (PR 9.00; 95% CI 5.84-13.88) and transactional sex (PR 3.46; 95% CI 2.07-5.77) than males, yet less likely to report engaging in sex before age 15 years (PR 0.59; 95% CI: 0.41-0.85), using alcohol around the time of intercourse (PR: 0.59; 95% CI 0.45-0.76) or having ≥ two partners in the last 12 months (PR 0.65; 95% CI 0.57-0.74). Conclusions: Self-reported risky sexual practices of adolescents and young adults in Botswana differed significantly between males and females. Gender-specific risky sexual practices highlight the importance of developing tailored HIV prevention programming. abstract_id: PUBMED:27274720 Gender differences in the association between self-reported stress and cigarette smoking in Korean adolescents. Background: The aim of this study was to examine the relationship between stress and smoking among Korean adolescents, as well as the influence of gender on this relationship. Methods: A cross-sectional study was conducted using data from 3930 adolescents aged 12-18 years, collected in the 2007-2012 Korea National Health and Nutrition Examination Surveys. Results: An increased level of self-reported stress was positively associated with increasing levels of smoking in both girls and boys (p for trend &lt; 0.001). After adjusting for age, the odds ratios of smoking among girls and boys reporting very high levels of stress were 15.99 (95 % confidence interval (CI), 4.17-61.30) and 2.34 (95 % CI, 1.07-5.11), respectively, compared with those who reported low levels of stress. Conclusions: This study found a statistically significant association between stress and smoking among Korean adolescents and this association was stronger in girls than boys. Further research is needed to understand more fully the link between stress and smoking in adolescents, with particular attention to sex differences. abstract_id: PUBMED:21499469 The Prevalence of Self-Reported Halitosis and Oral Hygiene Practices among Libyan Students and Office Workers. Aims: The aims of this study were to assess the prevalence of self-reported halitosis, oral hygiene practices and related diseases among Libyan students and employees. Methods: Six hundred self-administered structured questionnaires were used to investigate self-perception of halitosis and oral hygiene practices among a group of Libyan volunteers. Chi square test was used to detect significant differences between frequencies and to test correlation between self-perception of halitosis and measures of oral hygiene. Results: Forty three percent of the subjects were males and 57% were females. Forty four percent of the males and 54% of the females revealed self-perception malodour. Malodour was reported with the highest frequency (68%) during wake up time. Malodour was perceived by 31.7% of the females and 23.4% of the males during the hand-on-mouth test (p=0.04). Significantly more females (89.9%) than males (75.7%) practiced brushing (p&lt;0.001). Fifty one percent of the males and 49.6% of females had dental caries. Smoking was significantly (p&lt;0.001) more prevalent among males (17%) than among females (1%). Brushing was practiced by 85% of non-smokers and 68% of smokers (p=0.004). About 71% of the subjects who practiced brushing reported malodour during wake up time in comparison to subjects who did not practice brushing (p=0.041). Conclusions: The prevalence of self-perceived malodour among the Libyan volunteers in this study is within the range of other studies. There is a great demand to reduce the incidence of dental caries and periodontal diseases. abstract_id: PUBMED:37168887 Moroccan EFL public university instructors' perceptions and self-reported writing practices. In previous research, the majority of university students seemed to lack proficiency in many areas of writing (e.g., grammar, vocabulary, organization, and sentence construction). This illustrates the challenging nature of writing (as a skill) and the frequent focus on the product rather than the process of writing instruction. Previous research focused on primary and secondary education; only a few studies investigated and compared the perceptions and self-reported practices of English as a Foreign Language (EFL) university instructors in relation to the product- and process-based writing approaches. Therefore, this study aims to examine this issue by surveying Moroccan EFL writing instructors' responses and comparing the findings with those of previous empirical studies from diverse educational levels. The following research questions guided our study: 1) What are the instructors' perceptions of product- and process-based writing? 2) What are their self-reported practices regarding these writing approaches? 3) Is there any alignment between their perceptions and their self-reported practices of product- and process-based writing instruction? For this purpose, a questionnaire was developed to collect data from 51 instructors. Since the questionnaire items regarding perceptions and practices were identical and covered five subscales under the two overall approaches, comparisons could be performed. The findings showed that the instructors agreed on the value of the subscale of Engagement in the writing revision process more than the other subscales. They also indicated that the two subscales of Engagement in the writing revision process and Writing as a final product were their most frequently reported writing practices. Based on the comparison between their perceptions and self-reported practices, consistencies were found in all five subscales. This study not only validated the questionnaire on the two approaches to teaching EFL writing in Moroccan universities but also showed the extent to which instructors' perceptions and self-reported practices matched one another. abstract_id: PUBMED:34417664 Gender differences in screening self-reported psychotic symptoms in a first help-seeking population. Gender differences in the frequency and severity of psychotic symptoms have been widely reported. However, in the screening process for the detection of early psychosis, gender differences were largely overlooked in China. This study investigated gender differences in self-reported psychotic symptoms in a clinical population who initially visited a mental health service. In total, 1931 consecutive new patients were included in the current analysis, with a mean age of 25.3 years, including 852 (44.1%) men and 1079 (55.9%) women, of whom 388 (20.1%) had psychotic disorders and 1543 (79.9%) had non-psychotic disorders. Psychotic symptoms were assessed using the PRIME Screen-Revised (PS-R) questionnaire. The cohort was grouped according to gender, age (adolescents ≤ 21 years, adults &gt; 21 years), and clinical diagnosis. Within the full sample, gender differences in psychotic symptoms were not significant, except that females appeared to have more severe symptoms of disorganized communication than males. However, gender differences began to appear at subgroup levels, after stratification by age and diagnosis. Female adolescents reported more severe psychotic symptoms than male adolescents, especially in the symptom of perceptual abnormalities, which refer to hallucinations. Different patterns and predictors were found to significantly discriminate between psychotic and non-psychotic disorders among age and gender groups. Our study highlights gender differences in the severity, frequency, and pattern of self-reported psychotic symptoms when screening in a first help-seeking population. Therefore, gender differences should be considered during psychotic symptoms screening. abstract_id: PUBMED:35774026 Teachers' physiological and self-reported stress, teaching practices and students' learning outcomes in Grade 1. Background: Teachers' self-reported stress is related to the quality of teacher-student interactions and students' learning outcomes. However, it is unclear if teachers' physiological stress is related to child-centred teaching practices in the classroom and whether teaching practices mediate the link between teachers' stress and students' learning outcomes. Aims: We studied the effect of teachers' physiological stress and self-reported stress on their teaching practices and thereby on students' learning outcomes in math. Sample: A total of 53 classroom teachers and 866 Grade 1 students participated in the study. Methods: Salivary cortisol in the middle of the school day and cortisol slope from morning peak to evening were used as indicators of teachers' physiological stress, in addition to self-reported teaching-related stress. Teaching practices were observed with the ECCOM instrument. Students' math skills controlled for gender and previous skills were used as a measure of learning outcomes. Data were analysed with a two-level SEM. Results: Teachers' physiological stress did not have an effect on teaching practices or students' math skills. Teachers reporting less stress used relatively more child-centred teaching practices compared with teacher-directed ones. These practices had a marginal effect on classroom-level differences in the gain of students' math skills in Grade 1. There was neither a direct nor indirect effect from teachers' stress on students' math skills. Altogether, our model explained 77% of classroom-level variance in math skills. Conclusions: Teachers' self-reported stress has an effect on their teaching practices, which, in turn, have a marginal effect on students' learning outcomes. abstract_id: PUBMED:27475755 A global assessment of the gender gap in self-reported health with survey data from 59 countries. Background: While surveys in high-income countries show that women generally have poorer self-reported health than men, much less is known about gender differences in other regions of the world. Such data can be used to examine the determinants of sex differences. Methods: We analysed data on respondents 18 years and over from the World Health Surveys 2002-04 in 59 countries, which included multiple measures of self-reported health, eight domains of functioning and presumptive diagnoses of chronic conditions. The age-standardized female excess fraction was computed for all indicators and analysed for five regional groups of countries. Multivariate regression models were used to examine the association between country gaps in self-reported health between the sexes with societal and other background characteristics. Results: Women reported significantly poorer health than men on all self-reported health indicators. The excess fraction was 15 % for the health score based on the eight domains, 28 % for "poor" or "very poor" self-rated health on the single question, and 26 % for "severe" or "extreme" on a single question on limitations. The excess female reporting of poorer health occurred at all ages, but was smaller at ages 60 and over. The female excess was observed in all regions, and was smallest in the European high-income countries. Women more frequently reported problems in specific health domains, with the excess fraction ranging from 25 % for vision to 35 % for mobility, pain and sleep, and with considerable variation between regions. Angina, arthritis and depression had female excess fractions of 33, 32 and 42 % respectively. Higher female prevalence of the presumptive diagnoses was observed in all regional country groups. The main factors affecting the size of the gender gap in self-reported health were the female-male gaps in the prevalence of chronic conditions, especially arthritis and depression and gender characteristics of the society. Conclusions: Large female-male differences in self-reported health and functioning, equivalent to a decade of growing older, consistently occurred in all regions of the world, irrespective of differences in mortality levels or societal factors. The multi-country study suggests that a mix of biological factors and societal gender inequalities are major contributing factors to gender gap in self-reported measures of health. abstract_id: PUBMED:34923032 Parental feeding practices and parental involvement in child feeding in Denmark: Gender differences and predictors. Studies about fathers and feeding are scarce and little is known about predictors of parental involvement in child feeding and of paternal feeding practices. Therefore, this study aimed to examine possible differences between Danish mothers and fathers with regard to their feeding practices and involvement in feeding related tasks, and to assess possible parent-related predictors of parental practices and involvement. A total of 261 mothers and 321 fathers of pre-schoolers completed an online survey with items from validated questionnaires. Gender differences were observed; fathers reported using higher levels of coercive control practices, while mothers reported using higher levels of structure practices and autonomy support practices. Both mothers and fathers reported to be highly involved in feeding their child. Regressions showed that a higher concern for child weight and a higher motivation for child preference when buying food were linked to a higher use of coercive control practices while a higher motivation for health control when buying food, cooking confidence, feeding/general self-efficacy and perceived responsibility for feeding were linked to a higher use of structure and autonomy support practices. The results of this study provide valuable insight into maternal and paternal practices in Denmark and their determinants. abstract_id: PUBMED:26438665 Gender, Ethnicity, and Their Intersectionality in the Prediction of Smoking Outcome Expectancies in Regular Cigarette Smokers. The current study utilized the intersectionality framework to explore whether smoking outcome expectancies (i.e., cognitions about the anticipated effects of smoking) were predicted by gender and ethnicity, and the gender-by-ethnicity interaction. In a cross-sectional design, daily smokers from the general community (32.2% women; non-Hispanic African American [n = 175], non-Hispanic White [n = 109], or Hispanic [n = 26]) completed self-report measures on smoking expectancies and other co-factors. Results showed that women reported greater negative reinforcement (i.e., anticipated smoking-induced negative affect reduction) and weight control (i.e., anticipated smoking-induced appetite/weight suppression) expectancies than men. Hispanic (vs. African American or White) smokers endorsed greater negative reinforcement expectancies. A gender-by-ethnicity interaction was found for weight control expectancies, such that White women reported greater weight control expectancies than White men, but no gender differences among African American and Hispanic smokers were found. These findings suggest that gender, ethnicity, and their intersectionality should be considered in research on cognitive mechanisms that may contribute to tobacco-related health disparities. Answer: Yes, there are gender differences in self-reported smoking practices. For instance, in a study involving Korean adolescents, it was found that an increased level of self-reported stress was positively associated with increasing levels of smoking in both girls and boys, but the association was stronger in girls than boys (PUBMED:27274720). Additionally, in a study assessing the prevalence of self-reported halitosis and oral hygiene practices among Libyan students and office workers, it was reported that smoking was significantly more prevalent among males (17%) than among females (1%) (PUBMED:21499469). Furthermore, a study exploring the prediction of smoking outcome expectancies found that women reported greater negative reinforcement and weight control expectancies than men, indicating gender-specific cognitive anticipations related to smoking (PUBMED:26438665). These findings suggest that there are indeed gender differences in self-reported smoking practices and related behaviors.
Instruction: Neuraxial hematoma after epidural anesthesia. Is it possible to prevent or detect it? Abstracts: abstract_id: PUBMED:21474029 Neuraxial hematoma after epidural anesthesia. Is it possible to prevent or detect it? Report of two cases. Background And Objectives: Spinal hematomas are rare and they affect the central nervous system. They can cause permanent neurologic sequelae and death if they are not treated properly. Early diagnosis and treatment are fundamental for a good neurologic prognostic. The objective of this report was to emphasize for anesthesiologists the importance of early diagnosis and treatment of spinal hematomas, besides improving their prevention. Case Reports: Case 1: The patient underwent epidural lumbar anesthesia for femoropopliteal revascularization. He was being treated with acetylsalicylic acid, clopidogrel, and enoxaparin, which were discontinued before the surgery. The patient developed paraplegia in the immediate postoperative period. Neurosurgical decompression was performed after the diagnosis, but without recovery in the long run. Case 2: The patient underwent epidural lumbar anesthesia for right knee osteotomy, without intercurrences. The patient remained without neurological complaints until approximately 48 hours after the surgery when he developed urinary retention, pain in the right lower limb, paresthesias, and difficulty moving both feet. The MRI showed an epidural lumbar hematoma, and the patient underwent immediate surgical decompression. He showed complete neurological recovery after 10 months of rehabilitation. Conclusions: The clinical cases presented here showed different outcomes, indicating the importance of early diagnosis and treatment for a good evolution. Diagnosis by MRI with early decompression shortly after the development of the first clinical manifestations remains the standard treatment. Identification of patients at risk for neuraxial bleeding and change in anesthetic technique, as well as the establishment of postoperative neurologic evaluation protocols in patients undergoing neuraxial anesthesia can contribute for prevention of severe neurologic sequelae. abstract_id: PUBMED:27159067 Neuraxial Anesthesia in Parturients with Low Platelet Counts. The obstetric anesthesiologist must consider the risk of spinal-epidural hematoma in patients with thrombocytopenia when choosing to provide neuraxial anesthesia. There are little data exploring this complication in the parturient. In this single-center retrospective study of 20,244 obstetric patients, the incidence of peripartum thrombocytopenia (platelet count &lt;100,000/mm) was 1.8% (368 patients). Of these patients, 69% (256) received neuraxial anesthesia. No neuraxial hematoma occurred in any of our patients. The upper 95% confidence limit for spinal-epidural hematoma in patients who received neuraxial anesthesia with a platelet count of &lt;100,000/mm was 1.2%. abstract_id: PUBMED:32269138 Neuraxial anesthesia is associated with improved survival after total joint arthroplasty depending on frailty: a cohort study. Background: Frailty increases risk for complications after total joint arthroplasty (TJA). Whether this association is influenced by anesthetic administered is unknown. We hypothesized that use of neuraxial (spinal or epidural) anesthesia is associated with better outcomes compared with general anesthesia, and that the effect of anesthesia type on outcomes differs by frailty status. Methods: This single-institution cohort study included all patients (≥50 years) from January 2005 through December 2016 undergoing unilateral, primary and revision TJA. Using multivariable Cox regression, we assessed relationships between anesthesia type, a preoperative frailty deficit index (FI) categorized as non-frail (FI &lt;0.11), vulnerable (FI 0.11 to 0.20), and frail (FI &gt;0.20), and complications (mortality, infection, wound complications/hematoma, reoperation, dislocation, and periprosthetic fracture) within 1 year after surgery. Interactions between anesthesia type and frailty were tested, and stratified models were presented when an interaction (p&lt;0.1) was observed. Results: Among 18 458 patients undergoing TJA, more patients were classified as frail (21.5%) and vulnerable (36.2%) than non-frail (42.3%). Anesthesia type was not associated with complications after adjusting for age, joint, and revision type. However, in analyzes stratified by frailty, vulnerable patients under neuraxial block had less mortality (HR=0.49; 95% CI 0.27 to 0.89) and wound complications/hematoma (HR=0.71; 95% CI 0.55 to 0.90), whereas no difference in risk by anesthesia type was observed among patients found non-frail or frail. Conclusions: Neuraxial anesthesia use among vulnerable patients was associated with improved survival and less wound complications. Calculating preoperative frailty prior to TJA informs perioperative risk and enhances shared-decision making for selection of anesthesia type. abstract_id: PUBMED:18200993 Incidence in Catalonia of spinal cord compression due to spinal hematoma secondary to neuraxial anesthesia Objective: To study the incidence in Catalonia of spinal cord compression due to spinal hematoma secondary to neuraxial anesthesia. Methods: The incidence of hematoma was based on published cases (MEDLINE, the Spanish Medical Index [Indice Medico Español], and Google) or cases reported at medical meetings or conferences by anesthesiologists from Catalan hospitals from 1996 to 2005, inclusive. The annual number of neuraxial anesthesias (spinal, epidural, and combined) was estimated based on the ANESCAT 2003 survey and the total number of anesthesias was calculated using the ANESCAT 2003 survey in conjunction with the surgical reports of Catalan hospitals. Results: A total of 11 cases of spinal hematoma after neuraxial anesthesia (7 after spinal anesthesia and 4 after epidural anesthesia) were reported or published from 1996 to 2005, inclusive. A total of 194 154 neuraxial anesthesias were performed in 2003 (126 560 spinal anesthesias and 5926 combined spinal-epidural anesthesias) and it was estimated that somewhat over 1 700 000 neuraxial anesthesias were performed over the 10 years reviewed. The incidence (95% confidence interval [CI]) of hematoma was 0.6 (95% CI, 0.3-1.2) per 100 000 neuraxial anesthesias, 0.6 (95% CI, 0.3-1.3) per 100 000 spinal anesthesias, and 0.7 (95% CI, 0.2-1.9) per 100 000 epidural anesthesias. Conclusions: The incidence of spinal hematoma after neuraxial anesthesia is slightly more than 1 per 150 000 anesthesias-a similar finding to that of other epidemiological studies. The incidence is slightly higher in epidural anesthesia. These data imply a risk of approximately 1 spinal hematoma per year in Catalonia. abstract_id: PUBMED:12463580 Use of neuraxial anesthesia with selective factor Xa inhibitors. It has become increasingly well documented that, compared with general anesthesia, neuraxial anesthesia reduces the risk of venous thromboembolism after joint replacement surgery. The concurrent use of anticoagulants with neuraxial anesthesia, however, has raised some safety concerns. Recent large-scale, prospective, randomized experience indicates that the selective factor Xa inhibitor fondaparinux does not increase the risk of epidural hematoma when used in combination with neuraxial anesthesia. abstract_id: PUBMED:36609075 Systemic Heparinization After Neuraxial Anesthesia in Vascular Surgery: A Retrospective Analysis. Objectives: The American Society of Regional Anesthesia and Pain Medicine's guidelines recommend a 1-hour interval after neuraxial anesthesia (NA) before systemic heparinization to mitigate the risk of spinal hematoma (SH). The study authors aimed to characterize the time interval between NA and systemic heparinization in vascular surgery patients (primary outcome). The secondary outcomes included the historic incidence of SH, and risk estimation of the SH formation based on available data. Heparin dose, length of surgery, difficulty and/or the number of NA attempts, and patient demographics were recorded. Design: A retrospective analysis between April 2012 and April 2022. Setting: A single (academic) center. Participants: Vascular surgery patients. Interventions: Intravenous heparin administration. Measurements And Main Results: All (N = 311) vascular patients were reviewed, of whom 127 (5 femoral-femoral bypass, 67 femoral-popliteal bypass, and 55 endovascular aneurysm repairs [EVAR]) received NA and were included in the final analysis. Patients receiving general anesthesia alone (N = 184) were excluded. Neuraxial anesthesia included spinal (N = 119), epidural (N = 4), or combined spinal-epidural (N = 4) blocks. The average time between NA and heparin administration was 42.8 ± 22.1 minutes, with 83.7% of patients receiving heparin within 1 hour of NA. The time between NA and heparin administration was 40.4 ± 22.3, 50.1 ± 23.4, and 31.3 ± 12.5 minutes for femoral-femoral bypass, femoral-popliteal bypass, and EVAR, respectively. Heparin was administered after 1 hour of NA in 20% of femoral-femoral bypass, 27% of femoral-popliteal bypass, and 3.9% of EVAR patients. No SHs were reported during the study period. Conclusions: The vast majority of vascular surgery patients at the authors' center received heparin within 1 hour of NA. Further studies are required to assess if their findings are consistent in other vascular surgery settings and/or centers. abstract_id: PUBMED:27662063 Delayed Neuraxial Hematoma in Parturient With Fontan Circulation Following Neuraxial Anesthesia for Cesarean Section. Objective: Neuraxial hematoma is a rare complication of spinal or epidural anesthesia. However, variable coagulation factor defects are relatively common in patients with Fontan circulation, and may predispose such patients to either increased risk of thrombosis or coagulopathy. These defects may indirectly increase their risk of neuraxial hematoma. Case Report: We report a case of delayed neuraxial hematoma after the start of full-dose anticoagulation for pulmonary embolus on a postpartum patient with Fontan physiology who had continuous spinal anesthesia for cesarean delivery 4 days earlier. Conclusions: Parturients with single ventricle physiology present numerous challenges to balance, including pregnancy-related physiologic alterations in blood volume, cardiac output, systemic vascular resistance, oxygen consumption, and coagulation. Although neuraxial anesthesia is common in this population, it is not without risks. We report the circumstances surrounding a parturient with single ventricle physiology who experienced neuraxial hematoma 4 days after continuous spinal anesthesia despite adherence to accepted guidelines. Eighteen months after undergoing a cesarean section, she had a full recovery and returned to her baseline medical status. abstract_id: PUBMED:29502991 Neuraxial block for delivery among women with low platelet counts: a retrospective analysis. Background: Laboring women with low platelet counts may be denied neuraxial block due to concerns about causing a spinal-epidural hematoma. Aims: To assess the anesthetic management, complications and outcome variables of women with low platelet counts, and to expand the existing data regarding the safety of neuraxial blocks in this patient population. Methods: This is a retrospective analysis of anesthetic and obstetric data from women with platelet counts &lt;100 000/μL, who were admitted to a single referral center during 2011-2014. The rate of neuraxial block and related complications were examined in relation to the platelet count and the results combined with published data to assess the risk of spinal-epidural hematoma. Results: During the study period, 471 of 45 462 women (1%) had a low platelet count (&lt;100 000/μL). The rate of neuraxial block was significantly higher in women with platelet counts of 70-99 000/μL (280/394, 71.1%) when compared to women with platelet counts of 50-69 000/μL and 0 to 49 000/μL (23/59, 38.9% and 5/18, 27.8%, respectively, P &lt;0.0001). Women in the lower platelet count ranges had a higher risk of cesarean delivery under general anesthesia and longer hospital stay. No neuraxial hematoma were reported. Conclusions: This study contributes a substantial series of neuraxial blocks among women with low platelet counts. The findings support that the risk of hematoma is low if the platelet count is &lt;100 000/μL, specifically between 70 and 99 000/μL. Risk assessment in the lower count ranges requires a much larger sample. abstract_id: PUBMED:24121606 An analysis of the safety of epidural and spinal neuraxial anesthesia in more than 100,000 consecutive major lower extremity joint replacements. Background And Objectives: A feared complication of spinal or epidural anesthesia is the development of epidural or spinal hematoma with subsequent neural element compression. Most available data are derived from the obstetric literature. Little is known about the frequency of hematoma occurrence among patients undergoing orthopedic joint arthroplasty, who are usually elderly and experience significant comorbidities. We sought to study the incidence of clinically significant lesions after spinal and epidural anesthesia and further describe their nature. Methods: We retrospectively analyzed a database of all patients who underwent total hip or total knee arthroplasty under neuraxial anesthesia at our institution between January 2000 and October 2010. Patients with radiographically confirmed epidural lesions were identified and further analyzed. Results: A total of 100,027 total knee and hip replacements under neuraxial anesthesia were performed at our institution. Ninety-seven patients underwent imaging studies to evaluate perioperative neurologic deficits (0.96/1000; 95% confidence interval, 0.77-1.16/1000). Eight patients were identified with findings of an epidural blood or gas collection (0.07/1000; 95% confidence interval, 0.02-0.13/1000). No patients receiving only spinal anesthesia were affected. All patients diagnosed with hematoma took at least 1 drug that potentially impaired coagulation (5 nonsteroidal anti-inflammatory agents, 1 a tricyclic antidepressant, and 1 an antiplatelet drug). No patient incurred persistent nerve damage. Conclusions: The incidence of epidural/spinal complications found in this consecutive case series is relatively low but higher than previously reported in the nonobstetric population. Further research using large data sets could quantify the significance of some of the potentially contributing factors observed in this study. abstract_id: PUBMED:18280874 Anesthesia complications in the birthplace: is the neuraxial block always to blame? This article highlights the common and some of the very serious complications that may occur following neuraxial analgesia for labor and delivery, including headache, backache, infection, hypotension, and hematoma. Total spinal and failed block also are discussed, as are complications unique to epidural anesthesia, such as the intravascular injection of large volumes of local anesthetic (causing seizure or cardiac arrest) and accidental dural puncture. Answer: Neuraxial hematoma after epidural anesthesia is a rare but serious complication that can lead to permanent neurologic sequelae and even death if not properly treated. Prevention and early detection are crucial for a good neurological prognosis (PUBMED:21474029). To prevent neuraxial hematoma, it is important to identify patients at risk for neuraxial bleeding and to modify the anesthetic technique accordingly. For example, discontinuing anticoagulant medications before surgery, as was done in the case of a patient who developed paraplegia after epidural anesthesia for femoropopliteal revascularization, is a preventive measure (PUBMED:21474029). Additionally, establishing postoperative neurologic evaluation protocols for patients who have undergone neuraxial anesthesia can contribute to the prevention of severe neurologic sequelae (PUBMED:21474029). Early diagnosis is key to managing neuraxial hematoma effectively. Magnetic Resonance Imaging (MRI) is the standard diagnostic tool, and early decompression shortly after the development of the first clinical manifestations remains the standard treatment (PUBMED:21474029). For instance, a patient who developed urinary retention and paresthesias approximately 48 hours after epidural anesthesia for knee osteotomy was diagnosed with an epidural lumbar hematoma via MRI and underwent immediate surgical decompression, leading to complete neurological recovery after 10 months of rehabilitation (PUBMED:21474029). The incidence of neuraxial hematoma is low, with studies reporting varying rates. For example, in Catalonia, the incidence was slightly more than 1 per 150,000 anesthesias, with a slightly higher incidence in epidural anesthesia (PUBMED:18200993). In obstetric patients with low platelet counts, no neuraxial hematoma occurred in a study of 20,244 patients, with the upper 95% confidence limit for spinal-epidural hematoma being 1.2% in patients who received neuraxial anesthesia with a platelet count of <100,000/mm (PUBMED:27159067). In conclusion, while it is not possible to completely eliminate the risk of neuraxial hematoma after epidural anesthesia, careful patient selection, modification of anesthetic techniques, adherence to guidelines regarding anticoagulation, and vigilant postoperative monitoring can significantly reduce the risk and aid in early detection, thereby improving patient outcomes (PUBMED:21474029; PUBMED:27159067; PUBMED:18200993).
Instruction: Does microalbuminuria predict diabetic nephropathy? Abstracts: abstract_id: PUBMED:14594696 How to predict nephropathy in type 1 diabetic patients. Objective: Do exaggerated increases in blood pressure and albuminuria during exercise occur earlier than microalbuminuria and which type of test is most predictive of diabetic nephropathy? Material And Methods: A total of 33 insulin-dependent normoalbuminuric men (mean duration of diabetes 14 years; mean age 28 years) and 34 age-matched apparently healthy control subjects were studied. Urinary albumin excretion, heart rate and blood pressure were measured during fixed workload (150 W) and fixed heart rate (155 beats/min) tests. Mean follow-up time was 13.1 +/- 3.2 years. A urinary albumin level in early-morning urine persistently &gt;30 mg/l was considered a sign of diabetic nephropathy. Results: Sixteen patients reached the endpoints of the study. Eleven had developed microalbuminuria and five macroalbuminuria (persistent levels of urinary albumin &gt;300 mg/l). Of the latter patients, two needed dialysis. Systolic blood pressure and albumin excretion during the fixed heart rate test were higher in diabetic patients who developed signs of nephropathy than in control subjects and diabetic subjects with persistent healthy kidneys. Such differences were not found in the fixed workload test. There were no differences in glycated haemoglobin, blood pressure levels or albumin excretion at baseline between the two diabetic groups. Conclusions: To predict the development of diabetic nephropathy it seems important to choose a fixed heart rate test. High levels of systolic blood pressure in such a test were associated with the development of micro- and macroalbuminuria. abstract_id: PUBMED:27138869 Type 2 diabetic patients with diabetic retinopathy and concomitant microalbuminuria showed typical diabetic glomerulosclerosis and progressive renal dysfunction. Aims: To determine whether or not diabetic retinopathy (DR) in type 2 diabetic patients can predict the renal functional decline. Methods: We examined 32 normo-microalbuminuric type 2 diabetic patients by renal biopsy (23 men, age 49±10yrs) divided into two groups according to the presence (n=19) or absence (n=13) of DR. Electron microscopic morphometry including mesangial fractional volume [Vv(Mes/glom)] were performed and light microscopic tissues were categorized as: C1, normal/near normal renal structure; C2, typical diabetic glomerulopathy; C3, atypical injury patterns. Patients were followed up for 7.1±3.8years, and glomerular filtration rate (GFR) and urinary albumin excretion (UAE) measurements were taken annually. Results: Vv(Mes/glom) was larger in DR+ than that in DR-. Vv(Mes/glom) positively correlated with the UAE if patients had DR. The patients with DR had a significant higher rate of C2 pattern compared to those in DR-. Among patients with DR and C2, GFR in microalbuminuria (n=7) decreased while GFR in normoalbuminuria (n=5) did not change during the observation. Conclusions: Type 2 diabetic patients with DR and C2 showed progressive renal dysfunction after they had microalbuminuria. DR and albuminuria should be considered to determine renal function decline in type 2 diabetic patients. abstract_id: PUBMED:19217462 Vascular complications in diabetic pregnancy. Progression of diabetic retinopathy (DR) occurs at least temporarily during pregnancy and postpartum. Nephropathy is associated with increased risk of preeclampsia, nephrotic syndrome, preterm delivery, fetal growth restriction, and perinatal mortality. The degree of renal impairment and proteinuria in early pregnancy predict pregnancy complications in women with diabetic nephropathy. The main pregnancy complication in this respect is pre-eclampsia which increases with severity of proteinuria. Presence of retinopathy increases also risk of preeclampsia and also poor glycemic control. The pregnancy itself (first or subsequent) is not a long-term risk factor for developing microalbuminuria, any retinopathy, proliferative retinopathy, or neuropathy. These findings have practical implications for counselling young women in planning their pregnancies. Good glycemic control, normotension, lack of nephropathy as well as lack of pre-proliferative/proliferative changes of diabetic retinopathy are good prognostic factors as regards the progression of vascular complications during pregnancy. Women with Type 1 diabetes should be evaluated before pregnancy for microangiopathies, treated and followed closely during pregnancy by obstetrician and internist/diabetologue. abstract_id: PUBMED:24379914 Diabetic nephropathy: Is it time yet for routine kidney biopsy? Diabetic nephropathy (DN) is one of the most important long-term complications of diabetes. Patients with diabetes and chronic kidney disease have an increased risk of all-cause mortality, cardiovascular mortality, and kidney failure. The clinical diagnosis of DN depends on the detection of microalbuminuria. This usually occurs after the first five years from the onset of diabetes, and predictors of DN development and progression are being studied but are not yet implemented into clinical practice. Diagnostic tests are useful tools to recognize onset, progression and response to therapeutic interventions. Microalbuminuria is an indicator of DN, and it is considered the only noninvasive marker of early onset. However, up to now there is no diagnostic tool that can predict which patients will develop DN before any damage is present. Pathological renal injury is hard to predict only with clinical and laboratory findings. An accurate estimate of damage in DN can only be achieved by the histological analysis of tissue samples. At the present time, renal biopsy is indicated on patients with diabetes under the suspicion of the presence of nephropathies other than DN. Results from renal biopsies in patients with diabetes had made possible the classification of renal biopsies in three major groups associated with different prognostic features: diabetic nephropathy, non-diabetic renal disease (NDRD), and a superimposed non-diabetic condition on underlying diabetic nephropathy. In patients with type 2 diabetes with a higher degree of suspicion for NDRD, it is granted the need of a renal biopsy. It is important to identify and differentiate these pathologies at an early stage in order to prevent progression and potential complications. Therefore, a more extensive use of biopsy is advisable. abstract_id: PUBMED:35848645 Does serum kidney injury molecule-1 predict early diabetic nephropathy: A comparative study with microalbuminuria. Introduction: Diabetic nephropathy (DN) is a multifactorial disease, one of the most common complications of diabetes and a major cause of chronic kidney disease. Kidney injury molecule-1 (KIM-1) is a sensitive and specific marker of kidney injury as well as a predictor of prognosis. Objective: The present study aimed to investigate the usefulness of serum KIM-1 as an early marker of DN. Patients And Methods: The present study included total 75 participants, among whom 25 nondiabetic participants were chosen as controls. The 50 diabetic participants were divided into two groups according to urine protein/creatinine ratio (UPCR) as participants with normoalbuminuria (T2DM patients without nephropathy) and microalbuminuria (T2DM patients with nephropathy). The complete blood count, blood glucose, HbA1c, serum electrolytes, and creatinine levels were measured using standard laboratory techniques, and serum KIM-1 levels were measured by sandwich enzyme-linked immunosorbent assay. Results: There was a significant difference in the mean serum KIM-1 between the control and diabetics without microalbuminuria (P = 0.0001). Patients with longer duration of diabetes had a higher serum KIM-1 values (P = 0.05 in DM without microalbuminuria; P = 0.007 for DM with microalbuminuria). Serum KIM-1 did not correlate with UPCR in controls (P = ‒0.167), in diabetics with microalbuminuria (P = 0.487). However, there was a significant correlation observed between UPCR and serum KIM-1 in diabetics without microalbuminuria (P = 0.04). Conclusion: The present study observed significantly increased levels of serum KIM-1 in both the diabetic groups compared to controls. Moreover, serum KIM-1 positively correlated with the duration of diabetes. Therefore, serum KIM-1 may be used as an early diagnostic marker to predict nephropathy among diabetes in our population. abstract_id: PUBMED:11168447 Predictors of diabetic renal lesions in type 2 diabetes associated with microalbuminuria. Not all type 2 diabetic patients with microalbuminuria show the same pattern of renal tissue injury, and heterogeneity in renal lesions has been reported. We determine clinical and laboratory findings that predict the presence of typical diabetic glomerulosclerosis in type 2 diabetic patients with microalbuminuria. Twenty-three type 2 diabetic patients with microalbuminuria who underwent renal biopsy were investigated. Two patterns of renal biopsy findings were defined as type D (typical diabetic glomerulosclerosis) and type A (atherosclerotic nephropathy without evidence of diabetic glomerulopathy). Thirteen patients (57%) were classified as type D, and 10 (43%) as type A. In stepwise multiple regression analysis, severity of diabetic retinopathy (P = 0.0006), relatively high urinary N-acetyl-beta-D-glucosaminidase activity (P = 0.0013), and relatively low serum creatinine concentration (P = 0.0303) significantly predicted type D findings as opposed to type A (R2 = 0.734, P &lt; 0.001). Certain patient characteristics can predict the presence of typical diabetic glomerulosclerosis in type 2 diabetic patients with microalbuminuria. abstract_id: PUBMED:15746541 Diabetic nephropathy: the proteinuria hypothesis. Background/aims: Proteinuria, nearly a universal finding in progressive kidney disease, has been the subject of frequent recent analyses in the renal literature. Proteinuria is a hallmark of diabetic nephropathy: microalbuminuria is the principal early predictor for progression of diabetic glomerulopathy, and proteinuria may be viewed as a measure of the severity and promoter of progression of nephropathy. Methods: This article critically reviews for the first time the full scope of diabetic proteinuria--complex molecular mechanisms, natural history, and analysis of treatment trials--in order to address the validity of 'the proteinuria hypothesis', i.e., that diabetic proteinuria is a modifiable determinant of renal progression. This hypothesis is analyzed in detail, including recent studies on the primary therapy of diabetic nephropathy, renin-angiotensin blockade. Results: As fully developed, this hypothesis consists of three postulates: that higher amounts of proteinuria predict progressive loss of function, that proteinuria reduction correlates with slowing progression, and that proteinuria is a surrogate endpoint for clinical trials. The latter postulate has not before been adequately linked to growing information about the first two postulates as they apply to diabetic kidney disease. Conclusion: While diabetic nephropathy is a disease model for the potential use of proteinuria as a surrogate marker for renal progression, this shift in perspective will require prospective data from additional clinical trials, particularly of non-renin-angiotensin blocking drugs, to be complete. abstract_id: PUBMED:34957874 Urinary transferrin and proinflammatory markers predict the earliest diabetic nephropathy onset. Aim: This study aimed to determine the earliest markers of diabetic nephropathy (DN) onset with discriminative potentials from controlled diabetes (CD). Methods: Sixty male Wistar rats were allocated into three groups (20/group), the two diabetic groups CD and DN received 45 and 65 mg/kg STZ in 0.1 mole/L citrate buffer, respectively, while the control group received only the vehicle. Serum/urinary levels of glomerular, tubular, oxidative and proinflammatory markers were weekly monitored. Results: Each diabetic group showed a different pattern of inflammatory, oxidative and signs of nephropathy along the study period, but none had a discriminative power until the fourth week. At this time point, levels of urinary transferrin, serum/urinary IL-6 and TNF-α as well as urinary IL-18 were significantly higher in DN group compared to CD (p = 0.0217, &lt;0.0001, 0.0005, 0.0004, 0.0006, 0.0019, respectively). Predictive thresholds of these markers were calculated by receiver operating characteristic (ROC) curve that showed area under curve (AUC) of 0.9375 for transferrin with cut-off value of 35.2 mg/dL and 1.000 for serum/urinary IL-6 and TNF-α and urinary IL-18 with cut-of values 224.1, 82.11, 6.596, 125.9 and 21.86 pg/mL, respectively. Conclusion: Urinary transferrin and the inflammatory endpoints proposed in this study might represent promising biomarkers for the early DN onset. abstract_id: PUBMED:36476132 Clinical application of RUBCN/SESN2 mediated inhibition of autophagy as biomarkers of diabetic kidney disease. Background: Deregulated autophagy in diabetes has been a field of many experimental studies recently. Impaired autophagy in diabetic kidneys orchestrates every step of diabetic nephropathy (DN) pathogenesis. This study aimed to evaluate three autophagy regulators; RUBCN, mTOR, and SESN2 as clinically applicable indicators of DN progression and as early predictors of DN. Methods: This retrospective study included 120 participants in 4 groups; G1: diabetic patients without albuminuria, G2: diabetic patients with microalbuminuria, G3: diabetic patients with macroalbuminuria and G4: healthy controls. RUBCN and SESN2 genes expression were tested by RT-qPCR. RUBCN, mTOR, and SESN2 serum proteins were quantitated by ELISA. Results: RUBCN mRNA was over-expressed in diabetic patients relative to controls with the highest level found in G3 followed by G2 then G1; (9.04 ± 0.64, 5.18 ± 0.73, 1.94 ± 0.41 respectively. P &lt; 0.001). SESN2 mRNA expression was at its lowest level in G3 followed by G2 then G1 (0.1 ± 0.06, 0.48 ± 0.11, 0.78 ± 0.13 respectively. P &lt; 0.001). Similar parallel reduction in serum SENS2 was observed. Serum RUBCN and mTOR were significantly elevated in diabetic patients compared to controls, with the increase parallel to albuminuria degree. RUBCN expression, serum RUBCN and mTOR strongly correlated with albuminuria (r = 0.912, 0.925 and 0.867 respectively). SESN2 expression and serum level negatively correlated with albuminuria (r = - 0.897 and -0.828 respectively); (All p &lt; 0.001). Regression analysis showed that serum RUBCN, mTOR, RUBCN and SESN2 mRNAs could successfully predict DN. Conclusions: The study proves the overexpression of RUBCN and mTOR in DN and the down-expression of SESN2. The three markers can be clinically used to predict DN and to monitor disease progression. abstract_id: PUBMED:2000893 Diabetic nephropathy: changing concepts of pathogenesis and treatment. The metabolic changes which accompany hyperglycemia in a person with diabetes are thought to cause renal hyperperfusion and intraglomerular hypertension, especially in the person with a predisposition to essential hypertension. Intraglomerular hypertension causing deposition of protein in the mesangium leads to glomerulosclerosis and renal failure. Screening for microalbuminuria can predict which type I diabetic patients will develop nephropathy. The decline in renal function in established diabetic nephropathy can be slowed with aggressive treatment of hypertension. The use of ACE inhibitors may also decrease intraglomerular hypertension. Whether similar treatment in the person with preclinical diabetic nephropathy would delay or prevent the onset of diabetic nephropathy is being investigated. Restricted protein intake, anti-platelet and rheolitic drugs may have a role in the treatment of established diabetic nephropathy. In end stage renal failure, renal transplantation is the treatment of choice. When transplantation cannot be performed, chronic ambulatory peritoneal dialysis is preferable to hemodialysis. Answer: Yes, microalbuminuria is considered an early predictor for the progression of diabetic nephropathy. It is a hallmark of diabetic nephropathy and is the principal early indicator for the progression of diabetic glomerulopathy (PUBMED:15746541). The clinical diagnosis of diabetic nephropathy often depends on the detection of microalbuminuria, which usually occurs after the first five years from the onset of diabetes (PUBMED:24379914). Additionally, the presence of microalbuminuria can predict which type 1 diabetic patients will develop nephropathy (PUBMED:2000893). Furthermore, in type 2 diabetic patients, the presence of microalbuminuria along with diabetic retinopathy has been associated with typical diabetic glomerulosclerosis and progressive renal dysfunction (PUBMED:27138869). Moreover, studies have shown that not all type 2 diabetic patients with microalbuminuria exhibit the same pattern of renal tissue injury, indicating that microalbuminuria can be a heterogeneous indicator of renal lesions (PUBMED:11168447). However, the presence of microalbuminuria is a significant factor in the prediction and progression of diabetic nephropathy, and its detection is a critical component in the management and treatment of diabetic patients to prevent or delay the onset of nephropathy (PUBMED:2000893).
Instruction: Systematic safety follow up in a cohort of 107 patients with spondyloarthropathy treated with infliximab: a new perspective on the role of host defence in the pathogenesis of the disease? Abstracts: abstract_id: PUBMED:12922954 Systematic safety follow up in a cohort of 107 patients with spondyloarthropathy treated with infliximab: a new perspective on the role of host defence in the pathogenesis of the disease? Background: Recent studies with infliximab indicate the therapeutic potential of tumour necrosis factor alpha blockade in spondyloarthropathy (SpA). Because defective host defence is implicated in the pathogenesis of SpA, the potential side effects of this treatment due to impact on the antimicrobial defence are a major concern. Objective: To report systematically the adverse events seen in a large cohort of patients with SpA treated with infliximab, with special attention to bacterial infections. Patients And Methods: 107 patients with SpA were treated with infliximab for a total of 191.5 patient years. All serious and/or treatment related adverse events were reported. Results: Eight severe infections occurred, including two reactivations of tuberculosis and three retropharyngeal abscesses, and six minor infections with clear bacterial focus. One patient developed a spinocellular carcinoma of the skin. No cases of demyelinating disease or lupus-like syndrome were seen. Two patients had an infusion reaction, which, however, did not relapse during the next infusion. Finally, three patients with ankylosing spondylitis developed palmoplantar pustulosis. All patients recovered completely with adequate treatment, and infliximab treatment had to be stopped in only five patients with severe infections. Conclusions: Although the global safety of infliximab in SpA is good compared with previous reports in rheumatoid arthritis and Crohn's disease, the occurrence of infections such as tuberculosis and retropharyngeal abscesses highlights the importance of careful screening and follow up. Focal nasopharyngeal infections and infection related symptoms, possibly induced by streptococci, occurred frequently, suggesting an impairment of specific host defence mechanisms in SpA. abstract_id: PUBMED:28339632 Dosing down with biologic therapies: a systematic review and clinicians' perspective. The effectiveness of biologic therapies now means that remission or low disease activity are realistic targets for treatment. However, after achieving remission/low disease activity, the next steps remain unclear. The aim of this publication was to conduct a broad systematic literature review to evaluate dosing down of biologics. After screening papers and abstracts for relevance and application of inclusion/exclusion criteria, a structured extraction process was used to collect information on the included studies. Fifty-two papers were included in the analysis across rheumatic disease. In patients who discontinue therapy, remission is not typically sustained, with reported rates of relapse and flare across early RA (48-54%), established RA (2-84%), axial spondyloarthritis (11-53%) and PsA (44.9%). In many cases, an acceptable disease activity can be regained upon retreatment. More research is needed to understand the long-term impacts of these strategies on efficacy, safety and cost. abstract_id: PUBMED:11830424 Repeated infusions of infliximab, a chimeric anti-TNFalpha monoclonal antibody, in patients with active spondyloarthropathy: one year follow up. Background: In a pilot study, the anti-tumour necrosis factor alpha monoclonal antibody, infliximab, induced a rapid and significant improvement in global, peripheral, and axial disease manifestations of patients with active spondyloarthropathy. Objective: To determine whether repeated infusions of infliximab would effectively and safely maintain the observed effect. Methods: Safety and efficacy of a maintenance regimen (5 mg/kg infliximab every 14 weeks) was evaluated using the measurements reported in the pilot study. Of the 21 patients, 19 completed the one year follow up for efficacy; two patients changed to another dosing regimen after week 12 owing to partial lack of efficacy. However, they are still being followed up for safety analysis. Results: After each re-treatment a sustained significant decrease of all disease manifestations was observed. Before re-treatment, symptoms recurred in 3/19 (16%) at week 20, in 13/19 (68%) at week 34, and in 15/19 (79%) at week 48. No withdrawals due to adverse events occurred. Twelve minor infectious episodes were observed. Twelve patients (57%) developed antinuclear antibodies; in four of them (19%) anti-dsDNA antibodies were detected. However, no lupus-like symptoms occurred. Conclusion: In this open study of infliximab in patients with active spondyloarthropathy, the significant improvement of all disease manifestations was maintained over a one year follow up period without major adverse events. Although recurrence of symptoms was noted in a rising number of patients before each re-treatment, no loss of efficacy was observed after re-treatment. abstract_id: PUBMED:24774503 The role of tumour necrosis factor in the pathogenesis of immune-mediated diseases. Immune-mediated inflammatory diseases (IMIDs), such as rheumatoid arthritis, psoriatic arthritis, psoriasis, axial spondyloarthropathies, Crohn's disease, ulcerative colitis and juvenile idiopathic arthritis, comprise a group of chronic disorders characterized by an immune-mediated pathogenesis. Although at clinical presentation these diseases appear unrelated, they have been recognized to share similar pathogenic mechanisms. Data from epidemiological and genetic studies further support the concept that IMIDs are interrelated, as they can co-occur in the same patient and share a similar genetic susceptibility. The specific aetiologies of IMIDs remain unknown, but all are known to involve dysregulation of the immune system, including an over-expression of the pro-inflammatory cytokine tumour necrosis factor (TNF). The pivotal role played by TNF in the pathogenesis and pathophysiology of IMIDs has been documented by extensive preclinical and clinical investigations, and confirmed by the efficacy of anti-TNF biotechnological drugs, such as etanercept, infliximab and adalimumab, in the therapeutic management of these disorders. In this narrative review, we discuss the available data on the TNF-dependent pathogenesis of IMIDs and associations among the different disorders. Although much remains to be discovered about the pathogenesis and aetiology of IMIDs, their common inflammatory pathological features may explain why they can be successfully targeted by anti-TNF drugs. Among these, adalimumab, a fully human monoclonal antibody, has been approved for treatment of nine distinct IMID indications and it is likely to become a valuable therapeutic tool for this complex cluster of chronic inflammatory disorders. abstract_id: PUBMED:12827403 Adaption and validation of the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) for use in Germany Objective: The purpose of this study was the validation of a German translation of the Bath Ankylosing Spondylitis Index (BASDAI). Patients And Methods: The German translation of the BASDAI was validated using data of 134 patients with ankylosing spondylitis (AS) from a spondyloarthropathy cohort. For validation the BASDAI was compared with parameters assessed by physicians (physicians global, C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), spinal mobility (BASMI), swollen joints, enthesitis (yes or no)), and parameters assessed by patients themselves (pain, function (BASFI) and quality of life (SF-12)). The discriminative validity was analyzed by comparison of BASDAI scores of patients subgroups who have different levels of disease severity. The sensibility to change was analyzed by using follow-up data of a clinical trial on efficacy of the anti-tumor necrosis factor (TNF) alpha agent infliximab in 70 AS-patients with active and severe disease. Results: The German translation of the BASDAI was easy to understand, the translation back to English corresponded well with the original English version. The 6 questions of the BASDAI represent different aspects of disease but correlated also well with each other (r &lt; 0.65). There was a good internal consistency for all 6 questions (Cronbach's Alpha 0.82), strong correlations to the physician's global (r = 0.659), pain (r = 789), function (BASFI; r = 0.752) and to CRP and ESR confirming that the BASDAI assesses a comprehensive picture of patient's disease activity. For subgroups of patients with different levels of disease severity or with different manifestations the BASDAI showed good discriminative properties. The BASDAI was sensitive to change during treatment with infliximab (effect size 1.18). Conclusion: The German translation of the BASDAI is easy to use, reliable, and sensitive to change for the assessment of disease activity in AS. abstract_id: PUBMED:29154920 Switching from originator infliximab to biosimilar CT-P13 in real-life: The weight of patient acceptance. Objective: To explore acceptance and retention rate of biosimilar CT-P13 after switching from originator infliximab (OI) in patients with various rheumatic diseases. Methods: Patients with stable rheumatoid arthritis (RA), ankylosing spondylitis (AS) or psoriatic arthritis (PsA) under OI were proposed to switch to CT-P13 at the same regimen. A prospective cohort of infliximab-naïve patients beginning CT-P13 and a retrospective cohort of patients treated with OI were used as controls. The primary outcome was to evaluate the retention rate of CT-P13. Secondary outcomes were the switch acceptance rate, reasons of failure and safety. Results: Switch was proposed to 100 patients and accepted by 89 of them (63 AS, 12 PsA and 14 RA). After a median follow-up of 33 weeks, 72% of patients were still treated with CT-P13. This retention rate was significantly lower than the one found in our retrospective and prospective control cohorts: 88% and 90% respectively (P-value=0.0002). Within patients who asked to be reswitched to OI, 13/25 (52%) presented clinical disease activity, one developed serum sickness and 11 (44%) presented no objective activity. A subanalysis excluding these 11 patients abrogated difference in retention rates between the 3 cohorts (P-value=0.453). After reswitching to OI, patients without objective disease activity claimed to recover original efficacy. Conclusions: Retention rate was lower after switching from OI to CT-P13 compared to our control cohorts. However, this difference faded after excluding patients without objective clinical activity, suggesting a reluctance of patients to the switch and a negative perception of the biosimilar. abstract_id: PUBMED:26337851 Spondyloarthropathy in inflammatory bowel disease patients on TNF inhibitors. Background: Musculoskeletal symptoms are the most common extra-intestinal manifestation associated with inflammatory bowel disease (IBD). Spondyloarthritis (SpA) is an umbrella term applied to a group of rheumatic diseases with some features in common and others distinct from other inflammatory arthritides. Aim: To determine self-reported prevalence of SpA associated musculoskeletal manifestations in an IBD cohort on tumour necrosis factor (TNF) inhibitors using a questionnaire incorporating Assessment of SpondyloArthritis International Society (ASAS) criteria. Methods: Consecutive IBD patients on TNF inhibitors attending a single IBD centre (May-September 2011) were asked to complete a SpA questionnaire. Data collected included SpA and IBD variables, demographics, concurrent medications, co-morbidities and autoimmune serology. Results: The 140-patient cohort included 96 suffering from Crohn disease and 44 from ulcerative colitis. The mean age of disease onset was 29.3 years and 45% were men. Concurrent or past history of inflammatory back pain was reported by 29% subjects. Using the imaging and clinical arms of the ASAS criteria, 30% and 14% subjects respectively had axial SpA. Arthritis was reported by 34%, enthesitis 17%, dactylitis 4%, uveitis 6%, psoriasis 6% and a family history of SpA in 39%. Peripheral SpA was present in 41% by the ASAS criteria. There were no differences in these frequencies between Crohn disease and ulcerative colitis. A positive antinuclear antibodies (&gt;1:80) was found in 19% before commencement of TNF inhibitor therapy and increased to 78% on therapy. Clinical drug-induced lupus erythematosus was uncommon (4%) and was characterised by new clinical signs and symptoms, including arthralgia, rash with elevated dsDNA titres and positive antinuclear antibodies. Conclusions: Inflammatory bowel disease patients on TNF inhibitors frequently reported musculoskeletal manifestations. Increased recognition of SpA occurred with use of an SpA self-reported questionnaire in IBD patients: this could alter management and improve patient outcomes. Clinical drug-induced lupus erythematosus was uncommon. abstract_id: PUBMED:24415857 Current issues in pediatric inflammatory bowel disease-associated arthropathies. Joint involvement is the most common extraintestinal manifestation in children with inflammatory bowel disease (IBD) and may involve 16%-33% of patients at diagnosis or during follow-up. It is possible to distinguish asymmetrical, transitory and migrating arthritis (pauciarticular and polyarticular) and spondyloarthropathy (SpA). Clinical manifestations can be variable, and peripheral arthritis often occurs before gastrointestinal symptoms develop. The inflammatory intestinal pattern is variable, ranging from sub-clinical inflammation conditions, classified as indeterminate colitis and nodular lymphoid hyperplasia of the ileum, to Crohn's disease or ulcerative colitis. Unlike the axial form, there is an association between gut inflammation and evolution of recurrent peripheral articular disease that coincides with a flare-up of intestinal disease. This finding seems to confirm a key role of intestinal inflammation in the pathogenesis of SpA. An association between genetic background and human leukocyte antigen-B27 status is less common in pediatric than n adult populations. Seronegative sacroiliitis and SpA are the most frequent forms of arthropathy in children with IBD. In pediatric patients, a correct therapeutic approach relies on the use of nonsteroidal antiinflammatory drugs, local steroid injections, physiotherapy and anti-tumor necrosis factor therapy (infliximab). Early diagnosis of these manifestations reduces the risk of progression and complications, and as well as increasing the efficacy of the therapy. abstract_id: PUBMED:27273894 Effect of Comedication With Conventional Synthetic Disease-Modifying Antirheumatic Drugs on Retention of Tumor Necrosis Factor Inhibitors in Patients With Spondyloarthritis: A Prospective Cohort Study. Objective: To evaluate whether use of comedication with conventional synthetic disease-modifying antirheumatic drugs (csDMARDs) influences the retention of tumor necrosis factor inhibitors (TNFi) in patients with spondyloarthritis (SpA). Methods: Patients with SpA from the Rheumatic Diseases Portuguese Register who started treatment with their first TNFi between 2001 and 2014 were included in this study. Cox regression analysis was used to estimate the effect of comedication with csDMARDs on TNFi retention in 2 types of models: a model in which baseline (time-fixed) variables were included, and a second model incorporating time-varying variables, including sociodemographic features, measures of disease activity, measures of physical function, and cotreatment with other drugs (nonsteroidal antiinflammatory drugs and oral steroids). To control for possible confounding by indication, the effect of csDMARD comedication on TNFi retention was also tested after adjustment for the treatment propensity score. Results: In total, 954 patients were included in the study, of whom 289 (30.3%) discontinued treatment with their first TNFi after a median follow-up time of 2.5 years (range 0.08-13 years). Inefficacy was the most common reason for TNFi discontinuation (55.7% of patients). In the multivariable analyses, comedication with csDMARDs had no measurable effect on TNFi retention, neither in the baseline model (hazard ratio [HR] 0.83, 95% confidence interval [95% CI] 0.59-1.16) nor during follow-up in the model adjusted for time-varying covariates (HR 1.07, 95% CI 0.68-1.68). The effect of csDMARD comedication remained nonsignificant after propensity score adjustment. Conclusion: Comedication with csDMARDs does not prolong TNFi retention in patients with SpA in clinical practice, suggesting that there is no benefit conferred by the concomitant use of these drugs. abstract_id: PUBMED:21299548 Pneumocystis jirovecii colonization in patients treated with infliximab. Background: Infliximab, a chimeric antitumour necrosis factor (TNF) monoclonal antibody, has become an established effective therapy for inflammatory rheumatic disease. However, TNF is a critical factor in host defence, and the suppression of its biological activity may be associated with the increased risk of opportunistic infections. The frequent use of infliximab in clinical practice has identified Pneumocystis jirovecii pneumonia (PcP) as a serious complication. Individuals colonized with Pneumocystis may be at high risk of development of PcP when they have undergone immunosuppression. Hence, we addressed the question of the frequency of Pneumocystis colonization among patients treated with infliximab. Design: We examined 125 oropharyngeal washes collected from 78 individuals with rheumatoid arthritis, 30 with ankylosing spondylitis and 17 with psoriatic arthritis, half of them underwent infliximab therapy, using a real-time polymerase chain reaction assay that employs specific primers from a portion of the mitochondrial large-subunit rRNA gene of P. jirovecii. Results: Pneumocystis jirovecii colonization was detected in 32 (25·6%) patients. In a multivariate regression model, only duration of infliximab treatment for more than 3 years and use of corticosteroid were significantly and independently associated with risk of Pneumocystis colonization. However, the effect of corticosteroid on P. jirovecii colonization rate was not linearly dose dependent as showed other logistic regression analysis. Conclusions: There is a high rate of P. jirovecii colonization among patients with rheumatologic diseases treated with infliximab. The identification of patients colonized by P. jirovecii before starting the treatment with infliximab could be a strategy for PcP prevention. Answer: The systematic safety follow-up of 107 patients with spondyloarthropathy (SpA) treated with infliximab provided insights into the role of host defense in the pathogenesis of the disease. The study reported adverse events, with special attention to bacterial infections, over a total of 191.5 patient years. The results showed eight severe infections, including two reactivations of tuberculosis and three retropharyngeal abscesses, as well as six minor infections with a clear bacterial focus. One patient developed a spinocellular carcinoma of the skin. No cases of demyelinating disease or lupus-like syndrome were observed. Two patients had an infusion reaction, which did not recur during subsequent infusions. Additionally, three patients with ankylosing spondylitis developed palmoplantar pustulosis. All patients recovered completely with adequate treatment, and infliximab treatment had to be stopped in only five patients with severe infections. The occurrence of infections such as tuberculosis and retropharyngeal abscesses highlighted the importance of careful screening and follow-up. The frequent occurrence of focal nasopharyngeal infections and infection-related symptoms, possibly induced by streptococci, suggested an impairment of specific host defense mechanisms in SpA (PUBMED:12922954). This study contributes to the understanding of the safety profile of infliximab in the treatment of SpA and underscores the need for vigilance regarding infections, which may reflect underlying host defense impairments associated with the pathogenesis of the disease.
Instruction: Can discoblock replace discography for identifying painful degenerated discs? Abstracts: abstract_id: PUBMED:23642762 Can discoblock replace discography for identifying painful degenerated discs? Objectives: The aim of the present study was to intra-individually compare provocative discography and discoblock (disc analgesia) of idiopathic degenerated discs (IDD) results to each other, to clinical parameters, and to MRI findings. By this the value of both diagnostic features should be critically reevaluated. Methods: 31 intervertebral IDD (Pfirrmann III°-IV°) of 26 patients were analyzed for surgery decision making by combined discoblock/discography procedure in an open MRI at 1T. A correlation analysis was performed between the Dallas Discogram Scale, pain discrimination score (PDS: concordant/discordant/no pain), positive discoblock (Numerical Rating Scale [NRS] reduction by ≥ 3, 60 min after intervention), presence of Modic changes or high intensity zones (HIZ), patient sex and age, intervention level, injection pressure and discography endpoint analysis (pain/pressure/anatomic/volume). Results: Concordant pain could be evoked in 35% of the IDDs whereas discoblock was positive in 64%. Patients' age, sex, Dallas I, Dallas II, and Pfirrmann scores, as well as the presence of HIZ did not correlate to PDS or discoblock results. Discoblock correlated positively to concordant pain. Further positive correlation was found between PDS and intervention level/pressure, between discoblock and Modic changes/discography endpoint as well as between HIZ and discography endpoint. Conclusions: We suggest discoblock to be an additional tool for surgery decision making in patients with IDD because it correlates to concordant pain evoked by provocative discography as well as to presence of Modic changes. Additionally, assessment of a release instead of provocation of pain can be of advantage. abstract_id: PUBMED:28194524 Painful Schmorl's nodes treated by discography and discoblock. Purpose: To determine the efficacy of discography and discoblock in the treatment of low back pain (LBP) associated with painful Schmorl's nodes (SNs). Methods: Between January 2010 and February 2015, 46 patients were studied who had LBP suspected to be secondary to SNs. There were 34 men and 12 women, and mean age was 54.2 years (range 42-68 years). All patients underwent provocation discography, and discoblock was given to positive patients (confirmed to have painful SNs). Visual analogue scores (VAS) and the Oswestry Disability Index (ODI) were evaluated at 4 h and 1, 3, 6, and 12 months post-operatively. MRI was also used to evaluate the SNs at 12 months. Results: Discography was performed on a total of 60 discs without infection or other complications. Positive findings were found in 71.7% discs; 20.0% were negative, and 8.3% were indeterminate. Among the positive patients who underwent discoblock, 89.2% reported an improvement in their LBP, and none reported worsening symptoms. VAS and ODI scores decreased significantly after discoblock, and there were no significant differences between 4 h and 1, 3, 6, and 12 months post-operatively. In patients with painful SNs, the vertebral body bone marrow surrounding the SN was characterized by low T1 and high T2 signals on MRI. At 12 months, the node demonstrated either high T1 and T2 signals or low T1 and T2 signals. The SNs tended to remain stable in size over time. Conclusions: Painful SNs refractory to medical or physical therapy should be an indication for treatment with discography and discoblock. abstract_id: PUBMED:36502874 Usefulness of discography, discoblock and a new mechanical method for identifying a painful segment/disc. Background: We previously described a procedure for eliciting deep spatial discrimination of individual segments in the healthy lumbar spine of normal subjects: the percutaneous mechanical provocation (PMP) test. Our goal was to devise a method for accurate identification of the spinal level of pathology in chronic low back pain (CLBP). In the present study, we validated the PMP test, using a subgroup of CLBP patients with isthmic spondylolisthesis (IS). Because there is clinical consensus that IS back pain originates in the slipped segment/disc, the level of pathology can be directly compared to the result of the PMP test. The test is agnostic with respect to the underlying pathological mechanism, and therefore might be useful in identifying the involved segment(s) irrespective of the painful structure. Methods: In 37 patients with confirmed IS (slippage 3-15mm), we compared sensitivity between the PMP test, the widely used provocative discography test and the discoblock test. Results: The PMP test reliably identified the slip level in patients with IS, with sensitivity of 92%. Accepting the slipped disc as the origin of pain in IS, the sensitivity of the provocative discography and discoblock tests were 49% and 35%, respectively: i.e., too low to be contributive in clinical practice. Conclusions: The PMP test reliably identified the origin of localized pain in IS as the slip level, but should be used with care in CLBP patients in selecting discogenic pain patients for fusion surgery, since the specificity of the test is not known and it may be positive for any origin of localized pain. abstract_id: PUBMED:27085700 In vivo effects of bupivacaine and gadobutrol on the intervertebral disc following discoblock and discography: a histological analysis. Objectives: The aim of the present study was to histologically compare chondrotoxicity in surgically harvested intervertebral discs (IVDs) of patients following discoblock, discography, or no preoperative intervention. Methods: Thirty patients (IVD degeneration Modic ≥ 2°, Pfirrmann 3° or 4°) at L4/5 or L5/S1 who were planned for anterior lumbar interbody fusion were randomly assigned to three groups (open MRI: group DG - discography with gadobutrol; group DB - discoblock with bupivacaine at 4 weeks prior to surgery; group C - no intervention). The intervertebral discs were histologically evaluated and compared using ANOVA and Bonferroni tests for cell count, apoptosis, and proliferation. Results: A reduced cell count (groups DG vs. DB vs. C: 14.9 ± 7.1, 9.2 ± 3.8, and 16.6 ± 5.2 cells/mm2, respectively; p ANOVA = 0.016), increased apoptosis (groups DG vs. DB vs. C: 34.9 ± 10.2, 47.4 ± 16.3, 32.6 ± 12.2 %, respectively; p ANOVA = 0.039) and increased cell proliferation (post hoc pDB vs. DG or C p &lt; 0.001; for 3-7 cell monoclonal cell nests: groups DG vs. DB vs. C: 2.4 ± 1, 3.9 ± 1, 2.2 ± 1.1, respectively; p interventionx nest size = 0.006) were found in the IVDs of patients in group DB. Conclusions: This in vivo study suggests that chondrotoxic effects occur in IVD cells after the intradiscal injection of bupivacaine but not after gadobutrol administration. Key Points: • Local bupivacaine administration to intervertebral discs leads to cell toxicity and proliferation. • Gadobutrol demonstrated no significant effect on cell count, apoptosis, or cell proliferation. • In vivo cytotoxicity was demonstrated histologically in humans for the first time. • Addition/administration of bupivacaine during discographies must be judged critically. abstract_id: PUBMED:38413238 Establishing a Gold Standard for Noninvasive Identification of Painful Lumbar Discs: Prospective Comparison of Magnetic Resonance Spectroscopy vs Low-Pressure Provocation Discography. Purpose: Verifying lumbar disc pain can present a clinical challenge. Low-pressure provocative discography (PD) has served as the gold standard, although it is invasive and often a challenge to interpret. We reported that magnetic resonance spectroscopy (MRS) biomarkers accurately predict PD results in lumbar discs and improved outcomes for patients with surgery at positive MRS levels versus nonsurgery. To further substantiate MRS for diagnosing painful discs, we report a prospective comparison of 2 MRS-derived measures: NOCISCORE (pain) and SI-SCORE (degeneration severity). Methods: Lumbar MRS and software-based postprocessing (NOCISCAN-LS, Aclarion Inc.) was performed in 44 discs in 14 patients (prospective cohort [PC]). PC data were compared to prior data used to establish the NOCISCORE (training cohort [TC]). The NOCISCORE was converted to an ordinal value (high/intermediate/low; NOCI+/mild/-) and compared against painful (P) versus nonpainful (NP) control diagnosis (PD) for 19 discs where PD was performed in the PC (12 NP; 7 P). Sensitivity, specificity, and positive and negative predictive values were calculated. The SI-SCORE was compared against MRI Pfirrmann Grades for 465 discs in 126 patients (PC plus TC). Results: For the PC, MRS (NOCI+/-) compared to PD (P/NP) with an accuracy of 87% and sensitivity and specificity of 100%. The positive and negative predictive values of MRS were 100%. NOCISCOREs were significantly higher for PD+ versus PD- discs for PC and TC (P &lt; 0.05), and the NOCISCORE distributions for PD+/- group were not statistically different between the PC and TC (P &gt; 0.05). SI-SCORES differed between Pfirrmann Grades 1 and 2 (less degenerated) versus Grades 3 and 4 (more degenerated; P &lt; 0.05), with a progressively decreasing trend with Pfirrmann Grades 1-5. Conclusion: These current data provide prospective confirmation of the predictive value of disc MRS for distinguishing painful discs and for assessing the disc structural integrity. Clinical Relevance: NOCISCAN is an adoptable, noninvasive, and objectively quantitative test to improve management of low back pain patients. Level Of Evidence: 2: abstract_id: PUBMED:19440168 Results of surgery for discogenic low back pain: a randomized study using discography versus discoblock for diagnosis. Study Design: Randomized, controlled study. Objective: To evaluate the diagnosis of discogenic low back pain (LBP) with discography and discoblock. Summary Of Background Data: Discogenic LBP is usually diagnosed by magnetic resonance imaging and discography. However, the reliability of discography is controversial. Previously, we reported the usefulness of discoblock with bupivacaine for diagnosis, and discoblock improved the results of anterior interbody fusion surgery. However, that study was not a randomized, controlled study. Therefore, the purpose of the current study was to compare the results of surgery after diagnosis of LBP by discography and discoblock. Methods: Patients (n = 42) with severe LBP showing L4-L5 or L5-S1 disc degeneration on magnetic resonance imaging were evaluated by discography (1.5 mL of contrast medium) or discoblock (intradisc injection of 0.75 mL of 0.5% bupivacaine). We randomized the patients in turn. Anterior discectomy and interbody fusion were performed in patients who responded to the diagnostic procedures. The visual analogue scale score (0, no pain; 100, worst pain), Japanese Orthopedic Association Score (0, worst pain; 3, no pain), Oswestry Disability Index, and patient satisfaction before and 3 years after surgery were recorded and compared between groups. Results: Twelve patients did not show pain provocation by discography or pain relief by discoblock and were excluded. Fifteen patients who showed pain provocation by discography and 15 patients who experienced pain relief with discoblock were evaluated. Rates of improvement in the visual analogue scale score, Japanese Orthopedic Association Score, and Oswestry Disability Index score in the discoblock group were significantly higher than those in the discography group (P &lt; 0.05) from baseline to 3 years after surgery. Three patients were dissatisfied with surgery after discography compared with one patient after discoblock. Conclusion: Pain relief after injection of a small amount of bupivacaine into the painful disc was a useful tool for the diagnosis of discogenic LBP compared with discography. abstract_id: PUBMED:28379575 No Transfer of Pressure to Adjacent Discs During Human Low-Pressure Controlled Discography: A Prospective Clinical Study. Background: A substantial part of low back pain (LBP) originates from degeneration of the intervertebral disc. To confirm the diagnosis of discogenic pain, provocation discography seems the best available tool. However, provocation discography is also considered to be a controversial and subjective test because the patient's personal pain response is the most crucial for the result of the test. Recently, an in vivo porcine study and a study in nine human subjects showed passing of pressure to the adjacent discs during discography. This could mean that the concordant pain the patient describes originates from an adjacent disc. The object of this study is to assess if during human lumbar pressure-controlled provocation discography there is pressure transmission to adjacent discs. Methods: Consecutive patients between age 18 and 65 years with intractable low back pain and at least 50% preserved height of the suspected painful disc were included. Exclusion criteria were prior lumbar surgery of the suspected level, use of anticoagulants, pregnancy, local infection, and iodine allergy. An arterial blood pressure monitoring system simultaneously assessed the pressure in the adjacent discs while low-speed flow, pressure-controlled discography was performed. Results: In 50 patients with a positive discography, the average intradiscal peak pressure was 15.1 psi (SD = 11.1). In 48 procedures, no pressure rise in the adjacent discs was found. A small pressure rise (1.1 psi) in the adjacent disc during discography was recorded in two patients. Conclusions: Pressure rise in adjacent discs does not seem to occur during low-speed flow pressure-controlled lumbar provocation discography. False-positive pain reactions caused by potentially painful adjacent discs are therefore unlikely during pressure-controlled discography. abstract_id: PUBMED:24980583 Guideline update for the performance of fusion procedures for degenerative disease of the lumbar spine. Part 6: discography for patient selection. Identifying the etiology of pain for patients suffering from chronic low-back pain remains problematic. Noninvasive imaging modalities, used in isolation, have not consistently provided sufficient evidence to support performance of a lumbar fusion. Provocative testing has been used as an adjunct in this assessment, either alone or in combination with other modalities, to enhance the diagnostic capabilities when evaluating patients with low-back pain. There have been a limited number of studies investigating this topic since the publication of the original guidelines. Based primarily on retrospective studies, discography, as a stand-alone test, is not recommended to formulate treatment strategies for patients with low-back pain. A single randomized cohort study demonstrated an improved potential of discoblock over discography as a predictor of success following lumbar fusion. It is therefore recommended that discoblock be considered as a diagnostic option. There is a possibility, based on a matched cohort study, that an association exists between progression of degenerative disc disease and the performance of a provocative discogram. It is therefore recommended that patients be counseled regarding this potential development prior to undergoing discography. abstract_id: PUBMED:24713605 Lovastatin prevents discography-associated degeneration and maintains the functional morphology of intervertebral discs. Background Context: Discography is an important diagnostic approach to identify the painful discs. However, the benefit of discography, a procedure involving needle puncture and injection of the diagnostic agent into the intervertebral disc, is controversial and has been reported to be associated with accelerated degeneration. Purpose: To investigate the effect of lovastatin on the prevention of degeneration caused by a discography simulation procedure in rat caudal discs. Study Design: In vivo study using rat caudal discs. Methods: A single flexible 27-gauge needle puncture into rat caudal discs was performed under fluoroscopic monitoring. Different concentrations (0.1, 1, 5, and 10 μM) of lovastatin were prepared and injected into randomly chosen caudal discs. RNA expression of selected genes, histologic, and immunohistochemical staining were performed to evaluate the phenotypic effects of lovastatin on rat caudal discs. Results: Simulation of the discography procedure by puncturing the rat caudal discs with a 27-gauge needle and injection of saline solution induced degenerative changes in the nucleus pulposus with minimal damage to the annulus fibrosus. Aggrecan, Type II collagen, and SOX9 expressions were upregulated, whereas Type I collagen expression was significantly suppressed in discs treated with 5 and 10 μM lovastatin. Discs treated with 5 and 10 μM lovastatin were subjected to alcian blue staining and immunohistochemistry that revealed higher levels of glycosaminoglycans and an increase in the number of cells producing S-100 proteins, Type II collagen, and bone morphogenetic protein-2 (BMP-2), respectively. The most effective phenotypic repair was observed in discs treated with 10 μM lovastatin. Conclusions: Intradiscal administration of lovastatin solution upregulated the expressions of BMP-2 and SOX9 and promoted chondrogenesis of rat caudal discs after needle puncture and substance injection. Therefore, we suggest that lovastatin promotes disc repair and can be used as a potential therapeutic agent for biological repair of disc degeneration after the diagnostic discography procedure. abstract_id: PUBMED:34593662 Predictors of discogenic pain in magnetic resonance imaging: a retrospective study of provocative discography performed by posterolateral approach. Background: Provocative discography (PD) is a test that is useful in diagnosing discogenic pain (DP). In this study, to diagnose DP, we used a posterolateral approach of needle placement and followed pressure criteria laid down by the Spine Intervention Society. The aim was to identify the correlation between magnetic resonance imaging (MRI) findings (desiccation, high intensity zone and change in shape and size of the disc) and the results of PD. Methods: Records of 50 patients who underwent PD for DP were analyzed. A total of 109 PDs were performed, with 54 suspect and 55 control discs. Alternate pain generators were ruled out. Results: A total of 35 suspect discs were positive on PD. The mean disc pressure in the suspect disc was 31.9 ± 7.9 psi (range, 15-44). Of the 50 patients who underwent PD, 35 had positive MRI findings. A significant positive correlation was found only between disc desiccation and discography result (r = 0.6, P &lt; 0.001). Logistic regression analysis revealed that only desiccation successfully predicted the result of discography (OR = 26.5, P &lt; 0.001); a high intensity zone and a disc protrusion/extrusion had an OR 2.3 and 1.24, respectively. Disc desiccation of Pfirmann grade 3 or more had a sensitivity and specificity of 0.93 and 0.64 respectively in identifying painful discs; the positive likelihood ratio was 2.58 while the negative likelihood ratio was 0.11. Conclusions: In patients with DP, disc desiccation is the most useful MRI feature that predicts a painful disc on PD. Answer: Based on the provided abstracts, discoblock appears to be a promising alternative to discography for identifying painful degenerated discs. Several studies have shown that discoblock, which involves the injection of an analgesic into the disc, correlates with concordant pain evoked by provocative discography and may be an additional tool for surgery decision-making in patients with idiopathic degenerated discs (IDD) (PUBMED:23642762). Furthermore, discoblock has been reported to improve the results of anterior interbody fusion surgery when used for the diagnosis of discogenic low back pain (LBP), with better outcomes compared to discography (PUBMED:19440168). In a study focused on painful Schmorl's nodes, discoblock following positive discography findings led to significant improvements in patients' low back pain, with 89.2% reporting an improvement in their symptoms (PUBMED:28194524). Another study suggested that discoblock may be more predictive of successful outcomes following lumbar fusion compared to discography alone (PUBMED:24980583). However, it is important to note that while discoblock shows potential, it is not without its drawbacks. For instance, an in vivo study indicated that bupivacaine, a local anesthetic commonly used in discoblock, can lead to chondrotoxic effects in intervertebral disc cells (PUBMED:27085700). This suggests that the use of bupivacaine during discographies must be critically evaluated. Moreover, the specificity of discoblock in identifying the origin of localized pain in chronic low back pain (CLBP) patients is not known, and it may be positive for any origin of localized pain (PUBMED:36502874). Therefore, while discoblock has shown positive correlations with concordant pain and may be useful in certain contexts, it should not be considered a standalone replacement for discography without further evidence and consideration of potential adverse effects. In conclusion, discoblock has demonstrated potential as a diagnostic tool for identifying painful degenerated discs and may complement or, in some cases, serve as an alternative to discography. However, more research is needed to fully understand its efficacy and safety profile before it can be recommended as a replacement for discography.
Instruction: Does executive impairment define a frontal variant of Alzheimer's disease? Abstracts: abstract_id: PUBMED:20719041 Does executive impairment define a frontal variant of Alzheimer's disease? Background: People with Alzheimer's disease (AD) who present with prominent frontal features such as a dysexecutive syndrome may be difficult to differentiate clinically from subjects with frontotemporal lobar degeneration (FTLD). This study was performed to improve the differential diagnosis between AD and FTLD and to better characterize the AD subgroup with greater executive dysfunction. Methods: Using a well-defined prospectively studied cohort of cognitively impaired subjects, which included those with AD and with FTLD, we nominated a frontal variant of AD (FvAD) group as those AD subjects with the lowest quartile of scores on the Frontal Assessment Battery (FAB), indicating greatest executive dysfunction, and compared them with the rest of the AD cases (whom we called the AD group) and those with FTLD across several baseline variables including cognitive, functional and behavioral scales. We also compared the changes from baseline for these three groups at 6 and 12 months. Additionally, we controlled for dementia severity by matching AD and FTLD cases on a functional scale, the SMAF, and repeated the same comparisons with these severity-matched groups. Results: The 114 FvAD subjects had a mean age of 78.1 years and Mini-mental State Examination (MMSE) scores of 16.6, and the (remaining) AD group had a mean age of 78.4 years and MMSE of 22.4. There were 30 FTLD subjects with a mean age at baseline of 70.9 years and a mean baseline MMSE of 23.4. The FvAD group was significantly more severely impaired than the other two groups on all baseline assessments except the behavioral scale, the Neuropsychiatric Inventory (NPI), where there was insignificantly less impairment than in the FTLD group. In the analysis of subjects matched at baseline for functional impairment, the FvAD and FTLD groups were not significantly different on most assessment scales although on the FAB, clock-drawing and MMSE the FvAD subjects were still significantly more impaired. These two severity-matched groups were also similar in other baseline characteristics except for older age and less psychotropic use in the FvAD group. The severity-matched FvAD group was significantly different from the AD group in almost all assessment scales. All three unmatched and matched groups declined similarly over 12 months. Conclusions: When groups were not matched for baseline severity, the use of the FAB defined a group of AD subjects with greater executive dysfunction that were distinguished from both the remainder of the AD and FTLD subjects in almost all domains except behavioral disturbance and probably were just more severely affected AD subjects. The FAB is thus more useful as a marker of dementia severity than as a scale to detect a frontal variant of AD or to distinguish AD from FTLD. Controlling for severity, however, did allow the definition of a subgroup of AD subjects that more closely resembled FTLD subjects than the remainder of the AD subjects. It is proposed that subjects with dementia presenting with greater executive impairment but without prominent behavioral symptoms are likely to have AD rather than FTLD, especially if they are quite functionally impaired. With time FTLD subjects develop increasing executive dysfunction and increasingly resemble the more severely affected AD subjects. abstract_id: PUBMED:26573591 Correlated patterns of neuropsychological and behavioral symptoms in frontal variant of Alzheimer disease and behavioral variant frontotemporal dementia: a comparative case study. Although the neuropathologic changes and diagnostic criteria for the neurodegenerative disorder Alzheimer's disease (AD) are well-established, the clinical symptoms vary largely. Symptomatically, frontal variant of AD (fv-AD) presents very similarly to behavioral variant frontotemporal dementia (bvFTD), which creates major challenges for differential diagnosis. Here, we report two patients who present with progressive cognitive impairment, early and prominent behavioral features, and significant frontotemporal lobe atrophy on magnetic resonance imaging, consistent with an initial diagnosis of probable bvFTD. However, multimodal functional neuroimaging revealed neuropathological data consistent with a diagnosis of probable AD for one patient (pathology distributed in the frontal lobes) and a diagnosis of probable bvFTD for the other patient (hypometabolism in the bilateral frontal lobes). In addition, the fv-AD patient presented with greater executive impairment and milder behavioral symptoms relative to the bvFTD patient. These cases highlight that recognition of these atypical syndromes using detailed neuropsychological tests, biomarkers, and multimodal neuroimaging will lead to greater accuracy in diagnosis and patient management. abstract_id: PUBMED:30911598 Breadth and depth of working memory and executive function compromises in mild cognitive impairment and their relationships to frontal lobe morphometry and functional competence. Introduction: The extent of working memory (WM) and executive function (EF) impairment in mild cognitive impairment (MCI) is not well-characterized. Methods: We compared 48 patients with MCI, 124 noncognitively impaired elderly healthy controls, and 57 patients with Alzheimer's disease (AD) on multiple WM/EF measures, frontal lobe integrity indexes, and functioning. Results: Patients with MCI demonstrated worse performance on nearly all WM/EF tests. This profile of impairment was refined in a factor analysis that identified three primary WM/EF constructs: WM storage; speed and controlled visual search; and manipulation of information and problem solving. EF impairments were associated with reductions in prefrontal cortical thickness. WM/EF accounted for over 50% of the variance in functional competence. Discussion: In MCI, WM/EF impairments are far from rare, based on specific compromises to frontal cortex circuitry, and are associated with loss of everyday functioning. WM/EF impairments, even at this potentially prodromal stage of AD, have clinically deleterious consequences. abstract_id: PUBMED:15178954 Isolated executive impairment and associated frontal neuropathology. Cognitive impairment in the absence of dementia is common in elderly individuals and is most often studied in the context of an isolated impairment in memory. In the current study, we report the neuropsychological and neuropathological features of a nondemented elderly individual with isolated impairment on a test of executive function (i.e., Trail Making Test) and preserved memory, language, and visuospatial function. Postmortem studies indicated that cortical neurofibrillary tangles (NFT) varied considerably, and some regions contained large numbers of neuritic senile plaques. Semiquantitative immunohistochemistry showed higher NFT and amyloid-beta (Abeta) loads in the frontal cortex relative to the temporal, entorhinal, occipital, and parietal cortices. A survey of the entire cingulate gyrus showed a wide dispersion of Abeta42 with the highest concentration in the perigenual part of the anterior cingulate cortex; Abeta appeared to be linked with neuron loss and did not overlap with the heaviest neuritic degeneration. The current case may represent a nonmemory presentation of mild cognitive impairment (executive mild cognitive impairment) that is associated with frontal and anterior cingulate pathology and may be an early stage of the frontal variant of Alzheimer disease. abstract_id: PUBMED:36617783 Neuropsychological Assessment in the Distinction Between Biomarker Defined Frontal-Variant of Alzheimer's Disease and Behavioral-Variant of Frontotemporal Dementia. Background: Frontal-variant of Alzheimer's disease (fvAD) was purposed for patients with AD pathology that, despite the typical amnestic presentation, show early and progressive deterioration of behavior and executive functions, closely resembling the behavioral-variant of frontotemporal dementia (bvFTD). This leads to a challenging differential diagnosis where neuropsychological evaluation and in vivo pathological evidence are essential. Objective: To evaluate the contribution of a comprehensive neuropsychological assessment (NP) battery in distinguishing between fvAD-dementia and bvFTD supported by cerebrospinal fluid (CSF) biomarkers. Methods: We included 40 patients with a baseline NP profile with prominent early executive and/or behavioral dysfunction, who meet both diagnosis of bvFTD and fvAD-dementia, according to international criteria. All patients underwent comprehensive NP assessment and CSF-AD biomarker evaluation. Neuropsychological domains as well as clinical and sociodemographic features, and APOE genotype were compared between groups. Results: 21 patients (52.5%) met the biological criteria for AD (decreased Aβ42 together with increased T-tau or P-tau in CSF) and were therefore classified as fvAD (mean age was 64.57, with 47.6% female). There were no differences between groups regarding age/age-at-onset, gender, or educational level. Regarding neuropsychological profile, performances in language and memory functions were equivalent in both groups. Significant differences were found in visuo-constructional abilities (p = 0.004), Trail Making Test A (p &lt; 0.001), and Raven's Colored Progressive Matrices (p = 0.019), with fvAD patients showing worst performances. Conclusion: In patients with an early prominent frontal profile, a higher impairment in attention and visuo-spatial functions, signaling additional right hemisphere fronto-parietal dysfunction, point towards a diagnosis of fvAD-dementia and may be useful in clinical practice. abstract_id: PUBMED:32362808 Evidence for Structural and Functional Alterations of Frontal-Executive and Corticolimbic Circuits in Late-Life Depression and Relationship to Mild Cognitive Impairment and Dementia: A Systematic Review. Depression is a risk factor for developing Alzheimer's disease and Related Dementia (ADRD). We conducted a systematic review between 2008 and October 2018, to evaluate the evidence for a conceptual mechanistic model linking depression and ADRD, focusing on frontal-executive and corticolimbic circuits. We focused on two neuroimaging modalities: diffusion-weighted imaging measuring white matter tract disruptions and resting-state functional MRI measuring alterations in network dynamics in late-life depression (LLD), mild cognitive impairment (MCI), and LLD+MCI vs. healthy control (HC) individuals. Our data synthesis revealed that in some but not all studies, impairment of both frontal-executive and corticolimbic circuits, as well as impairment of global brain topology was present in LLD, MCI, and LLD+MCI vs. HC groups. Further, posterior midline regions (posterior cingulate cortex and precuneus) appeared to have the most structural and functional alterations in all patient groups. Future cohort and longitudinal studies are required to address the heterogeneity of findings, and to clarify which subgroups of people with LLD are at highest risk for developing MCI and ADRD. abstract_id: PUBMED:22184615 Executive deficits are related to the inferior frontal junction in early dementia. Executive functions describe a wide variety of higher order cognitive processes that allow the flexible modification of thought and behaviour in response to changing cognitive or environmental contexts. Their impairment is common in neurodegenerative disorders. Executive deficits negatively affect everyday activities and hamper the ability to cope with other deficits, such as memory impairment in Alzheimer's disease or behavioural disorders in frontotemporal lobar degeneration. Our study aimed to characterize the neural correlates of executive functions by relating respective deficits to regional hypometabolism in early dementia. Executive functions were assessed with two classical tests, the Stroop and semantic fluency test and various subtests of the behavioural assessment of the dysexecutive syndrome test battery capturing essential aspects of executive abilities relevant to daily living. Impairments in executive functions were correlated with reductions in brain glucose utilization as measured by [(18)F]fluorodeoxyglucose positron emission tomography and analysed voxelwise using statistical parametric mapping in 54 subjects with early dementia, mainly Alzheimer's disease and frontotemporal lobar degeneration, and its prodromal stages: subjective and mild cognitive impairment. Although the analysis revealed task-specific frontoparietal networks, it consistently showed that hypometabolism in one region in the left lateral prefrontal cortex-the inferior frontal junction area-was related to performance in the various neuropsychological tests. This brain region has recently been related to the three component processes of cognitive control-working memory, task switching and inhibitory control. Group comparisons additionally showed hypometabolism in this area in Alzheimer's disease and frontotemporal lobar degeneration. Our study underlines the importance of the inferior frontal junction area for cognitive control in general and for executive deficits in early dementia. abstract_id: PUBMED:21841261 Executive dysfunction in mild cognitive impairment is associated with changes in frontal and cingulate white matter tracts. Mild cognitive impairment (MCI) may affect multiple neuropsychological domains. While amnestic MCI is associated with Alzheimer's disease, patterns of brain pathology in non-amnestic subtypes have been less studied. Twenty-three patients with attention/executive MCI (a/e MCI), seen at a university-based memory clinic, and 23 normal controls, matched according to age, gender, and education, were included in this study. All subjects were assessed with a neuropsychological test battery, including tests of memory, attention and executive function, and underwent magnetic resonance imaging. Diffusion tensor imaging derived white matter (WM) tract radial and mean diffusivity (DR and MD) were assessed using Tract-Based Spatial Statistics, and cortical thickness (CTH) was assessed using FreeSurfer. This study investigated changes of WM DR/MD and CTH in subjects with a/e MCI, and associations between these changes and different a/e subfunctions. WM DR/MD underlying rostral middle frontal, medial orbitofrontal, caudal anterior cingulate, posterior cingulate, retrosplenial and entorhinal cortices was higher for the a/e MCI than the control group, but CTH was not different from controls in any of the regions. WM DR/MD underlying superior frontal, rostral middle frontal, lateral/medial orbitofrontal and retrosplenial cortices were significantly associated with inhibition/switching performance, while caudal middle frontal CTH was significantly associated with attention and divided attention in the patient group. We conclude that increased WM DR/MD in frontal and cingulate regions and cortical thinning in caudal middle frontal region are both associated with executive dysfunction in MCI. abstract_id: PUBMED:31989779 Frontal variant of Alzheimer's disease with asymmetric presentation mimicking frontotemporal dementia: Case report and literature review. Introduction: Frontal variant of Alzheimer's disease (fvAD) is a rare nonamnestic syndrome of Alzheimer's disease (AD). Differentiating it from behavior variant of frontotemporal dementia (bvFTD), which has implications for treatment responses and prognosis, remains a clinical challenge. Methods: Molecular neuroimaging and biofluid markers were performed for the index patient for accurate premortem diagnosis of fvAD. The clinical, neuroimaging, and biofluid characteristics of the patient were compared to those reported in previous studies of fvAD from 1999 to 2019. Results: A 66-year-old man presented with progressive executive dysfunction, personality and behavioral changes, and memory decline since age 59. He had no family history of neurodegenerative disorders. A stimulus-sensitive myoclonus was noted over his left upper extremity. Neuropsychological assessment revealed moderate dementia with a Mini-Mental State Exam score of 10/30 and compromised executive and memory performance. Brain imaging showed asymmetrical atrophy and hypometabolism over the right frontal and temporal areas, mimicking bvFTD. However, we observed increased tau depositions based on 18 F-labeled T807 Tau PET in these areas and diffusely increased amyloid deposition based on 11 C-labeled Pittsburgh compound B positron emission tomography (PET). Plasma biomarker measures indicated an AD profile with increased Aβ1-42 (18.66 pg/ml; cutoff: 16.42 pg/ml), Aβ1-42/Aβ1-40 ratio (0.45; cutoff: 0.30), total tau (29.78 pg/ml; cutoff: 23.89 pg/ml), and phosphorylated tau (4.11 pg/ml; cutoff: 3.08 pg/ml). These results supported a diagnosis of fvAD. Conclusions: Our results with asymmetrical presentations extend current knowledge about this rare AD variant. Application of biofluid and molecular imaging markers could assist in early, accurate diagnosis. abstract_id: PUBMED:37356113 Frontal variant Alzheimer's disease: A systematic narrative synthesis. Background: Frontal variant Alzheimer's disease (fvAD) is considered a rare form of Alzheimer's disease (AD) which may be misdiagnosed as behavioural variant frontotemporal dementia (bvFTD). The literature has tended to conflate behavioural and executive dysfunction in fvAD cohorts and uses both AD diagnostic criteria and bvFTD diagnostic criteria to classify fvAD cohorts. The primary aim of this narrative synthesis was to summarise neuropsychological findings in fvAD cohorts in the context of established AD pathology. Methods: EMBASE, PsycINFO, PROQUEST and MEDLINE databases were searched for studies eligible for inclusion. Studies with both neuropsychological and biomarker evidence were included in the final narrative synthesis. Results: Ten studies were reviewed, including samples totalling 342 fvAD participants, 178 typical AD participants and 250 bvFTD participants. The review revealed areas worthy of further investigation that may aid differential diagnosis, including the degree of executive dysfunction in fvAD cohorts relative to bvFTD cohorts, the onset of behavioural and cognitive symptomatology, and similarities between fvAD and typical AD cognitive profiles. Conclusion: There was insufficient neuropsychological evidence to clearly differentiate fvAD and bvFTD cognitive phenotypes, however, the review has highlighted distinctive features of the two disorders that may guide differential diagnosis in future research. Moreover, the review has highlighted issues involving disparate diagnostic criteria used to classify fvAD cohorts, contributing to variation in findings. Answer: Executive impairment does not solely define a frontal variant of Alzheimer's disease (AD), but it is a significant feature that contributes to the characterization of this subgroup. Studies have explored the concept of a frontal variant of AD (FvAD), which presents with prominent executive dysfunction and may be clinically difficult to differentiate from frontotemporal lobar degeneration (FTLD) (PUBMED:20719041). Executive dysfunction in FvAD is associated with greater severity of cognitive impairment compared to typical AD and may resemble the executive dysfunction seen in FTLD when severity is matched (PUBMED:20719041). Neuropsychological assessments and biomarkers play a crucial role in distinguishing FvAD from behavioral variant frontotemporal dementia (bvFTD). For instance, one study reported that patients with FvAD showed greater executive impairment and milder behavioral symptoms compared to bvFTD patients, highlighting the importance of detailed neuropsychological tests and multimodal neuroimaging for accurate diagnosis (PUBMED:26573591). Furthermore, executive dysfunction in mild cognitive impairment (MCI), which can be a prodromal stage of AD, is linked to frontal lobe morphometry and functional competence, indicating that executive impairments have clinically significant consequences even in early stages of the disease (PUBMED:30911598). In some cases, isolated executive impairment has been associated with frontal neuropathology, suggesting that it may represent an early stage of FvAD (PUBMED:15178954). A comprehensive neuropsychological assessment can aid in distinguishing between FvAD and bvFTD, with attention and visuo-spatial functions being particularly affected in FvAD, pointing towards additional right hemisphere fronto-parietal dysfunction (PUBMED:36617783). Moreover, executive deficits have been related to changes in the inferior frontal junction and white matter tracts in the frontal and cingulate regions, further supporting the association between executive dysfunction and frontal brain alterations in early dementia (PUBMED:22184615; PUBMED:21841261). In summary, while executive impairment is a prominent feature of FvAD, it is not the sole defining characteristic. The diagnosis of FvAD requires a combination of clinical, neuropsychological, and biomarker evidence to differentiate it from other dementia subtypes, particularly bvFTD.
Instruction: Can selection of rapidly progressing patients shorten clinical trials in amyotrophic lateral sclerosis? Abstracts: abstract_id: PUBMED:16606769 Can selection of rapidly progressing patients shorten clinical trials in amyotrophic lateral sclerosis? Background: The marked variability in progression of amyotrophic lateral sclerosis (ALS) requires large numbers of patients to detect a significant effect in current clinical trial designs. Objective: To test the utility of a lead-in period to assess rate of progression so that patients with rapidly progressive ALS can be selected for subsequent clinical trials. Design: Prospective study. Setting: The ALS Center, University of Lisbon, Lisbon, Portugal. Patients: Fifty-seven consecutively recruited patients assessed at diagnosis and 3 months later (end of lead-in period). Interventions: Change in ALS Functional Rating Scale (ALS-FRS) score was analyzed to establish a statistically significant cutoff point to define patients with rapid (group 1) or slow (group 2) progression. Patients from both groups were reexamined 1 and 3 months after the lead-in period. Main Outcome Measures: Changes in ALS-FRS score, motor unit number estimation, and neurophysiologic index, and resultant grouping of patients according to rate of progression at 1 and 3 months. Results: Both the 80th percentile and 2 SDs above the mean of the change in ALS-FRS score identified the same patients. Twelve patients showed rapid progression (group 1) and 45 showed slow progression (group 2). One month after the lead-in period there was a significant reduction in ALS-FRS score, motor unit number estimation, and neurophysiologic index in group 1, and after 3 months all these measurements changed significantly in both groups. Conclusions: This strategy of selecting patients with rapidly progressing ALS for inclusion in exploratory, short phase II clinical trials offers substantial savings in costs and time, and could accelerate the process of testing potentially useful drugs for the treatment of ALS. abstract_id: PUBMED:34315786 Innovating Clinical Trials for Amyotrophic Lateral Sclerosis: Challenging the Established Order. Development of effective treatments for amyotrophic lateral sclerosis (ALS) has been hampered by disease heterogeneity, a limited understanding of underlying pathophysiology, and methodologic design challenges. We have evaluated 2 major themes in the design of pivotal, phase 3 clinical trials for ALS-(1) patient selection and (2) analytical strategy-and discussed potential solutions with the European Medicines Agency. Several design considerations were assessed using data from 5 placebo-controlled clinical trials (n = 988), 4 population-based cohorts (n = 5,100), and 2,436 placebo-allocated patients from the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database. The validity of each proposed design modification was confirmed by means of simulation and illustrated for a hypothetical setting. Compared to classical trial design, the proposed design modifications reduce the sample size by 30.5% and placebo exposure time by 35.4%. By making use of prognostic survival models, one creates a potential to include a larger proportion of the population and maximize generalizability. We propose a flexible design framework that naturally adapts the trial duration when inaccurate assumptions are made at the design stage, such as enrollment or survival rate. In case of futility, the follow-up time is shortened and patient exposure to ineffective treatments or placebo is minimized. For diseases such as ALS, optimizing the use of resources, widening eligibility criteria, and minimizing exposure to futile treatments and placebo is critical to the development of effective treatments. Our proposed design modifications could circumvent important pitfalls and may serve as a blueprint for future clinical trials in this population. abstract_id: PUBMED:17130405 Selecting promising ALS therapies in clinical trials. Riluzole is the only approved medication that extends survival for patients with amyotrophic lateral sclerosis (ALS). While other potential neuroprotective agents have been evaluated in randomized clinical trials, none has shown unequivocal success and none has been approved by regulatory agencies. Few symptomatic therapies have been tested in ALS. Effectiveness for drugs with modest benefit can be established only through large phase III randomized clinical trials. With numerous potential agents but limited resources, priority should be given to agents that show promise in phase II trials before proceeding to evaluation in phase III trials. In this article, we review drug development in early phase ALS trials and introduce novel designs. First, to maximize the therapeutic potential of the test medication, we need to identify the highest dose that produces a tolerable level of side effects. Second, candidate treatments should be ranked by conducting randomized selection trials between competing new treatments. The selection paradigm adopts a statistical viewpoint different from the hypothesis testing framework in conventional trials. We exemplify this approach by describing a group-sequential selection design developed for a phase II, randomized, multicenter trial of two combination treatments in patients with ALS, and illustrate the sample size reduction from a conventional trial. abstract_id: PUBMED:21956723 ALS clinical trials: do enrolled patients accurately represent the ALS population? Objective: To assess the effect of eligibility criteria in amyotrophic lateral sclerosis (ALS) clinical trials on the representativeness of the enrolled population. Methods: Patients enrolled in 8 placebo-controlled clinical trials in our ALS center from 2003 to 2008 were compared 1) to the patients included a prospective epidemiologic register (Piemonte and Valle d'Aosta register for ALS, PARALS) in the same period and 2) the subset of PARALS patients who met the usual criteria for inclusion in clinical trials (PARALS-ct) (definite, probable, probable laboratory-supported ALS; age between 18 and 75 years; disease duration &lt;36 months; vital capacity at diagnosis ≥70%; score ≥3 at the items swallowing and respiratory insufficiency at the Amyotrophic Lateral Sclerosis Functional Rating Scale-revised scale; riluzole therapy). Results: A total of 164 patients were enrolled in 8 different clinical trials. The PARALS cohort included 813 patients, of whom 539 (66.3%) met the entry criteria for clinical trials. Patients enrolled in clinical trials were different from both epidemiologic cohorts, since they were younger, had a longer diagnostic delay, and were more likely to have a spinal onset, and to be men. Tracheostomy-free survival was significantly longer in the group of patients enrolled in clinical trials (median survival time, trial patients, 3.9 years [95% confidence interval (CI) 3.4-4.4]; PARALS, 2.6 [2.4-2.8]; PARALS-ct, 2.9 [2.7-3.1]). Conclusions: Patients enrolled in clinical trials do not satisfactorily represent the ALS population; consequently, the findings of ALS trials lack of external validity (generalizability). Efforts should be made to improve patients' recruitment in trials, particularly enrolling incident rather than prevalent cases. abstract_id: PUBMED:15512898 Patient orientated research and clinical trials. The time is now ripe for the ALS community in the United States to set up a national ALS Study Group. NIH and other granting agencies should fund a program to set up and run the ALS Study Group for five years renewable. This program should result in a competitive request for applications. The benefits likely to derive from a national ALS Study Group would include cheaper and more cost-effective clinical therapeutic trials, the development of better techniques for pilot trials, an increased number of potential new drugs brought from bench to bedside for ALS trials, increased research on biological and surrogate markers of disease, and increased physician-initiated research. abstract_id: PUBMED:35629180 The Advent of Omics Sciences in Clinical Trials of Motor Neuron Diseases. The "omics revolution" has totally changed the scientific research approach and is contributing to the development of personalized therapies. In motor neuron diseases (MNDs), a set of complex, multifactorial, late-onset and chronic neurodegenerative diseases, the use of multi-omics approaches in clinical trials is providing new opportunities to stratify patients and develop target therapies. To show how omics science is gaining momentum in MNDs, in this work, we review the interventional clinical trials for MNDs based on the application of omics sciences. We analyze a total of 62 clinical trials listed in the ClinicalTrials database where different omics approaches have been applied in an initial phase, for diagnosis or patient selection, or in subsequent stages to cluster subjects, identify molecular signatures or evaluate drugs security or efficacy. The rise of omics sciences in clinical experimentation of MNDs is leading to an upheaval in their diagnosis and therapy that will require significant investments and means to ensure the correct and rapid evolution of personalized medicine. abstract_id: PUBMED:34901853 Clinical trials in amyotrophic lateral sclerosis: a systematic review and perspective. Amyotrophic lateral sclerosis is a progressive and devastating neurodegenerative disease. Despite decades of clinical trials, effective disease-modifying drugs remain scarce. To understand the challenges of trial design and delivery, we performed a systematic review of Phase II, Phase II/III and Phase III amyotrophic lateral sclerosis clinical drug trials on trial registries and PubMed between 2008 and 2019. We identified 125 trials, investigating 76 drugs and recruiting more than 15 000 people with amyotrophic lateral sclerosis. About 90% of trials used traditional fixed designs. The limitations in understanding of disease biology, outcome measures, resources and barriers to trial participation in a rapidly progressive, disabling and heterogenous disease hindered timely and definitive evaluation of drugs in two-arm trials. Innovative trial designs, especially adaptive platform trials may offer significant efficiency gains to this end. We propose a flexible and scalable multi-arm, multi-stage trial platform where opportunities to participate in a clinical trial can become the default for people with amyotrophic lateral sclerosis. abstract_id: PUBMED:19382169 A placebo arm is not always necessary in clinical trials of amyotrophic lateral sclerosis. Riluzole is currently the only approved medication for amyotrophic lateral sclerosis (ALS). While other potential neuroprotective agents have been tested in clinical trials, none has been effective, and few symptomatic treatments have been studied. Randomized placebo-controlled trials are necessary to establish the effectiveness of a drug, but an increasing number of potential therapies combined with limited resources means that only a few drugs at a time can be tested for efficacy in ALS. Therefore, priority must be given to agents that show an advantage in early phase trials before proceeding to Phase III efficacy trials. New strategies are being used to screen different agents, along with their correct dose, in a variety of neurological illnesses, including ALS. Early phase trial designs conducted without a placebo arm improve efficiency, reduce cost, and appeal to patients. Dose-ranging, futility, and selection trials are examples of Phase I and II trial designs that can be conducted without placebo groups. abstract_id: PUBMED:24965719 Experimental trials in amyotrophic lateral sclerosis: a review of recently completed, ongoing and planned trials using existing and novel drugs. Introduction: Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disorder that affects roughly 2 subjects per 100,000 in the United States; however, given the rapid decline and mortality, there are low prevalence rates. Although ALS is considered a single disease, it, in truth, probably represents a series of disorders with different clinical patterns and different pathophysiologic mechanisms that eventually coalesce into a single entity. The challenge has been to target these different pathophysiologic abnormalities, and so far, most drug studies have focused on only one or two different pathways. Over 50 well-designed clinical trials have been conducted in ALS over the last 25 years and with the exception of the Riluzole trial, all have failed. Areas Covered: In this review, the authors highlight some of the recently concluded, ongoing or planned Phase II and Phase III studies in ALS. Furthermore, they summarize the progress in the recently initiated stem-cell therapy trials in ALS. Expert Opinion: The challenge remains for developing effective targeted therapeutic interventions for ALS. However, with improved recognition of the complex interplay of several factors that may contribute to ALS pathogenesis, in addition to improved patient selection criteria, outcome measures and biomarkers for drug development, advancements may be made in the future. abstract_id: PUBMED:30982356 Post-hoc analyses of the edaravone clinical trials Study 16 and Study 19: a step toward more efficient clinical trial designs in amyotrophic lateral sclerosis. Objectives: The edaravone development program established a study design in which a treatment effect slowing functional loss in amyotrophic lateral sclerosis (ALS) could be documented within a 24-week time frame. This report elucidates the strategic enrichment design utilized to create efficiency and precision in the development program. Methods: Post-hoc analyses describe learning, sequential iteration, and evolution in study design. Results: The first Phase 3 study of edaravone in ALS (Study MCI186-16) included a large proportion (35%) of placebo patients who were minimal progressors. These patients demonstrated high heterogeneity in change in ALSFRS-R score (-4 median with interquartile range [IQR] 7.5) and a modal distribution score of 0, suggesting evidence of minimal change in ALSFRS-R during the study. This level of variability and rate of progression may have made it difficult to detect a prospective treatment effect in the study. A strategic enrichment strategy provided the second Phase 3 study (Study MCI186-19) with the ability to detect a treatment effect. In Study MCI186-19, only 13% of the placebo patients were minimal progressors. Further, these placebo patients demonstrated less heterogeneity and greater functional progression of ALS, thereby providing greater likelihood of detecting a treatment effect. The enrichment strategy may have excluded some rapidly progressing patients, potentially supporting the detection of a treatment effect. As previously published, Study MCI186-19 prospectively documented a 33% reduction in rate of progression of ALS (p = 0.0013). Conclusions: Strategic choices in the design of Study MCI186-19 reduced the proportion of minimally progressing patients and supported detection of a treatment effect. Answer: Yes, the selection of rapidly progressing patients can shorten clinical trials in amyotrophic lateral sclerosis (ALS). A study conducted at the ALS Center, University of Lisbon, demonstrated that by using a lead-in period to assess the rate of progression, patients with rapidly progressive ALS could be identified. This strategy allowed for the selection of these patients for inclusion in exploratory, short phase II clinical trials, which resulted in significant savings in costs and time, potentially accelerating the process of testing potentially useful drugs for ALS treatment (PUBMED:16606769). Moreover, the concept of selecting patients based on progression rates aligns with the broader goal of optimizing clinical trial designs in ALS. Innovations in trial design, such as the use of prognostic survival models, can potentially reduce sample sizes and placebo exposure time, thereby including a larger proportion of the population and maximizing generalizability. These design modifications can adapt the trial duration to correct for inaccurate assumptions made at the design stage and minimize patient exposure to ineffective treatments or placebo (PUBMED:34315786). However, it is important to note that while selecting rapidly progressing patients may offer benefits for shortening trial durations and reducing costs, it also raises concerns about the representativeness of the trial population. Patients enrolled in clinical trials often do not accurately represent the broader ALS population, which can impact the external validity and generalizability of trial findings (PUBMED:21956723). Therefore, while selecting rapidly progressing patients can be a useful strategy for certain trial designs, it is essential to balance this approach with efforts to ensure that trial populations are representative of the wider ALS community.
Instruction: Trends in postoperative sepsis: are we improving outcomes? Abstracts: abstract_id: PUBMED:36685268 Outcomes of endoscopic retrograde cholangio-pancreatography in patients with liver transplant. Aim Of The Study: Biliary complications are the leading causes of morbidity and mortality after liver transplant (LT). However, national data on endoscopic retrograde cholangiopancreatography (ERCP) usage and outcomes in LT patients are lacking. Our study aims to identify the trends, outcomes, and predictors of ERCP and related complications in this patient subgroup. Material And Methods: We derived our study cohort from the Nationwide Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP) between 2007 and 2017. LT patients were identified using ICD-9/10CM diagnosis codes and patients who underwent ERCP were identified by ICD-9/10-CM procedure codes. We utilized the Cochrane-Armitage trend test and multivariate logistic regression to analyze temporal trends, outcomes, and predictors. Results: A total of 372,814 hospitalizations occurred in LT patients between 2007 and 2017. ERCP was performed in 2.05% (n = 7632) of all hospitalizations. There was a rise in ERCP procedures from 1.96% (n = 477) in 2007 to 2.05% (n = 845) in 2017. Among LT patients who underwent ERCP, the in-hospital mortality rate was 1% (n = 73) and 8% (n = 607) were discharged to facilities. Mean length of hospital stay was 7 ±0.3 days. Septicemia was the most common periprocedural complication (18.3%, n = 1399) followed by post-ERCP pancreatitis (8.8%, n = 674). Conclusions: There has been an increase in ERCP procedures over the past decade among LT patients. Our study highlights the periprocedural complications and outcomes of ERCP in LT patients from a nationally representative dataset. abstract_id: PUBMED:33462669 Robotic Duodenal Switch Is Associated with Outcomes Comparable to those of Laparoscopic Approach. Introduction/purpose: This study evaluates the outcomes of robotic duodenal switch (RDS) when compared to conventional laparoscopy (LDS). Materials And Methods: Using the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP), patients who underwent RDS were compared to those of LDS (2015-2018) for perioperative characteristics and thirty-day postoperative outcomes. Operative complexity, complications, and resource utilization trends were plotted over the included years for the two approaches. Multivariable analysis was conducted to characterize the impact of each approach on these outcomes. Results: Of 7235 minimally invasive operations, 5720 (79.1%) were LDS while 1515 (20.9%) were RDS. Intraoperative endoscopy, anastomosis testing, and shorter operative duration were associated with LDS. RDS group had more concomitant procedures with less attending assistance. The odds ratios of organ space infection and sepsis were equivalent. RDS increased the odds ratios for venous thromboembolism [VTE] (odds ratio [OR] = 2.3, 95% confidence interval [CI] = 1.1-4.8, p = 0.02) and early discharge (OR = 7.3, CI = 4.9-10.9, p &lt; 0.01). The difference in wound infection between LDS and RDS has been decreasing (1.5% and 1.5% in 2018 from 2.3% and 4.1% in 2015, respectively) over the years. Similarly, the decreasing trends were noted for systemic infections. Conclusion: While the development of VTE after RDS was higher, most of the other complications were comparable between LDS and RDS in this study. RDS may reduce the need for advanced intraoperative assistance and minimize hospital stay in select cases, without increasing morbidity. The recent trends suggest a gradual decrease in the variations between LDS and RDS outcomes over time. abstract_id: PUBMED:34660142 Utilization Trends and Predictors of Non-invasive and Invasive Ventilation During Hospitalization Due to Community-Acquired Pneumonia. Background: Community-acquired pneumonia (CAP) is associated with significant morbidity and mortality. Non-invasive ventilation (NIV) and invasive mechanical ventilation (IMV) are most important interventions for patients with severe CAP associated with respiratory failure. We analysed utilization trends and predictors of non-invasive and invasive ventilation in patients hospitalized with CAP. Methods: Nationwide Inpatient Sample and Healthcare Cost and Utilization Project data for years 2008-2017 were analysed. Adult hospitalizations due to CAP were identified by previously validated International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) and International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) codes. We then utilized the Cochran-Armitage trend test and multivariate survey logistic regression models to analyse temporal incidence trends, predictors, and outcomes. We used SAS 9.4 software (SAS Institute Inc., Cary, NC, USA) for analysing data. Results: Out of a total of 8,385,861 hospitalizations due to CAP, ventilation assistance was required in 552,395 (6.6%). The overall ventilation use increased slightly; however, IMV utilization decreased, while NIV utilization increased. In multivariable regression analysis, males, Asian/others and weekend admissions were associated with higher odds of any ventilation utilization. Concurrent diagnoses of septicemia, congestive heart failure, alcoholism, chronic lung diseases, pulmonary circulatory diseases, diabetes mellitus, obesity and cancer were associated with increased odds of requiring ventilation assistance. Ventilation requirement was associated with high odds of in-hospital mortality and discharge to facility. Conclusion: The use of NIV among CAP patients has increased while IMV use has decreased over the years. We observed numerous factors linked with a higher use of ventilation support. The requirement of ventilation support is also associated with very high chances of mortality and morbidity. abstract_id: PUBMED:31892221 Epidemiology and Trends of Sepsis in Young Adults Aged 20-44 Years: A Nationwide Population-Based Study. Background: While sepsis may have especially marked impacts in young adults, there is limited population-based information on its epidemiology and trends. Methods: Population-based longitudinal study on sepsis in adults aged 20-44 years using the 2006-2015 Spanish national hospital discharge database. Cases are identified by an ICD-9-CM coding strategy. Primary endpoints are incidence and in-hospital mortality. Trends are assessed for annual percentage change (AAPC) in rates using Joinpoint regression models. Results: 28,351 cases are identified, representing 3.06‱ of all-cause hospitalisations and a crude incidence of 16.4 cases/100,000 population aged 20-44. The mean age is 36 years, 58% of cases are men, and around 60% have associated comorbidities. Seen in one third of cases, the source of infection is respiratory. Single organ dysfunction is recorded in 45% of cases. In-hospital mortality is 24% and associated with age, comorbidity and extent of organ dysfunction. Incidence rates increase over time in women (AAPC: 3.8% (95% CI: 2.1, 5.5)), whereas case-fatality decline with an overall AAPC of -5.9% (95% CI -6.6, -5.2). Our results indicate that sepsis is common in young adults and associated with high in-hospital mortality, though it shows a decreasing trend. The substantial increase in incidence rates in women needs further research. abstract_id: PUBMED:36060388 Temporal Trends, Predictors, and Outcomes of Disseminated Intravascular Coagulation in Hospitalizations With Sepsis. Background This retrospective study was conducted to analyze the temporal trends, predictors, and impact of disseminated intravascular coagulation (DIC) on outcomes among septicemic patients using a nationally representative database. Methods We derived data from the National Inpatient Sample (NIS) for the years 2008-2017 for adult hospitalizations due to sepsis. The primary outcomes were in-hospital mortality and discharge to facility. The Cochran-Armitage test and multivariable survey logistic regression models were used to analyze the data. Results Out of 12,820,000 hospitalizations due to sepsis, 153,181 (1.18%) were complicated by DIC. The incidence of DIC decreased from 2008 to 2017. In multivariable regression analysis, demographics and comorbidities were associated with higher odds of DIC. During the study period, in-hospital mortality among patients with sepsis decreased, but the attributable risk percent of in-hospital mortality due to DIC increased. We observed similar trends for discharge to facility; however, the adjusted odds of discharge to facility due to DIC remained stable over the study period. Conclusion Although the incidence of sepsis complicated by DIC decreased, the attributable in-hospital mortality rate due to DIC increased during the study period. We identified several predictors associated with the development of DIC in sepsis, some of which are potentially modifiable. abstract_id: PUBMED:34242670 Neonatal Birth Trauma: Analysis of Yearly Trends, Risk Factors, and Outcomes. Objective: To evaluate the trends, proportions, risk factors, resource utilization, and outcomes of neonatal birth trauma in the US. Study Design: This cross-sectional study of in-hospital births used the Nationwide Inpatient Sample for 2006-2014. We divided the cases by type of birth trauma: scalp injuries and major birth trauma. Linear regression for yearly trends and logistic regression were used for risk factors and outcomes. A generalized linear model was used, with a Poisson distribution for the length of stay and a gamma distribution for total spending charges. Results: A total of 982 033 weighted records with neonatal birth trauma were found. The prevalence rate increased by 23% from (from 25.3 to 31.1 per 1000 hospital births). Scalp injuries composed 80% of all birth traumas and increased yearly from 19.87 to 26.46 per 1000 hospital births. Major birth trauma decreased from 5.44 to 4.67 per 1000 hospital births due to decreased clavicular fractures, brachial plexus injuries, and intracranial hemorrhage. There were significant differences in demographics and risk factors between the 2 groups. Compared with scalp injuries, major birth trauma was associated with higher odds of hypoxic-ischemic encephalopathy, seizures, need for mechanical ventilation, meconium aspiration, and sepsis. Length of stay was increased by 56%, and total charges were almost doubled for major birth trauma. Conclusions: Neonatal birth trauma increased over the study period secondary to scalp injuries. Major birth trauma constitutes a significant health burden. Scalp injuries are also associated with increased morbidity and might be markers of brain injury in some cases. abstract_id: PUBMED:32148931 Sepsis trends: increasing incidence and decreasing mortality, or changing denominator? Numerous studies suggest that the incidence of sepsis has been steadily increasing over the past several decades while mortality rates are falling. However, reliably assessing trends in sepsis epidemiology is challenging due to changing diagnosis and coding practices over time. Ongoing efforts by clinicians, administrators, policy makers, and patient advocates to increase sepsis awareness, screening, and recognition are leading to more patients being labeled with sepsis. Subjective clinical definitions and heterogeneous presentations also allow for wide discretion in diagnosing sepsis rather than specific infections alone or non-specific syndromes. These factors create a potential ascertainment bias whereby the inclusion of less severely ill patients in sepsis case counts over time leads to a perceived increase in sepsis incidence and decrease in sepsis mortality rates. Analyses that rely on administrative data alone are further confounded by changing coding practices in response to new policies, financial incentives, and efforts to improve documentation. An alternate strategy for measuring sepsis incidence, outcomes, and trends is to use objective and consistent clinical criteria rather than administrative codes or registries to identify sepsis. This is feasible using data routinely found in electronic health record systems, such as blood culture draws and sustained courses of antibiotics to identify infection and laboratory values, vasopressors, and mechanical ventilation to measure acute organ dysfunction. Recent surveillance studies using this approach suggest that sepsis incidence and mortality rates have been essentially stable over the past decade. In this review, we summarize the major epidemiologic studies of sepsis trends, potential biases in these analyses, and the recent change in the surveillance paradigm toward using objective clinical data from electronic health records to more accurately characterize sepsis trends. abstract_id: PUBMED:29943059 National trends of endoscopic retrograde cholangiopancreatography utilization and outcomes in decompensated cirrhosis. Background: Endoscopic retrograde cholangiopancreatography (ERCP) can be challenging in patients with decompensated cirrhosis (DC) due to increased risk of adverse events related to liver dysfunction. Limited data exist regarding its national utilization in patients with DC. We aim to determine the trends in utilization and outcomes of ERCP among patients with DC in US hospitalizations. Methods: We identified hospitalizations undergoing ERCP (diagnostic and therapeutic) between 2000 and 2013 from the National Inpatient Sample (NIS) database and used validated ICD9-CM codes to identify DC hospitalizations. We utilized Cochrane-Armitage test to identify changes in trends and multivariable survey regression modeling for adjusted odds ratios (aOR) for adverse outcomes and mortality predictors. Results: There were 43782 cases of ERCPs performed in DC patients during the study period. Absolute number of ERCPs performed in this population from 2000 to 2013 showed an upward trend; however, the proportion of DC patients undergoing ERCP remained stable. We noted significant decrease in utilization of diagnostic ERCP and an increase of therapeutic ERCPs (P &lt; 0.01). There was a significant decrease in the mean length of stay for DC patients undergoing ERCP from 8.2 days in 2000 to 7.2 days in 2013 (P &lt; 0.01) with an increase in the mean cost of hospitalization from $17053 to $19825 (P &lt; 0.001). Mortality rates showed a downward trend from 2000 to 2013 from 13.6 to 9.6% (P &lt; 0.01). Increasing age, Hispanic race, diagnosis of hypertension and diabetes mellitus, and private insurance were related to adverse discharges(P &lt; 0.01). Increasing age, presence of hepatic encephalopathy, and sepsis were associated with higher mortality (P &lt; 0.01). Conclusions: There is an increasing trend in therapeutic ERCP utilization in DC hospitalizations nationally. There is an overall decrease in mortality in DC hospitalizations undergoing ERCP. This improvement in mortality suggests improvement in both procedural technique and peri-procedural care as well as overall decreasing mortality in cirrhosis. abstract_id: PUBMED:35003948 Epidemiology and Outcomes of Hospitalizations Due to Hepatocellular Carcinoma. Background Hepatocellular Carcinoma (HCC) is a severe complication of cirrhosis and the incidence of HCC has been increasing in the United States (US). We aim to describe the trends, characteristics, and outcomes of hospitalizations due to HCC across the last decade. Methods We derived a study cohort from the Nationwide Inpatient Sample (NIS) for the years 2008-2017. Adult hospitalizations due to HCC were identified using the International Classification of Diseases (9th/10th Editions) Clinical Modification diagnosis codes (ICD-9-CM/ICD-10-CM). Comorbidities were also identified by ICD-9/10-CM codes and Elixhauser Comorbidity Software (Agency for Healthcare Research and Quality, Rockville, Maryland, US). Our primary outcomes were in-hospital mortality and discharge to the facility. We then utilized the Cochran-Armitage trend test and multivariable survey logistic regression models to analyze the trends, outcomes, and predictors. Results A total of 155,436 adult hospitalizations occurred due to HCC from 2008-2017. The number of hospitalizations with HCC decreased from 16,754 in 2008 to 14,715 in 2017. Additionally, trends of in-hospital mortality declined over the study period but discharge to facilities remained stable. Furthermore, in multivariable regression analysis, predictors of increased mortality in HCC patients were advanced age (OR 1.1; 95%CI 1.0-1.2; p&lt; 0.0001), African American (OR 1.3; 95%CI 1.1-1.4;p&lt; 0.001), Rural/ non-teaching hospitals (OR 2.7; 95%CI 2.4-3.3; p&lt; 0.001), uninsured (OR 1.9; CI 1.6-2.2; p&lt; 0.0001) and complications like septicemia and pneumonia as well as comorbidities such as hypertension, diabetes mellitus, and renal failure. We observed similar trends in discharge to facilities. Conclusions In this nationally representative study, we observed a decrease in hospitalizations of patients with HCC along with in-hospital mortality; however, discharge to facilities remained stable over the last decade. We also identified multiple predictors significantly associated with increased mortality, some of which are potentially modifiable and can be points of interest for future studies. abstract_id: PUBMED:33225103 Time Trends of the Outcomes and Treatment Options for Disseminated Intravascular Coagulation: A Nationwide Observational Study in Japan. Introduction: Existing evidence on the mortality time trends of patients with disseminated intravascular coagulation (DIC) is limited, and whether the mortality trend or quality of care of DIC patients has improved remains unknown. This study aimed to investigate the temporal trend in mortality, patient outcomes, and treatment preferences of several anticoagulants in Japan. Methods: This retrospective observational study used the Japanese Diagnosis Procedure Combination inpatient database, which contains data from more than 1200 acute-care hospitals in Japan. We identified all adult patients that were diagnosed with DIC from July 2010 to March 2018 and sorted them into one of five predefined underlying conditions: sepsis, solid cancer, leukemia, trauma, or obstetric. The data collected as general outcomes were the 28-day mortality and major bleeding events. We also evaluated anticoagulant use for DIC treatment. Results: A total of 325,327 DIC patients were included in this study. Regarding the baseline characteristics, an increase in median age, worsened comorbid conditions, and higher illness severity were observed over time. The underlying conditions for DIC were largely unchanged. Over the study period, the 28-day mortality for overall DIC patients decreased from 41.8% (95% CI 41.2%-42.3%) to 36.1% (95% CI 35.6%-36.6%), which is a 14% decrease over the 8-year period (Ptrend &lt; 0.001). The downward trend in mortality was more evident in patients with sepsis and leukemia (15% and 14% decreases, respectively), whereas no clinically meaningful change in mortality occurred in trauma and obstetrics patients. Over time, major bleeding events modestly increased, and the length of hospital stay decreased. The temporal trend in the treatment preferences of anticoagulants for DIC patients clearly changed over time. Conclusions: The overall 28-day mortality for DIC patients clearly decreased from 2010 to 2017. The downward trend in mortality might have resulted from the advances made in the fundamental treatment of underlying diseases and from the changes in anti-DIC strategies. Answer: The trends in postoperative sepsis and outcomes have shown some improvement over time. According to a nationwide population-based study in Spain, the incidence rates of sepsis have increased over time in women, but the case-fatality rates have declined with an overall annual percentage change of -5.9%, indicating a decreasing trend in in-hospital mortality (PUBMED:31892221). Similarly, a study analyzing the temporal trends, predictors, and impact of disseminated intravascular coagulation (DIC) on outcomes among septicemic patients in the US found that while the incidence of sepsis complicated by DIC decreased, the attributable in-hospital mortality rate due to DIC increased during the study period (PUBMED:36060388). This suggests that although the incidence of certain complications like DIC may be decreasing, the severity of cases that do occur may be higher, leading to increased mortality associated with these complications. Another study examining national trends of endoscopic retrograde cholangiopancreatography (ERCP) utilization and outcomes in decompensated cirrhosis found a decreasing trend in mortality rates from 2000 to 2013 (PUBMED:29943059). This improvement in mortality suggests advancements in both procedural technique and peri-procedural care, as well as overall decreasing mortality in cirrhosis. Furthermore, a study on the epidemiology and outcomes of hospitalizations due to hepatocellular carcinoma (HCC) observed a decrease in hospitalizations of patients with HCC along with in-hospital mortality; however, discharge to facilities remained stable over the last decade (PUBMED:35003948). This indicates that while in-hospital mortality for HCC has decreased, the need for post-discharge care has not significantly changed. Lastly, a nationwide observational study in Japan on the outcomes and treatment options for DIC showed a clear decrease in the 28-day mortality for DIC patients from 2010 to 2017 (PUBMED:33225103). This downward trend in mortality might result from advances in the fundamental treatment of underlying diseases and changes in anti-DIC strategies. In summary, the evidence suggests that there have been improvements in the outcomes of postoperative sepsis, with a general trend of decreasing mortality rates. However, the incidence of sepsis and its complications, such as DIC, may vary, and the need for post-discharge care remains an area for further improvement.
Instruction: Is ECG-guidance a helpful method to correctly position a central venous catheter during prehospital emergency care? Abstracts: abstract_id: PUBMED:16045664 Is ECG-guidance a helpful method to correctly position a central venous catheter during prehospital emergency care? Background: Insertion of a central venous catheter (CVC) in an emergency situation is challenging and may be potentially associated with more complications. Because CVC positioning by ECG-guidance may help to decrease the frequency of a malpositioned catheter, we decided to prospectively evaluate the usefulness of positioning a CVC by ECG-guidance during prehospital emergency care. Methods: Prospective observational study during which all patients requiring CVC placement during prehospital care were included. We compared two periods of 1 year during which CVCs were inserted without and then with the help of ECG-guidance. Results: Eighty successive patients were included. We observed a significant reduction of incorrectly positioned CVCs with ECG-guidance (13% vs. 38%, P &lt; 0.05) and a decreased number of chest X-rays needed to verify the position of the CVC (40 vs. 54, P &lt; 0.05). Conclusion: ECG-guidance is a safe and feasible technique which significantly improved the rate of CVCs correctly positioned during prehospital emergency care. abstract_id: PUBMED:32935610 Central Venous Catheter Insertion in the Prone Position-A Last Resort in Critically Ill COVID-19 Patients. Background: In the setting of the COVID pandemic, many patients falling ill with acute respiratory distress syndrome eventually require prone positioning for gas exchange. Traditionally, central venous catheters are inserted with patient in the supine or Trendelenburg position. However, when a patient cannot tolerate supine position and the need for central venous access is urgent, catheter placement may be considered with the patient in the prone position. Case Summary: A 69-year-old male with rapidly declining respiratory status secondary to COVID pneumonia quickly developed acute respiratory distress syndrome, was rapidly intubated, and then placed in the prone position. Patient could not tolerate the supine position even briefly and required a central venous catheter insertion for continuous renal replacement therapy. We kept the patient in the prone position and successfully inserted a central venous catheter in such position with real-time ultrasound guidance and using micropuncture technique. Conclusion: In the setting of the COVID pandemic, many cases of acute respiratory distress syndrome require patients to be prone in order to improve gas exchange. In the most severe situations, these patients would not be able to tolerate rotating back to the supine position but would still require central venous catheter insertion urgently. We demonstrated feasibility of central venous catheter insertion in the prone position in these severely ill patients. abstract_id: PUBMED:25885836 Optimum depth of central venous catheter - Comparision by pere's, landmark and endocavitory (atrial) ECG technique: A prospective study. Context: Blind insertion of central venous catheter has many implications. Better options should be sought to perform this procedure. Aim: To evaluate various options for positioning central venous catheter tip. Settings And Design: This is institutional based randomized prospective controlled study. Materials And Methods: In this prospective study depth and position of central venous catheter were evaluated in 150 patients in intensive care unit. Three different methods: Pere's, landmark, and endocavitory (atrial) ECG control were used. Statistical Analysis: Twoway ANOVA test was applied on SPSS version 16 to test the significant difference between the three groups. Results: Patient characteristic and demographic data were similar in the three groups. The average depth of central venous catheter by Pere's, landmark, and endocavitory (ECG) technique were 14.20 ± 0.69 cm, 12.08 ± 0.98 cm, and 8.18 ± 0.74 cm, respectively. Conclusion: The correct position of central venous catheter by endocavitory (atrial) ECG appears not only to reduce the procedure related complications but also post procedure manipulation of catheter tip detected by post procedure chest X-ray. abstract_id: PUBMED:26112635 The Ultrasound-Only Central Venous Catheter Placement and Confirmation Procedure. The placement of a central venous catheter remains an important intervention in the care of critically ill patients in the emergency department. We propose an ultrasound-first protocol for 3 aspects of central venous catheter placement above the diaphragm: dynamic procedural guidance, evaluation for pneumothorax, and confirmation of the catheter tip location. abstract_id: PUBMED:9379762 Central venous catheter location by endocavitary ECG: an alternative to chest radiography Background: The location of a central venous catheter is checked with a chest radiograph. Right endocavitary electrocardiography (ECG), a technique in which the catheter is connected to the ECG recorder wire to record ECG in DII, might be an alternate method to check the position of the catheter. When the catheter enters the right atrium, there is a large increase in the amplitude of P wave. The aim of this study was to compare the efficacy of endocavitary ECG against the chest radiography as a method to check the position of a central catheter, and to determine whether the ECG method can replace the radiological method. Material And Methods: One-hundred central catheters which location was checked by ECG and by chest X-ray entered the study. Efficacy of endocavitary ECG, delay between catheter insertion and radiological check, and the cost of the two methods were studied. Results: Endocavitary ECG determined that de catheter was in a correct position in 80 cases and malpositioned in 20 cases. ECG check agreed with radiological check in 93 catheters. There were 4 false-positive and 3 false-negative cases. The efficacy of endocavitary ECG was 93% (sensitivity: 82%; specificity: 81%; positive predictive rate: 95%; negative predictive rate: 85%). Catheter location was checked immediately on insertion by ECG in all cases, while radiological check was delayed 32 +/- 25 min in the emergency department and 68 +/- 206 min in the admitted patients. The cost of the ECG method was less than that of the radiological method. Conclusions: The check of the position of a central venous catheter by endocavitary ECG is taster, cheaper, and of similar efficacy to the radiological method. Endocavitary ECG can replace chest X-rays to check the position of the catheter in more than 90% of cases. abstract_id: PUBMED:19318958 The use of central venous catheters during emergency prehospital care: a 2-year experience. Objective: Evaluation of the efficacy and safety of central venous catheter (CVC) use during prehospital emergency care. Methods: All prehospital patients who underwent CVC positioning by emergency medical services physicians in the greater area of Thessaloniki during a 2-year period were included. A two-lumen indwelling polyurethane 8F catheter was inserted using the Seldinger technique in all cases. Patients' demographics and underlying diseases, site of access, number of attempts, time spent for catheter placement, and insertion-related complications were recorded. Results: Four hundred and ninety-seven CVCs were inserted by emergency medical services physicians during the study period in patients with various underlying diseases [cardiac arrest (35.4%), other cardiac emergencies (16.3%), trauma (30.0%), coma (7.7%)]. Subclavian and internal jugular veins were accessed in 55.3 and 44.15% of patients, respectively. The mean number of attempts was 1.3 and the mean time spent for insertion was 2.0+/-0.5 min. Eleven (2.2%) hematomas at the insertion site of minor clinical importance and five (1.0%) uncomplicated arterial punctures were found. All of the 378 patients referred alive for admission in hospitals after prehospital resuscitation had radiological detection of their CVCs. Catheter malposition occurred in 11 (2.9%) cases. Three pneumothoraces were also detected (0.8%), but only one required chest tube placement. Conclusion: Insertion of CVCs during prehospital emergency care is effective in providing intravenous access, thus facilitating the delivery of fluids and medications in unstable patients. It is safe, as associated with a low incidence of complications in experienced hands. abstract_id: PUBMED:27331170 An Evaluation of Complications in Ultrasound-Guided Central Venous Catheter Insertion in the Emergency Department. Objectives: In emergency departments, emergency physicians frequently have to perform central venous access. In cases where peripheral venous access is not possible, central venous access is required for dialysis, fulfillment of urgent fluid need, or central venous pressure measurement. This study was carried out to evaluate the emergence of complications in the process of and in the 15 days following the insertion of central venous catheter under ultrasound guidance in the emergency department. Methods: For this study, patients who presented to the emergency department over a period of eight months with an urgent need for central catheter were examined prospectively. Age, gender, and accompanying diseases of patients as well as the type, time, duration, and indication of the venous access were recorded. Furthermore, the amount of experience of the physician was taken into consideration. Results: In the emergency department, physicians performed ultrasound-guided central venous catheter insertion for 74 patients (40 men and 34 women). For access, internal jugular vein was used in 65 (87.8%) patients, and femoral vein was used in 9 (12.2%) patients. The reason for access was urgent dialysis need in 55 (74.3%), CVP measurement in 3 (4.1%), fluid support due to severe hypovolemia in 6 (8.1%), and difficulty of peripheral venous access in 10 (13.5%) patients. None of the patients developed complications in the process of or after the insertion. Patients did not have infections related to the catheter in 15 days following the insertion. Conclusions: Central venous access is frequently required in emergency departments. The risk of complication is little if any in ultrasonographyguided access carried out under appropriate conditions. abstract_id: PUBMED:31303709 Bedside prediction of the central venous catheter insertion depth - Comparison of different techniques. Background And Aims: Central venous catheterization is a frequently performed procedure in anesthesia and critical care, and is indispensable in the practice of emergency medicine. Correct positioning of the central venous catheter (CVC) tip is often regarded as a secondary goal and there are various complications that can occur due to abnormal position of the catheter tip. Different methods have been advocated to guide accurate prediction of optimal CVC depth insertion before or during the procedure at the bedside. Material And Methods: A prospective randomized double blinded study was conducted in 180 patients aged between 18 to 65 years requiring central venous catheterization. The optimal depth of insertion of right internal jugular vein (IJV) catheter using three different techniques, Peres' formula method, Landmark technique and Intra atrial Electrocardiography (ECG) guided technique was performed and the three techniques were compared with respect to optimal positioning using carina as a landmark in post procedural chest radiograph. Correct position of the catheter tip was considered upto 1 cm above or below the carina in post procedure X ray. Results: The average final depth of insertion was 15.30 ± 0.62 cms in the Formula group, 12.74 ± 0.77 cms in landmark group and 12.64 ± 0.70 cms in ECG group. The vertical distance from carina was 0.91 ± 0.94 cms in formula group, 0.54 ± 0.67 cms in landmark group and 0.53 ± 0.43 cms in ECG group. The CVC tip was properly positioned within 1 cm above and below the carina in 58.33% patients in the formula group, 93.33% patients in landmark group and 96.67% patients in ECG group. Conclusion: We conclude that both landmark guidance and ECG guidance are comparable with regard to accurate central venous catheter tip positioning when CVCs are placed through right internal jugular vein whereas formula based technique is least accurate and results in over insertion of CVCs. abstract_id: PUBMED:33827330 Cannulation of the popliteal vein as an intraoperative emergency access in prone position: A case report. Many surgeries have to be performed in the prone position. Access to central vessels is very limited in this position. This requires forward planning with regard to the placement of a central venous catheter. Emergency situations or unexpected intraoperative events in prone position are challenging, as a conventional central venous access is not possible. We describe the cannulation of the popliteal vein in such a case. We report the case of a patient with severe intracerebral hemorrhage who was transferred to our emergency department where his clinical condition rapidly deteriorated. Due to the surgical approach, a prone position was necessary and preoperative central venous cannulation was not possible. We therefore performed an alternative vascular access in prone position using a ultrasound-guided cannulation of the popliteal vein. The cannulation of the popliteal vein described here can be carried out as a quick and easy feasible alternative in emergency situations in prone position. The ultrasound-guided catheter insertion succeeded without any problems and there was no evidence of a deep vein thrombosis after the intervention. abstract_id: PUBMED:29529967 Central venous catheterization for acute trauma resuscitation: Tip position analysis using routine emergency computed tomography. Purpose: Central venous catheter insertion for acute trauma resuscitation may be associated with mechanical complications, but studies on the exact central venous catheter tip positions are not available. The goal of the study was to analyze central venous catheter tip positions using routine emergency computed tomography. Methods: Consecutive acute multiple trauma patients requiring large-bore thoracocervical central venous catheters in the resuscitation room of a university hospital were enrolled retrospectively from 2010 to 2015. Patients who received a routine emergency chest computed tomography were analyzed regarding central venous catheter tip position. The central venous catheter tip position was defined as correct if the catheter tip was placed less than 1 cm inside the right atrium relative to the cavoatrial junction, and the simultaneous angle of the central venous catheter tip compared with the lateral border of the superior vena cava was below 40°. Results: During the 6-year study period, 97 patients were analyzed for the central venous catheter tip position in computed tomography. Malpositions were observed in 29 patients (29.9%). Patients with malpositioned central venous catheters presented with a higher rate of shock (systolic blood pressure &lt;90 mmHg) at admission (58.6% vs 33.8%, p = 0.023) and a higher mean injury severity score (38.5 ± 15.7 vs 31.6 ± 11.8, p = 0.041) compared with patients with correctly positioned central venous catheter tips. Logistic regression revealed injury severity score as a significant predictor for central venous catheter malposition (odds ratio = 1.039, 95% confidence interval = 1.005-1.074, p = 0.024). Conclusion: Multiple trauma patients who underwent emergency central venous catheter placement by experienced anesthetists presented with considerable tip malposition in computed tomography, which was significantly associated with a higher injury severity. Answer: Yes, ECG guidance is a helpful method to correctly position a central venous catheter during prehospital emergency care. A prospective observational study found that the use of ECG guidance significantly reduced the rate of incorrectly positioned central venous catheters (CVCs) from 38% to 13% and also decreased the number of chest X-rays needed to verify the position of the CVC from 54 to 40 (PUBMED:16045664). This suggests that ECG guidance is a safe and feasible technique that improves the correct positioning of CVCs in the prehospital emergency setting.
Instruction: Giving parents written information about children's anesthesia: are setting and timing important? Abstracts: abstract_id: PUBMED:15960637 Giving parents written information about children's anesthesia: are setting and timing important? Background: Research indicates that parents wish to receive more information and are anxious about anesthesia prior to their child's surgery. Methods: A pilot randomized controlled trial was conducted to ascertain if parents who received a general anesthesia information leaflet by post 1 week following their presurgical assessment clinic visit (n = 19) had greater knowledge, reduced anxiety, or greater satisfaction with anesthetic information on the day of surgery than parents who received the leaflet at the clinic visit (n = 21). Results: Parents had important knowledge deficits on the day of surgery, despite receiving the leaflet in addition to standard verbal communication. Parents who were given the leaflet at the presurgical assessment clinic had a modest 10% increase in knowledge compared with parents who received the leaflet by post. However, parents who received the leaflet within 2 weeks for their child's surgery also had 10% greater knowledge, regardless of the method of delivery. There were no differences in parental anxiety or satisfaction with the information related to method or timing of delivery. Parents appreciated receiving written information and a slight majority of parents (56%) preferred to receive the information at the presurgical assessment clinic visit. Conclusions: Parents have unmet information needs related to children's anesthetic care. Written information may improve parent knowledge and enhance satisfaction, but the setting and timing of information delivery are also important to consider. abstract_id: PUBMED:25899617 Parents of children with disabilities in Kuwait: a study of their information seeking behaviour. Background: Parents of children with disabilities desperately seek information regarding their children's conditions because of the high stakes involved. Objectives: This study investigates the information needs of parents in Kuwait with special needs children during and after their children's diagnoses. Understanding their information seeking behaviour by identifying their information sources and information seeking barriers will assist librarians and other information professionals in meeting these important information needs. Methods: A survey was conducted by means of questionnaires administered to 240 participants at a school for children with special needs. The data were analysed using nonparametric Mann-Whitney and Kruskal-Wallis tests. Results: Most parents needed information at the time of diagnosis, with information about educating the children having the highest mean. Doctors and physicians were the most preferred information sources, followed by books. Online support groups and social media applications were least desirable as information sources. Lack of Arabic resources was identified as the greatest information seeking barrier, followed by lack of information to help parents cope with their child's disability. Conclusions: Information sources and services for Kuwaiti parents of disabled children need further development and improvement. Librarians and other information professionals can assist by providing parents with information appropriate to their stage in understanding the child's diagnosis and education. abstract_id: PUBMED:33300661 Describing pre-appointment written materials as an intervention in the context of children's NHS therapy services: A national survey. Context: Pre-appointment written materials, including letters and leaflets, are commonly used by healthcare organisations to deliver professional-patient interactions. The written materials potentially change patients' knowledge and behaviour as part of a healthcare intervention but have received little investigation. Objective: To describe the content of pre-appointment written materials through a behaviour change intervention perspective. Design: Mixed methods study with an online questionnaire about pre-appointment written materials and an analysis of actual materials. Questionnaire data were analysed descriptively and pre-appointment materials by qualitative framework analysis. Setting And Participants: Children's community/outpatient occupational therapy, physiotherapy and/or speech and language therapy services across the UK. Service managers/clinical leads provided data. Intervention: Pre-appointment written materials. Results: Questionnaire responses were received from n = 110 managers/clinical leads from n = 58 NHS organisations. Written materials (n = 64) were received from n = 24 organisations. Current materials are used by therapy services as a conduit to convey the therapy service's expectations related to: accessing the service, decision-making about care and help-giving. The materials enrol the parent and child to the therapy services' expectations by behaviour change techniques. The materials configure the parent/child expectations, knowledge and behaviour towards the therapy services' operational procedures. Conclusion: Pre-appointment written materials configure patients to organisations' operational procedures. The written materials currently lack support for parent/child empowerment, shared decision-making and self-management to improve health. Patient Contribution: Four parents of children accessing therapy services were involved in the study. The parents shared their experiences to highlight the importance of the topic and contributed to the final research design and methods. abstract_id: PUBMED:38147711 The effect of written document in perioperative information on the anxiety level and family-centered care of parents of children undergoing ambulatory surgery: A randomized controlled trial. Purpose: To evaluate the effect of preoperative information based on written documents on anxiety levels and the family-centered care of parents of pediatric patients who had ambulatory surgery. Design And Methods: It is a randomized controlled study. Parents were randomly divided into two groups as intervention (n = 30) and control (n = 30). Parents in the intervention group were given written document and verbal information the day before the surgery. A brochure was prepared in accordance with the verbal information describing the perioperative process as a written document. The control group was given only verbal information. Parental State Anxiety Inventory (SAI) and Family Centered Care Assessment Scale (FCCAS) were assessed before and within 1-2 h after surgery, with verbal information based on written documentation. Data were evaluated with Student's t-test for dependent and independent groups and mixed design ANOVA test for time×group interaction. Partial eta square (η2) was calculated for the effect size. Results: There was a significant difference between the post-intervention pretest and posttest SAI and FCCAS scores of the parents in the intervention and control groups (p &lt; 0.05). Time group interactions anxiety and family-centered care (p &lt; 0.001) scores had a significant and large effect size. Conclusion: Verbal information supported by written documentation before pediatric ambulatory surgery can reduce parental SAI and increase parental FCCAS more than standard care. Practice Implications: Providing written document-based information to parents before surgery may be beneficial in reducing SAI and increasing FCCAS. The trial was registered in ClinicalTrials.gov (identifier: NCT05668416). abstract_id: PUBMED:32573937 Information seeking behaviour of dental trauma patients and their parents. Background And Aim: Dental trauma is a common public health issue that can be associated with high costs and lifelong treatment. Children and parents should have the correct information regarding its management to reduce the potential long-term sequelae. Understanding their information seeking behaviour is therefore important to ensure that they are able to access, understand and retain the relevant information. The aim of this study was to investigate (a) the different ways that patients undertaking treatment for dental trauma, and their parents, look for relevant information, (b) their information needs, and (c) their preferred format of information. Material And Methods: This was a two-phase study: Phase I involved in-depth interviews with 10 patients who were currently undergoing treatment for dental trauma, and 11 parents. In phase II, the themes and sub-themes identified from the interviews were used to develop patient and parent questionnaires, which were distributed to all trauma patients and parents who fulfilled the inclusion criteria and who were attending the dental clinics from March to May 2014. Results: Patients and parents had different concerns immediately following the trauma, during treatment and in the long term, and they wanted different forms of information to answer their concerns. Reliability was the main issue about trusting the information available in the Internet, social media and other online sources. Therefore, patients preferred to obtain information verbally from their dentist, while parents preferred written information. Conclusion: Patients and parents have different information needs, and these also vary at different stages of treatment. It is therefore important to customize the way information is provided for each group accordingly. abstract_id: PUBMED:37799529 Taiwanese parents' perspectives on young children's use of information communication technology. Introduction: How parents think and feel about their children's use of technology can influence how their kids behave online. The family's socioeconomic status (SES) may also affect this influence. In light of this, this research emphasizes the need for more investigation into parental attitudes and the role of SES in shaping how children consume media. Methods: This study surveyed 629 Taiwanese parents to explore their attitudes toward their young children's use of information communication technology (ICT), usage patterns, and the interplay with socioeconomic status. Results: The findings revealed a significant disconnect: although approximately 50% of parents considered above six years old to be a suitable age for children to start ICT, over 80% of children had already engaged with ICT before that age, indicating a large disparity between parental expectations and actual initiation. Furthermore, parents highlighted "learning interest" and "various content" as the most positive impacts of children's ICT use, while "addiction and overreliance" emerged as their primary concern. Notably, parents, as a whole, tended to perceive their child's ICT use more negative than positively, with fathers displaying greater acceptance of negative viewpoints than mothers. Parental attitudes toward children's ICT use were categorized into five clusters, ranging from balanced and optimistic views to value emphasis, conservatism, and negative doubts. This classification underscores the intricate and multifaceted nature of parental perspectives, encompassing both positive and negative outlooks on children's ICT utilization. Discussion: The findings underscore the nuanced character of parents' attitudes toward technology, shaped by the intricacies and challenges posed by the digital era. These insights emphasize that parental attitudes go beyond a simplistic positive-negative divide, reflecting a comprehensive response to the opportunities and complexities inherent in the digital age. abstract_id: PUBMED:16784504 Keeping It Together: an information KIT for parents of children and youth with special needs. Background: Provision and use of information about their children is a major concern to families who have a child with a disability. Strategies or systems to influence parents' abilities to receive, give and use information in a way that is satisfying to them have not been well validated. Methods: This paper reports on the development and evaluation of a Parent Information KIT ('Keeping It Together') information management system and child advocacy tool. The KIT was designed to assist parents in giving, getting and organizing information in an effective way. The KIT was evaluated by a sample of 440 parents, with outcomes of use, utility, impact and perceptions of family-centred care measured at baseline, and after 6 months, and 15 months. Results: Parents' perceptions of their ability, confidence and satisfaction when using information improved significantly after using the KIT. Parents' perceptions of care, as measured with the Measure of Processes of Care, also improved significantly in the areas of 'Enabling/Partnership' and 'Providing General Information'. Parents' ratings of the use and utility of the KIT were associated with increased impact and, in turn, predicted improved perceptions of care. Conclusion: Parents who received and actively used the Parent Information KIT experienced significant increases in their perception of their ability and self-confidence in getting, giving and using information to assist their child with a disability. Strategies for improving the use of the KIT among parents are discussed. abstract_id: PUBMED:27914304 Information seeking by parents of children with physical disabilities: An exploratory qualitative study. Background: Evidence suggests that parents of children with disabilities feel that not all their information needs are being met, but it remains unclear how parents try to fill these information gaps. Aims: The aim of this study is to describe how parents of children with physical disabilities search for and evaluate information. Methods: Qualitative semi-structured interviews were conducted with 15 parents of children with a disability, aged 1.5-21 years. Data were analysed using thematic analysis. Results: There was much variation in information needs between parents. Parents used different sources, depending on the type of information needed, the most important being healthcare professionals, peers, and websites. Peers played an important role in information provision and were the preferred source of experience-based knowledge and support. The Internet is a widely used medium to search for information and to access various sources. There was a general preference for closed Internet communities for peer contact. Information was commonly evaluated by comparing sources. Conclusions And Implications: Parents use different sources for different information needs, and evaluate information by comparing them. Healthcare professionals and parents can support each other in locating and evaluating information including experience-based knowledge. Healthcare professionals should guide parents in their search for information and experience-based knowledge from peers. abstract_id: PUBMED:22381915 Information sources used by parents to learn about medications they are giving their children. Background: Previous studies have indicated the use of medication by children to be dependent on the attitudes of knowledge of their parents; however, little is known about sources parents might use in driving medication use decisions. Objective: The aim of this study was to describe the information sources that parents use regarding their children's medication and their perceptions of the reliability of these information sources. Methods: This study is part of a cross-sectional population survey with a random sample of 6000 children younger than 12 years. The response rate of the questionnaire was 67%. Parents were asked about the use of information sources and their perception of reliability with a list of 16 information sources. Results: The information sources that parents reported having used were physicians (72%), patient information leaflets (PILs) (67%), public or school health nurses (52%), and pharmacists (44%). Regardless of the child's age, physicians were the parents' most-used information source. Physicians were the most-used information source also when the child had at least 1 of the 4 most common long-term diseases among children in Finland (asthma, diabetes, epilepsy, and juvenile idiopathic arthritis). Parents considered physicians (50%), PILs (31%), pharmacists (27%), nurses (20%), and public or school health nurses (17%) very reliable information sources. Conclusions: Finnish parents seek information from health care professionals and PILs when looking for information concerning their children's medication. Furthermore, they find health care professionals and PILs to be reliable information sources. abstract_id: PUBMED:30589952 Parents' experiences of videoconference as a tool for multidisciplinary information exchange for children with epilepsy and disability. Aims And Objectives: To explore the parents' experiences with the multidisciplinary information exchange between different levels of the health- social and school services using videoconference. Background: Children with epilepsy and disability are in need of long-term multidisciplinary help and support. The information exchange between the child's providers of support is challenging. Scant attention has been paid to the parents' experiences of the information exchange. Design: A qualitative study with a phenomenological-hermeneutical research approach. Method: Children with epilepsy and disability had undergone a multidisciplinary assessment in a hospital in the tertiary health service followed by information exchange with each child's local support service using videoconference. Five parents were interviewed individually within a week after the videoconference. The COREQ checklist was used in the process of reporting on the empirical material. Results: The increased local participation in the videoconference and the knowledge translation between the professionals made the parents save time, as they did not have to explain the written reports to each provider locally. The impact of technology created an emotional distance to the person on the screen and the information exchanged that some enjoyed while others disliked. The quality of the relationships to the professionals seemed to be crucial for the parents to feel safe talking about sensitive tasks. Conclusion: The parents prefer videoconference as a tool for information exchange to traditional face-to-face meetings as it enables discussions between professionals that might prevent treatment misunderstandings. They experience the method as time-saving and patient-centred. Relevance To Clinical Practice: Professionals should be aware of the possible side effects of the impact of technology that might affect the interaction and the information exchange in a negative way. The findings are likely to be transferable beyond the current context. Answer: Yes, the setting and timing of providing written information to parents about children's anesthesia are important. A pilot randomized controlled trial indicated that parents who received a general anesthesia information leaflet at the presurgical assessment clinic had a modest 10% increase in knowledge compared with parents who received the leaflet by post. Additionally, parents who received the leaflet within 2 weeks of their child's surgery had 10% greater knowledge, regardless of the method of delivery. However, there were no differences in parental anxiety or satisfaction with the information related to method or timing of delivery. A slight majority of parents (56%) preferred to receive the information at the presurgical assessment clinic visit. The study concluded that written information could improve parent knowledge and enhance satisfaction, but the setting and timing of information delivery should be considered (PUBMED:15960637). Moreover, another study on the effect of written document in perioperative information on the anxiety level and family-centered care of parents of children undergoing ambulatory surgery found that verbal information supported by written documentation before pediatric ambulatory surgery can reduce parental State Anxiety Inventory (SAI) and increase parental Family Centered Care Assessment Scale (FCCAS) more than standard care. This suggests that providing written document-based information to parents before surgery may be beneficial in reducing anxiety and increasing family-centered care (PUBMED:38147711).
Instruction: Does treatment with endoscopic third ventriculostomy result in less concern among parents of children with hydrocephalus? Abstracts: abstract_id: PUBMED:20428876 Does treatment with endoscopic third ventriculostomy result in less concern among parents of children with hydrocephalus? Purpose: A possible benefit of endoscopic third ventriculostomy (ETV) is that families might harbor less concern and anxiety compared to shunt. This has not yet been demonstrated, however. Our goal was to compare parental concern in a large sample of children with hydrocephalus treated with ETV or shunt, using our previously developed measure of parental concern, the Hydrocephalus Concerns Questionnaire for Parents (HCQ-P). Methods: The parents of children 5-18 years old with previously treated hydrocephalus at three Canadian pediatric neurosurgery centers completed the HCQ-P. HCQ-P scores were compared between those who were initially treated with ETV and those initially treated with shunt. A multivariable linear regression analysis was used to adjust for center, current age, age at initial hydrocephalus surgery, seizures, etiology, hydrocephalus complications, and quality of life. Results: Six hundred three families participated (58 ETV [9.6%], 545 shunt [90.4%]). In unadjusted comparison, ETV parents had lower overall concern (HCQ-P = 0.41 versus 0.51, p = 0.02). After adjustment for multiple patient factors, ETV parents still had lower concern (p = 0.03) but the only questions for which there was a still a statistically significant difference were those related to concerns about shunt/ETV complications. Conclusions: Parents of children who have had ETV experience less concern than those who have had shunt and this is due almost exclusively to less concern about hydrocephalus treatment complications. While this could be interpreted as a beneficial aspect of ETV treatment, it remains important for neurosurgeons to ensure that parents are not being overly complacent about the possibility of ETV failure requiring urgent treatment. abstract_id: PUBMED:30110690 Update on Endoscopic Third Ventriculostomy in Children. Endoscopic third ventriculostomy (ETV) provides a physiological restoration of cerebrospinal fluid and a shunt-free option for hydrocephalus children. Continuous developments in techniques and instruments have improved ETV as the first-line treatment. This paper focuses on the recent advances in surgical techniques, instruments, predictive models, imaging tools, and new cohort studies. The efficacy, safety, indications, and remaining challenges of ETV are discussed. More patients undergo ETV with a better outcome, identifying a new era of hydrocephalus treatment. Deeper understanding of ETV will improve a better shunt-free survival for pediatric hydrocephalus patients. abstract_id: PUBMED:26662270 Endoscopic third ventriculostomy in the treatment of idiopathic normal pressure hydrocephalus: a review study. Idiopathic normal pressure hydrocephalus is a hydrodynamic disorder whose etiology remains unclear. The diagnosis is mainly clinical and the traditional treatment is cerebrospinal fluid shunt diversion. With the introduction of modern management strategies, endoscopic third ventriculostomy has become a viable alternative to shunting and constitutes a well-established method of treatment for obstructive hydrocephalus. The new hydrodynamic concept of hydrocephalus suggests that endoscopic third ventriculostomy (ETV) may be an effective treatment for communicative hydrocephalus. In our current review, the authors focus on the up-to-date knowledge regarding the consideration of endoscopic third ventriculostomy as a safe surgical option in the management of idiopathic normal pressure hydrocephalus. abstract_id: PUBMED:37019302 Extra-axial Endoscopic Third Ventriculostomy: A Novel Treatment for Managing Hydrocephalus Due to Vertebrobasilar Dolichoectasia. Background: Vertebrobasilar dolichoectasia, a rare vascular anomaly, rarely presents with hydrocephalus. The traditional treatment for hydrocephalus is a ventriculoperitoneal shunt. Conventional endoscopic third ventriculostomy can avoid shunt-related complications but is considered risky due to the presence of the dolichoectatic vessel. A subfrontal extra-axial fenestration of the lamina terminalis can circumvent this anatomic constraint and establish cerebrospinal fluid communication between the third ventricle and subarachnoid space. Methods: We performed an extra-axial endoscopic third ventriculostomy to manage hydrocephalus due to vertebrobasilar dolichoectasia in a 26-year-old male. The clinical description, surgical technique, outcome, and rationale are described. Results: The patient had symptomatic improvement in his headaches and vision. There was also improvement in the postoperative ventricular indices: Evans index-19% reduction, frontal occipital horn ratio-14.1% reduction, and third ventricle index-39.5% reduction. A cine-phase magnetic resonance image showed cerebrospinal fluid flow void through the lamina terminalis fenestration, suggesting patency. Conclusions: Extra-axial endoscopic third ventriculostomy may be a suitable treatment alternative to circumvent anatomic constraints produced by vertebrobasilar dolichoectasia in performing conventional endoscopic third ventriculostomy. abstract_id: PUBMED:31158842 Role of Secondary Endoscopic Third Ventriculostomy in Children: Review of an Institutional Experience. Background: Endoscopic third ventriculostomy (ETV) has become a standard and safe procedure for obstructive hydrocephalus. ETV can also play an important role in children presenting with shunt malfunction with an added advantage of shunt independence. Secondary ETV can be defined as either a redo endoscopic ventriculostomy done after primary ETV stoma closure or that done in cases presenting with shunt malfunction. Objective: The aim of our study was to evaluate the role of secondary ETV in the pediatric age group patients. Methods: This is a retrospective analysis of 36 children (&lt;18 years) who underwent ETV after shunt malfunction and 4 children with ETV done after previous ETV stoma closure from 2004 until 2018. In all patients, the obstructive pattern suggesting aqueduct outflow obstruction was observed on MRI. Patients were followed up for a mean period of 4.25 years. Results: ETV was considered successful if the patient avoided a shunt insertion later on in their life. Considering this definition, a success rate of 72% was observed with secondary ETV for shunt malfunction whereas a success rate of 75% was observed after primary ETV failure without any major side effects in any of the patients. Conclusion: ETV can be considered a primary treatment modality in children with shunt malfunction and has a good success rate in cases presenting with closure of previously performed ETV stoma. abstract_id: PUBMED:16041551 Success of endoscopic third ventriculostomy in children less than 2 years of age. Current literature reveals different opinions about the effectiveness of endoscopic third ventriculostomy in the treatment of hydrocephalus in children less than 2 years of age. Performing a retrospective evaluation of our own experience in this age group, we aimed to contribute to the growing data on the controversial issues related to this procedure in children. In a series of 97 endoscopic third ventriculostomy procedures, 25 were performed in children less than 2 years of age as an initial treatment for hydrocephalus. A retrospective analysis of our data revealed that the overall success rate of endoscopic third ventriculostomy in this age group was 56%. However, analysis of the results in subgroups with different etiologies of hydrocephalus showed that the success rate of the procedure was 83% in patients with defined anatomic obstruction, 66.6% in post-hemorrhagic hydrocephalus, 50% in infection related hydrocephalus and 41.6% in hydrocephalus accompanied by myelomeningocele. This article considers our data and the features of endoscopic third ventriculostomy procedure in this age group, with a detailed review of the literature. In our experience, the success of endoscopic third ventriculostomy is etiology related rather than age-dependent. We suggest that there are no grounds for denying children younger than 2 years this chance for a shunt-free life. abstract_id: PUBMED:35733459 Failure of Endoscopic Third Ventriculostomy. Endoscopic third ventriculostomy (ETV) is an alternative to cerebrospinal fluid (CSF) shunting in the treatment of hydrocephalus. Careful patient selection is critical as patient age, etiology of hydrocephalus, and previous shunting have been shown to influence ETV success rates. Intraoperatively, patient anatomy and medical stability may prevent or limit the completion of the ventriculostomy procedure, and findings such as a patulous third ventricular floor or cisternal scarring may portend a lower chance of successful hydrocephalus treatment. Patients in whom a ventriculostomy is completed may still experience continued symptoms of hydrocephalus or CSF leak, representing an early ETV failure. In other patients, the ETV may prove a durable treatment of hydrocephalus for several months or even years before recurrence of hydrocephalus symptoms. The failure pattern for ETV is different than that of shunting, with a higher early failure rate but improved long-term failure-free survival rates. The risk factors for failure, along with the presentation and management of failure, deserve review. abstract_id: PUBMED:25282515 Treatment of child hydrocephalus by endoscopic third ventriculostomy in Senegal Introduction: Since its advent, endoscopic third ventriculostomy (ETV) has been an effective alternative to shunt placement for the treatment of hydrocephalus. The aim of this study was to report the results of our experience with this technique in children in Senegal. Materials And Methods: This was a retrospective study of 70 cases of children aged between 5 months to 15 years who were treated by ETV in the Neurosurgery Department of Fann Hospital in Dakar, between January 2010 to December 2012. The results were evaluated based on the clinical criteria of Drake and the Canadian Pediatric Neurosurgery Study Group. The mean follow-up duration was 24 months (9-32 months). Results: The mean age at diagnosis was 29 months. A male predominance (sex-ratio 1/3) was observed. We also noted a macrocephaly in 64.4 % of cases, psychomotor retardation in 40 % and decreased vision in 31.4 %. Headache and vomiting were found in 42.8 % and 61.4 % respectively. The main etiology was a stenosis of the mesencephalon aqueduct (30 %), followed by a Dandy-Walker malformation (25.7 %). Significant intraoperative bleeding was found in 2.8 % of patients. The most common postoperative complication was CSF leakage (18.6 %), followed by infections (14.2 %). The success rate according to the clinical criteria of Drake was 71.4 %. This success rate was influenced by the age of patients and the hydrocephalus etiology. No deaths occurred. Conclusion: The endoscopic third ventriculostomy is a simple, safe and effective technique. Its advantages in terms of quality of life and morbidity compared with bypass valves makes it the technique of choice, particularly in developing countries. abstract_id: PUBMED:35438316 Repeat endoscopic third ventriculostomy combined with choroid plexus cauterization as salvage surgery for failed endoscopic third ventriculostomy. Purpose: Although the endoscopic third ventriculostomy (ETV) is an effective treatment for hydrocephalus, failures do on occasion occur. In such cases, a repeat ETV or shunt insertion is usually performed. However, there is, as of yet, no clear consensus on the best measure to take in the event of a failed ETV. We herein examined the outcomes of a repeat ETV combined with choroid plexus cauterization for ETV failure. Methods: All patients who underwent an ETV at the Department of Neurosurgery at Tokyo Metropolitan Children's Medical Center between April 2013 and March 2019 were retrospectively analyzed. Results: In total, 36 patients received an ETV. Six patients experienced ETV failure; three of these underwent a repeat ETV combined with choroid plexus cauterization. Three of the six patients who experienced early ETV failure received a ventriculoperitoneal shunt. During the median follow-up period of 42 months (range: 32-73 months), all repeat ETVs were successful. Conclusion: A repeat ETV combined with choroid plexus cauterization can be an effective salvage therapy in the event of ETV failure. abstract_id: PUBMED:38423458 COMBINED PREDICTIVE MODEL FOR ENDOSCOPIC THIRD VENTRICULOSTOMY SUCCESS IN ADULTS AND CHILDREN. Background: The selection of patients in whom endoscopic third ventriculostomy (ETV) can be effective remains poorly defined. The endoscopic third ventriculostomy success score (ETVSS) and the presence of bowing of the third ventricle have been identified as independent factors for predicting success, each with limitations. The objective of this study is to elaborate a combined predictive model to predict ETV success in a mixed cohort of patients. Methods: Demographic, intraoperative, postoperative, and radiological variables were analyzed in all ventriculostomies performed consecutively at a single institution from December 2004 to December 2022. Qualitative and quantitative measurements of preoperative, immediate and late postoperative MRI were conducted. Univariate analysis and logistic regression models were performed. Results: 118 ETV were performed in the selected period. 106 procedures met inclusion criteria. The overall success rate was 71.7%, with a median follow-up of 3.64 years [1.06;5.62]. The median age was 36.1 years [11.7;53.5]. 35.84% were children (median=7.81 years). Among the 80 patients with third ventricle bowing, success rate was 88.8% (p&lt;0.001). Larger third ventricle dimensions on preoperative mid-sagittal MRI were associated with increased ETV success. The model with the best receiver operating characteristic (ROC) curves, with an area under the curve (AUC) of 0.918 (95% CI 0.856;0.979) includes sex, ETVSS, presence of complications and third ventricle bowing. Conclusions: The presence of bowing of the third ventricle is strongly associated with a higher ETV success rate. However, a combined predictive model that integrates it with the ETVSS is the most appropriate approach for selecting patients for ETV. Answer: Yes, treatment with endoscopic third ventriculostomy (ETV) results in less concern among parents of children with hydrocephalus. A study comparing parental concern in children with hydrocephalus treated with ETV or shunt, using the Hydrocephalus Concerns Questionnaire for Parents (HCQ-P), found that parents of children who had ETV experienced less concern than those who had a shunt. This reduced concern was almost exclusively due to less worry about hydrocephalus treatment complications (PUBMED:20428876).
Instruction: Are Recommended Doses of Acetaminophen Effective for Children Aged 2 to 3 Years? Abstracts: abstract_id: PUBMED:26720060 Are Recommended Doses of Acetaminophen Effective for Children Aged 2 to 3 Years? A Pharmacokinetic Modeling Answer. Objectives: Acetaminophen (APAP) elixir is a widely used pediatric antipyretic medication. It has been shown that up to 30% of febrile children presenting to a large urban pediatric emergency department received inadequate APAP dosages at home with errors primarily due to age-based dosing. Parental education material in the form of weight-based dosing guides has been proposed; however, validation of current recommended APAP dosages using pharmacokinetic models is needed. This study used a mathematical model of APAP absorption to predict plasma concentrations and to compare them with the range required to reach and achieve antipyresis (10-20 μg/mL). Methods: A common APAP preparation (Children's Tylenol Elixir) was tested (children aged 2-3 years, 10.9-15.9 kg). The manufacturer's suggested dose of 160 mg was compared with the standard 10 to 15 mg/kg dose range. Results: The model predicts a peak plasma concentration between 6.38 and 8.55 μg/mL for 10 mg/kg dose and 9.57 and 12.8 μg/mL for 15 mg/kg dose. The manufacturer's suggested dose of 160 mg was tested across the limits of the weight range (10.9-15.9 kg). A peak plasma concentration between 9.36 and 12.6 μg/mL was found for the lower weight limit (10.9 kg child) and 6.42 to 8.61 μg/mL for the upper weight limit (15.9 kg child). Conclusions: With the use of this model, the 10 mg/kg dose does not reach the plasma concentration value for antipyresis (10-20 μg/mL), whereas 15 mg/kg is adequate only if assuming a greater absorption constant. The 160 mg dose is effective only for children weighing 10.9 kg. Individual differences in drug bioavailability, volume of distribution, and absorption/elimination constants undoubtedly exist, and future studies directly measuring plasma APAP concentration and pharmacokinetics are needed. However, these results indicate that dosages for APAP in children should be weight based and manufacturers should review their dosing recommendations. abstract_id: PUBMED:35891219 Short-Term Side Effects and SARS-CoV-2 Infection after COVID-19 Pfizer-BioNTech Vaccine in Children Aged 5-11 Years: An Italian Real-World Study. Vaccination against COVID-19 is the most effective tool to protect both the individual and the community from this potentially life-threatening infectious disease. Data from phase-3 trials showed that two doses of the BNT162b2 vaccine were safe, immunogenic, and effective against COVID-19 in children aged 5-11 years. However, no surveys in real-life settings have been carried out in this age range. Here, we conducted a cross-sectional study to evaluate the short-term adverse reactions (ARs) and the rate of protection against infection of the BNT162b2 vaccine in children aged 5-11 years by the compilation of two surveillance questionnaires conceived using Google Forms. Five-hundred and ninety one children were included in the analysis. ARs were reported by 68.9% of the children, being mainly local. The incidence of systemic ARs, especially fever, was higher after the second dose. The incidence of infection after completing the immunization accounted for 13.6% of the children. COVID-19 symptoms reported were mild, with the exception of one case of pneumonia. Only 40% of infected participants needed to take medication to relieve symptoms, mostly paracetamol and NSAIDs, and none reported persistent symptoms. The Pfizer-BioNTech vaccine in children aged 5-11 years is safe and well tolerated. The mild clinical course of COVID-19 in immunized children confirmed the favorable risk-benefit ratio, encouraging parents to immunize their children. abstract_id: PUBMED:31099052 Acetaminophen Protein Adducts in Hospitalized Children Receiving Multiple Doses of Acetaminophen. Previous reports have questioned the safety of multiple doses of acetaminophen administered to ill children. Acetaminophen protein adducts (adducts) are a biomarker of acetaminophen-induced liver injury and reflect the oxidative metabolism of acetaminophen, a known mechanism in acetaminophen toxicity. In this prospective observational study, we analyzed adduct concentrations in 1034 blood samples obtained from 181 hospitalized children (1 to 18 years inclusive) who received 2 or more doses of acetaminophen. Linear regression analysis showed that serum adduct concentrations increased as a function of the cumulative acetaminophen dose, which could be attributed, in part, to a long half-life of adducts (2.17 ± 1.04 days [mean ± standard deviation]) in children. However, few patients (2%) were found to have adduct concentrations higher than 1.0 nmol/mL, a previously identified toxicity cut point for the diagnosis of acetaminophen-induced liver injury in patients with alanine aminotransferase values exceeding 1000 IU/L. A small cohort of patients with suspected infection was noted to show higher adduct concentrations. In addition, adduct concentrations showed a stronger correlation with cumulative acetaminophen doses in adolescents compared with children (R2 = 0.41 vs 0.26). No other covariates (body weight, body mass index z score, sex, race, or surgery) remarkably correlated with adduct elevation. In summary, low levels of adducts can be detected in hospitalized children receiving multiple doses of acetaminophen, and adduct levels correlate with cumulative acetaminophen dose. abstract_id: PUBMED:34719983 Mothers' ability to determine and measure paracetamol doses for children-a contrived observational study. Caregivers are primarily responsible for the administration of Over The Counter (OTC) medications in children. This study examines the mothers' ability to determine and measure paracetamol doses for children aged between 1 and 5 years. A contrived observational study was conducted for mothers of preschool aged children at two Public Health Midwifery (PHM) areas in Southern province, Sri Lanka. Stratified random sampling was used. Only 26.9% (n = 95, 95% CI = 22.5%-31.7%) of the 353 participants correctly determined and measured the doses of paracetamol. Errors were frequently made in both determining and measuring dose together (n = 113, 32.0%, 95% CI = 27.3%-37.1%), determining only (n = 94, 26.6%, 95% CI = 22.2%-31.5%) and measurement only (n = 51, 14.4%, 95% CI = 11.1%-18.5%). Dose determined errors were not significantly associated with maternal education, number of children in the family, total monthly income and age of the index child. Similarly measuring errors were not significantly associated with mothers' education, income of the family and number of children in the family. However, there was a weak positive correlation between measuring errors and age of the index child. The study suggests that mothers made errors when determining doses and measuring doses of paracetamol. Results emphasize importance of clear, concise guardian information leaflet and healthcare professionals' guidance to minimize dosing errors of child medication. abstract_id: PUBMED:21242792 Acute liver failure after recommended doses of acetaminophen in patients with myopathies. Objective: To determine the likelihood that recommended doses of acetaminophen are associated with acute liver failure in patients with myopathies. Design: Retrospective analysis. Setting: Level III pediatric intensive care unit. Patients: Two pediatric patients with myopathies and acute liver failure. Clinical Investigations: We determined acetaminophen protein adduct levels, in combination with a literature review and systematic evaluation of the cases, using the Roussel Uclaf Causality Assessment Method for drug-induced liver injury to assess causality between recommended acetaminophen dosing and acute liver failure in two children with myopathies. Main Results: The serum adduct levels were consistent with the values previously reported in children with acute liver injury following acetaminophen overdose. We found four similar cases of acute liver failure in pediatric and adult patients with myopathies following recommended acetaminophen doses in the literature (n = 3) and personal communication (n = 1). The Roussel Uclaf Causality Assessment Method suggested a probable relationship between acetaminophen use at recommended doses and acute liver failure in our myopathy patients. Conclusion: Our data suggest that some patients with myopathies who are receiving recommended doses of acetaminophen may be at increased risk for the development of toxicity resulting in acute liver failure. More studies are needed to corroborate these findings. In the meantime, we would advise physicians to be alert in these patients while taking acetaminophen, especially when critically ill or postoperative. abstract_id: PUBMED:34773285 Has the time come to stop routine N-acetylcysteine treatment in young children in Denmark? A review of 300 suspected paracetamol overdoses in children aged 0-6 years. Aim: To evaluate the prevalence of potentially hepatoxic paracetamol ingestion and associated N-acetylcysteine treatment in young children suspected of paracetamol poisoning. Methods: A retrospective cohort study of children aged 0-6 years suspected of paracetamol poisoning with a related plasma-paracetamol measurement in the Capital Region of Denmark in the period 2010-2017. Data from the clinical laboratory system were linked to data from electronic patient records via the unique identification number given to all Danish residents. Results: Of 297 children included, suspected single paracetamol overdoses were present in 281 (95%). Sixty-nine per cent were treated with N-acetylcysteine, and the mean treatment period was 20.3 h (SD 20.8). A maximum of 6 (2%) of the children suspected of single overdose had plasma-paracetamol concentrations that exceeded the recommended treatment thresholds. No cases of severe hepatotoxicity were registered. Adverse events to N-acetylcysteine-treatment were registered in 3 (2%) children including one anaphylactoid reaction (0.5%). Conclusion: This study shows that initiating N-acetylcysteine as a 'one size fit all' treatment regimen in all children aged 0-6 years with a suspected single paracetamol overdose leads to substantial overtreatment. The data support that it is feasible to initiate N-acetylcysteine within 10 h based on an early plasma-paracetamol test. abstract_id: PUBMED:29577416 Retrospective study of paracetamol poisoning in children aged zero to six years found no cases of liver injury. Aim: This study focused on children aged zero to six years with suspected single-dose paracetamol poisoning, which has not been investigated in Denmark. We evaluated the incidence of liver injuries and the use of activated charcoal and N-acetylcysteine treatment. Methods: Our retrospective study was performed in three paediatric hospital centres from 2001 to 2012. Data on symptoms, time of ingestion, blood biochemistry, treatment and adverse reactions were collected. The results were evaluated against the Rumack-Matthew nomogram. Results: We identified 221 children (58% male), with a mean age of 2.67 ± 1.05 years. Activated charcoal treatment was given in 87% of cases, but only 15% of the children received treatment within one hour of the suspected paracetamol poisoning. Although 80% of the children received N-acetylcysteine treatment, only one case (0.5%) had a toxic plasma paracetamol level according to the treatment nomogram. Abdominal pain or vomiting was associated with higher paracetamol levels in plasma. None of the children developed liver injuries. Conclusion: We found a low incidence of significant poisoning and liberal use of N-acetylcysteine and activated charcoal treatment in Danish children aged zero to six years with suspected paracetamol poisoning. Vomiting or abdominal pain was associated with elevated plasma paracetamol levels. No liver injuries were reported. abstract_id: PUBMED:35221435 Hypothermia induced by therapeutic doses of antipyretics: misdiagnosis of cold sepsis. Fever is a natural and almost universal mammalian response to infection. There exists a fear amongst general public and healthcare providers regarding fever being harmful leading to its overzealous management with antipyretics. Although the National Institute for Health and Care Excellence (NICE) guidelines suggest the use of single antipyretic agent for management of fever, combination therapy with paracetamol, and ibuprofen is common in paediatric practice in the United Kingdom. These antipyretics at times can cause significant adverse events even when administered at regular therapeutic doses. We describe a young boy who presented with significant hypothermia (34.1°C) and was initially treated as cold sepsis. Once the boy got warmed up and as blood results became subsequently available, it became clearer that the hypothermia was secondary to therapeutic doses of antipyretics. In conclusion, we hope to improve awareness regarding this condition in clinical practice and educate health care professionals and caregivers with recommended NICE guidelines. abstract_id: PUBMED:28337078 Comparison of the Efficacy and Safety of 2 Acetaminophen Dosing Regimens in Febrile Infants and Children: A Report on 3 Legacy Studies. Objective: Compare efficacy and safety of 10 to 15 mg/kg with 20 to 30 mg/kg acetaminophen in febrile children 6 months to ≤ 11 years from 3 double-blind, randomized, single or multiple dose studies. Methods: Doses were compared on sum of the temperature differences (SUMDIFF), maximum temperature difference (MAXDIFF), temperature differences at each time point, and dose by time interactions. Alanine aminotransferase (ALT) was evaluated in the 72-hour duration study. Results: A single dose of acetaminophen 20 to 30 mg/kg produced a greater effect on temperature decrement and duration of antipyretic effect over 8 hours than a single dose of 10 to 15 mg/kg. When equivalent total doses (i.e., 2 doses of 10 to 15 mg/kg given at 4-hour intervals and 1 dose of 20 to 30 mg/kg) were given over the initial 8-hour period, there were no significant temperature differences. Over a 72-hour period, 10 to 15 mg/kg acetaminophen administered every 4 hours maintained a more consistent temperature decrement than 20 to 30 mg/kg acetaminophen administered every 8 hours. Following doses of 60 to 90 mg/kg/day for up to 72 hours, no child had a clinically important increase in ALT from baseline. The number of children with reported adverse events was similar between doses. Conclusions: Data demonstrate the antipyretic effect of acetaminophen is dependent on total dose over a given time interval. These 3 studies provide clinical evidence that the recommended standard acetaminophen dose of 10 to 15 mg/kg is a safe and effective dose for treating fever in pediatric patients when administered as a single dose or as multiple doses for up to 72 hours. abstract_id: PUBMED:21965813 British National Formulary for Children: the risk of inappropriate paracetamol prescribing. Background: Paracetamol is the most commonly prescribed medicine for children. Age-based dosing guidelines can lead to inappropriate dosing. Methods: A review of age-based guidelines for paracetamol in the British National Formulary for Children (BNFC) 2011-2012 was undertaken. Single and cumulative daily doses of paracetamol for boys and girls at 9th, 50th and 91st centiles for weight were calculated for the age groups 1-3 months, 3-12 months, 1-6 years and 6-12 years. Results: For children at the 9th centile, aged 3 months and above, doses were above recommended single and cumulative daily dose therapeutic limits when given the highest dose specified for their age. For children at the 91st centile at all ages, doses were below recommended single and cumulative daily dose therapeutic limits when given the lowest dose specified for their age. Conclusions: Underweight and overweight children are at risk of inappropriate paracetamol administration based on BNFC age-based dosing instructions. Answer: The recommended doses of acetaminophen (APAP) for children aged 2 to 3 years have been called into question based on pharmacokinetic modeling. A study using a mathematical model of APAP absorption to predict plasma concentrations in children aged 2 to 3 years found that a 10 mg/kg dose does not reach the plasma concentration value for antipyresis (10-20 μg/mL), which is required to reduce fever. The 15 mg/kg dose was adequate only if assuming a greater absorption constant. The manufacturer's suggested dose of 160 mg was effective only for children at the lower weight limit of 10.9 kg. These findings suggest that dosages for APAP in children should be weight-based and that manufacturers should review their dosing recommendations (PUBMED:26720060). The study highlights the importance of weight-based dosing rather than age-based dosing to ensure the effectiveness and safety of acetaminophen in pediatric patients. This is further supported by the observation that both underweight and overweight children are at risk of inappropriate paracetamol administration based on age-based dosing instructions, as found in a review of the British National Formulary for Children (BNFC) (PUBMED:21965813). Additionally, the study underscores the need for future research directly measuring plasma APAP concentration and pharmacokinetics in children to validate the model's predictions and to account for individual differences in drug bioavailability, volume of distribution, and absorption/elimination constants.
Instruction: Is it possible to sanitize athletes' shoes? Abstracts: abstract_id: PUBMED:25415415 Is it possible to sanitize athletes' shoes? Context: Footwear should be designed to avoid trauma and injury to the skin of the feet that can favor bacterial and fungal infections. Procedures and substances for sanitizing the interior of shoes are uncommon but are important aspects of primary prevention against foot infections and unpleasant odor. Objective: To evaluate the efficacy of a sanitizing technique for reducing bacterial and fungal contamination of footwear. Design: Crossover study. Setting: Mens Sana basketball team. Patients Or Other Participants: Twenty-seven male athletes and 4 coaches (62 shoes). Intervention(s): The experimental protocol required a first sample (swab), 1/shoe, at time 0 from inside the shoes of all athletes before the sanitizing technique began and a second sample at time 1, after about 4 weeks, April 2012 to May 2012, of daily use of the sanitizing technique. Main Outcome Measure(s): The differences before and after use of the sanitizing technique for total bacterial count at 36 °C and 22 °C for Staphylococcus spp, yeasts, molds, Enterococcus spp, Pseudomonas spp, Escherichia coli , and total coliform bacteria were evaluated. Results: Before use of the sanitizing technique, the total bacterial counts at 36 °C and 22 °C and for Staphylococcus spp were greater by a factor of 5.8 (95% confidence interval [CI] = 3.42, 9.84), 5.84 (95% CI = 3.45, 9.78), and 4.78 (95% CI = 2.84, 8.03), respectively. All the other comparisons showed a reduction in microbial loads, whereas E coli and coliforms were no longer detected. No statistically significant decrease in yeasts (P = .0841) or molds (P = .6913) was recorded probably because of low contamination. Conclusions: The sanitizing technique significantly reduced the bacterial presence in athletes' shoes. abstract_id: PUBMED:35162340 The Impact of Grounding in Running Shoes on Indices of Performance in Elite Competitive Athletes. The introduction of carbon fiber plate shoes has triggered a plethora of world records in running, which has encouraged shoe industries to produce novel shoe designs to enhance running performance, including shoes containing conductor elements or "grounding shoes" (GS), which could potentially reduce the energy cost of running. The aim of this study was to examine the physiological and perceptual responses of athletes subjected to grounding shoes during running. Ten elite runners were recruited. Firstly, the athletes performed an incremental running test for VO2max and anaerobic threshold (AT) determination, and were familiarized with the two shoe conditions (traditional training shoe (TTS) and GS, the latter containing a conductor element under the insole). One week apart, athletes performed running economy tests (20 min run at 80% of the AT) on a 400 m dirt track, with shoe conditions randomized. VO2, heart rate, lactate, and perceived fatigue were registered throughout the experiment. No differences in any of the physiological or perceptual variables were identified between shoe conditions, with an equal running economy in both TTS and GS (51.1 ± 4.2 vs. 50.9 ± 5.1 mL kg-1 min-1, respectively). Our results suggest that a grounding stimulus does not improve the energy cost of running, or the physiological/perceptual responses of elite athletes. abstract_id: PUBMED:23487344 Technical aspects in the provision of orthopedic shoes for athletes From the technical point of view different groups must be distinguished in the provision of orthopedic shoes for athletes. The one group encompasses athletes who are provided with insoles in the hope of improved comfort or better performance. The other group includes athletes with diseases or injuries of the lower limbs for whom the provision of appropriate shoes and insoles makes it possible for them to participate again actively in their chosen sport. In such cases one has to differentiate between the purely physical disorder of the foot and biomechanical disorders. For the first group the correct fitting of the shoe is the most important factor whereas in the second group attempts are made to correct and/or compensate for the deficit by means of insoles and wedges as well as supporting or bedding elements. The particular challenge for all such measures is to achieve an as small as possible impairment on the sport activity or, in the ideal cases, even to support it. abstract_id: PUBMED:29143929 A Comparison of the Energetic Cost of Running in Marathon Racing Shoes. Background: Reducing the energetic cost of running seems the most feasible path to a sub-2-hour marathon. Footwear mass, cushioning, and bending stiffness each affect the energetic cost of running. Recently, prototype running shoes were developed that combine a new highly compliant and resilient midsole material with a stiff embedded plate. Objective: The aim of this study was to determine if, and to what extent, these newly developed running shoes reduce the energetic cost of running compared with established marathon racing shoes. Methods: 18 high-caliber athletes ran six 5-min trials (three shoes × two replicates) in prototype shoes (NP), and two established marathon shoes (NS and AB) during three separate sessions: 14, 16, and 18 km/h. We measured submaximal oxygen uptake and carbon dioxide production during minutes 3-5 and averaged energetic cost (W/kg) for the two trials in each shoe model. Results: Compared with the established racing shoes, the new shoes reduced the energetic cost of running in all 18 subjects tested. Averaged across all three velocities, the energetic cost for running in the NP shoes (16.45 ± 0.89 W/kg; mean ± SD) was 4.16 and 4.01% lower than in the NS and AB shoes, when shoe mass was matched (17.16 ± 0.92 and 17.14 ± 0.97 W/kg, respectively, both p &lt; 0.001). The observed percent changes were independent of running velocity (14-18 km/h). Conclusion: The prototype shoes lowered the energetic cost of running by 4% on average. We predict that with these shoes, top athletes could run substantially faster and achieve the first sub-2-hour marathon. abstract_id: PUBMED:36674119 A Comparative Study of the Fatigue of the Lower Extremities According to the Type of Shoes Worn When Firing a 10 m Air Pistol. The purpose of this study was to compare the differences in shooting performance, physical stability, and lower extremity muscle fatigue in high-level shooters wearing shooting shoes and sports shoes, and to determine the causes. Eight high-level 10 m air pistol shooters wore shooting shoes and sports shoes in a simulated shooting qualification experiment (60 rounds in 75 min) and we recorded shooting scores, pressure centers (COP), and median frequency of muscle fatigue index (MDF). All the data are expressed as mean ± standard deviation and are compared using a paired t-test. Athletes in shooting shoes scored higher than those in sports shoes (p &lt; 0.05); COP moved less in the front-to-back and left-to-right directions for athletes wearing shooting shoes rather than sports shoes; and fewer muscles experienced fatigue, with fatigue also occurring later, for athletes wearing shooting shoes rather than sports shoes. Shooting shoes may reduce the sway of athletes’ center of pressure in the anterior−posterior and left−right directions, enhance postural stability, and result in higher shooting scores. In addition, they may make shooters feel more comfortable and relaxed during long training sessions and competitions. abstract_id: PUBMED:36379468 The Effect of Using Marathon Shoes or Track Spikes on Neuromuscular Fatigue caused by a Long-distance Track Training Session. This study aims to compare the effect of the Nike ZoomX Dragonfly track spikes and the Nike ZoomX VaporflyNext% 2 marathon shoes on the fatigue manifestations present over and after a long-distance track training session. Thirteen highly trained athletes completed two training sessions (i. e., 9- and 3-minute time trials with complete recovery) with the aforementioned footwear models. The pace, ground contact time, and stride length were measured over the time trials, and maximal countermovement jumps were performed previously and after the training session. The results revealed that, although there was no significant interaction in the pace distribution (p≥0.072), athletes tend to be only able to increase the pace at the last lap with the marathon shoes (5.4 meters [-3.7 to 14.5 meters]) meanwhile with the track spikes it further decreased (-3.1 meters [-9.8 to 3.6 meters]). A reduced ground contact time over the session (p=0.025) and a tendency toward increasing stride length (p=0.09) in the last time trial were observed. The significant interaction on the countermovement jump height (p=0.023; Track spikes: -5.60%; Marathon shoes: 0.61%) also indicates that footwear influences the resulted allostatic load. abstract_id: PUBMED:37734742 Recreational Runners Gain Physiological and Biomechanical Benefits From Super Shoes at Marathon Paces. Purpose: Advanced footwear technology is prevalent in distance running, with research focusing on these "super shoes" in competitive athletes, with less understanding of their value for slower runners. The aim of this study was to compare physiological and biomechanical variables between a model of super shoes (Saucony Endorphin Speed 2) and regular running shoes (Saucony Cohesion 13) in recreational athletes. Methods: We measured peak oxygen uptake (VO2peak) in 10 runners before testing each subject 4 times in a randomly ordered crossover design (ie, Endorphin shoe or Cohesion shoe, running at 65% or 80% of velocity at VO2peak [vVO2peak]). We recorded video data using a high-speed camera (300 Hz) to calculate vertical and leg stiffnesses. Results: 65% vVO2peak was equivalent to a speed of 9.4 km·h-1 (0.4), whereas 80% vVO2peak was equivalent to 11.5 km·h-1 (0.5). Two-way mixed-design analysis of variance showed that oxygen consumption in the Endorphin shoe was 3.9% lower than in the Cohesion shoe at 65% vVO2peak, with an interaction between shoes and speed (P = .020) meaning an increased difference of 5.0% at 80% vVO2peak. There were small increases in vertical and leg stiffnesses in the Endorphin shoes (P &lt; .001); the Endorphin shoe condition also showed trivial to moderate differences in step length, step rate, contact time, and flight time (P &lt; .001). Conclusions: There was a physiological benefit to running in the super shoes even at the slower speed. There were also spatiotemporal and global stiffness improvements indicating that recreational runners benefit from wearing super shoes. abstract_id: PUBMED:31111408 Onychomycosis in Athletes. Onychomycosis is a common disorder that is difficult to cure. Prevalence is lower in children (0.7%), but athletes are 2.5-fold more likely to develop the disease, with infections of the toenails seven times more prevalent than those of the fingernails. This is a concern for athletes as it can interfere with their performance. The risk of developing onychomycosis is increased by the warm environment of many sports activities; the use of occlusive footwear; the warm, moist environment associated with socks and sweating; shared, close quarters among athletes; and trauma to the foot and toenail. Once infected, onychomycosis treatment requires a long duration of treatment with strict compliance, a potential problem for younger patients. Treatment carries the risk of significant side effects, and recurrence rates remain high. Avoiding infection can be a potent first line of defense and may circumvent the need for treatment. Preventive recommendations such as keeping toenails short and proper washing of laundry, to name a few, can be effective and are discussed here. Technological improvements such as synthetic, moisture-wicking socks and well-ventilated, mesh shoes have also been shown to reduce moisture and injury. Education about preventing fungal spread and improving hygiene in the locker room, gym, and pool are of critical importance. This overview of onychomycosis focuses primarily on the preventive measures and innovative changes in athletic gear. It also provides a compact step-by-step guide to prevention intended to be useful for both the general public and the professional. It can be reproduced to use as a handout for athletes, trainers, and coaches. abstract_id: PUBMED:18577583 Effect of neutral-cushioned running shoes on plantar pressure loading and comfort in athletes with cavus feet: a crossover randomized controlled trial. Background: High injury rates observed in athletes with cavus feet are thought to be associated with elevated plantar pressure loading. Neutral-cushioned running shoes are often recommended to manage and prevent such injuries. Purpose: To investigate in-shoe plantar pressure loading and comfort during running in 2 popular neutral-cushioned running shoes recommended for athletes with cavus feet. Study Design: Controlled laboratory study. Methods: Plantar pressures were collected using the in-shoe Novel Pedar-X system during overground running in 22 athletes with cavus feet in 2 neutral-cushioned running shoes (Asics Nimbus 6 and Brooks Glycerin 3) and a control condition (Dunlop Volley). Comfort was measured using a validated visual analog scale. Results: Compared with the control, both neutral-cushioned running shoes significantly reduced peak pressure and pressure-time integrals by 17% to 33% (P &lt; .001). The Brooks Glycerin most effectively reduced pressure beneath the whole foot and forefoot (P &lt; .01), and the Asics Nimbus most effectively reduced rearfoot pressure (P &lt;.01). Both neutral-cushioned running shoes reduced force at the forefoot by 6% and increased it at the midfoot by 12% to 17% (P &lt; .05). Contact time and area increased in both neutral-cushioned running shoes (P &lt; .01). The Asics Nimbus was the most comfortable, although both neutral-cushioned running shoes were significantly more comfortable than the control (P &lt; .001). Conclusion: Two popular types of neutral-cushioned running shoes were effective at reducing plantar pressures in athletes with cavus feet. Clinical Relevance: Regional differences in pressure reduction suggest neutral-cushioned running shoe recommendation should shift from being categorical in nature to being based on location of injury or elevated plantar pressure. abstract_id: PUBMED:32438717 Review of Terms and Definitions Used in Descriptions of Running Shoes. Objective: Our study aim is to identify and describe the definitions used for different types of running shoes. In addition, we highlight the existence of gaps in these concepts and propose possible new approaches. Methods: This review was undertaken in line with the guidelines proposed by Green et al., based on a literature search (until December 2019) of the PubMed, Web of Science, Scopus, SPORTDiscus and Google Scholar databases. A total of 23 papers met the inclusion criteria applied to identify the definition of running shoes. Results: Although there is a certain consensus on the characteristics of minimalist footwear, it is also described by other terms, such as barefoot-style or barefoot-simulating. Diverse terms are also used to describe other types of footwear, and in these cases, there is little or no consensus regarding their characteristics. Conclusions: The terms barefoot-simulated footwear, barefoot-style footwear, lightweight shoes and full minimalist shoes are all used to describe minimalist footwear. The expressions partial minimalist, uncushioned minimalist and transition shoes are used to describe footwear with non-consensual characteristics. Finally, labels such as shod shoes, standard cushioned running shoes, modern shoes, neutral protective running shoes, conventional, standardised, stability style or motion control shoes span a large group of footwear styles presenting different properties. Answer: Yes, it is possible to sanitize athletes' shoes. A study conducted on the Mens Sana basketball team, which included 27 male athletes and 4 coaches (62 shoes), evaluated the efficacy of a sanitizing technique for reducing bacterial and fungal contamination of footwear. The study design was a crossover study, and the intervention involved a sanitizing technique used daily for about 4 weeks. The main outcome measures were the differences in total bacterial count at 36 °C and 22 °C for various microorganisms before and after the use of the sanitizing technique. The results showed that the sanitizing technique significantly reduced the bacterial presence in athletes' shoes, with a greater reduction in total bacterial counts at 36 °C and 22 °C and for Staphylococcus spp. There was also a reduction in microbial loads for other bacteria, and E. coli and coliforms were no longer detected. However, there was no statistically significant decrease in yeasts or molds, likely due to low initial contamination (PUBMED:25415415).
Instruction: Is abdominoperineal resection a good option for stage IV adenocarcinoma of the distal rectum? Abstracts: abstract_id: PUBMED:12210018 Is abdominoperineal resection a good option for stage IV adenocarcinoma of the distal rectum? Background And Objectives: The management of the primary lesion in patients with stage IV adenocarcinoma of the distal rectum is controversial. An abdominoperineal resection (APR) may be a good option. Methods: A retrospective analysis of the medical records of 21 patients with stage IV distal rectal adenocarcinoma treated with an APR between January 1991 to December 2000 was performed. Results: All patients had an Eastern Cooperative Oncology Group (ECOG) performance status of 1 and normal preoperative alkaline phosphatase and total bilirubin levels. Twelve patients (92%) with liver metastases had less than 25% of total liver volume involvement. Twenty patients (95%) had complete resolution of their symptoms related to the primary rectal cancer. The median follow-up was 19 months (range 3-92 months), with a median survival of 21.6 months and a 2-year overall survival of 34%. Conclusions: Patients with stage IV distal rectal adenocarcinoma who have a good performance status, normal preoperative liver function tests, and minimal metastatic disease to the liver can be offered resective surgery. abstract_id: PUBMED:33745283 Rectal Granular Cell Carcinoma Requiring Abdominoperineal Resection. Background: Granular cell tumors, derived from neural crest cells, are rare tumors infrequently located in the colon or rectum. We will discuss a patient with a rectal granular cell tumor invading the anal sphincters requiring an abdominoperineal resection. Methods: A 56-year-old male, with anal pain, was found to have a perirectal mass. Pathology from ultrasound-guided transrectal biopsy demonstrated low grade granular cell tumor. The patient underwent a laparoscopic abdominoperineal resection with perineum reconstruction. Results: Pathology demonstrated a granular cell tumor of 4.5 centimeters with tumor invasion of the anal sphincters. Surgical margins were free of neoplasm. Discussion: This is the only documented case of a colorectal granular cell tumor that has required an abdominoperineal resection. On histology, it was considered low grade but its behavior was more consistent with a malignant process. Additional research on malignant granular cell tumors is necessary to help improve treatment options, prevent recurrence, and improve overall survival. His medical course will be followed for disease progression or metastasis. abstract_id: PUBMED:12847358 Oncologic results following abdominoperineal resection for adenocarcinoma of the low rectum. Purpose: The role of abdominoperineal resection for rectal cancer has changed because of advances in sphincter-preserving surgery. Our aim was to evaluate the results of this operation in the five-year period following introduction of the concept of total mesorectal excision Methods: Data on all patients undergoing abdominoperineal resection for very low rectal cancer between 1992 and 1997 were collected prospectively. All patients had had total mesorectal excision. Curative resection was defined as absence of macroscopic disease after resection and local recurrence as any infiltration or tumor identified in the pelvis, alone or combined with distant disease. Survival and local recurrence rates were calculated using the Kaplan-Meier method and log-rank analysis. Results: Of 165 abdominoperineal resections performed, 106 were for primary adenocarcinoma of the rectum. The male:female ratio was 50:56, with a median age of 65 (range, 33-85) years. There was one postoperative death. Twenty-seven patients received short-course preoperative radiotherapy (25 Gy over 1 week), whereas 22 had a longer course, with concomitant chemotherapy in 2. Postoperative chemotherapy was administered in 29, postoperative radiotherapy in 4, and combined therapy in 8. After curative resection (n = 91), survival at five years was 76 percent and differed significantly by stage. Recurrence at any site was 7 percent (3/34) for Stage I, 27 percent (6/26) for Stage II, and 53 percent (16/31) for Stage III. Nine patients presented with local recurrence, with an overall rate at five years of 10 percent. Isolated locale recurrence was observed in only 5 percent of patients Conclusions: After abdominoperineal resection and total mesorectal excision, good rates of local control may be achieved provided surgical technique is meticulous. abstract_id: PUBMED:3728811 Management of carcinoma of the rectum and anus without abdominoperineal resection. Since 1979, 16 patients with cancer of the rectum and anus have been treated without abdominoperineal resection. All have received radiotherapy. Nine of these had local excision before receiving radiotherapy. There has been no evidence of recurrent cancer in 15 of the 16 patients with a median follow-up period of 24 months (range 14 to 52 months) for a recurrence rate of 6 percent. We conclude that limited excision, radiotherapy, or both are viable alternatives to abdominoperineal resection of small, early cancers of the rectum and anus. abstract_id: PUBMED:25105771 Combined radical prostatectomy and abdominoperineal resection for locally invasive rectal cancer. Introduction: An infiltration of urological organs is found in 5-10% of patients with colorectal carcinoma. Total pelvic exenteration is the standard procedure for locally advanced rectal cancer. In selected patients with rectal cancer involving the prostate or seminal vesicles, the bladder can be preserved and en bloc radical prostatectomy with abdominoperineal rectal resection can be performed. We report two patients who treated with this combined approach. Presentation Of Case: Two patients with symptoms of rectal bleeding and pelvic pain were investigated. Colonoscopy demonstrated a tumor in the lower rectum. Biopsies revealed adenocarcinoma. Both pelvic MRI and endorectal ultrasound showed tumors that invaded the prostate and the seminal vesicles directly but without invasion of the bladder. After neoadjuvant chemoradiation, combined radical prostatectomy and abdominoperineal amputation was performed. None has developed local recurrence, but one of them was operated on for a single lung metastasis. After a follow-up of 28 and 20 months, respectively, the patients are alive without evidence of local recurrence or distant disease. Discussion: This procedure obviates the need for urinary diversion without compromising the local tumor control. Intraoperative and postoperative diagnostic difficulties, and clinical aspects in relation to postoperative anastomotic leak and survival of patients are discussed. Conclusion: En bloc radical prostatectomy and proctosigmoidectomy is feasible in selected patients with rectal cancer and invasion limited to the prostate or seminal vesicles because it provides good local tumor control and significantly improves the patient's quality of life in comparison to total pelvic exenteration. abstract_id: PUBMED:24310759 Laparoscopic abdominoperineal resection of the rectum - how is it done; initial results. Unlabelled: The abdominoperineal resection of the rectum is a classical operation performed in case of patients diagnosed with rectal cancer. The development of laparoscopic techniques in recent years, introduced yet another method of treatment, considering patients with rectal cancer- laparoscopic abdominoperineal resection of the rectum. The aim of the study was to present initial treatment results considering the above-mentioned patients. Material And Methods: The study group comprised 25 patients (16 male and 9 female) diagnosed with low-rectal cancer, subjected to surgery by means of the above-mentioned method. Mean patient age amounted to 66 years. Three (12%) patients required conversion to classical surgery (laparotomy), while one patient required reoperation, due to presacral vascular bleeding. Complications were observed in 10 (40%) patients. Average hospitalization was 7 days. In case of all patients the radial margin was negative, and mean number of removed lymph nodes amounted to 9.6. Mortality was not observed during the perioperative period. Due to the initial character of the study analysis (mean observation period in case of 68% of patients was shorter than 2 years), oncological results were not subject to evaluation. Conclusions: Laparoscopic abdominoperineal resections are considered as technically difficult operations, requiring significant experience of the operating team. However, they enable the patient to take advantage of the many assets of minimally invasive surgery, with comparable rates of postoperative complications. abstract_id: PUBMED:23932752 Comparison of abdominoperineal resection and low anterior resection in lower and middle rectal cancer. Introduction: This study aimed to investigate local control and survival rates following abdominoperineal resection (APR) compared with low anterior resection (LAR) in lower and middle rectal cancer. Methods: In this retrospective study, 153 patients with newly histologically proven rectal adenocarcinoma located at low and middle third that were treated between 2004 and 2010 at a tertiary hospital. The tumors were pathologically staged according to the 7th edition of the American Joint Committee on Cancer (AJCC) staging system. Surgery was applied for 138 (90%) of the patients, of which 96 (70%) underwent LAR and 42 were (30%) treated with APR. Total mesorectal excision was performed for all patients. In addition, 125 patients (82%) received concurrent (neoadjuvant, adjuvant or palliative) pelvic chemoradiation, and 134 patients (88%) received neoadjuvant, adjuvant or concurrent chemotherapy. Patients' follow-up ranged from 4 to 156 (median 37) months. Results: Of 153 patients, 89 were men and 64 were women with a median age of 57 years. One patient (0.7%) was stage 0, 15 (9.8%) stage I, 63 (41.2%) stage II, 51 (33.3%) stage III and 23 (15%) stage IV. There was a significant difference between LAR and APR in terms of tumor distance from anal verge, disease stage and combined modality therapy used. However, there was no significant difference regarding 5-year local control, disease free and overall survival rates between LAR and APR. Conclusion: LAR can provide comparable local control, disease free and overall survival rates compared with APR in eligible patients with lower and middle rectal cancer. abstract_id: PUBMED:8873522 Inadvertent perforation of the rectum during abdominoperineal resection. Background: Intraoperative inadvertent perforation of the rectum is a potentially avoidable complication of abdominoperineal resection (APR). Although widely thought to be detrimental, the impact of inadvertent perforation on outcome has not been conclusively determined, especially after controlling for potential confounding variables. The objective of this study was to determine if inadvertent perforation of the rectum during APR for rectal cancer is an independent risk factor for the adverse outcomes of local recurrence and/or death. Methods: This retrospective cohort study included all patients who underwent APR for primary adenocarcinoma of the rectum at a single teaching hospital from 1980 to 1990. Data were obtained regarding patient demographics, presence of inadvertent perforation, histopathological characteristics, adjuvant therapy, local recurrence, and survival. Results: Of 178 patients included in the study, 42 (24%) had inadvertent perforation. By univariate analysis, local recurrence was significantly higher in the perforated group than the nonperforated group (54% vs 17%; P &lt; 0.001). Similarly, 5-year survival was significantly decreased with inadvertent perforation (29% vs 59%; P = 0.003). Multivariate analysis controlling for stage, grade, age, sex, and adjuvant therapy showed inadvertent perforation to be an independent risk factor for both increased local recurrence and decreased 5-year survival (Hazard Ratio for each model). Conclusions: Inadvertent perforation of the rectum during APR is associated with increased local recurrence and decreased 5-year survival. The detrimental implications of inadvertent perforation during APR mandates meticulous avoidance. abstract_id: PUBMED:23542653 New rectal construction after abdominoperineal resection for carcinoma rectum. Unlabelled: The results of the new reconstruction of the rectum after Abdominoperineal Resection (APR) with 'S'-trap arrangement of the colon and continent perineal colostomy controlled by adynamic bilateral graciloplasty are evaluated. The fully stretched gracilis muscles were utilized for occlusion of the lumen of the colon. Methods And Materials: Between April 1993 and December 2006, selected 42 patients (30 males and 12 females) with median age of 43.5 years (25-64 years) were treated by a one-stage procedure without colonic diversion. All patients were suffering from adenocarcinoma of the lower third of the rectum. The abdominoperineal resection was carried out in all cases. A 25-cm-long vascular segment of the colon was installed in the sacral curve and 'S'-shaped trap was developed with fixed colonic curves at the coccyx and to the left ischial rami of the pubic bone. The left gracilis was wrapped around just distal to the fixed curve of the colon to the left ischial rami and right one around the perineal colostomy with different grades of tightness. Continence was satisfactory in 34 patients; they could hold more than 1,000 ml of saline. Occasional soiling was reported in the night in 8 patients. All patients have achieved near-natural continence. The new rectal reconstruction with 'S' arrangement of colon and continent perineal colostomy achieved near-natural continence in 66% of cases. abstract_id: PUBMED:7660266 Laparoscopic abdominoperineal excision of the rectum. In laparoscopic abdominoperineal resection of the rectum (LAP-AP) an abdominal incision is completely avoided as the tumor is delivered through the perineal incision. It is our belief that the view provided in the pelvis by laparoscopy is significantly better than at laparotomy and allows excellent anatomical definition and meticulous dissection. In this study we compared the adequacy of excision of the first 12 patients undergoing LAP-AP to the last 16 patients undergoing open abdominoperineal resection (OP-AP). In all patients the procedure was carried with curative intent for adenocarcinoma and the Dukes staging and Jass score's were similar in both groups. [table: see text] The data demonstrate similar nodal harvest in both groups as well as extent of radial excision. However, two patients in the open group had microscopic radial margin involvement despite being microscopically clear at surgery. We conclude that although long-term follow-up is required to address the issue of local cancer recurrence, laparoscopic rectal dissection appears as good as open surgery and may allow a more precise assessment of excision margins. Answer: Abdominoperineal resection (APR) can be a good option for patients with stage IV adenocarcinoma of the distal rectum under certain conditions. A retrospective analysis of 21 patients with stage IV distal rectal adenocarcinoma treated with APR showed that patients with a good performance status, normal preoperative liver function tests, and minimal metastatic disease to the liver can be offered resective surgery (PUBMED:12210018). These patients had a median survival of 21.6 months and a 2-year overall survival of 34%, with 95% experiencing complete resolution of symptoms related to the primary rectal cancer. However, the decision to perform APR should be individualized based on the patient's overall health, extent of metastatic disease, and potential for symptom relief. It is important to note that APR is a major surgical procedure with significant implications for the patient's quality of life, and the benefits must be weighed against the risks and the patient's prognosis. In the context of other rectal cancers, APR has been used successfully for conditions such as granular cell tumors invading the anal sphincters (PUBMED:33745283) and for adenocarcinoma of the low rectum with good rates of local control when combined with total mesorectal excision (PUBMED:12847358). Additionally, APR has been compared with low anterior resection (LAR) for lower and middle rectal cancer, showing no significant difference in 5-year local control, disease-free, and overall survival rates (PUBMED:23932752). However, it is also important to consider alternatives to APR, such as limited excision and radiotherapy for small, early cancers of the rectum and anus (PUBMED:3728811), or combined radical prostatectomy and APR for locally invasive rectal cancer with invasion limited to the prostate or seminal vesicles (PUBMED:25105771). In conclusion, APR can be a good option for stage IV adenocarcinoma of the distal rectum in well-selected patients, but the decision should be made on a case-by-case basis considering the patient's condition and the potential benefits of the procedure.
Instruction: Does oral health counseling effectively improve oral hygiene of orthodontic patients? Abstracts: abstract_id: PUBMED:22971253 Does oral health counseling effectively improve oral hygiene of orthodontic patients? Aim: The aim of this study was to compare the effectiveness of oral health counseling sessions with traditional oral hygiene education in orthodontic patients with fixed appliances. Materials And Methods: Study Design: randomised control trial with experimental and control group. A group of 99 adolescents with fixed orthodontic appliances were randomly assigned to oral health counseling (experimental) or traditional health education (control) group. Subjects in the control group received verbal instructions and a demonstration of the modified Bass brushing technique on a model. The experimental group also received the verbal information with demonstration on the model and in addition a personalised 40-minutes counseling session on oral hygiene. Plaque Index (PI) and gingivitis (G) were recorded before, 1 and 6 months after the counseling session/traditional education. Results: Oral health counseling and traditional education improved the oral hygiene of orthodontic patients. PI values were significantly lower after 6 months compared to the baseline in both groups, but the prevalence of gingival inflammation remained significantly lower only in the experimental group. Conclusion: Oral health counseling increased plaque removal efficacy and control of gingival inflammation. The efficiency of counseling and traditional education was similar. Counseling is a promising approach that warrants further attention in a variety of dental contexts. abstract_id: PUBMED:20953561 Oral hygiene status among orthodontic patients. Aim: The aim of this study was to evaluate the oral hygiene status of patients with fixed orthodontic appliances. Methods And Materials: The following indices were used to evaluate the oral hygiene status of patients in orthodontic treatment: gingival bleeding index (GBI), plaque index (PI), and ortho-plaque index (OPI). A self-administrated questionnaire was prepared covering oral hygiene practice, oral hygiene cleaning aids, and number of visits to a dental hygienist. Results: Fifty patients (15-30 years old) were selected for the study from among the orthodontic patients treated at the King Saud University College of Dentistry, Riyadh, Saudi Arabia. Results showed that the PI and OPI were high with mean scores of 65.24 (SD 16.43) and 53.56 (SD 8.74) respectively, while the average GBI was a much lower value at 19.14 (SD 7.95). No significant difference was observed between male and female patients for the PI (p=0.925) and for the OPI (p=0.072), but a significant difference was observed for the GBI at the 5 percent significance level (p=0.033). The result of OPI showed that 20 (40 percent) of the patients had fair oral hygiene, whereas 30 (60 percent) had poor oral hygiene. Only 16 (32 percent) of the participants reported visiting the dental hygienist during their orthodontic treatment, while the remaining 34 (68 percent) did not. Conclusions: The oral home care of the orthodontic patients surveyed was not at an optimal level, which indicated the need to establish an oral hygiene maintenance program. Clinical Significance: Inadequate oral home care among orthodontic patients may make them more prone to develop gingivitis during orthodontic treatment. It is, therefore, essential that oral hygiene instructions and a hygiene maintenance program not be overlooked during orthodontic treatment. abstract_id: PUBMED:25925523 Assessing the standards of online oral hygiene instructions for patients with fixed orthodontic appliances. Background: The authors conducted this study to assess the quality of the information available on the Web about oral hygiene for patients with fixed orthodontic appliances. Methods: The authors entered the search terms "cleaning braces," "brushing braces," and "oral hygiene and braces" into Google, Yahoo, and Bing search engines. They analyzed Web sites satisfying the inclusion criteria from the first 20 hits of each search for accessibility, usability, and reliability by using the LIDA instrument; for readability by using the Flesch Reading Ease (FRE) score; and for the completeness of oral hygiene instructions. Results: Sixty-two Web sites met the inclusion criteria. The mean total LIDA score of 71.2 indicated the moderate quality of the design of the reviewed Web sites. The mean (standard deviation [SD]) values of LIDA scores for accessibility, usability, and reliability were 85.9 (7.0), 63.4 (16.1), and 48.0 (10.4), respectively. The mean (SD) FRE Score of 68.6 (9.7) applied to standard reading skills. The completeness of information (mean [SD] = 67.1 [27.8]) presented the highest variability. Conclusions: Overall, the authors found that the standards of online oral hygiene materials for orthodontic patients with fixed appliances exhibited modest scores. Readability appeared to be appropriate for young adolescents, whereas the comprehensiveness of the displayed information was highly variable. Further improvement of the infrastructure of electronic health information (that is, e-health) in orthodontics is necessary to meet patients' needs. Practical Implications: Given the moderate quality of oral hygiene instruction available on the Web for patients with fixed appliances, orthodontic patients and caregivers should be cautious when browsing the Internet for relevant information. Dental professionals should refer patients to valid Web-based educational materials. abstract_id: PUBMED:37351389 Oral hygiene awareness and practice in orthodontic patients in Makkah city: A cross sectional study. Objectives: To evaluate the level of oral hygiene awareness and practice among orthodontic patients that have fixed appliances and investigate the differences in awareness as per age, gender, and education qualification. Materials And Methods: A cross-sectional study with 250 patients was included. A questionnaire was used to collect information regarding the patient's oral hygiene. Results: The level of awareness and oral hygiene practice was significantly higher among females, with a consistent stepwise increase toward older ages. Also, increased with higher education level, longer duration since getting the appliance, and receiving instructions from the orthodontist. Conclusions: Oral hygiene instructions received by the orthodontist play an important role in the patient's awareness level. Also, many predictors have a significant effect on the patient's cooperation in practicing oral hygiene measures. abstract_id: PUBMED:35017981 A Comparative Assessment of the Effect of Professional Oral Hygiene Measures on the Periodontal Health of Patients Undergoing Fixed Orthodontic Appliance Therapy. Background: Fixed orthodontic treatment provides good alignment and improves the maxilla-mandibular and dentoalveolar function. The present study was conducted to evaluate fixed orthodontic patients' periodontal condition with professional oral hygiene measures. Materials And Methods: One-hundred and twenty patients undergoing fixed orthodontics were included in the study, divided with 60 patients in the control group (Group I) and the other 60 in the experimental group (Group II). Periodontal indices were compared. Results: In Group I, there were 40 females and 20 males, and in Group II, there were 35 females and 25 males. The mean gingival index was 0.62 and 1.14, plaque index was 0.94 and 1.38, and pocket depth was 1.50 and 1.68 at T0 and T1 in Group I, respectively. The mean gingival index was 0.80 and 1.14, plaque index was 1.2 and 0.81, and pocket depth was 1.31 and 1.57 at T0 and T1 in Group II, respectively. Conclusion: For patients undertaking orthodontic fixed appliance therapy, professional oral hygiene measures can significantly improve periodontal health. abstract_id: PUBMED:30040617 Oral and periodontal hygiene in orthodontic patients Introduction: Orthodontic appliances promote dental plaque retention and make tooth-brushing less easy. They result in both quantitative and qualitative changes in dental plaque, exposing orthodontic patients to dental caries and periodontal diseases. Materials And Methods: The authors explain the effects of orthodontic treatments on dental plaque and the occurrence of dental and periodontal problems; they emphasize the role of pathogenic bacteria and highlight the need for efficient (both individual and professional) control of dental plaque. Discussion: Therefore, orthodontists have to educate their patients on oral and periodontal hygiene in order to control this bacterial factor and to avoid the occurrence of dental and/or periodontal complications. abstract_id: PUBMED:33116442 Adherence to Dietary Advice and Oral Hygiene Practices Among Orthodontic Patients. Introduction: Adherence to dietary advice and proper oral hygiene practices during orthodontic treatment are critical to maintaining good oral health and achieving successful treatment results. Thus, understanding patients' habits and practices before, during, and after orthodontic treatment is needed to ensure better oral health-related behavior among these patients. Purpose: To investigate sugar-related dietary habits, visits to dental hygienists and dentists, and oral hygiene practices among patients during orthodontic treatment and after treatment compared to patients before treatment. Methods: A cross-sectional self-administered survey collected data from 375 patients (aged 10 years and older) from the Orthodontic Department at King Abdulaziz University Dental Hospital, Jeddah, Saudi Arabia. Patients were grouped according to their history of receiving orthodontic treatment into: patients before orthodontic treatment, patients during treatment, and patients after treatment. The survey assessed sugar-related dietary habits, frequency of visits to hygienists and dentists, and oral hygiene practices. Results: Patients during orthodontic treatment and those after treatment were less likely to eat sticky food compared to patients before treatment (OR=0.4, 95% CI=0.2-0.9 and OR=0.5, 95% CI=0.3-0.9, respectively). No significant differences were found in the odds of dental visits among the groups. Compared to patients before orthodontic treatment, patients during treatment and patients after treatment had higher odds of tooth brushing twice or more per day (OR=4.8, 95% CI=1.7-14.3, and OR=4.0, 95% CI=1.2-13.6, respectively) and were more likely to brush for an adequate time (OR=2.6, 95% CI=1.6-4.4 and OR=1.9, 95% CI=1.0-3.5, respectively). Moreover, the likelihood of flossing once daily was higher in patients during treatment and in patients after treatment relative to those before treatment. However, only patients during orthodontic treatment used interdental brushes more than patients before treatment. Conclusion: Patients' oral health-related behavior seems to improve during and after orthodontic treatment. The findings of this study suggest that orthodontists can play a role in improving oral hygiene practices among their patients. More reinforcement of dietary instructions and periodic dental and hygiene visits is needed during orthodontic follow-up appointments. abstract_id: PUBMED:25799001 Usefulness of an app in improving oral hygiene compliance in adolescent orthodontic patients. Objective: To evaluate the influence of an app-based approach in a protocol for domestic oral hygiene maintenance in a group of adolescent patients wearing fixed multibracket appliances. Materials And Methods: Eighty adolescent patients scheduled to start an orthodontic multibracket treatment were randomly divided into two groups of 40. Plaque index (PI), gingival index (GI), white spots (WS), and caries presence were recorded in all patients, and they were instructed regarding domestic oral hygiene maintenance on the day of braces application (t0) and every 3 months (t1, t2, t3, t4) during the first year of treatment. Study group (SG) patients were enrolled in a WhatsApp chat room-based competition and instructed to share monthly with the other participants two self-photographs (selfies) showing their oral hygiene status. Results: SG patient participation in the chat room was regular and active throughout the observation period. At t2, t3, and t4, SG patients had significantly lower values of both PI and GI and a lower incidence of new WS and caries, compared with the control group. Conclusion: Integration of new "social" technologies in a standard oral hygiene motivation protocol is effective in improving compliance of adolescent patients and in improving their oral health status during orthodontic multibracket treatment. abstract_id: PUBMED:28959765 Computer-Based Oral Hygiene Instruction versus Verbal Method in Fixed Orthodontic Patients. Statement Of Problem: Fixed orthodontic appliances in the oral cavity make tooth cleaning procedures more complicated. Objectives: This study aimed to compare the efficacy of computerized oral hygiene instruction with verbal technique among fixed orthodontic patients referred to the evening clinic of Orthodontics of Shiraz Dental School. Materials And Methods: A single-blind study was performed in Orthodontic Department of Shiraz, Islamic Republic of Iran, from January to May 2015 following the demonstrated exclusion and inclusion criteria. The sample size was considered 60 patients with 30 subjects in each group. Bleeding on probing and plaque indices and dental knowledge were assessed in the subjects to determine pre-intervention status. A questionnaire was designed for dental knowledge evaluation. The patients were randomly assigned into the computerized and verbal groups. Three weeks after the oral hygiene instruction, indices of bleeding on probing and plaque index and the dental knowledge were evaluated to investigate post-intervention outcome. The two groups were compared by chi-square and student t tests. The pre- and post-intervention scores in each group were compared using paired t-test. Results: In the computerized group, the mean score for plaque index and bleeding on probing index was significantly decreased while dental health knowledge was significantly increased after oral hygiene instruction, in contrast to the verbal group. Conclusions: Within the limitations of the current study, computerized oral hygiene instruction is proposed to be more effective in providing optimal oral health status compared to the conventional method in fixed orthodontic patients. abstract_id: PUBMED:29415697 Study protocol of a randomized controlled trial to test the effect of a smartphone application on oral-health behavior and oral hygiene in adolescents with fixed orthodontic appliances. Background: Adolescents with fixed orthodontic appliances are at high risk of developing dental caries. To date, new smartphone technologies have seldom been used to support them in the preventive behavior that can help prevent dental caries. After an intervention-mapping process, we developed a smartphone application (the WhiteTeeth app) for preventing dental caries through improved oral-health behavior and oral hygiene. The app, which is intended to be used at home, will help adolescents with fixed orthodontic appliances perform their oral self-care behavior. The app is based on the Health Action Process Approach (HAPA) theory, and incorporates several behavior-change techniques that target the psychosocial factors of oral-health behavior. This article describes the protocol of a randomized controlled trial (RCT) to evaluate the effects of the WhiteTeeth app on oral-health behavior and oral-hygiene outcomes (presence of dental plaque and gingival bleeding) compared with those of care as usual, in patients aged 12-16 with fixed orthodontic appliances. Methods/design: The RCT has two conditions: an experimental group that will receive the WhiteTeeth app in addition to care as usual, and a control group that will only receive care as usual. Care as usual will include routine oral-health education and instruction at orthodontic check-ups. In the western part of the Netherlands 146 participants will be recruited from four orthodontic clinics. Data will be collected during three orthodontic check-ups: baseline (T0), 6 weeks of follow-up (T1) and 12 weeks of follow-up (T2). The primary study outcomes are the presence of dental plaque (measured with a modified Silness and Loë Plaque Index); and gingival bleeding (measured with the Bleeding on Marginal Probing Index). Secondary outcomes include changes in self-reported oral-health behaviors and its psychosocial factors identified by the HAPA theory, such as outcome expectancies, intention, action self-efficacy, coping planning and action control. Discussion: Since the intervention was designed to target psychosocial factors in the motivational and volitional components of the behavior-change process, we hypothesize that the app will cause greater improvements in oral-health behavior and oral hygiene more than traditional oral-health-promotion programs (i.e., care as usual). Trial Registration: The trial has been registered with the Dutch Trial Register ( NTR6206 : 20 February 2017). Answer: Yes, oral health counseling effectively improves oral hygiene of orthodontic patients. A study comparing the effectiveness of oral health counseling sessions with traditional oral hygiene education in orthodontic patients with fixed appliances found that both counseling and traditional education improved oral hygiene. However, the prevalence of gingival inflammation remained significantly lower only in the experimental group that received personalized counseling sessions, indicating that oral health counseling increased plaque removal efficacy and control of gingival inflammation (PUBMED:22971253). Additionally, professional oral hygiene measures have been shown to significantly improve periodontal health in patients undergoing fixed orthodontic appliance therapy (PUBMED:35017981). Moreover, the use of an app-based approach in a protocol for domestic oral hygiene maintenance in adolescent patients wearing fixed multibracket appliances resulted in significantly lower values of plaque index and gingival index and a lower incidence of new white spots and caries, compared with the control group (PUBMED:25799001). Similarly, a study that compared the efficacy of computerized oral hygiene instruction with verbal technique among fixed orthodontic patients found that the computerized group had a significant decrease in plaque index and bleeding on probing index while dental health knowledge significantly increased after oral hygiene instruction (PUBMED:28959765). These findings suggest that oral health counseling, especially when personalized or supported by technology, can be more effective than traditional methods alone in improving oral hygiene among orthodontic patients.
Instruction: Poor adolescent expectant mothers: can we assess their potential for child abuse? Abstracts: abstract_id: PUBMED:11587911 Poor adolescent expectant mothers: can we assess their potential for child abuse? Purpose: To explore the correlates of high scores on the Child Abuse Potential Inventory in adolescent expectant mothers. Methods: Child Abuse Potential scores and data on demographics, pregnancy desire, history of maltreatment, psychological functioning, and perceived social support were obtained by self-report and semi-structured interview. The sample consisted of 50 poor single adolescents recruited from prenatal clinics during the second half of the pregnancy. The relationships among the variables were assessed using Pearson product-moment correlation and multiple regression strategies. Results: Higher Child Abuse Potential scores were associated with higher maternal psychological distress, maternal history of psychiatric diagnosis, and lack of perceived support by the father of the baby. Older pregnant teenagers were more likely to report childhood history of maltreatment, higher psychological distress, and perceived and expected less support by the maternal mother. Expectant mothers who were raised by a single parent were more likely to have a history of childhood maltreatment, less likely to live with the father of the baby during their pregnancy and to expect less support from him. Conclusions: Child Abuse Potential scores, obtained during pregnancy in a sample of poor single adolescents provide a marker of maternal prenatal functioning and perceived social support. Further studies are warranted to validate prenatal use of the Child Abuse Potential Inventory (CAPI), which may help identify populations at particularly high risk for child abuse during pregnancy and inform strategies for early preventive interventions. Adolescent education on family planning, child rearing, and social support programs should address the importance of the fathers' role. abstract_id: PUBMED:11766012 The Child Abuse Potential Inventory and pregnancy outcome in expectant adolescent mothers. Objective: The study explores the prenatal Child Abuse Potential (pCAP) scores derived from the Child Abuse Potential Inventory administered to expectant adolescent mothers. The aim of the study was to assess the association of the pCAP scores with maternal negative prenatal behaviors, and evaluate the contribution of the pCAP scores to neonatal morbidity. Method: The pCAP scores, demographic data, and self-report on prenatal behaviors were obtained during the second half of the pregnancy in a sample of 45 poor single adolescent mothers. A pediatrician blind to the prenatal data reviewed the neonatal records to assess neonatal morbidity. Maternal prenatal records were reviewed for obstetric risk assessment by an obstetrician who was blind to the rest of the data. The relations among the pCAP scores, prenatal behaviors, and neonatal morbidity were analyzed. Results: In the prenatal period, the pCAP scores were positively correlated with self-reported prenatal smoking and substance use. The multiple linear regression analysis showed that the pCAP scores significantly contributed to neonatal morbidity independently of obstetric risk factors. Conclusions: The Child Abuse Potential scores obtained during pregnancy in poor single adolescent mothers reflect domains of maternal functioning that are associated with negative prenatal behaviors and appear to be important for predicting neonatal morbidity. Further studies are warranted to validate the prenatal use of the Child Abuse Potential Inventory. abstract_id: PUBMED:10819101 Childhood history of abuse and child abuse potential in adolescent mothers: a longitudinal study. Objectives: The purpose of this research was to determine whether adolescent mothers of newborns are at higher risk for child abuse than adult mothers of newborns and to examine whether adolescent mothers with memories of child maltreatment have a higher risk for child abuse. Method: Two groups (adolescents and adults) of pregnant mothers were followed for 20 months beginning between the 5th and the 7th month of pregnancy until the child was 18 months old. Adolescent (N = 24) and adult (N = 24) mothers were matched on sociodemographic variables. During pregnancy, memories of child maltreatment were evaluated. When child was 1, 6, 12, and 18 months old, risk for child abuse was evaluated. Results: Adolescent and adult mothers showed no differences in memories of childhood physical or emotional abuse. Nevertheless, adolescent mothers showed higher child abuse potential and depression scores than adult mothers. Mothers with memories of severe physical punishment showed higher child abuse potential scores and mothers with memories of physical punishment producing physical damage showed higher child abuse potential and depression scores. A statistically significant age of the mother by physical punishment producing physical damage interaction was found for depression. Conclusions: The results of this longitudinal study indicated that the potential for abuse was significantly greater in adolescent mothers than in adult mothers, and in mothers who had been victims of physical abuse than in those who had not. It also appeared that, among adolescent mothers, those who had been victims of childhood physical abuse constitute a higher risk group for child physical abuse. abstract_id: PUBMED:38275232 Medical Child Abuse - A Guideline to Detecting this Particular Form of Child Maltreatment The term Medical Child Abuse (MCA) describes a form of child abuse in which the medical system is "abused" by carrying out unnecessary medical procedures on a child. This abuse of the medical system occurs through misrepresentation, non-disclosure, fabrication, misinterpretation or active causation of symptoms by a parent. In this article, the construct ofmedical child abuse is defined and predisposing and motivational factors are examined. It also provides an overview of terms that are used synonymously or comparably in the literature and discusses the connection between MCA and Munchausen-by-proxy-syndrome.The core of the article is the presentation of an internal guideline, which was created by the interdisciplinary working group on MCA of the Clinics for Paediatric and Adolescent Medicine, the Department of Child and Adolescent Psychiatry, Psychosomatic Medicine and Psychotherapy, the Child Protection Outpatient Clinic and the Social Services at the Charité. It also outlines possible interventions. abstract_id: PUBMED:8018230 Child &amp; adolescent psychiatry and child sexual abuse. 8-10% of Swedish women have been subject to sexual abuse. The sexual abused child is in a difficult situation. She feels that she cannot tell anybody anything. She is surrounded by shame, difficulty to remember, fear of not being believed in, feeling of catastrophe ie. The primary task of child and adolescent psychiatry is to give the sexual abused child an opportunity to feel safe and to help her repairing the damages that might occurred. Child and adolescent psychiatry should also participate in the legal process of sexually abused children, because the possibilities of repairing the wounds caused by a sexual abuse depends upon the child's confirmation of its credibility. The child and adolescent psychiatric investigation should be kept between the limits that are allowed by the professional methods and not make statements on subjects we don't know just because the legal process wishes us to. abstract_id: PUBMED:36078382 Association between Child Abuse and Poor Oral Habits in Mongolian Adolescents. (1) This study aimed to investigate the association between child abuse and oral habits in adolescents in Mongolia. (2) A cross-sectional survey was conducted with children and their caregivers in Ulaanbaatar, Mongolia. Parents of 770 children enrolled in two public schools in Ulaanbaatar, Mongolia, completed questionnaires regarding the physical and psychological abuse that their children were subjected to and the presence of poor oral habits (biting nails/lips/pens and bruxism). Multivariable Poisson regression models were fitted with adjustment for age, gender, age of the mother, parental education, family income level, birth order, and living status with grandparents. (3) Biting nails/lips/pens and bruxism were reported by 39.0% and 17.5% of the respondents, respectively. Biting nails/lips/pens was significantly associated with physical abuse but was not significantly associated with psychological abuse (prevalence ratio, PR [95% confidence interval, CI]: 1.44 [1.07-1.95] and 1.34 [0.98-1.83], respectively). However, bruxism was not associated with physical or psychological abuse (PR [95% CI]: 1.16 [0.77-1.77] and 1.04 [0.68-1.61], respectively). (4) Child abuse was associated with biting habits among Mongolian adolescents. abstract_id: PUBMED:24881128 Physical child abuse potential in adolescent girls: associations with psychopathology, maltreatment, and attitudes toward child-bearing. Objective: Adolescent mothers are at increased risk of mistreating their children. Intervening before they become pregnant would be an ideal primary prevention strategy. Our goal was to determine whether psychopathology, exposure to maltreatment, preparedness for child-bearing, substance use disorders (SUDs), IQ, race, and socioeconomic status were associated with the potential for child abuse in nonpregnant adolescent girls. Method: The Child Abuse Potential Inventory (CAPI) was administered to 195 nonpregnant girls (aged 15 to 16 years; 54% African American) recruited from the community. Psychiatric diagnoses from a structured interview were used to form 4 groups: conduct disorder (CD), internalizing disorders (INTs; that is, depressive disorder, anxiety disorder, or both), CD + INTs, or no disorder. Exposure to maltreatment was assessed with the Childhood Trauma Questionnaire, and the Childbearing Attitudes Questionnaire measured maternal readiness. Results: CAPI scores were positively correlated with all types of psychopathology, previous exposure to maltreatment, and negative attitudes toward child-bearing. IQ, SUDs, and demographic factors were not associated. Factors associated with child abuse potential interacted in complex ways, but the abuse potential of CD girls was high, regardless of other potentially protective factors. Conclusions: Our study demonstrates that adolescent girls who have CD or INT are at higher risk of perpetrating physical child abuse when they have children. However, the core features of CD may put this group at a particularly high risk, even in the context of possible protective factors. Treatment providers should consider pre-pregnant counselling about healthy mothering behaviours to girls with CD. abstract_id: PUBMED:29482109 Differences in child sexual abuse cases involving child versus adolescent complainants. While adolescents report the highest rates of sexual abuse victimization, few studies have investigated how child sexual abuse (CSA) cases involving adolescent complainants may differ from cases involving child complainants. The current study draws on 3,430 allegations of CSA in Canada to compare abuse characteristics and judicial outcomes in cases involving adolescent complainants to cases involving child complainants. Adolescent complainants were more likely than child complainants to be abused by a stranger or a person with a community connection to the complainant, while children were more likely than adolescents to be abused by a parent or other relative. Furthermore, compared to child complainants, adolescent complainants were more frequently involved in the most intrusive offenses and their cases were more likely to involve violence. Both groups were most likely to disclose the abuse to a parent, though a greater proportion of children disclosed the abuse to a parent. There were no differences in the delay to disclosure. Accused were equally likely to plead "guilty" and to be convicted in cases involving child and adolescent complainants. However, offenders convicted of the most intrusive offenses received longer probation sentences when the complainant was a child than when the complainant was an adolescent. These findings have implications for ensuring appropriate support and services to adolescent victims of CSA. abstract_id: PUBMED:2207805 Child abuse by adolescent caregivers. The purpose of this analysis was to estimate the frequency and severity of child abuse committed by adolescents who were in nonparental caregiving roles. The sample was composed of cases in which either physical or sexual abuse was substantiated through child welfare investigation. The main comparisons were between adolescent and adult caregivers. While caregiver age did not appear to have a consistent effect on the occurrence of physical abuse, notable differences between adolescents and adults were found in the area of sexual abuse. Not only were adolescents observed to commit substantially more sexual abuse than older caregiver cohorts, but the sexual abuse they committed was more likely to involve intercourse and physical assault. These findings have implications for future research and practice. abstract_id: PUBMED:10700376 The internal consistency of the child abuse potential inventory with adolescent mothers. In this study, 105 adolescents completed the Child Abuse Potential Inventory (CAP) at an average of 2 months postpartum. The purpose was to assess the reliability (internal consistency) of the CAP with adolescent mothers. The second purpose was to begin to establish a line of inquiry that examines the value of using the CAP with this population. The results showed that the alpha reliabilities were low for the CAP abuse scale (0.65) and low to moderate for its six subscales (range 0.59-0.74). The conclusion was drawn that further research is needed to understand the psychometric properties of the CAP with adolescent mothers. Answer: Yes, it is possible to assess the potential for child abuse in poor adolescent expectant mothers. Research has explored the correlates of high scores on the Child Abuse Potential Inventory (CAPI) in this population. Higher Child Abuse Potential scores were associated with factors such as higher maternal psychological distress, a maternal history of psychiatric diagnosis, and lack of perceived support by the father of the baby. Additionally, older pregnant teenagers were more likely to report a childhood history of maltreatment, higher psychological distress, and perceived and expected less support from their own mothers. Those raised by a single parent were more likely to have a history of childhood maltreatment and less likely to live with the baby's father during their pregnancy or to expect support from him (PUBMED:11587911). Furthermore, the prenatal Child Abuse Potential (pCAP) scores were positively correlated with self-reported prenatal smoking and substance use, and these scores significantly contributed to neonatal morbidity independently of obstetric risk factors (PUBMED:11766012). Adolescent mothers with memories of child maltreatment showed higher child abuse potential and depression scores than those without such memories (PUBMED:10819101). The assessment of the potential for child abuse in adolescent mothers can be an important step in preventing child abuse. The use of tools like the CAPI during pregnancy can provide a marker of maternal prenatal functioning and perceived social support, which may help identify populations at particularly high risk for child abuse. This can inform strategies for early preventive interventions, including adolescent education on family planning, child-rearing, and the importance of social support programs that emphasize the father's role (PUBMED:11587911).
Instruction: Utility of the Surgical Apgar Score in Kidney Transplantation: Is it Feasible to Predict ICU Admission, Hospital Readmission, Length of Stay, and Cost in This Patient Population? Abstracts: abstract_id: PUBMED:27207400 Utility of the Surgical Apgar Score in Kidney Transplantation: Is it Feasible to Predict ICU Admission, Hospital Readmission, Length of Stay, and Cost in This Patient Population? Background: This study analyzed the utility of the Surgical Apgar Scoring (SAS) system in predicting morbidity in kidney transplantation. Recipient comorbidities were evaluated for any effect on the SAS and then globally assessed for any relationship with intensive care unit (ICU) admission, need for dialysis, creatinine at discharge, length of stay, incremental, and total cost of transplantation. The hypothesis for this study is that a low SAS will be a statistically significant predictor of postoperative morbidity and associated costs. Methods: This was an institutional review board (IRB)-approved retrospective longitudinal cohort study on 204 solitary kidney transplant recipients (2009-2011). Patients were divided into 2 groups: low to moderate = SAS ≤ 7 and high = SAS ≥ 8. These groups were then analyzed against a host of variables. Results: Sixty-five percent of patients had an SAS of 7 or lower, while 35% had an SAS of 8 and higher. Recipients with a history of stroke were 88% more likely to be in the low-moderate SAS group (P = .017). Patients with lower SASs trended toward having less extended criteria donors (0.097) but were more likely to be admitted to the ICU (P = .043), leading to significantly higher transplant event hospitalization costs. Higher SASs were more likely to be readmitted to the hospital within 30 days of discharge (P = .027), leading to higher 30-day postdischarge costs (P = .014). Readmission rates, however, and 30-day follow-up costs were similar between SAS groups after controlling for donor characteristics, specifically donor marginality and recipient estimated glomerular filtration rate (eGFR). Conclusion: The findings of this study suggest that a history of stroke in the recipient may lend to a lower SAS and that a low SAS is associated with ICU admission following transplant, leading to higher hospital costs. abstract_id: PUBMED:26107682 Severity of Anemia Predicts Hospital Length of Stay but Not Readmission in Patients with Chronic Kidney Disease: A Retrospective Cohort Study. The aim of this study was to examine the relationship of severe anemia to hospital readmission and length of stay (LOS) in patients with chronic kidney disease (CKD) stage 3-5. Compared with the general population, patients with moderate CKD have a higher hospital readmission rate and LOS. Anemia in patients with moderate CKD is associated with higher morbidity and mortality. The influence of anemia on hospital outcomes in patients with moderate CKD has not been characterized.We conducted a retrospective cohort study at Maine Medical Center, a 606-bed academic tertiary care hospital. Patients with CKD stages 3-5 and not on dialysis admitted during February 2013 to January 2014 were eligible. Patients with end stage renal disease on hemodialysis or peritoneal dialysis, kidney transplant, acute kidney injury, gastrointestinal bleeding, active malignancy, pregnancy, and surgery were excluded. The cohort was split into severe anemia (hemoglobin ≤9 g/dL) versus a comparison group (hemoglobin &gt;9 g /dL), and examined for differences in 30-day hospital readmission and LOS.In this study, the data of 1141 patients were included, out of which 156 (13.7%) had severe anemia (mean hemoglobin 8.1 g/dL, SD 0.8). Severe anemia was associated with increased hospital LOS (mean 6.4 (SD 6.0) days vs mean 4.5 (SD 4.0) days, P &lt; 0.001). The difference was 1.7 day longer (95% CI 0.94, 2.45). There was no difference in readmission rate (mean 11.5% vs 10.2%, P = 0.7).Patients with moderate CKD and severe anemia are at risk for increased hospital LOS. Interventions targeting this high-risk population, including outpatient management of anemia, may benefit patient care and save costs through improved hospital outcomes. abstract_id: PUBMED:20439248 Association between admission hyperglycemia and length of stay after renal transplantation. Objective: To compare length of stay, readmissions, infections, and mortality in patients with end-stage renal disease who have been admitted to receive renal transplant, stratified according to diabetes status and admission glucose concentration. Methods: We conducted a retrospective analysis of all adult patients who underwent renal transplant at an academic center during 2006. Patients were stratified according to diabetes status before transplant and glucose concentration at hospital admission (hyperglycemic [&gt; 180 mg/dL] or normoglycemic [≤ = 180 mg/dL]). The groups were compared with respect to length of stay, number of readmissions during the 2-year period after transplant, infections, and mortality. Results: Ninety-eight patients underwent renal transplant during the study period, and 11 were excluded because of incomplete data. Thus, 87 patients were included. There was a trend towards greater length of stay and higher mortality in patients with known diagnosis of diabetes. When stratified according to glucose concentration at admission, patients with hyperglycemia had a significantly longer length of stay than normoglycemic patients (10 ± 4.3 days vs 7.9 ± 2.9; P = .039), even after correcting for diabetes status. Conclusion: Hyperglycemia at hospital admission, rather than a known diagnosis of diabetes, is associated with increased length of stay in patients admitted for renal transplant. abstract_id: PUBMED:26474070 Early Hospital Readmission After Simultaneous Pancreas-Kidney Transplantation: Patient and Center-Level Factors. Early hospital readmission is associated with increased morbidity, mortality, and cost. Following simultaneous pancreas-kidney transplantation, rates of readmission and risk factors for readmission are unknown. We used United States Renal Data System data to study 3643 adult primary first-time simultaneous pancreas-kidney recipients from December 1, 1999 to October 31, 2011. Early hospital readmission was any hospitalization within 30 days of discharge. Modified Poisson regression was used to determine the association between readmission and patient-level factors. Empirical Bayes statistics were used to determine the variation attributable to center-level factors. The incidence of readmission was 55.5%. Each decade increase in age was associated with an 11% lower risk of readmission to age 40, beyond which there was no association. Donor African-American race was associated with a 13% higher risk of readmission. Each day increase in length of stay was associated with a 2% higher risk of readmission until 14 days, beyond which each day increase was associated with a 1% reduction in the risk of readmission. Center-level factors were not associated with readmission. The high incidence of early hospital readmission following simultaneous pancreas-kidney transplant may reflect clinical complexity rather than poor quality of care. abstract_id: PUBMED:33380430 Impact of transplant pharmacists on length of stay and 30-day hospital readmission rate: a single-centre retrospective cohort study. Objectives: Our study was conducted to evaluate the impact of clinical transplant pharmacy services on the kidney transplant programme at our centre, following the introduction of these services in terms of inpatient length of stay (LOS) and all-cause 30-day readmission rates by comparing these data to data from before (and therefore in the absence of) the services. Methods: This was a single-centre retrospective cohort analysis. Data were collected in two phases: phase I (pre-transplant pharmacist period, that is, transplant pharmacist service was not available) comprised transplant patients between 1 October 2015 and 30 September 2016 and phase II (post-transplant pharmacist period) comprised transplant patients between 1 October 2016 and 30 September 2017. Patients ≥18 years of age, who received a kidney transplant in our centre, and received steroids, tacrolimus and mycophenolate for maintenance immunosuppression, were included. Transplant pharmacy services provided followed the expectations of the Centers for Medicare &amp; Medicaid Services for transplant centres. Primary outcomes were LOS after transplant surgery and all-cause 30-day hospital readmission rate. Secondary outcomes included the number of pharmacy notes, the achievement of therapeutic levels of tacrolimus at day 7 post-surgery and delayed graft function. Results: The two groups (n=101 in phase I and n=104 in phase II) had similar demographics and transplant characteristics at baseline. There was a statistically significant difference in the inpatient LOS (6.58 vs 5.76 days; p=0.041) between phase I and phase II, respectively. However, this did not pan out in the rate of 30-day hospital readmission (36 (35.6%) vs 27 (25.9%); p=0.133) between the two phases. The number of transplant pharmacist notes pre-transplant, post-surgery and on discharge were significantly higher in the post-transplant pharmacist group. There was no significant difference in tacrolimus concentration at day 7 (mean 7.15 ng/mL in phase I vs 6.95 ng/mL in phase II; p=0.673) or delayed graft function. Conclusion: Our study showed a favourable inpatient LOS and comparable 30-day hospital readmission rate before and after the implementation of transplant pharmacy services. abstract_id: PUBMED:17524867 Prolonged rehospitalizations following renal transplantation: causes, risk factors, and outcomes. Background: Although some studies have described rehospitalization after transplantation, few have focused on risk factors and consequences of prolonged hospital stay. Our goal was to determine the causes, risk factors, and outcomes of prolonged rehospitalizations after renal transplantation. Patients And Methods: In this retrospective study, 574 randomly selected rehospitalization records of kidney transplant recipients were reviewed from 1994 to 2006. Admissions were divided into group 1, prolonged stay (length of stay &gt;14 days, n=149), and group II, short stay (length of stay &lt;or=14 days, n=425). Demographic data, cause of end-stage renal disease (ESRD), cause of readmission, ICU admission, time interval between transplantation and rehospitalization, costs, and in-patient mortality were compared between the two groups. Results: Mean (+/-SD) hospital stay was 10.6 +/- 9.8 days. Median hospital stay was 5 days for renal stones, 7 days for surgical complications, 8 days for malignancy, 9 days for infection, and 10 days for renal dysfunction. We found higher rates of ESRD due to diabetes in group I (28% vs. 15.4%; P=.006). Admissions due to infections (56.4% vs 42.4%; P=.003) or renal dysfunctions (55% vs 41.4%; P=.004) were the cause of higher proportions of total hospitalizations with prolonged stay. Prolonged stay also correlated with higher ICU admissions (8.8% vs 2.8%; P=.002) and mortality (6.7% vs 3.05%; P=.001). Mean total hospital cost for short versus prolonged hospitalizations were US$ 586 versus US$ 2750, respectively. Conclusion: In this study, we found that prolonged hospital stays accounted for &gt;62% of all hospital costs; however, they comprised only 26% of the patients. High-risk kidney transplant recipients for prolonged hospitalizations should be closely observed for infections and graft rejection. abstract_id: PUBMED:11926752 Multicenter US study of hospital resource utilization associated with cytomegalovirus-related readmission of renal and heart transplant patients. Incidence of cytomegalovirus (CMV)-related rehospitalization and associated resource use were captured by the Transplant Infection Cost Analysis (TICA) program, which examined patient records and hospital billing data in multiple solid organ transplant centers in the US. The experiences of two adult heart and three adult renal transplant centers were each pooled for analysis. Financial data were standardized to 1998 US dollars using the Medical Care component of the US Consumer Price Index. CMV-related readmissions among renal transplant patients averaged 10.5 days (range 1-56) with average charges of $22,598. Heart transplant patients readmitted for CMV incurred an average charge of $42,111 and average hospital stay of 10.9 days (range 2-95). CMV-related hospital resource use represented a significant portion of the average cost of the original transplant and associated length of stay. abstract_id: PUBMED:26859220 Prehabilitation in our most frail surgical patients: are wearable fitness devices the next frontier? Purpose Of Review: Frailty is the concept of accumulating physiologic declines that make people less able to deal with stressors, including surgery. Prehabilitation is intervention to enhance functional capacity before surgery. Frailty and prehabilitation among transplant populations and the role of wearable fitness tracking devices (WFTs) in delivering fitness-based interventions will be discussed. Recent Findings: Frailty is associated with increased complications, longer length of hospital stay and increased mortality after surgery. Frail kidney transplant patients have increased delayed graft function, mortality and early hospital readmission. Frail lung or liver transplant patients are more likely to delist or die on the waitlist. Prehabilitation can mitigate frailty and has resulted in decreased length of hospital stay and fewer postsurgical complications among a variety of surgical populations. Increasingly, WFTs are used to monitor patient activity and improve patient health. Interventions using WFTs have resulted in improved activity, weight loss and blood pressure. Summary: Frailty is a measurable parameter that identifies patients at risk for worse health outcomes and can be mitigated through intervention. Prehabilitation to reduce frailty has been shown to improve postsurgical outcomes in a variety of populations. WFTs are being integrated in healthcare delivery for monitoring and changing health behavior with promising results. abstract_id: PUBMED:23470704 Acuity-adaptable patient room improves length of stay and cost of patients undergoing renal transplant: a pilot study. The acuity-adaptable patient room concept is an emerging care model where patient is cared for in the same room from admission through discharge regardless of the patient level of acuity. After implementation of the care cluster strategy to support the implementation of an acuity-adaptable patient room, a descriptive study was conducted looking at so whether there will be a decreased length of stay and cost on patient cared for in the acuity-adaptable patient room compared to patients cared for in a transitional care process. Result of the study showed decreased length of stay of kidney transplant patients from 9.6 (11.0) days (before acuity-adaptable patient room) to 4.1 (1.3) days (acuity-adaptable patient room). Not only that the acuity-adaptable patient room improves patient outcome and cost but with the nursing competency preparation to support the implementation of the acuity-adaptable patient room, a hybrid nurse was created who possessed both critical care and medical-surgical skills. This can be a potential trend in the professional nurse model to address the health care challenges we face today in terms of nursing shortage, abbreviated plan of care, and facility operation efficiency. abstract_id: PUBMED:37126410 Postoperative length of stay following kidney transplantation in patients without delayed graft function-An analysis of center-level variation and patient outcomes. Background: Early discharge after surgical procedures has been proposed as a novel strategy to reduce healthcare expenditures. However, national analyses of the association between discharge timing and post-transplant outcomes following kidney transplantation are lacking. Methods: This was a retrospective cohort study of all adult kidney transplant recipients without delayed graft function from 2014 to 2019 in the Organ Procurement and Transplantation Network and Nationwide Readmissions Databases. Recipients were divided into Early (LOS ≤ 4 days), Routine (LOS 5-7), and Delayed (LOS &gt; 7) cohorts. Results: Of 61 798 kidney transplant recipients, 26 821 (43%) were discharged Early and 23 279 (38%) Routine. Compared to Routine, patients discharged Early were younger (52 [41-61] vs. 54 [43-62] years, p &lt; .001), less commonly Black (33% vs. 34%, p &lt; .001), and more frequently had private insurance (41% vs. 35%, p &lt; .001). After adjustment, Early discharge was not associated with inferior 1-year patient survival (Hazard Ratio [HR] .74, 95% Confidence Interval [CI] 0.66-0.84) or increased likelihood of nonelective readmission at 90-days (HR .93, CI .89-.97), relative to Routine discharge. Discharging all Routine patients as Early would result in an estimated cost saving of ∼$40 million per year. Multi-level modeling of post-transplantation LOS revealed that 28.8% of the variation in LOS was attributable to interhospital differences rather than patient factors. Conclusions: Early discharge after kidney transplantation appears to be cost-efficient and not associated with inferior post-transplant survival or increased readmission at 90 days. Future work should elucidate the benefits of early discharge and develop standardized enhanced recovery protocols to be implemented across transplant centers. Answer: The Surgical Apgar Score (SAS) has been analyzed for its utility in predicting morbidity in kidney transplantation, including ICU admission, hospital readmission, length of stay, and associated costs. The study found that recipients with a history of stroke were more likely to have a lower SAS, and a low SAS was associated with ICU admission following transplant, leading to higher hospital costs. However, higher SASs were more likely to be readmitted to the hospital within 30 days of discharge, leading to higher 30-day post-discharge costs. After controlling for donor characteristics and recipient estimated glomerular filtration rate (eGFR), readmission rates and 30-day follow-up costs were similar between SAS groups (PUBMED:27207400). Other studies have identified various factors influencing hospital length of stay and readmission rates in patients with kidney-related conditions. For instance, severe anemia in patients with chronic kidney disease (CKD) was associated with increased hospital length of stay but not readmission rates (PUBMED:26107682). Admission hyperglycemia, rather than a known diagnosis of diabetes, was associated with increased length of stay in patients admitted for renal transplant (PUBMED:20439248). Early hospital readmission following simultaneous pancreas-kidney transplantation was found to be high, with patient age and donor race among the factors associated with readmission risk (PUBMED:26474070). The introduction of clinical transplant pharmacy services was shown to have a favorable impact on inpatient length of stay but did not significantly affect the 30-day hospital readmission rate (PUBMED:33380430). Prolonged hospital stays after renal transplantation were found to be costly and were associated with higher rates of infections, ICU admissions, and mortality (PUBMED:17524867). The acuity-adaptable patient room concept, where a patient is cared for in the same room from admission through discharge, was shown to decrease the length of stay for kidney transplant patients (PUBMED:23470704). Lastly, early discharge after kidney transplantation was not associated with inferior post-transplant survival or increased readmission at 90 days, suggesting that early discharge can be cost-efficient (PUBMED:37126410).
Instruction: Are drug experimenters better adjusted than abstainers and users? Abstracts: abstract_id: PUBMED:16982382 Are drug experimenters better adjusted than abstainers and users?: a longitudinal study of adolescent marijuana use. Purpose: Experimentation with substance use is normative during adolescence and prior research suggests that adolescents who refrain from experimentation may be psychologically maladjusted. This longitudinal study compared lifetime marijuana abstainers (n = 1177), experimenters (n = 873), and frequent users (n = 205) at grade 12 on psychosocial functioning during late adolescence and young adulthood. Methods: Participants were recruited from middle schools in 1985 (grade 7) and assessed repeatedly, including in 1990 (grade 12) and 1995 (age 23). Self-report surveys assessed lifetime substance use at grade 12, and psychosocial functioning at grade 12 and age 23. Group differences after controlling for key demographics were estimated using multivariate logistic regression and analysis of covariance. Results: Adolescent abstainers from marijuana often fared better (and in no case worse) than experimenters and frequent users both concurrently and later in life on school engagement, family and peer relations, mental health, and deviant behavior. Similar results were found in ancillary analyses using a definition of adolescent "abstainer" that also accounted for cigarette and alcohol use. Conclusions: Results refute the idea that adolescents who abstain from substance use are maladjusted, and suggest instead that they function better than experimenters later in life, during the transition to young adulthood. abstract_id: PUBMED:10756515 Differences in young adult psychopathology among drug abstainers, experimenters, and frequent users. Shedler and Block offered the provocative proposal that individuals who experiment with drugs are psychologically healthier than either those who abstain completely or those who are frequent users. Not all studies have come to such conclusions, however. In an effort to specify under what conditions Shedler and Block's conclusions might hold, the present study examined three groups of drug users (abstainers, experimenters, frequent users) classified according to three different criteria: (a) marijuana use at age 20; (b) alcohol use during 10th grade; and (c) alcohol use at age 20. The three groups were compared at age 20 in terms of personality, deviant behavior, and psychopathology. The results revealed that abstainers were never more psychologically impaired, and were occasionally healthier, than experimenters. Frequent users of marijuana were consistently more imparied than both the abstainers and experimenters, in terms of both internalizing and externalizing disorders. Classification according to marijuana use appeared to be more related to psychopathology than did classification according to alcohol use. abstract_id: PUBMED:36975225 Implicit Attitudes of New-Type Drug Abstainers towards New-Type Drugs and Their Relapse Tendencies. Over the last decade, new-type drugs have been replacing traditional-type drugs in China. However, studies of implicit attitudes towards new-type drugs are insufficient and contradictory results exist. Previous studies have suggested that implicit attitudes and relapse tendencies are a dual structure model, but that is for traditional or mixed drug addicts. For new drug addicts, is the dual structure model completely suitable or partially supported? This study attempts to explore this point. At a drug rehabilitation center, we randomly selected 50 abstainers (25 males and 25 females; age range: 21-41 years) who only took new-type drugs prior to abstention to participate in this study. Participants complete the General Situation Questionnaire, the Drug Use Characteristics Questionnaire, the Drug Relapse Risk Scale (DRRS), and the Single Category Implicit Association Test (SC-IAT). The relationship between implicit attitudes and relapse tendencies of new-type drug abstainers towards new-type drugs was investigated. The results showed: (1) abstainers had negative attitudes towards new-type drugs, and the data had statistical correlation with abstainers' drug use characteristics and each relapse risk index; (2) females held relatively positive implicit attitudes towards new-type drugs; (3) being female and divorced could significantly predict abstainers' implicit attitudes; (4) there is no significant correlation between implicit attitudes and relapse tendencies of new-type drug abstainers towards new-type drugs, which partially supports the dual structure model; (5) fender influences the self-assessment of relapse probability. Compared with traditional or mixed drug addicts, the dual structure model is only partially supported for new-type drug abstainers towards new-type drugs. That is because being female and divorced are the main factors influencing implicit attitudes and relapse tendencies. A few women or divorced people regard consuming new drugs as the source of happiness and forget the harm brought by the drugs themselves in their implicit attitude, which leads to more drug abuse in their explicit behavior. Therefore, we should pay more attention to women who have become new-type drug addicts and pay special attention to the impact of divorce. abstract_id: PUBMED:26434251 Disability-Adjusted Life Years among Drug Users in Khon Kaen: An Evidence-Based Approach. Objective: To examine the Disability Adjusted Life Years (DALYs) among drug users in Khon Kaen. Material And Method: A cross-sectional descriptive study was conducted between January 2014 and February 2015. Patients' medical records, supported by the Khon Kaen Provincial Public Health Office, were used for data regarding the treatment ofthe drug users between October 2013 and September 2014. SPSS version 19.0 for windows was used to analyze descriptive statistics and Pearson's correlations. Results: A review of medical records revealed a total number of drug users were 3,605 males (92.25%) and 303 females (7.75%). A total of 2,195 cases (56.17%) were drug dependence. 20 drug users died due to drug-related causes. A total DALYs were 6,772.26 and a prevalence rate of DALYs was 3.76/1,000 populations. The correlation analyses showed significantly factors related to the DALYs among drug users were age, occupation, pattern of drug use, levels of severity, drug-related death and treatment services. Conclusion: The DALYs were apparently high among male drug users, who completed primay school education, were drug-dependent and being treated in group therapy. The health authorities may use these associated factors in order to develop or improve programs related to prevention, harm reduction and treatment services effectively. abstract_id: PUBMED:35413426 Altered neural processing of social reward in male heroin abstainers. Individuals who are addicted to drugs often face unfavourable social conditions and difficulty with social adaptation. Both may be closely related to impaired social cognitive ability. This study posits that social cognitive impairments likely arise from blunted social reward processing in drug users. This study aimed to explore the electrophysiological mechanism of social reward processing in people who abstain from using heroin (heroin abstainers). Twenty-eight male heroin abstainers and 27 matched controls completed the social incentive delay task. At the same time, their corresponding behaviour and electroencephalography data were recorded. The feedback-related negativity (FRN) elicited by positive outcomes was significantly more positive than that elicited by negative outcomes for the healthy controls. However, no significant difference in FRN was found between negative and positive outcomes among the heroin abstainers. There was no significant difference in P3 (a positive event-related potential component after FRN) elicited by negative and neutral outcomes in the heroin abstainers. Meanwhile, negative outcomes elicited greater P3 than neutral outcomes in the healthy controls. In addition, this study also found that withdrawal time was negatively correlated with the difference wave of FRN for the heroin abstainers. Heroin abstainers may be hyposensitive to the processing of social reward outcomes. In other words, they may have insufficient motivation to acquire social rewards. Abnormal social reward processing found in heroin abstainers can be improved with an increase in abstinence time. These results deepen our understanding of the social reward impairments associated with chronic drug use. abstract_id: PUBMED:34456796 Identifying Methamphetamine Abstainers With Convolutional Neural Networks and Short-Time Fourier Transform. Few studies have investigated the functional patterns of methamphetamine abstainers. A better understanding of the underlying neurobiological mechanism in the brains of methamphetamine abstainers will help to explain their abnormal behaviors. Forty-two male methamphetamine abstainers, currently in a long-term abstinence status (for at least 14 months), and 32 male healthy controls were recruited. All subjects underwent functional MRI while responding to drug-associated cues. This study proposes to combine a convolutional neural network with a short-time Fourier transform to identify different brain patterns between methamphetamine abstainers and controls. The short-time Fourier transformation provides time-localized frequency information, while the convolutional neural network extracts the structural features of the time-frequency spectrograms. The results showed that the classifier achieved a satisfactory performance (98.9% accuracy) and could extract robust brain voxel information. The highly discriminative power voxels were mainly concentrated in the left inferior orbital frontal gyrus, the bilateral postcentral gyri, and the bilateral paracentral lobules. This study provides a novel insight into the different functional patterns between methamphetamine abstainers and healthy controls. It also elucidates the pathological mechanism of methamphetamine abstainers from the view of time-frequency spectrograms. abstract_id: PUBMED:28779309 Reduction in N2 amplitude in response to deviant drug-related stimuli during a two-choice oddball task in long-term heroin abstainers. Rationale: Chronic heroin use can cause deficits in response inhibition, leading to a loss of control over drug use, particularly in the context of drug-related cues. Unfortunately, heightened incentive salience and motivational bias in response to drug-related cues may exist following abstinence from heroin use. Objectives: The present study aimed to examine the effect of drug-related cues on response inhibition in long-term heroin abstainers. Methods: Sixteen long-term (8-24 months) male heroin abstainers and 16 male healthy controls completed a modified two-choice oddball paradigm, in which a neutral "chair" picture served as frequent standard stimuli; the neutral and drug-related pictures served as infrequent deviant stimuli of different conditions respectively. Event-related potentials were compared across groups and conditions. Results: Our results showed that heroin abstainers exhibited smaller N2d amplitude (deviant minus standard) in the drug cue condition compared to the neutral condition, due to smaller drug-cue deviant-N2 amplitude compared to neutral deviant-N2. Moreover, heroin abstainers had smaller N2d amplitude compared with the healthy controls in the drug cue condition, due to the heroin abstainers having reduced deviant-N2 amplitude compared to standard-N2 in the drug cue condition, which reversed in the healthy controls. Conclusions: Our findings suggested that heroin addicts still show response inhibition deficits specifically for drug-related cues after longer-term abstinence. The inhibition-related N2 modulation for drug-related could be used as a novel electrophysiological index with clinical implications for assessing the risk of relapse and treatment outcome for heroin users. abstract_id: PUBMED:33754934 Updating Emotional Stimuli in Heroin Abstainers' Working Memory: An Event-Related Potential Investigation. Background: It is well-documented that heroin users demonstrate aberrant emotion-processing abilities. However, the mechanism by which heroin users process emotional information after it has captured their attention and entered their working memory is unclear. Objectives: A modified emotional 2-back task was used to examine whether heroin abstainers demonstrate specific bias patterns in updating emotional stimuli in their working memory. Methods: In total, 26 male heroin abstainers and 29 healthy controls were asked to identify whether the current picture was the same as a picture that had appeared two trials earlier, while behavioral data and electroencephalogram data were collected. Results: Contrary to predictions, the heroin abstainers and healthy controls demonstrated a similar pattern of P300 activity in response to emotional stimuli with no between-group differences in accuracy or reaction time. More specifically, the P300 amplitudes were larger for negative pictures than for positive and neutral pictures. Surprisingly, we found larger P300 amplitudes at Fz electrodes than at Cz and Pz electrodes in the control group, whereas there was no significant difference at midline electrodes in the heroin abstainers. Conclusions/importance: Although subtle differences may exist in attentional engagement toward incoming emotional stimulus between two groups, the similar P300 pattern may indicate partial preservation of emotional working memory capacity associated with adaptive emotion regulation in heroin abstainers. These results deepen our understanding of the emotion regulation impairments associated with chronic drug use. abstract_id: PUBMED:28635346 Effect of drug-related cues on response inhibition through abstinence: A pilot study in male heroin abstainers. Background: Chronic heroin use can cause a deficit of inhibitory function, leading to a loss of control over drug use. Exposure to drug-related cues is considered as one of the contributing factors. However, it is unclear whether there are dynamic changes on the effect of drug-related cues on response inhibition following prolonged abstinence. Objective: The present study investigated the effect of drug-related cues on response inhibition in heroin abstainers at different abstinent phases. Methods: 26 shorter-term (2-6 months) and 26 longer-term (19-24 months) male heroin abstainers performed on a modified two-choice Oddball task, which included two conditions: in the cued condition, neutral pictures served as the background of standard stimuli (yellow frame) and heroin-related pictures served as the background of deviant stimuli (blue frame), reversed in the controlled conditions. Results: Compared to longer-term abstainers, mean reaction time (RT) for drug deviants in shorter-term abstainers was significantly longer. Shorter-term abstainers also showed markedly slower response to neutral deviants relative to drug deviants, but this tendency was not observed in longer-term abstainers. Nevertheless, both groups had similar RT for standard stimuli regardless of their paired background pictures. Conclusion: Effect of drug-related cues on response inhibition remains at the early stage of abstinence; however, this effect may be reduced following a longer period of drug abstinence. Our findings highlight the importance of assessing and improving the ability of inhibiting drug-related cue reactivity during treatment. abstract_id: PUBMED:31915860 Reduced response inhibition after exposure to drug-related cues in male heroin abstainers. Rationale: Deficits in response inhibition associated with heroin use could last several months after abstinence in heroin users, and their response inhibition can also be interfered with task-irrelevant drug-related cues. However, it is unclear whether exposure to drug-related cues affects subsequent response inhibition in heroin users following abstinence. Objectives: The present study aimed to investigate how drug-related cues with different durations between stimulus presentations, referred to as stimulus onset asynchronies (SOAs), affect subsequent response inhibition in heroin abstainers (HAs) with different length of abstinence. Methods: Sixty-seven male HAs performed a modified Go/NoGo task in which a motor response to frequent Go targets and no response to rare NoGo targets were required and a Go or NoGo target was displayed after either a heroin-related or a neutral picture presented for the 200 ms and 600 ms SOAs. Results: The HAs responded significantly faster to Go targets following the neutral pictures for the 600 ms SOA compared to other conditions. They also made more commission errors following heroin-related pictures compared to neutral pictures regardless of the SOAs. The shorter-term HAs made more commission errors compared to the longer-term HAs following the 200 ms SOA, and it was only a trend when the SOA was 600 ms. Additionally, negative correlations between the duration of current abstinence and commission errors were observed following cues with the 200 ms SOA. Conclusions: Impaired response inhibition in HAs can be improved through protracted drug abstinence. However, that effect can be reduced by exposure to drug-related cues, which may increase the risk of relapse. Answer: The research suggests that drug experimenters are not necessarily better adjusted than abstainers. According to a longitudinal study, lifetime marijuana abstainers often fared better than experimenters and frequent users on various measures of psychosocial functioning both during late adolescence and young adulthood (PUBMED:16982382). This study refutes the idea that adolescents who abstain from substance use are maladjusted and instead suggests that they function better than experimenters later in life. Another study also found that abstainers were never more psychologically impaired and were occasionally healthier than experimenters, with frequent users of marijuana being consistently more impaired than both abstainers and experimenters (PUBMED:10756515). Moreover, research on new-type drug abstainers indicates that abstainers had negative attitudes towards new-type drugs, and there was no significant correlation between implicit attitudes and relapse tendencies of new-type drug abstainers towards new-type drugs (PUBMED:36975225). This partially supports the dual structure model, which suggests that implicit attitudes and relapse tendencies are not always linked. In terms of social reward processing, male heroin abstainers showed altered neural processing, indicating that they may be hyposensitive to social reward outcomes and have insufficient motivation to acquire social rewards (PUBMED:35413426). Additionally, studies on heroin abstainers have shown that they exhibit deficits in response inhibition specifically for drug-related cues after longer-term abstinence (PUBMED:28779309), and while they may demonstrate specific bias patterns in updating emotional stimuli in their working memory, they still show partial preservation of emotional working memory capacity associated with adaptive emotion regulation (PUBMED:33754934). Overall, the evidence does not support the notion that drug experimenters are better adjusted than abstainers. Instead, abstainers often show better psychosocial functioning and adjustment in various aspects of life compared to both experimenters and frequent users.
Instruction: Infection-dependent phenotypes in MHC-congenic mice are not due to MHC: can we trust congenic animals? Abstracts: abstract_id: PUBMED:15245582 Infection-dependent phenotypes in MHC-congenic mice are not due to MHC: can we trust congenic animals? Background: Congenic strains of mice are assumed to differ only at a single gene or region of the genome. These mice have great importance in evaluating the function of genes. However, their utility depends on the maintenance of this true congenic nature. Although, accumulating evidence suggests that congenic strains suffer genetic divergence that could compromise interpretation of experimental results, this problem is usually ignored. During coinfection studies with Salmonella typhimurium and Theiler's murine encephalomyelitis virus (TMEV) in major histocompatibility complex (MHC)-congenic mice, we conducted the proper F2 controls and discovered significant differences between these F2 animals and MHC-genotype-matched P0 and F1 animals in weight gain and pathogen load. To systematically evaluate the apparent non-MHC differences in these mice, we infected all three generations (P0, F1 and F2) for 5 MHC genotypes (b/b, b/q and q/q as well as d/d, d/q, and q/q) with Salmonella and TMEV. Results: Infected P0 MHC q/q congenic homozygotes lost significantly more weight (p = 0.02) and had significantly higher Salmonella (p &lt; 0.01) and TMEV (p = 0.02) titers than the infected F2 q/q homozygotes. Neither weight nor pathogen load differences were present in sham-infected controls. Conclusions: These data suggest that these strains differ for genes other than those in the MHC congenic region. The most likely explanation is that deleterious recessive mutations affecting response to infection have accumulated in the more than 40 years that this B10.Q-H-2q MHC-congenic strain has been separated from its B10-H-2b parental strain. During typical experiments with congenic strains, the phenotypes of these accumulated mutations will be falsely ascribed to the congenic gene(s). This problem likely affects any strains separated for appreciable time and while usually ignored, can be avoided with the use of F2 segregants. abstract_id: PUBMED:29128518 Verification of genetic loci responsible for the resistance/susceptibility to the Sendai virus infection using congenic mice. Sendai virus (SeV) is one of the most important pathogens in the specific-pathogen free rodents. It is known that there are some inbred mouse strains susceptible or resistant to SeV infection. The C57BL/6 (B6) and DBA/2 (D2) mice are representative of the resistant and susceptible strains, respectively. Previous study with the quantitative trait locus (QTL) analysis identified three QTLs responsible for resistance or susceptibility to SeV infection on different chromosomes and indicated that resistance or susceptibility to SeV infection was almost predicted by genotypes of these three QTLs. In this paper, to verify the above hypothesis, congenic lines were generated as follows; B6-congenic lines carrying one of the D2 alleles of three QTLs and combination of these three QTLs, and D2-congenic lines carrying single or combination of B6 alleles of three QTLs. All these congenic lines were then challenged with SeV infection. D2 congenic lines introgressed single or combination of B6 alleles of QTLs changed to resistance to SeV infection. Especially, a D2 triple-congenic line became resistant as similar level to B6-parental strain. However, B6-congenic lines introgressed single or combination of D2 alleles of QTLs all remained to be resistant to SeV infection. Both IL-6 and TNF-α in broncho-alveolar lavage fluid of D2 triple-congenic line were decreased to the similar level of B6 mice, suggesting that this is a part of factors that D2 triple-congenic line became resistant to the similar level of B6 mice. Data obtained from these congenic mice verified that three QTLs identified previously were indeed responsible for the resistance/susceptibility to SeV infection in B6 and D2 mice. abstract_id: PUBMED:19596769 Clinical chemistry of congenic mice with quantitative trait loci for predicted responses to Trypanosoma congolense infection. Trypanosoma congolense is a protozoan parasite that causes severe diseases in livestock. Three major quantative trait loci (QTL), Tir1, Tir2, and Tir3, control the survival time of mice after infection with T. congolense. Congenic mice carrying the C57BL/6 resistance alleles on the A/J background were developed for each of these loci. The congenic mice were used to physically map the regions containing the QTL gene(s) and to investigate the physiological effect of each locus. Clinical chemistry data for infected A/J, C57BL/6, and BALB/c mice were obtained for 15 analytes at five time points. Congenic mice were assessed for survival, parasitemia, and anemia as well as seven clinical-chemical analytes. The survival times were significantly increased in the Tir1 and Tir2 mice but not Tir3 congenic mice. The survival time of the parental inbred mice correlated negatively with parasitemia but positively with alanine aminotransferase activities in serum, suggesting that inflammatory reactions in the liver had a beneficial effect possibly associated with reduced parasitemia. However, there was no difference in parasitemia or liver enzyme activities of Tir1 and Tir2 congenic mice relative to their controls, showing that survival, parasitemia, and degree of liver damage are not associated with each other, despite the correlation in the parental lines. These data suggest that the congenic loci affect survival but do not affect control of parasite number. They may therefore act by limiting the pathological consequences of T. congolense infection. abstract_id: PUBMED:29616197 The Use of CRISPR/Cas9 Gene Editing to Confirm Congenic Contaminations in Host-Pathogen Interaction Studies. Murine models of Salmonella enterica serovar Typhimurium infection are one of the commonest tools to study host-pathogen interactions during bacterial infections. Critically, the outcome of S. Typhimurium infection is impacted by the genetic background of the mouse strain used, with macrophages from C57BL/6 and BALB/c mice lacking the capacity to control intracellular bacterial replication. For this reason, the use of congenic strains, which mix the genetic backgrounds of naturally protected mouse strains with those of susceptible strains, has the capacity to significantly alter results and interpretation of S. Typhimurium infection studies. Here, we describe how macrophage knockout cell lines generated by CRISPR/Cas9 gene editing can help determine the contribution of background contaminations in the phenotypes of primary macrophages from congenic mice, on the outcome of S. Typhimurium infection studies. Our own experience illustrates how the CRISPR/Cas9 technology can be used to complement pre-existing knockout models, and shows that there is great merit in performing concurrent studies with both genetic models, to exclude unanticipated side-effects on host-pathogen interactions. abstract_id: PUBMED:30030204 Analysis for genetic loci controlling protoscolex development in the Echinococcus multilocularis infection using congenic mice. The resistance/susceptibility to Echinococcus multilocularis infection in mice is genetically controlled. However, genetic factors responsible for these differences remain unknown. Our previous study in genetic linkage analysis has revealed that there is a significant quantitative trait locus (QTL) for the establishment of cyst (Emcys1), and a highly significant QTL for the development of protoscolex of E. multilocularis larvae (Empsc1), on mouse chromosomes 6 and 1, respectively. The current study aimed to confirm these QTLs and narrow down the critical genetic region that controls resistance/susceptibility to E. multilocularis infection by establishing congenic and subcongenic lines from C57BL/6 (B6) and DBA/2 (D2) mice. For protoscolex development phenotype, two congenic lines, B6.D2-Empsc1 and D2.B6-Empsc1 were developed, where responsible QTL, Empsc1 was introgressed from D2 into B6 background and vice versa. For cyst establishment phenotype, two congenic lines, B6.D2-Emcys1 and D2.B6-Emcys1 were developed, where responsible QTL, Emcys1 was introgressed from D2 into B6 background and vice versa. Because there was no significant difference in cyst establishment between B6.D2-Emcys1 and D2.B6-Emcys1 mice after challenge with E. multilocularis, it is suggested that the Emcys1 does not solely control the cyst establishment in mouse liver. However, infection experiments with B6.D2-Empsc1 and D2.B6-Empsc1 mice showed a significant difference in protoscolex development in the cyst. It confirms that the Empsc1 controls phenotype of the protoscolex development in the cyst. Subsequently, two subcongenic lines, B6.D2-Empsc1.1 and B6.D2-Empsc1.2 from B6.D2-Emcys1 and one subcongenic line, D2.B6-Empsc1.1 from D2.B6-Empsc1 were developed to narrow down the critical region responsible for protoscolex development. From the results of infection experiments with E. multilocularis in these subcongenic mice, it is concluded that a gene responsible for protoscolex development is located between D1Mit290 (68.1 cM) and D1Mit511 (97.3 cM). abstract_id: PUBMED:10421099 Pathological and immunological findings of athymic nude and congenic wild type BALB/c mice experimentally infected with Neospora caninum. Neospora is a cyst-forming coccidian parasite that causes abortions and neuromuscular disorders in a wide variety of mammals. Japanese bovine isolate JPA1 was inoculated intraperitoneally into BALB/c nu/ nu (athymic nude) and BALB/c (congenic wild type) female mice to examine the distribution of parasites and resistance mechanisms to Neospora infection. All the athymic nude mice died within 28 days after intraperitoneal injection of 2 x 10(5) JPA1 tachyzoites, whereas all the congenic wild type mice survived without exhibiting any clinical signs. Tachyzoites were identified in the uterus and pancreas and later spread to many other organs. Most tachyzoites identified in the necrotic foci were localized in the epithelium of the venules and capillaries. Nude mice developed high level of serum interferon-gamma and interleukin-6 as infection proceeded. Inflammatory response to Neospora infection might be mediated by Th1-type dependent cellular immunity. abstract_id: PUBMED:12874366 Major histocompatibility complex-dependent susceptibility to Cryptococcus neoformans in mice. To evaluate the role of major histocompatibility complex (MHC) genes in the resistance to Cryptococcus neoformans, we conducted infection experiments in MHC-congenic strains of mice. Significant MHC-dependent susceptibility differences were found among homozygotes and heterozygotes. This study is the first experimental demonstration of MHC-dependent susceptibility to C. neoformans infections in mice and indicates that MHC genes can be important in host resistance. abstract_id: PUBMED:16571124 Expression of S100A8 correlates with inflammatory lung disease in congenic mice deficient of the cystic fibrosis transmembrane conductance regulator. Background: Lung disease in cystic fibrosis (CF) patients is dominated by chronic inflammation with an early and inappropriate influx of neutrophils causing airway destruction. Congenic C57BL/6 CF mice develop lung inflammatory disease similar to that of patients. In contrast, lungs of congenic BALB/c CF mice remain unaffected. The basis of the neutrophil influx to the airways of CF patients and C57BL/6 mice, and its precipitating factor(s) (spontaneous or infection induced) remains unclear. Methods: The lungs of 20-day old congenic C57BL/6 (before any overt signs of inflammation) and BALB/c CF mouse lines maintained in sterile environments were investigated for distinctions in the neutrophil chemokines S100A8 and S100A9 by quantitative RT-PCR and RNA in situ hybridization, that were then correlated to neutrophil numbers. Results: The lungs of C57BL/6 CF mice had spontaneous and significant elevation of both neutrophil chemokines S100A8 and S100A9 and a corresponding increase in neutrophils, in the absence of detectable pathogens. In contrast, BALB/c CF mouse lungs maintained under identical conditions, had similar elevations of S100A9 expression and resident neutrophil numbers, but diverged in having normal levels of S100A8. Conclusion: The results indicate early and spontaneous lung inflammation in CF mice, whose progression corresponds to increased expression of both S100A8 and S100A9, but not S100A9 alone. Moreover, since both C57BL/6 and BALB/c CF lungs were maintained under identical conditions and had similar elevations in S100A9 and neutrophils, the higher S100A8 expression in the former (or suppression in latter) is a result of secondary genetic influences rather than environment or differential infection. abstract_id: PUBMED:14762398 Dissociation of disease susceptibility, inflammation and cytokine profile in lmr1/2 congenic mice infected with Leishmania major. Severity of disease caused by Leishmania major depends on the genetics of the host. Early induction of T helper cell type 1 (Th1)-type responses in resistant C57BL/6 mice and T helper cell type 2 (Th2) in susceptible BALB/c mice is thought to determine cure or disease respectively. We have mapped three loci that confer susceptibility or resistance upon congenic mice on the C57BL/6 or BALB/c backgrounds. Here we examine the histopathology and production of interleukin 4 (IL-4) and interferon gamma (IFN-gamma) in the skin and draining lymph nodes in the congenic and parental mice. We show an evolving granuloma with a staged infiltration of inflammatory cells, but no difference between the groups. As an indication of an early-polarised Th1/Th2 response we measured IFN-gamma and IL-4 in the lymph nodes and found no difference between any of the mice during the first 48 h. During infection, the level of IL-4 correlated with the lesion size, indicating that IL-4 reflects the disease severity rather than controls it. Considering this effect, B6.C(lmr1,lmr2) mice had similar cytokine levels to the parental C57BL/6 mice despite increased susceptibility and C.B6(lmr1,lmr2) were similar to BALB/c despite increased resistance. We conclude that the lmr loci affect disease severity by a mechanism independent of conventional helper T-cell responses. abstract_id: PUBMED:17130251 C57BL/6 and congenic interleukin-10-deficient mice can serve as models of Campylobacter jejuni colonization and enteritis. Campylobacter jejuni is a globally distributed cause of human food-borne enteritis and has been linked to chronic joint and neurological diseases. We hypothesized that C. jejuni 11168 colonizes the gastrointestinal tract of both C57BL/6 mice and congenic C57BL/6 interleukin-10-deficient (IL-10(-/-)) mice and that C57BL/6 IL-10(-/-) mice experience C. jejuni 11168-mediated clinical signs and pathology. Individually housed mice were challenged orally with C. jejuni 11168, and the course of infection was monitored by clinical examination, bacterial culture, C. jejuni-specific PCR, gross pathology, histopathology, immunohistochemistry, and anti-C. jejuni-specific serology. Ceca of C. jejuni 11168-infected mice were colonized at high rates: ceca of 50/50 wild-type mice and 168/170 IL-10(-/-) mice were colonized. In a range from 2 to 35 days after infection with C. jejuni 11168, C57BL/6 IL-10(-/-) mice developed severe typhlocolitis best evaluated at the ileocecocolic junction. Rates of colonization and enteritis did not differ between male and female mice. A dose-response experiment showed that as little as 10(6) CFU produced significant disease and pathological lesions similar to responses seen in humans. Immunohistochemical staining demonstrated C. jejuni antigens within gastrointestinal tissues of infected mice. Significant anti-C. jejuni plasma immunoglobulin levels developed by day 28 after infection in both wild-type and IL-10-deficient animals; antibodies were predominantly T-helper-cell 1 (Th1)-associated subtypes. These results indicate that the colonization of the mouse gastrointestinal tract by C. jejuni 11168 is necessary but not sufficient for the development of enteritis and that C57BL/6 IL-10(-/-) mice can serve as models for the study of C. jejuni enteritis in humans. Answer: The question of whether infection-dependent phenotypes in MHC-congenic mice are solely due to MHC is addressed in the study by PUBMED:15245582. The researchers found significant differences in weight gain and pathogen load between MHC-genotype-matched P0 and F1 animals and their F2 controls during coinfection studies with Salmonella typhimurium and Theiler's murine encephalomyelitis virus (TMEV). These differences suggest that strains may differ for genes other than those in the MHC congenic region. The study concluded that deleterious recessive mutations affecting response to infection may have accumulated in the MHC-congenic strain, which has been separated from its parental strain for over 40 years. This indicates that phenotypes observed in experiments with congenic strains might be falsely ascribed to the congenic gene(s) when, in fact, they could be due to other genetic differences that have arisen over time. The study by PUBMED:15245582 highlights a potential issue with the use of congenic animals in research, suggesting that genetic divergence beyond the intended congenic region can occur and influence experimental outcomes. This divergence can lead to misinterpretation of the role of specific genes, such as those in the MHC region, in infection-dependent phenotypes. Therefore, while congenic animals are valuable tools in genetic and immunological research, it is crucial to consider the possibility of additional genetic differences when interpreting results. The use of F2 segregants is recommended to avoid the misattribution of phenotypes to the congenic gene(s) and to ensure more accurate conclusions about gene function.
Instruction: Is athletic really ideal? Abstracts: abstract_id: PUBMED:26735391 Is athletic really ideal? An examination of the mediating role of body dissatisfaction in predicting disordered eating and compulsive exercise. Background: Investigations into female body image have suggested that rather than thinness, preference is now being given to a female "athletic ideal" characterised by a toned abdomen, firmer lower body and muscular upper body. This study sought to investigate a) whether greater internalization of the athletic-ideal is associated with higher body dissatisfaction, dieting, bulimic symptoms and compulsive exercise, and b) whether body dissatisfaction mediates the relationship between athletic-ideal internalization and the disordered eating and exercise behaviours of dieting, bulimic symptoms and compulsive exercise. Methods: Participants were 388 females aged between 17 and 35years. Participants completed a battery of questionnaires measuring athletic-ideal internalization, body dissatisfaction, dieting, compulsive exercise and bulimic symptoms. Results: Athletic-ideal internalization was not found to be associated with body dissatisfaction, but was found to predict dieting, bulimic symptoms and compulsive exercise directly. Body dissatisfaction did not mediate the relationship between athletic-ideal internalization and any of the disordered eating and exercise behaviours. Limitations: The study was limited by its cross sectional design which precluded conclusions being drawn about the direction of causality and temporal associations. Conclusions: Athletic-ideal internalization, while not associated with body dissatisfaction, was associated with a range of disordered eating and exercise behaviours. Results from the study suggest that the female athletic-ideal is an equally unrealistic and problematic ideal for women to strive towards. abstract_id: PUBMED:38113660 The female athletic ideal - Friend or foe? Fit, thin, and athletic body ideals and their associations with women's body image. The internalization of body ideals is a critical predictor of body image and health behaviors. Thin and athletic ideal internalization in women need to be elucidated. The aims of the current study included: (i) describing the endorsement of thin and athletic body ideals among women and (ii) examining group differences in negative and positive self-conscious emotions and body image. Participants (N = 392 women; Mage = 25.6, SD = 5.3 years) were assigned to Fit Ideal (n = 110), Thin Ideal (n = 83), Athletic Ideal (n = 55), and Low Internalization (n = 144) groups based on scores for thin and athletic internalization. Three multivariate analyses of variance were conducted to examine Body Ideal group differences on negative appearance and fitness self-conscious emotions, and positive self-conscious emotions and body image. The Athletic Ideal and Low Internalization groups generally scored lower on negative self-conscious emotions and higher on positive self-conscious emotions and body image in comparison to the Fit and Thin Ideal groups (Cohen's d =.35 to 1.15). Results suggest that differing levels of thin and athletic internalization may intersect uniquely with women's body-related self-conscious emotions and positive body image and thus may have distinct effects on women's health and behavior. abstract_id: PUBMED:26828821 A test of athletic internalisation as a mediator in the relationship between sociocultural influences and body dissatisfaction in women. The tripartite model has been an important and empirically supported theoretical model positing that the influence of peers, family, and media leads women to internalise the thin societal body ideal. This internalisation in turn leads women to experience body dissatisfaction. Recently, a new societal 'athletic ideal' for women has emerged, which promotes a body frame with pronounced lean muscle mass. This study tested the role of the athletic ideal in the tripartite model of influence with a sample of 421 women aged 17-40 years. Athletic ideal internalisation was neither found to be associated with body dissatisfaction, nor act as a mediator in the relationship between sociocultural influences and body dissatisfaction. Although more research is required, the results of this study suggest that for this cross-sectional sample of women, internalisation of an athletic and muscular, rather than thin ideal, may be less detrimental to body satisfaction. abstract_id: PUBMED:29128549 Personality traits and appearance-ideal internalization: Differential associations with body dissatisfaction and compulsive exercise. Thin-ideal internalization is a robust risk factor for body dissatisfaction and eating pathology. Conversely, athletic-ideal internalization is often unrelated to body dissatisfaction, but predicts compulsive exercise (i.e., rigid, rule-driven exercise that is continued despite adverse consequences). Distinct personality traits could relate to internalization of different appearance ideals, which may be associated with divergent eating disorder outcomes. Past research has shown that neuroticism is related to body dissatisfaction, whereas extraversion and conscientiousness have been associated with regular and problematic exercise. The current study examined associations among personality traits (i.e., neuroticism, extraversion, conscientiousness), appearance-ideal internalization (i.e., thin- and athletic-ideal), and eating disorder cognitions/behaviors (i.e., body dissatisfaction, compulsive exercise) among 531 college men and women. Moreover, we tested whether appearance-ideal internalization mediated the relationships between personality traits with body dissatisfaction and compulsive exercise. As expected, body dissatisfaction was positively related to neuroticism, and compulsive exercise was positively associated with extraversion. Thin-ideal internalization positively correlated with neuroticism, athletic-ideal internalization positively correlated with conscientiousness, and both thin- and athletic-ideal internalization were positively related to extraversion. After controlling for gender, body mass index, the other appearance-ideal internalization, and the remaining personality traits, the indirect effects of both neuroticism and extraversion on body dissatisfaction through thin-ideal internalization were significant. Extraversion and conscientiousness were indirectly related to compulsive exercise through athletic-ideal internalization, whereas the indirect effect of neuroticism was dependent on covariates. As such, personality traits may be related to distinct eating disorder cognitions/behaviors through internalization of specific appearance ideals. abstract_id: PUBMED:20226748 Athletic-ideal and thin-ideal internalization as prospective predictors of body dissatisfaction, dieting, and compulsive exercise. Although internalization of the thin ideal has been extensively researched and is now regarded as a risk factor for eating disturbance, endorsement of the firm, athletic body ideal has received only minimal attention. This short-term longitudinal study explored whether internalization of two aspects of the current cultural ideal (thinness and athleticism) prospectively predicted three potentially deleterious outcomes: body dissatisfaction, dieting, and compulsive exercise. Undergraduate women (N=231) completed self-report measures at the beginning of the academic year and again 7 months later (N=156 at Time 2). Athletic-ideal internalization predicted change in compulsive exercise over the 7-month study period but not body dissatisfaction or dieting; thin-ideal internalization predicted change in all three outcomes. When both internalization measures were tested simultaneously, neither contributed unique variance. Results suggest that athletic-ideal internalization is not as detrimental as thin-ideal internalization. abstract_id: PUBMED:22461461 Strength training for athletes: does it really help sports performance? The use of strength training designed to increase underlying strength and power qualities in elite athletes in an attempt to improve athletic performance is commonplace. Although the extent to which strength and power are important to sports performance may vary depending on the activity, the associations between these qualities and performance have been well documented in the literature. The purpose of this review is to provide a brief overview of strength training research to determine if it really helps improve athletic performance. While there is a need for more research with elite athletes to investigate the relationship between strength training and athletic performance, there is sufficient evidence for strength training programs to continue to be an integral part of athletic preparation in team sports. abstract_id: PUBMED:34165215 Internalization of body shape ideals and body dissatisfaction: A systematic review and meta-analysis. Objective: The present meta-analysis summarizes existing evidence on the relationship between the internalization of body shape ideals (IBSI) and body dissatisfaction. Method: Pooled effect sizes (r) were computed using a random-effects model. The robustness of the results was examined using influence analyses. Potential predictors of effect heterogeneity were examined using subgroup analysis and univariable/multivariable meta-regressions. Publication bias was examined using the three-parameter selection model (3PSM). Results: A total of 141 effect-sizes from 78 studies (N = 39,491) were retrieved. Results revealed medium (r = .25; muscular/athletic-ideal internalization) to very large (r = .43, general attractiveness-ideal internalization; and r = .45, thin-ideal internalization) relationships, all these being largely similar in male and female individuals. Discussion: IBSI and body dissatisfaction were strongly linked: (a) in younger individuals (general attractiveness-ideal internalization); (b) when IBSI was operationalized in terms of endorsement rather than mere awareness of appearance ideals; (c) in studies of lower methodological quality (muscular/athletic-ideal internalization); and (d) when IBSI was assessed using instruments involving social comparisons. Given the largely cross-sectional nature of data included in this meta-analysis, further testing in prospective and experimental studies is needed to confirm hypotheses about causality of observed associations. Such findings would have impact for prevention and intervention studies. abstract_id: PUBMED:36101023 Identifying the ideal "body shape" associated with athletic performance using three dimensional (height, body mass and waist) allometry. The purpose of the current study was to identify the most appropriate "body shape" associated with six athletic performance (AP) tests, using three-dimensional allometry (height (HT), body mass (M), and waist circumference (WC)), in a sample of Brazilian children and adolescents. The sample consisted of 11018 Brazilian children/adolescents (5147 boys) aged 7-17 years. The 6 AP tests were, a 20 m sprint test, an agility test, a 6/9 minute walk/run endurance test, an upper body strength test, an abdominal muscular endurance test, and a lower limb strength test. The "three-dimensional" allometric model was, AP=a⋅Mk1⋅HTk2⋅WCk3. Results indicated that two of the three body-size dimensions (HT and WC) were consistently associated with all six athletic performance tests. These were a positive HT exponent/term together with a negative WC exponent/term, confirming that being taller benefited all six performance tests but excess WC was always detrimental. The contribution of M depended on whether the athletic performance required the children/adolescent to perform the exercise/test in a "weight-baring" protocol, i.e. having to perform the test carrying their own body weight or not. In conclusion, three-dimensional allometry provides new insights into what constitutes the "ideal" athletic shape associated with children/adolescents. abstract_id: PUBMED:10230578 Aquatic therapy in the rehabilitation of athletic injuries. Water is the ideal environment for active physical therapy. By taking advantage of the physical properties of water, early resumption of rehabilitation is possible. An individualized program in aquatic rehabilitation requires the attending clinician to have specialized knowledge and experience in design and supervision. This article provides an overview of aquatic physical therapy and includes suggestions for the treatment of common athletic injuries. abstract_id: PUBMED:26121951 Against Genetic Tests for Athletic Talent: The Primacy of the Phenotype. New insights into the genetics of sport performance lead to new areas of application. One area is the use of genetic tests to identify athletic talent. Athletic performances involve a high number of complex phenotypical traits. Based on the ACCE model (review of Analytic and Clinical validity, Clinical utility, and Ethical, legal and social implications), a critique is offered of the lack of validity and predictive power of genetic tests for talent. Based on the ideal of children's right to an open future, a moral argument is given against such tests on children and young athletes. A possible role of genetic tests in sport is proposed in terms of identifying predisposition for injury. In meeting ACCE requirements, such tests could improve individualised injury prevention and increase athlete health. More generally, limitations of science are discussed in the identification of talent and in the understanding of complex human performance phenotypes. An alternative approach to talent identification is proposed in terms of ethically sensitive, systematic and evidence-based holistic observation over time of relevant phenotypical traits by experienced observers. Talent identification in sport should be based on the primacy of the phenotype. Answer: The concept of an "athletic ideal" in terms of body image and its impact on women's health and behavior has been explored in several studies. The athletic ideal is characterized by a toned abdomen, firmer lower body, and muscular upper body, and is distinct from the traditional thin ideal. Research has shown mixed results regarding the impact of internalizing the athletic ideal on body dissatisfaction and disordered eating and exercise behaviors. One study found that internalization of the athletic ideal was not associated with body dissatisfaction but did predict dieting, bulimic symptoms, and compulsive exercise directly. Body dissatisfaction did not mediate the relationship between athletic-ideal internalization and these behaviors, suggesting that striving for an athletic ideal can be problematic for women (PUBMED:26735391). Another study indicated that women who internalized the athletic ideal generally scored lower on negative self-conscious emotions and higher on positive self-conscious emotions and body image compared to those who internalized the fit or thin ideal. This suggests that the athletic ideal may have distinct effects on women's health and behavior, potentially being less detrimental than the thin ideal (PUBMED:38113660). Further research tested the role of the athletic ideal in the tripartite model of influence and found that athletic ideal internalization was neither associated with body dissatisfaction nor acted as a mediator between sociocultural influences and body dissatisfaction. This suggests that internalization of an athletic and muscular ideal may be less detrimental to body satisfaction compared to the thin ideal (PUBMED:26828821). However, another study found that while athletic-ideal internalization predicted compulsive exercise, it did not predict body dissatisfaction or dieting. When compared with thin-ideal internalization, athletic-ideal internalization was not as detrimental (PUBMED:20226748). In conclusion, the athletic ideal is not unequivocally ideal. While it may be associated with some positive body image aspects and lower negative self-conscious emotions, it can also predict disordered eating and exercise behaviors. The impact of internalizing this ideal varies and may depend on individual personality traits and the specific aspects of the ideal that are internalized (PUBMED:29128549). Therefore, the athletic ideal can be both a friend and a foe, depending on the context and the individual.
Instruction: Should we use the rate-adaptive AV delay in cardiac resynchronization therapy-pacing? Abstracts: abstract_id: PUBMED:28491086 Left univentricular pacing for cardiac resynchronization therapy using rate-adaptive atrioventricular delay. Objective: To evaluate left univentricular (LUV) pacing for cardiac resynchronization therapy (CRT) using a rate-adaptive atrioventricular delay (RAAVD) algorithm to track physiological atrioventricular delay (AVD). Methods: A total of 72 patients with congestive heart failure (CHF) were randomized to RAAVD LUV pacing versus standard biventricular (BiV) pacing in a 1: 1 ratio. Echocardiography was used to optimize AVD for both groups. The effects of sequential BiV pacing and LUV pacing with optimized A-V (right atrio-LV) delay using an RAAVD algorithm were compared. The standard deviation (SD) of the S/R ratio in lead V1 at five heart rate (HR) segments (RS/R-SD5), defined as the "tracking index," was used to evaluate the accuracy of the RAAVD algorithm for tracking physiological AVD. Results: The QRS complex duration (132 ± 9.8 vs. 138 ± 10 ms, P &lt; 0.05), the time required for optimization (21 ± 5 vs. 50 ± 8 min, P &lt; 0.001), the mitral regurgitant area (1.9 ± 1.1 vs. 2.5 ± 1.3 cm2, P &lt; 0.05), the interventricular mechanical delay time (60.7 ± 13.3 ms vs. 68.3 ± 14.2 ms, P &lt; 0.05), and the average annual cost (13,200 ± 1000 vs. 21,600 ± 2000 RMB, P &lt; 0.001) in the RAAVD LUV pacing group were significantly less than those in the standard BiV pacing group. The aortic valve velocity-time integral in the RAAVD LUV pacing group was greater than that in the standard BiV pacing group (22.7 ± 2.2 vs. 21.4 ± 2.1 cm, P &lt; 0.05). The RS/R-SD5 was 4.08 ± 1.91 in the RAAVD LUV pacing group, and was significantly negatively correlated with improved left ventricular ejection fraction (LVEF) (ΔLVEF, Pearson's r = -0.427, P = 0.009), and positively correlated with New York Heart Association class (Spearman's r = 0.348, P = 0.037). Conclusions: RAAVD LUV pacing is as effective as standard BiV pacing, can be more physiological than standard BiV pacing, and can decrease the average annual cost of CRT. abstract_id: PUBMED:22772135 Rate-adaptive AV delay and exercise performance following cardiac resynchronization therapy. Background: Physiological shortening of the atrioventricular (AV) interval with increasing heart rate is well documented in normal human beings and is an established component of dual-chamber pacing for bradycardia. Objectives: To assess the effect of exercise on optimal AV delay and the impact of a patient-specific rate-adaptive AV delay (RAAVD) on exercise capacity in patients with heart failure following cardiac resynchronization therapy. Methods: Phase 1: We performed iterative AV optimization at rest and exercise in 52 cardiac resynchronization therapy patients in atrial-sensed mode (mean age 71.6 ± 9.2 years, 25% females). Phase 2: Subsequently, 20 consecutive volunteers from this group (mean age 69.2 ± 9.6 years, 15% females) underwent cardiopulmonary exercise testing with RAAVD individually programmed ON (RAAVD-ON) or OFF (RAAVD-OFF). Results: Phase 1: In 94% of the patients, there was a marked reduction (mean 50%) in optimal AV delay with exercise. The optimal resting vs exercise AV delay was 114.2 ± 29 ms at a heart rate of 64.4 ± 7.1 beats/min vs 57 ± 31 ms at a heart rate of 103 ± 13 beats/min (P &lt; .001). No patients required an increase in AV delay with exercise, and 3 (6%) showed no change. Phase 2: With RAAVD-ON, significantly better exercise times were achieved (8.7 ± 3.2 minutes) compared with RAAVD-OFF (7.9 ± 3.2 minutes; P = .003), and there was a significant improvement in Vo(2)max (RAAVD-ON 16.1 ± 4.0 vs RAAVD-OFF 14.9 ± 3.7 mL/(kg · min); P = .024). Conclusions: There was a dramatic reduction in optimal AV delay with physiological exercise in the majority of this heart failure cardiac resynchronization therapy cohort. Replicating this physiological response with a programmable RAAVD translated into a 10% improvement in exercise capacity. abstract_id: PUBMED:18037668 Should we use the rate-adaptive AV delay in cardiac resynchronization therapy-pacing? Aims: Recommendations for programming the rate-adaptive AV delay in CRT. Methods And Results: In cases of continual biventricular pacing, the optimal AV delay in CRT (AVD(opt)) is the net effect of the pacemaker-related interatrial conduction time (IACT), duration of the left-atrial electromechanical action (LA-EAC(long)), and the duration of the left-ventricular latency period (S(V)-EAC(short)). It can be calculated by AVD(opt) = IACT+LA-EAC(long)-S(V)-EAC(short). We measured these three components in 20 CRT-ICD patients during rest and submaximal ergo metric exercise (71 +/- 9 W) resulting in a 22.5 +/- 9.6 bpm rate increase. IACT and S(V)-EAC(short) did not reveal significant differences. LA-EAC(long), however, varied significantly by -10.7 +/- 16.1 ms (P = 0.008) during exercise. In contrast to AVD(optVDD), there was a significant difference in AVD(optDDD) of -8.8 +/- 14.5 ms (P = 0.014) between the resting and submaximal exercise conditions. In DDD pacing, AVD(opt) was shortened by 2.6 ms/10 bpm. Conclusion: In consideration of the findings of the studies performed to date, the rate-adaptive AV delay should be deactivated. abstract_id: PUBMED:28814710 Left Univentricular Pacing by Rate-Adaptive Atrioventricular Delay in Treatment of Chronic Heart Failure. BACKGROUND Cardiac resynchronization therapy (CRT) is efficacious in the treatment of chronic heart failure (CHF); however, because it is non-physiological, some patients are unresponsive. The present study used rate-adaptive atrioventricular delay (RAAVD) to track the physiological atrioventricular delay and investigated the effects of left univentricular pacing on CRT. MATERIAL AND METHODS Patients with CHF fulfilling the indication of CRT Class I were categorized into a left univentricular pacing by RAAVD group and a standard biventricular pacing group. Preoperative and postoperative electrocardiography QRS duration, echocardiographic indicators, quality of life, cardiac function, and annual treatment cost were estimated. The standard deviation (RS/R-SD5) of the S/R ratio in lead V1 at 5 heart rate segments in the left univentricular pacing by RAAVD was calculated, and the accuracy of RAAVD in tracking the physiological AV delay was evaluated. RESULTS The comparison between the left univentricular pacing by RAAVD group and the standard biventricular pacing group after operation showed a significantly reduced QRS duration (137±11 vs. 144±11 ms, P&lt;0.05), increased AVVTI (21.84±2.25 vs. 20.45±2.12 cm, P&lt;0.05), reduced IVMD (64.27±12.29 vs. 71.39±13.64 ms, P&lt;0.05), decreased MRA (3.09±1.12 vs. 3.73±1.19 cm², P&lt;0.05), and reduced average annual treatment cost (1.30±0.1 vs. 2.20±0.2 million Yuan, P&lt;0.05). The RS/R-SD5 in the left univentricular pacing by RAAVD group was negatively correlated with improvements in cardiac function (r=-0.394, P=0.031). CONCLUSIONS Left univentricular pacing by RAAVD has treatment effects similar to those of standard biventricular pacing, and is an economically and physiologically effective method for biventricular systolic resynchronization in the treatment of CHF. abstract_id: PUBMED:26069448 Feasibility of a novel atrioventricular delay optimization method using transmitral and pulmonary venous flow in patients with sequential ventricular pacing or cardiac resynchronization therapy. Background: Although several echo-Doppler methods were proposed to optimize atrioventricular (AV) delay in patients with sequential ventricular pacing, "echo-guided" AV optimization has not been widely adopted clinically. A combination of trasmitral flow (TMF) and pulmonary venous flow (PVF) measurements may be beneficial to further optimize AV delay to achieve better cardiac function. The aim of this study was to assess the feasibility and usefulness of AV delay optimization by combined use of TMF and PVF. Methods: A total of 32 patients after sequential ventricular pacemaker implantation were enrolled and studied. The optimal AV delay was defined as the timing to minimize the duration between PVF reversal (a) wave and the duration of the "A" wave of TMF. Stroke volume was measured at the "optimized" AV delay (AVD(OPT)) and was compared with that obtained at shorter (AVD(OPT) - 50 ms) and longer (AVD(OPT) + 50 ms) AV delays. Results: AV optimization was feasible in 27 of 32 patients (87%). Stroke volume at AVD(OPT) was significantly higher than that at shorter or longer AV delay (63 ± 18 ml vs. 57 ± 15 ml vs. 56 ± 16 ml, P = 0.001). Conclusions: AV delay optimization using TMF and PV flow was feasible. Usefulness of this method requires further investigation with a larger study population. abstract_id: PUBMED:29956361 Cardiac resynchronization therapy resulting from atrial pacing: An unusual case of intraventricular conduction delay. Cardiac resynchronization therapy device implantation is complicated by a significant rate of failure to place a left ventricular lead via the coronary sinus. The present case describes one such failure ironically leading to resynchronization. The patient's QRS narrowing subsequent to postimplant atrial pacing alone suggests that bradycardia-dependent phase 4 block was the indirect but exclusive cause of the patient's intraventricular conduction delay. Thus, phase 4 block should be considered when atrial pacing at a faster rate resolves a wide QRS interval. abstract_id: PUBMED:21796453 Cardiac resynchronization therapy is certainly cardiac therapy, but how much resynchronization and how much atrioventricular delay optimization? Cardiac resynchronization therapy has become a standard therapy for patients who are refractory to optimal medical therapy and fulfill the criteria of QRS &gt;120 ms, ejection fraction &lt;35% and NYHA class II, III or IV. Unless there is some other heretofore unrecognized effect of pacing, the benefits of atrio-biventricular pacing on hard outcomes observed in randomized trials can only be attributed to the physiological changes it induces such as increases in cardiac output and/or reduction in myocardial oxygen consumption leading to an improvement in cardiac function efficiency. The term "Cardiac Resynchronization Therapy" for biventricular pacing presupposes that restoration of synchrony (simultaneity of timing) between left and right ventricles and/or between walls of the left ventricle is the mechanism of benefit. But could a substantial proportion of these benefits arise not from ventricular resynchronization but from favorable shortening of AV delay ("AV optimization") which cannot be termed "resynchronization" unless the meaning of the word is stretched to cover any change in timing, thus, rendering the word almost meaningless. Here, we examine the evidence on the relative balance of resynchronization and AV delay shortening as contributors to the undoubted clinical efficacy of CRT. abstract_id: PUBMED:29405324 Combination of the best pacing configuration and atrioventricular and interventricular delays optimization in cardiac resynchronization therapy. Background: Cardiac resynchronization therapy optimization can be pursued by left ventricular pacing vector selection and atrioventricular (AV) and interventricular (VV) delays optimization. The combination of these methods and its comparison with multipoint pacing (MPP) is scarcely studied. Methods: Using noninvasive cardiac output (CO) measurement, the best of five left ventricular pacing vectors was determined, then AV and VV delays optimization was applied on top of the best vector. Response to the optimization protocol was defined as a &gt;5% CO increase compared to the standard biventricular configuration. Results: Twenty-two patients (18 men, age 71 ± 9 years) were included. Standard biventricular configuration increased CO compared to baseline (4.65 ± 1.55 L/min vs 4.27 ± 1.53 L/min, respectively, P = 0.02). The best quadripolar configuration increased CO to 4.85 ± 1.67 L/min (P = 0.03 compared to the standard biventricular configuration). AV then VV delay optimization both provided additional benefit (final CO 5.56 ± 2.03 L/min, P = 0.001 compared to the best quadripolar configuration). Fifteen (68%) patients responded to the optimization protocol. Anatomical MPP (based on maximal anatomical separation between electrodes) and electrical MPP (based on maximal electrical activation difference between electrodes) were evaluated in 16 patients and yielded a CO similar to that of the optimization procedure. Conclusions: The combination of choosing the best quadripolar pacing configuration and optimizing atrioventricular and interventricular delays resulted in an improvement of cardiac output compared to standard biventricular stimulation in 68% of patients. The final cardiac output was comparable to multipoint pacing. abstract_id: PUBMED:35715081 Role of Electrical Delay in Cardiac Resynchronization Therapy Response. Traditionally, left ventricular (LV) lead position was guided by anatomic criteria of pacing from the lateral wall of the LV. However, large trials showed little effect of LV lead position on outcomes, other than noting worse outcomes with apical positions. Given the poor correlation of cardiac resynchronization therapy (CRT) outcomes with anatomically guided LV lead placement, focus shifted toward more physiologic predictors such as targeting the areas of delayed mechanical and electrical activation. Measures of left ventricular delay and interventricular delay are strong predictors of CRT response. abstract_id: PUBMED:20550613 Should we optimize cardiac resynchronization therapy during exercise? Should We Optimize CRT During Exercise? Cardiac resynchronization therapy aims at diminishing cardiac dyssynchrony in patients with heart failure. The effect of cardiac resynchronization therapy can be improved by optimization of the atrioventricular (AV) and interventricular (VV) delays. Currently, optimization of these pacing settings is mainly performed during resting conditions. This paper aims to objectively review the current literature about a rate-adaptive AV and VV delay in cardiac resynchronization therapy. The current evidence for a rate-adaptive AV and VV delay comprises only small nonrandomized studies on acute effects. The effect of exercise on the optimal AV delay was heterogeneous between studies. The optimal VV delay was influenced by exercise conditions in some, but not all patients. Possible explanations lie in the heterogeneous electrical and mechanical responses to exercise in patients with a complex disease such as heart failure with asynchronous contraction. Current evidence is insufficient to show the superiority of a rate-adaptive AV or VV delay in all CRT patients. Individualized exercise programming may be warranted in selected patients. Answer: The use of rate-adaptive atrioventricular (AV) delay in cardiac resynchronization therapy (CRT) pacing is a topic of ongoing research and debate. Some studies suggest that rate-adaptive AV delay (RAAVD) can be beneficial. For instance, a study found that left univentricular (LUV) pacing with RAAVD was as effective as standard biventricular pacing, could be more physiological, and decreased the average annual cost of CRT (PUBMED:28491086). Another study showed that replicating the physiological response of AV delay shortening during exercise with programmable RAAVD translated into a 10% improvement in exercise capacity (PUBMED:22772135). Additionally, a study concluded that left univentricular pacing by RAAVD has treatment effects similar to those of standard biventricular pacing and is an economically and physiologically effective method for biventricular systolic resynchronization in the treatment of chronic heart failure (CHF) (PUBMED:28814710). However, there are also recommendations against the use of RAAVD in CRT pacing. One study suggested that the rate-adaptive AV delay should be deactivated based on their findings, indicating that the optimal AV delay should be shortened by a specific amount per 10 bpm increase in heart rate during exercise (PUBMED:18037668). Another study questioned the extent to which the benefits of CRT arise from ventricular resynchronization versus favorable shortening of AV delay, suggesting that the term "resynchronization" might be overstretched if it covers any change in timing (PUBMED:21796453). Furthermore, the role of electrical delay as a predictor of CRT response has been highlighted, suggesting that targeting areas of delayed mechanical and electrical activation may be more important than anatomical criteria for LV lead placement (PUBMED:35715081). The evidence for a rate-adaptive AV and VV delay comprises only small nonrandomized studies on acute effects, and the current evidence is insufficient to show the superiority of a rate-adaptive AV or VV delay in all CRT patients (PUBMED:20550613). In conclusion, while some studies support the use of RAAVD in CRT pacing due to its potential physiological benefits and improvement in exercise capacity, other studies recommend against its use or suggest that the benefits may not be solely due to resynchronization. The decision to use RAAVD in CRT pacing should be individualized, taking into account the specific characteristics and responses of each patient to therapy. Further research and larger randomized studies are needed to establish clear guidelines on the use of RAAVD in CRT pacing.
Instruction: Paediatric driveway run-over injuries: time to redesign? Abstracts: abstract_id: PUBMED:19680300 Paediatric driveway run-over injuries: time to redesign? Aims: To investigate the demographic, accident, and environmental characteristics associated with driveway run-over injuries in order to identify potentially modifiable factors and prevention strategies Methods: Retrospective review of all children less than 15 years old who were hospitalised or killed due to a driveway run-over injury in the Auckland region of New Zealand over the 50-month period, November 2001 to December 2005. Data was collected on the demographics, accident and environmental characteristics, and parental awareness. Data was obtained from clinical records and telephone interviews with parents. Results: A total of 93 cases were identified, including 9 fatalities. The median age was 2 years with 73% under 5 years old. Children of Pacific Island and Maori ethnicity comprised 43% and 25% of cases respectively. Injuries occurred predominately on the child's home driveway (80% of cases). In 64% the driveway was the usual play area for the child. Only 13% of driveways were fenced. 51% were long driveways extending through the section, and 51% were shared with other properties. 51% of properties were rented and of these 57% were government houses. Conclusion: The absence of physical separation between driveways and children's play and living areas may predispose to driveway injuries. Further research is needed to investigate the ideal way to implement such separation in current properties and future property developments. abstract_id: PUBMED:11008592 Driveway motor vehicle injuries in children. Objectives: To describe the frequency, nature and outcome of driveway injuries in children. Design: Retrospective case series of driveway-related injuries in children under 16 years of age admitted to the New Children's Hospital (NCH), New South Wales, from November 1995 to February 2000, and deaths reported to the New South Wales Paediatric Trauma Death (NPTD) Registry from January 1988 to December 1999. Main Outcome Measures: Circumstances of injury; type and number of injuries identified. Results: 42 children were admitted to our institution with driveway-related injuries over four years and four months. These represent 12% of all children admitted with pedestrian motor vehicle injuries. Fourteen deaths (including one of the children admitted to NCH) were reported to the NPTD Registry over 12 years, accounting for 8% of all paediatric pedestrian motor vehicle deaths reported to the registry. Typically, the injury involved a parent or relative reversing a motor vehicle in the home driveway over a toddler or preschool-age child in the late afternoon or early evening. Four-wheel-drive or light commercial vehicles were involved in 42% of all injuries, although they accounted for just 30.4% of registered vehicles in NSW. These vehicles were associated with a 2.5-times increased risk of fatality. In 13 of the 14 deaths, the cause was a severe head injury not amenable to medical intervention. Conclusions: Driveway injuries in children account for a significant proportion of paediatric pedestrian motor vehicle injuries and deaths in NSW. Prevention represents the only effective approach to reducing deaths from this cause. abstract_id: PUBMED:20825609 Driveway runover, the influence of the built environment: a case control study. Aim: Driveway runover injuries are a frequent cause of paediatric mortality and morbidity throughout the world. Driveway runovers occur as a result of an interaction between human factors (child and driver), vehicle factors (visibility) and environmental factors (driveway design and surroundings). This study investigates the environmental factors involved in these injuries. Methods: Case control study, Auckland, New Zealand. Cases were the properties where paediatric driveway injuries (age &lt; 7 years) requiring hospital admission had occurred. Control properties were selected from the addresses of children presenting to the emergency department with a non-driveway injury. Blinded assessment of properties was completed using satellite images, site visits and searches of council records. Results: Analysis was completed on 88 case properties and 181 controls. The risk of injury was increased by a driveway length greater than 12m (OR = 1.8, 95%CI = 1.1-3.0), exiting the driveway onto a local road (OR = 5.5, 95% CI = 2.7-11.2) and the driveway exiting onto a cul-de-sac (OR = 2.3, 95%CI = 1.4-3.9). The risk of driveway injury was increased when more parking areas were on the property (accessed from the driveway) (OR = 3.0, 95%CI = 1.6-5.4) and when the driveway runs along the property boundary (OR = 2.9, 95%CI = 1.6-5.2). A separate pedestrian pathway on the property was associated with a lower risk of injury (OR = 0.4, 95%CI= 0.2-0.9). Conclusions: A number of built environment features contribute to driveway runover injuries. This information should be used by those within the design and building community to reduce the risk of further driveway runover injury. abstract_id: PUBMED:12362205 Driveway-related motor vehicle injuries in the paediatric population: a preventable tragedy. Aim: To describe the incidence and demographics of children injured by slow-moving vehicles in Auckland driveways in order to develop preventative strategies. Methods: We retrospectively reviewed all children less than 15 years, 76 in total, who were injured by a vehicle in a driveway within Auckland between January 1998 and October 2001. Patient demographics and risk factors were identified by clinical note review, contacting the parents, and inspecting the driveways involved. Results: These injuries occur most commonly in children less than 4 years old (86%), usually boys (58%), who are reversed over by a relative (68%) at the home driveway (71%). There were six deaths within the study period. Head and thoracic injuries predominated and were typically associated with lower limb injuries. Maori and Pacific Island children represented 74% of all the cases. Nineteen drivers saw the child immediately prior to reversal in a presumed safe position. None of the driveways had any fencing to separate the driveway from rest of the property. Conclusions: A combination of targeted public health messages to increase awareness, safer driveway design and the fencing of domestic rental properties will prevent these injuries. abstract_id: PUBMED:23298707 Towards a structured understanding of caregivers' safety behaviour in the domestic and driveway setting. Slow speed run-overs represent a major cause of injury and death among Australian children, with higher rates of incidents being reported in Queensland than in the remaining Australian states. Yet, little attention has been given to how caregivers develop their safety behaviour in and around the driveway setting. To address this gap, the current study aimed to develop a conceptual model of driveway child safety behaviours among caregivers of children aged 5 years or younger. Semi-structured interviews were conducted with 26 caregivers (25 females/1 male, mean age, 33.24 years) from rural and metropolitan Queensland. To enable a comparison and validation of findings from the driveway, the study analysed both driveway and domestic safety behaviours. Domestic safety behaviours were categorised and validated against driveway safety behaviours, uncovering a process of risk appraisal and safety behaviour that was applicable in both settings (the Safety System Model). However, noteworthy differences between the domestic and driveway setting were uncovered. Unlike in the domestic setting, driveway risks were perceived as shifting according the presence of moving vehicles, which resulted in inconsistent safety behaviours. While the findings require further validation, they have implications for the design and implementation of driveway run-over interventions. abstract_id: PUBMED:7862481 Driveway-related child pedestrian injuries: a case-control study. Objectives: To examine risk factors for driveway-related child pedestrian injuries. Design: A community based case-control study. Setting: The Auckland region of New Zealand. Participants: Cases (n = 53) were children killed or hospitalized as a result of a driveway-related pedestrian injury, in the Auckland region over a period of 2 years and 2 months. Controls (n = 159) were an age-matched random sample of the child population of the Auckland region. Results: The absence of physical separation of the driveway from the children's play area was associated with a threefold increase in the risk of driveway-related child pedestrian injury (OR = 3.50; 95% CI 1.38, 8.92). Children living in homes with shared driveways were also at significantly increased risk (OR = 3.24; 95% CI 1.22, 8.63). The population attributable risk associated with the absence of physical separation of the driveway from the children's play area was 50.0% (95% CI 24.7, 75.3). Conclusion: The fencing of residential driveways as a strategy for the prevention of driveway-related child pedestrian injuries deserves further attention. abstract_id: PUBMED:11483795 Driveway injuries in children: risk factors, morbidity, and mortality. Background/purpose: Injuries that occur around the driveway are not typically regarded as reportable to the police and thus are often underrecognized. The aim of this study was to characterize the pattern and consequences of motor vehicle collisions that occur in the driveway. Methods: Over the past 13 years, 64 patients admitted to the Children's Hospital of Pittsburgh sustained motor vehicle-related injuries in a driveway. These injuries resulted from a vehicle driven by an adult driver striking a child (group 1) or a child shifting an idle vehicle out of gear (group 2). We compared demographic variables and outcome measures between the 2 groups. Results: There was no difference in gender, injury pattern, Injury Severity Score, length of stay, or operations performed between the groups. Patients in group 1 were younger, smaller, had a lower Glasgow Coma Scale, and had poorer outcomes. The majority of collisions (~65%) in group 1 resulted from a truck or sport-utility vehicle going in reverse. Conclusions: Younger children are more severely injured in driveway-related crashes, which are most likely to be caused by a truck or sport-utility vehicle going in reverse. These vehicles should be equipped with additional safety features such as extended mirrors to visualize small children. abstract_id: PUBMED:16338329 The prevalence of driveway back-over injuries in the era of sports utility vehicles. Background: Small children are vulnerable to serious accidents when a motor vehicle is placed in motion in a driveway. We describe a series of such accidents, consider the predisposing factors, and analyze the outcomes. Methods: We conducted a retrospective review of the trauma database of a large, level I, freestanding children's hospital with specific attention to driveway auto-pedestrian accidents. Results: During an 8-year period, 495 children were treated for injuries sustained in auto-pedestrian accidents, with 128 occurring in the driveway. The children's median age was 2.9 years, with 54% of the injuries sustained by boys. These often serious accidents carried an overall mortality rate of 6%. The most common injuries were abrasions, blunt head injury, and fractures. Chest trauma was associated with the highest mortality (11%), and both chest and abdominal trauma had the highest median Injury Severity Score of 13. Orthopedic injuries were the most common reason for operative intervention. Thirty-one percent of the children required intensive care unit monitoring, with their average unit stay being 3.9 days. Cars, trucks, and sports utility vehicles comprised 55%, 25%, and 12% of the accidents, respectively. Truck accidents carried the highest mortality rate (19%). Accidents were more likely to occur between 3:00 and 8:00 pm, between Thursday and Saturday, and between May and October. An increasing number of accidents occurred during the last 4 years of the study. Conclusions: Driveway injuries are an underrecognized often severe form of auto-pedestrian accidents. To prevent these family tragedies, drivers of large vehicles with children younger than 12 years old should be extremely attentive and account for children outside the vehicle before moving. abstract_id: PUBMED:26696087 Paediatric trauma resuscitation: an update. Purpose: Paediatric trauma is the leading cause of mortality in children. Paediatric trauma resuscitation is the first and foremost step towards a successful treatment and subsequent recovery. Significant advances have taken place in the last years in relation to this field of trauma care. Methods: In this narrative review, we attempt to summarise the recent development in the concepts of fluid resuscitation, massive transfusion, permissive resuscitation, management of coagulopathy and use of tranexamic acid, literature pertaining to implementation of transfusion protocols in the paediatric population and education related to the paediatric trauma resuscitation. Results/conclusions: The current evidence although emerging is still sparse and high-quality studies are needed to shed more light on most of the above domains of resuscitation. abstract_id: PUBMED:35096537 Epidemiology and incidence of paediatric orthopaedic trauma workload during the COVID-19 pandemic: A multicenter cohort study of 3171 patients. Background: Coronavirus disease 2019 (COVID-19) has a major influence on all parts of society. Aim: To examine the consequences of the national lockdown and political initiatives during the first surge of the COVID-19 pandemic expressed by changes in incidences of musculoskeletal paediatric injuries. Methods: Study design was a retrospective multicenter cohort study. A 'pandemic' cohort was established from 16 March 2020 to 21 April 2020, where all institutions including day care and schools were closed. A 'pre-pandemic' cohort was established from the same period in 2019 for comparison. Included were all patients admitted at the emergency departments with paediatric musculoskeletal injuries (aged 0-15 years) identified by a relevant musculoskeletal ICD-10 diagnosis (DSxxx), concussions (DZ033D), or burns (DT2xx). Results: The 'pre-pandemic' cohort consisted of 2101 patients, and the 'pandemic' cohort consisted of 1070 patients, indicating a decrease of paediatric musculoskeletal injuries of 51%. The incidence of paediatric injury in the 'pre-pandemic' cohort was 10460/100000/year. In the 'pandemic' cohort, the incidence was 5344/100000/year. Conclusion: A resource re-allocation to help serve the COVID-19 patients might be possible without reducing the level of care for injury-related paediatric patients. Answer: The research on pediatric driveway run-over injuries suggests that there is a significant need to consider redesigning driveways to prevent such injuries. The studies highlight that driveway run-over injuries are a frequent cause of pediatric mortality and morbidity, with a substantial number of cases leading to hospitalization or death (PUBMED:19680300; PUBMED:11008592; PUBMED:20825609; PUBMED:12362205). These injuries typically involve young children, often under the age of 5, who are run over by a vehicle, usually driven by a parent or relative, in their own home driveway (PUBMED:11008592; PUBMED:12362205). Several environmental factors have been identified as increasing the risk of driveway run-over injuries. These include long driveways, driveways that extend through the property, shared driveways, and the absence of physical separation between driveways and children's play areas (PUBMED:19680300; PUBMED:20825609; PUBMED:7862481). The presence of more parking areas accessed from the driveway and driveways running along the property boundary also increase the risk, while separate pedestrian pathways on the property are associated with a lower risk of injury (PUBMED:20825609). The studies suggest that the design of driveways and the built environment play a crucial role in the occurrence of these injuries. The absence of fencing to separate driveways from the rest of the property is a common factor in many cases (PUBMED:12362205; PUBMED:7862481). Additionally, vehicles such as four-wheel drives or light commercial vehicles, which have reduced visibility for small children, are involved in a significant proportion of injuries and are associated with an increased risk of fatality (PUBMED:11008592). In conclusion, the evidence points to the need for redesigning driveways and implementing safety measures to prevent pediatric run-over injuries. This could include creating physical barriers between play areas and driveways, designing shorter driveways, ensuring separate pedestrian pathways, and possibly incorporating safety features in vehicles to improve visibility for drivers. These changes could potentially reduce the risk of such tragic accidents occurring (PUBMED:19680300; PUBMED:11008592; PUBMED:20825609; PUBMED:12362205; PUBMED:7862481).
Instruction: Does the source of laser energy influence the coagulation of chorionic plate vessels? Abstracts: abstract_id: PUBMED:17003553 Does the source of laser energy influence the coagulation of chorionic plate vessels? Comparison of Nd:YAG and diode laser on an ex vivo placental model. Objectives: To compare the histological effects of diode and Neodymium-Yttrium Aluminium Garnet (Nd:YAG) laser coagulation of chorionic plate vessels. Methods: In selected chorionic plate vessels in an ex vivo term placenta perfused with warm saline solution, diode (wavelength 940 nm) and Nd:YAG (wave length 1,064 nm) laser were used with an output of 30, 40, and 50 W, and 55 and 70 W respectively using preset energy and duration of impact. All vessels were examined histologically blindly to the procedures' characteristics. Results: A total of 23 vessels were coagulated. Similar histological lesions were observed using diode and Nd:YAG lasers. The lesions were compatible with an acceptable clinical effect at all power outputs tested. The results were not related to the diameter or type of vessels. Lesions of the endothelium and reduction of the vessel lumen were best achieved with a diode laser at 40 W. Conclusion: Nd:YAG and Diode laser induce significant and comparable changes in chorionic plate vessels compatible with an efficient coagulation process under the experimental condition used. abstract_id: PUBMED:10210117 Time-related cotyledonary effects of laser coagulation of superficial chorionic vessels in an ovine model. Endoscopic Nd:YAG laser coagulation of superficial chorionic plate vessels has been suggested as a 'causal' therapy for severe twin-to-twin transfusion syndrome (TTS), as it is believed to arrest flow over vascular anastomoses. The therapy remains debated, partly because it is questioned whether coagulation of superficial chorionic vessels can arrest blood flow over deeply located anastomoses. The latter are believed to be responsible for the majority of cases of severe TTs. In this study, the coagulation performance, and the immediate and delayed tissue effects of laser coagulation were evaluated in an in vivo ovine model. Eight pregnant ewes of about 100 days' gestational age (term= 145 days) were studied. In utero coagulation was done through a 1.2 mm fetoscope loaded with a 600 microm laser fibre, connected to an Nd:YAG laser in continuous mode. Superficial cotyledonary vessels were coagulated under visual control at a distance of 1 cm with a mean power of 60 watts. 31 lasered cotyledons were harvested, either immediately (n= 15), 14 days (n = 8) or 42 days (n = 8) after the surgery and evaluated by inspection and histology. Control cotyledons were either neighbouring cotyledons from the same amniotic sac (internal control) or cotyledons from another gestational-age-matched sac (external control). Immediate effects consisted of localized coagulation necrosis at the zone of laser impact with no significant histological effect at more than 2 mm distance, except tissue congestion. Long-term effects consisted of fibrosis without any vascular recanalization or tissue regeneration extending from the impact site over the whole cotyledon. All treated cotyledons underwent complete infarction over time. This study provides experimental evidence supporting that laser coagulation of superficial chorionic vessels entering a cotyledon achieves complete functional elimination of the involved cotyledon. abstract_id: PUBMED:35589781 Laser coagulation and hemostasis of large diameter blood vessels: effect of shear stress and flow velocity. Photocoagulation of blood vessels offers unambiguous advantages to current radiofrequency approaches considering the high specificity of blood absorption at available laser wavelengths (e.g., 532 nm and 1.064 µm). Successful treatment of pediatric vascular lesions, such as port-wine stains requiring microvascular hemostasis, has been documented. Although laser treatments have been successful in smaller diameter blood vessels, photocoagulation of larger sized vessels is less effective. The hypothesis for this study is that a primary limitation in laser coagulation of large diameter blood vessels (500-1000 µm) originates from shear stress gradients associated with higher flow velocities along with temperature-dependent viscosity changes. Laser (1.07 µm) coagulation of blood vessels was tested in the chicken chorio-allantoic membrane (CAM). A finite element model is developed that includes hypothetical limitations in laser coagulation during irradiation. A protocol to specify laser dosimetry is derived from OCT imaging and angiography observations as well as finite element model results. Laser dosimetry is applied in the CAM model to test the experimental hypothesis that blood shear stress and flow velocity are important parameters for laser coagulation and hemostasis of large diameter blood vessels (500-1000 µm). Our experimental results suggest that shear stress and flow velocity are fundamental in the coagulation of large diameter blood vessels (500-1000 µm). Laser dosimetry is proposed and demonstrated for successful coagulation and hemostasis of large diameter CAM blood vessels. abstract_id: PUBMED:9101219 Effect of laser coagulation on placental vessels: histological aspects. In order to evaluate the effect of laser coagulation on placental vessels and to define the optimal power density that should be used for this procedure, coagulation of placental vessels and histological examination were performed on freshly delivered placentae. This study suggests that 45- to 55-watt (6,000-7,666 J/cm2) shots can cause adequate placental injury. The extent of vascular lesions depends upon incident beam energy, fiber diameter, distance and angle to the vessel. abstract_id: PUBMED:19929297 Blood flow measurement system for fetoscopic laser photocoagulation of chorionic plate anastomosing vessels (FLPC). Fetoscopic laser photocoagulation of chorionic plate anastomosing vessels (FLPC) applies to the treatment of previable fetuses with severe twin-twin transfusion syndrome (TTTS). The ultimate goal of FLPC is selective blood flow interruption of anastomotic communicating vessels on the placenta fetoscopically. However, there has not been an established method to confirm that the blood flow is blocked, thus, it depends on the operator's experience or intuition to evaluate whether the FLPC was performed successfully or not. For this issue, we have developed a composite-type optical fiberscope (2.2 mm in diameter), which has centrally-located cautery laser fiber and surrounding located fiberglasses for viewing. This fiberscope enables transmission of 50 W Yb fiber laser which can be focused to 10 mm focal length using two lenses on the fiberscope tip. In this study, we combined the fiberscope and a laser blood-flow meter, and irradiated cautery laser to porcine mesenteric vein with measuring blood flow at the same time. From the experimental results, we could quantitatively measure the blood flow before and after laser irradiation, and confirm the blood flow blocking with our system. abstract_id: PUBMED:2332998 Effects of Nd: YAG laser coagulation of myocardium on coronary vessels. Laser coagulation of arrhythmogenic myocardium proved to be an effective surgical therapy for refractory arrhythmias. To determine the influence of Nd: YAG laser coagulation of myocardium on coronary vessels, a total of 48 transcatheter laser impacts (10 W, 10 sec, 7 W mm2) were directed to the left ventricular free wall via endocardial (24 lesions in 2 dogs) or epicardial (24 lesions in 2 dogs) approach. In 8 of 24 endomyocardial lesions (2 and 6 months old) coronary vessels with a diameter of greater than 50 microns were found within the coagulation zones. The volume of these lesions was significantly (P less than 0.01) smaller (139 +/- 43 mm3) than those (n = 16) with coronary vessels measuring less than 50 microns (311 +/- 87 mm3). Volumes of epimyocardial lesions (1-2 hours old) produced by transcoronary irradiation with normal coronary blood flow were significantly (P less than 0.01) smaller (31 +/- 17 mm3) than with reduced (73 +/- 22) or interrupted (119 +/- 34 mm3) blood flow (n = 8, each). Both directly irradiated coronary vessels and those found within the coagulation zones and scars appeared histologically normal through all layers with an intact intima without fibrosis or thrombosis. The ultrastructure of irradiated arteries, even with total interruption of blood flow, was no different from that of non-irradiated controls. Thus, coronary artery blood flow significantly reduces the volume of coagulated myocardium, whereas the vessels themselves appear to remain undamaged by laser irradiation as used for myocardial coagulation. abstract_id: PUBMED:30908661 Carbon debris and fiber cleaving: Effects on potassium-titanyl-phosphate laser energy and chorioallantoic membrane model vessel coagulation. Objectives/hypothesis: Photoangiolytic precision afforded by the 532-nm potassium-titanyl-phosphate (KTP) laser relies on predictable energy delivery. Inadequate energy output can cause vessel rupture, and excessive energy can cause thermal damage. The quality of the cleaved surface and carbon deposits from ablated tissue are two factors that could negatively impact fiber performance. The effects of these on energy output and blood vessel coagulation were assessed using a chorioallantoic membrane (CAM) model. Study Design: Comparative analysis. Methods: Laser fibers with carbon debris, optimal fiber cleaving, and suboptimal cleaving were inspected at three times magnification, and the light dispersion pattern of each fiber was rated. The average energy output from consecutive pulses through each fiber configuration was recorded. The effect of these fiber conditions on clinical efficacy was estimated by measuring vessel coagulation versus rupture in the CAM model. Repeated measures analysis of variance compared results. Results: Carbon debris and suboptimal cleaving resulted in decreased energy output in comparison to optimal cleaving ([-Δ244 mJ, d = 4.31, P &lt; .001] and [-Δ195 mJ, d = 6.04, P &lt; .001]). Optimal cleaving resulted in immediate coagulation of vessels. Fibers with suboptimal cleaving and carbon debris had unpredictable outcomes, requiring multiple pulses for coagulation or causing vessel rupture. Conclusions: KTP laser fiber function is significantly affected by fiber tip condition. Carbon debris and suboptimal cleaving create significant attenuation of energy, which results in an unpredictable angiolytic effect, as demonstrated by increased vessel rupture in the CAM model. Optimal recleaving of KTP laser fibers restores prior energy output and predictable coagulation. Care should be taken to avoid carbon debris on laser-fiber tips and to cleave fibers properly. Level Of Evidence: NA Laryngoscope, 129:2244-2248, 2019. abstract_id: PUBMED:9115631 Underwater Nd:YAG laser coagulation of blood vessels in a rat model. Endoscopic coagulation of placental and umbilical cord vessels has been suggested as a treatment for selected cases of twin-twin transfusion syndrome and of acardiac twin. The feasibility, safety and hemostatic effect of neodymium:yttrium-aluminium-garnet (Nd:YAG) laser in an underwater environment were experimentally studied in an in vivo model, mimicking the in utero situation and fetoplacental vessels. In 10 male Wistar rats, immersed in normal saline at 38 degrees C, femoral vessels, carotid artery, abdominal aorta and vena cava were coagulated under endoscopic control. A 100-Watt Nd:YAG laser was used in continuous mode with a 600-micron fiber. Outcome measurements were vascular diameter, total energy (joules) used to obtain visual coagulation and subsequent vascular obliteration. Overall failure rate was 32% and perforation of the vessel occurred in 7.7% of cases. abstract_id: PUBMED:19156708 Endoscopic laser coagulation of feeding vessels in large placental chorioangiomas: report of three cases and review of invasive treatment options. Objective: To report three cases of large placental chorioangiomas managed with endoscopic laser coagulation of the feeding vessels, and review the literature regarding cases of chorioangioma treated with invasive techniques. Methods: Intrauterine endoscopic surgery was performed using a 2.5-mm fetoscope under epidural anesthesia. Coagulation of the feeding vessels was attempted with laser energy, and the operation was completed with amniodrainage. Results: The feeding vessels were successfully coagulated in one case, resulting in a term delivery. Histopathologic examination of the placenta confirmed a capillary chorioangioma with extensive necrosis. There were intraoperative complications in the other two cases, including bleeding at the site of coagulation requiring intrauterine transfusion in one, and incomplete vascular ablation in the other. In the former case the infant was born prematurely and developed chronic renal insufficiency. In the latter, the fetus died within the first week of the surgery. Conclusions: There are limited interventions available for the management of large, symptomatic placental chorioangiomas. Although they can be treated with endoscopic laser coagulation of the feeding vessels, fetal bleeding, exsanguination, and death are potential complications of the procedure; therefore, this technique should be used with caution. The role of this treatment modality in the early devascularization of placental chorioangiomas prior to the development of significant complications warrants consideration. abstract_id: PUBMED:34506961 Morphological changes of veins and perivenous tissues during endovenous laser coagulation using 2-μm laser radiation and various types of optical fibers. Objective: To determine the morphological changes in veins and perivenous tissues after endovenous laser coagulation (EVLC) using laser radiation with a wavelength of 1910 nm and different types of fibers (bare tip and radial). Methods: The EVLC procedure was carried out on 22 surface veins of six sheep. The radiation source was a diode-pumped solid-state laser, which was based on a LiYF4:Tm crystal and had an emission wavelength of 1910 nm and a maximum output power of 10 W. Two types of optical fibers were used: (1) bare tip and (2) radial or radial with two rings. Histological and morphometric methods were used, and the statistical digital data were analyzed. Results: The use of a linear endovenous energy density of 20 J/cm and optical bare fibers for veins with diameters of 3-4 mm resulted in a slit-shaped or wide venous wall perforation. A thermal effect was observed on the perivenous connective tissue (PVCT), which caused damage to its structures. Wide perforations were accompanied by complete destruction of the PVCT in the projection of the formed defect. The distance between the remaining vein wall fragment, located opposite to the perforation, and injured small vessels was 257.7 ± 23.6 μm. The radius of thermal damage increased to 2073.5 ± 8.0 μm near the vessel perforation. Using optical radial fibers for veins with diameters of 3.9 ± 0.5 mm did not lead to perforations. The destructive effect of the laser on small vessels of the PVCT extended to a distance of 425.7 ± 22.0 μm. Conclusions: Analysis of thermal vessel damage in perivenous tissue after EVLC with bare-tip fiber shows that in the projection of a wide perforation, the damaged vessels of the PVCT are located at a large distance from the coagulated vein wall. On the opposite side of the perforation, the distance from the coagulated vein wall to the damaged vessels of the PVCT is significantly reduced because of the minimal output of laser radiation energy through the poorly damaged part of the wall. Using an optical radial fiber facilitates the application of a uniform distribution of thermal energy to the vein wall and damage to all its layers; at the same time, it minimizes the thermal energy that extends beyond the vein wall and damages the surrounding tissue. Clinical Relevance: The use of radiation with a wavelength of 1910 nm will make it possible to carry out endovenous laser coagulation of varicose veins at lower power values compared with radiation in the micron and one and a half micron regions of the spectrum. Understanding of morphological changes of veins and perivenous tissues after endovenous laser coagulation with 1910-nm laser radiation and different types of optical fibers (bare-tip, radial, radial 2ring) help predict possible complications and reduce their rate. Answer: Yes, the source of laser energy does influence the coagulation of chorionic plate vessels. Studies comparing different types of lasers, such as diode and Neodymium-Yttrium Aluminium Garnet (Nd:YAG), have shown that while both can induce significant and comparable changes in chorionic plate vessels compatible with an efficient coagulation process, there may be differences in the optimal power settings and the extent of the lesions produced (PUBMED:17003553). For instance, lesions of the endothelium and reduction of the vessel lumen were best achieved with a diode laser at 40 W (PUBMED:17003553). Additionally, the effectiveness of laser coagulation can be influenced by factors such as blood flow velocity and shear stress, particularly in larger diameter blood vessels (500-1000 µm). These factors are fundamental in the coagulation process, and laser dosimetry must be adjusted accordingly for successful coagulation and hemostasis (PUBMED:35589781). Histological examination of placental vessels after laser coagulation has suggested that the extent of vascular lesions depends on the incident beam energy, fiber diameter, distance, and angle to the vessel, indicating that the source and delivery of laser energy are critical for achieving the desired effect (PUBMED:9101219). Moreover, the wavelength of the laser used can also impact the coagulation process. For example, a study using a 1910 nm wavelength laser for endovenous laser coagulation (EVLC) demonstrated that different types of optical fibers (bare tip and radial) can result in varying morphological changes in veins and perivenous tissues, suggesting that the choice of laser source and fiber type can influence the outcome of the coagulation (PUBMED:34506961). In summary, the source of laser energy, including the type of laser, power settings, wavelength, and delivery method, plays a significant role in the coagulation of chorionic plate vessels.
Instruction: Rural placements in Tasmania: do experiential placements and background influence undergraduate health science student's attitudes toward rural practice? Abstracts: abstract_id: PUBMED:18767916 Rural placements in Tasmania: do experiential placements and background influence undergraduate health science student's attitudes toward rural practice? Introduction: Each year growing numbers of undergraduate health science students, from a variety of disciplines, participate in a University of Tasmania Department of Rural Health supported rural placement program in Tasmania. This study aimed to investigate the influence rural placement and rural background had on students' intentions to live and work in a rural or remote location after graduation. Methods: Between January 2005 and December 2006, 336 students participated in the placement program. Students were requested to complete a survey at the completion of their placement. A response rate of 239 was achieved (71%). The survey measured students' stated rural career intentions and rural background status according to location of primary and secondary school attendance. A demographic analysis of respondents was undertaken and results cross tabulated according to the rural, remote and metropolitan area (RRMA) classification system. Statistical analyses, including paired t-tests and a Wilcoxon signed rank test, were conducted to compare reported mean intention to practise rurally both prior to and after placement. Results: The results from this survey show that rural placements in the undergraduate health science programs have a predominantly positive influence on students' intention to work in a rural community post-graduation. While these findings were significant for the disciplines of nursing, medicine and allied health, the results were not significant for pharmacy students. Students' average intention to practise rurally significantly increased after the placement for students from RRMA classifications 1 and 3-5. Conclusion: The value of rural placements as a method for increasing health science students' intentionality to take up rural practice as a positive and viable career option is considerable. abstract_id: PUBMED:35324046 Does undertaking rural placements add to place of origin as a predictor of where health graduates work? Objective: To determine the work location (metropolitan, regional, rural and remote) of graduates in nursing, allied health and oral health disciplines who complete their professional training, end-to-end training, in a regional or rural area noting the potential inclusion of a metropolitan-based placement for speciality practice not available in rural or regional Victoria. Methods: We tracked the place of employment from the Australian Health Practitioners Regulation Agency (AHPRA) of all graduates from a regional/rural tertiary education provider. The student home address at enrolment, locations where they undertook all placements and their current place of work were described using an objective geographical model of access, the Modified Monash Model. Results: Seventy-five per cent of 5506 graduates were located in the AHPRA database. About one third of graduates were working in metropolitan areas, 1/3 in regional cities and 1/3 in rural areas. Students' origin accounted for 1/3 of variance in current workplace location. The more placement days students completed in regional/ rural areas was also a significant predictor of working in a regional or rural area. Conclusion: End-to-end training in regional/rural areas is an effective approach to retaining a regional/rural workforce. Student origin is a strong predictor of working rural or regionally, as is undertaking placements in rural areas. This suggests that priority for rural/ regional student placements should be given to students in end-to-end regional/ rural programs and students from a regional/ rural background. abstract_id: PUBMED:26122723 Increasing quality and quantity of student placements in smaller rural health services: It can be done. Problem: Future rural nursing and midwifery workforce shortage and current shortage of placements for undergraduate nursing and midwifery students. Design: Developmental evaluation. Setting: Five small hospitals and a regional community health service. Key Measures Of Improvement: Increased number of placements available for students; sustainable, quality clinical placement system in place for the future. Strategies For Change: Clinical facilitators developed a clinical facilitation model and resources that increased quality and quantity of student placements, assisted clinicians to provide higher quality teaching to students, enabled students to develop skills in rural health care service provision and enhanced knowledge around future career options. Effects Of Change: Placement targets were met and all health services involved chose to continue the model of clinical facilitation developed after project funding ceased. The clinical education skills developed by staff under the project remain in the region to support future students. Lessons Learnt: It is possible to create a sustainable, high-quality, rural placement experience for larger numbers of nursing and midwifery students. Funding sources are available to continue the clinical facilitation model in to the future, but for it to work optimally, a steady supply of students across the year is required. abstract_id: PUBMED:33793014 Co-designing community-focused rural placements for collaborative practice. Aims And Context: As a University Department of Rural Health, we have identified recurrent areas of service need among vulnerable rural populations, specifically the need for allied health. Concurrently, we have also identified missed opportunities for deliberate collaborative practice in rural clinical placements. This paper provides a commentary on our work in progress as we work to leverage available opportunities to provide both service from and education for health profession students on rural clinical placements. Approach: We developed a transdisciplinary placement model, informed by practice theory, which encompasses pre-placement preparation, student support, host sites and clinicians, and a structured evaluation strategy. This model aims to facilitate service provision alongside of student learning about community and collaborative practice. In particular, the co-design of the model is expected to facilitate student's sense of social accountability and reduce stigma in working with vulnerable population groups. Conclusion: This paper highlights the need for greater alignment between rural health education and practice, describes a placement model that is working towards this and showcases how this has been enacted in a remote community in New South Wales. More cross-sector discussion and evaluation is needed to determine the implications of adopting this model more widely if service and learning opportunities are to be equally achieved, and to determine the ways in which training and service provision can be aligned with community need, as recommended in the recent Rural Health Commissioner Report. abstract_id: PUBMED:27184770 The changing landscape of physiotherapy student clinical placements: An exploration of geographical distribution and student performance across settings. Objective: To describe the geographical distribution of physiotherapy clinical placements and investigate the relationship between geographical setting and clinical placement marks in physiotherapy students. Design: A retrospective cohort design was used for this study. Setting: The University of Newcastle, New South Wales. Participants: Data from entry-level Bachelor of Physiotherapy student clinical placements. Main Outcome Measure(s): Data from all clinical placements in the Physiotherapy program between 2003 and 2014 were included. For all clinical placements, student assessment mark, year of study, type of placement and placement location were collected. Placement location was then classified using the Modified Monash Model (MMM) categories: one (most metropolitan) to seven (most remote). Results: Over the 12 year period of the study 3964 placements were completed. Between 2003 and 2005 the average proportion of clinical placements occurring in metropolitan areas (MMM1) was 78% and in rural areas (MMM categories 3-6) was 22%. In 2014 these proportions had changed to 59% (MMM1) and 40% (MMM3-6). There were significant differences in clinical placement grades between MMM1 and all other categories except MMM2, with lower assessment marks in MMM1 than other categories. Conclusions: The changing distribution of physiotherapy clinical placements may be reflective of increasing student numbers and greater efforts to support students completing rural and remote placements. This change may lead to a positive effect on the rural and remote physiotherapy workforce. Further research is required to determine the specific training and support needs of students and clinical educators in rural and remote settings. abstract_id: PUBMED:33982851 Delivering introductory physiotherapy clinical placements incorporating simulated learning experiences in rural settings. Objective: To investigate the delivery, in rural settings, of introductory physiotherapy clinical placements incorporating simulation and describe the impact of these placements on the uptake of longer-term rural immersion opportunities. Design: Retrospective cohort design. Setting: The University of Newcastle and the University of Newcastle Department of Rural Health (UONDRH), New South Wales, Australia. Participants: Data from undergraduate physiotherapy students were included. Intervention: Second-year University of Newcastle physiotherapy students undertake an introductory placement which includes one week of simulated learning followed by a two-week traditional health care placement. Supervisor training and active promotion of placements were undertaken to increase the capacity of rural sites to deliver these placements, including both simulation and immersion components. Main Outcome Measure(s): Data relating to numbers of students undertaking introductory placements and final-year rural immersion experiences in the UONDRH between 2012 and 2020 were described. A simple review of the placement strategy was also undertaken. Results: Introductory placements incorporating simulation were successfully implemented in the UONDRH settings and have continued annually. Physiotherapy staff in the UONDRH have been upskilled and have supported local clinicians to increase their supervisory capacity. The proportion of students undertaking introductory placements supported by the UONDRH has increased considerably as has the number of students completing full-year rural immersion experiences in the UONDRH. Conclusions: Delivering introductory physiotherapy placements incorporating simulation in rural settings was feasible and this strategy leads to increased placement capacity. Early rural clinical placement opportunities can increase students' uptake of longer-term rural immersion experiences which, in turn, can positively influence rural practice intentions. abstract_id: PUBMED:35286753 Impact of the COVID-19 pandemic on medical student placements in rural Queensland: A survey study. Objective: The COVID-19 pandemic has adversely impacted medical students' learning experiences. Students in one Australian Rural Clinical School were surveyed to investigate the impact of disruptions to clinical placements and satisfaction with educational changes implemented as a result of the pandemic. Design: Cross-sectional survey. Setting: The University of Queensland Rural Clinical School. Methods: Students undertaking one or two years of study at the participating Rural Clinical School in November 2020. Main Outcome Measure: A 20-item anonymised survey with questions on personal health and safety, quality of clinical training experience, response to changes in learning and student environment, and progression to completion of the medical degree. Results: The survey was completed by 124 students (76% response rate). Students were satisfied with the changes made to their learning to accommodate the disruptions to health service delivery and placements. Final year students were more satisfied with their learning experiences compared to their third-year counterparts. Conclusions: The Rural Clinical School implemented a range of academic and psychological support strategies which appear to have helped with mitigating mental health concerns experienced by students completing rural placements, who are already prone to experiencing social isolation. Strengthening communication between the academic and health service sectors can improve the quality of learning for medical students on placements. abstract_id: PUBMED:32000498 Factors associated with rural work for nursing and allied health graduates 15-17 years after an undergraduate rural placement through the University Department of Rural Health program. Introduction: Very little is known about the long term workforce outcomes, or factors relating to these outcomes, for nursing and allied health rural placement programs. The positive evidence that does exist is based on short term (1-3 year) evaluations, which suggest that undergraduate rural placements are associated with substantial immediate rural practice of 25-30% graduates practising rurally. These positive data suggest the value of examining long term practice outcomes, since such data are necessary to providing an evidence base for future workforce strategies. The objective was to measure long term (15-17 year) rural practice outcomes for nursing and allied health graduates who had completed an undergraduate rural placement of 2-18 weeks through a university department of rural health (UDRH). Methods: This was a longitudinal cohort study, with measures taken at the end of the placement, at one year and at 15-17 years post-graduation. Participants were all nursing and allied health students who had taken part in a UDRH rural placement, who consented to be followed up, and whose practice location was able to be identified. The main outcome measure was factors associated with location of practice as being either urban (RA 1) or rural (RA 2-5). Results: Of 776 graduates initially surveyed, 474 (61%) were able to be contacted in the year after their graduation, and 244 (31%) were identified through the Australian Health Practitioner Regulation Agency, 15-17 years later. In univariate analysis at the first graduate year, previously lived rural, weeks in placement, discipline and considering future rural practice all had significant relationships with initial rural practice. In multivariate analysis, only rural background retained significance (odds ratio (OR) 3.19, confidence interval (CI) 1.71-5.60). In univariate analysis 15-17 years later, previously lived rural and first job being rural were significantly related to current rural practice. In multivariate analysis, only first job being rural retained significance (OR 11.57, CI 2.77-48.97). Conclusion: The most significant long term practice factor identified in this study was initial rural practice. This suggests that funding to facilitate a rural pathway to not just train but also support careers in rural nursing and allied health rural training, similar to that already established for pharmacy and medicine, is likely to have beneficial long term workforce outcomes. This result adds to the evidence base of strategies that could be implemented for the successful development of a long term rural health workforce. abstract_id: PUBMED:22713111 Does the positive influence of an undergraduate rural placement persist into postgraduate years? Introduction: Medical schools worldwide are playing a role in addressing the shortage of rural health practitioners. Selection of rural-origin students and long-term rural undergraduate placements have been shown to have a positive influence on a subsequent career choice of rural health. Evidence for the impact of short-term rural placements is less clear. In New Zealand, the Otago University Faculty of Medicine introduced a 7 week rural undergraduate placement at the Dunedin School Of Medicine, one of its three clinical schools, in 2000. A study of the first two annual cohorts showed a positive influence of the course on student attitudes to rural health and their intention to practise in a rural setting. The purpose of this study was to test whether or not these effects persisted into postgraduate years. Method: The original study cohorts were posted a questionnaire (questions worded identically to the original survey) in 2009 (5th and 6th postgraduate years). Non-responders were followed up after 2 months. Graduates from the same year cohort at the two other Otago clinical schools (Christchurch and Wellington) were also surveyed. In addition to analysis by question, principal component analysis (PCA) identified 3 questions which represented the influence of the medical undergraduate program on students' attitudes towards rural general practice. This was used as an index of influence of the undergraduate curriculum. Results: There was a statistically significant difference among graduates from Dunedin and the other two schools in reporting a positive influence towards rural practice from the undergraduate course.When asked how the medical undergraduate program influenced their attitude towards a career in rural practice, 56% of respondents from Dunedin reported a positive influence compared with 24% from Christchurch and 15% Wellington. This effect was less strong than that obtained immediately after the rural placement where 70% of Dunedin based students reported a positive influence. The index value for positive effect on attitudes was significantly higher for respondents who studied at Dunedin than at Wellington (mean index value 0.552 for Dunedin, -0.374 for Wellington t=4.172, p=0.000) or Christchurch (mean index value -0.083 for Christchurch t=2.606, p=0.011). There was no significant difference between Christchurch and Wellington (t=1.420, p=0.160). There was no significant difference among schools in the proportion of graduates who had worked or intended to work in rural general practice at any point in their career (24% Dunedin, 31% Christchurch, 16% Wellington (Phi=0.160, p=0.178). Conclusion: Most of the literature on the influence of rural undergraduate placements, especially short term placements, examines immediate changes. This study adds to the evidence by showing that positive effects from a rural undergraduate placement persist into the postgraduate years, although that in isolation is unlikely to result in a significant workforce effect. Further investigation is warranted into which features of the undergraduate placement result in an extended positive effect on student attitudes. abstract_id: PUBMED:38429667 Australian rural service learning student placements: a national survey. This preliminary national study is the first of its kind to investigate how service learning placements are implemented in real world settings in rural Australia and what factors enable or hinder their implementation. An anonymous survey was distributed to 17 University Departments of Rural Health (UDRH) in Australia. Numerical data were analysed descriptively. Textual data were analysed using a hybrid content analysis approach. Thirty seven respondents provided data representing 12 UDRHs. Responding UDRHs reported facilitating service learning programs, with experience in this context ranging from 3 months to 21 years. Service learning placements predominantly occurred in schools and aged care facilities. Occupational therapy, physiotherapy, and speech pathology were the most frequently involved professions in service learning. Enablers and barriers identified were categorised into: People, Partnerships, and Place and Space. This national-scale study provides a springboard for more in-depth investigation and implementation research focused on development of a conceptual model to support service learning across rural and remote Australia. Answer: Yes, rural placements in Tasmania do influence undergraduate health science students' attitudes toward rural practice. The study conducted in Tasmania found that experiential rural placements had a predominantly positive influence on students' intention to work in a rural community post-graduation. This was particularly significant for students in the disciplines of nursing, medicine, and allied health, although the results were not significant for pharmacy students. Students' average intention to practice rurally significantly increased after the placement, especially for students from rural, remote, and metropolitan area (RRMA) classifications 1 and 3-5 (PUBMED:18767916). Additionally, the study suggests that the value of rural placements as a method for increasing health science students' intentionality to take up rural practice as a positive and viable career option is considerable. This aligns with the broader literature that indicates both rural background and rural placements during education are predictors of future rural practice (PUBMED:35324046). Furthermore, other studies have shown that rural placements can be successfully implemented and can lead to increased placement capacity and positive effects on students' rural practice intentions (PUBMED:33982851). The impact of rural placements on student attitudes and intentions to practice rurally appears to persist into postgraduate years as well (PUBMED:22713111). Overall, the evidence suggests that rural placements, combined with students' backgrounds, play a significant role in shaping their attitudes towards rural practice and can be an effective strategy for addressing rural workforce shortages.
Instruction: Is human myometrial sampling at the time of cesarean delivery safe? Abstracts: abstract_id: PUBMED:11120532 Is human myometrial sampling at the time of cesarean delivery safe? Objective: The mechanism for the initiation of human labor remains unknown and is under extensive investigation. Myometrium from patients in labor and not in labor is the ideal tissue to study structural, cellular, and molecular changes that occur during parturition. This study was designed to determine whether myometrial sampling at the time of cesarean delivery increases maternal morbidity. Study Design: This is a prospective cohort study including 118 study and 236 control patients. A full-thickness myometrial sample was obtained from the superior edge of a transverse uterine incision at the time of cesarean delivery. Demographics and standard surgical morbidity data were collected. Statistical methods used included univariate and multivariate analysis. Results: The study and control groups did not differ significantly with respect to age, gravidity, parity, birth weight, and Apgar scores. The estimated intraoperative blood loss was greater in the control group (P &lt;.02); however, the change in hematocrit level (preoperative vs postoperative values) was not different. There were no significant differences in the rates of endometritis, wound infection, and venous thrombosis up to 6 weeks post partum. When study and control patients were stratified into term in labor, term not in labor, preterm in labor, and preterm not in labor categories and compared for maternal morbidity, there were still no significant differences for any of the outcome measures evaluated. Conclusion: On the basis of our data, human myometrial sampling at cesarean delivery does not increase overall maternal morbidity, irrespective of gestational age and the presence or absence of labor. abstract_id: PUBMED:27235325 The study of progesterone action in human myometrial explants. Study Hypothesis: Myometrial explants represent a superior model compared with cell culture models for the study of human myometrial progesterone (P4) signalling in parturition. Study Finding: Gene expression analysis showed myometrial explants closely resemble the in vivo condition and the anti-inflammatory action of P4 is not lost with labour onset. What Is Known Already: Circulating P4 levels decline before the onset of parturition in most animals, but not in humans. This has led to the suggestion that there is a functional withdrawal of P4 action at the myometrial level prior to labour onset. However, to date, no evidence of a loss of P4 function has been provided, with studies hampered by a lack of a physiologically relevant model. Study Design, Samples/materials, Methods: Myometrial biopsies obtained at Caesarean section were dissected into explants after a portion was immediately snap frozen (t = 0). Microarray analysis was used to compare gene expression of t = 0 with paired (i) explants, (ii) passage 4 myometrial cell cultures or (iii) the hTERT myometrial cell line. Western blotting and chemokine/cytokine assays were used to study P4 signalling in myometrial explants. Main Results And The Role Of Chance: Gene expression comparison of t = 0 to the three models demonstrated that explants more closely resemble the in vivo status. At the protein level, explants maintain both P4 receptor (PR) and glucocorticoid receptor (GR) levels versus t = 0 whereas cells only maintain GR levels. Additionally, treatment with 1 µM P4 led to a reduction in interleukin-1 (IL-1) β-driven cyclooxygenase-2 in explants but not in cells. P4 signalling in explants was PR-mediated and associated with a repression of p65 and c-Jun phosphorylation. Furthermore, the anti-inflammatory action of P4 was maintained after labour onset. Limitations/reasons For Caution: There is evidence of basal inflammation in the myometrial explant model. Wider Implications Of The Findings: Myometrial explants constitute a novel model to study P4 signalling in the myometrium and can be used to further elucidate the mechanisms of P4 action in human labour. Large Scale Data: Data deposited at http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=gvmpggkurbgxfqf&amp;acc=GSE77830. Study Funding And Competing Interest: This work was supported by grants from the Joint Research Committee of the Westminster Medical School Research Trust, Borne (No. 1067412-7; a sub-charity of the Chelsea and Westminster Health Charity) and the Imperial NIHR Biomedical Research Centre. The views expressed are those of the author(s) and not necessarily those of the NHS or the Department of Health. The authors have no conflict of interest. abstract_id: PUBMED:6465248 Prolonged inhibition of human myometrial contractility by intermittent isoproterenol. Regular, spontaneous contractions of human myometrial strips obtained at the time of elective cesarean section were recorded in a tissue bath in five experiments. Administration of the beta-adrenergic agonist isoproterenol (1 mumol/L) resulted in the rapid onset of myometrial relaxation. In the continuous presence of the beta-agonist, myometrial contractions of amplitude equal to or greater than that of baseline resumed after 10 to 40 minutes (mean +/- SEM = 29.7 +/- 7.3), indicating the development of desensitization of the tissue to the tocolytic effects of isoproterenol. In contrast, intermittent exposure of the myometrial strips to isoproterenol prevented the onset of desensitization and resulted in prolonged inhibition of myometrial contractions. These findings suggest that modification of the present regimen of administration of beta-adrenergic agonists from continuous to intermittent infusion may improve the success of these agents in the prevention of preterm births. abstract_id: PUBMED:34404382 Repair of the myometrial scar defect at repeat caesarean section: a modified surgical technique. Background: To investigate whether the existing surgical technique for uterine closure at repeat lower segment Caesarean section (LSCS) can be modified to achieve adequate residual myometrial thickness (RMT) to ensure scar integrity and reduce complications in future pregnancy. Methods: Women with a significant scar defect at repeat LSCS had the anterior uterine wall closed by a single experienced obstetrician with a technique focused on recognition, mobilisation and apposition of the retracted myometrial edges at the boundary of the defect. This was aimed at anatomical restoration of the lower segment. The RMT at the scar area was assessed by postnatal pelvic ultrasound scan at three months. Results: Thirty women with a history of at least one previous CS, incidentally found to have a large defect at operation underwent the technique with prior consent. A postnatal scan showed a mean residual myometrial thickness of 8.4 mm (SD ±1.3 mm; range 5.6-11.0 mm). The average operating time was 91 mins and the average blood loss 728 ml. Two women who underwent the repair have gone on to have a further uneventful CS. Conclusion: This modified technique resulted in scan evidence of an RMT indicative of uterine wall stability postnatally and offers the potential for reducing the risk of rupture and placenta accreta spectrum (PAS) in future pregnancy. abstract_id: PUBMED:27007079 The Recovery Time of Myometrial Responsiveness After Oxytocin-Induced Desensitization in Human Myometrium In Vitro. Background: Postpartum hemorrhage secondary to uterine atony is a leading cause of maternal morbidity. Prolonged exposure to oxytocin for labor augmentation can result in the desensitization phenomenon, a decrease in the responsiveness of myometrium to further oxytocin. It is currently not known whether waiting for a specific time interval after the cessation of oxytocin allows the oxytocin receptors to resensitize and recover, thereby improving subsequent oxytocin-induced myometrial contractility. We aimed to investigate the effect of a rest period of 30, 60, and 90 minutes after oxytocin administration on the recovery of oxytocin-desensitized human myometrium in vitro. We hypothesized that the longer the rest period, the better the responsiveness and subsequent oxytocin-induced contractility of the myometrium. Methods: Myometrial tissue was obtained from women undergoing elective cesarean deliveries. The myometrial sample was dissected into 4 strips, and each strip was mounted in a single organ bath with physiological salt solution (PSS) under homeostatic conditions and then pretreated for 2 hours with oxytocin 10 M. After pretreatment, each strip was washed with PSS and allowed to rest in PSS solution for 30, 60, or 90 minutes. At the end of the rest period, dose-response testing to oxytocin 10 to 10 M was performed. A control group consisted of oxytocin dose-response testing without any oxytocin pretreatment. Contractile parameters were measured and compared among the groups after square root transformation. The primary outcome was motility index (frequency × amplitude), and secondary outcomes included frequency, amplitude, and area under the curve. Results: Fifty-five experiments were conducted from samples obtained from 16 women. The mean motility index (√g·contractions/10 min) during the dose-response curve (oxytocin 10 to 10 M) in the control group was significantly greater than all the experimental groups; the mean estimated differences (95% confidence intervals) were -1.33 (-2.50 to -0.15, P = 0.02), -1.59 (-2.68 to -0.50, P = 0.004), and -1.88 (-2.97 to -0.80, P = 0.001) for the 30-, 60-, and 90-minute groups, respectively. When the experimental groups were compared, there were no significant differences in any of the contractility parameters; however, confidence intervals were wide. Conclusions: Our study shows that oxytocin pretreatment attenuates oxytocin-induced contractility in human myometrium despite a rest period of up to 90 minutes after oxytocin administration. However, we were unable to determine whether increasing the rest period from 30 to 90 minutes results in improvement in myometrial contractility because of our small sample size relative to the variability in the contractile parameters. Further laboratory and clinical in vivo studies are necessary to determine whether a rest period up to 90 minutes results in improvement in myometrial contractility. In addition, further experimental studies are necessary to determine the key mechanisms of oxytocin receptor resensitization. abstract_id: PUBMED:35982885 Uterine Myometrial Preischemia: Yet Another Physiological Change of Pregnancy! Background The uterus shows tremendous increase in size during pregnancy to nurture the fetus within it. It may show a spectrum of physiological changes or pathological lesions that may affect the pregnancy favorably or adversely. The main purpose of our study was to analyze the physiological changes in the uterus during pregnancy and the postpartum period, thereby gaining deeper knowledge. Materials and Methods We studied a total of 152 uterine specimens obtained from obstetric hysterectomies and maternal autopsies for the presence or absence of normal physiological changes. As a control group, an equal number of surgical uterine specimens received for abnormal uterine bleeding were studied. Results Decidual change was observed from 6 weeks of gestation to 16 days postpartum. It was partially deficient to absent in four cases of placenta accreta. Myometrial hypertrophy was seen in 150 cases (98.68%) from 8 weeks of gestation till 30 days postpartum period. Vascular remodeling was partially deficient to absent in eight cases of pregnancy-induced hypertension. Preischemia of myometrial fibers was an unexpected finding noticed from 20 weeks of gestation to 16 days postpartum in 131 cases (86.18%). Cervical wall hemorrhages were seen in 84.84% cases of vaginal delivery and in only 17.64% cases of lower segment cesarean section. Conclusion Ours is the first study to describe the duration of the routine physiological uterine changes during pregnancy. The relationship between cervical wall hemorrhages and vaginal delivery as well as between myometrial preischemia and gestational age, both being normal physiologic findings, was found to be statistically significant. abstract_id: PUBMED:11467695 Human chorionic gonadotrophin inhibition of pregnant human myometrial contractility. Objective: To evaluate the effect of human chorionic gonadotrophin (hCG) on pregnant human myometrial contractility in vitro and to determine whether the hCG-elicited effect was oestrogen dependant. Methods: Isometric tension recording was performed under physiological conditions in isolated myometrial strips from biopsies obtained at elective caesarean section. The effect of cumulative additions of hCG (0.001, 0.01, 0.1, 1.0 and 10 iu/mL) on myometrial contractility was evaluated. Secondarily, the contractile activity of pregnant myometrium following hCG exposure was investigated in tissue pre-treated with beta-oestradiol. Results: hCG exerted a statistically significant relaxant effect on pregnant human myometrial tissue. The relaxant effect increased with increasing concentrations of hCG from 8.96% (SEM 2.06) (0.001 iu/mL hCG: P &lt; 0.01 ) to a net cumulative total of 58.50% (SEM 3.74) (10 iu/mL hCG; P &lt; 0.01). The relaxant effect was also time-dependant, increasing in magnitude throughout the duration of experiments. Beta-oestradiol did not significantly affect the response of myometrial tissue to hCG. Conclusions: These results clearly demonstrate that hCG exerts a significant concentration-dependant relaxant effect on human myometrial tissue obtained rate in pregnancy. These findings outline an inhibitory physiological role of hCG on human myometrial contractility and raise the possibility of its potential use as a tocolytic. abstract_id: PUBMED:35687352 Caesarean section operation is not associated with myometrial hypertrophy-a prospective cohort study. Maternity statistics of England in 2020 showed rise in Caesarean Section (CS) rate to 31%. Some studies correlated adverse gynaecological symptoms e.g. menstrual irregularities and pelvic pain to 'niche' formation at CS scar site. Niche formation was speculated to cause myometrial hypertrophy aggravating these symptoms. This was a prospective comparative histological study including 52 consecutive benign hysterectomy specimens which were categorised into 2 groups: (i) specimens with CS scar (n = 22), (ii) specimens with no CS scar (n = 30). Median (IQ range) uteri weight was 97.2grms (43.5-226) and 91.7grms (35.7-201.7) in study and control groups, respectively (p = .991). Mean (±SD) thickness of anterior myometrial wall was 18.7 mm (±3.6) and 19.4 mm (±4.5) in study and control groups, respectively (p = .58). Mean (±SD) thickness of posterior myometrial wall was 19.1 mm (±3.7) and 18.7 mm (±3.9) in study and control groups, respectively (p = .78). The assumption that CS scar causes myometrial hypertrophy was not demonstrated in this study.IMPACT STATEMENTWhat is already known on this subject? Maternity statistics world-wide show a continuous rise in the rate of Caesarean Section (CS) operation. The CS scar is assumed to be related to adverse clinical gynaecological symptoms such as intermenstrual bleeding, dysmenorrhoea, dyspareunia and chronic pelvic pain; however, the mechanism of this association is not clear. Further, little is known about the effects of CS scar on uterine wall morphology and function.What do the results of this study add? This study was the first prospective series in the literature to compare the uteri with scar with those without in respect of weight and myometrial wall thickness. It was not able to demonstrate the association between having CS scar and myometrial hypertrophy which was hypothesised to be the cause of adverse gynaecological symptoms. However, the microscopic examination of the CS scar revealed adenomyosis, haemorrhage and/or chronic inflammation in most cases.What are the implications of these findings for clinical practice and/or future research? The clinical implication of the histological changes shown in the CS scar requires large comparative clinical studies. abstract_id: PUBMED:20132415 The contribution of Kv7 channels to pregnant mouse and human myometrial contractility. Premature birth accounts for approximately 75% of neonatal mortality and morbidity in the developed world. Despite this, methods for identifying and treating women at risk of preterm labour are limited and many women still present in preterm labour requiring tocolytic therapy to suppress uterine contractility. The aim of this study was to assess the utility of Kv7 channel activators as potential uterine smooth muscle (myometrium) relaxants in tissues from pregnant mice and women. Myometrium was obtained from early and late pregnant mice and from lipopolysaccharide (LPS)-injected mice (day 15 of gestation; model of infection in pregnancy). Human myometrium was obtained at the time of Caesarean section from women at term (38-41 weeks). RT-PCR/qRT-PCR detected KCNQ and KCNE expression in mouse and human myometrium. In mice, there was a global suppression of all KCNQ isoforms, except KCNQ3, in early pregnancy (n= 6, P &lt; 0.001 versus late pregnant); expression subsequently increased in late pregnancy (n= 6). KCNE isoforms were also gestationally regulated (P &lt; 0.05). KCNQ and KCNE isoform expression was slightly down-regulated in myometrium from LPS-treated-mice versus controls (P &lt; 0.05, n= 3-4). XE991 (10 μM, Kv7 inhibitor) significantly increased spontaneous myometrial contractions in vitro in both human and mouse myometrial tissues (P &lt; 0.05) and retigabine/flupirtine (20 μM, Kv7 channel activators) caused profound myometrial relaxation (P &lt; 0.05). In summary, Kv7 activators suppressed myometrial contraction and KCNQ gene expression was sustained throughout gestation, particularly at term. Consequently, activation of the encoded channels represents a novel mechanism for treatment of preterm labour. abstract_id: PUBMED:38173883 A rare case of massive intrapartum hemorrhage followed by inner myometrial laceration during a vaginal delivery: A case report. Key Clinical Message: Considering the laceration of the inner layer of the myometrium as an important and controllable cause of bleeding during childbirth can lead to saving the mother's life. Abstract: Laceration of the inner layer of the myometrium can cause massive bleeding during and after childbirth, which can lead to the death of the mother if it is not diagnosed in time.we presented a rare case of massive intrapartum bleeding following myometrial laceration that diagnosed correctly and the patient survived with in-time treatments. The patient was a 26-year-old woman who was under observation for term pregnancy and complaint of rupture of membranes (ROM) and vaginal bleeding. Following the spontaneous course of labor and without receiving oxytocin, during the normal course of labor, she was with an estimated total blood loss of 750 mL bleeding, which despite the normal fetal heart rate and with the mother's indication for cesarean section, was transferred to the operating room and underwent cesarean section. During the cesarean section, the amniotic fluid was clear, after the removal of the placenta, severe and clear bleeding was flowing from the posterior wall of the uterus, which was caused by the laceration of the inner layer of the myometrium in the posterior wall of the lower segment of the uterus. The myometrial laceration was repaired with absorbable continuous locked sutures and hemostasis was established, and then the patient used uterotonic drugs, and after monitoring, the patient was discharged from the hospital in good condition. Answer: Yes, human myometrial sampling at the time of cesarean delivery is considered safe. A prospective cohort study including 118 study and 236 control patients, where a full-thickness myometrial sample was obtained from the superior edge of a transverse uterine incision during cesarean delivery, found no significant increase in maternal morbidity. The study and control groups did not differ significantly in terms of age, gravidity, parity, birth weight, and Apgar scores. The estimated intraoperative blood loss was actually greater in the control group, and there were no significant differences in the rates of endometritis, wound infection, and venous thrombosis up to 6 weeks postpartum. This was consistent across different categories of patients, including those who were term or preterm, and those in labor or not in labor. The conclusion drawn from the data was that myometrial sampling at cesarean delivery does not increase overall maternal morbidity, irrespective of gestational age and the presence or absence of labor (PUBMED:11120532).
Instruction: "Cold calling" in psychiatric follow up studies: is it justified? Abstracts: abstract_id: PUBMED:12930861 "Cold calling" in psychiatric follow up studies: is it justified? Background: The ethics of cold calling-visiting subjects at home without prior appointment agreed-in follow up research studies has received little attention although it is perceived to be quite common. We examined the ethical implications of cold calling in a study of subjects with defined neurotic disorders followed up 12 years after initial assessment carried out to determine outcome in terms of symptoms, social functioning, and contact with health services. The patients concerned were asked at original assessment if they would agree to be followed up subsequently and although they agreed no time limit was put on this. Objectives: To decide if cold calling was ethically justifiable and, if so, to set guidelines for researchers. Design: The study was a cohort study of patients with neurotic disorder treated initially for 10 weeks in a randomised controlled trial. Findings: At follow up by a research medical practitioner 18 of the 210 patients had died and of the remaining 192 patients 186 (97%) were seen or had a telephone interview. Four patients refused and two others did not have interviews but agreed to some data being obtained. However, only 104 patients (54%) responded to letters inviting them to make an appointment or to refuse contact and the remainder were followed up by cold calling, with most patients agreeing readily to the research interview. The findings illustrate the dilemma of the need to get the maximum possible data from such studies to achieve scientific validity (and thereby justify the ethics of the study) and the protection of subjects' privacy and autonomy. Conclusions: More attention needs to be paid to consent procedures if cold calling is to be defended on ethical grounds but it is unreasonable to expect this to be obtained at the beginning of a research study in a way that satisfies the requirements for informed consent. A suggested way forward is to obtain written consent for the research at the time that cold calling takes place before beginning the research. abstract_id: PUBMED:38247676 Structural Relationships between Counselors' Sense of Calling, Meaning of Work, Living a Calling, and Psychological Burnout. This study examined the influence of counselors' sense of calling on psychological burnout, mediated by meaning of work and living a calling, based on the work as a calling theory (WCT) and preceding studies. Furthermore, the sequential mediating effects of meaning of work and living a calling were investigated. Data were collected from 420 Korean counselors working in counseling centers located nationwide and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results revealed that a sense of calling negatively impacted psychological burnout. Second, the sense of calling did not affect psychological burnout through the mediation of meaning of work. Third, the sense of calling negatively impacted psychological burnout through the mediation of living a calling. Fourth, the sense of calling negatively affected psychological burnout through the sequential mediation of meaning of work and living a calling. Based on this study's findings, implications can be provided to enhance counselors' professional calling and reduce psychological burnout, thereby aiding them in resolving their psychological issues during counseling practice and providing higher-quality psychological services to clients. Suggestions for improvements and future research are also discussed. abstract_id: PUBMED:31607969 When a Calling Goes Unanswered: Exploring the Role of Workplace Personalizations as Calling Enactments. Individuals are sometimes unable to realize their callings in their formal careers. The literature has highlighted that such unanswered callings produce negative outcomes in the individual's career and personal life and that coping strategies, such as job and leisure crafting, can help them buffer such consequences. We developed a grounded theory regarding how people cope with their unanswered callings through a previously unexplored strategy in the calling literature: workplace personalization. Our study revealed that through this strategy, individuals retain the aspects of an unanswered calling in their self-concept and then reduce the consequences of not realizing the calling. Some participants enjoy some of the benefits of perceiving a calling, even without performing it in a formal work role. This phenomenon occurs because workplace personalization can be used to represent unanswered callings performed in the past and present, or that are intended to be performed in the future. This form of enactment produces interpersonal and intrapersonal processes that help buffer the negative consequences of not realizing a calling. abstract_id: PUBMED:33708163 The Search for and Presence of Calling: Latent Profiles and Relationships With Work Meaning and Job Satisfaction. Previous studies showed inconsistent results on the association between searching for calling and its psychosocial functioning outcomes (i.e., work meaning and job satisfaction). The link of searching for calling to its psychosocial functioning outcomes may be influenced by the presence of calling because the search for and presence of calling can co-exist within individuals. Thus, the present study employed a person-centered method (latent profile analysis) to identify subgroups combining the search for and presence of a calling and then explore the identified profiles' differences in work meaning and job satisfaction. Study participants were Chinese kindergarten teachers (n = 726). Latent profile analysis revealed four different groups: (1) actively maintaining calling (24.93%), (2) unsustainable calling (11.43%), (3) moderately increasing calling (23.14%), and (4) actively increasing calling (40.50%). Subsequent analyses showed notable differences across the four groups on work meaning and job satisfaction. Participants in profile 1 with both the highest searching for and presence of calling would experience more work meaning and job satisfaction than those in the other profiles whose strengths of searching for and presence of calling were relatively low. Participants in profile 4 had higher searching for and presence of calling than those in profile 3, and they experienced more meaningfulness at work and were more satisfied with their job. These findings indicate that actively searching for calling is closely associated with more work meaning and job satisfaction among people who already perceive intensive calling. Implications, limitations, and future directions of the results are discussed. abstract_id: PUBMED:36476058 Sleep and psychiatric symptoms in young child psychiatric outpatients - a Follow-up study. Background: Knowledge of the continuity of sleep problems and the associations between sleep and psychiatric symptoms in child psychiatric patients is scarce. Objectives: To investigate the persistency of sleep problems and how sleep at preschool age predicts sleep problems and psychiatric symptoms at school age in child psychiatric patients. Methods: Participants (n = 68) were child psychiatry outpatients at Helsinki University Hospital in 2015-2017. Caregivers evaluated sleep with the Sleep Disturbance Scale for Children (SDSC) and psychiatric symptoms with the Child Behavior Checklist (CBCL) at baseline (age 4-7 years) and again at follow-up (age 8-13 years). Family background information was collected at both time points. Results: Sleep problems at preschool age predicted sleep problems at school age (R2Adjusted = .48, p &lt; .001). Persistent sleep problems associated strongly with the intensity of psychiatric symptoms (p = .001). Internalizing symptoms were predicted by sleep problems (p = .038) even after controlling for age, sex, and psychiatric symptoms at preschool age. Conclusion: Sleep problems are prevalent and persistent and relate to psychiatric symptoms in children treated at child psychiatry clinics. These results emphasize the need for identification and treatment of sleep problems in these children. abstract_id: PUBMED:30780201 Our observations with cold calling: Patient anger and undesirable experiences. Little is known about the effects of the 'cold calling' technique (telephone contact without prior warning) for patient follow up in ED research. Recently, we undertook a prospective, observational pain management study. Patients were cold called 48 h post-discharge and surveyed regarding their pain management satisfaction. We made contact with 778 patients. Among these, we observed 12 cases of patient anger: mistaken identity, disbelief that the hospital was calling, frustration that test results and appointment times could not be provided, abuse about ED management and outpourings of sadness. We also observed eight cases of an undesirable experience for either the patient, their family or the caller: five patients had died (including one 'at her last moments'), precipitation of patient distress and uncomfortable situations for the caller. Given our experience, we believe that cold calling should be avoided, where possible, and other techniques (e.g. limited disclosure) considered as alternatives. abstract_id: PUBMED:1927568 A 30-year follow-up study of a child psychiatric clientele. II. Psychiatric morbidity. Psychiatric morbidity, expressed as hospital admissions during a 30-year follow-up period, was studied among 322 former child psychiatric patients, who were admitted from 1949-1951, and who were followed up as of December 31, 1980. A total of 115 patients (36%)--55 boys (29%) and 60 girls (45%)--had been admitted to an adult psychiatric department, with 50 patients having only one admission. The mean age at the time of the study was 39 years. The former child psychiatric patients were admitted to adult psychiatric hospitals 50 times more often than comparable age groups from the general population. At all times women had a higher prevalence of admission. The cumulative percentage of first admissions of men was almost unchanged during the last 10 years of the follow-up period, and the figure for women was gradually increasing. The longitudinal course of mental disorders in the sample, measured as psychiatric admissions, was studied in relation to age at the time of admission to the child psychiatric department. The results consistently showed that older age of admission as a child meant fewer psychiatric admissions as an adult during the follow-up period. A total of 39 of the psychiatrically admitted patients (34%) had been granted a disability pension. A total of 7 patients (6%) died during the study period, including 2 patients who committed suicide. By the variables employed, 37% of the sample were judged to have had a good overall outcome, with diagnosis being an inconsistent predictor of outcome.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:10709317 The use of psychotropic drugs in an Italian psychiatric hospital: a two-year-long follow-up study Objective: Following the introduction of guidelines of rational drug use, the pharmacoepidemiology of psychotropic drugs was investigated in a sample of long-stay patients living in a Italian psychiatric hospital. Design: A prospective, longitudinal two-year follow-up study was carried out. Information about sociodemographic and clinical characteristics of the inpatient population, and about medications prescribed, was collected at baseline and after one and two years of follow-up. Setting: Three wards of the psychiatric hospital of Milan. Main Outcome Measures: Number of patients taking psychotropic drugs, number of patients taking more than one neuroleptic or benzodiazepine, mean neuroleptic dose, psychopathological status according to the Brief Psychiatric Rating Scale (BPRS). Results: 70 patients were recruited and followed for two years. At follow-up a reduction in the number of patients taking neuroleptic drugs was recorded, together with a 50% decrease in the number of patients taking more than one neuroleptic. A reduction in the use of depot formulations was in addition shown. Patients taking benzodiazepines decreased of 50%. According to the BPRS, no psychopatological changes were observed during the study. Conclusions: These data suggest that education in psychopharmacology may guide towards a more rational use of drugs; longitudinal clinical audits should be implemented to monitor everyday practice. abstract_id: PUBMED:30289015 Cold hypersensitivity after hand injuries. A prospective 7-year follow-up. We studied changes in cold hypersensitivity from 3 to 7 years following severe hand injuries. Data was collected using postal questionnaires 7 years after injury in 71 patients who had participated in a 3-year follow-up from the time of injury. There was no change in cold sensitivity measured using the McCabe Cold Sensitivity Severity scale (CSS) from 3 to 7 years after injury. However, there was a trend toward decreased severity measured using a five-level scale of self-reported cold hypersensitivity. Compared to the 3-year follow-up, fewer respondents rated their condition as severe and two patients had recovered from their cold hypersensitivity at the 7-year follow-up. Furthermore, 21 (30%) of the respondents stated a decrease in cold hypersensitivity during the last 2 years. Limitations in cold associated activities and the importance of being less limited in leisure activities (NRS 0-10) did not change between the two follow-ups. In conclusion, the CSS-scores did not change from 3 to 7 years after injury. Several patients experienced improvements in cold hypersensitivity, but few recovered completely from the condition. abstract_id: PUBMED:28466161 Environmental factors influencing calling in sympatric anurans. Anuran reproduction is influenced by various biotic and abiotic factors, of which temperature and rainfall are the best studied. Here we examine the relationship between multiple abiotic environmental variables - specifically, air and water temperature, rainfall, barometric pressure, relative humidity and wind velocity - and the calling activity of five species (Rana sylvatica, Pseudacris crucifer, Bufo americanus, Rana clamitans, and Rana catesbeiana) in an anuran community in New Brunswick, Canada. Acoustical and environmental data were sampled hourly for 4 months during the breeding season in 1997. Logistic regression analyses indicated that each species responded to a unique combination of meteorological variables, even when calling concurrently. Calling in the spring breeding species, R. sylvatica, P. crucifer, and B. americanus, was most associated with the time of day (i.e., they called primarily at night), while calling in the summer breeding species, R. clamitans and R. catesbeiana, was associated primarily with high water temperature. Species with short breeding periods (i.e., explosive breeders; R. sylvatica, B. americanus) responded to fewer environmental variables than did species with prolonged breeding periods (P. crucifer, R. clamitans, R. catesbeiana). Prolonged breeding species responded differently to climatic variables throughout the breeding season: during the latter half of their calling periods, the time of day and a variable that predicts rain, i.e., barometric pressure, became more important, and water temperature became less important. Answer: The ethics of "cold calling," which involves visiting subjects at home without a prior appointment agreed upon, in psychiatric follow-up studies has been debated. In the study described in abstract PUBMED:12930861, researchers examined the ethical implications of cold calling in a follow-up study of subjects with neurotic disorders. The original assessment had asked patients if they would agree to be followed up subsequently, and while they agreed, no time limit was set on this agreement. The follow-up study faced a dilemma between the need to obtain maximum possible data for scientific validity and the protection of subjects' privacy and autonomy. The study found that only 54% of patients responded to letters inviting them to make an appointment, and the remainder were followed up by cold calling, with most patients agreeing readily to the research interview. The conclusion drawn was that more attention needs to be paid to consent procedures if cold calling is to be ethically defended, suggesting that written consent for the research should be obtained at the time that cold calling takes place before beginning the research. In contrast, another study (PUBMED:30780201) reported negative effects of cold calling, such as patient anger and undesirable experiences, when patients were contacted 48 hours post-discharge for a pain management study. The researchers observed cases of patient anger and undesirable experiences for the patient, their family, or the caller, including instances where patients had died. Based on these experiences, the authors recommended avoiding cold calling where possible and considering alternative techniques. In summary, while cold calling can be a method to achieve higher response rates in follow-up studies, which is crucial for scientific validity, it raises significant ethical concerns regarding informed consent and respect for privacy. The justification for cold calling in psychiatric follow-up studies hinges on the balance between scientific needs and ethical considerations, with a strong recommendation for improved consent procedures to ensure that the rights and autonomy of subjects are adequately protected (PUBMED:12930861). However, given the potential for negative experiences, as reported in another context (PUBMED:30780201), researchers should carefully weigh the benefits against the risks and consider alternative methods whenever possible.
Instruction: Current standards of care for inpatient glycemic management and metabolic control: is it time for definite standards and targets? Abstracts: abstract_id: PUBMED:15251634 Current standards of care for inpatient glycemic management and metabolic control: is it time for definite standards and targets? Objective: To review the available literature on the presence of diabetes in the inpatient population and its effect on outcomes. Results: Diabetes is a progressively worsening epidemic in the United States. Patients with diabetes have a disproportionate representation among the inpatient population in this country, and their share of total health-care costs is both disproportionate and growing rapidly. Patients with diabetes are often admitted to the hospital not primarily because of their diabetes but rather because of the need for treatment of the end-stage complications of diabetes, such as cardiovascular, cerebrovascular, and peripheral vascular disease as well as diabetic nephropathy and retinopathy, or because of unrelated illnesses for which diabetes is a complication. Diabetes is a frequent comorbid condition and increases the duration of hospitalization by 1 to 2 days. Numerous organizations have issued guidelines for outpatient diabetes management and metabolic control and have updated them periodically; however, no such guidelines or standards have been formulated for inpatient diabetes management. Conclusion: In view of a rapidly growing body of evidence suggesting that enhanced glycemic control decreases morbidity and mortality in patients with hyperglycemia, such as those with new-onset diabetes, as well as in patients with previously established diabetes, the creation of clearly defined standards and targets for inpatient management of hyperglycemia and metabolic control seems important for improvement of outcomes in hospitalized patients with diabetes. abstract_id: PUBMED:31862749 6. Glycemic Targets: Standards of Medical Care in Diabetes-2020. The American Diabetes Association (ADA) "Standards of Medical Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, a multidisciplinary expert committee (https://doi.org/10.2337/dc20-SPPC), are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations, please refer to the Standards of Care Introduction (https://doi.org/10.2337/dc20-SINT). Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC. abstract_id: PUBMED:36507646 6. Glycemic Targets: Standards of Care in Diabetes-2023. The American Diabetes Association (ADA) "Standards of Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, a multidisciplinary expert committee, are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations and a full list of Professional Practice Committee members, please refer to Introduction and Methodology. Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC. abstract_id: PUBMED:34964868 6. Glycemic Targets: Standards of Medical Care in Diabetes-2022. The American Diabetes Association (ADA) "Standards of Medical Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, a multidisciplinary expert committee (https://doi.org/10.2337/dc22-SPPC), are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations, please refer to the Standards of Care Introduction (https://doi.org/10.2337/dc22-SINT). Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC. abstract_id: PUBMED:31862752 9. Pharmacologic Approaches to Glycemic Treatment: Standards of Medical Care in Diabetes-2020. The American Diabetes Association (ADA) "Standards of Medical Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, a multidisciplinary expert committee (https://doi.org/10.2337/dc20-SPPC), are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations, please refer to the Standards of Care Introduction (https://doi.org/10.2337/dc20-SINT). Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC. abstract_id: PUBMED:29616146 Expectations for nursing care in newborn units in Kenya: moving from implicit to explicit standards. Neonatal mortality currently accounts for 45% of all child mortality in Kenya, standing at 22 per 1000 live births. Access to basic but high quality inpatient neonatal services for small and sick newborns will be key in reducing neonatal mortality. Neonatal inpatient care is reliant on nursing care, yet explicit nursing standards for such care do not currently exist in Kenya. We reviewed the Nursing Council of Kenya 'Manual of Clinical Procedures' to identify tasks relevant for the care of inpatient neonates. An expert advisory group comprising major stakeholders, policy-makers, trainers, and frontline health-workers was invited to a workshop with the purpose of defining tasks for which nurses are responsible and the minimum standard with which these tasks should be delivered to inpatient neonates in Kenyan hospitals. Despite differences in opinions at the beginning of the process, consensus was reached on the minimum standards of neonatal nursing. The key outcome was a comprehensive list and grouping of neonatal nursing task and the minimum frequency with which these tasks should be performed. Second, a simple categorisation of neonatal patients based on care needs was agreed. In addition, acceptable forms of task sharing with other cadres and the patient's family for the neonatal nursing tasks were agreed and described. The process was found to be acceptable to policy-makers and practitioners, who recognised the value of standards in neonatal nursing to improve the quality of neonatal inpatient care. Such standards could form the basis for audit and quality evaluation. abstract_id: PUBMED:38078586 6. Glycemic Goals and Hypoglycemia: Standards of Care in Diabetes-2024. The American Diabetes Association (ADA) "Standards of Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, an interprofessional expert committee, are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations and a full list of Professional Practice Committee members, please refer to Introduction and Methodology. Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC. abstract_id: PUBMED:36507650 9. Pharmacologic Approaches to Glycemic Treatment: Standards of Care in Diabetes-2023. The American Diabetes Association (ADA) "Standards of Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, a multidisciplinary expert committee, are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations and a full list of Professional Practice Committee members, please refer to Introduction and Methodology. Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC. abstract_id: PUBMED:38078590 9. Pharmacologic Approaches to Glycemic Treatment: Standards of Care in Diabetes-2024. The American Diabetes Association (ADA) "Standards of Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, an interprofessional expert committee, are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations and a full list of Professional Practice Committee members, please refer to Introduction and Methodology. Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC. abstract_id: PUBMED:38421427 Clinical care standards for the management of low back pain: a scoping review. The objective of this study is to compare and contrast the quality statements and quality indicators across clinical care standards for low back pain. Searches were performed in Medline, guideline databases, and Google searches to identify clinical care standards for the management of low back pain targeting a multidisciplinary audience. Two independent reviewers reviewed the search results and extracted relevant information from the clinical care standards. We compared the quality statements and indicators of the clinical care standards to identify the consistent messages and the discrepancies between them. Three national clinical care standards from Australia, Canada, and the United Kingdom were included. They provided from 6 to 8 quality statements and from 12 to 18 quality indicators. The three standards provide consistent recommendations in the quality statements related to imaging, and patient education/advice and self-management. In addition, the Canadian and Australian standards also provide consistent recommendations regarding comprehensive assessment, psychological support, and review and patient referral. However, the three clinical care standards differ in the statements related to psychological assessment, opioid analgesics, non-opioid analgesics, and non-pharmacological therapies. The three national clinical care standards provide consistent recommendations on imaging and patient education/advice, self-management of the condition, and two standards (Canadian and Australian) agree on recommendations regarding comprehensive assessment, psychological support, and review and patient referral. The standards differ in the quality statements related to psychological assessment, opioid prescription, non-opioid analgesics, and non-pharmacological therapies. Answer: The literature suggests that diabetes is a significant and growing concern in the inpatient population, with patients often admitted for treatment of end-stage complications of diabetes or other illnesses complicated by diabetes. This condition increases the duration of hospitalization and contributes to a disproportionate share of healthcare costs (PUBMED:15251634). Despite the presence of guidelines for outpatient diabetes management, there has been a lack of clearly defined standards and targets for inpatient diabetes management. However, the American Diabetes Association (ADA) has been actively updating its "Standards of Medical Care in Diabetes" annually, which includes clinical practice recommendations for diabetes care, general treatment goals, and guidelines (PUBMED:31862749, PUBMED:36507646, PUBMED:34964868, PUBMED:31862752, PUBMED:38078586, PUBMED:36507650, PUBMED:38078590). These standards are intended to provide components of diabetes care and tools to evaluate the quality of care, with the ADA Professional Practice Committee responsible for their regular updates. Given the evidence suggesting that enhanced glycemic control can decrease morbidity and mortality in patients with hyperglycemia, the creation of definite standards and targets for inpatient management of hyperglycemia and metabolic control is important for improving outcomes in hospitalized patients with diabetes (PUBMED:15251634). The ADA's ongoing efforts to provide updated clinical practice recommendations reflect a move towards establishing such standards and targets. Therefore, it can be concluded that the field is indeed moving towards definite standards and targets for inpatient glycemic management and metabolic control, as evidenced by the ADA's continuous updates to its Standards of Care in Diabetes.
Instruction: Is metronidazole teratogenic? Abstracts: abstract_id: PUBMED:36424779 Medicinal Plants as Natural Anti-Parasitic Agents Against Blastocystis Species. Background: Blastocystis species (sp.) are enteric parasites that live in both humans' and animals' gastrointestinal tracts. Blastocystis hominis (B. hominis) is the recognizable human isolates in clinical and diagnostic specimens. Human infection occurs via the oro-fecal route, particularly in developing areas due to the lack of sanitation and hygienic facilities. B. hominis can exist in the large intestine for weeks to years until treated appropriately. Metronidazole is the drug of choice for the treatment of Blastocystis infection. However, it induces intolerable side effects and has been shown to have teratogenic and carcinogenic potential. Several medicinal plant extracts have been experimentally tested against Blastocystis infection in comparison to currently available treatments. Objective: Based on in vitro and in vivo studies, this article reviewed anti-Blastocystis activity of some medicinal plants. Methods: To conduct the research for this review, Google Scholar and PubMed were the primary search engines used to find relevant literature. A total of 19 published in vitro and in vivo studies were evaluated to identify the anti-Blastocystis effects of various medicinal plants. Results: Multiplication of Blastocystis parasites as well as nucleic acids and protein synthesis, all be inhibited by extracts from different medicinal plants. These natural agents have been shown to be both safe and effective when compared to the existing treatment options. Conclusion: Different medicinal plants can combat Blastocystis infection and could be a good substitute for metronidazole and other synthetic treatments. abstract_id: PUBMED:5986672 Treatment of trichomonas vaginitis in the pregnant woman with Metronidazole (8 823 RP). Absence of teratogenic effects N/A abstract_id: PUBMED:27981254 Ginger (Zingiber officinale) induces apoptosis in Trichomonas vaginalis in vitro. Background: Trichomoniasis is the most common sexually transmitted protozoan diseases in the worldwide. Metronidazole is the choice drug for trichomoniasis treatment, however, metronidazole resistant Trichomonas vaginalis (T.vaginalis) has been reported. Natural products are the source of most new drugs, and Zingiber officinale (Ginger) is widely used ingredient in the traditional medicine. Objective: The aim of the present study was to determine the effect of different concentrations of the ginger ethanol extract on the growth of T.vaginalis trophozoites in vitro. Materials And Methods: In this experimental study, 970 women who were attend in Kashan health centers were examined for T. vaginalis. Of them, 23 samples were infected with T.vaginalis. Three T. vaginalis isolates were cultured in a TYI-S-33 medium. The effect of ginger ethanol extracts and its toxicity in different concentrations (25, 50, 100, 200, 400, 800 µg/ml) on mouse macrophages were measured in triplicate exam by MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay. The effect of ginger on apoptosis induction was determined by Flow cytometry. Results: The IC50 of ginger and metronidazole were 93.8 and 0.0326 µg/ml, respectively. 12, 24 and 48 hr after adding different concentrations of extract on mouse macrophages, fatality rates in maximum dose (800 µg/ml) were 0.19, 0.26 and 0.31 respectively. Flow cytometry results showed the apoptosis rate following treatment with different concentrations of the extract after 48 hr were 17, 28.5, 42.1, 58.8, 76.3 and 100% respectively, while in the control group was 2.9%. Conclusion: Ginger ethanol extract induces programmed death in T. vaginalis. It is recommended that due to the known teratogenic effect of metronidazole, ginger can be considered as an alternative drug for metronidazole. abstract_id: PUBMED:9278206 Is metronidazole teratogenic? A meta-analysis. Aim: In order to assess whether the use of metronidazole during pregnancy is associated with a higher risk of congenital malformations, a meta-analysis was conducted. Methods: All epidemiological studies (cohort and case-control) which estimate risk of congenital malformations after exposure to metronidazole during early pregnancy were included in the meta-analysis. To obtain a summary odds ratio, the Mantel-Haenszel method was used. A test to verify absence of heterogeneity was also performed. Results: One unpublished case-control and four published cohort studies fulfilled the inclusion criteria and were not statistically heterogeneous. A summary odds ratio was calculated for metronidazole exposure during the first trimester: OR = 1.08, 95% CI: 0.90-1.29, heterogeneity test chi2 = 4.72, P = 0.32. Conclusions: This meta-analysis did not find any relationship between metronidazole exposure during the first trimester of pregnancy and birth defects. abstract_id: PUBMED:3771623 Carcinogenicity, perinatal carcinogenicity and teratogenicity of low dose metronidazole (MNZ) in Swiss mice. Metronidazole (MNZ), a widely used therapeutic drug, was administered to male and female Swiss mice intragastrically at a dose of 2 mg MNZ/mouse per day, 5 days a week, every alternate week, throughout their life span to test its carcinogenicity. The treatment induced a significant increase in the overall incidence of tumors in female mice but not in male mice. At the same dose, no teratogenic effect was observed. Perinatal carcinogenicity was studied by following up animals till the end of the F2 generation. Though different groups of animals were considered, a significant increase in tumor incidence was observed only in F1 mice which received MNZ treatment during gestation as compared to the corresponding control mice which received distilled water. The F1 mice which received MNZ during gestation, lactation, and subsequently in adulthood, and the F2 progeny had a tumor incidence comparable to that observed in control mice. abstract_id: PUBMED:9759210 Bacteroides fragilis meningitis revealing a meningorectal fistula Unlabelled: Cases of meningitis due to Bacteroides fragilis are rare; we report a case revealing a meningorectal fistula. Case Report: A 2-month-old infant developed a severe sepsis syndrome following a rectosigmoidoscopy for rectal bleeding. Lumbar puncture diagnosed bacterial meningitis. Cerebrospinal fluid (CSF) culture evidenced B fragilis with betalactamase. The initial antibiotherapy was changed for imipenem-metronidazole, which is at present the recommended antibiotherapy. Malformation including pre-spinal tumor and meningorectal fistula was evoked on magnetic resonance imaging (MRI) and confirmed by surgery. The outcome was favorable after surgery and antibiotherapy. Conclusion: B fragilis meningitis are usually associated with sepsis, whose origin is obvious. In our case, meningitis was isolated, revealing a meningorectal fistula. abstract_id: PUBMED:23282454 Safety of drug therapy for inflammatory bowel disease in pregnant and nursing women. : Drug therapy is justified in pregnant patients with active inflammatory bowel disease. Selection of medical treatment depends on disease severity and the potential for fetal toxicity. Preferably, pregnancy should be planned to coincide with periods of disease quiescence, so that drug requirements can be minimized. Sulphasalazine and prednisolone are clearly safe in pregnancy and lactation. Preliminary studies suggest that lowto-moderate-dose mesalazine is well tolerated in pregnant and nursing mothers. Immunosuppressive therapy during pregnancy in transplant and nontransplant recipients may be associated with an increased risk of fetal growth retardation and prematurity. The risk of congenital malformations from azathioprine and cyclosporin is not markedly increased, although exposure to methotrexate during the first trimester may cause fetal loss and characteristic anomalies. Short-term therapy with metronidazole in the first trimester is not associated with an increased risk of teratogenicity, although the safety of this drug in pregnancy as primary therapy for Crohn's disease using higher doses for prolonged periods has not been confirmed. abstract_id: PUBMED:1082689 Therapeutic results achieved through the advantage of a combination of oral metronidazole and vaginal natamycin tablets in urogenital protozoan, trichobacterial and blastomyces infections Vaginal inflammations caused by Trichomonas, Leptothrix vaginalis or Candida and other less common organisms respectively, were treated very successfully with a combination of metronidazole orally and a natamycin preparation vaginally (Pimafucin vaginal tablets). In women with urogenital Trichomoniasis a cure was obtained in 96,1% of the cases (microbiologically controlled); Candida mycoses of the vagina were cured clinically in 89% of the cases after the first course of treatment and mycologically in 79% of the cases. Of special importance is the rapid and successful elimination of the mixed infection before birth, because this diminishes perinatal morbidity of the fetus. The combination treatment of bilateral, constantly interchanging Candida and Trichomonas infections of the vagina is of importance to both sexes, especially with respect to the elimination of sterility. From the onco-gynecological point of view the combination treatment with metronidazole and natamycin appeared to be of value as it limits the incidence of dyskariosis of the epithelial cells, which occur frequently in cases of inflammation of the vagina or cervix uteri. No teratogenic effect was observed with this combination when applied during pregnancy. abstract_id: PUBMED:12318707 Contraceptive discovery: mandelic acid can be used as a spermicide. Sperms can be quickly and effectively killed by the mandelic acid which is extracted from peach leaves or of the synthetic kind, according to a research project undertaken by Prof. Guo Renyu and Prof. Zhang Minru of Xi'an Medical University. This discovery has been recognized by the Ministry of Public Health. The findings of the research jointly undertaken by the Embryological and Parasitic teaching and Research Sections of Xi'an Medical University indicate that the mandelic acid is effective in killing both sperms and trichomoniasis. It is as effective in killing both sperms and trichomoniasis. It is as effective as nonoxynol-9 in devitalizing sperms and can also achieve the same clinical effect of contraception. It is 2 to 5 times as effective as metronidazole in curing trichomoniasis vaginitis. The research findings show that the mandelic acid will neither incur side-effects of mutation and malformation of fetus nor ill effects on livers, kidneys, and other viscera. It will not disturb the normal activities of bacteria in the vagina and little is to be absorbed by the body. Moreover, it will not provoke any stimulation to mucosa of the vagina. abstract_id: PUBMED:6884686 Teratogenic effect of metronidazole on the prosencephalon N/A Answer: The question of whether metronidazole is teratogenic has been addressed in several studies with varying conclusions. A meta-analysis conducted to assess the risk of congenital malformations after exposure to metronidazole during early pregnancy did not find any relationship between metronidazole exposure during the first trimester and birth defects (OR = 1.08, 95% CI: 0.90-1.29) (PUBMED:9278206). This suggests that metronidazole is not teratogenic based on the data analyzed in this study. Another study on the carcinogenicity, perinatal carcinogenicity, and teratogenicity of low dose metronidazole in Swiss mice found no teratogenic effect at the dose tested (PUBMED:3771623). Additionally, a study on drug therapy for inflammatory bowel disease in pregnant and nursing women indicated that short-term therapy with metronidazole in the first trimester is not associated with an increased risk of teratogenicity (PUBMED:23282454). However, there is also a reference to the known teratogenic effect of metronidazole in a study discussing the potential of ginger as an alternative treatment for Trichomonas vaginalis, suggesting that there may be some evidence or belief in the medical community regarding the teratogenic risks of metronidazole (PUBMED:27981254). In conclusion, the majority of the evidence provided, including a meta-analysis, does not support the claim that metronidazole is teratogenic, at least not when used in short-term therapy during the first trimester of pregnancy. However, caution may still be advised, and alternative treatments are being considered in some contexts due to concerns about potential side effects and resistance issues with metronidazole.
Instruction: Treatment of alpha(0)-thalassemia (--(SEA)/--(SEA)) via serial fetal and post-natal transfusions: Can early fetal intervention improve outcomes? Abstracts: abstract_id: PUBMED:25116001 Treatment of alpha(0)-thalassemia (--(SEA)/--(SEA)) via serial fetal and post-natal transfusions: Can early fetal intervention improve outcomes? Objective And Importance: Homozygous Southeast Asian alpha-thalassemia mutation (--(SEA)/--(SEA)) results in deletion of all alpha-globin genes (alpha(0)-thalassemia). Since all alpha-globin chains are absent, hemoglobin F cannot be synthesized, and hemoglobin Bart's becomes the dominant fetal hemoglobin. Hemoglobin Bart's is a γ tetramer with a very high oxygen affinity, thus oxygen delivery to the tissues is poor. Clinical manifestations include severe fetal anemia, hydrops fetalis, fetal demise, and high risk of neurodevelopmental impairment in the rare survivors. Clinical Presentation: A 39-year-old Vietnamese woman presented to our center at 28 0/7 weeks' gestation with fetal alpha(0)-thalassemia (--(SEA)/--(SEA) type deletion) and ultrasound markers suggestive of severe fetal anemia. Intervention: The fetus was treated with four intrauterine transfusions followed by post-natal chronic transfusions. Formal neurodevelopmental testing (Battelle Developmental Inventory, Second Edition) was performed at 18 months of age, and the developmental quotient was 93 (32nd percentile) with all subdomains noted within normal limits, indicating overall intact neurodevelopment. Conclusion: We posit that earlier diagnosis and fetal treatment, prior to clinical findings suggestive of fetal anemia, may improve long-term outcomes by enhancing oxygen delivery to the tissues of the developing fetus. abstract_id: PUBMED:11564083 The role of heterocellular hereditary persistence of fetal haemoglobin in beta(0)-thalassaemia intermedia. Beta(0)-thalassaemia intermedia (beta(0)-TI) describes patients who lack beta-globin synthesis yet manifest a non-transfusion-dependent form of beta-thalassaemia. Co-inheritance of alpha-thalassaemia, certain variants of the beta-like globin gene cluster and elevated fetal haemoglobin (HbF) production are all associated with beta(0)-TI. However, the mild phenotypes of many beta(0)-TI patients are unexplained. Genetically determined HbF levels in beta-thalassaemia are difficult to assess because erythrocytes containing HbF (F cells) preferentially survive over erythrocytes without HbF. To evaluate the importance of genetically elevated HbF in beta-thalassaemia, F-cell levels of 19 TI patients' relatives were compared with relatives of transfusion-dependent beta-thalassaemia major patients and those of beta-globin genotype-matched controls. The beta-globin and alpha-globin genotypes, as well as their Ggamma promoter were also examined. Using this approach, in all but one patient the mild phenotype was attributable to either alpha-globin genotype, gamma-globin promoter polymorphism or inherited elevated F-cell levels. The findings of this study establish the F-cell levels required to modify the degree of disease severity significantly and demonstrate that F-cell level is a crucial parameter in the understanding of phenotypic variation in beta-thalassaemia. abstract_id: PUBMED:16601468 Fetal anaemia as a cause of fetal injury: diagnosis and management. Purpose Of Review: This review provides up-to-date information on the diagnosis and management of fetal anaemia. Recent Findings: Exciting advances in the field of red blood cell isoimmunization are phasing out the need for invasive procedures. Rhesus blood group genotyping using fetal DNA in maternal plasma has been introduced into clinical practice with remarkable success. The role of middle cerebral artery peak systolic velocity in screening for fetal anaemia has been confirmed in various causes of fetal anaemia. A recent review of a successful Iran national screening programme for thalassaemia provided a timely and valuable educational opportunity. The value of intrauterine transfusion as a treatment for fetal anaemia was demonstrated by both high success rate and low procedure-related pregnancy loss rate. Summary: Fetal anaemia is one of the severe fetal conditions that affect the worldwide population. Rhesus isoimmunization remains an important health issue despite the recommendation for anti-D immunoglobulin prophylaxis and injection after sensitizing events. Noninvasive ultrasound diagnostic methods have replaced traditional invasive procedures in the assessment of fetus at risk of anaemia due to various causes, including red blood cell isoimmunization, parvovirus B19 infection and thalassaemia. The increased availability of intrauterine transfusion will help to improve the prognosis of these anaemic fetuses. abstract_id: PUBMED:16339672 Single and combination drug therapy for fetal hemoglobin augmentation in hemoglobin E-beta 0-thalassemia: Considerations for treatment. Patients with hemoglobin E (Hb E)-beta 0-thalassemia, one of the most common hemoglobinopathies worldwide, could benefit from drugs that increase fetal and total hemoglobin levels and thereby decrease the need for transfusions. The long-term clinical outcome of such therapy, its hematologic effects, and which patients are likely to benefit from treatment are unknown. Consequently, the use of such drugs for Hb E-beta 0-thalassemia is limited, and countries where resources for safe and regular transfusion are scarce cannot benefit from them. In a multicenter trial of 42 patients treated with hydroxyurea for two years, almost half the patients demonstrated a significant increase in steady-state hemoglobin level. Drug toxicity was minimal. Combined treatment of hydroxyurea with erythropoietin benefited selected patients, but the addition of sodium phenyl butyrate was ineffective. After 5 years of follow-up, a subset of patients remained off transfusions. Hydroxyurea should be considered for a subset of Hb E-beta 0-thalassemia patients. abstract_id: PUBMED:16304357 Fetal globin induction--can it cure beta thalassemia? The beta thalassemias are one of a few medical conditions in which reactivation of a gene product that is expressed during fetal life can functionally replace a deficiency of essential proteins expressed at a later developmental stage. The fetal globin genes are present and normally integrated in hematopoietic stem cells, and at least one fetal gene appears accessible for reactivation, particularly in beta degrees thalassemia. However, rapid cellular apoptosis from alpha globin chain precipitation, and relatively low levels of endogenous erythropoietin (EPO) in some beta(+) thalassemia patients contribute to the anemia in beta thalassemia syndromes. In clinical trials, three classes of therapeutics have demonstrated proof-of-principle of this approach by raising total hemoglobin levels by 1-4 g/dL above baseline in thalassemia patients: EPO preparations, short chain fatty acid derivatives (SCFADs), and chemotherapeutic agents. Although thalassemic erythrocytes survive only for a few days, the magnitude of these responses is similar to those induced by rhu-EPO in anemic conditions of normal erythrocyte survival. New oral therapeutic candidates, which stimulate both fetal globin gene expression and erythropoiesis, and combinations of therapeutics with complementary molecular actions now make this gene-reactivation approach feasible to produce transfusion independence in many patients. Development of the candidate therapeutics is hindered largely by costs of drug development for an orphan patient population. abstract_id: PUBMED:16225658 Fetal haemoglobin augmentation in E/beta(0) thalassaemia: clinical and haematological outcome. Patients with E/beta(0) thalassaemia, the most common haemoglobinopathy in many Asian countries, might benefit from drugs that increase fetal and total haemoglobin and thereby decrease the need for transfusions. The long-term clinical efficacy and safety of such therapy is unknown, limiting its use in countries where resources for safe and regular transfusion are scarce. In this study, 45 patients were treated with hydroxyurea (18-20 mg/kg) for 24+/-9 months, hydroxyurea with sodium phenyl butyrate (n=8) and hydroxyurea with erythropoietin (n=9), each for approximately 6 months, and followed for 3 years from study exit. Hydroxyurea had minimal toxicity, resulted in a mean 1.3 g/dl steady-state increase in haemoglobin in 40% of patients, and a milder response (&lt;OR=1 g/dl) in the others. Baseline haemoglobin F was significantly associated with an increase in haemoglobin (P&lt;0.001). Combined treatment with erythropoietin benefited selected patients, but the addition of sodium phenyl butyrate had no effect. Of the 27/45 patients who discontinued regular transfusions before the study, 13 remained transfusion independent during long-term follow-up, 6/13 continued hydroxyurea. Hydroxyurea moderately increased steady-state haemoglobin in a sub-group of E/beta(0) thalassaemia patients and can be considered for patients with intermediate severity disease, thus delaying or avoiding the need for life-long transfusions. Continuous monitoring of toxicity and growth is required. abstract_id: PUBMED:6700458 Antenatal intrauterine diagnosis of fetal thalassaemia. The results are reported of the intrauterine diagnosis of fetal thalassaemia in 72 pregnant women investigated during the six years from 1977 to 1982. In the majority of cases fetal blood was collected by fetoscopy for globin chain synthesis at 18 to 19 weeks' gestation. Adequate fetal blood samples for the diagnosis of the thalassaemia status of the fetus were obtained in 85% of cases. The technical difficulties associated with fetoscopy and fetal blood collection are discussed, and the maternal and fetal complications of the procedure described. abstract_id: PUBMED:16339673 Induction of fetal globin in beta-thalassemia: Cellular obstacles and molecular progress. Accelerated apoptosis of erythroid progenitors in beta-thalassemia is a significant barrier to definitive therapy because the beneficial effects of fetal globin-inducing agents on globin chain balance may not be inducible in cells in which programmed cell death is established early. Accordingly, our objectives have been to identify methods to decrease cellular apoptosis and to identify orally tolerable fetal globin gene inducers. A pilot clinical trial was conducted to determine whether combined use of a fetal globin gene inducer (butyrate) and rhu-erythropoietin (EPO), the hematopoietic growth factor that prolongs erythroid cell survival and stimulates erythroid proliferation, would produce additive hematologic responses in any thalassemia subjects. Butyrate and EPO were administered in 10 patients. Novel fetal globin gene inducers that also stimulate erythroid proliferation were evaluated for pharmacokinetic profiles. Patients with beta+-thalassemia had relatively low levels of endogenous EPO (&lt;145 mU/mL) and had additive responses to administered EPO and butyrate. Patients with at least one beta 0-globin mutation had higher baseline HbF levels (&gt;60%) and EPO levels (&gt;160 mU/mL), and three-fourths of these subjects responded to the fetal globin gene inducer alone. A few select fetal globin-inducing short-chain fatty acid derivatives that stimulated cell proliferation also had favorable pharmacokinetics. These studies identify a significant subset of thalassemia patients who appear to require exogenous EPO to respond optimally to any HbF inducer, as well as new therapeutic candidates that act on both cellular and molecular pathologies of beta-thalassemia. Both approaches now offer excellent potential for tolerable, definitive treatment of beta-thalassemia. abstract_id: PUBMED:29240037 Fetal Anemia Causing Hydrops Fetalis From an Alpha-Globin Variant: Homozygous Hemoglobin Constant Spring. Background: Fetal anemia is often assumed to be due to red cell alloimmunization and Parvovirus infection, and can lead to hydrops fetalis and death in utero. Other causes, such as mutations of hemoglobin alpha, are less commonly considered. Methods: We report 7 cases with fetal anemia causing hydrops fetalis. Serial Doppler ultrasound for measurement peak systolic velocity (PSV) of middle cerebral artery (MCA) was used for evaluation of fetal anemia. Fetal anemia is suggested if the MCA/PSV ratio is &gt;1.5 multiple of median. Cordocentesis was performed subsequently to find the cause of fetal anemia and check fetal hemoglobin for consideration of intrauterine infusion. Investigations for fetal anemia include complete blood count, blood morphology, and blood group of mother and fetus, reticulocyte counts, red cell indices, screening for thalassemia, hemoglobin typing, acid elution test, parvovirus B 19 serology, and TORCH titer (toxoplasmosis, rubella, cytomegalovirus, herpes simplex virus, human immunodeficiency virus, and syphilis). Intrauterine infusion, using irradiated prestorage filtered red cell with hematocrit level of 80%, is indicated if fetal hemoglobin is &lt;10 g/dL. Result: Seven cases with fetal anemia were prenatally diagnosed from gestational ages 20 to 34 weeks. Initial hematocrit in these cases varied from 9% to 17.2%. In each case, causes of anemia were determined using the investigations listed above. All cases underwent uneventfully up to 3 intrauterine transfusions. DNA study for thalassemia demonstrated homozygous Constant Spring (CS) in 5 cases, homozygous CS with heterozygous E in 1 case, and compound heterozygous CS and Pakse in 1 case. The perinatal outcomes were normal term in 5 cases, preterm in 2 cases. Low birth weight was determined in 2 cases. The screening for thalassemia major, including the osmotic fragility and dichlorophenol indophenol precipitation test (DCIP), is not helpful for detecting hemoglobin variants such as Constant Spring or Pakse. Summary: This study emphasizes homozygous Constant Spring and compound heterozygous CS and Pakse as a cause of hydrops fetalis. Proper management for the fetus after diagnosis can lead to a good fetal outcome. Prevention control programs should include screening of parents for the heterozygous state. abstract_id: PUBMED:29984090 Prenatal detection of thalassemia by cell-free fetal DNA (cffDNA) in maternal plasma using surface enhanced Raman spectroscopy combined with PCR. Thalassemias are widely occurring genetic hemoglobin disorders; patients with severe thalassemia often require regular blood transfusions for survival. Prenatal detection of thalassemia is currently invasive and carries the risk of miscarriage and infection. A polymerase chain reaction (PCR)-based surface enhanced Raman spectroscopy (SERS) technique was investigated in this paper for the purpose of detecting prenatal α-thalassemia Southeast Asian (SEA) type deletion using maternal plasma. Couples with the same SEA thalassemia (-SEA/αα) were selected, and the quantification of SEA and wild type (WT) alleles in the maternal plasma sample predicted the fetal genotype. PCR was performed using two pairs of fluorescence tag-labeled primers to produce tag-labeled PCR products for both the SEA (labeled with R6G) and WT (labeled with Cy3) alleles. Then, the labeled PCR products containing the two fluorescence tags were measured by SERS. The ratios between the R6G and Cy3 tags were obtained using multiple linear regressions (MLR), and these ratios corresponded with the physical ratio of WT and SEA concentrations in maternal plasma. After verifying this technique on DNA mixtures with known SEA and WT ratios, the plasma from 24 pregnant women was screened. An accuracy of 91.7% was achieved for detecting the fetal genotypes of Hb Bart's, alpha-trait, and normal trait. The results indicated that the simple PCR-SERS method may be sensitive enough for use on cell free fetal DNA (cffDNA) in maternal plasma for non-invasive prenatal detection (NIPD). Answer: The treatment of alpha(0)-thalassemia (--(SEA)/--(SEA)) through serial fetal and post-natal transfusions raises the question of whether early fetal intervention can improve outcomes. In a case study, a fetus diagnosed with alpha(0)-thalassemia received four intrauterine transfusions followed by post-natal chronic transfusions. The early intervention, prior to clinical findings suggestive of fetal anemia, was associated with overall intact neurodevelopment at 18 months of age, as indicated by a developmental quotient within normal limits (PUBMED:25116001). This suggests that earlier diagnosis and treatment may enhance oxygen delivery to the developing fetus's tissues, potentially improving long-term outcomes. The importance of fetal hemoglobin (HbF) in ameliorating the severity of related thalassemia conditions, such as beta(0)-thalassemia intermedia (beta(0)-TI), has been established. Elevated HbF production, due to heterocellular hereditary persistence, can modify the degree of disease severity (PUBMED:11564083). This indicates that interventions aimed at increasing HbF levels could be beneficial in the management of thalassemia syndromes. Advances in the field of fetal anemia diagnosis, such as the use of noninvasive ultrasound methods and the increased availability of intrauterine transfusion, have improved the prognosis of anemic fetuses (PUBMED:16601468). These advances support the potential benefits of early fetal interventions. In summary, while the case study (PUBMED:25116001) provides evidence that early fetal intervention via intrauterine transfusions can lead to positive neurodevelopmental outcomes, the broader literature on thalassemia and fetal anemia management suggests that such interventions, particularly when aimed at increasing HbF levels and improving oxygen delivery, could indeed improve outcomes for fetuses affected by alpha(0)-thalassemia (--(SEA)/--(SEA)).
Instruction: Comparison on Endoscopic Hemoclip and Hemoclip Combination Therapy in Non-variceal Upper Gastrointestinal Bleeding Patients Based on Clinical Practice Data: Is There Difference between Prospective Cohort Study and Randomized Study? Abstracts: abstract_id: PUBMED:26289241 Comparison on Endoscopic Hemoclip and Hemoclip Combination Therapy in Non-variceal Upper Gastrointestinal Bleeding Patients Based on Clinical Practice Data: Is There Difference between Prospective Cohort Study and Randomized Study? Background/aims: Endoscopic hemoclip application is an effective and safe method of endoscopic hemostasis. We conducted a multicenter retrospective study on hemoclip and hemoclip combination therapy based on prospective cohort database in terms of hemostatic efficacy not in clinical trial but in real clinical practice. Methods: Data on endoscopic hemostasis for non-variceal upper gastrointestinal bleeding (NVUGIB) were prospectively collected from February 2011 to December 2013. Among 1,584 patients with NVUGIB, 186 patients treated with hemoclip were enrolled in this study. Subjects were divided into three groups: Group 1 (n = 62), hemoclipping only; group 2 (n = 88), hemoclipping plus epinephrine injection; and group 3 (n = 36), hemocliping and epinephrine injection plus other endoscopic hemostatic modalities. Primary outcomes included rebleeding, other therapeutic management, hospitalization period, fasting period and mortality. Secondary outcomes were bleeding associated mortality and overall mortality. Results: Active bleeding and peptic ulcer bleeding were more common in group 3 than in group 1 and in group 2 (p &lt;&#156; 0.001). However, primary outcomes (rebleeding, other management, morbidity, hospitalization period, fasting period and mortality) and secondary outcomes (bleeding associated mortality and total mortality) were not different among groups. Conclusions: Combination therapy of epinephrine injection and other modalities with hemoclips did not show advantage over hemoclipping alone in this prospective cohort study. However, there is a tendency to perform combination therapy in active bleeding which resulted in equivalent hemostatic success rate, and this reflects the role of combination therapy in clinical practice. abstract_id: PUBMED:12612510 A prospective, randomized trial of endoscopic hemoclip placement and distilled water injection for treatment of high-risk bleeding ulcers. Background: Although endoscopic hemoclip therapy is widely used in the treatment of GI bleeding, there are few prospective trials that assess its efficacy. This study evaluated the efficacy and safety of hemoclip placement and distilled water injection for the treatment of high-risk bleeding ulcers. Methods: Seventy-nine patients with major stigmata of ulcer hemorrhage were randomly assigned to either endoscopic hemoclip placement (n = 39) or injection with distilled water (n = 40). Results: Initial hemostasis was achieved in all patients treated with hemoclips and 39 treated by distilled water injection (respectively, 100.0% vs. 97.5%; p = 1.00). Bleeding recurred in 4 and 11 of patients, respectively, in the hemoclip and water injection groups. It occurred significantly more frequently in the injection group (hemoclip, 10.3%; injection, 28.2%; p = 0.04). No major procedure-related complication occurred in either group. Emergency operations were performed in 5.1% of patients treated with hemoclips versus 12.5% of those in the water injection group (p = 0.43). Hospital days and mortality rate were similar in both groups. Conclusion: Endoscopic hemoclip placement is a safe and effective hemostatic method that is superior to distilled water injection for treatment of bleeding peptic ulcer. abstract_id: PUBMED:16340629 Factors associated with failure of initial endoscopic hemoclip hemostasis for upper gastrointestinal bleeding. Background: Endoscopic hemoclip is widely used for the management of bleeding peptic ulcers. The major difficulty in clinical application of the hemoclip is deployment to the lesion during initial hemostasis. The aim of this study was to define factors associated with the failure of endoscopic hemoclip for initial hemostasis of upper GI bleeding. Patients And Methods: From January to December 2003, we prospectively studied 77 randomized patients with clinical evidence of upper GI bleeding due to either active bleeding or a visible vessel identified by upper GI endoscopy in our emergency department. Results: Among the 77 patients, 13 (16.9%) failed treatment (Group 1) and 64 (83.1%) were successfully (Group 2) treated by endoscopic hemoclip for lesions related to upper GI bleeding. There were no differences due to gender, blood pressure, initial heart rate, and hemoglobulin before or after endoscopic treatment, platelet count, serum creatinine, and albumin between groups. The mean age of Group 1 was higher than that of Group 2 (73.31+/-9.38 years vs. 65.41+/-16.45 years, respectively; P=0.083). Most patients who did not achieve initial hemostasis by endoscopic hemoclip had upper GI lesions over the gastric antrum and duodenal bulb. Among the 13 patients who failed to achieve endoscopic hemoclip initial hemostasis, four lesions were located over the posterior wall of the antrum, and four lesions over the lesser curvature side of the duodenal bulb. Conclusion: Endoscopic hemoclip is an effective hemostatic method for upper GI bleeding. Age, gastric antrum, and duodenal bulb lesions may be associated with the failure of initial hemostasis by endoscopic hemoclip. abstract_id: PUBMED:24719941 Comparison the efficacy of intermediate dose argon plasma coagulation versus hemoclip for upper gastrointestinal non-variceal bleeding. Background/aims: Endoscopic argon plasma coagulation (APC) and hemoclip were used for the treatment of bleeding peptic ulcers. There are wide ranges of hemostatic doses (power and flow) of APC used in previous studies. The aim of our study was to assess the efficacy and safety of "intermediate dose" APC compared to hemoclips for hemostasis from bleeding peptic ulcer. Methodology: The present study was designed as a retrospective study using historical controls. One hundred and ninety-four consecutive upper GI bleeding patients with bleeding visible vessel lesions were treated with either APC or hemoclips. There are 110 patients received APC treatment and 84 patients received hemoclip hemostasis. The main outcome measurements were one week rebleeding rate, one month rebleeding rate, surgery, morality, amount of blood transfusion and durations of hospital stay. Results: There were no significant differences between the two groups in 1 week rebleeding rate (1.8% vs. 2.4%, p = 1.0), 1 month rebleeding rate (0% vs. 1.2%, p = 0.433), mortality, surgery and amount of blood transfusion (2.67 +/- 3.27 vs. 3.04 +/- 2.75 units, p = 0.322). However, the hospital stay was longer in hemoclip group (5.38 +/- 6.76 vs. 8.49 +/- 11.19 days p = 0.011). Conclusions: APC and hemoclip are with different hemostatic mechanisms, but the hemostatic outcomes were not significantly different between the two groups. APC is an effective, safe, and easily applicable endoscopic hemostatic modality as hemoclip for patients with non-variceal bleeding. abstract_id: PUBMED:26487199 Endoscopic hemostasis for peptic ulcer bleeding: systematic review and meta-analyses of randomized controlled trials. Background: Peptic ulcer represents the most common cause of upper gastrointestinal bleeding. Endoscopic therapy can reduce the risks of rebleeding, continued bleeding, need for surgery, and mortality. The objective of this review is to compare the different modalities of endoscopic therapy. Methods: Studies were identified by searching electronic databases MEDLINE, Embase, Cochrane, LILACS, DARE, and CINAHL. We selected randomized clinical trials that assessed contemporary endoscopic hemostatic techniques. The outcomes evaluated were: initial hemostasis, rebleeding rate, need for surgery, and mortality. The possibility of publication bias was evaluated by funnel plots. An additional analysis was made, including only the higher-quality trials. Results: Twenty-eight trials involving 2988 patients were evaluated. Injection therapy alone was inferior to injection therapy with hemoclip and with thermal coagulation when evaluating rebleeding and the need for emergency surgery. Hemoclip was superior to injection therapy in terms of rebleeding; there were no statistically significant differences between hemoclip alone and hemoclip with injection therapy. There was considerable heterogeneity in the comparisons between hemoclip and thermal coagulation. There were no statistically significant differences between thermal coagulation and injection therapy, though their combination was superior, in terms of rebleeding, to thermal coagulation alone. Conclusions: Injection therapy should not be used alone. Hemoclip is superior to injection therapy, and combining hemoclip with an injectate does not improve hemostatic efficacy above hemoclip alone. Thermal coagulation has similar efficacy as injection therapy; combining these appears to be superior to thermal coagulation alone. Therefore, we recommend the application of hemoclips or the combined use of injection therapy with thermal coagulation for the treatment of peptic ulcer bleeding. abstract_id: PUBMED:12709692 Short-term and long-term benefits of endoscopic hemoclip application for Dieulafoy's lesion in the upper GI tract. Background: Dieulafoy's lesion is an important cause of GI bleeding. The results of hemoclip application as the first treatment of choice for Dieulafoy's lesions have not been reported. This study prospectively examined the short- and long-term outcomes of hemoclip application for Dieulafoy's lesions. Methods: The study was conducted over a 7-year period beginning in January 1995 during which hemoclip application was used as the first-choice hemostatic treatment for Dieulafoy's lesions. Clinical data, endoscopic findings, and outcome of treatment were evaluated. Long-term outcome for patients was also investigated. Results: A Dieulafoy's lesion was diagnosed in 34 patients (27 men, 7 women; mean age, 54 years, range 21-81 years) after a mean of 1.2 endoscopies; 79.4% of the patients had active bleeding. Initial hemostasis was attained by hemoclip placement in 94.1%. The rate of recurrent bleeding was 9.3%. No patient required additional treatment such as surgery. The 30-day mortality rate was 2.9% (1 death, unrelated). For the remaining 33 patients, median follow-up was 53.8 months (range 19 to 90 months). Another Dieulafoy's lesion developed in one patient during follow-up, but in a different location compared with the index lesion. Conclusions: Endoscopic hemoclip application for Dieulafoy's lesions was effective and safe with short- and long-term benefits. abstract_id: PUBMED:30927124 Randomized controlled trial of hemostatic powder versus endoscopic clipping for non-variceal upper gastrointestinal bleeding. Background And Aims: Despite advances in pharmacological and endoscopic management of non-variceal upper gastrointestinal bleeding (NVUGIB), mortality is still relevant. TC-325 (Hemospray-Cook Medical) is a mineral powder with adsorptive properties, designed for endoscopic hemostasis. There are still no comparative trials studying this new hemostatic modality. The objective of this research was to compare the use of TC-325 (associated with epinephrine injection) with the combined technique of endoscopic clipping and epinephrine injection for the treatment of patients with NVUGIB. Methods: We conducted a pilot randomized controlled trial with patients that presented NVUGIB with an actively bleeding lesion at the endoscopic evaluation. Patients were randomized either to the Hemospray or Hemoclip group. The randomization list was generated by a computer program and remained unknown throughout the entire trial. All patients underwent second-look endoscopy. Results: Thirty-nine patients were enrolled. Peptic ulcer was the most frequent etiology. Primary hemostasis was achieved in all Hemospray cases and in 90% of Hemoclip group (p = 0.487). Five patients in Hemospray group underwent an additional hemostatic procedure during second-look endoscopy, while no patient in the Hemoclip group needed it (p = 0.04). Rebleeding, emergency surgery and mortality rates were similar in both groups. No toxicity, allergy events, or gastrointestinal obstruction signs were observed in Hemospray group. Conclusions: TC-325 presents similar hemostatic results when compared with conventional dual therapy for patients with NVUGIB. Hemospray's excellent primary hemostasis rate certifies it as a valuable tool in arduous situations of severe bleeding or difficult location site. abstract_id: PUBMED:12085039 Prospective evaluation of hemoclip application with injection of epinephrine in hypertonic saline solution for hemostasis in unstable patients with shock caused by upper GI bleeding. Background: It is difficult to arrest severe upper GI bleeding with any of the available hemostatic modalities in unstable patients who are in shock, and the rates of persistent bleeding and mortality in this group remain high. This prospective study evaluated hemoclip application alone and in combination with injection of hypertonic saline solution with epinephrine in this subgroup of patients with GI bleeding. Methods: Twenty-two patients in shock because of upper GI bleeding were enrolled and divided into 2 groups based on the response of systolic blood pressure to rapid infusion of 1000 mL of lactated Ringer's solution: an unstable shock group, in which systolic blood pressure did not stabilize at greater than 90 mm Hg, and a stable shock group, in which systolic blood pressure stabilized at greater than 90 mm Hg. Emergency endoscopy was performed in both groups; those in the stable group were treated by hemoclip application alone and those in the unstable group were treated by hemoclip application combined with injection of hypertonic saline solution with epinephrine. The following parameters were compared: vital signs on admission and after infusion of lactated Ringer's solution, hemoglobin concentration, endoscopic classification of type and site of bleeding, number of hemoclips required to arrest bleeding, volume of hypertonic saline solution with epinephrine injected, initial hemostatic rate, rate of recurrent bleeding, the need for additional preventive therapy (hemoclip application), and mortality. Results: The rate of initial hemostasis was 92% in the stable shock group and 100% in the unstable shock group. Bleeding did not recur in either group. The volume of packed red cells transfused and the endotracheal intubation rate were significantly greater in the unstable shock group. Preventive application of hemoclips was performed at endoscopic follow-up 12 times in 10 patients in the stable shock group and 9 times in 7 patients in the unstable shock group. There were no deaths in either group. Conclusion: Endoscopic injection of hypertonic saline solution with epinephrine combined with hemoclip application provides effective hemostasis in unstable patients in shock caused by severe upper GI bleeding. The hemostatic result is comparable with that achieved by hemoclip application alone in patients with bleeding but less severe shock. abstract_id: PUBMED:8491134 Endoscopic hemoclip treatment for gastrointestinal bleeding. We conducted an uncontrolled study to evaluate an improved metallic clip (Olympus hemoclip) for the endoscopic treatment of nonvariceal gastrointestinal bleeding. A total of 88 patients (mean age 63 +/- 14, 60 males) with bleeding from a wide range of sources were treated. Seventy-eight patients had active bleeding (spurting in 50, oozing in 28) and 10 patients had a nonbleeding visible vessel. Initial hemostasis was achieved in all patients with active bleeding. A total of 255 clips were placed (average of 2.9 clips per patient, range of 1-10 clips). Spurting arterial bleeders required more clips on average than oozing bleeders (3.2 versus 2.7); active bleeders required more clips than cases with nonbleeding visible vessels (3.0 versus 2.2). Mean follow-up was 397 +/- 148 days. Recurrent bleeding was observed in 5 patients, all of whom had active bleeding on initial presentation. Re-bleeding was successfully treated with hemoclips in 4 patients and one patient underwent surgery. Clips appeared to be retained well; early clip dislodgement resulted in rebleeding in only 1 patient. No complications resulted from this treatment. Clips did not impair healing of peptic ulcers. We conclude that endoscopic hemoclip placement is a highly effective and safe method for treating nonvariceal gastrointestinal bleeding and deserves comparative studies with other methods of endoscopic hemostasis. abstract_id: PUBMED:15742412 Epinephrine injection therapy versus a combination of epinephrine injection and endoscopic hemoclip in the treatment of bleeding ulcers. Aim: To assess the efficacy of hemoclip application in combination with epinephrine injection in the treatment of bleeding peptic ulcers and to compare the clinical outcomes between patients treated with such a combination therapy and those treated with epinephrine injection alone. Methods: A total of 293 patients (211 males, 82 females) underwent endoscopic therapy for bleeding peptic ulcers. Of these, 202 patients (152 males, 50 females) received epinephrine injection therapy while 91 patients (59 males, 32 females) received combination therapy. The choice of endoscopic therapy was made by the endoscopist. Hemostatic rates, rebleeding rates, need for emergency surgery and 30-d mortality were the outcome measures studied. Results: Patients who received combination therapy were significantly older (mean age 66+/-16 years, range 24-90 years) and more suffered from chronic renal failure compared to those who received epinephrine injection therapy alone (mean age 61+/-17 years, range 21-89 years). Failure to achieve permanent hemostasis was 4% in the group who received epinephrine injection alone and 11% in the group who received combination therapy. When the differences in age and renal function between the two treatment groups were taken into account by multivariate analysis, the rates of initial hemostasis, rebleeding rates, need for surgery and 30-d mortality for both treatment options were not significantly different. Conclusion: Combination therapy of epinephrine injection with endoscopic hemoclip application is an effective method of achieving hemostasis in bleeding peptic ulcer diseases. However, superiority of combination therapy over epinephrine injection alone, could not be demonstrated. Answer: The comparison between endoscopic hemoclip and hemoclip combination therapy in non-variceal upper gastrointestinal bleeding (NVUGIB) patients based on clinical practice data reveals some insights into their efficacy and usage in different study designs. In a multicenter retrospective study based on a prospective cohort database, it was found that combination therapy of epinephrine injection and other modalities with hemoclips did not show an advantage over hemoclipping alone in terms of primary outcomes such as rebleeding, other management, morbidity, hospitalization period, fasting period, and mortality, as well as secondary outcomes like bleeding-associated mortality and total mortality. However, there was a tendency to perform combination therapy in cases of active bleeding, which resulted in an equivalent hemostatic success rate, reflecting the role of combination therapy in clinical practice (PUBMED:26289241). On the other hand, a prospective, randomized trial comparing endoscopic hemoclip placement with distilled water injection for the treatment of high-risk bleeding ulcers found that hemoclip placement was superior to distilled water injection in preventing recurrent bleeding. Initial hemostasis was achieved in all patients treated with hemoclips, and the recurrence of bleeding was significantly lower in the hemoclip group compared to the water injection group. This study suggests that endoscopic hemoclip placement is a safe and effective hemostatic method (PUBMED:12612510). Another randomized controlled trial comparing hemostatic powder versus endoscopic clipping for NVUGIB found that TC-325 (Hemospray) presented similar hemostatic results when compared with conventional dual therapy (endoscopic clipping and epinephrine injection) for patients with NVUGIB. The primary hemostasis rate was excellent in the Hemospray group, certifying it as a valuable tool in severe bleeding or difficult location sites (PUBMED:30927124). In summary, while prospective cohort studies and randomized controlled trials may have different methodologies and levels of evidence, the findings suggest that endoscopic hemoclip therapy is effective for NVUGIB. The addition of combination therapy may be used in clinical practice, particularly in cases of active bleeding, but its superiority over hemoclip alone is not consistently demonstrated across different study designs.
Instruction: Substitutes or complements? Abstracts: abstract_id: PUBMED:27249324 Substitution and Complementarity of Alcohol and Cannabis: A Review of the Literature. Background: Whether alcohol and cannabis are used as substitutes or complements remains debated, and findings across various disciplines have not been synthesized to date. Objective: This article is a first step towards organizing the interdisciplinary literature on alcohol and cannabis substitution and complementarity. Method: Electronic searches were performed using PubMed and ISI Web of Knowledge. Behavioral studies of humans with "alcohol" (or "ethanol") and "cannabis" (or "marijuana") and "complement(*)" (or "substitut(*)") in the title or as a keyword were considered. Studies were organized according to sample characteristics (youth, general population, clinical and community-based). These groups were not set a priori, but were informed by the literature review process. Results: Of the 39 studies reviewed, 16 support substitution, ten support complementarity, 12 support neither and one supports both. Results from studies of youth suggest that youth may reduce alcohol in more liberal cannabis environments (substitute), but reduce cannabis in more stringent alcohol environments (complement). Results from the general population suggest that substitution of cannabis for alcohol may occur under more lenient cannabis policies, though cannabis-related laws may affect alcohol use differently across genders and racial groups. Conclusions: Alcohol and cannabis act as both substitutes and complements. Policies aimed at one substance may inadvertently affect consumption of other substances. Future studies should collect fine-grained longitudinal, prospective data from the general population and subgroups of interest, especially in locations likely to legalize cannabis. abstract_id: PUBMED:36117737 On polyhedral graphs and their complements. We find all polyhedral graphs such that their complements are still polyhedral. These turn out to be all self-complementary. abstract_id: PUBMED:38444737 Boundedness of Complements for Log Calabi-Yau Threefolds. In this paper, we study the theory of complements, introduced by Shokurov, for Calabi-Yau type varieties with the coefficient set [0, 1]. We show that there exists a finite set of positive integers N, such that if a threefold pair (X/Z∋z,B) has an R-complement which is klt over a neighborhood of z, then it has an n-complement for some n∈N. We also show the boundedness of complements for R-complementary surface pairs. abstract_id: PUBMED:37560092 Teaching and Incentives: Substitutes or Complements? Interventions to promote learning are often categorized into supply- and demand-side approaches. In a randomized experiment to promote learning about COVID-19 among Mozambican adults, we study the interaction between a supply and a demand intervention, respectively: teaching via targeted feedback, and providing financial incentives to learners. In theory, teaching and learner-incentives may be substitutes (crowding out one another) or complements (enhancing one another). Experts surveyed in advance predicted a high degree of substitutability between the two treatments. In contrast, we find substantially more complementarity than experts predicted. Combining teaching and incentive treatments raises COVID-19 knowledge test scores by 0.5 standard deviations, though the standalone teaching treatment is the most cost-effective. The complementarity between teaching and incentives persists in the longer run, over nine months post-treatment. abstract_id: PUBMED:31896343 The Relationship Between Hospital and Community Psychiatry: Complements, Not Substitutes? Community-based psychiatric services are essential to mental health. For decades, researchers, advocates, and policy makers have presumed that expanding the supply of these services hinges on reducing the supply of hospital-based care. Cross-national data from the World Health Organization call this presumption into question. Community and hospital psychiatry appear to be complements, not substitutes. abstract_id: PUBMED:11747052 Family health effects: complements or substitutes. Genetic endowments play a fundamental role in the production of health. At birth individuals have different capacities to be healthy, largely due to genetic dispositions. Whether or not individuals realize this health depends on their choice of health behaviours. Previous research has linked negative factors beyond the individual's control, which include genetic endowments, to both poor health and poor health behaviours. The health economics literature proposes that behaviours and genetic (or family health) endowments can be either substitutes or complements in the production of health. The goal of this paper is to investigate the behavioural consequences of changes in knowledge about one's genetic endowment. Using two waves of the National Health and Nutrition Examination Survey I Epidemiologic Followup Study, I find that for smokers, smoking intensity substitutes for newly diagnosed smoking-related family cancers, while smoking intensity is complementary to newly diagnosed non-smoking-related family cancers. I find no evidence for the hypothesized relationships with respect to alcohol consumption among drinkers. These results have implications for the growing field of genetic testing and test development. These results also reinforce current practices of ascertaining family health histories in the context of medical history taking. abstract_id: PUBMED:23162239 Skin substitutes: An Indian perspective. There have been numerous alternatives developed to replace skin. These can either be permanent substitutes or temporary substitutes, which need to be replaced later by autologous grafts. These have been tried in recent times as an attempt to reduce the need or in the case of permanent substitutes ,altogether replace autologous skin grafts. However till date no ideal skin substitute has been developed. Various factors have to be considered while choosing one of these substitutes. In a developing country like India awareness and availability of these skin substitutes is not adequate considering the volume of cases that require this modality of treatment. Also there are skin substitutes developed in our country that need to be highlighted. This article is an attempt to review the vast array of skin substitutes that have been developed and consider their utility and feasibility for developing countries. abstract_id: PUBMED:3042586 Inpatient and outpatient psychiatric services: substitutes or complements? Patients at risk for psychiatric hospitalization make only limited use of outpatient services, although clinical research has demonstrated that outpatient treatments and home care can be as effective as inpatient psychiatric treatment in treating certain mental health problems. The substitution of ambulatory services for inpatient care has in part been limited because insurers have restricted outpatient mental health benefits to control utilization. The authors critically review evidence from the economic and clinical literatures to determine the extent to which outpatient psychiatric treatment substitutes or complements inpatient treatment. To encourage clinically effective substitution of outpatient for inpatient services, further research is needed to identify the patient populations for whom substitution is possible and the treatment and environmental conditions in which it is most likely to be successful. abstract_id: PUBMED:18704540 Bone substitutes Bone substitutes are used to supplement or substitute autogenous transplantation of cancellous bone. These materials should provide a scaffold structure and support bone healing alone or in combination with other substances. In trauma surgery the indication for use of bone substitutes lies mostly in filling of small metaphyseal cancellous bone defects with high vascularization following fracture. In comparison to transplantation of cancellous bone, the advantages of bone substitutes are sufficient availability without additional donor site morbidity. Moreover, biomaterials can be stored, also ensuring ready availability. This educational article gives an overview of bone substitutes currently in clinical use. abstract_id: PUBMED:35291275 Plant-based meat and dairy substitutes on the Norwegian market: comparing macronutrient content in substitutes with equivalent meat and dairy products. Objective: To assess and compare the macronutrient and salt content in meat and dairy substitutes available on the Norwegian market. Design: Comparison between substitute products and two groups of meat and dairy products where one group represented the healthiest option (Keyhole) and one the most used option (Regular). Kruskal-Wallis test with pairwise comparison was conducted on categories with more than two groups, and Mann-Whitney U test was conducted on categories with two groups. Setting: Online stores in Norway. Hundred and two meat substitute products and 173 milk and dairy substitute products on sale spring and autumn 2020 were assessed; additionally, ninety-eight equivalent meat products and 105 milk and dairy products. Results: While Keyhole and Regular meat did not contain fibre, meat substitutes contained 3⋅5-5⋅0 g fibre per 100 g. The saturated fat content in meat substitutes was on average 1⋅9 times lower than in Keyhole products and 5⋅8 times lower than in Regular products. Milk and dairy substitutes contained 3⋅2 and 3⋅4 times less protein than Keyhole and Regular products, respectively. Conclusions: The study results indicate that meat and dairy substitutes on the Norwegian market vary in nutritional composition. Compared to Keyhole and Regular, substitutes contained lower levels of saturated fat, meat substitutes contained higher levels of fibre and milk and dairy substitutes less protein. Future studies should include content of micronutrients for a more comprehensive assessment. Answer: The concept of substitutes or complements is explored in various contexts across the provided abstracts. In the context of alcohol and cannabis use, the literature review suggests that these substances can act as both substitutes and complements, with the relationship influenced by factors such as age, policy environment, and demographic characteristics (PUBMED:27249324). In the field of education, teaching and financial incentives were found to be more complementary than substitutive in promoting learning about COVID-19 among Mozambican adults, contrary to expert predictions (PUBMED:37560092). In the realm of mental health services, community and hospital psychiatry appear to complement each other rather than substitute one another, challenging the assumption that expanding community services requires reducing hospital-based care (PUBMED:31896343). Similarly, inpatient and outpatient psychiatric services are discussed in terms of their potential to substitute or complement each other, with the need for further research to identify conditions for effective substitution (PUBMED:3042586). The relationship between genetic endowments and health behaviors is also examined, with findings suggesting that smoking intensity can act as a substitute for newly diagnosed smoking-related family cancers, but as a complement to non-smoking-related family cancers (PUBMED:11747052). In the medical field, skin substitutes are discussed in terms of their role as either temporary or permanent replacements for autologous skin grafts, with the need for awareness and availability in developing countries like India (PUBMED:23162239). Bone substitutes are used to fill bone defects and are considered advantageous due to their availability and lack of donor site morbidity (PUBMED:18704540). Lastly, plant-based meat and dairy substitutes are compared with their animal-based counterparts in terms of macronutrient content, with meat substitutes containing more fiber and less saturated fat, and dairy substitutes containing less protein than regular meat and dairy products (PUBMED:35291275). Overall, the concept of substitutes or complements is context-dependent, with various factors influencing the relationship between the items or interventions in question.
Instruction: Selecting patients for rehabilitation after acute stroke: are there variations in practice? Abstracts: abstract_id: PUBMED:37755349 Nursing Care for Stroke Patients: Current Practice and Future Needs. Background: Stroke is the second leading cause of death and the third leading cause of disability worldwide. Stroke nurses play an important role in the care of patients living with stroke by using best practices and adhering to stroke-management guidelines. This study aims at examining the current nursing practice for stroke patients in Saudi Arabia. Method: A cross-sectional descriptive design was used to collect data from nurses working in the stroke unit and intensive care unit between the period of February and June 2022 using electronic self-administered questionnaires. Results: A convenience sample of 131 nurses who provided care for stroke patients was enrolled. Significant differences in nursing practice were found between the stroke units and the intensive care units regarding the activation of the stroke code, X2 (4, N = 131) = 48.34, p &lt; 0.001; transferring stroke patients to a designated bed, X2 (4, N = 131) = 48.74, p = 0.002; applying the NIHSS, X2 (4, N = 131) = 70.11, p &lt; 0.001; using the modified Rankin scale, X2 (4, N = 131) = 61.24, p &lt; 0.001; providing intervention for neglect syndrome, X2 (4, N = 131) = 44.72, and hemianopsia, X2 (4, N = 131) = 39.22; screening for poststroke depression, X2 (4, N = 131) = 101.59, p &lt; 0.001; assessing for psychosocial needs, X2 (4, N = 131) = 74.44, p &lt; 0.001, and encouraging patients to express their feelings, X2 (4, N = 131) = 58.64, p &lt; 0.001; educating patients and families about stroke prevention, X2 (4, N = 131) = 40.51, p &lt; 0.001. Conclusion: As per the results of the study, there is an urgent need for stroke units run by specialized stroke nurses to provide early stroke management and improve survivors' outcomes. Structured stroke-care programs are needed to improve nursing practice and meet the international standard of stroke care. abstract_id: PUBMED:27374876 Selecting relevant and feasible measurement instruments for the revised Dutch clinical practice guideline for physical therapy in patients after stroke. Purpose: To select relevant and feasible instruments for the revision of the Dutch clinical practice guideline for physical therapy in patients with stroke. Methods: In this implementation study a comprehensive proposal for ICF categories and matching instruments was developed, based on reliability and validity. Relevant instruments were then selected in a consensus round by 11 knowledge brokers who were responsible for the implementation of the selected instruments. The feasibility of the selected instruments was tested by 36 physical therapists at different work settings within stroke services. Finally, instruments that were deemed relevant and feasible were included in the revised guideline. Results: A total of 28 instruments were recommended for inclusion in the revised guideline. Nineteen instruments were retained from the previous guideline. Ten new instruments were tested in clinical practice, seven of which were found feasible. Two more instruments were added after critical appraisal of the set of the measurement instruments. Conclusions: The revised guideline contains 28 relevant and feasible instrument selected and tested in clinical practice by physical therapists. Further education and implementation is needed to integrate instruments in clinical practice. Further research is proposed for developing and implementing a core set of measurement instruments to be used at fixed time points to establish data registries that allow for continuous improvement of rehabilitation for stroke patients. Implications for Rehabilitation The revised Dutch Stroke Physical Therapy Guideline recommends a total of 28 instruments, that are relevant and feasible for clinical practice of physical therapist in the different settings of stroke rehabilitation. The selection of instrument in daily practice should be part of the clinical reasoning process of PTs and be tailored to individual patients' needs and the degree of priority of the affected ICF category. Suggested education strategies for further integration of instruments in of the daily practice of PTs in Stroke Rehabilitation are: 'Training on the job' and 'peer assessment in clinical situations'. abstract_id: PUBMED:27628728 A Pathophysiological Model-Driven Communication for Dynamic Distributed Medical Best Practice Guidance Systems. There is a great divide between rural and urban areas, particularly in medical emergency care. Although medical best practice guidelines exist and are in hospital handbooks, they are often lengthy and difficult to apply clinically. The challenges are exaggerated for doctors in rural areas and emergency medical technicians (EMT) during patient transport. In this paper, we propose the concept of distributed executable medical best practice guidance systems to assist adherence to best practice from the time that a patient first presents at a rural hospital, through diagnosis and ambulance transfer to arrival and treatment at a regional tertiary hospital center. We codify complex medical knowledge in the form of simplified distributed executable disease automata, from the thin automata at rural hospitals to the rich automata in the regional center hospitals. However, a main challenge is how to efficiently and safely synchronize distributed best practice models as the communication among medical facilities, devices, and professionals generates a large number of messages. This complex problem of patient diagnosis and transport from rural to center facility is also fraught with many uncertainties and changes resulting in a high degree of dynamism. A critically ill patient's medical conditions can change abruptly in addition to changes in the wireless bandwidth during the ambulance transfer. Such dynamics have yet to be addressed in existing literature on telemedicine. To address this situation, we propose a pathophysiological model-driven message exchange communication architecture that ensures the real-time and dynamic requirements of synchronization among distributed emergency best practice models are met in a reliable and safe manner. Taking the signs, symptoms, and progress of stroke patients transported across a geographically distributed healthcare network as the motivating use case, we implement our communication system and apply it to our developed best practice automata using laboratory simulations. Our proof-of-concept experiments shows there is potential for the use of our system in a wide variety of domains. abstract_id: PUBMED:21154249 Scope of practice of an Advanced Practice Nurse at the Swiss Epilepsy Centre Epilepsy is, after stroke, the second most prevalent neurological disease. The disease has a manifold etiology and symptoms and hence treatment options. From the patients who seek treatment in epilepsy centres, some suffer from psychogenic, non-epileptic seizures. Patient with epileptic seizures have to integrate a medication regimen into their lives, accept symptoms and change their life style to seizure preventing habits. Patients with non-epileptic seizures need psychotherapeutic treatment. Care has to be offered over long periods of time and needs to be tailored to very diverse patients' and family situations. Nurses in this field need broad knowledge about the diseases and treatments as well as enhanced skills in counselling and caring for these patients. Therefore the Swiss Epilepsy Centre in Zurich hired an Advanced Practice Nurse (APN) to increase patients' satisfaction with care and enhance nurses' skills and knowledge. This article analyses the work content of the advanced practice nurse and describes first experiences one year after the implementation of an APN-role. The APN worked half of her time in direct patient and family care. The rest of her work load concerned practice development in coaching and educating the nursing staff, being involved in projects and collaborating with the head nurse on specific topics. In conclusion, implementation of an advanced practice nurse has been shown to be beneficial since patients' feedback were very positive and the increase in nurses' skills and competencies has been assessed as noticeable. abstract_id: PUBMED:36634331 Development and Implementation of a Standard Assessment Battery Across the Continuum of Care for Patients After Stroke. Abstract: Stroke rehabilitation occurs across the continuum of care starting in the acute hospital and through the inpatient and outpatient settings. Rehabilitation aims to minimize impairments and maximize function in individuals after stroke. Because patients often undergo rehabilitation for extended periods, longitudinal assessment of impairment, activity, and participation can facilitate the evaluation of patients' progress toward recovery, as well as communication and decision making to guide clinical practice regarding the intervention(s) to be used and may also be leveraged for clinical research. However, the clinical implementation of a standard assessment battery that spans the continuum of care for patients after stroke is challenging because of operational and time constraints. Here, we describe the development and implementation of a standard assessment battery across the continuum of care by physical therapists, occupational therapists, and speech-language pathologists at the Sheikh Khalifa Stroke Institute. We specifically describe our experience in (1) identifying the core team to lead the process, (2) selecting the measures for the standard assessment battery, and the timeframe for administration, and (3) implementing the standard assessment battery in routine clinical practice. abstract_id: PUBMED:27624737 Safety of Non-Vitamin K Antagonist Oral Anticoagulants in Clinical Practice: Focus on Rivaroxaban in Stroke Prevention in Patients With Atrial Fibrillation. Despite direct oral anticoagulants (DOACs) have overcome the most relevant limitations of vitamin K antagonists, many patients with atrial fibrillation are not receiving the appropriate anticoagulant therapy. In addition, when patients are anticoagulated with DOACs, some of them are not taking the dose recommended in the summary of product characteristics. This may be related, at least in part, to the concern about the applicability of findings from randomized clinical trials to real-life patients. In this context, performing studies in daily clinical practice to assess the efficacy and safety of DOACs in real-world setting is mandatory. The aim of this review is to update the current evidence regarding safety of rivaroxaban in clinical practice. The Rivaroxaban Once Daily Oral Direct Factor Xa Inhibition Compared with Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation (ROCKET-AF) showed that rivaroxaban was at least as effective as warfarin for preventing stroke or systemic embolism, with similar rates of major bleeding but with lesser risk of intracranial and fatal bleedings. Data from noninterventional studies and registries have confirmed the good results of ROCKET-AF. Even more, rates of bleeding may be inferior in real-life patients, since they have a lower risk profile than those included in ROCKET-AF. abstract_id: PUBMED:36172097 Protocol for the development of a clinical practice guideline for discharge planning of stroke patients. Background: A standardized discharge plan is important to continuous medical care and discharge management of stroke patients. Currently, there is a lack of high-quality, evidence-based discharge planning guidelines for stroke patients. Most existing discharge planning guidelines have been developed for other diseases and stroke-related guidelines focus more on prevention, treatment, and rehabilitation and less on discharge planning. Therefore, they do not provide a systematic and comprehensive answer to the key issues of discharge planning for stroke patients. To improve the level of recovery and quality of life of stroke patients, to better guide clinical caregivers in developing and implementing discharge plans, the Evidence-based Nursing Center of West China Hospital, Sichuan University and the World Health Organization (WHO) Collaborating Centre for Guideline Implementation and Knowledge Translation have jointly initiated the development of the clinical practice guideline for discharge planning of patients with stroke. Methods: The guideline development process is designed to follow the WHO handbook for guideline development and Guidelines 2.0. Evidence grading and guideline recommendations are based on the Grading of Recommendations Assessment, Development, and Evaluation (GRADE). The key steps in developing the guideline include: (I) establishing the guideline working groups; (II) selecting the priority clinical questions; (III) evidence retrieval and evaluation; (IV) grading the quality of evidence; (V) forming recommendations; and (VI) external review. Discussion: This guideline will follow the clinical characteristics and management priorities of stroke and will be developed by a multidisciplinary guideline development team, in strict accordance with the core principles and methods of guideline development. This guideline will provide an evidence-based reference for standardized discharge screening, assessment, discharge procedures, and outpatient follow up, so as to improve the quality of discharge services and standardize the discharge management of stroke patients, and ultimately improve their post-discharge rehabilitation and quality of life. Trial Registration: The guideline was registered at the Practice guidelines REgistration for transPAREncy. The registration No. is IPGRP-2022CN331. abstract_id: PUBMED:28932000 The influences of the mental practice on the quality of life of the stroke patients. [Purpose] The purpose of this research is to take a look at the influences of the mental practice on the quality of life of stroke patients. [Subjects and Methods] Through a randomization of 32 stroke patients, this research allocated the experimental group (n=15) and the control group (n=17). Regarding the experimental group, the imagination practice intervention and the ordinary rehabilitation therapy had been applied 1 time a day and 5 times a week for 4 weeks. Regarding the control group, the ordinary rehabilitation therapy had been provided during the same session as that of the experimental group. In order to measure the quality of life, the SS-QOL was used. [Results] The mental practice and CON groups showed significant differences with the SS-QOL scores. [Conclusion] Through the result of this research, we were able to find out that the mental practice intervention is effective with regard to the improvement of the quality of life of the stroke patients. Regarding the researches that will take place later on, the researches which achieved the securing of even more experiment subjects and the control of the diverse variables of the quality of life will be needed. abstract_id: PUBMED:26951931 From clinical trials to clinical practice. Experience with rivaroxaban in the anticoagulant treatment of patients with non-valvular atrial fibrillation Despite the information provided by clinical trials is important, there are relevant clinical differences between those patients included in clinical trials and data of daily outpatient clinics. As a result, in some cases, the results of randomized clinical trials could not be directly applied to clinical practice. In this context, to perform «real-life» registries is mandatory. In the ROCKET-AF study, rivaroxaban, a once-daily direct oral anticoagulant, was at least as effective as warfarin for preventing stroke or systemic embolism, with similar rates of major bleeding, but with a lesser risk of intracranial, critical and fatal bleedings. In the last years, different large registries have confirmed that rivaroxaban is effective and even safer in real-life patients than in ROCKET-AF. The aim of this review is to update the current evidence about the efficacy, effectiveness and safety of rivaroxaban in real-life patients. abstract_id: PUBMED:27418212 Optimizing Stroke Prevention in Patients With Atrial Fibrillation: A Cluster-Randomized Controlled Trial of a Computerized Antithrombotic Risk Assessment Tool in Australian General Practice, 2012-2013. Introduction: Clinicians have expressed a need for tools to assist in selecting treatments for stroke prevention in patients with atrial fibrillation. The objective of this study was to evaluate the impact of a computerized antithrombotic risk assessment tool (CARAT) on general practitioners' prescribing of antithrombotics for patients with atrial fibrillation. Methods: A prospective, cluster-randomized controlled trial was conducted in 4 regions (in rural and urban settings) of general practice in New South Wales, Australia (January 2012-June 2013). General practitioner practices were assigned to an intervention arm (CARAT) or control arm (usual care). Antithrombotic therapy prescribing was assessed before and after application of CARAT. Results: Overall, the antithrombotic therapies for 393 patients were reviewed by 48 general practitioners; we found no significant baseline differences in use of antithrombotics between the control arm and intervention arm. Compared with control patients, intervention patients (n = 206) were 3.1 times more likely to be recommended warfarin therapy (over any other treatment option; P &lt; .001) and 2.8 times more likely to be recommended any anticoagulant (in preference to antiplatelet; P = .02). General practitioners agreed with most (75.2%) CARAT recommendations; CARAT recommended that 75 (36.4%) patients change therapy. After application of CARAT, the proportion of patients receiving any antithrombotic therapy was unchanged from baseline (99.0%); however, anticoagulant use increased slightly (from 89.3% to 92.2%), and antiplatelet use decreased (from 9.7% to 6.8%). Conclusion: Tools such as CARAT can assist clinicians in selecting antithrombotic therapies, particularly in upgrading patients from antiplatelets to anticoagulants. However, the introduction of novel oral anticoagulants has complicated the decision-making process, and tools must evolve to weigh the risks and benefits of these new therapy options. Answer: The provided abstracts do not directly address variations in practice regarding the selection of patients for rehabilitation after acute stroke. However, they do touch on various aspects of stroke care and the implementation of guidelines and tools to improve the quality of care for stroke patients. For instance, the study on nursing care for stroke patients in Saudi Arabia highlights significant differences in nursing practice between stroke units and intensive care units in several areas, including the activation of the stroke code, transferring patients to designated beds, and applying specific scales and assessments (PUBMED:37755349). This suggests that there may be variations in practice within the same country, which could extend to the selection of patients for rehabilitation. The Dutch clinical practice guideline for physical therapy in patients after stroke discusses the selection of relevant and feasible instruments for inclusion in the guideline, which implies that there may be variations in the assessment tools used by physical therapists, potentially affecting the selection process for rehabilitation (PUBMED:27374876). The development and implementation of a standard assessment battery across the continuum of care for patients after stroke also indicate efforts to standardize the evaluation of patients' progress, which could influence the selection for rehabilitation (PUBMED:36634331). Lastly, the protocol for the development of a clinical practice guideline for discharge planning of stroke patients suggests that there is a need for standardized guidelines to improve the quality of discharge services, which includes post-discharge rehabilitation (PUBMED:36172097). In summary, while the abstracts do not provide specific information on variations in the selection of patients for rehabilitation after acute stroke, they do suggest that there are differences in practices related to stroke care, which could potentially impact the rehabilitation selection process.