Instruction: Do risk factors for re-arrest differ for female and male drunk-driving offenders? Abstracts: abstract_id: PUBMED:11104112 Do risk factors for re-arrest differ for female and male drunk-driving offenders? Background: The present study investigated gender differences in factors affecting recidivism among 628 female and 659 male drunk-driving offenders. The study population included residents from New Mexico who completed a screening program for offenders and who were still residents when contacted 5 years later. Method: Risk factors for re-arrest in the 5-year period after screening referral were examined using multiple logistic regression models. Predictor variables included gender, age, ethnicity, education, marital status, blood alcohol concentration at arrest, parental alcohol problems, spousal alcohol problems, lifetime use of cannabis, cocaine, or amphetamines, abusive behavior toward spouse, and scores on two standardized assessments. Results: Risk factors for re-arrest were similar for males and females except that young age predicted higher recidivism among males but not females. The overall 5-year re-arrest rate was 26% (20% for women, 38% for males age 30 and under, and 24% for males age 31 and older). Conclusions: Young age predicts re-arrest for males but not for females. Neither the type of risk factors nor the number of risk factors fully explained female offenders' disproportionately lower recidivism rates, compared with young males. abstract_id: PUBMED:37944190 Better understanding female and male driving offenders' behavior: Psychological resources and vulnerabilities matter! Although driving risk taking appears to be mainly male, an increase in driving violations has been observed in recent years among French female drivers. The main objective of the present study was to explore the driving behaviors, psychological resources, and vulnerabilities of female and male driving offenders participating in a French driver rehabilitation program.
The second aim was to examine to what extent females' and males' resources and vulnerabilities predicted their violations, engagement in distracting activities while driving, and prosocial driving behaviors. In the course of 110 rehabilitation programs, 1686 driving offenders (22.4% females) completed a paper-and-pencil questionnaire. Compared to male offenders, females were more likely to have received a higher education, to be divorced or separated, and to drive fewer annual kilometers. They also had fewer demerit points than males in the last three years. They were more empathetic but also more impulsive than their male counterparts, and less self-compassionate and mindful. Regression and moderation analyses revealed that, across genders, certain psychological resources such as mindfulness can be considered protective factors for driving offenders, as they tend to decrease dangerous behaviors and increase prosocial ones, while vulnerabilities such as aggressive driving anger expression seem to have the opposite effect. Our results provide a better understanding of driving offenders' behavior and the influence of personal dispositions. They also open interesting new research avenues in the prevention of dangerous behaviors among this high-risk population. abstract_id: PUBMED:24316505 Traffic violations in Guangdong Province of China: speeding and drunk driving. The number of speeding- and drunk driving-related injuries in China surged in the years immediately preceding 2004 and then began to decline. However, the percent decrease in the number of speeding and drunk driving incidents (a 22% decrease) is not proportional to the corresponding percent decrease in the number of automobile accident-related injuries (a 47% decrease) from 2004 to 2010 (Traffic Management Bureau, Ministry of Public Security, Annual Statistical Reports on Road Traffic Accidents). Earlier studies have established traffic violations as one of the major risks threatening road safety.
In this study, we examine in greater detail two important types of traffic violation events, speeding and drunk driving, and attempt to identify significant risk factors associated with these types of traffic violations. Risk factors in several different dimensions, including driver, vehicle, road and environmental factors, are considered. We analyze the speeding (N=11,055) and drunk driving (N=10,035) data for the period 2006-2010 in Guangdong Province, China. These data, obtained from the Guangdong Provincial Security Department, are extracted from the Traffic Management Sector-Specific Incident Case Data Report and are the only comprehensive and official source of traffic accident data in China. Significant risk factors associated with speeding and drunk driving were identified. We find that several factors are associated with a significantly higher probability of both speeding and drunk driving, particularly male drivers, private vehicles, the lack of street lighting at night and poor visibility. The impact of other specific and unique risk factors for either speeding or drunk driving, such as hukou, road type/grades, commercial vehicles, compulsory third party insurance and vehicle safety status, also requires particular attention. Legislative or regulatory measures targeting different vehicle types and/or driver groups with respect to the various driver, vehicle, road and environmental risk factors can subsequently be devised to reduce the speeding and drunk driving rates. As China has the highest number of traffic accident fatalities in the world, applying these findings in workable legislation and enforcement to reduce speeding and drunk driving rates will save tens of thousands of lives. abstract_id: PUBMED:23890612 Profile of a drunk driver and risk factors for drunk driving. Findings in roadside testing in the province of Uusimaa in Finland 1990-2008.
The aim of the present study was to define the profile of a drunk driver and to determine risk factors for drunk driving by analyzing data on both sober and drunk drivers. Systematic roadside surveys have been carried out in Southern Finland for over 18 years, with 20,000-30,000 drivers breath tested annually. During the study period, 1241 drunk drivers were caught (legal blood alcohol limit 0.50‰). The comparison material consisted of 3407 sober drivers. The surveys were designed to further investigate demographic features and driving habits of drivers. The prevalence of drunk driving has been 0.2% over the time period, with only random variations. According to the data, a typical drunk driver is a man aged 40-49 who has a valid driving license and drives his own car, usually alone, with a blood alcohol concentration (BAC) of 1.0‰. He has a job and is married or cohabiting. The profile remained consistent throughout the study period. The risk of drunk driving was found to be five times higher for men than for women. Divorcees and widow(er)s had a substantially higher risk of being caught drunk driving than married drivers. Drunk drivers are most likely to be caught by roadside testing on Saturday mornings. During the study period, the blood alcohol limit for aggravated drunk driving was lowered in 1994 from 1.5 to 1.2‰. In 2004, the taxation of alcohol beverages was reduced by 30%. Neither of these measures affected the prevalence of drunk driving or the mean BAC of drunk drivers (p=0.63). abstract_id: PUBMED:24438112 Usefulness of indirect alcohol biomarkers for predicting recidivism of drunk-driving among previously convicted drunk-driving offenders: results from the recidivism of alcohol-impaired driving (ROAD) study. Aim: In several European countries, drivers under the influence (DUI) suspected of chronic alcohol abuse are referred for medical and psychological examination.
This study (the ROAD study, or Recidivism Of Alcohol-impaired Driving) investigated the usefulness of indirect alcohol biomarkers for predicting drunk-driving recidivism in previously convicted drunk-driving offenders. Design, Setting, Participants And Measurements: The ROAD study is a prospective study (2009-13) that was performed on 517 randomly selected drivers in Belgium. They had been convicted of drunk-driving, for which their licences were confiscated. The initial post-arrest blood samples were collected and analysed for percentage carbohydrate-deficient transferrin (%CDT), transaminase activities [alanine amino transferase (ALT), aspartate amino transferase (AST)], gamma-glutamyltransferase (γGT) and red cell mean corpuscular volume (MCV). The observation time for each driver was dynamic, spanning 3 years. Findings: A logistic regression analysis revealed that ln(%CDT) (P < 0.001), ln(γGT) (P < 0.01) and ln(ALT) (P < 0.05) were the best biochemical predictors of recidivism of drunk-driving. The ROAD index (which includes ln(%CDT), ln(γGT), -ln(ALT) and the sex of the driver) was calculated and had a significantly higher area under the receiver operating characteristic curve (0.71) than the individual biomarkers for drunk-driving recidivism. Drivers with a high risk of recidivating (ROAD index ≥ 25%; third tertile) could be distinguished from drivers with an intermediate risk (16% ≤ ROAD index < 25%; second tertile; P < 0.001) and a low recidivism risk (ROAD index < 16%; first tertile; P < 0.05). Conclusions: Of all routinely used indirect alcohol markers, percentage of carbohydrate-deficient transferrin is the major predictor of recidivism of drunk-driving. The association with gamma-glutamyltransferase, alanine amino transferase and the sex of the driver could have additional value for identifying drunk-drivers at intermediate risk of recidivism.
Non-specific indirect alcohol markers, such as alanine amino transferase, gamma-glutamyltransferase, aspartate amino transferase and red cell mean corpuscular volume, have minimal added value to % carbohydrate-deficient transferrin for distinguishing drunk drivers with a low or high risk of recidivism. abstract_id: PUBMED:29927629 Women judging: Is a young male drunk driver perceived as more negligent than a young female drunk driver? Objective: It is well established that young men are the riskiest group of all drivers, and men in general more often drive under the influence of alcohol. However, potentially oversimplified representations such as a "young male problem" and "drunk driving as a male problem" can influence action and reinforce existing attitudes by selectively directing attention to stereotypically consistent behavior. The purpose of this study was to test the hypothesis of possible bias toward young male drivers in the context of drunk driving. Methods: We created a scenario study investigating whether the sex of an imaginary young drunk driver would be associated with a different perceived negligence (ranging from 0 = not negligent at all to 10 = extremely negligent) among our participants. These participants were a representative sample of Finnish female driver's license holders. The data for the study were gathered as part of a larger survey study on women's drinking and driving culture. Results: Perceptions of how negligent a person was depended on the age of the respondents, such that the older the respondent, the higher the perceived negligence. Perceived negligence was similar for male and female drivers in the scenario; however, there was an interaction effect between driver sex and the age of respondents. The youngest (20-29 years) and two oldest (50-69 years) groups of our respondents found the young woman to be more negligent, whereas the opposite occurred for the other two groups.
Conclusions: The results of our imaginary scenario study of a representative sample of Finnish female driver's license holders do not support the hypothesis of a negative bias toward young male drivers in the context of drunk driving behavior. abstract_id: PUBMED:27016386 Examining cortical thickness in male and female DWI offenders. Some sex differences have been detected in driving while impaired by alcohol (DWI) offenders. However, understanding of the key factors contributing to DWI among male and female drivers remains elusive, limiting development of targeted interventions. Sex-based neurocognitive analyses could provide the much-needed insight. We examined whether male DWI offenders show cortical thickness anomalies that differ from those in female DWI offenders, when compared to their respective controls. The moderating role of sex and alcohol use on DWI status was also investigated. Sixty-one DWI offenders (29 male; 32 female) and 58 controls (29 male; 29 female) completed an anatomical brain scan and assessments of other relevant characteristics. Only male DWI offenders had reduced cortical thickness in the right dorsal posterior cingulate cortex (PCC), a region involved in cognitive control. Lower cortical thickness was associated with increased odds of DWI status only among males who had not engaged in a very hazardous pattern of alcohol misuse in the previous 12 months. Thus, for these male DWI drivers, interventions that could affect the PCC may be most advantageous. Continued multidimensional sex-based analysis of the neural characteristics of male and female DWI offenders is warranted. abstract_id: PUBMED:26253424 Profiling drunk driving recidivists in Denmark. Drunk drivers are a menace to themselves and to other road users, as drunk driving significantly increases the risk of involvement in road accidents and the probability of severe or fatal injuries.
Although injuries and fatalities related to road accidents have decreased in recent decades, the prevalence of drunk driving among drivers killed in road accidents has remained stable, at around 25% or more during the past 10 years. Understanding drunk driving, and in particular recidivism, is essential for designing effective countermeasures. Accordingly, the present study aims to identify the differences between non-drunk drivers, drunk driving non-recidivists and drunk driving recidivists with respect to their demographic and socio-economic characteristics, road accident involvement and other traffic and non-traffic-related law violations. This study is based on register data from Statistics Denmark and includes information from 2008 to 2012 for the entire population of Denmark aged 18 or older. The results from univariate and multivariate statistical analyses reveal a five-year prevalence of 17% for drunk driving recidivism, and a significant relation between recidivism and the drunk drivers' gender, age, income, education, receipt of an early retirement pension, household type, and residential area. Moreover, recidivists are found to have a higher involvement in alcohol-related road accidents, as well as other traffic and, in particular, non-traffic-related offences. These findings indicate that drunk driving recidivism is more likely to occur among persons who are in situations of socio-economic disadvantage and marginalisation. Thus, to increase their effectiveness, preventive measures aiming to reduce drunk driving should also address issues related to the general life situations of drunk driving recidivists that contribute to an increased risk of drunk driving recidivism. abstract_id: PUBMED:33310635 Driving after alcohol consumption: A qualitative analysis among Chinese male drunk drivers. In China, alcohol-related impaired driving has been perceived as a major contributor to traffic accidents.
However, the topic of driving after alcohol consumption is still understudied in the country. To fill this void, the aim of this study was to explore male drivers' experiences of drunk driving in China. A qualitative analysis was adopted, based on semi-structured interviews with 38 Chinese male drunk drivers who were serving community corrections. The findings of this study revealed that, despite being regular alcohol consumers, Chinese male drunk drivers had very little knowledge of the impairment that alcohol intake causes to driving. They mostly underestimated alcohol's negative influence on their driving ability and were overconfident that they would have a safe drive and would not be involved in any traffic accidents after consuming alcohol. In addition, most participants reported that, after assessing the situation and adopting strategies to avoid detection by traffic police, they believed there was very little chance they would be arrested. All these reasons affected these men's decision to drive drunk. Several criminological theories are applied to explain the findings, and implications are offered to reduce the possibility of drunk driving among Chinese men. abstract_id: PUBMED:11702800 Victim impact panels: do they impact drunk drivers? A follow-up of female and male, first-time and repeat offenders. Objective: To examine gender differences and re-arrest rates of first-time and repeat offenders following referral to Victim Impact Panels (VIPs). Method: Study participants (N = 6,702; 79% men) were first-time and repeat driving-while-impaired offenders who were referred to and completed a screening program in Bernalillo County, New Mexico, from 1989 to 1994. Whether subjects were mandated to attend a VIP was self-reported in a personal interview. Multivariate Cox proportional hazards analysis was used to test the effects of VIP referral and other predictors of recidivism.
Separate models were developed for female and male first-time and repeat offenders. Results: After controlling for multiple risk factors, VIP referral was not statistically associated with recidivism for female or male first offenders. However, female repeat offenders referred to VIPs were significantly more likely to be re-arrested compared with those not referred, with odds of re-arrest more than twice those of females not referred. Conclusions: Although many factors influence recidivism, this study raises the possibility that VIPs may have a negative impact on female repeat offenders. Answer: Yes, risk factors for re-arrest do differ for female and male drunk-driving offenders. The study with residents from New Mexico found that young age predicted higher recidivism among males but not females. The overall 5-year re-arrest rate was 26%, with 20% for women and 38% for males age 30 and under, while it was 24% for males age 31 and older. This suggests that while some risk factors are similar for both genders, young age is a particularly significant predictor of re-arrest for male offenders (PUBMED:11104112). Additionally, a study on driving offenders participating in a French driver rehabilitation program found that female offenders were more likely to have received a higher education, be divorced or separated, and drive fewer annual kilometers compared to male offenders. Females also had fewer demerit points than males in the last three years, were more empathetic but also more impulsive, and less self-compassionate and mindful. Psychological resources such as mindfulness were protective factors for driving offenders of both genders, decreasing dangerous behaviors and increasing prosocial ones, while vulnerabilities such as aggressive driving anger expression had the opposite effect (PUBMED:37944190).
In the context of drunk driving recidivism, the ROAD study found that the percentage of carbohydrate-deficient transferrin was the major predictor of recidivism of drunk-driving, with the association with gamma-glutamyltransferase, alanine amino transferase, and the sex of the driver potentially having additional value for identifying drunk-drivers at intermediate risk of recidivism (PUBMED:24438112). Moreover, a study examining gender differences and re-arrest rates of first-time and repeat offenders following referral to Victim Impact Panels (VIPs) found that VIP referral was not statistically associated with recidivism for female or male first offenders. However, female repeat offenders referred to VIPs were significantly more likely to be re-arrested compared with those not referred (PUBMED:11702800). These findings indicate that there are indeed differences in risk factors for re-arrest between female and male drunk-driving offenders, and these differences may be influenced by a variety of demographic, psychological, and behavioral factors.
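Several of the abstracts above (PUBMED:11104112, PUBMED:26253424) rely on multiple logistic regression to relate predictors such as age and sex to re-arrest, including the key interaction finding that young age predicts recidivism for men but not women. As a purely illustrative sketch of that modeling technique, the stdlib-only snippet below fits a small logistic model with an age-by-sex interaction on synthetic data whose group rates loosely mirror the reported ones (20% for women, 38% for men aged 30 and under, 24% for older men). All data, sample sizes, and learning parameters here are simulated assumptions, not the studies' actual records or analysis code.

```python
import math
import random

random.seed(0)

def simulate(n=2000):
    """Simulate a cohort with re-arrest rates loosely mirroring the
    reported ones: 20% women, 38% young men, 24% older men (illustrative)."""
    rows = []
    for _ in range(n):
        male = random.random() < 0.5
        young = random.random() < 0.5  # aged 30 and under
        p = 0.38 if (male and young) else 0.24 if male else 0.20
        y = 1.0 if random.random() < p else 0.0
        # Features: intercept, young, male, and a young*male interaction,
        # which is what lets "young" matter for men but not for women.
        rows.append(([1.0, float(young), float(male), float(young and male)], y))
    return rows

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(rows, lr=0.5, epochs=500):
    """Batch gradient descent on the logistic-regression loss."""
    w = [0.0] * 4
    n = len(rows)
    for _ in range(epochs):
        grad = [0.0] * 4
        for x, y in rows:
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
            for j in range(4):
                grad[j] += err * x[j] / n
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w

rows = simulate()
w = fit_logistic(rows)

def predicted_risk(young, male):
    """Model's predicted 5-year re-arrest probability for a cell."""
    x = [1.0, float(young), float(male), float(young and male)]
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# Under the simulated rates, young men should come out as the
# highest-risk cell, echoing the pattern reported in PUBMED:11104112.
print(predicted_risk(True, True), predicted_risk(True, False))
```

Because the model is saturated (four parameters for four cells), the fitted probabilities converge toward the empirical cell rates; dropping the interaction term would force a single "young age" effect across both sexes and hide the male-only pattern.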
Instruction: Can we provide reperfusion therapy to all unselected patients admitted with acute myocardial infarction? Abstracts: abstract_id: PUBMED:9207637 Can we provide reperfusion therapy to all unselected patients admitted with acute myocardial infarction? Objectives: This study sought to assess the maximal rate of acute Thrombolysis in Myocardial Infarction (TIMI) grade 3 patency that can be achieved in unselected patients. Background: Early and complete (TIMI grade 3 flow) reperfusion is an important therapeutic goal during acute myocardial infarction. However, thrombolysis, although widely used, is often contraindicated or ineffective. The selective use of primary and rescue percutaneous transluminal coronary angioplasty (PTCA) may increase the number of patients receiving reperfusion therapy. Methods: A cohort of 500 consecutive unselected patients with acute myocardial infarction were prospectively treated using a patency-oriented scheme: Thrombolysis-eligible patients received thrombolysis (n = 257) and underwent 90-min angiography to detect persistent occlusion for treatment with rescue PTCA. Emergency PTCA (n = 193) was attempted in patients with contraindications to thrombolysis, cardiogenic shock or uncertain diagnosis and in a subset of patients admitted under "ideal conditions." A small group of patients (n = 38) underwent acute angiography without PTCA. Conventional medical therapy was used in 12 patients with contraindications to both thrombolysis and PTCA. Results: Ninety-eight percent of patients received reperfusion therapy (thrombolysis, PTCA or acute angiography), and angiographically proven early TIMI grade 3 patency was achieved in 78%. Among patients with TIMI grade 3 patency, thrombolysis alone was the strategy used in 37%, emergency PTCA in 40% and rescue PTCA after failed thrombolysis in 15%; spontaneous patency occurred in 8%. 
Conclusions: Reperfusion therapy can be provided to nearly every patient (98%) with acute myocardial infarction. Rescue and direct PTCA provided effective early reperfusion to patients in whom thrombolysis failed or was excluded. abstract_id: PUBMED:8070474 Eligibility for reperfusion therapy and outcome in elderly patients with acute myocardial infarction. Reperfusion therapy by thrombolysis or angioplasty was considered in 260 unselected patients consecutively admitted within 6 h of the onset of Q wave myocardial infarction. Rates of reperfusion and in-hospital mortality were compared in 206 patients < 70 years and 54 patients ≥ 70 years. Early reperfusion was obtained in 86.4% of the patients under 70 years and in 72.2% of those over 70 (P < 0.01). Thrombolysis was more frequently used in the younger group (66.0% vs 31.5%, P < 10^-5), and primary angioplasty in the older (44.4% vs 29.6%, P < 0.05). Overall in-hospital mortality was higher in the older group (22.2% vs 4.4%, P < 10^-5). After successful reperfusion, mortality was 12.8% in the patients over 70 and 3.9% in those under 70. After failed or unproven reperfusion, mortality was 46.7% in the patients over 70 and 7.1% in those under 70. Reperfusion therapy is feasible in the majority of patients over 70 years, but failure to attempt or to achieve reperfusion is associated with a poor outcome. Although not controlled, this study provides an incentive for attempting early reperfusion therapy as often as possible in the elderly with acute myocardial infarction. abstract_id: PUBMED:9232071 Reperfusion-oriented management in the acute phase of myocardial infarction: results in a cohort of 700 consecutive patients. The principal objective of treatment in the acute phase of myocardial infarction is to achieve complete TIMI 3 patency. Usually, only a minority of patients receives thrombolytic therapy, and complete reperfusion is unusual.
Between June 1988 and April 1996, 700 consecutive patients were admitted to Bichat hospital within 6 hours of the onset of transmural myocardial infarction (81% men; age 59 +/- 13 years). The objective of treatment was to obtain maximal coronary patency in the acute phase, either by thrombolysis (with systematic angiography at 90 minutes and salvage angioplasty in case of failure), by primary angioplasty, or by conventional treatment (usually in cases of spontaneous reperfusion). The emergency angiography and angioplasty procedures were performed by a medical team on 24-hour duty. During the acute phase, 316 patients received intravenous thrombolysis (angiography at 90 minutes in 302 patients, with salvage angioplasty in 79 patients), 304 underwent primary angioplasty (TIMI 3 artery in 85% of cases) and 80 received conventional treatment (including 52 cases of angiographically documented spontaneous reperfusion). Overall, an 81% (566/700) rate of patent TIMI 3 arteries was obtained. The hospital mortality was 8.9%, lower with TIMI 3 patency (6%) than with TIMI 2 (20%) or TIMI 0-1 (23%), p < 0.001. The mortality was 4% in patients treated by thrombolysis. Therefore, a reperfusion strategy combining thrombolysis and/or angioplasty provides a high TIMI 3 patency rate in the acute phase of myocardial infarction with a low mortality (6%) in consecutive, unselected patients. abstract_id: PUBMED:17292384 Limited prognostic value of noninvasive assessment of reperfusion by continuous vectorcardiography in an unselected cohort of patients with acute ST-elevation myocardial infarction treated with thrombolysis. Aims: We studied the prognostic value of different reperfusion criteria of short-term continuous vectorcardiography (VCG) in an unselected cohort of 400 patients with ST-elevation myocardial infarction, treated at 4 coronary care units in Stockholm, Sweden, between 1999 and 2002. The main outcome measure was 1-year mortality.
Results: Of 400 study patients, 41 (10.2%) died within 1 year. One-year mortality in patients without reperfusion at 90 minutes, defined as ST resolution below 50% on VCG, was 11.6%, compared with 9.0% in patients with reperfusion (P = 0.4). Ninety-eight patients (24.5%) underwent intervention before discharge: percutaneous coronary intervention, coronary artery bypass grafting, or both during the index admission. Percutaneous coronary intervention or coronary artery bypass grafting was related to improved 1-year survival (97 +/- 2% vs 87 +/- 2%, P = 0.0076). ST-vector magnitude resolution at 90 minutes was lower in patients who underwent intervention (P = 0.045). None of the reperfusion criteria of VCG was significantly associated with 1-year mortality. Conclusion: Our results show that noninvasive assessment of reperfusion by continuous VCG has limited prognostic value in unselected patients treated with thrombolysis for ST-elevation myocardial infarction when subsequent revascularizations are performed. However, VCG might be useful in selecting patients for coronary angiography with subsequent revascularization. abstract_id: PUBMED:8304816 Reperfusion arrhythmia. Reperfusion arrhythmias have been described since the first trials of intracoronary thrombolysis for myocardial infarction. The prevalence of ventricular fibrillation during intravenous thrombolysis is low (< 7%) and comparable to that observed with classical treatment. Holter recording shows that ventricular tachycardia and accelerated idioventricular rhythms occur in over 80% of cases. These arrhythmias are generally well tolerated and do not require specific therapy. A bradycardia-hypotension syndrome is observed in about a quarter of reperfused patients, nearly always in inferior wall infarction. It normally resolves spontaneously or after atropine or volume expansion. Reperfusion is associated with a clear-cut increase in the number of episodes of arrhythmia.
Some arrhythmias, such as sustained ventricular tachycardia, early accelerated idioventricular rhythms (occurring in the first 6 hours) or the bradycardia-hypotension syndrome, may be considered as non-invasive criteria of reperfusion. More severe ischemia and sudden reperfusion favour the arrhythmogenicity of reperfusion in animal models. Recent data suggest that this may also be the case in the clinical context. In some uncontrolled studies, lidocaine, beta-blockers and aspirin did not affect the prevalence of the arrhythmias. Preliminary trials indicate that flunarizine and captopril may reduce the incidence of reperfusion arrhythmias in man. Ventricular arrhythmias and myocardial stunning could be the result of a single phenomenon (the extent of the ischemic or reperfusion lesions). Studies currently under way should clarify the relationship between the incidence of arrhythmias, the severity of stunning and myocardial recovery. Protocols evaluating therapeutic interventions in reperfused patients should include Holter monitoring. abstract_id: PUBMED:22446643 Angioplasty compared to thrombolysis as the initial reperfusion therapy in acute myocardial infarction. Background: Primary angioplasty is superior to intravenous thrombolysis as reperfusion therapy for acute myocardial infarction. Aim: To compare the results of available reperfusion strategies for initial management of acute myocardial infarction during hospitalization and 5 years of follow-up. Patients And Methods: Historical cohort study from a prospective registry of patients admitted with acute myocardial infarction to our center. Patients treated with primary angioplasty were identified and then matched by age, sex and date of event with patients treated with thrombolysis. The clinical outcomes were compared, including hospitalization and 5-year follow-up. Results: From March 1993 to August 2001, 98 patients were treated with primary angioplasty and matched with 98 thrombolyzed patients.
The groups were comparable. Compared to thrombolysis, angioplasty had a higher success rate (91% vs 68%), resulted in fewer complications and lower mortality (2% vs 11%), and required fewer revascularization procedures and a shorter hospital stay (13 vs 17 days). During the follow-up of survivors, no differences in events or additional mortality were detected at 1 or 5 years. Conclusions: Primary angioplasty is the superior treatment in terms of achieving success and reducing mortality during hospitalization. Evolution after hospitalization is independent of the initial therapy. abstract_id: PUBMED:24419804 Reperfusion therapy for ST elevation acute myocardial infarction 2010/2011: current status in 37 ESC countries. Aims: Primary percutaneous coronary intervention (PPCI) is the preferred reperfusion therapy in ST-elevation myocardial infarction (STEMI). We conducted this study to evaluate the contemporary status of the use and type of reperfusion therapy in patients admitted with STEMI in the European Society of Cardiology (ESC) member countries. Methods And Results: A cross-sectional descriptive study based on aggregated country-level data on the use of reperfusion therapy in patients admitted with STEMI during 2010 or 2011. Thirty-seven ESC countries were able to provide data from existing national or regional registries. In countries where no such registries exist, data were based on best expert estimates. Data were collected on the use of STEMI reperfusion treatment and mortality, the numbers of cardiologists, and the availability of PPCI facilities in each country. Our survey provides a brief data summary of the degree of variation in reperfusion therapy across Europe. The number of PPCI procedures varied between countries, ranging from 23 to 884 per million inhabitants. Primary percutaneous coronary intervention and thrombolysis were the dominant reperfusion strategy in 33 and 4 countries, respectively.
The mean population served by a single PPCI centre with a 24-h service 7 days a week ranged from 31 300 to 6 533 000 inhabitants per centre. Twenty-seven of the total 37 countries participated in an earlier survey from 2007, and major increases in PPCI utilization were observed in 13 of these countries. Conclusion: Large variations in reperfusion treatment are still present across Europe. Countries in Eastern and Southern Europe reported that a substantial number of STEMI patients are not receiving any reperfusion therapy. Implementation of the best reperfusion therapy as recommended in the guidelines should be encouraged. abstract_id: PUBMED:20222505 Implementation of reperfusion therapy in ST segment elevation myocardial infarction (STEMI). A policy statement from the Belgian Society of Cardiology and its working group of acute cardiology and interventional cardiology Myocardial infarction remains a major healthcare problem. Reperfusion therapy has been shown to influence favourably short- and long-term patient survival. The authors reviewed data on early recognition of STEMI (ST Elevation Myocardial Infarction), the reperfusion modalities including a management flowchart, as proposed by the Belgian working groups (BIWAC and BWGIC), and the lessons learned from European and American registries. Primary PCI often remains the treatment of choice. A national policy is still required to implement the guidelines and improve clinical practice for our STEMI patients. abstract_id: PUBMED:2332994 Reperfusion in acute myocardial infarction. During the past decade, the general acceptance of the primary role of thrombosis in acute myocardial infarction (AMI) has led to intense interest in the potential efficacy of reperfusion therapy, particularly thrombolytic therapy, in AMI.
Accumulating evidence indicates that systemic thrombolytic therapy administered early after the onset of symptoms of AMI can restore infarct-related artery patency, salvage myocardium, and reduce mortality. Recommendations about the proper use of thrombolytic therapy, contraindications, and concomitant therapies (such as aspirin, heparin, nitrates, beta-adrenergic blocking agents, and calcium channel blockers) are reviewed. Although percutaneous transluminal coronary angioplasty (PTCA) is useful for subsets of patients with AMI (for example, patients with anterior infarctions with persistent occlusion of the infarct-related artery after thrombolytic therapy and those with cardiogenic shock), a conservative strategy, including angiography and PTCA only for postinfarction ischemia, is indicated for most patients with AMI in whom initial thrombolytic therapy is apparently successful. The use of PTCA after failed thrombolysis or as direct therapy for AMI seems promising, although further comparisons of PTCA and intravenous thrombolytic therapy are needed. Ongoing studies should help further define the risk-to-benefit ratio of various reperfusion strategies in different subsets of patients, define time limitations for reperfusion therapy, and provide data on therapeutic modalities that will limit reperfusion injury and therefore enhance salvage of myocardium. abstract_id: PUBMED:33224750 Prevalence and predictors associated with in-hospital mortality in acute ST segment elevation myocardial infarction after reperfusion therapy in developing country. Background: The prevalence and predictors of in-hospital mortality in the unselected patients with ST-segment elevation myocardial infarction (STEMI) after reperfusion therapy with fibrinolytic or primary percutaneous coronary intervention (PCI) in developing country have not well established. Methods: We conducted a retrospective cohort study design. 
All admission records of patients with acute ST-elevation myocardial infarction at Suratthani Hospital between October 2015 and September 2017 were reviewed. Results: Between October 2015 and September 2017, a total of 431 patients received reperfusion therapy and were included in the present study. The majority of patients were male (80.5%), almost half of the population had anterior wall MI, 32 patients died (7.4%), and 399 survived during the index hospitalization. The prevalence of chronic kidney disease, anterior wall MI, cardiogenic shock and cardiac arrest was higher in the non-survivor group than in the survivor group. Echocardiography was performed in 299 patients (69.4%), and the median left ventricular ejection fraction (LVEF) in the overall population was 51%. After adjustment for baseline variables, LVEF (aOR 0.91, 95% CI 0.83-0.98, P=0.02) and cardiac arrest at presentation (aOR 22, 95% CI 1.22-410.05, P=0.036) were independently associated with in-hospital mortality. Conclusions: LVEF and cardiac arrest at presentation were independently associated with in-hospital mortality in STEMI patients receiving reperfusion therapy with either fibrinolytic therapy or primary PCI. Answer: Based on the provided abstracts, it appears that reperfusion therapy can be provided to nearly every patient with acute myocardial infarction. A study involving 500 consecutive unselected patients with acute myocardial infarction demonstrated that 98% of patients received reperfusion therapy, which included thrombolysis, percutaneous transluminal coronary angioplasty (PTCA), or acute angiography. Angiographically proven early TIMI grade 3 patency was achieved in 78% of these cases (PUBMED:9207637). Another study with 700 consecutive patients admitted within 6 hours of the onset of transmural myocardial infarction showed that a strategy combining thrombolysis and/or angioplasty provided a high TIMI 3 patency rate in the acute phase of myocardial infarction with a low mortality rate (PUBMED:9232071).
However, the success of reperfusion therapy can vary with age. In patients over 70 years old, reperfusion therapy is feasible in the majority, but failure to attempt or to achieve reperfusion is associated with a poor outcome (PUBMED:8070474). Additionally, the type of reperfusion therapy may influence outcomes. For instance, primary angioplasty has been shown to be superior to intravenous thrombolysis as reperfusion therapy for acute myocardial infarction, with higher success rates and reduced mortality during hospitalization (PUBMED:22446643). Despite the feasibility and benefits of reperfusion therapy, there are still variations in its implementation across Europe, with some countries reporting that a substantial number of STEMI patients do not receive any reperfusion therapy (PUBMED:24419804). Moreover, a policy statement from the Belgian Society of Cardiology emphasized the need for a national policy to implement guidelines and improve clinical practice for STEMI patients (PUBMED:20222505). In conclusion, while it is possible to provide reperfusion therapy to nearly all unselected patients with acute myocardial infarction, the actual implementation and outcomes may vary based on patient characteristics, the type of reperfusion therapy used, and the healthcare system in place.
Instruction: Do risk attitudes differ across domains and respondent types? Abstracts: abstract_id: PUBMED:17545498 Do risk attitudes differ across domains and respondent types? Objective: To evaluate differences in risk attitude across the domains of health and money for 2 types of respondents, patients and community members. Methods: Two groups of respondents, patients with multiple sclerosis (n = 56) and members of the general community (n = 57), completed a survey that collected information on risk attitudes and socioeconomic and clinical variables (e.g., disability level). Risk attitude was measured using 2 standard-gamble questions on money and 1 standard-gamble question on health outcomes. Multivariate regression was used to evaluate the relationship between risk attitude and respondent type (patient v. community), adjusting for covariates that could affect risk attitude. Results: The median certainty equivalents for money gambles were significantly different from and less than the expected value of the gamble for both types of respondents. Median certainty equivalents for the health gamble were not significantly different from the expected value for either group of respondents. For all 3 gambles, there was no difference in median certainty equivalents between the 2 types of respondents in both unadjusted and adjusted analyses. Conclusions: Risk attitude varied across domains but not by respondent type. Patients and community members were predominantly risk neutral with respect to health outcomes and risk averse with respect to money. Research on risk preferences on money outcomes may not be an appropriate proxy for risk preferences regarding health outcomes. Risk preferences may depend more on characteristics of the choice than on respondent type. abstract_id: PUBMED:37977015 The stability of physicians' risk attitudes across time and domains. Risk attitude is known to influence physicians' decision-making under uncertainty. 
Research on the risk attitudes of physicians is therefore important in facilitating a better understanding of physicians' decisions. However, little is known about the stability of physicians' risk attitudes across domains. Using five waves of data from a prospective panel study of Australian physicians from 2013 to 2017, we explored the stability of risk attitudes over a four-year period and examined the association between negative life events and risk attitudes among 4417 physicians. Further, we tested the stability of risk attitude across three domains most relevant to a physician's career and clinical decision-making (financial, career and clinical). The results showed that risk attitude was stable over time at both the mean and individual levels but the correlation between domains was modest. There were no significant associations between negative life events and risk attitude changes in all three domains. These findings suggest that risk attitude can be assumed to be constant but domain-specificity needs to be considered in analyses of physician decision-making. abstract_id: PUBMED:29460281 Alexithymia and risk preferences: Predicting risk behaviour across decision domains. Risk-taking is a critical health factor as it plays a key role in several diseases and is related to a number of health risk factors. The aim of the present study is to investigate the role of alexithymia in predicting risk preferences across decision domains. One hundred and thirteen participants filled out an alexithymia scale (Toronto Alexithymia Scale-TAS-20), impulsivity and venturesomeness measures (I7 scale), and-1 month later-the Cognitive Appraisal of Risky Events (CARE questionnaire). The hierarchical regression analyses showed that alexithymia positively predicted risk preferences in two domains: aggressive/illegal behaviour and irresponsible academic/work behaviour. 
The results also highlighted a significant association of the alexithymia facet, externally oriented thinking (EOT), with risky sexual activities. EOT also significantly predicted aggressive/illegal behaviour and irresponsible academic/work behaviour. The alexithymia facet, Difficulty Identifying Feelings, significantly predicted irresponsible academic/work behaviour. The results of the present study provide interesting insights into the connection between alexithymia and risk preferences across different decision domains. Implications for future studies and applied interventions are discussed. abstract_id: PUBMED:28567550 HIV Prevalence, Knowledge, Attitudes, and Practices Among Polydrug Users in Brazil: A Biological Survey Using Respondent Driven Sampling. Brazil has a concentrated HIV epidemic among key populations. In 2009, the Ministry of Health conducted a survey in 10 Brazilian cities aiming to estimate HIV prevalence, knowledge, and associated risk behaviors of polysubstance users (PSU). Using Respondent Driven Sampling (RDS), 3449 PSU were recruited, answered an Audio-Computer Self Assisted Interview (ACASI) and were tested for HIV and syphilis. Analyses were weighted by individual's social network size generated on RDSAT. Pooled HIV prevalence was 5.8% but varied across cities. Most PSU were male, non-white, without income, unemployed, with low levels of education. Overall, 12.0% used injectable drugs, 48.7% had sex with occasional partners and 46.4% engaged in commercial sex. A majority received free condoms (71.4%) but 76.7% exhibited inconsistent condom use. Findings can support policies aiming to improve health care and preventive interventions tailored to this population that remains at high risk of acquiring and transmitting HIV/STI in multiple scenarios. abstract_id: PUBMED:26802189 Motivational Affordance and Risk-Taking Across Decision Domains. 
We propose a motivational affordance account to explain both stability and variability in risk-taking propensity in major decision domains. We draw on regulatory focus theory to differentiate two types of motivation (prevention, promotion) that play a key role in predicting risk-taking. Study 1 demonstrated that prevention motivation is negatively associated with risk-taking across six key decision domains, including health/safety, ethics, recreation, gambling, investment, and social. In contrast, promotion motivation is positively associated with risk-taking in the social and investment domains. Study 2 replicated the same pattern and provided direct evidence that promotion motivation is a strong predictor of risk-taking only in domains where there is true potential for gains. Study 3 manipulated promotion (vs. prevention) motivation experimentally to demonstrate that motivational affordance is a critical mechanism for understanding risk-taking behaviors. abstract_id: PUBMED:33684700 Public attitudes toward genetic risk scoring in medicine and beyond. Advances in genomics research have led to the development of polygenic risk scores, which numerically summarize genetic predispositions for a wide array of human outcomes. Initially developed to characterize disease risk, polygenic risk scores can now be calculated for many non-disease traits and social outcomes, with the potential to be used not only in health care but also other institutional domains. In this study, we draw on a nationally-representative survey of U.S. adults to examine three sets of lay attitudes toward the deployment of genetic risk scores in a variety of medical and non-medical domains: 1. abstract belief about whether people should be judged on the basis of genetic predispositions; 2. concrete attitudes about whether various institutions should be permitted to use genetic information; and 3. personal willingness to provide genetic information to various institutions. 
Results demonstrate two striking differences across these three sets of attitudes. First, despite almost universal agreement that people should not be judged based on genetics, there is support, albeit varied, for institutions being permitted to use genetic information, with support highest for disease outcomes and in reproductive decision-making. We further find significant variation in personal willingness to provide such information, with a majority of respondents expressing willingness to provide information to health care providers and relative finder services, but less than a quarter expressing willingness to do so for an array of other institutions and services. Second, while there are no demographic differences in respondents' abstract beliefs about judging based on genetics, demographic differences emerge in permissibility ratings and personal willingness. Our results should inform debates about the deployment of polygenic scores in domains within and beyond medicine. abstract_id: PUBMED:35998085 Risk Attitudes in Late Adulthood: Do Parenthood Status and Family Size Matter? How older persons react to high-stakes decisions concerning their finance and healthcare depends a great deal on their orientation towards risk-taking. This study examines the associations between parenthood status, family size, and risk attitudes in late adulthood based on nationally-representative data from the Singapore Life Panel. Multivariate analyses are employed to estimate how older adults' willingness to take risks in the general, financial, and health domains varies by gender and among childless individuals and parents of different family size. Older mothers are found to be less risk tolerant than their childless counterparts across the three risk domains. Conversely, mothers with more children demonstrate greater risk tolerance than mothers with fewer children. We find no evidence that older men's risk attitudes vary by parenthood status and family size. 
We discuss the implications of our findings for understanding individual and societal well-being in the context of rapid fertility decline and population aging. abstract_id: PUBMED:27082309 Evidence-Based Practice: Attitude and Knowledge of Social Workers across Geographic Regions. The objective of the author in this article was to examine possible differences in knowledge and attitudes toward evidence-based practice (EBP) among social workers across geographic regions. A random national sample of 180 NASW members was obtained from mail and Internet groups. MANOVA analysis was performed to determine possible differences in knowledge and attitudes toward EBP among these social workers. Findings suggest that knowledge and attitude toward EBP did not differ among these practitioners. Despite increasing efficacy and widespread knowledge of EBPs, there is little or no empirical evidence to support any differences in attitudes and knowledge of EBP among social workers across geographic regions. abstract_id: PUBMED:31480904 The effects of personality and aging attitudes on well-being in different life domains. Objectives: Personality plays a major role in determining the way people adjust to life experiences, ultimately affecting life satisfaction. Aging attitudes also impact well-being, but there is little research on whether these personality and attitudinal effects reflect the same mechanism. The purpose of this study was to examine whether aging attitudes mediate the relationship between personality and well-being across seven different domains of everyday functioning, and whether this effect depends on age. Methods: Data from 563 adults ranging in age from 30 to 89 were used. Sociodemographic information as well as the Big Five Inventory, aging attitudes, and well-being (i.e. current self-views and life satisfaction) in seven different life domains was assessed. 
Results: The mediating effect of aging attitudes in the relationship between personality and well-being was strong for neuroticism, conscientiousness, and agreeableness and varied across domains. Significant mediation effects were limited for openness and extraversion. Significant moderated mediations were rather limited but the effects were stronger in later life. Discussion: These results suggest that personality influences aging attitudes, which in turn affect well-being. Further, our results indicate that such relationships are context-specific, suggesting that the global assessments of attitudes and well-being may not fully characterize significant aging outcomes. abstract_id: PUBMED:28063131 Individual classification of strong risk attitudes: An application across lottery types and age groups. Empirical evaluations of risk attitudes often rely on a weak definition of risk that concerns preferences towards risky and riskless options (e.g., a lottery vs. a sure outcome). A large body of work has shown that individuals tend to be weak risk averse in choice contexts involving risky and riskless gains but weak risk seeking in contexts involving losses, a phenomenon known as the reflection effect. Recent attempts to evaluate age differences in risk attitudes have relied on this weak definition, testing whether the reflection effect increases or diminishes as we grow older. The present work argues that weak risk attitudes have limited generalizability and proposes the use of a strong definition of risk that is concerned with preferences towards options with the same expected value but different degrees of risk (i.e., outcome variance). A reanalysis of previously-published data and the results from a new study show that only a minority of individuals manifests the reflection effect under a strong definition of risk, and that, when facing certain lottery-pair types, older adults appear to be more risk seeking than younger adults. 
Answer: Risk attitudes clearly differ across domains, while differences by respondent type are less consistent, as evidenced by several studies. For instance, a study involving patients with multiple sclerosis and members of the general community found that risk attitude varied across the domains of health and money, but not by respondent type. Both patients and community members were predominantly risk neutral with respect to health outcomes and risk averse with respect to money, suggesting that research on risk preferences on money outcomes may not be an appropriate proxy for risk preferences regarding health outcomes (PUBMED:17545498). Another study exploring the stability of physicians' risk attitudes over time and across different domains (financial, career, and clinical) found that while risk attitude was stable over time, the correlation between domains was modest. This indicates that risk attitudes can be assumed to be constant but domain-specificity needs to be considered in analyses of physician decision-making (PUBMED:37977015). Research on alexithymia and risk preferences showed that alexithymia positively predicted risk preferences in certain domains, such as aggressive/illegal behavior and irresponsible academic/work behavior, highlighting the role of specific alexithymia facets in predicting risk behavior across different decision domains (PUBMED:29460281). A study on motivational affordance and risk-taking across decision domains proposed that different types of motivation (prevention, promotion) play a key role in predicting risk-taking, with prevention motivation being negatively associated with risk-taking across several key decision domains, while promotion motivation was positively associated with risk-taking in domains where there is true potential for gains (PUBMED:26802189).
In the context of late adulthood, a study found that older mothers were less risk tolerant than their childless counterparts across general, financial, and health risk domains, while mothers with more children demonstrated greater risk tolerance than those with fewer children. However, no evidence was found that older men's risk attitudes varied by parenthood status and family size (PUBMED:35998085). These findings collectively suggest that risk attitudes are not uniform across decision domains, and that factors such as the specific decision-making context, individual characteristics, and life circumstances can influence risk preferences.
Instruction: Contralateral papillary thyroid cancer: does size matter? Abstracts: abstract_id: PUBMED:38098751 Acute exposure to diesel particulate matter promotes collective cell migration in thyroid cancer cells. Several ecological studies suggest that ambient air pollution is associated with the occurrence of thyroid cancer. In this study, we used certified diesel particulate matter as a proxy for fine particulate matter. Human thyroid cancer cell lines 8505C and TPC-1 were incubated with different concentrations of NIST1650b for 5 days and subjected to functional assays. We found that NIST1650b treatment did not affect short-term cell growth but reduced colony formation at high concentrations. Notably, NIST1650b-treated cells showed altered morphology toward cluster coalescence following treatment. Wound healing assays revealed that leading-edge cells formed protruding tips while maintaining cell-cell adhesion, and a significantly higher ratio of wound closure following treatment at 10 μg/mL was seen in both cell lines. A weak stimulatory effect on transwell cell migration was observed in 8505C cells. Taken together, our results suggest that fine particulate matter induced a coherent phenotype accompanied by augmented collective cell migration in thyroid cancer cells. abstract_id: PUBMED:34210530 Ambient particulate matter air pollution is associated with increased risk of papillary thyroid cancer. Background: The association between exposure to air pollution and papillary thyroid carcinoma is unknown. We sought to estimate the relationship between long-term exposure to the fine (diameter ≤ 2.5 μm) particulate matter component of air pollution and the risk of papillary thyroid cancer. Methods: Adult (age ≥18) patients with newly diagnosed papillary thyroid carcinoma between January 1, 2013 and December 31, 2016 across a single health system were identified using electronic medical records. 
Data from 1,990 patients with papillary thyroid carcinoma were compared with 3,980 age- and sex-matched control subjects without any evidence of thyroid disease. Cumulative fine (diameter ≤2.5 μm) particulate matter exposure was estimated by incorporating patients' residential zip codes into a deep-learning neural network model, which uses both meteorological and satellite-based measurements. Conditional logistic regression was performed to assess the association between papillary thyroid carcinoma and increasing fine (diameter ≤2.5 μm) particulate matter concentrations over 1, 2, and 3 years of cumulative exposure preceding papillary thyroid carcinoma diagnosis. Results: Increased odds of developing papillary thyroid carcinoma were associated with a 5 μg/m3 increase in fine (diameter ≤2.5 μm) particulate matter concentrations over 2 years (adjusted odds ratio = 1.18, 95% confidence interval: 1.00-1.40) and 3 years (adjusted odds ratio = 1.23, 95% confidence interval: 1.05-1.44) of exposure. This risk differed by smoking status (p-interaction = 0.04). Among current smokers (n = 623), the risk of developing papillary thyroid carcinoma was highest (adjusted odds ratio = 1.35, 95% confidence interval: 1.12-1.63). Conclusion: Increasing concentration of fine (diameter ≤2.5 μm) particulate matter in air pollution is significantly associated with the incidence of papillary thyroid carcinoma over 2 and 3 years of exposure. Our novel findings provide additional insight into the potential associations between risk factors and papillary thyroid carcinoma and warrant further investigation, specifically in areas with high levels of air pollution both nationally and internationally. abstract_id: PUBMED:36823621 Socioeconomic disparity in the association between fine particulate matter exposure and papillary thyroid cancer.
Background: Limited data exist suggesting that cumulative exposure to air pollution in the form of fine particulate matter (aerodynamic diameter ≤ 2.5 μm [PM2.5]) may be associated with papillary thyroid carcinoma (PTC), although this relationship has not been widely established. This study aims to evaluate the association between PM2.5 and PTC and determine the subgroups of patients who are at the highest risk of PTC diagnosis. Methods: Under IRB approval, we conducted a case-control study in which adult patients (age ≥18) newly diagnosed with PTC between 1/2013 and 12/2016 across a single health care system were identified using electronic medical records. These patients were compared to a control group of patients without any evidence of thyroid disease. Cumulative PM2.5 exposure was calculated for each patient using a deep-learning neural network model, which incorporated meteorological and satellite-based measurements at the patients' residential zip code. Adjusted multivariate logistic regression was used to quantify the association between cumulative PM2.5 exposure and PTC diagnosis. We tested whether this association differed by gender, race, BMI, smoking history, current alcohol use, and median household income. Results: A cohort of 1990 patients with PTC and a control group of 6919 patients without thyroid disease were identified. Compared to the control group, patients with PTC were more likely to be older (51.2 vs. 48.8 years), female (75.5% vs. 46.8%), White (75.2% vs. 61.6%), and never smokers (71.1% vs. 58.4%) (p < 0.001). After adjusting for age, sex, race, BMI, current alcohol use, median household income, current smoking status, hypertension, diabetes, COPD, and asthma, 3-year cumulative PM2.5 exposure was associated with a 1.41-fold increased odds of PTC diagnosis (95% CI: 1.23-1.62). This association varied by median household income (p-interaction = 0.03).
Compared to those with a median annual household income <$50,000, patients with a median annual household income between $50,000 and < $100,000 had a 43% increased risk of PTC diagnosis (aOR = 1.43, 95%CI: 1.19-1.72), and patients with median household income ≥$100,000 had a 77% increased risk of PTC diagnosis (aOR = 1.77, 95%CI: 1.37-2.29). Conclusions: Cumulative exposure to PM2.5 over 3 years was significantly associated with the diagnosis of PTC. This association was most pronounced in those with a high median household income, suggesting a difference in access to care among socioeconomic groups. abstract_id: PUBMED:37095323 Impact of social and economic factors on global thyroid cancer incidence and mortality. Purpose: The incidence of thyroid cancer has increased substantially over the past few decades and is partially explained by overdiagnosis. Geographical variations in incidence rates were reported to be related to national development status. This study aimed to gain deeper insights into global thyroid cancer burden by incorporating additional social and economic factors to account for cross-national disparities. Methods: We performed a multivariate analysis of age-standardized incidence and mortality data from the GLOBOCAN 2020 database for 126 countries that had more than 100 incident cases of thyroid cancer. The human development index (HDI), current health expenditure, and additional Global Health Observatory indicators were extracted from multiple sources. Results: Age-standardized incidence was highly correlated with HDI (standardized coefficient beta = 0.523, 95% confidence interval [CI] = 0.275-0.771) among the countries studied. The prevalence of raised fasting blood glucose was associated with age-standardized mortality (beta = 0.277, 95% CI = 0.038-0.517). Generally, the mortality-to-incidence ratio was higher in males than in females. 
In multivariate analysis, HDI (beta = -0.767, 95% CI = -0.902 to -0.633), current health expenditure (beta = 0.265, 95% CI = 0.137-0.394), and fine particulate matter (PM2.5) concentrations (beta = 0.192, 95% CI = 0.086-0.298) were associated with mortality-to-incidence ratios. Conclusions: National development gauged by HDI explains the majority of the variation in incidence rates of thyroid cancer but plays a smaller role in disparities in mortality rates. The association between air pollution and thyroid cancer outcomes warrants further investigation. abstract_id: PUBMED:36433931 Mutagenicity of the organic fraction of World Trade Center dust. Most studies of the health effects and chemical characterization of the dust resulting from the catastrophic collapse of the World Trade Center (WTC) on September 11, 2001, have focused on the large inorganic fraction of the dust; however, chemical analyses have identified mutagens and carcinogens in the smaller organic fraction. Here, we determined the mutagenicity of the organic fraction of WTC dust in Salmonella. Only 0.74% of the mass of the particulate matter (PM) <53 μm in diameter was extractable organic matter (EOM). Because the EOM was 10 times more mutagenic in TA100 +S9 than in TA98 +S9 and was negative in TA98 -S9, we inferred, respectively, that polycyclic aromatic hydrocarbons (PAHs) played a role in the mutagenicity and not nitroarenes. In TA98 +S9, the mutagenic potency of the EOM (0.1 revertant/μg EOM) was within the range of EOMs from air and combustion emissions. However, the EOM-based mutagenic potency of the particles (0.0007 revertants/μg PM) was 1-2 orders of magnitude lower than values from a review of 50 combustion emissions and various air samples. We calculated that 37 PAHs analyzed previously in WTC EOM were 5.4% of the EOM mass and 0.04% of the PM mass; some air contained 0.3 μg WTC EOM/m3 (0.02 μg PAHs/m3).
Populations exposed to WTC dust have elevated levels of prostate and thyroid cancer but not lung cancer. Our data support earlier estimates that PAH-associated cancer risk among this population, for example, PAH-associated lung cancer, was unlikely to be significantly elevated relative to background PAH exposures. abstract_id: PUBMED:37624503 Environmental factors and incidence of thyroid cancer in the world (1990-2019): an ecological study. Thyroid cancer (TC) is the most common endocrine malignancy. Environmental risk factors, such as airborne pollutants and the combustion of fossil fuels or carbon for cooking in closed indoor environments, affect thyroid hormonal homeostasis and disease. This study aimed to estimate the association between environmental risk factors, including particulate pollutants from fossil fuels burned in closed environments, and the incidence of thyroid cancer worldwide from 1990 to 2019. Data on the incidence of thyroid cancer and some environmental risk factors were extracted from the Global Cancer Observatory (GLOBOCAN) for 204 countries and territories from 1990 to 2019. Pearson's correlation coefficient was used to determine the correlation between the thyroid cancer incidence and environmental risk factors. Finally, a generalized additive model was fitted for modeling. R 3.5.0 was used for analysis of the data. The most relevant results showed that the age-standardized incidence rate (ASIR) of thyroid cancer has a positive and significant correlation with environmental air pollution by O3 (r=0.63, P value<0.001), by particulate matter pollution (r=0.23, P value<0.001), and by household PM2.5 air pollution (r=0.52, P value≤0.001). In contrast, the correlation between ASIR and high temperature (T>25.6°C) (r=-0.27, P value<0.001) was negative and significant.
The modeling results showed that particulate matter pollution, O3 pollution, and household PM2.5 air pollution originating from solid fuels are risk factors for thyroid cancer. Therefore, more research in this field is necessary in areas with high levels of air pollution at the national and international levels. abstract_id: PUBMED:36482383 Comparison of surgical smoke between two approaches for endoscopic thyroidectomy and open thyroidectomy. Background: Surgical smoke has been recognized as a potential health risk by an increasing number of researchers. Moreover, the amount of surgical smoke produced differs between surgical approaches. This study aimed to measure and compare the particulate matter (PM) of surgical smoke generated during open thyroidectomy and two endoscopic approaches for thyroidectomy to provide guidance for safe clinical practices. Methods: Forty-eight patients with thyroid cancer admitted to our hospital from June 2020 to December 2021 and treated with different surgical approaches were enrolled in this study. The total and peak counts of PM, dynamic changes, and other characteristics of surgical smoke produced during surgery were recorded. PM was classified as PM2.5 (size ≤ 2.5 μm) and PM10 (size ≤ 10 μm). Results: In a single cut, both the peak and total counts of PM2.5 and PM10 of surgical smoke in the open thyroidectomy group (n = 15) were significantly higher than those in the breast approach endoscopic thyroidectomy with CO2 insufflation group (n = 15) and the gasless transaxillary endoscopic thyroidectomy group (n = 18) (p < 0.001). Moreover, the latter two groups showed no significant differences in the peak and total counts of PM2.5 and PM10 (p > 0.05). Conclusion: In thyroid surgery, more surgical smoke is produced during open thyroidectomy than during endoscopic thyroidectomy, while different endoscopic approaches showed no significant difference in surgical smoke production.
Thus, endoscopic approaches outperform the open thyroidectomy approach with regard to surgical smoke production. abstract_id: PUBMED:32664367 Cerebral MRI and Clinical Findings in Children with PTEN Hamartoma Tumor Syndrome: Can Cerebral MRI Scan Help to Establish an Earlier Diagnosis of PHTS in Children? Background: PTEN Hamartoma Tumor Syndrome (PHTS) is caused by germline autosomal-dominant mutations of the tumor suppressor gene PTEN. Subjects harbour an increased risk for tumor development, with thyroid carcinoma occurring in young children. Establishing a diagnosis is challenging, since not all children fulfill diagnostic criteria established for adults. Macrocephaly is a common feature in childhood, with cerebral MRI being part of its diagnostic workup. We asked whether distinct cMRI features might facilitate an earlier diagnosis. Methods: We retrospectively studied radiological and clinical data of pediatric patients who presented at our hospital between 2013 and 2019 in whom PTEN gene mutations were identified. Results: We included 27 pediatric patients (18 male) in the analysis. All patients were macrocephalic. Of these, 19 patients had received at least one cMRI scan. In 18 subjects variations were detected: enlarged perivascular spaces (EPVS; in 18), white matter abnormalities (in seven) and less frequently additional pathologies. Intellectual ability was variable. Most patients exhibited developmental delay in motor skills, but normal intelligence. Conclusion: cMRI reveals EPVS and white matter abnormalities at a high prevalence in children with PHTS and might therefore serve as a diagnostic feature to establish an earlier diagnosis of PHTS in childhood.
The following article sheds light on the different opinions and explains why we advocate the use of radioiodine therapy in more than only high-risk patients. In comparison to other malignancies, differentiated thyroid carcinoma has a different tumor biology due to its usually slow growth pattern. Radioiodine therapy was first used about 75 y ago and provided cure at a time when prospective randomized controlled trials had yet to be developed. Large patient cohorts and usually at least a decade of clinical follow-up are needed to demonstrate a benefit from radioiodine therapy. Thus, especially in low-risk patients, many factors define an individual treatment decision, including tumor stage, extent of surgery, tumor biology, clinical and imaging data, life expectancy, and patient preferences. abstract_id: PUBMED:34732774 National cohort and meteorological data based nested case-control study on the association between air pollution exposure and thyroid cancer. The objective of this study was to evaluate the influence of exposure to meteorological conditions, including air pollution, on thyroid cancer. A nested case-control study was conducted utilizing 4632 patients with thyroid cancer and 18,528 control subjects who were matched at a 1:4 ratio by age group, sex, income, and region of residence. Korean National Health Insurance Service-Health Screening Cohort data from 2002 to 2015 were used. Odds ratios (ORs) with 95% confidence intervals (CIs) were calculated for thyroid cancer correlated with meteorological and air pollution exposure over a moving average of 3 years before the index dates. For all participants, the adjusted ORs associated with relative humidity (1.01, 95% CI 1.00-1.03, P value = 0.023), ambient atmospheric pressure (1.02, 95% CI 1.01-1.03, P value < 0.001), and sunshine duration (1.17, 95% CI 1.04-1.31, P value = 0.007) indicated correlations with the occurrence of thyroid cancer; however, these results were inconsistent in the subgroup analyses. 
Overall, exposures to nitrogen dioxide (NO2) (1.33, 95% CI 1.24-1.43, P value < 0.001) and particulate matter (PM10) (0.64, 95% CI 0.60-0.69, P value < 0.001) were related to thyroid cancer. These relationships persisted in the subgroup analyses. In conclusion, thyroid cancer occurrence was positively associated with NO2 exposure and negatively associated with PM10 exposure. Answer: The provided abstracts do not directly address the question of whether the size of contralateral papillary thyroid cancer matters. The abstracts focus on the association between air pollution, particularly fine particulate matter (PM2.5), and the risk of developing papillary thyroid cancer (PTC), as well as the impact of socioeconomic factors and environmental conditions on thyroid cancer incidence and mortality (PUBMED:38098751, PUBMED:34210530, PUBMED:36823621, PUBMED:37095323, PUBMED:36433931, PUBMED:37624503, PUBMED:36482383, PUBMED:32664367, PUBMED:29748234, PUBMED:34732774). They also discuss the mutagenicity of certain pollutants, the comparison of surgical smoke between different thyroidectomy approaches, and the clinical findings in children with PTEN Hamartoma Tumor Syndrome, which can include thyroid carcinoma. However, none of the abstracts provide information on the clinical significance of the size of contralateral papillary thyroid cancer, which would be relevant to questions of prognosis, treatment decisions, and surgical planning. This specific question would typically be addressed in studies focusing on the clinical outcomes and management of thyroid cancer, rather than the etiology and risk factors associated with its development. Therefore, to answer the question "Contralateral papillary thyroid cancer: does size matter?" one would need to consult clinical guidelines, oncological research, or studies that specifically address the prognostic value of tumor size in thyroid cancer.
Instruction: A trauma series in the injured child: do we really need it? Abstracts: abstract_id: PUBMED:33915985 Correlation between Hospital Volume of Severely Injured Patients and In-Hospital Mortality of Severely Injured Pediatric Patients in Japan: A Nationwide 5-Year Retrospective Study. Appropriate trauma care systems suitable for children are needed; thus, this retrospective nationwide study evaluated the correlation between the annual total hospital volume of severely injured patients and in-hospital mortality of severely injured pediatric patients (SIPP) and compared clinical parameters and outcomes per hospital between low- and high-volume hospitals. During the five-year study period, we enrolled 53,088 severely injured patients (Injury Severity Score, ≥16); 2889 (5.4%) were pediatric patients aged <18 years. A significant Spearman correlation was observed between the total number of patients and the number of SIPP per hospital (p < 0.001), and the number of SIPP per hospital who underwent interhospital transportation and/or urgent treatment was correlated with the total number of severely injured patients per hospital. Actual in-hospital mortality, per hospital, of SIPP was significantly correlated with the total number of patients per hospital (p < 0.001). The total number of SIPP requiring urgent treatment was higher in the high-volume than in the low-volume hospital group. No significant differences in actual in-hospital mortality (p = 0.246, 2.13 (0-8.33) vs. 0 (0-100)) or standardized mortality ratio (SMR) values (p = 0.244, 0.31 (0-0.79) vs. 0 (0-4.87)) were observed between the two groups; however, the 13 high-volume hospitals had an SMR of <1.0. Centralizing severely injured patients, regardless of age, to higher-volume hospitals might contribute to survival benefits for SIPP. abstract_id: PUBMED:32517132 Early Spinal Injury Stabilization in Multiple-Injured Patients: Do All Patients Benefit?
Background: Thoracolumbar spine fractures in multiple-injured patients are a common injury pattern. The appropriate timing for the surgical stabilization of vertebral fractures is still controversial. The purpose of this study was to analyse the impact of the timing of spinal surgery in multiple-injured patients both in general and in respect to spinal injury severity. Methods: A retrospective analysis of multiple-injured patients with an associated spinal trauma within the thoracic or lumbar spine (injury severity score (ISS) >16, age >16 years) was performed from January 2012 to December 2016 in two Level I trauma centres. Demographic data, circumstances of the accident, and ISS, as well as time to spinal surgery were documented. The evaluated outcome parameters were length of stay in the intensive care unit (ICU) (iLOS) and length of stay (LOS) in the hospital, duration of mechanical ventilation, onset of sepsis, and multiple organ dysfunction syndrome (MODS), as well as mortality. Statistical analysis was performed using SPSS. Results: A total of 113 multiple-injured patients with spinal stabilization and a complete dataset were included in the study. Of these, 71 multiple-injured patients (63%) presented with an AOSpine A-type spinal injury, whereas 42 (37%) had an AOSpine B-/C-type spinal injury. Forty-nine multiple-injured patients (43.4%) were surgically treated for their spinal injury within 24 h after trauma, and showed a significantly reduced length of stay in the ICU (7.31 vs. 14.56 days; p < 0.001) and hospital stay (23.85 vs. 33.95 days; p = 0.048), as well as a significantly reduced prevalence of sepsis compared to those surgically treated later than 24 h (3 vs. 7; p = 0.023). These adverse effects were even more pronounced in the case where cutoffs were increased to either 72 h or 96 h. 
Independent risk factors for a delay in spinal surgery were a higher ISS (p = 0.036), a thoracic spine injury (p = 0.001), an AOSpine A-type spinal injury (p = 0.048), and an intact neurological status (p < 0.001). In multiple-injured patients with AOSpine A-type spinal injuries, an increased time to spinal surgery was only an independent risk factor for an increased LOS; however, in multiple-injured patients with B-/C-type spinal injuries, an increased time to spinal surgery was an independent risk factor for increased iLOS, LOS, and the development of sepsis. Conclusion: Our data support the concept of early spinal stabilization in multiple-injured patients with AOSpine B-/C-type injuries, especially of the thoracic spine. However, in multiple-injured patients with AOSpine A-type injuries, the beneficial impact of early spinal stabilization has been overemphasized in former studies, and the benefit should be weighed against the risk of patients' deterioration during early spinal stabilization. abstract_id: PUBMED:32071531 Predictors of clavicle fixation in multiply injured patients. Introduction: Clavicle fractures account for approximately 10% of all fractures in multiply injured patients. Our study aims to determine factors associated with surgical fixation of the clavicle fracture in multiply injured patients. Methods: Major adult trauma patients from 2005 to 2014 with a clavicle fracture were included. Multivariate analysis was undertaken to determine the variables associated with fixation. Results: 1779 patients (median age of 47 and a median Injury Severity Score of 17) were included. 273 (15%) patients underwent clavicle fixation. Factors associated with surgical fixation of the clavicle included year, younger age, ICU admission, and an associated humerus or scapula fracture. abstract_id: PUBMED:31520156 It is time for a change in the management of elderly severely injured patients! An analysis of 126,015 patients from the TraumaRegister DGU®.
Background: The number of elderly patients among the severely injured has been increasing continuously. It has been suggested that an increased life expectancy and a higher level of activity and mobility in older ages could explain this observation. Elderly trauma patients have substantially higher mortality rates and poorer functional outcomes. The reasons remain unclear. The aim of this study was to look for differences in the management of severely injured elderly patients compared to younger age groups and to evaluate their potential impact on outcome. Methods: The TraumaRegister DGU® is a multicenter database that has documented de-identified data of severely injured patients since 1993. Trauma cases documented between 2009 and 2016 with an ISS ≥ 9 were divided into four age groups. The groups were compared with respect to mechanism of injury, pattern of injury, severity of injury, management, and outcome. Results: The analysis of 126,015 severely injured patients showed that 37.5% of the population were elderly patients (≥ 60 years). Their proportion increased by 1.7% each year. The elderly trauma patients experience different mechanisms of injury (more low-energy trauma) and different patterns of injury (more brain trauma; fewer abdominal and extremity injuries). Evaluating the management of patients showed that elderly patients have lower intubation rates and less volume replacement in the prehospital setting. Diagnostic interventions like CT scans in the emergency room were performed more restrictively. Elderly trauma patients also received fewer surgical interventions for brain injuries, pelvic fractures, and femur fractures. Their hospital mortality rates were higher. Conclusions: Severely injured elderly patients are treated with a more "wait and see" approach, resulting in higher mortality rates.
We suggest that this population needs more "aggressive management" to improve outcomes, provided that complete treatment, including surgical procedures and intensive care medicine, has not been declined by the patients or their legal guardians. abstract_id: PUBMED:29238847 Which pre-hospital triage parameters indicate a need for immediate evaluation and treatment of severely injured patients in the resuscitation area? Purpose: To find ways to reduce the rate of over-triage without drastically increasing the rate of under-triage, we applied a current guideline and identified relevant pre-hospital triage predictors that indicate the need for immediate evaluation and treatment of severely injured patients in the resuscitation area. Methods: Data for adult trauma patients admitted to our level-1 trauma centre in a one-year period were collected. Outpatients were excluded. Correct triage for trauma team activation was identified for patients with an ISS or NISS ≥ 16 or the need for ICU treatment due to trauma sequelae. In this retrospective analysis, patients were assigned to trauma team activation according to the S3 guideline of the German Trauma Society. This assignment was compared to the actual need for activation as defined above. Thirteen potential predictors were retained. The relevance of the predictors was assessed and 14 models of interest were considered. The performance of these potential triage models to predict the need for trauma team activation was evaluated with leave-one-out cross-validated Brier and logarithmic scores. Results: A total of 1934 inpatients ≥ 16 years were admitted to our trauma department (mean age 48 ± 22 years, 38% female). Sixty-nine per cent (n = 1341) were allocated to the emergency department and 31% (n = 593) were treated in the resuscitation room. The median ISS was 4 (IQR 7) points and the median NISS 4 (IQR 6) points. The mortality rate was 3.5% (n = 67) corresponding to a standardized mortality ratio of 0.73.
Under-triage occurred in 1.3% (26/1934) and over-triage in 18% (349/1934). A model with eight predictors was finally selected, with an under-triage rate of 3.3% (63/1934) and an over-triage rate of 10.8% (204/1934). Conclusion: The trauma team activation criteria could be reduced to eight predictors without losing predictive performance. Non-relevant parameters such as EMS provider judgement, endotracheal intubation, suspected paralysis, a burned body surface of >20%, and suspected fractures of two proximal long bones could be excluded for full trauma team activation. The fact that the emergency physicians did a better job in reducing under-triage compared to our final triage model suggests that other variables not present in the S3 guideline may be relevant for prediction. abstract_id: PUBMED:36474145 Is prehospital intubation of severely injured children in accordance with guidelines? Background: The current German S3 guideline for polytrauma lists five criteria for prehospital intubation: apnea, severe traumatic brain injury (GCS ≤8), severe chest trauma with respiratory failure, hypoxia, and persistent hemodynamic instability. These guideline criteria, used in adults in daily practice, have not been previously studied in a collection of severely injured children. The aim of this study was to assess the extent to which the criteria are implemented in clinical practice using a multivariate risk analysis of severely injured children. Methods: Data from 289,698 patients in the TraumaRegister DGU® were analyzed. Children meeting the following criteria were included: Maximum Abbreviated Injury Scale 3+, primary admission, German-speaking countries, years 2008-2017, and declaration of intubation. Since children's physiology deviates with age, four age groups were defined (years old: 0-2; 3-6; 7-11; 12-15). An adult collective served as a control group (age: 20-50).
After a descriptive analysis in the first step, factors leading to prehospital intubation in severely injured children were analyzed with a multivariate regression analysis. Results: A total of 4489 children met the inclusion criteria. In this cohort, young children up to 2 years old had the highest injury severity (Injury Severity Score: 21; p ≤ 0.001). Falls from both high (> 3 m) and low heights (< 3 m) were more common in children than in adults. The same finding applied to the occurrence of severe traumatic brain injury. When at least one intubation criterion was formally present, the group up to 6 years old was least likely to actually be intubated (61.4%; p ≤ 0.001). Multivariate regression analysis showed that a Glasgow Coma Scale score ≤ 8 in particular had the greatest influence on intubation (odds ratio: 26.9; p ≤ 0.001). Conclusions: The data presented here show for the first time that the existing guideline criteria for prehospital intubation are applied in clinical practice in the vast majority of injured children (approximately 70% of cases), comparable to adults. Although severely injured children still represent a minority of all injured patients, future guidelines should focus more on them and address them in a specialized manner.
Methods: We developed a team-based simulation regarding the resuscitation of a critically injured child: a toddler with multiple injuries, all requiring emergent care. The case was developed for senior medical students and residents and can be completed in a single 1-hour session, including a debriefing period. We also address psychosocial issues of managing a critically injured child by having the mother and her boyfriend present for part of the case. The team must address the underlying issue of suspected nonaccidental trauma while managing a medical resuscitation. Results: We have performed this scenario with a cohort of 100 trainees. Through direct observations, all teams have been able to manage the patient successfully. The average rating of the case's effectiveness in developing pediatric resuscitation skills was very positive, at 6.7 on a scale of 1 to 7. Discussion: Medical simulation has been demonstrated to be a valuable tool for assessing the knowledge and skills of trainees. This pediatric simulation improved learners' general understanding of managing a pediatric resuscitation. Accordingly, this case has been incorporated as part of resident and medical student training.
Patients And Methods: A written questionnaire was sent to 121 trauma patients who received treatment in two hospitals in Cologne/Northrhine-Westfalia between 1996 and 2001. The inclusion criteria were more than one injury and a sum of the abbreviated injury scores of the two worst injuries ≥ 6. The response rate after using the total-design-method was 77.6% (n = 90). Results: Severely injured patients showed significant reductions on all subscales of the SF-36, on average 4 years after discharge, in comparison to a German norm population. Specifically, aspects of the physical-component scale were dramatically reduced. Linear regressions controlling for time after discharge suggested that higher age, lower socioeconomic status, living together with a partner, and the severity of trauma and injury of extremities were significant predictors for a reduced quality of life, while satisfaction with the hospital stay had a positive effect. Discussion: All in all, it is important to identify trauma patients who will suffer a reduced quality of life. In so doing, it will be possible to take into account the specific circumstances of their recovery during medical treatment, care, and rehabilitation. abstract_id: PUBMED:30237602 Management of Orthopaedic Injuries in Multiply Injured Child. The multiply injured child is a unique challenge to the medical communities worldwide. Trauma is a leading cause of preventable mortality and morbidity in children. Common skeletal injuries include closed or open fractures of the tibia and femur and pelvic injuries. Initial management focuses on saving life and then saving limb as per pediatric advanced life support and advanced trauma life support. Orthopedic management of open fractures includes splinting the limb, administration of prophylactic antibiotics, and surgical debridement of the wound when safe. However, gross contamination, compartment syndrome, and vascular injuries demand urgent attention.
abstract_id: PUBMED:36590891 Pre-Existing Anxiety and Depression in Injured Older Adults: An Under-Recognized Comorbidity With Major Health Implications. To compare differences in baseline depression and anxiety screenings between older injured patients with pre-existing diagnoses and those without. Background: Little is known about the prevalence and impact of psychiatric comorbidities on early postinjury depression and anxiety in nonneurologically injured older adults. Methods: This was a retrospective post-hoc analysis of data from the Trauma Medical Home, a multicenter randomized controlled trial (R01AG052493-01A1) that explored the effect of a collaborative care model on postinjury recovery for older adults compared to usual care. Results: Nearly half of the patients screened positive for at least mild depressive symptoms as measured by the Patient Health Questionnaire-9. Forty-one percent of the patients screened positive for at least mild anxiety symptoms as measured by the Generalized Anxiety Disorder Scale. Female patients with a history of concurrent anxiety and depression, greater injury severity scores, and higher Charlson scores were more likely to have mild anxiety at baseline assessment. Patients with a history of depression only, a prior history of depression and concurrent anxiety, and higher Charlson scores (greater medical comorbidity) had greater odds of at least mild depression at the time of hospital discharge after traumatic injury. Conclusions: Anxiety and depression are prevalent in the older adult trauma population, and affect women disproportionately. A dual diagnosis of depression and anxiety is particularly morbid. Mental illness must be considered and addressed with the same importance as other medical diagnoses in patients with injuries. 
Answer: The provided abstracts do not directly address the specific question of whether a trauma series (a set of diagnostic imaging tests typically used to assess traumatic injuries) is necessary in the management of injured children. However, they do provide insights into various aspects of trauma care in pediatric patients, which can be relevant to the broader context of trauma evaluation and management. For instance, the abstract from PUBMED:33915985 discusses the correlation between hospital volume of severely injured patients and in-hospital mortality of severely injured pediatric patients, suggesting that centralizing care to higher volume hospitals might contribute to survival benefits for severely injured pediatric patients. This implies the importance of specialized care and resources, which could include comprehensive diagnostic imaging as part of the trauma series. The abstract from PUBMED:32071531 highlights the predictors of clavicle fixation in multiply injured patients, indicating that certain injuries in pediatric patients may require specific interventions, which could be identified through diagnostic imaging. The abstract from PUBMED:31520156 suggests that elderly severely injured patients are treated with a more conservative approach, which results in higher mortality rates. While this study focuses on elderly patients, it raises the question of whether a similar conservative approach in pediatric trauma care, potentially including the omission of a full trauma series, could also lead to suboptimal outcomes. The abstract from PUBMED:36474145 discusses the adherence to guidelines for prehospital intubation of severely injured children, which is a part of the initial assessment and management of trauma patients. This could imply the importance of following established protocols, which may include the use of a trauma series for proper evaluation. 
Lastly, the abstract from PUBMED:30237602 emphasizes the management of orthopedic injuries in multiply injured children, noting that initial management focuses on saving life and then saving limb, following pediatric advanced life support and advanced trauma life support guidelines. This could suggest that a trauma series is part of the comprehensive assessment needed to prioritize interventions. In summary, while the abstracts do not directly answer the question about the necessity of a trauma series in injured children, they collectively underscore the importance of thorough assessment and adherence to guidelines in the management of pediatric trauma, which may support the use of a trauma series as part of the evaluation process.
Instruction: Can "Hybrid stent implantation" improve long-term safety without adversely affecting efficacy when treating multilesion coronary artery disease in the drug-eluting stent era? Abstracts: abstract_id: PUBMED:23652038 Can "Hybrid stent implantation" improve long-term safety without adversely affecting efficacy when treating multilesion coronary artery disease in the drug-eluting stent era? Background: Though drug-eluting stents (DES) have largely solved the problem of restenosis, safety issues related to stent thrombosis remain the major concern with DES. We hypothesized that hybrid stent implantation may decrease the use of DES, possibly improving long-term safety without adversely affecting efficacy when treating multilesion coronary artery disease in the DES era. Methods: From April 2004 to October 2006, 848 patients with multilesion disease underwent hybrid stent implantation. During the same period, 5647 patients with multilesion coronary heart disease were treated by exclusive DES implantation at Fu Wai Hospital. According to propensity score matching, we chose 823 pairs of patients with multilesion coronary artery disease for inclusion in our study. We obtained 24-month clinical outcomes including death, myocardial infarction (MI), thrombosis, target lesion revascularization (TLR), target vessel revascularization (TVR), and major adverse cardiac events (MACE, the composite of death, MI, and TVR). We used Cox's proportional-hazard models to assess relative risks of all the outcome measures after propensity matching. Results: At 24 months, patients in the hybrid stent implantation group showed a significantly higher risk of TLR (8.39% vs. 3.28%, HR 2.38, 95%CI: 1.50 - 3.70), TVR (11.07% vs. 6.32%, HR 1.61, 95%CI: 1.15 - 2.27) and MACE (13.75% vs. 8.75%, HR 1.37, 95%CI: 1.02 - 1.85). No significant difference was apparent in terms of mortality (1.22% vs. 1.70%, HR 0.55, 95%CI: 0.24 - 1.25), MI (1.95% vs.
2.31%, HR 0.73, 95%CI: 0.37 - 1.42), or thrombosis (definite + probable) (0.73% vs. 1.58%, HR 0.40, 95%CI: 0.15 - 1.05). Conclusions: In patients with multilesion coronary artery disease, the exclusive DES implantation was associated with significantly lower risks of TLR, TVR and MACE, and the hybrid stent implantation did not result in any significant improvements regarding safety issues. Prospective studies are needed to confirm our results. abstract_id: PUBMED:22915938 Long-term effectiveness and safety of sirolimus drug-eluting stents. The root cause of coronary artery disease is atherosclerosis, ie, intraluminal narrowing (stenosis) of the arteries that supply blood to tissues of the heart. The introduction of the drug-eluting stent over the past decade has revolutionized the field of interventional cardiology. It is used extensively in clinical practice for the treatment of coronary artery disease. The first drug-eluting stent to receive US Food and Drug Administration approval was the sirolimus-eluting stent. Recently, two other stent analogs of sirolimus were approved, ie, the zotarolimus-eluting stent and the everolimus-eluting stent. However, concern has arisen in recent years about the long-term safety and efficacy of drug-eluting stents, due to the occurrence of late adverse clinical events, such as stent thrombosis. This review focuses on clinical studies that have been performed with the sirolimus-eluting stent or its analogs. We discuss the pharmacology, safety, and various therapeutic options that exist when choosing stents for coronary artery disease. Our aim is to provide a thorough review of the long-term efficacy and safety of sirolimus drug-eluting stents, and also to discuss currently approved and promising investigational drug-eluting stents, in an effort to provide insight into how these stents are currently evolving and generate further investigation in this area. 
abstract_id: PUBMED:28951394 Impact of Stent Size Selection on Acute and Long-Term Outcomes After Drug-Eluting Stent Implantation in De Novo Coronary Lesions. Background: Although significant undersizing often results in incomplete stent apposition or underexpansion, the possible impact of oversized stent implantation on arterial wall injury has not been systematically investigated with drug-eluting stents. The aim of this study was to investigate the impact of stent oversizing on acute and long-term outcomes after drug-eluting stent implantation in de novo coronary lesions. Methods And Results: Serial (baseline and 6-12 months) coronary angiography and intravascular ultrasound were performed in 2931 lesions treated with drug-eluting stents (355 sirolimus, 846 paclitaxel, 1387 zotarolimus, and 343 everolimus). The percentage of stent oversizing relative to angiographic reference vessel diameter (RVD) was calculated as (nominal stent diameter − RVD)/RVD × 100 (%). Clinical outcomes, including target lesion revascularization and stent thrombosis, were followed for 1 year. Overall, smaller preintervention RVD was associated with a higher percentage of stent oversizing (P<0.001). The significant oversizing group underwent less post-dilatation (P=0.002) but achieved greater stent expansion (P<0.001) and less incomplete stent apposition (P<0.001) without an increase in edge dissection after the procedure. When stratified by vessel size and stent oversizing, progressive decreases in restenosis (P=0.002) and target lesion revascularization rates (P=0.007) were found in favor of larger vessel size and oversized stents. Stent thrombosis was observed most frequently in the small-RVD, low-oversizing subgroup (P=0.040). Conclusions: The positive impact of stent oversizing was documented on procedural and clinical outcomes. 
In particular, small vessels treated with smaller stents were associated with more adverse events, suggesting that aggressive selection of larger stents, with appropriate attention to edge effects, may optimize long-term outcomes, even with drug-eluting stent implantation. abstract_id: PUBMED:26003022 Long-Term Efficacy and Safety of Paclitaxel-Eluting Balloon for the Treatment of Drug-Eluting Stent Restenosis: 3-Year Results of a Randomized Controlled Trial. Objectives: This study sought to investigate the long-term comparative efficacy and safety of paclitaxel-eluting balloon (PEB), paclitaxel-eluting stent (PES), or balloon angioplasty (BA) for the treatment of drug-eluting stent restenosis. Background: The optimal treatment of drug-eluting stent restenosis remains unknown. Although PEB has shown encouraging results, the long-term clinical efficacy and safety of PEB remains poorly defined. Methods: A total of 402 patients with clinically significant restenosis in limus-eluting stents were randomly assigned to receive PEB (n = 137), PES (n = 131), or BA (n = 134). For this analysis, PEB versus PES and PEB versus BA were compared. The primary efficacy and safety endpoints were target lesion revascularization and the composite of death or myocardial infarction. Results: At a median follow-up of 3 years, the risk of target lesion revascularization was comparable with PEB versus PES (hazard ratio [HR]: 1.46, 95% confidence interval [CI]: 0.91 to 2.33; p = 0.11) and lower with PEB versus BA (HR: 0.51, 95% CI: 0.34 to 0.74; p < 0.001). The risk of death/myocardial infarction tended to be lower with PEB versus PES (HR: 0.55, 95% CI: 0.28 to 1.07; p = 0.08), due to a lower risk of death (HR: 0.38, 95% CI: 0.17 to 0.87; p = 0.02). The risk of death/myocardial infarction was similar with PEB versus BA (HR: 0.96, 95% CI: 0.46 to 2.0; p = 0.91). 
Conclusions: At 3 years, the use of PEB as compared with PES to treat patients with limus-eluting stent restenosis has similar efficacy and safety. PEB remains superior to BA. The sustained efficacy without trade-off in safety supports the role of PEB as treatment option for patients with drug-eluting stent restenosis. (Intracoronary Stenting and Angiographic Results: Drug Eluting Stent In-Stent Restenosis: 3 Treatment Approaches [ISAR-DESIRE 3]; NCT00987324). abstract_id: PUBMED:31043984 Clinical experience with 12-month follow-up in patients after implantation of a novel long-tapered sirolimus drug-eluting stent. Introduction: Long lesions contribute to a significant number of percutaneous coronary interventions. Aim: To assess the efficacy and safety of a novel long-tapered drug-eluting stent (DES) at a 12-month follow-up (FU) in patients with long coronary atherosclerotic lesions. Material And Methods: A prospective clinical cohort study was conducted in 32 patients who underwent percutaneous coronary intervention using a BioMime Morph tapered stent (Meril Life Sciences, India). The patients were followed for 3, 6, and 12 months. The safety endpoints were death, myocardial infarction (MI), target lesion revascularization (TLR), target vessel revascularization (TVR), and MACE and/or major bleeding. Results: Mean lesion length was 48 mm (range: 35-70 mm) measured via quantitative coronary analysis (QCA). In most cases, the target lesion was located in the LAD (68.75%). A GuideLiner catheter (Vascular Solutions Inc., MN, USA) was used in 12.5% of procedures; buddy-wire technique in 9.4% of cases. Bifurcation lesions were treated in 40.6% of cases. Additional stent implantation was needed in 56% of the procedures (25% of cases due to proximal or distal dissection, or due to insufficient stent length in 31% of cases). On 12-month FU we observed 1 TLR (3.1%), 1 TVR (3.1%), and 1 non-cardiovascular death. 
Conclusions: The long sirolimus-eluting stent with a tapered structure was characterized by good deliverability in long coronary lesions, although in some cases "buddy wire" or extension microcatheter use was necessary. Follow-up at 3, 6, and 12 months showed no significant major adverse cardiovascular events related to the device. abstract_id: PUBMED:27822889 Intravascular ultrasound-guided drug-eluting stent implantation. Intravascular ultrasound (IVUS) is a reliable imaging tool to guide percutaneous coronary intervention. There has been increasing evidence supporting the clinical utility of IVUS-guided drug-eluting stent (DES) implantation, including randomized trials, observational studies, and meta-analyses of both. IVUS provides cross-sectional views of the coronary artery wall, and allows us to assess stenosis severity, identify plaque morphology, optimize stent implantation, and understand mechanisms of stent failure. IVUS guidance can increase DES efficacy and decrease clinical events. In this review article, we summarize available evidence on IVUS-guided DES implantation. abstract_id: PUBMED:24973842 Safety and efficacy of the Yukon Choice Flex sirolimus-eluting coronary stent in an all-comers population cohort. Aims: The use of biodegradable-polymer drug-eluting stents has been shown to provide favorable results when compared with durable-polymer drug-eluting stents, and long-term follow-up data have recently shown significant reductions in very late stent thrombosis. The aim of the present study was to assess the safety and efficacy profile of a novel biodegradable-polymer DES, the Yukon Choice Flex sirolimus-eluting stent. Methods: We report here the one-year clinical outcomes associated with the use of the Yukon Choice Flex sirolimus-eluting stent in an all-comers patient population. The present stent represents a further refinement of the stent platform tested in the ISAR TEST 3 and 4 randomized clinical trials. 
A total of 778 consecutive patients undergoing implantation of this stent were enrolled in the present observational study and prospectively followed for one year. Results: The use of the Yukon Choice Flex stent in a patient population with complex coronary lesion morphology was associated with optimal immediate angiographic results. At one year follow up the rates of death, myocardial infarction, definite stent thrombosis and ischemia-driven target lesion revascularization were respectively 2.4%, 1.9%, 0.3% and 11.3%. Conclusions: The use of the sirolimus-eluting biodegradable polymer Yukon Choice Flex stent in an all-comers population of patients with complex coronary artery disease is associated with a favorable safety and efficacy profile up to one year follow up. abstract_id: PUBMED:26219730 Efficacy of Endeavor zotarolimus-eluting stent implantation for the treatment of very late stent thrombosis with late-acquired incomplete stent apposition after sirolimus-eluting stent implantation. Very late stent thrombosis (VLST) is a serious complication after percutaneous coronary intervention. However, the best therapy for VLST with late-acquired incomplete stent apposition and incomplete neointimal coverage remains unknown. In these cases, neointimal coverage was nearly complete and no late-acquired malapposition was detected at 18 months after Endeavor zotarolimus-eluting stent (ZES) implantation for the treatment of VLST with late-acquired incomplete stent apposition after sirolimus-eluting stent implantation. We presented that Endeavor ZES implantation may become an attractive therapeutic strategy for the treatment of VLST with late-acquired incomplete stent apposition and incomplete neointimal coverage. abstract_id: PUBMED:31237187 Severe Acute Stent Malapposition After Drug-Eluting Stent Implantation: Effects on Long-Term Clinical Outcomes. 
Background: The effects of severe acute stent malapposition (ASM) after drug-eluting stent implantation on long-term clinical outcomes are not clearly understood. We evaluated long-term clinical outcomes of severe ASM using optical coherence tomography. Methods and Results: We pooled patient- and lesion-level data from 6 randomized studies. Five studies investigated follow-up drug-eluting stent strut coverage and one investigated ASM. In this data set, a total of 436 patients with 444 lesions underwent postintervention optical coherence tomography examination and these data were included in the analysis. Severe ASM was defined as lesions with ≥400 μm of maximum malapposed distance or ≥1 mm of maximum malapposed length. Composite events (cardiac death, target lesion-related myocardial infarction, target lesion revascularization, and stent thrombosis) were compared between patients with and without severe ASM. The postintervention optical coherence tomography findings indicated that 62 (14.2%) patients had lesions with ≥400 μm of maximum malapposed distance and 186 (42.7%) patients had lesions with ≥1 mm of maximum malapposed length. The 5-year clinical follow-up was completed in 371 (86.1%) of the eligible 431 patients. The cumulative rate of composite events was similar among the patients in each group during 5-year follow-up: 3.3% in patients with ASM ≥400 μm of maximum malapposed distance versus 3.1% in those with no ASM or ASM <400 μm of maximum malapposed distance (P=0.89), and 1.2% in patients with ASM ≥1 mm of maximum malapposed length versus 4.6% in those with no ASM or ASM <1 mm of maximum malapposed length (P=0.06). Conclusions: During the 5-year follow-up, ASM severity was not associated with long-term clinical outcomes in patients treated with drug-eluting stents. 
abstract_id: PUBMED:35967938 <Editors' Choice> Very long-term clinical outcomes after percutaneous coronary intervention for complex vs non-complex lesions: 10-year outcomes following sirolimus-eluting stent implantation. Few studies have reported the long-term outcomes (>10 years) following first-generation drug-eluting stent implantation. In this single-center retrospective study, we investigated the very long-term clinical outcomes after first-generation sirolimus-eluting stent (SES) implantation in patients with complex lesions. The study included 383 consecutive patients who underwent initial SES implantation between July 2004 and January 2006; 84 and 299 of these patients reported a history of percutaneous coronary intervention (PCI) for complex and noncomplex lesions, respectively. Complex PCI was defined as having at least one of the following features: left main trunk PCI, implantation of ≥3 stents, bifurcation lesions with implantation of 2 stents, total stent length >60 mm, or chronic total occlusion. The target lesion revascularization (TLR) rate was significantly higher in the complex PCI than in the noncomplex PCI group (29.4% vs 13.0%, P=0.001), and we observed a significant intergroup difference in the late TLR (>1 year) rates (21.6% vs 9.5%, P=0.008). Late TLR continued over 10 years at a rate of 2.4%/year in the complex PCI and 1.1%/year in the noncomplex PCI group. Cox regression analysis revealed that complex PCI was related to TLR both over 10 years (hazard ratio 2.29, P=0.003) and beyond 1 year (hazard ratio 2.32, P=0.01). Cardiac death was more common in the complex PCI than in the noncomplex PCI group, particularly 4 years after PCI (15.8% vs 7.5%, P=0.031). Sudden death was the major cause of cardiac death beyond 4 years in the complex PCI group. These data indicate that long-term careful follow-up is essential for patients implanted with SES, especially those treated for complex lesions. 
Answer: Based on the provided abstracts, "hybrid stent implantation," which presumably involves using a combination of drug-eluting stents (DES) and other types of stents, does not appear to improve long-term safety without adversely affecting efficacy when treating multilesion coronary artery disease in the drug-eluting stent era. A study comparing hybrid stent implantation to exclusive DES implantation found that the hybrid approach was associated with significantly higher risks of target lesion revascularization (TLR), target vessel revascularization (TVR), and major adverse cardiac events (MACE). There was no significant improvement in safety issues such as mortality, myocardial infarction (MI), or thrombosis (PUBMED:23652038). The abstracts suggest that the use of DES, including sirolimus-eluting stents and their analogs, has been a significant advancement in the treatment of coronary artery disease. However, concerns about long-term safety and efficacy, particularly regarding late adverse clinical events like stent thrombosis, have been raised (PUBMED:22915938). Other studies have focused on optimizing outcomes by considering factors such as stent size selection (PUBMED:28951394), the use of paclitaxel-eluting balloons for DES restenosis (PUBMED:26003022), and the efficacy and safety of novel long-tapered sirolimus-eluting stents (PUBMED:31043984). Intravascular ultrasound (IVUS)-guided DES implantation has been shown to increase efficacy and decrease clinical events, suggesting that precise stent placement and sizing may play a role in improving outcomes (PUBMED:27822889). Additionally, the use of a biodegradable polymer DES, the Yukon Choice Flex sirolimus-eluting stent, has shown a favorable safety and efficacy profile in an all-comers population (PUBMED:24973842). 
Overall, while the concept of hybrid stent implantation might have been considered as a potential strategy to balance the benefits and risks of DES, the evidence from the abstracts does not support its superiority over exclusive DES implantation in terms of long-term safety and efficacy. Instead, the focus seems to be on optimizing the use of DES through better stent selection, placement, and post-procedural imaging guidance.
Instruction: Does laparoscopic and endoscopic cooperative surgery for gastric submucosal tumors preserve residual gastric motility? Abstracts: abstract_id: PUBMED:24039367 Laparoscopic-endoscopic cooperative surgery for gastric submucosal tumors. Aim: To assess the feasibility, safety, and advantages of minimally invasive laparoscopic-endoscopic cooperative surgery (LECS) for gastric submucosal tumors (SMT). Methods: We retrospectively analyzed 101 consecutive patients, who had undergone partial, proximal, or distal gastrectomy using LECS for gastric SMT at Peking Union Medical College Hospital from June 2006 to April 2013. All patients were followed up by visit or telephone. Clinical data, surgical approach, pathological features such as the size, location, and pathological type of each tumor; and follow-up results were analyzed. The feasibility, safety and effectiveness of LECS for gastric SMT were evaluated, especially for patients with tumors located near the cardia or pylorus. Results: The 101 patients included 43 (42.6%) men and 58 (57.4%) women, with mean age of 51.2 ± 13.1 years (range, 14-76 years). The most common symptom was belching. Almost all (n = 97) patients underwent surgery with preservation of the cardia and pylorus, with the other four patients undergoing proximal or distal gastrectomy. The mean distance from the lesion to the cardia or pylorus was 3.4 ± 1.3 cm, and the minimum distance from the tumor edge to the cardia was 1.5 cm. Tumor pathology included gastrointestinal stromal tumor in 78 patients, leiomyoma in 13, carcinoid tumors in three, ectopic pancreas in three, lipoma in two, glomus tumor in one, and inflammatory pseudotumor in one. Tumor size ranged from 1 to 8.2 cm, with 65 (64.4%) lesions < 2 cm, 32 (31.7%) > 2 cm, and four > 5 cm. Sixty-six lesions (65.3%) were located in the fundus, 21 (20.8%) in the body, 10 (9.9%) in the antrum, three (3.0%) in the cardia, and one (1.0%) in the pylorus. 
During a median follow-up of 28 mo (range, 1-69 mo), none of these patients experienced recurrence or metastasis. The three patients who underwent proximal gastrectomy experienced symptoms of regurgitation and belching. Conclusion: Laparoscopic-endoscopic cooperative surgery is feasible and safe for patients with gastric submucosal tumor. Endoscopic intraoperative localization and support can help preserve the cardia and pylorus during surgery. abstract_id: PUBMED:32990798 Closed laparoscopic and endoscopic cooperative surgery for early gastric cancer with difficulty in endoscopic submucosal dissection: a report of three cases. Background: Endoscopic submucosal dissection (ESD) is increasingly applied for early gastric cancer. ESD is a less invasive procedure and could be a radical treatment. However, in some cases, ESD cannot be completed owing to patient or technical factors. In such cases, which could have the potential for curative resection with ESD, standard gastrectomy is excessively invasive. Through closed laparoscopic and endoscopic cooperative surgery (LECS), gastric tumor can be precisely resected without exposing tumor cells to the abdominal cavity. Compared with standard gastrectomy, closed LECS is less invasive for the treatment of early gastric cancer. Case Presentation: We performed closed LECS for three cases of early gastric cancer after failed ESD. In all three cases, ESD was interrupted owing to technical and patient factors, including perforation, respiratory failure, and carbon dioxide narcosis. All three cases successfully underwent closed LECS with complete tumor resection and showed an uneventful postoperative course. All three patients remain alive and have experienced no complications or recurrence, with a median follow up of 30 (14-30) months. Conclusions: Closed LECS is less invasive and useful procedure for the treatment of early gastric cancer, particularly in cases with difficulty in ESD. 
abstract_id: PUBMED:36158631 Laparoscopic and endoscopic cooperative surgery for full-thickness resection and sentinel node dissection for early gastric cancer. The endoscopic submucosal dissection (ESD) technique has become the gold standard for submucosal tumors that have negligible risk of lymph node metastasis (LNM), due to its minimal invasiveness and ability to improve quality of life. However, this technique is limited in stage T1 cancers that have a low risk of LNM. Endoscopic full thickness resection can be achieved with laparoscopic endoscopic cooperative surgery (LECS), which combines laparoscopic gastric wall resection and ESD. In LECS, the surgical margins from the tumor are clearly achieved while performing organ-preserving surgery. To overcome the limitation of classical LECS, namely the opening of the gastric wall during the procedure, which increases the risk of peritoneal tumor seeding, non-exposed endoscopic wall-inversion surgery was developed. With this full-thickness resection technique, contact between the intra-abdominal space and the intragastric space was eliminated. abstract_id: PUBMED:32066204 Laparoscopic-endoscopic cooperative surgery for early gastric cancer with gastroesophageal varices. This report describes a patient with early gastric cancer with gastroesophageal varices who underwent laparoscopic-endoscopic cooperative surgery. The patient, a 75-year-old woman, was referred to our hospital for the treatment of gastric cancer. Esophagogastroduodenoscopy revealed a superficial spreading-type tumor on the posterior wall of the upper third of the stomach that was very close to the gastric varices. Biopsy specimens showed a moderately differentiated tubular adenocarcinoma. Abdominal contrast-enhanced CT showed an enhanced submucosal area with noticeably dilated veins around the perigastric region and no masses in the liver. 
The patient underwent local resection of the stomach using laparoscopic-endoscopic cooperative surgery; this procedure was chosen because of the patient's risk of bleeding from the gastric varices in the area adjacent to the cancer during endoscopic submucosal dissection. Macroscopic examination of the resected specimen showed a superficial spreading lesion measuring 55 × 50 mm. The pathological diagnosis was moderately differentiated tubular adenocarcinoma invading the gastric submucosal layer without lymphovascular invasion. abstract_id: PUBMED:24968310 Does laparoscopic and endoscopic cooperative surgery for gastric submucosal tumors preserve residual gastric motility? Results of a retrospective single-center study. Background: Laparoscopic and endoscopic cooperative surgery (LECS) is a minimally invasive surgical technique used to resect gastric submucosal tumors with intraluminal growth. Endoscopic submucosal dissection is used to determine the appropriate resection line from within the stomach lumen as it minimizes the stomach wall resection area and prevents postoperative stomach deformity. Although LECS is intended to preserve gastric function, few reports have evaluated postoperative residual gastric motility. Therefore, we conducted a retrospective analysis of patients who underwent LECS to determine the effects of LECS on residual gastric motility. Methods: Twenty-two patients underwent endoscopy 3 to 12 months after LECS. Patients were evaluated for endoscopic evidence of gastric motility disorder, namely food residue and occurrence/exacerbation of reflux esophagitis. We considered patients with new onset of gastric symptoms and endoscopic evidence of gastric motility disorder to have clinically relevant gastric motility disorder. We described patient characteristics, tumor location, and surgical findings. Results: Two of 22 patients developed clinically relevant gastric motility disorder after LECS. 
In one of these patients the symptoms were not severe; only the other patient had reduced dietary intake and had lost weight. We identified clinically relevant gastric motility disorder in two patients with gastrointestinal stromal tumors located in the lesser curvature of the stomach. The major axes of these two tumors were 34 mm and 38 mm. Conclusions: Most patients did not have clinically relevant gastric motility disorder after LECS. Further investigation is required to identify predisposing factors for gastric motility disorder. abstract_id: PUBMED:31131352 Laparoscopic endoscopic cooperative surgery (LECS) for the gastrointestinal tract: Updated indications. Laparoscopic and endoscopic cooperative surgery (LECS) is a procedure combining laparoscopic gastric resection with endoscopic submucosal dissection for local resection of gastric tumors with appropriate, minimal surgical resection margins. The LECS concept was initially developed from the classical LECS procedure for gastric submucosal tumor resection. Many researchers reported that classical LECS was a safe and feasible technique for resection of gastric submucosal tumors, regardless of tumor location, including the esophagogastric junction. Recently, LECS was approved for insurance coverage by Japan's National Health Insurance plan and widely applied for gastric submucosal tumor resection. However, the limitations of classical LECS are the risk of abdominal infection, scattering of tumor cells in the abdominal cavity, and tumor cell seeding in the peritoneum. The development of modified LECS procedures, such as inverted-LECS, non-exposed endoscopic wall-inversion surgery, a combination of laparoscopic and endoscopic approaches to neoplasia with a non-exposure technique, and closed-LECS, has almost resolved these drawbacks. This has led to a recent increase in the indication of modified LECS to include patients with gastric epithelial neoplasms. 
The LECS concept is also beginning to be applied to tumor excision in other organs, such as the duodenum, colon and rectum. Further evolution of LECS procedures is expected in the future. Sentinel lymph node mapping could also be combined with LECS, resulting in a portion of early gastric cancers being treated by LECS with sentinel node mapping. abstract_id: PUBMED:25394216 Laparoscopic endoscopic cooperative surgery. Laparoscopic and endoscopic cooperative surgery (LECS) is a newly developed concept for tumor dissection of the gastrointestinal tract that was first investigated for local resection of gastric gastrointestinal stromal tumors (GIST). The first reported version of LECS for GIST has been named 'classical LECS' to distinguish it from other modified LECS procedures, such as inverted LECS, a combination of laparoscopic and endoscopic approaches to neoplasia with a non-exposure technique (CLEAN-NET), and non-exposed endoscopic wall-inversion surgery (NEWS). These modified LECS procedures were developed for dissection of malignant tumors which may seed tumor cells into the abdominal cavity. While these LECS-related procedures might prevent tumor seeding, their application is limited by several factors, such as tumor size, location and technical difficulty. Currently, classical LECS is a safe and useful procedure for gastric submucosal tumors without mucosal defects, independent of tumor location, such as proximity to the esophagogastric junction or pyloric ring. For future applications of LECS-related procedures for other malignant diseases with mucosal lesions such as GIST with mucosal defects and gastric cancer, some improvements in the techniques are needed. abstract_id: PUBMED:30487950 Laparoscopic and endoscopic cooperative surgery for gastric tumors: Perspective for actual practice and oncological benefits. 
Laparoscopic and endoscopic cooperative surgery (LECS) is a surgical technique that combines laparoscopic partial gastrectomy and endoscopic submucosal dissection. LECS requires close collaboration between skilled laparoscopic surgeons and experienced endoscopists. For successful LECS, experience alone is not sufficient. Instead, familiarity with the characteristics of both laparoscopic surgery and endoscopic intervention is necessary to overcome various technical problems. LECS was developed mainly as a treatment for gastric submucosal tumors without epithelial lesions, including gastrointestinal stromal tumors (GISTs). Local gastric wall dissection without lymphadenectomy is adequate for the treatment of gastric GISTs. Compared with conventional simple wedge resection with a linear stapler, LECS can provide both optimal surgical margins and oncological benefit that result in functional preservation of the residual stomach. As a technical characteristic, however, classic LECS involves intentional opening of the gastric wall, resulting in a risk of tumor dissemination with contamination by gastric juice. Therefore, several modified LECS techniques have been developed to avoid even subtle tumor exposure. Furthermore, LECS for early gastric cancer has been attempted according to the concept of sentinel lymph node dissection. LECS is a promising treatment for GISTs and might become a future therapeutic option even for early gastric cancer. Interventional endoscopists and laparoscopic surgeons collaboratively explore curative resection. A simultaneous intraluminal approach with endoscopy allows surgeons to optimize the resection area. LECS, not simple wedge resection, achieves minimally invasive treatment and allows for oncologically precise resection. We herein present detailed tips and pitfalls of LECS and discuss various technical considerations. abstract_id: PUBMED:28616596 Laparoscopic endoscopic cooperative surgery (LECS) for the upper gastrointestinal tract. 
We developed the laparoscopic and endoscopic cooperative surgery (LECS) technique, which combines endoscopic submucosal dissection (ESD) and laparoscopic gastric resection to resect gastric submucosal tumors (SMTs). Many researchers have reported LECS to be a feasible technique for gastric submucosal tumor resection regardless of tumor location, including the esophagogastric junction (EGJ). Recently, the Japanese National Health Insurance system approved LECS for insurance coverage, and it is now widely applied for gastric submucosal tumor resection. Initially, we applied LECS to gastric SMTs without ulcerative lesions due to concern about the possibility of tumor cells seeding into the peritoneal cavity, in a procedure termed "classical LECS". To expand the indication of LECS for gastric epithelial neoplasms, modified LECS procedures such as inverted LECS, non-exposed endoscopic wall-inversion surgery (NEWS), combination of laparoscopic and endoscopic approaches to neoplasia with non-exposure technique (CLEAN-NET), and closed laparoscopic and endoscopic cooperative surgery (closed-LECS) have been developed and applied to patients with gastric epithelial neoplasms. As a future perspective of LECS, we plan to combine the modified LECS procedure and sentinel node (SN) navigation surgery (SNNS) for clinical (c)Stage IA gastric cancer resection. This strategy could be a minimally invasive surgical technique for cStage IA early gastric cancer. abstract_id: PUBMED:26306144 Gastric carcinoma originating from the heterotopic submucosal gastric gland treated by laparoscopy and endoscopy cooperative surgery. Gastric carcinoma is derived from epithelial cells in the gastric mucosa. We reported an extremely rare case of submucosal gastric carcinoma originating from the heterotopic submucosal gastric gland (HSG) that was safely diagnosed by laparoscopy and endoscopy cooperative surgery (LECS). 
A 66-year-old man underwent gastrointestinal endoscopy, which detected a submucosal tumor (SMT) of 1.5 cm in diameter on the lesser-anterior wall of the upper gastric body. The tumor could not be diagnosed histologically, even by endoscopic ultrasound-guided fine-needle aspiration biopsy. Local resection by LECS was performed to confirm a diagnosis. Pathologically, the tumor was an intra-submucosal well-differentiated adenocarcinoma invading 5000 μm into the submucosal layer. The resected tumor had negative lateral and vertical margins. Based on the Japanese treatment guidelines, additional laparoscopic proximal gastrectomy was curatively performed. LECS is a less invasive and safer approach for the diagnosis of SMT, even in submucosal gastric carcinoma originating from the HSG. Answer: Yes, laparoscopic and endoscopic cooperative surgery (LECS) for gastric submucosal tumors generally preserves residual gastric motility. A retrospective single-center study evaluated the effects of LECS on residual gastric motility by examining patients who underwent the procedure. The study found that many patients did not have clinically relevant gastric motility disorder after LECS. Only two of the 22 patients developed clinically relevant gastric motility disorder postoperatively, and in one of these patients, the symptoms were not severe. The study suggests that LECS, which is intended to preserve gastric function, has minimal impact on postoperative gastric motility for most patients (PUBMED:24968310).
Instruction: Does outdoor alcohol advertising around elementary schools vary by the ethnicity of students in the school? Abstracts: abstract_id: PUBMED:18821155 Does outdoor alcohol advertising around elementary schools vary by the ethnicity of students in the school? Objectives: The objectives of this study were to determine: (1) if the number of outdoor alcohol advertisements around schools varied by ethnicity of students in the school; and (2) how the content of alcohol advertising around schools varied by the ethnicity of students in the school. Methods: All outdoor alcohol advertisements within 1500 feet of 63 Chicago schools were documented and coded for content and theme. On average, the ethnic make-up of schools was 54.9% African American, 24.4% Hispanic, and 16.2% White; 79.7% of students were low SES. To compare the mean number and type of ads by ethnicity of the school, schools were coded by ethnicity (i.e., 20% or more Hispanic students compared with less than 20% Hispanic students). Results: Youth attending schools with 20% or more Hispanic students were exposed to 6.5 times more alcohol advertising than students attending schools with less than 20% Hispanic students. Schools with 20% or more Hispanic students were also surrounded by more beer advertising and alcohol advertisements on bars and liquor stores. Conclusions: Alcohol advertising is more prevalent around schools with 20% or more Hispanic students. Policies should be considered to reduce the amount of alcohol advertising around schools. abstract_id: PUBMED:28277028 Targeting Hispanic adolescents with outdoor food & beverage advertising around schools. Objectives: Although some research has focused on the food environment and food marketing, little has examined outdoor food and beverage (FB) advertising, particularly its relationship to the Hispanic composition in schools. 
Therefore, the purpose of this study was to determine if the prevalence of outdoor FB advertising was greater around middle and high schools with a majority Hispanic population as compared to schools with a lower Hispanic population. Design: All FB advertisements located within a half-mile of 47 schools in Central Texas were documented. Advertisements were coded as freestanding or on establishments. Advertisements were coded for theme including price (emphasizing price) and deals/value meals (promoting discounted price/meal deals). These two themes were combined to create an overall price promotion variable. In order to determine if the prevalence of FB advertising varied by the Hispanic composition of the students in the school, data from the Texas Education Agency was used to create a variable which dichotomized the schools into two groups: schools that reported ≥60% Hispanic students or 'Hispanic schools' (n = 21) and schools that reported <60% Hispanic students or 'non-Hispanic schools' (n = 26). Linear regression analyses were conducted to determine if the prevalence of outdoor FB advertising was greater around Hispanic schools as compared to non-Hispanic schools. Results: Hispanic schools had more overall outdoor FB advertisements as compared to non-Hispanic schools (p = 0.02). Similarly, we found significantly more outdoor FB establishment advertisements (p = 0.02) and price promotion advertisements (p = 0.05) around Hispanic schools as compared to non-Hispanic schools. Differences in freestanding advertisements by school type approached significance (p = 0.07), with Hispanic schools having more freestanding FB advertisements on average. Conclusion: Further research is needed that documents the content of these advertisements and determines the extent to which these advertisements affect Hispanic and other racial/ethnic minority youth's attitudes and behaviors toward the consumption of these products.
abstract_id: PUBMED:17568965 Outdoor alcohol advertising near schools: what does it advertise and how is it related to intentions and use of alcohol among young adolescents? Objective: The objectives of this study were to (1) document and describe all outdoor alcohol advertisements surrounding schools and (2) examine the association between exposure to alcohol advertising in sixth grade and youth alcohol use, intentions, norms, and attitudes in eighth grade. Method: All outdoor alcohol advertisements within 1,500 feet of 63 Chicago school sites were documented and coded for content and theme. Longitudinal mixed-effects regression analysis was used to determine the association between number of alcohol advertisements around a school in sixth grade and student alcohol behaviors, intentions, norms, and attitudes at the end of eighth grade, 2 years later. Participants included 2,586 sixth-grade students in the 2002-2003 school year. The sample was 37% black, 33% Hispanic, and 15% white. Gender was evenly distributed, and the average age was 12.2 at the end of sixth grade. Results: A total of 931 alcohol advertisements were found within 1,500 feet of the 63 school sites. Exposure to alcohol advertising around schools at the end of sixth grade was found to predict alcohol intentions at the end of eighth grade. This finding held true even for those students who were nonusers of alcohol in sixth grade. Conclusions: Exposure to outdoor alcohol advertising around schools is associated with subsequent youth intentions to use alcohol. The association between exposure to alcohol advertising and youth alcohol-use intentions was found even among sixth-grade nonusers of alcohol, suggesting that even those who have not used alcohol are still influenced by alcohol advertising. These findings suggest that restrictions in alcohol advertising near schools may be warranted. 
abstract_id: PUBMED:19076742 The commercial food landscape: outdoor food advertising around primary schools in Australia. Objective: Food marketing is linked to childhood obesity through its influence on children's food preferences, purchase requests and food consumption. We aimed to describe the volume and nature of outdoor food advertisements and factors associated with outdoor food advertising in the area surrounding Australian primary schools. Methods: Forty primary schools in Sydney and Wollongong were selected using random sampling within population density and socio-economic strata. The area within a 500 m radius of each school was scanned and advertisements coded according to pre-defined criteria, including: food or non-food product advertisement, distance from the school, size and location. Food advertisements were further categorised as core foods, non-core foods and miscellaneous drinks (tea and coffee). Results: The number of advertisements identified was 9,151, of which 2,286 (25%) were for food. The number of non-core food advertisements was 1,834, accounting for 80% of food advertisements. Soft drinks and alcoholic beverages were the food products most commonly advertised around primary schools (24% and 22% of food advertisements, respectively). Non-core food products were twice as likely to be advertised close to a primary school (95 non-core food advertisements per km(2) within 250 m vs. 46 advertisements per km(2) within 250-500 m). Conclusions: The density of non-core food advertisements within 500 m of primary schools, and the potential for repeated exposure of children to soft drink and alcoholic beverage advertisements in particular, highlights the need for outdoor food marketing policy intervention. Implications: Outdoor advertising is an important food marketing tool that should be considered in future debates on regulation of food marketing to children.
abstract_id: PUBMED:33845809 Mapping of outdoor food and beverage advertising around primary and secondary schools in Kampala city, Uganda. Background: Marketing of unhealthy foods and beverages is recognized as a contributing factor to the global increase in overweight and obesity, particularly among children. Such marketing negatively affects children's dietary preferences, food choices, purchasing requests, and consumption patterns. Given that little is known about food marketing in Africa, including in Uganda, monitoring children's exposure to food marketing is essential to generate evidence on the problem and develop meaningful policy responses. The aim of this study was to describe the food and beverage marketing environment surrounding schools in urban and peri-urban areas of Kampala city. Methods: Outdoor advertising around 25 randomly sampled primary and secondary schools within a radius of 250 m of each school was mapped. Information on size, setting, type, and position of the advertisements and the healthiness of the foods and beverages promoted was collected using the INFORMAS Outdoor Advertising Protocol. The occurrence of advertising was described using frequencies, medians, and density per 100 m². Results: A total of 1034 branded advertisements were identified around the schools. Of these, 86% featured unhealthy products, 7% healthy products, and 7% miscellaneous products. The most advertised products were sugar-sweetened beverages and alcoholic beverages (51 and 23%, respectively). Schools in the urban area were surrounded by more unhealthy ads than those in the peri-urban areas (median of 45 vs 24 advertisements). Conclusion: The widespread extent of unhealthy food and beverage advertisements around primary and secondary schools highlights the need for food marketing regulation in Uganda, in line with the World Health Organization's recommendations, to ensure that young people are protected from unhealthy food marketing.
abstract_id: PUBMED:34418222 Children's exposure to outdoor food advertising near primary and secondary schools in Australia. Issue Addressed: Previous research has highlighted children's frequent exposure to advertisements of unhealthy food and beverages on television. However, the food industry is increasingly utilising non-broadcast channels such as outdoor advertising (eg billboards, bus shelters, shop fronts) for product marketing. Few studies have investigated children's exposure to outdoor food advertising around primary and secondary schools. This study aimed to quantify the presence and content of outdoor food advertisements within a 500 m radius of primary and secondary schools in Perth, Western Australia. Methods: A cross-sectional design was used to capture outdoor advertisements within a 500 m radial buffer around the school boundary. The INFORMAS protocol for monitoring outdoor advertising around child-serving institutions was used. Sixty-four primary and secondary schools in Perth, Australia, were selected using random sampling within socio-economic and population density strata. Results: In total, 5636 outdoor advertisements were identified within a 500 m radius of all 64 schools combined and 30% were for food. Of the 1708 food advertisements, 74% were for unhealthy (non-core) food. The most frequently advertised food products were alcohol, fast food and sugar-sweetened beverages. Only 8% of food advertisements featured a healthy product. The majority of schools (75%) had at least one food advertisement within 500 m (mean 36, range 3-190). Schools in lower socio-economic areas had more food advertisements and a significantly higher proportion of unhealthy food advertisements within 250 m. Conclusion: Outdoor advertising around schools constitutes a potential frequent source of children's exposure to unhealthy food and alcohol advertising. SO WHAT?: Policy interventions restricting the content of outdoor food advertising near schools are needed. 
abstract_id: PUBMED:32718399 Density of tobacco advertising around schools. BACKGROUND: Indonesia has the second highest smoking prevalence among adult males in the world, with over 61.4 million current smokers. However, there is no national regulation on outdoor tobacco advertising. OBJECTIVE: To assess the density of outdoor tobacco advertising around schools in Semarang City, Indonesia. METHODS: We conducted geospatial analyses using buffer and hotspot analyses based on advertising and school data in ArcMap 10.6. We statistically tested the significance of different densities, including between 100 m and 100-300-m buffers from schools using Stata 15.1. RESULTS: We found a total of 3453 tobacco advertisements, of which 3026 (87%) were at least medium in size (1.3 m x 1.9 m), and 2556 (74%) were within 300 m of schools. We also found hotspots with a 45% higher density of adverts within 100 m of schools (compared to within 100-300 m). A total of 378 schools (39%) were in these advertising hotspots. CONCLUSION: There was a high density of outdoor tobacco advertising, with significant clusters in close proximity to schools in Semarang City. The policy implications of this are discussed. abstract_id: PUBMED:34547127 Effects of a web-based alcohol drinking prevention program linking school-to-home in elementary students. Objectives: We investigated the effects of a web-based alcohol drinking prevention program among early elementary school students in South Korea using the theory of planned behavior. This study utilized a combined family and school intervention approach. Design: This was a quasi-experimental study with a non-equivalent control group. Sample: A total of 251 third graders (experimental group: 170, control group: 81) from four public schools were included. Measurements: A web-based alcohol drinking prevention program was implemented in five weekly sessions.
Students' attitudes, subjective norms, perceived behavioral control (PBC), and intentions toward alcohol drinking prevention behavior were evaluated through pre- and post-tests. Results: The average age of the participants was 9 years in both the experimental (82 male and 88 female students) and control (44 male and 37 female students) groups. A significant improvement in attitudes, PBC, and intentions toward alcohol drinking prevention behaviors was found in the experimental group. Conclusions: A web-based alcohol drinking prevention program combining family and school interventions for lower elementary school students was effective at increasing intentions toward alcohol drinking prevention behavior. Alcohol drinking prevention programs should target lower elementary school students to effectively postpone the onset of alcohol drinking. abstract_id: PUBMED:34585012 Entrepreneurial self-efficacy among elementary students: the role of entrepreneurship education. Despite increasing scholarly attention to entrepreneurship education, there is a lack of empirical evidence on how to prepare for entrepreneurial education at an early age. Therefore, this study examines the effect of the outdoor learning environment and entrepreneurial education on entrepreneurial self-efficacy in Indonesia's elementary school students. This study included 320 fifth- and sixth-grade students as respondents from several elementary schools in Jakarta, Indonesia. The sample was gathered using convenience sampling by distributing online questionnaires with five scales of measurement. The findings showed that the outdoor learning environment is closely related to entrepreneurship education and entrepreneurial self-efficacy. Second, entrepreneurship education plays a significant role in mediating the relationship between the outdoor learning environment and students' self-efficacy.
Our research offers future researchers insight that primary education plays a significant role in the growth and development of self-efficacy and entrepreneurial intentions. abstract_id: PUBMED:34886166 Attending a Sports Club Can Help Prevent Visual Impairment Caused by Cram School in Elementary School Children in Japan. Longer durations for near-work activities, such as studying, worsen eyesight. In contrast, outdoor exercise is effective in reducing the risk of developing myopia. Despite these findings, however, the interaction between studying and exercise on eyesight has not been quantitatively evaluated. Moreover, since there is a culture of attending lessons in Japan, it is important to investigate the relationship between elementary school activities, such as cram schools or sports clubs, and vision. Therefore, in this study, we examined whether attending cram schools and/or sports clubs is associated with the use of glasses among elementary school students. We conducted a survey among 7419 elementary school students in Tokyo, Japan, using a food education questionnaire. A logistic regression analysis was used to evaluate the relationship between wearing glasses, the outcome variable, and attending sports clubs and cram schools. Sex and school year were considered confounding factors. The results of this study showed that students who attended only sports clubs were more likely to be categorized into the "not wearing glasses" group (p = 0.03, OR = 1.45), whereas those who attended only cram schools were more likely to be categorized into the "wearing glasses" group (p = 0.008, OR = 0.67). In addition, students who attended both cram schools and sports clubs were more likely to be categorized into the "not wearing glasses" group than those who only attended cram schools (p = 0.28, OR = 0.85).
Parents and health care providers need to take these findings into account in order to prevent visual impairment in children. Answer: Yes, outdoor alcohol advertising around elementary schools does vary by the ethnicity of students in the school. A study found that youth attending schools with 20% or more Hispanic students were exposed to 6.5 times more alcohol advertising than students attending schools with less than 20% Hispanic students. Additionally, schools with a higher percentage of Hispanic students were surrounded by more beer advertising and alcohol advertisements on bars and liquor stores (PUBMED:18821155).
Instruction: Can extracorporeal magnetic innervation be a treatment modality for primary dysmenorrhea? Abstracts: abstract_id: PUBMED:24853619 Can extracorporeal magnetic innervation be a treatment modality for primary dysmenorrhea? Background/aims: To evaluate the efficacy of extracorporeal magnetic innervation (ExMI) as a treatment for primary dysmenorrhea compared with nonsteroidal anti-inflammatory drugs (NSAIDs) and combined oral contraceptives (COCs). Methods: The cases were randomized into three groups (NSAID = 51, ExMI = 53, COC = 54). ExMI was applied for a total of 10 sessions. Women in the NSAID group used an oral NSAID at the start of each menstruation. Women in the COC group were given combined pills. Of the treatment options, ExMI was applied for only a single period, whereas NSAID and COC use continued for 12 months. Results: At the first menstruation, visual analog scale (VAS) scores improved significantly in all groups (p < 0.001). NSAIDs and COCs continued to show efficacy over the entire study period (p < 0.05). However, in the ExMI group, VAS values increased from the first menstruation until 12 months. The VAS score at the 12th month was significantly higher in the ExMI group than in the other groups (p < 0.05), but markedly lower than the pretreatment value (49.9 ± 8.3 vs. 71.1 ± 10.1, p < 0.001). Conclusions: ExMI therapy might be a promising novel noninvasive option for primary dysmenorrhea. Efficacy began to decline after 3 months, but continued for 12 months. abstract_id: PUBMED:32046753 Instant analgesic effect of radial extracorporeal shock wave therapy on primary dysmenorrhoea according to functional magnetic resonance imaging: study protocol for a randomised placebo-controlled trial. Background: Primary dysmenorrhoea (PDM) is defined as a series of pain-dominated symptoms during and after menstruation without organic lesions. 
Nonsteroidal anti-inflammatory drugs and oral contraceptives are usually recommended as first-line therapy for the clinical treatment of PDM, but their widespread long-term application is controversial. Radial extracorporeal shock wave therapy (rESWT) has been widely applied in musculoskeletal rehabilitation because of its safe and noninvasive characteristics and its confirmed effect in improving pain symptoms. This research seeks to explore the efficacy of rESWT for PDM and the changes in brain function of patients with PDM. Methods: This clinical research will be a randomised, blind, sham-controlled trial. Thirty-six patients with PDM will be randomly divided into the rESWT group (n = 18) and the sham rESWT group (n = 18). In the rESWT group, treatment will be applied once within 48 h of menstruation at six abdominal myofascial trigger points. The sham rESWT group will receive sham shockwave therapy on the same sites but without energy input. Other dysmenorrhoea-related treatments in both groups will be limited. The main indicators include the short form of the McGill Pain Questionnaire and the Cox Menstrual Symptom Scale. The secondary indicators include the Zung Self-rating Anxiety Scale and Self-rating Depression Scale and functional magnetic resonance imaging (fMRI) changes in brain regions. Results will be evaluated at screening, at baseline, and before and after treatment, and adverse events will be examined. Inter- and intragroup analyses will be performed. Discussion: This randomised controlled study is designed to explore the immediate efficacy of rESWT for PDM. After rESWT treatment, PDM symptom tests and pain tests, as well as fMRI data, will be investigated for potential connections between the immediate neuroanalgesic mechanisms and pain-associated brain networks.
The main results will be used to assess the efficacy of rESWT, and secondary results will focus on improving the neurobiological understanding of disease treatment. Trial Registration: China Clinical Trial Register, ChiCTR1900020678. Registered on 13 January 2019. abstract_id: PUBMED:29160427 Review of magnetic resonance-guided focused ultrasound in the treatment of uterine fibroids. Uterine leiomyoma is the most frequently occurring solid pelvic tumor in women during the reproductive period. Magnetic resonance-guided high-intensity focused ultrasound is a promising technique for decreasing menorrhagia and dysmenorrhea in symptomatic women. The aim of this study is to review the role of magnetic resonance-guided high-intensity focused ultrasound in the treatment of uterine fibroids in symptomatic patients. We performed a review of the MEDLINE and Cochrane databases up to April 2016. The analysis and data collection were performed using the following keywords: Leiomyoma, High-Intensity Focused Ultrasound Ablation, Ultrasonography, Magnetic Resonance Imaging, Menorrhagia. Two reviewers independently performed a quality assessment; when there was a disagreement, a third reviewer was consulted. Nineteen studies of magnetic resonance-guided high-intensity focused ultrasound-treated fibroid patients were selected. The data indicated that tumor size was reduced and that symptoms were improved after treatment. There were few adverse effects, and they were not severe. Some studies have reported that in some cases, additional sessions of magnetic resonance-guided high-intensity focused ultrasound or other interventions, such as myomectomy, uterine artery embolization or even hysterectomy, were necessary. This review suggests that magnetic resonance-guided high-intensity focused ultrasound is a safe and effective technique. However, additional evidence from future studies will be required before the technique can be recommended as an alternative treatment for fibroids.
abstract_id: PUBMED:33592837 Extracorporeal shock wave therapy for treating primary dysmenorrhea: A randomized controlled trial. Background: There are scant data on applying radial extracorporeal shock wave therapy (rESWT) to acupuncture points in the lower abdomen to reduce menstrual pain. This trial aimed to test the safety and efficacy of rESWT for treating primary dysmenorrhea (PD). Methods: Forty-four young women with PD were randomly assigned to one of three groups: to receive rESWT on the acupuncture points during the follicular phase (Group A, n = 15) or during the luteal phase (Group B, n = 14), or to apply a heat patch to the acupuncture points during the follicular phase as the control (Group C, n = 15) over three menstrual cycles. The pain severity (using a 0-to-10 visual analog scale), the pain duration (hours), plasma prostaglandin F2alpha (PGF2α) and prostaglandin E2 (PGE2), self-rating anxiety scale and menstrual blood loss were assessed before and after interventions. Results: The pain severity and duration significantly decreased in all groups after interventions. Although the reduced pain duration was not different among the groups, the reduced pain severity was more significant (P = .003) in Groups A (-53.8 ± 33.7%) and B (-59.3 ± 36.7%) than in Group C (-18.7 ± 27.1%). The rESWT intervention did not change plasma prostaglandins in Group A, although there was a decreased prostaglandin F2alpha (-20.5 ± 32.9%) in Group B and a decreased PGE2 (-18.9 ± 17.8%) in Group C. The anxiety level showed no change after intervention. The menstrual blood volume decreased slightly after intervention, and the change in menstrual blood loss in Group B was significant (P = .038). Conclusion: The rESWT applications on the abdominal acupuncture points safely and effectively reduced the menstrual pain, which was not associated with the prostaglandin changes.
The rESWT-induced pain reduction seemed equally effective whether the intervention was applied during the follicular or the luteal phase of the menstrual cycle. A heat patch placed on the abdominal acupuncture points also reduced the pain severity and duration, indicating that the improved blood flow could effectively alleviate menstrual pain in PD. The changes in anxiety level and menstrual blood loss were slight after intervention. abstract_id: PUBMED:37202156 Comparison of high-intensity focused ultrasound for the treatment of internal and external adenomyosis based on magnetic resonance imaging classification. Objective: To compare the safety and efficacy of high-intensity focused ultrasound (HIFU) treatment for patients with internal or external adenomyosis based on magnetic resonance imaging (MRI) classification. Materials And Methods: A total of 238 patients with internal adenomyosis and 167 patients with external adenomyosis who received HIFU treatment were enrolled. HIFU treatment results and adverse effects between patients with internal and external adenomyosis were compared. Results: The treatment time and sonication time for patients with external adenomyosis were significantly longer than those for patients with internal adenomyosis. The total energy used and EEF for patients with external adenomyosis were higher than those for patients with internal adenomyosis (p < 0.05). The pre-HIFU median dysmenorrhea score in patients with internal or external adenomyosis was 5 or 8 points; the median score decreased to 1 or 3 points, respectively, in these two groups at 18 months post-HIFU (p < 0.05). The relief rate of dysmenorrhea was 79.5% in patients with internal adenomyosis, and it was 80.8% in patients with external adenomyosis. The pre-HIFU median menorrhagia score in patients with internal or external adenomyosis was 4 or 3 points; the median score decreased to 1 point in both groups at 18 months post-HIFU, with a relief rate of 86.2% and 77.1%, respectively (p = 0.030).
No serious complication occurred in any of these patients. Conclusions: HIFU is a safe and effective treatment for patients with either internal or external adenomyosis. Internal adenomyosis appeared to be easier to treat with HIFU, with a higher relief rate of menorrhagia, than external adenomyosis. abstract_id: PUBMED:22403954 The application of combined physical therapy for the treatment of women with pelvic pain syndrome. A total of 98 women presenting with pelvic pain syndrome of different etiology (inflammatory diseases of small pelvic organs following surgical interventions, dysmenorrhoea, ovulatory syndrome) were enrolled in this study with the objective of estimating the efficacy of the combined central and local application of physiotherapeutic techniques. An AVIM-1 apparatus was used to cause vibration in and apply a magnetic field to the perineal region. Transcranial magnetotherapy and electrostimulation were applied centrally using an AMO-ATOS-E device. The study has demonstrated that the combined central and local application of physiotherapeutic techniques, supplemented by drug treatment according to indications, made it possible to attenuate pain syndrome by 64.6% and improve the vegetative nervous function and cerebral blood circulation by 30% and 18%, respectively, compared with the women in the control group. Moreover, a 1.5-2-fold improvement in the main characteristics of microcirculation was achieved. The overall resistance increased under the influence of physiotherapy in 71% of the women, which promoted their reproductive potential and enhanced the probability of pregnancy.
Although laparoscopy, hysterosalpingography, and hysteroscopy can add diagnostic accuracy, they are invasive and costly. Magnetic resonance imaging (MRI) has been increasingly used because it is both noninvasive and highly accurate. Although MRI is more expensive than ultrasound, it is less so than surgery. Given the demonstrated accuracy of MRI in assessing müllerian anomalies, additional imaging is not often sought once an MRI diagnosis is made. However, when imaging findings are not pathognomonic via MRI or otherwise, inaccurate diagnoses and their consequences may occur. We describe the case of a 21-year-old woman with unilateral dysmenorrhea whose MRI features suggested a unicornuate uterus with a hematometrous noncommunicating horn, although laparoscopy ultimately revealed a necrotic myoma without an accompanying müllerian anomaly. abstract_id: PUBMED:29390219 Combined treatment of endometriosis: radical yet gentle. Endometriosis is the second most common benign female genital disease after uterine myoma. This review discusses the management of individual patients, which should take into account the severity of the disease and whether the patient desires to have children. Particular emphasis is laid on the anatomical intersections which, when injured, can lead to persistent damage of the anterior, middle or posterior compartment and are not infrequently the cause of urological and urogynaecological follow-up measures. Typical symptoms of endometriosis include chronic pelvic pain, subfertility, dysmenorrhoea, deep dyspareunia, cyclical bowel or bladder symptoms (e. g. dyschezia, bloating, constipation, rectal bleeding, diarrhoea and haematuria), abnormal menstrual bleeding, chronic fatigue and low back pain. Approx. 50 % of all female teenagers and up to 32 % of all women of reproductive age who have been operated on for chronic pelvic pain or dysmenorrhoea suffer from endometriosis.
The time interval between the first unspecific symptoms and the medical diagnosis of endometriosis is about 7 years. This is caused not only by the non-specific nature of the symptoms but also by the frequent lack of awareness on the part of the cooperating disciplines with which the patients have first contact. As the pathogenesis of endometriosis is not clearly understood, causal treatment is still impossible. Treatment options include expectant management, analgesia, hormonal medical therapy, surgical intervention and the combination of medical treatment before and/or after surgery. The treatment should be as radical as necessary and as minimal as possible. The recurrence rate among treated patients lies between 5 % and > 60 % and is very much dependent on integrated management and surgical skills. Consequently, to optimise the individual patient's treatment, a high degree of interdisciplinary cooperation in diagnosis and treatment is crucial and should be reserved for appropriate centres - especially in the case of deep infiltrating endometriosis. abstract_id: PUBMED:25523859 Magnetic resonance-guided focused ultrasound surgery (MRgFUS) of uterine fibroids in Singapore. Introduction: Uterine fibroids are the most common type of gynaecologic benign tumours, occurring in 25% to 50% of women during their reproductive lives. About half of the affected women have clinically significant symptoms, including abnormal bleeding, menstrual pain, frequent urination, constipation and abdominal distension. Magnetic resonance-guided focused ultrasound surgery (MRgFUS) has been used to treat patients with benign lesions and a variety of malignancies. The objective of this study is to evaluate symptom relief before and after MR-guided ultrasound ablation of fibroids. Materials And Methods: A total of 37 patients with symptomatic uterine fibroids were treated in this study.
Results: MRgFUS treatment led to a significant, time-dependent decrease in not only Symptom Severity Scores (SSS), but also the mean fibroid volume. The average reductions in volume were 41.6% and 52.6% at 6 months and 12 months respectively (P <0.05). The mean SSS of the 37 patients was 41.7 ± 2.8 before treatment whereas the average SSS was 26.9 ± 3.6, 20.7 ± 3.4, 18.5 ± 3.6, 16.5 ± 7.1, 9.8 ± 3.6 at 3 months, 6 months, 1 year, 2 years, and 3 to 4 years respectively. The decrease in scores was significant at all time points up to 3 to 4 years (P <0.05 and P <0.001). Conclusion: MRgFUS is a safe and effective non-invasive treatment for patients with symptomatic fibroids. abstract_id: PUBMED:31880762 An innovative approach to the combination treatment of endometriosis Background: According to modern statistics, endometriosis is a disease that occupies the third place among the most problematic diseases of the female reproductive system, since it develops mainly in young women and, therefore, negatively affects their reproductive function. The paper presents the data obtained from a clinical trial of the combination treatment of Stages I-III endometriosis, including those using sanatorium-and-spa aftercare using radon tampons. Objective: To identify the most effective treatment for endometriosis. Material And Methods: Examinations were made in 348 women of reproductive age (mean age 30±4 years) with Stages I-III endometriosis. According to the therapy received, the patients were divided into 2 groups: 1) 174 patients who used drug therapy and radon tampons; 2) 174 patients who received only drug therapy. Results: Resultative positive changes were observed in Group 1 patients who received a combination of anti-inflammatory therapy, pathogenetic therapy of endometriosis, and sanatorium spa aftercare with radon tampons. 
Pain syndrome was eliminated in 93.7% of cases, dysmenorrhea in 85.2%, and dyspareunia in 100%; menstrual blood loss decreased in 55.6% of patients; pregnancy occurred in 97.1% of women; the psychoemotional background improved in 100% of cases; in 96.2% of cases, subsequent recurrences were not observed (in the study period of 2014-2018). Conclusion: The results of the study showed that a comprehensive, staged approach to treating Stages I-III endometriosis is fundamental to success in women's reproductive health. Answer: Yes, extracorporeal magnetic innervation (ExMI) can be considered a treatment modality for primary dysmenorrhea. A study evaluated the efficacy of ExMI for primary dysmenorrhea and compared it with nonsteroidal anti-inflammatory drugs (NSAIDs) and combined oral contraceptives (COCs). The study found that at the first menstruation, visual analog scale (VAS) scores improved significantly in the ExMI group, indicating a reduction in pain. Although the efficacy of ExMI began to decline after 3 months, it continued to provide some level of pain relief for up to 12 months. The VAS score at the 12th month was significantly higher in the ExMI group than in the NSAID and COC groups, but it was still markedly lower than the pretreatment value, suggesting that ExMI therapy might be a promising noninvasive option for primary dysmenorrhea (PUBMED:24853619).
Instruction: Does maternal hydronephrosis have an impact on urinary neutrophil gelatinase-associated lipocalin (uNGAL) levels? Abstracts: abstract_id: PUBMED:33815957 Urinary neutrophil gelatinase-associated lipocalin level (uNGAL) may predict the severity of congenital hydronephrosis in infants. Clinical findings suggest that the urinary neutrophil gelatinase-associated lipocalin (uNGAL) level may be a highly sensitive biomarker and predictor of progressive tubular and glomerular injury. This cross-sectional study aimed to determine the predictive power of uNGAL in infants with congenital hydronephrosis. Forty-five children (30 males and 15 females) under the age of two with congenital obstructive uropathy were evaluated for urinary levels of creatinine, uNGAL and uNGAL/uCreatinine (Cr) ratio. In total, 62.2% of patients had mild, 15.6% had moderate and 22.2% had severe hydronephrosis. We observed a significantly higher uNGAL level in cases with the severe form than in cases with mild to moderate forms (P=0.002). Also, infants with severe hydronephrosis showed a higher ratio of uNGAL/uCr compared with mild to moderate cases (P=0.006). Correlation analysis showed a significant inverse correlation between uCr levels and pelvic diameter (P=0.002) and direct correlations between uNGAL and uNGAL/uCr ratio and pelvic diameter (P<0.001). By defining a cut-off point of 73.7 ng/ml for uNGAL in ROC analysis, we observed a sensitivity of 70.0% and a specificity of 91.4% for the prediction of severe hydronephrosis. Our results indicate the potential predictive value of uNGAL and uNGAL/uCr ratio for hydronephrosis and, more importantly, for discrimination of severe hydronephrosis from mild to moderate forms. abstract_id: PUBMED:27125324 Does maternal hydronephrosis have an impact on urinary neutrophil gelatinase-associated lipocalin (uNGAL) levels?
Objective: To determine urinary neutrophil gelatinase-associated lipocalin (uNGAL) levels and creatinine clearance values in women with different degrees of asymptomatic hydronephrosis during pregnancy. Methods: A total of 44 pregnant women with different degrees of hydronephrosis and 46 without hydronephrosis were consecutively enrolled in this prospective study. Basic serum and urine parameters, uNGAL levels, and creatinine clearance values were evaluated. All results were compared between the two groups. Regression analysis was used to determine independent predictors, which were mostly related to hydronephrosis. Results: Demographic data, basal laboratory parameters, and creatinine clearance values were similar, whereas significantly higher uNGAL levels were detected in women with hydronephrosis compared to those without hydronephrosis (45.3 versus 33.2 ng/mL, respectively) (p = 0.004). An increasing trend in uNGAL levels was detected with increasing degrees of hydronephrosis, although it was not statistically significant (p = 0.163). Linear regression analysis identified "pelvic diameter" as a significant independent factor influencing uNGAL concentrations (β = 0.289; 95% CI: 0.522-3.061; p = 0.006). Other independent variables were not found to influence uNGAL concentrations (p > 0.05). Conclusion: The results obtained from this study indicate a significant increase in the urinary concentration of NGAL in the presence of asymptomatic maternal hydronephrosis. This impact is likely to be more profound in those with severe hydronephrosis, although this has not been specifically investigated. This theory needs to be validated in larger populations. abstract_id: PUBMED:27785626 Neutrophil gelatinase-associated lipocalin: utility in urologic conditions. Neutrophil gelatinase-associated lipocalin (NGAL) is best known as a non-invasive early marker of acute kidney injury (AKI).
However, recent published reports have described additional utility of both plasma NGAL (pNGAL) and urine NGAL (uNGAL) in various pathologic conditions within the pediatric urinary tract, including urinary tract infection, vesicoureteral reflux (VUR), renal scarring, and obstructive uropathy. These two forms of NGAL have different applications related to their mechanisms of upregulation: pNGAL can serve as a marker of systemic inflammatory conditions, whereas uNGAL is specific for insults to the renal epithelium. Therefore, pNGAL has good predictive accuracy in systemic inflammation associated with pyelonephritis and renal damage, while uNGAL is an effective marker for identifying infection within the genitourinary environment as well as subclinical renal damage as a result of scarring or obstruction. Continued work should focus on the effect of trending NGAL values in patients with pyelonephritis, VUR, and hydronephrosis, to determine if longitudinal NGAL patterns have value in predicting adverse outcomes. abstract_id: PUBMED:36057907 Role of urinary Neutrophil Gelatinase-Associated Lipocalin (NGAL), Monocyte Chemoattractant Protein-1(MCP-1), and Interleukin-6(IL-6) as biomarkers in pediatric patients with hydronephrosis. Objectives: The decision to surgically intervene in a hydronephrotic kidney in children is based on many debatable guidelines, some requiring repeated ultrasounds or renal scans. Urinary proteins have the potential to reflect renal disorders and hence can be the alternatives to such scans. Here, we aim to assess the role of urinary Neutrophil Gelatinase-Associated Lipocalin, Monocyte Chemoattractant Protein-1, and Interleukin-6 (IL-6) in such patients. Methods: Seventeen children had obstructive hydronephrosis requiring pyeloplasty (UPJO), while seven were kept on conservative management in view of non-obstructive dilation (NOD). 
Urine samples were assayed for the three urinary proteins at the time of presentation and following pyeloplasty using commercially available ELISA kits. Results: The levels of all three urinary proteins were significantly higher in children with UPJO compared to the NOD group. Cut-off values to differentiate obstructive from non-obstructive hydronephrosis were obtained. A significant fall in the post-operative value of urinary IL-6 was also observed. Conclusion: This study highlights the potential of urinary proteins as biomarkers in identifying children with hydronephrosis and in selecting those with obstructive hydronephrosis who will require pyeloplasty. The drop in levels after pyeloplasty can be employed to evaluate the effectiveness of pyeloplasty when measured serially. abstract_id: PUBMED:27744618 Plasma neutrophil gelatinase-associated lipocalin: a marker of acute pyelonephritis in children. Objectives: This study was designed to compare the diagnostic accuracy of plasma neutrophil gelatinase-associated lipocalin (NGAL) with procalcitonin (PCT), C-reactive protein (CRP), and white blood cells (WBCs) for predicting acute pyelonephritis (APN) in children with febrile urinary tract infections (UTIs). Materials And Methods: In total, 138 children with febrile UTIs (APN 59, lower UTI 79) were reviewed retrospectively. Levels of NGAL, PCT, CRP, and WBCs in blood were measured on admission. The diagnostic accuracy of the biomarkers was investigated. Independent predictors of APN were identified by multivariate logistic regression analysis. Results: Receiver operating curve (ROC) analyses showed good diagnostic profiles of NGAL, PCT, CRP, and WBCs for identifying APN [area under the curve (AUC) 0.893, 0.855, 0.879, and 0.654, respectively]. However, multivariate analysis revealed that only the plasma NGAL level was an independent predictor of APN (P = 0.006).
At the best cutoff values of all examined biomarkers for identifying APN, sensitivity (86 %), specificity (85 %), positive predictive value (81 %), and negative predictive value (89 %) of plasma NGAL levels were the highest. The optimal NGAL cutoff value was 117 ng/ml. The positive likelihood ratio [odds ratio (OR) 5.69, 95 % confidence interval (CI) 3.56-8.78], and negative likelihood ratio (OR 0.16, 95 % CI 0.08-0.29) of plasma NGAL for APN diagnosis also showed it seemed to be more accurate than serum PCT, CRP, and WBCs. Conclusion: Plasma NGAL can be more useful than serum PCT, CRP, and WBC levels for identifying APN in children with febrile UTIs. abstract_id: PUBMED:35360434 Serum and Urinary Neutrophil Gelatinase-Associated Lipocalin Levels as Early Markers of the Renal Function in Patients With Urinary Stone-Induced Hydronephrosis. Introduction: Urinary stones cause hydronephrosis, which leads to kidney function impairment. The serum creatinine level is frequently used as a marker of kidney function. However, in some patients with hydronephrosis, it does not reflect the kidney function changes in the early stages of kidney stone disease. Neutrophil gelatinase-associated lipocalin (NGAL) is a novel indicator of the kidney function. Previous NGAL-related research has focused on its role in acute kidney injury. This study aimed to determine the usefulness of NGAL as an early marker of the kidney function in patients with urinary stones and hydronephrosis. Methods: Eighty-five patients with urinary stones who were admitted to the Shanghai Tenth People's Hospital (USP group) and 65 healthy volunteers (NC group) were recruited. Blood and urine samples collected from the study participants were evaluated using an enzyme-linked immunosorbent assay to determine the NGAL concentration. Data on the height, weight, age, medical history, and blood and urine findings were collected. Computed tomography data were collected from the USP group. 
Results: Compared with the NC group, NGAL levels were significantly elevated in the USP group (P < 0.001). However, no significant differences in the NGAL levels were observed among the USP group members with different degrees of hydronephrosis. Furthermore, no significant between-group differences in the creatinine level or the estimated glomerular filtration rate were observed. The areas under the receiver operating characteristic curves for the serum and urinary NGAL levels with hydronephrosis were 92.03 and 99.54%, respectively. The areas under the receiver operating characteristic curves for the serum and urinary NGAL levels with kidney stones were 85.05 and 91.89%, respectively. Conclusion: NGAL is a sensitive indicator of hydronephrosis secondary to urinary stones. abstract_id: PUBMED:26096437 Urinary NGAL, KIM-1 and L-FABP concentrations in antenatal hydronephrosis. Introduction: The clinical tests currently in use for obstructive nephropathy (such as renal ultrasonography, differential radionuclide renal scans and urinary creatinine concentration data) are not efficient predictors of the subsequent clinical course. Novel and simple biomarkers are required which, if proven, could be clinically beneficial in determining if a patient is eligible for surgery or reno-protective therapy. More recently, the interest of clinicians has focused on the potential of urinary neutrophil gelatinase-associated lipocalin (uNGAL), urinary kidney injury molecule-1 (uKIM-1) and urinary liver-type fatty acid-binding proteins (uL-FABP) as biomarkers for renal function in children with hydronephrosis (HN). Objective: The purpose of this study was to investigate possible clinical applications of uNGAL, uKIM-1 and uL-FABP as beneficial non-invasive biomarkers to determine whether or not surgical intervention is required in children with HN. Study Design: Renal ultrasonography and radionuclide renal scans were used as diagnostic tools to detect HN.
Patients were divided into two groups based on the antero-posterior diameter of their renal pelvis and the presence of dysfunction. Group 1 included 26 children with severe HN (with dysfunction), and group 2 consisted of 36 children with mild HN (without dysfunction). Urine samples were collected from 62 children with HN and 20 healthy children. Results: Hydronephrosis was more common in males than in females, with a male to female ratio of 9:1 in the study sample. The incidence of left kidney involvement (32 patients) was slightly higher than right kidney involvement (28 patients). Compared with controls and group 2, the ratio of uNGAL to creatinine was significantly higher in group 1 (p < 0.05). The biomarker uNGAL/Cr exhibited fairly good diagnostic accuracy, with an area under the curve of 0.68 [95% confidence interval 0.6-0.7] and an optimal cut-off value of 0.16 ng/mg Cr (sensitivity 58%, specificity 75%) (p < 0.05). There was a positive correlation between the uNGAL/Cr ratio and the uKIM-1/Cr ratio (r = 0.582, p < 0.05) and the uL-FABP/Cr ratio (r = 0.675, p < 0.05) in group 1. Discussion: The results clearly demonstrated that children with hydronephrosis and dysfunction had significantly increased uNGAL and uNGAL/Cr concentrations. However, uKIM-1, uKIM-1/Cr, uL-FABP and uL-FABP/Cr concentrations were not significantly different when compared with controls. These results support the use of uNGAL concentrations as an early marker for renal dysfunction in HN. Conclusions: The study clearly demonstrated that pediatric patients with hydronephrosis and dysfunction had significantly higher uNGAL to creatinine concentrations as compared with controls. abstract_id: PUBMED:23791906 Urinary NGAL levels correlate with differential renal function in patients with ureteropelvic junction obstruction undergoing pyeloplasty. Purpose: Recent investigations described the use of NGAL, a sensitive biomarker for kidney injury, in the setting of ureteropelvic junction obstruction.
We prospectively evaluated urinary NGAL levels in the affected renal pelvis and bladder of children with ureteropelvic junction obstruction undergoing unilateral dismembered pyeloplasty. Our hypothesis was that higher NGAL in the kidney and bladder would correlate with decreased ipsilateral differential function. Materials And Methods: We performed a prospective cohort study in patients treated with unilateral dismembered pyeloplasty from 2010 to 2012. Urine was obtained intraoperatively from the bladder and obstructed renal pelvis. A control population of unaffected children was recruited to provide a voided bladder specimen. Bladder NGAL levels were compared between the study and control populations. We tested our study hypothesis by correlating bladder and renal pelvic NGAL levels with the differential renal function of the affected kidney. Results: A total of 61 patients with a median age at surgery of 1.62 years (range 0.12 to 18.7) were enrolled in the study. Median bladder NGAL was 18.6 ng/mg (range 1.4-1,650.8) and median renal pelvic NGAL was 26.2 ng/mg (range 1.2-18,034.5, p = 0.004). Median bladder NGAL was significantly higher than in controls (p = 0.004). The correlation of bladder and renal pelvic NGAL with differential renal function was r = -0.359 (p = 0.004) and r = -0.383 (p = 0.002), respectively. Conclusions: Bladder NGAL is increased in children with ureteropelvic junction obstruction. Renal pelvic and bladder normalized urinary NGAL levels correlate inversely with the relative function of the affected kidney in cases of unilateral ureteropelvic junction obstruction. abstract_id: PUBMED:34333489 Can Urinary KIM-1 and NGAL Predict Management Endoscopic Surgery in Acute Unilateral Obstructive Stone Disease? Results from a Prospective Cohort Study. Introduction: Kidney injury molecule-1 (KIM-1) and neutrophil gelatinase-associated lipocalin (NGAL) are the leading novel biomarkers used efficiently in acute kidney injury (AKI). 
The levels of these biomarkers increase especially in the early period of nephrotoxic and ischemic renal damage. In this study, we aimed to investigate the clinical importance of the NGAL and KIM-1 biomarkers used in the effective evaluation of kidney function in patients with acute unilateral obstructive stone disease (AUOSD) in the management of endoscopic surgery. Materials And Methods: We prospectively included patients who underwent endoscopic surgery due to AUOSD between January 2018 and December 2019. Urine KIM-1 and NGAL values of the patients were measured in the preoperative period, at the 4th postoperative hour, and on the 7th postoperative day. The patients were evaluated according to the location and size of the stone, the degree of renal hydronephrosis, the duration of the operation, complications, and JJ stent placement. Results: The study enrolled 50 patients. Urinary KIM-1/Cr and urinary NGAL/Cr ratios were higher at the 4th postoperative hour than at the other time points (p < 0.001). Also, we found that urinary KIM-1/Cr and urinary NGAL/Cr ratios in Grade 2 and higher hydronephrosis were statistically higher than in Grade 0-1 hydronephrosis (p < 0.001 and p: 0.042, respectively). Additionally, a preoperative urinary KIM-1 value of 1.24 ng/mL had a sensitivity of 78% and a specificity of 63% to predict the presence of hydronephrosis. When urine KIM-1 and NGAL results were compared with surgery time, stone size and location, serum creatinine (sCr) value, and Post-Ureteroscopic Lesion Scale grade, the difference was not statistically significant. Postoperative 7th day NGAL/Cr and KIM-1/Cr ratios were statistically higher in those with JJ stents placed (p: 0.03 and p: 0.004, respectively). Conclusion: KIM-1 and NGAL can be used in the assessment of renal function in patients with AUOSD, even if sCr is normal. Also, these biomarkers can predict the presence of hydronephrosis.
It can be helpful in determining the time of surgical treatment, as well as providing information in the follow-up of patients with JJ stents after treatment. abstract_id: PUBMED:28653226 Urinary biomarkers in the early detection and follow-up of tubular injury in childhood urolithiasis. Background: To investigate relationships among urinary biomarker [kidney injury molecule-1 (KIM-1), N-acetyl-β-glucosaminidase (NAG), and neutrophil gelatinase-associated lipocalin (NGAL)] levels and renal tubular injury in childhood urolithiasis. Methods: Seventy children [36 girls, mean age: 7.3 ± 5.0 years (0.5-18.2)] with urolithiasis/microlithiasis and 42 controls [18 girls, mean age: 8.5 ± 3.8 years (0.9-16.2)] were included in this multicenter, controlled, prospective cohort study. Patients were evaluated three times at 6-month intervals (0, 6 and 12 months). Anthropometric data, urinary symptoms, family history and diagnostic studies were recorded. Urine samples were analyzed for metabolic risk factors (urinary calcium, uric acid, oxalate, citrate, cystine, magnesium, and creatinine excretion), and the urinary KIM-1, NAG, and NGAL levels were measured. Results: Stones were mostly located in the upper urinary system (82.9%), and six patients (8.6%) had hydronephrosis. Thirty patients (42.9%) had several metabolic risk factors, and the most common metabolic risk factor was hypocitraturia (22.9%). Urinary KIM-1/Cr, NAG/Cr and NGAL/Cr ratios were not significantly different between patients and controls. Furthermore, no significant changes in their excretion were shown during follow-up. Notably, the urinary KIM-1/Cr, NAG/Cr, and NGAL/Cr levels were significantly higher in children under 2 years of age (p = 0.011, p = 0.006, and 0.015, respectively). NAG/Cr and NGAL/Cr ratios were significantly increased in patients with hydronephrosis (n = 6, p = 0.031 and 0.023, respectively).
Conclusions: The results of this study suggest that none of the aforementioned urinary biomarkers (KIM-1, NAG and NGAL levels) may be useful for the early detection and/or follow-up of renal tubular injury and/or dysfunction in childhood urolithiasis. Answer: Yes, maternal hydronephrosis does have an impact on urinary neutrophil gelatinase-associated lipocalin (uNGAL) levels. A study found that pregnant women with different degrees of hydronephrosis had significantly higher uNGAL levels compared to those without hydronephrosis (45.3 versus 33.2 ng/mL, respectively) (PUBMED:27125324). This suggests that the presence of asymptomatic maternal hydronephrosis is associated with an increase in urinary concentration of NGAL. The study also noted an increasing trend in uNGAL levels with increasing degrees of hydronephrosis, although this trend was not statistically significant (PUBMED:27125324). Additionally, linear regression analysis identified the parameter of "pelvic diameter" as a significant independent factor influencing uNGAL concentrations (PUBMED:27125324). These findings indicate that maternal hydronephrosis can impact uNGAL levels, which may reflect renal stress or injury associated with the condition.
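Several of the abstracts above summarize biomarker cut-offs as sensitivity and specificity from ROC analysis (for example, the 73.7 ng/ml uNGAL cut-off with 70.0% sensitivity and 91.4% specificity). As a minimal sketch of how these two quantities follow from applying a threshold to biomarker values, consider the following; the function name and the toy data are illustrative assumptions, not taken from any of the studies:

```python
def sensitivity_specificity(values, has_condition, cutoff):
    """Classify each biomarker value as positive if >= cutoff, then
    compute (sensitivity, specificity) against the true labels."""
    tp = fn = tn = fp = 0
    for value, diseased in zip(values, has_condition):
        positive = value >= cutoff
        if diseased and positive:
            tp += 1        # true positive
        elif diseased:
            fn += 1        # false negative
        elif positive:
            fp += 1        # false positive
        else:
            tn += 1        # true negative
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative uNGAL values (ng/ml) and severe-hydronephrosis labels
values = [80.0, 90.0, 60.0, 50.0, 100.0, 70.0]
labels = [True, True, True, False, False, False]
sens, spec = sensitivity_specificity(values, labels, cutoff=73.7)
# With this toy data, sensitivity and specificity are both 2/3.
```

A full ROC analysis, as reported in the abstracts, would sweep the cutoff across the observed values and plot sensitivity against 1 − specificity; the reported "optimal" cutoff is then chosen from that curve.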
Instruction: Correction of tetralogy of Fallot: does the time period of surgery influence the outcome? Abstracts: abstract_id: PUBMED:11505316 Correction of tetralogy of Fallot: does the time period of surgery influence the outcome? Background: The perioperative outcome of patients with tetralogy of Fallot (TOF) seems to have improved over the last four decades. Methods: To prove this hypothesis, we retrospectively analysed the data of 269 TOF patients operated on between 1975 and 1999 in our institution. Results: Over the years, younger patients (median age 1975 - 1980: 4.5 years, 1995 - 1999: 0.9 years) were operated on with a lower mortality (1975 - 1980: 8.6 %, 1995 - 1999: 2.4 %). Residual defects such as pulmonary stenosis or insufficiency and VSD occurred with a similar frequency over time, whereas rhythm disturbances were significantly reduced (1981 - 1985: 51.2 %, 1995 - 1999: 24.4 %, p = 0.012). Postoperative length of hospital stay was significantly (p < 0.05) shorter in the years 1995 - 1999 (11.0 - 11.4 days) than in 1975 - 1980 (16.9 +/- 16.5 days). Conclusions: Over time periods, there was a trend towards lower mortality and towards operating on patients in a younger age. The rate of rhythm disturbances and the LOS after surgery proved to be reduced during the last decade. These differences did not turn out to be statistically significant. Therefore, we conclude that the time period of surgery has only little impact on the early outcome of patients after definitive correction of TOF. abstract_id: PUBMED:27052060 The role of blood lactate clearance as a predictor of mortality in children undergoing surgery for tetralogy of Fallot. Background: The identification of biomarkers for predicting morbidity and mortality, particularly in pediatric population undergoing cardiac surgery will contribute toward improving the patient outcome. 
There is an increasing body of literature establishing the clinical utility of hyperlactatemia and lactate clearance as prognostic indicators in adult cardiac surgical patients. However, the relationship between lactate clearance and mortality risk in the pediatric population remains to be established. Objective: To assess the role of lactate clearance in determining the outcome in children undergoing corrective surgery for tetralogy of Fallot (TOF). Methods And Study Design: A prospective, observational study. Setting: A tertiary care center. Study Population: Two hundred children undergoing elective surgery for TOF. Study Method: Blood lactate levels were obtained at baseline before operation (T0), postoperatively at admission to the cardiac intensive care unit after surgery (T1), and then every 6 h for the first 24 h of Intensive Care Unit (ICU) stay (T6, T12, T18, and T24, respectively). Lactate clearance in the study is defined by the equation ([lactate initial - lactate delayed]/lactate initial) × 100%. Lactate clearance was determined at the T1-T6, T1-T12, T1-T18, and T1-T24 time intervals, respectively. The primary outcome measured was mortality. Secondary outcomes measured were the duration of mechanical ventilation, duration of inotropic requirement, and duration of ICU stay. Results: Eleven of the two hundred patients enrolled in the study died. Nonsurvivors had higher postoperative lactate concentrations (P < 0.05) and lower blood lactate clearance rates during the first 24 h (P < 0.05) in comparison to the survivors. Lactate clearance was significantly higher in survivors than in nonsurvivors for the T1-T6 period (19.55 ± 14.28 vs. 5.24 ± 27.79%, P = 0.009) and remained significantly higher for each studied interval in the first 24 h. Multivariate logistic regression analysis of statistically significant univariate variables showed early lactate clearance to have a significant relationship with mortality.
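The clearance equation quoted above can be expressed as a short helper. This is an illustrative sketch only; the function name and example concentrations are assumptions, not values from the study:

```python
def lactate_clearance(initial, delayed):
    """Percent lactate clearance between two samples (e.g., T1 and T6),
    per the formula ([lactate initial - lactate delayed]/lactate initial) x 100%."""
    if initial <= 0:
        raise ValueError("initial lactate must be positive")
    return (initial - delayed) / initial * 100.0

# Example: 4.0 mmol/L at ICU admission (T1) falling to 3.2 mmol/L at 6 h (T6)
clearance = lactate_clearance(4.0, 3.2)   # 20.0 (%)
# The study's dichotomy: clearance above 10% in the first 6 h was
# associated with improved outcome; a negative value means lactate rose.
good_clearance = clearance > 10.0
```

Note that the sign convention matters: a rising lactate between the two samples yields a negative clearance, which under the study's dichotomy falls into the <10% (higher-risk) group.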
Patients with a lactate clearance >10%, relative to patients with a lactate clearance <10%, in the early postoperative period, had improved outcome and lower mortality. Conclusion: Lactate clearance in the early postoperative period (6 h) is associated with decreased mortality rate. Patients with higher lactate clearance (>10%) after 6 h have improved outcome compared with those with lower lactate clearance. abstract_id: PUBMED:21269407 Outcome of complex adult congenital heart surgery in the developing world. Background: There is scanty information on the outcome of adult congenital heart disease surgery from the developing world. Methods: This was a retrospective chart review of the surgical outcome of 153 adults with congenital heart disease over a 5-year period. Surgical atrial septal defect closure was considered "simple" while all other surgeries were considered "complex." Results: There were 102 patients in the "simple" group and 51 in the "complex" group. Only three (2%) patients had prior operations. The "complex" group had longer bypass time and cross clamp time. Intensive care unit stay, ventilation time, and inotrope administration were longer. Major complications were more common and there were two deaths in the "complex" group. Age more than 30 years, cyanosis, and New York Heart Association class more than II were predictors of longer stay in the intensive care unit. Surgical repair of Tetralogy of Fallot in adults tended to have a longer ventilation time and intensive care unit stay with a mortality of 4%. At follow up, all patients were in New York Heart Association class I or II. Improvement of the functional class with negligible adverse events was noted in both groups. Conclusions: A retrospective evaluation of 153 adults with congenital heart disease who underwent open heart surgery at a single center in India showed strikingly fewer reoperations compared with large European studies. There was a similar prevalence of complex lesions. 
Surgical mortality was low, and long-term functional outcome was gratifying. abstract_id: PUBMED:23112014 Incidence and outcome of prenatally diagnosed, chromosomally normal congenital heart defects in Singapore. Introduction: Congenital heart defect (CHD) is a significant cause of neonatal and infant mortality. We aimed to evaluate the incidence and pregnancy outcome of foetuses diagnosed with chromosomally normal CHD in KK Women's and Children's Hospital (KKH), Singapore, in 2008-2009. Methods: We reviewed the medical records of pregnant women who underwent first trimester screening and were diagnosed with foetal CHD at KKH. Additional information was obtained from the Birth Defect Registry for the period 2008-2009. Foetuses with abnormal karyotype or minor lesions not expected to be detected by ultrasonography were excluded. Results: 38 out of 9,834 euploid foetuses were diagnosed with CHD. Major defects were found in 26 (68%) foetuses, while 12 (32%) had minor CHDs. Tetralogy of Fallot, atrioventricular septal defect, hypoplastic left heart syndrome, transposition of the great arteries and ventricular septal defect constituted the five most common major CHDs observed. In 14 (54%) foetuses with prenatally diagnosed major CHD, the outcome was termination of pregnancy, while 12 (46%) pregnancies continued to birth. Among the live-born babies with major CHD, eight (67%) underwent surgery. Conclusion: The incidence of non-chromosomal major CHD in Singapore was about 2.6 per 1,000 foetuses. A detection rate of 88.5% was achieved for major CHD during the study period. Advances in CHD management have thrown up new challenges for clinicians in the area of diagnosis, treatment and ethics. Therefore, it may be beneficial to constitute a regulatory entity as a fundamental guide to improve the future management of foetuses diagnosed with CHD. abstract_id: PUBMED:23804814 Pregnancy in patients with tetralogy of fallot: outcome and management. 
The objective was to evaluate pregnancy outcome in women with tetralogy of Fallot, including impact of corrective cardiac surgery on pregnancy outcome in a tertiary care referral hospital. The study was a retrospective analysis of maternal and perinatal outcome in all women with tetralogy of Fallot treated in a cardio-obstetric unit during 1996-2008. Ten women had 21 pregnancies. Of the 10 women, 7 with uncorrected tetralogy of Fallot had 16 pregnancies. Obstetric and cardiac complications were more frequent in the uncorrected group (70% vs 40% and 40% vs nil, respectively). The frequency of spontaneous abortion and preterm birth was greater in the uncorrected group (37.5% vs nil and 25% vs nil, respectively). The percentage of babies who were small for gestational age was 40% in the uncorrected group and 20% in the corrected group. Tetralogy of Fallot carries substantial risk to mother and fetus. Surgical correction is associated with improved maternal and perinatal outcome. These patients need detailed prepregnancy evaluation and should be under joint supervision of an obstetrician, a cardiologist, a congenital cardiac surgeon, and an anesthetist. abstract_id: PUBMED:9770809 The mechanical properties of the lungs in patients with congenital heart defects in the immediate postoperative period Static compliance and inspiratory resistance were assessed in 218 patients aged over 3 years after open-heart surgery for correction of congenital heart disease during the immediate postoperative period without complications. Cst in the immediate postoperative period in patients with Fallot's tetralogy, intraventricular septum defect, intraatrial septum defect, and incomplete form of atrioventricular communication was higher than in patients operated on for Fallot's tetralogy with a previous systemic-pulmonary anastomosis and pulmonary artery atresia, whereas Rinsp was virtually the same in all groups. Age-specific differences were noted. 
The data are recommended for use as reference values in stress. abstract_id: PUBMED:15223425 Outcome analysis of major cardiac operations in low weight neonates. Background: From June 1995 to January 2003, 49 consecutive neonates of less than 2,500 g underwent early surgery for congenital heart disease. A retrospective analysis was performed to evaluate the early to medium term outcome. Methods: Major cardiac surgery for congenital heart defects included a complete correction in 31 patients (group I) and a palliative procedure in 18 patients (group II). Mean age at operation was 15.2 days (1 day-90 days) and mean weight was 2,190 g (1,300 g-2,500 g). Twenty-four children (49%) were born prematurely. All neonates were critically ill and 47% were already ventilated preoperatively. Heart defects included mainly ventricular septal defect (10), tetralogy of Fallot complexes (8), aortic coarctation (8), transposition complexes (7), single ventricle anomalies (4), pulmonary atresia with intact septum (4), interrupted aortic arch (3), totally anomalous pulmonary venous return (3), and common atrioventricular septal defect (2). Results: Overall surgical mortality was 18%: 4 neonates died after definitive repair and 5 after palliation; representing, respectively, 13% and 28% of each group. Postoperative morbidity occurred in half of the patients (53%). Age, weight, prematurity, type of first surgical procedure, and use of cardiopulmonary bypass did not influence the early outcome. After a mean follow-up of 2.82 years (2 months to 6 years), survival was 87% in the correction group and 54% in the palliation group. All children were in NYHA class I-II. Freedom from reintervention at 18 months was 68% after correction versus 8% after palliation. Conclusions: Cardiac surgery for congenital malformations in critically ill, low weight neonates can be achieved with acceptable mortality, at the cost of an increased morbidity. 
Early outcome seems independent of age, weight, prematurity, use of extracorporeal perfusion, and type of first intervention. Moreover, primary correction appears to result in an early survival benefit, remaining constant over time. abstract_id: PUBMED:19535259 Synergistic interaction between right ventricular mechanical dyssynchrony and pulmonary regurgitation determines early outcome following tetralogy of Fallot repair. Objective: The ability of the right ventricle to tolerate acute pulmonary regurgitation (PR) following tetralogy of Fallot (TOF) repair is variable and the mechanisms that underlie this are not completely understood. We hypothesise that dyssynchronous wall mechanics affects the RV tolerance to postoperative PR with adverse effect on early surgical outcome. Methods: Twenty-four TOFs (mean age 19.5+/-15.5 months) undergoing elective repair were prospectively recruited. Ventricular wall mechanics was studied by tissue Doppler echocardiography following induction (preop) and postoperative day one (POD1) and compared with a control group (10 VSD/AVSD). Segmental dyssynchrony, defined as out-of-phase peak myocardial contraction, was determined at the base, mid, apical segments of the septum, RV and LV free walls and scored by the total number of affected segments. PR was graded from absent to severe and RV dimension was quantified by end-diastolic area index (RVEDAI). Cardiac index (CI) was measured by pulse contour cardiac output analysis. Outcome measures were CI, mixed venous oxygen saturation (SvO2), lactate, and duration of ventilation and critical care stay. Results: Preoperatively, biventricular free-wall motion was synchronous in both groups. Following surgery, TOF developed RV-septal dyssynchrony (>2 segments in 11 (46%) vs none in control, p=0.01), while the LV free wall remained normal in both groups. 
RV-septal dyssynchrony correlated with the ventilation time (rho=0.69, p=0.003), critical care stay (rho=0.58, p=0.02) in the presence of PR (n=16), but not with other outcome measures. The relationships between dyssynchrony and early outcome were not seen when PR was absent. In the presence of PR, median RVEDAI was greater with higher dyssynchrony score (>3 segments; p=0.009). The degree of PR did not affect critical care/ventilation time or RVEDAI. The presence of transannular patch (p=0.007) or at least moderate PR (p=0.01) was associated with a more severe dyssynchrony. Conclusions: Dyssynchronous RV-septal wall mechanics occurs early after Fallot repair. The magnitude of dyssynchrony appears to interact synergistically with pulmonary regurgitation to influence RV dimension and early outcome. abstract_id: PUBMED:30830282 Catheterization Performed in the Early Postoperative Period After Congenital Heart Surgery in Children. The aim of this study was to describe pediatric patients who underwent early postoperative cardiac catheterization after congenital heart surgery, their clinical indications, findings, interventions, and complications in a cardiovascular center. A descriptive retrospective study was performed. All catheterizations performed within 6 weeks after congenital heart surgery between January 2004 and December 2014 were reviewed. We analyzed 101 early postoperative catheterizations. They were performed on median postoperative day five (IQR: 0-39); the median age was 64 days (IQR: 22-240). The most common diagnoses were single ventricle (53%), left heart obstruction (12%), and tetralogy of fallot or pulmonary atresia with ventricular septal defect (11%). Most common indications were persistent cyanosis (53%), low cardiac output (24%), and residual defect on echocardiogram (20%). Most frequent findings during the catheterization were pulmonary artery stenosis (29%), surgical conduit obstruction (12%), and coarctation or hypoplasia of the aorta (11%). 
Forty-six (45%) procedures involved intervention. The most frequent interventions were pulmonary artery, aorta, and Blalock-Taussig fistula angioplasty with or without stent implantation. There were adverse effects in 11 cases (11%), and 30-day mortality was 28% (28 patients), with the majority not directly related to the catheterization. Although early postoperative catheterizations are high-risk procedures, they are currently a very good option to solve acute problems in critically ill patients. This study provides relevant information for a better understanding and approach to this complex group of patients. abstract_id: PUBMED:31535511 Short And Midterm Outcome Of Fallot's Tetralogy Repair In Infancy: A Single Center Experience In A Developing Country. Background: Primary repair of ToF between 3-12 months is the preferred mode of treatment worldwide, with low surgical mortality. This study reviews our experience of ToF repair in infancy and its short and midterm outcomes in a single centre from a developing country. Methods: Data of all patients with Tetralogy of Fallot repair during infancy from January 2007 to Feb 2018 were reviewed. Preoperative, operative, and postoperative data were analysed. Outcome of the infants was assessed through discharge/death, low cardiac output syndrome (LCOS), prolonged intubation, and duration of cardiac intensive care unit (CICU) and hospital stay. Results: Forty-four patients who underwent TOF repair in infancy during this period were included. The mean age and weight were 9.39±2.32 and 7.20±1.30 respectively, 77.3% (34 patients) were male, and 68.18% (30 patients) had saturation >75%. Mean intubation period was 4.05±6.58 days, 12 (27.3%) patients developed LCOS, and mean cardiopulmonary bypass (CPB) time, aortic cross clamp (ACC) time and inotropic score were 133.52±62.4, 98.66±58.62 and 33.27±71.13 respectively. Mean CICU and hospital stays were 6.60±7.18 and 12.05±7.74 respectively. Five (11.3%) patients expired in the postoperative period.
Baseline saturation ≤75% is an independent risk factor for LCOS and a prolonged intubation period. In the last six years our mortality decreased to 8% from 15.7% during the previous six years, while our mean intubation duration, CPB time, ACC time, hospital stay and CICU stay have all shown improvement. Conclusions: TOF repair during infancy is a safe procedure in expert hands with acceptable morbidity and mortality. Baseline saturation ≤75% is an independent risk factor for LCOS and a prolonged intubation period. The last six years have shown considerable improvement in our surgical morbidity and mortality due to improvement in surgical expertise. Answer: The time period of surgery for tetralogy of Fallot (TOF) does seem to influence the outcome to some extent. Over the years, there has been a trend towards operating on younger patients with TOF, which has been associated with a lower mortality rate. For instance, between 1975 and 1980, the median age of patients operated on was 4.5 years with a mortality rate of 8.6%, while between 1995 and 1999, the median age was 0.9 years with a mortality rate of 2.4% (PUBMED:11505316). Additionally, rhythm disturbances were significantly reduced over time, and the postoperative length of hospital stay was shorter in the later years. However, these differences were not statistically significant, leading to the conclusion that the time period of surgery has only a limited impact on the early outcome of patients after definitive correction of TOF (PUBMED:11505316). Moreover, other studies have shown that lactate clearance in the early postoperative period is associated with a decreased mortality rate, indicating that patients with higher lactate clearance (>10%) after 6 hours have improved outcomes compared with those with lower lactate clearance (PUBMED:27052060). This suggests that perioperative factors, such as lactate clearance, may also play a significant role in the outcome of TOF surgery.
In the context of adult congenital heart surgery in the developing world, a retrospective evaluation showed low surgical mortality and gratifying long-term functional outcomes, despite the prevalence of complex lesions (PUBMED:21269407). This indicates that with advancements in surgical techniques and postoperative care, the outcomes of TOF correction have improved over time, even in resource-limited settings. Overall, while the time period of surgery for TOF may have some influence on the outcome, it is likely that improvements in surgical techniques, perioperative care, and the management of postoperative complications have also contributed significantly to the improved outcomes observed in recent decades.
Instruction: Seeing mum drinking a 'light' product: is social learning a stronger determinant of taste preference acquisition than caloric conditioning? Abstracts: abstract_id: PUBMED:11423917 Seeing mum drinking a 'light' product: is social learning a stronger determinant of taste preference acquisition than caloric conditioning? Objective: It was examined whether caloric conditioning or social learning strategies dominate in taste preference acquisition in children. The caloric learning paradigm predicts that eating or drinking artificially sweetened products, which deliver virtually no energy, will not lead to a taste preference whereas the social learning paradigm predicts that seeing important others modelling the eating and drinking of these 'light' products will induce a preference for the taste of light products in the child. Design: In a 2 x 2 between subjects factorial design, the amount of energy and social modelling was varied. Setting: The study was undertaken at primary schools in Maastricht, The Netherlands. Subjects: Forty-five children participated and six children dropped out. The 39 children who completed the study (14 boys and 25 girls) had a mean age of 67 months (range 51--81, s.d. 5.6). Interventions: Each subject took part in nine conditioning trials with an individually selected tasting yoghurt which was not preferred very much at the pre-test. Results: The children in the combined caloric and social condition showed an increase in their preference for the conditioned taste which was larger than a regression-to-the-mean effect (P=0.007), whereas children in the other groups did not. Conclusion: Caloric and social learning combined, ie modelling the consumption of energy-rich foods or drinks, is the best way to establish taste preferences. Children more easily learn a preference for energy-rich food that is eaten by significant others than for food that is low in energy and eaten by significant others. 
abstract_id: PUBMED:7180616 Alcohol and taste-mediated learning. Taste-mediated learning is relevant to the alcohol consumption patterns of animals. This review concludes that taste aversion learning has thus far prevented development of an animal model of alcoholism. The presence of a taste cue, lack of control over alcohol administration, and high alcohol concentrations or dosages all facilitate the development of alcohol aversions. There is little evidence that taste preference learning is involved in the development of alcohol dependence. Data from taste-mediated learning research with animals are consistent with drinking patterns of human alcoholics. abstract_id: PUBMED:29862893 Effects of caloric or non-caloric sweetener long-term consumption on taste preferences and new aversive learning. Food palatability and caloric content are crucial factors in guiding diet choice and amount consumed; as a result, sweet caloric tastes are associated with a positive hedonic value. Recent evidence in rodents indicates that consumption of artificial (non-caloric) sweeteners, in which sweet taste is dissociated from normal caloric consequences, could induce changes in energy and body weight regulation, suggesting that sweeteners not only modify intake and appetitive behavior, but could also change taste-learning processes. Particularly, there are different properties in some artificial sweeteners, like saccharin, that might differ from sugar in the reward responses that, after long-term consumption, could also be associated with the inability to learn new negative consequences related to the same taste. Thus, the main goal of this study was to determine, in adult rats, the effects of long-term consumption (14 days) of sugar or saccharin, on taste preference, on new aversive learning, i.e. latent inhibition (LI) of conditioned taste aversion (CTA), and appetitive taste re-learning after aversive taste associations. 
The results showed that 14 days' exposure to sugar, but not to saccharin, induced a significant increment in the LI of CTA and that taste preference is rapidly recovered during the next 3 days (e.g. CTA extinctions), indicating that long-term sugar consumption significantly accelerates aversive memory extinction during appetitive re-learning of a specific sweet taste; furthermore, high familiarization to sugar, but not to saccharin, promotes appetitive learning for the same taste. Overall, the results indicate that long-term consumption of sugar, but not saccharin, produces changes in appetitive re-learning and suggests that long-term sugar consumption could trigger escalating consumption due to the inability to learn new negative consequences associated with the same taste. abstract_id: PUBMED:18318427 Potentiation of taste and extract stimuli in conditioned flavor preference learning. In these experiments, we investigated the nature of potentiation in the conditioned flavor preference paradigm. Almond and banana extracts, which have strong odor components, were combined with salt and saccharin (liked tastes; Experiment 1) or quinine and citric acid (disliked tastes; Experiment 2) in a flavor preference procedure that mixed these solutions with a caloric reinforcer (polycose). The results showed that liked tastes potentiated preference conditioning to extracts (Experiment 1), whereas extracts potentiated preference conditioning to disliked tastes (Experiment 2). In both experiments, the presumably less liked stimulus (i.e., the extract in Experiment 1 and the disliked taste in Experiment 2) was the potentiated cue. abstract_id: PUBMED:35501556 Partial reinforcement effects on acquisition and extinction of a conditioned taste aversion. Four experiments with rat subjects asked whether a partial reinforcement extinction effect (PREE) occurs in taste aversion learning. 
The question has received little attention in the literature, and to our knowledge no taste aversion experiment has previously demonstrated a PREE. In each of the present experiments, experimental groups received a taste mixed in drinking water for 20 min; such taste exposures were sometimes paired with a lithium chloride (LiCl) injection and sometimes not. Control groups received only taste-LiCl pairings. There was evidence that each reinforced and non-reinforced trial produced increments and decrements in aversion strength (respectively), and trials mattered more than accumulated time during the conditioned stimulus and during the background (as emphasized in time-accumulation models like those of Gallistel and Gibbon, Psychological Review, 107, 289-344, 2000, and Gibbon and Balsam, Autoshaping and conditioning theory, Academic Press, New York, pp. 219-235, 1981). In addition, a partial reinforcement extinction effect was observed when there was a relatively large number of conditioning trials. The results extend our understanding of extinction in taste aversion learning and provide more evidence that aversion learning might follow rules that are qualitatively similar to those of other forms of learning. abstract_id: PUBMED:1141823 Conditioned and latent inhibition in taste-aversion learning: clarifying the role of learned safety. Experiments 1-3 investigated the applicability of the classical conditioning concept of conditioned inhibition to taste-aversion learning. Rats made ill after drinking saccharin and subsequently administered a "safe" exposure to saline (or casein hydrolysate) evidenced an enhanced preference for the safe fluid (relative to either a third, slightly aversive, solution or to water) when compared to controls in which saccharin was not previously poisoned. Such active condition inhibition was significantly reduced in Experiment 4 when two safe exposures to saline preceded saccharin-illness pairings. 
These results indicate that conditioned inhibition can be established in a taste-aversion procedure and that a latent inhibition manipulation reduces the ability of a taste to become a signal for safety. Implications of these findings for the learned safety theory of taste-aversion learning and the relevance to bait-shyness of principles established within the classical conditioning paradigm are considered. abstract_id: PUBMED:3097721 Effects of adrenaline on the acquisition and maintenance of ethanol preference in a taste conditioning paradigm. The effects of subcutaneous adrenaline administration on preference for ethanol (2.5% solution) have been investigated, using a two-bottle choice situation. Administration of the amine (50 micrograms/kg) immediately after the conditioning session significantly attenuated ethanol preference. Adrenaline treatment (10, 50 or 100 micrograms/kg) prior to the first retention test induced a significant reduction in ethanol preference. When the amine was injected prior to conditioning only the dose of 100 micrograms/kg reduced later ethanol preference. Our results indicate that systemically administered adrenaline impairs the acquisition of preference to a weak ethanol solution. It is suggested that this effect of the amine may be linked to interference with consolidation of memory and retrieval processes. abstract_id: PUBMED:8817645 Effects of dexfenfluramine on glucose drinking and glucose-conditioned flavour preferences in rats: taste versus post-ingestive conditioning. A comparison was made of the effects of dexfenfluramine (DF, 3-10 mg kg-1) on intake of and conditioning with glucose solutions varying in orostimulant properties (taste) and post-ingestive actions (calories), in rats. First, sham-feeding, using gastric-fistulated rats, was performed to assess the orostimulant properties of the solutions. Then, two experiments were done. 
In the first experiment, we examined the effects of DF, given at doses of 3 and 10 mg kg-1, on the intake of two glucose solutions having different orostimulant properties and different caloric values. The solutions were a mix of 1% glucose plus 0.125% saccharin (low caloric, more orostimulant), and 4% glucose (high caloric, less orostimulant). At the dose of 10 mg kg-1, DF administration markedly reduced intake of both solutions (P < 0.05 vs Control Group, respectively). In the second experiment, we examined the effects of DF (10 mg kg-1) on flavour preference conditioning in two parts. In the first part of the experiment, rats consumed two distinctively flavoured solutions having equal orostimulant properties but different caloric value for a conditioning period of 16 days. The solutions were a mix of 1% glucose plus 0.125% saccharin (low caloric), and 6.1% glucose (high caloric). At the end of the conditioning period, the flavour paired with ingestion of more calories was subsequently preferred (P < 0.05 vs low caloric glucose-saccharin mix). DF, when given during and after the conditioning period, attenuated this flavour-calorie conditioning (P < 0.05 vs Control Group). In the second part of the experiment, rats were conditioned with flavours associated with a mix of 20% glucose plus 0.4% citric acid and 20% glucose solutions. These solutions were equally caloric but differed in orostimulant properties. The flavour paired with better orostimulant properties was subsequently preferred (P < 0.05 vs less orostimulant glucose-citric acid mix). DF, when given during and after the conditioning period, also attenuated this flavour-flavour conditioning (P < 0.05 vs Control Group). These results suggest that DF may impair flavour preference learning. abstract_id: PUBMED:23943541 A history of morphine-induced taste aversion learning fails to affect morphine-induced place preference conditioning in rats.
Drugs of abuse have both rewarding and aversive effects, as indexed by the fact that they support place preferences and taste aversions, respectively. In the present study, we explored whether having a history with the aversive effects of morphine (via taste aversion conditioning) impacted the subsequent rewarding effects of morphine, as measured in the place preference design. In Experiment 1, rats were exposed to a taste aversion procedure in which saccharin was followed by morphine. Place preference conditioning was then initiated in which animals were injected with morphine and placed on one side of a two-chambered apparatus. Animals with a taste aversion history acquired place preferences to the same degree as controls without such a history, suggesting that morphine's affective properties condition multiple effects, dependent on the specific stimuli present during conditioning. To determine whether these results were a reflection of processes operating in traditional associative conditioning, in a modified blocking procedure, place preference conditioning was attempted in the presence of a taste previously associated with morphine (Exp. 2). Under these conditions, animals still acquired morphine-induced place preferences comparable to those of animals without a morphine or conditioning history. These results are consistent with the position that drugs of abuse have multiple stimulus effects (positive and negative) that are differentially associated with specific stimuli (environmental and taste) that drive different behavioral responses (approach and avoidance). abstract_id: PUBMED:7229574 Conditioned taste preferences based on caloric density. Conditioning involved adding one flavor (e.g., vanilla) to an artificial diet of high caloric density and a second flavor (e.g., anise) to a similar low-calorie diet. When the rats were tested with identical foods to which the flavors were added, they showed a strong preference for the high-calorie flavor. 
The parameters of conditioning were varied in four experiments, which showed that the conditioning of taste preferences is rapid and powerful and has generality over a range of conditioned stimulus parameters. The last experiment indicated that the unconditioned stimulus for this conditioning is probably some oral cue, such as the taste of starch, which is correlated with caloric benefit, rather than caloric benefit per se. Answer: The question of whether social learning is a stronger determinant of taste preference acquisition than caloric conditioning is addressed in the study by Martijn et al. (PUBMED:11423917). This study found that a combination of caloric and social learning, specifically modeling the consumption of energy-rich foods or drinks, is the most effective way to establish taste preferences in children. The children who were exposed to both caloric content and social modeling showed a significant increase in their preference for the conditioned taste, more so than children in other groups. This suggests that children more easily learn a preference for energy-rich food that is eaten by significant others than for food that is low in energy and eaten by significant others. Therefore, in the context of this study, social learning appears to be a strong determinant of taste preference acquisition, especially when combined with caloric conditioning. Other studies have explored different aspects of taste-mediated learning. For instance, research on alcohol and taste-mediated learning in animals suggests that taste aversion learning can prevent the development of an animal model of alcoholism, and there is little evidence that taste preference learning is involved in the development of alcohol dependence (PUBMED:7180616). This indicates that aversive taste experiences can be a strong deterrent to the consumption of certain substances, which could be considered a form of learning that is distinct from caloric conditioning or social learning. 
Moreover, studies on the effects of long-term consumption of caloric or non-caloric sweeteners on taste preferences and new aversive learning in rodents show that long-term sugar consumption can affect taste-learning processes, potentially leading to an inability to learn new negative consequences associated with the same taste (PUBMED:29862893). This suggests that the caloric content of foods can influence taste preferences and learning processes related to taste. In conclusion, while social learning is a significant factor in the acquisition of taste preferences, as demonstrated by the study involving children observing their mothers (PUBMED:11423917), other forms of learning, such as caloric conditioning and aversive taste experiences, also play important roles in shaping taste preferences and consumption behaviors.
Instruction: Exposure to pro-smoking media in college students: does type of media channel differentially contribute to smoking risk? Abstracts: abstract_id: PUBMED:23536120 Exposure to pro-smoking media in college students: does type of media channel differentially contribute to smoking risk? Background: There are almost no data on whether the different channels through which pro-smoking media appear (i.e., point-of-sale advertising, movie smoking) differently influence smoking. Purpose: This study used ecological momentary assessment to examine whether differences in smoking risk were observed for exposures to different pro-smoking media channels. Methods: College students (n = 134) carried smartphones for 21 days, recording their exposures to pro-smoking media and the media channels for that exposure and responding to three randomly issued control prompts per day. Participants answered questions about their future smoking risk after each pro-smoking media exposure and random prompt. Results: Participants had elevated future smoking risk following exposure to pro-smoking media at point of sale (p < 0.001); smoking risk at times of exposure to smoking in movies did not differ from risk measured during control prompts (p = 0.78). Conclusions: There is merit to examining the relative contribution of different pro-smoking media channels to smoking behavior. abstract_id: PUBMED:24268361 Quantifying the persistence of pro-smoking media effects on college students' smoking risk. Purpose: To quantify the persistence of pro-smoking media exposure effects on college students' intentions to smoke and smoking refusal self-efficacy. Method: A total of 134 college students (ages 18-24 years) were enrolled in an ecological momentary assessment study in which they carried handheld data collection devices for 3 weeks and reported their exposures to pro-smoking media as they occurred in the real world. 
Smoking intentions and smoking refusal self-efficacy were assessed after each exposure to pro-smoking media and at random prompts during each day of the 3-week assessment period. A generalized additive model was used to determine how long the effect of an exposure to pro-smoking media persisted. Results: The effect of pro-smoking media exposures persisted for 7 days. After exposure, smoking intentions immediately increased (.56; 95% confidence interval [CI]: [.26, .87]) and then steadily decreased (-.12; 95% CI: [-.19, -.05]) each day for 7 days, while smoking refusal self-efficacy immediately decreased (-.42; 95% CI: [-.75, -.10]) and then steadily increased (.09; 95% CI: [.02, .16]) each day for 7 days. Daily changes occurring after 7 days were not statistically significant, suggesting that smoking intentions and refusal self-efficacy had stabilized and were no longer affected by pro-smoking media exposure. Conclusions: Exposures to pro-smoking media may have strong implications for emerging young adults smoking risk as the impact of an individual exposure appears to persist for at least a week. abstract_id: PUBMED:22353027 Momentary effects of exposure to prosmoking media on college students' future smoking risk. Objective: This study used ecological momentary assessment to examine acute changes in college students' future smoking risk as a function of their exposure to prosmoking media (e.g., smoking in movies, paid advertising, point-of-sale displays). Method: A sample of 135 college students ("ever" and "never" smokers) carried handheld computers for 21 days, recording their exposures to all forms of prosmoking media during the assessment period. They also responded to three investigator-initiated control prompts during each day of the assessment period (i.e., programmed to occur randomly). After each prosmoking media exposure and after each random control prompt they answered questions that measured their risk of future smoking. 
Responses made during prosmoking media encounters were compared (within subjects) with responses made during random control prompts. Results: Compliance with the study protocol was high, with participants responding to over 83% of all random prompts. Participants recorded nearly three encounters with prosmoking media each week. Results of linear mixed modeling indicated that all participants had higher future smoking risk following exposure to prosmoking media compared with control prompts (p < .05); this pattern of response did not differ between ever and never smokers (p = .769). Additional modeling of the variances around participants' risk of future smoking revealed that the response of never smokers to prosmoking media was significantly more variable than the response of ever smokers. Conclusion: Exposure to prosmoking media is associated with acute changes in future smoking risk, and never smokers and ever smokers respond differently to these exposures. abstract_id: PUBMED:27956757 How social media influence college students' smoking attitudes and intentions. Building on the influence of presumed influence (IPI) model, this study examines how smoking-related messages on social media influence college students' smoking. We surveyed 366 college students from three U.S. Midwestern universities in 2012 and examined the effects of expression and reception of smoking-related messages on smoking using path analysis. We found that the expression and reception of prosmoking messages not only directly affected smoking but also had indirect effects on smoking through (1) perceived peer expression of prosmoking messages and (2) perceived peer smoking norms. For antismoking messages, only reception had a significant indirect influence on smoking through (1) perceived peer reception of antismoking messages and (2) perceived peer smoking norms.
In conclusion, social media function as an effective communication channel for generating, sharing, receiving, and commenting on smoking-related content and are thus influential on college students' smoking. abstract_id: PUBMED:21331968 Genre-specific media and perceptions of personal and social risk of smoking among South Korean college students. The smoking rate among adult men in South Korea is one of the highest in the world, standing at about 53%. Although various mass media-based educational initiatives have been taken to reduce this rate, their contribution toward the smoking risk perceptions of South Koreans has not been investigated. This study examined the association between genre-specific media exposure and personal and social risk perceptions of smokers and nonsmokers. Data from a survey of 558 South Korean college students (39% smokers) show that genre-specific media exposure differentially predicts personal and social risk perceptions of smokers and nonsmokers. News media exposure predicted smokers' personal risk perceptions, whereas entertainment media exposure predicted nonsmokers' personal risk perceptions. Exposure to a hybrid genre, health infotainment, predicted social risk perceptions, but not personal risk perceptions, of both smokers and nonsmokers. High rates of exposure to medical documentary were associated with low personal risk perceptions of nonsmokers, but not smokers. These results collectively suggest that mixed-media strategies may effectively address perceptions of personal and social risk of smoking. Suggestions for future research, and theoretical and practical implications, are offered. abstract_id: PUBMED:36503015 The effect of exposure to tobacco smoking-related media messages on youths' smoking behavior in Jordan: A longitudinal, school-based study. Only a few studies investigated the link between tobacco smoking-related media and youth smoking in the Eastern Mediterranean Region (EMR). 
This study aimed to assess the influence of both promotional and control messages on cigarette smoking behavior among young Jordanian students. Generalized Linear Mixed Models were analyzed using data from the Irbid Longitudinal Smoking Study that followed a random sample of 2174 students (2008-2011). We examined the associations of media messaging with smoking behavior, as well as intention-to-quit smoking, and intention-to-start smoking, among young adolescents. At baseline, 12.2% and 43.7% of students were exposed to only pro-smoking or only anti-smoking messages, while 41.8% were equally exposed to both. Exposure to anti-smoking messages was associated with lower odds of ever smoking at baseline among girls (AOR = 0.4; 95% CI: 0.2, 0.8). Boys who were exposed to anti-smoking messages were more likely to report an intention to quit, with borderline significance (AOR = 2.0; 95% CI: 0.9, 4.1). The cumulative exposure to anti-smoking messages over time was associated with lower odds of intention to smoke among girls (AOR = 0.5; 95% CI: 0.3, 0.9) but with higher odds among boys (AOR = 1.8; 95% CI: 1.0, 3.1). In both sexes, media messaging was not associated with progression of the smoking habit. In conclusion, this comprehensive analysis of both pro- and anti-smoking messages advances our understanding of their role in influencing youths' smoking behaviors, and could guide the development of evidence-based interventions to address adolescent tobacco smoking in Jordan and the EMR. abstract_id: PUBMED:22686961 Friends moderate the effects of pro-smoking media on college students' intentions to smoke. Exposure to prosmoking media (e.g., smoking in movies, advertising in magazines) contributes to smoking in young people. However, the extent to which the impact of exposure depends on the social context in which those exposures occur has not been investigated. 
This study used ecological momentary assessment to examine the moderating role of social context in the relationship between college students' exposure to prosmoking media and their smoking refusal self-efficacy and intention to smoke. College students (n = 134) carried handheld computers for 21 days, recording their exposure to all forms of prosmoking media during the assessment period. They also responded to three investigator-initiated control prompts (programmed to occur randomly) each day of the assessment. After each exposure to prosmoking media and after each control prompt, participants answered questions about smoking refusal self-efficacy and their intentions to smoke; they also indicated whether they were with friends, with family, with a romantic partner, or alone (i.e., their social context). When participants were with friends, prosmoking media exposures were associated with stronger smoking intentions and lower smoking refusal self-efficacy; these associations were not present when participants were alone. Being with family members or with a romantic partner did not moderate the impact of prosmoking media exposure on either dependent variable. These results suggest a new role for peers in the development of youth smoking. abstract_id: PUBMED:29189021 Effects of antismoking media on college students' smoking-related beliefs and intentions. Ecological momentary assessment was used to examine immediate changes in 87 college students' smoking-related attitudes, beliefs, and intentions as a joint function of their exposure to antismoking media and smoking status. Students (37 never smokers, 41 experimental smokers, and 9 current intermittent smokers) carried handheld data-collection devices for 3 weeks to record naturally occurring exposures to antismoking media and respond to investigator-initiated control prompts. 
At each reported exposure to antismoking media and each control prompt, participants reported their smoking-related attitudes, perceptions of the prevalence of smoking among their peers, resistance self-efficacy, and intentions to smoke. Mixed-effects regression was used to compare responses between encounters with antismoking media and control prompts. Experimental smokers reported weaker intentions to smoke and greater resistance self-efficacy at moments of exposure to antismoking media than at control prompts. Regardless of smoking experience, participants reported higher perceived prevalence of smoking at times of exposure to antismoking media than at control prompts. These findings generally support the value of antismoking media messages for shifting the beliefs and intentions of experimental smokers, who are at high risk for becoming committed regular smokers. abstract_id: PUBMED:19731126 Associations between smoking and media literacy in college students. Organizations recommend media literacy to reduce tobacco use, and higher media literacy has been associated with lower smoking among high school students. The relationship between smoking media literacy and tobacco use, however, has not been systematically studied among college students. The purpose of this study was to determine the association between smoking and smoking media literacy among college students. We conducted the National College Health Assessment (NCHA) at a large, urban university, adding six items measuring smoking media literacy. A total of 657 students responded to this random sample e-mail survey. We used multiple logistic regression to determine independent associations between smoking media literacy items and current smoking. The media literacy scale was internally consistent (alpha = 0.79). Of the respondents, 21.5% reported smoking cigarettes over the past 30 days.
In a fully adjusted multivariate model, participants with medium media literacy had an odds ratio (OR) for current smoking of 0.45 (95% CI = 0.29, 0.70), and those with high media literacy had an OR for current smoking of 0.38 (95% CI = 0.20, 0.70). High smoking media literacy is independently associated with lower odds of smoking. Smoking media literacy may be a valuable construct to address in college populations. abstract_id: PUBMED:18206196 Adolescent smoking and volume of exposure to various forms of media. Objective: To assess the association between adolescent smoking and volume of exposure to various forms of media after controlling for multiple relevant covariates. Methods: A survey of all adolescents at a large suburban high school assessed: (1) current smoking and susceptibility to future smoking; (2) volume of exposure to various media; and (3) covariates related to smoking. Multivariate logistic regression models assessed relationships between each of the independent variables (media exposures) and the two smoking outcomes after controlling for covariates. Results: Of the 1138 respondents, 19% (n=216) reported current smoking. Forty percent (n=342) of the non-smokers (n=922) were susceptible to future smoking. Students reported exposure to an average of 8.6 (standard deviation 5.1)h of media daily, including 2.6h of music. Those with high exposure to films and music were more likely to be smokers (P(trend)=0.036 and P(trend)<0.001, respectively), and those with high exposure to books were less likely to be smokers (P(trend)<0.001). After controlling for all relevant covariates, those with high exposure to music had greater odds of being smokers than those with low exposure [odds ratio (OR) 1.90, 95% confidence intervals (CI) 1.10-3.30], and those with high exposure to books had lower odds of being current smokers (OR 0.55, 95% CI 0.33-0.94). 
Conclusion: Exposure to films and music is associated with smoking, but only the relationship between music exposure and smoking persists after rigorous covariate control. Exposure to books is associated with lower odds of smoking. Answer: Yes, the type of media channel does differentially contribute to smoking risk among college students. According to a study using ecological momentary assessment, college students had elevated future smoking risk following exposure to pro-smoking media at the point of sale, but smoking risk at times of exposure to smoking in movies did not differ from risk measured during control prompts (PUBMED:23536120). This suggests that not all pro-smoking media channels have the same impact on smoking risk, with point-of-sale advertising being more influential than movie smoking in this context. Moreover, the persistence of pro-smoking media exposure effects on college students' intentions to smoke and smoking refusal self-efficacy was quantified, showing that the effect of pro-smoking media exposures persisted for 7 days, with immediate increases in smoking intentions and decreases in smoking refusal self-efficacy after exposure (PUBMED:24268361). This indicates that the influence of pro-smoking media can have a lasting impact on smoking-related attitudes and intentions. Additionally, exposure to pro-smoking media was associated with acute changes in future smoking risk for both "ever" and "never" smokers, with never smokers showing significantly more variability in their response to pro-smoking media (PUBMED:22353027). This highlights the immediate and potentially variable impact of pro-smoking media on different groups of college students. In summary, the type of pro-smoking media channel does contribute differentially to smoking risk among college students, with certain channels like point-of-sale advertising having a more pronounced effect than others, such as movie smoking.
The impact of pro-smoking media can also persist over time and affect both current and potential future smokers.
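Several of the studies above summarize effects as odds ratios (ORs) with 95% confidence intervals (e.g., OR 0.38, 95% CI = 0.20, 0.70 for high media literacy). As an illustrative aside, the sketch below shows how an OR and a Wald-type 95% CI are computed from a 2×2 exposure-by-outcome table; the counts are hypothetical and not taken from any cited study.

```python
import math

# Hypothetical 2x2 table (NOT data from any study cited above):
# rows = exposure status, columns = smoking outcome.
exposed_smokers, exposed_nonsmokers = 12, 88      # e.g., high media literacy
unexposed_smokers, unexposed_nonsmokers = 30, 70  # e.g., low media literacy

# Odds ratio: (a * d) / (b * c) for a 2x2 table [[a, b], [c, d]].
odds_ratio = (exposed_smokers * unexposed_nonsmokers) / (
    exposed_nonsmokers * unexposed_smokers
)

# Wald 95% CI on the log scale: exp(ln(OR) +/- 1.96 * SE),
# with SE = sqrt(1/a + 1/b + 1/c + 1/d).
se_log_or = math.sqrt(
    1 / exposed_smokers + 1 / exposed_nonsmokers
    + 1 / unexposed_smokers + 1 / unexposed_nonsmokers
)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
# With these hypothetical counts: OR = 0.32, 95% CI = (0.15, 0.67)
```

An OR below 1 with a CI that excludes 1, as here, is the pattern the abstracts report as a statistically significant protective association; the multivariate models in the studies additionally adjust for covariates, which this unadjusted sketch does not.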
Instruction: Hepatic outcomes after jejunoileal bypass: is there a publication bias? Abstracts: abstract_id: PUBMED:16989701 Hepatic outcomes after jejunoileal bypass: is there a publication bias? Background: One of the reasons why jejunoileal bypass (JIB) was abandoned was reports of liver failure. The aim of this study was to describe histological findings in the intraoperative and follow-up liver biopsies of a cohort of super-obese patients who had undergone JIB. Methods: 50 consecutive patients underwent JIB. Samples of liver biopsies performed intraoperatively (41 patients) and in the follow-up (31 patients) were evaluated. Brunt's scale was used. Results: Mean age at operation was 37.9 +/- 7.6 years, and 15 patients (30.6%) had diabetes type 2, 20 (40.8%) had dyslipidemia, 29 (59.2%) had high blood pressure, and one (0.5%) had hepatitis C. Mean BMI preoperatively was 52.8 +/- 7.5 kg/m(2). Mean follow-up time was 67.0 +/- 42.8 months. At the time of the latest liver biopsy, the mean BMI was 35.7 +/- 7.5 kg/m(2). The % excess weight loss (%EWL) was 62.4 +/- 20.0%. Eight deaths (16%) occurred, none from liver-related complications. At liver biopsy during the JIB operation, NAFLD was confirmed in 36 patients (86.7%) and NASH in 13 (31.7%). In 25 patients with mean follow-up of 4.8 +/- 4.0 years, there was no statistically significant change in the liver histology regarding the extent of steatosis (P=0.20), steatohepatitis (P=0.74) and fibrosis (P=0.71). Conclusions: There was a significant metabolic improvement, maintenance of the %EWL, and no worsening of liver histology. There has possibly been a publication bias concerning liver outcomes, where the type of JIB and the concomitance of hepatitis C were not taken into account. abstract_id: PUBMED:4054809 Late effects of jejunoileal bypass operations on hepatic inflammation, fibrosis and lipid content.
Patients more than six years following jejunoileal bypass operations for morbid obesity underwent liver biopsy for evaluation of hepatic histology and determination of hepatic lipid content. The amount of hepatic inflammation and fibrosis present was quantitated and compared to that present in liver biopsy specimens obtained at the time of performance of the jejunoileal bypass operation. Hepatic lipid content was compared to that present in the livers of six normal-weight patients with no liver disease, and with liver specimens from twenty morbidly obese patients undergoing gastroplasty operations. Since their jejunoileal bypass operations, hepatic fibrosis has increased in 87% of the patients and hepatic inflammation in 52%. Fibrotic liver disease is currently present in 50% of the patients compared to 10% at the time of performance of the jejunoileal bypass operations. Patients with jejunoileal bypass operations were found to have nine times more hepatic triglyceride than was found in patients with normal livers, and twice as much as identified in obese patients. These results suggest that some patients with jejunoileal bypass operations have severe liver histologic disease associated with increased hepatic triglyceride content. Patients with jejunoileal bypass operations should have periodic histologic evaluation of their liver. abstract_id: PUBMED:31857906 Comparison of outcomes following three surgical techniques for patients with severe jejunoileal atresia. Background: Severe jejunoileal atresia is associated with prolonged parenteral nutrition, higher mortality and secondary surgery. However, the ideal surgical management of this condition remains controversial. This study aimed to compare the outcomes of patients with severe jejunoileal atresia treated by three different procedures. Methods: From January 2007 to December 2016, 105 neonates with severe jejunoileal atresia were retrospectively reviewed. 
Of these, 42 patients (40.0%) underwent the Bishop-Koop procedure (BK group), 49 (46.7%) underwent primary anastomosis (PA group) and 14 (13.3%) underwent Mikulicz double-barreled ileostomy (DB group). Demographics, treatment and outcomes including mortality, morbidity and nutrition status were reviewed and were compared among the three groups. Results: The total mortality rate was 6.7%, showing no statistical difference among the three groups (P = 0.164). The BK group had the lowest post-operative complication rate (33.3% vs 65.3% for the PA group and 71.4% for the DB group, P = 0.003) and re-operation rate (4.8% vs 38.8% for the PA group and 14.3% for the DB group, P < 0.001). Compared with the BK group, the PA group showed a positive correlation with the complication rate and re-operation rate, with an odds ratio of 4.15 [95% confidence interval (CI): 1.57, 10.96] and 12.78 (95% CI: 2.58, 63.29), respectively. The DB group showed a positive correlation with the complication rate when compared with the BK group, with an odds ratio of 7.73 (95% CI: 1.67, 35.72). The weight-for-age Z-score at stoma closure was -1.22 (95% CI: -1.91, -0.54) in the BK group and -2.84 (95% CI: -4.28, -1.40) in the DB group (P = 0.039). Conclusions: The Bishop-Koop procedure for severe jejunoileal atresia had a low complication rate and re-operation rate, and the nutrition status at stoma closure was superior to double-barreled enterostomy. The Bishop-Koop procedure seems to be an appropriate choice for severe jejunoileal atresia. abstract_id: PUBMED:14756078 Jejunoileal bypass Obesity is a condition which can be found very frequently today, both in developed and 3rd world countries. The incidence of obesity in adult population of Romania is about 35%, and most of these patients are females. We'll present the case of a 54 years old woman with BMI = 57 kg/m2, who was hospitalized for the treatment of a postoperative eventration after an umbilical hernia. 
Her nocturnal breathing troubles, knee pains and walking difficulties made us consider the idea of a digestive bypass. The surgical intervention consisted of jejunoileal bypass, abdominoplasty and dermolipectomy with bipolar drainage. Many complications occurred in the postoperative period (renal failure due to severe diarrhea). The weight loss after 18 months was 37%, which represents 66% of the weight surplus (similar results can be found in professional statistics--around 70%). After 18 months her weight was 95 kg and she was allowed to consume any food. Eighteen months after the operation, the number of stools had decreased to normal (1-2 per day). In conclusion, morbid obesity can and must be treated surgically. Jejunoileal bypass is a highly effective procedure, but surgeons must be aware of the fact that severe complications may occur at any time and must be treated immediately. After this kind of operation, weight stabilization can be achieved within 2 years, no diet being necessary as an additional treatment. abstract_id: PUBMED:2595265 Dissociation of changes in plasma cholesterol and hepatic cholesterol synthesis, assessed by precursors in bile, after jejunoileal bypass with 3:1 and 1:3 jejunoileal ratio. Hepatic cholesterol synthesis, assessed by the biliary content of post-squalene cholesterol precursors relative to cholesterol, was related to fasting plasma cholesterol fractions in obese patients before (n = 7) and after jejunoileal bypass surgery with either a 3:1 ratio (n = 15) or a 1:3 ratio (n = 14) between jejunum and ileum left in function. The content of precursors increased after bypass surgery, and more so with a 3:1 than with a 1:3 jejunoileal ratio. However, the two types of bypass resulted in similar reductions in plasma concentrations of total cholesterol and low-density lipoprotein cholesterol.
Within the unoperated group and the 3:1 bypass group the precursor content and plasma cholesterol levels were, as expected, inversely correlated, but within the 1:3 bypass group there were either weaker or even positive correlations. These findings suggest that the link between stimulation of hepatic cholesterol synthesis and reduction in plasma cholesterol may depend on the length of jejunum and ileum left in function. abstract_id: PUBMED:18157608 Steatohepatitis and cirrhosis: first manifestation 23 years after jejunoileal bypass surgery Intestinal shunting procedures followed by gastrointestinal bypass surgery have been used as therapeutic modalities in the treatment of morbid obesity since the mid 1950s. Enthusiasm reached its peak in the early 1960s with the introduction of the jejunoileal bypass, however began to wane as various complications were identified in the remote postoperative period and later. Finally, the jejunoileal bypass was abandoned in the 1980s. Apart from renal disorders, it frequently resulted in abnormal liver function and liver failure which are attributed to fatty infiltration. We report a 56-year-old woman, who underwent jejunoileal bypass surgery 23 years ago. She was admitted to our ICU because of hepatic encephalopathy IV, caused by upper gastrointestinal bleeding. Beside hepatic encephalopathy there were signs of severe liver failure (INR 2.8, cholesterol 32 mg/dl, ICG PDR 5%). Liver biopsy showed fatty infiltration and cirrhosis. Excluding other causes of liver disease, severe fatty liver disease following jejunoileal bypass surgery was diagnosed. The very late onset of severe liver disease emphasizes the importance of lifelong follow-up of these patients. abstract_id: PUBMED:37869423 Case report: Case series featuring anastomotic colonic adenocarcinoma following jejunoileal bypass requiring oncologic resection and jejunoileal bypass reversal. 
Like all surgical fields, bariatric surgery has evolved immensely, so much so that previous procedures are now obsolete. For instance, the jejunoileal bypass has fallen out of favor after severe metabolic consequences resulted in prolonged morbidity and even mortality. Despite this, several patients persevered long enough to develop other pathology, such as cancer. This progression has been validated in animal models but not human patients. Nonetheless, contemporary surgeons may encounter situations where they must resect and re-establish intestinal continuity in patients with this antiquated anatomy. When faced with this scenario, the question of whether the previously bypassed small bowel can be safely reunited plagues the surgeon and remains unanswered. Unfortunately, the literature does not effectively answer this question, even anecdotally through case reports or series. Therefore, we share our experience with three patients who developed colon cancer following jejunoileal bypass and subsequently underwent oncologic resection with simultaneous reversal of their jejunoileal bypasses. abstract_id: PUBMED:3655313 Alcohol-induced liver injury after jejunoileal bypass operation in rats. The objective of this study was to investigate whether alcohol administration exerts a synergistic effect on jejunoileal bypass-induced liver dysfunction in rats. Male Wistar rats were subjected to 90% jejunoileal bypass or sham operation. For 10 weeks, subgroups were pair-fed either an alcohol-containing (36% of total calories) liquid diet or a liquid diet where alcohol was replaced isocalorically by starch. Alcohol feeding in rats with jejunoileal bypass increased hepatic triglyceride content about 6-fold as compared with bypassed rats receiving control diet. Neither jejunoileal bypass nor alcohol feeding led to significant changes in hepatic DNA and protein contents. Alcohol feeding increased cytochrome P-450 levels both in operated and in sham-operated rats.
The administration of the alcohol-containing diet decreased the activity of succinic dehydrogenase, the decrease being distinctly more pronounced in rats with jejunoileal bypass than in the sham-operated controls. Light microscopy revealed no significant morphological alterations in liver sections of rats fed the control diet after jejunoileal bypass or of rats receiving either the alcohol-containing diet or the control diet after sham operation. Alcohol feeding in bypassed rats, however, produced marked diffuse accumulation of fat, and regularly led to other histological abnormalities in the liver. These abnormalities included ballooning of hepatocytes and disarray of the trabecular structure of the liver lobule, hyalin inclusions resembling megamitochondria, single-cell necrosis and focal clustering of necrosis, increased number of mitotic figures, and infiltrates with inflammatory cells. The histological lesions of the liver of bypassed rats receiving alcohol exhibited no obvious zonal distribution. The results demonstrate that alcohol feeding to rats subjected to jejunoileal bypass leads to marked liver injury which mimics, at least in part, that of alcohol-induced liver disease in man. Rats subjected to jejunoileal bypass may, therefore, provide a new model for the study of alcoholic liver disease. abstract_id: PUBMED:999502 Reversibility of hepatic failure following jejunoileal bypass. Jejunoileal bypass was performed in 50 morbidly obese patients. The morbidity encountered compared favorably with that of other series. All patients manifesting hepatic failure demonstrated hyperbilirubinemia within the first three months postoperatively. Hyperbilirubinemia, if uncorrected, resulted in a mortality of 75%. Augmentation jejunal interposition was performed in three patients who demonstrated hepatic decompensation or severe electrolyte imbalance or both.
This resulted in rapid correction of electrolyte disturbances, liver function measurements, and patient symptoms without significant postoperative weight gain. Persistent hyperbilirubinemia or recalcitrant electrolyte problems or both are indications for augmentation jejunal interposition. abstract_id: PUBMED:29709288 Is the Bishop-Koop procedure useful in severe jejunoileal atresia? Purpose: The aim of this study was to report our experience using the Bishop-Koop procedure for the treatment of various surgical problems of jejunoileal atresia including luminal discrepancy, complex meconium peritonitis, type IIIb and type IV atresia which we defined as severe jejunoileal atresia. Methods: This retrospective study was performed on the patients with severe jejunoileal atresia who underwent Bishop-Koop procedure at a tertiary center in China over a five year of period. The mortality, complication rate, nutrition status and the risk factors for postoperative adverse outcomes were explored. Results: A total of 41 neonates underwent the Bishop-Koop procedure. The median duration of the hospital stay and total parenteral nutrition and the point at which oral feeding was initiated postoperatively were 24 days (95% CI =18.99-29.01), 13 days (95% CI = 9.03-16.97) and 11 days (95% CI = 10.17-11.83) respectively. The mortality rate was 7.32% (3/41). The complication rate was 41.4% (17/41) including anastomotic leak, intestinal obstruction, high output stoma and cholestasis. The weight for age Z-score at stoma closure was restored to normal levels (-0.86, 95% confidence interval (CI) = -1.44, -0.28). The main factor associated with adverse outcomes in severe jejunoileal atresia was premature delivery (odds ratio (OR) = 4.44, 95% CI = 1.06-18.67). Conclusions: Bishop-Koop procedure appears to be a technically efficient method for severe jejunoileal atresia, although larger studies are needed to compare Bishop-Koop procedure and other operation techniques. 
Type Of Study: Therapeutic. Level Of Evidence: Level IV. Answer: The question of whether there is a publication bias concerning hepatic outcomes after jejunoileal bypass (JIB) is addressed in the study by PUBMED:16989701. This study aimed to describe histological findings in liver biopsies from a cohort of super-obese patients who had undergone JIB. The results showed significant metabolic improvement, maintenance of % excess weight loss (%EWL), and no worsening of liver histology. The authors suggest that there may have been a publication bias in previous reports concerning liver outcomes after JIB, particularly where the type of JIB and the concomitance of hepatitis C were not taken into account. In contrast, another study (PUBMED:4054809) reported that patients more than six years following JIB operations exhibited increased hepatic fibrosis in 87% of the patients and increased hepatic inflammation in 52%. This study found that fibrotic liver disease was present in 50% of the patients, compared to 10% at the time of the JIB operation, and that patients with JIB had significantly higher hepatic triglyceride content than normal-weight patients without liver disease. These conflicting reports highlight the possibility of a publication bias, where negative outcomes may be more frequently reported or given more attention in the literature. The study by PUBMED:16989701 suggests that liver outcomes after JIB may not be as detrimental as previously thought, provided that factors like the specific type of JIB and concurrent conditions such as hepatitis C are properly considered. However, the presence of severe liver histologic disease in some patients, as reported by PUBMED:4054809, indicates that the risk of liver complications after JIB cannot be entirely dismissed. 
Overall, the evidence suggests that while there may be a publication bias in reporting hepatic outcomes after JIB, the risk of liver complications does exist and may manifest long after the surgery, as indicated by the case of steatohepatitis and cirrhosis manifesting 23 years after JIB surgery (PUBMED:18157608). Therefore, it is crucial for patients who have undergone JIB to have periodic histologic evaluation of their liver to monitor for potential complications.
Instruction: Local anesthesia during transrectal ultrasonography-guided prostate biopsy: does it have any effect on sexual function? Abstracts: abstract_id: PUBMED:24858158 Use of local anesthesia in transrectal ultrasound guided prostatic biopsy. Transrectal ultrasound (TRUS) guided biopsy of the prostate is the gold standard for detecting prostate cancer. Intraprostatic administration of local anesthesia significantly decreases pain during prostatic biopsy. Transrectal ultrasound guided intraprostatic lidocaine administration along with intrarectal lidocaine gel is a new local anesthesia technique for prostatic biopsy. This was a randomized, prospective, comparative, interventional study. A total of 60 patients aged over 55 years were included in the study; 30 patients who received transrectal lidocaine gel formed Group I, and 30 patients who received intraprostatic lidocaine injection along with intrarectal gel formed Group II. The mean age was almost identically distributed between the two groups (p=0.668). About 47% of patients in Group I exhibited a hard prostate on digital rectal examination (DRE), 33.3% a single nodule and 20% multiple nodules, compared with Group II, in which 33.3% had a hard prostate, 26.7% a single nodule and 40% multiple nodules (p=0.236). The mean serum prostate specific antigen (PSA) was significantly higher in Group II than in Group I (36.7±9.2 vs. 7.5±5.8 ng/ml; p=0.007). The mean pain intensity during biopsy was almost two times higher in Group I than in Group II (p<0.001). The present study concludes that intraprostatic lidocaine injection along with intrarectal lidocaine gel is a less painful method and can be considered in all patients undergoing transrectal ultrasound guided prostate biopsy.
The majority of patients perceive TRUS-guided prostate biopsy as a physically and psychologically traumatic experience. In this paper, we aimed to compare the efficacy of three different anesthesia techniques to control pain during the procedure. Materials And Methods: 150 patients who underwent transrectal ultrasound (TRUS) guided prostate biopsy were randomly divided into three groups. Group A included 50 patients who received, one hour before the procedure, a mixture of 2.5% lidocaine and 2.5% prilocaine; Group B, 50 patients who received intrarectal local anesthetic administration (lidocaine 5 ml 10%) and lidocaine local spray 15%; and Group C, 50 patients who received periprostatic block anesthesia (lidocaine 10 ml 10%). Visual analogue scale (VAS) scores of patients in the different groups were evaluated at the end of the biopsy and 30 minutes after the procedure. Results: The VAS of patients in Group A was 1.32 ± 0.65 (VAS I) and 2.47 ± 0.80 (VAS II). In Group B the VAS of patients was 1.09 ± 0.47 (VAS I) and 1.65 ± 0.61 (VAS II). In Group C the VAS of patients was 2.63 ± 0.78 (VAS I) and 1.70 ± 0.85 (VAS II). There was no statistically significant difference in terms of VAS I between Groups A and B. A statistically significant difference was determined in terms of VAS II between Groups A and B. There was no statistically significant difference in terms of VAS between Groups B and C. Conclusions: The most effective of the three methods for pain control was intrarectal local anesthetic administration with lidocaine local spray 15%, which provided ideal patient comfort. abstract_id: PUBMED:15103923 Periprostatic local anesthesia before transrectal ultrasound-guided prostate biopsy Purpose: We prospectively assessed the safety and efficacy of periprostatic local anesthesia before transrectal ultrasound (TRUS)-guided prostate biopsy.
Materials And Methods: A total of 178 consecutive men undergoing transrectal prostate biopsy at our institution were enrolled in this study. From January to June 2001, 84 men underwent prostate biopsy without anesthesia (control group). From July to December 2001, 94 men received local anesthesia before prostate biopsies (anesthesia group). A 5-ml dose of 1% lidocaine was injected into the periprostatic nerve plexus on each side via a 22 gauge needle at 3 minutes before the procedure. Pain during and after biopsy was assessed using a 10-point visual analog scale (VAS). Complications were evaluated with a self-administered questionnaire. Results: The average pain score during biopsy was 3.18 in the anesthesia group versus 4.16 in the control group (p = 0.0067), while average pain score on the next day was 2.12 and 2.25, respectively (p = 0.7451). In the anesthesia group 13% of patients had a pain score > 5 versus 34% in the control group (p = 0.0043). The complication rate showed no significant difference between the two groups. Conclusion: Periprostatic lidocaine injection is a safe and effective method of anesthesia for transrectal prostate biopsy. abstract_id: PUBMED:33665862 Intraprostatic prophylactic antibiotic injection in patients undergoing transrectal ultrasonography-guided prostate biopsy. Objectives: To evaluate the efficacy of intraprostatic injection of prophylactic antibiotics for the prevention of infections following transrectal ultrasonography-guided prostate biopsy. Methods: In this prospective interventional study, patients aged between 40 and 70 years with clinical indications for transrectal ultrasonography-guided prostate biopsy were enrolled. Consecutive patients who received intraprostatic injection of amikacin coupled with oral ciprofloxacin were compared with historical controls receiving only prophylactic oral fluoroquinolones prior to biopsy. Patients were followed for 7 days after biopsy for signs and symptoms of infection. 
Results: A total of 210 patients were included in this study (mean age 65.7 ± 7.8 years). Among patients without intraprostatic injection, nine (8.6%) developed infectious complications (lower urinary tract infection in seven patients, and prostatitis in two), which led to hospitalization in six patients (5.7%). None of the patients who received intraprostatic antibiotic injection developed infectious complications. The difference in the incidence of post-transrectal ultrasonography-guided biopsy infections between the two groups was statistically significant (P = 0.003). Conclusion: Local intraprostatic antibiotic injection during transrectal ultrasonography-guided prostate biopsy adjuvant to oral fluoroquinolones can significantly reduce the rate of post-biopsy infectious complications. abstract_id: PUBMED:32207137 Comparison of the Efficacy of Local Anesthesia Methods and Caudal Regional Anesthesia in Prostate Biopsy Applied Under Transrectal Ultrasonography: A Randomized Controlled Study. Purpose: To evaluate the efficacy of caudal regional anesthesia and local anesthesia methods in prostate biopsy applied under transrectal ultrasonography. Materials and Methods: This prospective study included a total of 160 patients randomly separated into 4 equal groups: intrarectal local anesthesia (IRLA), periprostatic local anesthesia (PPLA), combined local anesthesia (IRLA+PPLA), and caudal regional anesthesia (CRA). The patients were evaluated using pain scores on a visual analog scale. Results: The pain score during anesthesia induction was significantly higher in the CRA group than in the IRLA and IRLA+PPLA groups (p<0.001). The pain score during entry of the probe into the rectum and movement was significantly lower in the CRA group than in the IRLA groups (p=0.014). The pain score on penetration of the needle into the prostate and at 30 mins after the biopsy was significantly higher in the IRLA group (p<0.001).
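The reported significance (P = 0.003) in the intraprostatic-antibiotic study can be reproduced with a two-sided Fisher exact test, assuming the 210 patients were split into two equal arms of 105 (consistent with nine infections being 8.6% of the control arm). This is an illustrative sketch, not part of the original study's analysis:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed table.
    """
    n = a + b + c + d
    row1 = a + b          # size of the first group
    col1 = a + c          # total number of events
    def pmf(k):           # P(k events fall in the first group)
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    p_obs = pmf(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs + 1e-12)

# 9/105 infections without the injection vs 0/105 with it (assumed equal arms).
p = fisher_exact_two_sided(9, 96, 0, 105)
print(round(p, 3))  # 0.003, matching the reported P = 0.003
```

With zero events in one arm, the exact test is the natural choice over a chi-squared approximation, which is unreliable for such sparse cells.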
At 2 hours after the biopsy, the pain score in the CRA group was significantly lower than in the IRLA groups (p=0.015). Conclusion: PPLA alone can be applied more quickly than CRA, causes less pain during the application, and has similar efficacy in reducing pain during and after the prostate biopsy procedure. abstract_id: PUBMED:17006735 Local anesthesia during transrectal ultrasonography-guided prostate biopsy: does it have any effect on sexual function? Introduction: TRUS (Trans Rectal Ultra Sonographic)-guided biopsy of the prostate is the procedure of choice for prostate cancer diagnosis in urological clinical practice. TRUS-guided biopsies are associated with pain and anxiety and may interfere with sexual function and potency. The aim of this study was to evaluate whether local anesthesia during TRUS-guided prostate biopsies has any effect on the sexual behavior of patients and to compare periprostatic infiltration with lidocaine to simple sonographic gel application in a randomized prospective trial. Patients And Methods: A total of 62 consecutive patients were included in the study and randomized into two groups: Group A (n = 30, control group) and Group B (n = 32, lidocaine infiltration group). Interviews regarding their sexual status were conducted at the time they were informed of the need for biopsy, at the time of biopsy and at two scheduled interviews following the biopsy. Results: Similar rates of sexual dysfunction were observed between the two groups. About 6% of patients experienced some degree of dysfunction in anticipation of biopsy (P > 0.02 between the two groups) that was resolved by the end of the follow-up period. In total, only one Group B patient continued to show sexual dysfunction at the time of the last interview compared to two patients in Group A. Conclusions: Local infiltration with lidocaine does not seem to play a role in sexual dysfunction following prostate biopsies.
Psychological factors influence patients and the urologist should be ready to inform and reassure both the patient and his family. abstract_id: PUBMED:10604339 Periprostatic local anesthesia before ultrasound guided prostate biopsy. Purpose: We assessed the morbidity and benefit of periprostatic local anesthesia administered before ultrasound guided prostate biopsy. Materials And Methods: After placing the transrectal ultrasound probe and visualizing the prostate, 50 consecutive patients received local anesthesia before prostate biopsies. Results: There was no morbidity associated with the infiltration of local anesthesia into the periprostatic neurovascular plexus. Only 1 patient had discomfort during prostate biopsies, and 10 patients who previously underwent biopsies without anesthesia commented favorably on the dramatic difference. Conclusions: Many patients have pain during transrectal ultrasound guided biopsies of the prostate and few clinicians provide a periprostatic nerve block before this procedure. A periprostatic nerve block administered before the biopsies dramatically decreases discomfort. We urge all urologists to attempt this procedure, and we are confident that they will adopt it as part of their practice. abstract_id: PUBMED:24505906 Local periprostatic anesthesia between option and necessity in transrectal ultrasound-guided prostate biopsy. Unlabelled: According to the European Association of Urology guidelines, local periprostatic anesthesia during ultrasound guided biopsy is "state of the art", without specifying the exact benefits or the optional vs. necessary character of this maneuver. Aim: To determine the benefits of using periprostatic anesthesia as a standard method of analgesia in patients undergoing transrectal ultrasound guided prostate biopsy. Material And Methods: We conducted a prospective randomized study involving 100 biopsy patients.
The patients were randomized into two groups: 50 patients received local periprostatic anesthesia with 10 ml of lidocaine and the remaining 50 received no local anesthesia. In our clinic we use the 12-core prostate biopsy procedure with 18G/20 cm caliber needles. To assess perceived pain intensity during the procedure, immediately after biopsy we administered a VAS (Visual Analogue Scale) questionnaire to patients, a simple method for quantitatively evaluating a symptom whose perception varies greatly between individuals. Results: A 45.06% reduction in perceived pain (30.47 vs. 16.74) was recorded in the group receiving local periprostatic anesthesia. It is also worth mentioning that the patients receiving anesthesia said that the anesthesia punctures were the most painful (the remaining punctures being much less painful), while patients without anesthesia reported pain intensity levels more or less equal across all 12 punctures. Conclusions: Local anesthesia is a necessity in ultrasound guided prostate biopsies as it significantly reduces pain intensity in patients undergoing this diagnostic procedure.
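The headline 45.06% figure is simply the relative difference of the two reported group means; as a quick check of the arithmetic, using only numbers quoted in the abstract:

```python
# Group mean pain scores reported in the abstract (PUBMED:24505906):
without_anesthesia = 30.47   # no local anesthesia
with_anesthesia = 16.74      # periprostatic lidocaine
reduction_pct = (without_anesthesia - with_anesthesia) / without_anesthesia * 100
print(round(reduction_pct, 2))  # 45.06, the reported 45.06% reduction
```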
Cancer detection and complication rates were compared between the two groups. Univariate and multivariate binary logistic regression and multinomial logistic regression analyses were conducted to investigate the effects of sedation anesthesia with IV propofol on prostate cancer detection and complication rates. Results: The cancer detection rate of patients in the sedation group was 34.0%, whereas it was 29.2% in the local group (P = 0.024). Multivariate logistic regression analysis regarding factors associated with cancer detection rate after TRUS-guided prostate biopsy in patients with prostate specific antigen (PSA) < 10 showed that IV propofol usage, age, PSA density and core length were significant factors. Multivariate logistic regression analysis regarding factors associated with complications (voiding dysfunction, bleeding and infection) showed that IV propofol usage, age and prostate size were significant factors for voiding dysfunction. Conclusion: Sedation anesthesia with IV propofol during TRUS-guided prostate biopsy was associated with a higher cancer detection rate than local anesthesia with intrarectal lidocaine gel instillation. Cancer detection rate could be an important factor to consider when selecting for the optimal anesthesia for TRUS-guided prostate biopsy. abstract_id: PUBMED:16765193 Local anesthesia in transrectal prostate biopsy. The administration of local anesthesia both topical and injectable before transrectal ultrasound-guided prostate biopsy is safe and efficacious. We describe our technique and review the relevant published data on this topic. Answer: Local anesthesia during transrectal ultrasonography-guided prostate biopsy does not appear to have any significant effect on sexual function. 
A randomized prospective trial evaluating whether local anesthesia during TRUS-guided prostate biopsies has any effect on the sexual behavior of patients compared periprostatic infiltration with lidocaine to simple sonographic gel application. The results showed similar rates of sexual dysfunction between the two groups, with only a small percentage of patients experiencing some degree of dysfunction in anticipation of the biopsy, which resolved by the end of the follow-up period. In total, only one patient in the lidocaine infiltration group continued to show sexual dysfunction at the time of the last interview, compared to two patients in the control group. The study concluded that local infiltration with lidocaine does not seem to play a role in sexual dysfunction following prostate biopsies and that psychological factors may influence patients more significantly. The urologist should be ready to inform and reassure both the patient and their family regarding this concern (PUBMED:17006735).
Instruction: Does artificial UV use prior to spring break protect students from sunburns during spring break? Abstracts: abstract_id: PUBMED:23651274 Does artificial UV use prior to spring break protect students from sunburns during spring break? Background: Dark-skinned individuals are less likely than light-skinned individuals to become sunburned or develop skin cancer. Some have extrapolated this relationship and surmised that developing and maintaining a tan will reduce the risk of sunburns and melanoma. In order to examine whether this strategy indeed protects against sunburns, we surveyed college students about both their tanning habits prior to spring break and their spring break activities. Methods: Sorority and fraternity students were recruited after spring break. Analyses examined associations between potential risk factors and the development of one or more sunburns during spring break. Results: As expected, the risk of obtaining a sunburn increased with: time spent in the sun during spring break; light complexion, as assessed by various sun-sensitivity factors; and lack of sunscreen use. We also found that tanning using an artificial UV source during the 10 weeks prior to spring break was not associated with reduced risk of sunburns during spring break, but rather with a marginal increase in this risk. Conclusions: These data provide evidence that maintaining a tan may not provide protection from sunburns. Public health messages need to address this misconception, stating clearly that a tan does not protect against or reduce the chances of developing a sunburn. abstract_id: PUBMED:24020690 Association of UV index and sunscreen use among White high school students in the United States. Background: When used appropriately, sunscreen decreases the amount of ultraviolet (UV) radiation exposure to the skin and is recommended to prevent skin cancer.
This study examined the association between annual average UV index and sunscreen use among White, non-Hispanic youth. Methods: The 2007 and 2009 national Youth Risk Behavior Survey (YRBS) data were analyzed. The YRBS is a school-based, cross-sectional study that uses a 3-stage cluster sample design to obtain a nationally representative sample of public and private high school students in the United States. Overall response rates were 68% and 72% (N = 14,041 and 16,410, respectively). Logistic regression analyses, controlling for grade, examined whether annual average (mean) UV index where students attend school, a proxy measure of potential sunlight exposure, was associated with sunscreen use. Results: Among White male students, as the mean UV index increased, the adjusted odds of never wearing sunscreen increased (adjusted odds ratio, AOR = 1.15 [95% CI = 1.04, 1.27], p < .01) and the adjusted odds of most of the time or always wearing sunscreen decreased (AOR = 0.85 [0.78, 0.94], p < .01). There was no association between sunscreen use and mean UV index among White female students. Conclusion: These findings suggest a need for renewed public health efforts among school, clinical, and public health professionals to promote sunscreen use as part of a comprehensive approach to prevent skin cancer. School and community approaches that passively protect young people from the sun should be considered as a complement to efforts to increase sunscreen use. abstract_id: PUBMED:14770335 Ozone decline and UV increase The following results have been obtained from long-term observations of the ozone layer and UV at the Meteorological Observatory Hohenpeißenberg: The seasonally varying decline of the ozone layer determines the maximum exposure to UV. Since ozone decline shows the highest rates in the spring months, UV exposure has increased most strongly in this time of the year. This is especially important because in spring the human skin is not adapted to UV exposure.
Weather changes from day to day can induce rapid ozone reductions in spring of about -30%, which in turn are followed by an increase in UV of about 40%. Clouds, especially the transparent cirrus clouds (high clouds consisting of ice particles), have increased in frequency during spring and fall while a decrease is observed in summer. This change in cloudiness reduces the daily UV dose in spring and fall while it is enhanced in summer. With increasing height above sea level, UV rises by roughly 10% per 1000 m (rule of thumb). Snow reflects UV radiation by up to 80%, enhancing the UV doses under relevant conditions. Strong volcano eruptions additionally destroy ozone in the stratosphere during the 1-2 years after the eruption. Thereafter the ozone layer recovers. In April 1993, after the eruption of Mt. Pinatubo (1991), the UV burden was still 40% higher than average. Miniholes and streamers can appear unexpectedly on a short time scale and cross over Central Europe within 1-2 days, thus enhancing UV irradiation. The human skin reacts to UV exposure depending on the type of skin. The campaign "Sonne(n) mit Verstand" of the Bavarian Ministries for Environment, for Health and for Education informs about the danger of UV radiation (see www.sonne-mit-ver-stand.de). The German Weather Service informs the public on present developments of the ozone layer and relevant topics by its ozone bulletin, which is also available via the internet at www.dwd.de/deFundE/Observator/MOHp/hp2/ozon/bulletin.htm.
Methods: A 7-day observational study of melanoma survivors (n = 20) and young adult first-degree relatives of melanoma survivors (FDRs) (n = 20) captured daily UV exposure, sun protection, and sunburns during spring and summer in the Midwest of the United States (latitude 41.8°N). Participants wore UV and physical activity sensors and completed a daily self-reported survey of sun protection, sunburn, and physical activities. The estimated protection-adjusted UV dose was calculated for each body area by integrating self-reported sun protection with UV sensor dose. Results: In 254 days, at least one body area in 9 of 20 (45%) melanoma survivors and 11 of 20 (55%) FDRs was sunburned (erythema at 24 hours). Sunburns were associated with spring and walking for transportation or leisure, especially walking the dog. Melanoma survivors used sunscreen daily on the face; however, forearms and lower legs were not protected during walking. Young adults did not use sun protection on the face, forearms or lower legs during walking for transportation and use was ineffective during sports. Conclusion: The sun protection patterns of daily living identified in this study may promote recognition of erythema as sunburn and inform the development of tailored sun protection mobile applications promoting self-monitoring with wearable UV sensors. abstract_id: PUBMED:37344707 The effects of UV filters on health and the environment. Sunscreens are an important means of protection against sunburns, dyspigmentation, photoaging, and photocarcinogenesis. Sunscreens come in a variety of formulations that can protect against ultraviolet B (UVB) radiation, both UVB and ultraviolet A (UVA) radiation (broad-spectrum sunscreens), and UVB, UVA, and visible light (tinted broad-spectrum sunscreens). In the USA, there is currently a paucity of FDA-approved broad-spectrum filters on the market. Studies have identified the presence of multiple UV filters in water sources globally. 
Many laboratory studies have implicated the potential impact of UV filters on coral reef bleaching, the food chain, and human health. However, many of these studies are performed at concentrations that are much higher than those present in the natural environment. With increasing discussion surrounding the role of organic and inorganic UV filters as potential environmental pollutants over the past decade, approval of additional broad-spectrum filters would be an important means of alleviating the use of more controversial filters. The aim of this article is to review the effects of UV filters on health and the environment and explore potential adjunctive agents for photoprotection. abstract_id: PUBMED:18717960 Consistency of reported tanning behaviors and sunburn history among sorority and fraternity students. Background: Reliable measurements of behavior are crucial to evaluating health promotion efforts. The goal of this reliability study was to examine the consistency of recalled tanning behaviors in a population with a potentially high use. Methods: We used the test-retest reliability of a self-administered survey to examine the consistency of reporting behavior patterns regarding artificial ultraviolet (UV) tanning, sunburns and sun sensitivity among sorority- and fraternity-affiliated university students. Results: High reliability on test-retest for questions evaluating the number of artificial UV tanning sessions and the number of sunburns during specific time periods was found. Conclusions: Overall, this study suggests that this sub-population of 18-23-year-olds, with a wider range of exposures than other 18-23-year-olds and older subjects, report lifetime artificial UV tanning consistently when required to recall time period-specific exposures before estimating their lifetime exposure. abstract_id: PUBMED:12374536 Awareness of the risks of tanning lamps does not influence behavior among college students. 
Hypothesis: Awareness of the risks of artificial tanning influences tanning behavior among college students. Objective: To correlate the prevalence of tanning lamp use, the perceived benefits and risks associated with UV exposure, and knowledge about skin cancer among university students. Design: A survey was designed and administered to college students seeking "walk-in" care at a university student health center from September 7, 1999, through September 30, 1999. Setting: A large midwestern public university student health center. Participants: Undergraduate and graduate students attending the student health center for any medical condition. Intervention: None. Main Outcome Measure: Completion of the survey. Results: Of the surveyed students, 47% had used a tanning lamp during the preceding 12 months. Female students were more common users than male students. Of the students surveyed, 39% reported never having used tanning lamps. More than 90% of users of tanning lamps were aware that premature aging and skin cancer were possible complications of tanning lamp use. Conclusions: Despite adequate knowledge of the adverse effects of UV exposure, university students freely and frequently use tanning lamps, primarily for desired cosmetic appearance. To alter this risky behavior will require a fundamental change in the societal belief that tans are attractive and healthy. abstract_id: PUBMED:17435110 Solar UV forecasts: a randomized trial assessing their impact on adults' sun-protection behavior. This study examined the effectiveness of solar UV forecasts and supporting communications in assisting adults to protect themselves from excessive weekend sun exposure. The study was conducted in Australia, where 557 adult participants with workplace e-mail and Internet access were randomly allocated to one of three weather forecast conditions: standard forecast (no UV), standard forecast + UV, standard forecast + UV + sun-protection messages. 
From late spring through summer and early autumn, they were e-mailed weekend weather forecasts late in the working week. Each Monday they were e-mailed a prompt to complete a Web-based questionnaire to report sun-related behavior and any sunburn experienced during the previous weekend. There were no significant differences between weather forecast conditions in reported hat use, sunscreen use, sun avoidance, or sunburn. Results indicate that provision of solar-UV forecasts in weather forecasts did not promote markedly enhanced personal sun-protection practices among the adults surveyed. abstract_id: PUBMED:27149135 Knowledge about Ultraviolet Radiation Hazards and Tanning Behavior of Cosmetology and Medical Students. Dear Editor, Ultraviolet (UV) radiation is a well-known physical hazard responsible for photoaging, photoallergic, and phototoxic reactions as well as carcinogenesis, including life-threatening melanomas (1,2). Overexposure to both natural and artificial UV radiation is a public health concern. 30% of cancers diagnosed worldwide are skin cancers. Approximately three million non-melanoma skin cancers and 132 000 new cases of melanomas are diagnosed globally each year (3). Sunburns, especially in childhood, are a very important risk factor for melanomas. Several studies demonstrated a positive association between sunbed use and an increased incidence of malignant melanoma (4). Current medical and cosmetology students will soon be knowledge providers about the risks of excessive exposure to UV radiation and prophylaxis of its consequences. Our aim was to evaluate their knowledge about the side effects of ultraviolet radiation and tanning behaviors. Details on the knowledge and habits of students were obtained during classes at the Poznan University of Medical Sciences. 
With approval from the Institutional Bioethical Committee, a 41-question anonymous survey was conducted in the spring of 2012 among 190 medical (1-6 year) and cosmetology students (1-5 year). The mean age of the study group was 22.3 years (standard deviation (SD) = 2.4 years), range 19-28 years. The survey was composed of closed and open-ended questions prepared by the authors. The first part of the form included demographic data: gender, age, degree course, and school year. The students were also asked about their reaction to sunlight, sunburns in childhood, and personal and family history of skin cancers or dysplastic nevus syndrome. The factual section of the survey contained questions evaluating responder knowledge about sunbeds and the risks of UV radiation as well as their personal tanning habits. The open-ended questions asked responders to provide definitions of: skin phototype, sun protection factor (SPF), and tanorexia. The students were additionally asked to mention possible side effects of solar radiation, contraindications to sunbeds, and drugs which may induce photosensitivity. Statistical analysis was performed using The R Project for Statistical Computing. A chi-squared test was used to compare both sun-risk knowledge and tanning behaviors between medical and cosmetology students. P<0.05 was considered statistically significant. We distributed 220 questionnaires and received 190 (86%) eligible for evaluation. Table 1 shows the study population. Gender distribution among groups was uneven, with significantly more male subjects in the medicine program group. We decided to include their answers in this study to provide an unbiased view of both of those programs. Where appropriate, we additionally provided comparisons between female subjects in both groups to prove that differences were not solely due to uneven gender distribution. When we asked students to define skin phototype, cosmetology students more frequently gave a correct definition.
In the group of students who stated they knew the definition of skin phototype, medical students were significantly more frequently wrong when we asked them to explain the term in their own words. Cosmetology students correctly answered significantly more knowledge checking questions (Table 2). When we asked students to list photosensitizing agents, students of the cosmetology program gave twice as many correct answers per respondent as students of the medicine program (see Table 3). Cosmetology students more frequently listed retinoids, while medical students listed tetracyclines as the main photosensitizing drug. The most common answer in the cosmetology group was the herb of Hypericum perforatum, although it is not considered a drug. Psoralens were identified by only 4 medical students as a possible cause of phototoxicity. When students were asked to list adverse effects of sunbathing, we specifically looked for three responses (see Table 4). Cosmetology students listed those answers significantly more often than medical students. Students of the cosmetology program gave significantly more correct answers when asked to list contraindications for sunbathing. While medical students reported mainly pregnancy (as a contraindication for most medical procedures), cosmetology students reported history of skin cancer as the most frequent answer (Table 5). Cosmetology students (89.04%) stated they visited a tanning salon more often than medical students (46.55%) (P<0.0001). When we restricted this analysis to only female subjects there still was a significant difference (P=0.0002) between cosmetology and medical female students. Cosmetology students reported lower incidence of sunscreen use (83.78% vs. 97.39%; P=0.0019). The age of the first tanning studio visit was also lower for cosmetology students (mean = 16.5 years) than medical students (mean = 17.2 years), (P=0.0290). 
Figure 1 illustrates the frequency of student tanning studio visits; the difference between groups was significant (P=0.0308). Skin cancers, dysplastic nevi syndrome, and precancerous lesions were reported in the family history by 19 students (10.00%). 12 of those students (63.16%) were also tanning salon users. 85 students (44.74%) reported a history of a sunburn in their childhood and over half of them continue visiting tanning salons. Some American (5) and French (6) studies assessed medical student knowledge and behaviors concerning sun risk and its prevention. The results of these studies indicated that medical school students did not have a satisfactory awareness about sun risk hazards. The French evaluation showed medical student knowledge was comparable to that of the French general population. Studies evaluating Polish student knowledge (7-9) showed ignorance of the term Fitzpatrick's skin phototype. We emphasize this because patients with phototypes 1 and 2 are more susceptible to the development of skin cancers (10) and ignorance in this matter may be dangerous. UV rays may promote drug-induced photosensitivity reactions such as phototoxicity and photoallergy (1), with the most common causes being: non-steroidal anti-inflammatory agents (ketoprofen, ibuprofen, piroxicam, diclofenac), cardiovascular drugs (furosemide, amiodarone, thiazides), antibiotics (tetracyclines, ciprofloxacin, sulfonamides), psoralens, and oral contraceptives (11). Our study found deficient knowledge about drugs which may trigger photosensitivity reactions. Cosmetology students reported significantly more risky tanning behavior but did better on knowledge-checking questions, which may be explained by their personal interest in this subject or by educational focus due to their major.
We suggest that better knowledge about sunbathing in general is due to increased interest in this matter (not solely due to formal education) and that this interest derives from a positive attitude towards a tanned appearance. It has been proven that sunbathing shows signs of addictive behavior (12). Tanorexia as a term was more widely known among cosmetology students, which may illustrate that although students knew about the addictive properties of tanning, they were sure that this did not apply to them. Many studies showed that increased knowledge did not translate into safer tanning habits (13,14). Our study agrees with those findings. Our study demonstrated that medical and cosmetology student knowledge about sunbeds and the risk of UV radiation is deficient. However, cosmetology students demonstrated better knowledge than medical students. Future cosmetologists may be better information providers about sun risk and its prevention. On the other hand, students of the cosmetology faculty tended to tan more often and longer and to engage in more risky behavior despite being aware of the hazards of tanning. They may be more likely to develop skin cancers in the future. abstract_id: PUBMED:32714780 Promoting sunscreen use in adolescents playing outdoor sports using UV detection stickers. Sunscreen is a popular form of sun protection and, when applied sufficiently (2 mg/cm2), has been shown to block the harmful molecular effects of ultraviolet radiation (UV). This field study tested the effectiveness of UV detection stickers in improving sunscreen use. During a rugby league two-day sporting event (28-29 February 2020) in Queensland, Australia, interested players aged 14-18 years were provided with access to free sunscreen on DAY-1, and during the subsequent day (DAY-2) were provided with a free UV detection sticker and access to sunscreen. On DAY-2, one UV detection sticker was handed out to 550 attendees.
The sunscreen bottles were weighed periodically by research staff throughout both event days. The primary outcome was sunscreen usage. Overall, 868 g of sunscreen was used across DAY-1 and DAY-2, with 19% (167 g) used on DAY-1 and 81% (701 g) used on DAY-2. This represents a >3-fold increase in sunscreen use on DAY-2, when the UV detection stickers were provided. We found that UV detection stickers may improve sunscreen use by adolescents during sporting events in high-UV environments. Answer: No, the use of artificial UV prior to spring break does not protect students from sunburns during spring break. A study that surveyed college students about their tanning habits prior to spring break and their activities during spring break found that tanning using an artificial UV source during the 10 weeks prior to spring break was not associated with a reduced risk of sunburns during spring break. In fact, it was associated with a marginal increase in the risk of sunburns (PUBMED:23651274). This evidence suggests that maintaining a tan may not provide protection from sunburns, and public health messages should address this misconception by stating clearly that a tan does not protect against or reduce the chances of developing a sunburn.
Instruction: Does the use of bilateral internal mammary artery grafts impact survival of veterans undergoing coronary artery bypass surgery? Abstracts: abstract_id: PUBMED:18789418 Does the use of bilateral internal mammary artery grafts impact survival of veterans undergoing coronary artery bypass surgery? Background: The impact of bilateral internal mammary artery (BIMA) versus single left internal mammary artery (LIMA) grafts on long-term survival in veterans after coronary artery bypass graft (CABG) surgery is unknown. Methods: A review of prospectively collected data identified all patients (n = 784) who underwent primary isolated CABG surgery from December 1991 through December 1998. Grafting was performed with LIMA in 713 (90.9%) patients and with BIMA in 71 (9.1%) patients. We identified 66 propensity-matched patient pairs. Results: The matched cohort was all male. The mean follow-up was 9.7 +/- 3.8 years. Comparing matched patients showed no significant survival benefit for the BIMA group versus the LIMA group at 5 years (89% versus 86%) and 10 years (73% versus 69%) (P = .99). Factors associated with decreased survival were advanced age, higher New York Heart Association heart failure class, and diabetes. Conclusions: Using BIMA grafting instead of LIMA grafting had no significant survival benefit for male veterans who underwent CABG surgery. Further study is needed to fully evaluate the role of BIMA grafting in this unique patient population. abstract_id: PUBMED:24087830 Does the use of bilateral mammary artery grafts compared with the use of a single mammary artery graft offer a long-term survival benefit in patients undergoing coronary artery bypass surgery? A best evidence topic in cardiac surgery was written according to a structured protocol. 
The question addressed was: 'Does the use of bilateral mammary artery grafts compared with the use of a single mammary artery graft offer a long-term survival benefit in patients undergoing coronary artery bypass surgery?' Altogether 214 papers were found using the reported search, of which 13 represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. All the included studies were follow-up studies; eight studies used prospective data collection, and five studies collected the study data retrospectively. No randomized controlled trials were found. Nine of the 13 included papers used a propensity-score-matched comparison of the survival of bilateral mammary artery graft [or, bilateral internal thoracic artery (BITA) graft] patients vs single mammary artery graft [or, single internal thoracic artery (SITA) graft] patients. These studies consistently showed an enhanced survival of BITA patients compared with propensity-score-matched SITA patients. Three of the 13 included papers used Cox proportional hazards regression analysis to compare survival of BITA vs SITA patients; one larger study showed better crude survival of BITA patients, but did not identify BITA grafts as independent predictor of enhanced survival. The remaining two studies also did not identify BITA grafts as independent predictor of enhanced survival. One study only presented crude survival estimates of BITA vs SITA patients and therefore was of limited informative value. We conclude that the use of BITA grafts seems to offer a long-term survival benefit compared with a SITA graft for patients undergoing coronary artery bypass grafting surgery. Although randomized evidence is lacking, observational evidence supporting this hypothesis is mounting. 
abstract_id: PUBMED:35149863 What is the best choice for third conduit when using bilateral internal mammary arteries for coronary artery bypass grafting-radial artery or saphenous vein graft? A best evidence topic in cardiac surgery was written according to a structured protocol. The question addressed was 'What is the best choice for third conduit when using bilateral internal mammary arteries for coronary artery bypass grafting-radial artery or saphenous vein graft?'. Altogether >525 papers were found using the reported search, of which 7 represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. Overall, there was no survival benefit demonstrated with the use of a radial artery over the use of a saphenous vein graft as a choice of third conduit following bilateral internal mammary artery grafts for coronary artery bypass grafting. The main limitation of the current evidence available is the restricted follow-up periods and the high attrition rates with small sample sizes affecting the strength of conclusions that can be drawn beyond 10 years of follow-up. We conclude that despite previous evidence supporting improved long-term patency of radial arterial grafts, there is no strong evidence that the use of a radial artery, over a saphenous vein graft, has any survival benefit when used as the third conduit following bilateral internal mammary artery grafts. abstract_id: PUBMED:33478236 Left or bilateral internal mammary artery employment in coronary artery bypass grafting: midterm results. Background: The gold standard for coronary artery bypass grafting to the left anterior descending artery is use of the left internal mammary artery. 
Better long-term survival has been reported using bilateral internal mammary arteries compared with the left internal mammary artery alone, but many surgeons are reluctant to employ bilateral internal mammary arteries in coronary artery bypass grafting. This study aimed to evaluate the effectiveness and safety of bilateral internal mammary artery use. Methods: From 2014 to 2017, 1703 patients underwent coronary artery bypass grafting in our institute. Of these, 772 met the inclusion criteria and were randomly assigned to receive bilateral (n = 387) or left (n = 385) internal mammary artery grafts. The mean age was 67.1 ± 6.0 years (range 48-85 years) and 474 (61.4%) were male. The mean number of diseased vessels was 3.1 ± 0.9, and mean EuroSCORE II was 3.4% ± 1.1%. Results: Hospital mortality was 1.2% in the left internal mammary artery group vs. 1.8% in the bilateral internal mammary artery group (p = 0.55). There was no difference in procedure-related complications between groups. Mean follow-up was 65.9 months. Survival in the bilateral internal mammary artery group at 1, 3, and 5 years was 98.7%, 98.7%, and 94.8% vs. 98.1%, 98.1%, and 90.9%, respectively, in the left internal mammary artery group (p = 0.63). Conclusion: Application of bilateral internal mammary arteries in coronary artery bypass grafting is safe and effective, with midterm results comparable to those with the left internal mammary artery only.
Results: BIMA skeletonization can preserve the sternal microcirculation, minimize tissue damage, and maintain blood supply to the chest wall at the tissue level. This effect is also apparent in diabetics. Deep sternal wound infection (DSWI) rates are significantly less with skeletonization versus the conventional pedicle technique and are comparable to single internal mammary artery harvesting. Conclusions: Contemporary large-scale studies demonstrate that skeletonization of the BIMA increases conduit length, provides superior flow, reduces the incidence of DSWIs, and improves late survival. Hopefully, this review will increase awareness of the compelling evidence in favor of using skeletonized internal mammary arteries and stimulate increased uptake of BIMA revascularization surgery. abstract_id: PUBMED:25463040 Surgical strategies for bilateral internal mammary artery grafting. The internal mammary artery is universally recognised as the preferred conduit for coronary artery bypass grafting. Accumulating evidence in recent years has demonstrated the superiority of bilateral internal mammary artery (BIMA) grafting over single internal mammary artery grafting in terms of survival, event-free survival, and freedom from re-intervention. The survival benefit seen with BIMA grafting has been associated particularly to grafting the myocardium supplied by the left coronary artery system. Several surgical strategies have been used to achieve left-sided myocardial revascularisation with BIMA grafting. These include in-situ right internal mammary artery (RIMA) to the left anterior descending and the left internal mammary artery (LIMA) to circumflex marginal branches, directing the RIMA through the transverse sinus in a retroaortic course, and free RIMA graft connected proximally either to the LIMA (composite grafting) or to the ascending aorta. Each one of these surgical strategies for BIMA grafting has its merits and demerits. 
This review article provides an overview of the various surgical strategies for BIMA grafting focussing on their technical aspects, pros and cons as well as outcomes. abstract_id: PUBMED:23977638 Bilateral internal mammary arteries: evidence and technical considerations. Bilateral internal mammary artery (BIMA) grafts are used for coronary revascularisation by only a minority of surgeons, despite a growing body of evidence suggesting improved survival when compared to use of only one internal mammary artery with additional saphenous vein grafts. Herein we review the evidence supporting revascularisation with BIMA and suggest reasons why the majority of surgeons use only one internal mammary artery. We discuss technical considerations, various graft combinations and the use of BIMA to facilitate anaortic off-pump coronary artery bypass (OPCAB). abstract_id: PUBMED:6383242 Techniques for multiple internal mammary artery bypass grafts. Long-term patency of the internal mammary artery graft surpasses that of all other bypass conduits. The use of multiple internal mammary artery grafts should improve the long-term results following coronary artery bypass grafting. Technical factors used in mobilizing, preparing, and anastomosing the internal mammary artery are discussed in this article. Fifty-three patients received sequential attached left internal mammary artery grafts from April, 1982, to August, 1983. In 6 of these patients, the right internal mammary artery was anastomosed to a circumflex marginal branch. There were no operative deaths or instances of low cardiac output. One patient suffered anastomotic narrowing that possibly could have been prevented by excision of excess tissue from the internal mammary artery pedicle. Multiple internal mammary artery grafting should have a profound influence on the results of coronary artery bypass grafting. 
abstract_id: PUBMED:12509904 Application of bilateral internal mammary artery in coronary artery bypass grafting Objective: To evaluate the effects of application of bilateral internal mammary artery grafts in coronary artery bypass grafting (CABG). Methods: From Jan. 1998 to Mar. 2001, 51 patients with coronary atherosclerotic heart disease, all males, aged 36-65 years (average 49.9 years), underwent coronary artery bypass grafting with bilateral internal mammary artery grafts. The pathologic changes involved three coronary artery branches in 48 cases and the left main trunk in 3 cases. Thirty-seven patients had a history of myocardial infarction. Four cases were complicated by left ventricular aneurysm. Forty-seven cases were operated on-pump and 4 off-pump. Left ventricular aneurysmectomy was done in 4 patients and coronary endarterectomy in 1 patient. The patterns of anastomosis were as follows: right internal mammary artery (RIMA) to left anterior descending artery (LAD) and left internal mammary artery (LIMA) to obtuse marginal (OM) in 39 cases, LIMA to LAD and RIMA to right coronary artery (RCA) in 9 cases, and LIMA to LAD and RIMA to OM in 3 cases. The gastroepiploic artery and left radial artery were used for the other bypass grafts. The average number of grafts per case was 3. Results: There were 3 early operative deaths, an early operative mortality rate of 5.9%. One patient died of low output syndrome due to perioperative myocardial infarction, 1 patient died of refractory arrhythmia, and 1 patient died of a cerebrovascular accident 40 days after operation. An intra-aortic balloon pump was used in 4 cases post-operatively. Mediastinal infection occurred in 2 patients. The patients were followed up for 2 to 39 months (average 15.5 months). During the follow-up, no angina pectoris occurred and ultrasonography showed that the bilateral internal mammary artery grafts were unobstructed in all patients.
Conclusion: Bilateral internal mammary artery grafts can be applied effectively in CABG with good short-term results, especially in young patients. abstract_id: PUBMED:2319795 Bilateral internal mammary artery grafts for coronary artery bypass operations in children. We performed myocardial revascularization with bilateral internal mammary arteries in eight children for coronary artery complications consequent to Kawasaki disease. Subjects included seven boys and one girl, ranging in age from 3 to 13 years (mean age, 8.3 +/- 3.4 years). The body surface area ranged from 0.65 to 1.65 m2 (average, 1.08 +/- 0.35 m2). Three patients had a previous myocardial infarction. The right internal mammary artery was anastomosed to the right coronary artery and the left internal mammary artery was sutured to the left anterior descending artery in all patients. The patients received an average of 2.4 grafts. Magnifying loupes (3.5×) were used for anastomosis with 8-0 monofilament polypropylene sutures. Subjects were followed up from 12 to 38 months (23 +/- 10.8 months) after operation. All were doing well with no recurrence of angina, and body development was normal, including the sternum and thorax, according to chest x-ray films and computed tomography of the chest. Patency of the bilateral internal mammary arteries was 100% in the early (within 1 month) postoperative period and remained so in the late (over 1 year) postoperative period. Anastomotic junctions between the internal mammary artery and the coronary artery developed well angiographically in the late postoperative period. The internal mammary artery is the graft of choice for pediatric myocardial revascularization because of its excellent long-term patency and growth potential. Bilateral internal mammary arteries should be used whenever indicated, and their use did not adversely influence chest wall development in the children.
Answer: The use of bilateral internal mammary artery (BIMA) grafts does not appear to significantly impact the survival of veterans undergoing coronary artery bypass surgery (CABG). A study specifically investigating this question in a veteran population found no significant survival benefit for the BIMA group compared to the single left internal mammary artery (LIMA) group at 5 years (89% versus 86%) and 10 years (73% versus 69%) with a mean follow-up of 9.7 +/- 3.8 years. Factors that were associated with decreased survival included advanced age, higher New York Heart Association heart failure class, and diabetes (PUBMED:18789418). However, it is important to note that while this particular study did not show a significant benefit in the veteran population, other studies have suggested that the use of BIMA grafts may offer a long-term survival benefit compared to a single mammary artery graft (SITA) in the general population undergoing CABG. These studies, which were not randomized controlled trials, consistently showed enhanced survival of BITA patients compared with propensity-score-matched SITA patients (PUBMED:24087830). In conclusion, while there is evidence suggesting a potential long-term survival benefit of BIMA grafts in the general population, the specific study on veterans did not demonstrate a significant survival advantage with the use of BIMA grafts in this unique patient population (PUBMED:18789418). Further study is needed to fully evaluate the role of BIMA grafting in veterans undergoing CABG.
Instruction: Does autopsy of antenatally diagnosed malformed foetuses aid genetic counselling? Abstracts: abstract_id: PUBMED:9808972 Does autopsy of antenatally diagnosed malformed foetuses aid genetic counselling? Background: Many pregnancies are terminated because of ultrasonographic diagnosis of malformation in the foetus. A detailed foetal autopsy is needed to arrive at a definite diagnosis on the basis of which genetic counselling can be provided. Methods: Sixty-one foetuses, terminated because of antenatal diagnosis of congenital malformations by ultrasound, were autopsied. The ultrasound diagnosis was compared with the diagnosis reached after autopsy. Results: In 31 cases (51%) the autopsy provided additional findings. In 21 cases (34.4%), the autopsy changed the primary diagnosis. The revised diagnosis led to a change in the risk of recurrence in 18 cases (29.5%). Conclusion: Genetic counselling depending solely on ultrasonographic foetal diagnosis may be erroneous. For appropriate genetic counselling, a detailed foetal examination should be carried out after termination in cases with ultrasonographically detected congenital malformations. abstract_id: PUBMED:24744972 A rare case of oropharyngeal teratoma diagnosed antenatally with MRI. Oropharyngeal teratomas are extremely rare congenital tumors, with an incidence of one in 35,000-200,000 newborns. Oropharyngeal teratomas may cause life-threatening airway obstruction in a newborn. Early diagnosis with ultrasound and magnetic resonance imaging (MRI) is essential to plan management. Here, we present a rare case of a true oropharyngeal teratoma diagnosed antenatally with MRI in a fetus at 34 weeks of gestation in a 25-year-old female who was being evaluated for polyhydramnios. We found MRI to be more helpful for antenatal diagnosis, counseling, and management than ultrasonography and computed tomography (CT).
abstract_id: PUBMED:31842550 A practical approach to the examination of the congenitally malformed heart at autopsy. Congenital heart defects (CHD) represent the most frequent type of heart disease in childhood, with an incidence of up to 1% of all live-born children. Despite improving echocardiographic diagnostics, some CHDs remain undiagnosed and may manifest at a later age or cause early abortion. On the other hand, termination of pregnancy may be recommended for some foetuses with prenatally diagnosed severe CHD. Therefore, any pathologist may encounter a malformed heart at autopsy. Despite the current quality of echocardiography, macroscopic assessment of the heart by the pathologist is still considered the best method for evaluating structural heart disease. Knowledge of the basic pathologic anatomy thus remains an important prerequisite for an adequately performed paediatric autopsy. abstract_id: PUBMED:26857075 Medico-legal autopsy of abandoned foetuses and newborns in India. Foeticide and the abandonment of newborns are important, albeit frequently neglected, issues. Concealment of childbirth is often seen in the setting of unwanted pregnancy, which has been recognised as one of the most important factors in both cases. This study highlights the medico-legal autopsy findings of 238 abandoned foetuses and newborns over a period of 17 years (1996-2012) from the region of South Delhi, India. There was no sex predilection. The majority of the cases were full term. Nearly 35% of the foetuses were stillborn, about 29% were live-born and the remainder were non-viable. Among the live-born, death by homicide was more common than natural death, and most were left by the roadside. The abandonment and killing of newborns need urgent attention, and strict measures are needed to save thousands of innocent lives. abstract_id: PUBMED:31572820 Amorphus Globosus Foetuses in Polish Holstein Cattle: Anatomical, Histological, and Genetic Studies.
Introduction: A comprehensive description is presented of four novel cases of amorphus globosus (ag) foetuses originating from multiple pregnancies of Polish Holstein cows. Material And Methods: Four amorphic foetuses were delivered by three cows. Tissue samples were collected during autopsy, embedded in paraffin, sectioned, and stained with haematoxylin and eosin. Genomic DNA was isolated from tissue samples of the abnormal foetuses and from blood leukocytes of their healthy siblings. PCR reactions were used to reveal the presence of Y-linked genes (SRY and AMELY) and an X-linked gene (AMELX). Results: All foetuses were classified into the group holoacardius amorphus (anideus). Molecular analysis clearly showed that, at 17 microsatellite loci, the studied amorphous foetuses had genotypes identical to those of the viable co-twins. Conclusion: The foetuses were of monozygotic origin. Histological analysis showed a low level of development of tissues of meso- and ectodermal origin, as well as features of degenerative change. abstract_id: PUBMED:25533719 Diagnostic accuracy of post mortem MRI for abdominal abnormalities in foetuses and children. Background: To compare the diagnostic accuracy of post-mortem magnetic resonance imaging (PMMR) specifically for abdominal pathology in foetuses and children, compared to conventional autopsy. Methods: Institutional ethics approval and parental consent were obtained. 400 unselected foetuses and children underwent PMMR using a 1.5T Siemens Avanto MR scanner before conventional autopsy. PMMR images and autopsy findings were each reported blinded to the other. Results: Abdominal abnormalities were found in 70/400 (17.5%) autopsies. Overall sensitivity and specificity (95% confidence interval) of PMMR for abdominal pathology were 72.5% (61.0, 81.6) and 90.8% (87.0, 93.6), with positive (PPV) and negative predictive values (NPV) of 64.1% (53.0, 73.9) and 93.6% (90.2, 95.8) respectively.
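The sensitivity, specificity, PPV, and NPV figures quoted for PMMR all derive from a 2×2 comparison against the autopsy reference standard. A minimal sketch of those formulas, using purely illustrative counts rather than the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),          # abnormal cases correctly flagged
        "specificity": tn / (tn + fp),          # normal cases correctly cleared
        "ppv":         tp / (tp + fp),          # positive predictive value
        "npv":         tn / (tn + fn),          # negative predictive value
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts only (not taken from the study):
m = diagnostic_metrics(tp=50, fp=28, fn=19, tn=303)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the prevalence of abnormality in the cohort, which is why all four are reported together.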
PMMR was good at detecting renal abnormalities (sensitivity 80%), particularly in foetuses, and relatively poor at detecting intestinal abnormalities (sensitivity 50%). Overall accuracy was 87.4% (83.6, 90.4). Conclusions: PMMR has high overall accuracy for abdominal pathology in foetuses, newborns and children. PMMR is particularly good at detecting renal abnormalities, and relatively poor at detecting intestinal abnormalities. In clinical practice, PMMR may be a useful alternative or adjunct to conventional autopsy in foetuses and children for detecting abdominal abnormalities. abstract_id: PUBMED:36731744 An European overview of genetic counselling supervision provision. Genetic testing is becoming more commonplace in general and specialist health care, and should always be accompanied by genetic counselling, according to legislation in many European countries and recommendations by professional bodies. Personal and professional competence is necessary to provide safe and effective genetic counselling. Clinical and counselling supervision of genetics healthcare practitioners plays a key role in quality assurance, providing a safe environment not only for patients but for professionals too. However, in many European countries, genetic counsellors are still an emerging professional group, counselling supervision is not routinely offered, and there is not enough evidence on the impact of these deficiencies. This study aimed to explore the current status of genetic counselling supervision provision across Europe and to ascertain factors that might be relevant for the successful implementation of counselling supervision. A total of 100 practitioners responded to an online survey; respondents were from 18 countries, with the majority working in France (27%) and Spain (17%). Only 34 participants reported having access to genetic counselling supervision.
Country of origin, the existence of a regulation system, and years of experience were identified as relevant factors influencing access to and characteristics of counselling supervision. Although there is a growing number of genetic counsellors trained at the European level, only a few countries have implemented genetic counselling supervision and made access to it mandatory. Nevertheless, such supervision is essential to ensure safe and effective genetic counselling and should be regulated across European genetic healthcare services. abstract_id: PUBMED:37287127 The effects of decision aids for genetic counselling among people considering genetic testing: A systematic review. Aim: To assess the effectiveness of decision aids for genetic counsellees in reducing conflicts in decision-making, improving their psychological well-being when considering genetic tests for inherited genetic diseases, and increasing their knowledge about these tests and their genetic risks. Design: Systematic review. Data Sources: Six electronic databases (PubMed, MEDLINE, OVID Nursing, APA PsycINFO, EMBASE and CINAHL) were searched from inception to May 2022. Review Methods: Only randomised controlled trials that examined the effect of decision aids for information provision centring on genetic testing on outcomes including decisional conflict, informed choice making, knowledge of genetic risks or genetic tests, and psychological outcomes among participants who had undergone genetic counselling were included. Their risk of bias was assessed using Version 2 of the Cochrane risk-of-bias tool for randomised trials. Results were presented narratively. The review was conducted according to the PRISMA checklist. Results: Eight included studies examined the effect of booklet-based, computer-based, film-based or web-based decision aids on individuals considering genetic testing for their increased cancer risks.
Despite contrasting findings across studies, they showed that decision aids enable genetic counsellees to feel more informed in decision-making on genetic tests, although most showed no effect on decisional conflict. Genetic counsellees' knowledge of genetic risks and genetic tests was increased after the use of decision aids. Most studies showed no significant effect on any psychological outcomes assessed. Conclusions: Review findings corroborate the use of decision aids to enhance the effective delivery of genetic counselling, enabling genetic counsellees to gain more knowledge of genetic tests and feel more informed in making decisions to have these tests. Relevance To Clinical Practice: Decision aids can be used to support nurse-led genetic counselling for better knowledge acquisition and decision-making among counsellees. No Patient Or Public Contribution: Patient or public contribution is not applicable as this is a systematic review. abstract_id: PUBMED:24754734 Relevance of labor room fetal autopsy in increasing its acceptance. Background: Fetal autopsy is included in the basic protocol for investigating a perinatal death; parental refusal is the main reason for its low rate. Aim: To increase acceptance of fetal autopsy and to provide better counselling to the couple regarding the risk of recurrence in future pregnancies. Materials And Methods: All cases with an antenatally diagnosed congenital anomaly resulting in stillbirth or termination before 20 weeks were offered fetal autopsy, which was performed in the labor room itself by the fetal medicine specialist after consent. External and internal examination, photography, infantogram and karyotyping were done, and relevant tissue was sent for histopathology. Correlation between the ultrasound and autopsy findings was assessed. Results: A total of 674 cases of antenatally detected major congenital anomaly were included in the study. Out of 403 cases of stillbirth and abortion, consent for autopsy was given in 312.
The most common defect was craniovertebral, followed by genitourinary anomalies. The autopsy findings fully correlated with the ultrasound findings in 63.5% of cases; additional findings altered the diagnosis in 24.7% of cases, and the diagnosis changed completely in 11.8% of cases. Conclusion: Autopsy, if done in the labor room, increases its acceptance by the couple. Additional findings on autopsy helped in reaching a diagnosis and counselling accordingly. abstract_id: PUBMED:37600657 The growing needs of genetic counselling-Feasibility in utilization of tele-genetic counselling in Asia and Hong Kong. The need for the expansion of genomic services has been at a record high in the past decade. As technological advancement continues to strengthen the entire genetic and genomic pipeline and clinical operational workflow, the major challenge remains the speed of workforce development to meet service growth. In particular, the international expansion of genetic counselling (GC) services has been a topic of interest for the past few years. GC is an emerging profession in most of Asia, and in many countries GC services are provided by physicians or front-line health workers with expertise in genetics rather than by a distinct, independent profession. As genetic and genomic services, especially pre-test and post-test GC, expand globally, the need to tackle the longstanding obstacles of GC personnel shortages and funding issues must not be overlooked. There is an urgent need internationally, and especially in Asia, where the GC profession is comparatively less well-established, to seek alternative approaches to meet service demand. The present review examines the global development and feasibility of tele-genetics and tele-genetic counselling (TGC), and serves as the foundation for exploring a possible roadmap in Hong Kong via the Hong Kong Genome Project. Answer: Yes, autopsy of antenatally diagnosed malformed foetuses does aid genetic counselling.
A study involving 61 foetuses terminated due to antenatal diagnosis of congenital malformations by ultrasound found that in 51% of cases, autopsy provided additional findings. In 34.4% of cases, the autopsy changed the primary diagnosis, and this revised diagnosis led to a change in the risk of recurrence in 29.5% of cases. The conclusion drawn from this study is that genetic counselling based solely on ultrasonographic foetal diagnosis may be erroneous, and for appropriate genetic counselling, a detailed foetal examination should be carried out after termination in cases with ultrasonographically detected congenital malformations (PUBMED:9808972). Additionally, the relevance of labor room fetal autopsy has been highlighted in increasing its acceptance and providing better counselling to couples regarding the risk of recurrence in future pregnancies. The study showed that autopsy findings correlated with ultrasound findings in 63.5% of cases, with additional findings altering the diagnosis in 24.7% of cases, and a complete change in diagnosis in 11.8% of cases. This suggests that autopsy can provide significant additional information that can impact genetic counselling (PUBMED:24754734). Moreover, the importance of a detailed examination of congenitally malformed hearts at autopsy is emphasized, as despite improvements in echocardiographic diagnostics, some congenital heart defects (CHD) may remain undiagnosed and can manifest later or be the cause of early abortion. The macroscopic assessment of the heart by the pathologist is still considered the best method for evaluating structural heart disease, which is crucial for an adequately performed paediatric autopsy (PUBMED:31842550). In summary, autopsy of antenatally diagnosed malformed foetuses is a valuable tool that can significantly aid genetic counselling by providing a more accurate diagnosis and information on the risk of recurrence, which is essential for counselling couples about future pregnancies.
Instruction: Theory of mind deficits in Parkinson's disease: a product of executive dysfunction? Abstracts: abstract_id: PUBMED:36206823 The impact of executive dysfunctions on Theory of Mind abilities in Parkinson's disease. Theory of Mind (ToM) is the ability to infer and reason about others' mental states, a process impaired by Parkinson's disease (PD). ToM performance in PD seems to be strongly related to executive functioning, but the exact nature of this relationship is still unclear. We aim to investigate the direct impact of several executive dysfunctions on ToM deficits (Affective and Cognitive ToM) in PD patients. Sixty-eight PD patients underwent neuropsychological tests evaluating executive control such as inhibition, cognitive flexibility, processing speed or working memory and Cognitive and Affective ToM. We divided participants into two groups based on their performance on executive tests: PD patients with poor executive functioning (PD-EF-) and those with preserved executive functioning (PD-EF+). To explore the direct impact of executive subdomains on ToM abilities, two mediation models were executed in the whole sample. We found that PD patients with poor executive functioning reported poorer scores on Affective and Cognitive ToM tasks than PD patients with preserved executive functions, controlling for age and education. Moreover, parallel mediation models, conducted in the whole sample, indicated that performance on phonological fluency mediated the relationships between educational level and both Affective and Cognitive ToM, controlling for the effect of other executive tests. These findings further support the idea that executive functions are crucial in ToM processes. Particularly, phonological fluency, whose execution requires both verbal abilities and cognitive flexibility, mediated ToM performance while controlling for the effect of other executive functions.
The identification of neuropsychological processes underpinning ToM abilities might represent a plausible target for cognitive training to strengthen ToM abilities in PD. abstract_id: PUBMED:23356595 Theory of mind deficits in Parkinson's disease: a product of executive dysfunction? Objective: Patients with Parkinson's disease (PD) can perform poorly on tasks involving theory of mind (ToM): the ability to reason about mental states. We investigated whether patients' ToM deficits were independent of executive dysfunction. Method: Experiment 1 aimed to establish that ToM deficits were present, and two subsequent experiments manipulated the working memory (WM) demands of the ToM task. Results: In Experiment 1, 15 patients with PD performed significantly more poorly than controls on a false belief vignette task (FBT) but not on a faux pas task. Errors were related to poor verbal fluency. In Experiment 2, 24 patients with PD made fewer errors on shorter false belief vignettes than the original FBT, and errors on the latter were related to WM impairment. In Experiment 3, the FBT was presented as a comic strip visible throughout questioning, reducing WM demands. Patients (n = 24) made memory errors but no false belief errors on the comic strip. They exhibited no verbal fluency or WM impairments, but did exhibit deficits on a black-and-white Stroop task. False belief errors were not correlated with executive performance. Conclusions: PD patients made very few ToM errors that were independent of errors on memory questions, so in this sample, ToM deficits per se appear unlikely. However, patients still made errors on ToM tasks when associated incidental WM demands were considerably reduced, highlighting the need for future investigations of ToM in PD to account for the role of more general cognitive restrictions exhibited by even some medicated, early stage patients.
abstract_id: PUBMED:38049608 Theory of mind in mild cognitive impairment and Parkinson's disease: The role of memory impairment. Background: Social cognition is impaired in Parkinson's disease (PD). Whether social cognitive impairment (iSC) is a by-product of the underlying cognitive deficits in PD or a process independent of cognitive status is unknown. To this end, the present study was designed to investigate the weight of specific cognitive deficits in social cognition, considering different mild cognitive impairment subtypes of PD (PD-MCI). Methods: Fifty-eight PD patients underwent a neuropsychological battery assessing executive functions, memory, language, and visuospatial domains, together with social cognitive tests focused on theory of mind (ToM). Patients were divided into subgroups according to their clinical cognitive status: amnestic PD-MCI (PD-aMCI, n = 18), non-amnestic PD-MCI (PD-naMCI, n = 16), and cognitively unimpaired (PD-CU, n = 24). Composite scores for cognitive and social domains were computed to perform mediation analyses. Results: Memory and language impairments mediated the effect of executive functioning on social cognitive deficits in PD patients. Dividing by MCI subgroups, iSC occurred more frequently in PD-aMCI (77.8%) than in PD-naMCI (18.8%) and PD-CU (8.3%). Moreover, PD-aMCI performed worse than PD-CU in all social cognitive measures, whereas PD-naMCI performed worse than PD-CU in only one subtype of the affective and cognitive ToM tests. Conclusions: Our findings suggest that ToM impairment in PD can be explained by memory dysfunction that mediates executive control. ToM deficits in the amnestic forms of PD-MCI may suggest that subtle changes in social cognition could partly explain future transitions into dementia. Hence, the evaluation of social cognition in PD is critical to characterize a possible behavioral marker of cognitive decline.
abstract_id: PUBMED:24019747 The effect of cognitive status and visuospatial performance on affective theory of mind in Parkinson's disease. It is now well accepted that theory of mind (ToM) functioning is impaired in Parkinson's disease (PD) patients. However, what remain unknown are the functions that underlie this impairment. It has been suggested that cognitive skills may be key in this area of functioning; however, many of the cognitive tests used to assess this have relied on intact visuospatial abilities. This study aimed to examine whether deficits in ToM were generated by cognitive or visuospatial dysfunction and the mediating effect of visuospatial function on ToM performance. Fifty PD patients (31 male, 19 female; mean age = 66.34 years) and 49 healthy controls (16 male, 33 female; mean age = 67.29 years) completed a ToM task (reading the mind in the eyes) and a visuospatial task (line orientation). The results revealed that current cognitive status was a significant predictor of performance on the ToM task, and that 54% of the total effect of cognitive status on ToM was mediated by visuospatial abilities. It was concluded that visuospatial functioning plays an important mediating role in the relationship between executive dysfunction and affective ToM deficits in PD patients, and that visuospatial deficits may directly contribute to the presence of affective ToM difficulties seen in individuals with PD. abstract_id: PUBMED:32248368 Theory of Mind in multiple system atrophy: comparison with Parkinson's disease and healthy subjects. Theory of Mind is defined as the ability to attribute mental states and emotions to other people and is relevant to social relationships.
The cortical and subcortical regions involved in Theory of Mind are damaged by neurodegenerative processes of Parkinsonian syndromes, so the aim of the present study was to explore, for the first time, possible deficits of Theory of Mind and their cognitive correlates in multiple system atrophy (MSA). Twenty-six patients with MSA, 25 patients with Parkinson's disease (PD) and 25 healthy subjects were enrolled. Cognitive and affective subcomponents of Theory of Mind, executive functions, long-term memory and apathy were evaluated. The three groups did not differ on demographic variables. The MSA and PD groups performed worse than healthy subjects on both cognitive (advanced test of ToM) and affective (emotion attribution task) ToM tasks, but no significant difference was found between the patient groups. However, when using another affective ToM task (Eyes Test), the MSA group had poorer performance than healthy subjects and Parkinsonian patients, whereas Parkinsonian patients had similar performance to healthy subjects. Regression analysis revealed an association between poor cognitive flexibility and a dysfunctional cognitive component of Theory of Mind. Deficits of the cognitive and affective components of Theory of Mind occurred in MSA. Dysfunction of the cognitive component was related to executive dysfunction (i.e. cognitive rigidity). These findings might suggest the usefulness of an early evaluation of social cognition in MSA to identify individuals with impaired Theory of Mind who are at risk of social withdrawal and reduced quality of life. abstract_id: PUBMED:28584926 Theory of mind performance in Parkinson's disease is associated with motor and cognitive functions, but not with symptom lateralization. Next to the typical motor signs, Parkinson's disease (PD) is accompanied by neuropsychiatric symptoms, among others affecting social cognition.
Particularly, Theory of Mind (ToM) impairments have mostly been associated with right hemispherical brain dysfunction, so that they might prevail in patients with left dominant PD. Forty-four PD patients, twenty-four with left and twenty with right dominant motor symptoms, engaged in the Reading the Mind in the Eyes (RME) and the Faux Pas Detection Test (FPD) to assess affective and cognitive ToM. The results were correlated with performance in further cognitive tests, and analyzed with respect to associations with the side of motor symptom dominance and severity of motor symptoms. No association of ToM performance with right hemispheric dysfunction was found. RME results were inversely correlated with motor symptom severity, while FPD performance was found to correlate with the performance in verbal fluency tasks and the overall cognitive evaluation. Affective ToM was found associated with motor symptom severity and cognitive ToM predominantly with executive function, but no effect of PD lateralization on this was identified. The results suggest that deficits in social cognition occur as a sequela of the general corticobasal pathology in PD, rather than as a result of hemisphere-specific dysfunction.
All participants were administered written faux pas recognition stories. The dysexecutive PD patients performed less accurately than both HC and executively unimpaired PD individuals on all faux pas story questions (p < 0.05); the executively unimpaired PD group performed as accurately as the HC group on the ToM tasks. Results of the study clearly demonstrate that PD is not tout court associated with ToM impairments and that these may occur in PD patients as a function of the degree of their executive impairment. Our findings also indirectly confirm previous data on the role of the prefrontal regions in mediating ToM capacities. abstract_id: PUBMED:25772323 Freezing of gait and affective theory of mind in Parkinson disease. Background: Affective "Theory of Mind" (ToM) is the specific ability to represent one's own and others' emotional states and feelings. Several studies examined affective ToM ability in patients with Parkinson's disease (PD), using the "Reading the Mind in the Eyes test" (RMET). However, there has been no agreement as to whether or not affective ToM ability is impaired in PD, and such discrepancy may be due to the heterogeneous clinical presentation of PD. Affective disturbance has been linked to the akinetic-rigid form of PD and to gait disturbances, like freezing of gait (FOG). Particularly, FOG has been associated with dysfunction in the striatum's ability to process affective inputs. Here we hypothesized that the presence of FOG can be associated with impaired affective ToM ability in PD patients. Methods: We evaluated ToM by means of RMET and executive functions using the Tower Of London (ToL) test in 29 PD patients (15 with FOG and 14 without FOG) and 19 healthy age-matched subjects. Results: Our results showed that affective ToM is abnormal in PD patients, compared to healthy subjects, and that it is more impaired in patients with FOG than in patients without FOG.
Further, PD patients with FOG performed worse than PD patients without FOG on the ToL test. Conclusion: The affective aspects of ToM can be associated with FOG in patients with PD, thus supporting the idea that FOG is caused by a complex interplay between motor, cognitive and affective factors, rather than being a pure motor problem. abstract_id: PUBMED:21915910 Neuropsychological correlates of theory of mind in patients with early Parkinson's disease. The theory of mind is the ability to attribute mental states to oneself and others and to understand that others have beliefs, desires and intentions different from one's own. The aim of the study was to explore the neuropsychological correlates of theory of mind in patients affected by early Parkinson's disease (PD). Thirty-three PD patients and 33 age-, sex-, and education-matched control subjects underwent the Frontal Assessment Battery, as well as tasks assessing "cognitive" and "affective" theory of mind, and memory abilities; questionnaires evaluating behavioral disorders and quality of life were also administered. Although the two groups did not differ on neuropsychological tasks, PD patients' performance on tasks assessing cognitive and affective theory of mind was significantly worse than that of controls. Moreover, PD patients had more behavioral disorders and worse quality of life than controls. After covarying for behavioral and quality of life scores, the differences between patients and controls on theory of mind tasks remained significant. "Cognitive" theory of mind was associated with Frontal Assessment Battery score and two domains of the quality of life scale, whereas "affective" theory of mind scores correlated only with behavioral scales such as the Frontal Behavioral Inventory and Apathy Evaluation Scale.
The results demonstrate that both affective and cognitive aspects of theory of mind are simultaneously impaired in early PD and suggest that deficits in the 2 subcomponents of theory of mind may be linked to dysfunction of different frontosubcortical circuitries in early PD. abstract_id: PUBMED:22440401 The relationship between executive functions and fluid intelligence in Parkinson's disease. Background: We recently demonstrated that decline in fluid intelligence is a substantial contributor to frontal deficits. For some classical 'executive' tasks, such as the Wisconsin Card Sorting Test (WCST) and Verbal Fluency, frontal deficits were entirely explained by fluid intelligence. However, on a second set of frontal tasks, deficits remained even after statistically controlling for this factor. These tasks included tests of theory of mind and multitasking. As frontal dysfunction is the most frequent cognitive deficit observed in early Parkinson's disease (PD), the present study aimed to determine the role of fluid intelligence in such deficits. Method: We assessed patients with PD (n=32) and control subjects (n=22) with the aforementioned frontal tests and with a test of fluid intelligence. Group performance was compared and fluid intelligence was introduced as a covariate to determine its role in frontal deficits shown by PD patients. Results: In line with our previous results, scores on the WCST and Verbal Fluency were closely linked to fluid intelligence. Significant patient-control differences were eliminated or at least substantially reduced once fluid intelligence was introduced as a covariate. However, for tasks of theory of mind and multitasking, deficits remained even after fluid intelligence was statistically controlled. 
Conclusions: The present results suggest that clinical assessment of neuropsychological deficits in PD should include tests of fluid intelligence, together with one or more specific tasks that allow for the assessment of residual frontal deficits associated with theory of mind and multitasking. Answer: Theory of mind (ToM) deficits in Parkinson's disease (PD) appear to be closely related to executive dysfunction. Several studies have investigated this relationship and have found that executive functions are crucial in ToM processes, with specific executive subdomains impacting ToM abilities. For instance, phonological fluency, which requires verbal abilities and cognitive flexibility, has been shown to mediate ToM performance, controlling the effect of other executive functions (PUBMED:36206823). Additionally, PD patients with poor executive functioning perform less accurately on ToM tasks compared to those with preserved executive functions (PUBMED:23242360). Moreover, memory impairment has been identified as a mediator of the effect of executive functioning on social cognitive deficits in PD patients, particularly in those with amnestic mild cognitive impairment (PD-aMCI), suggesting that memory dysfunction can explain ToM impairment in PD (PUBMED:38049608). Visuospatial abilities have also been found to play a mediating role between executive dysfunction and affective ToM deficits in PD patients (PUBMED:24019747). However, the relationship between ToM deficits and executive dysfunction is not straightforward. Some studies have found that ToM deficits can still occur in PD patients even when incidental working memory demands are reduced, indicating that ToM deficits may not be entirely explained by executive dysfunction (PUBMED:23356595). 
Additionally, ToM performance in PD has been associated with motor and cognitive functions, but not with symptom lateralization, suggesting that deficits in social cognition occur as a result of the general corticobasal pathology in PD rather than hemisphere-specific dysfunction (PUBMED:28584926). In summary, while there is strong evidence to suggest that ToM deficits in PD are related to executive dysfunction, other cognitive factors such as memory and visuospatial abilities also play a role. The complexity of the relationship indicates that ToM deficits in PD may not be solely a product of executive dysfunction, but rather a multifaceted issue involving various cognitive impairments.
Instruction: Should the ICD-9 Trauma Mortality Prediction Model become the new paradigm for benchmarking trauma outcomes? Abstracts: abstract_id: PUBMED:22695443 Should the ICD-9 Trauma Mortality Prediction Model become the new paradigm for benchmarking trauma outcomes? Background: Optimum quantification of injury severity remains an imprecise science with a need for improvement. The accuracy of the criterion standard Injury Severity Score (ISS) worsens as a patient's injury severity increases, especially among patients with penetrating trauma. The objective of this study was to comprehensively compare the mortality prediction ability of three anatomic injury severity indices: the ISS, the New ISS (NISS), and the DRG International Classification of Diseases-9th Rev.-Trauma Mortality Prediction Model (TMPM-ICD-9), a recently developed contemporary injury assessment model. Methods: Retrospective analysis of patients in the National Trauma Data Bank from 2007 to 2008. The TMPM-ICD-9 values were computed and compared with the ISS and NISS for each patient using in-hospital mortality after trauma as the outcome measure. Discrimination and calibration were compared using the area under the receiver operating characteristic curve. Subgroup analysis was performed to compare each score across varying ranges of injury severity and across different types of injury. Results: A total of 533,898 patients were identified with a crude mortality rate of 4.7%. The ISS and NISS performed equally in the groups with minor (ISS, 1-8) and moderate (ISS, 9-15) injuries, regardless of the injury type. However, in the populations with severe (ISS, 16-24) and very severe (ISS, ≥ 25) injuries for all injury types, the NISS predicted mortality better than the ISS did. The TMPM-ICD-9 outperformed both the NISS and ISS almost consistently. Conclusion: The NISS and TMPM-ICD-9 are both superior predictors of mortality as compared with the ISS.
The immediate adoption of NISS for evaluating trauma outcomes using trauma registry data is recommended. The TMPM-ICD-9 may be an even better measure of human injury, and its use in administrative or nonregistry data is suggested. Further research on its attributes is recommended because it has the potential to become the basis for benchmarking trauma outcomes. Level Of Evidence: Prognostic study, level III. abstract_id: PUBMED:26115808 Benchmarking statewide trauma mortality using Agency for Healthcare Research and Quality's patient safety indicators. Background: Improving clinical outcomes of trauma patients is a challenging problem at a statewide level, particularly if data from the state's registry are not publicly available. Promotion of optimal care throughout the state is not possible unless clinical benchmarks are available for comparison. Using publicly available administrative data from the State Department of Health and the Agency for Healthcare Research and Quality (AHRQ) patient safety indicators (PSIs), we sought to create a statewide method for benchmarking trauma mortality while also identifying a pattern of unique complications that have an independent influence on mortality. Methods: Data for this study were obtained from the State of Florida Agency for Health Care Administration. Adult trauma patients were identified as having International Classification of Diseases, ninth edition codes defined by the state. Multivariate logistic regression was used to create a predictive inpatient expected mortality model. The expected value of PSIs was created using the multivariate model and their beta coefficients provided by the AHRQ. Case-mix adjusted mortality results were reported as observed to expected (O/E) ratios to examine mortality, PSIs, failure to prevent complications, and failure to rescue from death. Results: There were 50,596 trauma patients evaluated during the study period.
The overall fit of the expected mortality model was very strong at a c-statistic of 0.93. Twelve of 25 trauma centers had O/E ratios <1 or better than expected. Nine statewide PSIs had failure to prevent O/E ratios higher than expected. Five statewide PSIs had failure to rescue O/E ratios higher than expected. The PSI that had the strongest influence on trauma mortality for the state was PSI no. 9, perioperative hemorrhage or hematoma. Mortality could be further substratified by PSI complications at the hospital level. Conclusions: AHRQ PSIs can have an integral role in an adjusted benchmarking method that screens at-risk trauma centers in the state for higher than expected mortality. Stratifying mortality based on failure to prevent PSIs may identify areas of needed improvement at a statewide level. abstract_id: PUBMED:30274757 External benchmarking of trauma services in New South Wales: Risk-adjusted mortality after moderate to severe injury from 2012 to 2016. Background: Trauma centres and systems have been associated with improved morbidity and mortality after injury. However, variability in outcomes across centres within a given system has been demonstrated. Performance improvement initiatives that utilize external benchmarking as the backbone have demonstrated system-wide improvements in outcomes. This data driven approach has been lacking in Australia to date. Recent improvement in local data quality may provide the opportunity to engage in data driven performance improvement. Our objective was to generate risk-adjusted outcomes for the purpose of external benchmarking of trauma services in New South Wales (NSW) based on existing data standards. Methods: Retrospective cohort study of the NSW Trauma Registry. We included adults (>16 years), with an Injury Severity Score >12, who received definitive care at either Major Trauma Services (MTS) or Regional Trauma Services (RTS) between 2012-2016.
Hierarchical logistic regression models were then used to generate risk-adjusted outcomes. Our outcome measure was in-hospital death. Demographics, vital signs, transfer status, survival risk ratios, and injury characteristics were included as fixed effects. Median odds ratios (MOR) and centre-specific odds ratios with 95% confidence intervals were generated. Centre-level variables were explored as sources of variability in outcomes. Results: 14,452 patients received definitive care at one of seven MTS (n = 12,547) or ten RTS (n = 1905). Unadjusted mortality was lower at MTS (9.4%) compared to RTS (11.2%). After adjusting for case-mix, the MOR was 1.33, suggesting that the odds of death were 1.33-fold greater if a patient was admitted to a randomly selected centre with worse as opposed to better risk-adjusted mortality. Definitive care at an MTS was associated with a 41% lower likelihood of death compared to definitive care at an RTS (OR 0.59, 95% CI 0.35-0.97). Similar findings were present in the elderly and isolated severe brain injury subgroups. Conclusions: The NSW trauma system exhibited variability in risk-adjusted outcomes that did not appear to be explained by case-mix. A better understanding of the drivers of the described variation in outcomes is crucial to design targeted locally-relevant quality improvement interventions. abstract_id: PUBMED:38059567 Benchmarking Pediatric Trauma Care in Mixed Trauma Centers: Adult Risk-Adjusted Mortality Is Not a Reliable Indicator of Pediatric Outcomes. Background: Trauma center benchmarking has become standard practice for assessing quality. The American College of Surgeons adult trauma center verification standards do not specifically require participation in a pediatric-specific benchmarking program. Centers that treat adults and children may therefore rely solely on adult benchmarking metrics as a surrogate for pediatric quality.
This study assessed discordance between adult and pediatric mortality within mixed trauma centers to determine the need to independently report pediatric-specific quality metrics. Study Design: A cohort of trauma centers (n = 493, including 347 adult-only, 44 pediatric-only, and 102 mixed) that participated in the American College of Surgeons TQIP in 2017 to 2018 was analyzed. Center-specific observed-to-expected mortality estimates were calculated using TQIP adult inclusion criteria for 449 centers treating adults (16 to 65 years) and using TQIP pediatric inclusion criteria for 146 centers treating children (0 to 15 years). We then correlated risk-adjusted mortality estimates for pediatric and adult patients within mixed centers and evaluated concordance of their outlier status between adults and children. Results: The cohort included 394,075 adults and 97,698 children. Unadjusted mortality was 6.1% in adults and 1.2% in children. Mortality estimates had only moderate correlation ( r = 0.41) between adult and pediatric cohorts within individual mixed centers. Mortality outlier status for adult and pediatric cohorts was discordant in 31% (32 of 102) of mixed centers (weighted Kappa statistic 0.06 [-0.11 to 0.22]), with 78% (23 of 32) of discordant centers having higher odds of mortality for children than for adults (6 centers with average adult mortality and high pediatric mortality and 17 centers with low adult mortality and average pediatric mortality, p < 0.01). Conclusions: Adult mortality is not a reliable surrogate for pediatric mortality in mixed trauma centers. Incorporation of pediatric-specific benchmarks should be required for centers that admit children. abstract_id: PUBMED:28486891 Assessing contemporary intensive care unit outcome: development and validation of the Australian and New Zealand Risk of Death admission model. 
The Australian and New Zealand Risk of Death (ANZROD) model currently used for benchmarking intensive care units (ICUs) in Australia and New Zealand utilises physiological data collected up to 24 hours after ICU admission to estimate the risk of hospital mortality. This study aimed to develop the Australian and New Zealand Risk of Death admission (ANZROD0) model to predict hospital mortality using data available at presentation to ICU and compare its performance with the ANZROD in Australian and New Zealand hospitals. Data pertaining to all ICU admissions between 1 January 2006 and 31 December 2015 were extracted from the Australian and New Zealand Intensive Care Society Adult Patient Database. Hospital mortality was modelled using logistic regression with development (two-thirds) and validation (one-third) datasets. All predictor variables available at ICU admission were considered for inclusion in the ANZROD0 model. Model performance was assessed using Brier score, standardised mortality ratio and area under the receiver operating characteristic curve. The relationship between ANZROD0 and ANZROD predicted risk of death was assessed using linear regression. After standard exclusions, 1,097,416 patients were available for model development and validation. Observed mortality was 9.5%. Model performance measures (Brier score, standardised mortality ratio and area under the receiver operating characteristic curve) for the ANZROD0 and ANZROD in the validation dataset were 0.069, 1.0 and 0.853; 0.057, 1.0 and 0.909, respectively. There was a strong positive correlation between the mortality predictions with an overall R2 of 0.73. We found that the ANZROD0 model had acceptable calibration and discrimination. Predictions from the models had high correlations in all major diagnostic groups, with the exception of cardiac surgery and possibly trauma and sepsis. 
abstract_id: PUBMED:26921148 Risk factors for and prediction of mortality in critically ill medical-surgical patients receiving heparin thromboprophylaxis. Background: Previous studies have suggested that prediction models for mortality should be adjusted for additional risk factors beyond the Acute Physiology and Chronic Health Evaluation (APACHE) score. Our objective was to identify risk factors independent of APACHE II score and construct a prediction model to improve the predictive accuracy for hospital and intensive care unit (ICU) mortality. Methods: We used data from a multicenter randomized controlled trial (PROTECT, Prophylaxis for Thromboembolism in Critical Care Trial) to build a new prediction model for hospital and ICU mortality. Our primary outcome was all-cause 60-day hospital mortality, and the secondary outcome was all-cause 60-day ICU mortality. Results: We included 3746 critically ill non-trauma medical-surgical patients receiving heparin thromboprophylaxis (43.3 % females) in this study. The new model predicting 60-day hospital mortality incorporated APACHE II score (main effect: hazard ratio (HR) = 0.97 for per-point increase), body mass index (BMI) (main effect: HR = 0.92 for per-point increase), medical admission versus surgical (HR = 1.67), use of inotropes or vasopressors (HR = 1.34), acetylsalicylic acid or clopidogrel (HR = 1.27) and the interaction term between APACHE II score and BMI (HR = 1.002 for per-point increase). This model had a good fit to the data and was well calibrated and internally validated. However, the discriminative ability of the prediction model was unsatisfactory (C index < 0.65). Sensitivity analyses supported the robustness of these findings. Similar results were observed in the new prediction model for 60-day ICU mortality which included APACHE II score, BMI, medical admission and invasive mechanical ventilation. 
Conclusion: Compared with the APACHE II score alone, the new prediction model increases data collection, is more complex but does not substantially improve discriminative ability. Trial Registration: ClinicalTrials.gov Identifier: NCT00182143. abstract_id: PUBMED:24747447 Benchmarking trauma centers on mortality alone does not reflect quality of care: implications for pay-for-performance. Background: Trauma centers are currently benchmarked on mortality outcomes alone. However, pay-for-performance measures may financially penalize centers based on complications. Our objective was to determine whether the results would be similar to the current standard method of mortality-based benchmarking if trauma centers were profiled on complications. Methods: We analyzed data from the National Trauma Data Bank from 2007 to 2010. Patients 16 years or older with blunt or penetrating injuries and an Injury Severity Score (ISS) of 9 or higher were included. Risk-adjusted observed-to-expected (O/E) mortality ratios for each center were generated and used to rank each facility as high, average, or low performing. We similarly ranked facilities on O/E morbidity ratios defined as occurrence of any major complication. Concordance between hospital performance rankings was evaluated using a weighted κ statistic. Correlation between morbidity- and mortality-based O/E ratios was assessed using Pearson coefficients. Sensitivity analyses were performed to mitigate the competing risk of death for the morbidity analyses. Results: A total of 449,743 patients from 248 facilities were analyzed. The unadjusted morbidity and mortality rates were 10.0% and 6.9%, respectively. No correlation was found between morbidity- and mortality-based O/E ratios (r = -0.01). Only 40% of the centers had similar performance rankings for both mortality and morbidity. Of the 31 high performers for mortality, only 11 centers were also high performers for morbidity. 
A total of 78 centers were ranked as average, and 11 ranked as low performers on both outcomes. Comparison of hospital performance status using mortality and morbidity outcomes demonstrated poor concordance (weighted κ = 0.03, p = 0.22). Conclusion: Mortality-based external benchmarking does not identify centers with high complication rates. This creates a dichotomy between current trauma center profiling standards and measures used for pay-for-performance. A benchmarking mechanism that reflects all measures of quality is needed. Level Of Evidence: Prognostic/epidemiologic study, level III. abstract_id: PUBMED:35210139 The BUrn Mortality Prediction (BUMP) Score - An improved mortality prediction score based on data of the German burn registry. Background: Burn injuries constitute the fourth most common injuries globally. Patient outcomes must be currently assessed to provide appropriate patient care with high quality standards. However, existing mortality prediction scoring methods have been shown to lack accuracy in current burn patient populations. Therefore, this study aimed to validate existing scores using current patient data and assess whether new prediction parameters can provide better accuracy. Methods: A retrospective analysis of the patient data from the German Burn Registry between 2016 and 2019 was performed to evaluate all Abbreviated Burn Severity Index (ABSI) score parameters. All patients over 16 years of age who received intensive care were included. Descriptive statistics and logistic regression analysis were used to identify novel prediction parameters based on the parameters documented at admission and establish a new prediction score, the BUrn Mortality Prediction (BUMP) score. The quality of the new score was subsequently compared to that of the original ABSI, modified ABSI, Galeiras, Revised Baux score and TIMM. The new prediction score was then validated using patient data collected in the German Burn Registry in 2020. 
Results: In total, 7276 patients were included. Age; the presence of at least two comorbidities; burn injuries caused by work-related accidents, traffic accidents and suicide attempts; total burn surface area; inhalation trauma and full-thickness burns were identified as independent significant predictors of mortality (p < 0.001). Additionally, we evaluated new age groups to improve prediction accuracy. The number of comorbidities (p < 0.001) and the aetiology (burns occurring at work [p = 0.028], burns caused by traffic accidents [p < 0.001] or burns due to attempted suicide [p < 0.001]) had a significant influence on mortality. The BUMP score, which was developed based on these parameters, showed the best fit and more accurate mortality prediction than all the above-mentioned scores (area under the receiver operating characteristic curve: 0.947 [0.939-0.954] compared to 0.926 [0.915-0.936], 0.928 [0.918-0.939], 0.937 [0.928-0.947], 0.939 [0.930-0.948], 0.940 [0.932-0.949] respectively). Conclusions: A novel score (BUMP score) was developed for the purpose of external quality assessment of burn centres participating in the German burn registry, where observed and expected outcomes are compared on a hospital level, and for scientific applications. The clinical impact of this score and its generalisability to other patient populations need to be evaluated. abstract_id: PUBMED:19693630 In search of benchmarking for mortality following multiple trauma: a Swiss trauma center experience. Background: The manifestations associated with non-survival after multiple trauma may vary considerably between countries and institutions. The aim of the present study was to assess the quality of performance by comparing actual mortality rates to the literature. Methods: The study involved evaluation of a prospective consecutive multiple trauma cohort (injury severity score, ISS > 16) primarily admitted to a university hospital.
Univariate and multivariate testing of routine parameters and scores, such as the Trauma and Injury Severity Score (TRISS), was used to determine their predictive powers for mortality. Results: The 30-day mortality of 22.8% (n = 54) exactly matched the mortality predicted by the TRISS versions of Champion and the Major Trauma Outcome Study for our 237 multiple trauma patients (42.8 +/- 20.9 years; ISS 29.5 +/- 11.5). Univariate analysis revealed significant differences between survivors and non-survivors when compared for age, ISS, Glasgow coma scale (GCS), pulse oximeter saturation (SapO2), hemoglobin, prothrombin time, and lactate. In multivariate analysis, age, ISS, and GCS (P < 0.001 each) functioned as major independent prognostic parameters of both 24 h and 30-day mortality. Various TRISS versions hardly differed in their precision (area under the curve [AUC] 0.83-0.84), but they did differ considerably in their level of requirement, with the TRISS using newer National Trauma Data Bank coefficients (NTDB-TRISS) offering the highest target benchmark (predicted mortality 13%, Z value -5.7) in the prediction of 30-day mortality. Conclusions: Because of the current lack of a single, internationally accepted scoring system for the prediction of mortality after multiple trauma, the comparison of outcomes between medical centers remains unreliable. To achieve effective quality control, a practical benchmarking model, such as the TRISS-NTDB, should be used worldwide. abstract_id: PUBMED:12528788 Triage scoring systems, severity of illness measures, and mortality prediction models in pediatric trauma. Trauma triage scores, severity of illness measures, and mortality prediction models quantitate severity of injury and stratify patients according to a specified outcome. Triage scoring systems are typically used to assist prehospital personnel in determining which patients require trauma center care, but they are not recommended as the sole determinant of triage.
Severity of illness measures and mortality prediction models are used in clinical and health services research for risk-adjusted outcomes analyses and institutional benchmarking. As clinicians and researchers, it is imperative that we be knowledgeable of the methodologies and applications of these scoring and risk prediction systems to ensure their quality and appropriate utilization. Answer: The ICD-9-based Trauma Mortality Prediction Model (TMPM-ICD-9) has been suggested as a potentially superior measure for benchmarking trauma outcomes. According to a study using National Trauma Data Bank data from 2007 to 2008, the TMPM-ICD-9 outperformed both the New Injury Severity Score (NISS) and the traditional Injury Severity Score (ISS) in predicting mortality for trauma patients, especially in populations with severe and very severe injuries (PUBMED:22695443). The study concluded that the TMPM-ICD-9 may be an even better measure of human injury and recommended its use in administrative or nonregistry data, suggesting further research on its attributes due to its potential to become the basis for benchmarking trauma outcomes. However, it is important to note that benchmarking trauma outcomes is complex and cannot rely solely on mortality rates. Other studies have shown that benchmarking on mortality alone does not reflect the quality of care and that complications should also be considered (PUBMED:24747447). Additionally, the use of the Agency for Healthcare Research and Quality's patient safety indicators (PSIs) has been proposed as a method for benchmarking trauma mortality and identifying patterns of unique complications that influence mortality (PUBMED:26115808). Moreover, external benchmarking of trauma services has demonstrated variability in outcomes across centers within a given system, indicating the need for a better understanding of the drivers of variation in outcomes to design targeted quality improvement interventions (PUBMED:30274757).
In pediatric trauma care, it has been shown that adult risk-adjusted mortality is not a reliable indicator of pediatric outcomes, suggesting the need for independent pediatric-specific quality metrics (PUBMED:38059567). In conclusion, while the TMPM-ICD-9 has shown promise as a superior predictor of mortality compared to the ISS and NISS, the decision to adopt it as the new paradigm for benchmarking trauma outcomes should consider the complexity of trauma care, the need to include measures beyond mortality, and the variability in outcomes across different trauma centers and patient populations. Further research and validation are recommended to fully assess its potential and to ensure that any benchmarking model used reflects all measures of quality in trauma care.
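The concordance analyses cited in this answer grade each center as a low, average or high outlier on two outcomes and then summarise agreement with a weighted kappa statistic. The sketch below illustrates that calculation with a linearly weighted Cohen's kappa; the center rankings are entirely invented and are not taken from any of the cited studies.

```python
# Sketch of the outlier-concordance analysis described in the answer above:
# hypothetical centers are graded low/average/high on two outcomes and their
# agreement is summarised with a linearly weighted Cohen's kappa.

def weighted_kappa(a, b, k=3):
    """Linearly weighted Cohen's kappa for paired ordinal ratings 0..k-1."""
    n = len(a)
    # observed disagreement, weighted by |i - j| / (k - 1)
    obs = sum(abs(x - y) for x, y in zip(a, b)) / (k - 1) / n
    # chance-expected disagreement from the marginal distributions
    row = [a.count(i) / n for i in range(k)]
    col = [b.count(j) / n for j in range(k)]
    exp = sum(row[i] * col[j] * abs(i - j) / (k - 1)
              for i in range(k) for j in range(k))
    return 1 - obs / exp

# 0 = low performer, 1 = average, 2 = high performer (invented rankings)
mortality_rank = [1, 1, 0, 2, 1, 2, 0, 1, 1, 2]
morbidity_rank = [1, 0, 1, 2, 2, 1, 0, 1, 1, 0]
print(round(weighted_kappa(mortality_rank, morbidity_rank), 2))  # 0.21
```

A kappa near zero, as in the benchmarking studies above, means a center's outlier status on one outcome tells you little about its status on the other.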
Instruction: Induced sputum eosinophilia in ulcerative colitis patients: the lung as a mirror image of intestine? Abstracts: abstract_id: PUBMED:19230639 Induced sputum eosinophilia in ulcerative colitis patients: the lung as a mirror image of intestine? Background: Ulcerative colitis (UC) is a systemic disease of unknown etiology with extra-intestinal manifestation. Induced sputum (IS) non-invasively assesses extrapulmonary involvement in Crohn's disease. We sought to determine whether there is a cellular marker of lung injury in UC patients detectable by IS. Methods: Nineteen UC patients (mean age 46.4+/-11.3 years, disease duration 8.6+/-7.5 years [range 1-25 years], 68.4% males) were studied, 6 with active disease and 13 in remission. Eleven received 5-ASA, 5 received steroids and/or azathioprine and 3 patients were untreated. UC patients were compared with 27 healthy non-smoker controls. IS was recovered after 20 min inhalation of 3% saline with an ultrasonic nebulizer by the selecting plugs method, and 300 cells were differentially counted in cytospin Giemsa-stained slides. CD4/CD8 subsets were identified by FACS. Pulmonary function tests were performed by the Jaeger Masterlab spirometer. Results: UC patients' IS contained higher %eosinophils than controls (p=0.05) and lower FEV1/FVC ratios (p=0.001). Steroid- and/or azathioprine-treated patients had significantly lower FEV1/FVC ratios than only 5-ASA-treated patients (p=0.019). Eosinophil infiltration in airways was high in 5-ASA-treated patients compared to those receiving steroids and/or azathioprine (p=0.046) and those with less extensive disease (p=0.05). Using a cutoff of 3% eosinophils, IS had a sensitivity of 67% and specificity of 73% to differentiate patients with a cutoff of 70 eosinophils/mm2 in biopsy. Conclusions: The percentage of sputum eosinophils is significantly different between UC patients with proctitis and pancolitis.
These immune abnormalities may be a common pattern that is present throughout the mucosae. abstract_id: PUBMED:27803750 Alterations of pulmonary function in patients with inflammatory bowel diseases. Aim: The aim of this study was to investigate the alterations of pulmonary function tests (PFTs) and their relationship with disease activity in inflammatory bowel diseases (IBDs). Methods: Sixty-four IBD patients (31 Crohn's disease [CD] and 33 ulcerative colitis [UC]) and thirty healthy individuals (controls) were studied with regard to the following parameters of PFTs: Forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), their ratio, mid-forced expiratory flow of 25-75% (FEF 25-75), residual volume, total lung capacity, and diffusing capacity of the lung for carbon monoxide (DLCO). The disease activity was calculated using the Crohn's Disease Activity Index for CD and Mayo Clinic Score for UC. Correlation analysis was performed between disease activity and sputum cytology and PFTs. Results: Nineteen of the 31 CD patients (61.29%) and 17 of the 33 UC patients (51.52%) but none of the controls showed at least one abnormal PFTs (P < 0.05). Compared with controls, both CD and UC patients exhibited a significant reduction in FEV1 (P < 0.05), FVC (P < 0.05), FEF 25-75 (P < 0.05), and DLCO (P < 0.05). The majority with decreased measurements of PFTs were in the active phase of diseases (P < 0.05). IBD activity scores correlated negatively with some parameters of PFTs and positively with lymphocytosis and eosinophilia of sputum (P < 0.05). Conclusions: Pulmonary function disorders are significantly common in IBD patients. The impairment in active disease is significantly greater than in remission. abstract_id: PUBMED:17049827 Changes in pulmonary function in patients with ulcerative colitis. Objectives: Information on the occurrence and frequency of pulmonary involvement in patients with ulcerative colitis (UC) is inconsistent. 
Some authors reported pulmonary impairment with UC by standard pulmonary function tests (PFTs) and documented a reduced diffusing capacity for carbon monoxide (DLCO) especially in patients with active disease, whereas others could not detect differences in routine PFTs between UC patients and controls. Aim: The aim of this prospective study was to determine the frequency and type of pulmonary dysfunction in patients with UC with respect to disease activity. A further aim was to evaluate the influence of smoking, nutritional status, sputum cytology and sulphasalazine therapy on PFT parameters. Patients And Methods: Twenty-six patients with UC (20 with active disease, 6 inactive) and 16 age and sex matched healthy controls were investigated with respect to the following pulmonary function tests: forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), their ratio (FEV1/FVC) and forced expiratory flow 25-75% (FEF25-75%), as well as oxygen saturation. For UC patients, colonoscopy and biopsy were done. Disease activity was assessed by the Truelove index for UC. Induced sputum was sampled for cytology. Smoking habit, body mass index (BMI) and medications were recorded. Results: Fifteen out of 26 patients with UC (57.6%) exhibited at least one pathological pulmonary function test (<80% of predicted value). Small airway obstruction was reported in the 15 patients, restrictive dysfunction in 30.7% and obstructive dysfunction in 11.5%. The impairment of PFTs was significant and more pronounced in patients with active disease, FVC (-14% of predicted), FEV1 (-9% of predicted) and FEF25-75% (-32% of predicted), P<0.01, 0.05 and 0.01, respectively. There was no significant influence of smoking and medications on PFTs. Conclusions: UC patients show significantly decreased lung function tests in comparison to healthy controls. The impairment in active disease exceeded that during the remission.
Early recognition is important, as these pulmonary manifestations can be strikingly steroid responsive. abstract_id: PUBMED:11999006 Sulphasalazine and lung toxicity. Sulphasalazine prescribing is on the increase. Pulmonary toxicity and blood dyscrasias are rare side-effects. Numerous case reports have been published implicating sulphasalazine in pulmonary toxicity. The authors searched the literature for cases of sulphasalazine-induced lung toxicity and the 50 cases identified are discussed here. All published case reports/letters referring to sulphasalazine and lung toxicity were studied. The search terms "sulphasalazine" and "sulfasalazine" were combined with the terms "lung", "pulmonary disease", "pneumonitis" and "pleuritis" using Medline and PubMed databases. Typical presentation of sulphasalazine-induced lung disease was with new onset dyspnoea and infiltrates on chest radiography. Common symptoms were cough and fever. Crepitations on auscultation and peripheral eosinophilia were noted in half of the cases. Sputum production, allergy history, rash, chest pain and weight loss were inconsistent findings. Pulmonary pathology was variable, the commonest being eosinophilic pneumonia with peripheral eosinophilia and interstitial inflammation with or without fibrosis. Fatal reports were infrequent. Most patients were managed by drug withdrawal, with 40% prescribed corticosteroids. In conclusion, sulphasalazine lung disease should be distinguished from interstitial lung disease due to underlying primary disease. Despite the increase in sulphasalazine prescribing, pulmonary toxicity remains rare. The majority of patients with suspected sulphasalazine-induced lung disease improved within weeks of drug withdrawal and the need for corticosteroids is debatable. abstract_id: PUBMED:24853024 Three cases of mesalazine-induced pneumonitis with eosinophilia.
Oral mesalazine, or 5-aminosalicylate, is one of the first-choice medications for the treatment of ulcerative colitis and is commonly used for both induction and maintenance therapy. In a 6-month period, we treated three cases of mesalazine-induced pneumonitis. In all three cases, computed tomography images revealed upper lobe dominant bilateral peripherally localized consolidations. Such images are commonly observed in patients with cryptogenic organizing pneumonia or chronic eosinophilic pneumonia. Computed tomography images for mesalazine-induced pneumonitis have been rarely reported in the literature. abstract_id: PUBMED:38108401 The UDP-glucose/P2Y14 receptor axis promotes eosinophil-dependent large intestinal inflammation. Ulcerative colitis (UC) is a chronic disorder of the large intestine with inflammation and ulceration. The incidence and prevalence of UC have been rapidly increasing worldwide, but its etiology remains unknown. In patients with UC, the accumulation of eosinophils in the large intestinal mucosa is associated with increased disease activity. However, the molecular mechanism underlying the promotion of intestinal eosinophilia in patients with UC remains poorly understood. Here, we show that uridine diphosphate (UDP)-glucose mediates the eosinophil-dependent promotion of colonic inflammation via the purinergic receptor P2Y14. The expression of P2RY14 mRNA was upregulated in the large intestinal mucosa of patients with UC. The P2Y14 receptor ligand UDP-glucose was increased in the large intestinal tissue of mice administered dextran sodium sulfate (DSS). In addition, P2ry14 deficiency and P2Y14 receptor blockade mitigated DSS-induced colitis. Among the large intestinal immune cells and epithelial cells, eosinophils highly expressed P2ry14 mRNA. P2ry14-/- mice transplanted with wild-type bone marrow eosinophils developed more severe DSS-induced colitis compared with P2ry14-/- mice that received P2ry14-deficient eosinophils. 
UDP-glucose prolonged the lifespan of eosinophils and promoted gene transcription in the cells through P2Y14 receptor-mediated activation of ERK1/2 signaling. Thus, the UDP-glucose/P2Y14 receptor axis aggravates large intestinal inflammation by accelerating the accumulation and activation of eosinophils. abstract_id: PUBMED:8633548 Mesalamine-induced lung toxicity. Lung toxicity associated with 5-aminosalicylate (5-ASA) agents is a rare entity. We report the case of a 32-yr-old woman with ulcerative colitis who developed progressive shortness of breath while taking one of the 5-ASA drugs, oral mesalamine. Bilateral pulmonary infiltrates, peripheral eosinophilia, and histological findings consistent with acute pneumonitis characterized the lung injury. Although the differential diagnosis is broad, mesalamine-induced lung damage must be considered in patients who develop unexplained respiratory symptoms while taking this agent. abstract_id: PUBMED:28458997 Mesalamine induced eosinophilic pneumonia. We present the case of a twenty two year old Caucasian male with ulcerative colitis on mesalamine who presented with dry cough, fever, malaise and dyspnea on exertion. He had peripheral eosinophilia and apical ground-glass opacities on computed tomography of the chest. He had no new exposures and his infectious workup was non diagnostic. Bronchoalveolar lavage showed eosinophilia raising the suspicion of mesalamine induced eosinophilic pneumonia. His symptoms improved after discontinuing mesalamine with a tapered dose of steroids. abstract_id: PUBMED:21224053 Tissue eosinophilia in a mouse model of colitis is highly dependent on TLR2 and independent of mast cells. The mechanisms initiating eosinophil influx into sites of inflammation have been well studied in allergic disease but are poorly understood in other settings. This study examined the roles of TLR2 and mast cells in eosinophil accumulation during a nonallergic model of eosinophilia-associated colitis. 
TLR2-deficient mice (TLR2(-/-)) developed a more severe colitis than wild-type mice in the dextran sodium sulfate (DSS) model. However, they had significantly fewer eosinophils in the submucosa of the cecum (P < 0.01) and mid-colon (P < 0.01) than did wild-type mice after DSS treatment. Decreased eosinophilia in TLR2(-/-) mice was associated with lower levels of cecal CCL11 (P < 0.01). Peritoneal eosinophils did not express TLR2 protein, but TLR2 ligand injection into the peritoneal cavity induced local eosinophil recruitment, indicating that TLR2 activation of other cell types can mediate eosinophil recruitment. After DSS treatment, mast cell-deficient (Kit(W-sh/W-sh)) mice had levels of intestinal tissue eosinophilia similar to those in wild-type mice. However, mast cell-deficient mice were partially protected from DSS-induced weight loss, an effect that was reversed by mast cell reconstitution. Overall, this study indicates a critical role for indirect TLR2-dependent pathways, but not mast cells, in the generation of eosinophilia in the large intestine during experimental colitis and has important implications for the regulation of eosinophils at mucosal inflammatory sites. abstract_id: PUBMED:21127763 Unilateral balsalazide-induced eosinophilic pneumonia in an ulcerative colitis patient. We report a case of unilateral eosinophilic pneumonia secondary to balsalazide monotherapy for ulcerative colitis. After commencing balsalazide, the patient presented with a history of cough, progressive dyspnoea and lethargy. Blood counts revealed peripheral eosinophilia. Her chest radiograph showed left-sided infiltration, and high-resolution computerised tomography demonstrated widespread nodular shadowing and ground glass opacification in the left lung. The right lung was normal. Transbronchial lung biopsy confirmed eosinophilic pneumonia. There was rapid clinical and radiological improvement after withdrawing balsalazide.
Drug-induced eosinophilic pneumonia has been described with mesalazine and sulfasalazine, but no case reports of balsalazide-induced eosinophilic pneumonitis have been found. Answer: The concept of induced sputum eosinophilia in ulcerative colitis (UC) patients reflecting the lung as a mirror image of the intestine is supported by several studies that have found a correlation between UC and pulmonary abnormalities. One study found that UC patients had a higher percentage of eosinophils in their induced sputum compared to controls, as well as lower FEV1/FVC ratios, suggesting a cellular marker of lung injury detectable by induced sputum (PUBMED:19230639). This study also noted that the immune abnormalities may be a common pattern present throughout the mucosae, indicating a systemic aspect of UC that affects both the intestines and the lungs. Further supporting this notion, another study reported that a significant number of UC patients exhibited abnormal pulmonary function tests (PFTs), with the majority of these abnormalities occurring in the active phase of the disease. The study also found a negative correlation between IBD activity scores and some PFT parameters, and a positive correlation with lymphocytosis and eosinophilia of sputum (PUBMED:27803750). Additionally, a prospective study determined that UC patients show significantly decreased lung function tests compared to healthy controls, with the impairment being more pronounced in patients with active disease (PUBMED:17049827). Cases of drug-induced lung toxicity in UC patients have also been reported, with medications such as sulphasalazine and mesalamine being implicated in pulmonary toxicity and eosinophilic pneumonia (PUBMED:11999006, PUBMED:24853024, PUBMED:8633548, PUBMED:28458997, PUBMED:21127763). These cases further illustrate the complex relationship between UC and lung health, where medications used to treat UC can also contribute to lung issues. 
Moreover, research has identified molecular mechanisms that may underlie the promotion of intestinal eosinophilia in UC, such as the UDP-glucose/P2Y14 receptor axis, which aggravates large intestinal inflammation by accelerating the accumulation and activation of eosinophils (PUBMED:38108401). This suggests that similar pathways could be involved in pulmonary eosinophilia in UC patients. In summary, the evidence suggests that the lungs can indeed reflect intestinal inflammation in UC patients, with induced sputum eosinophilia serving as a potential marker for extraintestinal manifestations of the disease. The presence of eosinophils in the sputum and abnormalities in PFTs, along with the systemic nature of UC, support the concept of the lung as a mirror image of the intestine in UC patients.
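The diagnostic cut-off reasoning in the first abstract of this record (a 3% sputum-eosinophil threshold read against a biopsy reference, yielding 67% sensitivity and 73% specificity) can be illustrated with a toy 2x2 calculation. All patient values below are invented; they were merely chosen so the resulting proportions land near the reported figures.

```python
# Toy illustration of evaluating a diagnostic cut-off: classify hypothetical
# patients by sputum eosinophils >= 3% and compare with a biopsy reference.
# All values are invented; only the resulting proportions echo the abstract.

def sens_spec(test_pos, ref_pos):
    """Sensitivity and specificity from paired boolean test/reference calls."""
    tp = sum(t and r for t, r in zip(test_pos, ref_pos))
    fn = sum(not t and r for t, r in zip(test_pos, ref_pos))
    tn = sum(not t and not r for t, r in zip(test_pos, ref_pos))
    fp = sum(t and not r for t, r in zip(test_pos, ref_pos))
    return tp / (tp + fn), tn / (tn + fp)

sputum_pct = [5.1, 2.0, 4.2, 0.8, 6.5, 1.1, 3.3, 0.5, 2.4,
              7.0, 1.9, 4.8, 2.8, 0.3, 3.9, 2.2, 1.4]
biopsy_pos = [True, True, True, False, True, False, False, False, True,
              True, False, False, False, False, False, False, False]

test_pos = [p >= 3.0 for p in sputum_pct]   # the 3% cut-off
sens, spec = sens_spec(test_pos, biopsy_pos)
print(round(sens, 2), round(spec, 2))  # 0.67 0.73
```

Moving the cut-off up or down trades sensitivity against specificity, which is exactly the balance the abstract's 3% threshold was chosen to strike.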
Instruction: Do laboratories follow heart failure recommendations and guidelines and did we improve? Abstracts: abstract_id: PUBMED:23334056 Do laboratories follow heart failure recommendations and guidelines and did we improve? The CARdiac MArker Guideline Uptake in Europe (CARMAGUE). Background: Natriuretic peptides (NP) are well-established markers of heart failure (HF). During the past 5 years, analytical and clinical recommendations for measurement of these biomarkers have been published in guidelines. The aim of this follow-up survey was to investigate how well these guidelines for measurement of NP have been implemented in laboratory practice in Europe. Methods: Member societies of the European Federation of Clinical Chemistry and Laboratory Medicine were invited in 2009 to participate in a web-based audit questionnaire. The questionnaire requested information on type of tests performed, decision limits for HF, turn-around time and frequency of testing. Results: There was a moderate increase (12%) in the number of laboratories measuring NP compared to the initial survey in 2006. The most frequently used HF decision limits for B-type NP (BNP) and N-terminal BNP (NT-proBNP) were, respectively, 100 ng/L and 125 ng/L, derived from the package inserts in 55% of cases. Fifty laboratories used a second decision limit. Age- or gender-dependent decision limits were applied in 10% (8.5% in 2006). The vast majority of laboratories (80%) did not have any criteria regarding frequency of testing, compared to 33% in 2006. Conclusions: The implementation of NP measurement for HF management was a slow process between 2006 and 2009 at a time when guidelines had just been established. The decision limits were derived from package insert information and literature. There was great uncertainty concerning frequency of testing, which may reflect the debate about the biological variability, which was not published for most of the assays in 2009.
abstract_id: PUBMED:35579428 Head-to-head comparison between recommendations by the ESC and ACC/AHA/HFSA heart failure guidelines. Recommendations represent the core messages of guidelines, and are particularly important when the body of scientific evidence is rapidly growing, as in the case of heart failure (HF). The main messages from two latest major HF guidelines, endorsed by the European Society of Cardiology (ESC) and the American College of Cardiology/American Heart Association/Heart Failure Society of America (ACC/AHA/HFSA), are partially overlapping, starting from the four pillars of treatment for HF with reduced ejection fraction. Some notable differences exist, in part related to the timing of recent publications (most notably, the Universal Definition of HF paper and the EMPEROR-Preserved trial), and in part reflecting differing views of the natural history of HF (with a clear differentiation between stages A and B HF in the ACC/AHA/HFSA guidelines). Different approaches are proposed to specific issues such as risk stratification and implantable cardioverter defibrillator use for primary prevention in HFrEF patients with non-ischaemic aetiology. The ACC/AHA/HFSA guidelines put a greater emphasis on some issues that are particularly relevant to the US setting, such as the cost-effectiveness of therapies and the impact of health disparities on HF care. A comparison between guideline recommendations may give readers a deeper understanding of the ESC and ACC/AHA/HFSA guidelines, and help them apply sensible approaches to their own practice, wherever that may be in the world. A comparison may possibly also help further harmonization of recommendations between future guidelines, by identifying why some areas have led to conflicting recommendation, even when ostensibly reviewing the same published evidence. 
abstract_id: PUBMED:37125281 Management of heart failure: similarities and discrepancies between the European Society of Cardiology and the American Heart Association guidelines. Recommendations are the fundamental elements of guidelines and are especially significant when the amount of scientific data is expanding fast, as is the case with heart failure (HF). Beginning with the four pillars of treatment for HF with reduced ejection fraction, the main messages of the two most recent major HF guidelines, endorsed by the European Society of Cardiology (ESC) and the American College of Cardiology/American Heart Association/Heart Failure Society of America (ACC/AHA/HFSA), partially overlap. There are notable differences, in part due to the timing of recent publications, like the Universal Definition of HF and the EMPEROR-Preserved trial, and in part due to differing perspectives on the natural history of HF. Specific challenges, such as risk stratification and the use of implanted cardioverter-defibrillators for primary prevention in HFrEF patients with non-ischaemic aetiology, are approached from a variety of perspectives. The ACC/AHA/HFSA recommendations place increased attention on topics that are especially pertinent to the US context, such as the cost-effectiveness of medications and the impact of health inequalities on HF care. A comparison of guideline suggestions may help readers gain a better grasp of the ESC and ACC/AHA/HFSA guidelines and apply logical approaches to their own practice, wherever in the world that may be. A comparison may also contribute to the harmonization of future guidelines' recommendations by highlighting the reasons why certain areas have resulted in different recommendations while seemingly analysing the same published information. abstract_id: PUBMED:7496881 Computerizing guidelines to improve care and patient outcomes: the example of heart failure.
Increasing amounts of medical knowledge, clinical data, and patient expectations have created a fertile environment for developing and using clinical practice guidelines. Electronic medical records have provided an opportunity to invoke guidelines during the everyday practice of clinical medicine to improve health care quality and control costs. In this paper, efforts to incorporate complex guidelines [those for heart failure from the Agency for Health Care Policy and Research (AHCPR)] into a network of physicians' interactive microcomputer workstations are reported. The task proved difficult because the guidelines often lack explicit definitions (e.g., for symptom severity and adverse events) that are necessary to navigate the AHCPR algorithm. They also focus more on errors of omission (not doing the right thing) than on errors of commission (doing the wrong thing) and do not account for comorbid conditions, concurrent drug therapy, or the timing of most interventions and follow-up. As they stand, the heart failure guidelines give good general guidance to individual practitioners, but cannot be used to assess quality of care without extensive "translation" into the local environment. Specific recommendations are made so that future guidelines will prove useful to a wide range of prospective users. abstract_id: PUBMED:16896019 Drug therapy recommendations from the 2005 ACC/AHA guidelines for treatment of chronic heart failure. Objective: To review and discuss key aspects of the drug therapy recommendations in the American College of Cardiology (ACC)/American Heart Association (AHA) 2005 Guideline Update for the Diagnosis and Management of Chronic Heart Failure (HF) in the Adult. Data Sources: Data were obtained from the ACC/AHA 2005 Guideline Update for Chronic HF. English-language clinical trials, observational studies, and pertinent review articles evaluating the pharmacotherapy of chronic HF were identified, based on MEDLINE searches through January 2006.
Study Selection: Articles presenting information that impacts the evidence base for recommendations regarding the use of various drug therapies in patients with chronic HF were evaluated. Data Synthesis: The ACC/AHA 2005 Guideline Update for HF provides revised, evidence-based recommendations for the treatment of chronic HF. The new guidelines are based on a staging system that recognizes both the development and progression of HF. Recommendations are provided for 2 stages of patients (A and B) who do not yet have clinical HF but are clearly at risk and 2 stages (C and D) that include patients with symptomatic HF. The guidelines continue to emphasize the important role of neurohormonal blockade with angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-adrenergic blockers, and aldosterone antagonists. Based on recent trials, updated recommendations address the roles of combination therapy and the selective addition of hydralazine and isosorbide dinitrate. Along with specific drug recommendations, information on the practical use of various drugs is provided. Although the guidelines primarily focus on HF due to systolic dysfunction, general recommendations are also provided for patients with preserved systolic function. Conclusions: The ACC/AHA 2005 Guideline Update provides evidence-based recommendations for healthcare professionals involved in the care of adults with chronic HF. Recent clinical trial findings have further clarified the evolving role of neurohormonal-blocking drugs in the prevention and treatment of HF. abstract_id: PUBMED:31566106 Evolution of the American College of Cardiology and American Heart Association Cardiology Clinical Practice Guidelines: A 10-Year Assessment. Background The American College of Cardiology and American Heart Association periodically revise clinical practice guidelines. We evaluated changes in the evidence underlying guidelines published over a 10-year period. 
Methods and Results Thirty-five American College of Cardiology/American Heart Association guidelines were divided into 2 time periods: 2008 to 2012 and 2013 to 2017. Guidelines were categorized into the following topic areas: arrhythmias, prevention, acute and stable ischemia, heart failure, valvular heart disease, and vascular medicine. Changes in recommendations were assessed for each topic area. American College of Cardiology/American Heart Association designated class of recommendation as level I, II, or III (I represented "strongly recommended") and levels of evidence (LOE) as A, B, or C (A represented "highest quality"). The median number of recommendations per topic area was 281 (198-536, interquartile range) in 2008 to 2012 versus 247 (190-451.3, interquartile range) in 2013 to 2017. The median proportion of class of recommendation I was 49.3% and 44.4% in the 2 time periods, 38.0% and 44.5% for class of recommendation II, and 12.5% and 11.2% for class of recommendation III. Median proportions for LOE A were 15.7% and 14.1%, 41.0% and 52.8% for LOE B, and 46.9% and 32.5% for LOE C. The decrease in the proportion of LOE C was highest in heart failure (24.8%), valvular heart disease (22.3%), and arrhythmia (19.2%). An increase in the proportion of LOE B was observed for these same areas: 31.8%, 23.8%, and 19.2%, respectively. Conclusions There has been a decrease in American College of Cardiology/American Heart Association guidelines recommendations, driven by removal of recommendations based on lower quality of evidence, although there was no corresponding increase in the highest quality of evidence. abstract_id: PUBMED:36912998 Comparison of American and European guidelines for cardio-oncology of heart failure. Heart failure is a complex clinical syndrome, whose signs and symptoms are caused by functional or structural impairment of ventricular filling or ejection of blood.
Due to the interaction among anticancer treatment, patients' cardiovascular background, including coexisting cardiovascular diseases and risk factors, and cancer itself, cancer patients develop heart failure. Some drugs for cancer treatment may cause heart failure directly through cardiotoxicity or indirectly through other mechanisms. Heart failure in turn may make patients lose effective anticancer treatment, thus affecting the prognosis of cancer. Some epidemiological and experimental evidence shows that there is a further interaction between cancer and heart failure. Here, we compared the cardio-oncology recommendations among heart failure patients of the recent 2022 American guidelines, 2021 European guidelines, and 2022 European guidelines. Each guideline acknowledges the role of multidisciplinary (cardio-oncology) discussion before and during scheduled anticancer therapy. abstract_id: PUBMED:26320006 Evaluation of natriuretic peptide recommendations in heart failure clinical practice guidelines. Background: The B-type naturietic peptides (NPs) are associated with heart failure (HF). This investigation was designed to evaluate heart failure clinical practice guideline (CPG) recommendations for the use of NPs. Methods: A search for English language CPGs for HF published since 2011 was conducted. A search for systematic reviews (SR) and meta-analysis for NPs in HF was conducted for the years 2004-2012. Each HF CPG was evaluated by two independent reviewers. Key recommendations for NPs and the supporting references were abstracted. The key findings from each SR were abstracted. Results: Seven English language HF CPGs were found, all of which made recommendations for the use of NPs in diagnosis. Four made recommendations for prognosis and three for management. The European CPG scored highly for rigor of development with the Appraisal of Guidelines for Research and Evaluation Instrument (AGREE II) while the others did not. 
North American CPGs made stronger recommendations citing higher grades of evidence for the use of NPs in HF than the European or Australian CPGs. The CPGs mostly cited primary studies (47/66) to support the recommendations. From twelve available SRs, five were cited by CPGs. One CPG conducted an SR. Conclusions: The SR evidence to support NP use in CPGs has not been well cited in the CPGs and the recommendations are only partially supported by the SR evidence. Future CPGs should consider improving the methodology used to evaluate laboratory tests. abstract_id: PUBMED:23673899 Do primary care physicians follow the current recommendations for hypertensive pharmacotherapy? Introduction: In 2007, the joint recommendations of the European Society of Cardiology and the European Society of Hypertension (ESC/ESH) were announced. Objectives: The aim of this survey was to evaluate the implementation rate of the new ESC/ESH recommendations by primary care physicians and to assess the effectiveness of antihypertensive therapy. Patients And Methods: Data concerning pharmacotherapy, blood pressure (BP) measurements, and compliance with the guidelines were collected in 10,880 hypertensive patients during 3 subsequent follow-up visits. Results: Combined antihypertensive treatment (angiotensin converting enzyme inhibitors with β-blocker, diuretic, or calcium-channel blocker) was used in 69.2% of the patients at baseline. A combination of β-blocker with diuretic was prescribed in 7.4% of the patients. In 71% of these patients no history of cardiovascular events was reported (myocardial infarction, revascularization, or heart failure). Diuretics were not used in 20.7% of the patients receiving a 3-drug regimen and in 6.7% of those receiving a 4-drug regimen. BP target levels set by individual physicians were frequently lower than those recommended by the guidelines. The percentage of patients who reached the recommended BP target increased during the survey to 25.3%.
Conclusions: A combination of β-blocker and diuretic is still commonly used in the treatment of hypertension in patients without coronary artery disease and heart failure. Despite the use of combination treatment in about 90% of hypertensive patients and attempts at reaching lower target BP values than those recommended by the guidelines, treatment targets were achieved only in one-fourth of the patients. abstract_id: PUBMED:29276898 Clinical and demographic characteristics of an outpatient Russian population with chronic heart failure at the time of enrollment in the QUALIFY registry for evaluating consistency with clinical guidelines on drug therapy Background: Chronic heart failure is the most common condition in patients with cardiovascular diseases. Therefore, evaluating consistency of doctor's recommendations with guidelines on drug therapy is a relevant issue. Aim: To evaluate consistency of doctor's recommendations with guidelines on drug therapy for CHF. Materials And Methods: The study presents enrollment visit data for the Russian part of the international prospective registry, QUALIFY. The study included 404 outpatient patients with CHF between February, 2013 through June, 2014. Consistency with guidelines for drug therapy was evaluated for 5 drug classes, including ACE inhibitors (ACEI), β-blockers (BB), angiotensin receptor antagonists (ARA), mineralocorticoid receptor antagonists (MRA), and ivabradine. Results: On the whole, the consistency was good in 64.9% of cases, moderate in 27.7% of cases, and poor in 7.4% of cases. Proportions of patients who received a target dose of a medicine and a dose ≥50% of the target dose were low, 21.5% and 62.3% for ACEI; 15% and 50.8% for BB; 20.3% and 39.8% for ARA; and 23.4% and 78.7% for ivabradine, respectively. These values, except for BB, were lower for patients with late hospitalization (hospitalization ≥6 months vs. 
Answer: The implementation of guidelines for the measurement of natriuretic peptides (NPs), which are markers of heart failure (HF), in laboratory practice in Europe has been a slow process. Between 2006 and 2009, there was only a moderate increase (12%) in laboratories measuring NPs. The decision limits for B-type NP (BNP) and N-terminal BNP (NT-proBNP) were mostly derived from package inserts, and there was significant uncertainty concerning the frequency of testing. This may reflect the lack of published data on the biological variability of most assays at that time (PUBMED:23334056). In terms of broader heart failure management recommendations, there are some discrepancies between the European Society of Cardiology (ESC) and the American College of Cardiology/American Heart Association/Heart Failure Society of America (ACC/AHA/HFSA) guidelines. While there is partial overlap in the core messages, such as the four pillars of treatment for HF with reduced ejection fraction, there are also notable differences. These differences may be due to the timing of recent publications and differing views on the natural history of HF. The ACC/AHA/HFSA guidelines also place greater emphasis on issues particularly relevant to the US setting, such as cost-effectiveness and health disparities (PUBMED:35579428, PUBMED:37125281). When it comes to the evolution of ACC/AHA cardiology clinical practice guidelines over a 10-year period, there has been a decrease in the number of recommendations, driven by the removal of recommendations based on lower quality evidence. However, there was no corresponding increase in the highest quality of evidence (PUBMED:31566106). In the context of cardio-oncology, both American and European guidelines recognize the importance of multidisciplinary discussions before and during anticancer therapy for patients with heart failure (PUBMED:36912998). 
Overall, while there is evidence of some improvement and adherence to heart failure guidelines, there are still areas where implementation is lacking or inconsistent. This includes the use of NPs in diagnosis, prognosis, and management of HF, as well as the application of drug therapy recommendations in clinical practice (PUBMED:26320006, PUBMED:23673899). The slow uptake of guidelines in laboratory practice and the discrepancies in recommendations between different guideline bodies suggest that while there has been some progress, there is still room for improvement in following heart failure recommendations and guidelines.
Instruction: Early carcinoma of the gastric cardia in Japan: is it different from that in the West? Abstracts: abstract_id: PUBMED:33758614 Is cardia cancer a special type of gastric cancer? A differential analysis of early cardia cancer and non-cardia cancer. Background: The prognosis of early cardia cancer and non-cardia cancer is still controversial, and it is difficult to collect a large number of cases with complete information in clinical practice. Our study aimed to identify the differences in clinicopathological characteristics and outcomes of early cardia gastric cancer and non-cardia gastric cancer. Methods: All cases analyzed were from the Surveillance, Epidemiology, and End Results (SEER) database. Data on patients with early gastric cancer from 2004 to 2010 were retrospectively analyzed. Patients were assigned to a cardia cancer group or a non-cardia cancer group. Univariate and multivariate analyses were performed to examine differences between groups. A competing-risk model was built to compare causes of death between cardia cancer and non-cardia cancer. Propensity score matching (PSM) was performed to reduce bias. Results: We found that early cardia cancer was more common in male patients and White patients than non-cardia cancer, signet ring cell carcinoma was more common in non-cardia cancer, and the differentiation of non-cardia cancer was worse. Univariate analysis showed that age, marital status, race, tumor location, histology, grade, stage, and operative status were associated with prognosis. The prognosis of patients with cardia cancer was worse than that of patients with non-cardia cancer, when stratified by lymph node metastasis and depth of tumor invasion. Multivariate analysis showed that cardia cancer was an independent predictor of poor prognosis. After PSM, cardia cancer still exhibited a poor prognosis. Conclusions: At an early stage, cardia cancer had a poorer prognosis than non-cardia cancer.
The prevention and treatment of early cardia cancer deserve serious attention. abstract_id: PUBMED:11135215 Early carcinoma of the gastric cardia in Japan: is it different from that in the West? Background: The incidence of adenocarcinoma of the gastric cardia has increased recently in the West. However, in Japan, most patients with gastric carcinoma have disease that is situated in the body and the distal stomach. The objectives of this study were to compare the clinicopathologic findings of patients with early gastric carcinoma (EGC) arising at the cardia and those with carcinoma in more distal parts of the stomach, then comparing the findings with those from patients with carcinoma of the gastric cardia in the West. Methods: Three thousand one hundred forty-four patients with EGC who underwent surgical resection between 1962 and 1997 at the National Cancer Center Hospital in Tokyo were studied. Seventy patients with EGC at the cardia were compared with those who had lesions in the middle and lower parts of the stomach. The body mass index (BMI), smoking, and drinking were evaluated using all patients with cardia EGC and 344 patients in a matched cohort in the latter group. Results: Seventy patients had an EGC located just at the cardia, whereas 2796 patients had lesions in the lower two-thirds of the stomach. The former lesions were different from those in the distal two-thirds of the stomach: More often, they were of an elevated type (34% vs. 14%, respectively), they were histologically well differentiated in 89% (vs. 59%), and there were more submucosal tumors (53% vs. 41%). The BMI, smoking, and drinking in the two groups were not different. The incidences of Barrett esophagus and gastroesophageal reflux disease (GERD) in patients with EGC were 2.9% (2 of 70 patients) and 5.7% (4 of 70 patients), respectively.
Conclusions: There were many significant differences in clinicopathologic characteristics between patients with carcinoma of the cardia and patients with carcinoma of the distal stomach in Japan. The incidence of early cardia carcinoma was very low in Japan, and obesity, smoking, drinking, Barrett esophagus, or GERD were not related to its occurrence, in contrast to reports in the West. abstract_id: PUBMED:33848673 Outcomes of endoscopic submucosal dissection for early esophageal and gastric cardia adenocarcinomas. Objectives: Adenocarcinomas of the esophagus and of the gastric cardia are regarded as the same clinical entity in oncology. For endoscopic resection, however, endoscopic mucosal resection is recommended for esophageal adenocarcinoma, while endoscopic submucosal dissection (ESD) is advocated for gastric adenocarcinomas. Our aim was to compare the outcomes of ESD in both types of esophagogastric junction adenocarcinomas. Methods: Between March 2015 and December 2019, we included all patients who underwent an ESD for early adenocarcinoma of the esophagogastric junction at a French tertiary referral center. Esophageal and gastric cardia adenocarcinomas were compared in terms of clinical, procedural and histological outcomes. Results: 57 esophageal and 19 gastric cardia adenocarcinomas were included in the analysis, for a total of 76 patients. The median (IQR) size of the resections was 40 (40-57.5) and 50 (35-55) mm, p=0.96, respectively. En bloc resection was achieved in 100% and 89% for adenocarcinomas of the esophagus and the gastric cardia, p=0.06. Late adverse events occurred in 14% and 5.3%, respectively, p=0.44, with no severe adverse event. Curative resection rates were 67% and 63% for adenocarcinomas of the esophagus and the gastric cardia, respectively, p=0.89. Conclusion: ESD is a safe treatment for T1 adenocarcinomas of the esophagogastric junction, curative in two thirds of the patients, in tumors arising from the esophagus or from the stomach.
ESD should be considered for the routine resection of esophageal adenocarcinomas. abstract_id: PUBMED:3828948 Early carcinoma of the gastric cardia. A clinicopathologic study of 21 cases. A clinicopathologic study was done on 21 cases of early carcinoma of the gastric cardia. The disease was preponderant in men, with a male-female ratio of 17:4. Five patients had no symptoms, while the other 16 had epigastric pain, epigastric discomfort, or other symptoms. Melena or hematemesis occurred in five patients. The first detection was made by barium study in 16 and by endoscopy in five. Detailed examinations using double-contrast radiography in both the prone right anterior oblique and supine left lateral projections were useful to detect early carcinoma of the gastric cardia. All depressed-type carcinomas were incidentally detected during endoscopic examinations of benign peptic ulcers; therefore, whenever endoscopic examinations are being done for other entities, the entire stomach, including the cardia, should be examined carefully and continuously. Compared with early carcinomas of the other areas of the stomach, grossly elevated and histologically well-differentiated early carcinomas were more common in the gastric cardia. In the mucosa adjacent to the carcinoma, chronic gastritis was apparent in almost all cases, although intestinal metaplasia was found less frequently than in other areas of the stomach. abstract_id: PUBMED:32223013 Marked thickening of muscularis mucosae and submucosa in the gastric cardia: A histopathological study of 110 surgical resection cases. Objective: To investigate histopathologic changes of muscularis mucosae (MM) and submucosa in the gastric cardia. Methods: We performed a histopathology study of 50 distal esophagectomies with proximal gastrectomies for esophageal squamous cell carcinoma as the study (non-cancerous cardiac) group and 60 gastrectomies for early gastric cardiac carcinoma as the cancer group.
The gastroesophageal junction was defined as the distal end of squamous epithelium, multilayered epithelium, or deep esophageal glands or ducts. Gastric cardia (n = 110) was defined as the presence of cardiac and cardio-oxyntic mucosae distal to the gastroesophageal junction. Results: The average thickness of MM and submucosa in the cardia was 1.04 and 1.41 mm, respectively, which was significantly thicker than that in the distal stomach (n = 34) (0.22 and 0.99 mm) or distal esophagus (n = 92) (0.60 and 1.15 mm). In the cardia, thickened MM displayed frayed muscle fibers (93.3%) with a significantly higher prevalence of entrapped glands, cysts, and lymphoid follicles than in the distal stomach or distal esophagus. In the submucosa, fatty changes, cysts, and abnormal arteries were significantly more common in the cardia than in the distal stomach or distal esophagus. Compared with the study group, the cardia in the cancer group showed significantly thicker MM (average 1.31 vs 0.72 mm) and submucosa (average 1.61 vs 1.16 mm), more frequent frayed MM (93.3% vs 60.0%), prolapse-like changes (50.0% vs 2.0%), and cysts (26.7% vs 4.0%). Conclusion: MM and submucosa of the cardia were significantly thickened, especially in early gastric cardiac carcinomas. abstract_id: PUBMED:32195175 Helicobacter pylori Is Associated With Precancerous and Cancerous Lesions of the Gastric Cardia Mucosa: Results of a Large Population-Based Study in China. Background: Helicobacter pylori (H. pylori) is widely accepted to be the most important cause of gastric non-cardia adenocarcinoma (GNCA), while its role in the development of gastric cardia adenocarcinoma (GCA) is not well-defined. We aimed to investigate current H. pylori infection in relation to the severity of both precancerous and cancerous lesions of the gastric cardia in an Asian population at high risk of GCA. Methods: A population-based cross-sectional study was conducted in Linzhou County, Henan Province, China.
Two thousand three (2,003) randomly selected participants with data on current H. pylori infection, assayed by 13C-urea breath test (13C-UBT), and a sequence of histological diagnoses of the gastric cardia mucosa were analyzed. Results: Of 2,003 subjects, 828 (41.33%) were currently infected with H. pylori. The prevalence of current H. pylori infection increased with increasing severity of histological lesions, from 34.12% in subjects with normal gastric cardia mucosa to 52.17% in subjects with gastric cardia high-grade intraepithelial neoplasia (CHIN)/gastric cardia adenocarcinoma (GCA) (P for trend <0.001). With H. pylori-negative subjects as the reference category, H. pylori-positive subjects had significantly elevated adjusted prevalence odds ratios (PORs) for each of the histological lesions. The PORs (95% CI) were 2.15 (1.74-2.64), 3.46 (2.08-5.75), 2.78 (1.90-4.07), and 3.05 (1.30-7.17) for subjects with carditis, cardia intestinal metaplasia (CIM), cardia low-grade intraepithelial neoplasia (CLIN), and CHIN/GCA, respectively. The associations remained when subjects with abnormal stomach non-cardia mucosa were excluded. Conclusions: This large epidemiologic study demonstrates a positive association between current H. pylori infection and the severity of both precancerous and cancerous lesions of the gastric cardia in an Asian population at high risk of GCA. These findings suggest that H. pylori infection may play a role throughout both early- and late-stage development of GCA. abstract_id: PUBMED:32596002 Contrasting Prognostic Effects of Tumor-Infiltrating Lymphocyte Density in Cardia and Non-cardia Gastric Adenocarcinomas. Purpose: This study sought to investigate the prognostic significance of tumor-infiltrating lymphocytes (TILs) in relation to tumor location within the stomach. Materials And Methods: The densities and prognostic significance of TIL subsets were evaluated in 542 gastric cancer patients who underwent gastrectomy.
Immunohistochemical staining for CD3, CD4, CD8, forkhead/winged helix transcription factor (Foxp3), and granzyme B was performed. Results: Cardia cancer was associated with significantly lower densities of CD8 T-cells and higher densities of Foxp3 and granzyme B T-cells than non-cardia tumors. Multivariate analysis showed that advanced age (hazard ratio [HR], 1.023; 95% confidence interval [CI], 1.006-1.040), advanced T classification (HR, 2.029; 95% CI, 1.106-3.721), lymph node metastasis (HR, 3.319; 95% CI, 1.947-5.658), low CD3 expression (HR, 0.997; 95% CI, 0.994-0.999), and a high Foxp3/CD4 ratio (HR, 1.007; 95% CI, 1.001-1.012) were independent predictors of poor overall survival in cardia cancer patients. In non-cardia cancer patients, total gastrectomy (HR, 2.147; 95% CI, 1.507-3.059), advanced T classification (HR, 2.158; 95% CI, 1.425-3.266), lymph node metastasis (HR, 1.854; 95% CI, 1.250-2.750), and a low Foxp3/CD4 ratio (HR, 0.978; 95% CI, 0.959-0.997) were poor prognostic factors for survival. Conclusions: The densities and prognostic effects of TILs differed in relation to the location of tumors within the stomach. The contrasting prognostic effects of the Foxp3/CD4 ratio in cardia and non-cardia gastric cancer patients suggest that clinicians ought to consider tumor location when determining treatment strategies. abstract_id: PUBMED:24612004 Detection of autoantibodies to a panel of tumor-associated antigens for the diagnosis of gastric cardia adenocarcinoma. To evaluate the diagnostic value of using autoantibodies in sera to a panel of eight tumor-associated antigens (TAAs) of P53, Koc, P62, C-myc, IMP1, Survivin, P16 and Cyclin B1 full-length recombinant proteins for early detection of patients with gastric cardia adenocarcinoma (GCA) and high-risk subject screening.
Enzyme-linked immunosorbent assay was used to detect autoantibodies against the eight selected TAAs in 383 sera samples from four groups, including 140 subjects with normal gastric cardia epithelia (NOR), 76 patients with chronic atrophic gastritis (CAG), 79 patients with gastric cardia dysplasia (DYS) and 88 patients with GCA. In addition, the expression of the eight antigens was analyzed in gastric cardia tissues by immunohistochemical method. The individual autoantibodies to six TAAs (P53, P62, IMP1, Survivin, P16 and Cyclin B1) were significantly higher in sera from patients with GCA than in normal subjects (P < 0.05). When the autoantibody assay was successively expanded to seven TAAs (P53, Koc, P62, C-myc, IMP1, Survivin and P16), a stepwise increased detection frequency of autoantibodies was found in the four sera groups (13% in NOR, 39% in CAG, 46% in DYS, and 64% in GCA, respectively), and the risks of CAG, DYS and GCA steadily increased about 4.4-, 5.7- and 12.0-fold. The sensitivity and the specificity for autoantibodies against the seven TAAs in diagnosing GCA reached up to 64% and 87%, respectively. The area under the receiver operating characteristic curve for the seven anti-TAA autoantibodies was 0.73 (95% CI: 0.68-0.78). No further increase in sensitivity was found with the addition of new anti-TAA autoantibodies. Combined detection of autoantibodies to TAAs might be helpful to distinguish GCA patients from normal subjects and from patients with gastric cardia precancerous lesions. In addition, further studies in patients with GCA and precancerous lesions using enlarged TAA panels might improve the sensitivity and specificity of cancer detection and high-risk subject screening. abstract_id: PUBMED:29535841 Neuroendocrine carcinoma of esophageal and gastric cardia: clinicopathologic and immunohistochemistry study of 80 cases.
Neuroendocrine carcinoma (NEC) of the esophagus and gastric cardia is a rare tumor, and the Chaoshan region has one of the highest incidences of esophageal and gastric cardia cancer (GCC) worldwide. The aim of this study was to characterize the clinicopathologic features of esophageal NEC (n = 67) and gastric cardia NEC (n = 13) cases identified over a 9-year period in the Chaoshan region. Esophageal NECs were either purely NEC (n = 47) or mixed with squamous cell carcinoma or adenocarcinoma (n = 20). For GCC, pure NEC was found in 5 cases, whereas 8 cases were mixed with adenocarcinomas. The majority of esophageal and gastric cardia NECs were of the small cell type, and 24/67 esophageal and 5/13 gastric cardia patients had lymph node metastasis. Immunohistochemistry was performed in all cases, and positive staining for synaptophysin (Syn) was found in all cases, with half the esophageal NEC cases also being chromogranin A (CgA)-positive. In the multivariate Cox regression model, lymph node and further metastasis were independent prognostic factors for esophageal NEC. Our study revealed the clinicopathological features of esophageal and gastric cardia NECs in the Chaoshan region and found that patients with mixed NECs may have a better prognosis than those with pure NECs, which may provide a therapeutic clue for treating this rare tumor. abstract_id: PUBMED:2097443 Clinical and pathological evaluation of early cancer in the gastric cardia. We report 9 rare cases (7 males and 2 females) of early gastric cancer of the esophagogastric junction. From 1976 to 1988, 1308 cases of gastric cancer were resected in our Institute. Of these, 479 (36.6%) cases were early gastric cancer. Among all 479 early gastric cancers, 9 (1.9%) were located within 2 cm of the esophagogastric junction on the lesser curvature and/or posterior wall of the cardia. This represents 11.7% (9/77) of the cases of cancer at the esophagogastric junction.
This incidence contrasts with the 39.5% of early cancers elsewhere in the stomach. Overall, 36.6% (479/1308) were early gastric cancer. Eight of these 9 patients were resected through the abdominal approach and one by the thoracoabdominal approach. Seven underwent proximal and 2 underwent total gastrectomy. On microscopic examination, one cancer was m-cancer, and 8 were sm-cancer. Lymph node metastasis was absent in all 9 cases. The prognosis of cancer of the cardia is generally poorer than that for adenocarcinoma in the corpus or distal stomach. However, there was no cancer recurrence 22-91 months postoperatively in these 9 cases of early gastric cancer at the esophagogastric junction. Answer: Early carcinoma of the gastric cardia in Japan does exhibit differences compared to that in the West. According to a study conducted at the National Cancer Center Hospital in Tokyo, early gastric carcinoma (EGC) at the cardia in Japan was found to be different from EGC in the distal two-thirds of the stomach. The lesions at the cardia were more often of an elevated type, histologically well differentiated, and there were more submucosal tumors. Additionally, factors such as body mass index (BMI), smoking, and drinking were not different between the two groups, and the incidence of Barrett esophagus and gastroesophageal reflux disease (GERD) in patients with EGC was low. This contrasts with reports from the West, where obesity, smoking, drinking, Barrett esophagus, or GERD are related to the occurrence of early cardia carcinoma (PUBMED:11135215). Furthermore, the incidence of early cardia carcinoma is very low in Japan, and the clinicopathologic characteristics of patients with carcinoma of the cardia are significantly different from those with carcinoma of the distal stomach in Japan. This suggests that early cardia carcinoma in Japan may have a distinct pathogenesis compared to that observed in Western populations (PUBMED:11135215).
Instruction: Pediatricians and screening for obesity with body mass index: does level of training matter? Abstracts: abstract_id: PUBMED:19618793 Pediatricians and screening for obesity with body mass index: does level of training matter? Objectives: Pediatric obesity is increasing at epidemic rates in industrialized nations. It is recommended that pediatricians screen all children annually with the use of body mass index (BMI). However, it is unclear whether this recommendation is followed. This study sought to (1) determine the proportion of children screened for obesity with BMI, and (2) test whether attending physicians are more likely than resident physicians to document and plot BMI. Methods: We conducted a systematic review of medical records in an urban academic pediatric practice. Participants were children aged 5 to 11 years presenting for health care in 2004. We measured the proportion of subjects with documented and plotted BMI and compared results for attending and resident physicians. We used logistic regression analysis to determine factors associated with documentation of BMI. Results: Of 397 medical records reviewed, 59.7% contained the 2000 Centers for Disease Control and Prevention growth curve with BMI for age; 5.5% documented BMI, and 4.3% plotted BMI. Resident physicians were more likely to document (13.0% vs. 3.0%, p = 0.0008) and plot (9.0% vs. 2.7%, p = 0.0260) BMI compared with attending physicians. Children with a BMI ≥95% for age were more likely to have their BMI documented (odds ratio [OR] = 10.7, 95% confidence interval [CI] 3.7, 31.5) and plotted (OR = 7.1, 95% CI 2.3, 21.6). Conclusion: We found the use of BMI to screen for childhood obesity to be very poor in this academic pediatric practice. Resident physicians were more likely to document and plot BMI than attending physicians. abstract_id: PUBMED:22320334 Understanding pediatricians' views toward school-based BMI screening in Massachusetts: a pilot study.
Background: Massachusetts (MA) mandated body mass index (BMI) screening in schools in 2010. However, little is known about pediatricians' views on school-based screening or how the pediatricians' perspectives might affect the school-based screening process. We assessed MA pediatricians' knowledge, attitudes, beliefs, and practices concerning BMI screening. Methods: An anonymous Web-based survey was completed by 286 members of the MA Chapter of the American Academy of Pediatrics who provided primary care (40% response rate). Results: Support for school-based BMI screening was mixed. While 16.1% strongly supported it, 12.2% strongly opposed it. About one fifth (20.2%) believed school-based screening would improve communication between schools and pediatricians; 23.0% believed school-based screening would help with patient care. More (32.2%) believed screening in schools would facilitate communication with families. In contrast, pediatricians embraced BMI screening in practice: 91.6% calculated and 85.7% plotted BMI at every well child visit. Pediatricians in urban practices, particularly inner city, had more positive attitudes toward BMI screening in schools, even when adjusting for respondent demographics, practice setting, and proportion of patients in the practice who were overweight/obese (p < .001). Conclusion: These data suggest MA pediatricians use BMI screening and support its clinical utility. However, support for school-based BMI screening was mixed. Urban-based pediatricians in this sample held more positive beliefs about screening in schools. Although active collaboration between schools and pediatricians would likely help to ensure that the screenings have a positive impact on child health regardless of location, it may be easier for urban-based schools and pediatricians to be successful in developing partnerships. abstract_id: PUBMED:1431769 Body mass and weight as indicators for cholesterol screening. 
Background: Universal screening of serum cholesterol levels in adults has been recommended but not achieved. We were interested in factors that affected screening rates, and whether obese patients were more likely to have elevated cholesterol levels than other patients in our practice. Methods: A sequential sample of charts was reviewed for height, weight, race, sex, diagnosis of hypertension or diabetes, and evidence of cholesterol screening. Results: Of 604 adult patients, 32% had serum cholesterol measurements. No correlation was found between weight or body mass index and cholesterol levels. Patients with hypertension or diabetes were more frequently screened. Sex and race did not influence screening rates. Conclusions: In this population neither weight nor body mass index was associated with elevated serum cholesterol levels, suggesting that screening must be offered without regard to level of obesity in order to find those patients who will benefit from intervention. abstract_id: PUBMED:28209292 Cholesterol Screening and Treatment Practices and Preferences: A Survey of United States Pediatricians. Objectives: To determine pediatricians' practices, attitudes, and barriers regarding screening for and treatment of pediatric dyslipidemias in 9- to 11-year-olds and 17- to 21-year-olds. Study Design: American Academy of Pediatrics (AAP) 2013-2014 Periodic Survey of a national, randomly selected sample of 1627 practicing AAP physicians. Pediatricians' responses were described and modeled. Results: Of 614 (38%) respondents who met eligibility criteria, less than half (46%) were moderately/very knowledgeable about the 2008 AAP cholesterol statement; fewer were well-informed about 2011 National Heart, Lung, and Blood Institute Guidelines or 2007 US Preventive Service Task Force review (both 26%). Despite published recommendations, universal screening was not routine: 68% reported they never/rarely/sometimes screened healthy 9- to 11-year-olds. 
In contrast, more providers usually/most/all of the time screened based on family cardiovascular history (61%) and obesity (82%). Screening 17- to 21-year-olds was more common in all categories (P < .001). Only 58% agreed with universal screening, and 23% felt screening was low priority. Pediatricians uniformly provided lifestyle counseling but access to healthy food (81%), exercise (83%), and adherence to lifestyle recommendations (96%) were reported barriers. One-half of pediatricians (55%) reported a lack of local subspecialists. Although 62% and 89% believed statins were appropriate for children and adolescents with high low-density lipoprotein cholesterol (≥200 mg/dL) unresponsive to lifestyle, a minority initiated statins (8%, 21%). Conclusions: US pediatricians report lipid screening and treatment practices that are largely at odds with existing recommendations, likely because of lack of knowledge and conflicts among national guidelines, and concern about treatment efficacy and harms. Education regarding pediatric lipid disorders could promote guideline implementation. abstract_id: PUBMED:20083518 Adoption of body mass index guidelines for screening and counseling in pediatric practice. Objective: The purpose of this study was to examine pediatrician implementation of BMI and provider interventions for childhood overweight prevention and treatment. Methods: Data were obtained from the American Academy of Pediatrics (AAP) Periodic Survey of Fellows No. 65, a nationally representative survey of AAP members. Surveys that addressed the provision of screening and management of childhood overweight and obesity in primary care settings were mailed to 1622 nonretired US AAP members in 2006. Results: One thousand five (62%) surveys were returned; 677 primary care clinicians in active practice were eligible for the survey.
Nearly all respondents (99%) reported measuring height and weight at well visits, and 97% visually assess children for overweight at most or every well-child visit. Half of the respondents (52%) assess BMI percentile for children older than 2 years. Most pediatricians reported that they do not have time to counsel on overweight and obesity, that counseling has poor results, and that having simple diet and exercise recommendations would be helpful in their practice. Pediatricians in large practices and those who had attended continuing medical education on obesity were more familiar with national expert guidelines, were more likely to use BMI percentile, and had higher self-efficacy in practices related to childhood and adolescent overweight and obesity. Multivariate analysis revealed that pediatricians with better access to community and adjunct resources were more likely to use BMI percentile. Conclusions: BMI-percentile screening in primary pediatric practice is underused. Most pediatricians believe that they can and should try to prevent overweight and obesity, yet few believe there are good treatments once a child is obese. Training, time, and resource limitations affect BMI-percentile use. Awareness of national guidelines may improve rates of BMI-percentile use and recognition of opportunities to prevent childhood and adolescent obesity. abstract_id: PUBMED:16516284 Body mass index and use of mammography screening in the United States. Background: Obese and underweight women may be less likely to seek cancer screening because of health status, self-perception, and body image related to non-desirable weight. This study examined the relationship between body mass index (BMI) and mammography screening, using the data from the year 2000 United States National Health Interview Survey. Methods: This study included 7692 white and 1496 black female participants aged 40-80, who were randomly selected. 
Body mass index (kg/m2), based on self-reported weight and height, was compared between women with and without a mammogram in the past 2 years using logistic regression. Results: Compared to women with normal body mass index, underweight and extremely obese women were more likely to have no screening mammograms in the past 2 years (odds ratio (OR) = 1.8, 95% confidence interval (CI), 1.2-2.6 for underweight women; odds ratio = 1.3, 95% confidence interval, 1.0-1.8 for extremely obese women). When data were analyzed by race, the odds ratio estimates were 1.8 for underweight white women (95% confidence interval, 1.3-2.7) and 1.4 for extremely obese white women (95% confidence interval, 1.0-1.9). The corresponding odds ratio estimates were close to 1.0 for black women. Conclusions: Underweight and extreme obesity may increase the risk of underusing screening mammography. The association between body mass index especially underweight and underuse of mammography might exist primarily in white women. abstract_id: PUBMED:19444148 Assessment of body mass index screening of elementary school children - Florida, 2007-2008. The prevalence of childhood obesity has increased substantially in the United States and is associated with chronic diseases. State level surveillance is needed to monitor trends and investigate risk factors. In addition, data that identify at-risk communities can be used to inform those communities regarding childhood obesity. Body mass index (BMI) screening of Florida school children has been performed since 2001 as part of growth and development screening services and conducted by school districts and county health departments. Aggregated BMI data, by grade and county, are reported annually to the Florida Department of Health (FDOH). In 2008, FDOH considered establishing a more extensive statewide BMI surveillance system. 
To begin planning for such a system, during February-March 2008, FDOH surveyed school health coordinators in Florida's 67 counties to assess qualities of BMI screening activities. Among 66 counties that provided complete surveys, 58 (88%) screened ≥75% of children in the first, third, and sixth grades, and 51 (77%) had written protocols or guidelines for measuring weight, height, or BMI. Nineteen counties (29%) were training ≥90% of their screeners, and 21 (32%) consistently used appropriate equipment for measuring height and weight. Thirty-one counties (47%) used appropriate electronic systems to calculate BMI percentile-for-age. BMI screening activities need improvement in policy and guideline development, training procedures, appropriate selection and use of equipment, and use of electronic data systems before Florida establishes a more extensive statewide surveillance system. abstract_id: PUBMED:35363598 Increased Lean Body Mass After Bodyweight-Based High Intensity Interval Training in Overweight and Obese Men. Purpose: The effects of 8 weeks of bodyweight exercise-based, high-intensity interval training (BWHIIT) on body composition and blood-based markers of metabolic health were investigated in overweight and obese, sedentary young men. Methods: In a parallel group, PRE-POST design, n = 30 men (age, 25.7 ± 4.3 y; body mass index, 27.7 ± 2.1 kg m-2; 26.1 ± 5.2% body fat) were randomized to BWHIIT (n = 20) or a control group (CON; n = 10). BWHIIT consisted of supervised, group-based training sessions (~30 minutes) performed 3 times weekly. Each session consisted of 6 high-intensity bodyweight-based exercises, with each exercise being performed for 4 minutes in the manner of 8 sets of 20 seconds of exercise, 10 seconds of rest. Prior to commencing training (PRE), and 36 h after the final training session (POST), an overnight fasted blood sample was drawn, and body composition was assessed by dual-energy X-ray absorptiometry.
Eighteen participants completed the intervention (CON, n = 9; BWHIIT, n = 9). Results: Lean body mass (LBM) was increased at POST in BWHIIT compared to CON (P = .011, η2p = .359), with the mean (95% confidence limits) increase in LBM from PRE to POST within BWHIIT being 1.23 (0.55, 1.92) kg. Body mass and fat mass were unchanged in both groups from PRE to POST. BWHIIT had no effect on serum concentrations of total cholesterol, HDL-C, LDL-C, triglycerides, NEFA, hsCRP, or glucose. Conclusion: Eight weeks of bodyweight exercise-based high intensity interval training by overweight and obese sedentary young men increased LBM by ~2%, but fat mass and blood-based markers of metabolic health were unchanged. abstract_id: PUBMED:23019316 Effects of aerobic and/or resistance training on body mass and fat mass in overweight or obese adults. Recent guidelines on exercise for weight loss and weight maintenance include resistance training as part of the exercise prescription. Yet few studies have compared the effects of similar amounts of aerobic and resistance training on body mass and fat mass in overweight adults. STRRIDE AT/RT, a randomized trial, compared aerobic training, resistance training, and a combination of the two to determine the optimal mode of exercise for obesity reduction. Participants were 119 sedentary, overweight or obese adults who were randomized to one of three 8-mo exercise protocols: 1) RT: resistance training, 2) AT: aerobic training, and 3) AT/RT: aerobic and resistance training (combination of AT and RT). Primary outcomes included total body mass, fat mass, and lean body mass. The AT and AT/RT groups reduced total body mass and fat mass more than RT (P < 0.05), but they were not different from each other. RT and AT/RT increased lean body mass more than AT (P < 0.05). While requiring double the time commitment, a program of combined AT and RT did not result in significantly more fat mass or body mass reductions over AT alone. 
Balancing time commitments against health benefits, it appears that AT is the optimal mode of exercise for reducing fat mass and body mass, while a program including RT is needed for increasing lean mass in middle-aged, overweight/obese individuals. abstract_id: PUBMED:31304507 Efficacy of the curves training program for loosing body weight, body circumferences and fat mass percentage: a non randomized clinical trial. Purpose: Hypokinesia is the fourth cause of endemic death in the world. The prevalence of obesity, caused by hypokinesia, continues to increase worldwide and is the main risk factor for chronic diseases. Our aim was to evaluate the effectiveness of the Curves program and High-Intensity Interval Training (HIIT) in healthy women. Methods: The study design is a non-randomized clinical trial. Eighty women aged between 30 and 40 years (who train 3/4 times a week) participated over a period of six months. They were allocated into a "Curves" program group (n = 40) and a "High-Intensity Interval Training" group (n = 40). BMI, body fat and fat mass percentage were calculated with the OMRON body fat 306 TM at baseline and at the end of the intervention. Results: Comparison between high-intensity interval training and Curves showed that scores were significantly lower in the Curves group than in the interval-training group. After 6 months, the greatest impact was on "abdomen loss" in the Curves group (mean (SD) = -4.48 (1.70)). According to the multivariate analysis, for all the dependent variables (weight loss -0.320 (<0.001), trunk loss -0.376 (<0.001), abdomen loss -0.276 (<0.001), hip loss -0.302 (<0.001), lower arm loss -0.248 (0.003) and fat mass loss -0.153 (0.061)) the Curves group showed significant results in comparison with the high-intensity interval training group (p < 0.001), while civil status was significantly associated only with the "upper arm loss" variable (β = -0.357; p < 0.001).
Conclusions: This "Curves" program improved muscle strength, fat loss and fat mass reduction more than High-Intensity Interval Training. Answer: Yes, the level of training does matter when it comes to pediatricians and screening for obesity with body mass index (BMI). A study found that resident physicians were more likely to document (13.0% vs. 3.0%, p = 0.0008) and plot (9.0% vs. 2.7%, p = 0.0260) BMI compared with attending physicians. This suggests that the level of training can influence how likely pediatricians are to use BMI as a screening tool for childhood obesity (PUBMED:19618793).
Instruction: Managing depression in primary care: another example of the inverse care law? Abstracts: abstract_id: PUBMED:12429666 Managing depression in primary care: another example of the inverse care law? Background: Depression is a common problem, often being recurrent or becoming chronic. The National Service Framework for Mental Health (published by the Department of Health, 1999) states that people with depression should continue to be predominantly managed in primary care. There is much evidence that the detection and management of depression by GPs could be improved, but little work has focused on GPs' views of their work with depressed patients. Objectives: This was a qualitative study exploring GP attitudes to the management of patients with depression. Views of GPs in socio-economically deprived areas are compared with those serving more affluent populations. Methods: Semi-structured interviews were conducted with two groups of GPs in north-west England. One group of GPs (22) were practising in inner-city areas, and a second group (13) in suburban and semi-rural practices. All were Principals in practices that participated in undergraduate teaching. The interviews were audio-taped and subsequently transcribed verbatim. Analysis was by constant comparison until category saturation of each theme was achieved. Results: Subjects conceptualized depression as an everyday problem of practice, rather than as an objective diagnostic category. 
Thematic coding of their accounts suggests a tension between three kinds of views of depressed people: (i) That depression is a common and normal response to life events or change and that it reflects the medicalization of these conditions; (ii) That the label or diagnosis of depression offers a degree of secondary gain to both patients and doctors, particularly to those GPs practising in inner-city areas and (iii) That inner-city GPs experienced on-going management of depressed people as an interactional problem, in contrast to those GPs serving a less deprived population who saw depression as a treatable illness and as rewarding work for the GP. Conclusion: Depression is commonly presented to GPs who feel that the diagnosis often involves the separation of a normal reaction to environment and true illness. For those patients living in socio-economically deprived environments, the problems, and therefore the depression, are seen to be insoluble. This has an important implication for the construction of educational interventions around improving the recognition and treatment of depression in primary care: some doctors may be reluctant to recognize and respond to such patients in depth because of the much wider structural and social factors that we have suggested in this paper. That it is the doctors working with deprived populations who express these views, means that the 'Inverse care law' [Tudor Hart J. The inverse care Law. Lancet 1971; 1(7696): 405-412] operates in the management of depression. abstract_id: PUBMED:28848991 Managing depression in primary care. Major depression is common in the primary care setting. In the final article of this series, we illustrate the approach to the management of depression in primary care. Psychotherapy has been shown to be as effective as antidepressants for mild to moderate major depression. The common myth that antidepressants are addictive should be addressed. 
Antidepressants should be started at a subtherapeutic dose to assess tolerability, then gradually increased until a minimally effective dose is achieved. Apart from pharmacotherapy and psychotherapy, management of depression should include managing stressors, engaging social and community support, dealing with stigma and discrimination, and managing concomitant comorbidities. A strong therapeutic relationship and empathic listening are important between the primary care physician and patient. abstract_id: PUBMED:12674048 The practice nurse role in managing depression in primary care. Tackling depression is a core priority in primary care. Many patients with physical diseases present with depression and need care for both their physical and mental states. Practice nurses are adept at managing chronic diseases, and with guidance and education can be successful in helping patients with depression, as demonstrated by one award-winning scheme. abstract_id: PUBMED:29734692 The Inverse Response Law: Theory and Relevance to the Aftermath of Disasters. The Inverse Care Law is principally concerned with the effect of market forces on health care which create inequities in access to health services through privileging individuals who possess the forms of social capital that are valued within health care settings. The fields of disaster risk reduction need to consider the ways in which inequities, driven by economic and social policy as well as institutional decision-making, create vulnerabilities prior to a disaster, which are then magnified post disaster through entrenched structural differences in access to resources. Drawing on key principles within the Inverse Care Law, the Inverse Response Law refers to the idea that people in lower socio-economic groups are more likely to be impacted and to experience disparities in service provision during the disaster response and recovery phase. 
In a market model of recovery, vulnerable groups struggle to compete for necessary services creating inequities in adaptive capacity as well as in social and wellbeing outcomes over time. Both the Inverse Care Law and the Inverse Response Law focus on the structural organisation of services at a macro level. In this article, the Inverse Care Law is outlined, its application to medical treatment following disasters considered and an explanation of the Inverse Response Law provided. Case studies from recent disasters, in London, New Zealand, Puerto Rico and Mexico City are examined in order to illustrate themes at work relating to the Inverse Response Law. abstract_id: PUBMED:24936989 Managing major depression: improving the interface between psychiatrists and primary care. Responsibility for identifying and managing most people with major depressive disorder (MDD) rests firmly within the primary care setting. Unfortunately, circumstances continue to have a negative impact on depression outcomes, with low recognition rates, inadequate levels of treatment and poor follow-up all contributing to the provision of a less than optimal service for patients. The Hampshire Depression Project confirmed that improved primary care education, in isolation, does not have any long-term benefits for patients with MDD. However, many other studies have shown that stronger collaboration between psychiatrists and primary care can significantly improve the quality of care provided in the primary care setting, and ensure that most patients are managed effectively and in accordance with international and national guidelines. Simple collaborative care models, which encourage mental health specialists to work within the primary care system, have had a dramatic impact on the outcomes for patients, and significantly enhanced the satisfaction of both patients and physicians with treatment. 
These interventions could easily and cost-effectively be applied more broadly and would ensure that effective management of MDD in primary care becomes the rule and not just the exception. (Int J Psych Clin Pract 2001; 5 (Suppl 1): S11-S18). abstract_id: PUBMED:27609318 Formative evaluation of practice changes for managing depression within a Shared Care model in primary care. Aim: To investigate the implementation and initial impact of the Physician Integrated Network (PIN) mental health indicators, which are specific to screening and managing follow-up for depression, in three primary care practices with Shared Mental Health Care in Manitoba. Background: Manitoba Health undertook a primary care renewal initiative in 2006 called the PIN, which included the development of mental health indicators specific to screening and managing follow-up for depression. These indicators were implemented in three PIN group practice sites in Manitoba, which are also part of Shared Mental Health Care. Methods: The study used a non-experimental longitudinal design. A formative evaluation investigated the implementation and initial impact of the mental health indicators using mixed methods (document review, survey, and interview). Quantitative data were explored using descriptive and comparative statistics, and a content and theme analysis of the qualitative interviews was conducted. Survey responses were received from 32 out of 36 physicians from the three sites. Interviews were conducted with 15 providers. Findings: This evaluation illustrated providers' perceived attitudes, knowledge, skills, and behaviours related to recognizing and treating depression, and expanded our understanding of primary care processes for managing depression in relation to the implementation of a new initiative. Depression is viewed as an important problem in primary care practice that is time consuming to diagnose, manage and treat and requires further investigation.
Implementation of the PIN mental health indicators was variable across sites and providers. There was an increase in use of the indicators across time and a general sentiment that the benefits of screening outweigh the costs; however, the benefit of screening for depression remains unclear. Consistent with current guidelines, a question the findings of this evaluation suggest is whether there are more effective ways of having an impact on depression within primary care than screening. abstract_id: PUBMED:34216942 The inverse care law and the significance of income for utilization of long-term care services in a Nordic welfare state. The Inverse Care Law states that the availability of good medical care tends to vary inversely with the need of the population served, with previous research indicating that migrant populations might be particularly at risk of this phenomenon. However, the degree to which the law applies to long-term care services (LTCS) in diverse ageing societies, where sizable older migrant populations need to be accounted for, has not been well investigated. To ensure equitable service provision, and to achieve European goals promoting a social right to care, it is critical to assess the extent to which such diverse populations are being neglected. This paper investigates the relationship between income and utilization of LTCS in Sweden amongst older native-born residents and older migrants born in low-, middle-, and high-income countries. The universality of its welfare system and the documented income differentials between foreign- and Swedish-born persons make Sweden a particularly interesting case for assessing whether the most disadvantaged are the most underserved. The analysis uses register data on a total population of all older residents in Sweden, encompassing approximately two million persons.
The results indicate that the Inverse Care Law does not apply to the utilization of LTCS by Swedish-born older people, nor by the majority of older migrants. However, the Inverse Care Law does appear to operate for older persons born in low-income countries who do not have a partner. abstract_id: PUBMED:21814660 The inverse care law - is Australian primary care research funding headed this way? Tudor Hart's Inverse Care Law classically described the inequity in medical service access in South Wales. From his primary care perspective, the availability of good medical care varied inversely with the need of the population served. In Australia, future funding for primary care research capacity building appears headed in a similar direction - at least for newly established medical schools. abstract_id: PUBMED:30962249 Depression in primary care: part 2-management. Depression is a common and heterogeneous condition with a chronic and recurrent natural course that is frequently seen in the primary care setting. Primary care providers play a central role in managing depression and concurrent physical comorbidities, and they face challenges in diagnosing and treating the condition. In this two-part series, we review the evidence available to help guide primary care providers and practices to recognize and manage depression. The first review outlined an approach to screening and diagnosing depression in primary care. This second review presents an evidence-based approach to the treatment of depression in primary care, detailing the recommended lifestyle, drug, and psychological interventions at the individual level. It also highlights strategies that are being adopted at an organizational level to manage depression more effectively in primary care. abstract_id: PUBMED:30722732 CME: Depression in Primary Care. Depression is among the most common illnesses overall and one of the five most common illnesses in primary care.
Primary care is the first point of contact for depressed patients and therefore has a key role in their early detection and initial treatment. A depressive disorder frequently presents with diffuse, unspecific and mostly somatic complaints. Only about half of depressive disorders are detected in primary care. An undetected and untreated depression can have fatal consequences for the patient. Once detected, a depressive disorder is treatable. A very important screening test for the early detection of depression is the two-question test (sensitivity 96%, specificity 57%). Answer: The concept of the inverse care law, which states that the availability of good medical care tends to vary inversely with the need of the population served, is relevant to the management of depression in primary care. A qualitative study exploring GP attitudes to the management of patients with depression found that GPs conceptualize depression as an everyday problem rather than an objective diagnostic category. GPs in socio-economically deprived areas, compared to those in more affluent populations, experienced the ongoing management of depressed people as an interactional problem and saw the problems, and therefore the depression, as insoluble. This suggests that the inverse care law operates in the management of depression, as doctors working with deprived populations may be reluctant to recognize and respond to such patients in depth because of wider structural and social factors (PUBMED:12429666). The inverse care law is also discussed in the context of long-term care services (LTCS) in Sweden, where it was found that the law does not apply to the utilization of LTCS by Swedish-born older people, nor by the majority of older migrants. However, it does appear to operate for older persons born in low-income countries who do not have a partner (PUBMED:34216942).
Additionally, concerns have been raised that future funding for primary care research capacity building in Australia may be headed in a direction that reflects the inverse care law, particularly for newly established medical schools (PUBMED:21814660). In the management of depression in primary care, it is important to consider the inverse care law and its implications for service provision and patient outcomes. Strategies to address this issue include improving the interface between psychiatrists and primary care (PUBMED:24936989), implementing shared care models (PUBMED:27609318), and ensuring that primary care providers are equipped with the knowledge and tools to manage depression effectively (PUBMED:28848991, PUBMED:12674048, PUBMED:30962249, PUBMED:30722732).
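The debated benefit of routine depression screening follows from the two-question test's reported sensitivity (96%) and specificity (57%): at typical primary care prevalence, most positive screens are false positives. A minimal sketch of that arithmetic, assuming a 10% prevalence purely for illustration (the prevalence figure is not from the abstracts):

```python
# Predictive values of the two-question depression screen, using the
# sensitivity (96%) and specificity (57%) reported in the CME abstract.
# The 10% prevalence is an illustrative assumption, not a figure from the text.
sens, spec, prev = 0.96, 0.57, 0.10

# Bayes' rule on the four cells of the implied 2x2 table.
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # roughly 20% and 99%
```

Under these assumptions, roughly four out of five positive screens are false positives, which is why a positive two-question screen is meant to trigger a fuller assessment rather than serve as a diagnosis.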
Instruction: Are myofibroblasts activated in idiopathic carpal tunnel syndrome? Abstracts: abstract_id: PUBMED:25413457 Are myofibroblasts activated in idiopathic carpal tunnel syndrome? an immunohistochemical study. Objectives: This study aims to investigate whether myofibroblasts participate in the fibrotic process of idiopathic carpal tunnel syndrome (CTS). Patients And Methods: Forty patients (12 males, 28 females; median age 50.85 ± 11.2 years; range 30 to 71 years) who were operated in our clinic between March 2010 and August 2010 were included in the study. Twenty-five idiopathic CTS patients were assigned to the study group, and 15 trauma patients were assigned to the control group. Samples were taken from both transverse carpal ligament and subsynovial connective tissue (SSCT) of participants and they were analyzed by immunohistochemical method. Four immunohistochemical markers were used to analyze myofibroblast existence and vascular wall thickness (alpha smooth muscle actin [α-SMA]), collagen type IV antibodies, and T (CD3) and B (CD20) lymphocytes. Results: The existence of myofibroblasts (α-SMA) in SSCT of patients who were in early phase of idiopathic CTS was shown through the positive reaction of their antibodies with fibroblasts. A significantly increased reaction of α-SMA and collagen antibodies in vascular structure of SSCT demonstrated increased vascular wall thickness and vascularity in the study group (p<0.01). No significant difference was detected between the two groups in terms of T and B lymphocyte antibody reaction (inflammation). Conclusion: The findings of this study indicate a potential for myofibroblasts to be activated during the early phase of the disease and contribute to the onset of disease. Further studies with larger sample sizes would be of great assistance in determining the role of myofibroblasts in idiopathic CTS. 
abstract_id: PUBMED:25920601 Comparison of the histopathological findings of patients with diabetic and idiopathic carpal tunnel syndrome. Purpose: This study aimed to investigate whether there are any histopathological differences between diabetic and idiopathic carpal tunnel syndromes. Methods: The biopsy specimens were taken from transverse carpal ligament (TCL), tenosynovium adjacent to median nerve and epineurium of median nerve and evaluated in 47 patients (21 diabetic and 26 idiopathic) who were diagnosed with carpal tunnel syndrome (CTS) and treated surgically with open carpal tunnel release. Fibroblast proliferation, fibrosis, perivascular inflammation, oedema, vascular proliferation and vascular wall thickness were determined and scored in all specimens. Results: There were no histopathological abnormalities in the TCL specimens of either group. Synovial hyperplasia, fibrosis and perivascular inflammation were not observed in the tenosynovial analysis of either group. Diabetic CTS patients, when compared with idiopathic CTS patients, had higher rates of synovial edema (idiopathic CTS 57 %, diabetic CTS 87 %), vascular proliferation (idiopathic CTS 30.8 %, diabetic CTS 90.5 %) and increased vascular wall thickness (idiopathic CTS 11.5 %, diabetic CTS 90.5 %). There was no oedema, fibrosis and perivascular inflammation of the epineurium in specimens of either group. However, increases in vascular proliferation (idiopathic CTS 7.7 %, diabetic CTS 71.4 %) and vascular wall thickness (idiopathic CTS 3.8 %, diabetic CTS 71.4 %) were seen in the epineurium of diabetic patients and these differences were statistically significant (p < 0.05). Conclusion: Because of the severe synovial and epineurial histopathological abnormalities and inadequate neural regeneration capacity, surgical open carpal tunnel decompression should be planned earlier in diabetic CTS patients.
Further studies should be considered to evaluate the histopathological features of diabetic CTS patients early in the course of the disease. abstract_id: PUBMED:35887794 The Radiographic Risk Factors Predicting Occurrence of Idiopathic Carpal Tunnel Syndrome in Simple Wrist X-ray. The causes of carpal tunnel syndrome are complex. However, little is known about the risk factors for carpal tunnel syndrome occurrence on simple radiographic images. To determine the X-ray imaging factors that can predict idiopathic carpal tunnel syndrome occurrence, we compared a group diagnosed with idiopathic carpal tunnel syndrome who received carpal tunnel release with a control group that had no symptoms. The simple wrist X-ray findings of 75 patients diagnosed with idiopathic carpal tunnel syndrome and 87 patients selected for the control group were evaluated. All the carpal tunnel syndrome patients were diagnosed based on clinical symptoms and nerve conduction studies. Anteroposterior and lateral radiographs of the wrists were taken in all the groups. The radial inclination, volar tilt, ulnar variance, radiolunate angle, and lunate-to-axis-of-radius distance were measured. Data were measured using two independent raters. After calculating the average of each value, the two groups were statistically compared. The diagnostic performance of statistically different figures was confirmed by drawing receiver operating characteristic curves. There was a significant difference in the radiolunate angle and lunate-to-axis-of-radius distance between the two groups (p < 0.01 and p = 0.028, respectively). The odds ratios for each parameter were 1.052 and 1.319, respectively. The areas under the receiver operating characteristic curves were 0.715 and 0.601, respectively. In this study, radiolunate angle and lunate-to-axis-of-radius distance were useful as radiographic diagnostic tools.
In other words, excessive dorsiflexion and volar displacement of the lunate can be considered as risk factors that may cause idiopathic carpal tunnel syndrome in the future. abstract_id: PUBMED:3263164 Multiple peripheral nerve entrapment in Forestier's disease (diffuse idiopathic skeletal hyperostosis). A patient with typical diffuse idiopathic skeletal hyperostosis and characteristic new bone formation around the elbows exhibited bilateral ulnar nerve entrapment at these sites as well as median nerve compression at one wrist. abstract_id: PUBMED:31950930 The bilaterality of idiopathic carpal tunnel syndrome among manual workers. Objectives: To evaluate: a) the prevalence of bilateral idiopathic carpal tunnel syndrome (CTS) in manual workers; b) a correlation between the duration of unilateral and bilateral CTS symptoms; c) a correlation between the onset of CTS symptoms in the unilateral dominant/non-dominant hand and the time of developing bilateral CTS; and d) findings of the nerve conduction study (NCS) in symptomatic and asymptomatic hands of patients with unilateral CTS. Material And Methods: Clinical and neurophysiological examinations were conducted along with a detailed analysis of job exposure of 332 manual workers admitted to the Occupational Medicine Department, the Nofer Institute of Occupational Medicine, with suspected occupational CTS. Eventually, 258 patients were excluded from the study: 34 with associated neuropathies and 206 with other conditions potentially associated with CTS. Cases with work-related CTS (18) were also excluded. Results: A total of 74 patients were diagnosed as idiopathic CTS. In idiopathic CTS, the right hand was affected in 15 (20.3%) patients, the left hand in 4 (5.4%) patients, and both hands in 55 (74.3%) patients. Symptoms duration was longer in the patients with bilateral CTS (4.01 years) than in those with a unilateral right (1.7 years, p = 0.002) or left hand condition (2.8 years, p = 0.313). 
Median nerve impairment at the wrist was revealed by NCS in 6 left and 2 right asymptomatic hands. Conclusions: The findings of the study indicate the need for "alerting" patients with unilateral CTS about the risk of the disease developing in the contralateral hand. Therefore, NCS should be routinely performed in the asymptomatic hands of patients with unilateral CTS, which is essential for the prevention of neuropathies, especially among manual workers performing repetitive manual tasks. Int J Occup Med Environ Health. 2020;33(2):151-61. abstract_id: PUBMED:26272109 Idiopathic Carpal Tunnel Syndrome: Evaluation of the Depth of the Carpal Tunnel by Ultrasonography. The objective of the work described here was to evaluate the depth of the carpal tunnel (DCT) in patients with idiopathic carpal tunnel syndrome (CTS) and healthy volunteers by ultrasonography (US), through measurement of the distance from the flexor retinaculum to the surface of the capitate bone at the carpal tunnel outlet, and compare it with other ultrasonographic and electrophysiologic parameters in CTS. The study was conducted in 60 non-diabetic patients with idiopathic carpal tunnel syndrome (unilateral n = 37, bilateral n = 23) evidenced by electrophysiologic diagnosis according to the criteria of the American Association of Electrodiagnostic Medicine (AAEM). Furthermore, 40 hands from 20 healthy volunteers were examined. Median nerve cross-sectional area (CSA); flattening ratio (FR), the ratio of the length to the width of the median nerve; and DCT at the canal outlet were measured for all participants. The mean age was 35.6 ± 9.48 y. The female-to-male ratio was 47:13 in the CTS patients. The sensitivity and specificity were 82% and 95% for CSA, 75% and 60% for FR and 75% and 87.5% for DCT, respectively. Differences between patients and healthy controls were significant for all three parameters, greatest for DCT, followed by CSA and then FR. 
We conclude that DCT increased in CTS and this new parameter is comparable in sensitivity and specificity to CSA and FR. DCT increased independently of the cause of the CTS (decrease in size of canal or increase in contents). abstract_id: PUBMED:25746143 Idiopathic carpal tunnel syndrome in children and adolescents. Purpose: A retrospective review of a single institution's experience with idiopathic carpal tunnel syndrome (CTS) in children and adolescents was performed to evaluate management and outcomes in an effort to establish a treatment protocol. Methods: All patients diagnosed with idiopathic CTS from ages 1 to 16 years of age between 1983 and 2013 were reviewed. The results of diagnostic testing and efficacy of therapeutic interventions were analyzed. The Boston Carpal Tunnel Questionnaire was sent to all patients following medical or surgical management. Results: A total of 20 patients with 31 involved wrists met criteria for entrance into the study. The mean age at diagnosis was 14.4 years. Orthosis fabrication was used as the initial treatment in 30 of 31 wrists and was successful in completely alleviating symptoms in 9 of 30 wrists. A steroid injection was performed in 11 of 31 wrists, completely relieving symptoms in 5 of 11 wrists. Carpal tunnel release was performed in 10 of 31 wrists. Following surgery, patients had complete relief of symptoms in 5 of 10 wrists and partial relief of symptoms in 5 of 10 wrists. Questionnaire response incidence was 55% (11 of 20), with an average long-term follow-up of 17.6 years. Eight questionnaire respondents continued to have mild to moderate symptoms while performing activities of daily living. Conclusions: Once metabolic, anatomical, and hereditary causes of pediatric CTS are ruled out, a reasonable treatment course should follow that of adults with orthosis fabrication, followed by injection, and then surgery for those that are refractory to nonsurgical treatment. Type Of Study/level Of Evidence: Prognostic IV. 
abstract_id: PUBMED:29781507 Effect of estradiol on fibroblasts from postmenopausal idiopathic carpal tunnel syndrome patients. Fibrosis of the subsynovial connective tissue (SSCT) is a characteristic finding in patients with idiopathic carpal tunnel syndrome (CTS). Idiopathic CTS frequently occurs in postmenopausal women; therefore, female steroid hormones, especially estrogens, may be involved in its development. In this study, we evaluated the effect of estradiol on the expression of genes and proteins related to fibrosis of SSCT fibroblasts from patients with idiopathic CTS. This study included 10 postmenopausal women (mean age 76 years). Fibroblasts derived from SSCT were treated with estradiol (10⁻⁴ to 10⁻¹² M), and the expression levels of TGF-β-responsive genes were evaluated. The relationships between the expression of untreated estrogen receptor α (ERα) and ERβ and changes in gene expression due to estradiol treatment were examined by quantitative real-time polymerase chain reaction. The effects of 10⁻⁴ M estradiol on collagen type I (Col1) and collagen type III (Col3) protein expression levels were also evaluated by fluorescent staining. The relationships between ERα/β and Col1/3 expression were evaluated by immunohistochemical staining. The reduction in Col1A1 mRNA expression due to estradiol treatment was positively correlated with ERα expression (r = 0.903, p < 0.01). At the protein level, expression of Col1 and Col3 was down-regulated. These results indicated that ERα-mediated signaling may be involved in the regulation of Col1A1, and its regulatory effect may be dependent on the ERα expression level. The accurate evaluation of ERα expression level in the SSCT of individual patients with idiopathic CTS might guide the effective use of new estrogen replacement therapy.
abstract_id: PUBMED:26201022 Involvement of pro-inflammatory cytokines and growth factors in the pathogenesis of Dupuytren's contracture: a novel target for a possible future therapeutic strategy? Dupuytren's contracture (DC) is a benign fibro-proliferative disease of the hand causing fibrotic nodules and fascial cords which determine debilitating contracture and deformities of fingers and hands. The present study was designed to characterize pro-inflammatory cytokines and growth factors involved in the pathogenesis, progression and recurrence of this disease, in order to find novel targets for alternative therapies and strategies in controlling DC. The expression of pro-inflammatory cytokines and of growth factors was detected by immunohistochemistry in fibrotic nodules and normal palmar fascia resected respectively from patients affected by DC and carpal tunnel syndrome (CTS; as negative controls). Reverse transcription (RT)-PCR analysis and immunofluorescence were performed to quantify the expression of transforming growth factor (TGF)-β1, interleukin (IL)-1β and vascular endothelial growth factor (VEGF) by primary cultures of myofibroblasts and fibroblasts isolated from Dupuytren's nodules. Histological analysis showed high cellularity and high proliferation rate in Dupuytren's tissue, together with the presence of myofibroblastic isotypes; immunohistochemical staining for macrophages was completely negative. In addition, a strong expression of TGF-β1, IL-1β and VEGF was evident in the extracellular matrix and in the cytoplasm of fibroblasts and myofibroblasts in Dupuytren's nodular tissues, as compared with control tissues. These results were confirmed by RT-PCR and by immunofluorescence in pathological and normal primary cell cultures. These preliminary observations suggest that TGF-β1, IL-1β and VEGF may be considered potential therapeutic targets in the treatment of Dupuytren's disease (DD). 
abstract_id: PUBMED:31875987 Correlation of neutrophil/lymphocyte and platelet/lymphocyte ratios with the severity of idiopathic carpal tunnel syndrome. Background: The aim of the present study was to investigate the correlation of the neutrophil/lymphocyte ratio (NLR) and platelet/lymphocyte ratio with the severity of idiopathic carpal tunnel syndrome (CTS). Methods: A total of 407 patients with idiopathic CTS (neurophysiologically 150 mild, 144 moderate, and 113 severe) and 206 subjects without CTS were included (control group). Results: There was a positive correlation between the severity of CTS and NLR (r = 0.224; P < 0.001), age (r = 0.333; P < 0.001), and body mass index (r = 0.251; P < 0.001). A 1-unit increase in NLR level was associated with an approximately 1.7-fold higher incidence of CTS (P = 0.002; odds ratio = 1.668; 95% confidence interval = 1.199-2.319). Conclusions: Our results suggest that neurophysiologically more severe CTS is associated with higher NLR levels. The role of systemic inflammation in CTS should be investigated in further studies. Answer: Yes, myofibroblasts are activated in idiopathic carpal tunnel syndrome (CTS). A study aimed at investigating whether myofibroblasts participate in the fibrotic process of idiopathic CTS found that the existence of myofibroblasts in the subsynovial connective tissue (SSCT) of patients in the early phase of idiopathic CTS was demonstrated through the positive reaction of their antibodies with fibroblasts. This study used immunohistochemical markers, including alpha smooth muscle actin (α-SMA), to analyze myofibroblast existence and vascular wall thickness. The findings indicated a significantly increased reaction of α-SMA and collagen antibodies in the vascular structure of SSCT, demonstrating increased vascular wall thickness and vascularity in the study group. 
These results suggest that myofibroblasts could be activated during the early phase of the disease and contribute to the onset of idiopathic CTS (PUBMED:25413457).
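Several of the abstracts above report odds ratios with 95% confidence intervals (e.g., the NLR study: OR = 1.668, 95% CI 1.199-2.319, p = 0.002, PUBMED:31875987). As a consistency check, the Wald z statistic and p-value can be recovered from the reported OR and CI alone; this is a generic back-calculation, not a method described in the studies:

```python
from math import log, sqrt, erfc

# Reported in the NLR abstract: OR = 1.668, 95% CI 1.199-2.319, p = 0.002.
or_hat, ci_lo, ci_hi = 1.668, 1.199, 2.319

# The CI width on the log scale recovers the standard error of the log-OR.
se = (log(ci_hi) - log(ci_lo)) / (2 * 1.96)
z = log(or_hat) / se                 # Wald z statistic
p = erfc(abs(z) / sqrt(2))           # two-sided normal tail probability

print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # p comes out near the reported 0.002
```

The recovered p-value agrees with the abstract's reported p = 0.002, which is a quick way to verify that an OR, its CI, and its p-value are internally consistent.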
Instruction: Does endoscopy of difficult to catheterize channels spare some patients from formal open revision? Abstracts: abstract_id: PUBMED:27270068 Does endoscopy of difficult to catheterize channels spare some patients from formal open revision? Introduction: Patients with continent catheterizable channels (CCC) may develop difficulty catheterizing postoperatively. In complex cases, endoscopic evaluation with indwelling catheter placement may be indicated, but the risk factors for subsequent formal channel revision are not well defined. The purpose of this study was to determine the risk factors for formal channel revision after attempted endoscopic management of difficulty catheterizing. Materials And Methods: We performed an IRB-approved retrospective review of pediatric (<21 years old) patients undergoing CCC construction at our institution from 1999 to 2014 to identify patients who underwent endoscopy for difficulty catheterizing. Fisher's Exact test was used for categorical data and Mann-Whitney U test for continuous variables to examine the association between endoscopic intervention and subsequent formal revision. Results: Sixty-three of 434 patients (14.5%) underwent at least one endoscopy for reported difficulty catheterizing their CCC, with 77.8% of these requiring additional intervention during endoscopy (catheter placement, dilation, etc.). Of these, almost half with functioning channels (43.5%, 27/62) were managed successfully with endoscopy without formal revision; six (22.2%) of whom underwent more than one endoscopy. These 27 patients continued to catheterize well at a median follow-up of 3.2 years (interquartile range 2.0-6.0). Patients requiring revision had a median of 1.7 years between CCC creation and first endoscopy, versus 1.6 years in those who were not revised (p = 0.60). 
There was no statistically significant difference between revised and non-revised channels in terms of patient age at CCC creation, underlying patient diagnosis, status of bladder neck, stomal location, or channel type (p ≥ 0.05) (see Table). Conclusion: Approximately half of our patients did not require a formal channel revision after endoscopic management. We did not identify any specific risk factors for subsequent formal revision of a CCC. We recommend performing at least one endoscopic evaluation for those with difficulty catheterizing prior to proceeding with formal open revision. abstract_id: PUBMED:29144925 Re: Does Endoscopy of Difficult to Catheterize Channels Spare Some Patients from Formal Open Revision? N/A abstract_id: PUBMED:37330724 Endoscopic Versus Open Carpal Tunnel Surgery: Risk Factors and Rates of Revision Surgery. Purpose: The purpose of our study was to compare the 1-year revision surgery rates and outcomes of open versus endoscopic carpal tunnel release. Our hypothesis was that, compared to open release, endoscopic carpal tunnel release was an independent risk factor for revision surgery within 1-year. Methods: This was a retrospective cohort study of 4338 patients undergoing isolated endoscopic or open carpal tunnel release. Demographic data, medical comorbidities, surgical approach, need for revision surgery, hand dominance, history of prior injection, and Patient Reported Outcomes Measurement Information System upper extremity (UE), pain interference (PI) and physical function scores were analyzed. Multivariable analysis was used to identify the risk factors for revision surgery within one year of the index procedure. Results: In total, 3280 patients (76%) underwent open and 1058 (24%) underwent endoscopic carpal tunnel release. Within one year of the index procedure, 45 patients required revision carpal tunnel release. The average time to revision was 143 days. 
The rate of revision carpal tunnel release in the open group was 0.71% compared to 2.08% in the endoscopic group. Multivariable analysis demonstrated that endoscopic surgery, male sex, cubital tunnel syndrome, tobacco use, and diabetes were associated independently with revision surgery. Conclusions: In this study, we found that endoscopic carpal tunnel release was associated independently with a 2.96 times greater likelihood of requiring revision carpal tunnel release within one year, compared to open carpal tunnel release. Male sex, concurrent cubital tunnel syndrome, tobacco use, and diabetes also were associated independently with greater risk of needing revision carpal tunnel release within one year. Type Of Study/level Of Evidence: Prognostic II. abstract_id: PUBMED:30800694 Rates and Risk Factors for Revision Open and Arthroscopic Proximal Biceps Tenodesis. Background: Biceps tenodesis may be performed for symptomatic tendinopathy or tearing of the long head of the biceps tendon. Biceps tenodesis is also commonly performed as an adjunctive procedure. However, the indications and prevalence of biceps tenodesis have expanded. Purpose: To establish the incidence and risk factors for revision biceps tenodesis. Study Design: Case-control study; Level of evidence, 2. Methods: The PearlDiver database of Humana patient data was queried for patients undergoing arthroscopic or open biceps tenodesis (Current Procedural Terminology [CPT] 29828 and CPT 23430, respectively) from 2008 through the first quarter of 2017. Patients without a CPT laterality modifier were excluded from analysis. Revision biceps tenodesis was defined as patients who underwent subsequent ipsilateral open or arthroscopic biceps tenodesis. The financial impact of revision biceps tenodesis was also calculated. Multivariate binomial logistic regression was performed to identify risk factors for revision biceps tenodesis, such as patient demographics as well as concomitant procedures and diagnoses. 
Odds ratios (ORs) and 95% CIs were calculated, and all statistical comparisons with P < .05 were considered significant. Results: There were 15,257 patients who underwent biceps tenodesis. Of these, 9274 patients (60.8%) underwent arthroscopic biceps tenodesis, while 5983 (39.2%) underwent open biceps tenodesis. A total of 171 patients (1.8%) and 111 patients (1.9%) required revision biceps tenodesis after arthroscopic and open biceps tenodesis, respectively (P = .5). Male sex (OR, 1.38 [95% CI, 1.04-1.85]; P = .02) was the only independent risk factor for revision biceps tenodesis after the index open biceps tenodesis. After arthroscopic biceps tenodesis, age >45 years (OR, 0.58 [95% CI, 0.39-0.89]; P = .01) and concomitant rotator cuff tear (OR, 0.58 [95% CI, 0.47-0.71]; P < .001) were independent protective factors for revision biceps tenodesis. The total cost of revision biceps tenodesis after open and arthroscopic biceps tenodesis was US$3427.95 and US$2174.33 per patient, respectively. Conclusion: There was no significant difference in the revision rate between arthroscopic and open biceps tenodesis. Risk factors for revision surgery included male sex for open biceps tenodesis, while age >45 years and rotator cuff tears were protective factors for arthroscopic biceps tenodesis. abstract_id: PUBMED:37380740 Management of the Zenker diverticulum: multicenter retrospective comparative study of open surgery and rigid endoscopy versus flexible endoscopy. Background And Study Aim: Zenker's diverticulum is a rare disease that affects quality of life due to dysphagia and regurgitation. This condition can be treated by various surgical or endoscopic methods. Patients And Method: Patients treated for Zenker's diverticulum in three centers in the south of France between 2014 and 2019 were included. The primary objective was clinical efficacy. Secondary objectives were technical success, morbidities, recurrences, and need for a new procedure. 
Results: One hundred forty-four patients with a total of one hundred sixty-five procedures performed were included. A significant difference was found between the different groups in terms of clinical success (97% for open surgery versus 79% for rigid endoscopy versus 90% for flexible endoscopy, p = 0.009). Technical failure occurred more frequently in the rigid endoscopy group than in the flexible endoscopy and surgical groups (p = 0.014). Median procedure duration, median time to resumption of feeding, and hospital discharge were significantly shorter for endoscopies than for open surgery. On the other hand, more recurrences occurred in patients treated by endoscopy than those treated by surgery, and more reinterventions were required. Conclusion: Flexible endoscopy appears to be as effective and safe as open surgery in the treatment of Zenker's diverticulum. Endoscopy allows a shorter hospital stay at the expense of a higher risk of recurrence of symptoms. It could be used as an alternative to open surgery for the treatment of Zenker's diverticulum, especially in frail patients. abstract_id: PUBMED:27942330 Diagnostic Yield of Microscopic Colitis in Open Access Endoscopy Center. Background: The diagnostic yield in open access endoscopy has been evaluated, with results generally supporting the effectiveness and efficiency of open access endoscopy. With a few exceptions, diagnostic yield studies have not been performed in open access endoscopy for more specific conditions. Therefore, we conducted a study to determine the efficiency of open access endoscopy in the detection of microscopic colitis as compared to traditional referral via a gastroenterologist.
Methods: A retrospective search of the pathology database at the University of Missouri for specimens from a local open access endoscopy center was conducted via SNOMED code using the terms: "microscopic", "lymphocytic", "collagenous", "spirochetosis", "focal active colitis", "melanosis coli" and "histopathologic" in the diagnosis line for the time period between January 1, 2004 and May 25, 2006. Specimens and colonoscopy reports were reviewed by a single pathologist. Results: Of 266 consecutive patients with chronic diarrhea and normal colonoscopies, the number of patients with microscopic disease are as follows: Lymphocytic colitis (n = 12, 4.5%), collagenous colitis (n = 17, 6.4%), focal active colitis (n = 15, 5.6%), and spirochetosis (n = 2, 0.4%). Conclusions: The diagnostic yield of microscopic colitis in this study of an open access endoscopy center does not differ significantly from that seen in major medical centers. In terms of diagnostic yield, open access endoscopy appears to be as effective in diagnosing microscopic colitis. abstract_id: PUBMED:35722090 Individualized Analysis and Treatment of Difficult Weaning From Ventilation Following Open Cardiac Surgery in Young Children With Congenital Heart Disease. Aims: The study explores the leading causes of postoperative extubation difficulties in pediatric patients (neonates and toddlers) with congenital heart diseases and establishes individualized treatment for different reasons. Method: We retrospectively analyzed medical records of 4,971 pediatric patients with congenital heart defects treated in three tertiary Congenital Heart Disease Centres in China from January 2005 to December 2020, from whom we selected those with difficulty extubation but successful weaning during the postoperative period. Next, we performed an analysis of risk factors and reported the combined experience of individualized treatment for successful extubation. 
Results: Seventy-five pediatric patients were identified in our database, among whom 23 had airway stenosis, 17 had diaphragmatic dysfunction, and 35 had pulmonary infection. The patients were all successfully weaned from the ventilator after an individualized treatment plan. In addition, the intubation time in the airway stenosis group was 17.7 ± 9.0, 33.6 ± 13.9 days in the diaphragmatic dysfunction group, and 11.9 ± 3.8 days in the pulmonary infection group. Conclusion: Given the primary reasons for difficult weaning following open-heart surgery in pediatric patients with congenital heart diseases, an individualized treatment scheme can achieve the ideal therapeutic effect where patients can be weaned faster with a shorter intubation period. abstract_id: PUBMED:29283085 Outcomes of revision surgery for difficult to catheterize continent channels in a multi-institutional cohort of adults. Introduction: The study aimed to describe the strategies of surgical revision for catheterizable channel obstruction and their outcomes, including restenosis and new channel incontinence. Methods: We retrospectively queried the charts of adults who underwent catheterizable channel revision or replacement from 2000-2014 for stomal stenosis, channel obstruction, or difficulty with catheterization at the Universities of Minnesota, Michigan, and Utah. The primary endpoint was channel patency as measured by freedom from repeat surgical intervention. Secondary endpoints included post-revision incontinence and complication rates. Revision surgeries were classified by strategy into "above fascia," "below fascia," and "channel replacement" groupings. Results: A total of 51 patients who underwent 68 repairs (age 18-82 years old; mean 45) were identified who met our inclusion criteria. Channel patency was achieved in 66% at a median 19 months post-revision for all repair types. 
There was no difference in patency by the type of channel being revised, but there was a difference based on revision technique, with channel replacement and above-the-fascia repairs being more successful (p=0.046). Channel incontinence occurred in 40% and was moderate to severe in 12%. The type of channel being revised was strongly associated (p=0.003) with any postoperative channel incontinence. Surgical complications occurred in 29% of all revision procedures, although most were low-grade. Conclusions: Surgical revision of continent catheterizable channels for channel obstruction can be performed with acceptable rates of durable patency and incontinence; however, the surgeon needs to have experience in complex urinary diversion and familiarity with a variety of surgical revision strategies. abstract_id: PUBMED:23853414 Grafting in revision rhinoplasty. Rhinoplasty is one of the most difficult aesthetic surgery procedures, with a high rate of revision. In revision rhinoplasty the surgeon should explore the patient's concerns and then verify the possibility of satisfying expectations after complete internal and external examination of the nose. For the vast majority of complex secondaries, an open approach is the only reasonable method. In fact, in secondary nasal surgery, because of the scarring process following the primary operation, dissection is tedious, and landmarks are lost. One of the main objectives for the surgeon who approaches secondary rhinoplasty is to restore the structural support of the nose and to replace the lost volume of soft tissues. To achieve this purpose, the surgeon must often rely on grafts. An ideal grafting material must be easy to sculpt, resistant to trauma, infection and extrusion, mechanically stable, inert and readily available. For all these reasons, autogenous cartilage grafts harvested from the septum, auricular concha and rib represent the first choice in rhinoplasty.
In order to obtain a camouflage graft that provides natural contouring to the nose, temporalis fascia can be used. All these carefully trimmed grafts are useful in tip revision surgery, in secondary surgery of the dorsum and to resolve or reduce functional problems. abstract_id: PUBMED:1678343 Efficacy and outcome of an open access endoscopy service. In a retrospective questionnaire study, we have attempted to elucidate how open access endoscopy influences management of dyspeptic patients, with special focus on young patients (less than 40 years), since the proportion of examinations with findings not requiring medical treatment is consistently reported to be higher in this age group. During a one-year period, 436 patients referred for open access endoscopy and their general practitioners completed questionnaires giving details of medical treatment, consultation rate for dyspepsia and global assessment before and 6-18 months after the endoscopy. Relevant changes in medical treatment (stopped in patients with no or minor abnormalities or started in patients with major abnormalities) were found for 27% of the patients, irrespective of age group. The result of the endoscopy provided reassurance for 70% of the patients with no or minor abnormalities. Reassurance was coupled with a lower consultation rate and with fewer symptoms. Altogether, 83% of the young patients with no or minor abnormalities had a positive outcome of the endoscopy. As the endoscopy service introduced relevant and lasting prescription habits and reduced consultation rates at general practitioners, also for younger dyspeptic patients with no or minor abnormalities, the strategy generally proposed of a trial with H2-receptor antagonists before considering referral for endoscopy should be subjected to a formal clinical trial evaluating all relevant levels of efficacy. Answer: Yes, endoscopy of difficult to catheterize channels can spare some patients from formal open revision.
A study found that approximately half of the patients with continent catheterizable channels (CCC) who underwent endoscopy for reported difficulty catheterizing did not require a formal channel revision after endoscopic management. Specifically, 43.5% of patients with functioning channels were managed successfully with endoscopy without the need for formal revision, and these patients continued to catheterize well at a median follow-up of 3.2 years. The study did not identify any specific risk factors for subsequent formal revision of a CCC, suggesting that endoscopic evaluation should be performed at least once for those with difficulty catheterizing prior to proceeding with formal open revision (PUBMED:27270068).
Instruction: Can midwives reduce postpartum psychological morbidity? Abstracts: abstract_id: PUBMED:9892887 Can midwives reduce postpartum psychological morbidity? A randomized trial. Background: Women who are traumatized after childbirth find that listening, support, counseling, understanding, and explanation are the most useful treatments. However, little evidence is available from randomized trials of the relative efficacy of these treatments as a positive postnatal intervention. The purpose of this study was to examine whether postnatal "debriefing" by midwives can reduce psychological morbidity after childbirth. Method: A randomized trial was conducted in a regional teaching hospital in northwest England. One hundred and twenty postnatal primigravidas were allocated by sealed envelopes to receive the debriefing intervention (n = 56) or not (n = 58). The main outcome measure was the Hospital Anxiety and Depression (HAD) scale administered by postal questionnaire 3 weeks after delivery. The proportions of women in each group with anxiety and depression scores of more than 10 points were compared, using odds ratios and 95% confidence intervals. Results: Women who received the intervention were less likely to have high anxiety and depression scores after delivery when compared with the control group. Conclusions: The support, counseling, understanding, and explanation given to women by midwives in the postnatal period provide benefits to psychological well-being. Maternity units have a responsibility to develop a service that offers all women the option of attending a session to discuss their labor. abstract_id: PUBMED:26217649 Early Detection of Postpartum Depressive Symptoms in Mothers and Fathers and Its Relation to Midwives' Evaluation and Service Provision: A Community-Based Study. Background: Postpartum parental mental health problems pose a serious risk for child development and often remain undetected in postpartum primary care.
Within the framework of the German Midwifes Prevention Study, the aim of this study was to investigate the presence of postpartum emotional distress in mothers and fathers, and the detection of distressed parents by midwives in a primary care setting. We also examined whether a temporal extension of the postpartum midwife care period is associated with greater use of midwife contacts and higher rates of referral to further professional support if needed. Methods: Mothers, fathers, and midwives filled out questionnaires at 2 weeks (t1) and 6 months (t2) postpartum. Compared to standard care in the control group (CG), midwives in an intervention group (IG) offered extended postpartum care up to 6 months postpartum. Parental psychological distress was assessed using the Edinburgh postnatal depression scale (EPDS). Midwives reported on parental psychological distress as well as the number of postpartum contacts and referrals to additional social- and health-care providers. Results: Based on their ratings, midwives identified half of mothers and around one-quarter of fathers with elevated depressive symptoms according to the EPDS at t1 and t2. IG mothers used significantly more midwife contacts than CG mothers. IG mothers with high postnatal psychological distress at t2 used significantly more contacts than mothers with lower levels of distress. IG mothers with high psychological distress at t2 were referred to additional support services more often than mothers with lower levels of distress.
The aim of this study is to understand the experiences of midwives in rural maternity care settings in order to provide appropriate support and improve practice. Methods: An exploratory, qualitative study of a purposive sample of 15 practicing midwives was carried out using semi-structured interviews from November 2018 to February 2019. Data were transcribed verbatim and analyzed using content analysis. Results: Four themes were identified: 1. interventions for preventing PPH; 2. approaches to managing PPH; 3. challenges of preventing and managing PPH and 4. ways of supporting midwives to overcome these challenges in rural health care settings. Midwives employed various strategies, such as antenatal education, diagnosis and treatment of anaemia to counteract complications from possible PPH. Understanding PPH as a life-threatening condition enabled the midwives to provide holistic and effective management that sometimes involved a multidisciplinary team approach. Inadequate resources and delay in seeking health care, however, militate against their efforts. The midwives also identified the need for continuing education and training to enhance their standards of care. Conclusion: These midwives in Nigerian rural health care settings engage in preventive practices and active management of PPH though not without barriers, such as inadequate resources. There is a need for midwives in rural areas to have cultural competence, be provided with adequate resources and participate in ongoing education in order to be more effective. abstract_id: PUBMED:30100194 Providing postpartum care with limited resources: Experiences of nurse-midwives and obstetricians in urban Tanzania. Background: Tanzania has high maternal and neonatal mortality rates. Comprehensive guidelines for postpartum care have been developed by the government as a means to improve health outcomes during the perinatal period. 
Despite the creation of these guidelines and the government's commitment to universal perinatal care for women and neonates, there is concern that the delivery of postpartum services may not be meeting the needs of mothers and neonates. Aim: The purpose of this feminist poststructuralist study was to explore nurse-midwives' and obstetricians' experiences of providing postpartum care in Tanzania. Methods: This qualitative study used feminist poststructuralism to explore the personal, social, and institutional discourses of postpartum care. We individually interviewed ten nurse-midwives and three obstetricians in Dar es Salaam, Tanzania. Feminist poststructuralist discourse analysis was used to analyze the transcribed interviews after their translation from Kiswahili to English. Findings: Four main themes were identified. In this paper, we present the main theme of availability of resources, and its four corresponding subthemes; (1) space, (2) equipment, (3) staffing, and (4) government responsibility. Discussion: The findings from our study illustrate the need for health workforce planning to be addressed in a comprehensive manner that accounts for context, required resources and systemic challenges. These findings are consistent with findings from other studies. Conclusion: Understanding the resource challenges that nurse-midwives and obstetricians are facing in one low-and-middle-income-country will assist researchers, decision makers, and politicians as they address issues of mortality, morbidity, and disrespectful maternity care. abstract_id: PUBMED:30579925 Midwives' practices and knowledge about fear of childbirth and postpartum posttraumatic stress disorder. Background: Women suffering from fear of childbirth and postpartum posttraumatic stress disorder are often not recognised by health care professionals. Aim: To evaluate practices, knowledge and the attitudes of midwives towards women with fear of childbirth and postpartum posttraumatic stress disorder. 
Methods: A cross-sectional study was performed amongst midwives who work in community practices and hospitals in the Netherlands with the use of a questionnaire purposefully designed for this research aim. Findings: 257 midwives participated in the study, of whom 217 completed all items in the questionnaire. Midwives were better equipped to answer knowledge questions concerning fear of childbirth than posttraumatic stress disorder (regarding symptomatology, risk factors, consequences and treatment). When tending to women with fear of childbirth or (suspected) postpartum posttraumatic stress disorder, most midwives referred to another caregiver (e.g. psychologist). Most midwives expressed a positive and compassionate attitude towards women with fear of childbirth and postpartum posttraumatic stress disorder. Discussion: The majority of midwives are well informed with respect to fear of childbirth, but knowledge of important aspects of postpartum posttraumatic stress disorder is often lacking. Midwives report no crucial issues related to their attitudes towards women with fear of childbirth and posttraumatic stress disorder. Most midwives provide adequate organisation of care and support. Conclusion: Midwives should acquire more in depth knowledge of fear of childbirth and postpartum posttraumatic stress disorder. This can be achieved by including the two conditions in the program of midwifery education. abstract_id: PUBMED:31773795 Knowledge of postpartum care and postbirth warning signs among midwives in Ghana. Background: In Ghana, midwives are the primary maternity care practitioners. Their knowledge of postpartum care is critical for preventing and reducing maternal deaths because it affects the quality of care provided to women. In addition, midwives' knowledge of postbirth warning signs is important for early identification and management of complications. 
This study assessed midwives' knowledge of postpartum care and postbirth warning signs to develop interventions to improve patient care. Methods: A cross-sectional survey of 246 midwives was conducted in the four main hospitals of Tamale, Ghana. Data were collected using a postpartum care knowledge questionnaire developed by JHPIEGO. Data were analyzed in SAS version 9.4 using descriptive, bivariate, and multivariate statistics. Results: Mean age of midwives was 31.9 years. The percentage of midwives who responded correctly to each postpartum care question ranged from 41.6% to 84.9%. Most midwives were knowledgeable about breastfeeding; however, knowledge about fundus location, postpartum examination, and care during the first 2 hours postpartum was low. Hospital was associated with knowledge of postpartum care (P < .001). Only 28.1% of midwives identified all nine warning signs of complications. Most midwives could identify severe bleeding, severe headaches, and high temperature as warning signs; however, knowledge of warning signs of some life-threatening complications such as chest pain, obstructed breathing, and thoughts of hurting oneself was low. More years of experience was associated with better knowledge of postbirth warning signs (P = .03). Discussion: Findings suggest a need for additional training of midwives in how to care for postpartum patients and accurately identify warning signs for life-threatening complications.
All women could routinely be seen at three or six months post-delivery. Care could be given by midwives, with referral to GPs where necessary. Fatigue could put women at greater risk of developing postpartum depression, but few women spontaneously report fatigue as a health problem. Limiting the effects of childbirth on maternal health will have important implications for the future use of the medical services. abstract_id: PUBMED:32912738 Contraception provision in the postpartum period: Knowledge, views and practices of midwives. Background: Women are susceptible to unintended pregnancies in the first year after giving birth, particularly as consideration of contraception may be a low priority during this time. Discussing and providing contraception before women leave hospital after giving birth may prevent rapid repeat pregnancy and its associated risks. Midwives are well placed to assist with contraceptive decision-making and provision; however, this is not routinely undertaken by midwives in the Australian hospital setting and little is known regarding their views and experiences in relation to contraception. Methods: An anonymous survey was conducted with midwives at two urban hospitals in New South Wales to better understand their contraceptive knowledge, views and practices regarding midwifery-led contraception provision in the postpartum period. Findings: The survey was completed by 128 midwives. Most agreed that information about contraception provided in the postpartum period is valuable to women, although their knowledge about different methods was variable. The majority (88%) believed that midwives have a role in providing contraceptive information, and 79% reported currently providing contraceptive counselling. However, only 14% had received formal training in this area. Conclusion: Findings demonstrate that most midwives provide some contraception information and believe this is an important part of a midwife's role. 
Yet most have not undertaken formal training in contraception. Additional research is needed to explore the content and quality of midwives' contraception discussions with women. Training midwives in contraceptive counselling would ensure women receive accurate information about available options. Upskilling midwives in contraception provision may increase postpartum uptake and reduce rapid repeat pregnancies. abstract_id: PUBMED:28793365 Saving lives with caring assessments: How Tanzanian nurse-midwives and obstetricians negotiate postpartum practices. Aims And Objectives: To explore the nurse-midwives' and obstetricians' experiences delivering postpartum care assessments and how it was constructed through personal, social and institutional discourses. Introduction: The Tanzanian Government has prioritised maternal and child health as an urgent healthcare issue. Nurse-midwives and obstetricians are the two main providers of care throughout the prenatal and postpartum periods. Design: A qualitative design guided by a feminist poststructuralist methodology. Methods: Ten nurse-midwives and three obstetricians from three Regional Hospitals in Dar es Salaam participated in individual semi-structured in-depth interviews. Results: Assessment emerged as a significant theme with three subthemes. Nurse-midwives shared their beliefs and values about assessments that focused on the safety of mothers and babies. They felt proud working with mothers and babies and shared their frustrations having to deal with inadequate working conditions. Guidelines and practices were part of the institutional discourse that impacted the day-to-day experiences of nurse-midwives and obstetricians. The nurse-midwives held the belief that it was vital to complete a comprehensive assessment to identify danger signs, keep mothers and babies safe and look for any abnormalities. They were concerned that mothers were being sent home too early. 
Conclusions: Nurse-midwives' experiences in the provision of postpartum care show that these health providers work wholeheartedly to make sure that the mothers and their newborns receive the best care they can provide. The health system is challenged to address the needed supplies and equipment for reproductive health, in particular postpartum care services. Relevance To Clinical Practice: Institutional health discourses significantly affect the practice of nurse-midwives and obstetricians to deliver timely and effective postpartum assessments. Immediate and ongoing postpartum assessments conducted by nurse-midwives and obstetricians can save lives. This study presents the first theme of the study: Caring assessments save lives. abstract_id: PUBMED:32035798 Psychological distress in pregnancy and postpartum. Purpose: Depression, stress, and anxiety, termed 'psychological distress,' are common in pregnancy and postpartum periods. However, it is unclear whether prenatal psychological distress predicts postpartum psychological distress. We studied the prevalence, comorbidity and associations of maternal depression, stress, and anxiety in the prenatal period in relation to the occurrence of these same measures in the postpartum period. Methods: Data originated from the MotherToBaby study of pregnant women residing in the U.S. or Canada (2016-2018). Risk ratios and 95% Confidence Intervals using modified-Poisson regression models were used to investigate associations between prenatal psychological distress and postpartum psychological distress. Results: Of the 288 women in the analysis, 21.2% and 26.7% of women had evidence of prenatal and postnatal psychological distress, respectively. Among those with prenatal psychological distress, 43 (70.5%) also had postpartum psychological distress. Twenty-five (41%) of those with prenatal and 46 (60%) of those with postpartum psychological distress had comorbidity of at least two of the measures.
Prenatal measures independently predicted the same postnatal measures; prenatal anxiety also independently predicted postpartum stress. Participants who experienced more types of prenatal psychological distress were at higher risk for postpartum depression, stress, and anxiety. Conclusion: Depression, stress, and anxiety are common in pregnant women and often occur together. Prenatal psychological distress measures are associated with postnatal psychological distress measures, with stronger associations among women with more than one type of psychological distress in pregnancy. Interventions during pregnancy may reduce the risk of postpartum psychological distress. Answer: Yes, midwives can reduce postpartum psychological morbidity. A randomized trial conducted in northwest England found that postnatal "debriefing" by midwives, which included listening, support, counseling, understanding, and explanation, led to women being less likely to have high anxiety and depression scores after delivery compared to a control group. This suggests that the interventions provided by midwives in the postnatal period can benefit psychological well-being (PUBMED:9892887). Additionally, a community-based study within the framework of the German Midwifes Prevention Study showed that midwives' evaluation and service provision were associated with early detection of postpartum emotional distress in mothers and fathers. The study found that midwives identified half of the mothers and around one-quarter of the fathers with elevated depressive symptoms. Moreover, mothers in the intervention group who received extended postpartum care of 6 months used significantly more midwife contacts and were referred to additional support services more often than mothers with lower levels of distress (PUBMED:26217649). 
These findings indicate that midwives play a crucial role in reducing postpartum psychological morbidity through their support and services, which can include early detection and referral to additional care if needed.
Instruction: Peripheral aneurysms and arteriomegaly: is there a familial pattern? Abstracts: abstract_id: PUBMED:9786252 Peripheral aneurysms and arteriomegaly: is there a familial pattern? Purpose: Studies have shown that 11% to 18% of patients with an abdominal aortic aneurysm (AAA) have a first-degree relative with an AAA. A familial pattern among patients with peripheral arterial aneurysms and arteriomegaly has not been reported. The objective of this study was to examine familial patterns among patients with peripheral arterial aneurysm and arteriomegaly and compare them with patterns among patients with AAA. Methods: Pedigrees were constructed for first-degree relatives of patients who received the diagnosis of peripheral arterial aneurysm, arteriomegaly, or AAA from 1988 through 1996. The presence of aneurysms and risk factors was confirmed for patients and relatives by means of telephone interviews and review of hospital and physician records. Results: Seven hundred three first-degree relatives older than 50 years were contacted for 140 probands with peripheral arterial aneurysm, AAA, or arteriomegaly. There were differences in risk factors for hernia and diabetes mellitus among the probands with peripheral arterial aneurysm, AAA, or arteriomegaly but none for relatives. Patients with peripheral arterial aneurysm (n = 40) had a 10% (4/40) familial incidence rate of an aneurysm, patients with AAA (n = 86) had a 22% (19/86) familial incidence rate, and patients with arteriomegaly (n = 14) had a 36% (5/14) familial incidence rate. AAA (24/28, or 86%) was the aneurysm diagnosed most commonly among first-degree relatives. Most aneurysms (85%) occurred among men. Conclusion: There appears to be a gradation of familial patterns from peripheral arterial aneurysm to AAA to arteriomegaly among patients with degenerative aneurysmal disease, and there appears to be a predominance among men. 
Relatives of patients with any of the 3 lesions (peripheral arterial aneurysm, AAA, or arteriomegaly) most frequently have AAA. Relatives of patients with AAA, peripheral arterial aneurysm, or arteriomegaly may be screened by means of a physical examination for peripheral aneurysmal disease. Screening by means of ultrasound examination of the aorta should be limited to first-degree relatives of patients with aortic aneurysms or arteriomegaly. abstract_id: PUBMED:31447506 Pattern of Vascular Diseases at Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia. Background: Vascular diseases are evolving fast in sub-Saharan Africa, but their management is challenged by a lack of expertise and infrastructure. In the light of the prevailing challenge, this study was done to see the pattern of vascular disease and its treatment in a tertiary referral hospital. Methods: A prospective cross-sectional study was done over a period of one year (February 9, 2016 to February 8, 2017) at Tikur Anbessa specialized hospital (TASH) in Addis Ababa, Ethiopia. Results: A total of 386 patients were seen at the surgical OPD. Of these, 78 (20.2%) were admitted and operated on. The male to female ratio was 1.3:1. The mean age affected was 39 +/- 10 years (range, 12-91 years). On the other hand, 132 (34.2%) patients came with PAD. Of them, 46 (34.8%) presented with either frank gangrene or a pre-gangrene stage. The remaining 86 (65.1%) had claudication pain. The other diseases seen were varicose veins, 100 (25.9%); carotid body tumors, 60 (15.4%); aneurysmal diseases, 36 (9.1%); vascular malformations, 34 (8.7%); and vascular injuries, 22 (5.6%). During the study period, 28 (35.9%) PAD, 22 (28.2%) trauma, 8 (10.2%) chemodectoma, 8 (10.2%) aneurysm, 6 (7.7%) hemangioma, 4 (5.1%) varicose vein and 2 (2.6%) AV fistula patients were operated on.
The gap created by the limited number of vascular surgeons, poor infrastructure and an absent supply chain system has significantly compromised the number and types of operations performed. These challenges result in preventable morbidity and mortality. abstract_id: PUBMED:21327891 Multiple mirror-image peripheral arterial aneurysms in Kawasaki disease. We present the case of a 5-year-old boy diagnosed with Kawasaki disease at 10 months of age who developed fusiform aneurysms of the coronary arteries as well as a previously undescribed pattern of mirror-image peripheral arterial aneurysms. These aneurysms were diagnosed on conventional angiography of the aortoiliac circulation and the bilateral upper extremity circulation. abstract_id: PUBMED:30880887 Pseudoaneurysms of the Peripheral Arteries. Peripheral arterial pseudoaneurysms, while relatively rare, are encountered by most vascular specialists. This review evaluates the epidemiology, diagnosis, natural history, and treatment of pseudoaneurysms in the peripheral arteries. Most of this review concentrates on iatrogenic peripheral pseudoaneurysms, but pseudoaneurysms of other etiologies will also be discussed. abstract_id: PUBMED:7988342 Peripheral arterial disease. Peripheral arterial diseases are common problems because atherosclerosis, the most common cause of both occlusive peripheral arterial disease and aneurysmal disease, is a feature of an aging population. The less common types of occlusive peripheral arterial disease (the vasospastic disorders and the arteritides), although frequently not addressed to any extent in medical school curricula or in residency programs in family and internal medicine, offer the alert and informed clinician diagnostic and therapeutic opportunities that are too important to neglect.
As a group, abnormalities of the peripheral arteries are easy to detect if careful observation and examination of the extremities is included in the physical examination and attention to functional disorders is included in the medical history. In this presentation, the clinical features, physical findings, complications, useful noninvasive diagnostic tests, imaging techniques, and therapeutic options for atherosclerotic occlusive peripheral arterial disease (both chronic and acute), uncommon types of occlusive arterial disease of both the lower and upper extremities, the vasospastic disorders, and the peripheral arterial presentations of the arteritides are reviewed. The application of natural history and comorbidity of a particular arterial disease, when available, is emphasized in the formulation of management for the individual patient. abstract_id: PUBMED:23885937 Peripheral artery aneurysm. Peripheral aneurysms typically present as asymptomatic incidental findings or may present with symptoms when there is local compression of other structures, such as nerves or veins, with ischemia, or rarely with rupture. Larger and symptomatic aneurysms should be repaired. Ultrasonography, computed tomography angiography, and magnetic resonance angiography can be used to define inflow and outflow and better characterize the aneurysm, particularly size and thrombus. Repair of peripheral aneurysms typically involves resection with interposition grafting, although certain anatomic sites may be amenable to endovascular approaches. Femoral pseudoaneurysms can be managed with observation, surgical repair, ultrasound-guided compression, or ultrasound-guided thrombin injection. abstract_id: PUBMED:37152703 Is There a Typical Doppler Pattern in Patients With Apical Hypertrophic Cardiomyopathy With Aneurysm? 
Nineteen consecutive patients with apical hypertrophic cardiomyopathy and apical aneurysm underwent a comprehensive echo-Doppler examination, including continuous wave Doppler at the midventricular level. Three different flow patterns, each expressing a different intracavitary pressure gradient, were defined: pattern A (the most frequent), pattern B, and pattern C. (Level of Difficulty: Intermediate.) abstract_id: PUBMED:1765715 Peripheral aneurysms. Between January 1970 and December 1990, 17 cases of peripheral aneurysms were observed in 15 patients. Fourteen patients were male and one patient was female. Age ranged between 4 years and 82 years. Of the 17 peripheral aneurysms, 4 were false aneurysms, 9 were true atherosclerotic aneurysms, 2 were mycotic and 2 were true blunt traumatic aneurysms. The false aneurysms were caused by gunshot (one case, common carotid artery); complication of carotid endarterectomy (2 cases); iatrogenic (one case, common femoral artery). The atherosclerotic aneurysms (9 cases) were located in the subclavian artery (one case); superficial femoral artery (2 cases in a single patient); and popliteal artery (5 cases). The mycotic aneurysms (two in one patient) were located in the profunda femoris artery and in the anterior tibial artery. The two traumatic true aneurysms were located in the distal radial artery and in one interdigital artery, respectively. Of the 17 peripheral aneurysms, 15 were submitted to surgical treatment, with fairly good results, except for one case of infection and another case of fatal renal insufficiency. Follow-up time ranged from 1 to 18 years. abstract_id: PUBMED:8519578 Cine phase contrast angiography of normal and diseased peripheral arteries. Preliminary results. Cine phase contrast angiography (PCA) is a modified MR phase contrast sequence that acquires up to 22 coronal phase images per mean cardiac cycle.
The ability of the sequence to visualise local haemodynamics was investigated in 7 normal volunteers and 9 patients with flow disturbances of the peripheral arteries using a 1.5 T imager. Functional flow information provided by coronal cine PCA was correlated with quantitative data obtained by MR flow measurements and with vessel morphology confirmed by conventional angiograms. Owing to the as-yet suboptimal image quality, an aortic dissection and 1 of 4 aneurysms could not be depicted morphologically. The temporal pattern of arterial perfusion in cine PCA corresponded with the flow velocity versus time data provided by quantitative MR flow measurements. The accuracy and time resolution of cine PCA were thus sufficient to provide functional information on the severity of occlusive vascular disease. abstract_id: PUBMED:25838923 Embolization of traumatic and non-traumatic peripheral vascular lesions with Onyx. Purpose: The aim of our study is to verify the feasibility and the efficacy of Onyx as an embolization agent in the treatment of traumatic and non-traumatic peripheral vascular lesions. Materials And Methods: In the period between September 2006 and March 2012, we treated with Onyx 26 patients (14 males/12 females; age range, 18-85 years old; mean age, 65 years old), 11 of whom presented with traumatic peripheral vascular lesions and 15 with non-traumatic vascular lesions (9 neoplastic hemorrhagic lesions, 3 arteriovenous malformations (AVMs) and 3 aneurysms). Follow-up controls were performed with clinical examination and by multidetector computed tomography (MDCT) imaging 1, 6, and 12 months after the procedure. Results: All peripheral vascular lesions were embolized with Onyx; 3 patients with aneurysms were treated with Onyx associated with endovascular coils. Four elective and 22 emergency embolization procedures were performed. In all patients, we obtained cessation of bleeding and the complete and permanent embolization of all vascular lesions.
Conclusions: Onyx is an effective and safe embolization agent for peripheral vascular lesions. Answer: Yes, there appears to be a familial pattern in peripheral aneurysms and arteriomegaly. A study examining familial patterns among patients with peripheral arterial aneurysm, arteriomegaly, and abdominal aortic aneurysm (AAA) found that there is a gradation of familial patterns from peripheral arterial aneurysm to AAA to arteriomegaly among patients with degenerative aneurysmal disease. The study reported that patients with peripheral arterial aneurysm had a 10% familial incidence rate of an aneurysm, those with AAA had a 22% familial incidence rate, and patients with arteriomegaly had a 36% familial incidence rate. Additionally, AAA was the most commonly diagnosed aneurysm among first-degree relatives (86%), and most aneurysms (85%) occurred among men. The study concluded that relatives of patients with any of the three lesions—peripheral arterial aneurysm, AAA, arteriomegaly—most frequently have AAA, suggesting that screening for peripheral aneurysmal disease and ultrasound examination of the aorta may be beneficial for first-degree relatives of patients with aortic aneurysms or arteriomegaly (PUBMED:9786252).
Instruction: Minimally invasive cardiopulmonary bypass: does it really change the outcome? Abstracts: abstract_id: PUBMED:17433112 Minimally invasive cardiopulmonary bypass: does it really change the outcome? Introduction: Many innovative cardiopulmonary bypass (CPB) systems have recently been proposed by the industry. With few differences, they all share a philosophy based on priming volume reduction, closed circuit with separation of the surgical field suction, centrifugal pump, and biocompatible circuit and oxygenator. These minimally invasive CPB (MICPB) systems are intended to limit the deleterious effects of a conventional CPB. However, no evidence exists with respect to their effectiveness in improving the postoperative outcome in a large population of patients. This study aimed to verify the clinical impact of an MICPB in a large population of patients undergoing coronary artery revascularization. Methods: We conducted a retrospective analysis of 1,663 patients treated with an MICPB. The control group (conventional CPB) was extracted from a series of 2,877 patients according to a propensity score analysis. Results: Patients receiving an MICPB had a shorter intensive care unit (ICU) stay, had lower peak postoperative serum creatinine and bilirubin levels, and suffered less postoperative blood loss. Within a multivariable model, MICPB is independently associated with lower rates of atrial fibrillation (odds ratio [OR] 0.83, 95% confidence interval [CI] 0.69 to 0.99) and ventricular arrhythmias (OR 0.45, 95% CI 0.28 to 0.73) and with higher rates of early discharge from the ICU (OR 1.31, 95% CI 1.06 to 1.6) and from the hospital (OR 1.46, 95% CI 1.18 to 1.8). Hospital mortality did not differ between groups. Conclusion: MICPBs are associated with reduced morbidity. However, these results will need to be confirmed in a large, prospective, randomized, controlled trial. 
abstract_id: PUBMED:32906198 Analysis of clinical outcome and postoperative organ function effects in a propensity-matched comparison between conventional and minimally invasive mitral valve surgery. Background: Minimally invasive mitral valve (MV) surgery (MIVT) is increasingly performed with excellent clinical outcome, despite longer procedural times. This study analyzes clinical outcomes and effects on secondary organ functions in a propensity-matched comparison with conventional MV surgery. Methods And Results: Out of 439 patients undergoing MV surgery from January 2005 to May 2017, 233 patients were included after propensity-matching: 90 sternotomy patients and 143 MIVT patients. Endpoints focused on survival, quality of MV repair, and organ function effects through analysis of biomarkers and functional parameters. Regardless of longer cardiopulmonary bypass (sternotomy: 101 (IQR 33) min vs. MIVT: 143 (IQR 45) min, p < .001) and cardioplegic arrest times (sternotomy: 64 (IQR 25) min vs. MIVT: 90 (IQR 34) min, p < .001), no differences in survival or complication rate were found. Effects on renal function (creatinine, p = .751; urea, p = .538; glomerular filtration, p = .848), myocardial damage by troponin I level (sternotomy: 1.8 ± 3.9 ng/ml vs. MIVT: 1.2 ± 1.3 ng/ml, p = .438) and prolonged ventilatory support >24 h (sternotomy: 5.5% vs. MIVT: 8.4%, p = .417) were comparable. The systemic inflammatory reaction by postoperative C-reactive protein count was markedly lower for MIVT (p < .001). Increased rhabdomyolysis was found after MIVT surgery, based on a significant elevation of creatine kinase levels (sternotomy: 431 ± 237 U/L vs. MIVT: 701 ± 595 U/L, p < .001). Conclusion: Despite an inherent learning curve, minimally invasive MV surgery guarantees a clinical outcome and MV repair quality at least non-inferior to those of MV surgery via sternotomy.
Notwithstanding longer cardiopulmonary bypass and cardiac arrest times, the impact on secondary organ function is negligible, except for a lower systemic inflammatory response. The postoperative increase in CK enzymes, suggestive of enhanced rhabdomyolysis, needs to be accounted for when procedural times approach the critical time threshold for severe limb ischemia. abstract_id: PUBMED:34479337 Redefining the gold standard for aortic valve replacement: Minimally invasive with accelerated recovery Background: Despite being introduced 20 years ago, minimally invasive aortic valve replacement is performed routinely in only a minority of patients worldwide. Aim: To report the operative outcome of minimally invasive aortic valve replacement done through a partial upper sternotomy. Patients And Methods: Retrospective analysis of prospectively recorded data on 450 consecutive patients with a median age of 66 years (59% males) who had a minimally invasive aortic valve replacement. Results: 79% of patients had aortic stenosis. Median cross-clamp and cardiopulmonary bypass times were 56 and 68 minutes, respectively. Conversion to full sternotomy was required in 2.6% of patients and reoperation for bleeding in 2.9%; 1.6% suffered a stroke and 19% postoperative atrial fibrillation. 0.9% required a permanent pacemaker. Postoperative mortality was 0.9%. Median postoperative hospital stay was six days. Conclusions: Minimally invasive aortic valve replacement can be performed with satisfactory results. abstract_id: PUBMED:9593549 Minimally invasive valve surgery. Minimally invasive surgical techniques have proliferated at a staggering rate in the last decade. Cardiac surgery has been a late entrant in this process but is currently experiencing a surge of interest in, and techniques for, minimally invasive approaches.
Cardiac valve surgery has seen a rapid and ongoing evolution of minimally invasive approaches that, it is hoped, will decrease patient discomfort, operative morbidity, length of hospitalization, and cost, improve cosmetic healing, and facilitate return to normal function while not compromising short- or long-term outcomes of the surgical procedure. This article examines the evolution of recent experience with minimally invasive valve surgery and emphasizes the surgical considerations surrounding 1) the choice of incision, 2) access techniques for cardiopulmonary bypass (CPB), 3) methods of myocardial protection, 4) techniques for aortic occlusion, and 5) atrial incisions for exposure of the mitral valve. The results of early and currently available series are also reviewed. abstract_id: PUBMED:30394847 Aortic valve replacement with or without myocardial revascularization in octogenarians. Can minimally invasive extracorporeal circuits improve the outcome? Objective: The positive impact of minimally invasive extracorporeal circuits (MiECC) on patient outcome is expected to be most evident in patients with limited physiologic reserves. Nevertheless, most studies have limited their use to low-risk patients undergoing myocardial revascularization. As such, there is little evidence to their benefit outside this patient population. We, therefore, set out to explore their potential benefit in octogenarians undergoing aortic valve replacement (AVR) with or without concomitant myocardial revascularization. Methods: Based on the type of the utilized ECC, we performed a retrospective propensity score-matched comparison among all octogenarians (n = 218) who received a primary AVR with or without concomitant coronary artery bypass grafting in our institution between 2003 and 2010. Results: A MiECC was utilized in 32% of the patients. The propensity score matching yielded 52 matched pairs. The 30-day postoperative mortality (2% vs. 
10%; p=0.2), the incidence of low cardiac output (0% vs. 6%; p=0.2) and the Intensive Care Unit (ICU) stay (2.5 ± 2.6 vs. 3.8 ± 4.7 days; p=0.06) were all in favour of the MiECC group, but failed to reach statistical significance while the 90-day postoperative mortality did (2% vs. 16%; p=0.02). Conclusion: MiECCs have a positive influence on the outcome of octogenarians undergoing AVR with or without concomitant coronary artery bypass grafting. Their use should, therefore, be extended beyond isolated coronary artery bypass graft (CABG) surgery. abstract_id: PUBMED:32669734 Minimally Invasive Aortic Valve Replacement on Minimally Invasive Extracorporeal Circulation: Going beyond Aesthetics. We present our multidisciplinary and multistep strategy in patients undergoing minimally invasive aortic valve replacement (mAVR) on minimally invasive extracorporeal circulation (MiECC) compared with control groups of a single strategy and conventional techniques. This cohort study included high-risk patients (Society of Thoracic Surgeons [STS] risk score >8%) undergoing aortic valve surgery under different strategies during the period from January 2017 until March 2019. Patients were matched for age, gender, body mass index, and STS score: group 1 (MiAVR) based on a minimally invasive technique with J-mini-sternotomy, rapid deployment valve (RDV), and type IV customized MiECC; group 2 (control-mAVR) consisted of minimally invasive technique with only J mini-sternotomy and RDV on a conventional extracorporeal system; group 3 (control-MiECC): full sternotomy and type IV customized MiECC; and group 4 (control): full sternotomy on a conventional extracorporeal system. The MiAVR group had significantly less duration of x-clamp time (35.4 ± 11 minutes), postoperative respiratory support (4.1 ± 1 hour), postoperative hemorrhage (250 ± 50 mL), and intensive care unit stay (1 ± .5 days) than the control-conventional (group 4) group. 
Seventy-six percent of patients did not receive any blood products in MiAVR (p = .025 vs. group 4). The incidences of atrial fibrillation (8%) and low cardiac output (14%) in MiAVR were significantly better than in the control group. Critics of minimally invasive techniques contend that potential advantages are offset by a longer cross-clamp and cardiopulmonary bypass duration, which may translate into inferior clinical outcomes. We advocate that our multidisciplinary approach, supported by multiple technologies, may be associated with faster recovery and superior outcomes compared with single-strategy minimally invasive or conventional techniques. abstract_id: PUBMED:32243530 The impact of cardiopulmonary bypass management on outcome: a propensity matched comparison between minimally invasive and conventional valve surgery. Objectives: Research concerning cardiopulmonary bypass (CPB) management during minimally invasive cardiac surgery (MICS) is scarce. We investigated the effect of CPB parameters such as pump flow, haemoglobin concentration and oxygen delivery on clinical outcome and renal function in a propensity matched comparison between MICS and median sternotomy (MS) for atrioventricular valve surgery. Methods: A total of 356 patients undergoing MICS or MS for atrioventricular valve surgery between 2006 and 2017 were analysed retrospectively. Propensity score analysis matched 90 patients in the MS group with 143 in the MICS group. Logistic regression analysis was performed to investigate independent predictors of cardiac surgery-associated acute kidney injury in patients having MICS. Results: In MICS, CPB (142.9 ± 39.4 vs 101.0 ± 38.3 min; P < 0.001) and aortic cross-clamp duration (89.9 ± 30.6 vs 63.5 ± 23.0 min; P < 0.001) were significantly prolonged although no differences in clinical outcomes were detected.
The pump flow index was lower [2.2 ± 0.2 vs 2.4 ± 0.1 L/min/m²; P < 0.001] whereas intraoperative haemoglobin levels were higher (9.25 ± 1.1 vs 8.8 ± 1.2; P = 0.004) and the nadir oxygen delivery was lower [260.8 ± 43.5 vs 273.7 ± 43.7 ml/min/m²; P = 0.029] during MICS. Regression analysis revealed that the nadir haemoglobin concentration during CPB was the sole independent predictor of cardiac surgery-associated acute kidney injury (odds ratio 0.67, 95% confidence interval 0.46-0.96; P = 0.029) in MICS but not in MS. Conclusions: Specific cannulation-related issues mean that CPB management during MICS is confronted with flow restrictions, because an average pump flow index ≤2.2 L/min/m² is achieved in 40% of patients who have MICS compared to those who have a conventional MS. This study showed that increasing the haemoglobin level might help reduce the incidence of cardiac surgery-associated acute kidney injury after minimally invasive mitral valve surgery. abstract_id: PUBMED:37241139 Right-Sided Minimally Invasive Direct Coronary Artery Bypass: Clinical Experience and Perspectives. Objective: Minimally invasive direct coronary artery bypass grafting (MIDCAB) using the left internal thoracic artery to the left anterior descending artery is a clinical routine in the treatment of coronary artery disease. Far less is known about right-sided MIDCAB (r-MIDCAB) using the right internal thoracic artery (RITA) to the right coronary artery (RCA). We aimed to present our experience in patients with complex coronary artery disease who underwent r-MIDCAB. Materials and Methods: Between October 2019 and January 2023, 11 patients received r-MIDCAB using a RITA to RCA bypass in a minimally invasive technique via right anterior minithoracotomy without the use of cardiopulmonary bypass. Underlying coronary disease was complex right coronary artery stenosis (n = 7) and anomalous right coronary artery (ARCA; n = 4).
All procedure-related and outcome data were evaluated prospectively. Results: Successful minimally invasive revascularization was achieved in all patients (n = 11). There were no conversions to sternotomy and no re-explorations for bleeding. Furthermore, no myocardial infarctions, no strokes, and, most importantly, no deaths were observed. During the follow-up period (median 24 months), all patients were alive and 90% were completely angina free. Two patients underwent repeat revascularization after surgery, but independent of the RITA-RCA bypass, which was fully competent in both patients. Conclusion: Right-sided MIDCAB can be performed safely and effectively in patients with an expected technically challenging percutaneous coronary intervention of the RCA and in patients with ARCA. Mid-term results showed high freedom from angina in nearly all patients. Further studies with larger patient cohorts and more evidence are needed to determine the best revascularization strategy for patients suffering from isolated complex RCA stenosis and ARCA. abstract_id: PUBMED:9805948 Minimally invasive valvular surgery Surgery for cardiac valvular pathology is one of the great successes of the 20th century. Until recently, the surgical techniques were centered on median sternotomy and cardiopulmonary bypass with aortic and atrial cannulation. The changes in cardiac surgical technique for minimally invasive valve surgery include: a different and smaller incision to minimize trauma to the body, a new technique of connecting the patient to cardiopulmonary bypass, and a different technique for myocardial perfusion and aortic occlusion. abstract_id: PUBMED:33403735 Minimally invasive tricuspid valve surgery without caval occlusion: Short and midterm results. Objective: The use of minimally invasive or transcatheter interventions rather than standard full sternotomy operations to treat tricuspid valve (TV) disease is increasing.
The debate, however, is still open regarding venous drainage management during cardiopulmonary bypass (CPB) and whether or not the superior and inferior venae cavae should be occluded during opening of the right atrium to avoid air entering the venous line. The aim of the present study is to report operative outcomes and midterm follow-up results of minimally invasive TV surgery performed without caval occlusion. Methods: A retrospective outcome evaluation from institutional records was performed with prospective data entry. Consecutive patients who underwent right mini-thoracotomy TV surgery, isolated or combined with mitral valve surgery, between June 2013 and February 2020 were considered. Telephone and echocardiographic follow-up was performed. Results: During the study period, 68 consecutive patients underwent minimally invasive TV surgery without occlusion of the caval veins. The mean age was 69 ± 14 years and 48 (70%) were female. All operations were performed safely without air-lock during CPB. A perioperative cerebral stroke occurred in one patient. Survival at 5- and 8-year follow-up was 100% and 79%, respectively. No severe tricuspid regurgitation was evident at echocardiographic follow-up. Conclusion: Our results show that performing tricuspid surgery without caval occlusion is safe. The air was captured by the active vacuum drainage system without causing damage. Midterm follow-up data confirm that a minimally invasive approach does not alter the quality of surgery. Answer: Minimally invasive cardiopulmonary bypass (MICPB) systems have been developed with the intention of reducing the negative effects associated with conventional cardiopulmonary bypass (CPB) during cardiac surgery. These systems typically feature reduced priming volume, closed circuits, centrifugal pumps, and biocompatible materials. The question of whether MICPB truly changes patient outcomes has been the subject of various studies.
A retrospective analysis of 1,663 patients treated with MICPB showed that patients had shorter ICU stays, lower peak postoperative serum creatinine and bilirubin levels, less postoperative blood loss, and were associated with lower rates of atrial and ventricular arrhythmias. Additionally, MICPB was independently associated with higher rates of early discharge from the ICU and the hospital. However, hospital mortality did not differ between the MICPB group and the control group (conventional CPB), suggesting that while MICPB is associated with reduced morbidity, further confirmation in a large, prospective, randomized, controlled trial is needed (PUBMED:17433112). In the context of minimally invasive mitral valve (MV) surgery, a propensity-matched comparison with conventional MV surgery found no differences in survival or complication rates despite longer CPB and cardioplegic arrest times. The impact on secondary organ function was negligible, except for a lower systemic inflammatory response and increased rhabdomyolysis after minimally invasive surgery (PUBMED:32906198). For aortic valve replacement, minimally invasive surgery through a partial upper sternotomy has been shown to be performed with satisfactory results, with low rates of conversion to full sternotomy, reoperation for bleeding, stroke, and postoperative atrial fibrillation. The median postoperative hospital stay was six days (PUBMED:34479337). In octogenarians undergoing aortic valve replacement with or without myocardial revascularization, the use of minimally invasive extracorporeal circuits (MiECC) was associated with positive outcomes, including lower 90-day postoperative mortality when compared to conventional ECC (PUBMED:30394847). A study on minimally invasive aortic valve replacement on MiECC reported significantly less duration of cross-clamp time, postoperative respiratory support, postoperative hemorrhage, and ICU stay compared to conventional techniques. 
The incidence of atrial fibrillation and low cardiac output was also better in the minimally invasive group (PUBMED:32669734). In another study, the impact of cardiopulmonary bypass management on outcome was investigated in a propensity-matched comparison between minimally invasive and conventional valve surgery. The study found that while CPB and aortic cross-clamp duration were significantly prolonged in minimally invasive cardiac surgery (MICS), there were no differences in clinical outcomes. Specific CPB management during MICS, such as a lower pump flow index and higher intraoperative hemoglobin levels, did not adversely affect renal function or lead to cardiac surgery-associated acute kidney injury (PUBMED:32243530). Additionally, in one study, right-sided minimally invasive direct coronary artery bypass (r-MIDCAB) using the right internal thoracic artery to the right coronary artery was performed successfully in all patients, with no conversions to sternotomy, no re-explorations for bleeding, and no myocardial infarctions, strokes, or deaths observed. During the follow-up period, 90% of patients were completely angina-free (PUBMED:37241139). Lastly, a study on minimally invasive tricuspid valve surgery performed without caval occlusion reported that all operations were performed safely without air-lock during CPB. The survival rates at 5- and 8-year follow-up were high, and no severe tricuspid regurgitation was evident at echocardiographic follow-up, indicating that the minimally invasive approach without caval occlusion is safe and does not alter the quality of surgery (PUBMED:33403735). In conclusion, the evidence from these studies suggests that minimally invasive cardiopulmonary bypass can change the outcome by reducing morbidity, shortening ICU stays, and potentially improving recovery times without increasing mortality.
However, the impact on long-term survival and other clinical outcomes requires confirmation in large-scale randomized controlled trials.
Instruction: Are supportive services meeting the needs of Australians with neurodegenerative conditions and their families? Abstracts: abstract_id: PUBMED:17058753 Are supportive services meeting the needs of Australians with neurodegenerative conditions and their families? Objective: To identify the needs for supportive care/palliative care services of people in Australia with four neurodegenerative disorders--motor neurone disease, multiple sclerosis, Parkinson's disease, Huntington's disease--and the needs of their families; and to determine the extent to which existing supportive and palliative care services models meet these needs. Design: A mailed self-administered questionnaire to individuals with the four neurodegenerative diseases and their carers, in Western Australia, Victoria, and Queensland, using stratified proportional sampling from membership lists of the disease associations. Participants: A total of 503 patients and 373 carers responded, representing a response rate of 25.6% and 19.0%, respectively. Exclusion criteria included those who had been recently diagnosed, and those too sick or disabled to participate. Results: As patient dependency increased and more support was needed, both patients and carers exhibited higher distress symptoms and a poorer quality of life. Those who received more tailored services and more palliative care services were the most satisfied. The results highlighted the need for tailored and flexible models of care for these groups with unique care requirements. Conclusions: This is the first empirical evidence of the needs and services used by these patient groups in Australia, and will form the basis for future developments of palliative and supportive care services for people with these four neurological conditions. abstract_id: PUBMED:36865803 Supportive care of neurodegenerative patients. Neurodegenerative illnesses are notorious for paucity of treatments and relentless clinical progression. 
Illness may follow a relatively acute presentation, as is seen with primary brain tumors such as glioblastoma, or have a more insidious onset with a slower yet unyielding course, such as that seen in Parkinson's disease. Though disparate in presentation, these neurodegenerative illnesses are universally terminal, and both the patients and their families benefit from the intervention of supportive care in conjunction with primary disease management. Supportive palliative care has been shown to improve quality of life, enhance patient outcomes, and often extend patient life, but such care needs to be tailored. This clinical commentary examines the role of supportive palliative care in the management of neurologic patients, comparing and contrasting glioblastoma patients with idiopathic Parkinson's disease patients. Both patient populations are high utilizers of healthcare resources, require active management of multiple symptoms, and have high caregiver burden, which underscores the need for supportive services in conjunction with disease management provided by the primary care team. Prognostication, patient and family communication, trust and relationship building, and complementary medicinal approaches are reviewed for these two diseases, which broadly represent two differing poles of incurable neurological illness. abstract_id: PUBMED:25429895 Unmet health and rehabilitation needs of people with long-term neurological conditions in Queensland, Australia. The survival and life expectancy rates of people with traumatic and degenerative neurological conditions are increasing, sometimes up to several decades. Yet compared to the general population, people with a disability continue to experience poorer health and are at greater risk of developing secondary health problems and facing barriers to services they require. These trends have significant implications for provision of health and rehabilitation services.
In this study, the adequacy of health and rehabilitation services provided to people with long-term neurological conditions and their unmet needs were explored from the perspectives of individual users, their nominated family members and key service providers. A qualitative research design with maximum variation sampling was used. Data were collected from semi-structured interviews with 65 participants comprising 25 long-term care service users, nominated family members or friends (n = 22) and care service providers (n = 18) in Queensland, Australia. All service users needed assistance with usual daily activities, and 22 were wheelchair dependent. The hours of funded personal care ranged from 2 to 201 hours per week. Data were analysed using framework analysis. Participants generally perceived that specialist medical and hospital services were adequate and satisfactory. They valued supportive health and rehabilitation professionals and receiving client-centred physical rehabilitation. However, the majority of participants (n = 17) had perceived unmet needs for physical rehabilitation (n = 14), other health or rehabilitation services (n = 10) or counselling (n = 6). Community-based physical maintenance rehabilitation was often perceived as inadequate, costly or inconveniently located. Participants highlighted the importance of personal and family counselling and information provision at time points such as diagnosis. The findings contribute to the limited international evidence on the gaps in health and rehabilitation services for people with neurological conditions receiving lifetime care services in the community. A continuum of integrated rehabilitation services to minimise avoidable impairments, optimise independence and functioning, and sustain quality of life is warranted. abstract_id: PUBMED:28069034 Health-related quality of life and unmet healthcare needs in Huntington's disease. 
Background: Huntington's disease (HD) is a rare neurodegenerative disorder with a prevalence of 6 per 100,000. Despite increasing research activity on HD, evidence on healthcare utilization, patients' needs for healthcare services and Health-Related Quality of Life (HRQoL) is still sparse. The present study describes HRQoL in a Norwegian cohort of HD patients, and assesses associations between unmet healthcare and social support service needs and HRQoL. Methods: In this cross-sectional population-based study, 84 patients with a clinical diagnosis of HD living in the South-East of Norway completed the HRQoL questionnaire EuroQol, EQ-5D-3L. Unmet needs for healthcare and social support services were assessed by the Needs and Provision Complexity Scale (NPCS). Furthermore, functional ability was determined using the Unified Huntington's Disease Rating Scale (UHDRS) Functional assessment scales. Socio-demographics (age, gender, marital status, occupation, residence, housing situation) and clinical characteristics (disease duration, total functional capacity, comorbidity) were also recorded. Descriptive statistics were used to describe the patients' HRQoL. Regression analyses were conducted in order to investigate the relationship between unmet healthcare needs and self-reported HRQoL. Results: The patients were divided across five disease stages as follows: Stage I: n = 12 (14%), Stage II: n = 22 (27%), Stage III: n = 19 (23%), Stage IV: n = 14 (16%), and Stage V: n = 17 (20%). Overall HRQoL was lowest in patients with advanced disease (Stages IV and V), while patients in the middle phase (Stage III) showed the most varied health profile for the five EQ-5D-3L dimensions. The regression model including level of unmet needs, clinical characteristics and demographics (age and education) accounted for 42% of variance in HRQoL.
A higher level of unmet needs was associated with lower HRQoL (β value - 0.228; p = 0.018) whereas a better total functional capacity corresponded to higher HRQoL (β value 0.564; p < 0.001). Conclusions: The study findings suggest that patients with HD do not receive healthcare services that could have a positive impact on their HRQoL. abstract_id: PUBMED:25827780 What is known about the subjective needs of people with behavioural variant frontotemporal dementia? A scoping review. Behavioural variant frontotemporal dementia (bvFTD) is an early-onset and progressive neurodegenerative disease associated with strong changes in judgement, behaviour, personality and emotions. These changes can cause significant problems in everyday life for people with bvFTD and their families, and have implications for health and society. Currently, there are no suitable evidence-based specific interventions for people with bvFTD. This scoping review aims to identify the self-expressed needs, demands and coping strategies of people with bvFTD. Identifying these issues is the first step towards the development of need-based psycho-social interventions for people with bvFTD. A comprehensive literature research was conducted of German and English scientific articles published between January 2000 and October 2014 using the databases MEDLINE, CINAHL, PsycINFO, PSYNDEX, SocINDEX, GeroLit, the Cochrane Library, ProQuest, the German National Library and additional search strategies in terms of a scoping review. Articles were identified by combining search terms related to 'frontotemporal dementia' with terms related to 'self-expressions', 'needs/demands' and 'coping'. After excluding duplicates, two independent reviewers screened the titles and abstracts of 2317 records for eligibility. Because eligibility could not be assessed from the titles or abstracts of 28 articles, those articles were assessed using the full text. 
One poster abstract met our research question and a few articles were related, but no article met all of the inclusion criteria. This lack of scientifically based knowledge concerning the perspective of people with bvFTD is discussed with reference to the search strategy and the research questions, disease-related aspects such as changes in behaviour or language and emotions and the difficulties in researching this topic. Recommendations are formulated for future research considering the perspective of people with bvFTD, which will involve the development of appropriate data collection methods. Subsequently, specialised interventions must be developed. abstract_id: PUBMED:21069606 Assessment of patient and caregiver needs in fragile X-associated tremor/ataxia syndrome by utilizing Q-sort methodology. Objectives: Psychosocial stressors faced by patients with fragile X-associated tremor/ataxia syndrome (FXTAS) and their caregivers have not been systematically explored. FXTAS is a neurodegenerative disease occurring in approximately 45% of elderly male carriers and 8-16% of female carriers of the fragile X mental retardation 1 (FMR1) premutation. This study investigated the subjective needs of patients with FXTAS and their family caregivers, by utilizing Q-sort methodology. Method: Patients with FXTAS and their caregivers seen during January 2005 to June 2007 participated. The Q-sort was composed of 17 (eight formal and nine informal) items, designed to explore emotional, informational, and instrumental needs of patients with FXTAS and their caregivers. Item scores were generated from 1 = least important to 7 = most important. Analysis included descriptive statistics for all the demographic and outcome variables. Generalized estimating equations were used to identify which of the need domains were perceived as most important by the participants.
Results: A total of 24 patients (79% men, mean age 65.6 ± 6.4 years) with FXTAS and 18 caregivers (11% men, mean age 63.6 ± 6.2 years) completed the Q-sort. Both patients and caregivers rated informational needs as most important, followed by emotional and, finally, by instrumental needs. Participants lacked many important resources, in particular those addressing instrumental needs. Conclusion: Providers should be educated and able to provide timely information and referrals to formal services, as well as to informal resources, including the National Fragile X Foundation online support network (www.fragilex.org). abstract_id: PUBMED:33757892 The Palliative Care Information Needs of Patients with Amyotrophic Lateral Sclerosis and their Informal Caregivers: A Scoping Review. Context: Amyotrophic Lateral Sclerosis (ALS) is a fatal neurodegenerative disease, associated with impaired quality of life for patients and caregivers. As treatment is largely supportive, early involvement of palliative care (PC) is recommended as standard of care. Despite this, literature surrounding PC information needs is limited. Objectives: To explore the PC information needs of patients with ALS and their caregivers and identify gaps in the literature. Methods: A scoping review using MEDLINE, EMBASE, CINAHL and PsycINFO databases (2000-2019) was conducted. Articles examining PC information needs as stated by ALS patients and/or current/bereaved caregivers were included. Studies examining other diagnoses and those focused on healthcare workers were excluded. Thematic synthesis was used to summarize and identify prevalent domains and themes in the literature. Results: 581 articles underwent primary screening, with thirty-two ultimately included (26 original articles, six reviews). Fourteen examined information needs of both patients and caregivers, 13 caregivers only, 5 patients only. 
The most common PC information needs were as follows: for patients, disease course/prognosis (n = 10), general disease information (n = 9), decision-making (n = 7) and symptoms (n = 6); for caregivers, services and resources (n = 15), disease course/prognosis (n = 14), general disease information (n = 13) and skills (n = 10). There was substantial variability in information needs, both between patients and caregivers and among members of the same group. Conclusion: ALS patients and caregivers have unique and varying PC information needs. Future research should better characterize these needs to improve patient and caregiver quality of life. The delivery of information must be tailored to individual patient or caregiver preferences. abstract_id: PUBMED:17077795 Palliative care and support for people with neurodegenerative conditions and their carers. Aim: to identify and compare the needs for supportive care/palliative care services of people in Australia with Motor Neurone Disease (MND), Multiple Sclerosis (MS), Huntington's Disease (HD) and Parkinson's Disease (PD) and the needs of their carers. Design: national, descriptive survey of service use and support needs for 503 patients and 373 carers. Patients and carers were recruited from the databases of the disease associations in three Australian states. Patients were excluded if recently diagnosed or too sick or disabled to complete the survey. This was a specification of the ethics committee overseeing this project. Findings: the MND and HD groups had the greatest needs for support and appeared most receptive to a palliative approach. The MS group was least likely to have a carer and more likely to live alone. HD and PD groups had the highest rates of anxiety and HD groups had the highest rate of depression. Conclusions: patients and carers who received more tailored services and more palliative care services were the most satisfied. 
Therefore, results highlight the need for tailored and flexible models of care for these groups with unique care requirements. abstract_id: PUBMED:35665539 Care partner needs of people with neurodegenerative disorders: What are the needs, and how well do the current assessment tools capture these needs? A systematic meta-review. Objectives: The burden on care partners, particularly once dementia emerges, is among the greatest of all caregiving groups. This meta-review aimed to (1) synthesise evidence on the self-reported needs of care partners supporting people living with neurodegenerative disorders; (2) compare the needs according to care partner and care recipient characteristics; and (3) determine the face validity of existing care partner needs assessment tools. Methods: We conducted a systematic review of reviews involving a thematic synthesis of care partner needs and differences in needs according to demographic and other characteristics. We then conducted a gap analysis by identifying the themes of needs from existing needs assessment tools specific to dementia and cross-matching them with the needs derived from the thematic synthesis. Results: Drawing on 17 published reviews, the identified range of needs fell into four key themes: (1) knowledge and information, (2) physical, social and emotional support, (3) care partner self-care, and (4) care recipient needs. Needs may differ according to disease trajectory, relationship to the care recipient, and the demographic characteristics of the care partner and recipient. The 'captured needs' ranged between 8% and 66% across all the included needs assessment tools. Conclusions: Current tools do not fully or adequately capture the self-identified needs of care partners of people living with neurodegenerative disorders.
Given the high burden on care partners, which has been further exacerbated by the COVID-19 (SARS-CoV-2) pandemic, the needs assessment tools should align with the self-reported needs of care partners throughout the caregiving trajectory to better understand unmet needs and target supportive interventions. abstract_id: PUBMED:38350294 Support needs of people with Multiple System Atrophy. Background: Multiple system atrophy (MSA) is a neurodegenerative disease with presentations including parkinsonism, autonomic disturbances, gait impairment and mood disorders. The progressive nature of MSA results in a significant deterioration in quality of life for patients. Objectives: To investigate the needs of people with MSA (PwMSA) in the United Kingdom (UK) and the support provided to them. Methods: A survey of members of the UK's MSA Trust including PwMSA and former carers explored questions on the physical, emotional, personal and financial impact of MSA and on support received. Results: There were 284 valid responses from PwMSA and 371 responses from former carers. Difficulties with movement were universally experienced by PwMSA. The majority of former carers reported these as difficult or very difficult to manage (90%), followed by autonomic problems (85%) and breathing, speech or swallowing problems (81%). PwMSA also reported fatigue (96%), mood disorders (80%) and social isolation (82%). 41% of former carers felt that the emotional needs of PwMSA had not been met. Many also reported inadequately covered financial needs (48%). Equipment used most commonly included mobility equipment (90%). The majority received support from neurologists (95%) and GPs (92%), but few from palliative care or mental health professionals. Satisfaction was highest for support from palliative services (95%) and specialist nurses (91%). Conclusion: This survey demonstrates the high burden of MSA across various aspects of life for PwMSA.
It suggests that whilst there is good provision of neurology services and physical equipment, there is a need to improve support for their emotional and social needs. Answer: Supportive services in Australia are not fully meeting the needs of individuals with neurodegenerative conditions and their families. A study conducted in Western Australia, Victoria, and Queensland found that as patient dependency increased, both patients and carers exhibited higher distress symptoms and a poorer quality of life. Those who received more tailored services and more palliative care services were the most satisfied, indicating a need for tailored and flexible models of care for these groups with unique care requirements (PUBMED:17058753). Another study in Queensland, Australia, highlighted that while specialist medical and hospital services were generally perceived as adequate, the majority of participants had perceived unmet needs for physical rehabilitation, other health or rehabilitation services, or counselling. Community-based physical maintenance rehabilitation was often perceived as inadequate, costly, or inconveniently located (PUBMED:25429895). Furthermore, a national descriptive survey of service use and support needs for patients with Motor Neurone Disease, Multiple Sclerosis, Huntington's Disease, and Parkinson's Disease and their carers in Australia revealed that patients and carers who received more tailored services and more palliative care services were the most satisfied. This underscores the importance of flexible care models that are tailored to the unique needs of these patient groups (PUBMED:17077795). In summary, while there are supportive services available, there is a clear need for more personalized and flexible care options to better meet the complex and varied needs of Australians with neurodegenerative conditions and their families.
Instruction: Does the antisecretory agent used affect the evolution of upper digestive hemorrhage? Abstracts: abstract_id: PUBMED:7612363 Does the antisecretory agent used affect the evolution of upper digestive hemorrhage? Objective: To investigate whether omeprazole has improved morbidity-mortality among patients with upper gastrointestinal bleeding of non-variceal origin in comparison with ranitidine. Material And Methods: Prospective, randomized and open study. We studied 519 consecutive patients admitted to our Service between June 1991 and January 1993 for upper gastrointestinal bleeding of peptic origin, dividing the patients into two randomized groups that were homogeneous in terms of age, sex, previous history of gastric disease and upper gastrointestinal bleeding, intake of non-steroidal antiinflammatory drugs, and the severity of bleeding on admittance. Thus, Group A consisted of 252 patients treated immediately upon arrival at the emergency ward with 50 mg intravenous ranitidine, followed by a further 50 mg every 6 hours. Group B in turn consisted of 267 patients initially given a bolus dose of 80 mg omeprazole intravenously, followed by an additional 40 mg every 8 hours for 48 hours. Forty mg were subsequently administered every 12 hours until hospital discharge. Endoscopy was performed in all cases within the first 24 hours following admittance; patients with active upper gastrointestinal bleeding from a Forrest-type ulcer who were subjected to endoscopic sclerotherapy were excluded. Results: Duodenal ulcer was the most common cause of bleeding, followed by gastric ulcer and acute lesions of the mucosa. Emphasis should be placed on the high incidence of previous non-steroidal antiinflammatory drug intake in our series (54.5%). We encountered no statistically significant differences between the two groups on comparing bleeding stigmata, transfusion requirements, recurrences, emergency surgery, the duration of hospital stay, and mortality.
Conclusions: Both drugs were found to possess a similar efficacy in treating upper gastrointestinal bleeding of peptic origin. abstract_id: PUBMED:25914744 Measuring the hepatic venous pressure gradient in the upper digestive hemorrhages. Upper digestive hemorrhage is one of the main causes of mortality in liver cirrhosis (CH). Angiographic measurement of the hepatic venous pressure gradient (HVPG) can be used to determine the risk of hemorrhage. The aim of this study was to verify the influence of HVPG measurement on survival in patients with CH and upper digestive hemorrhage. A retrospective pilot study was carried out on 33 patients with upper digestive hemorrhage, divided into two groups according to the therapeutic approach followed. One group was classically treated, with variceal band ligation, non-selective beta-blockers and, in emergencies, Octreotide. The other group was treated depending on the value of the HVPG. From the first episode of digestive hemorrhage, the survival period averaged 8.1 months for patients with viral etiology and 19.7 months for those with alcoholic etiology. In patients treated after HVPG measurement, the average survival period was 34.2 months, versus 15.5 months in "classically" treated patients (a significant difference). In conclusion, the measurement of HVPG allowed the selection of a high-risk group of patients, permitting therapeutic decisions that significantly prolonged survival in these patients. abstract_id: PUBMED:26835241 Assessment of Antisecretory, Gastroprotective, and In-vitro Antacid Potential of Daucus carota in Experimental Rats. Objectives: In Indo China, carrots have been reported to regulate the functions of the stomach and intestines.
The objective of the present investigation was to unravel the therapeutic potential of a 50% ethanol extract from Daucus carota roots (EDC) on antisecretory, gastroprotective, and in vitro antacid capacity in experimental rats. Methods: Assessment of EDC antisecretory and in vivo antacid capacities was carried out using a pyloric ligation induced ulcer model. The gastroprotective effect was assessed with an absolute ethanol induced ulcer model. The integrity of gastric mucosa was evaluated using the estimation of glutathione and gastric mucus level and with histopathological examination of gastric mucosal cells. The in-vitro antacid capacity was evaluated using a titration method. The effect of the extract on the liver was assessed by measuring serum biochemical parameters. Results: The EDC significantly (p < 0.01-0.001) reduced gastric lesions in both models. Furthermore, the EDC also significantly (p < 0.05-0.001) reduced the volume of gastric content whereas the total acidity was significantly (p < 0.05-0.001) reduced with the doses of 100 mg/kg and 200 mg/kg EDC. Moreover, the mucus content and glutathione level increased significantly (p < 0.05) in the absolute alcohol-induced ulcer. The EDC also showed in-vitro antacid capacity. Histopathological studies further confirmed the potential of EDC by inhibiting congestion, edema, hemorrhage, and necrosis in gastric mucosa. Conclusion: The EDC exerted antisecretory, gastroprotective, and in vitro antacid potential. These activities could be attributed to the presence of glycosides, phenolics, tannins, alkaloids, and flavonoids. abstract_id: PUBMED:3288906 Endoscopic treatment of upper digestive hemorrhages The main aspects of endoscopic haemostasis treatment of severe haemorrhages of the upper digestive tract are analysed with particular emphasis on the techniques employed and the results obtainable.
abstract_id: PUBMED:22165068 Upper digestive hemorrhage appearing after cephalic duodeno-pancreatectomy The frequency of upper gastrointestinal hemorrhage as a postoperative complication of cephalic duodenopancreatectomy has remained constant for decades despite the overall decrease in mortality occurring after cephalic duodenopancreatectomy. It is the second most common complication after anastomotic fistulas, but more frequently fatal, especially when the pancreas is anastomosed with the stomach. The case presented here is of a 55-year-old patient, diagnosed in our clinic with a vaterian ampulloma, for which cephalic duodenopancreatectomy was performed; gastrointestinal and hepatobiliary continuity was restored by performing a termino-lateral pancreato-gastric anastomosis, a termino-lateral hepato-jejunal anastomosis and a termino-lateral gastro-jejunal anastomosis on a jejunal loop ascended transmesocolically. The postoperative evolution of the patient was marked by the appearance of two episodes of upper gastrointestinal hemorrhage, the first resolved by relaparotomy and the second with the contribution of an endoscopic intervention. From this case, we analyze risk factors for upper gastrointestinal hemorrhage appearing after cephalic duodenopancreatectomy and its therapeutic modalities, starting from the fact that currently there is no consensus among experts on this matter. abstract_id: PUBMED:37248086 A case of corrosive digestive tract and lung injury caused by ingestion of pipeline dredging agent Ingestion of corrosive substances can severely burn the upper digestive tract leading to bleeding or perforation, and may even be life-threatening. Less commonly, damage to the trachea and bronchi is involved. In this paper, a case of corrosive digestive tract injury and lung injury after oral administration of a pipeline dredging agent (the main components are hydroxide, sodium carbonate, sodium hypochlorite, etc.) was analyzed.
After active rescue treatment, the patient died of massive hemoptysis. It is suggested that serious complications may occur after ingestion of corrosive substances. Timely diagnosis and reasonable medical management are needed to improve the recognition and treatment of such diseases. abstract_id: PUBMED:22977805 Upper endoscopy in international digestive endoscopy network 2012: towards upper end of quality. Upper gastrointestinal (GI) endoscopy is the most basic part of the endoscopy field. Although old and basic procedures are still in use, a range of innovative techniques and devices is being introduced to allow more complex and difficult procedures in the endoscopy unit. High-quality upper endoscopic procedures can replace or obviate surgical treatment. Selected reviews dealing with non-variceal upper GI bleeding, challenging esophageal stenting, endoscopic management of subepithelial tumors, and endoscopic evaluation of candidate lesions for endoscopic submucosal dissection were chosen from among the topics of International Digestive Endoscopy Network 2012. abstract_id: PUBMED:30595882 Upper Gastrointestinal Bleeding - Initial Manifestation of Pancreatic Head Carcinoma. Pancreatic head carcinomas are a rare cause of upper digestive bleeding, and their diagnosis and treatment pose particular problems. We selected 6 cases from 283 patients hospitalized for surgery between January 2014 and December 2016 with signs of upper digestive bleeding of non-variceal origin who were subsequently diagnosed with pancreatic head carcinomas. The diagnosis was established by endoscopic and surgical methods. The evolution of these patients was influenced by whether there was active digestive bleeding or a history of digestive bleeding, and by the possibility of tumor resection. Four patients needed emergency surgery due to continuous bleeding or rebleeding.
The resectability of the cephalo-pancreatic tumor was assessed, and resection was subsequently performed in two patients, who had a favorable postoperative outcome, while in two patients tumor resection was impossible. The other two patients with upper digestive haemorrhage responded favorably to drug therapy, and digestive endoscopy and CT explorations were negative. After a 5-month interval they presented with clinical signs of a pancreatic neoplasm with invasion into the common bile duct, unwanted weight loss, abdominal pain, and icterus of the sclera and skin. Surgical intervention confirmed locally advanced pancreatic head carcinomas, and bilio-digestive derivations were performed. Pancreatic head carcinomas may be associated with upper digestive tract haemorrhage due to duodenal or bile duct invasion. The clinical picture of these patients can vary from occult haemorrhage to severe upper digestive tract haemorrhage accompanied by hypovolemic shock. abstract_id: PUBMED:8309607 Upper digestive hemorrhage. Comparison of terlipressin and octreotide The efficacy and tolerability of Terlipressin and Octreotide in the treatment of upper digestive haemorrhage (oesophagus, stomach and duodenum) were compared in a randomized study of 30 adult patients. Terlipressin dosage was 2 mg/4 hours i.v. during the first 24 hours, then 2 mg/6 hours during the following 24 hours. Octreotide dosage was 50 micrograms i.v., followed by infusional therapy at 25 micrograms/hour. Haemostasis was obtained in 100% of patients treated with Terlipressin and in 73% of patients treated with Octreotide. Rebleeding occurred in 3 patients treated with Terlipressin and 4 patients treated with Octreotide. Side effects occurred in 4 patients treated with Terlipressin and 8 patients treated with Octreotide. Terlipressin proved to have better efficacy than Octreotide, and had fewer side effects.
abstract_id: PUBMED:8831593 Nonsteroidal anti-inflammatory drug and phospholipid prodrugs: combination therapy with antisecretory agents in rats. Background & Aims: The gastrointestinal side effects of nonsteroidal anti-inflammatory drugs (NSAIDs) are reduced by antisecretory agents. The effects of combination therapy on the gastrointestinal toxicity and therapeutic activity of free and phospholipid-associated NSAIDs were investigated in rats. Methods: Fasted rats, pretreated with either saline or an antisecretory dose of omeprazole, ranitidine, or cimetidine, were intragastrically administered saline, aspirin, or indomethacin. In ulcer models, gastric lesions in aspirin-treated rats and intestinal bleeding in indomethacin-treated rats were measured. For antipyretic and analgesic activity, rectal body temperature in febrile rats and the rats' pain sensitivity to pressure applied to an inflamed limb were measured, respectively. Results: NSAID-induced gastrointestinal ulceration and bleeding were reduced in rats pretreated with antisecretory agents and abolished in rats administered phospholipid-associated NSAIDs in combination with inhibitors of acid secretion. The antipyretic and analgesic activity of both NSAIDs was attenuated in rats pretreated with an antisecretory agent. This pH-dependent block in therapeutic activity was overcome if the NSAID was preassociated with a phospholipid to enhance the drug's lipophilic characteristics. Conclusions: Combination therapy of antisecretory agents and NSAIDs, chemically associated with phospholipids, has distinct advantages with regard to both low gastrointestinal toxicity and restored therapeutic activity. Answer: The antisecretory agent used does not appear to significantly affect the evolution of upper digestive hemorrhage in terms of morbidity and mortality among patients with non-variceal upper gastrointestinal bleeding. 
A study comparing the efficacy of omeprazole and ranitidine in treating upper gastrointestinal bleeding of peptic origin found no statistically significant differences between the two drugs in terms of bleeding stigmata, transfusion requirements, recurrences, emergency surgery, the duration of hospital stay, and mortality (PUBMED:7612363). This suggests that both omeprazole and ranitidine possess similar efficacy in the treatment of this condition. However, it is important to note that the management of upper digestive hemorrhage can be influenced by other factors, such as the measurement of the hepatic venous pressure gradient (HVPG) in patients with liver cirrhosis, which can guide therapeutic decisions and potentially prolong life (PUBMED:25914744). Additionally, other substances like the ethanol extract from Daucus carota roots have shown antisecretory, gastroprotective, and in vitro antacid potential in experimental rats, indicating that alternative treatments may also have beneficial effects (PUBMED:26835241). In conclusion, while the choice of antisecretory agent (omeprazole vs. ranitidine) may not significantly alter the immediate clinical outcomes of upper digestive hemorrhage, other treatment strategies and measurements, such as HVPG, may play a role in the overall management and prognosis of the condition.
Instruction: Can perineural invasion on prostate needle biopsy predict prostate specific antigen recurrence after radical prostatectomy? Abstracts: abstract_id: PUBMED:10379750 Can perineural invasion on prostate needle biopsy predict prostate specific antigen recurrence after radical prostatectomy? Purpose: We evaluated the role of perineural invasion identified on prostate needle biopsy as a predictor of prostate specific antigen (PSA) recurrence after radical prostatectomy. Materials And Methods: Between 1993 and 1998 radical prostatectomy was performed in 319 consecutive patients. Prostate needle biopsies were reviewed in all cases. We compared perineural invasion with other preoperative parameters, including digital rectal examination, PSA and biopsy Gleason score, for the ability to predict PSA recurrence, with recurrence defined as any serum PSA level greater than 0.2 ng/ml. Results: Perineural invasion was identified on 77 of 319 preoperative prostate biopsies (24%). There was PSA recurrence in 46 patients (14.4%) at a mean followup of 25.4 months (range 0.2 to 62.1). Perineural invasion statistically correlated with PSA recurrence. Kaplan-Meier analysis revealed disease-free survival rates of 24% versus 64% when perineural invasion was and was not present in the prostate biopsy (p = 0.0003, log rank 12.92). Multivariate analysis demonstrated that perineural invasion (p = 0.012) and PSA (p = 0.005) were independent preoperative predictive factors of PSA recurrence. When perineural invasion was compared with postoperative parameters, including disease stage, surgical margins and seminal vesicle invasion, it was not an independent predictor because it closely correlated with tumor stage. Conclusions: Perineural invasion on preoperative prostate needle biopsy is a strong independent predictor of PSA recurrence in patients in whom prostate cancer was treated with radical prostatectomy.
abstract_id: PUBMED:30843451 Perineural invasion on biopsy specimen as predictor of tumor progression in aging male treated with radical prostatectomy. Could we use it for pre-surgical screening? We aimed to analyze the correlation of perineural invasion on transrectal ultrasound guided prostate biopsy with predictors of biochemical cancer recurrence, as well as its impact on clinical outcomes, for non-metastatic prostate cancer. For the study, patients with perineural invasion (N = 86) were recruited into group I and underwent open retropubic prostatectomy, regardless of clinical stage; cases with prostate cancer but without perineural invasion on biopsy, who received radical prostatectomy as the treatment modality, were placed into group II (n = 90). Perineural invasion was detected preoperatively in 43% of cases that revealed surgical margin positivity postoperatively, while 85% of the remaining cases (group II) had negative surgical margins. There was no correlation on prostate biopsy between perineural invasion and Gleason score or PSA, based on Spearman's rank-order correlation analysis. However, there was a strong positive correlation of perineural invasion with clinical stage and patients' age. Additionally, we demonstrated that perineural invasion on biopsy is a non-independent risk factor for metastatic occurrence, although the correlation was significant in univariate analysis. Nevertheless, we found a strong correlation between invasion on the initial biopsy specimen and biochemical cancer recurrence, suggesting that perineural invasion on prostate biopsy is a significant predictor of worse prognostic outcome. abstract_id: PUBMED:30063011 Effects of perineural invasion in prostate needle biopsy on tumor grade and biochemical recurrence rates after radical prostatectomy.
To predict local invasive disease before retropubic radical prostatectomy (RRP), the correlation of perineural invasion (PNI) on prostate needle biopsy (PNB) with RRP pathology data and the effect of PNI on biochemical recurrence (BR) were investigated. For patients with RRP performed between 2005 and 2014, predictive and pathologic prognostic factors were assessed. First all patients, and then the D'Amico intermediate-risk group, were comparatively assessed according to T2 versus T3 stage on RRP pathology, presence or absence of PNI on PNB, and BR status. Additionally, the effect of PNI presence on the recurrence-free survival (RFS) rate was investigated. When all patients were analyzed, multivariate analysis showed that PSA, PNB Gleason score (GS) and tumor percentage were significantly higher in T3 patients; PNB GS, core number and tumor percentage were significantly higher in PNI-positive patients; and PNB PNI positivity and core number were significantly higher in BR-positive patients, compared with T2, PNI-negative and BR-negative patients, respectively (p < 0.05). When D'Amico intermediate-risk patients were evaluated, PSA and PNB tumor percentage for T3 patients, PNB core number and tumor percentage for PNI-positive patients, and PNB PNI positivity for BR-positive patients were significantly higher compared with T2, PNI-negative and BR-negative patients, respectively (p < 0.05). Mean RFS in the whole patient group was 56.4 ± 4.2 months for the PNI-positive and 96.1 ± 5.7 months for the PNI-negative group. In the intermediate-risk group, mean RFS was 53.7 ± 5.1 months for the PNI-positive and 100.3 ± 7.7 months for the PNI-negative group (p < 0.001). PNI positivity on PNB was shown to be an important predictive factor for increased T3 disease and BR rates and reduced RFS. abstract_id: PUBMED:11125380 Perineural invasion as a predictor of biochemical outcome following radical prostatectomy for select men with clinically localized prostate cancer.
Purpose: The presence of perineural invasion on the prostate needle biopsy specimen has been suggested to be an independent predictor of prostate specific antigen (PSA) outcome following radical prostatectomy. We evaluated the clinical use of perineural invasion at biopsy for predicting time to PSA failure following radical prostatectomy after controlling for established prognostic factors. Materials And Methods: A prospective evaluation using a Cox regression multivariate analysis of 750 men with clinically localized or PSA detected prostate cancer was performed to evaluate the ability of PSA, biopsy Gleason score, perineural invasion on the needle biopsy specimen and the percent of positive prostate biopsies to predict PSA outcome following radical prostatectomy. Results: Multivariate analysis demonstrated that the presence of perineural invasion on the needle biopsy specimen provided additional information regarding 5-year PSA outcome (82% versus 95%, p = 0.04) for patients who were in the low risk group. This difference in PSA outcome could be explained by higher rates of positive surgical margins (25% versus 17%, p = 0.07). Patients whose prostate needle biopsy contained perineural invasion and who had the corresponding neurovascular bundle resected had a significantly lower positive margin rate (11% versus 100%, p = 0.001) compared to those who had the neurovascular bundle spared. The presence of perineural invasion on biopsy was not a significant predictor of PSA outcome following radical prostatectomy for patients in the intermediate or high risk group. Conclusions: Resection of the neurovascular bundle on the side corresponding to location of perineural invasion on the biopsy may decrease the positive surgical margin rate and improve outcome for low risk patients. abstract_id: PUBMED:11796287 Influence of biopsy perineural invasion on long-term biochemical disease-free survival after radical prostatectomy. 
Objectives: To investigate the influence of biopsy perineural invasion (PNI) on long-term prostate-specific antigen recurrence rates, final pathologic stage, and surgical margin status of men treated with radical prostatectomy. Radical prostatectomy offers the best chance for surgical cure when performed for organ-confined disease. However, the histologic identification of PNI on prostate biopsy has been associated with a decreased likelihood of pathologically organ-confined disease. Methods: Seventy-eight men with histologic evidence of PNI on biopsy underwent radical prostatectomy by a single surgeon between April 1984 and February 1995 and were compared with 78 contemporary matched (biopsy Gleason score, prostate-specific antigen level, clinical stage, age) controls without PNI. Biochemical disease-free survival and pathologic findings were compared. Results: After a mean follow-up of 7.05 +/- 2.2 years and 7.88 +/- 2.7 years (P = 0.04) for patients with biopsy PNI and controls, respectively, no significant difference in the long-term prostate-specific antigen recurrence rates was observed (P = 0.13). The final Gleason score and pathologic staging were also similar in this matched cohort. Although the numbers of neurovascular bundles resected were comparable between the groups, no difference was found in the rate of positive surgical margins identified (13% versus 10%, P = 0.62). Conclusions: We were unable to show that PNI on needle biopsy influences long-term tumor-free survival. abstract_id: PUBMED:11792909 Percent prostate needle biopsy tissue with cancer is more predictive of biochemical failure or adverse pathology after radical prostatectomy than prostate specific antigen or Gleason score. Purpose: Biopsy Gleason score, serum prostate specific antigen (PSA) levels, and clinical stage are known to be independent predictors of adverse pathological features and biochemical failure after radical prostatectomy. 
We determined whether various prostate needle biopsy parameters were predictive of either adverse pathological findings or disease recurrence after radical prostatectomy. Materials And Methods: A single pathologist reviewed the prostate needle biopsy specimens of 190 men who underwent radical prostatectomy between 1991 and 2000. Biopsy specimens were examined for Gleason score, perineural invasion, number and percent of cores with cancer, and percent of total biopsy tissue with cancer and with Gleason grade 4 or 5 cancer. Multivariate analysis was used to determine the prostate needle biopsy parameters and preoperative clinical variables, including serum PSA, clinical stage, patient age and race, that were most significant for predicting positive surgical margins, nonorgan-confined disease, seminal vesicle invasion and biochemical failure after radical prostatectomy. Results: Of the prostate needle biopsy parameters examined, percent of tissue with cancer was the strongest predictor of biochemical recurrence in the multivariate analysis (p < 0.001). Percent of tissue with cancer was a stronger predictor of biochemical recurrence than either PSA (p = 0.048) or biopsy Gleason score (p = 0.053). It was also a strong independent predictor of seminal vesicle invasion (p = 0.015) and nonorgan-confined disease (p = 0.024). Perineural invasion, percent and number of cores with cancer, and percent of tissue with Gleason grade 4 or 5 were not independent predictors of either adverse pathology or biochemical failure. Conclusions: Of all the preoperative variables examined, including the standard clinical variables of serum PSA, Gleason score and clinical stage, percent of biopsy tissue with cancer was the strongest predictor of biochemical recurrence, seminal vesicle invasion and nonorgan-confined disease. Consideration should be given to reporting percent of total biopsy tissue with cancer in all prostate biopsy results.
abstract_id: PUBMED:9930144 Perineural invasion in prostate needle biopsy specimens. Correlation with extraprostatic extension at resection. The significance of perineural invasion in prostate needle biopsy specimens for predicting extraprostatic extension is controversial. We correlated the presence of perineural invasion in needle biopsy specimens from 340 men with the presence of extraprostatic extension in corresponding radical prostatectomy specimens. Perineural invasion was present in 57 biopsy specimens. The sensitivity of perineural invasion for predicting extraprostatic extension was 32%, the specificity 88%, and the positive predictive value 42%. Biopsy specimens with perineural invasion had significantly more core specimens involved with tumor and higher biopsy-determined Gleason scores than those without invasion. Biopsy specimens with perineural invasion were significantly more likely to show extraprostatic extension and Gleason scores were higher in the resection specimens than those without perineural invasion. Multivariate logistic regression analysis showed that perineural invasion remained an independent predictor of extraprostatic extension. However, in multivariate analysis, including preoperative serum prostate-specific antigen (PSA) for 173 of the patients, the only independent predictor of extraprostatic extension was PSA. While perineural invasion in biopsy specimens is a predictor of extraprostatic extension at resection that is independent of other histologic features, the positive predictive value is low and it is not an independent predictor when serum PSA is included. abstract_id: PUBMED:10824923 Needle biopsy of recurrent adenocarcinoma of the prostate after radical prostatectomy. The objective of this study was to evaluate needle biopsy of recurrent prostate cancer after radical prostatectomy. 
We evaluated 37 cases of recurrent prostate cancer after radical prostatectomy that were diagnosed by needle biopsy between March 1984 and July 1998. Fifteen were from consultations in which contributors were uncertain of the diagnosis, and 22 were from men who had come to The Johns Hopkins Hospital for treatment. The median interval from radical prostatectomy to biopsy showing recurrent tumor was 40 months. There was no correlation between the interval to recurrence and either pathologic features of the biopsy and radical prostatectomy or various clinical features. The mean extent of adenocarcinoma in the biopsies was 3.2 mm (range, 0.1 to 18 mm; median, 2 mm). The length of recurrent cancer on biopsy correlated with an abnormal rectal examination (P = .001). The mean Gleason score for the recurrent tumors was 6.5, which correlated with the grade of the radical prostatectomy cancer (P = .005). The cancers often lacked overt histologic features of malignancy. Benign prostatic acini were seen in five cases (14%), usually separate from the cancer. In 5 (33%) of the consultation cases, we would not have been able to diagnose cancer if not for the fact that atypical prostate glands should not be present after radical prostatectomy. In well-sampled radical prostatectomies, margins were almost always positive, as was extraprostatic extension. In cases with less sampling, there was a higher incidence of organ-confined disease and margin-negative disease implying suboptimal processing of the radical prostatectomy. After radical prostatectomy, recurrent cancer on needle biopsies may be focal and difficult to diagnose and must be assessed differently than in patients who have not had surgery. abstract_id: PUBMED:23727307 Adenocarcinoma of the prostate with Gleason score 9-10 on core biopsy: correlation with findings at radical prostatectomy and prognosis. 
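The diagnostic-accuracy figures reported above for biopsy perineural invasion (sensitivity 32%, specificity 88%, positive predictive value 42% among 340 men, 57 of whom had perineural invasion; PUBMED:9930144) can be reproduced from an approximate 2×2 table. The cell counts below are reconstructed from the published percentages and are illustrative rather than taken from the paper:

```python
# Approximate 2x2 table reconstructed from the reported figures in PUBMED:9930144:
# 340 men, 57 biopsies with perineural invasion (PNI);
# outcome is extraprostatic extension (EPE) at radical prostatectomy.
tp = 24   # PNI present, EPE present
fp = 33   # PNI present, EPE absent
fn = 51   # PNI absent, EPE present
tn = 232  # PNI absent, EPE absent

sensitivity = tp / (tp + fn)   # fraction of EPE cases that biopsy PNI flags
specificity = tn / (tn + fp)   # fraction of non-EPE cases it correctly clears
ppv = tp / (tp + fp)           # chance a flagged case truly has EPE

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} ppv={ppv:.2f}")
# → sensitivity=0.32 specificity=0.88 ppv=0.42
```

The low positive predictive value this arithmetic recovers is exactly what the abstract cites as the limitation of biopsy perineural invasion as a screening marker.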
Purpose: There is a paucity of data on the prognosis after radical prostatectomy for Gleason score 9-10 disease on needle biopsy. To our knowledge, we report the largest study to date to specifically analyze the correlation of Gleason score 9-10 on needle core biopsy with radical prostatectomy outcomes. Materials And Methods: We identified 259 men with Gleason score 9-10 on biopsy who underwent radical prostatectomy from 1987 to 2012 at our institution. Preoperative variables analyzed were age, race, preoperative prostate specific antigen, adenocarcinoma site, perineural invasion, number of total biopsy cores, number of total positive cores, number of positive cores with Gleason score 9-10, maximum percent of core length with Gleason score 9-10 and maximum percent of adenocarcinoma overall. We determined pathological outcomes on univariate and multivariate analysis, including organ confinement, seminal vesicle invasion, margin status, lymph node metastasis, and biochemical recurrence-free and cancer specific survival. Results: Statistically significant predictors of radical prostatectomy outcome were organ confinement (total cores with Gleason score 9-10, maximum percent overall and perineural invasion), margin status (preoperative prostate specific antigen and clinical stage), seminal vesicle invasion (maximum percent overall, perineural invasion and clinical stage), lymph node metastasis (total number of cores with Gleason score 9-10 and clinical stage) and biochemical recurrence-free survival (maximum percent of Gleason score 9-10, maximum percent overall and clinical stage) (each p < 0.05). Conclusions: In the highly selected subset of patients who are good surgical candidates and have the appropriate combination of preoperative variables, postoperative findings are sufficiently favorable to justify radical prostatectomy. abstract_id: PUBMED:30859516 Gleason Score Correlation Between Prostate Needle Biopsy and Radical Prostatectomy Materials.
Objective: Prostate cancer is the second most common cancer in men. Digital rectal examination, transrectal ultrasonography and serum prostate specific antigen represents a diagnostic triad for the detection of prostatic carcinoma. About 50 years ago, Dr. Donald Gleason created a grading system for prostate cancer based on its histologic patterns. Currently, this system maintains its validity with various changes. New updates were made in 2005 and 2014 by the International Society of Urological Pathology. The goal of biopsies is to determine the Gleason score and prognosis in prostatectomy material. The aim of this study was to determine the concordance of the Gleason score, tumor volume and tumor laterality between prostate needle biopsy and prostatectomy materials. Material And Method: The study was performed with 112 patients who had biopsy and prostatectomy materials. The Gleason grades of the tumors have been evaluated with the new grading system. Tumor volumes were calculated by the number of positive blocks while tumor laterality was evaluated as unilateral or bilateral. Statistical analysis was performed on the obtained data. Results: Gleason score, tumor volume and tumor laterality discordance between needle biopsy and prostatectomy materials was found to be statistically significant. However, the concordance increased as the Gleason score and tumor volume increased. Conclusion: Digital examination, serum prostate specific antigen value and needle biopsy together are very sensitive for a prostate adenocarcinoma diagnosis. The Gleason score, localization and volume of the tumors are important for patient follow-up, treatment and prognosis. Answer: Yes, perineural invasion on prostate needle biopsy can predict prostate specific antigen (PSA) recurrence after radical prostatectomy. 
Studies have shown that perineural invasion identified on prostate needle biopsy is a strong independent predictor of PSA recurrence in patients treated with radical prostatectomy (PUBMED:10379750). Kaplan-Meier analysis in one study revealed significantly lower disease-free survival rates when perineural invasion was present in the prostate biopsy (PUBMED:10379750). Another study found a strong correlation between perineural invasion on initial biopsy specimen with biochemical cancer recurrence, suggesting that perineural invasion on prostate biopsy is a significant predictor of worse prognostic outcome (PUBMED:30843451). Additionally, perineural invasion positivity on prostate needle biopsy was shown to be an important predictive factor for increased T3 disease and biochemical recurrence rates and reduced recurrence-free survival (PUBMED:30063011). However, it is important to note that while perineural invasion is a predictor of PSA recurrence, its predictive value may not be as strong when compared with postoperative parameters such as disease stage, surgical margins, and seminal vesicle invasion, as it closely correlates with tumor stage (PUBMED:10379750). Moreover, in some studies, perineural invasion on biopsy was not a significant predictor of PSA outcome following radical prostatectomy for patients in the intermediate or high-risk group (PUBMED:11125380), and some research has indicated that it may not influence long-term tumor-free survival (PUBMED:11796287). In summary, perineural invasion on prostate needle biopsy is a significant predictor of PSA recurrence after radical prostatectomy, particularly in certain patient groups, but its predictive value should be considered in the context of other preoperative and postoperative factors.
Instruction: Is nitrous oxide necessary for general anaesthesia? Abstracts: abstract_id: PUBMED:19999230 Is nitrous oxide necessary for general anaesthesia? Background: Nitrous oxide (N2O) has been used for about 150 years in clinical anaesthesia. Several recent reviews have concluded that there are certain contraindications to the use of this gas for general anaesthesia, and its ecological effects, ozone-depleting potential, immune depression and its proven association with PONV have called into question the routine use of nitrous oxide in patients undergoing surgical procedures under general anaesthesia. Methods: This study comprised 200 adult patients undergoing general anaesthesia with 40% O2 and sevoflurane, with and without N2O. All patients had standard anaesthetic care and monitoring, with BIS monitoring in 120 patients. The effect of avoiding N2O on perioperative anaesthetic management and haemodynamics, PONV and pain in the PACU was observed. Results: Demographic and perioperative characteristics were similar in both groups. The nitrous oxide-free group needed only 0.233% (mean) more sevoflurane. There was a marked reduction in the incidence of PONV (11% to 5%) in the N2O-free group. Duration of surgery (97.72 +/- 52.393 min in the N2O group, 103.75 +/- 48.671 min in the N2O-free group) and induction dose of propofol (155.30 +/- 38.572 mg in the N2O group and 158.50 +/- 36.164 mg in the N2O-free group) did not differ significantly between the two groups. Conclusion: Omitting N2O from the anaesthetic regimen has a substantial impact on patient comfort after surgery by reducing the incidence of PONV, and there is no compelling indication for its routine use in general anaesthesia. abstract_id: PUBMED:3113282 Should air-oxygen replace nitrous oxide-oxygen in general anaesthesia? The use of compressed air-oxygen mixtures to replace nitrous oxide-oxygen in general anaesthesia was investigated in 378 patients.
There were neither prolongations of recovery time nor instances of awareness under anaesthesia. The cost of general anaesthesia using compressed air-oxygen was about half that for nitrous oxide-oxygen mixtures. abstract_id: PUBMED:27508523 Nitrous oxide-based versus nitrous oxide-free general anaesthesia and accidental awareness during general anaesthesia in surgical patients. Background: Accidental awareness during general anaesthesia (AAGA) is when a patient unintentionally becomes conscious during a procedure performed with general anaesthesia and subsequently has explicit recall of this event. Incidence estimates for AAGA vary, with the most common estimate being one to two cases per 1000 general anaesthetics. Evidence linking nitrous oxide use and an increased risk of AAGA has come from observational study data, but the literature is contradictory, with some studies finding a protective effect of nitrous oxide. Objectives: To assess the effect of general anaesthesia including nitrous oxide on the risk of AAGA in patients aged five years and over. Search Methods: We searched the following databases: Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and trial registers (www.clinicaltrials.gov, the WHO International Clinical Trials Registry Platform (www.who.int/ictrp/network/en/) and Current Controlled Trials (www.isrctn.com/)) for eligible studies on December 9 2015. In addition, we conducted forward and backward citation searching using key identified papers. Selection Criteria: We considered all randomized controlled trials (RCTs), including quasi-randomized studies and cluster-randomized studies, of participants aged five years or older receiving general anaesthesia for any type of surgery. We included trials in which participants receiving general anaesthesia that included nitrous oxide for maintenance at a concentration of at least 30% were compared with participants receiving no nitrous oxide during general anaesthesia.
The intervention group must have received nitrous oxide in conjunction with an additional anaesthetic. We excluded studies where the depth of anaesthesia differed between the study arms. For inclusion in the review, studies needed to state in their methods that they planned to assess AAGA. We defined this as when a patient becomes conscious during a procedure performed with general anaesthesia and subsequently has explicit recall of this event. Data Collection And Analysis: We used standard methodological procedures expected by Cochrane to identify studies. We extracted data and conducted 'Risk of bias' assessment using the Covidence database. Main Results: We included 15 studies. The total number of participants included in the analyses was 3520. Most studies were small with fewer than 120 participants, although two larger studies with 2012 and 671 participants were included. There was considerable variation in many of the study characteristics, including the anaesthetics used. The concentrations of nitrous oxide varied between 50% and 70%, and half of the studies used clinical signs and haemodynamic changes to monitor depth of anaesthesia.As it was not possible to blind the anaesthetist to the anaesthetic used, we rated all studies at high risk of performance bias and we therefore downgraded the quality of evidence by one level for risk of bias using the GRADE approach. Other types of bias were generally low, or were rated unclear due to missing information.No studies were designed to measure AAGA as the primary outcome, and were therefore statistically underpowered to answer this review question. Despite the inclusion of 3520 participants, only three awareness events were reported by two studies. In one study the event was due to technical failure. Due to the rarity of the events, we did not consider it appropriate to pool the data, and we therefore downgraded the quality of evidence by a further level for imprecision using GRADE. 
Authors' Conclusions: It is not possible to draw any conclusions from this review. The included studies were mainly small (fewer than 120 participants) and there were limited estimates of effect, with only two studies reporting any events. We cannot therefore determine whether the use of nitrous oxide in general anaesthesia increases, decreases or has no effect on the risk of accidental awareness. abstract_id: PUBMED:26545294 Nitrous oxide-based techniques versus nitrous oxide-free techniques for general anaesthesia. Background: Nitrous oxide has been used for over 160 years for the induction and maintenance of general anaesthesia. It has been used as a sole agent but is most often employed as part of a technique using other anaesthetic gases, intravenous agents, or both. Its low tissue solubility (and therefore rapid kinetics), low cost, and low rate of cardiorespiratory complications have made nitrous oxide by far the most commonly used general anaesthetic. The accumulating evidence regarding adverse effects of nitrous oxide administration has led many anaesthetists to question its continued routine use in a variety of operating room settings. Adverse events may result from both the biological actions of nitrous oxide and the fact that to deliver an effective dose, nitrous oxide, which is a relatively weak anaesthetic agent, needs to be given in high concentrations that restrict oxygen delivery (for example, a common mixture is 30% oxygen with 70% nitrous oxide). As well as the risk of low blood oxygen levels, concerns have also been raised regarding the risk of compromising the immune system, impaired cognition, postoperative cardiovascular complications, bowel obstruction from distention, and possible respiratory compromise. Objectives: To determine if nitrous oxide-based anaesthesia results in similar outcomes to nitrous oxide-free anaesthesia in adults undergoing surgery. 
Search Methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2014 Issue 10); MEDLINE (1966 to 17 October 2014); EMBASE (1974 to 17 October 2014); and ISI Web of Science (1974 to 17 October 2014). We also searched the reference lists of relevant articles, conference proceedings, and ongoing trials up to 17 October 2014 on specific websites (http://clinicaltrials.gov/, http://controlled-trials.com/, and http://www.centerwatch.com). Selection Criteria: We included randomized controlled trials (RCTs) comparing general anaesthesia where nitrous oxide was part of the anaesthetic technique used for the induction or maintenance of general anaesthesia (or both) with any general anaesthesia using a volatile anaesthetic or propofol-based maintenance of anaesthesia but no nitrous oxide for adults undergoing surgery. Our primary outcome was inhospital case fatality rate. Secondary outcomes were complications and length of stay. Data Collection And Analysis: Two review authors independently assessed trial quality and extracted the outcome data. We used meta-analysis for data synthesis. Heterogeneity was examined with the Chi² test and by calculating the I² statistic. We used a fixed-effect model if the measure of inconsistency was low for all comparisons (I² statistic < 50%); otherwise we used a random-effects model for measures with high inconsistency. We undertook subgroup analyses to explore inconsistency and sensitivity analyses to evaluate whether the results were robust. We assessed the quality of evidence of the main outcomes using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) system. Main Results: We included 35 trials (13,872 adult participants). Seven included studies were at low risk of bias. We identified eight studies as awaiting classification since we could not obtain the full texts, and had insufficient information to include or exclude them. We included data from 24 trials for quantitative synthesis. 
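The data-synthesis rule just described examines heterogeneity with Cochran's Q (the Chi² test) and the I² statistic, using a fixed-effect model when I² < 50% and a random-effects model otherwise. A minimal inverse-variance sketch of that rule follows; the per-study log odds ratios and standard errors below are made up for illustration and are not data from the review:

```python
import math

# Hypothetical per-study log odds ratios and standard errors (illustrative only).
log_or = [0.40, 0.55, 0.30, 0.65]
se     = [0.20, 0.25, 0.30, 0.22]

w = [1 / s**2 for s in se]                                    # inverse-variance weights
pooled = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)     # fixed-effect pooled log OR

# Cochran's Q and the I^2 statistic (I^2 = max(0, (Q - df) / Q) * 100).
q = sum(wi * (y - pooled)**2 for wi, y in zip(w, log_or))
df = len(log_or) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

model = "fixed-effect" if i2 < 50 else "random-effects"
print(f"pooled OR={math.exp(pooled):.2f}  Q={q:.2f}  I2={i2:.1f}%  -> {model}")
```

With these invented inputs Q falls below its degrees of freedom, so I² clamps to 0% and the fixed-effect model is retained, mirroring the review's decision rule.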
The results of meta-analyses showed that nitrous oxide-based techniques increased the incidence of pulmonary atelectasis (odds ratio (OR) 1.57, 95% confidence interval (CI) 1.18 to 2.10, P = 0.002), but had no effects on the inhospital case fatality rate, the incidence of pneumonia, myocardial infarction, stroke, severe nausea and vomiting, venous thromboembolism, wound infection, or the length of hospital stay. The sensitivity analyses suggested that the results of the meta-analyses were all robust except for the outcomes of pneumonia, and severe nausea and vomiting. Two trials reported length of intensive care unit (ICU) stay but the data were skewed so were not pooled. Both trials reported that nitrous oxide-based techniques had no effects on the length of ICU stay. We rated the quality of evidence for two outcomes (pulmonary atelectasis, myocardial infarction) as high, four outcomes (inhospital case fatality rate, stroke, venous thromboembolism, length of hospital stay) as moderate, and three (pneumonia, severe nausea and vomiting, wound infection rate) as low. Authors' Conclusions: Given the evidence from this Cochrane review, the avoidance of nitrous oxide may be reasonable in participants with pre-existing poor pulmonary function or at high risk of postoperative nausea and vomiting. Since there are eight studies awaiting classification, selection bias may exist in our systematic review. abstract_id: PUBMED:4024892 Prediction of post-operative middle ear pressure changes after general anaesthesia with nitrous oxide. There has been a growing interest in the changes of pressure which occur in the middle ears of patients undergoing general anaesthesia. In this study the time course of middle ear pressure change was investigated in 38 adult E.N.T. patients scheduled for surgery under general anaesthesia with nitrous oxide. 
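A pooled estimate such as the atelectasis result above (OR 1.57, 95% CI 1.18 to 2.10, P = 0.002) can be sanity-checked by recovering the standard error from the confidence limits on the log scale. This is a generic back-calculation, not a re-analysis of the review's data:

```python
import math

or_, lo, hi = 1.57, 1.18, 2.10   # reported odds ratio and 95% CI

# On the log scale the 95% CI is log(OR) +/- 1.96 * SE, so SE can be recovered
# from the width of the interval.
log_or = math.log(or_)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

z = log_or / se
# Two-sided p-value from the standard normal CDF (Phi via the error function).
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"z={z:.2f}  p={p:.4f}")   # p comes out close to the reported 0.002
```

The back-calculated p-value agreeing with the published one is a quick internal-consistency check on reported meta-analysis results.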
There was found to be a strong correlation between the magnitude of the post-operative middle ear pressure and the theoretical partial pressure of nitrous oxide within the middle ear cleft at the cessation of anaesthesia. The use of the mathematical model developed together with measurements of middle ear pressure taken for several minutes after intubation in any one patient allows prediction of the likely magnitude of the post-operative middle ear pressure change. abstract_id: PUBMED:15316643 Nitrous oxide. Sense or nonsense for today's anaesthesia Nitrous oxide has been used in addition to other volatile anaesthetics to provide general anaesthesia and short time sedation for more than 150 years. However, the exact mechanisms of action remain unclear. For decades nitrous oxide was considered to be the ideal anaesthetic because of his favourable physical properties and low cardiovascular side effects. However, the known side effects of nitrous oxide as well as the implementation of new anaesthetic agents and short acting opiates led to more and more criticism about the administration of this gas. Nitrous oxide is still frequently used for mask induction primarily in paediatric anaesthesia and gynaecology. However, recent studies have shown that omitting nitrous oxide can also be a risk factor because of an increased susceptibility to intraoperative awareness. Careful consideration of the illustrated contraindications and side effects as well as the available alternatives, shows that nitrous oxide is still an option in general anaesthesia. abstract_id: PUBMED:917977 Clinical experience with methohexital-nitrous oxide anaesthesia (author's transl) The effects of methohexital, given as a 0.1% intravenous drip in combination with nitrous oxide were studied in 159 persons who had various types of surgical operations. The depth of anaesthesia was easily controlled, the circulation remained unaffected and the combination was well tolerated. 
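The prediction study above (PUBMED:4024892) does not state the form of its mathematical model. As a purely illustrative assumption, a first-order wash-in of nitrous oxide partial pressure toward the inspired level shows the kind of calculation involved; the time constant and inspired fraction used here are hypothetical, not values from the paper:

```python
import math

def n2o_partial_pressure(t_min, p_inspired=0.66, tau_min=30.0):
    """First-order wash-in of N2O partial pressure (as a fraction of 1 atm)
    toward the inspired level. tau_min is a hypothetical time constant."""
    return p_inspired * (1 - math.exp(-t_min / tau_min))

# Under this toy model, the longer the anaesthetic, the higher the theoretical
# N2O partial pressure in the middle ear cleft at cessation -- the quantity the
# study correlated with post-operative middle ear pressure change.
for t in (10, 30, 60, 120):
    print(f"{t:>3} min: {n2o_partial_pressure(t):.2f} atm")
```

Any model of this shape predicts larger post-operative pressure changes after longer exposures, consistent with the correlation the abstract reports.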
There were no difficulties concerning either induction or recovery. The combination of i.v. methohexital and nitrous oxide proved very satisfactory for maintaining prolonged anaesthesia in elderly persons and patients in poor general health. abstract_id: PUBMED:6688020 Neuromuscular and cardiovascular effects of atracurium during nitrous oxide-fentanyl and nitrous oxide-isoflurane anaesthesia. The neuromuscular and cardiovascular effects of atracurium were compared during nitrous oxide-isoflurane and nitrous oxide-fentanyl anaesthesia in healthy surgical patients. The dose-response curve was shifted significantly to the left during nitrous oxide-isoflurane anaesthesia (ED50 0.068 mg kg-1) as compared with nitrous oxide-fentanyl anaesthesia (ED50 0.083 mg kg-1). For equipotent doses, the time course of neuromuscular effects (onset and duration) was not appreciably different between the nitrous oxide-isoflurane group and the nitrous oxide-fentanyl group. Minimal cardiovascular effects were observed from atracurium in doses up to 0.5 mg kg-1. Using comparisons from earlier studies, atracurium was approximately one-quarter as potent as pancuronium and vecuronium (Org NC 45). The onset time (time from injection to peak effect) for subparalytic doses of atracurium was approximately 6.5 min and is comparable to the onset time for equipotent doses of pancuronium and vecuronium. The duration of neuromuscular effects of atracurium (time from injection to 95% recovery) was approximately 20 min for subparalytic doses and is the same as that of vecuronium and one-third to one-half that of pancuronium. It is concluded that the peak effect of atracurium is enhanced more by nitrous oxide-isoflurane than by nitrous oxide-fentanyl anaesthesia, but for equipotent doses the time-course is the same. abstract_id: PUBMED:3707794 Breathing pattern during isoflurane anaesthesia with or without nitrous oxide. 
Tidal volume, respiratory timing and occlusion pressure were studied in spontaneously breathing patients during isoflurane anaesthesia and in the presence of surgery. In one group of patients, end-tidal concentrations of 1.5% and 2.5% isoflurane in oxygen were used and in the other group equivalent depths of anaesthesia were obtained with 0.75% and 1.75% end-tidal isoflurane plus 67% nitrous oxide in oxygen. During the deeper anaesthesia, the duration of the respiratory cycle decreased significantly in both groups. In the group breathing nitrous oxide, the greater isoflurane concentration caused a significant reduction in ventilation. Airway occlusion significantly decreased the duration of the inspiratory period in both groups, indicating that the presence of nitrous oxide was not necessary for this effect. abstract_id: PUBMED:27689429 Changes in heart rate variability during anaesthesia induction using sevoflurane or isoflurane with nitrous oxide. Background: The purpose of this study was to compare cardiac sympathetic and parasympathetic balance using heart rate variability (HRV) during induction of anaesthesia between sevoflurane and isoflurane in combination with nitrous oxide. Methods: 40 individuals aged from 30 to 60 years, scheduled for general anaesthesia were equally divided into sevoflurane or isoflurane groups. After 100% oxygen inhalation for a few minutes, anaesthesia was induced with nitrous oxide 3 L min-1, oxygen 3 L min-1 and sevoflurane or isoflurane. Sevoflurane or isoflurane concentration was increased by 0.5% every 2 to 3 breaths until 5% was attained for sevoflurane, or 3% for isoflurane. Vecuronium was administered to facilitate tracheal intubation. After intubation, sevoflurane was set to 2% while isoflurane was set to 1% with nitrous oxide with oxygen (1:1) for 5 min. Results: Both sevoflurane and isoflurane provoked a decrease in blood pressure, total power, the low frequency component (LF), and high frequency component (HF) of HRV. 
Although the heart rate increased during isoflurane anaesthesia, it decreased under sevoflurane. The power of LF and HF also decreased in both groups. LF was higher in the isoflurane group while HF was higher in the sevoflurane group. The LF/HF ratio increased transiently in the isoflurane group, but decreased in the sevoflurane group. Conclusion: Anaesthesia induction with isoflurane-nitrous oxide transiently increased cardiac sympathetic activity, while sevoflurane-nitrous oxide decreased both cardiac sympathetic and parasympathetic activities. The balance of cardiac parasympathetic/sympathetic activity was higher in sevoflurane anaesthesia. Answer: Nitrous oxide (N2O) has been a staple in clinical anaesthesia for approximately 150 years, but its necessity for general anaesthesia has been questioned due to various concerns, including its ecological effects, immune depression, and its association with postoperative nausea and vomiting (PONV) [PUBMED:19999230]. Studies have shown that omitting nitrous oxide from the anaesthetic regimen can reduce the incidence of PONV without significantly affecting the perioperative management, haemodynamics, or pain in the post-anaesthesia care unit (PACU) [PUBMED:19999230]. Additionally, the use of air-oxygen mixtures as a replacement for nitrous oxide-oxygen in general anaesthesia has been investigated and found to be cost-effective without prolonging recovery time or causing awareness under anaesthesia [PUBMED:3113282]. The relationship between nitrous oxide use and accidental awareness during general anaesthesia (AAGA) is unclear, with contradictory evidence in the literature. Some studies suggest a protective effect, while others indicate an increased risk. However, due to the rarity of AAGA events and the small size of most studies, it is not possible to draw definitive conclusions about the effect of nitrous oxide on the risk of AAGA [PUBMED:27508523]. 
A Cochrane review has indicated that nitrous oxide-based anaesthesia may increase the incidence of pulmonary atelectasis but does not affect the in-hospital case fatality rate, the incidence of pneumonia, myocardial infarction, stroke, severe nausea and vomiting, venous thromboembolism, wound infection, or the length of hospital stay. The review suggests that avoiding nitrous oxide may be reasonable in patients with pre-existing poor pulmonary function or at high risk of PONV [PUBMED:26545294]. In conclusion, while nitrous oxide has been a common component of general anaesthesia, it is not strictly necessary. Alternatives to nitrous oxide can be used effectively, potentially reducing certain adverse effects and costs associated with its use. The decision to use nitrous oxide should be based on a careful consideration of the patient's condition, potential risks, and available alternatives [PUBMED:19999230; PUBMED:3113282; PUBMED:27508523; PUBMED:26545294; PUBMED:15316643].
Instruction: Can regional anaesthesia for lymph-node dissection improve the prognosis in malignant melanoma? Abstracts: abstract_id: PUBMED:29620000 Lymph node dissection for melanoma using tumescence local anaesthesia: an observational study. The possibility that tumescence local anaesthesia (TLA) may lead to dissemination of tumour cells in lymph nodes is presently unclear. The aim was to evaluate whether infiltration by TLA influences metastatic spread and survival probability, compared to general anaesthesia (GA), in melanoma patients undergoing lymph node dissection. In total, 281 patients (GA: 162; TLA: 119) with cutaneous melanoma and clinically or histologically confirmed metastases in regional lymph nodes were included. All patients underwent complete lymph node dissection. Median follow-up was 70 months. The rate of lymph node recurrence at the dissection site was 25.3% in the GA group and 17.6% in the TLA group (p = 0.082). No significant difference was found concerning 10-year melanoma-specific survival (GA: 56.2%, TLA: 67.4%; p = 0.09), disease-free survival (GA: 72.8%, TLA: 81.1%; p = 0.095), or lymph node-free survival (GA: 72.8%, TLA: 81.1%; p = 0.095). Distant metastases-free survival appeared to be slightly reduced in the GA group (GA: 49.9%, TLA: 64.0%; p = 0.025). No differences were identified between the GA and TLA groups regarding prognostic outcome for overall survival or disease-free survival. abstract_id: PUBMED:22705968 Can regional anaesthesia for lymph-node dissection improve the prognosis in malignant melanoma? Background: Optimized anaesthetic management might improve the outcome after cancer surgery. A retrospective analysis was performed to assess the association between spinal anaesthesia (SpA) or general anaesthesia (GA) and survival in patients undergoing surgery for malignant melanoma (MM).
Methods: Records for 275 patients who required SpA or GA for inguinal lymph-node dissection after primary MM in the lower extremity between 1998 and 2005 were reviewed. The follow-up ended in 2009. Survival was calculated as days from surgery to the date of death or last patient contact. The primary endpoint was mortality during a 10 yr observation period. Results: Of 273 patients included, 52 received SpA and 221 GA, either as balanced anaesthesia (sevoflurane/sufentanil, n=118) or as total i.v. anaesthesia (propofol/remifentanil, n=103). The mean follow-up period was 52.2 (sd 35.69) months after operation. Significant effects on cumulative survival were observed for gender, ASA status, tumour size, and type of surgery (P=0.000). After matched-pairs adjustment, no differences in these variables were found between patients with SpA and GA. A trend towards a better cumulative survival rate for patients with SpA was demonstrated [mean survival (months), SpA: 95.9, 95% confidence interval (CI), 81.2-110.5; GA: 70.4, 95% CI, 53.6-87.1; P=0.087]. Further analysis comparing SpA with the subgroup of balanced volatile GA confirmed this trend [mean survival (months), SpA: 95.9, 95% CI, 81.2-110.5; volatile balanced anaesthesia: 68.5, 95% CI, 49.6-87.5, P=0.081]. Conclusions: These data suggest an association between anaesthetic technique and cancer outcome in MM patients after lymph-node dissection. Prospective controlled trials on this topic are warranted. abstract_id: PUBMED:17336247 Regional node dissection for melanoma: techniques and indication. 
Because virtually all microscopic nodal disease left untreated in melanoma patients will progress to clinically apparent macroscopic nodal disease, because prognosis is worse with macroscopic nodal disease, and because no effective systemic treatment currently exists, one must be cautious in favoring an observation approach to the regional basin in patients with a positive sentinel lymph node (SLN) in the hope of avoiding the potential morbidity of a therapeutic node dissection. In the few patients with untreated microscopic nodal disease, the prognosis will be significantly worsened. Until further data are available, melanoma patients with a positive SLN by H&E analysis should proceed to a complete lymph node dissection. abstract_id: PUBMED:15278236 Sentinel lymph node in melanoma The procedure of sentinel node biopsy (SNB) has emerged as an important advance, especially with respect to staging of malignant melanoma. Elective (prophylactic) lymph node dissection, previously practiced in primary melanoma with a suspected increased risk of (clinically occult) lymphatic metastasis, has been replaced by SNB. Patients with proven metastatic involvement of the sentinel node (12-25%) can be specifically selected for regional lymph node dissection. Metastatic involvement of the sentinel node (SN) is a significant independent prognostic factor. The value of detecting metastasis by highly sensitive diagnostic tools such as RT-PCR is just as uncertain as the therapeutic benefit of operative or conservative therapies in sentinel node-positive patients with respect to improving prognosis, and both are currently under study. abstract_id: PUBMED:31357833 Research progress of sentinel lymph node biopsy and regional dissection in melanoma Melanoma is a malignant tumor derived from the skin and mucous membranes; epidemiological data show that the incidence of melanoma has risen rapidly in the last decade. Early lymph node metastasis is a distinguishing characteristic of melanoma.
The assessment of regional lymph nodes is a vital factor for melanoma staging and comprehensive therapeutic strategies. Sentinel lymph node biopsy (SLNB) plays an important role in this comprehensive diagnosis and treatment system. Completion lymph node dissection (CLND) for a positive sentinel lymph node was traditionally accepted, but it has recently been questioned by the latest global clinical trial, in which CLND conferred limited benefit for melanoma-specific survival. However, SLNB remains a reliable procedure for staging and prognostic evaluation of melanoma patients with a positive sentinel lymph node, and according to evidence-based medicine CLND can significantly improve local control and decrease regional recurrence. In this review, the authors summarize recent research on SLNB and CLND in melanoma. abstract_id: PUBMED:31605168 Sentinel node biopsy and lymph node dissection in the era of new systemic therapies for malignant melanoma Background: Recently, adjuvant therapies with checkpoint inhibitors and BRAF/MEK inhibitors have become available for patients with malignant melanoma and microscopic nodal disease. Meanwhile, the number of complete nodal dissections for a melanoma-positive sentinel node (SN) has decreased significantly. Objective: The authors discuss the significance of sentinel node biopsy (SNB) and early lymph node dissection in the era of adjuvant systemic therapy for stage III melanoma. Materials And Methods: Current publications and recommendations were evaluated. Results: Complete nodal dissection for a positive SN significantly reduces the risk of regional nodal relapse. However, neither SNB nor complete nodal dissection following a positive SN is associated with a benefit in survival. With the availability of novel adjuvant systemic treatment strategies for stage III melanoma, SNB has become an even more important part of modern staging diagnostics.
Thus, detection of early dissemination of melanoma cells into the SN as well as the quantification of the tumor load are decisive for further therapy planning. Conclusion: Accurate assessment of the regional lymph node status by SNB is becoming even more important in the era of novel effective adjuvant therapies for microscopic nodal disease. Whether complete lymph node dissection is performed in patients with a positive SN needs to be assessed individually. In the case of "active nodal surveillance" instead of surgery, long-term close follow-up in specialized centers, including ultrasonographic controls, is required. abstract_id: PUBMED:34789434 Melanoma sentinel lymph node biopsy and completion lymph node dissection: A regional hospital experience. Completion lymph node dissection (CLND) following positive sentinel lymph node biopsy (SLNB) for cutaneous melanoma is a topic of controversy. The second Multicenter Selective Lymphadenectomy Trial (MSLT-II) suggested no survival benefit with CLND over observation amongst patients with a positive SLNB. The findings of the MSLT-II may have limited applicability to our high-risk population where nodal ultrasound and non-surgical melanoma treatment is rationed. In this regional, retrospective study, we reviewed primary melanoma, SLNB and CLND histopathological reports in the Bay of Plenty District Health Board (BOPDHB) across a 10-year period. The primary outcomes measured were size of sentinel lymph node metastases and non-sentinel node (NSN) positivity on CLND for patients with a positive SLNB. In the 157 SLNB identified, the mean sentinel lymph node metastatic deposit size was larger in BOPDHB compared with MSLT-II (3.53 vs 1.07/1.11mm). A greater proportion of BOPDHB patients (54.8%) had metastatic deposits larger than 1mm compared with MSLT-II (33.2/34.5%) and the rate of NSN involvement on CLND was also higher (23.8% vs 11.5%). 
These findings indicate that the BOPDHB population is at high risk of nodal melanoma metastases. Forgoing CLND in the context of a positive SLNB may place these patients at risk. abstract_id: PUBMED:19914463 Complete lymph node dissection for regional nodal metastasis. The primary management of lymph nodes involved with metastatic melanoma is regional lymphadenectomy. Axillary or inguinal node complete lymph node dissection (CLND) is performed after an occult metastasis is found by sentinel lymph node biopsy, or after a clinically apparent regional lymph node metastasis. CLND completely removes all lymph-node-bearing tissue in a nodal basin. This procedure continues to be controversial. No randomized prospective studies have yet determined the survival advantage of CLND. The National Comprehensive Cancer Network recommends that all patients with stage III melanoma have a CLND. abstract_id: PUBMED:7245026 Regional lymph node dissection for malignant melanoma of the extremities. Seven hundred thirty-nine patients with malignant melanoma of the extremities were treated with a uniform surgical approach that included wide and deep excision of the primary site and regional node dissection therapeutically and electively for invasive lesions (Clark's levels III, IV, and V). Of the 490 patients who underwent lymph node dissections, follow-up was available for 457 (93%). Life-table comparison of 362 patients with histologically negative nodes to 95 with histologically proved lymph node metastases yielded statistically significant differences in survival (P less than 0.001). Five-year cumulative survival rates were 91% in the group without and 48% in the group with nodal metastases.
Among histologically positive patients, differences in life-table survival curves for the 60 clinically negative patients compared to the 35 clinically positive patients were also statistically significant (P = 0.004); 5-year cumulative survival rates were 57% for the former group and 33% for the latter. Although there appears to be an advantage to regional lymph node dissection for micrometastases as opposed to gross nodal involvement, for the majority of patients metastatic melanoma in these nodes is the major indicator of systemic disease. abstract_id: PUBMED:12552406 Prognosis of malignant melanoma following dissection of regional lymph node metastases Background: Malignant melanomas usually metastasize in the regional lymph nodes. This generally leads to a worsening of prognosis. Survival probability after extirpation of the tumorous regions is conditioned by various factors. Until now, the basis for differentiated therapy has been the precise understanding of the TNM classification. Clinical parameters are also relevant to therapy and of decisive importance to the further course of disease. Question: Which clinical parameters are important to the prognosis of patients with regional melanoma and lymph node disease? Patients And Methods: This study is a retrospective analysis of patients at the Clinic and Polyclinic of General Surgery of the University Hospital in Münster, Germany, together with a review of the literature. Included were 137 patients who received curative resection of lymph node metastases from malignant melanomas between 1974 and 1996. From the literature, 6,694 cases were found which allowed the establishment of 13 different prognosis parameters following lymph node metastasis resection. These parameters were compared with our own results concerning established, relevant parameters. Results: The cumulative 5-year survival rate for the patients studied was 50.7%. The 5-year survival rates relating to characteristics varied between 12.9% and 80.0%.
Of the 13 variables from the literature relevant to prognosis that were included in this analysis, four were found to be significant: Breslow penetration depth, standardized age and age at primary tumor diagnosis, recurrence, and location of the primary tumor. Considering these significant characteristics, one can arrive at a detailed prognostic classification of patient and history. This is indispensable for correctly tailoring therapy to disease stage. Answer: The question of whether regional anesthesia for lymph node dissection can improve the prognosis in malignant melanoma has been explored in several studies, but the evidence is not conclusive. One retrospective analysis suggested an association between the type of anesthesia used during surgery for malignant melanoma and patient survival. Specifically, the study found a trend towards better cumulative survival rates for patients who received spinal anesthesia (SpA) compared to those who underwent general anesthesia (GA), although the difference was not statistically significant (PUBMED:22705968). Another observational study compared tumescence local anesthesia (TLA) with GA for lymph node dissection in melanoma patients and found no significant differences in melanoma-specific survival, disease-free survival, or lymph node-free survival between the two groups. However, distant metastases-free survival appeared to be slightly reduced in the TLA group (PUBMED:29620000). The literature also indicates that while complete lymph node dissection (CLND) for regional nodal metastasis is a standard practice following a positive sentinel lymph node biopsy (SLNB), its impact on survival remains controversial. Some studies suggest that CLND does not confer a survival benefit over observation (PUBMED:34789434, PUBMED:31605168). However, regional lymph node dissection is still recommended for patients with stage III melanoma, and the precise role of anesthesia in this context is not clearly defined (PUBMED:19914463). 
In summary, while there is some evidence to suggest that regional anesthesia might be associated with better survival outcomes in melanoma patients undergoing lymph node dissection, the data are not definitive, and further prospective controlled trials are warranted to clarify this potential relationship (PUBMED:22705968).
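The survival comparisons quoted throughout these melanoma abstracts (cumulative survival rates, life-table curves, mean survival with 95% CI) are typically produced with life-table or Kaplan-Meier methods. As a rough illustration of the underlying calculation, a minimal Kaplan-Meier estimator might look like the sketch below; the toy cohort is invented for demonstration and is not data from any cited study.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.

    times  -- follow-up time for each patient (e.g. months)
    events -- 1 if the event (death) occurred, 0 if censored
    Returns a list of (time, survival probability) steps.
    """
    # Sort patients by follow-up time
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # Group all patients sharing the same event/censoring time
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            # Multiply in the conditional survival at this death time
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

# Toy cohort: months of follow-up and event indicators (illustrative only)
times = [6, 12, 12, 24, 30, 36, 48, 60]
events = [1, 1, 0, 1, 0, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(f"{t:>3} months: S(t) = {s:.3f}")
```

Censored patients (event = 0) leave the risk set without lowering the survival estimate, which is why cumulative survival figures such as the 5- and 10-year rates above can be computed even when many patients are still alive at last follow-up.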
Instruction: Paediatrics at the cutting edge: do we need clinical ethics committees? Abstracts: abstract_id: PUBMED:9279747 Paediatrics at the cutting edge: do we need clinical ethics committees? Objectives: To investigate the need for hospital clinical ethics committees by studying the frequency with which ethical dilemmas arose, the perceived adequacy of the process of their resolution, and the teaching and training of staff in medical ethics. Design: Interviews with individuals and three multidisciplinary teams; questionnaire to randomly selected individuals. Setting: Two major London children's hospitals. Results: Ethical dilemmas arose frequently but were resolved in a relatively unstructured fashion. Ethical concerns included: the validity of consent for investigations and treatment; lack of children's involvement in consent; initiation of heroic or futile treatments; resource allocation. Staff expressed the need for a forum which would provide consultation on ethical issues, develop guidelines for good ethical practice, undertake teaching and training, and provide ethical reflection outside the acute clinical setting. Conclusion: Multidisciplinary, accountable and audited clinical ethics committees with predominantly advisory, practice development and educational roles could provide a valuable contribution to UK clinical practice and perhaps in other countries that have not developed hospital clinical ethics committees. abstract_id: PUBMED:37325989 This review describes the clinical ethics committees in Denmark. The clinical ethics committee is an interdisciplinary committee at a hospital intended to analyse ethically challenging situations and burdensome choices in patient care. The work in Danish KEKs takes place without formal organisation, in contrast to several other countries, where clinical ethics is regulated by law as research ethics is in Denmark. 
abstract_id: PUBMED:16758089 Legal repercussions of Clinical Ethics Committees reports Clinical Ethics Committees and Research Ethics Committees have their own specific roles. The Clinical Ethics Committee's pronouncements have an advisory function, whereas Research Ethics Committees' decisions are binding. This article analyzes the legal impact of the Clinical Ethics Committees' reports. Legal and medical reasoning share the same practical nature. Both can have several correct answers to the same situation. Clinical Ethics Committees deliberate about these alternatives and analyze the involved values. Their conclusions are non-compulsory recommendations. They do not replace nor diminish the doctor's personal responsibility. Even though the Clinical Ethics Committees' reports are not binding, they constitute a sort of "expert's opinion", expressed by qualified professionals, who assume their own professional responsibility as advisors. The members' behavior is necessarily subject to constitutional and legal regulations. When judges review the Clinical Ethics Committee's reports, they must realize that their nature is advisory, and also consider them an essential element to reduce the gap between the medical and legal fields. In this way, the problem of increasingly transforming medicine into a legal issue can be prevented. abstract_id: PUBMED:2719504 Ethics consultants and ethics committees. To address moral questions in patient care, hospitals and health care systems have enlisted the help of hospital ethicists, ethics committees, and ethics consultation services. Most physicians have not been trained in the concepts, skills, or language of clinical ethics, and few ethicists have been trained in clinical medicine, so neither group can fully identify, analyze, and resolve clinical ethical problems. Some ethics committees have undertaken clinical consultations themselves, but liability concerns and variable standards for membership hinder their efforts. 
An ethics consultation service comprising both physician-ethicists and nonphysician-ethicists brings complementary viewpoints to the management of particular cases. If they are to be effective consultants, however, nonphysician-ethicists need to be "clinicians": professionals who understand an individual patient's medical condition and personal situation well enough to help in managing the case. Ethics consultants and ethics committees may work together, but they have separate identities and distinct objectives: ethics consultants are responsible for patient care, while ethics committees are administrative bodies whose primary task is to advise in creating institutional policy. abstract_id: PUBMED:15244355 Developing clinical ethics committees. Formal clinical ethics review first developed in the USA and most hospitals there have structures in place. This is often a clinical ethics committee. Developments have been slower in the UK, where there has been uncertainty and variability about the role of clinical ethics committees. Even in the USA, their exact role is uncertain. Although we now have a good idea of the mechanisms for ethical review of clinical issues in NHS trusts and health authorities, the desirability, composition and terms of reference for clinical ethics committees should be agreed as part of the clinical governance agenda. abstract_id: PUBMED:28956795 Clinical ethics and the role of clinical ethics committees: proposals for a revival. Commentary. The issue addressed in the paper published by the Italian National Bioethics Committee (NBC) entitled "Clinical ethics committees", is highly significant for many reasons. 
One of these is the fact that the ethics committees charged with assessing clinical trials have so much responsibility and such a heavy work-load that they have little time available for other tasks such as engaging directly with patients "at the bedside", as a result of which the role of committees responsible for assessing clinical cases is especially important. According to the NBC, the opinions of clinical ethics committees should be formulated jointly and are non-binding. The NBC offers practical proposals not only for the Italian context. While the Italian National Institute of Health (Istituto Superiore di Sanità - ISS) is not involved directly in treating patients, its role in providing guidance is crucial to the national health service and it has always paid special attention to these issues. abstract_id: PUBMED:24182365 Ethics committees and consultants. The increasing complexity of healthcare creates numerous ethical challenges in patient care and in institutional functions and policies. During the past several decades, clinical ethics consultation services and institutional ethics committees have been developed to assist patients, medical professionals, and institutions in addressing ethical challenges. This chapter discusses the nature of clinical ethics consultation and the training of ethics consultants and committees. The authors review the experience at their institution (Mayo Clinic). Finally, the value of ethics consultation, as described in the medical literature, is reviewed. abstract_id: PUBMED:27333061 How Can Clinical Ethics Committees Take on Organizational Ethics? Some Practical Suggestions. Although leaders in the field of ethics have for many years pointed to the crucial role that organizations play in shaping healthcare ethics, organizational ethics remains a relatively undeveloped area of ethics activity. Clinical ethics committees are an important source of potential expertise, but new skills will be required.
Clinical ethics committees seeking to extend their purview to organizational issues will have to respond to three challenges: how to gain sanction and support for addressing controversial and sensitive issues, how to develop an acceptable process, and how to make a difference on the ground. The article presents practical suggestions for how clinical ethics committees can meet these challenges. abstract_id: PUBMED:18549426 Barriers and challenges in clinical ethics consultations: the experiences of nine clinical ethics committees. Clinical ethics committees have recently been established in nearly all Norwegian hospital trusts. One important task for these committees is clinical ethics consultations. This qualitative study explores significant barriers confronting the ethics committees in providing such consultation services. The interviews with the committees indicate that there is a substantial need for clinical ethics support services and, in general, the committee members expressed a great deal of enthusiasm for the committee work. They also reported, however, that tendencies to evade moral disagreement, conflict, and 'outsiders' are common in the hospitals. Sometimes even the committees comply with some of these tendencies. The committees agree that there is a need to improve their routines and procedures, to clarify the committees' profile and field of responsibility, to make the committees well-known, to secure adequate operating conditions, and to develop organizational integration and support. Various strategies to meet these challenges on a local, regional or national level are also explored in this paper.
In 1999 the University of Pennsylvania Ethics Committee Research Group (ECRG) completed the first national survey of the presence, composition, and activities of U.S. healthcare ethics committees (HECs). Ethics committees are relatively young, on average seven years in operation. Eighty-six percent of ethics committees report that they played a role in ongoing clinical decision making through clinical ethics consultation. All are engaged in developing institutional clinical policy. Although 4.5% of HECs write policy on managed care, 50% of HEC chairs feel inadequately prepared to address managed care. The power and activity of ethics committees parallels the composition of those committees and the relationship of members to their institutions. The role of ethics committees across the nation in making policies about clinical care is greater than was known, and ethics committees will likely continue to play an important role in the debate and resolution of clinical cases and clinical policies. Answer: The question "Paediatrics at the cutting edge: do we need clinical ethics committees?" can be addressed by considering the findings and discussions presented in the provided abstracts. According to the study conducted in two major London children's hospitals, ethical dilemmas arose frequently in paediatric care, and there was a perceived need for a structured approach to resolving these issues (PUBMED:9279747). The ethical concerns identified included the validity of consent for investigations and treatment, lack of children's involvement in consent, initiation of heroic or futile treatments, and resource allocation. The staff expressed a need for a forum to provide consultation on ethical issues, develop guidelines for good ethical practice, undertake teaching and training, and offer ethical reflection outside the acute clinical setting. 
The study concluded that multidisciplinary, accountable, and audited clinical ethics committees with advisory, practice development, and educational roles could provide a valuable contribution to clinical practice. The role of clinical ethics committees (CECs) is further supported by the experiences of nine clinical ethics committees in Norway, which indicated a substantial need for clinical ethics support services (PUBMED:18549426). The committees faced challenges such as tendencies to evade moral disagreement and conflict, and the need to improve their visibility, operating conditions, and organizational integration. Moreover, the national study of ethics committees in the U.S. revealed that ethics committees are engaged in ongoing clinical decision-making through clinical ethics consultation and in developing institutional clinical policy (PUBMED:11954647). This suggests that ethics committees play a significant role in shaping clinical practice and policy. In conclusion, the evidence from the abstracts supports the need for clinical ethics committees in paediatrics, as they can provide structured and multidisciplinary approaches to ethical dilemmas, support clinical decision-making, develop policies, and offer education on medical ethics.
Instruction: Is urine methylhistamine a useful marker for interstitial cystitis? Abstracts: abstract_id: PUBMED:15538243 Is urine methylhistamine a useful marker for interstitial cystitis? Purpose: We determined whether urine methylhistamine (MH) is associated with other clinical features and bladder biopsy findings in interstitial cystitis (IC). Materials And Methods: Urine MH and University of Wisconsin symptom scores were measured before and 1 month after bladder distention for IC. Bladder biopsies were done at the time of distention. In new patients with IC who met cystoscopic criteria urine MH was compared before and after distention, and changes in urine MH were compared with changes in symptom scores. Pre-distention urine MH was tested for associations with the results of a symptom questionnaire, symptom response after distention, cystoscopic findings and bladder biopsy findings. Results: There were no significant associations between urine MH and symptom scores, response to bladder distention, cystoscopic findings or bladder biopsy features, including mast cell count by tryptase staining. Urine MH was similar in new patients with IC who did vs did not meet cystoscopic criteria. Urine MH was similar in new vs chronically treated patients with IC. Conclusions: Urine MH is unlikely to be useful as an objective marker of the response to bladder distention, as a method to predict which patients will respond to bladder distention or as a noninvasive substitute for bladder biopsy and mast cell counts by tryptase staining. abstract_id: PUBMED:8015069 Increased urine histamine and methylhistamine in interstitial cystitis. Interstitial cystitis is a painful bladder disorder occurring mostly in women, and is presently diagnosed by clinical presentation, as well as the presence of mucosal glomerulations and inflammation on bladder distention. 
An increased number of bladder mast cells have been implicated in the pathophysiology of interstitial cystitis but previous reports of spot urine histamine have not confirmed bladder mast cell activation. The availability of easily measurable objective criteria could make the diagnosis easier. Histamine and its major metabolite, methylhistamine, were measured in spot and 24-hour urine specimens from a number of normal female volunteers, control patients and interstitial cystitis patients. In interstitial cystitis patients the histamine levels were only slightly increased in the spot (p < 0.01) and 24-hour urine (p < 0.03) collections. Methylhistamine, on the other hand, was greatly elevated in spot (p < 10(-10)) and 24-hour (p < 0.0008) urine samples. These results indicate that methylhistamine levels could serve as useful diagnostic end points for interstitial cystitis. abstract_id: PUBMED:17070335 Interleukin-6, histamine, and methylhistamine as diagnostic markers for interstitial cystitis. Objectives: To examine the specificity and sensitivity of the inflammatory markers histamine, methylhistamine, and interleukin-6 (IL-6) in the urine of patients with interstitial cystitis compared with that in healthy controls. Methods: A total of 40 women with interstitial cystitis and 29 healthy controls collected 24-hour urine samples. During the 24 hours before urine collection, all participants refrained from consuming foods and medications that could contain bioactive amines. Methylhistamine and histamine were measured using radioimmunoassay kits and were normalized to urinary creatinine levels; IL-6 was measured using enzyme-linked immunosorbent assay. The data were analyzed by t tests, logistic regression analysis, and receiver operating characteristics. Results: IL-6 and histamine were significantly greater in the patients with interstitial cystitis than in the controls (P = 0.0003 and P = 0.038, respectively). 
The methylhistamine levels were also greater in the patients with interstitial cystitis than in the controls, but the results did not reach significance (P = 0.063). Using a combination of IL-6 and methylhistamine/creatinine, cutoff points were established. Using these cutoff points, the sensitivity was 70.0%, specificity 72.4%, positive predictive value 77.8%, and negative predictive value 63.6%. Conclusions: All three markers--histamine, methylhistamine, and IL-6--were greater in the patients with interstitial cystitis than in the controls. A combination of methylhistamine and IL-6 could be used as a sensitive and specific marker for interstitial cystitis. abstract_id: PUBMED:11992058 A comparison of multiple urine markers for interstitial cystitis. Purpose: We measured several urine markers in 24-hour specimens from patients with interstitial cystitis and healthy controls. For each marker we determined whether the urine level was significantly different in interstitial cystitis and control cases, and whether the marker level correlated with the symptom score. Materials And Methods: Study participants included 36 female patients with interstitial cystitis and 36 age matched female volunteers. Multiple urine aliquots were obtained to measure the various markers. Results: Certain markers were significantly increased in interstitial cystitis, including anti-proliferative factor, epidermal growth factor, insulin-like growth factor (IGF) binding protein-3 and interleukin (IL)-6. Markers significantly decreased in interstitial cystitis were heparin-binding epidermal growth factor-like growth factor, cyclic guanosine monophosphate and methylhistamine. Other markers were not significantly different in the interstitial cystitis and control groups, including total glycosaminoglycans, epitectin, hyaluronic acid, IL-8, IL-1 and nitrates plus nitrites. 
IGF-1 was undetectable in 24-hour urine samples but spot voided samples from the same interstitial cystitis population had IGF-1 levels similar to previously reported levels. The only significant association of marker with symptom score was a positive correlation of IL-6 with nocturia. For all markers the conclusions were the same whether the marker was normalized to creatinine or to 24 hours. Conclusions: This study confirmed several previously reported urine alterations in interstitial cystitis, including increased anti-proliferative factor, epidermal growth factor, IGF binding protein-3 and IL-6, and decreased heparin-binding epidermal growth factor-like growth factor and cyclic guanosine monophosphate. Of all markers studied anti-proliferative factor had the least overlap in the interstitial cystitis and control groups, and so it is the most likely candidate to become a diagnostic test. abstract_id: PUBMED:36983336 Urinary Oxidative Stress Biomarker Levels Might Be Useful in Identifying Functional Bladder Disorders in Women with Frequency and Urgency Syndrome. Purpose: lower urinary tract dysfunctions (LUTDs) are difficult to diagnose based on symptoms. This study used a cluster of urinary biomarkers, including inflammatory cytokines, neurogenic proteins, and oxidative stress biomarkers, to identify LUTDs in women with frequency and urgency symptoms. Methods: in total, 253 women with video urodynamics (VUDS)- and cystoscopy-confirmed detrusor overactivity (DO), interstitial cystitis/bladder pain syndrome (IC/BPS), dysfunctional voiding (DV), and hypersensitive bladder (HSB), and normal controls were included. Before diagnosis and treatment, urine samples were collected for analysis of biomarkers. The urine levels of biomarkers were compared between groups with bladder dysfunctions and controls and were combined to test the sensitivity in identifying total pathological bladder diseases and specific bladder diseases. 
Results: After video urodynamic study (VUDS) and urological examinations, bladder dysfunctions were classified into DO (n = 31), IC/BPS (n = 114), DV (n = 45), HSB (n = 29), and control (n = 34) groups. By using a cystometric bladder capacity of ≤350 mL, 186/219 (84.9%) of the patients with DO, IC/BPS, DV, and HSB can be discriminated from the controls. Among these urine biomarkers, the oxidative stress biomarkers 8-isoprostane, 8-hydroxydeoxyguanosine (8-OHdG), and total antioxidant capacity (TAC) are useful for identifying pathological bladder dysfunction (DO, IC/BPS, and DV) and HSB. With elevated IL-1β, lower IL-2, and elevated TNF-α levels, most patients with DV can be identified. Between DO and IC/BPS, a higher NGF level can identify 58.3% of IC/BPS cases, whereas a lower NGF level can identify 75.0% of DO cases. Conclusion: By using a cluster of urine biomarkers, DO, IC/BPS, and DV cases can be identified based on elevated levels of the urine oxidative stress biomarkers 8-isoprostane, TAC, or 8-OHdG, and HSB cases with a low TAC. These urine biomarkers are useful for identifying specific LUTDs in women with frequency and urgency symptoms. abstract_id: PUBMED:7648070 Elevated mast cell tryptase in the urine of patients with interstitial cystitis. Objective: To investigate the number of tryptase-positive bladder mast cells and the level of urine tryptase in interstitial cystitis (IC), a bladder disorder which occurs mostly in women and is characterized by suprapubic pain, frequency and nocturia. Patients And Methods: Bladder biopsies from 37 women with IC and 15 control women with other bladder conditions (age range 18-63 years) were obtained during diagnostic bladder distension and frozen immediately. Mast cells positive for tryptase were identified by immunohistochemistry and evaluated by light microscopy.
Tryptase was measured by solid phase radioimmunoassay in urine samples, collected immediately (spot) and during a period of 24 h, obtained from normal women volunteers, controls and patients with IC. To adequately quantify tryptase, the physiologically active tetramer was dissociated to inactive monomers with supersaturated NaCl (6.0 M), and then dialysed. Results: The patients' spot urine sample tryptase levels were indistinguishable from those of controls, which included the normal women volunteers. However, the tryptase levels in 24 h urine samples were greatly elevated only in patients with IC, both before (P < 0.005), and especially after NaCl treatment and dialysis (P < 0.001). Conclusion: These results indicate that the tryptase levels of 24 h urine samples, combined with methylhistamine levels and the clinical criteria presently used, may possibly help to better identify IC. abstract_id: PUBMED:9258081 Excretion of fluorescein in the urine of women with interstitial cystitis. Purpose: Altered bladder permeability may have a role in the pathogenesis of interstitial cystitis. Fluorescein, a fluorescent dye of molecular weight 325, has been used to assess membrane permeability. Orally ingested fluorescein normally is rapidly conjugated to glucuronate by the liver and excreted in the urine. Materials And Methods: To test its use as a marker of bladder permeability, we administered fluorescein orally to 6 patients with interstitial cystitis who satisfied National Institutes of Health, National Institute for Diabetes and Digestive and Kidney Diseases criteria and to 6 normal female control subjects. After emptying the bladder and collection of a baseline blood sample, fasted subjects ingested 20 mg. fluorescein and blood samples were collected 1, 2, 3, 4 and 24 hours later. Urine was collected during each of the first 4 hours, and then from 4 to 10, 10 to 16 and 16 to 24 hours. 
Urine volume was measured, and all plasma and urine samples were analyzed for fluorescein. Results: Plasma fluorescein concentrations (ng./ml.) were significantly (p < 0.05) higher in interstitial cystitis patients than in control subjects at 1 and 2 hours after fluorescein ingestion. Urine fluorescein excretion (mg.) was significantly (p < 0.05) lower in interstitial cystitis patients than in control subjects at 4 to 10 hours after fluorescein ingestion, and for the entire 24 hours. Conclusions: The increased fluorescein concentration in the plasma and decreased excretion in the urine of interstitial cystitis patients suggest that fluorescein may be a useful marker of altered membrane permeability. abstract_id: PUBMED:8254814 The in vitro cytotoxicity of urine from patients with interstitial cystitis. Urine from patients with interstitial cystitis has been reported to be more cytotoxic than urine from healthy subjects when tested in vitro against cells from a normal urothelial cell line. The purpose of the present study was to develop a method to measure urinary cytotoxicity and so make it possible to estimate the toxicity of urine from interstitial cystitis patients. The study included 10 women with interstitial cystitis and 10 healthy controls. Urine specimens were obtained from both groups and urine cytotoxicity was measured by a modified 51Cr-release assay: A range of urine dilutions was added to suspensions of target cells with intracellular bound 51Cr, and cellular death was recorded by measuring the 51Cr-release from the target cells. The transitional cell bladder cell lines T24 and Hu609 and the erythroleukemia K562 cell line were used as target cells. There was no difference in urine cytotoxicity between interstitial cystitis patients and healthy controls. Urine cytotoxicity was increased by dilution in both groups. abstract_id: PUBMED:9684578 Cytologic examination of urine from patients with interstitial cystitis. 
Objective: To perform a cytologic examination of urine from patients with interstitial cystitis (IC). Study Design: Patients included in this study fit the definition of IC established by the National Institute of Arthritis, Diabetes, Digestive and Kidney Diseases. Cytologic material was identified by checking the pathology archives against medical record numbers of patients with biopsy-confirmed IC. Cytologic preparations included cytocentrifuge samples and filters. Representative material was submitted for toluidine blue staining. Results: Thirty-three urine specimens from 14 patients with IC were included in the study. All urines showed acute inflammation, predominantly polymorphonuclear cells. Fourteen (42%) specimens contained eosinophils, and 10 (30%) contained cells resembling mast cells. Other findings included numerous degenerated cells, reactive transitional cells and hematuria. Toluidine blue staining of urine cytocentrifuge samples failed to demonstrate metachromatic staining of the presumed mast cells. Discussion: Cytologic examination of urine from patients with IC showed consistent but nonetheless nonspecific findings. The predominant characteristic was the presence of inflammatory cells, mostly neutrophils and occasionally eosinophils. abstract_id: PUBMED:34242284 Evaluating extraction methods to study canine urine microbiota. The urinary microbiota is the collection of microbes present in urine that may play a role in host health. Studies of urine microbiota have traditionally relied upon culturing methods aimed at identifying pathogens. However, recent culture-free sequencing studies of the urine microbiota have determined that a diverse array of microbes is present in health and disease. To study these microbes and their potential role in diseases like bladder cancer or interstitial cystitis, consistent extraction and detection of bacterial DNA from urine is critical. 
However, urine is a low biomass substrate, requiring sensitive methods to capture DNA and making the risk of contamination high. To address this challenge, we collected urine samples from ten healthy dogs and extracted DNA from each sample using five different commercially available extraction methods. Extraction methods were compared based on total and bacterial DNA concentrations and bacterial community composition and diversity assessed through 16S rRNA gene sequencing. Significant differences in the urinary microbiota were observed by dog and sex but not extraction method. The Bacteremia Kit yielded the highest total DNA concentrations (Kruskal-Wallis, p = 0.165, not significant) and the highest bacterial DNA concentrations (Kruskal-Wallis, p = 0.044). Bacteremia also extracted bacterial DNA from the greatest number of samples. Taken together, these results suggest that the Bacteremia kit is an effective option for studying the urine microbiota. This work lays the foundation to study the urine microbiome in a wide range of urogenital diseases in dogs and other species. Answer: The utility of urine methylhistamine (MH) as a marker for interstitial cystitis (IC) has been explored in several studies with varying conclusions. One study found that urine MH levels were significantly elevated in spot and 24-hour urine samples from IC patients, suggesting that MH could serve as a useful diagnostic endpoint for IC (PUBMED:8015069). Another study also reported that histamine, MH, and interleukin-6 (IL-6) were greater in patients with IC than in controls, and a combination of MH and IL-6 could be used as a sensitive and specific marker for IC (PUBMED:17070335). However, a different study concluded that urine MH is unlikely to be useful as an objective marker for the response to bladder distention, to predict which patients will respond to bladder distention, or as a noninvasive substitute for bladder biopsy and mast cell counts by tryptase staining. 
This study found no significant associations between urine MH and symptom scores, response to bladder distention, cystoscopic findings, or bladder biopsy features (PUBMED:15538243). Another study that measured several urine markers in IC patients and healthy controls found that certain markers were significantly increased in IC, including anti-proliferative factor, epidermal growth factor, insulin-like growth factor binding protein-3, and IL-6, while markers significantly decreased in IC included heparin-binding epidermal growth factor-like growth factor, cyclic guanosine monophosphate, and MH (PUBMED:11992058). Given these conflicting results, it appears that the role of urine MH as a marker for IC is not definitively established. While some studies suggest it could be part of a combination of markers to diagnose IC, other research indicates it may not be a reliable standalone marker for the condition. Further research may be needed to clarify the utility of urine MH in the diagnosis and management of IC.
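The diagnostic accuracy figures quoted above from PUBMED:17070335 (sensitivity 70.0%, specificity 72.4%, positive predictive value 77.8%, negative predictive value 63.6% for the combined IL-6/methylhistamine cutoff in 40 IC patients and 29 controls) can be cross-checked from a standard 2x2 confusion table. A minimal sketch, assuming the reported percentages round from whole-patient counts (28 true positives and 21 true negatives are the implied counts, not values stated in the abstract):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard 2x2-table accuracy measures for a diagnostic test."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among all diseased
        "specificity": tn / (tn + fp),  # true negatives among all healthy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts implied by the abstract: 40 IC patients and 29 controls;
# sensitivity 70.0% -> 28 true positives, specificity 72.4% -> 21 true negatives.
m = diagnostic_metrics(tp=28, fn=12, tn=21, fp=8)
print({k: round(v * 100, 1) for k, v in m.items()})
# -> {'sensitivity': 70.0, 'specificity': 72.4, 'ppv': 77.8, 'npv': 63.6}
```

The four percentages reproduce exactly, which also illustrates why PPV (77.8%) exceeds sensitivity here: IC patients outnumber controls in this cohort, so predictive values are inflated relative to what a screening population with lower IC prevalence would show.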
Instruction: Is abnormal iodine-123-MIBG kinetics associated with left ventricular dysfunction in patients with diabetes mellitus? Abstracts: abstract_id: PUBMED:26514320 Cardiac sympathetic innervation assessed with (123)I-MIBG retains prognostic utility in diabetic patients with severe left ventricular dysfunction evaluated for primary prevention implantable cardioverter-defibrillator. Background: Scintigraphy with iodine-123-metaiodobenzylguanidine ((123)I-MIBG) is a non-invasive tool for the assessment of cardiac sympathetic innervation (CSI) that has proven to be an independent predictor of survival. Recent studies have shown that diabetic patients with heart failure (HF) have a higher deterioration in CSI. It is unknown if (123)I-MIBG has the same predictive value for diabetic and non-diabetic patients with advanced HF. An analysis is performed to determine whether CSI with (123)I-MIBG retains prognostic utility in diabetic patients with HF, evaluated for a primary prevention implantable cardioverter-defibrillator (ICD). Material And Methods: Seventy-eight consecutive HF patients (48 diabetic) evaluated for primary prevention ICD implantation were prospectively enrolled and underwent (123)I-MIBG to assess CSI (heart-to-mediastinum ratio - HMR). A Cox proportional hazards multivariate analysis was used to determine the influence of (123)I-MIBG images for prediction of cardiac events in both diabetic and non-diabetic patients. The primary end-point was a composite of arrhythmic event, cardiac death, or admission due to HF. Results: During a mean follow-up of 19.5 [9.3-29.3] months, the primary end-point occurred in 24 (31%) patients. Late HMR was significantly lower in diabetic patients (1.30 vs. 1.41, p=0.014). Late HMR≤1.30 was an independent predictor of cardiac events in diabetic (hazard ratio 4.53; p=0.012) and non-diabetic patients (hazard ratio 12.31; p=0.023). 
Conclusions: Diabetic patients with HF evaluated for primary prevention ICD show a higher deterioration in CSI than non-diabetics; nevertheless (123)I-MIBG imaging retained prognostic utility for both diabetic and non-diabetic patients. abstract_id: PUBMED:11144470 Is abnormal iodine-123-MIBG kinetics associated with left ventricular dysfunction in patients with diabetes mellitus? Background: Although autonomic neuropathy is frequently recognized in patients with diabetes mellitus, it is uncertain whether cardiac sympathetic neuropathy may play a role in the development of diabetic cardiomyopathy. Methods And Results: In 10 control subjects and 61 patients with diabetes mellitus who did not have coronary artery disease, cardiac sympathetic function and left ventricular ejection fraction (LVEF) were evaluated by using iodine-123-metaiodobenzylguanidine (MIBG) imaging and echocardiography, respectively. Dynamic acquisitions and planar images obtained 15 and 150 minutes after injection were used as a means of measuring early and late myocardial uptake and clearance rates of MIBG from the heart. Eight patients with an LVEF less than 50% demonstrated a lower late myocardial MIBG uptake (0.0043% +/- 0.0017% vs. 0.0024% +/- 0.0009%/pixel, P = .002) and a higher clearance rate (22.9% +/- 17.7% vs. 49.3% +/- 12.2%, P<.0001) than the 53 patients with an LVEF of 50% or greater, although the age, sex, type of diabetes mellitus, and frequency of neuropathy, retinopathy, and nephropathy were not significantly different between the 2 subgroups. The LVEF correlated weakly but significantly with early and late myocardial uptake and clearance rate (r = 0.277, P = .03; r = 0.421, P = .001; r = 0.382, P = .002; respectively) in patients with diabetes mellitus. Conclusion: Marked MIBG abnormalities are associated with left ventricular dysfunction in patients with diabetes mellitus. 
However, long-term follow-up of patients with diabetes mellitus who have marked MIBG abnormalities and normal LVEF will be required to determine whether these patients would demonstrate systolic dysfunction earlier than patients without an MIBG abnormality. abstract_id: PUBMED:12751132 Prognostic implications of an initial loss of cardiac metaiodobenzylguanidine uptake and diabetes mellitus in patients with left ventricular dysfunction. Background: Although altered cardiac sympathetic innervation is related to fatal outcome, the mechanisms and prognostic value of an initial cardiac metaiodobenzylguanidine (MIBG) defect are not known. Methods And Results: After quantitative cardiac MIBG imaging, 205 patients with left ventricular ejection fraction <50% were prospectively followed up with a primary end-point of cardiac death for 35 months. In regard to 38 cardiac deaths, consisting of 25 pump failure deaths, 11 sudden deaths, and 2 fatal acute myocardial infarctions, multivariate analysis identified diabetes mellitus as a significant independent predictor as well as reduced cardiac MIBG activity, use of nitrate, and New York Heart Association functional status. Independent of washout kinetics and cardiac function, patients with profound loss of initial MIBG uptake and those with late-phase MIBG activity of 1.74 or less had significantly greater mortality rates than did their counterparts. Initial cardiac MIBG activity closely correlated inversely with annual cardiac death rate. Conclusions: An initial cardiac MIBG defect and presence of diabetes mellitus indicate a low probability of long-term survival. The profound loss of initial MIBG activity is likely to be due to structural deficit of sympathetic neurons themselves, rather than accelerated sympathetic function, suggesting that denervation is one of mechanisms of cardiac sympathetic dysfunction leading to lethal clinical outcomes. 
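The two (123)I-MIBG indices used throughout these abstracts — the heart-to-mediastinum ratio (HMR) from planar images and the washout (clearance) rate between the early and late acquisitions — are simple ratios of region-of-interest counts. A minimal sketch of the conventional formulas, without the background- or decay-correction variants some centres apply; the count values below are hypothetical, chosen only for illustration:

```python
def heart_to_mediastinum_ratio(heart_counts, mediastinum_counts):
    """HMR: mean counts/pixel in a cardiac ROI over a mediastinal ROI."""
    return heart_counts / mediastinum_counts

def washout_rate(early_heart, late_heart):
    """Percent loss of cardiac MIBG activity from the early to the late image."""
    return (early_heart - late_heart) / early_heart * 100.0

# Hypothetical counts/pixel for illustration only.
early_hmr = heart_to_mediastinum_ratio(180.0, 100.0)  # 1.80
late_hmr = heart_to_mediastinum_ratio(130.0, 100.0)   # 1.30
wr = washout_rate(180.0, 130.0)                        # ~27.8%

# PUBMED:26514320 used a late HMR <= 1.30 to flag high-risk patients.
high_risk = late_hmr <= 1.30
```

A low late HMR reflects reduced sympathetic nerve terminal uptake, while a high washout rate reflects accelerated loss of tracer — the pattern the abstracts above associate with left ventricular dysfunction and worse prognosis in diabetic patients.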
abstract_id: PUBMED:31537637 HIV and pericardial fat are associated with abnormal cardiac structure and function among Ugandans. Objectives: To examine the relationship between pericardial fat (PCF) and cardiac structure and function among HIV-infected patients in the sub-Saharan African country of Uganda. People living with HIV (PLHIV) have altered fat distribution and an elevated risk for heart failure. Whether altered quantity and radiodensity of fat surrounding the heart relates to cardiac dysfunction in this population is unknown. Methods: One hundred HIV-positive Ugandans on antiretroviral therapy were compared with 100 age- and sex-matched HIV-negative Ugandans; all were >45 years old with >1 cardiovascular disease risk factor. Subjects underwent ECG-gated non-contrast cardiac CT and transthoracic echocardiography with speckle tracking strain imaging. Multivariable linear and logistic regression models were used to explore the association of PCF with echocardiographic outcomes. Results: Median age was 55 years and 62% were female. Compared with uninfected controls, PLHIV had lower body mass index (27 vs 30, p=0.02) and less diabetes (26% vs 45%, p=0.005). Median left ventricular (LV) ejection fraction was 67%. In models adjusted for traditional risk factors, HIV was associated with 10.3 g/m2 higher LV mass index (LVMI) (95% CI 3.22 to 17.4; p=0.005), 0.87% worse LV global longitudinal strain (GLS) (95% CI -1.66 to -0.07; p=0.03) and higher odds of diastolic dysfunction (OR 1.96; 95% CI 0.95 to 4.06; p=0.07). In adjusted models, PCF volume was significantly associated with increased LVMI and worse LV GLS, while PCF radiodensity was associated with worse LV GLS (all p<0.05). Conclusions: In Uganda, HIV infection, PCF volume and density are associated with abnormal cardiac structure and function. abstract_id: PUBMED:27208801 Abnormal echocardiography in patients with type 2 diabetes and relation to symptoms and clinical characteristics.
Objectives: We aimed to determine the prevalence of echocardiographic abnormalities and their relation to clinical characteristics and cardiac symptoms in a large, contemporary cohort of patients with type 2 diabetes. Results: A total of 1030 patients with type 2 diabetes participated. Echocardiographic abnormalities were present in 513 (49.8%) patients, mainly driven by a high prevalence of diastolic dysfunction 178 (19.4%), left ventricular hypertrophy 213 (21.0%) and left atrial enlargement, 200 (19.6%). The prevalence increased markedly with age from 31.1% in the youngest group (<55 years) to 73.9% in the oldest group (>75 years) (p < 0.001) and was equally distributed among the sexes (p = 0.76). In univariate analyses, electrocardiographic abnormalities, age, body mass index, known coronary heart disease, hypertension, albuminuria, diabetes duration and creatinine were associated with abnormal echocardiography along with dyspnoea and characteristic chest pain (p < 0.05 for all). Neither of the cardiac symptoms nor clinical characteristics had sufficient sensitivity and specificity to accurately identify patients with abnormal echocardiography. Conclusion: Echocardiographic abnormalities are very common in outpatients with type 2 diabetes, but neither cardiac symptoms nor clinical characteristics are effective to identify patients with echocardiographic abnormalities. abstract_id: PUBMED:30648300 Abnormal longitudinal peak systolic strain in asymptomatic patients with type I diabetes mellitus. Aims: In addition to increased risk of cardiovascular disease, the prevalence of diabetic cardiomyopathy is increasingly recognized in patients with type 1 diabetes mellitus (T1DM). We aimed to identify the occurrence of subclinical markers of cardiovascular risk and cardiac dysfunction and assess their relation to clinical parameters in asymptomatic patients with T1DM. 
Methods And Results: A total of 102 patients (mean age 46 years [20-73], 62% male) with a history of T1DM ranging from 5 to 47 years underwent standard 2D and pulse-wave tissue Doppler echocardiography (Philips iE33) and computerized tomography for assessment of coronary calcium score (CACS) and visceral fat. Global longitudinal peak systolic strain (GLPSS, speckle tracking) was calculated by offline analysis (Qlab 9.0). Whereas systolic function was preserved in all patients (LVEF > 50%), subclinical dysfunction (defined as GLPSS of >-20%) was present in 39% and 66% had diastolic dysfunction. Fifty patients had a CACS above the 50th percentile according to age and gender. These patients were older, more obese, had higher levels of visceral fat, higher SBP and increased levels of LDL cholesterol. Higher CACS meant increased risk of diastolic and subclinical systolic dysfunction. However, decreased GLPSS was also detected in 30% of patients with CACS of <50th percentile. Stepwise linear regression analysis indicated visceral fat as a strong predictor of abnormal GLPSS and CACS. Conclusion: Subclinical left ventricular dysfunction and atherosclerosis were highly prevalent in asymptomatic T1DM. Abnormal GLPSS was noted with or without associated increase in CACS. Visceral fat was a strong predictor of increased CACS as well as abnormal GLPSS.
Material And Methods: We performed a cross-sectional analysis of 535 participants (59% women) with BMI greater than 27 kg/m without known cardiovascular disease enrolled in the FAT associated CardiOvasculaR dysfunction (FATCOR) study. Hypertension was defined as use of antihypertensive treatment or elevated 24-h ambulatory blood pressure. Abnormal LV geometry was identified as increased relative wall thickness and/or LV mass index. Systolic myocardial function was assessed by midwall shortening (MWS) and speckle tracking peak global longitudinal strain (GLS). Results: Hypertensive participants were older (49 vs. 46 years), had higher BMI and waist circumference, higher prevalences of diabetes and abnormal LV geometry (29 vs. 16%), and lower GLS (-19 vs. -20%) and MWS (16.3 vs. 17.1%) compared with normotensive participants (all P < 0.01). In multivariable linear regression analyses, hypertension was associated with lower GLS (β=0.11, P = 0.035) and lower MWS (β=0.09, P = 0.029) independent of sex, diabetes, LV hypertrophy, ejection fraction, and waist circumference. Hypertension was also associated with presence of abnormal LV geometry [odds ratio 1.74 (95% confidence interval 1.04-2.89), P = 0.035) independent of the same confounders. When replacing waist circumference with BMI in the models, hypertension retained its association with lower myocardial function, whereas the association with abnormal LV geometry was attenuated. Conclusion: In participants with increased BMI without known clinical cardiovascular disease, concomitant hypertension was associated with lower systolic myocardial function and more abnormal LV geometry. Clinical Trial Registration: URL: http://www.clinicaltrials.gov NCT02805478. abstract_id: PUBMED:25450014 Abnormal left ventricular contractile response to exercise in the absence of obstructive coronary artery disease is associated with resting left ventricular long-axis dysfunction. 
Background: The etiology of reduced left ventricular (LV) ejection fraction after exercise, without obstructive coronary artery disease or other established causes, is unclear. The aims of this study were to determine whether patients undergoing treadmill stress echocardiography with this abnormal LV contractile response to exercise (LVCRE) without established causes have resting LV long-axis dysfunction or microvascular dysfunction and to determine associations with this abnormal LVCRE. Methods: Of 5,275 consecutive patients undergoing treadmill stress echocardiography, 1,134 underwent cardiac computed tomography angiography or invasive angiography. Having excluded patients with obstructive coronary artery disease, hypertensive response, submaximal heart rate response, resting LV ejection fraction < 50%, and valvular disease, 110 with "abnormal LVCRE" and 212 with "normal LVCRE" were analyzed. Resting mitral annular velocities were measured to assess LV long-axis function. Myocardial blush grade and corrected Thrombolysis In Myocardial Infarction frame count were determined angiographically to assess microvascular function. Results: Comparing normal LVCRE with abnormal LVCRE, age (mean, 59.7 ± 11.1 vs 61.4 ± 10.0 years), hypertension (53% vs 55%), diabetes (16% vs 20%), and body mass index (mean, 29.1 ± 5.4 vs 29.5 ± 6.4 kg/m(2)) were similar (P > .05). Abnormal LVCRE had reduced resting LV long-axis function with lower septal (mean, 6.1 ± 1.9 vs 7.7 ± 2.2 cm/sec) and lateral (mean, 8.1 ± 2.9 vs 10.4 ± 3.0 cm/sec) e' velocities (P < .001) and larger resting left atrial volumes (mean, 37.3 ± 10.1 vs 31.1 ± 7.2 mL/m(2), P < .001). 
On multivariate analysis, female gender (odds ratio [OR], 1.21; 95% confidence interval [CI], 1.15-1.99; P < .001), exaggerated chronotropic response (OR, 1.49; 95% CI, 1.09-2.05; P < .001), resting left atrial volume (OR, 2.38; 95% CI, 1.63-3.47; P < .001), and resting lateral e' velocity (OR, 1.70; 95% CI, 1.22-2.49; P = .003) were associated with abnormal LVCRE, but not myocardial blush grade or corrected Thrombolysis In Myocardial Infarction frame count. Conclusions: An abnormal LVCRE in the absence of established causes is associated with resting LV long-axis dysfunction and is usually seen in women. abstract_id: PUBMED:6280468 Increased incidence and clinical correlation of persistently abnormal technetium pyrophosphate myocardial scintigrams following acute myocardial infarction in patients with diabetes mellitus. "Persistently abnormal" technetium-99m stannous pyrophosphate myocardial scintigrams (PPi+) appear to be associated with a relatively poor prognosis after acute myocardial infarction (AMI). To assess the incidence and implications of PPi+, we performed a retrospective analysis in 29 patients with and 25 patients without diabetes mellitus who had abnormal myocardial scintigrams within 4 days of AMI and who had follow-up scintigrams at least 3 months after hospital discharge. There were no significant differences between patients with and without diabetes as regards age, incidence of transmural or nontransmural AMI, or degree of left ventricular dysfunction after AMI. Persistently abnormal PPi+ occurred more commonly in patients with diabetes than in nondiabetic patients (18 of 29, 62%, compared to 3 of 25, 12%; p less than 0.001). Patients with chronic PPi+ had more frequent cardiac complications following hospital discharge (p less than 0.005) including death, recurrent AMI, unstable angina, and intractable congestive heart failure. Postmortem analysis in two patients with diabetes and chronic PPi+ revealed marked myocytolysis. 
Thus, patients with diabetes mellitus have an increased incidence of post-AMI "persistently abnormal" technetium (PPi+) scintigrams and relatively poor prognosis following myocardial infarction. abstract_id: PUBMED:18985347 Prevalence and predictors of an abnormal stress myocardial perfusion study in asymptomatic patients with type 2 diabetes mellitus. Purpose: The purpose of this study was to evaluate the prevalence of an abnormal stress myocardial perfusion study in a cohort of truly asymptomatic patients with type 2 diabetes mellitus using myocardial perfusion imaging by means of single photon emission computed tomography (SPECT). Secondly, we determined which clinical characteristics may predict an abnormal stress myocardial perfusion study in this population. Methods: A total of 120 asymptomatic patients (mean age 53+/-10 years) with type 2 diabetes mellitus and one or more risk factors for coronary artery disease were prospectively recruited from an outpatient diabetes clinic. All patients underwent myocardial perfusion imaging by means of adenosine (99m)Tc sestamibi SPECT. Images were evaluated for the presence of perfusion abnormalities as well as other nonperfusion abnormalities that may indicate extensive ischaemia, including left ventricular dysfunction (defined as a left ventricular ejection fraction <45%), transient ischaemic dilatation and adenosine-induced ST segment depression. Multivariable analysis was performed using a backward selection strategy to identify potential predictors for an abnormal stress myocardial perfusion study. Finally, all patients were followed up for 12 months to determine the occurrence of cardiovascular events: (1) cardiac death, (2) nonfatal myocardial infarction, (3) unstable angina requiring hospitalization, (4) revascularization, or (5) stroke. Results: Of the 120 patients, 40 (33%) had an abnormal stress study, including myocardial perfusion abnormalities in 30 patients (25%). 
In 10 patients (8%), indicators of extensive (possibly balanced) ischaemia were observed in the absence of abnormal perfusion. The multivariable analysis identified current smoking, duration of diabetes and the cholesterol/high-density lipoprotein (HDL) ratio as independent predictors of an abnormal stress study. During a follow-up period of 12 months, six patients (5%) had a cardiovascular event. Conclusion: The current study revealed a high prevalence of abnormal stress myocardial perfusion studies in patients with type 2 diabetes mellitus despite the absence of symptoms. In contrast to earlier studies, current smoking, duration of diabetes and the cholesterol/HDL ratio were identified as independent predictors of an abnormal study. Answer: Yes, abnormal iodine-123-metaiodobenzylguanidine (MIBG) kinetics is associated with left ventricular dysfunction in patients with diabetes mellitus. A study involving patients with diabetes mellitus who did not have coronary artery disease found that those with a left ventricular ejection fraction (LVEF) less than 50% demonstrated a lower late myocardial MIBG uptake and a higher clearance rate than patients with an LVEF of 50% or greater. The LVEF correlated weakly but significantly with early and late myocardial uptake and clearance rate (PUBMED:11144470). This suggests that marked MIBG abnormalities are associated with left ventricular dysfunction in patients with diabetes mellitus. Additionally, another study indicated that diabetic patients with left ventricular dysfunction and an initial loss of cardiac MIBG uptake have a low probability of long-term survival, suggesting that denervation is one of the mechanisms of cardiac sympathetic dysfunction leading to lethal clinical outcomes (PUBMED:12751132).
Instruction: Does sex matter? Abstracts: abstract_id: PUBMED:34117759 Sex Chromosome Dosage Effects on White Matter Structure in the Human Brain. Sex chromosome aneuploidies, a group of neurogenetic conditions characterized by aberrant sex chromosome dosage (SCD), are associated with increased risks for psychopathology as well as alterations in gray matter structure. However, we still lack a comprehensive understanding of potential SCD-associated changes in white matter structure, or knowledge of how these changes might relate to known alterations in gray matter anatomy. Thus, here, we use voxel-based morphometry on structural neuroimaging data to provide the first comprehensive maps of regional white matter volume (WMV) changes across individuals with varying SCD (n = 306). We show that mounting X- and Y-chromosome dosage are both associated with widespread WMV decreases, including in cortical, subcortical, and cerebellar tracts, as well as WMV increases in the genu of the corpus callosum and posterior thalamic radiation. We also correlate X- and Y-chromosome-linked WMV changes in certain regions to measures of internalizing and externalizing psychopathology. Finally, we demonstrate that SCD-driven WMV changes show a coordinated coupling with SCD-driven gray matter volume changes. These findings represent the most complete maps of X- and Y-chromosome effects on human white matter to date, and show how such changes connect to psychopathological symptoms and gray matter anatomy. abstract_id: PUBMED:36778448 Sex differences in grey matter networks in dementia with Lewy bodies. Objectives: Sex differences permeate many aspects of dementia with Lewy bodies (DLB), including epidemiology, pathogenesis, disease progression, and symptom manifestation. However, less is known about potential sex differences in patterns of neurodegeneration in DLB. Here, we test whether grey matter networks also differ between female and male DLB patients. 
To assess the specificity of these sex differences to DLB, we additionally investigate sex differences in healthy controls (HCs). Methods: A total of 119 (68.7 ± 8.4 years) male and 45 female (69.9 ± 9.1 years) DLB patients from three European centres and the Mayo Clinic were included in this study. Additionally, we included 119 male and 45 female age-matched HCs from the Mayo Clinic. Grey matter volumes of 58 cortical, subcortical, cerebellar, and pontine brain regions derived from structural magnetic resonance images were corrected for age, intracranial volume, and centre. Sex-specific grey matter networks for DLB patients and HCs were constructed by correlating each pair of brain regions. Network properties of the correlation matrices were compared between sexes and groups. Additional analyses were conducted on W-scored data to identify DLB-specific findings. Results: Networks of male HCs and male DLB patients were characterised by a lower nodal strength compared to their respective female counterparts. In comparison to female HCs, the grey matter networks of male HCs showed a higher global efficiency, modularity, and a lower number of modules. None of the global and nodal network measures showed significant sex differences in DLB. Conclusions: The disappearance of sex differences in the structural grey matter networks of DLB patients compared to HCs may indicate a sex-dependent network vulnerability to the alpha-synuclein pathology in DLB. Future studies might investigate whether the differences in structural network measures are associated with differences in cognitive scores and clinical symptoms between the sexes. abstract_id: PUBMED:29287244 Investigating the differential contributions of sex and brain size to gray matter asymmetry. Scientific reports of sex differences in brain asymmetry - the difference between the two hemispheres - are rather inconsistent. 
Some studies report no sex differences whatsoever, others reveal striking sex effects, with large discrepancies across studies in the magnitude, direction, and location of the observed effects. One reason for the lack of consistency in findings may be the confounding effects of brain size as male brains are usually larger than female brains. Thus, the goal of this study was to investigate the differential contributions of sex and brain size to asymmetry with a particular focus on gray matter. For this purpose, we applied a well-validated workflow for voxel-wise gray matter asymmetry analyses in a sample of 96 participants (48 males/48 females), in which a subsample of brains (24 males/24 females) were matched for size. By comparing outcomes based on three different contrasts - all males versus all females; all large brains versus all small brains; matched males versus matched females - we were able to disentangle the contributing effects of sex and brain size, to reveal true (size-independent) sex differences in gray matter asymmetry: Males show a significantly stronger rightward asymmetry than females within the cerebellum, specifically in lobules VII, VIII, and IX. This finding agrees closely with prior research suggesting sex differences in sensorimotor, cognitive and emotional function, which are all moderated by the respective cerebellar sections. No other significant sex effects on gray matter were detected across the remainder of the brain. abstract_id: PUBMED:34148074 Sex Differences in Cortical Morphometry and White Matter Microstructure During Brain Aging and Their Relationships to Cognition. Changes in brain structure are associated with aging, and accompanied by the gradual deterioration of cognitive functions, which manifests differently in males and females. 
Here, we quantify the age-related spatial aging patterns of brain gray and white matter structures, their volume reduction rate, their relationships with specific cognitive functions, as well as differences between males and females in a cross-sectional nondementia dataset. We found that both males and females showed extensive age-related decreases in the volumes of most gray matter and white matter regions. Females have larger regions where the volume decreases with age and a greater slope (females: 0.199%, males: 0.183%) of volume decrease in gray matter. For white matter, no significant sex differences were found in age-related regions, and the slope of volume decrease. More significant associations were identified between brain structures and cognition in males during aging than females. This study explored the age-related regional variations in gray matter and white matter, as well as the sex differences in a nondemented elderly population. This study helps to further understand the aging of the brain structure and sex differences in the aging of brain structures and provides new evidence for the aging of nondemented individuals. abstract_id: PUBMED:37127839 Sex differences in white matter tracts of capuchin monkey brains. Nonhuman primates exhibit sexual dimorphism in behavior, suggesting that there could be underlying differences in brain organization and function. Understanding this neuroanatomical variation is critical for enhancing our understanding of the evolution of sex differences in the human brain. Tufted capuchin monkeys (Sapajus [Cebus] apella) represent a phylogenetically diverse taxa of neotropical primates that converge on several behavioral characteristics with humans relevant to social organization, making them an important point of comparison for studying the evolution of sex differences in primates. 
While anatomical sex differences in gray matter have previously been found in capuchin monkeys, the current study investigates sex differences in white matter tracts. We carried out tract-based spatial statistical analysis on fractional anisotropy images of tufted capuchin monkeys (15 female, 5 male). We found that females showed significantly higher fractional anisotropy than males in regions of frontal-parietal white matter in the right cerebral hemisphere. Paralleling earlier findings in gray matter, male and female fractional anisotropy values in these regions were nonoverlapping. This complements prior work pointing toward capuchin sex differences in limbic circuitry and higher-order visual regions. We propose that these sex differences are related to the distinct socioecological niches occupied by male and female capuchins. Capuchin neuroanatomical sex differences appear to be more pronounced than in humans, which we suggest may relate to human adaptations for prolonged neurodevelopmental trajectories and increased plasticity. abstract_id: PUBMED:32063823 Investigating Gray and White Matter Structural Substrates of Sex Differences in the Narrative Abilities of Healthy Adults. Linguistic aspects of narration have been investigated in healthy populations, in a wide variety of languages and speech genres with very different results. There is some evidence indicating that linguistic elements, such as speech rate (i.e., the measure indicating the amount of speech produced in a certain time period), mean length of utterance (MLU) (i.e., the index reflecting sentence grammatical structure), frequency of nouns and verbs, might be affected by non-linguistic factors such as sex. On the other hand, despite the existence of neuroimaging evidence of structural differences between males and females, it is yet unknown how such differences could explain between-sex disparities in linguistic abilities in natural speech contexts. 
To date, no study has evaluated discourse production elements in relation to sex differences and their neural correlates in terms of brain structure, a topic that could provide unique insights on the relationship between language and the brain. The aim of the present study was to determine sex differences in narrative skills in healthy adults and to investigate white and gray matter structural correlates of linguistic skills in each group. Twenty-seven male and 30 female (N = 57) right-handed, neurologically intact, monolingual Greek speakers, matched for age and years of education, participated. Narrations of a personal medical event were elicited. Linguistic elements of speech rate (words per minute), MLUs, frequency of nouns and verbs were calculated for each speech sample, by two independent raters. Structural 3D T1 images were segmented and parcellated using FreeSurfer and whole-brain between-sex differences in cortical thickness, cortical volume and surface area, were obtained. Between-group differences in white matter diffusion tensor scalars were examined via Tract-Based Spatial-Statistics and whole-brain tractography and automated tract delineation using Automated Fiber Quantification. Speech rate and noun frequency were significantly lower for men, while verb frequency was significantly higher for women, but no differences were identified for MLU. Regarding cortical measures, males demonstrated increased volume, surface area and cortical thickness in several bilateral regions, while no voxel-wise or tractography-based between-group differences in white matter metrics were observed. Regarding the relationship between sex and speech variables, hierarchical regression analyses showed that the superior/middle frontal cluster in surface area may serve as a significant predictor of speech rate variance, but only in females. 
We discuss several possible interpretations of how sex-related speech abilities could be represented differently in men and women in gray matter structures within the broad language network. abstract_id: PUBMED:35645786 Sex-Specific Causes and Consequences of White Matter Damage in a Middle-Aged Cohort. Objective: To evaluate potential sex-specific effects of multiple cardiovascular risk factors on white matter pathology in normal aging men and women, as well as potential sex-differences in the association of white matter pathology and cognitive functions. Methods: We analyzed cross-sectional data of 581 participants (median age: 53 years, 54% women) of the population-based cohort of the BiDirect Study who completed clinical examinations, five neuropsychological tests, and an 3T MRI examination. White matter pathology was determined by the extent of white matter hyperintensities (WMH) on FLAIR images as well as the magnitude of global fractional anisotropy (FA) based on diffusion tensor imaging. Main effects, interaction as well as sex-stratified generalized linear regression models were used to evaluate the moderating effect of sex on the association of hypertension, diabetes mellitus, smoking, and obesity with WMH and FA, respectively. Associations of imaging markers with cognitive test results were determined with linear regression models. Results: Hypertension showed stronger associations with more extensive WMH and less FA in women compared to men. Current smoking was associated with more severe WMH in women only. Adjusted for age and education, WMH were not significantly associated with cognitive tests, but higher FA was associated with better performance in motor function in both sexes and with executive functions in men, even after adjustment for cardiovascular risk factors. Conclusion: We observed a stronger association of hypertension and smoking with white matter damage in women, suggesting a higher susceptibility for vascular pathology in women. 
However, there was no association of WMH with cognition, and FA was associated with executive function tests only in men, suggesting a higher cognitive reserve in women. abstract_id: PUBMED:36177528 White matter microstructure shows sex differences in late childhood: Evidence from 6797 children. Sex differences in white matter microstructure have been robustly demonstrated in the adult brain using both conventional and advanced diffusion-weighted magnetic resonance imaging approaches. However, sex differences in white matter microstructure prior to adulthood remain poorly understood; previous developmental work focused on conventional microstructure metrics and yielded mixed results. Here, we rigorously characterized sex differences in white matter microstructure among over 6000 children from the Adolescent Brain Cognitive Development study who were between 9 and 10 years old. Microstructure was quantified using both the conventional model-diffusion tensor imaging (DTI)-and an advanced model, restriction spectrum imaging (RSI). DTI metrics included fractional anisotropy (FA) and mean, axial, and radial diffusivity (MD, AD, RD). RSI metrics included normalized isotropic, directional, and total intracellular diffusion (N0, ND, NT). We found significant and replicable sex differences in DTI or RSI microstructure metrics in every white matter region examined across the brain. Sex differences in FA were regionally specific. Across white matter regions, boys exhibited greater MD, AD, and RD than girls, on average. Girls displayed increased N0, ND, and NT compared to boys, on average, suggesting greater cell and neurite density in girls. Together, these robust and replicable findings provide an important foundation for understanding sex differences in health and disease. abstract_id: PUBMED:33588003 Sex-related Differences in Stress Reactivity and Cingulum White Matter. 
The prefrontal cortex and limbic system are important components of the neural circuit that underlies stress and anxiety. These brain regions are connected by white matter tracts that support neural communication including the cingulum, uncinate fasciculus, and the fornix/stria-terminalis. Determining the relationship between stress reactivity and these white matter tracts may provide new insight into factors that underlie stress susceptibility and resilience. Therefore, the present study investigated sex differences in the relationship between stress reactivity and generalized fractional anisotropy (GFA) of the white matter tracts that link the prefrontal cortex and limbic system. Diffusion weighted images were collected and deterministic tractography was completed in 104 young adults (55 men, 49 women; mean age = 18.87 SEM = 0.08). Participants also completed self-report questionnaires (e.g., Trait Anxiety) and donated saliva (later assayed for cortisol) before, during, and after the Trier Social Stress Test. Results revealed that stress reactivity (area under the curve increase in cortisol) and GFA of the cingulum bundle varied by sex. Specifically, men demonstrated greater cortisol reactivity and greater GFA within the cingulum than women. Further, an interaction between sex, stress reactivity, and cingulum GFA was observed in which men demonstrated a positive relationship while women demonstrated a negative relationship between GFA and cortisol reactivity. Finally, trait anxiety was positively associated with the GFA of the fornix/stria terminalis - the white matter pathways that connect the hippocampus/amygdala to the hypothalamus. These findings advance our understanding of factors that underlie individual differences in stress reactivity. abstract_id: PUBMED:28045130 Sex-related difference in human white matter volumes studied: Inspection of the corpus callosum and other white matter by VBM. 
It has been contended that any observed difference of the corpus callosum (CC) size between men and women is not sex-related but brain-size-related. A recent report, however, showed that the midsagittal CC area was significantly larger in women in 37 brain-size-matched pairs of normal young adults. Since this constituted strong evidence of sexual dimorphism and was obtained from publicly available data in OASIS, we examined volume differences within the CC and in other white matter using voxel-based morphometry (VBM). We created a three-dimensional region of interest of the CC and measured its volume. The VBM statistics were analyzed by permutation test and threshold-free cluster enhancement (TFCE) with the significance levels at FWER < 0.05. The CC volume was significantly larger in women in the same 37 brain-size-matched pairs. We found that the CC genu was the subregion showing the most significant sex-related difference. We also found that white matter in the bilateral anterior frontal regions and the left lateral white matter near to Broca's area were larger in women, whereas there were no significant larger regions in men. Since we used brain-size-matched subjects, our results gave strong volumetric evidence of localized sexual dimorphism of white matter. Answer: Yes, sex does matter when it comes to the structure and function of the human brain. Research has shown that there are sex differences in various aspects of brain anatomy and physiology, which can have implications for health, behavior, and susceptibility to certain conditions. For instance, sex chromosome dosage effects have been observed in white matter structure, with both X- and Y-chromosome dosage associated with widespread decreases in white matter volume (WMV) and some increases in specific regions like the genu of the corpus callosum and posterior thalamic radiation. These changes also correlate with psychopathology and are coupled with gray matter volume changes (PUBMED:34117759). 
In dementia with Lewy bodies (DLB), sex differences have been noted in gray matter networks, with male and female DLB patients showing different patterns of neurodegeneration compared to healthy controls. The disappearance of sex differences in structural gray matter networks in DLB patients may indicate a sex-dependent network vulnerability to the pathology (PUBMED:36778448). Sex and brain size have been found to contribute differentially to gray matter asymmetry, with males showing a stronger rightward asymmetry in the cerebellum, which is associated with sensorimotor, cognitive, and emotional functions (PUBMED:29287244). During brain aging, sex differences have been quantified in cortical morphometry and white matter microstructure, with females showing larger regions where volume decreases with age and a greater slope of volume decrease in gray matter. However, no significant sex differences were found in age-related regions or the slope of volume decrease in white matter (PUBMED:34148074). In nonhuman primates, such as capuchin monkeys, sex differences have been observed in white matter tracts, with females showing higher fractional anisotropy in certain regions, suggesting sex differences in brain organization that may relate to distinct socioecological niches (PUBMED:37127839). Sex differences have also been investigated in relation to narrative abilities and their neural correlates in healthy adults, with structural differences in gray and white matter potentially explaining disparities in linguistic abilities (PUBMED:32063823). Furthermore, sex-specific effects of cardiovascular risk factors on white matter pathology have been evaluated, with hypertension and smoking showing stronger associations with white matter damage in women compared to men. This suggests a higher susceptibility for vascular pathology in women, although the association of white matter pathology with cognition differed between sexes (PUBMED:35645786).
Instruction: Is there benefit in referring patients with fibromyalgia to a specialist clinic? Abstracts: abstract_id: PUBMED:15570653 Is there benefit in referring patients with fibromyalgia to a specialist clinic? Objective: To examine the benefit of specialist rheumatology consultation and followup for the first 238 patients referred to a tertiary care fibromyalgia (FM) clinic with emphasis on final diagnosis and outcome. Methods: A retrospective chart review was performed for the first 238 patients attending a rheumatology subspecialty FM clinic. The main variables of interest were management received at the clinic, final diagnosis, and outcome. Results: The final diagnosis was FM in 68%, and some other condition in the remaining 32%. Specialist contact was identified as useful in 73% of the total patient group, 96 with FM and 74 with non-FM. In the patients with FM who received followup in the clinic, outcome was judged favorable in 54%, whereas 46% showed no change or decline in health status. Conclusion: An important value of specialist rheumatology contact for patients with a symptom suggestive of diffuse musculoskeletal pain is to ensure that some other potentially treatable condition is not overlooked, rather than the provision of ongoing care for those with FM. Continued followup in a specialist clinic for patients with a primary diagnosis of FM is of questionable benefit. abstract_id: PUBMED:31197458 Three years of the appointment service office: high rate of false referrals and no-show patients - A retrospective analysis of the Rheumatism Center Schleswig-Holstein Mitte This article presents a retrospective analysis of patients who attended a rheumatology specialist practice (with two specialist rheumatologists) between 2016 and 2018 via an appointment arranged by the appointment service office (Terminservicestelle, TSS). Patients were analyzed in a pseudonymized manner and categorized according to the following criteria: 1.
patient did not keep the appointment, 2. patient had no inflammatory rheumatic disease, 3. patient suffered from an inflammatory rheumatic disease but had no urgent indications to be seen, and 4. patient suffered from an inflammatory rheumatic disease with urgent indications to be seen. From the start of the TSS at the beginning of 2016 until the end of 2018, a total of 103 patients were allocated to this specialist practice via the TSS. An appointment was offered to 102 patients, who underwent further analysis: 4.9% of the patients (n = 5) suffered from an acute inflammatory rheumatic disease and had urgent indications to be seen, 18.63% of patients (n = 19) suffered from an inflammatory rheumatic disease with no urgent indications to attend, 28.43% of patients (n = 29) did not keep the appointment, and 48.04% of patients (n = 49) did not have an inflammatory rheumatic disease but other diseases, such as osteoarthritis, fibromyalgia and other forms of chronic pain syndromes. The positive predictive value (PPV) for patients with inflammatory rheumatic disease and urgent indications was 0.05 when all patients were included in the analysis and 0.07 when only patients who showed up were included. This retrospective analysis demonstrates that the TSS does not fulfill its purpose, namely to promptly arrange appointments at a specialist rheumatologist practice for patients with an acute inflammatory rheumatic disease. abstract_id: PUBMED:32753805 Burden of Illness Due to Fibromyalgia in a Neurology Clinic. Background Fibromyalgia (FM) is a common disorder in the general population and causes an increased patient load in hospitals and specialty clinics. FM attendance will be high in clinics dealing with neuropathic pain and other pain syndromes. Though the prevalence of FM has been studied in community and pain clinics in other countries, it has not so far been studied in India. A study was therefore planned in the neurology clinic of a teaching government hospital.
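The PPV figures in the TSS analysis (PUBMED:31197458) follow directly from the four reported patient counts; a minimal sketch of the arithmetic (variable names are ours, not from the paper):

```python
# Sanity check of the PPV figures reported for the appointment service
# office (TSS) analysis; counts are taken directly from the abstract.
urgent = 5        # inflammatory rheumatic disease, urgent indications
non_urgent = 19   # inflammatory rheumatic disease, no urgent indications
no_show = 29      # did not keep the appointment
other = 49        # no inflammatory rheumatic disease (e.g., osteoarthritis, FM)

offered = urgent + non_urgent + no_show + other   # 102 patients analyzed
attended = offered - no_show                      # 73 patients who showed up

ppv_all = urgent / offered        # PPV over every allocated patient
ppv_attended = urgent / attended  # PPV over patients who kept the appointment
print(round(ppv_all, 2), round(ppv_attended, 2))  # 0.05 0.07
```

With only 5 of 102 allocations truly urgent, both figures match the abstract's conclusion that the TSS rarely routed urgent inflammatory cases to the specialist practice.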
At present, they are treated mainly with nonsteroidal anti-inflammatory drugs (NSAIDs), which are a public health hazard. Methods Using the 2016 revision of the 2010/2011 American College of Rheumatology criteria for FM, patients were screened in the neurology OPD. The proportion and clinical profile were noted. The study was continued for 6 months until the sample size was met. Results A total of 2,300 patients were screened. Two hundred and ninety-eight FM patients were identified among them. The proportion was 12.96%. Delayed diagnosis of more than a year occurred in 55%. Only 29.2% were treated, but none had been offered cognitive behavioral therapy (CBT) before. NSAIDs for pain were given to 51.01%. Conclusion The proportion of FM detected is considerable. The affliction of homemakers and manual laborers, delayed diagnosis, coexisting comorbid illness, and treatment of pain with NSAIDs are causes of concern. Clinicians should be sensitized to the clinical profile and criteria of FM. Patients should be diagnosed and treated with CBT at the earliest, and NSAIDs should be avoided as far as possible. abstract_id: PUBMED:27541267 One stop multidisciplinary pain clinic for survivors of torture. Bianca Kuehler and Susan Childs speak to Jade Parker, Commissioning Editor: Dr Bianca Kuehler initially qualified in Germany as an anesthetist in 1993 and is on the specialist register in the UK. After moving to the UK she obtained a Diploma in Occupational Health to supplement the understanding and implication of chronic pain on the work environment. She is very interested in multidisciplinary approaches in the treatment of chronic and acute pain patients and, therefore, working closely with Dr Childs opened a plethora of opportunities to develop new services including a fibromyalgia clinic and a specialist clinic for patients who are survivors of torture. Dr Susan Childs is an experienced clinical psychologist who has worked within health psychology and mental health since 1997. Her particular area of expertise is chronic pain.
Alongside this, she has developed expertise in the assessment and treatment of a wide range of psychological issues. She has more recently focused upon co-developing services alongside her medical lead and co-facilitator, Dr Bianca Kuehler, for patients who are survivors of torture. Susan leads therapy services at a major London National Health Service trust in a Consultant capacity and supports a team of physicians, surgeons, pain specialist physicians, physiotherapists and clinical specialist nurses. abstract_id: PUBMED:28082524 Rates and Correlates of Pain Specialty Clinic Use Nationally in the Veterans Health Administration. Objective: Chronic pain management is a growing focus of attention, in part because of concern over excessive use of opioids for treatment of chronic noncancer pain. In the Veterans Health Administration (VHA), pain specialty clinics have been established to address the needs of patients with challenging pain issues. The current study identified characteristics of such patients in a national sample of VHA service users in fiscal year 2012. Design: Bivariate analyses compared patients diagnosed with pain who visited a pain specialty clinic with those who did not on sociodemographic characteristics, medical, pain, and psychiatric diagnoses, health service use, and opioid and psychotropic drug use. Logistic regression identified variables that independently differentiated pain clinic users from nonusers. Results: Altogether, 122,240 of 2,025,765 patients with pain diagnoses (5.79%) attended pain specialty clinics. Pain clinic users had higher rates of muscle spasms, neuralgia, neuritis, radiculitis, and fibromyalgia, as well as major depression and personality disorders. Further, a fibromyalgia diagnosis was the strongest independent correlate of pain clinic attendance, along with the number of medical-surgical clinic visits. 
Veterans attending a pain clinic also received more opioids than those not attending (10.4 vs 6.7 prescriptions, respectively), but there were no substantial differences in other factors. Conclusions: Patients attending pain specialty clinics have more difficult-to-treat pain conditions and comorbid psychiatric disorders that are independent of major medical diagnoses, use more outpatient services, and receive a greater number of opioid prescriptions. These data support the inclusion of mental health care in the specialized treatment of chronic pain. abstract_id: PUBMED:21962316 Opioid use, misuse, and abuse in patients labeled as fibromyalgia. Background: As pain is the cardinal symptom of fibromyalgia, it is logical that treatments directed toward pain relief will be commonly used. Analgesic drug therapy remains the traditional treatment intervention for most chronic pain conditions, with a progressive increased use of opioids in the past 20 years. Concerns about efficacy, risk-benefit ratio, and possible long-term effects of chronic opioid therapy have been raised. There is limited information about opioid treatment in fibromyalgia, with all current guidelines discouraging opioid use. Methods: A chart review of all patients referred to a tertiary care pain center clinic with a referring diagnosis of fibromyalgia was conducted to evaluate use of opioid medications. Results: We have recorded opioid use by 32% of 457 patients referred to a multidisciplinary fibromyalgia clinic, with over two thirds using strong opioids. Opioid use was more commonly associated with lower education, unemployment, disability payments, current unstable psychiatric disorder, a history of substance abuse, and previous suicide attempts. Conclusion: We have observed negative health and psychosocial status in patients using opioids and labeled as fibromyalgia. Prolonged use of opioids in fibromyalgia requires evaluation. 
abstract_id: PUBMED:37378484 Exploring the prevalence and clinical impact of fibromyalgia syndrome in patients with shoulder diseases: a cross-sectional study. Objectives: Fibromyalgia (FM) is a musculoskeletal syndrome characterised by widespread chronic pain often associated with systemic manifestations such as mood disturbances, persistent fatigue, unrefreshed sleep, and cognitive impairment, substantially impacting patients' health-related quality of life. Based on this background, this study aimed to evaluate the prevalence of FM syndrome in patients referring to an outpatient clinic in a central orthopaedic institute for a painful shoulder. The demographic and clinical characteristics of patients fulfilling the criteria for FM syndrome were also correlated with the severity of symptoms. Methods: Consecutive adult patients referring to the shoulder orthopaedic outpatient clinic of the ASST Gaetano Pini-CTO, Milan, Italy, to undergo a clinical evaluation were assessed for eligibility in an observational, cross-sectional, monocentric study. Results: Two hundred and one patients were enrolled: 103 males (51.2%) and 98 females (48.8%). The mean age ± standard deviation (SD) of the patients was 55.3 ± 14.3 years in the whole population. Of all the patients, 12 (5.97%) fulfilled the 2016 FM syndrome criteria based on the FM severity scale (FSS). Of these, 11 were females (91.7%, p=0.002). The mean age (SD) was 61.3 (10.8) in the positive criteria sample. Patients with positive criteria had a mean FIQR of 57.3 ± 16.8 (range 21.6-81.5). Conclusions: We found that FM syndrome is more frequent than expected in a cohort of patients referring to a shoulder orthopaedic outpatient clinic, with a prevalence rate (6%) more than double that of the general population (2%). abstract_id: PUBMED:35669936 Postacute Sequelae of SARS-CoV-2 Infection-Lessons Learned From a Coordinated Health System Response.
Objective: To outline a consensus-designed process for triaging and managing patients with post-coronavirus disease (COVID-19) syndrome at Mayo Clinic. Patients And Methods: We convened a central multidisciplinary team including members from the departments of general internal medicine, occupational medicine, physical medicine and rehabilitation, psychology, allergy and immunology, infectious disease, pulmonology, neurology, cardiology, and pediatrics and otorhinolaryngology with membership from all Mayo Clinic sites in Arizona, Florida, Iowa, Minnesota, and Wisconsin. Results: Consensus recommendations were made for the best practice guidelines on triaging and managing patients. Several innovations were agreed upon, including a postacute sequelae of COVID-19-specific appointment request form for data collection, a bioregistry, a biorepository, and a postacute sequelae of COVID-19-specific treatment program. Conclusion: Given that each clinical site had individual clinical practices, these recommendations were implemented using different models, which may provide broad applicability to other clinical settings. abstract_id: PUBMED:21681062 Description of a half-day interprofessional fibromyalgia clinic with an evaluation of patient satisfaction. Objective: This study aimed to evaluate patient satisfaction of a half-day interprofessional fibromyalgia clinic. Design: A convenience sample of 167 consecutively enrolled patients were evaluated retrospectively for clinical and laboratory findings. After 2 yrs, a patient satisfaction survey and the Fibromyalgia Impact Questionnaire were mailed for follow-up analyses. Results: Sixty-five patients returned the mailed questionnaire (54.2 % response rate). More than 90% of the patients rated their clinic experience and provider visits as very positive or positive. 
Notably, those patients who reported that their primary care provider regarded fibromyalgia as a valid diagnosis were willing to implement the consultant's recommendations, and those who were prescribed medications were more likely to experience an improvement in symptoms and function. Sleep, fatigue, pain, and work interference were the variables most likely to improve, whereas mood was largely unchanged. Finally, the patients with insulin-like growth factor-1 levels that were within the reference range based on age had a better Fibromyalgia Impact Questionnaire response compared with those with low insulin-like growth factor-1 levels. Conclusions: A half-day fibromyalgia treatment program is feasible and acceptable to patients. This program was designed to provide an expert interprofessional assessment and treatment recommendations to the referring primary care provider. Overall, the program was positively rated by the participants who, overall, experienced a modest improvement in their fibromyalgia symptoms over a 2-yr period. abstract_id: PUBMED:36105499 Maximizing patient benefit through a reversed pathway from specialist to generalist: the case of chronic pain. Polymyalgia Rheumatica (PMR) is a syndrome characterized by chronic pain and/or stiffness in the neck, shoulders or upper arms and hips. It affects adult patients usually over 50 years old and is treated with low-dose oral corticosteroids. In this case, a 68-year-old female with a history of PMR, diagnosed by a specialist sporadically seen in the past, presented to a primary care physician due to herpes zoster (HZ) infection. Thorough history taking, along with a careful review of previous laboratory results, raised serious doubts concerning her diagnosis (PMR). Because the patient described diffuse pain throughout her body, sleep disturbances and a depressed emotional state, fibromyalgia was suspected instead and appropriate treatment was given. 
The patient remained free of symptoms and corticosteroids for almost a year. Information from this case may help to point out that PMR is a disorder that can be easily confused with other chronic pain conditions with similar manifestations, especially when the initial diagnosis is sped up in terms of consultation depth and care continuity. Under certain circumstances, primary care can lead to improved clinical outcomes. Answer: Referring patients with fibromyalgia to a specialist clinic can have mixed benefits. A retrospective chart review of the first 238 patients attending a rheumatology subspecialty fibromyalgia (FM) clinic found that a final diagnosis of FM was confirmed in 68% of cases, while the remaining 32% were diagnosed with some other condition. Specialist contact was deemed useful in 73% of the total patient group, which included both FM and non-FM patients. However, for patients with FM who received follow-up in the clinic, the outcome was judged favorable in only 54%, with 46% showing no change or a decline in health status. This suggests that while specialist rheumatology contact is valuable for ensuring accurate diagnosis and ruling out other treatable conditions, ongoing care in a specialist clinic for FM may not always be beneficial (PUBMED:15570653). In another study, a retrospective analysis of patients attending a rheumatology specialist practice revealed a high rate of false referrals and no-show patients. Only a small percentage of patients referred through an appointment service office had acute inflammatory rheumatic diseases with urgent indications to be seen. This indicates that the referral system may not be effectively prioritizing patients with acute conditions, which could include those with FM (PUBMED:31197458). 
A study in a neurology clinic found that a significant proportion of patients (12.96%) were diagnosed with FM, with many experiencing delayed diagnosis and inadequate treatment, including the use of NSAIDs, which are a public health hazard. This highlights the need for better clinical awareness and earlier intervention with appropriate treatments such as cognitive behavioral therapy (CBT), rather than relying on NSAIDs (PUBMED:32753805). In summary, while specialist clinics can play a crucial role in accurately diagnosing FM and ruling out other conditions, the benefits of ongoing care in such clinics for FM patients are less clear. There may be a need for improved referral systems and treatment approaches that focus on early diagnosis and appropriate non-pharmacological interventions (PUBMED:15570653; PUBMED:31197458; PUBMED:32753805).
Instruction: Can a health clinic-based intervention increase safety in abused women? Abstracts: abstract_id: PUBMED:32154988 Addressing the Needs of Abused Chinese Women Through a Community-Based Participatory Approach. Purpose: Intimate partner violence against women is a priority global health issue. It has had serious negative consequences on women's health and has led to huge burdens on the community. Numerous interventions have been demonstrated to have positive effects, and the majority of them have been developed using a top-down approach. However, the specific needs of abused Chinese women have not been fully addressed; incorporating women's input seems to be an essential element for the success of any intervention. This article describes the experiences and success in using a community-based participatory approach (CBPA) to develop a culturally appropriate intervention for abused Chinese women in order to address their multifaceted needs. Design: A CBPA was adopted in this study to develop a culturally appropriate intervention in order to address the multifaceted needs of abused Chinese women. Methods: A core group of six participants (three abused Chinese women, one researcher, and two social workers who provided community services for abused Chinese women) was formed in a local community center in Hong Kong. The three Chinese women were recruited from the local community center, and according to the Abuse Assessment Screen they had been in an intimate relationship in the preceding 12 months and had been abused by an intimate partner. Findings: Three core group meetings were conducted over 6 weeks for intervention development. An integrated multicomponent intervention was developed, comprising a four-part, women-centered program called "Women-centred, we are with you." The program was implemented and completed over 4 months (October 2015 to January 2016). Eighteen abused Chinese women participated in the intervention. 
Conclusions: A CBPA provides a genuine spirit of empowerment for abused Chinese women. This approach was well accepted by the women and enabled them to develop more culturally appropriate interventions in fulfilling their specific needs. Clinical Relevance: Researchers can consider using a CBPA to develop and tailor a culturally appropriate intervention for groups such as abused Chinese women in order to improve their health and eliminate disparities by addressing their specific and multifaceted needs. abstract_id: PUBMED:12464754 An intervention to increase safety behaviors of abused women: results of a randomized clinical trial. Background: Although intimate partner violence is recognized as a major threat to women's health, few interventions have been developed or tested. Objective: To test an intervention administered to abused women in order to increase safety-seeking behaviors. Method: A two-group clinical trial randomized 75 abused women to receive six telephone intervention sessions on safety behaviors. A control group of 75 women received standard care. Women in both groups were re-interviewed at 3 months and 6 months post-initial measurement. Results: Using repeated measures analysis of variance (ANOVA), we found significantly [F(2,146) = 5.11, P = .007] more adopted safety behaviors reported by women in the intervention group than by women in the control group at both the 3-month [F(1,74) = 19.70, P < .001] and 6-month [F(1,74) = 15.90, P < .001] interviews. The effect size (ES) of the intervention was large at 3 months (ES = 1.5) and remained substantial at 6 months (ES = 0.56). Discussion: These findings demonstrate that an intervention to increase safety behaviors of abused women is highly effective when offered following an abusive incident and remains effective for 6 months. abstract_id: PUBMED:9475129 Safety behaviors of abused women after an intervention during pregnancy.
Objective: To evaluate an intervention protocol, administered during pregnancy, for increasing safety-seeking behaviors of abused women. Design: Prospective, ethnically stratified cohort analysis. Setting: Public prenatal clinics. Participants: Pregnant women reporting physical or sexual abuse in the year before or during the present pregnancy. One hundred thirty-two women met study criteria, received the intervention, and were followed for 1 year after the completion of the pregnancy. Intervention: Three education, advocacy, and community referral sessions that included information on safety behaviors. Main Outcome Measure: Adoption of safety behaviors by abused women. Safety behaviors were measured before the intervention, twice during pregnancy, and at 2, 6, and 12 months after completion of the pregnancy. Results: Repeated measures analysis of variance showed a significant increase in adoption of each safety behavior (p < .0001), with most behaviors showing a significant increase after the first intervention session. Conclusions: Pregnant women who were abused and were offered an intervention protocol report a significant increase in safety behavior adoption during and after pregnancy. Abuse during pregnancy is common. Identification of abuse and immediate clinical intervention that includes information about safety behaviors can result in safety behavior adoption that may prevent future abuse and increase the safety and well-being of women and infants. abstract_id: PUBMED:19627223 Can a health clinic-based intervention increase safety in abused women? Results from a pilot study. Background: Intimate partner violence (IPV) has been related to a number of adverse physical and mental health consequences. Research has identified relatively high IPV victimization rates among women seeking care in primary healthcare and emergency department settings. Studies have shown the potential usefulness of screening and intervention in these settings. 
Methods: This article reports results from a pilot study designed to assess the effect of a clinic-based intervention on women's engagement in safety-promoting behaviors. This study was conducted in a primary healthcare clinic for the uninsured in Baltimore, Maryland. Women who screened positive for recent IPV were randomly assigned to an intervention or control group. The intervention consisted of an on-site counseling session and six telephone counseling sessions over a 3-month period. Women in the control group received health information brochures, a list of community resources, and a monthly telephone call to confirm contact information. Results: A total of 41 women participated in the study. Results demonstrated that women who received the clinic-based intervention engaged in significantly more safety-promoting behaviors than did women in the control group. Conclusions: The results of this study and others indicate the potential usefulness of screening and intervention in a medical setting. abstract_id: PUBMED:37440255 Development of an HIV/STI and partner violence health promotion intervention for abused US Virgin Islands women. Women in the US Virgin Islands (USVI) experience intimate partner violence (IPV) and human immunodeficiency virus (HIV) at disproportionate rates compared to women on the US mainland. Women in violent relationships report experiencing controlling behaviours that decrease their ability to negotiate for sex using condoms or to prevent unwanted pregnancies. Though several evidence-based interventions exist to prevent either IPV or HIV, few address them through an integrated health promotion approach or attend to particular USVI cultural mores. This article describes the systematic development of a theory based, culturally tailored, integrated health promotion intervention that addresses IPV and HIV among USVI women experiencing abuse. 
The process included: (i) identifying and integrating evidence-based health promotion interventions, (ii) conducting formative research using focus groups, (iii) synthesizing focus group data to inform intervention development and (iv) developing a culturally and linguistically appropriate intervention specific to the needs and concerns of USVI women. The Empowered Sisters Project: Making Choices Reducing Risks (ESP) was developed through this research. ESP is a three-session health promotion curriculum focussed on enhancing sexual health and safety among women experiencing abuse. The ESP intervention components included promoting condom use, increasing IPV and HIV knowledge and developing a personalized safety plan. Health professionals facilitated individual intervention sessions using culturally tailored visual media and scripts. This program focussed on experiences of women living in the USVI and has implications for utility across the Caribbean diaspora. abstract_id: PUBMED:24977721 Feasibility of an online safety planning intervention for rural and urban pregnant abused women. Background: Intimate partner violence (IPV) in pregnancy is common and harmful to maternal-child health. Safety planning is the gold standard for intervention, but most abused women never access safety planning. Pregnant women may face increased barriers to safety planning and risk of severe IPV, particularly if they are also rural residents. Internet-based safety planning interventions may be useful, but no such interventions specific to the needs of pregnant women have been developed. Objectives: The aim was to evaluate feasibility (usability, safety, and acceptability) of Internet-based safety planning for rural and urban abused pregnant women and practicality of recruitment procedures for future trials. 
Methods: An existing Internet-based safety decision aid for pregnant and postpartum women was adapted; initial content validity was established with survivors of IPV, advocates, and national IPV experts; and a convenience sample of community-dwelling abused pregnant women was recruited and randomized into two groups to test the decision aid. Results: Fifty-nine participants were enrolled; 46 completed the baseline session, 41% of whom (n = 19) resided in nonmetropolitan counties. Participants' average gestational age was 20.2 weeks, and 28.3% resided with the abusive partner. Participants reported severe IPV at baseline (mean Danger Assessment score of 16.1), but all were able to identify a safe computer, and 73.9% completed the baseline session in less than 1 week, with no adverse events reported. Discussion: These findings provide preliminary evidence for the feasibility, acceptability, and safety of an Internet-based safety decision aid for urban and rural abused pregnant women. abstract_id: PUBMED:29299613 The effectiveness of critical time intervention for abused women leaving women's shelters: a randomized controlled trial. Objectives: To examine the effectiveness of critical time intervention (CTI)-an evidence-based intervention-for abused women transitioning from women's shelters to community living. Methods: A randomized controlled trial was conducted in nine women's shelters across the Netherlands. 136 women were assigned to CTI (n = 70) or care-as-usual (n = 66). Data were analyzed using intention-to-treat three-level mixed-effects models. Results: Women in the CTI group had significant fewer symptoms of post-traumatic stress (secondary outcome) (adjusted mean difference - 7.27, 95% CI - 14.31 to - 0.22) and a significant fourfold reduction in unmet care needs (intermediate outcome) (95% CI 0.06-0.94) compared to women in the care-as-usual group. 
No differences were found for quality of life (primary outcome), re-abuse, symptoms of depression, psychological distress, self-esteem (secondary outcomes), family support, and social support (intermediate outcomes). Conclusions: This study shows that CTI is effective in a population of abused women in terms of a reduction of post-traumatic stress symptoms and unmet care needs. Because follow-up ended after the prescribed intervention period, further research is needed to determine the full long-term effects of CTI in this population. abstract_id: PUBMED:15108570 Increasing the safety-promoting behaviors of abused women. Overview: Despite an epidemic of intimate-partner violence against women, and general agreement that women should be screened for it, few assessment and intervention protocols have been evaluated in controlled studies. To test a telephone intervention intended to increase the "safety-promoting behavior" of abused women, 75 women received six telephone calls over a period of eight weeks in which safety-promoting behaviors were discussed. A control group of 75 women received usual care. Women in both groups received follow-up calls to assess safety-promoting behaviors at three, six, 12, and 18 months after intake. Analysis showed that the women in the intervention group practiced significantly (P < 0.01) more safety-promoting behaviors than women in the control group at each assessment. On average, women in the intervention group practiced almost two more safety-promoting behaviors than they had at time of intake and nearly two more than women in the control group; the additional behaviors were practiced for 18 months. This nursing intervention requires only 54 minutes to complete (six nine-minute telephone calls) and can be integrated into any health care setting. Because less than one hour of professional nursing time is involved, the cost of the intervention is minimal. 
Future research should determine whether the adoption of safety-promoting behaviors by abused women averts trauma and its subsequent health care costs. abstract_id: PUBMED:28072474 Abused women's experiences of a primary care identification and referral intervention: a case study analysis. Aims: The aim of this study was to report the findings of a qualitative case study that investigated abused women's experiences of an identification and referral intervention and to discuss the implications for nurses, specifically those working in primary and community care. Background: Domestic violence and abuse is a significant public health issue globally but it is a hidden problem that is under-reported. In the UK, Identification and Referral to Improve Safety is a primary care-based intervention that has been found to increase referral rates of abused women to support and safety services. This paper reports on the findings of an evaluation study of two sites in England. Design: Qualitative study with a case study design. Methods: In line with case study design, the entire evaluation study employed multiple data collection methods. We report on the qualitative interviews with women referred through the programme. The aim was to elicit their experiences of the three aspects of the intervention: identification; referral; safety. Data collection took place March 2016. Findings: Ten women took part. Eight had exited the abusive relationship but two remained with the partner who had perpetrated the abuse. Women were overwhelmingly positive about the programme and irrespective of whether they had remained or exited the relationship all reported perceptions of increased safety and improved health. Conclusion: Nurses have an important role to play in identifying domestic violence and abuse and in referral and safety planning. 
As part of a portfolio of domestic violence and abuse interventions, those that empower women to take control of their safety (such as Identification and Referral to Improve Safety) are important. abstract_id: PUBMED:36388330 Elements for successful implementation of a clinic-based health literacy intervention. Improving health literacy is a national public health priority. Given the context of the COVID-19 pandemic, it is even more critical for health and medical information to be clear and understandable for patients and their families. Clinic-based programs to improve health literacy need to be pragmatic, feasible, and helpful for the implementing clinic and patients. This paper describes the development, implementation, and evaluation of a pragmatic, clinic-based health literacy intervention in a safety-net clinic that serves uninsured and indigent patients. Study methods are guided by a previous pilot study and components recommended for pragmatic interventions. An electronic readiness assessment was distributed to out-patient clinics affiliated with a statewide hospital association. The AskMe3 tool was used for the intervention as it is evidence informed and relatively easy to implement. Implementation included ongoing dialogue between the clinic and the academic research team. Within the implementing clinic, data collected from patients via verbally administered questionnaires was analyzed using descriptive statistics and chi-squares. Interview data collected from the clinic director was analyzed qualitatively for themes. The implementing clinic had some of the lowest average scores of the 34 clinics who participated in the initial readiness assessment. Despite this, they were able to successfully implement the health literacy intervention during a global pandemic. Eighty-eight participants completed patient questionnaires at this clinic. Most patients (96%) agreed the AskMe3 questions helped them talk with the doctor or nurse at their current appointment. 
Most (99%) also perceived the AskMe3 tool to be very helpful when used in a clinical setting. The clinic director offered that the staff initially thought the intervention would be difficult to implement. However, implementation by clinic volunteers with encouragement and prioritization of health literacy by the clinic director contributed to success. When considering interventions for clinical settings, a pragmatic approach can help with selection and implementation of a program that fits with the realities on the ground. Further, frequent technical assistance can help resolve implementation barriers. Interventions utilizing tools such as AskMe3, because of their simplicity, allow creative solutions to capacity issues for clinics who see a need for health literacy improvements. Answer: Yes, a health clinic-based intervention can increase safety in abused women. Studies have demonstrated that interventions administered in health clinic settings are effective in promoting safety behaviors among abused women. For instance, a pilot study conducted in a primary healthcare clinic for the uninsured in Baltimore, Maryland, found that women who received a clinic-based intervention engaged in significantly more safety-promoting behaviors than did women in the control group (PUBMED:19627223). Similarly, a randomized clinical trial showed that an intervention to increase safety behaviors of abused women was highly effective when offered following an abusive incident and remained effective for 6 months (PUBMED:12464754). Moreover, a randomized controlled trial examining the effectiveness of critical time intervention (CTI) for abused women transitioning from women's shelters to community living found that women in the CTI group had significantly fewer symptoms of post-traumatic stress and a significant reduction in unmet care needs compared to women in the care-as-usual group (PUBMED:29299613). 
Another study that tested a telephone intervention intended to increase the safety-promoting behavior of abused women found that the women in the intervention group practiced significantly more safety-promoting behaviors than women in the control group at each assessment (PUBMED:15108570). Additionally, a qualitative case study that investigated abused women's experiences of an identification and referral intervention in primary care settings reported that women were overwhelmingly positive about the program and all reported perceptions of increased safety and improved health (PUBMED:28072474). These findings suggest that health clinic-based interventions are not only feasible but also effective in enhancing the safety and well-being of abused women.
Instruction: Tumor progression while on chemotherapy: a contraindication to liver resection for multiple colorectal metastases? Abstracts: abstract_id: PUBMED:21160969 Aggressive liver resection including major-vessel resection for colorectal liver metastases. Aim: To clarify short- and long-term outcomes of combined resection of liver with major vessels in treating colorectal liver metastases. Methods: Clinicopathologic data were evaluated for 312 patients who underwent 371 liver resections for metastases from colorectal cancer. Twenty-five patients who underwent resection and reconstruction of retrohepatic vena cava, major hepatic veins, or hepatic venous confluence during hepatectomies were compared with other patients, who underwent conventional liver resections. Results: Morbidity was 20% (75/371) and mortality was 0.3% (1/312) in all patients after hepatectomy. Hepatic resection combined with major-vessel resection/reconstruction could be performed with acceptable morbidity (16%) and no mortality. By multivariate analysis, repeat liver resection (relative risk or RR, 5.690; P = 0.0008) was independently associated with resection/reconstruction of major vessels during hepatectomy, as were tumor size exceeding 30 mm (RR, 3.338; P = 0.0292) and prehepatectomy chemotherapy (RR, 3.485; P = 0.0083). When 312 patients who underwent a first liver resection for initial liver metastases were divided into those with conventional resection (n = 296) and those with combined resection of liver and major vessels (n = 16), overall survival and disease-free rates were significantly poorer in the combined resection group than in the conventional resection group (P = 0.02 and P < 0.01, respectively). A similar tendency concerning overall survival was observed for conventional resection (n = 37) vs major-vessel resection combined with liver resection (n = 7) performed as a second resection following liver recurrences (P = 0.09). 
Combined major-vessel resection at first hepatectomy (not performed; 0.512; P = 0.0394) and histologic major-vessel invasion at a second hepatectomy (negative; 0.057; P = 0.0005) were identified as independent factors affecting survival by multivariate analysis. Conclusion: Hepatic resection including major-vessel resection/reconstruction for colorectal liver metastases can be performed with acceptable operative risk. However, such aggressive approaches are beneficial mainly in patients responding to effective prehepatectomy chemotherapy. abstract_id: PUBMED:36185562 Oncologic aspects of the decision-making process for surgical approach for colorectal liver metastases progressing during chemotherapy. Colorectal cancer represents the third most diagnosed malignancy in the world. The liver is the main site of metastatic disease, affected in 30% of patients with newly diagnosed disease. Complete resection is considered the only potentially curative treatment for colorectal liver metastasis (CRLM), with a 5-year survival rate ranging from 35% to 58%. However, up to 80% of patients have initially unresectable disease, due to extrahepatic disease or bilobar multiple liver nodules. The availability of increasingly effective systemic chemotherapy has contributed to converting patients with initially unresectable liver metastases to resectable disease, improving long-term outcomes, and accessing tumor biology. In recent years, response to preoperative systemic chemotherapy before liver resection has been established as a major prognostic factor. Some studies have demonstrated that patients with regression of hepatic metastases while on chemotherapy have improved outcomes when compared to patients with stabilization or progression of the disease. 
Even if disease progression during chemotherapy represents an independent negative prognostic factor, some patients may still benefit from surgery, given the role of this modality as the main treatment with curative intent for patients with CRLM. In selected cases, based on size, the number of lesions, and tumor markers, surgery may be offered despite the less favorable prognosis and as an option for non-chemo responders. abstract_id: PUBMED:35216396 HOXB9 Overexpression Promotes Colorectal Cancer Progression and Is Associated with Worse Survival in Liver Resection Patients for Colorectal Liver Metastases. As is known, HOXB9 is an important factor affecting disease progression and overall survival (OS) in cancer. However, its role in colorectal cancer (CRC) remains unclear. We aimed to explore the role of HOXB9 in CRC progression and its association with OS in colorectal liver metastases (CRLM). We analysed differential HOXB9 expression in CRC using the Tissue Cancer Genome Atlas database (TCGA). We modulated HOXB9 expression in vitro to assess its impact on cell proliferation and epithelial-mesenchymal transition (EMT). Lastly, we explored the association of HOXB9 protein expression with OS, using an institutional patient cohort (n = 110) who underwent liver resection for CRLM. Furthermore, HOXB9 was upregulated in TCGA-CRC (n = 644) vs. normal tissue (n = 51) and its expression levels were elevated in KRAS mutations (p < 0.0001). In vitro, HOXB9 overexpression increased cell proliferation (p < 0.001) and upregulated the mRNA expression of EMT markers (VIM, CDH2, ZEB1, ZEB2, SNAI1 and SNAI2) while downregulated CDH1, (p < 0.05 for all comparisons). Conversely, HOXB9 silencing disrupted cell growth (p < 0.0001). High HOXB9 expression (HR = 3.82, 95% CI: 1.59-9.2, p = 0.003) was independently associated with worse OS in CRLM-HOXB9-expressing patients after liver resection. 
In conclusion, HOXB9 may be associated with worse OS in CRLM and may promote CRC progression, whereas HOXB9 silencing may inhibit CRC growth. abstract_id: PUBMED:36547120 Colorectal Cancer Liver Metastases: Is an R1 Hepatic Resection Accepted? Metastatic colorectal cancer is associated with a rather dismal 5-year overall survival. The liver is the most commonly affected organ. Improved 5-year survival rates after successful hepatic resections for metastases confined to the liver have been reported. Certainly, a hepatectomy that results in an incomplete tumor resection, in terms of leaving macroscopic residual tumor in the future liver remnant, is not associated with survival benefits. However, the prognostic implications of a microscopically positive surgical margin or a clear margin of less than 1 mm (R1) on pathology are debatable. Although it has been a field of extensive research, the relevant literature often reports contradictory results. The purpose of the present study was to define, assess the risk factors for, and, ultimately, analyze the effect that an R1 hepatic resection for colorectal cancer liver metastases might have on local recurrence rates and long-term prognosis by reviewing the relevant literature. Achieving an R0 hepatic resection, optimally with more than 1 mm of clear margin, should always be the goal. However, in the era of the aggressive multimodality treatment of liver metastatic colorectal cancer, an R1 resection might be the cost of increasing the pool of patients finally eligible for resection. The majority of literature reports have highlighted the detrimental effect of R1 resections on local recurrence and overall survival. 
However, there are indeed studies that degraded the prognostic handicap as a consequence of an R1 resection in selected patients and highlighted the presence of RAS mutations, the response to chemotherapy, and, in general, factors that reflect the biology of the disease as important, if not the determinant, prognostic factors. In these patients, the aggressive disease biology seems to outperform the resection margin status as a prognostic factor, and the recorded differences between R1 and R0 resections are equalized. Properly and accurately defining this patient group is a future challenge in the field of the surgical treatment of colorectal cancer liver metastases. abstract_id: PUBMED:15570210 Tumor progression while on chemotherapy: a contraindication to liver resection for multiple colorectal metastases? Objective: To evaluate the influence of the response to preoperative chemotherapy, especially tumor progression, on the outcome following resection of multiple colorectal liver metastases (CRM). Summary Background Data: Hepatic resection is the only treatment that currently offers a chance of long-term survival, although it is associated with a poor outcome in patients with multinodular CRM. Because of its better efficacy, chemotherapy is increasingly proposed as neoadjuvant treatment in such patients to allow or to facilitate the radicality of resection. However, little is known of the efficacy of such a strategy and the influence of the response to chemotherapy on the outcome of hepatic resection. Methods: We retrospectively analyzed the course of 131 consecutive patients who underwent liver resection for multiple (> or =4) CRM after systemic chemotherapy between 1993 and 2000, representing 30% of all liver resections performed for CRM in our institution during that period. Chemotherapy included mainly 5-fluorouracil, leucovorin, and either oxaliplatin or irinotecan for a mean of 9.8 courses (median, 9 courses). 
Patients were divided into 3 groups according to the type of response obtained to preoperative chemotherapy. All liver resections were performed with curative intent. We analyzed patient outcome in relation to response to preoperative chemotherapy. Results: There were 58 patients (44%) who underwent hepatectomy after an objective tumor response (group 1), 39 (30%) after tumor stabilization (group 2), and 34 (26%) after tumor progression (group 3). At the time of diagnosis, mean tumor size and number of metastases were similar in the 3 groups. No differences were observed regarding patient demographics, characteristics of the primary tumor, type of liver resection, and postoperative course. First line treatments were different between groups with a higher proportion of oxaliplatin- and/or irinotecan-based treatments in group 1 (P < 0.01). A higher number of lines of chemotherapy were used in group 2 (P = 0.002). Overall survival was 86%, 41%, and 28% at 1, 3, and 5 years, respectively. Five-year survival was much lower in group 3 compared with groups 1 and 2 (8% vs. 37% and 30%, respectively at 5 years, P < 0.0001). Disease-free survival was 3% compared with 21% and 20%, respectively (P = 0.02). In a multivariate analysis, tumor progression on chemotherapy (P < 0.0001), elevated preoperative serum CA 19-9 (P < 0.0001), number of resected metastases (P < 0.001), and the number of lines of chemotherapy (P < 0.04), but not the type of first line treatment, were independently associated with decreased survival. Conclusions: Liver resection is able to offer long-term survival to patients with multiple colorectal metastases provided that the metastatic disease is controlled by chemotherapy prior to surgery. Tumor progression before surgery is associated with a poor outcome, even after potentially curative hepatectomy. Tumor control before surgery is crucial to offer a chance of prolonged remission in patients with multiple metastases. 
abstract_id: PUBMED:30828863 Searching for the link; mechanisms underlying liver regeneration and recurrence of colorectal liver metastasis post partial hepatectomy. Despite excellent treatment of primary colorectal cancer, the majority of deaths occur as a result of metastasis to the liver. Recent population studies have estimated that one quarter of patients with colorectal cancer will incur synchronous or metachronous colorectal liver metastasis. However, only one quarter of these patients will be eligible for potentially curative resection. Tumor recurrence reportedly occurs in 60% of patients undergoing hepatic resection, and the majority of intrahepatic recurrence occurs within the first 6 months of surgery. The liver's innate ability to restore its homeostatic size and volume facilitates major hepatic resection, which currently offers the only chance of cure to patients with extensive hepatic metastases. Experimental and clinical evidence supports the notion that following partial hepatectomy, liver regeneration (LR) paradoxically drives tumor progression and increases the risk of recurrence. It is becoming increasingly clear that the processes that drive liver organogenesis, regeneration, and tumor progression are inextricably linked. This presents a major hurdle in the management of colorectal liver metastasis and other hepatic malignancies because therapies that reduce the risk of recurrence without hampering LR are sought. The processes and pathways underlying these phenomena are multiple, complex, and cross-communicate. In this review, we will summarize the common mechanisms contributing to both LR and tumor recurrence. abstract_id: PUBMED:34647225 Surgical Resection of Colorectal Liver Metastases: Attitudes and Practice Patterns in the Deep South. Background: Metastatic disease is the leading cause of mortality in colorectal cancer.
Resection of colorectal liver metastases, when possible, is associated with improved long-term survival and the possibility of cure. However, nationwide studies suggest that liver resection is under-utilized in the treatment of colorectal liver metastases. This study was undertaken to understand attitudes and practice patterns among medical oncologists in the Deep South. Methods: A survey of medical oncologists in the states of Alabama, Mississippi, and the Florida panhandle was performed. Respondents were queried regarding perceptions of resectability and attitudes towards surgical referral. Results: We received 63 responses (32% response rate). Fifty percent of respondents reported no liver surgeons in their practice area. Commonly perceived contraindications to liver resection included extrahepatic metastatic disease (72%), presence of > 4 metastases (72%), bilobar metastases (61%), and metastases > 5 cm (46%). Bilobar metastatic disease was perceived as a contraindication more frequently by non-academic medical oncologists (70% vs. 33%, p = 0.03). Conclusions: Wide variations exist in perceptions of resectability and referral patterns for colorectal liver metastases among surveyed medical oncologists. There is a need for wider dissemination of resectability criteria and more liver surgeon involvement in the management of patients with colorectal liver metastases. abstract_id: PUBMED:23737695 Liver resection for colorectal cancer metastases. 
Questions: Should surgery be considered for colorectal cancer (crc) patients who have liver metastases plus (a) pulmonary metastases, (b) portal nodal disease, or (c) other extrahepatic metastases (ehms)? What is the role of chemotherapy in the surgical management of crc with liver metastases in (a) patients with resectable disease in the liver, or (b) patients with initially unresectable disease in the liver that is downsized with chemotherapy ("conversion")? What is the role of liver resection when one or more crc liver metastases have radiographic complete response (rcr) after chemotherapy? Perspectives: Advances in chemotherapy have improved survival in crc patients with liver metastases. The 5-year survival with chemotherapy alone is typically less than 1%, although two recent studies with folfox or folfoxiri (or both) reported rates of 5%-10%. However, liver resection is the treatment that is most effective in achieving long-term survival and offering the possibility of a cure in stage iv crc patients with liver metastases. This guideline deals with the role of chemotherapy with surgery, and the role of surgery when there are liver metastases plus ehms. Because only a proportion of patients with crc metastatic disease are considered for liver resection, and because management of this patient population is complex, multidisciplinary management is required. Methodology: Recommendations in the present guideline were formulated based on a prepublication version of a recent systematic review on this topic. The draft underwent internal review by methodology experts and external review by clinical practitioners. Feedback was incorporated into the final version of the guideline. Practice Guideline: These recommendations apply to patients with liver metastases from crc who have had or will have a complete (R0) resection of the primary cancer and who are being considered for resection of the liver, or liver plus specific and limited ehms, with curative intent. 1(a).
Patients with liver and lung metastases should be seen in consultation with a thoracic surgeon. Combined or staged metastasectomy is recommended when, taking into account anatomic and physiologic considerations, the assessment is that all pulmonary metastases can also be completely removed. Furthermore, liver resection may be indicated in patients who have had a prior lung resection, and vice versa. 1(b). Routine liver resection is not recommended in patients with portal nodal disease. This group includes patients with radiologically suspicious portal nodes or malignant portal nodes found preoperatively or intraoperatively. Liver plus nodal resection, together with perioperative systemic therapy, may be an option, after a full discussion with the patient, in cases with limited nodal involvement and with metastases that can be completely resected. 1(c). Routine liver resection is not recommended in patients with nonpulmonary ehms. Liver plus extrahepatic resection, together with perioperative systemic therapy, may be an option, after a full discussion with the patient, for metastases that can be completely resected. 2(a). Perioperative chemotherapy, either before and after resection, or after resection, is recommended in patients with resectable liver metastatic disease. This recommendation extends to patients with ehms that can be completely resected (R0). Risks and potential benefits of perioperative chemotherapy should be discussed for patients with resectable liver metastases. The data on whether patients with previous oxaliplatin-based chemotherapy or a short interval from completion of adjuvant therapy for primary crc might benefit from perioperative chemotherapy are limited. 2(b). Liver resection is recommended in patients with initially unresectable metastatic liver disease who have a sufficient downstaging response to conversion chemotherapy. If complete resection has been achieved, postoperative chemotherapy should be considered. 3.
Surgical resection of all lesions, including lesions with rcr, is recommended when technically feasible and when adequate functional liver can be left as a remnant. When a lesion with rcr is present in a portion of the liver that cannot be resected, surgery may still be a reasonable therapeutic strategy if all other visible disease can be resected. Postoperative chemotherapy might be considered in those patients. Close follow-up of the lesion with rcr is warranted to allow localized treatment or further resection for an in situ recurrence. abstract_id: PUBMED:32521979 Efficacy analysis on laparoscopic simultaneous resection of primary colorectal cancer and liver metastases Objective: To investigate the short-term outcomes of laparoscopic simultaneous resection of primary colorectal cancer and liver metastases in patients with resectable synchronous colorectal liver metastases (SCRLM). Methods: A descriptive case series study was performed. Clinicopathological data of patients with SCRLM who underwent laparoscopic simultaneous resection of colorectal cancer and liver metastases in Zhongshan Hospital between December 2015 and September 2018 were retrieved from a prospective colorectal cancer database. Perioperative presentations and short-term outcomes were analyzed. Results: A total of 53 patients were enrolled with average age of(61.7±11.3) years. Among them, 32 were male (60.4%) and 21 were female (39.6%). Twenty-five patients (47.2%) were American Society of Anesthesiologists (ASA) grade I and 28 (52.8%) were grade II. All the patients completed laparoscopic simultaneous resection without conversion. The average operation time was (320.2±114.5) min. The estimated blood loss was 150.0 (45.0-2000.0) ml, and only 2 cases (3.8%) received intraoperative transfusion. 
Postoperative pathologic results revealed that the average primary tumor size was (5.4±1.9) cm; 4 cases (7.5%) were T1-2 stage and 48 cases (90.6%) were T3-4 stage; 40 patients (75.5%) had lymph node metastasis; 19 (35.8%) had vascular involvement; 24 (45.3%) had neural invasion. The median number of liver metastases was 1.0 (1-8), and the average size of the largest liver metastasis was (3.0±1.9) cm. The median margin of liver metastases was 1.0 (0.1-3.5) cm, and only 1 case was an R1 resection. The average time to the first postoperative flatus was (67.9±28.9) h, and the average time to the liquid diet was (107.0±33.8) h. The average postoperative indwelling catheterization time was (85.6±56.4) h. The average postoperative hospital stay was (9.2±4.4) d, and the average cost was (82±26) thousand RMB. No deaths occurred within 30 days postoperatively. The postoperative complication rate was 32.1% (17/53), and 3 patients developed grade III to IV complications, which resolved with conservative treatment. The median follow-up period was 23.2 months. During follow-up, 19 patients (35.8%) developed recurrence or metastasis, and 4 (7.5%) died. The 1- and 2-year disease-free survival (DFS) rates were 68% and 47% respectively, and the 1- and 2-year overall survival rates were 95% and 86% respectively. Conclusions: Laparoscopic simultaneous resection of primary colorectal cancer and liver metastases is safe and feasible in selected patients with SCRLM. Postoperative intestinal function recovery is enhanced, and morbidity and oncological outcomes are acceptable. abstract_id: PUBMED:32713080 Long-term outcome of liver resection for colorectal metastases in the presence of extrahepatic disease: A multi-institutional Japanese study.
Background/purpose: The purpose of the present study was to assess long-term outcomes following liver resection for colorectal liver metastases (CRLM) with concurrent extrahepatic disease and to identify the preoperative prognostic factors for selection of operative candidates. Methods: In this retrospective, multi-institutional study, 3820 patients diagnosed with CRLM during 2005-2007 were identified using nationwide survey data. Data of identified patients with concurrent extrahepatic lesions were analyzed to estimate the impact of liver resection on overall survival (OS) and to identify preoperative, prognostic indicators. Results: Three- and 5-year OS rates after liver resection in 251 CRLM patients with extrahepatic disease (lung, n = 116; lymph node, n = 51; peritoneal, n = 37; multiple sites, n = 23) were 50.2% and 32.0%, respectively. Multivariate analysis revealed that a primary tumor in the right colon, lymph node metastasis from the primary tumor, serum carbohydrate antigen (CA) 19-9 level >37 UI/mL, the site of extrahepatic disease, and residual liver tumor after hepatectomy were associated with higher mortality. We proposed a preoperative risk scoring system based on these factors that adequately discriminated 5-year OS after liver resection in training and validation datasets. Conclusions: Performing R0 liver resection for colorectal liver metastases with treatable extrahepatic disease may prolong survival. Our proposed scoring system may help select appropriate candidates for liver resection. Answer: Tumor progression while on chemotherapy is not an absolute contraindication to liver resection for multiple colorectal metastases, but it is associated with a poorer outcome even after potentially curative hepatectomy. 
A study evaluating the influence of the response to preoperative chemotherapy on the outcome following resection of multiple colorectal liver metastases (CRM) found that patients who had tumor progression on chemotherapy before surgery had significantly lower overall survival and disease-free survival compared to those who had an objective tumor response or tumor stabilization (PUBMED:15570210). This suggests that tumor control before surgery is crucial to offer a chance of prolonged remission in patients with multiple metastases. However, it is also recognized that some patients may still benefit from surgery despite disease progression during chemotherapy, given the role of surgery as the main treatment with curative intent for patients with colorectal liver metastases (CRLM) (PUBMED:36185562). In selected cases, based on size, the number of lesions, and tumor markers, surgery may be offered despite the less favorable prognosis and as an option for non-chemo responders. The decision-making process for the surgical approach for CRLM progressing during chemotherapy should consider oncologic aspects, including the response to preoperative systemic chemotherapy, which has been established as a major prognostic factor. Patients with regression of hepatic metastases while on chemotherapy have improved outcomes compared to those with stabilization or progression of the disease (PUBMED:36185562). In conclusion, while tumor progression on chemotherapy is associated with decreased survival and is a negative prognostic factor, it does not necessarily preclude liver resection. The decision should be individualized, taking into account the patient's overall condition, the biology of the disease, and the potential for achieving a complete resection (PUBMED:15570210; PUBMED:36185562).
Instruction: Polytrauma with pelvic fractures and severe thoracic trauma: does the timing of definitive pelvic fracture stabilization affect the clinical course? Abstracts: abstract_id: PUBMED:22706659 Polytrauma with pelvic fractures and severe thoracic trauma: does the timing of definitive pelvic fracture stabilization affect the clinical course? Background: The aim of this study was to investigate the influence of surgical timing on the clinical course, especially on postoperative lung function, in patients with pelvic fractures and severe chest trauma. Methods: A total of 47 patients were included in a prospective dual observational study. The study investigated the clinical course depending on the time of operation based on the functional lung parameters, SAPS II, SOFA and total hospital stay. Results: The average ISS was 32±6, PTS was 34±11 and TTSS was 9±3 points. The pelvic fractures were stabilized definitively after an average of 7±2 days. Early stabilization correlated significantly with a lower TTSS and SAPS II on admission (p<0.05), shorter time of ventilation (p<0.05) and stay in the intensive care unit (p<0.01), as well as a decreased need for packed red blood cells (p<0.01). Conclusions: In this study, patients with pelvic fractures and thoracic trauma benefited from earlier definitive pelvic fracture stabilization, with a shorter time of ventilation and intensive care unit stay and a lower need for red cell concentrates. abstract_id: PUBMED:10025036 Concomitant intra-abdominal injuries in pelvic trauma A pelvic fracture combined with abdominal injury and soft tissue damage, defined as complex pelvic trauma, is an expression of severe trauma and requires specific procedures in terms of diagnosis and treatment. The clinical course and outcome of 75 patients (56 males and 19 females) with complex pelvic injuries from 1985 to 1996 are presented in this follow-up study.
There were 33 (44%) Type A (stable pelvic ring fracture), 10 (13%) Type B (rotationally unstable pelvic ring fracture) and 32 (43%) Type C (rotationally and vertically unstable pelvic ring fracture). Concomitant abdominal injuries included: 14 liver lacerations (19%), 28 splenic lesions (37%), 7 ruptures of the kidney (9%), 18 ruptures of the bladder (24%), 13 urethra lesions (17%), 13 intestinal lesions (17%) and 16 ruptures of the diaphragm (21%). The mean Hannover polytrauma score was 22 points. Twelve patients (16%) died within 6 hours after associated head and/or chest injury with hemorrhage, and 3 patients (4%) died in the intensive care unit from sepsis and multiorgan failure within 2 weeks after injury. The priority of the steps taken is of utmost importance in the diagnosis and treatment of such injuries. Immediate control of hemorrhage by surgical or minimally invasive radiological techniques, stabilization of pelvic fractures and interdisciplinary cooperation are important requirements for management and successful treatment. abstract_id: PUBMED:24833399 The initial management in intensive care of pelvic ring injury patients Pelvic trauma care is complex since it is frequently associated with multiple injuries and may lead to dramatic and uncontrollable haemorrhage. After pelvic trauma, the mortality, around 8 to 10%, is mainly related to severe pelvic hemorrhage but also to extrapelvic injuries (thoracic, abdominal or brain injuries). It is therefore crucial to manage pelvic trauma in a specialized trauma center. The initial trauma assessment aims to determine the role of the pelvic injury in hemorrhage to define the therapeutic strategy of pelvic trauma care (arterial embolisation/pelvic ring stabilisation). This review is based on a systematic review of the literature; it describes the pathophysiology of pelvic fractures and the efficacy and safety of haemostatic procedures, together with their respective indications.
A decision making algorithm is proposed for the treatment of trauma patients with pelvic fracture. abstract_id: PUBMED:33779102 A review of blunt pelvic injuries at a major trauma centre in South Africa. Background: The collective five-year experience with the acute management of pelvic trauma at a busy South African trauma service is reviewed to compare the usefulness and applicability of current grading systems of pelvic trauma and to review the compliance with current guidelines regarding pelvic binder application during the acute phase of resuscitation. Methods: A retrospective review was conducted over a 5-year period from December 2012 to December 2017 on all polytrauma patients who presented with a pelvic fracture. Mechanism of injury and presenting physiology and clinical course including pelvic binder application were documented. Pelvic fractures were graded according to the Young- Burgess and Tile systems. Results: There was a cohort of 129 patients for analysis. Eighty-one were male and 48 female with a mean age was 33.6 ± 13.1 years. Motor vehicle-related collisions (MVCs) were the main mechanism of injury (50.33%) and pedestrian vehicle collisions (PVCs) were the second most common (37.98%). The most common associated injuries were abdominal injuries (41%), chest injury (37%), femur fractures (21%), tibia fractures (15%) and humerus fracture (14.7%). Thirty patients in this cohort (23%) underwent a laparotomy. They were mainly in the Tile B (70%) and lateral compression (63%) groups. Nine patients underwent pelvic pre-peritoneal packing. Thirty-five (27%) patients were admitted to ICU. Fifteen (12%) patients died. The Young-Burgess classification had a greater accuracy in predicting death than the Tile classification. Forty per cent of deaths occurred in ICU, 33% died secondary to a traumatic brain injury (TBI). Twenty per cent died in casualty and 6.6% in the operating room from ongoing haemorrhage. A pelvic binder was not applied in 66% of patients. 
In the 34% of patients who had a pelvic binder applied, it was applied post CT scan in 24.8%, in the pre-hospital setting in 7.2%, and on arrival in 2.4% of patients. In 73% of deaths, a binder was not applied, and of those deaths, 54% showed signs of haemodynamic instability. Conclusion: It would appear that our application of pelvic binders in patients with acute pelvic trauma is ad hoc. Appropriate selection of patients who may benefit from a binder, and its timely application, has the potential to improve outcomes in these patients. abstract_id: PUBMED:10969545 Associated injuries in severe pelvic trauma There has been a marked increase in the incidence of pelvic fractures over the last few years. Associated injuries to the urogenital and vascular system as well as nerve injuries worsen the prognosis. Over a five-year period 126 patients with severe pelvic trauma were treated. Out of these, 39 (30.9%) sustained additional peripelvic injuries and represent the study sample. Type B injuries according to the AO classification occurred in 16 (41%) patients, type C fractures in 23 (59%) patients. The spleen, liver and kidney were the most frequently injured organs (58.9%), followed by urogenital lesions (46.6%), nerve injuries (25.6%) and vascular lesions (15.3%). The most common extrapelvic lesions were thoracic injuries in 56.4% and severe head injuries (GCS < 8) in 33.3%. The mean Hannover Polytrauma score was 35.6 points, the mean Injury Severity Score 27.6 points. Osteosynthesis was performed in 21 pelvic ring fractures (53%), eight procedures (50%) in type B fractures and 13 (56%) in type C fractures. In type B injuries the anterior pelvic ring was stabilized with tension band wiring in four cases, with an external fixator in two patients and with plate osteosynthesis in one case. In type C injuries the external fixator was applied as the only stabilizing procedure in six patients.
In four cases the anterior ring was fixed with tension band wiring or plates and the dorsal aspect of the pelvic ring with sacral bars. Three patients had their additional acetabular fracture plated through an anterior approach. All surviving 28 patients were followed up for an average of 18 months (range 7-59 months) after the trauma. The patients were classified using the pelvic outcome score proposed by the German Society of Trauma Surgery. 53.4% of the type B fractures showed a good clinical outcome, 47.6% a poor outcome. 15.4% with type C fractures presented with a good outcome, 84.6% with a poor outcome. 80% of the type B and 23% of the type C fractures had a good radiological outcome. 20% of type B and 77% of type C injuries had a poor radiological outcome. Five patients (12.8%) sustained persistent urological symptoms. Three of these had urinary dysfunction, two used permanent cystotomies due to their severe neurological deficit after a head injury. Ten patients with nerve injuries at the time of trauma suffered long-term neurological dysfunction of the lumbosacral plexus. The mortality rate was 28%. Seven patients died in the emergency room due to uncontrollable bleeding, four in the intensive care unit from multi-organ failure. The management of complex pelvic trauma consists of fracture treatment and interdisciplinary treatment of the associated injury. Lesions of the abdominal organs or of major vessels must be addressed first if hemodynamic instability is present. Injuries to smaller vessels can be embolized percutaneously. Urinary bladder ruptures are treated as an emergency, urethral lesions electively after four to six weeks. We recommend external fixation of the pelvis in the acute phase for control of both the osseous instability and control of haemorrhage through external compression. The treatment of choice for the anterior pelvic ring is tension band wiring or plating.
If this is contraindicated due to an open fracture, external fixation is the treatment of choice. Type C fractures require posterior ring stabilization, which should be postponed until four days post admission. abstract_id: PUBMED:26312114 Pelvic Fractures in Paediatric Polytrauma Patients: Classification, Concomitant Injuries and Early Mortality. Purpose Of This Study: To review the characteristics, concomitant injuries and mortality in children with polytrauma and associated pelvic fractures treated in a Level-I Trauma Centre. Materials And Methods: Between December 2003 and November 2013, 49 children with an Injury Severity Score (ISS) of 16 or greater and a pelvic fracture met the inclusion criteria and were evaluated. The mortality, transfusion requirements and length of intensive care unit stay were correlated with the ISS, Abbreviated Injury Scale, concomitant limb and spine fractures, and type of pelvic ring injury (AO/OTA classification). Results: The mean ISS at presentation was 31.4 (range 16 to 57). 19 (38.7%) patients sustained a Type A, 27 (55.1%) a Type B and 3 (6.2%) a Type C injury. Head and face trauma was present in 33 (67.3%) cases. Blood transfusion during resuscitation was required in six (12.2%) patients. Thirty-eight (77.5%) patients were managed non-operatively for their pelvic injuries. The mean duration of hospital stay was 23.9 days (range 1 to 146 days). In this cohort of polytrauma paediatric patients there were five (10.2%) mortalities (all suffered associated head trauma) within 30 days of the initial injury. Conclusion: Severe head injury and a high ISS are significantly associated with mortality in children with pelvic fractures. These patients have a high incidence of concomitant spine and chest injuries. Hemorrhage due to pelvic injuries is rare. Severe head injuries predict a longer ICU stay in this population.
abstract_id: PUBMED:9337365 Profile of geriatric pelvic fractures presenting to the emergency department. Few studies have examined differences in mechanism, presentation, and outcome of trauma in geriatric patients. This study compared pelvic fractures and associated injuries in geriatric and nongeriatric patients. The medical charts of all patients presenting to a large urban emergency medicine teaching program with a pelvic fracture between January 1, 1987 and December 31, 1993 were retrospectively reviewed by study-blinded physicians. Data collected included mechanism and site of injury, associated injuries, disposition (admission or discharge), need for operative repair, length of hospital stay, as well as subsequent deaths and causes. The data were stratified into patients less than 65 years of age (group A) and 65 years or older (group B). Two-hundred five pelvic fractures were reviewed with 85 (41%) in group B. A significantly greater number of pelvic fractures in group B occurred by fall (86% v 25%, P < .05) and significantly less by motor vehicle accident (14% v 75%, P < .05). Site of pelvic fracture differed significantly only in the decreased number of geriatric iliac fractures (6% v 16%, P < .05). The sites of pelvic fractures for geriatric patients in descending order were multiple sites (58%), pubic rami (56%), acetabulum (19%), ischium (11%), iliac (6%), and sacroiliac (2%), and did not differ from nongeriatric patients. Geriatric patients had significantly fewer total associated injuries (40% v 61%, P < .05) although associated chest injuries were significantly more common (21% v 8%, P < .05). Death occurred in three (3%) nongeriatric and nine (11%) geriatric patients. Six geriatric deaths were caused by exacerbation of underlying cardiovascular disease. 
Geriatric patients underwent significantly fewer operative procedures (6% v 43%, P < .05) but there were no significant differences in the percent admitted (85%) or mean length of hospital stay (9.59 days). Despite the decreased severity of pelvic fractures, care must be taken to prevent morbidity caused by exacerbation of premorbid illnesses in geriatric patients with pelvic fractures. abstract_id: PUBMED:31076162 Analysis of early treatment of multiple injuries combined with severe pelvic fracture. Purpose: To summarize and analyze the early treatment of multiple injuries combined with severe pelvic fractures, with a particular focus on hemostasis methods for severe pelvic fractures, so as to improve the success rate of rescue for fatal hemorrhagic shock caused by pelvic fractures. Methods: A retrospective analysis was conducted in 68 cases of multiple trauma combined with severe pelvic fractures over the past 10 years (from Jan. 2006 to Dec. 2015). There were 57 males and 11 females. Their age ranged from 19 to 75 years, averaging 42 years. Causes of injury included traffic accidents in 34 cases (2 cases of truck rolling), high falling injuries in 17 cases, crush injuries in 15 cases, steel cable wound in 1 case, and seat belt traction injury in 1 case. There were 31 cases of head injury, 11 cases of chest injury, 56 cases of abdominal and pelvic injuries, and 37 cases of spinal and limb injuries. Therapeutic methods included early anti-shock measures, surgical hemostasis based on internal iliac artery devascularization for pelvic hemorrhage, and early treatment of combined organ damage and complications, including embolization and repair of the liver, spleen and kidney, splenectomy, nephrectomy, intestinal resection, colostomy, bladder ostomy, and urethral repair. Patients in this series received blood transfusions of 1200-10,000 mL, with an average volume of 2850 mL. Postoperative follow-up ranged from 6 months to 1.5 years.
Results: The average ISS in this series was 38.6 points. Forty-nine cases were successfully treated, for an overall survival rate of 72.1%. In total, 19 patients died (average ISS 42.4), including 6 cases of hemorrhagic shock, 8 cases of brain injury, 1 case of cardiac injury, 2 cases of pulmonary infection, 1 case of pulmonary embolism, and 1 case of multiple organ failure. Postoperative complications included 1 case of urethral stricture (after secondary repair), 1 case of sexual dysfunction (combined with urethral rupture), 1 case of lower limb amputation (femoral artery thrombosis), and 18 cases of consumptive coagulopathy. Conclusion: The early treatment of multiple injuries combined with severe pelvic fractures should focus on pelvic hemostasis. Massive bleeding-induced hemorrhagic shock is one of the main causes of poor prognosis. The technique of internal iliac artery devascularization, including ligation and embolization, can be used as an effective measure to stop or reduce bleeding. Consumptive coagulopathy is difficult to manage and should be detected and treated as soon as possible after surgical measures have been performed. The effect of using recombinant factor VII in treating consumptive coagulopathy is satisfactory. abstract_id: PUBMED:27817834 A comparison study of pelvic fractures and associated abdominal injuries between pediatric and adult blunt trauma patients. Purpose: Pelvic fractures are a marker of severe injury, mandating a thorough investigation for the presence of associated injuries. Anatomical and physiological differences between adults and children may lead to a different impact of pelvic fractures on these populations. The purpose of this study is to compare pelvic fractures between pediatric and adult blunt trauma victims, mainly regarding their severity and associated intraabdominal injuries.
Methods: A retrospective study involving blunt trauma patients suffering pelvic fractures, according to the records of the Israeli National Trauma Registry. Patients included children, aged 0-14 years, and adults between 15 and 64 years. The presence and severity of associated injuries were assessed. Results: Overall, 7621 patients aged 0-64 years were identified with pelvic fractures following blunt trauma. The incidence of pelvic fractures in children was 0.8%, as compared to 4.3% in adults, p < 0.0001. The most common mechanism of injury was motor vehicle accident (MVA) in adults, and pedestrian hit by car (PHBC) in children. About a quarter of the patients in both groups had an ISS > 25. Adults sustained significantly more moderate to severe pelvic fractures (AIS ≥ 3) than children (26.7% vs. 17.4%, p < 0.0001). The overall mortality rate was similar between the two groups (5.4% in adults, 5.2% in children, p = 0.7554). The only associated injury with a statistically significant difference in incidence between the two groups was rectal injury (1.2% among children, 0.2% among adults, p < 0.0001). Among adult patients, there was a clear correlation between the severity of pelvic fractures and the severity of concomitant splenic and hepatic injuries (p = 0.026, p = 0.0004, respectively). Among children, a similar correlation was not demonstrated. Conclusions: Adults involved in blunt trauma are more likely to sustain pelvic fractures, and these are generally more severe fractures, as compared to children suffering from blunt trauma. Nonetheless, mortality rates were found to be similar in both groups. The only associated injury with a statistically significant difference in incidence between the two groups was rectal injury. In adults, but not in children, higher-grade pelvic fractures correlated with more severe concomitant splenic or hepatic injuries. Level Of Evidence: The level of evidence for this study is III (3).
abstract_id: PUBMED:24804258 Epidemiology of hospitalized traumatic pelvic fractures and their combined injuries in Taiwan: 2000-2011 National Health Insurance data surveillance. Background: From the viewpoint of prehospital emergency medicine, the large proportion of pelvic fractures that are not life-threatening but are combined with other injuries needs more comprehensive recognition. Methods: A 12-year nationwide health database of inpatients was reviewed. All cases diagnosed as pelvic fractures were enrolled. The associated injuries, classified into 20 categories, were further analyzed. Results: During 2000-2011, the hospitalized incidence of pelvic fractures in Taiwan ranged from 17.17 to 19.42 per 100,000, and an increasing trend with age was observed. The mean case-fatality rate was 1.6% for females and 2.1% for males; male patients with pelvic fractures had a significantly higher risk of death than female patients after adjusting for other covariates. 74.2% of these cases were combined with other injuries. The most common associated injuries in an identified body region were other orthopedic fractures of the lower limbs (21.50%), spine/trunk (20.97%), or upper limbs (18.18%), followed by significant head injuries (17.59%), intra-abdominal injuries (11.00%), and thoracic injuries (7.20%). Conclusion: The incidence of hospitalized pelvic fractures in Taiwan was low and the case-fatality rate was lower than that of other countries. Concurrently, major injuries coexisting with pelvic fractures were readily treated at medical centers. Answer: The timing of definitive pelvic fracture stabilization in patients with polytrauma, including severe thoracic trauma, does indeed affect the clinical course. A study found that early stabilization of pelvic fractures correlated significantly with a lower Thoracic Trauma Severity Score (TTSS) and Simplified Acute Physiology Score II (SAPS II) on admission.
Additionally, early stabilization was associated with a shorter duration of ventilation, a reduced length of stay in the intensive care unit, and a decreased need for packed red blood cells (PUBMED:22706659). This suggests that patients with pelvic fractures and thoracic trauma benefit from earlier definitive pelvic fracture stabilization, leading to improved outcomes such as reduced time on mechanical ventilation and shorter ICU stays.
Instruction: Penetrating injuries in children: is there a message? Abstracts: abstract_id: PUBMED:36583395 Non-missile penetrating brain injury-surgical techniques for removing a long penetrating foreign body: a case report. Purpose: Penetrating brain injury (PBI), a relatively uncommon injury, is associated with remarkable secondary complications such as vascular injury, intracranial haemorrhage, infection, and mortality. Non-missile PBI (NMPBI) due to sharp or blunt objects is usually treated surgically by removing the penetrating object, evacuating the associated haemorrhage, identifying possible bleeders along with haemostasis, and performing debridement. Various approaches are used for different scenarios of non-missile PBI according to the object's characteristics, penetrating site, depth, associated intracerebral haemorrhage (ICH), and presence of vascular injury along the penetrating tract. NMPBI cases are rarely reported among civilians. We herein describe a patient who was successfully treated for NMPBI, as well as frontal ICH, by simultaneously removing the heavy, metallic penetrating foreign body. Methods: We performed corticotomy through a shorter tract instead of a deep penetrating trajectory, which minimizes the extent of damage to the brain and enables immediate management of vascular injury under direct vision while removing the foreign body, and intraoperative sonography, which provides real-time information of the penetrating object and the surrounding brain structure. We did not perform computed tomography angiography and digital subtraction angiography (DSA) because the stab location was at the frontal region, with low risk of vascular injury. Moreover, DSA is time-consuming, which may delay decompressive surgery. Results: The patient was successfully treated through an alternative approach removing the long, heavy, metallic penetrating foreign body and eliminating the accompanying frontal ICH simultaneously. 
A focal brain abscess developed 8 days after the injury and resolved completely with antibiotic treatment. Dysphasia gradually improved, but right distal limb weakness with spasticity is still present. Conclusions: Our findings suggest that prompt diagnosis by preoperative imaging, screening for vascular injury, decompression with debridement, and antibiotic treatment are important. The alternative surgical approach we proposed is exceptional and should be considered while treating patients with deep NMPBI. abstract_id: PUBMED:29046084 Penetrating neck injuries: a guide to evaluation and management. Introduction Penetrating neck injury is a relatively uncommon trauma presentation with the potential for significant morbidity and possible mortality. There are no international consensus guidelines on penetrating neck injury management and published reviews tend to focus on traditional zonal approaches. Recent improvements in imaging modalities have altered the way in which penetrating neck injuries are now best approached, with a more conservative stance. A literature review was completed to provide clinicians with a current practice guideline for evaluation and management of penetrating neck injuries. Methods A comprehensive MEDLINE (PubMed) literature search was conducted using the search terms 'penetrating neck injury', 'penetrating neck trauma', 'management', 'guidelines' and 'approach'. All articles in English were considered. Articles with only limited relevance to the review were subsequently discarded. All other articles which had clear relevance concerning the epidemiology, clinical features and surgical management of penetrating neck injuries were included. Results After initial resuscitation with Advanced Trauma Life Support principles, penetrating neck injury management depends on whether the patient is stable or unstable on clinical evaluation. Patients whose condition is unstable should undergo immediate operative exploration.
Stable patients who lack hard signs should undergo multidetector helical computed tomography with angiography for evaluation of the injury, regardless of the zone of injury. Conclusions The 'no zonal approach' to penetrating neck trauma is a selective approach with superior patient outcomes in comparison with traditional management principles. We present an evidence-based, algorithmic and practical guide for clinicians to use when assessing and managing penetrating neck injury. abstract_id: PUBMED:34211880 Penetrating Injury of Superior Sagittal Sinus. Penetrating injury of the superior sagittal sinus (SSS) is very rare yet serious, and can lead to morbidity and mortality. Complications such as bleeding, thrombosis, and infection are possible and should be anticipated. We report a case of a 3-year-old boy with a penetrating injury caused by a nail at the middle third of the SSS. The patient underwent surgery for extraction and sinus repair, and received antibiotic treatment during the hospital stay. He was neurologically intact and recovered completely. Comprehensive treatment combining surgical and medical management is important in achieving the best possible outcome. abstract_id: PUBMED:1879786 Penetrating neck injuries The frequency of penetrating neck injuries has increased in recent years. In 16 patients with such injuries treated during the past 2 years, neck exploration was performed immediately. In 4 there was destruction of the throat or trachea, in 3 a tear in the esophagus, in 1 a tear in the hypopharynx and in 1 a tear in the submandibular region penetrating the floor of the mouth. In 8 there was no damage to the neck organs. Of the 16, 1 died on the operating table from hemorrhage from multiple tears of the hypopharynx, the esophagus and the thyroid gland. Two different approaches are used in dealing with penetrating neck injuries: immediate exploration, or the use of imaging methods to diagnose the degree of damage, followed by close supervision.
In light of our experience, we advocate immediate neck exploration. abstract_id: PUBMED:8297557 Penetrating chest trauma. This article reviews the current concepts in penetrating chest trauma. The authors discuss mechanisms of injury, nursing assessment, and interventions for penetrating injuries resulting in cardiac rupture, cardiac tamponade, tension pneumothorax, hemothorax, great vessel injury, and sucking chest wounds. abstract_id: PUBMED:26022154 Penetrating nontorso trauma: the head and the neck. Acute penetrating injuries to the head and neck cause considerable anxiety for most clinicians owing to concern for airway control and neurologic injury and to limited clinician experience in most centres. This article discusses an organized approach to the evaluation and initial treatment of penetrating injuries to the head and neck based on regional anatomy and clinical examination. The approach is particularly helpful in the context of ongoing hemorrhage and/or airway compromise. abstract_id: PUBMED:9122834 Neurosurgical management of penetrating spinal injury. Background: Penetrating spinal injuries are the third most common cause of spinal cord injury, exceeded only by motor vehicle accidents and falls. The spinal cord can vary in appearance from complete destruction to a normal-appearing cord. There remains much controversy regarding the neurosurgical management of patients who sustain penetrating injuries of the spine. Methods: Penetrating spinal injuries account for four to seven new cases of penetrating injury per million persons per annum. The initial management of penetrating spinal injury follows the principles of standard trauma care. Once the patient is hemodynamically stabilized and concomitant life-threatening injuries have been addressed, care can be directed toward the spinal injury. Results: A meticulous neurological examination should be performed, including individual assessment of all muscle groups and sensory dermatomes.
This examination becomes the baseline against which all subsequent examinations will be compared. The radiologic examination of the patient begins after all radiographs of higher priority have been obtained. Conclusions: Penetrating spinal cord injury is associated with significant morbidity. The optimal management for these patients has not been determined; however, spinal instability resulting from the initial missile injury is rare. abstract_id: PUBMED:6680605 Penetrating orbital trauma. A small number of patients suffer penetrating injury to the orbit in the absence of trauma to the globe. Hospital records from the Jules Stein Eye Institute and affiliated hospitals were reviewed to obtain such cases. Five cases of penetrating orbital trauma, in which the globe was spared, are discussed. Many of the injuries initially appeared trivial but subsequent complications were severe. A review of these cases suggests that the shape and speed of the penetrating object, the anatomical location of the entry wound, and the physical characteristics of intraorbital structures determine the character and severity of orbital injuries. abstract_id: PUBMED:20957055 Surgical treatment of a transorbital penetrating brain injury. Penetrating injuries of the skull and brain are relatively uncommon events, representing about 0.4% of all head injuries. Transorbital penetrating brain injury is an unusual occurrence in emergency practice, and its management is controversial. We report the case of a 10-year-old boy who fell forward on a bamboo stick while playing with other children, causing a penetrating transorbital injury, resulting in meningitis. We performed a combined surgical approach with neurosurgeons and ophthalmologic surgeons. Upon discharge, the patient had a Glasgow Coma Scale score of 15, no motor deficit and no visual loss. We discuss the management of this case and review the current literature.
Although rare, penetrating injuries of the appendix are an interesting finding at laparotomy. Associated penetrating gastrointestinal injuries are common. Four additional cases of trauma to the appendix from penetrating wounds are described. Answer: Penetrating injuries in children are serious and can lead to significant morbidity and mortality. The management of such injuries requires a comprehensive approach that includes both surgical and medical interventions to achieve the best possible outcomes. For instance, a case of a 3-year-old boy with a penetrating injury caused by a nail at the middle third of the superior sagittal sinus (SSS) highlights the importance of prompt surgical extraction and sinus repair, along with antibiotic treatment during hospitalization, to ensure complete recovery and preservation of neurological function (PUBMED:34211880). In the context of penetrating brain injuries, the surgical techniques for removing a long penetrating foreign body involve corticotomy through a shorter tract to minimize brain damage, immediate management of vascular injury under direct vision, and the use of intraoperative sonography for real-time information about the object and surrounding brain structures. This approach, as demonstrated in a case report, can lead to successful treatment outcomes, including the resolution of a focal brain abscess with antibiotic treatment, although some residual neurological deficits such as dysphasia and limb weakness may persist (PUBMED:36583395). For penetrating neck injuries, an evidence-based, algorithmic, and practical guide suggests that after initial resuscitation, the management should depend on the patient's stability. Unstable patients require immediate operative exploration, while stable patients without hard signs should undergo multidetector helical computed tomography with angiography.
This 'no zonal approach' has been shown to result in superior patient outcomes compared to traditional management principles (PUBMED:29046084). In summary, the message for managing penetrating injuries in children is that timely, tailored, and multidisciplinary interventions are crucial. Surgical techniques should be adapted to minimize further damage, and medical management, including antibiotics, is essential to prevent and treat infections. The use of advanced imaging techniques can guide the treatment plan and improve outcomes. Clinicians must be prepared for the complexity of these cases and the potential for long-term sequelae, even with optimal management.
Instruction: Measurement of neutrophil gelatinase-associated lipocalin (NGAL) in patients with non-communicable diseases: any additional benefit? Abstracts: abstract_id: PUBMED:26745339 Measurement of neutrophil gelatinase-associated lipocalin (NGAL) in patients with non-communicable diseases: any additional benefit? Aim: Chronic kidney disease (CKD) is one of the major complications of non-communicable diseases (NCD), such as diabetes or hypertension. We aimed to determine if routine measurement of neutrophil gelatinase-associated lipocalin (NGAL) in NCD patients could provide additional benefit for diagnosing kidney disease. Methods: Fasting serum NGAL, urea, creatinine, uric acid and albumin were measured in 298 NCD patients. Estimated glomerular filtration rate (e-GFR) was calculated using the CKD-EPI equation. Multiple linear regression modeling was used for analysis. Results: CKD patients had higher levels of NGAL than the other NCD patients (p < 0.001). Although NGAL was significantly related to e-GFR, creatinine, urea and albumin in CKD patients (p < 0.001) and not in the other NCD patients (p > 0.05), only serum creatinine predicted NGAL levels in CKD patients (p < 0.001). Conclusion: NGAL level is best associated with serum creatinine only in patients with CKD, and NGAL measurement may not be cost-effective as a routine kidney function test in NCD patients in under-resourced economies. abstract_id: PUBMED:24697112 The role of neutrophil gelatinase-associated lipocalin (NGAL) in cerebrospinal fluids for screening of acute bacterial meningitis. Background: Acute bacterial meningitis is a rare but extremely severe disease. The aim of this study was to investigate whether neutrophil gelatinase-associated lipocalin (NGAL) is present and measurable in cerebrospinal fluid (CSF) and if its assessment may be useful for identifying patients with bacterial meningitis.
Methods: Eligible specimens were all consecutive CSFs of patients with suspected acute bacterial meningitis that were referred from the Unit of Infectious Diseases for routine chemical and morphological analysis over a three-month period. CSF measurements consisted of NGAL, glucose, and total protein concentrations, along with cell count and differential. Results: Eighty-eight CSFs were received throughout the study period, 58 (66%) with CSF findings compatible with bacterial meningitis. The values of white blood cells (WBC), polymorphonuclear (PMN) and mononuclear (MONO) leukocytes, red blood cells (RBC), total proteins, and NGAL were significantly increased in positive CSFs, whereas that of glucose did not significantly differ. A significant correlation was found between CSF concentration of NGAL and CSF values of PMN, WBC, RBC and total proteins, but not with those of glucose and MONO. The concentration of NGAL in CSF showed an area under the curve (AUC) of 0.94 for identifying positive CSFs, with specificity and sensitivity of 1.00 and 0.741 at a diagnostic threshold of 13 ng/mL. Conclusions: NGAL is present in the CSF of patients with bacterial meningitis and its measurement may be helpful for identifying positive CSFs.
Appendicitis patients were grouped according to intraoperative and bacteriological findings. Results: Average serum biomarker levels were higher in appendicitis patients than in controls, and the following values were derived via receiver operating characteristic (ROC) analysis. NGAL and IL-6 cutoff values were 113.95 ng/ml and 24.64 pg/ml, respectively; NGAL had 68.3% sensitivity and 65.5% specificity, while IL-6 had 72.6% and 86.2%. Comparing AcA and AnA, IL-6 was the only biomarker of significance, yielding 77.4% sensitivity and 58.1% specificity with a 26.43 pg/ml cutoff value. Urine biomarkers were non-specific in differentiating appendicitis severity and, ultimately, infectious from non-infectious disease. Conclusion: Although NGAL provided measurable, useful diagnostic information in evaluating children for appendicitis, its values were not sufficient for determining appendicitis severity. Serum IL-6 remains a strong biomarker for suspected acute appendicitis and has promising results predicting its severity. abstract_id: PUBMED:21102325 Neutrophil gelatinase-associated lipocalin as a marker of acute renal disease. Purpose Of Review: Acute renal disease has major prognostic impact in hospitalized patients. Early diagnosis using renal function markers such as serum creatinine or urinary output and effective treatment are currently not possible. This review focuses on recent studies examining the biological characteristics and the diagnostic and prognostic value of a novel biomarker, neutrophil gelatinase-associated lipocalin (NGAL), in patients at risk for acute renal disease. Recent Findings: NGAL levels in urine and plasma increase early and in proportion to the degree of renal injury.
Here, we report the results of clinical studies where NGAL was measured in 325 patients with autoimmune and infectious diseases, those with cisplatin-associated nephrotoxicity, and more than 4000 patients at risk of acute kidney injury due to sepsis, cardiac surgery, exposure to contrast media, or after renal transplantation. Whenever renal damage occurred, as evidenced by histopathology or subsequent increase in serum creatinine levels, NGAL levels were higher than in patients without acute renal disease. An increase in NGAL substantially preceded clinical diagnosis of acute renal disease. Summary: NGAL fulfills many characteristics of an appropriate 'real-time' renal biomarker. The data on NGAL could justify re-assessment of the concept and definition of acute kidney injury. abstract_id: PUBMED:36688893 Serum lipocalin-2 levels are decreased in patients with leprosy. Background Leprosy is an infectious disease caused by Mycobacterium leprae affecting the skin, peripheral nerves and mucosae. Lipocalin-2 is a key component of the immune system's antimicrobial defence - it prevents iron uptake by binding and sequestering iron-scavenging siderophores and thus inhibits bacterial growth. Aim We evaluated serum lipocalin-2 levels in leprosy patients and their relationship to the pathogenesis and prognosis of the disease. Materials and methods In this case-control study, serum lipocalin-2 levels were measured by ELISA in 20 patients with leprosy and 20 healthy controls. Results Serum levels of lipocalin-2 were significantly reduced (P < 0.001) in leprosy patients as compared to controls. The levels were significantly higher (P < 0.014) in patients with multibacillary leprosy than in those with paucibacillary leprosy. Although the levels of lipocalin-2 were higher in patients with multiple nerve involvement as compared to those with involvement of 1 or 2 nerves, the results were not statistically significant.
Limitation of the study The small sample size and the lack of different ethnic groups were the major limitations of this study. Conclusion The lower lipocalin-2 concentrations in leprosy patients point to the importance of the protective functions of lipocalin-2. The elevated levels of lipocalin-2 observed in leprosy patients with neural involvement may be related to the reported neurodegenerative role of lipocalin-2. abstract_id: PUBMED:25861985 Human Urinary Composition Controls Antibacterial Activity of Siderocalin. During Escherichia coli urinary tract infections, cells in the human urinary tract release the antimicrobial protein siderocalin (SCN; also known as lipocalin 2, neutrophil gelatinase-associated lipocalin/NGAL, or 24p3). SCN can interfere with E. coli iron acquisition by sequestering ferric iron complexes with enterobactin, the conserved E. coli siderophore. Here, we find that human urinary constituents can reverse this relationship, instead making enterobactin critical for overcoming SCN-mediated growth restriction. Urinary control of SCN activity exhibits wide-ranging individual differences. We used these differences to identify elevated urinary pH and aryl metabolites as key biochemical host factors controlling urinary SCN activity. These aryl metabolites are well-known products of intestinal microbial metabolism. Together, these results identify an innate antibacterial immune interaction that is critically dependent upon individualistic chemical features of human urine. abstract_id: PUBMED:27489118 Diagnostic accuracy of procalcitonin and presepsin for infectious disease in patients with acute kidney injury. Procalcitonin (PCT) and presepsin (PSEP) are sepsis markers, but their diagnostic accuracy may be compromised in acute kidney injury (AKI). We evaluated their diagnostic accuracy in patients with/without AKI. This retrospective study comprised 91 patients with at least one criterion of systemic inflammatory response syndrome.
The AKI markers plasma neutrophil gelatinase-associated lipocalin (NGAL), plasma cystatin C (CysC), and estimated glomerular filtration rate (eGFR) were measured upon hospital admission and on days 1, 3, 5, and 7. Patients were divided into non-AKI and AKI groups. APACHE II severity scores were determined. PCT and PSEP levels were increased significantly in non-AKI and AKI patients with infection. NGAL, CysC, and eGFR in patients with infection were associated with PCT, PSEP, and APACHE II score, and levels of PCT and PSEP were correlated significantly with disease severity. PCT and PSEP are useful markers of bacterial infections in AKI, but different thresholds should be applied. abstract_id: PUBMED:27780864 Human Metabolome-derived Cofactors Are Required for the Antibacterial Activity of Siderocalin in Urine. In human urinary tract infections, host cells release the antimicrobial protein siderocalin (SCN; also known as lipocalin-2, neutrophil gelatinase-associated lipocalin, or 24p3) into the urinary tract. By binding to ferric catechol complexes, SCN can sequester iron, a growth-limiting nutrient for most bacterial pathogens. Recent evidence links the antibacterial activity of SCN in human urine to iron sequestration and metabolomic variation between individuals. To determine whether these metabolomic associations correspond to functional Fe(III)-binding SCN ligands, we devised a biophysical protein binding screen to identify SCN ligands through direct analysis of human urine. This screen revealed a series of physiologic unconjugated urinary catechols that were able to function as SCN ligands, of which pyrogallol in particular was positively associated with high urinary SCN activity.
In a purified, defined culture system, these physiologic SCN ligands were sufficient to activate SCN antibacterial activity against Escherichia coli. In the presence of multiple SCN ligands, native mass spectrometry demonstrated that SCN may preferentially combine different ligands to coordinate iron, suggesting that availability of specific ligand combinations affects in vivo SCN antibacterial activity. These results support a mechanistic link between the human urinary metabolome and innate immune function. abstract_id: PUBMED:37228436 Biomarkers for febrile urinary tract infection in children. Background: The current reference standard for pediatric urinary tract infection (UTI) screening, the leukocyte esterase (LE) dipstick test, has suboptimal accuracy. The objective of this study was to compare the accuracy of novel urinary biomarkers to that of the LE test. Methods: We prospectively enrolled febrile children who were evaluated for UTI based on their presenting symptoms. We compared the accuracy of urinary biomarkers to that of the LE test. Results: We included 374 children (50 with UTI, 324 without UTI, ages 1-35 months) and examined 35 urinary biomarkers. The urinary biomarkers that best discriminated between febrile children with and without UTI were urinary neutrophil gelatinase-associated lipocalin (NGAL), IL-1β, CXCL1, and IL-8. Of all examined urinary biomarkers, the urinary NGAL had the highest accuracy with a sensitivity of 90% (CI: 82-98) and a specificity of 96% (CI: 93-98). Conclusion: Because the sensitivity of the urinary NGAL test is slightly higher than that of the LE test, it can potentially reduce missed UTI cases. Limitations of using urinary NGAL over LE include increased cost and complexity. Further investigation is warranted to determine the cost-effectiveness of urinary NGAL as a screening test for UTI. abstract_id: PUBMED:36118679 Biomarkers and Predictors of Adverse Cardiovascular Events in Different Stages of Chronic Kidney Disease.
Chronic kidney disease (CKD) is an important factor that contributes to the increase of all-cause morbidity and mortality in the group of non-communicable diseases, and it is also recognized as a strong and independent risk factor that contributes to cardiovascular disease (CVD). CVDs are a consequence of the action of a large number of risk factors, both traditional and non-traditional. These risk factors have been the subject of a large number of studies which partially explained the unfavorable cardiovascular (CV) outcome of CKD patients. Therefore, valid studies about clinical and biohumoral predictors are of particular importance, especially in the early stages of renal disease, that is, in patients with creatinine clearance below 60 ml/min/1.73 m2, when preventive measures are most effective. Among potential predictors of adverse CV outcome are biomarkers of inflammation (interleukin-18, IL-18), oxidative stress (ischemia-modified albumin, IMA; superoxide dismutase, SOD), acute kidney injury (kidney injury molecule-1, KIM-1; neutrophil gelatinase-associated lipocalin, NGAL), and microribonucleic acids (specific microRNA-133a). In this review, we tried to confirm the relationship of risk factors for CKD and CVD, and of newer, less frequently examined biomarkers, with the occurrence of incident CV events in renal patients. Answer: The measurement of neutrophil gelatinase-associated lipocalin (NGAL) in patients with non-communicable diseases (NCDs) such as diabetes or hypertension has been investigated to determine if it provides additional benefits for diagnosing kidney disease. In a study involving 298 NCD patients, NGAL levels were found to be highest in patients with chronic kidney disease (CKD) compared to other NCD patients. NGAL was significantly related to estimated glomerular filtration rate (e-GFR), creatinine, urea, and albumin in CKD patients, but not in other NCD patients. However, only serum creatinine predicted NGAL levels in CKD patients.
The conclusion was that NGAL measurement may not be cost-effective as a routine kidney function test in NCD patients, particularly in under-resourced economies, as it is best associated with serum creatinine only in patients with CKD (PUBMED:26745339). In other studies, NGAL has been shown to have potential diagnostic value in different clinical contexts. For example, NGAL was found to be present in the cerebrospinal fluid (CSF) of patients with bacterial meningitis and could help identify positive CSFs (PUBMED:24697112). It has also been studied as a biomarker for pediatric appendicitis, where serum NGAL provided useful diagnostic information, although it was not sufficient for determining the severity of appendicitis (PUBMED:32219562). Additionally, NGAL has been recognized as a marker of acute renal disease, with levels increasing early and proportionally to the degree of renal injury, and it could precede the clinical diagnosis of acute renal disease (PUBMED:21102325). In summary, while NGAL has been found to be associated with kidney function in CKD patients within the context of NCDs, its routine measurement for diagnosing kidney disease in NCD patients may not offer additional benefits over traditional markers like serum creatinine, especially considering cost-effectiveness in resource-limited settings (PUBMED:26745339). However, NGAL has demonstrated potential as a diagnostic biomarker in other acute conditions such as bacterial meningitis and appendicitis, as well as a marker for acute renal disease (PUBMED:24697112, PUBMED:32219562, PUBMED:21102325).
Instruction: Can Plan Recommendations Improve the Coverage Decisions of Vulnerable Populations in Health Insurance Marketplaces? Abstracts: abstract_id: PUBMED:27028008 Can Plan Recommendations Improve the Coverage Decisions of Vulnerable Populations in Health Insurance Marketplaces? Objective: The Affordable Care Act's marketplaces present an important opportunity for expanding coverage but consumers face enormous challenges in navigating through enrollment and re-enrollment. We tested the effectiveness of a behaviorally informed policy tool--plan recommendations--in improving marketplace decisions. Study Setting: Data were gathered from a community sample of 656 lower-income, minority, rural residents of Virginia. Study Design: We conducted an incentive-compatible, computer-based experiment using a hypothetical marketplace like the one consumers face in the federally-facilitated marketplaces, and examined their decision quality. Participants were randomly assigned to a control condition or three types of plan recommendations: social normative, physician, and government. For participants randomized to a plan recommendation condition, the plan that maximized expected earnings, and minimized total expected annual health care costs, was recommended. Data Collection: Primary data were gathered using an online choice experiment and questionnaire. Principal Findings: Plan recommendations resulted in a 21 percentage point increase in the probability of choosing the earnings maximizing plan, after controlling for participant characteristics. Two conditions, government or providers recommending the lowest cost plan, resulted in plan choices that lowered annual costs compared to marketplaces where no recommendations were made. 
Conclusions: As millions of adults grapple with choosing plans in marketplaces and whether to switch plans during open enrollment, it is time to consider marketplace redesigns and leverage insights from the behavioral sciences to facilitate consumers' decisions. abstract_id: PUBMED:25425010 Strategies for expanding health insurance coverage in vulnerable populations. Background: Health insurance has the potential to improve access to health care and protect people from the financial risks of diseases. However, health insurance coverage is often low, particularly for people most in need of protection, including children and other vulnerable populations. Objectives: To assess the effectiveness of strategies for expanding health insurance coverage in vulnerable populations. Search Methods: We searched Cochrane Central Register of Controlled Trials (CENTRAL), part of The Cochrane Library. www.thecochranelibrary.com (searched 2 November 2012), PubMed (searched 1 November 2012), EMBASE (searched 6 July 2012), Global Health (searched 6 July 2012), IBSS (searched 6 July 2012), WHO Library Database (WHOLIS) (searched 1 November 2012), IDEAS (searched 1 November 2012), ISI-Proceedings (searched 1 November 2012), OpenGrey (changed from OpenSIGLE) (searched 1 November 2012), African Index Medicus (searched 1 November 2012), BLDS (searched 1 November 2012), Econlit (searched 1 November 2012), ELDIS (searched 1 November 2012), ERIC (searched 1 November 2012), HERDIN NeON Database (searched 1 November 2012), IndMED (searched 1 November 2012), JSTOR (searched 1 November 2012), LILACS (searched 1 November 2012), NTIS (searched 1 November 2012), PAIS (searched 6 July 2012), Popline (searched 1 November 2012), ProQuest Dissertation & Theses Database (searched 1 November 2012), PsycINFO (searched 6 July 2012), SSRN (searched 1 November 2012), Thai Index Medicus (searched 1 November 2012), World Bank (searched 2 November 2012), WanFang (searched 3 November 2012), China National Knowledge
Infrastructure (CHKD-CNKI) (searched 2 November 2012). In addition, we searched the reference lists of included studies and carried out a citation search for the included studies via Web of Science to find other potentially relevant studies. Selection Criteria: Randomised controlled trials (RCTs), non-randomised controlled trials (NRCTs), controlled before-after (CBA) studies and interrupted time series (ITS) studies that evaluated the effects of strategies on increasing health insurance coverage for vulnerable populations. We defined strategies as measures to improve the enrolment of vulnerable populations into health insurance schemes. Two categories and six specified strategies were identified as the interventions. Data Collection And Analysis: At least two review authors independently extracted data and assessed the risk of bias. We undertook a structured synthesis. Main Results: We included two studies, both from the United States. People offered health insurance information and application support by community-based case managers were probably more likely to enrol their children into health insurance programmes (risk ratio (RR) 1.68, 95% confidence interval (CI) 1.44 to 1.96, moderate quality evidence) and were probably more likely to continue insuring their children (RR 2.59, 95% CI 1.95 to 3.44, moderate quality evidence). Of all the children that were insured, those in the intervention group may have been insured quicker (47.3 fewer days, 95% CI 20.6 to 74.0 fewer days, low quality evidence) and parents may have been more satisfied on average (satisfaction score average difference 1.07, 95% CI 0.72 to 1.42, low quality evidence). In the second study, applications were handed out in emergency departments at hospitals, compared to not handing out applications, and may have had an effect on enrolment (RR 1.5, 95% CI 1.03 to 2.18, low quality evidence).
Authors' Conclusions: Community-based case managers who provide health insurance information and application support, and who negotiate with the insurer, probably increase enrolment of children in health insurance schemes. However, the transferability of this intervention to other populations or other settings is uncertain. Handing out insurance application materials in hospital emergency departments may help increase the enrolment of children in health insurance schemes. Further studies evaluating the effectiveness of different strategies for expanding health insurance coverage in vulnerable populations are needed in different settings, with careful attention given to study design. abstract_id: PUBMED:24779769 Determinants of coverage decisions in health insurance marketplaces: consumers' decision-making abilities and the amount of information in their choice environment. Objective: To investigate the determinants and quality of coverage decisions among the uninsured choosing plans in a hypothetical health insurance marketplace. Study Setting: Two samples of uninsured individuals: one from an Internet-based sample composed largely of young, healthy, tech-savvy individuals (n = 276), and the other from low-income, rural Virginians (n = 161). Study Design: We assessed whether health insurance comprehension, numeracy, choice consistency, and the number of plan choices were associated with participants' ability to choose a cost-minimizing plan, given their expected health care needs (defined as choosing a plan costing no more than $500 in excess of the total estimated annual costs of the cheapest plan available). Data Collection: Primary data were collected using an online questionnaire.
Principal Findings: Uninsured who were more numerate showed higher health insurance comprehension; those with more health insurance comprehension made choices of health insurance plans more consistent with their stated preferences; and those who made choices more concordant with their stated preferences were less likely to choose a plan that cost more than $500 in excess of the cheapest plan available. Conclusions: Increasing health insurance comprehension and designing exchanges to facilitate plan comparison will be critical to ensuring the success of health insurance marketplaces. abstract_id: PUBMED:29148327 Are Integrated Plan Providers Associated With Lower Premiums on the Health Insurance Marketplaces? As the health insurance industry becomes more consolidated, hospitals and health systems have started to enter the insurance business. Insurers are also rapidly acquiring providers. Although these "vertically" integrated plan providers are small players in the insurance market, they are becoming more numerous. The health insurance marketplaces (HIMs) offer a unique setting to study integrated plan providers relative to other insurer types because the HIMs were designed to promote competition. In this descriptive study, the authors compared the premiums of the lowest priced silver plans of integrated plan providers with other insurer types on the 2015 and 2016 HIMs. Integrated plan providers were associated with modestly lower premiums relative to most other insurer types. This study provides early insights into premium competition on the HIMs. Examining integrated plan providers as a separate insurer type has important policy implications because they are a growing segment of the marketplaces and their pricing behavior may influence future premium trends. abstract_id: PUBMED:24689124 What states are doing to simplify health plan choice in the insurance marketplaces. 
The new health insurance marketplaces aim to improve consumers' purchasing experiences by setting uniform coverage levels for health plans and giving them tools to explore their options. Marketplace administrators may choose to limit the number and type of plans offered to further simplify consumer decision-making. This issue brief examines the policies set by some state-based marketplaces to simplify plan choices: adopting a meaningful difference standard, limiting the number of plans or benefit designs insurers may offer, or requiring standardized benefit designs. Eleven states and the District of Columbia took one or more of these actions for 2014, though their policies vary in terms of their prescriptiveness. Tracking the effects of these different approaches will enhance understanding of how best to enable consumers to make optimal health insurance purchasing decisions and set the stage for future refinements. abstract_id: PUBMED:20813837 Expanding health insurance coverage in vulnerable groups: a systematic review of options. Vulnerable groups are often not covered by health insurance schemes. Strategies to extend coverage in these groups will help to address inequity. We used the existing literature to summarize the options for expanding health insurance coverage, describe which countries have tried these strategies, and identify and describe evaluation studies. We included any report of a policy or strategy to expand health insurance coverage and any evaluation and economic modelling studies. Vulnerable populations were defined as children, the elderly, women, low-income individuals, rural population, racial or ethnic minorities, immigrants, and those with disability or chronic diseases. Forty-five databases were searched for relevant documents. The authors applied inclusion criteria, and extracted data using pre-coded forms, on contents of health insurance schemes or programmes, and used the framework approach to establish categories. 
Of the 21,528 articles screened, 86 documents were finally included. Descriptions about the USA dominated (72), with only five from Africa, six from Asia and two from South America. We identified six main categories: (1) changing eligibility criteria of health insurance; (2) increasing public awareness; (3) making the premium more affordable; (4) innovative enrollment strategies; (5) improving health care delivery; and (6) improving management and organization of the insurance schemes. All six categories were found in the literature about schemes in the USA, and schemes often included components from each category. Strategies in developing countries were much more limited in their scope. Evaluation studies numbered 25, of which the majority were of time series design. All studies found that the expansion strategies were effective, as assessed by the author(s). In countries expanding coverage, the categories identified from the literature can help policy makers consider their options, implement strategies where it is common sense to do so and establish appropriate implementation monitoring. abstract_id: PUBMED:32141599 Actuarial cost and fiscal impact of expanding the Jordan Civil Insurance Programme for health coverage to vulnerable citizens. Background: Achieving universal health coverage is a strategic goal for the Government of Jordan. Estimating the cost of expanding health coverage to vulnerable Jordanians under the Civil Insurance Programme (CIP) is an important step towards achieving this goal. Aims: This study aimed to estimate the cost and fiscal impact of expanding health insurance coverage to vulnerable Jordanians. Methods: We identified and quantified vulnerable Jordanians and estimated their utilization and cost of health services provided at Ministry of Health facilities using allocation and macrocosting approaches. We calculated the annual actuarial cost per person and the fiscal impact of the expansion. 
Results: It was estimated that 4.9% of Jordanians were vulnerable. On average, a vulnerable Jordanian used 1.25 fewer ambulatory visits and 0.027 fewer admissions annually than a person insured by CIP. The annual cost (US$ 79 million) and fiscal impact (US$ 73 million) of expanding coverage to vulnerable Jordanians were due to more ambulatory services (20%) and hospitalizations (80%). Conclusion: A combination of additional resources and improvement in system efficiencies may fund this expansion. abstract_id: PUBMED:31059357 Zero-Premium Health Insurance Plans Became More Prevalent In Federal Marketplaces In 2018. The Affordable Care Act established two federally funded subsidies (cost-sharing reductions and premium tax credits) available in the health insurance Marketplaces. In 2018, federal payments to insurers for cost-sharing reductions were terminated. Insurers responded by increasing plan premiums to account for the loss of these payments. Premiums for silver plans were increased more than those for other metal tiers because cost-sharing reductions are available only in silver plans, while premium tax credits can be applied across different metal tiers. One consequence of greater premium increases for silver plans was the increased availability and selection of plans with zero premiums for consumers. We examined the magnitude of this issue using plan selections through the federal Marketplaces during the open enrollment periods before (2017) and after (2018) the termination of payments. We found that zero-premium plan availability increased by 18.3 percentage points, selection increased by 7.9 percentage points, and selection conditional on having a zero-premium plan available increased by 8.8 percentage points. Were federal cost-sharing reduction payments to be restored, a reduction in availability and selection of zero-premium plans would likely occur, and more consumers could lose access to the plans.
abstract_id: PUBMED:29141639 Tobacco use and health insurance literacy among vulnerable populations: implications for health reform. Background: Under the Affordable Care Act (ACA), millions of Americans have been enrolling in the health insurance marketplaces. Nearly 20% of them are tobacco users. As part of the ACA, tobacco users may face up to 50% higher premiums that are not eligible for tax credits. Tobacco users, along with the uninsured and racial/ethnic minorities targeted by ACA coverage expansions, are among those most likely to suffer from low health literacy - a key ingredient in the ability to understand, compare, choose, and use coverage, referred to as health insurance literacy (HIL). Whether tobacco users choose enough coverage in the marketplaces given their expected health care needs and are able to access health care services effectively is fundamentally related to understanding health insurance. However, no studies to date have examined this important relationship. Methods: Data were collected from 631 lower-income, minority, rural residents of Virginia. Health insurance literacy was assessed by asking four factual questions about the coverage options presented to them. Adjusted associations between tobacco use and health insurance literacy were tested using multivariate linear regression, controlling for numeracy, risk-taking, discount rates, health status, experiences with the health care system, and demographics. Results: Nearly one-third (31%) of participants were current tobacco users, 80% were African American and 27% were uninsured. Average health insurance literacy across all participants was 2.0 (SD 1.1) out of a total possible score of 4. Current tobacco users had significantly lower HIL compared to non-users (-0.22, p < 0.05) after adjustment. Participants who were less educated, African American, and less numerate reported more difficulty understanding health insurance (p < 0.05 each).
Conclusions: Tobacco users face higher premiums for health coverage than non-users in the individual insurance marketplace. Our results suggest they may be less equipped to shop for plans that provide them with adequate out-of-pocket risk protection, thus placing greater financial burdens on them and potentially limiting access to tobacco cessation and treatment programs and other needed health services. abstract_id: PUBMED:37915025 States' racial resentment correlates with administrative distancing and lower rates of health plan selection in affordable care act marketplaces: a cross sectional analysis. Background: In the United States, the Affordable Care Act (ACA) pursued equity in healthcare access and treatment, but ACA implementation varied, especially limiting African Americans' gains. Marketplaces for subsidized purchase of coverage were sometimes implemented with limited outreach and enrollment assistance efforts. Reflecting states' ACA receptivity or reluctance, states' implementation may rest on sociopolitical stances and racial sentiments. Some states were unwilling to provide publicly supported healthcare to nonelderly, non-disabled adults, "the undeserving poor," who evoke anti-black stereotypes. The present study assessed whether some states shunned Affordable Care Act (ACA) marketplaces and implemented them less vigorously than other states, leading to fewer eligible persons selecting insurance plans. It assessed whether states' actions were motivated by racial resentment, because states connote marketplaces to be government assistance for unworthy African Americans. Methods: Using marketplace and plan selection data from 2015, we rated states' marketplace structures along a four-level continuum indicating greater acceptance of marketplaces, ranging from states assuming sole responsibility to minimal responsibility. Using national data from a four-question modern racism scale, state-wide racial resentment was estimated at the state level.
Analysis assessed associations between state levels of racial resentment and states' marketplace structure. Further analysis assessed relationships of both state levels of racial resentment and states' marketplace structure with states' consumer plan selection rates, representing the proportion of persons eligible to enroll in insurance plans who selected a plan. Results: Racial resentment was greater in states with less responsibility for the administration of the marketplaces than in actively participating states. States higher in racial resentment also showed lower rates of plan selection, pointing to less commitment to implementing marketplace provisions and fulfilling the ACA's coverage-improvement mission. Differences persisted after controlling for differences in conservatism, uninsurance, poor health, and rejection of Medicaid expansion. Conclusions: Resentment of African Americans' purported irresponsibility and entitlement to government assistance may interfere with states structuring and operating marketplaces to maximize, for everyone, the health insurance opportunities available under the ACA. Trial Registration: N/A. Answer: Yes, plan recommendations can improve the coverage decisions of vulnerable populations in health insurance marketplaces. A study conducted on a community sample of lower-income, minority, rural residents of Virginia found that plan recommendations resulted in a 21 percentage point increase in the probability of choosing the earnings maximizing plan. Specifically, when the government or providers recommended the lowest cost plan, it resulted in plan choices that lowered annual costs compared to marketplaces where no recommendations were made (PUBMED:27028008). This suggests that leveraging insights from the behavioral sciences to facilitate consumers' decisions through marketplace redesigns and plan recommendations can be an effective strategy to aid vulnerable populations in making better coverage decisions.
Instruction: Basilic vein transposition fistula: a good option for maintaining hemodialysis access site options? Abstracts: abstract_id: PUBMED:23599505 Basilic vein transposition in the forearm for secondary arteriovenous fistula. Radiocephalic (RC) fistulae remain the first choice access for hemodialysis. The antecubital fossa is recommended as the next site. However, for some patients a basilic vein can be used to create an arteriovenous (av) fistula. We report a series of patients where the forearm basilic vein served as an alternative conduit for secondary procedures. Over an 8-year period, 30 patients who had a failed RC fistula underwent a basilic vein transposition. The immediate results were satisfactory. All fistulas were successfully cannulated. Cumulative patency was 93% after 1 year, 78% after 2 years, and 55% after 3 years. No ischemic or infectious complications were noted during the study period. The use of the forearm basilic vein to create a native av fistula appears to be a good alternative to procedures in the antecubital fossa or upper arm, thus preserving more proximal veins for future use. abstract_id: PUBMED:25950565 Brachial-cephalic fistula recovered with drainage for the basilic vein: Case report. Vascular access is essential for the implementation of hemodialysis (HD). The arteriovenous fistula (AVF) can be constructed in various locations using various veins. However, the quality of the veins will influence the construction site, as well as the functioning of the AVF. Careful analysis of the vascular network allows options for the development of new fistulas presentations. We present and discuss the case of a woman aged 69 years in HD in which a brachial-cephalic fistula with drainage to basilic vein was created, through rotation of the cephalic vein on the forearm level. This kind of access serves to prolong the time spent dialyzing through native fistulae, with their reduced complications and greater cost-effectiveness. 
abstract_id: PUBMED:29082626 Two-Stage Basilic Vein Transposition: Second Stage Results. The increasing survival of hemodialysis patients results in the depletion of superficial venous capital, justifying the use of the basilic vein. Many groups still prefer an arteriovenous graft due to transposition complexity and the time needed to achieve maturation. In this work we review the results of our series of basilic vein transpositions (BVT). BVTs were performed in two stages: first, creation of the fistula; second, transposition of the vein using three small incisions in the arm. All patients had to have direct arterialization of the basilic vein; therefore, patients with previous ipsilateral wrist fistulas also followed a two-stage protocol. Data were retrospectively reviewed from all transposition procedures made between September 2005 and November 2012. Patency and complication rates were the primary outcomes evaluated. A total of 276 basilic veins were transposed. Usage rate was 82.2%; 8% (N = 22) of the fistulas were never used due to thrombosis. Secondary patency rates at 1 and 2 years were, respectively, 84% and 66.3%. Complications occurred in 61.6% (N = 170) of fistulas, 65.9% (N = 112) of which had to undergo surgical or endovascular revision. The most frequent complication was vein stenosis (39.7% of late complications, N = 92). Despite its greater technical complexity, the transposed basilic vein represents a hemodialysis access with good patency rates. Complication rates, although high, are lower than those of CVC or prosthetic grafts. These results support the use of the transposed basilic vein as hemodialysis access after the brachiocephalic fistula. abstract_id: PUBMED:28074626 Radio-cephalic fistula recovered with drainage to forearm basilic vein. Vascular access dysfunction is a concern for patients and dialysis units.
The vascular surgeon should analyse such dysfunction and perform a careful assessment of the vascular network in order to find new fistula layouts. We present and discuss the creation of a radio-cephalic fistula with outflow into the forearm basilic vein, achieved through rotation of the forearm basilic vein toward the cephalic vein, in the forearm of an 88-year-old male hemodialysis patient. This technique enables extending fistula patency and improves cost efficiency. abstract_id: PUBMED:27513244 Brachiobasilic arteriovenous fistula with transposition of the basilic vein: a multicenter study. Background/aim: The aim of this paper was to present our 3-year multicenter experience in creating vascular access using the basilic vein. The third choice in creating vascular access is the brachiobasilic arteriovenous fistula (AVF) with transposition of the basilic vein. Materials And Methods: During 2010-2012, out of a total of 874 AVFs that were performed in our two centers, 54 were brachiobasilic fistulas (6.71%), performed on 54 patients. All surgical procedures were performed by two surgical teams, one for each center. There were no significant differences regarding the patients' age, sex, and comorbidities between the two centers. Results: We reported a total of six postoperative complications (11.10%): two wound infections, two arm edemas, one hematoma, and one bleeding (due to a collateral ligature slippage), which required surgical intervention the same day. None of the other complications required another surgery. The first-year patency rate was 89.79% (four fistulas failed due to thrombosis and one due to perforation) and the second-year patency rate was 62.12%. Conclusion: The brachiobasilic AVF with transposition of the basilic vein, according to the latest guidelines, remains a well-documented and feasible third option in creating vascular access, with better results than graft fistulas.
abstract_id: PUBMED:37425617 Forearm Basilic Vein Transposition: A Single-Centre Experience. Introduction: Forearm basilic vein transposition (FBVT) is a viable alternative for arteriovenous grafts (AVGs) and can be used as secondary vascular access as well, as it allows for the use of veins that are remote from the arterial source of inflow. FBVT involves two main steps: first, the basilic vein is dissected from its original location; and second, the basilic vein is transposed to a subcutaneous tunnel on the volar aspect of the forearm and anastomosed to a suitable artery, usually the radial or ulnar artery. Objective: This paper aims to present a series of FBVT cases performed at our hospital and present it as a viable option for secondary vascular access. We also aim to review the available literature relating to FBVT fistula in terms of surgical technique, patency rates, maturation time, and one-year outcome, and to establish a comparison with our clinical experience. Materials And Methods: This is a retrospective descriptive case series. The data were collected from online medical records, and patients were contacted by telephone to make a follow-up visit. For a review of the literature, a search was done on PubMed for articles containing the following keywords in either the title or the abstract: basilic, transposition, fistula, and forearm. Similarly, a search was done on Google Scholar for articles with the following words in the title: basilic, transposition, and forearm. The data are expressed as mean and standard deviation. Statistical analysis was done using SPSS 26.0 software (IBM Corp., Armonk, NY). Conclusion: The primary patency rate of FBVT in our study makes it a suitable solution to opt for before moving to AVGs. FBVT should be considered before moving more proximally in patients with inadequate forearm cephalic veins. 
abstract_id: PUBMED:30047255 Radiocephalic Fistula Recovery Using the Brachial Vein and Forearm Basilic Vein: A Case Series and Literature Review. Vascular access dysfunction is a serious problem in dialysis units. Some patients have complex dysfunctions that are difficult to resolve. In this article, we report the cases of two patients with radiocephalic arteriovenous fistulae (RC-AVF) who had stenosis/occlusion of the forearm median vein and in whom we used the basilic vein of the forearm as a solution. We reviewed the use of this surgical solution in RC-AVF. Two male patients on hemodialysis exhibited stenosis/occlusion of the forearm median vein. The forearm basilic vein was isolated and rotated toward the forearm median vein in order to solve RC-AVF problems. One patient had fistula thrombosis 5 months after the procedure, while for the other patient, the fistula continues to work without problems. The literature describes only a few cases using the forearm basilic vein or the brachial vein for fistula recovery. This procedure increased the patency of fistulas. This approach has been proven to be a good solution for solving outflow problems using the superficial or deep veins, increasing fistula patency and avoiding the need to place a central venous catheter and all the related complications. abstract_id: PUBMED:36050972 Early and midterm patency of arteriovenous fistula for hemodialysis access using different techniques and their advantages and disadvantages. This descriptive retrospective study was done to observe the different techniques of arteriovenous fistula creation for advantages and disadvantages, complications, and early and midterm patency. Fifty-three arteriovenous fistulas were created from September 2018 to August 2020 using four different techniques. Radio-cephalic arteriovenous fistula was usually the first option.
Other techniques like brachio-cephalic arteriovenous fistula, radio-basilic arteriovenous fistula, and brachio-basilic arteriovenous fistula were used when the radio-cephalic fistula had thrombosed or could not be constructed due to small-sized cephalic vein in the forearm. The majority of patients (41 (77.35%)) underwent radio-cephalic fistula creation. Early patency was 38 (92.7%) in the radio-cephalic technique whereas it was 12 (100%) in the radio-basilic, brachio-cephalic, and brachio-basilic techniques combined. The midterm patency was 32 (78%) in the radio-cephalic technique, 5 (83%) in the brachio-cephalic technique, 3 (75%) in the radio-basilic technique, and 1 (50%) in the brachio-basilic technique. Radio-basilic and brachio-cephalic are alternative techniques for fistula creation after radio-cephalic depending upon the size of the basilic vein in the forearm or cephalic vein in the cubital fossa or arm. The radio-basilic technique may have advantages over the brachio-cephalic technique which need to be further evaluated. Proximal fistulas like brachio-basilic and brachio-cephalic are more commonly associated with limb edema. abstract_id: PUBMED:30116415 Transposed Basilic Vein Fistula: A Credible Option. Objective: Native fistula creation for hemodialysis in patients who have exhausted all their superficial veins presents a challenge to vascular practice. We review our experience in transposed basilic vein fistula (BVT) creation and its usage for hemodialysis. Materials and Methods: We analyzed the hospital operative registry from January 2009 till June 2012 to identify the total number of BVT created in our center. Medical records of all patients were traced, and patients were interviewed using a standard proforma. In our center, BVT is performed as a two-stage procedure. All patients were routinely assessed with duplex scan preoperatively and postoperatively. Results: Two hundred thirty-nine patients were recruited in the study. 
Of these patients, 50.6% were male and 49.4% were female. Mean age was 53.4 years. Of these patients, 81.2% had a history of previous fistula creation. The risk factor profile included diabetes mellitus in 59.8%, hypertension in 86.2%, and smoking in 13.0%. Of these patients, 84.5% had no operative complication. The commonest complication was wound infection (6.3%), followed by hemorrhage or hematoma requiring surgical intervention (5.9%) and pseudoaneurysm (1.3%). Primary and secondary patency rates were 84.2% and 86.1% at 1 year and 67.7% and 70.5% at 3 years, respectively. Conclusion: BVT is a credible option for challenging patients with an absence of superficial veins for native fistula creation, with good patency and a low operative complication rate. Preoperative ultrasound assessment improves patient selection and the outcome of BVT. abstract_id: PUBMED:30116409 Transposed Brachial-Basilic Arteriovenous Fistula for Vascular Access in Japan. As more than 320,000 patients are currently receiving hemodialysis treatment in Japan, the creation and maintenance of hemodialysis access is a major concern. The national guidelines recommend autogenous arteriovenous hemodialysis, and the brachial-basilic arteriovenous fistula has been the focus of attention, because the need for secondary, tertiary, or even more vascular access is growing. Although favorable results have been reported in terms of patency and access-related complications, this fistula involves various unsolved or controversial issues, with limitations including complex procedures, which might contribute to its lower prevalence at this point in Japan. This review addresses those issues and discusses the role of the fistula in Japan. Answer: Basilic vein transposition (BVT) fistula appears to be a good option for maintaining hemodialysis access site options.
The use of the forearm basilic vein to create a native arteriovenous (AV) fistula is considered a viable alternative to procedures in the antecubital fossa or upper arm, thus preserving more proximal veins for future use (PUBMED:23599505). This technique is particularly useful when other sites have been exhausted or are not suitable. The cumulative patency rates reported for BVT are satisfactory, with 93% after 1 year, 78% after 2 years, and 55% after 3 years, and no ischemic or infectious complications were noted during the study period (PUBMED:23599505). Moreover, the two-stage BVT has shown good usage rates and secondary patency rates, with 82.2% of fistulas being used and secondary patency rates at 1 and 2 years being 84% and 66.3%, respectively (PUBMED:29082626). Although the complication rates are high, they are less than those of central venous catheters (CVC) or prosthetic grafts, supporting the use of transposed basilic vein as a hemodialysis access after the brachiocephalic fistula (PUBMED:29082626). Additionally, the brachiobasilic arteriovenous fistula with transposition of the basilic vein is presented as a third option in creating vascular access, with better results than graft fistulas (PUBMED:27513244). The primary patency rate of forearm basilic vein transposition (FBVT) in one study makes it a suitable solution before moving to arteriovenous grafts (AVGs), suggesting that FBVT should be considered before moving more proximally in patients with inadequate forearm cephalic veins (PUBMED:37425617). In conclusion, BVT is a credible option for patients with an absence of superficial veins for native fistula creation, offering good patency and a low operative complication rate (PUBMED:30116415). It serves to prolong the time spent dialyzing through native fistulae, with reduced complications and greater cost-effectiveness (PUBMED:25950565). 
Therefore, BVT fistula is indeed a good option for maintaining hemodialysis access site options, especially in patients who have exhausted other sites or have complex vascular access dysfunctions (PUBMED:30047255).
Instruction: Is There an Enabling Environment for Nutrition-Sensitive Agriculture in South Asia? Abstracts: abstract_id: PUBMED:26121704 Is There an Enabling Environment for Nutrition-Sensitive Agriculture in South Asia? Stakeholder Perspectives from India, Bangladesh, and Pakistan. Background: Almost half of all children in South Asia are stunted. Although agriculture has the potential to be a strong driver of undernutrition reduction and serves as the main source of livelihood for over half of South Asia's population, its potential to reduce undernutrition is currently not being realized. Objective: The Leveraging Agriculture for Nutrition in South Asia (LANSA) research consortium seeks to understand how agriculture and agrifood systems can be better designed to improve nutrition in South Asia. In 2013 and 2014, LANSA carried out interviews with stakeholders influential in, and/or knowledgeable of, agriculture-nutrition policy in India, Pakistan, and Bangladesh, to gain a better understanding of the institutional and political factors surrounding the nutrition sensitivity of agriculture in the region. Methods: Semistructured interviews were carried out in India, Bangladesh, and Pakistan with a total of 56 stakeholders representing international organizations, research, government, civil society, donors, and the private sector. Results: The findings point to mixed perspectives on countries' policy sensitivity toward nutrition. There was consensus among stakeholders on the importance of political commitment to nutrition, improving nutrition literacy, strengthening capacities, and improving the use of financial resources. Conclusions: Although there are different ways in which South Asian agriculture can improve its impact on nutrition, sensitizing key influencers to the importance of nutrition for the health of a country's population appears as a critical issue. 
This should in turn serve as the premise for political commitment, intersectoral coordination to implement nutrition-relevant policies, adequately resourced nutrition-specific and nutrition-sensitive programs, and sufficient capacities at all levels. abstract_id: PUBMED:26472197 Is There an Enabling Environment for Nutrition-Sensitive Agriculture in East Africa? Stakeholder Perspectives From Ethiopia, Kenya, and Uganda. Background: There is growing recognition that "nutrition-sensitive" development is necessary to ensure nutrition security and reduce malnutrition. While agriculture has the potential to be a strong driver of malnutrition reduction and serves as the main source of livelihood for approximately two-thirds of East Africa's population, its potential to reduce malnutrition is currently not being realized. Objective: Leveraging Agriculture for Nutrition in East Africa is a research study based in Ethiopia, Kenya, and Uganda that seeks to understand the enabling environment necessary for optimizing the contribution of the food and agriculture sector to nutrition outcomes. Its objectives were to explore stakeholder perceptions of nutrition-agriculture linkages, of political and institutional challenges and opportunities, of evidence that is available and influential for policy making, and of key issues with regard to capacity. Methods: Open-ended and semistructured interviews were conducted with 53 stakeholders from government, civil society, donors, United Nations organizations, private sector, and research/academic institutions in Ethiopia, Kenya, and Uganda in 2014. 
Results: Although policy opportunities and contexts are different between the 3 countries, stakeholders identified similar barriers to greater action, including a narrow focus on solely market-oriented and staple crop production, a lack of clarity and incentives within the agriculture sector about improving nutrition and how to translate policy into action, and lack of capacity in human and financial resources. Many actions to improve the nutrition sensitivity of agriculture were mentioned, including crop diversification, value chain activities and improved market access, nutrition education, and reduction in time and labor costs to women. Conclusion: Many opportunities exist to strengthen the impact of agriculture on nutrition in East Africa, but stronger formulation and implementation of policies will require adequate human resources, funds, timely data on the context, sector alignment on priority actions, and alignment on a framework or indicators for accountability. abstract_id: PUBMED:37283222 Fostering an Enabling Environment for Nutrition-Sensitive Agriculture and Food Systems in Nigeria. Background: Although recommendations exist for making agriculture and food systems nutrition sensitive, guidance about how to successfully operationalize these recommendations within national institutions is limited. Over the 13 years from 2010 to 2023, there were several undertakings to strengthen the enabling environment for sustainable nutrition-sensitive agriculture (NSA) and food systems in Nigeria. Some studies were also conducted over the period to foster better understanding of the national enabling environment and facilitate effective actions. Objective: This article documents successes and failures from Nigeria's experiences by reflecting on critical developments, events, policies, and programmes on Nigeria's journey toward advancing nutrition through agriculture and food systems; as well as findings from conducted studies. 
Discussion: Successes include the existence of a Nutrition and Food Safety Division in the Ministry of Agriculture and approval for a Nutrition Department, the existence of an agriculture sector nutrition strategy, increased private sector engagement and commitment to nutrition-sensitive food systems, and increased funding for nutrition in the agriculture sector. Challenges especially remain for scaling up the strategic, operational, and delivery capacity of both organizations and individuals involved in advancing NSA and food systems. Lessons learned include the following: institutionalizing NSA and food systems takes time; knowledge brokering is indispensable and requires several collaborating entities and stakeholders; and approaches used should be suited to available government capacity. Conclusion: More than a decade of efforts targeting different enabling environment factors has resulted in greater political commitment for nutrition within the agriculture sector and an improved enabling environment for NSA and food systems. abstract_id: PUBMED:37317553 Government Budget for Nutrition-Sensitive Agriculture in Nigeria Increased Following Development of Related Strategy. Background: Financial resources that are commensurate with the magnitude of the malnutrition problem are requisite for effective interventions to reduce malnutrition. Understanding the amount and nature of sectoral investments in nutrition is important for advocating and mobilizing increased government budgetary allocations and releases. Objective: This study assessed trends in Nigeria's agriculture sector nutrition allocations and whether the launch of a nutrition-sensitive agriculture strategy and/or the COVID-19 pandemic may have affected these allocations. Methods: Agricultural budgets from 2009 to 2022 of Nigeria's federal government were analyzed.
Nutrition-related budget lines were identified using a keyword search and were then classified as nutrition-specific, nutrition-sensitive, or potentially nutrition-sensitive, based on defined criteria. Potentially nutrition-sensitive items were further screened. Budget lines finally included as nutrition allocations directly targeted improvements in nutrition or intermediate outcomes in pathways between agriculture and nutrition. Budget lines were summed, and these nominal values were adjusted for inflation (using the consumer price index for each year) to obtain real values. Results: Nutrition allocations in the agriculture budget increased considerably even after adjusting for inflation and went from 0.13% of agriculture capital budget in 2009 to 2.97% in 2022; while the real value of total government agricultural budget declined. Large budgetary increases coincided with the development/launch of costed strategies with nutrition-sensitive agriculture components. Still, there were some missed opportunities to increase nutrition allocations. Conclusions: Existence of nutrition-sensitive agriculture strategies has facilitated increased nutrition funding and improved the enabling environment. There is need to optimize existing nutrition allocations while advocating for additional funds. abstract_id: PUBMED:32964746 A Review of Leadership and Capacity Gaps in Nutrition-Sensitive Agricultural Policies and Strategies for Selected Countries in Sub-Saharan Africa and Asia. Background: Agriculture policies and strategies designed, purposefully, to address malnutrition are considered nutrition-sensitive and are a critical component of global efforts to address malnutrition in all its forms. However, limited evidence exists on extent and how nutrition is being integrated into agriculture sector policies, strategies, and programs. 
A review was conducted to address 2 questions: How nutrition-sensitive are agriculture policies, plans, and investments in selected Sub-Saharan African (SSA) and Asian countries? and Which capacity and leadership gaps limit scale-up of nutrition-sensitive agriculture policies and programs? Methods: The review of existing policies was conducted for 11 selected focus countries (9 in SSA and 2 from Asia) of the CGIAR (Consultative Group on International Agricultural Research) Collaborative Research Programme on Agriculture for Nutrition and Health led by the IFPRI (International Food Policy Research Institute). The Food and Agriculture Organization (FAO)'s 10-point key recommendations for designing nutrition-sensitive agricultural interventions were used as an analytical framework. Additionally, a rapid systematic review of published peer-reviewed and grey literature was carried out to identify capacity gaps based on the United Nations Development Program's capacity assessment framework. Results: We found that the policies and strategies are nutrition-sensitive, but to varying degrees. There is limited capacity for optimal implementation of these policies, programs, and strategies. For most of the countries, there is capacity to articulate what needs to happen, but there are important capacity limitations in translating the given policy/program instruments into effective action. Conclusions: The gaps identified constitute important evidence to inform capacity strengthening of nutrition-sensitive actions for desired nutrition and health outcomes in Africa and Asia. abstract_id: PUBMED:37726846 Mapping of nutrition policies and programs in South Asia towards achieving the Global Nutrition targets. Background: South Asia continues to host the triple burden of child malnutrition with high levels of child undernutrition, hidden hunger (micronutrient deficiencies) and childhood overweight/obesity.
To combat malnutrition, the international community, along with national governments, has launched initiatives to track countries' progress towards achieving the Global Nutrition targets by 2025. This review captures the country-specific efforts of nutrition-specific and nutrition-sensitive sectoral programs and policies towards achieving these targets for eight South Asian countries. Methods: A systematic internet search was undertaken to find relevant policies and programs from Government websites and twelve International Organisations working in the region. The authors developed a template to map the policies against the following criteria: (i) enabling supportive environment; (ii) initiatives targeted at nutrition-specific interventions; and (iii) initiatives targeted at nutrition-sensitive interventions that impact child malnutrition. A narrative descriptive approach was used to present findings. Results: All eight countries had relevant policies and programs to address child malnutrition and micronutrient deficiencies, with targets for significant reductions in stunting and improved breastfeeding. However, despite the policies outlined, major challenges of implementation, monitoring, evaluation, and quality persist, with increased dependency on international donors and organisations for funding and/or implementation of nutrition plans. Conclusion: There is a need to contextualise efforts designated to donors and governments to improve the tracking of efforts that impact nutrition. abstract_id: PUBMED:36047001 Stakeholders, Relationships, and Coordination: 2015 Baseline Study of Needed Enablers for Bridging Agriculture-Nutrition Gaps in Nigeria. Background: Renewed efforts by the Nigerian government to address malnutrition have led to nutrition actions by several sectors, including the agriculture sector. However, the success of these actions depends on the characteristics of the stakeholders involved, including their relationships and coordination.
Objective: This article reports a 2015 study of nutrition-sensitive agricultural stakeholders in Nigeria that assessed what the stakeholders do, where they work and how they are organized to improve nutrition. The study provides a baseline for assessing progress and measuring stakeholder and coordination changes in the Nigerian nutrition-sensitive agriculture landscape. Methods: Semi-structured interviews (n = 17) and focus group discussions (n = 2) were held with federal, state, and local government level stakeholders; reviews of stakeholder program documents were also conducted. Results: The study identified 7 groups of nutrition-sensitive agriculture stakeholders and several coordination challenges. Political leadership, advocacy and provision of material and human resource support by nongovernmental organizations, and donor interest and funding have been vital for mobilizing nutrition-sensitive agriculture. Still, although stakeholders frequently highlighted that nutrition was an important consideration in their interventions, nutrition goals and activities and/or indicators to measure outcomes were not regularly communicated. Also, while coordination mechanisms existed, there appeared to be minimal actual cross-sectoral partnerships because of inadequate trust, competition, and conflicts over institutional turf and mandates. Conclusions: Needed enablers for improving nutrition-sensitive agriculture in Nigeria included improved stakeholder nutrition literacy, as well as enhanced stakeholder engagement facilitated by role definition, clarification, and consensus. Exploring different approaches to coordination may also be necessary. abstract_id: PUBMED:36946325 Nutrition Capacity Assessment of Agriculture Extension Services in Nigeria. Background: Achieving meaningful malnutrition reductions in Nigeria and other high-burden countries requires sustained improvements in diets, mediated through nutrition-sensitive agriculture and food systems. 
Yet, the capacity to design, plan, implement, and monitor such nutrition-sensitive systems is very limited, including within agricultural extension service delivery. Understanding the existing capacity of the actors required to implement nutrition change is crucial for effective capacity development. Objective: This study assessed the nutrition capacity of agriculture extension agents (AEAs) in Nigeria and the capacity of their organizations and the institutions within which they operate. Methods: The study assessed 31 extension training materials for inclusion of recommendations for nutrition-sensitive agriculture. Structured interviews and focus group discussions were conducted with 23 extension agents and were coded and analyzed for major themes. Results: Training materials hardly included nutrition objectives and/or nutrition-related services to be delivered. Some nutrition-related services were being delivered, including promotion of biofortified crops and nutrient-dense crops and animals, home gardening, food safety, and dietary diversification. However, these services were limited, and service delivery was unstructured, nonuniform, and inconsistent. The number of AEAs was quite inadequate, while available AEAs had high workloads, were poorly motivated, and had limited funding, supervision, and logistics capacity to perform their roles. Physical security was also a challenge for service delivery. Further, complementary activities in other sectors that were necessary for adequate delivery of nutrition-sensitive agriculture did not always exist. Conclusion: Extensive development of nutrition capacities of extension agents appears unlikely to achieve nutrition-related changes if limiting institutional and organizational capacity deficits are not addressed. Plain Language Title: Capacity of Agriculture Extension Agents in Nigeria to Deliver Nutrition Services.
Plain Language Summary: Reducing the significant burden of malnutrition in Nigeria requires increased availability and consumption of foods that are nutritious and free from harmful substances. To produce such foods, farmers need adequate nutrition and food safety knowledge and skills. The production of such food will also need to support nutrition in other ways, including increased women's empowerment. Extension agents traditionally support farmers to adopt new methods of food production and/or processing that support increased food yields. These agents can also be used to deliver services that will address nutrition if they have the necessary knowledge and skills. This study assessed the capacity of agriculture extension agents in Nigeria to deliver nutrition services, in order to determine how to increase their capacity to deliver these services. The results from the study are that the extension agents do not have sufficient knowledge and skills to deliver nutrition services, and that their organizations and the wider context in which they work do not have the capacity to enable them to deliver nutrition services effectively. For instance, the organizations do not have sufficient numbers of staff and do not provide current staff with adequate means of transportation to visit farmers. Insecurity is high and so extension agents are unable to visit farmers frequently because of the potential threats to their lives. The study concludes that effectively using extension agents to deliver nutrition services will require not just training of the extension agents but also improvements in organizational capacity and contextual factors. abstract_id: PUBMED:6399898 South-east Asia: curriculum guidelines for nutrition. This project was a cooperative endeavour from its inception. It was designed for both two-way horizontal and vertical interactions at international, regional, and national levels, and within academic educational units in agriculture. 
This approach to introducing the study of food, nutrition and agriculture into agricultural education was made so that individuals, agencies, and groups interested or involved in agriculture, education, health, and nutrition policy or programmes in South-east Asia could have input into planning, implementing, and helping secure the study of nutrition in post-secondary agricultural curricula. It is a workable participatory model of action that has merit in other interdisciplinary educational planning endeavours. abstract_id: PUBMED:37016712 Stories of change in nutrition in Burkina Faso 1992-2018: a macro-level perspective. Looking back at the development of successful enabling environments for nutrition may inform policymakers on how to accelerate progress to end all forms of malnutrition by 2030. As under-five stunting declined substantially in Burkina Faso, from a peak at 45% in 1998/99 to 25% in 2018, we analyzed through a stories of change approach the actors, ideas, initiatives, policies and capacities which enabled wide-scale nutrition progress. We triangulated findings from policy analysis, stakeholder mapping, and national-level semi-structured interviews (n = 20). We found that since 2002, nutrition has been anchored in the Ministry of Health, where leadership advocated for the creation of coordination bodies, enabling a coherent defining of nutrition and laying groundwork for better integration of nutrition into and prioritization of nutrition by the health and tangential ministries. Under the leadership of the Ministry of Health and its partners, horizontal and vertical coherence in nutrition action increased, through effective cooperation between nutrition actors; increasing intersectoral collaboration, particularly with the influential agriculture sector; and increasing funding to support nutrition-sensitive programming and build the capacity of nutrition staff. 
Nevertheless, sustainably organizing funding and human resources at the decentralized level remained challenging, in a context of emerging threats such as climate change and insecurity. The success of Burkina Faso's health sector in creating an enabling environment for nutrition may have contributed to improvements in child nutrition alongside other sectoral improvements. Enhancing accountability of the Health, Agriculture, WASH, Education and Social Protection sectors and empowering decentralized bodies to take nutrition-relevant decisions may help accelerate progress in nutrition. Answer: Based on the provided abstracts, there is a mixed perspective on the enabling environment for nutrition-sensitive agriculture (NSA) in South Asia. The Leveraging Agriculture for Nutrition in South Asia (LANSA) research consortium conducted interviews with stakeholders in India, Pakistan, and Bangladesh to understand the institutional and political factors surrounding the nutrition sensitivity of agriculture in the region (PUBMED:26121704). Stakeholders agreed on the importance of political commitment to nutrition, improving nutrition literacy, strengthening capacities, and improving the use of financial resources. However, there are different ways in which South Asian agriculture can improve its impact on nutrition, and sensitizing key influencers to the importance of nutrition for the health of a country's population is critical. This should lead to political commitment, intersectoral coordination to implement nutrition-relevant policies, adequately resourced nutrition-specific and nutrition-sensitive programs, and sufficient capacities at all levels (PUBMED:26121704). Additionally, a review of nutrition policies and programs in South Asia towards achieving the Global Nutrition targets indicated that all eight countries in the region had relevant policies and programs to address child malnutrition and micronutrient deficiencies.
However, there are major challenges in implementation, monitoring, evaluation, and quality, with increased dependency on international donors and organizations for funding and/or implementation of nutrition plans (PUBMED:37726846). In summary, while there is recognition of the importance of NSA and some policy frameworks in place, the actual enabling environment for NSA in South Asia faces challenges related to political commitment, capacity building, resource allocation, and effective implementation of policies and programs.
Instruction: Does robotic assistance improve efficiency in performing complex minimally invasive surgical procedures? Abstracts: abstract_id: PUBMED:24642128 Robotic-assisted minimally invasive liver resection. Background: Minimally invasive liver resection is feasible for select patients. The flexibility of robotic surgical instruments improves the possibility of minimally invasive liver resection, even in challenging major liver resections. Materials And Methods: We enrolled 69 patients who underwent pure laparoscopic liver resection from 2007 to 2011, and 52 patients who underwent robotic-assisted minimally invasive liver resection in 2012. The patients' characteristics, surgical procedures, and perioperative parameters were described and compared between these two groups. Results: In 2012, we performed 56 robotic hepatobiliary procedures, which included 52 (92%) robotic-assisted minimally invasive liver resection procedures. Under the assistance of a robotic system, we increased the number of patients undergoing minimally invasive liver resection by more than twofold, and we increased the percentage of minimally invasive surgery for patients with fresh hepatocellular carcinoma, compared to our laparoscopic group (44% vs. 15%, respectively). We also increased the percentage of major hepatectomy for minimally invasive liver resection by a robotic approach (39% vs. 15%) with a comparable conversion rate (5%) and morbidity (8%), compared to the laparoscopic group. We can even perform a challenging living donor liver harvest procedure with a robotic-assisted minimally invasive approach. Conclusion: Robotic assistance increased the percentage of minimally invasive liver resections and the percentage of major minimally invasive liver resections with comparable perioperative results. Robotic-assisted minimally invasive liver resection is feasible, but more accumulated experience is needed to clarify its role.
abstract_id: PUBMED:31482359 Impact of robotic assistance on mental workload and cognitive performance of surgical trainees performing a complex minimally invasive suturing task. Background: Few studies have investigated the potential impact of robotic assistance on cognitive ergonomics during advanced minimally invasive surgery. The purpose of this study was to assess the impact of robotic assistance on mental workload and downstream cognitive performance in surgical trainees. Methods: Robot-naïve trainees from general surgery, urology and gynaecology, stratified by specialty and level of training, were randomised to either laparoscopic surgery (LS) or robotic-assisted laparoscopic surgery (RALS) and performed a time-limited, complex laparoscopic suturing task after watching a 5-min instructional video. The RALS group received an additional 5-min orientation to the robotic console. Subjective mental workload was measured using NASA Task Load Index. Concentration and executive cognitive function were assessed using Psychomotor Vigilance Task (PVT) and Wisconsin Card Sorting Test (WCST), respectively. A p value of 0.05 was considered significant. Results: Sixteen senior residents (SR; ≥ PGY3) and 14 junior residents (JR; PGY1-2) completed the study. There was no difference in mental workload between LS and RALS. Within JR there was no difference in task completion time comparing LS versus RALS; however, LS was associated with impaired concentration post-task versus pre-task (PVT reaction time 306 versus 324 ms, p = 0.03), which was not observed for RALS. In contrast, amongst SR, RALS took significantly longer than LS (10.3 vs. 14.5 min, p = 0.02) and was associated with significantly worse performance on WCST (p < 0.01). Conclusions: Robotic assistance, in this setting, did not provide a technical performance advantage nor impact subjective mental workload with novice users regardless of level of surgery training. 
We observed a protective effect on cognitive performance offered by RALS to junior trainees with limited LS experience, yet a detrimental effect on senior trainees with greater LS ability and inadequate pre-study robotic training, suggesting that robotic consoles may be mentally taxing for robotic novices and consideration should be given to formal console training prior to initial clinical exposure. abstract_id: PUBMED:27000903 Survey of minimally invasive general surgery fellows training in robotic surgery. Minimally invasive surgery fellowships offer experience in robotic surgery, the nature of which is poorly defined. The objective of this survey was to determine the current status and opportunities for robotic surgery training available to fellows training in the United States and Canada. Sixty-five minimally invasive surgery fellows, attending a fundamentals of fellowship conference, were asked to complete a questionnaire regarding their demographics and experiences with robotic surgery and training. Fifty-one of the surveyed fellows completed the questionnaire (83 % response). Seventy-two percent of respondents had staff surgeons trained in performing robotic procedures, with 55 % of respondents having general surgery procedures performed robotically at their institution. Just over half (53 %) had access to a simulation facility for robotic training. Thirty-three percent offered mechanisms for certification and 11 % offered fellowships in robotic surgery. One-third of the minimally invasive surgery fellows felt they had been trained in robotic surgery and would consider making it part of their practice after fellowship. However, most (80 %) had no plans to pursue robotic surgery fellowships. Although a large group (63 %) felt optimistic about the future of robotic surgery, most respondents (72.5 %) felt their current experience with robotic surgery training was poor or below average. 
There is wide variation in exposure to and training in robotic surgery in minimally invasive surgery fellowship programs in the United States and Canada. Although a third of trainees felt adequately trained to perform robotic procedures, most fellows felt that their current experience with training was not adequate. abstract_id: PUBMED:32190354 Robotic-assisted minimally invasive esophagectomy: past, present and future. Esophagectomy for cancer of the esophagus is increasingly performed using minimally invasive techniques. After the introduction of minimally invasive esophagectomy (MIE) in the early 1990s, robotic-assisted techniques followed after the turn of the millennium. The advent of robotic platforms has allowed the development of robotic-assisted minimally invasive esophagectomy (RAMIE) over the past 15 years. Although recent trials have shown lower peri-operative morbidity and better quality of life compared to open esophagectomy, no randomized trials have compared RAMIE to conventional MIE. This paper summarizes the current literature on RAMIE and provides an overview of expected future developments in robotic surgery. abstract_id: PUBMED:26801753 Minimally invasive surgery and robotic surgery: surgery 4.0? Surgery can only maintain its role in a highly competitive environment if results are continuously improved, accompanied by further reduction of the interventional trauma for patients and with justifiable costs. Significant impetus toward this goal was expected from minimally invasive surgery and, in particular, robotic surgery; however, a real breakthrough has not yet been achieved. Accordingly, the new strategic approach of cognitive surgery is required to optimize the provision of surgical treatment.
A full scale integration of all modules utilized in the operating room (OR) into a comprehensive network and the development of systems with technical cognition are needed to upgrade the current technical environment passively controlled by the surgeon into an active collaborative support system (surgery 4.0). Only then can the true potential of minimally invasive surgery and robotic surgery be exploited. abstract_id: PUBMED:35903260 Minimally-Invasive Assisted Robotic Spine Surgery (MARSS). Minimally-Invasive robotic spine surgery (MARSS) has expanded the surgeons armamentarium to treat a variety of spinal disorders. In the last decade, robotic developments in spine surgery have improved the safety, accuracy and efficacy of instrumentation placement. Additionally, robotic instruments have been applied to remove tumors in difficult locations while maintaining minimally invasive access. Gross movements by the surgeon are translated into fine, precise movements by the robot. This is exemplified in this chapter with the use of the da Vinci robot to remove apical thoracic tumors. In this chapter, we will review the development, technological advancements, and cases that have been conducted using MARSS to treat spine pathology in a minimally invasive fashion. abstract_id: PUBMED:28251793 Preservation of replaced left hepatic artery during robotic-assisted minimally invasive esophagectomy: A case series. Objective: Finding of a significant replaced left hepatic artery (RLHA) during esophagectomy is relatively rare, with an incidence of approximately 5%. Sparing of the artery may be required to avoid complications of liver ischemia. Robotic assistance during esophagectomy may provide a technically superior method of artery preservation with minimally invasive approaches. Methods: This is a retrospective case series of patients undergoing robotic-assisted minimally invasive esophagectomy (RAMIE) identified to have a significant RLHA at time of surgery. 
Results: Five patients with a significant RLHA were identified from a series of over 100 RAMIE operations. Preservation of RLHA was accomplished in all cases without need for conversion, no intra-operative complications, and no post-operative liver dysfunction. The stomach was suitable and used for conduit reconstruction in all patients. Conclusions: Sparing of the RLHA during RAMIE is feasible and effective. The robotic assisted approach may obviate the need for open conversion during these complex minimally invasive operations. abstract_id: PUBMED:34312603 Minimally invasive robotic cervicothoracic fusion: a case report and review of literature. Minimally invasive surgery (MIS) of the posterior cervical spine with robotic assistance has recently emerged to treat degenerative disc disease. Robotic arms and 3D neuronavigation with preoperatively planned placement are used to achieve real-time intraoperative guidance, reducing screw malposition through increased accuracy and stability. This results in decreased blood loss, postoperative pain, and quicker recovery time compared to other techniques. We aim to demonstrate a novel technical approach to posterior cervical spine fusion using robotic assistance and discuss its advantages. In a patient with right hand weakness and a right paracentral disc herniation of the cervicothoracic spine, we performed a MIS percutaneous and robotically assisted posterior spinal fusion at C7-T2, with complete C7-T1 and T1-2 right-sided facetectomies and also a T1-T2 discectomy. Preoperative software planning and a robotic platform attachment configuration was used. There was immediate postoperative improvement in upper extremity strength and the patient was discharged without complications. Postoperative imaging confirmed accurate hardware placement, and follow-up at both 3- and 4-month confirmed improved upper extremity strength with sensation intact throughout. 
MIS robotic posterior cervicothoracic fusion can effectively be used to improve patient outcomes. Further implementation of robotic assistance during cervical fusion in larger studies is needed to further evaluate its effectiveness. abstract_id: PUBMED:35605827 Operative Time for Minimally Invasive Sacrocolpopexy: Comparison of Conventional Laparoscopy versus Robotic Platform. Study Objective: To evaluate the operative time for minimally invasive sacrocolpopexy using conventional laparoscopy vs robotic assistance. In addition, we sought to compare intraoperative complications, mesh complications, anatomic prolapse recurrence, and retreatment. Design: Retrospective cohort study. Setting: Academic hospital. Patients: All 142 women who underwent minimally invasive sacrocolpopexy between January 1, 2019, and December 31, 2019. Intervention: We compared operative time between laparoscopic and robotic-assisted sacrocolpopexies. Measurements And Main Results: A total of 142 women were included. Mean age was 61.8 ± 9.6 years and mean body mass index 27.1 ± 4.4 kg/m2. A total of 86 (60.6%) sacrocolpopexies were performed laparoscopically and 56 (39.4%) with robotic assistance. There were no significant differences in baseline demographic variables. A higher proportion of concomitant hysterectomies were performed with robotic assistance as compared with laparoscopic cases (n = 42, 73.7% robotic vs n = 43, 50.6% laparoscopic; p <.01). Mean operative times were significantly different between robotic and laparoscopic groups (176.3 ± 45.5 minutes and 195.0 ± 45.4 minutes, p = .02). On linear regression, the variables predicting significant change in operative time were robotic assistance, concomitant hysterectomy, age, body mass index, and no resident involvement. There were no differences in intraoperative bladder or bowel injury, anatomic recurrence beyond the hymen, retreatment, or mesh complications (all p >.05). 
Conclusions: Contrary to previous research, the use of robotic assistance does not appear to increase operative time for patients undergoing minimally invasive sacrocolpopexy in a large academic practice. abstract_id: PUBMED:33634004 Minimally invasive robotic breast reconstruction surgery. The rising popularity of robotic surgery has enabled surgeons to continue to expand the uses of robotic surgery. Robotic surgery offers minimally invasive approaches coupled with tremor elimination, up to seven degrees of freedom, ergonomic positioning, 3D magnified vision and improved resolution. We describe robotic surgery techniques for nipple-sparing mastectomies, latissimus dorsi muscle flap harvest, deep inferior epigastric perforator (DIEP) flap pedicle harvest, and robotic microsurgical anastomoses. By using a robotic system the surgeon is able to offer not only a minimally invasive approach to the patient but the surgeon's ability can be improved upon as well. This improved ability is best characterized in the robotic supermicrosurgical anastomosis where even the faintest surgeon's tremor is exploited. However, within the robotic system tremor is eliminated. We are now able to offer patients a completely minimally invasive approach to ablative breast surgery and breast reconstruction. A patient could have a robotic nipple-sparing mastectomy, followed by a robotic DIEP reconstruction with a robotic microsurgical anastomosis. The patient could even have robotic lymphovenous bypass to address lymphedema that could have arisen after an axillary dissection. A completely robotic surgical approach maximizes both utilization of the robotic system and patient benefit. By using robotic techniques in flap harvest the morbidity of traditional open surgeries is minimized and the use of robotic anastomoses expands the limits of human precision. Answer: Robotic assistance has been shown to improve efficiency in performing complex minimally invasive surgical procedures in several ways. 
For instance, the flexibility of robotic surgical instruments has been noted to enhance the feasibility of minimally invasive liver resection, even in challenging cases such as major liver resections and living donor liver harvest procedures (PUBMED:24642128). Additionally, robotic assistance has been associated with increased safety, accuracy, and efficacy of instrumentation placement in spine surgery, translating gross movements by the surgeon into fine, precise movements by the robot (PUBMED:35903260). In the context of esophagectomy, robotic assistance may provide a technically superior method of artery preservation with minimally invasive approaches, potentially obviating the need for open conversion during complex operations (PUBMED:28251793). Moreover, robotic assistance does not appear to increase operative time for patients undergoing minimally invasive sacrocolpopexy, contrary to previous research, suggesting an improvement in efficiency (PUBMED:35605827). However, the impact of robotic assistance on efficiency can vary depending on the experience level of the surgeon and the specific procedure. For example, robotic assistance did not provide a technical performance advantage nor impact subjective mental workload with novice users regardless of the level of surgery training, and it was observed to have a protective effect on cognitive performance for junior trainees with limited laparoscopic surgery experience (PUBMED:31482359). On the other hand, senior trainees with greater laparoscopic ability and inadequate pre-study robotic training experienced a detrimental effect, suggesting that robotic consoles may be mentally taxing for robotic novices (PUBMED:31482359). 
In breast reconstruction surgery, the use of robotic systems has been described to offer a minimally invasive approach with benefits such as tremor elimination, enhanced degrees of freedom, ergonomic positioning, and improved vision, which can improve the surgeon's ability and efficiency (PUBMED:33634004). Overall, while robotic assistance has the potential to improve efficiency in performing complex minimally invasive surgical procedures, the actual impact may depend on factors such as the type of procedure, the surgeon's experience with robotic systems, and the availability of adequate training and simulation facilities (PUBMED:27000903).
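The operative-time comparison cited above (176.3 ± 45.5 minutes for 56 robotic cases vs. 195.0 ± 45.4 minutes for 86 laparoscopic cases, PUBMED:35605827) can be sanity-checked from the published summary statistics alone with Welch's two-sample t-test. The sketch below is an illustration, not the authors' analysis; the function name is my own, and it uses a normal approximation to the t distribution for the p-value, which is reasonable at these sample sizes.

```python
from math import sqrt, erf

def welch_t_from_summary(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and a two-sided p-value (normal approximation)
    computed from group summary statistics: means, SDs, sample sizes."""
    se = sqrt(sd1**2 / n1 + sd2**2 / n2)  # SE of the difference in means
    t = (mean1 - mean2) / se
    # Two-sided p-value via the standard normal CDF; adequate for large n
    # (the exact t distribution would need the Welch-Satterthwaite df).
    phi = 0.5 * (1.0 + erf(abs(t) / sqrt(2.0)))
    p = 2.0 * (1.0 - phi)
    return t, p

# Laparoscopic vs. robotic sacrocolpopexy operative times (minutes),
# as reported in the abstract of PUBMED:35605827.
t, p = welch_t_from_summary(195.0, 45.4, 86, 176.3, 45.5, 56)
print(f"t = {t:.2f}, two-sided p ~ {p:.3f}")
```

The result (t ≈ 2.4, p ≈ 0.017) is consistent with the p = .02 reported in the abstract.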
Instruction: Are depressive symptoms associated with cancer screening and cancer stage at diagnosis among postmenopausal women? Abstracts: abstract_id: PUBMED:18788983 Are depressive symptoms associated with cancer screening and cancer stage at diagnosis among postmenopausal women? The Women's Health Initiative observational cohort. Background: Women with depressive symptoms may use preventive services less frequently and experience poorer health outcomes. We investigated the association of depressive symptoms with breast and colorectal cancer screening rates and stage of cancer among a cohort of postmenopausal women. Methods: In The Women's Health Initiative Observational Study, 93,676 women were followed on average for 7.6 years. Depressive symptoms were measured at baseline and at 3 years using the 6-item scale from the Center for Epidemiological Studies Depression scale (CES-D). We calculated a cancer screening rate expressed as a proportion of the years that women were current with recommended cancer screening over the number of follow-up visits in the study. Breast and colorectal cancers were staged based on Surveillance, Epidemiology and End Results (SEER) classification. Results: At baseline, 15.8% (12,621) women were positive for depressive symptoms, and 6.9% (4,777) were positive at both baseline screening and at 3 years. The overall average screening rate was 71% for breast cancer and 53% for colorectal cancer. The breast cancer screening rate was 1.5% (CI 0.9%-2.0%) lower among women who reported depressive symptoms at baseline than among those who did not. Depressive symptoms were not a predictor for colorectal cancer screening. Stage of breast and colorectal cancer was not found to be associated with depressive symptoms after adjusting for covariates. Conclusions: Among a healthy and self-motivated cohort of women, self-reported depressive symptoms were associated with lower rates of screening mammography but not with colorectal cancer screening. 
abstract_id: PUBMED:28387934 Effect of depression before breast cancer diagnosis on mortality among postmenopausal women. Background: Few previous studies investigating depression before the diagnosis of breast cancer and breast cancer-specific mortality have examined depression measured at more than 1 time point. This study investigated the effect of depression (combining depressive symptoms alone with antidepressant use) measured at 2 time points before the diagnosis of breast cancer on all-cause mortality and breast cancer-specific mortality among older postmenopausal women. Methods: A large prospective cohort, the Women's Health Initiative, was used. The study included 3095 women with incident breast cancer who had measures of depressive symptoms and antidepressant use before their diagnosis at the baseline and at year 3. Multivariate Cox proportional hazards regression was used to estimate adjusted hazard ratios (HRs) between depression at the baseline, depression at year 3, and combinations of depression at these time points and all-cause mortality and breast cancer-specific mortality. Results: Depression at year 3 before a breast cancer diagnosis was associated with higher all-cause mortality after adjustments for multiple covariates (HR, 1.35; 95% confidence interval [CI], 1.02-1.78). There was no statistically significant association of baseline depression and all-cause mortality or breast cancer-specific mortality whether or not depression was also present at year 3. In women with late-stage (regional- or distant-stage) breast cancer, newly developed depression at year 3 was significantly associated with both all-cause mortality (HR, 2.00; 95% CI, 1.13-3.56) and breast cancer-specific mortality (HR, 2.42; 95% CI, 1.24-4.70). Conclusions: Women with newly developed depression before the diagnosis of breast cancer had a modestly but significantly increased risk for death from any cause and for death from breast cancer at a late stage. 
Cancer 2017;123:3107-15. © 2017 American Cancer Society. abstract_id: PUBMED:33382778 Association between prediagnosis depression and mortality among postmenopausal women with colorectal cancer. Background: There are no epidemiologic data on the relation of depression before colorectal cancer diagnosis to colorectal cancer mortality among women with colorectal cancer, especially those who are postmenopausal. Our aim was to fill this research gap. Methods: We analyzed data from a large prospective cohort in the US, the Women's Health Initiative (WHI). The study included 2,396 women with incident colorectal cancer, assessed for depressive symptoms and antidepressant use before cancer diagnosis at baseline (screening visit in the WHI study) during 1993-1998. Participants were followed up from cancer diagnosis till 2018. We used Cox proportional hazards regression to estimate adjusted hazard ratios (HRs) between depression (depressive symptoms or antidepressant use) at baseline, and all-cause mortality and colorectal cancer-specific mortality. Results: Among women with colorectal cancer, there was no association between baseline depression and all-cause mortality or colorectal cancer-specific mortality after adjusting for age or multiple covariates. Conclusion: Among women with colorectal cancer, there was no statistically significant association between depression before colorectal cancer diagnosis and all-cause mortality or colorectal cancer-specific mortality. Further studies are warranted to assess depressive symptoms and antidepressant use, measured at multiple points from baseline to diagnosis, and their interactions with specific types of colorectal cancer treatment on the risk of death from colorectal cancer. abstract_id: PUBMED:33046263 Depression in women with a diagnosis of breast cancer. Prevalence of symptoms of depression in Peruvian women with early breast cancer and related sociodemographic factors. 
We sought to review the literature on the prevalence of depressive symptoms in women with a diagnosis of breast cancer (BC) and, in the Peruvian population, to determine the prevalence of depressive symptoms and describe their association with sociodemographic characteristics. This was a descriptive cross-sectional analytical study of 254 patients from the National Cancer Institute of Peru (Instituto Nacional de Enfermedades Neoplásicas) with a diagnosis of clinical stage I or II BC. The patients included women aged between 26 and 67 years. Symptoms of depression were assessed with the Beck Depression Inventory-II. Moreover, clinical features and patient sociodemographic characteristics were analyzed, and their association with depression was assessed by logistic regression. The average age of the patients was 47.8 ± 9.2 years; 5.4% of the patients were postmenopausal at the time of the questionnaire. About 55% of women were from Lima, 58.3% had completed secondary education (11 ± 3.2 years), 45.7% were not working, and 46.5% were single. The prevalence of depression was 25.6% at the time of BC diagnosis. Of those patients with symptoms of depression, 16.9% showed symptoms of mild depression, 6.3% moderate, and 2.4% severe. A multivariable logistic regression model showed that, in Peruvian women with a diagnosis of BC, being married or employed significantly decreased the odds of presenting depressive symptoms (P = 0.029 and 0.017, respectively). Our main limitation was the lack of evaluation of depressive symptoms before the diagnosis, during or at the end of treatment. Another limitation was that the Beck Depression Inventory-II test could only identify depressive symptoms, but not depression as a disease. We have reviewed relevant literature on depression in women with a diagnosis of BC. The data presented suggest an association of both employment and marital status with depressive symptoms among Peruvian women with a diagnosis of BC.
Pre-emptive support for women at risk could influence resilience and/or motivation for compliance with antineoplastic treatments. abstract_id: PUBMED:37936486 Caregiving and all-cause mortality in postmenopausal women: Findings from the Women's Health Initiative. Background: Caregiving is commonly undertaken by older women. Research is mixed, however, about the impact of prolonged caregiving on their health, well-being, and mortality risk. Using a prospective study design, we examined the association of caregiving with mortality in a cohort of older women. Methods: Participants were 158,987 postmenopausal women aged 50-79 years at enrollment into the Women's Health Initiative (WHI) who provided information on current caregiving status and caregiving frequency at baseline (1993-1998) and follow-up (2004-2005). Mortality was ascertained from baseline through March of 2019. Cox regression with caregiving status defined as a time-varying exposure was used to estimate hazard ratios (HR) and 95% confidence intervals (CI) for mortality, adjusting for sociodemographic factors, smoking, and history of diabetes, hypertension, cardiovascular disease (CVD), and cancer. Stratified analyses explored whether age, race-ethnicity, depressive symptoms, frequency of caregiving, optimism, and living status modified the association between caregiver status and mortality. Results: At baseline, 40.7% of women (mean age 63.3 years) self-identified as caregivers. During a mean 17.5-year follow-up, all-cause mortality (50,526 deaths) was 9% lower (multivariable-adjusted HR = 0.91, 95% CI: 0.89-0.93) in caregivers compared to non-caregivers. The inverse association between caregiving and all-cause mortality did not differ according to caregiving frequency or when stratified by age, race-ethnicity, depressive symptoms, optimism, or living status (interaction p > 0.05, all). Caregiving was inversely associated with CVD and cancer mortality. 
Conclusion: Among postmenopausal women residing across the United States, caregiving was associated with lower mortality. Studies detailing the type and amount of caregiving are needed to further determine its impact on older women. abstract_id: PUBMED:29277970 Personality and breast cancer screening in women of the GAZEL cohort study. The potential benefit of breast cancer screening is mitigated by the risk of false positives and overdiagnosis, thus advocating for a more personalized approach, based on the individual benefit-harm balance. Since personality might influence the women's appraisal of this balance, this prospective observational cohort study examined whether it could influence mammography use. A total of 2691 postmenopausal women of the GAZEL Cohort Study completed the Bortner Type A Rating Scale and the Buss and Durkee Hostility Inventory in 1993. Associations between personality scores and subsequent mammography use, self-reported through up to five triennial follow-up questionnaires, were estimated with Odds Ratio (OR) and 95% confidence interval (CI) with logistic mixed model regressions, adjusting for age, occupational grade, marital status, family history of breast cancer, age at menarche, age at first delivery, gynecological follow-up, hormone therapy use, and depressive symptoms. Individual propensity scores were used to weight the analyses to control for potential selection biases. More than 90% of the participants completed at least two follow-up questionnaires. Type A personality, but not hostility, was associated with mammography use in both univariate (crude OR [95% CI]: 1.62 [1.24-2.11], P < 0.001) and multivariate analyses (OR [95% CI]: 1.46 [1.13-1.90], P < 0.01). Type A personality traits (i.e., sense of time urgency, high job involvement, competitiveness) independently predicted mammography use among postmenopausal women. 
While paying more attention to the adherence of women with low levels of these traits, clinicians may help those with higher levels to better consider the risks of false positives and overdiagnosis. abstract_id: PUBMED:9068621 Clinical decision-making with the woman after menopause. Women visit physicians more often than men do, but women's medical care frequently remains fragmented and insufficient. The opportunity to establish a primary care relationship often occurs when patients present with an acute complaint. Integral parts of preventive health maintenance for middle-aged women include an evaluation of the risk for osteoporosis, coronary artery disease, depression, and domestic violence; a consideration of hormone replacement therapy; and screening for breast, cervical, and colon cancer. A primary care physician can address all of these issues in a comprehensive manner without specialty referrals. abstract_id: PUBMED:28251384 Identifying socio-demographic and clinical characteristics associated with medication beliefs about aromatase inhibitors among postmenopausal women with breast cancer. Purpose: Non-adherence/persistence to adjuvant endocrine therapy can negatively impact survival. Beliefs about medicines are known to affect adherence. This study aims to identify socio-demographic and clinical characteristics associated with medication beliefs among women taking aromatase inhibitors (AIs). Methods: Women completed an online survey on beliefs about AI therapy [Beliefs about Medicines Questionnaire (BMQ)], beliefs about breast cancer [Assessment of Survivor Concerns scale (ASC)], and depression [Personal Health Questionnaire depression scale (PHQ-8)]. Socio-demographic and clinical characteristics were collected. Bivariate analyses and linear regression models were performed to investigate relationships between variables. Results: A total of 224 women reported currently taking AI therapy and were included in the analysis. 
Significantly higher concern beliefs were found among women who had at least mild depression, experienced side effects from AIs, and previously stopped therapy with another AI. Significant correlations were found between concern and necessity beliefs and cancer and health worry. Women age 70 and older displayed less fear of cancer recurrence and health worry, and a trend towards lower necessity and concern beliefs. No differences were found for other variables. In the regression model, greater necessity beliefs were found with increases in the number of current prescription medications (B = 1.06, 95% CI 0.31-1.81, p = 0.006) and shorter duration of current AI therapy (B = -0.65, 95% CI -1.23 to -0.07, p = 0.029), whereas greater concern beliefs were associated with higher depression scores (B = 1.19, 95% CI 0.35-2.03, p = 0.006). Conclusions: Medication necessity and concern beliefs were associated with a definable subset of patients who may be at higher risk for non-persistence. abstract_id: PUBMED:26578537 Depression, Antidepressant Use, and Postmenopausal Breast Cancer Risk. Background: Whether depression and antidepressant (AD) use might influence breast cancer risk is unclear, and these exposures have not been evaluated together in a single, prospective cohort study of breast cancer risk. Methods: Among 71,439 postmenopausal women in the Women's Health Initiative Observational Study (WHI-OS), we estimated multivariable-adjusted HRs for the independent and joint effects of depressive symptoms and AD use on breast cancer risk using Cox proportional hazards regression. Results: When analyzed separately, neither depressive symptoms nor AD use at baseline were associated with a significantly increased risk of total breast cancer (HR = 0.96, 95% CI, 0.85-1.08; HR = 1.04, 95% CI, 0.92-1.20, respectively) or invasive breast cancer (HR = 0.98, 95% CI, 0.86-1.12; HR = 1.00, 95% CI, 0.86-1.16, respectively). 
Current AD use was associated with a borderline-significant increase of in situ breast cancer (HR = 1.30, 95% CI, 0.99-1.75) after adjustment for depressive symptoms; however, this relationship was attenuated after adjustment for mammographic screening (HR = 1.08, 95% CI, 0.76-1.51). No significant variation in total breast cancer risk was observed when the separate and joint effects of depressive symptoms and AD use were explored (P for interaction = 0.14). Conclusion: We found no evidence that either depression or AD use influences breast cancer risk. An elevated risk of in situ disease among AD users could not be ruled out, though is likely due to increased screening in this subgroup. Impact: Given the high prevalence of these exposures, these results may provide reassurance to the millions of women who are depressed and/or use ADs each year. abstract_id: PUBMED:29978034 Change in longitudinal trends in sleep quality and duration following breast cancer diagnosis: results from the Women's Health Initiative. Breast cancer survivors frequently report sleep problems, but little research has studied sleep patterns longitudinally. We examined trends in sleep quality and duration up to 15 years before and 20 years after a diagnosis of breast cancer, over time among postmenopausal women participating in the Women's Health Initiative (WHI). We included 12,098 participants who developed invasive breast cancer after study enrollment. A linear mixed-effects model was used to determine whether the time trend in sleep quality, as measured by the WHI Insomnia Rating Scale (WHIIRS), a measure of perceived insomnia symptoms from the past 4 weeks, changed following a cancer diagnosis. To examine sleep duration, we fit a logistic regression model with random effects for both short (<6 h) and long (≥9 h) sleep. In addition, we studied the association between depressive symptoms and changes in WHIIRS and sleep duration. 
There was a significantly slower increase in the trend of WHIIRS after diagnosis (β = 0.06; p = 0.03), but there were non-significant increases in the trend of the probability of short or long sleep after diagnosis. The probability of depressive symptoms significantly decreased, though the decrease was more pronounced after diagnosis (p < 0.01). Trends in WHIIRS worsened at a relatively slower rate following diagnosis and lower depression rates may explain the slower worsening in WHIIRS. Our findings suggest that over a long period of time, breast cancer diagnosis does not adversely affect sleep quality and duration in postmenopausal women compared to sleep pre-diagnosis, yet both sleep quality and duration continue to worsen over time. Answer: Depressive symptoms are associated with lower rates of screening mammography but not with colorectal cancer screening among postmenopausal women. Specifically, the breast cancer screening rate was found to be 1.5% lower among women who reported depressive symptoms at baseline compared to those who did not report such symptoms. However, depressive symptoms were not found to be a predictor for colorectal cancer screening. Additionally, the stage of breast and colorectal cancer was not found to be associated with depressive symptoms after adjusting for covariates (PUBMED:18788983). Furthermore, depression before the diagnosis of breast cancer was associated with higher all-cause mortality, particularly in women with late-stage breast cancer. Newly developed depression at year 3 before a breast cancer diagnosis was significantly associated with both all-cause mortality and breast cancer-specific mortality (PUBMED:28387934). However, for colorectal cancer, there was no statistically significant association between depression before diagnosis and all-cause mortality or colorectal cancer-specific mortality (PUBMED:33382778). 
These findings suggest that while depressive symptoms may influence the likelihood of undergoing breast cancer screening, they do not appear to be linked to the stage at which breast or colorectal cancer is diagnosed. Additionally, the presence of depressive symptoms prior to a breast cancer diagnosis may have implications for mortality outcomes.
Instruction: Does preoperative chemotherapy ease the surgical procedure for Wilms tumor? Abstracts: abstract_id: PUBMED:19692015 Does preoperative chemotherapy ease the surgical procedure for Wilms tumor? Purpose: A conclusion of Société Internationale d'Oncologie Pédiatrique in 2001 after protocol 2, 5, 6, 9 and 93-01 was that "preoperative chemotherapy would make nephrectomy easier and furthermore, metastases may disappear or become resectable and vascular extension may regress and partial nephrectomy may become possible." We changed our strategy from preoperative chemotherapy in select cases only to standard 4 to 6-week preoperative chemotherapy in all. However, it appears dubious whether surgical ease or perioperative complications consistently improved after preoperative chemotherapy. We investigated volume changes after chemotherapy and compared all possible variables at a single center using the same surgical team. Materials And Methods: A total of 32 children with Wilms tumor between 1998 and 2007 were included in a retrospective analysis, including 15 without preoperative chemotherapy according to the National Wilms' Tumor Study Group protocol before 2001 and 17 with preoperative chemotherapy according to the Société Internationale d'Oncologie Pédiatrique 2001 protocol. We studied the change in tumor volume using picture archive and communication systems volume measurement after chemotherapy, surgical procedure parameters (child body mass index, length and type of incision, blood loss, operative time, hospital stay, pain medication and complications) and outcome (histology and staging). Statistical analysis was performed using the t, Fisher and chi-square tests. Results: The 2 groups were comparable in gender, age (mean 5.4 and 3.9 years, respectively) and staging at diagnosis. At diagnosis mean +/- SD Wilms tumor volume was 408.93 +/- 387.39 and 454.58 +/- 236.71 cm(3) in the nonchemotherapy and chemotherapy groups, respectively. 
In the chemotherapy group, 12 of 17 tumors decreased by a mean of 60.9% +/- 24.6% and 5 of 17 increased by 42.8% +/- 31.3%. No analyzed parameters were different in the 2 groups. Conclusions: Our single center analysis with the same surgical team before and after the preoperative chemotherapy era shows that it remains unpredictable whether surgery in a child with Wilms tumor is safer and easier after preoperative chemotherapy. abstract_id: PUBMED:2425100 Successful treatment of renal vein and vena caval extension of nephroblastoma by preoperative chemotherapy. Intravascular nephroblastoma involving the renal veins or inferior vena cava, occurring in 6 to 10 per cent of the cases, dramatically complicates the treatment course for those individuals. Heroic surgical procedures, including sternal splitting and cardiopulmonary bypass, have been used to treat these tumors, often with an added morbidity and increased mortality. We have detailed the dramatic responses to preoperative chemotherapy in 3 patients with intravascular nephroblastoma, which resulted in either complete eradication or marked shrinkage of the intravascular tumor, greatly simplifying the ensuing surgical procedure. These observations strongly indicate that preoperative chemotherapy should be used as the primary treatment in patients with this condition. abstract_id: PUBMED:8393945 Effectiveness of preoperative chemotherapy in the treatment of Wilms tumor with vena caval and intracardiac extension. Surgical treatment of Wilms tumor with vena caval and intracardiac extension is challenging, often requiring cardiopulmonary bypass. We report the response to preoperative chemotherapy in 3 patients following a protocol at our hospital. The tumor thrombus was limited to the inferior vena cava in 2 patients and extended to the right atrium in 1. All 3 patients had marked reduction or complete eradication of the venous thrombus, facilitating the final surgical procedure.
The patients are disease-free 18 months to 7 years after the initial treatment. Our protocol involves initial radiographic staging followed by needle biopsy, chemotherapy, repeat radiographic staging, definitive resection and additional chemotherapy. Our results confirm the effectiveness of this approach, which appears to decrease morbidity associated with intracaval and intra-atrial tumor extension. abstract_id: PUBMED:2581512 Preoperative chemotherapy of testicular cancer and Wilms' tumor Among the various urogenital malignancies, preoperative chemotherapy is most effective for testicular cancer and Wilms' tumor. The most effective treatment regimens for advanced nonseminomatous testicular tumors employ vinblastine, CDDP and bleomycin and adjunctive surgery. Another effective chemotherapy regimen is the combination of vinblastine, actinomycin D, bleomycin, cyclophosphamide and CDDP presented by MSKCC. Pretreatment with 4 courses of platinum, vinblastine and bleomycin before any surgical treatment in patients with massive bulk metastatic disease seems to provide the most effective cytoreduction and best survival. Donohue has shown that in a primary chemotherapy group, there is only 20% active carcinoma after primary chemotherapy, whereas in a salvage chemotherapy group there is approximately 50% active carcinoma at surgery. It must therefore be emphasized that complete remission should be obtained by primary chemotherapy and adjunctive surgery. In Wilms' tumor, preoperative chemotherapy with vincristine and actinomycin D should be given. abstract_id: PUBMED:22146531 Preoperative chemotherapy with doxorubicin in Wilms tumor. Background: Vincristine and actinomycin-D have long been used as the preoperative chemotherapy before nephrectomy in patients with Wilms tumor (WT).
Actinomycin-D is not readily found in Turkey, and delays in treatment, arising mainly from commercial and marketing problems, prompted us to administer alternatives such as doxorubicin, another antitumor antibiotic available in Turkey, to our newly diagnosed patients with WT. Procedure: Between November 2009 and January 2011, 12 consecutive patients with WT were included in the study. Preoperative chemotherapy consisted of vincristine (1.4 mg/m², days 1, 8, 15, and 22) and doxorubicin (20 mg/m², days 1 to 3). Results: All renal tumors showed drastic shrinkage after 4 weeks of preoperative chemotherapy. Surgery was uneventful in all patients and without any complications. At the end of the median follow-up time of 8 months (range, 6 to 17 mo), all patients are in remission without a sign of the disease. Conclusions: Preoperative chemotherapy consisting of vincristine and doxorubicin seems to be successful as a preoperative treatment in achieving shrinkage and reducing the risk of surgical complications. On the basis of the promising results in our study, it is worth administering a similar regimen with further follow-up in larger populations. abstract_id: PUBMED:17980066 Intracaval and intracardiac extension of Wilms' tumor. The influence of preoperative chemotherapy on surgical morbidity. Objectives: The aim of this retrospective study is to compare surgical complications and long-term survival in children with Wilms' tumor (WT) and tumor thrombus who did or did not receive preoperative chemotherapy. Materials And Methods: Review of the charts of 155 children with WT treated between 1983 and 2005, and analysis of 16/155 (10.3%) children with WT who presented cavoatrial tumor extension, of whom 8/16 had IVC and 8/16 atrial thrombus. Results: Median age was 54 months. 2/16 had cardiac failure as the first symptom.
11/16 (7 IVC and 4 atrial extension) (67%) underwent preoperative chemotherapy with vincristine plus actinomycin D, and 5/16 (1 IVC and 4 atrial) (33%) underwent initial nephrectomy and thrombus resection. Thus, of the 11 patients submitted to preoperative VCR/ACTD, 2/11 (18.1%) had complete regression of the thrombus, 6/11 (54.5%) partial regression, and 3/11 (27%) no response. Among the partial responders, nephrectomy with thrombus removal was performed in all, including one patient with previous intracardiac involvement, without extracorporeal circulation procedures. In two of the three non-responders, cardiopulmonary bypass was necessary for thrombus removal. There were no surgery-related deaths. Long-term survival is 91% in the group submitted to preoperative chemotherapy and 100% in the group who had surgery as the first approach. Conclusion: Preoperative chemotherapy was able to reduce thrombus extension in 8/11 (73%) treated patients, and cardiopulmonary bypass was avoided in 2 patients with atrial thrombus. Surgical resection of tumor and thrombus was successful in all cases, with or without preoperative chemotherapy, and overall survival was similar in both groups. abstract_id: PUBMED:32292457 Preoperative neoadjuvant chemotherapy on surgical condition and oncogene expression in advanced gastric cancer. Objective: To evaluate the effect of the preoperative neoadjuvant chemotherapy regimen of XELOX (capecitabine combined with oxaliplatin) on surgical condition and oncogene expression in advanced gastric cancer. Methods: From January 2015 to July 2016, 124 patients with advanced gastric cancer who were admitted to our hospital were selected. A random number table method was used to divide them into an observation group and a control group, with 62 in each group. The observation group received two courses of neoadjuvant chemotherapy (XELOX) before operation, and the control group received surgery.
The operative conditions, expression of oncogenes in gastric cancer lesions, occurrence of adverse reactions, and long-term prognosis were compared between the two groups. Results: The R0 resection rate of the observation group was significantly higher than that of the control group, and the difference was statistically significant (P<0.05). The operation time of the observation group was shorter than that of the control group, the amount of intraoperative bleeding and the amount of postoperative drainage of the observation group were less than those of the control group, and the differences were statistically significant (P<0.05). The mRNA expression of gastrokine 1, multiple tumor suppressor protein, the Wilms tumor gene on the X chromosome (WTX gene) and the phosphatase and tensin homolog deleted on chromosome ten (PTEN) gene in the observation group after treatment was significantly higher than that in the control group before treatment, and the increase was more pronounced in the observation group than in the control group (P<0.05). There was no significant difference in the incidence of adverse reactions between the two groups (P>0.05). In terms of long-term prognosis, the disease-free survival time and average survival time of the observation group during the two-year follow-up period were significantly better than those of the control group, and the recurrence rate of the observation group was significantly lower than that of the control group; the differences were statistically significant (P<0.05). Conclusion: Preoperative XELOX for advanced gastric cancer patients can effectively increase the proportion of radical surgery, reduce the risk of surgery, and significantly regulate the expression of oncogenes, thus improving the long-term prognosis of patients. abstract_id: PUBMED:26835317 Preoperative chemotherapy and local stage III in nephroblastoma.
The evolution of nephroblastoma (WT) treatment over the last decades has been a major success around the world. Despite pursuing different upfront treatment approaches, the Children's Oncology Group (COG, former National Wilms Tumor Study Group, NWTSG) and the International Society of Paediatric Oncology's Renal Tumor Study Group (SIOP-RTSG) show the same outcome. Treatment starts with preoperative chemotherapy in SIOP-RTSG, compared to initial surgery in COG. Response to chemotherapy can be used as a stratification parameter. This allows treating patients with blastemal subtype more aggressively, resulting in a better event-free survival (EFS). Moreover, the percentage of patients with local stage III is lower in SIOP-RTSG than in COG studies. Lymph node involvement, in NWTS 5 together with residual microscopic disease, results in a lower EFS in both study groups. But overall survival (OS) is not different when comparing patients with or without positive lymph nodes (LN). No other reason for stage III has a significant impact on outcome. The role of radiotherapy for local tumor control in stage III is important, but the radiation dose needs to be questioned, as 10.8 Gy used in COG is as efficient as 15 Gy in SIOP-RTSG protocols. In addition, in some low-income countries radiotherapy cannot be given due to a lack of radiation facilities. Nevertheless, some patients are cured without irradiation. The analysis of local stage III patients underlines the importance of preoperative chemotherapy and the need for molecular studies to better stratify patients according to their individual risk. abstract_id: PUBMED:29552786 Meta-analysis of the effect of preoperative chemotherapy on Wilms' tumor. Purpose: To evaluate the effect of preoperative chemotherapy on the event-free survival (EFS) and overall survival (OS) of Wilms' tumor in children, and to provide a basis for further improvement of clinical therapeutic effect and research level.
Methods: Relevant studies published before July 2017 were retrieved from PubMed, EMBASE, Web of Science and other databases, and two evaluators were independently responsible for study selection, data extraction and cross-checking according to the inclusion and exclusion criteria. EFS and OS of patients were assessed using hazard ratios (HR) and 95% confidence intervals (CI). All analyses, including publication bias assessment, were performed using Stata 12 software. Results: 12 studies meeting the criteria, with a total of 1639 patients, were finally enrolled. Meta-analysis showed that preoperative chemotherapy combined with surgery, compared with surgery alone, could improve the EFS and OS of patients with Wilms' tumor (HR = 1.26, 95% CI 1.07-1.48 and HR = 1.12, 95% CI 1.03-1.22, respectively). Both sensitivity analysis and publication bias assessment revealed that the results were reliable with no significant publication bias. Conclusions: Compared with surgery alone, preoperative chemotherapy combined with surgery can increase the EFS and OS and improve the prognosis of patients. abstract_id: PUBMED:10235327 Wilms' tumor: a 25-year review of the role of preoperative chemotherapy. Purpose: This 25-year population based, single institution review was conducted to investigate the impact of preoperative chemotherapy on surgical and histological staging in patients with Wilms' tumors. Results: Forty-nine patients under the age of 15 years were identified from case notes to have had histologically verified Wilms' tumors over the 25-year period from January 1972 to December 1996. Twenty-six patients were treated initially with preoperative chemotherapy, 23 with immediate surgery. Eleven had treatment randomized within the UKCCSG WT9101 trial (UKWT3), and the remainder received initial treatment according to unit policy.
Surgical stages in the two groups (preoperative chemotherapy and immediate surgery) were, respectively: stage I, 14 (28.5%) and 11 (22.5%); stage II, 1 (2%) and 8 (16.3%); stage III, 11 (22.5%) and 4 (8.2%). Seven patients had clinical stage IV disease at presentation. Histology results were favorable in 45 patients and unfavorable in four. All patients received chemotherapy during treatment, whereas 25 (51%) also received radiotherapy. No significant difference was evident in the two groups with respect to treatment-related morbidity. Five patients relapsed, three of whom died within the period of review, but a fourth has since died. Conclusions: This study suggests that the use of preoperative chemotherapy does not put the patient at increased risk of postoperative morbidity or reduced survival. The distribution of surgical stages suggests that limited tumor downstaging may have occurred as a result of preoperative chemotherapy. Answer: The impact of preoperative chemotherapy on the ease of surgical procedures for Wilms tumor is not entirely clear-cut. Some studies and observations suggest that preoperative chemotherapy can be beneficial in certain cases. For instance, it has been reported that preoperative chemotherapy can lead to a significant reduction or even complete eradication of intravascular nephroblastoma, which greatly simplifies the ensuing surgical procedure (PUBMED:2425100; PUBMED:8393945). This approach has been shown to be effective in reducing the extension of tumor thrombus, facilitating the final surgical procedure, and potentially avoiding the need for cardiopulmonary bypass in patients with vena caval and intracardiac extension (PUBMED:17980066). Moreover, preoperative chemotherapy has been associated with a decrease in tumor volume, which could theoretically make the surgery less challenging and reduce the risk of surgical complications (PUBMED:22146531).
A meta-analysis also indicated that preoperative chemotherapy combined with surgery could improve event-free survival (EFS) and overall survival (OS) of patients with Wilms' tumor (PUBMED:29552786). However, a single-center analysis with the same surgical team before and after the preoperative chemotherapy era showed that it remains unpredictable whether surgery in a child with Wilms tumor is safer and easier after preoperative chemotherapy. In this study, no analyzed parameters were significantly different in the two groups, suggesting that the benefits of preoperative chemotherapy on surgical ease may not be consistent (PUBMED:19692015). Another study suggested that preoperative chemotherapy does not put the patient at increased risk of postoperative morbidity or reduced survival, but it also indicated that limited tumor downstaging may have occurred as a result of preoperative chemotherapy (PUBMED:10235327). In conclusion, while there is evidence that preoperative chemotherapy can be beneficial in certain cases, particularly with intravascular extension, and may improve patient outcomes, its impact on the ease of the surgical procedure for Wilms tumor is not guaranteed and may vary depending on individual cases.
Instruction: Policies and mental health action plans in OECD: lessons for Quebec? Abstracts: abstract_id: PUBMED:25120115 Policies and mental health action plans in OECD: lessons for Quebec? Objectives: The objectives of this research are: 1) to provide a conceptual framework for analyzing mental health policies; 2) to compare mental health policies across a sample of OECD jurisdictions; 3) to describe configurations of mental health policies; 4) to identify practical implications for the Province of Quebec. Methods: Design: This research is a comparative synthetic study of mental health policies. Sampling: The web sites of the Ministries of health of the thirty-four OECD countries and ten Canadian Provinces were searched for mental health policies proposed within the last fifteen years. Twenty one such policies (with an English or French version) were retrieved, covering thirteen OECD countries, six Canadian Provinces and the WHO. Analysis: Content analysis was performed based on the categories (differentiation, integration, governance) and sub-categories of the aforementioned conceptual framework. Eight policies that together cover the variations encountered between all policies were used to identify typical configurations. Results: A conceptual framework derived from Parsons' Theory of Social Action posits that social action systems must exhibit a level of internal differentiation that corresponds to the heterogeneity of their external environment and also a level of integration that allows them to remain coherent despite the complexity of their environment. 
Governance mechanisms help them maintain an equilibrium between differentiation and integration. In terms of differentiation, mental health policies exhibit much variation in the needs and the groups that are prioritized (age, gender, ethnicity, culture, etc.), the types of interventions that are proposed (promotion, prevention, treatment, rehabilitation, etc.), the systemic levels at which interventions take place (society, government as a whole, health care system, organizations, programs, individuals), and the level of specification and scientific basis of proposed interventions. In terms of integration, policies promote various mechanisms belonging to four general categories of increasing effectiveness, from hierarchical separation of mandates, to exchange of information, to collaborative planning, and to complete structural integration and co-localisation of certain components (e.g., addiction and mental health services). In terms of governance, policies present program theories of varying explicitness and scientific bases, with different emphases on structures, processes or outcomes. Management models also vary in terms of precision, accountability, financing mechanisms, information systems, and the importance of clinical governance and quality improvement. Five configurations of mental health policies are identified (the public health, the professional, the structural technocratic, the functional technocratic, and the political), each comprising typical combinations of the preceding ingredients. Conclusion: The current Quebec mental health policy belongs to the structural technocratic configuration. It specifies fragmented mental health structures with mild integration mechanisms.
In the future, it should consider improving its public health aspects (inter-sector work on the determinants of mental health), professional aspects (emphasis on scientific evidence, clinical governance and quality), and functional aspects (integrated specialized community mental health and addiction services). But political factors may prevent it from doing so. abstract_id: PUBMED:24336989 The implementation of the Quebec Mental Health Action Plan: hands-on experience at the Maisonneuve-Rosemont hospital in Montreal The Mental Health Action Plan maps out the reorganization of mental health services in the province of Quebec. Accordingly, this paper presents the transformation process of the mental health program at the Maisonneuve-Rosemont hospital in Montreal. The authors discuss, on the one hand, the challenges that were met and, on the other hand, the model of care and organisational choices that were adopted. This article is divided into three parts. The first part describes the two main guiding principles underlying the Mental Health Action Plan. The second part examines the context surrounding the arrival of the Action Plan at the Maisonneuve-Rosemont Hospital. The third part describes the hospital transformation process that evolved in relation to: 1) the new centralized access points for mental health services; 2) the support for first-line (or primary care) services; 3) the second-line services for the specialized treatment of complex mental health problems. abstract_id: PUBMED:37263125 Gender perspectives in mental health plans in Spain: a pending task? Objective: To develop a gender sensitivity index and analyse the regional mental health plans in force in 2021 in terms of their gender sensitivity. Method: We reviewed the regional mental health plans in force in 2021 using an ad hoc questionnaire designed on the basis of a review of documents and reports on gender sensitivity in health policies.
We constructed an index based on both the symbolic and operational sensitivity of the plans. Results: Of the seven plans reviewed, four scored less than 13 out of 26. The maximum score was 18 and the minimum 2. Symbolic gender sensitivity was higher than operational sensitivity. Conclusions: Mental health plans have a low or very low degree of gender sensitivity. Not considering gender as a determinant of mental health in the formulation of specific health plans may reduce the effectiveness of interventions aimed at reducing gender inequalities in mental health. abstract_id: PUBMED:11381737 Off-site and alternative services in community mental health care in Quebec The objective of this article is to understand the evolution of the practice of "elsewhere" and "otherwise" that alternative community resources in Québec have consistently advocated. Towards this end, we begin by identifying the meaning and significance of the ideas of "elsewhere" and "otherwise" in the context of the transformation of Quebec's mental health system since 1989. Then we present the results of an exploratory study carried out between 1997 and 1999 in which the coordinators as well as members of 6 community mental health organisations in a semi-urban region of Quebec were interviewed. The results of the study suggest the validity of certain fears that community resources are becoming "deradicalized." The analysis presented here suggests the unfolding of a complex process involving the integration of alternative resources into the broader public mental health system, internal dynamics, and the emerging limitations of the vision of "elsewhere" and "otherwise" that has guided them until now. Twenty years after alternative community resources appeared on the Quebec mental health scene, is it time to reform the philosophy of "elsewhere" and "otherwise"? abstract_id: PUBMED:25120120 The role of families in the Quebec mental health system Purpose.
This paper aims to summarize the current situation regarding the role of families of persons with mental disorders within the mental health system in Quebec. Methods. We searched the most recent and pertinent papers and books regarding: 1) the history of family involvement in the mental health system in Quebec; 2) the present situation of these families and the models that can be observed; and 3) recommendations found in recent governmental or research documents regarding greater empowerment of families in the mental health system. Results. The research provides a historical perspective on the roles occupied by families. First, the family was described as a causal agent; the work of the psychoanalyst Freud depicted the family unit as a source of conflicts in the areas of affect and sexual dynamics, resulting in the appearance of psychiatric symptoms. Later, this view of the family as a causal agent came both from genetics and from the study of expressed emotion. In the 1970s, new perspectives such as general systems theory (von Bertalanffy, 1968) described the family as responding to the mental disorder of one of its members rather than as a responsible agent. With the deinstitutionalization movement, the family came to be perceived as a source of solutions for persons with mental illness, but also as caregivers who may carry a burden. This subject became well documented, and several studies reported adverse effects of caring for a person with a mental disorder on caregivers' health, well-being, and feelings of burden. In the 1990s, some government action plans framed the relationship between the family and the health system as a partnership. Families also want to be involved in decisions about care and to be informed about the diagnosis and treatment options (Lefley & Wasow, 1993). A new model developed by FFAPAMM, which identifies three main roles, makes it possible to contextualize families' current role in the present system.
This model, called CAP, lists and describes three family roles that, although rooted in the past, continue to intermingle today. These roles are: Accompanist: the role imposed by living alongside a person with mental illness (Fradet, 2012). As an accompanist, the family needs to establish relationships with health professionals. Accompanists want to be considered by stakeholders and respected in their desire to share information and participate in decisions. Client: the role that derives from the accompanist role when caregivers themselves receive care for the psychological or physical problems related to supporting a sick person. Partner: the role relating to the involvement (or not) of family members in the organization of care. It is a role of participation and decision-making; in this context, participation in consultation mechanisms is also relevant. Recommendations from a Quebec research project and a report of the Mental Health Commission of Canada envision a future in which the needs and aspirations of families are taken into account in general mental health services, short-term health care, and community mental health services. There are also guidelines regarding the education of professionals about the needs of families and about policy change. Conclusion. All associations of families of persons with mental disorders offer training on topics such as how to respond to different mental disorders or to aggressiveness in a loved one. A project of the Douglas Institute has hired a family member in the emergency room to help families better manage this often difficult time and to facilitate communication with stakeholders. Another project, called "Learning to come closer without aggression," has helped more than 200 family members undergo training inspired by the Omega approach, which helps them better manage their own behavior when facing aggression from their loved one.
abstract_id: PUBMED:10286343 Mental health promotion in the local community service centres in Quebec. The definition of the concepts of mental health and mental health promotion have been much discussed in Quebec for a number of years. The authors of this article present their point of view on the role the Local Community Service Centres (LCSCs) could play in implementing a mental health action plan. On the basis of a definition of the concepts of mental health and mental health promotion and an outline of the goals and objectives of mental health promotion, the authors discuss certain areas of intervention for the various sectors of practice in the LCSCs. Their article closes with some cautions about the limitations of health promotion in the LCSCs. abstract_id: PUBMED:25120113 Primary mental healthcare reform in Quebec and the role and coordination strategies of general practitioners Objectives: The health and mental health systems in Quebec have recently been substantially transformed. At the heart of this restructuring, reforms aimed to strengthen primary care and to better integrate services, which are central trends internationally. This article summarizes Quebec's primary health and mental health reforms. It also presents the key role of general practitioners in the treatment of mental health disorders and their coordination strategies with the mental health care resources in the province. Methods: Numerous documents on the Quebec health and mental health reforms and the international literature on primary mental health care were consulted for this study. Information on general practitioner roles in mental health were based on administrative data from the Régie de l'assurance maladie du Québec (RAMQ) for all medical procedures performed in 2006. The data was compared with the results of a survey realized in the same year with 398 general practitioners in Quebec. 
Complementary qualitative data were collected through one-hour interviews with a subsample of 60 of those general practitioners. Results: The central aim of the Quebec healthcare reform was to improve service integration by implementing local healthcare networks. A population health approach and hierarchical service provision were promoted. To improve access and continuity of care, family medicine groups and network clinics were also developed. The mental health reform (Mental Health Action Plan, 2005-2010) was launched in this general context. It prioritized the consolidation of primary care and shared care (i.e. increased networking between general practitioners, psychosocial workers, and psychiatrists) by reinforcing the role of general practitioners in mental health, developing interdisciplinary mental health teams in primary care, and adding a psychiatrist-respondent function in each Quebec local healthcare network. In mental health, general practitioners played a central role as the primary source of care and as a link to other primary or specialized health care resources. Between 20% and 25% of visits to general practitioners were related to mental health problems. Nearly all general practitioners managed common mental disorders and believed themselves competent to do so; the reverse was true for the management of serious mental disorders. General practitioners mostly practiced in silos, with little connection to mental health care resources. Numerous factors were found to influence the management of mental health problems: patient profiles (e.g. the complexity of mental health problems, concomitant disorders), individual characteristics of the general practitioners (e.g. informal network, training), professional culture (e.g. formal clinical mechanisms), the institutional setting (e.g. multidisciplinary or not), and the organization of services (e.g. policies).
Conclusion: Unfortunately, the Quebec health and mental health care reforms have not yet been fully implemented. Family medicine groups and network clinics, primary mental health teams, and the psychiatrist-respondent function are not optimally operational and therefore have not had a significant impact. Support mechanisms to help implement the reforms were not prioritized. Hindering factors should be identified and minimized to foster positive changes in the health and mental health systems. This article concludes on the importance of implementing continuums of care, especially local healthcare networks and best practices in mental health. Furthermore, strong strategies to support the implementation of changes should always accompany sweeping reforms. abstract_id: PUBMED:25120116 Reflections and recommendations from Quebec mental health university institutes on the working paper of the provincial forum for the 2014-2020 Mental Health Action Plan Goal: Quebec's three mental health university institutes (DMHUI, IUSMM and the IUSMQ) and the Centre hospitalier universitaire de Sherbrooke submitted a statement to the provincial consultation forum on the 2014-2020 Mental Health Action Plan (MHAP), which was held in January 2014 and organized by the Ministère de la Santé et des Services sociaux (MSSS). This article presents these institutes' main recommendations. Method: Mental health university institutes deliver a wide and diverse range of services. They are familiar with the challenges of organizing mental health services and are aware of national and international trends in the delivery of the best organizational and clinical practices in mental health. It is therefore as key stakeholders in the mental health care network that they commented on each component in the working paper. Results: The proposed orientations are consistent with the 2005-2010 MHAP.
The presented themes clearly reflect current issues, although the guidelines must be more explicit regarding the vision of how services will be organized in coming years. These institutes therefore suggest that the following principles be included: the full exercise of citizenship rights, the organization of services within integrated networks, performance, continuous improvement and innovation, as well as a global and integrated vision of health. The complexity of today's problems requires flexibility, complementarity and continuity of services, particularly for youth, aboriginals, and people with concomitant disorders. These institutions therefore stress the importance of prevention, early intervention programs, and increased support for first-line general practitioners and health care professionals. They also emphasized that specialized inpatient and outpatient services should not be neglected. Community services must also be structured around various levels of support, such as ICM and ACT, as well as around specialized programs available in hospital outpatient clinics. The development and transfer of knowledge remain a central issue when it comes to improving the mental health of the general population. The consolidation of ultra-specialized services, teaching and research should be included in the next MHAP. Finally, a global health perspective must go beyond the MSSS framework to become a governmental and interministerial commitment based on a vision of public mental health that incorporates the health status of the general population and accounts for social determinants. Conclusion: It is important to have a national plan that promotes a vision. This plan must be part of an interministerial action plan that truly supports the full exercise of citizenship rights and the fight against stigmatization in collaboration with people who use these services and their families. 
abstract_id: PUBMED:19475194 General practitioners: practices and integration of mental health care in Québec This article examines the socio-demographic profile of general practitioners (GPs), their role in the management of (transient/moderate, severe/chronic) mental health disorders in different areas (urban, semi-urban, and rural) of Quebec, as well as whether their clinical practice and collaboration are oriented towards the integration of mental health services. This cross-sectional study is based on 398 GPs, representative of all Quebec GPs, who answered a questionnaire. The study shows that GPs play a central role in mental health. Depending on the territory, they have different socio-demographic and practice profiles. The type of territory and the degree of severity of mental illness influence the propensity of GPs to integrate mental health care. Finally, GPs mostly practiced in silos, but they support greater integration of mental health services. The authors conclude that to improve the integration of mental health services, more proactive incentives should be favoured by political elites, adapted to the severity of the case and to the environment (urban, semi-urban, or rural). However, the shortage of resources, which is particularly striking in rural areas, as well as inadequate mechanisms for clinical decision-making, reduce inter-relations and seriously limit the integration of healthcare. abstract_id: PUBMED:3825569 Theoretical and practical problems in the measurement of the mental health of populations. Quebec health survey The authors present the process of setting up a mental health inquiry. The "Enquête Santé Québec" was conducted at the request of the Ministry of Social Affairs in order to identify the problems, needs and priority intervention sectors in the health field. The inquiry was carried out on a large scale at the pilot stage, collecting information on 4,000 people in rural and urban areas.
The principal options chosen are briefly discussed: the type of mental health to be measured, sampling methods, choice of instruments, validation work, and some results. Answer: The research on mental health policies and action plans across OECD jurisdictions provides several lessons for Quebec. A comparative synthetic study of mental health policies across thirteen OECD countries, six Canadian Provinces, and the WHO revealed a conceptual framework based on Parsons' Theory of Social Action, which posits that social action systems must exhibit a level of internal differentiation that corresponds to the heterogeneity of their external environment and a level of integration that allows them to remain coherent despite the complexity of their environment (PUBMED:25120115). In terms of differentiation, mental health policies vary in the needs and groups prioritized, types of interventions proposed, systemic levels at which interventions take place, and the level of specification and scientific basis of proposed interventions. Integration mechanisms range from hierarchical separation of mandates to complete structural integration and co-localization of certain components, such as dependence and mental health services. Governance mechanisms vary in explicitness and scientific bases, with different emphases on structures, processes, or outcomes. Management models also differ in precision, accountability, financing mechanisms, information systems, and the importance of clinical governance and quality improvement (PUBMED:25120115). Five configurations of mental health policies are identified: public health, professional, structural technocratic, functional technocratic, and political, each comprising typical combinations of the aforementioned elements. The current Quebec mental health policy falls into the structural technocratic configuration, which specifies fragmented mental health structures with mild integration mechanisms.
It is recommended that Quebec should consider improving its public health aspects, professional aspects, and functional aspects, such as integrated specialized community mental health and addiction services. However, political factors may prevent it from doing so (PUBMED:25120115). The implementation of the Quebec Mental Health Action Plan at the Maisonneuve-Rosemont hospital in Montreal demonstrates the transformation process of mental health services, highlighting the challenges and organizational choices adopted, such as new centralized access points for mental health services, support for primary care services, and specialized treatment for complex mental health problems (PUBMED:24336989). In conclusion, Quebec can learn from the variety of mental health policies in OECD countries to enhance its own mental health policy. Emphasis on public health, evidence-based practices, clinical governance, quality improvement, and functional integration of services could be beneficial. However, the province must navigate political challenges to implement these improvements effectively.
Instruction: Is the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, histrionic personality disorder category a valid construct? Abstracts: abstract_id: PUBMED:20728002 Is the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, histrionic personality disorder category a valid construct? Purpose: The study investigated crucial aspects of the construct validity of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) histrionic personality disorder (HPD) category. Material And Methods: The study included 2289 patients from the Norwegian Network of Psychotherapeutic Day Hospitals. Construct validity was assessed by means of prevalence, comorbidity with other personality disorders, internal consistency among HPD criteria, severity indices, as well as factor analyses. Results: The prevalence of HPD was very low (0.4 %). The comorbidity was high, especially with borderline, narcissistic, and dependent personality disorders. The internal consistency was low. The criteria seemed to form 2 separate clusters: the first contained exhibitionistic and attention-seeking traits and the other contained impressionistic traits. Conclusion: The results indicated poor construct validity of the HPD category. Different options for the future of the category are discussed. The authors suggest the HPD category to be deleted from the DSM system. However, the clinical phenomena of exhibitionism and attention-seeking, which are the dominant personality features of HPD, should be preserved in an exhibitionistic subtype of narcissism. abstract_id: PUBMED:26588083 Clinicians' Judgments of the Clinical Utility of Personality Disorder Trait Descriptions. Proposed for the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) was a five-domain, 25-dimensional trait model that would have constituted a significant part of the diagnostic criteria for each personality disorder. 
A primary concern with respect to the proposal was that clinicians might find such an approach to be unacceptable. This study provides results from three independent data collections that compared clinicians' clinical utility ratings for each iteration of the DSM-5 dimensional trait assignments, along with an alternative list of traits derived from the Five Factor Form (FFF). The clinicians considered the final trait assignments that were posted for the avoidant, narcissistic, obsessive-compulsive, and schizoid personality disorders to be significantly less acceptable than the original assignments. They also considered the FFF trait assignments to be preferable to the DSM-5 final postings with respect to the avoidant, narcissistic, obsessive-compulsive, dependent, and histrionic personality disorders. The implications of these results for future editions of the diagnostic manual are discussed. abstract_id: PUBMED:24756656 The treatment of personality disorder in Jamaica with psychohistoriographic brief psychotherapy. Objective: To assess the clinical outcome of patients with personality disorder, receiving treatment with psychohistoriographic brief psychotherapy (PBP). Method: Patients seen in the author's private practice from 1974-2010 with a Diagnostic and Statistical Manual of Mental Disorders, fourth edition, text revision (DSM-IV-TR) personality disorder diagnosis were treated with PBP. Demographic, clinical responses and one-year clinical outcome measures were disaggregated and analysed, using SPSS, version 17. Results: One hundred patients completed treatment with PBP, male:female 34:64; mean age of 35.86 ± 10.28 (range 16 - 66) years. Forty-five per cent were married, 73% were of predominantly African racial origin, with 59% from socio-economic class (SEC) I and 39% from SEC II and III. The presenting complaints were interpersonal conflict (35%), anxiety (21%) and depressed mood (20%). 
Major depression (30%), substance abuse disorder (18%) and generalized anxiety disorder (13%) were the most common Axis I diagnoses. Histrionic personality disorder (39%) and avoidant personality disorder (35%) were the main Axis II diagnoses. Psychohistoriography was completed with all patients, and charted by 96%. Transference variants were experienced by all patients and worked through with 87%. The quadranting process was completed by 42% with goal setting instituted by 96% and actualization scoring fully completed by 34%. A continuous exercise programme was instituted by all patients, and was maintained by 56% at one-year follow-up. Ninety-four per cent reported fair (10%), good (68%) to very good/excellent (16%) improvement on completion of PBP, with 72% assessed as maintaining fair to good clinical improvement by the therapist at one-year follow-up. Conclusion: Patients with personality disorders showed clinical improvement one year after being treated with psychohistoriographic brief psychotherapy. abstract_id: PUBMED:25868053 Clinicians' emotional responses and Psychodynamic Diagnostic Manual adult personality disorders: A clinically relevant empirical investigation. The aim of this study is to explore the relationship between level of personality organization and type of personality disorder as assessed with the categories in the Psychodynamic Diagnostic Manual (PDM; PDM Task Force, 2006) and the emotional responses of treating clinicians. We asked 148 Italian clinicians to assess 1 of their adult patients in treatment for personality disorders with the Psychodiagnostic Chart (PDC; Gordon & Bornstein, 2012) and the Personality Diagnostic Prototype (PDP; Gazzillo, Lingiardi, & Del Corno, 2012) and to complete the Therapist Response Questionnaire (TRQ; Betan, Heim, Zittel-Conklin, & Westen, 2005). 
The patients' level of overall personality pathology was positively associated with helpless and overwhelmed responses in clinicians and negatively associated with positive emotional responses. A parental and disengaged response was associated with the depressive, anxious, and dependent personality disorders; an exclusively parental response with the phobic personality disorder; and a parental and criticized response with narcissistic disorder. Dissociative disorder evoked a helpless and parental response in the treating clinicians whereas somatizing disorder elicited a disengaged reaction. An overwhelmed and disengaged response was associated with sadistic and masochistic personality disorders, with the latter also associated with a parental and hostile/criticized reaction; an exclusively overwhelmed response with psychopathic patients; and a helpless response with paranoid patients. Finally, patients with histrionic personality disorder evoked an overwhelmed and sexualized response in their clinicians whereas there was no specific emotional reaction associated with the schizoid and the obsessive-compulsive disorders. Clinical implications of these findings were discussed. abstract_id: PUBMED:17707247 Relationship of sex to symptom severity, psychiatric comorbidity, and health care utilization in 163 subjects with borderline personality disorder. Objective: The objective of the study was to compare symptom severity, frequency, and pattern of psychiatric comorbidity, quality of life, and health care utilization in men and women with borderline personality disorder (BPD). Methods: The analysis is based on a sample of 163 subjects with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition BPD recruited for participation in a clinical trial at an academic medical center. Subjects were administered structured interviews and questionnaires of known reliability. Results: Men (n = 25) and women (n = 138) with BPD were compared across multiple domains. 
Men were significantly older than women (34.5 vs 30.4 years), but subjects were otherwise similar in their demographic characteristics. Overall syndrome severity and frequency of past suicide attempts and self-harm behaviors were also similar. Women were more likely than men to have an anxiety disorder (particularly generalized anxiety disorder), somatoform disorders, and histrionic personality disorder. Antisocial personality disorder was more common in men. Women had higher dimensional ratings of depression, anxiety, obsessive-compulsiveness, work dysfunction, and negative affectivity; they were also more likely to endorse the "paranoia/dissociation" BPD criterion. On the Short Form-36 Health Survey, women reported significantly worse emotional role, social role, and mental health functioning than men. Contrary to expectation, there were no significant cross sex differences in the use of health care resources. Conclusion: Men and women with BPD show important differences in their pattern of psychiatric comorbidity, dimensional symptom ratings, and in quality of life variables. Women have more negative views of themselves, and worse overall emotional and social role functioning. Implications for clinicians and researchers are discussed. abstract_id: PUBMED:19607766 Specific features of suicidal behavior in patients with narcissistic personality disorder. Objective: Suicidal behavior is a clinically significant but underestimated cause of mortality in narcissistic personality disorder. Currently, there are no reliable estimates of suicidal behavior for this population. The main objective of this study was to test whether or not suicide attempters diagnosed with narcissistic personality disorder are different in terms of impulsivity and expected lethality from suicide attempters with other cluster B personality disorders. 
Method: In a sample of 446 suicide attempters, patients with cluster B personality disorder diagnoses (n = 254) as assessed by the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), version of the International Personality Disorder Examination-Screening Questionnaire (IPDE-SQ) were compared in terms of expected lethality and impulsivity (measured by the Beck Suicidal Intent Scale and the Barratt Impulsiveness Scale, respectively). The subjects were admitted to the emergency departments of the Ramón y Cajal Hospital and the Fundación Jiménez Diaz University Hospital in Madrid, Spain, between January 1999 and January 2003. Results: Suicide attempts of subjects diagnosed with narcissistic personality disorder had higher expected lethality than those of subjects without narcissistic personality disorder (t = -4.24, df = 439, P < .001). There were no differences in expected lethality of the attempts when comparing subjects with and without histrionic personality disorder (t = 0.28, df = 439, P = .795), antisocial personality disorder (t = 0.66, df = 439, P = .504), and borderline personality disorder (t = 1.13, df = 439, P = .256), respectively. Suicide attempters diagnosed with narcissistic personality disorder did not significantly differ from suicide attempters without narcissistic personality disorder in terms of impulsivity measures (t = -0.33, df = 442, P = .738), while suicide attempters diagnosed with antisocial personality disorder, histrionic personality disorder, and borderline personality disorder were significantly more impulsive than suicide attempters without these diagnoses (t = -3.96, df = 442, P < .001; t = -3.88, df = 442, P < .001; and t = -7.44, df = 442, P < .001, respectively). Conclusions: Narcissistic personality disorder seems to be a distinct group among cluster B personality disorders with regard to suicidal behavior. 
In contrast to suicide attempters with other cluster B personality disorders, suicide attempters diagnosed with narcissistic personality disorder are less impulsive and have suicide attempts characterized by higher lethality. These distinctions may offer a basis for targeted therapies aimed at decreasing suicidal risk in patients with narcissistic personality disorder. abstract_id: PUBMED:26937190 Sexual dysfunction, mood, anxiety, and personality disorders in female patients with fibromyalgia. Background: We aimed to investigate the current prevalence of sexual dysfunction (SD), mood, anxiety, and personality disorders in female patients with fibromyalgia (FM). Methods: This case-control study involved 96 patients with FM and 94 healthy women. The SD diagnosis was based on a psychiatric interview in accordance with the Diagnostic and Statistical Manual of Mental Disorders, fourth edition criteria. Mood and anxiety disorders were diagnosed using the Structured Clinical Interview. Personality disorders were diagnosed according to the Structured Clinical Interview for DSM, Revised Third Edition Personality Disorders. Results: Fifty of the 96 patients (52.1%) suffered from SD. The most common SD was lack of sexual desire (n=36, 37.5%) and arousal disorder (n=10, 10.4%). Of the 96 patients, 45 (46.9%) had a mood or anxiety disorder and 13 (13.5%) had a personality disorder. The most common mood, anxiety, and personality disorders were major depression (26%), generalized anxiety disorder (8.3%), and histrionic personality disorder (10.4%). Conclusion: SD, mood, and anxiety disorders are frequently observed in female patients with FM. Pain plays a greater role in the development of SD in female patients with FM. abstract_id: PUBMED:33325241 An Empirical Evaluation of the DSM-5 Alternative Model of Personality Disorders in Later Life. 
Personality disorders (PDs) in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) are conceptualized as distinct clinical syndromes. However, debate persists about the clinical utility of this categorical model, with many researchers supporting a dimensional model that focuses on pathological personality traits and personality dysfunction. This model was published in Section III of DSM-5 and named the Alternative Model of Personality Disorders (AMPD). This study evaluated the AMPD by examining relationships between traits and dysfunction with traditional categorical PD constructs among older adults. Older adults (N = 202) completed the Personality Inventory for DSM-5, Levels of Personality Functioning Scale-Self-Report, and Coolidge Axis II Inventory. Results indicated that pathological personality traits do not relate to categorical PDs in directions predicted by the AMPD. Personality functioning related to categorical PDs in expected theoretical patterns according to the AMPD but lacked incremental validity above pathological personality traits. An implication of these findings is that the AMPD does not fully resolve the age-related issues with the traditional categorical PD model. abstract_id: PUBMED:19008729 Questioning the coherence of histrionic personality disorder: borderline and hysterical personality subtypes in adults and adolescents. After the introduction of histrionic personality disorder (HPD), nosologists struggled to reduce its overlap with borderline personality disorder and other PDs. We studied the coherence of HPD in adults and adolescents as part of 2 larger studies. Clinicians described a random patient with personality pathology using rigorous psychometrics, including the SWAP-II (a Q-sort that captures personality and its pathology in adults) in study 1 and the SWAP-II-A (the adolescent version) in study 2. Using DSM-IV-based measures, we identified patients who met HPD criteria with varying degrees of diagnostic confidence. 
Central tendencies in the SWAP-II and SWAP-II-A profiles revealed that both the most descriptive and most distinctive features of the patients included some features of HPD but also many features of borderline personality disorder. Q-factor analyses of the SWAP data yielded 3 types of patients in each of the 2 samples. The HPD diagnosis may not be sufficiently coherent or valid. abstract_id: PUBMED:1166897 A comparison of two methods of diagnosing hysteria. The authors compare the syndrome of hysteria, defined as or indicated by a specified response to a 55-item symptom checklist previously used by Guze and other researchers, with the definition of hysterical personality in the second edition of APA's Diagnostic and statistical Manual of Mental Disorders (DSM-II). When 20 control subjects and 10 hysterical personalities (DSM-II) were given the Perley-Guze test, the results showed a close correlation between positive scores on the symptom checklist and the DMS-II diagnosis. The authors comment briefly on the theoretical usefulness and practicality of the test and note some difficulties in administration. Answer: The validity of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) histrionic personality disorder (HPD) category has been questioned based on various studies. One study found that the prevalence of HPD was very low at 0.4%, and there was high comorbidity with other personality disorders, particularly borderline, narcissistic, and dependent personality disorders. The internal consistency among HPD criteria was low, and the criteria appeared to form two separate clusters: one containing exhibitionistic and attention-seeking traits, and the other containing impressionistic traits. 
These results indicated poor construct validity of the HPD category, leading the authors to suggest the deletion of the HPD category from the DSM system and the preservation of the clinical phenomena of exhibitionism and attention-seeking in an exhibitionistic subtype of narcissism (PUBMED:20728002). Additionally, another study highlighted that clinicians found the DSM-5 final trait assignments for personality disorders, including HPD, to be less acceptable than the original assignments. Clinicians also preferred an alternative list of traits derived from the Five Factor Form (FFF) over the DSM-5 final postings for HPD (PUBMED:26588083). Furthermore, research into the coherence of HPD in adults and adolescents revealed that the most descriptive and distinctive features of patients diagnosed with HPD included many features of borderline personality disorder, suggesting a lack of coherence and validity for the HPD diagnosis (PUBMED:19008729). In contrast, a comparison of two methods of diagnosing hysteria, which is related to HPD, showed a close correlation between positive scores on a symptom checklist and the DSM-II diagnosis of hysterical personality, indicating some level of validity for the construct when using specific diagnostic methods (PUBMED:1166897). Overall, the evidence suggests that the DSM-IV HPD category may not be a sufficiently coherent or valid construct, with studies indicating issues with its prevalence, comorbidity, internal consistency, and overlap with other personality disorders.
Instruction: Does decreased meniscal thickness affect surgical outcomes after medial meniscectomy? Abstracts: abstract_id: PUBMED:25117726 Does decreased meniscal thickness affect surgical outcomes after medial meniscectomy? Background: There have been no clinical studies regarding the effect of decreased meniscal thickness on outcomes after meniscectomy. Purpose: To examine the postoperative outcomes of partial meniscectomy with or without horizontal resection compared with the outcomes of subtotal meniscectomy and to evaluate the influence of decreased thickness of the medial meniscus on outcomes after partial meniscectomy. Study Design: Cohort study; Level of evidence, 3. Methods: A total of 312 patients who underwent medial meniscectomy were retrospectively reviewed. Patients were divided into 3 groups: group A (n = 84) included patients with partial meniscectomy with vertical resection, group B (n = 140) consisted of those with partial meniscectomy with horizontal resection, and group C (n = 88) included those with subtotal meniscectomy. Clinical function was evaluated by use of the Lysholm knee scoring scale, International Knee Documentation Committee (IKDC) subjective knee evaluation form, and Tapper and Hoover grading system. Radiologic evaluation was performed with the IKDC radiographic assessment scale as well as with measurements of the medial compartment height at the tibiofemoral joint. Preoperative values and postoperative values measured 5 years after operation were assessed. Results: Functional outcomes in group C were inferior to those in groups A and B according to the Lysholm knee score (mean ± SD for group A = 96.1 ± 4.7, group B = 94.9 ± 5.2, group C = 84.8 ± 11.4; P < .001), IKDC subjective score (group A = 92.1 ± 6.5, group B = 91.3 ± 8.8, group C = 81 ± 11.4; P < .001), and Tapper and Hoover grading system (P = .003). There was no significant difference in scores between groups A and B. 
With regard to radiologic evaluation, the IKDC radiographic grade for group C was worse than the grades for groups A and B (P < .001); there was no significant difference between groups A and B. However, the postoperative joint space on the affected side was higher for group A (4.7 ± 0.6 mm) than for groups B (4.3 ± 0.5 mm; P < .001) and C (3.7 ± 0.8 mm; P < .001). The joint space was higher in group B than in group C (P < .001). Conclusion: Despite joint space narrowing, decreases in meniscal thickness after partial meniscectomy for horizontal tear had no additional adverse effect on 5-year functional and radiographic outcomes compared with conventional partial meniscectomy preserving whole meniscal thickness. In treating horizontal tears of the meniscus, partial meniscectomy with complete resection of the unstable leaf was an effective method in a 5-year follow-up study. abstract_id: PUBMED:37750149 Arthroscopic Meniscectomy vs Meniscal Repair: Comparison of Clinical Outcomes. Background Meniscal tears are the most common injury of the knee. Surgical treatment has fallen into contention recently and includes arthroscopic meniscectomy and meniscal repair. The primary aim of this study was to quantitatively evaluate patients with isolated meniscal tears and compare their outcomes with patients who have undergone arthroscopic meniscus surgery. The secondary aim of this study was to compare the clinical outcomes of patients who have undergone arthroscopic meniscectomy with patients who have undergone arthroscopic meniscal repair. Methods This comparative clinical study screened 334 patients to identify subjects who underwent arthroscopic knee surgery for isolated meniscal tears and compare them to patients with symptomatic isolated meniscal tears awaiting surgery using validated patient-reported outcome measures. 
These included the Knee Injury and Osteoarthritis Outcome Score, International Knee Documentation Committee Subjective Knee Form, Lysholm score, Tegner score, EuroQol-5 Dimension, and the 12-Item Short Form Health Survey. Results A total of 117 patients (Meniscal Tear group (n=36), Meniscectomy group (n=64), and Meniscal Repair group (n=17)) were included in the final data analysis. Both the Meniscectomy group and the Meniscal Repair group (mean 55-month follow-up) showed significantly better clinical outcomes than patients in the Meniscal Tear group (p<0.05). Overall, the Meniscal Repair group demonstrated superior clinical outcomes when compared to the Meniscectomy group (p<0.05). Conclusion Arthroscopic knee surgery showed significant clinical benefit at medium-term follow-up in treating patients with isolated meniscal tears. When feasible, meniscal repair should be performed preferentially over meniscectomy. abstract_id: PUBMED:33241058 Clinical and Radiological Outcomes of Meniscal Repair Versus Partial Meniscectomy for Medial Meniscus Root Tears: A Systematic Review and Meta-analysis. Background: Given the superiority of meniscal repair over partial meniscectomy according to biomechanical data, the clinical outcomes of meniscal repair are likely to be better than those of partial meniscectomy for a medial meniscus root tear (MMRT). Purpose/hypothesis: This review was designed to compare the clinical and radiological results between meniscal repair and partial meniscectomy for MMRTs. It was hypothesized that meniscal repair would result in better clinical and radiological results compared with partial meniscectomy. Study Design: Systematic review; Level of evidence, 4. Methods: Studies were included in the review if they (1) included patients with MMRTs who underwent primary arthroscopic meniscal repair or partial meniscectomy and (2) analyzed validated patient-reported outcomes and/or radiological evaluations. 
Summary odds ratios (ORs) with 95% CIs were calculated to compare partial meniscectomy with meniscal repair for each outcome. Results: A total of 13 studies were included. The mean duration of follow-up was 33.5 and 47.2 months in the meniscal repair group and partial meniscectomy group, respectively. The change in the Lysholm score from preoperatively to postoperatively was statistically significant in favor of meniscal repair (OR, 2.20 [95% CI, 1.55-3.12]), while no difference was found with respect to the change in the Tegner score between the 2 surgical approaches (OR, 1.21 [95% CI, 0.65-2.24]). The prevalence of postoperative severe knee osteoarthritis (OR, 0.31 [95% CI, 0.17-0.54]) as well as that of reoperations (OR, 0.05 [95% CI, 0.01-0.19]) were significantly in favor of meniscal repair. Conclusion: Better outcomes were seen after meniscal repair compared with partial meniscectomy for MMRTs, with greater improvements in Lysholm scores, lower rates of progression to knee osteoarthritis, and lower reoperation rates. abstract_id: PUBMED:26889472 Results of meniscectomy and meniscal repair in anterior cruciate ligament reconstruction. Meniscal tears are commonly associated with anterior cruciate ligament (ACL) injuries. A deficient medial meniscus results in knee instability and could lead to higher stress forces on the ACL reconstruction. Comparison of results in meniscectomy and meniscal repairs revealed worse clinical outcomes in meniscectomy, but higher re-operation rates in meniscal repairs. Our aim was to review the results of ACL reconstruction associated with meniscectomy or meniscal repair. abstract_id: PUBMED:30374574 Meniscal tear morphology independently affects pain relief following arthroscopic partial meniscectomy in middle-aged patients. Purpose: To analyze the influence of meniscal tear pattern on clinical outcomes following arthroscopic partial meniscectomy in middle-aged patients with medial meniscal tears.
Methods: A total of 123 patients (130 knees) aged ≥ 50 years who underwent arthroscopic partial meniscectomy for medial meniscal tears were evaluated. Inclusion criteria were none to moderate medial knee osteoarthritis [Kellgren-Lawrence (KL) grade ≤ 3] and a minimum of 2-year follow-up (median 4.6 years; range 2.1-8.0 years). Meniscal tears observed during arthroscopic examination were classified into six types: radial tear of the middle segment, posterior root tear, horizontal tear of the posterior segment, flap tear, minor tear, and complex tear. Postoperative outcomes were classified into effective (group 1) and non-effective (group 2) according to the pain relief achieved 1 month postoperatively and at the final follow-up. Demographic variables, KL grade, type of meniscal tear, and postoperative follow-up period were evaluated. Results: Fifty knees (38%) were classified into group 1. Of the six types of tears, radial tear of the middle segment [odds ratio (OR) 4.1, 95% confidence interval (CI) 1.1-20.9] and flap tear (OR 12.9, 95% CI 1.8-140.7) were significant predictors of good outcome on multivariate logistic regression analysis. Conclusions: In middle-aged patients with medial meniscal tears, radial tear of the middle segment was independently associated with less pain following arthroscopic meniscectomy. Arthroscopic partial meniscectomy may be indicated in patients with radial tear if conservative treatment fails. Level Of Evidence: Case-control study, Level III. abstract_id: PUBMED:34660830 Lateral Meniscal Tears in Young Patients: A Comparison of Meniscectomy and Surgical Repair. Background: Meniscal tears are common in active patients, but treatment trends and surgical outcomes in young patients with lateral meniscal tears are lacking. Purpose: To evaluate treatment trends, outcomes, and failure rates in young patients with lateral meniscal tears. Study Design: Cohort study; Level of evidence, 3.
Methods: Patients aged ≤25 years treated surgically for isolated lateral meniscal tears from 2001 to 2017 were identified. Treatment trends were compared over time. International Knee Documentation Committee (IKDC) scores and failure rates were compared by treatment modality (meniscectomy vs meniscal repair). Failure was defined as reoperation, symptomatic osteoarthritis, or a severely abnormal IKDC score. Univariate regression analyses were performed to predict failure and IKDC scores based on treatment, type and location of tear, or extent of meniscectomy. Results: Included were 217 patients (226 knees) with a mean age of 17.4 years (range, 7-25 years); of these patients, 144 knees (64%) were treated with meniscectomy and 82 knees (36%) with meniscal repair. Treatment with repair increased over time compared with meniscectomy (P < .001). At a minimum 2-year follow-up (mean, 6.1 ± 3.9 years), 107 patients (110 knees) had IKDC scores, and analysis indicated that although scores in both groups improved from pre- to postoperatively (repair: from 69.5 ± 13.3 to 97.4 ± 4.3; meniscectomy: from 75.7 ± 9.0 to 97.3 ± 3.9; P < .001 for both), improvement in IKDC score was greater after repair (27.9 ± 13.9) versus meniscectomy (21.6 ± 9.4) (P = .005). Included in the failure analysis were 184 patients (192 knees) at a mean follow-up of 8.4 ± 4.4 years. The rates of reoperation, symptomatic osteoarthritis, and failure were not significantly different between the meniscectomy and repair groups. Conclusion: An increase was seen in the rate of isolated lateral meniscal tear repair in young patients. IKDC score improvement was greater after repair than meniscectomy, although postoperative IKDC scores were similar. Symptomatic arthritis, reoperation, and failure rates were similar between groups; however, there was a trend for increased arthritis symptoms in patients treated with meniscectomy, especially total meniscectomy. 
Treatment modality, type and location of tear, and amount of meniscus removed were not predictive of final IKDC scores or failure. abstract_id: PUBMED:34428082 Kinematic Alterations After Anterior Cruciate Ligament Reconstruction via Transtibial Techniques With Medial Meniscal Repair Versus Partial Medial Meniscectomy. Background: The treatment strategies for meniscal injuries during anterior cruciate ligament (ACL) reconstruction remain a topic of debate. Hypothesis: After ACL reconstruction, knee kinematics would be affected by different medial meniscal treatment (partial medial meniscectomy [PMM] and medial meniscal repair [MMR]). Study Design: Controlled laboratory study. Methods: A total of 161 patients underwent primary single-bundle ACL reconstruction and simultaneous medial meniscal treatment. Of these, 32 patients were eligible to participate in the kinematic assessment at 24.8 ± 1.7 months after surgery. Patients were divided into 2 groups: (1) those who underwent MMR (Group MMR; n = 18) and (2) those who underwent PMM (Group PMM; n = 14). Twenty healthy participants (Group Intact) were recruited who were comparable in age, body mass index, and sex. The kinematic parameters were collected using an optical tracking system during treadmill gait. Range of motion and kinematic parameters at key events during the gait cycle were compared between the 3 groups. The primary outcomes were the differences in adduction/abduction and internal/external rotation. Results: Patients in Group PMM walked with increased adduction as compared with those in Group Intact during the early stance phase (P = .003; η2 = 0.172) and midstance phase (P = .003; η2 = 0.167). 
In terms of internal/external rotation, patients in Group PMM walked with significantly larger tibial external rotation when compared with Group MMR by approximately 3.4° to 3.7° (loading response: P = .026, η2 = 0.090; midstance: P = .035, η2 = 0.093) and Group Intact (P = .028; η2 = 0.095) in the early stance phase. In addition, there was significantly increased anterior tibial translation in Groups MMR and PMM compared with Group Intact. Conclusion: ACL reconstruction (via transtibial technique) with concurrent PMM demonstrated larger adduction and external tibial rotation at 24 months of follow-up during level walking. Clinical Relevance: Patients undergoing different medial meniscal treatment strategies in the presence of ACL reconstruction showed distinct knee kinematics. These results suggest that MMR is strongly recommended during ACL reconstructive surgery to reduce the abnormal kinematics close to that of the ACL-intact condition. abstract_id: PUBMED:26497982 Surgical interventions for meniscal tears: a closer look at the evidence. Introduction: The aim of the present study was to compare the outcomes of various surgical treatments for meniscal injuries including (1) total and partial meniscectomy; (2) meniscectomy and meniscal repair; (3) meniscectomy and meniscal transplantation; (4) open and arthroscopic meniscectomy and (5) various different repair techniques. Materials And Methods: The Bone, Joint and Muscle Trauma Group Register, Cochrane Database, MEDLINE, EMBASE and CINAHL were searched for all (quasi) randomized controlled clinical trials comparing various surgical techniques for meniscal injuries. Primary outcomes of interest included patient-reported outcomes scores, return to pre-injury activity level, level of sports participation and persistence of pain using the visual analogue score. Where possible, data were pooled and a meta-analysis was performed. 
Results: A total of nine studies were included, involving a combined 904 subjects: 330 patients underwent a meniscal repair, 402 a meniscectomy, and 160 a collagen meniscal implant. The only surgical treatments that were compared in homogeneous fashion across more than one study were the arrow and inside-out technique, which showed no difference for re-tear or complication rate. Strong evidence-based recommendations regarding the other surgical treatments that were compared could not be made. Conclusions: This meta-analysis illustrates the lack of level I evidence to guide the surgical management of meniscal tears. Level Of Evidence: Level I meta-analysis. abstract_id: PUBMED:37014608 Meniscal Allograft Transplantation: Does Surgical Technique Influence Clinical Outcomes? Purpose Of Review: Meniscal allograft transplantation is intended to address painful knees with symptoms that can be attributed to meniscal deficiency through injury or meniscectomy. Initially viewed as an experimental procedure, refinements in patient selection and surgical technique have led to improved clinical outcomes with wider acceptance. Our intention with this paper is to provide a review of meniscal allograft transplantation, with the focus on the different surgical techniques employed and their influence on outcomes. Recent Findings: The main debate regarding surgical technique is using bone or only soft tissue for fixation of the meniscal horns. Biomechanical and other basic science studies show improved function and less extrusion when the grafts are secured with bone. However, several clinical studies show no difference in outcomes. Long-term studies have shown improved success with less graft extrusion and may illustrate the important function of bone fixation. Many clinical studies, including those with long-term outcomes, have shown meniscal allografts can decrease patient pain and improve function.
It is a technically challenging procedure with good clinical outcomes regardless of the method of graft fixation. Bone fixation provides less extrusion, which is associated with improved graft function and a decreased rate of joint deterioration. Further study is needed to determine if other methods to decrease extrusion are able to improve graft function and outcomes. abstract_id: PUBMED:30349946 Meniscal repair associated with a partial meniscectomy for treating complex horizontal cleavage tears in young patients may lead to excellent long-term outcomes. Introduction: While open repair of horizontal meniscal tears in young active patients has shown good results at mid- and long-term follow-up, complex horizontal tears (cleavage associated with meniscal flaps) are often treated by arthroscopic subtotal meniscectomy. The aim of this study was to evaluate long-term outcomes after arthroscopic removal of meniscal flaps associated with an open meniscal repair for treating complex lesions in young active patients. The hypothesis was that this salvage procedure would be efficient in such rare cases. Methods: Fourteen patients underwent an arthroscopic partial meniscectomy associated with an open meniscal repair to treat a painful complex horizontal meniscal cleavage between 2005 and 2010. There were two females and 12 males with a median age of 28.4 years (range 15-48 years). Patients were assessed by KOOS and IKDC scores, return to sport and the need for a secondary meniscectomy. Results: Thirteen patients were evaluated at a median follow-up of 8.5 years (range 7-12 years). One patient required revision of a partial meniscectomy and one other a meniscal replacement (15% failure rate). All other patients showed improvement with regard to their symptoms and returned to sports, ten (91%) of them at the same level. The mean IKDC subjective score was 86.1 (± 10.9).
The mean KOOS scores were: pain 91.4 (± 7.5), symptoms 91.4 (± 10.2), daily activity 97.1 (± 4), sports 84.4 (± 20.7) and quality of life 84 (± 14.2). For six patients, scores at median follow-up of 2.6 years were available and compared to newly obtained data. IKDC score at 8.6 years follow-up was not significantly different. KOOS scores for daily activity and sports were maintained. Conclusions: Even in the presence of a complex lesion, horizontal cleavage can be repaired in young patients with good subjective and objective outcomes and a low rate of long-term failure as with other meniscal lesions in young active patients. Level Of Evidence: IV. Answer: Decreased meniscal thickness after partial meniscectomy for a horizontal tear does not have an additional adverse effect on 5-year functional and radiographic outcomes compared with conventional partial meniscectomy preserving whole meniscal thickness. In a study, functional outcomes in the group with subtotal meniscectomy were inferior to those in groups with partial meniscectomy with vertical or horizontal resection. However, there was no significant difference in scores between the groups with partial meniscectomy with vertical resection and those with horizontal resection. The postoperative joint space on the affected side was higher for the group with partial meniscectomy with vertical resection than for the other groups, but despite joint space narrowing, decreases in meniscal thickness after partial meniscectomy for horizontal tear did not adversely affect the outcomes (PUBMED:25117726).
Instruction: Evaluation of factors affecting peak expiratory flow in healthy adults: is it necessary to stand up? Abstracts: abstract_id: PUBMED:16316876 Evaluation of factors affecting peak expiratory flow in healthy adults: is it necessary to stand up? Background: Guidelines for the Diagnosis and Management of Asthma recommend that the peak flow meter should be used in a standing position. Objective: To determine whether the peak expiratory flow (PEF) varies with the position of the subject and to evaluate the impact of application of a nose clip on PEF measurement. Methods: Using a Mini Wright Peak Flow Meter, the PEF was measured in 33 healthy adults in sitting and standing positions, with and without application of a nose clip. Each subject blew into the peak flow meter three times and the best of the 3 blows was recorded. Results: The PEF was not significantly higher in standing than in sitting position. Application of a nose clip had no significant impact on the PEF. Conclusion: The position of the subject and application of a nose clip have no significant impact on PEF measurement. abstract_id: PUBMED:6691906 Factors affecting the peak expiratory flow rate in children. Standing height and peak expiratory flow rate (PEFR) were measured in 339 British schoolchildren aged 7-16 years. Enquiry was made into a past history of wheeze or atopy, a family history of asthma or atopy, or a cold within the preceding 2 weeks. A strong correlation was found between PEFR and height, expressed by the equation PEFR = 5.640 Ht - 472.5 (r = 0.89). Neither a recent cold nor a positive personal or family history of wheeze or atopy had any significant effect on the regression equation. Sex was also unimportant. Further examination of the data revealed that age had an effect on peak expiratory flow rate independent of height. The effect of age was linear in girls and curvilinear in boys.
Five hundred and sixty-nine Greek schoolchildren were also studied and similar age effects were found on the regression of peak expiratory flow rate on height. The implication of these findings is that any population study of peak expiratory flow rates in children should ensure a normal age distribution at each height interval. Significant error in the prediction of the PEFR will result if the effect of age is ignored, particularly in pubertal boys. abstract_id: PUBMED:19727355 A comparison of peak expiratory flow measured from forced vital capacity and peak flow meter manoeuvres in healthy volunteers. Background: Spirometry measures the mechanical function of lungs, chest wall and respiratory muscles by assessing the total volume of air exhaled from total lung capacity to residual volume. Spirometry and peak flow measurements have usually been carried out on separate equipment using different expiratory maneuvers. Aims: The present study was carried out to determine whether there is a significant difference between peak expiratory flow (PEF) derived from a short sharp exhalation (PEF maneuver) and that derived from a full forced vital capacity (FVC) maneuver in healthy volunteers. Settings: A medical college and tertiary level hospital. Materials And Methods: The present study was carried out during the period from January 2006 to July 2006. The study included 80 healthy volunteers with no coexisting illnesses, who were in the 15-45 years age group and belonging to either sex. They were asked to perform two sets of PEF and FVC maneuvers using the same turbine spirometer; the order was randomly assigned. Statistical Analysis: The difference between PEF obtained from a peak flow maneuver (PEFPF) and that obtained from a forced vital capacity maneuver (PEFVC) in healthy volunteers was analyzed separately for males and females, as well as for both groups combined, and statistical significance of its correlations with study data parameters was analyzed. 
Results: The difference between PEF obtained from a peak flow maneuver (PEFPF) and that obtained from a forced vital capacity maneuver (PEFVC) was statistically significant (P < 0.001) in males and in females separately and also for both groups combined. PEFPF (517.25 +/- 83.22 liters/min) was significantly greater than PEFVC (511.09 +/- 83.54 liters/min), as found on combined group mean analysis. However, the difference was small (6.16 +/- 7.09 liters/min). Conclusions: The FVC maneuver can be used on spirometers to derive PEF, and the same maneuver should then be used to derive PEF on follow-up. If a peak flow maneuver is used on follow-up instead, corrections are required to compensate for the difference due to the different maneuver. abstract_id: PUBMED:2923480 Growth and other factors affecting peak expiratory flow rate. By means of a simple questionnaire and measurements of height, weight, and peak expiratory flow rates 3061 children from city and rural populations were studied. Children with asthma or other respiratory diseases had lower peak expiratory flow rates, and younger children living in rural areas had higher rates. In 2828 healthy children the peak expiratory flow rate increased with age, height, and weight. There was an increase in the slope of this line for both age and height--at 12 years and 145 cm in girls, and at 14 years and 155 cm in boys. This continued for two to three years and 15 cm, respectively, before it declined. Previous surveys have obscured this change associated with height by reporting small numbers and using linear regression analysis. This type of analysis can underestimate the mean peak expiratory flow rate of small children by half to one standard deviation. A centile graph against age is recommended to describe normal values and their differences. abstract_id: PUBMED:24020052 Peak expiratory flow rate of healthy school children living at high altitude.
Background: Anthropometric and socioeconomic factors are known to influence peak expiratory flow rate, but the effect of altitude has not been established decisively, due to conflicting results of various studies. Aim: This study was conducted to evaluate the effect of altitude on peak expiratory flow rate of healthy school children. Materials And Methods: A cross-sectional study was conducted in which the peak expiratory flow rate of a group of 290 school children (140 male and 150 female), aged 7-14 years, residing in Shimla (altitude 2150 meters), was compared with the results obtained in an age- and sex-matched control group, consisting of 280 school children (138 male and 142 female), residing in Patiala (altitude 278 meters). A Mini Wright Peak Flow Meter was used for the study. Results: The mean peak expiratory flow rate value of boys at high altitude (265 ± 92.6 L/min) was significantly higher than that of boys in plain areas (245 ± 82.1 L/min). Similarly, the mean peak expiratory flow rate of highlander girls (250.4 ± 70.2 L/min) was significantly higher than that of girls of the plains (232.6 ± 65 L/min). Conclusion: The study suggests that besides anthropometric and socioeconomic factors, altitude is an important determinant of lung function. abstract_id: PUBMED:17937848 Detection of peak expiratory flow in healthy children in Xinjiang Objective: To establish reference values of peak expiratory flow (PEF) for children in Xinjiang. Methods: A total of 3520 healthy children (1705 males and 1815 females) aged from 7 to 16 years were enrolled in this study. PEF was measured using a MicroPeak Peak Flow Meter. Children's age, sex, height and weight were recorded. Impact factors for PEF were investigated by multiple stepwise regression analysis. Results: PEF values increased with the increase of age, height and weight and correlated significantly with age, height, weight and sex.
The regression equation between sex and height and PEF was obtained, i.e., PEF (y) = -342.98 + 3.94 x Height (x1) + 26.30 x Sex (x2). Conclusions: PEF values can be calculated according to children's height and sex. abstract_id: PUBMED:6453234 Calibration of some instruments for measuring peak expiratory flow. The performance of instruments for measuring peak expiratory flow has been investigated. This evaluation was carried out with two designs of spirometer, two types of pneumotachograph and three versions of the Wright peak flow meter. The appropriate flow profiles were generated by means of a special type of flow calibrator. The errors produced by the instruments varied from zero to about 26%. abstract_id: PUBMED:2787111 Peak expiratory flow rate in an elderly population. A population survey was conducted in 1982-1983 among 3,812 persons aged 65 years and older residing in East Boston, Massachusetts, a geographically defined urban community. Three measurements of peak expiratory flow rate were obtained by using calibrated mini-Wright meters. Peak expiratory flow rate was strongly related to age, sex, smoking, and years smoked. After adjustment for these factors, low peak expiratory flow rate was associated with chronic respiratory symptoms (cough, wheeze, shortness of breath, exertional dyspnea, orthopnea, and paroxysmal nocturnal dyspnea; p less than 0.0001) and with certain cardiovascular variables (history of stroke, p = 0.0014; angina, p = 0.05; and high pulse rate, p = 0.004). No significant associations were found with history of myocardial infarction or systolic and diastolic blood pressures. Peak expiratory flow rate was positively related to education (p less than 0.0001) and income (p less than 0.0001). Peak expiratory flow rate also was strongly related (p less than 0.0001) to measures of functional ability and physical activity, self-assessment of health, and simple measures of cognitive function.
The correlations of peak expiratory flow rate with pulmonary symptoms and other indices of chronic disease raise the possibility that peak expiratory flow rate will predict mortality in an elderly population. abstract_id: PUBMED:3672447 Self-monitoring of peak expiratory flow (PEF) N/A abstract_id: PUBMED:15986000 Growth and other factors affecting peak expiratory flow in Greek children. Aim: The estimation of peak expiratory flow (PEF) in children is a very easy and practical way to check lung function and helps in the diagnosis, treatment follow-up and evaluation of the development of chronic obstructive pulmonary disease. Methods: Using a Mini-Wright flowmeter (Clement Clarke International Ltd, England), we studied the peak expiratory flow (PEF) of 7,067 healthy Greek children of age range 6-17 years. All the children had a height ranging between mean value ± 2 standard deviations for age and sex. Results: The results were correlated with age, weight, height and triceps skinfold thickness. The mean value of PEF was higher in boys than in girls at all ages, except from the age of 12-13 years. Our results showed a very strong relationship between PEF and age up to the age of 11 years (P<0.005), but we did not find such a relationship in older children as regards PEF and height (P<0.001). No positive correlation between PEF and weight or between PEF and triceps skinfold was found (P > or = 0.05). Moreover, a considerable difference in PEF values was found in the various groups of every age and sex according to height. Conclusions: These results indicate that height should always be considered in order to estimate PEF value. The values of this study (mean and percentiles) were compared to those of other studies. Finally, we recommend that the results of this study should be used as standards for Greek children.
Answer: The evaluation of factors affecting peak expiratory flow (PEF) in healthy adults has been studied to determine if the position of the subject, specifically standing versus sitting, has a significant impact on PEF measurements. According to one study, the PEF was not significantly higher in the standing position compared to the sitting position, and the application of a nose clip also had no significant impact on PEF measurement (PUBMED:16316876). This suggests that it may not be necessary for adults to stand up when measuring PEF, as the position does not significantly alter the results. However, it is important to note that this conclusion is based on the findings from healthy adults and may not necessarily apply to individuals with respiratory conditions or other populations.
Instruction: Thyroidectomy in patients with a high BMI: a safe surgery? Abstracts: abstract_id: PUBMED:23730571 Voice examination in patients with decreased high pitch after thyroidectomy. Unlabelled: Decreased high pitch after thyroidectomy due to injury of the external branch of the superior laryngeal nerve (EBSLN) may be critical, especially to professional voice users. The author studied the usefulness of VRP (voice range profile) and MDVP (multi-dimensional voice program) to evaluate patients who have decreased high pitch after thyroidectomy. A study was performed with 58 females and 9 males who underwent voice assessment between January 2008 and June 2009. The patients were classified into a female group with no decreased high pitch (group A, n = 52), a female group with decreased high pitch (group B, n = 6) and a male group with no decreased high pitch (group C, n = 9). VRP and laryngeal electromyogram (EMG) were performed in group B. Results: The preoperative frequency ranges of groups A and B were not statistically different. In Group B, the result of VRP showed that the frequency range was 443.11 ± 83.97, 246.67 ± 49.41, 181.37 ± 80.13 Hz, a significant decrease after the surgery compared to the preoperative result (P < 0.05). In addition, the intensity range was 22.25 ± 2.87, 15.25 ± 2.22, 16.25 ± 2.63 dB respectively, a significant postoperative decrease (P < 0.05). EMG of the cricothyroid muscle of group B showed decreased potential in 3 patients and normal potential in 2, and one patient refused the examination. In groups A and C, the result of VRP revealed no significant difference between the preoperative and postoperative results. VRP is a noninvasive, quick, and practical test to demonstrate decreased frequency range visually and helps to evaluate EBSLN injury in patients after thyroidectomy. abstract_id: PUBMED:36540508 Post-Thyroidectomy Hematoma: Risk Factors To Be Considered for Ambulatory Thyroidectomy.
Introduction: Post-thyroidectomy hematoma is a serious, potentially life-threatening complication and it is the most frequent indication for reoperation. In this study, we aimed to identify the risk factors and prevalence of post-thyroidectomy hematoma and perform a literature review of the incidence of post-thyroidectomy hematoma at other centers in Saudi Arabia. Methods: We retrospectively enrolled 372 consecutive patients who had undergone thyroidectomy between January 2015 and December 2020. Patients with bleeding disorders were excluded from the study. Data were analyzed using SPSS software, version 20.0 (IBM Corp., Armonk, NY). Results: Three female patients (0.8%) developed a hematoma after thyroidectomy. Two patients developed a hematoma 24 hours after surgery and were treated conservatively. The third patient developed a hematoma within two hours of surgery and required surgical intervention. None of the patients required a tracheostomy, and there was no mortality. No significant association was found between age, sex, final pathology, the extent of thyroidectomy, and risk of hematoma. Conclusion: A post-thyroidectomy hematoma is a rare but dangerous complication. Identifying the risk factors for hematoma formation is of great importance, particularly when considering outpatient thyroidectomy. A large prospective multicenter study is needed for further investigation. abstract_id: PUBMED:20029354 Thyroidectomy in patients at high-risk of bleeding: can it be safely performed? Aim: This study evaluated the safety of thyroid surgery in patients where anticoagulation was unavoidable or in patients with coagulopathy. Methods: The records of all patients at high risk of bleeding at the time of thyroidectomy (July 2000 to March 2005) were retrospectively reviewed to evaluate intraoperative and postoperative bleeding. Results: Ten patients were studied. Thyroid lobectomy with isthmusectomy (N=3) and bilateral thyroidectomy (N=7) were performed.
Intraoperative surgical hemostasis was meticulous, using the LigaSure vessel sealing system. Patients were divided into 3 groups. Group I (N=2) had small-vessel vasculitis and underwent thyroidectomy under full heparinization. Heparin therapy was continued until full anticoagulation by oral anticoagulants. Group II (N=2) had a documented congenital coagulation defect and was managed with special pre- and postoperative protocols. Group III (N=6) underwent simultaneous thyroid and cardiac surgery (e.g., intraoperative full heparinization and postoperative antiplatelet therapy in some cases). No excess bleeding was noted in the surgical reports. In the postoperative period, one patient developed a small superficial hematoma in the neck that resolved spontaneously. Transient unilateral laryngeal nerve palsy and transient hypocalcemia were seen in one and two patients, respectively. Conclusions: Thyroidectomy in patients with coagulopathy is feasible with low risk of bleeding if meticulous hemostasis, particularly using the LigaSure vessel sealing system, is ensured. abstract_id: PUBMED:26610774 Outpatient Thyroidectomy: Is it Safe? Outpatient thyroid surgery is controversial because of concerns over life-threatening cervical hematoma. Despite this concern, outpatient thyroidectomy is becoming increasingly common, especially among high-volume endocrine surgeons. Multiple studies have now demonstrated that careful patient selection combined with surgeon experience can result in successful and safe surgery without a full inpatient admission. This article reviews the data on safety and outcomes for outpatient thyroidectomy and discusses several techniques used to minimize risk to patients.
Design: A prospective study was carried out from September 2010 to January 2013. Methods: Postoperative calcemia, laryngeal mobility, bleeding or infectious complications, postoperative hospital stay, and operation time were studied and compared statistically by a χ²-test or Student's t-test. Results: A total of 240 patients underwent total thyroidectomy and 126 underwent a partial thyroidectomy. Of them, 168 patients had a BMI below 25 and 198 patients had a BMI ≥25. There was no statistically significant difference in the occurrence of early or permanent hypoparathyroidism, recurrent laryngeal nerve palsy, bleeding complications, or postoperative duration of hospital stay. There was, however, a significantly longer operative time in patients with a BMI ≥25. Conclusion: Despite the longer operative time, thyroidectomy (total or partial) can be performed safely in patients with a BMI ≥25. abstract_id: PUBMED:24901723 Risk of postoperative hypocalcemia in patients who underwent total thyroidectomy, subtotal thyroidectomy and lobectomy surgeries. The main objective of this study was to identify the incidence of post-thyroidectomy hypocalcemia in patients who underwent lobectomy, subtotal thyroidectomy or total thyroidectomy. This randomized trial study was performed from May 2010 to July 2011 among 108 consecutive patients undergoing thyroidectomy. Patients were allocated into 3 groups based on the 3 common types of thyroidectomy. Pre- and postoperative serum calcium levels were determined using a standard protocol for all patients. All patients' preoperative serum calcium levels were normal. Post-thyroidectomy hypocalcemia occurred in 0%, 8.3% and 8.3% of patients who underwent total thyroidectomy, subtotal thyroidectomy and lobectomy, respectively; overall, 5.6% of all patients showed hypocalcemia (P=0.58). Serum calcium levels fell in 93% of patients but returned to normal ranges within several days.
We did not find a significant relation between hypocalcemia and the type of thyroidectomy. We recommend identifying other factors as more effective predictors of postoperative hypocalcemia. abstract_id: PUBMED:25167851 Total thyroidectomy in geriatric patients: a retrospective study. In the English literature there is no single definition that identifies elderly patients. In our retrospective study, we divided total thyroidectomized patients operated on from 2000 to 2010 in the Department of Surgical Sciences of the "Sapienza" University of Rome, into two groups: group 1 consisted of 448 patients over 65 years and group 2 consisted of 1275 patients under 65 years. We compared both groups in terms of indications for surgery, histological diagnoses, postoperative complications (laryngeal nerve palsy, hypocalcemia, bleeding and seroma) and mortality. The results showed no statistically significant differences between the two groups with respect to the type of surgical indication, the type of comorbidities, the incidence of postoperative complications and perioperative mortality. The only data discordant with those in the international literature was the incidence of neoplastic disease, which was found to be slightly greater in group 2. In conclusion, total thyroidectomy in patients over 65 years is a safe procedure and is not burdened with a higher percentage of postoperative complications, even if it requires careful preoperative assessment of risk factors related to comorbidity. abstract_id: PUBMED:33890442 Intraoperative neuromonitoring in thyroidectomy for carcinoma in a high-volume academic hospital. Background: The diagnosis of thyroid carcinoma has changed in recent decades, as has the surgical technique of thyroidectomy (endoscopic surgery, robotic surgery, new energy devices, intraoperative neuromonitoring).
Methods: We analyzed patients who underwent thyroidectomy or lobectomy for thyroid carcinoma from January 2010 to December 2019 at the General Surgery Unit of the Hospital - University of Parma. We divided patients into two groups, based on whether IONM was used. Results: We analyzed data on 638 patients, 486 (76.2%) females and 152 (23.8%) males, with a mean age of 51.8 years. In total, 574 patients underwent total thyroidectomy and lymphadenectomy was performed in 39 patients. The lobectomy rate was higher in interventions with neuromonitoring (13.93%) than in those without IONM (3.06%). Considering the incidence of postoperative complications and the presence of infiltration of perithyroid tissues, thyroiditis or lymph node metastasis in the histological report, a statistically significant percentage of dysphonia and paraesthesia was recorded only in patients with infiltration of perithyroid tissues (P<0.0001). There was no significant difference in postoperative blood calcium values. The use of intraoperative neuromonitoring did not significantly change the incidence of postoperative complications. Conclusions: Our study did not show a protective impact of the use of intraoperative neuromonitoring during thyroidectomy on the incidence of postoperative complications, but confirmed that it increases the surgeon's feeling of safety during surgery and facilitates the identification of any undetected lesion in a visually intact nerve, prompting interruption of the thyroidectomy after lobectomy alone and reducing the risk of bilateral recurrent paralysis. abstract_id: PUBMED:38462878 Frequency and Risk Factors of Tracheomalacia in Patients Undergoing Thyroidectomy. This was a descriptive cross-sectional study to determine the frequency and risk factors of tracheomalacia in patients undergoing thyroidectomy. One hundred and forty-nine patients underwent thyroidectomy at MTI-Khyber Teaching Hospital Peshawar between 1st January 2021 and 1st March 2022.
The frequency of post-thyroidectomy tracheomalacia and possible associated factors were determined. The inclusion criteria were patients of either gender, aged 18 to 70 years, with clinically diagnosed thyroid disorders, who underwent subtotal, hemi-, near-total or total thyroidectomy for their respective thyroid diseases. Post-thyroidectomy tracheomalacia was recorded in 18 patients (12.1%): seventeen patients were aged less than 30 years (p-value 0.038) and 14 (77.8%) patients had a duration of surgery > three hours (p-value <0.001). Young female patients with multinodular goitre who had a longer duration of surgery developed post-thyroidectomy tracheomalacia more frequently. Hence, the incidence of post-thyroidectomy tracheomalacia can be markedly reduced with proper preoperative assessment and postoperative measures. Key Words: Thyroid gland, Thyroid disorders, Tracheomalacia, Thyroidectomy. abstract_id: PUBMED:28712235 Recurrent Laryngeal Nerve Injury In Total Versus Subtotal Thyroidectomy. Background: Both total and subtotal thyroidectomy are accepted treatment options for symptomatic euthyroid multinodular goitre. The choice depends upon the surgeon's preference, weighing disadvantages such as permanent hypothyroidism in total thyroidectomy and a high chance of recurrence in subtotal thyroidectomy. Many surgeons believe that there is a higher incidence of recurrent laryngeal nerve injury in total thyroidectomy, which affects their choice of surgery. This study aimed to compare the incidence of recurrent laryngeal nerve injury in total versus subtotal thyroidectomy. Methods: This non-randomized controlled trial was carried out at the Departments of Surgery and ENT of Ayub Teaching Hospital Abbottabad and Combined Military Hospital Rawalpindi from 1st September 2013 to 30th August 2014.
During the period of study, patients presenting to the surgical outpatient department with euthyroid multinodular goitre and pressure symptoms requiring thyroidectomy were divided into two groups by convenience sampling, with 87 patients in group 1 and 90 patients in group 2. Group 1 underwent total thyroidectomy and Group 2 underwent subtotal thyroidectomy. All patients had a preoperative indirect laryngoscopy examination, which was repeated postoperatively to check for injury to the recurrent laryngeal nerve. Results: A total of 177 patients were included in the study. Of these, 87 patients underwent total thyroidectomy (Group 1). Two of these patients developed recurrent laryngeal nerve injury (2.3%). In Group 2, which underwent subtotal thyroidectomy, three patients developed recurrent laryngeal nerve injury (3.3%). The p-value was 0.678. The overall risk of injury to this nerve in both surgeries combined was 2.8%. Conclusions: There is no significant difference in the risk of recurrent laryngeal nerve damage in patients undergoing total versus subtotal thyroidectomy. Answer: According to the study presented in PUBMED:24913199, thyroidectomy, whether total or partial, can be performed safely in patients with a high BMI (≥25). The study compared postoperative complications such as hypoparathyroidism, recurrent laryngeal nerve palsy, bleeding complications, and postoperative hospital stay between patients with a BMI below 25 and those with a BMI of 25 or higher. The results indicated no statistically significant difference in the occurrence of these complications between the two groups. However, it was noted that there was a significant increase in operative time for patients with a higher BMI. Despite the longer operative time, the conclusion was that thyroidectomy is a safe surgery for patients with a high BMI.
Instruction: Does migration from high and intermediate endemic regions increase the prevalence of hepatitis B infection in Germany? Abstracts: abstract_id: PUBMED:16307403 Does migration from high and intermediate endemic regions increase the prevalence of hepatitis B infection in Germany? Background And Objective: Germany is one of the low endemic areas for hepatitis B. There are 7.3 million foreign citizens and 3.2 million migrants from the former USSR and Eastern Europe with German roots, the "Resettlers" (Aussiedler), who migrated to Germany mostly from countries with moderate or high HBsAg prevalence. The aim of this study was to determine the HBsAg prevalence in adult foreign citizens and resettlers compared with that among the adult German population. Methods: Adult foreign citizens and resettlers were categorized according to their country of origin into groups with low, intermediate or high HBsAg prevalence, using data from the WHO. Statistics of the Federal Office for Statistics, the Federal Office for Administration and the Federal Ministry of the Interior were used for the demographic analysis. The number of chronic HBsAg carriers for the different population groups and the whole population was then calculated. Results: 84% of the adult migrant population in Germany migrated from countries with intermediate and high HBsAg endemicity. For 2003 we calculated 503 040 HBsAg carriers in Germany. 42% of these have a migratory background, even though migrants represent only 12.7% of the whole population. The risk for chronic infection with HBsAg is 7.1 (4.8-13.2) times higher for the resettlers and 4.3 (3.0-8.1) times higher for foreign citizens than for the German population. Conclusions: These remarkable differences in prevalence make it important that migrants and their close contacts be vaccinated properly, pregnant women be included in screening programs and cases of chronic hepatitis B be treated adequately.
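The carrier calculation described in the Methods above can be sketched as a weighted sum over origin groups. All group sizes and carrier rates below are invented for illustration; only the approach (categorize by origin-country endemicity, multiply each group size by its prevalence, and sum) comes from the abstract.

```python
# Hypothetical group sizes and HBsAg carrier rates - NOT the study's data.
groups = {
    "German population": (70_000_000, 0.004),
    "migrants from low-endemic countries": (1_500_000, 0.004),
    "migrants from intermediate-endemic countries": (6_000_000, 0.03),
    "migrants from high-endemic countries": (1_500_000, 0.08),
}

# Expected carriers per group: group size x group-specific prevalence.
carriers = {name: round(size * rate) for name, (size, rate) in groups.items()}
total_carriers = sum(carriers.values())

# Carriers with a migratory background vs. migrants' share of the population.
migrant_carriers = total_carriers - carriers["German population"]
migrant_carrier_share = migrant_carriers / total_carriers
migrant_population_share = 9_000_000 / 79_000_000
```

Even with these made-up numbers, the sketch reproduces the qualitative finding: because migrant groups carry higher group-specific prevalences, their share of carriers far exceeds their share of the population.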
abstract_id: PUBMED:32883187 Risk of hepatitis B when migrating from low to high endemic areas. The prevalence of hepatitis B virus (HBV) infection varies markedly with geography and is endemic in the Arctic. Travel and migration have increased markedly while the influence of migration to high endemic areas remains unknown. We surveyed subjects migrating from an area with a low prevalence of chronic HBV infection (Denmark, 0.01%) to an endemic HBV area (West- and East Greenland, 3% and 29%) in order to describe the prevalence of HBV exposure among migrants. We included 198 Caucasian Danes who had migrated to Greenland and repeated the cross-sectional investigation after 10 years. We performed thorough serological testing for HBV. None had ongoing HBV infection. Migrants to East Greenland were more frequently exposed to HBV than those in West Greenland (34.3% vs 10.3%; p < 0.01). This difference was reduced at 10-year follow-up (8.1% vs 5.7%; ns) and the overall number of participants with past HBV infection decreased over the 10-year period from 19.4% to 6.9% (p = 0.02). In conclusion, migration from very low prevalence to endemic HBV areas was associated with a markedly increased risk of exposure to HBV. Lack of vaccination among migrants from Denmark to Greenland was frequent and poses a continuing risk. All who migrate from low to high endemic HBV areas should be vaccinated. Abbreviations: HBV: Hepatitis B virus; HBV-DNA: Hepatitis B virus deoxyribonucleic acid; HBsAg: Hepatitis B surface antigen; Anti-HBs: Antibodies against hepatitis B surface antigen; Anti-HBc: Antibodies against hepatitis B core antigen; BMI: Body mass index. abstract_id: PUBMED:23132874 Changing epidemiology of hepatitis B and migration - a comparison of six Northern and North-Western European countries.
Background: Increased migration volume and different Hepatitis B prevalence between immigration and emigration countries have changed the Hepatitis B virus (HBV) epidemiology considerably in Northern and North-Western European migrant-receiving countries. Due to the difference in migration status monitoring, the HBV infection data on migrants are not easily comparable among those countries. The study aims were: to compare the migration status indicators used by the national surveillance systems in six Northern and North-Western European countries (the Netherlands, Germany, Finland, Denmark, Sweden and the UK); to determine the impact of migration status on HBV infection by comparing the available data on prevalence and transmission routes of Hepatitis B in the migrant and the general population in the six countries; to recommend sensible indicators and pertinent measures for HBV infection surveillance and control in the region. Methods: Literature review, statistical data analysis on migration and HBV infection in the six countries; expert interviews to identify migration status indicators used in national surveillance systems. Results: Evident differences were found between the migrant and the general population in Hepatitis B prevalence and transmission routes in the six countries. Migration status is monitored differently in the six surveillance systems; immigrants from high/intermediate Hepatitis B endemic countries constitute a substantial proportion of HBsAg(+) and chronic cases in all six countries. Conclusions: International migration has an obvious impact on Hepatitis B prevalence in the six countries. It is important to include common migration status indicators and to collect comparable data for HBV infection surveillance in different notification systems.
Objectives: A representative serosurveillance study (1995) resulted in an estimate of 0.2% for the HBsAg prevalence in the Netherlands. Some risk groups, especially migrants, were not well represented in the study, which probably led to an underestimation of the true HBsAg prevalence. The aim of this study was to calculate an adjusted HBsAg prevalence estimate for the total Dutch population including these risk groups. Methods: According to their country of origin, first-generation migrants (FGM) were classified into groups with low, intermediate and high prevalence using data from the WHO and Statistics Netherlands. The number of chronic HBsAg carriers in different age and population groups was estimated based on studies about age-specific prevalence in different countries. The number of carriers in the indigenous population was estimated using the serosurveillance study. A combination of these estimates led to an estimate of the total prevalence rate in the Netherlands. Results: Nearly 10% of the Dutch population are FGM. Of these, about 18% were born in low-endemic, 71% in middle-endemic and 11% in high-endemic countries. The overall prevalence of HBsAg in FGM is estimated to be 3.77%. Combining these results with the results of the serosurveillance study, the HBsAg prevalence in the Dutch population is estimated to be between 0.32 and 0.51%, and when including injecting drug users and mentally handicapped persons the prevalence rates are 0.36 and 0.55%, respectively. Conclusion: Our results show the high importance of targeting migrants and their close contacts adequately in screening programmes, vaccination and treatment for chronic hepatitis B.
This means that approximately 500 000 chronically infected individuals live in Germany and that around 10 % of them need antiviral treatment. According to an epidemiological calculation, around 42 % of them are migrants. The aim of our study was to gather more information on socio-demographic features of HBV patients and the sero-epidemiological status and treatment of these patients. Methods: Data collection was carried out in hepatological practices and outpatient clinics in Germany. We collected data from adult patients with chronic hepatitis B by studying their patient records and interviewing the patients and the responsible physicians. Results: Data of 160 patients from 20 different facilities could be collected. 57.5 % of them were male, mean age was 43.7 years (range: 19-81 years). 61 (38.1 %) were Germans without a migration background, 82 (51.3 %) were first-generation migrants and 17 (10.6 %) second-generation migrants. We detected significant differences in the level of professional training and employment status between participants with and without migration background. Only 3 % of migrants had a university degree compared to 36.1 % of patients without migration background. In addition, more migrants were unemployed (38.9 % vs. 19.6 %). Of all participants, 72.4 % were HBeAg-negative. 111 (69.4 %) of all patients received hepatitis B-specific treatment at the time they were interviewed, most of them adefovir (41.5 %) or lamivudine (35.4 %) alone. Discussion: The estimated high HBV prevalence in migrants from countries with intermediate to high prevalence is reflected in the high number of patients with migration background in hepatological practices and outpatient clinics.
To avoid further HBV infections in this population group and to reduce the HBV prevalence in Germany, the effect of further interventions, like a general HBV screening of all newly arriving migrants from intermediate or high endemic areas, their vaccination and, if necessary, treatment, has to be tested. abstract_id: PUBMED:33014960 Prevalence of Hepatitis B, C, and D in Germany: Results From a Scoping Review. Background: One of the five strategic directions in the World Health Organization global health sector strategy on viral hepatitis 2016-2021 is to generate strong strategic information for focused action to understand the viral hepatitis epidemic and focus the response. Knowledge of national prevalence is a cornerstone of strategic information. Germany is considered to be a low-prevalence country for viral hepatitis B, C, and D; however, the prevalence is likely to be higher among at-risk groups. Methods: The aim of this work was to give a detailed overview of the prevalence of viral hepatitis B (HBsAg, anti-HBc), C (anti-HCV, HCV RNA), and D (anti-HDV, HDV RNA) in different population groups in Germany. Therefore, we analyzed the results of a comprehensive literature search on various aspects of the epidemiological situation of hepatitis B, C, and D in Germany. Eligible publications including information on hepatitis B, C, and D prevalence were extracted from the overall spreadsheet table and summarized and analyzed based on virus and different population groups. A quality appraisal was performed using a checklist developed by Hoy et al. to assess risk of bias in prevalence studies. Results: Overall, 51 publications were identified through the literature search. The overall prevalence of HBsAg in the general (and proxy) population ranged from 0.3 to 1.6%. Among at-risk groups, including clinical populations and health care workers, the HBsAg prevalence ranged from 0.2% (among rheumatic patients) to 4.5% among HIV-positive patients.
The overall prevalence of anti-HCV in the general (and proxy) population ranged from 0.2 to 1.9%. Among at-risk groups, including clinical populations and health care workers, the anti-HCV prevalence ranged from 0.04% (among health care workers) to 68.0% among people who inject drugs. Conclusions: The hepatitis B and C prevalence in the general population in Germany is low. Prevalence is high to very high among at-risk populations; however, for some groups evidence was incomplete or missing entirely. To reach the elimination goals in Germany and implement a targeted response, more research among at-risk groups is needed. abstract_id: PUBMED:12389938 Predicting the prevalence of antibody to hepatitis C virus (HCV) in a community by the prevalence of elevated levels of alanine aminotransferase: a method to identify areas endemic for HCV. To identify areas endemic for hepatitis C virus (HCV), we attempted to predict the prevalence of antibody to HCV (anti-HCV) in a community by the prevalence of elevated levels of alanine aminotransferase (ALT). The results of health examinations of 2,273 women ≥ 40 years old in five townships in central Taiwan were analyzed. The overall prevalence of elevated ALT levels was 11.6%. A township (20.3%) and 10 villages were identified as having a high prevalence (17.9-40.0%) of elevated ALT levels. Five communities with varying rates of elevated ALT levels, which ranged from 8.0% to 27.3%, were selected for correlation with the prevalence of anti-HCV among adults (≥ 40 years old). The prevalences of anti-HCV, which ranged from 10.1% to 72.1%, were obtained by community screenings and/or hospital-based studies.
abstract_id: PUBMED:20687016 Retrospective analysis of chronic hepatitis D in a West German University Clinic over two decades: migratory pattern, prevalence and clinical outcome. Background/aims: Epidemiology, clinical features and long term-course of chronic hepatitis D were addressed in a non-endemic Central European area. Methods: Sixty-seven patients with chronic hepatitis D were identified among 1307 HBsAg carriers at the university hospital Düsseldorf during two decades (1989 - 2008) and followed for a mean of 7 +/- 6 years. Forty-one of these were treated with IFN-alfa for at least six months. Results: Hepatitis D prevalence increased from 4.1 to 6.2 % among HBsAg carriers during the two decades (p < 0.06). Patients originating from the former Soviet Union (32.1 vs. 46.2 %) and Africa (0 vs. 17.9 %) became more frequent whereas the prevalence of patients from Southern Europe declined (46.5 vs.17.9 % p < 0.03). The time span between the diagnosis of hepatitis B and D was 4.8 +/- 7 years (p < 0.0001). A sustained virological response to interferon-alfa was achieved in 19.5 % of the patients. The yearly incidence rates for death, HCC and complications were 3.2 %, 2.7 % and 8 % among patients with liver cirrhosis. Estimated survival and complication-free survival during 12 years were 72 % and 45 % in cirrhotic compared to 100 % in non-cirrhotic patients (p < 0.008 and p < 0.0001, respectively). Conclusion: Hepatitis D in western Germany appears to be on the increase and has a migration background that should be considered in clinical practice. Clinical outcome and response to IFN are as poor as in endemic regions, indicating the need to improve early diagnosis. abstract_id: PUBMED:23722085 High prevalence of hepatitis delta virus among patients with chronic hepatitis B virus infection and HIV-1 in an intermediate hepatitis B virus endemic region. 
We conducted a study to investigate HIV and hepatitis delta virus (HDV) coinfection among patients with chronic hepatitis B virus (HBV) infection and the triple infection's (HIV/HBV/HDV) clinical implications in India, an intermediate HBV endemic region, with an estimated HIV-positive population of 2.5 million. A total of 450 patients (men: 270; women: 180) with chronic HBV infections and 135 healthy volunteers were screened for HIV and HDV. The incidence of the triple infection was low (4 [0.8%]) compared with dual infections of HIV-1/HBV (7 [1.5%]) and HBV/HDV (22 [4.8%]). Among 21- to 40-year-olds, HBV/HDV coinfection (45.8%) and HBV/HDV/HIV-1 triple infection (75%) were predominant. Among 11 patients coinfected with HIV-1/HBV, 4 (36%) were tri-infected; tri-infection was also associated with chronic hepatitis and cirrhosis. The HDV coinfection was higher among patients coinfected with HBV/HIV-1, despite the declining trend in HDV infection among HIV-negative patients, as previously reported. Thus, it is important to assess the impact of HIV, chronic HBV, and HDV tri-infection in India.
This high prevalence of a vaccine-preventable disease is unacceptable; therefore, vaccination of this high-risk HBV reservoir group should be considered worthwhile. Answer: Yes, migration from high and intermediate endemic regions does increase the prevalence of hepatitis B infection in Germany. A study found that 84% of the adult migrant population in Germany migrated from countries with intermediate and high HBsAg endemicity, and for 2003, it was calculated that there were 503,040 HBsAg carriers in Germany. Of these, 42% had a migratory background, even though migrants represented only 12.7% of the whole population. The risk for chronic infection with HBsAg was significantly higher for the resettlers and foreign citizens than for the German population, indicating that migration from regions with higher HBsAg prevalence contributes to the increased prevalence of hepatitis B in Germany (PUBMED:16307403). Additionally, another study highlighted the importance of targeting migrants and their close contacts adequately in screening programs, vaccination, and treatment for chronic hepatitis B, as nearly 10% of the Dutch population are first-generation migrants, with a significant number coming from countries with intermediate to high HBsAg prevalence. The overall prevalence of HBsAg in first-generation migrants was estimated at 3.77%, which is substantially higher than in the indigenous population (PUBMED:18946359). Furthermore, a retrospective analysis in a West German University Clinic over two decades showed that the prevalence of hepatitis D, which requires hepatitis B for its replication, increased among HBsAg carriers and had a migration background that should be considered in clinical practice (PUBMED:20687016).
These findings underscore the impact of migration from high and intermediate endemic regions on the prevalence of hepatitis B in Germany and the need for appropriate public health interventions to manage and prevent the spread of the virus among migrants and the general population.
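The headline figures quoted above can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only (it is not the study's code); it uses just the numbers cited from PUBMED:16307403 to estimate how many carriers have a migratory background and how over-represented migrants are among carriers relative to their population share.

```python
# Back-of-the-envelope check of the figures quoted from PUBMED:16307403.
# Inputs are taken directly from the answer text above.

carriers_total = 503_040          # estimated HBsAg carriers in Germany, 2003
share_carriers_migrant = 0.42     # fraction of carriers with a migratory background
share_population_migrant = 0.127  # migrants' share of the whole population

# Estimated number of carriers with a migratory background.
migrant_carriers = carriers_total * share_carriers_migrant

# How many times more frequent migrants are among carriers
# than in the general population.
overrepresentation = share_carriers_migrant / share_population_migrant

print(round(migrant_carriers))       # 211277
print(round(overrepresentation, 1))  # 3.3
```

The roughly 3.3-fold over-representation is consistent with the answer's claim that migration from higher-endemicity regions drives a disproportionate share of chronic HBV infections in Germany.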
Instruction: Is preoperative manometry necessary for evaluating reflux symptoms in sleeve gastrectomy patients? Abstracts: abstract_id: PUBMED:25547053 Is preoperative manometry necessary for evaluating reflux symptoms in sleeve gastrectomy patients? Background: The effect of laparoscopic sleeve gastrectomy (LSG) on gastroesophageal reflux disease (GERD) is controversial. However, it has been reported that up to 22% of patients presented with symptomatic GERD after LSG. The aim of our study was to evaluate the necessity of preoperative manometric testing in LSG patients. Methods: We prospectively collected data on LSG candidates who underwent preoperative manometric testing. The normal range for the lower esophageal sphincter (LES) pressure is 10.0-45.0 mmHg. Each patient completed the GERD score questionnaire (scaled severity and frequency of heartburn, regurgitation, epigastric pain, epigastric fullness, dysphagia and cough) at the time of the manometric study. Results: Forty-nine patients were studied. The mean preoperative LES pressure was 13.2±7.7 mmHg (range, 1.0-34.4). Eleven patients reported 1 or more moderate to severe GERD symptoms more than 2-4 times a week, of whom 9 had competent LES pressures. Thirteen (26.5%) patients had decreased LES pressures, and only 3 (23.1%) of these reported moderate to severe symptoms of GERD. In 26 LSG patients with postoperative results, the mean preoperative LES pressure was 14.8±8.0 mmHg (range, 3.5-34.4), and the mean GERD score did not show a significant difference at 9 months after LSG. Twenty-two had normal LES pressures, and 16 (72.8%) of these patients reported reflux symptoms preoperatively. Only 2 (12.5%) of these symptomatic patients reported a higher GERD score postoperatively, but the difference was not significant. Of the 4 patients who had low LES pressures, only 1 patient complained of mild GERD symptoms preoperatively.
However, this patient and another without preoperative symptoms developed severe GERD symptoms postoperatively. Conclusion: A manometric study may be necessary in LSG patients to accurately evaluate GERD and the LES pressure. abstract_id: PUBMED:37264227 Preoperative esophageal testing predicts postoperative reflux status in sleeve gastrectomy patients. Background: Patients who undergo vertical sleeve gastrectomy (VSG) are at risk of postoperative GERD. The reasons are multifactorial, but half of conversions to Roux-en-Y gastric bypass are for intractable GERD. Our institution routinely performs preoperative pH and high-resolution manometry studies to aid in operative decision making. We hypothesize that abnormal pH studies in concert with ineffective esophageal motility would lead to higher rates of postoperative reflux after VSG. Methods: A single-institution retrospective review was conducted of adult patients who underwent preoperative pH and manometry testing and VSG between 2015 and 2021. Patients filled out a symptom questionnaire at the time of testing. Postoperative reflux was defined by patient-reported symptoms at 1-year follow-up. Univariate logistic regression was used to examine the relationship between esophageal tests and postoperative reflux. The Lui method was used to determine the cutpoint for pH and manometric variables maximizing sensitivity and specificity for postoperative reflux. Results: Of 291 patients who underwent VSG, 66 (22.7%) had a named motility disorder and 67 (23%) had an abnormal DeMeester score. Preoperatively, reflux was reported by 122 patients (41.9%); of those, 69 (56.6%) had resolution. Preoperative pH and manometric abnormalities and BMI reduction did not predict postoperative reflux status (p = ns). In a subgroup analysis of patients with an abnormal preoperative pH study, the Lui cutpoint to predict postoperative reflux was a DeMeester greater than 24.8.
Postoperative reflux symptom rates above and below this point were 41.9% versus 17.1%, respectively (p = 0.03). Conclusion: While manometry abnormalities did not predict postoperative reflux symptoms, GERD burden did. Patients with a mildly elevated DeMeester score had a low risk of postoperative reflux compared to patients with a more abnormal DeMeester score. A preoperative pH study may help guide operative decision-making and lead to better counseling of patients about their risk for reflux after VSG. abstract_id: PUBMED:37466827 Role of Preoperative High-Resolution Manometry in the Identification of Patients at High Risk of Postoperative GERD Symptoms 1 Year After Sleeve Gastrectomy. Purpose: Sleeve gastrectomy (SG) has become the most common bariatric procedure, but it is often characterized by the onset of postoperative gastroesophageal reflux disease (GERD). High-resolution manometry (HRM) is a useful tool to detect risk factors for GERD. The aim of this study was to evaluate preoperative manometric parameters as possible predictors of postoperative GERD. Materials And Methods: This was a monocentric retrospective study. We analyzed 164 patients with preoperative esophagitis/GERD symptoms who underwent preoperative HRM and were submitted to SG (July 2020-February 2022). Results: Postoperative GERD was observed in 60 patients (36.6%): 41 of them (68%) already had preoperative GERD symptoms, whereas the remaining 19 patients (32%) developed postoperative symptoms. Female patients developed postoperative GERD in a significantly higher fraction of cases as compared to male patients (82% versus 18%; p < 0.001). DCI (distal contractile integral) was identified as the only HRM parameter correlating with the presence of GERD. Patients with DCI ≤ 1623 mmHg*s*cm developed postoperative GERD in 46% of cases (n = 43/94), as compared to 24% of cases (n = 17/70) among patients with DCI > 1623 mmHg*s*cm (p = 0.005).
On multivariable analysis, female sex (OR 3.402, p = 0.002), preoperative GERD symptoms (OR 2.489, p = 0.013), and DCI ≤ 1623 mmHg*s*cm (OR 0.335, p = 0.003) were identified as independent determinants of postoperative GERD. Conclusion: All the patients with preoperative risk factors for reflux, such as GERD symptoms or esophagitis on EGDS (esophagogastroduodenoscopy), should be considered for an HRM. Moreover, when a DCI ≤ 1623 mmHg*s*cm is found, a bariatric procedure different from SG might be considered. abstract_id: PUBMED:33877508 Do Preoperative Esophageal pH Monitoring and High-Resolution Manometry Predict Symptoms of GERD After Sleeve Gastrectomy? Background: Predictive factors of evolution or appearance of gastroesophageal reflux disease (GERD) after sleeve gastrectomy (SG) have not been identified to date. We aimed to assess the evolution of GERD symptoms 1 year after SG and to determine preoperative predictive factors using high-resolution manometry (HRM) and ambulatory 24-h esophageal pH monitoring (APM). Methods: We included 160 patients who underwent SG between 2013 and 2017 and performed preoperative APM and HRM. Positive APM was defined according to the Lyon consensus. Symptoms of GERD, proton pump inhibitor (PPI) use, weight loss (WL), and diet were recorded in all patients before and 1 year after surgery. Results: One year after surgery, 58 patients (36.3%) complained of GERD symptoms compared to 52 patients (32.5%) preoperatively (p=0.48). Among patients with preoperative GERD symptoms, only 26/52 patients (50%) still had symptoms, whereas 32/108 (29.6%) asymptomatic patients developed de novo GERD symptoms after surgery. PPI use increased after surgery, reaching 36.9% of patients versus 15.0% before (p<0.0001). Only preoperative symptoms of GERD were predictive of postoperative symptoms (OR=2.47 [1.14-5.45]; p=0.023) in multivariate analysis.
Preoperative manometric parameters, postoperative diet, and WL were not related to postoperative symptoms. In asymptomatic patients before surgery, silent GERD (positive APM without symptoms) was predictive of postoperative GERD symptoms (OR=2.69 [1.00-7.25]; p=0.049). Conclusion: Evolution of GERD symptoms after SG reveals improvement for half of the patients and de novo GERD symptoms in one-third of patients. Predictive factors of postoperative symptoms are preoperative symptoms and positive preoperative APM in asymptomatic patients. abstract_id: PUBMED:27400631 Preoperative endoscopy may reduce the need for revisional surgery for gastro-oesophageal reflux disease following laparoscopic sleeve gastrectomy. Laparoscopic sleeve gastrectomy is a safe and effective bariatric operation, but postoperative reflux symptoms can sometimes necessitate revisional surgery. Roux-en-Y gastric bypass is the preferred operation in morbidly obese patients with gastro-oesophageal reflux disease. In 2011, we introduced preoperative endoscopy to assess for hiatus hernia or evidence of oesophagitis in conjunction with an assessment of gastro-oesophageal reflux symptoms for all patients undergoing bariatric surgery with a view to avoiding sleeve gastrectomy for these patients. A prospectively maintained database was used to identify patients who underwent sleeve gastrectomy before and after we changed the unit policy. The need for revisional surgery in patients with troublesome gastro-oesophageal reflux disease was examined. Prior to 2011, 130 patients underwent sleeve gastrectomy, and 11 (8.5%) of them required conversion to Roux-en-Y gastric bypass for symptomatic reflux disease. Following the policy change, 284 patients underwent sleeve gastrectomy, and to date, only five (1.8%) have required revisional surgery (p = 0.001). Baseline demographics were comparable between the groups, and the average follow-up period was 47 and 33 months, respectively, for each group.
Preoperative endoscopy and a detailed clinical history regarding gastro-oesophageal reflux symptoms may improve patient selection for sleeve gastrectomy. Avoiding sleeve gastrectomy in patients with reflux disease and/or hiatus hernia may reduce the incidence of revisional surgery. abstract_id: PUBMED:26110129 Gastroesophageal Reflux Symptoms After Laparoscopic Sleeve Gastrectomy for Morbid Obesity. The Importance of Preoperative Evaluation and Selection. Background: Gastroesophageal reflux disease (GERD) is prevalent in morbidly obese patients, and its severity appears to correlate with body mass index (BMI). Aim: The aim of this study is to investigate the status of GERD after laparoscopic sleeve gastrectomy (LSG). Materials And Methods: A prospectively maintained database of all the patients who underwent LSG from February 2008 to May 2011 was reviewed. Results: A total of 131 patients were included. The mean age and BMI of the patients were 49.4 years and 48.9 kg/m2, respectively. Prior to LSG, subjective reflux symptoms were reported in 67 (51%) patients. Anatomical presence of hiatal hernia was endoscopically confirmed in 35 (52%) patients who reported reflux symptoms prior to LSG. All these patients underwent simultaneous hiatal hernia repair during their LSG. The overall mean operative time was 106 min (range: 48-212 min). There were no intraoperative or 30-day postoperative complications. Out of the 67 preoperative reflux patients, 32 (47.7%) reported resolution of their symptoms after the operation, 20 (29.9%) reported clinical improvement, and 12 (22.2%) reported unchanged or persistent symptoms. Three patients developed new-onset reflux symptoms, which were easily controlled with proton pump inhibitors. No patient required conversion to gastric bypass or duodenal switch because of severe reflux symptoms. At 18 months, the follow-up data were available in 60% of the total patients.
Conclusion: LSG results in resolution or improvement of reflux symptoms in a large number of patients. Proper patient selection, complete preoperative evaluation to identify the presence of hiatal hernia, and good surgical techniques are the keys to achieving optimal outcomes. abstract_id: PUBMED:34865557 Does Sleeve Gastrectomy Worsen Gastroesophageal Reflux Disease in Obese Patients? A Prospective Study. Background: To evaluate the impact of laparoscopic sleeve gastrectomy (LSG) and gastric bypass (LGB) on gastroesophageal reflux disease (GERD). Methods: GERD was evaluated by the Modified Italian Gastroesophageal reflux disease-Health-Related Quality of Life (MI-GERD-HRQL) questionnaire, pH-manometry, endoscopy, and Rx-esophagogram, before and 12 months after surgery. Based on these exams, patients without GERD underwent LSG, and patients with GERD underwent LGB. Results: Thirteen and six patients underwent LSG and LGB, respectively. After LSG, the only statistically significant difference observed at pH-manometry was the median DeMeester score, from 5.7 to 22.7 (P = .0026). De novo GERD occurred in 6 patients (46.2%), with erosive esophagitis in one. The median MI-GERD-HRQL score improved from 3 to 0. Overall, nine patients underwent LGB, but three were lost to follow-up. Preoperative pH-manometry changed the surgical indication from LSG to LGB in 7 out of 9 patients (77.8%). Six patients who underwent LGB completed the study, and at pH-manometry, statistically significant differences were observed in the percentage of total acid exposure time, the number of reflux episodes lasting >5 minutes, and the DeMeester score (P = .009). The median MI-GERD-HRQL score improved from 6.5 to 0. Statistically significant differences were not observed in endoscopy and Rx-esophagogram findings in either group. Conclusions: LSG has a negative impact on GERD, even in patients without preoperative GERD. LGB was confirmed to be the intervention of choice in patients with GERD.
Preoperative pH-manometry may identify patients with silent GERD and make them candidates for LGB rather than LSG. pH-manometry should be used more liberally to establish the correct surgical indication on objective grounds. abstract_id: PUBMED:28428706 Gastroesophageal reflux disease and morbid obesity: To sleeve or not to sleeve? Laparoscopic sleeve gastrectomy (LSG) has reached wide popularity during the last 15 years, due to its limited morbidity and mortality rates and its very good weight loss results and effects on comorbid conditions. However, there are concerns regarding the effects of LSG on gastroesophageal reflux disease (GERD). The interpretation of the current evidence is challenged by the fact that the LSG technique is not standardized, and most studies investigate the presence of GERD by assessing symptoms and the use of acid-reducing medications only. A few studies objectively investigated gastroesophageal function and the reflux profile by esophageal manometry and 24-h pH monitoring, reporting postoperative normalization of esophageal acid exposure in up to 85% of patients with preoperative GERD, and occurrence of de novo GERD in about 5% of cases. There is increasing evidence showing the key role of the surgical technique on the incidence of postoperative GERD. Main technical issues are a relative narrowing of the mid portion of the gastric sleeve, a redundant upper part of the sleeve (both depending on the angle under which the sleeve is stapled), and the presence of a hiatal hernia. Concomitant hiatal hernia repair is recommended. To date, either medical therapy with proton pump inhibitors or conversion of LSG to laparoscopic Roux-en-Y gastric bypass are the available options for the management of GERD after LSG. Recently, new minimally invasive approaches have been proposed in patients with GERD and hypotensive LES: the LINX® Reflux Management System procedure and the Stretta® procedure.
Large studies are needed to assess the safety and long-term efficacy of these new approaches. In conclusion, the recent publication of pH monitoring data and the new insights into the association between sleeve morphology and GERD control have led to a wider acceptance of LSG as a bariatric procedure also in obese patients with GERD, as recently stated in the 5th International Consensus Conference on sleeve gastrectomy. abstract_id: PUBMED:27889485 Safety and effectiveness of anterior fundoplication sleeve gastrectomy in patients with severe reflux. Background: Laparoscopic sleeve gastrectomy has become a popular bariatric surgery in recent years. However, it has been linked to worsening or newly developed gastroesophageal reflux disease (GERD) in the postoperative period. Objectives: The purpose of this study is to determine the safety and effectiveness of anterior fundoplication sleeve gastrectomy in patients with reflux. Setting: Academic hospital, United States. Methods: We prospectively collected data on 31 sleeve gastrectomy patients who concurrently underwent anterior fundoplication between July 2014 and March 2016. Patients were selected when they reported severe reflux before the procedure. Each patient was interviewed using the GERD score questionnaire (scaled severity and frequency of heartburn, regurgitation, epigastric pain, epigastric fullness, dysphagia, and cough) before and 4 months after the procedure. Results: Our patients comprised 27 females and 4 males with a mean age of 49.9±9.6 years (range, 29-63 yr). They had a mean preoperative body mass index of 42.8±5.6 kg/m2 (range, 33.3-58.4 kg/m2), and 67.7% (n = 21) of these patients underwent hiatal hernia repair as well.
Preoperatively, patients had a mean heartburn score of 7.4±3.6 (range, 1-12), regurgitation score of 5.4±4.1 (range, 0-12), epigastric pain score of 2.1±3.2 (range, 0-12), epigastric fullness score of 2.7±3.9 (range, 0-12), dysphagia score of 1.3±2.2 (range, 0-9), and cough score of 0.9±1.8 (range, 0-6). Mean preoperative GERD score was 18.9±9.8 (range, 6-36) in these patients. Patients were interviewed with the same questionnaire approximately 4 months postoperatively. Patients had a mean heartburn score of 1.5±3.2 (range, 0-12), regurgitation score of 0.9±1.7 (range, 0-8), epigastric pain score of 0.4±1.1 (range, 0-4), epigastric fullness score of 1.1±2.4 (range, 0-8), dysphagia score of 0.3±1.1 (range, 0-6), and cough score of 0. Mean postoperative GERD score dropped to 4.1±5.8 (range, 0-28), and the difference was statistically significant (P<.01). One patient was readmitted 28 days later for a staple-line leak and was treated conservatively. No patient required a reoperation due to the procedure within 30 days. Conclusion: Anterior fundoplication sleeve gastrectomy may be a safe and effective alternative in obese patients with severe reflux who want to undergo sleeve gastrectomy. abstract_id: PUBMED:32146568 Sleeve Gastrectomy and Anterior Fundoplication (D-SLEEVE) Prevents Gastroesophageal Reflux in Symptomatic GERD. Background: A worrying increase of gastroesophageal reflux disease (GERD) and Barrett esophagus has been reported after sleeve gastrectomy (SG). Recent reports on combined fundoplication and SG seem to accomplish initial favorable results. However, no study included manometry or pH monitoring to evaluate the impact of fundoplication in SG on esophageal physiology.
Method: In this study, 32 consecutive bariatric patients with GERD and/or esophagitis had high-resolution impedance manometry (HRiM) and combined 24-h pH and multichannel intraluminal impedance (MII-pH) before and after laparoscopic sleeve gastrectomy associated with anterior fundoplication (D-SLEEVE). The following parameters were calculated at HRiM: lower esophageal sphincter pressure and relaxation, peristalsis, and mean total bolus transit time. Acid and non-acid GER episodes were assessed by MII-pH; the symptom index (SI) and symptom-association probability (SAP) were also analyzed. Results: At a median follow-up of 14 months, HRiM showed increased LES function, and MII-pH showed excellent control of both acid exposure of the esophagus and number of reflux events. Bariatric outcomes (BMI and EWL%) were also comparable to regular SG (p = NS). Conclusion: D-SLEEVE is an effective restrictive procedure, which recreates a functional LES pressure able to control and/or prevent mild GERD at 1-year follow-up. Answer: Preoperative manometry appears to be an important tool for evaluating reflux symptoms in sleeve gastrectomy patients. Studies have shown that manometric testing can help to accurately evaluate gastroesophageal reflux disease (GERD) and lower esophageal sphincter (LES) pressure in patients undergoing laparoscopic sleeve gastrectomy (LSG) (PUBMED:25547053). Additionally, preoperative esophageal testing, including manometry, has been found to predict postoperative reflux status in sleeve gastrectomy patients (PUBMED:37264227). High-resolution manometry (HRM) has been identified as a useful tool to detect risk factors for GERD, and certain manometric parameters, such as the distal contractile integral (DCI), have been correlated with the presence of GERD postoperatively (PUBMED:37466827).
Furthermore, preoperative esophageal pH monitoring and HRM have been used to predict symptoms of GERD after sleeve gastrectomy, with preoperative symptoms and positive preoperative pH monitoring in asymptomatic patients being predictive of postoperative symptoms (PUBMED:33877508). Preoperative endoscopy, which can identify hiatus hernia or evidence of esophagitis, may also reduce the need for revisional surgery for GERD following LSG (PUBMED:27400631). In summary, preoperative manometry, along with other preoperative evaluations such as pH monitoring and endoscopy, can be valuable in assessing the risk of postoperative GERD symptoms and guiding surgical decision-making for patients considering sleeve gastrectomy. These assessments can help in selecting appropriate candidates for the procedure and potentially in choosing alternative surgical options for those at high risk of developing or worsening GERD symptoms postoperatively.
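One of the studies cited above (PUBMED:37264227) derives its DeMeester cutpoint of 24.8 by finding the threshold that maximizes sensitivity plus specificity (the Lui method). The sketch below illustrates that general idea with a Youden-style search over candidate thresholds; the data are synthetic and the function name is my own — this is not the study's code or dataset.

```python
# Illustrative Youden-style cutpoint search: pick the threshold on a
# continuous score (e.g. a preoperative DeMeester score) that maximizes
# sensitivity + specificity for a binary outcome (postoperative reflux).

def best_cutpoint(scores, outcomes):
    """Return the threshold maximizing sensitivity + specificity.

    scores   -- continuous test values (e.g. DeMeester scores)
    outcomes -- 1 if the outcome occurred (postoperative reflux), else 0
    A score strictly above the threshold is classified as positive.
    """
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, outcomes) if s > t and y == 1)
        fn = sum(1 for s, y in zip(scores, outcomes) if s <= t and y == 1)
        tn = sum(1 for s, y in zip(scores, outcomes) if s <= t and y == 0)
        fp = sum(1 for s, y in zip(scores, outcomes) if s > t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec > best_j:
            best_j, best_t = sens + spec, t
    return best_t

# Synthetic example: reflux becomes common above a score in the mid-20s.
scores = [10, 14, 18, 22, 26, 30, 35, 40]
outcomes = [0, 0, 0, 0, 1, 1, 0, 1]
print(best_cutpoint(scores, outcomes))  # → 22
```

In practice such a cutpoint would be derived on the study cohort and then used to stratify counseling, as in the 41.9% versus 17.1% postoperative reflux rates reported above and below the DeMeester threshold of 24.8.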
Instruction: Does hospital need more hospice beds? Abstracts: abstract_id: PUBMED:25881621 Does hospital need more hospice beds? Hospital charges and length of stays by lung cancer inpatients at their end of life: A retrospective cohort design of 2002-2012. Background: Previous studies found that hospice and palliative care reduces healthcare costs for end-of-life cancer patients. Aim: To investigate hospital inpatient charges and length-of-stay differences by availability of hospice care beds within hospitals using nationwide data from end-of-life inpatients with lung cancer. Design: A retrospective cohort study was performed using nationwide lung cancer health insurance claims from 2002 to 2012 in Korea. Setting And Participants: Descriptive and multi-level (patient-level and hospital-level) mixed models were used to compare inpatient charges and lengths of stay. Using 673,122 inpatient health insurance claims, we obtained aggregated hospital inpatient charges and lengths of stay from a total of 114,828 inpatients and 866 hospital records. Results: Hospital inpatient charges and length of stay drastically increased as patients approached death; a significant portion of hospital inpatient charges and lengths of stay occurred during the end-of-life period. According to our multi-level analysis, hospitals with hospice care beds tend to have significantly lower end-of-life hospital inpatient charges; however, length of stay did not differ. Hospitals with more hospice care beds were associated with reduction in hospital inpatient charges within 3 months before death. Conclusion: Higher end-of-life healthcare hospital charges were found for lung cancer inpatients who were admitted to hospitals without hospice care beds. 
This study suggests that health policy-makers and the National Health Insurance program need to consider expanding the use of hospice care beds within hospitals and hospice care facilities for end-of-life patients with lung cancer in South Korea, where very limited numbers of resources are currently available. abstract_id: PUBMED:24576832 Hospice agencies' hospital contract status and differing levels of hospice care. In response to a 2011 finding that approximately 27% of Medicare-certified hospices do not provide a single day of general inpatient care (GIP), the authors explored the extent to which hospices have contracts with hospitals for GIP. Using the 2007 National Home and Hospice Care Survey, we estimated that 1119 (32%) agencies had no contract with any hospitals in 2007 and half of those with no contract did not have a contract with a skilled nursing facility (SNF) either. As a result, these hospices were unable to provide GIP referrals for those in need of inpatient care for acute pain and symptom management. More importantly, not having a contract with a hospital was just one of the factors influencing GIP provision. In the multivariate logistic model, after controlling for contract status with a hospital and other hospice characteristics, agencies in the second quartile of hospice patient census (12-29 vs 73 or more, adjusted odds ratio = 14.10; 95% confidence interval 4.26-46.62) were independently related to providing only routine home care. These hospices are more likely to rely solely on scatter beds for GIP provision. Given that a significant portion of hospices do not have a contract with a hospital, policy makers need to understand barriers to contracts with a hospital/SNF for GIP and consider a hospice's contract status as one of the standards for hospice certification. In addition, further research is necessary to understand why hospices that do have a contract with a hospital do not make GIP referral. 
abstract_id: PUBMED:19845425 The effect of certificate-of-need laws on hospital beds and healthcare expenditures: an empirical analysis. Objective: To estimate the effect of certificate-of-need legislation on hospital bed supply and healthcare expenditures. Study Design: This study uses state data on several variables, including healthcare expenditures, hospital bed supply, and the existence of a certificate-of-need program, from 4 periods (1985, 1990, 1995, and 2000). Methods: We estimate 2 multivariate regression equations. In the first equation, hospital bed supply is the dependent variable, and certificate of need is included as an independent variable. In the second equation, healthcare expenditures is the dependent variable, and hospital bed supply and certificate of need are included as independent variables. Results: Certificate-of-need laws have reduced the number of hospital beds by about 10% and have reduced healthcare expenditures by almost 2%. Certificate-of-need programs did not have a direct effect on healthcare expenditures. Conclusion: Certificate-of-need programs have limited the growth in the supply of hospital beds, and this has led to a slight reduction in the growth of healthcare expenditures. abstract_id: PUBMED:10108615 Assessing the need for hospice beds. This paper considers means for determining the level of hospice provision that might be appropriate for particular populations. An analysis of the need for hospice beds is described, and the findings are assessed in relation to the subsequent experience of hospice utilisation. Finally, the implications for hospice provision are discussed. abstract_id: PUBMED:37256687 Associations Between Certificate of Need Policies and Hospice Quality Outcomes. Background: Certificate of need (CON) laws are state-based regulations requiring approval of new healthcare entities and capital expenditures. 
Varying by state, these regulations impact hospices in 14 states and DC, with several states re-examining provisions. Aim: This cross-sectional study examined the association of CON status with hospice quality outcomes using the hospice item set metric (HIS). Design: Data from the February 2022 Medicare Hospice Provider and General Information reports of 4870 US hospices were used to compare group means of the 8 HIS measures across CON status. Multiple regression analysis was used to predict HIS outcomes by CON status while controlling for ownership and size. Results: Approximately 86% of hospices are in states without a hospice CON provision. The unadjusted mean HIS scores for all measures were higher in CON states (M range 94.40-99.59) than Non-CON (M range 90.50-99.53) with significant differences in all except treatment preferences. In the adjusted model, linear regression analyses showed hospice CON states had significantly higher HIS ratings than those from Non-CON states for beliefs and values addressed (β = .05, P = .009), pain assessment (β = .05, P = .009), dyspnea treatment (β = .08, P < .001) and the composite measure (β = .09, P < .001). Treatment preferences, pain screening, dyspnea screening, and opioid bowel treatment were not statistically significant (P > .05). Conclusion: The study suggests that CON regulations may have a modest but beneficial impact on hospice-reported quality outcomes, particularly for small and medium-sized hospices. Further research is needed to explore other factors that contribute to HIS outcomes. abstract_id: PUBMED:36117248 Predicting Need for Hospital Beds to Reduce Emergency Department Boarding. Introduction: Accurately determining the supply and demand of hospital beds for new admissions can help prevent adverse patient outcomes.
Quantitative analysis of modern electronic medical record data can help predict supply and demand for unoccupied staffed hospital beds (SEDs) and aid in eliminating human approximations, standardizing daily work through concrete and objective data. The purpose of this study was to reduce variability and human error in predicting the number of SEDs needed. Methods: In this study, the authors analyzed bed calculator data from a medium-sized, suburban medical center to evaluate the efficacy of a unique bed calculator prediction tool to determine the need for SEDs. The calculator aggregates multiple key reference factors available through the bed calculator system into a cohesive linear regression model. Results: Compared with human estimation, the authors found that the bed calculator is able to predict the number of SEDs needed more effectively. That being said, there was no significant difference in the average boarding times pre- and postintervention, indicating that the bed calculator did not result in decreased boarding times for patients. Discussion: These findings establish the efficacy of the bed calculator and its ability to align bed supply and demand. Because patient boarding times depend on the system's patient flow management, future studies should focus on how to improve various streams of communication and coordination. abstract_id: PUBMED:26275783 Impact of a Centralized Inpatient Hospice Unit in an Academic Medical Center. Objective: Rush University Medical Center (RUMC) and Horizon Hospice opened the first centralized inpatient hospice unit (CIPU) in a Chicago academic medical center in 2012. This study examined if there was a difference in cost or length of stay (LOS) in a CIPU compared to hospice care in scattered beds throughout RUMC. Study Design And Methods: This retrospective, cross-sectional study compared cost and LOS for patients admitted to the CIPU (n = 141) and those admitted to hospice scattered beds (SBM) throughout RUMC (n = 56).
Results: The CIPU patients had a median LOS of 6.0 days versus 2.0 days for SBM patients. Conclusions: The CIPU patients had longer hospice LOS but lower hospital costs. Academic medical centers may benefit from aggregating hospice beds. abstract_id: PUBMED:25217766 The current state, the possibilities and difficulties of palliative and hospice care in Hungary Hospice-palliative care has existed in Hungary for more than 20 years but physicians know very little about it. The objective of the study is to give detailed practical information about the possibilities and the reasonability of hospice care and the process of how to have access to it. The authors review and analyze the database of the national Hospice-Palliative Association to provide the most recent national data on hospice-palliative care. In addition, legal, financial and educational issues are also discussed. At present there are 90 active hospice providers in Hungary, which provide service for more than 8000 terminally ill cancer patients. According to WHO recommendations there would be a need for many more service providers, greater institutional capacity and more beds. There are also attitudinal problems and, therefore, patients are admitted into hospice care too late. Hospice care is often confused with chronic or nursing care due to lack of information. The situation may be improved with proper forms of education such as palliative licence and compulsory, 40-hour palliative training for residents. The authors conclude that a broad dissemination of data may help to overcome misbeliefs concerning hospice and raise awareness concerning death and dying. abstract_id: PUBMED:412816 A model for certification of need for long-term-care beds. A model relating bed supply and utilization is presented in the context of the match between need and service, which is controlled by the screening process that allows or denies access to beds.
The conventional cost-minimization approach to certification of need, that of seeking to reduce inappropriate use, is contrasted with a service-delivery approach that seeks to promote appropriate use of facilities. The model expresses the quality of the screening process and the sensitivity and specificity of utilization in terms of bed supply, utilization, and need for service, which allows it to be used for needs assessment. The model is applied to data on supply and use of beds in Massachusetts skilled nursing facilities, with screening quality estimated by Monte Carlo methods; the results suggest that need and bed supply are positively associated and that the regional variation in skilled-nursing beds in Massachusetts may reflect real variations in need. abstract_id: PUBMED:25012157 High frequency of potential entrapment gaps in beds in an acute hospital. Objective: dimensional guidelines for bedrails have been developed to minimise the risk of patient entrapment within the bed. We examined whether bedrails in a large Irish teaching hospital complied with these standards. Design And Setting: survey of 60 accessible beds in six hospital wards. Methods: a specialised cone and cylinder tool that mimics the size and weight of a small adult neck and head was used to determine gaps in the four zones most associated with entrapment. Results: the number of failures for each zone was 15 beds for zone 1 (any space between the perimeters of the rail); 42 beds for zone 2 (the space under the rail); 41 beds for zone 3 (the space between the inside surface of the bedrail and the mattress) and 13 beds for zone 4 (the space between the mattress and rail at the end of the rail). Failures were more common with hydraulic adjusted than with electric profiling beds. Mattresses that were the wrong size (usually too narrow) or too soft and bedrails that were loose or were poorly maintained accounted for many failures. 
Conclusion: many beds used in our hospital did not comply with dimensional standards to minimise entrapment risks. This emphasises the need for careful selection of patients for whom bedrails are to be used as well as the need for monitoring and maintenance of bed systems. Answer: The question of whether a hospital needs more hospice beds can be informed by several findings from the provided abstracts. Firstly, a study found that hospitals with hospice care beds tend to have significantly lower end-of-life hospital inpatient charges for lung cancer patients, although the length of stay did not differ. This suggests that the availability of hospice care beds within hospitals could reduce healthcare costs, indicating a potential benefit of increasing hospice bed availability (PUBMED:25881621). Secondly, it was reported that a significant portion of Medicare-certified hospices did not provide general inpatient care (GIP), and many did not have contracts with hospitals or skilled nursing facilities to provide GIP when needed. This lack of hospital contracts could limit the ability of hospices to provide comprehensive care, suggesting a need for better integration with hospitals, which could include more hospice beds in hospital settings (PUBMED:24576832). Additionally, certificate-of-need (CON) laws, which regulate the establishment of new healthcare facilities and services, including hospice beds, have been shown to reduce the number of hospital beds by about 10% and healthcare expenditures by almost 2%. This indicates that regulatory frameworks can influence the availability of hospice beds and healthcare costs (PUBMED:19845425). Furthermore, a study on the impact of a centralized inpatient hospice unit in an academic medical center found that patients in such a unit had longer hospice lengths of stay but lower hospital costs compared to hospice care in scattered beds. This suggests that centralizing hospice beds may be beneficial (PUBMED:26275783). 
Lastly, the quality of hospice care, as measured by the hospice item set metric (HIS), was found to be higher in states with CON laws for hospices. This implies that regulatory environments that support the establishment of hospice beds may also contribute to higher quality outcomes (PUBMED:37256687). In conclusion, the evidence suggests that there are benefits to having more hospice beds in hospitals, including reduced healthcare costs and potentially higher quality of care. However, the decision to increase hospice bed availability must also consider regulatory frameworks, hospital-hospice collaborations, and the specific needs of the patient population served.
Instruction: Cardiac surgery in nonagenarians: not only feasible, but also reasonable? Abstracts: abstract_id: PUBMED:35176730 The three R's of reasonable in radiological protection: relationships, rationale, and resources. Central to applying the principle of optimisation in the system of radiological protection is the evaluation of what level of radiation exposure should be considered 'as low as reasonably achievable' (ALARA), after taking into account the prevailing circumstances. Determining what is 'reasonable' is an abstract, although somewhat intuitive concept with many potential interpretations depending on both the situation and those involved, whether individuals or organisations. There are common themes in the types of considerations that need to be addressed to determine 'reasonableness' regardless of the exposure situation. However, despite the consistent and agreeable nature of these themes, there remains a gap in how to apply them in real situations. For example, without measurable goalposts (or a clear process for setting such goalposts) for determining what constitutes ALARA, we can find ourselves misinterpreting the optimisation process as keeping exposures 'as low as possible'. We propose herein, by consolidating and building on existing ideas, an easily understandable and actionable 'reasonableness' framework. This simple, yet broadly applicable tool is intended to help radiation protection experts in the systematic, deliberative, and collaborative reflection on all of the factors that make up 'reasonable' before making a decision-whether it be a proposed medical treatment or clean-up of a contaminated site. Each process execution and decision itself will necessarily retain the complexity of the prevailing circumstance. The proposed 'Rs' of Reasonable represent Relationships (stakeholders, empathy, and trust), Rationale (contextual, technical, and ethical), and Resources (technological, financial, and time). 
abstract_id: PUBMED:12400970 'Reasonable' regulation of low doses in The Netherlands? As long as it is not clear exactly what the risks of low doses are, exposures should be regulated to be 'as low as reasonably achievable' (ALARA). In radiation protection, for normal situations, this means that a projected dose reduction can only be obligatory when the efforts needed to achieve the reduction are 'reasonable' in comparison with it, economic and social aspects being taken into account. In the recent Dutch regulations, 'reasonable' values have been established for the relevant parameters used in the ALARA concept and the paper discusses the values required to calculate the doses for the critical group due to a source. In some cases, the effort expended in making the ALARA dose assessments might not be reasonable in comparison with the dose reduction to be expected. The system which has been developed in the Netherlands to avoid these 'unreasonable' dose calculations, measurements and assessments is explained. abstract_id: PUBMED:15561176 Newborn heart surgery: reasonable expectations and outcomes. This article discusses the necessity for standards for reasonable expectations and outcomes for neonatal treatment of congenital heart disease and the databases that allow such standards to be measured and to be compared among treatment centers. abstract_id: PUBMED:35498029 Risk Stratification by Coronary Perfusion Pressure in Left Ventricular Systolic Dysfunction Patients Undergoing Revascularization: A Propensity Score Matching Analysis. Background: Coronary perfusion pressure (CPP) and coronary artery stenosis are responsible for myocardial perfusion. However, how CPP relates to survival outcomes after revascularization is unclear.
Objective: The aim of this study is to investigate the prognostic role of CPP in patients with left ventricular systolic dysfunction (LVSD) undergoing percutaneous coronary intervention (PCI) with complete revascularization (CR) or reasonable incomplete revascularization (RIR). Methods: We retrospectively screened 6,076 consecutive patients in a registry. The residual SYNTAX score (rSS; SYNTAX: Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery) was used to define CR (rSS = 0) and RIR (0 < rSS ≤ 8). Propensity score matching was performed to reduce bias between RIR and CR. The primary endpoint was all-cause mortality. Results: In total, 816 patients with LVSD who underwent CR or RIR were enrolled. After a mean follow-up of 4.6 years, 134 patients died. Both CPP and RIR independently predicted mortality in the total population. After 1:1 matching, 175 pairs of RIR and CR were found in patients with CPP > 42 mmHg. Moreover, 101 pairs of RIR and CR were present in patients with CPP ≤ 42 mmHg. In patients with CPP > 42 mmHg, RIR was not significantly different from CR in long-term mortality [hazard ratio (HR) 1.20; 95% confidence interval (CI): 0.70-2.07; p = 0.513]. However, in patients with CPP ≤ 42 mmHg, RIR had a significantly higher mortality risk than CR (HR 2.39; 95% CI: 1.27-4.50; p = 0.007). Conclusions: The CPP had a risk stratification role in selecting different revascularization strategies in patients with LVSD. When patients with LVSD had CPP > 42 mmHg, RIR was equivalent to CR in survival. However, when patients with LVSD had CPP ≤ 42 mmHg, RIR had a significantly higher mortality risk than CR. abstract_id: PUBMED:23628651 Cardiac surgery in nonagenarians: not only feasible, but also reasonable? Objectives: Changes in the age profile of the population in the western world and improvement in surgical techniques and postoperative care have contributed to a growing number of cardiosurgical patients aged over 90.
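The stratification rule reported in the CPP study (PUBMED:35498029) can be condensed into a small sketch. The cut-offs come from the abstract (rSS = 0 for CR, 0 < rSS ≤ 8 for RIR, CPP ≤ 42 mmHg as the stratum where RIR carried excess mortality); the function names and the "IR" label for rSS > 8 are my own shorthand.

```python
def classify_revascularization(rss: float) -> str:
    """Classify completeness of revascularization from the residual SYNTAX score (rSS)."""
    if rss == 0:
        return "CR"   # complete revascularization
    if 0 < rss <= 8:
        return "RIR"  # reasonable incomplete revascularization
    return "IR"       # incomplete revascularization beyond the 'reasonable' cut-off

def elevated_risk(rss: float, cpp_mmhg: float) -> bool:
    """Per the reported findings, RIR carried higher mortality than CR only when CPP <= 42 mmHg."""
    return classify_revascularization(rss) == "RIR" and cpp_mmhg <= 42
```

So a patient with rSS = 5 and CPP = 40 mmHg would be flagged, while the same residual score at CPP = 50 mmHg would not; this mirrors the abstract's conclusion that RIR and CR had equivalent survival above the 42 mmHg threshold.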
In an era when transapical and transfemoral aortic valve replacement were being performed, we aimed to evaluate the outcome of nonagenarians after conventional aortic valve replacement and cardiac surgery in general, and to determine perioperative parameters that predict a complicated postoperative course. Methods: Between 1995 and 2011, 49 nonagenarians (aged 91.2±3.1 years) underwent cardiac surgery. A subgroup of 30 patients received aortic valve replacement, either alone (63%; n=19), in combination with coronary artery bypass grafting (27%; n=8) or with other surgical procedures (10%; n=3). Most of the patients suffered from combined aortic valve disease with a mean valve orifice area of 0.6±0.3 cm2 and a mean antegrade pressure gradient of 86±22 mmHg. Results: Cardiac surgery in nonagenarians resulted in remarkable postoperative morbidity and an overall in-hospital mortality of 10% (n=5). In the AVR subgroup, biological valve prostheses were implanted in 29 patients. In this subgroup, the length of stay was 2.9±0.9 days in the intensive care unit and 17.0±5.5 days in the hospital. The in-hospital mortality amounted to 13% (n=4). Although several general preoperative risk factors of postoperative complications such as renal failure, low cardiac output syndrome and New York Heart Association Class IV were remarkably more frequent among the patients who died after the operation, the small cohort of non-surviving nonagenarians did not allow these differences to reach statistical significance. Conclusions: Cardiac surgery in the very elderly, particularly with regard to aortic valve replacement, carries a high risk of early morbidity and mortality. However, in selected nonagenarians, surgery can be performed with an acceptable outcome. The risk may even be reduced by an individual approach to the procedure. With regard to potential risk factors, the selection of these patients should be carried out very carefully.
abstract_id: PUBMED:23888436 Choice and reasonable application of staplers for gastrointestinal surgery Digestive tract reconstruction is the main part of gastrointestinal surgery. With the rapid development of technology and the wide application of stapling devices, more and more surgeons are using stapled anastomosis. Stapled anastomosis is associated with shorter operating time and hospital stay than hand-sewn anastomosis. However, it is not easy to select a suitable stapler from the many available and to use it correctly. Choice and reasonable application of staplers for anastomosis in gastrointestinal surgery are summarized and evaluated in this article. abstract_id: PUBMED:31264468 Results after graded recession of inferior oblique muscle in strabismus sursoadductorius Introduction: Idiopathic strabismus sursoadductorius is characterized by an overaction of the inferior oblique (IO) muscle, with elevation of the eye in adduction. This can result in chronic headache, eye pain, diplopia and abnormal head posture. Aim: Presentation of clinical results after graded recession for inferior oblique overaction (IOOA) in strabismus sursoadductorius. Method: During a period of 6 years (2012-2017) we performed a graded transposition of the IO muscle in 53 patients in the Department of Ophthalmology of the Semmelweis University and analyzed the clinical results retrospectively. The mean age was 12.8 years (range 2-51). The transposition of the muscle (8 mm, 10 mm or maximal) was performed because of IOOA and was planned depending on the vertical deviation (VD) in maximal adduction. In case of maximal transposition, the insertion of IO was placed under the lateral part of the lower rectus muscle. We evaluated the squint angle in five positions of gaze preoperatively and postoperatively at 1 week, 1 month and 3-6 months. We examined best corrected visual acuity, binocular vision and complications after surgery. Results: Postoperatively 25 patients had acceptable stereopsis (Titmus 5).
Visual acuity remained stable after surgery in all cases. The VD measured in maximal adduction decreased by 7.5° when an 8 mm transposition was performed, and by 12° when maximal recession was performed. The mean reduction of VD was 17.8 prism dioptres (PD) after 1 week (p<0.01), 20.5 PD after 1 month and remained stable during a follow-up of 3-6 months. Conclusion: Graded recession of the inferior oblique muscle is a suitable procedure in patients with marked strabismus sursoadductorius. The rates of functional under- or overcorrection or other complications are low. Orv Hetil. 2019; 160(27): 1064-1069. abstract_id: PUBMED:36062300 A day surgery admissions pathway for children and young people with learning disabilities. Following feedback from parents and carers, a new pathway was developed by staff in the paediatric day surgery unit of a hospital in England for admitting children and young people with learning disabilities, some of whom are autistic. This article describes the All About Me pathway, which supports individualised admission planning and uses an interdisciplinary approach to enable multiple examinations or procedures to be completed under a single anaesthetic. The authors explain how the All About Me team works with families, schools and caregivers to gain a greater understanding of each child and young person. This enables the team to make reasonable adjustments to the environment and admission process to maximise the chance of a successful admission. Families have reported that the pathway has made significant improvements to their experience of healthcare services. The pathway continues to develop and has become the focus of a quality improvement project which is expected to demonstrate cost savings. abstract_id: PUBMED:12603410 Testicular-sparing surgery: a reasonable option in selected patients with testicular lesions.
Objective: To describe our experience of inguinal exploration in patients who had a reasonable chance of having a benign testicular lesion. Patients And Methods: From 1995 to 2002, 11 patients (mean age 43 years, range 27-63) with testicular masses that were suspected to be benign underwent inguinal exploration. Results: In nine of the 11 patients, frozen-section analysis and the final pathological results were similar, and two underwent inguinal orchidectomy. In seven patients the testicle was spared. Finally, because of an uncertain pathological diagnosis and patient age, two patients underwent orchidectomy. Conclusions: Inguinal exploration and testicular-sparing surgery are reasonable options in patients with peripheral intratesticular lesions, on the basis of preoperative ultrasonographic characteristics, and if there is a possibility of the mass being benign because of age, race, physical examination and tumour markers. abstract_id: PUBMED:19155914 Fetal intervention: providing reasonable access to quality care. Interest in fetal intervention has become widespread in recent years. Laser therapy for the treatment of severe twin-twin transfusion alone has been the subject of more than 100 peer-reviewed articles in the past 3 years. Significant issues have arisen that affect the availability of these new therapies in the United States. Formal training fellowships have yet to be established. Questions as to the ultimate number of treatment centers that are required to provide reasonable access should be addressed. The establishment of research networks to evaluate new fetal therapies through randomized clinical trials would appear paramount to the advancement of the field. Answer: Cardiac surgery in nonagenarians is not only feasible but can also be considered reasonable in selected cases. 
A study conducted between 1995 and 2011 on 49 nonagenarians (aged 91.2±3.1 years) who underwent cardiac surgery, including a subgroup of 30 patients receiving aortic valve replacement alone or in combination with other procedures, showed that while there is a high risk of early morbidity and mortality, the surgery can be performed with an acceptable outcome in carefully selected patients (PUBMED:23628651). The overall in-hospital mortality was 10%; in the aortic valve replacement subgroup, in-hospital mortality was 13%, with lengths of stay of 2.9±0.9 days in the intensive care unit and 17.0±5.5 days in the hospital. The study concluded that with an individual approach to the procedure and careful patient selection, the risks may be reduced, making cardiac surgery a reasonable option for some nonagenarians (PUBMED:23628651).
Instruction: Percutaneous laser ablation of fetal congenital cystic adenomatoid malformation: too little, too late? Abstracts: abstract_id: PUBMED:11111218 Percutaneous laser ablation of fetal congenital cystic adenomatoid malformation: too little, too late? Objective: Congenital cystic adenomatoid malformation, type III (CCAM III) lesions are large, bulky tumors which can cause mediastinal shift, prevent normal pulmonary growth, and compress the esophagus, thus leading to complications of nonimmune hydrops, pulmonary hypoplasia and polyhydramnios. Because the mortality rate of untreated fetuses with CCAM and hydrops is high, early delivery or intrauterine resection of the enlarged pulmonary lobe (lobectomy) is indicated; however, open fetal resection of CCAM at less than 30 weeks is associated with perioperative mortality that approaches 40%, as well as the usual maternal and fetal morbidity of open fetal surgery. As an alternative, percutaneous laser ablation of a CCAM III lesion with hydrops was attempted. Methods: A 30-year-old G3 P1011 with CCAM III in the left fetal hemithorax developed mediastinal shift, hydrops and polyhydramnios at 23 weeks' gestation. After pregnancy termination and open fetal resection were declined, an 18-gauge needle was placed into the fetal tumor percutaneously under real-time ultrasonographic guidance, using sterile technique with light sedation. A cleaved 400-microm Nd:YAG laser fiber was passed through the needle lumen, and using a power setting of 15 W, a total of 2,943 J of laser energy was delivered in pulses of 1.0 s at 0.2-second intervals over two sessions one week apart. Results: Although tumor size decreased, the hydrops worsened and fetal death occurred. Conclusions: The fetus with CCAM complicated by hydrops is already so compromised by the advanced state of the disease that insufficient time is available for necrotic tissue reabsorption after minimally invasive therapy with laser energy. 
Until earlier markers for intervention are determined, percutaneous laser debulking of CCAM is unlikely to be successful. abstract_id: PUBMED:22990987 Percutaneous laser ablation under ultrasound guidance for fetal hyperechogenic microcystic lung lesions with hydrops: a single center cohort and a literature review. Objective: To evaluate the perinatal outcomes in hydropic fetuses with congenital microcystic pulmonary lesions that underwent percutaneous invasive laser therapy. Method: This retrospective study reviews the literature and our experience between 2004 and 2010. Characteristics of the cystic lung lesions, liquor volume (presence of polyhydramnios or not), localization of ablation (vascular vs interstitial) and gestational age at which the procedure was performed were related to outcome (survival). Results: In total, 16 fetuses with congenital lung lesions underwent 'invasive' percutaneous laser ablation, seven performed in our center and nine published cases. Survival rate was higher in fetuses with a subsequent postnatal diagnosis of bronchopulmonary sequestration (87.5%) compared with congenital adenomatoid malformation (28.6%; p = 0.04). The technique of vascular ablation was more successful (100%) than interstitial ablation (25.0%, p < 0.01). Conclusion: Percutaneous vascular laser ablation seems to be effective for bronchopulmonary sequestration in hydropic fetuses. Outcomes were worse following interstitial ablation for microcystic congenital adenomatoid malformation with hydrops. abstract_id: PUBMED:27363854 Fetal laser ablation of feeding artery of cystic lung lesions with systemic arterial blood supply. Objective: To assess the effectiveness of laser surgery in fetuses with a cystic lung lesion with systemic arterial blood supply (hybrid lung lesion) at risk of perinatal death.
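Looking back at the dosimetry reported in the PUBMED:11111218 case (15 W power setting, 1.0 s pulses at 0.2 s intervals, 2,943 J in total over two sessions), the implied lasing time and pulse count follow directly from energy = power × time. This is back-of-the-envelope arithmetic for the reader, not part of the original report.

```python
power_w = 15.0           # Nd:YAG power setting reported in the case
pulse_on_s = 1.0         # pulse duration
pulse_gap_s = 0.2        # interval between pulses
total_energy_j = 2943.0  # cumulative energy over two sessions

on_time_s = total_energy_j / power_w  # total lasing ("laser on") time
n_pulses = on_time_s / pulse_on_s     # approximate pulse count across both sessions
elapsed_s = n_pulses * pulse_on_s + (n_pulses - 1) * pulse_gap_s  # firing time incl. gaps

print(round(on_time_s, 1), round(n_pulses), round(elapsed_s, 1))  # 196.2 196 235.2
```

Roughly 200 one-second pulses, or about four minutes of firing, were therefore delivered across the two sessions; the fractional pulse count simply reflects that the total was split over two sittings.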
Methods: A cohort of five consecutive fetuses with a large hybrid lung lesion associated with hydrops and/or pleural effusion with severe lung compression was selected for percutaneous ultrasound-guided fetal laser ablation of the feeding artery (FLAFA) before 32 weeks' gestation in a single tertiary national referral center in Queretaro, Mexico. The primary outcomes were survival and need for postnatal surgery. Results: FLAFA was performed successfully in all cases at a median gestational age of 24.9 (range, 24.4-31.7) weeks. After fetal intervention, dimensions in both lungs increased and fluid effusions resolved in all cases. All cases were delivered liveborn at term at a median gestational age of 39.6 (range, 38.0-39.7) weeks, without respiratory morbidity or need for oxygen support, resulting in perinatal survival of 100%. During follow-up, three (60%) cases showed progressive regression of the entire lung mass and did not require postnatal surgery, whereas in two (40%) cases a progressive decrease in size of the mass was observed but a cystic portion of the lung mass persisted and postnatal lobectomy was required. Conclusion: In fetuses with large hybrid lung lesions at risk of perinatal death, FLAFA is feasible and could improve survival and decrease the need for postnatal surgery. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd. abstract_id: PUBMED:29207383 Thoracic Changes after Full Laser Ablation of the Feeding Artery in Fetuses with Bronchopulmonary Sequestration. Objective: To assess longitudinal intrathoracic changes after fetal laser surgery in fetuses with bronchopulmonary sequestration (BPS) with hydrops and/or hydrothorax. 
Study Design: The presence of intrafetal fluid effusions, the lung mass volume ratio (congenital pulmonary airway malformation volume ratio [CVR]), and the observed/expected lung-to-head circumference ratio (O/E-LHR) of both lungs were evaluated in a cohort of BPS fetuses with hydrops and/or hydrothorax treated with full laser ablation of the feeding artery (FLAFA). The longitudinal changes in intrafetal fluid effusions, lung mass volume, and pulmonary growth were analyzed by survival and multilevel analysis against days after FLAFA. Results: FLAFA was successfully performed in 15 cases at a median gestational age of 26.9 weeks. A complete disappearance of the hydrops and hydrothorax was observed a median interval of 7.5 and 21 days after the fetal intervention, respectively. A progressive decrease in the CVR and an increment in the size of both lungs were observed after FLAFA. The O/E-LHR of the lung contralateral and ipsilateral to the side of the BPS became normal on average 8 and 10 weeks after FLAFA, respectively. Conclusion: Fetal laser surgery with FLAFA promotes disappearance of all fetal fluid effusions, a lung mass regression sequence, and a normalization of pulmonary growth. abstract_id: PUBMED:37664775 Rapid reversal of left ventricular enlargement after percutaneous closure of pulmonary sequestration in a newborn with prune belly syndrome. In this article, we describe a newborn with Prune belly syndrome who presented with left ventricular dilation due to an extensive intralobar sequestration of the left lung. Pulmonary sequestration was combined with congenital cystic adenomatoid malformation and also had coarctation of the aorta. Percutaneous closure of the anomalous aberrant artery feeding the sequestrated lung and balloon angioplasty for coarctation resulted in prompt regression of the left ventricular enlargement in the catheterization lab. 
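For reference, the two lung-mass metrics tracked in the FLAFA study above (PUBMED:29207383) are conventionally computed as follows. The prolate-ellipsoid constant 0.52 and the CVR > 1.6 hydrops-risk cut-off come from the wider CPAM literature, not from this abstract, so treat this as an illustrative sketch.

```python
def cvr(length_cm, height_cm, width_cm, head_circ_cm):
    """CPAM volume ratio: ellipsoid lung-mass volume (L x H x W x 0.52) over head circumference."""
    return (length_cm * height_cm * width_cm * 0.52) / head_circ_cm

def oe_lhr_percent(observed_lhr, expected_lhr_for_ga):
    """Observed/expected lung-to-head circumference ratio, expressed as a percentage."""
    return 100.0 * observed_lhr / expected_lhr_for_ga

# A 5 x 4 x 3 cm mass with a 25 cm head circumference:
print(round(cvr(5, 4, 3, 25), 2))  # 1.25, below the 1.6 threshold often cited for hydrops risk
```

Tracking CVR downward and O/E-LHR upward over the weeks after ablation is exactly the regression-and-catch-up sequence the study describes.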
abstract_id: PUBMED:17352451 Successful ultrasound-guided laser treatment of fetal hydrops caused by pulmonary sequestration. Microcystic or solid-looking fetal lung lesions complicated by hydrops are associated with a high mortality rate. The most common of these lesions are microcystic congenital cystic adenomatoid malformations and pulmonary sequestrations (PS). Performing open fetal surgery by hysterotomy and lobectomy has, until now, been the only therapeutic option available following diagnosis of these lesions in the second trimester. Here we describe a new minimally invasive technique for the treatment of PS in the second trimester. Laser coagulation of the feeding systemic artery of the sequestration was performed under ultrasound guidance. The blood flow to the lung tumor was successfully arrested and hydrops reversed. The child was born at term and, at the time of writing, was alive and well at 2 years of age. Large PS with hydrops can be treated successfully by ultrasound-guided minimally invasive laser surgery. It is therefore important to utilize color Doppler ultrasound imaging for identification of a systemic feeding blood vessel in solid fetal lung tumors. Published by John Wiley & Sons, Ltd. abstract_id: PUBMED:35490675 Innovative Fetal Therapy for a Giant Congenital Pulmonary Airway Malformation with Hydrops. Introduction: Congenital pulmonary airway malformations (CPAMs) complicated by hydrops portend significant morbidity and mortality, with fetal survival estimates less than 10%. Case Presentation: We report successful use of ultrasound-guided radiofrequency ablation at 21-week gestation in a hydropic fetus with CPAM, with subsequent resolution of hydrops. Thirty-two-week MRI noted persistent mediastinal shift, and US at 36 weeks and 5 days noted polyhydramnios. Maternal gestational hypertension prompted delivery at 37 weeks, with a cesarean section performed after a failed trial of labor. The infant required CPAP at 100% oxygen and was weaned to 21%.
Tachypnea persisted, and chest CT on day of life 2 demonstrated multiple large cysts in the right lower lobe with anterior pneumothorax. On day of life 3, she successfully underwent a thoracoscopic right lower lobectomy. Adhesions to the chest wall and rib abnormalities were noted. She was extubated to CPAP at the conclusion of the procedure. She was able to wean to 21% on POD2 and transitioned to oral feeds. Her chest tube was removed with resultant ex vacuo pneumothorax noted. She remained asymptomatic and was discharged home on room air POD11. Pathology confirmed a type 1 CPAM. Conclusion: In utero radiofrequency ablation may be an adjunct to the management of large CPAM. abstract_id: PUBMED:21991743 Anesthesia experience in a child affected by congenital cystic adenomatoid malformation and retinopathy of prematurity for eye laser procedure--a case report. Congenital Cystic Adenomatoid Malformation or bronchopulmonary dysplasia (congenital lung cysts disease) is a rare condition. Intrauterine death affects severe cases; those infants surviving until birth follow a chronic course of respiratory illness amenable to corrective thoracic surgery or other corrective measures for non-pulmonary conditions. This report describes an experience with one girl who needed anesthesia for laser treatment of retinopathy.
The EO inadvertently entered the left atrium and ventricle with subsequent fetal bradycardia and demise. Autopsy revealed myocardial necrosis. Conclusion: Percutaneous sclerotherapy has been previously described in the literature for the treatment of microcystic CPAMs with secondary hydrops. This is the first reported case of an adverse event after fetal sclerotherapy. abstract_id: PUBMED:28058487 Fetal surgery: a critical review. Therapeutic fetal surgical procedures are predicated upon the ability to make an accurate fetal diagnosis. The earliest open fetal surgical procedures were introduced in the 1960s to treat Rh isoimmunisation. They were introduced when it became possible to predict impending fetal demise. Open procedures were abandoned when percutaneous approaches proved superior. The introduction of fetal ultrasound allowed the diagnosis of other congenital anomalies, some being amenable to fetal interventions. Open fetal surgical procedures were initially utilised, with significant maternal morbidity. For some anomalies, percutaneous approaches became favoured. In general, all of these procedures involved significant risks to the mother, to save a baby that was likely to die before or shortly after birth without fetal intervention. Fetal repair for myelomeningocele was a "sea change" in approach. The same maternal risks were taken to improve the quality of life of the affected fetus, not save its life. The completion of the "MOMs Trial" has occasioned a "tsunami" of centres in North America applying this approach. Others are attempting percutaneous repairs, with mixed results. This paper reviews the history of fetal surgery, focusing on the themes of the tension between accurate diagnosis and prognosis and open versus "minimally invasive" approaches. 
Answer: The abstract from PUBMED:11111218 discusses a case where percutaneous laser ablation was attempted on a fetus with congenital cystic adenomatoid malformation type III (CCAM III) complicated by hydrops. The procedure involved delivering laser energy to the fetal tumor in an attempt to reduce its size. Despite the intervention, the hydrops worsened, and fetal death occurred. The authors concluded that the advanced state of the disease in fetuses with CCAM complicated by hydrops leaves insufficient time for the reabsorption of necrotic tissue after minimally invasive therapy with laser energy. They suggest that until earlier markers for intervention are determined, percutaneous laser debulking of CCAM is unlikely to be successful. In contrast, the abstract from PUBMED:22990987 reports on a retrospective study evaluating perinatal outcomes in hydropic fetuses with congenital microcystic pulmonary lesions that underwent percutaneous, invasive, laser therapy. The study found that the survival rate was higher in fetuses with a subsequent postnatal diagnosis of bronchopulmonary sequestration compared to congenital adenomatoid malformation. The technique of vascular ablation was more successful than interstitial ablation. The abstract from PUBMED:27363854 describes a cohort of five fetuses with large hybrid lung lesions at risk of perinatal death that underwent successful percutaneous ultrasound-guided fetal laser ablation of the feeding artery. The procedure resulted in increased lung dimensions, resolution of fluid effusions, and a 100% perinatal survival rate without respiratory morbidity or need for oxygen support. Similarly, the abstract from PUBMED:29207383 details the positive outcomes of fetal laser surgery in fetuses with bronchopulmonary sequestration (BPS) with hydrops and/or hydrothorax. The study found that fetal laser surgery promoted the disappearance of all fetal fluid effusions, a lung mass regression sequence, and normalization of pulmonary growth. 
These abstracts collectively indicate that the success of percutaneous laser ablation for fetal congenital cystic adenomatoid malformation may depend on the timing of the intervention, the specific characteristics of the lung lesion, and the technique used for the ablation. While earlier interventions may be more successful, the prognosis can be poor if the disease is too advanced at the time of treatment.
Instruction: Can the single-breath helium dilution method predict lung volumes as measured by whole-body plethysmography? Abstracts: abstract_id: PUBMED:2798580 Correlation of the size of the residual volume measured by whole body plethysmography and the single breath helium dilution method Values of residual volume, intrathoracic gas volume, functional residual capacity, and total lung capacity obtained by body plethysmography were compared with those obtained by the helium dilution method during single-breath diffusing capacity measurement. These parameters were correlated in 22 patients who had undergone complete functional lung testing: spirometry, flow-volume curve, body plethysmography, and single-breath diffusing capacity. All findings were within normal limits, which was a condition for inclusion in the study group. A statistically significant difference (p = 0.01, t-test) was found between residual volumes obtained by plethysmography and by the helium dilution method of the single-breath diffusing capacity test. The residual volume obtained by body plethysmography was higher by 526 ml, or 30%, and the intrathoracic gas volume was higher by 682 ml, or 17%. Despite the statistically significant differences between the two methods, the RV values remained within normal limits when expressed as a percentage of the predicted value. abstract_id: PUBMED:24473761 Can the single-breath helium dilution method predict lung volumes as measured by whole-body plethysmography? Objective: To compare TLC and RV values obtained by the single-breath helium dilution (SBHD) method with those obtained by whole-body plethysmography (WBP) in patients with normal lung function, patients with obstructive lung disease (OLD), and patients with restrictive lung disease (RLD), varying in severity, and to devise equations to estimate the SBHD results.
Methods: This was a retrospective cross-sectional study involving 169 individuals, of whom 93 and 49 presented with OLD and RLD, respectively, the remaining 27 having normal lung function. All patients underwent spirometry and lung volume measurement by both methods. Results: TLC and RV were higher by WBP than by SBHD. The discrepancy between the methods was more pronounced in the OLD group, correlating with the severity of airflow obstruction. In the OLD group, the correlation coefficient of the comparison between the two methods was 0.57 and 0.56 for TLC and RV, respectively (p < 0.001 for both). We used regression equations, adjusted for the groups studied, in order to predict the WBP values of TLC and RV, using the corresponding SBHD values. It was possible to create regression equations to predict differences in TLC and RV between the two methods only for the OLD group. The TLC and RV equations were, respectively, ΔTLC(WBP-SBHD) in L = 5.264 - 0.060 × FEV1/FVC (r2 = 0.33; adjusted r2 = 0.32) and ΔRV(WBP-SBHD) in L = 4.862 - 0.055 × FEV1/FVC (r2 = 0.31; adjusted r2 = 0.30). Conclusions: The correction of TLC and RV results obtained by SBHD can improve the accuracy of this method for assessing lung volumes in patients with OLD. However, additional studies are needed in order to validate these equations. abstract_id: PUBMED:34568392 Measurement of the Total Lung Volume Using an Adjusted Single-Breath Helium Dilution Method in Patients With Obstructive Lung Disease. Background: Whole-body plethysmography (WBP) is the gold standard for measuring lung volume, but its clinical application is limited as it requires expensive equipment and is not simple to use. Studies have shown that the single-breath helium dilution (SBHD) method, which is commonly used in clinical practice, significantly underestimates lung volume in patients with obstructive lung disease (OLD).
By comparing the differences in lung volume measured using SBHD and WBP, we aimed to establish a correction equation for the SBHD method to determine the total lung volume in patients with OLD of different severities. Methods: From 628 patients with OLD simultaneously subjected to SBHD and WBP, 407 patients enrolled between January 2018 and November 2019 were in the training group and 221 enrolled between December 2019 and December 2020 were in the prospective verification cohort. The multiple linear regression equation was used for data in the training group to establish a correction equation for SBHD to determine the total lung volume, and this was validated in the prospective validation cohort. Results: There was a moderate positive correlation between total lung capacity (TLC) determined using the SBHD [TLC (SBHD)] and WBP methods [TLC (WBP)] (r = 0.701; P < 0.05), and the differences between TLC (SBHD) and TLC (WBP) (ΔTLC) were related to the severity of obstruction. As the severity of obstruction increased, the TLC was underestimated by the SBHD method. We established the following correction equation: TLC (adjusted SBHD) (L) = -0.669 + 0.756 × TLC (SBHD) (L) - 0.047 × FEV1/FVC + 0.039 × height (cm) - 0.009 × weight (kg) (r2 = 0.753 and adjusted r2 = 0.751). Next, we validated this equation in the validation cohort. With the correction equation, no statistical difference was observed between TLC (adjusted SBHD) and TLC (WBP) among the obstruction degree groups (P > 0.05). Conclusions: The SBHD method is correlated with WBP to measure the total lung volume, but the SBHD method presents limitations in determining the total lung volume in patients with obstructive lung disease. Here, we established an effective and reliable correction equation in order to accurately assess the total lung volume of patients with OLD using the SBHD method.
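The correction equation reported in the abstract above (PUBMED:34568392) can be sketched as a small function. This is an illustrative sketch only: it assumes TLC is in liters, FEV1/FVC is expressed as a percentage, height in cm, and weight in kg, and the patient values shown are hypothetical.

```python
def adjusted_sbhd_tlc(tlc_sbhd_l, fev1_fvc_pct, height_cm, weight_kg):
    """Correct a single-breath helium dilution TLC toward the WBP value,
    using the regression reported for obstructive lung disease patients."""
    return (-0.669
            + 0.756 * tlc_sbhd_l
            - 0.047 * fev1_fvc_pct
            + 0.039 * height_cm
            - 0.009 * weight_kg)

# Hypothetical OLD patient: TLC(SBHD) = 5.0 L, FEV1/FVC = 50%, 170 cm, 70 kg
print(round(adjusted_sbhd_tlc(5.0, 50, 170, 70), 3))  # 6.761 (liters)
```

Consistent with the abstract's finding that SBHD underestimates TLC as obstruction worsens, a lower FEV1/FVC yields a larger upward correction.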
abstract_id: PUBMED:3995879 Lung volumes measured by helium dilution and by body plethysmography with mouth and oesophageal pressures: a comparative study in patients with lung disease. A comparison was made between the helium dilution technique and body plethysmography--using both mouth pressure and oesophageal pressure against box pressure--for measuring functional residual capacity (FRC). In patients with restrictive lung disease (n = 9) no major differences in FRC were noted between the techniques. In patients with obstructive lung disease (n = 17), helium dilution underestimated FRC by a mean of 0.9 l and conventional body plethysmography (mouth-box pressure) overestimated it by 0.4 l in comparison with FRC obtained by oesophageal pressure against box pressure. The difference between the techniques increased with increasing lung volumes. It is concluded that conventional body plethysmography measures FRC more accurately than the helium dilution technique in patients with obstructive lung disease. abstract_id: PUBMED:8497680 Measuring lung volumes in advanced asbestosis: comparability of plethysmographic and radiographic versus helium rebreathing and single breath methods. Total lung capacity was measured in 16 workers with pulmonary asbestosis using four standard methods: body plethysmography, radiographic lung area, helium dilution by multiple breath and by single breath (alveolar volume). All men had irregular opacities of ILO profusion category 2/1 or greater and four had pleural plaques in addition. The radiographic and plethysmographic methods produced virtually identical mean values for TLC of 7.52 l and 7.64 l and for RV of 4.06 l and 4.32 l and all values were larger than those by helium dilution. The closed circuit helium dilution method systematically underestimated TLC with a mean of 5.89 l as did single breath helium dilution with a mean of 6.39 l. 
These men had a larger mean RV/TLC by body plethysmography (56.9%) and by X-ray area (54.6%) than by the rebreathing dilution method (43.8%). Air trapping within a normal TLC, which characterizes asbestosis, is revealed by radiographic and plethysmographic methods but concealed by gas dilution methods. Use of the latter is at least partly responsible for the impression that asbestosis is a 'restrictive disease'. abstract_id: PUBMED:9009635 Comparison of methods of measuring static lung volumes. The measurement of static lung volumes is important for the accurate diagnosis of lung disorders, and when making volume-dependent measurements, such as airways resistance. There are a variety of methods available. The most accurate method is that of constant volume body plethysmography, which provides an estimate of total lung capacity regardless of the presence of airflow obstruction. Whilst this method may overestimate lung volumes in asthmatics, and is technically more demanding than gas dilution methods, this should be regarded as the principal method for estimating lung volumes. Gas dilution estimates of multi-breath helium or nitrogen dilution or single-breath estimates using the same gases all underestimate total lung capacity in the presence of airflow obstruction. Single-breath methods will underestimate volumes to a greater extent than multi-breath methods. Multi-breath helium dilution is currently regarded as the acceptable alternative to body plethysmography. Estimates of lung volumes from chest radiographs provide an estimate of lung volumes independent of airflow obstruction. They are probably prone to greater variability than body plethysmographic estimates, and it is regarded as unacceptable to expose patients to excess radiation.
abstract_id: PUBMED:3359746 Total lung capacity measured by body plethysmography and by the helium dilution method. A comparative study in different patient groups. The helium dilution method is known to underestimate the total lung capacity (TLC) in patients with poorly or non-ventilated areas in the lungs. The standard plethysmographic method has been reported to overestimate TLC in patients with severe airway obstruction. To determine the magnitude of the difference between the two methods, a comparison was made in different patient groups. In a group of patients with normal lung function tests (n = 20) there was a small but significant average difference in TLC between plethysmography and the helium dilution method, the larger values being obtained with the latter. In patient groups with moderately obstructed airways (n = 23), severely obstructed airways (n = 20), or emphysema (n = 19), there were no significant average differences, although in two patients in the emphysema group the plethysmographic values were considerably larger than those obtained by helium dilution. We conclude that the gas dilution methods and plethysmography with a pressure-compensated volume displacement plethysmograph gave estimates of TLC which agreed even in patients with airway obstruction or emphysema, except in patients with very severe lung disease. abstract_id: PUBMED:882721 Comparative study by body plethysmography and spirometry with the helium dilution method in patients with tuberculosis The persisting defects after treatment of tuberculosis in 51 patients were evaluated by a simultaneous measurement of the functional residual capacity (FRC) using the body plethysmograph and the helium dilution method. In most patients large volumes of trapped air were detected by the difference of two methods of measurement. 
The amount of trapped air was shown to depend on the other pulmonary parameters obtained: the Tiffeneau index, the FRC measured by body plethysmography, and the difference between predicted and measured FRC. abstract_id: PUBMED:27876834 The measurement of lung volumes using body plethysmography and helium dilution methods in COPD patients: a correlation and diagnosis analysis. Chronic obstructive pulmonary disease (COPD) is a chronic airway disease characterized by persistent airflow limitation. Moreover, lung hyperinflation evaluated by lung volumes is also the key pathophysiologic process during COPD progression. Nevertheless, there is still no preferred method to evaluate lung volumes. For this study, we recruited 170 patients with stable COPD to assess lung volumes stratified by airflow limitation severity. Lung volumes including residual volume (RV) and total lung capacity (TLC) were determined by both body plethysmography and helium dilution methods. The discrepancies between these two methods were recorded as ΔRV%pred, ΔTLC%pred, and ΔRV/TLC. We found that ΔRV%pred, ΔTLC%pred, and ΔRV/TLC increased significantly with the severity of COPD. The differences in lung capacity between these two methods were negatively correlated with FEV1%pred and diffusing capacity for carbon monoxide (DLCO%pred). Moreover, the receiver operating characteristic (ROC) for ΔTLC%pred to distinguish severe COPD from non-severe COPD had an area under curve (AUC) of 0.886. The differences in lung volume parameters measured by body plethysmography and helium dilution methods were associated with airflow limitation and can effectively differentiate COPD severity, which may be a supportive method to assess the lung function of stable COPD patients. abstract_id: PUBMED:9387967 Comparison of helium dilution and plethysmographic lung volumes in pregnant women. The multibreath helium equilibration method is the technique recommended for routine measurement of static lung volumes in normal subjects.
However, pregnancy could be an exception to this general rule, due to airway closure during the second half of gestation. The aim of this study was to compare the measurements of lung volumes by plethysmography and helium dilution during pregnancy. Twenty three healthy women were studied at 12, 24 and 36 weeks of pregnancy, and 4 months postpartum. Total lung capacity (TLC), functional residual capacity (FRC) and residual volume (RV) were measured by multibreath helium equilibration (TLCHe, FRCHe and RVHe) and by plethysmography (TLCbox, FRCbox and RVbox). Only at 36 weeks were there differences between the two methods. RVbox was significantly larger than RVHe (1.01+/-0.18 vs 0.77+/-0.21 L; p<0.001). FRCbox was larger than FRCHe (1.95+/-0.32 vs 1.60+/-0.32 L; p<0.001) and TLCbox was larger than TLCHe (4.83+/-0.52 vs 4.45+/-0.51 L; p<0.05). The 95% limits of agreement for differences between lung volumes measured by the two techniques (helium dilution - plethysmography) at 36 weeks were: -0.42 to -0.06 L for RV; -0.54 to -0.17 L for FRC; and -0.66 to -0.11 L for TLC. We conclude that using the multibreath helium equilibration method to measure lung volumes in at-term pregnant women results in underestimation of functional residual capacity and total lung capacity. Answer: The single-breath helium dilution (SBHD) method can predict lung volumes as measured by whole-body plethysmography (WBP), but there are notable discrepancies between the two methods, particularly in patients with obstructive lung disease (OLD). Studies have shown that lung volumes, specifically total lung capacity (TLC) and residual volume (RV), are higher when measured by WBP than by SBHD. The difference between the methods is more pronounced in patients with OLD and correlates with the severity of airflow obstruction (PUBMED:24473761, PUBMED:34568392, PUBMED:3995879, PUBMED:8497680, PUBMED:9009635, PUBMED:3359746, PUBMED:882721, PUBMED:27876834, PUBMED:9387967). 
In patients with normal lung function, the discrepancy between SBHD and WBP is less significant, and both methods can be within normal limits when presented as a percent of the predicted value (PUBMED:2798580). However, in patients with OLD, the SBHD method significantly underestimates lung volume, and correction equations have been devised to improve the accuracy of SBHD measurements in these patients (PUBMED:24473761, PUBMED:34568392). For instance, a study established a correction equation for the SBHD method to determine the total lung volume in patients with OLD of different severities, which, when applied, showed no statistical difference between the corrected SBHD and WBP measurements among the obstruction degree groups (PUBMED:34568392). Another study found that the differences in lung volume parameters measured by the two methods were associated with airflow limitation and could effectively differentiate COPD severity (PUBMED:27876834). In summary, while the SBHD method can predict lung volumes as measured by WBP, it tends to underestimate these volumes, especially in patients with OLD. Correction equations and adjustments are necessary to improve the accuracy of SBHD measurements in these cases. Additional studies are needed to validate these correction equations further.
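As a worked example of the OLD-group difference equations from PUBMED:24473761 cited above, the predicted WBP-SBHD differences can be added back to SBHD results. This is a sketch only: the input values are hypothetical, FEV1/FVC is assumed to be a percentage, and the source abstract notes these equations still require validation.

```python
def predict_wbp_from_sbhd(tlc_sbhd_l, rv_sbhd_l, fev1_fvc_pct):
    """Estimate WBP lung volumes from SBHD results by adding the
    regression-predicted WBP-SBHD differences (in liters) reported
    for the obstructive lung disease group."""
    delta_tlc = 5.264 - 0.060 * fev1_fvc_pct  # r2 = 0.33
    delta_rv = 4.862 - 0.055 * fev1_fvc_pct   # r2 = 0.31
    return tlc_sbhd_l + delta_tlc, rv_sbhd_l + delta_rv

# Hypothetical patient with severe obstruction: FEV1/FVC = 45%
tlc_est, rv_est = predict_wbp_from_sbhd(4.5, 2.0, 45)
print(round(tlc_est, 3), round(rv_est, 3))  # 7.064 4.387 (liters)
```

Note how the predicted difference shrinks as FEV1/FVC rises, mirroring the finding that the two methods agree more closely when obstruction is mild.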
Instruction: Is calcium a coronary vasoconstrictor in vivo? Abstracts: abstract_id: PUBMED:9523818 Is calcium a coronary vasoconstrictor in vivo? Background: Calcium produces constriction in isolated coronary vessels and in the coronary circulation of isolated hearts, but the importance of this mechanism in vivo remains controversial. Methods: The left anterior descending coronary arteries of 20 anesthetized dogs whose chests had been opened were perfused at 80 mmHg. Myocardial segmental shortening was measured with ultrasonic crystals and coronary blood flow with a Doppler flow transducer. The coronary arteriovenous oxygen difference was determined and used to calculate myocardial oxygen consumption and the myocardial oxygen extraction ratio. The myocardial oxygen extraction ratio served as an index of effectiveness of metabolic vasodilation. Data were obtained during intracoronary infusions of CaCl2 (5, 10, and 15 mg/min) and compared with those during intracoronary infusions of dobutamine (2.5, 5.0, and 10.0 microg/min). Results: CaCl2 caused dose-dependent increases in segmental shortening, accompanied by proportional increases in myocardial oxygen consumption. Although CaCl2 also increased coronary blood flow, these increases were less than proportional to those in myocardial oxygen consumption, and therefore the myocardial oxygen extraction ratio increased. Dobutamine caused dose-dependent increases in segmental shortening and myocardial oxygen consumption that were similar in magnitude to those caused by CaCl2. In contrast to CaCl2, however, the accompanying increases in coronary blood flow were proportional to the increases in myocardial oxygen consumption, with the result that the myocardial oxygen extraction ratio remained constant. Conclusions: Calcium has a coronary vasoconstricting effect and a positive inotropic effect in vivo. 
This vasoconstricting effect impairs coupling of coronary blood flow to the augmented myocardial oxygen demand by metabolic vascular control mechanisms. Dobutamine is an inotropic agent with no apparent direct action on coronary resistance vessels in vivo. abstract_id: PUBMED:2802259 Attenuation of the vasoconstrictor action of neuropeptide-Y by calcium-channel blockers. Neuropeptide-Y (NPY) was administered intracoronarily in dogs to examine how its vasoconstrictor action is modified by the calcium (Ca)-channel blockers nisoldipine (0.1 microgram/kg) and nifedipine (1 microgram/kg). Dogs were anesthetized and the left circumflex artery was cannulated without opening the chest by using a specially designed cannula perfused at constant pressure. The change in coronary flow due to NPY was determined before and after the systemic administration of the two Ca-channel blockers. With administration of 1 to 2 nmol of NPY, coronary blood flow decreased maximally by 23.4 +/- 7.8% without changes in perfusion pressure or central venous pressure, and this decrease became significantly smaller after nisoldipine: 16.0 +/- 5.7% (p less than 0.02). A similar attenuation in the decrease in coronary flow was observed in the nifedipine study: 23.2 +/- 7.5% to 12.0 +/- 6.7% (p less than 0.02). A fall in systemic arterial blood pressure was observed after administration of both Ca-channel blockers, but it was significant only after nisoldipine (p less than 0.01). Nonsignificant increases in heart rate were observed after both drugs. Nisoldipine seemed to attenuate the NPY-induced vasoconstriction in dogs, and its equimolar potency is about ten times that of nifedipine. abstract_id: PUBMED:3407677 Neuropeptide Y is a vasoconstrictor of human coronary arteries. Neuropeptide Y (NPY) is a 36-amino-acid polypeptide which coexists with catecholamines in many adrenergic and noradrenergic neurons.
It has been demonstrated to exert pressor effects in the perfused guinea pig heart and to constrict large cerebral and coronary blood vessels in animal studies. To determine if NPY might be a human coronary vasoconstrictor, the authors studied its effect on postmortem human coronary arteries. Proximal epicardial coronary rings were studied in a superfusion apparatus in Krebs-Ringer bicarbonate buffer (37 degrees C, pH 7.4) presaturated with 95% O2-5% CO2. Concentration-response curves were obtained using NPY in 0.1% bovine serum albumin in buffer and the responses were compared to those obtained in the presence of alpha 1, beta, and cyclooxygenase antagonists. A dose-related constrictor effect was obtained with NPY, which was significantly more potent than noradrenaline, constriction often being seen at 10(-12) M concentration. A vasorelaxant effect was seen in nonatherosclerotic vessels at higher concentrations. The vasoconstriction produced by noradrenaline was potentiated by subthreshold concentrations of NPY. The vasoconstrictor effect of NPY was not inhibited by prazosin (10(-6) M), and the vasodilatory effect was not inhibited by propranolol (10(-5) M). Indomethacin (3 X 10(-6) M) did not alter either vasoconstriction or vasorelaxation. The authors conclude that NPY is a potent constrictor of the human coronary artery at concentrations that may be achievable in vivo; it may thus be a contributor to sympathetic enhancement of coronary artery tone. abstract_id: PUBMED:11607040 Norbormide: a calcium entry blocker with selective vasoconstrictor activity in rat peripheral arteries. Norbormide is a unique vasoactive substance endowed with species- and tissue-specific, endothelium-independent vasoconstrictor activity that is restricted to the peripheral arteries of the rat. In rat aorta and in all tested arteries of other species norbormide exhibits a vasorelaxant property, presumably due to the blockade of calcium channels.
A calcium entry blocker effect of norbormide has also been described in isolated, perfused guinea pig hearts. In these preparations norbormide produced coronary vasodilator, as well as negative inotropic and dromotropic effects. In single ventricular myocytes of guinea pigs norbormide reduces L-type calcium current. The mechanism underlying the selective vasoconstrictor effect of norbormide is unknown. In rat caudal artery, a vessel contracted by norbormide, the drug activates phospholipase C (PLC) signal cascade which is the biochemical pathway involved in the contractile effect triggered by most receptor-activating vasoactive agents. Therefore, norbormide-induced contraction of rat peripheral vessels is likely to be due to the activation of a PLC-coupled receptor abundantly or selectively expressed in vascular smooth muscle cells. The identification of this putative receptor could facilitate the development of tissue-selective pharmacological agents. abstract_id: PUBMED:2614933 The dopamine-induced coronary vasoconstrictor response is potentiated by adenosine administration in the dog heart. The ineffectiveness of beta-adrenergic blockade in abolishing adenosine-induced coronary vasodilation was utilized to demonstrate that dopamine (DA) is capable of eliciting very strong coronary vasoconstrictor actions in vivo. In 2 separate groups of dogs anesthetized with pentobarbital, responses to DA were assessed either by flowmeter recordings or by computer-aided infrared thermography, which senses coronary blood flow-dependent heat emission from the epicardium. In untreated controls, submaximal DA infusions (16 micrograms.kg-1.min-1 iv) elicited a coronary vasodilator response. The thermographic equivalent of this hemodynamic action was an increased epicardial temperature. 
Pretreatment with oxprenolol (0.5 mg.kg-1 iv) preserved both basic heart activity and cardiac heat emission at levels which were comparable to the control state, but prevented DA-mediated excitation of cardiac and coronary beta-adrenoceptors. In this state, DA infusion constricted the coronary arteries and tended to decrease heat emission. However, both types of effects were moderate, and only the hemodynamic effect was statistically significant. If DA was given after the coronary bed had been dilated submaximally by adenosine (30 micrograms.kg-1.min-1 infused into the left heart), the flow-reducing effect of DA became a dramatic phenomenon, and the DA-induced epicardial cooling was significantly potentiated. The results show that after eliminating conventional beta-effects, DA affects the coronary arteries through vasoconstrictor mechanisms. This finding suggests that the DA-induced constriction is limited in undilated coronary arteries by the metabolic autoregulatory capacity of the vessels. abstract_id: PUBMED:7721541 Effects of vasoconstrictors on rabbit coronary arteries exposed to iohexol with addition of electrolytes (sodium and calcium). Rationale and Objectives: A well-known side effect of contrast media (CM) is vasodilation. Different types of CM in vitro inhibit the action of vasoconstrictors to various degrees. The nonionic CM iohexol inhibits the action less than ionic CM. In this study, the authors investigate whether the addition of electrolytes to iohexol causes less inhibition of the vasoconstrictors. Vasoconstriction dependent on potential-operated calcium channels (POCs; potassium chloride) or receptor-operated calcium channels (ROCs; histamine, endothelin-1, and prostaglandin F2 alpha [PGF2 alpha]) was specifically studied. Methods: Segments of rabbit coronary arteries were mounted between two L-shaped prongs in tissue baths with buffer solution or solutions of iohexol (140 mg I/mL) with addition of different concentrations of NaCl and CaCl2.
The responses of the vessels to increasing concentrations of a vasoconstrictor were examined. Concentration of vasoconstrictor versus contractile response curves were plotted, and the maximal contraction of the vasoconstrictor (Emax) and half maximal constriction were calculated. Results: Addition of 30 mM NaCl and 1.5 mM CaCl2 caused a vasoconstriction to KCl in iohexol almost identical to its vasoconstriction in buffer. Higher concentrations of NaCl caused decreasing Emax of all vasoconstrictors. The inhibition of the ROC-dependent vasoconstrictors could not be normalized by the addition of electrolytes. Conclusions: The inhibition of the POC-dependent vasoconstrictor was caused by "ion toxicity" of iohexol. The inhibition of the ROC-dependent vasoconstrictors was caused by other mechanisms, in a manner compatible with a blocking effect on ROCs. abstract_id: PUBMED:3993773 Characterization of a coronary vasoconstrictor produced by cultured endothelial cells. The vasoactive effects of media obtained from bovine aortic endothelial cells (EC) in culture were directly tested on isolated rings of the porcine left anterior descending coronary artery. Increasing concentrations of EC-conditioned culture media resulted in progressive dose-dependent increments in isometric tension in porcine, bovine, and canine coronary arteries; the response did not require an intact endothelium. Control (nonconditioned) media and that conditioned by fibroblasts or vascular smooth muscle cells in culture had negligible effects on vessel tone. The vasoconstriction required extracellular Ca2+ and was unaffected by inhibitors of cyclooxygenase and lipoxygenase or by antagonists to the alpha- or beta-adrenergic, serotonergic, histaminergic, or cholinergic receptor systems. 
Calibrated gel filtration of the media indicated a molecular weight of 8,500 for the vasoactive factor; treatment of the EC-conditioned media with either sodium dodecyl sulfate, trypsin, alkali, or acid hydrolysis completely abolished the vasoconstrictive effect. These findings and others now provide evidence for the existence of an EC-derived polypeptide vasoconstrictor that may be important in the regulation of vascular smooth muscle contractility. abstract_id: PUBMED:34391560 USE OF LOCAL ANESTHETICS WITH A VASOCONSTRICTOR AGENT DURING DENTAL TREATMENT IN HYPERTENSIVE AND CORONARY DISEASE PATIENTS. A SYSTEMATIC REVIEW. Background: Coronary disease and Hypertension are highly prevalent health problems worldwide, with the latter being one of the most common diseases in patients visiting dental clinics. Local anesthetics (LAs) with vasoconstrictor agents (VC) are known to be commonly used in dental practice. For the above-mentioned reasons, dentists should know how to adapt and treat patients with these hazardous conditions. Objective: The aim of this study was to find out if the use of local anesthetics (LAs) in combination with vasoconstrictor (VC) agents in dental treatment presents a risk in patients with a known history of Hypertension and/or Coronary disease. Materials And Methods: This systematic review was conducted in accordance with the PRISMA guidelines and registered on the PROSPERO database (CRD42020187369). The search strategy was based on MeSH terms, the Boolean operator AND, and the PICO model. It was designed to identify all the randomized clinical trials (RCTs) published in the last 30 years, which assessed whether the use of LA with VC agents in dental treatment produces a significant increase/decrease in hemodynamics in patients with a known history of Hypertension and/or Coronary disease. The Cochrane Collaboration's tool was used to assess risk of bias of the included RCTs.
Results: An initial electronic search resulted in 87 papers; however, only 9 RCTs met the inclusion criteria. There were a total of 482 subjects (N = 482), of whom 412 had a known history of Hypertension or Coronary disease. Conclusions: According to the literature reviewed, the use of 1 to 2 cartridges of local anesthetics with 1:80,000, 1:100,000 or 1:200,000 epinephrine in patients with controlled Hypertension and/or Coronary disease is safe. Randomized clinical trials are essential in determining the safety or risks associated with the use of LAs with VC agents in patients with poorly controlled Hypertension and Coronary disease. abstract_id: PUBMED:9101116 Increasing vasoconstrictor response to ergonovine with oxidative injury in canine coronary artery. Background: The effects of oxygen free radicals on coronary vasoreactivity remain unknown. Objective: To examine whether oxygen free radicals increase coronary arterial tone and sensitivity to vasoconstrictor stimulation in closed-chest dogs. Methods: Oxygen radicals were generated by the reaction of xanthine plus xanthine oxidase (XXO) and effects of these substances on the left coronary artery (the percentage diameter change) and on the constrictor effect of ergonovine were examined in vivo in 19 anesthetized, closed-chest dogs by selective coronary angiography. The effects of XXO solution and ergonovine were assessed in a cumulative fashion using 100, 200, and 300 ml XXO and 50, 100, 150, and 200 micrograms ergonovine, in 5 (group I) and 6 dogs (group II), respectively. The effects of XXO on the constrictor responses elicited by 50 micrograms ergonovine were examined in eight dogs (group III). Changes in the vascular endothelium were examined by postmortem electron microscopic examination. Results: Oxidative injury alone produced slight constriction of the coronary artery, but the change was not significant.
However, ergonovine-induced vasoconstriction was enhanced after administration of 100 and 200 ml (cumulative amount) XXO solution (P < 0.05, group II versus group III). The enhancement was no longer observed after administration of 300 ml (cumulative amount) XXO solution. Scanning and transmission electron microscopies revealed the formation of blebs and ulceration in the coronary endothelium after administration of XXO solution. Conclusion: These results suggest that oxygen radicals can enhance the ergonovine-induced coronary vasoconstriction in a concentration-dependent manner. There seems to be a critical level of oxygen radicals for the production of the effect. abstract_id: PUBMED:9055942 Long-lasting coronary vasoconstrictor effects and myocardial uptake of endothelin-1 in humans. The effect of intravenous administration of the endothelium-derived vasoconstrictor peptide endothelin-1 (ET-1; 0.2, 1, and 8 pmol kg-1 min-1) on coronary blood flow in relation to plasma ET-1 as well as blood lactate and glucose levels was investigated in six healthy volunteers. Coronary sinus blood flow was measured by thermodilution. Administration of ET-1 elevated arterial plasma ET 35-fold, dose-dependently increased mean arterial blood pressure from 95 +/- 5 mmHg to 110 +/- 6 mmHg (P < 0.01) and reduced heart rate from 64 +/- 4 beats min-1 to 58 +/- 4 beats min-1 (P < 0.05) at 8 pmol kg-1 min-1. Coronary sinus blood flow was reduced maximally by 23 +/- 4% (P < 0.01) and coronary vascular resistance increased by 48 +/- 11% (P < 0.01). Coronary sinus oxygen saturation decreased from 35 +/- 1% to 22 +/- 2% at 2 min after the infusion (P < 0.01). A coronary constrictor response was observed at a 4-fold elevation in plasma ET. The reduction in coronary sinus blood flow lasted 20 min and coronary sinus oxygen saturation was still reduced 60 min after the infusion. Neither myocardial oxygen uptake nor arterial oxygen saturation was affected by ET-1.
Myocardial lactate net uptake decreased by 40% whereas glucose uptake was unaffected. At the highest infusion rate there was a net removal of plasma ET by 24 +/- 3% over the myocardium (P < 0.05). The results show that ET-1 induces long-lasting reduction in coronary sinus blood flow via a direct coronary vasoconstrictor effect in healthy humans observable at a 4-fold elevation in plasma ET-1. Furthermore, there is a net removal of circulating ET-1 by the myocardium. Answer: Yes, calcium is a coronary vasoconstrictor in vivo. A study conducted on anesthetized dogs with open chests showed that intracoronary infusions of CaCl2 caused dose-dependent increases in myocardial oxygen consumption, but the increases in coronary blood flow were less than proportional to the increases in myocardial oxygen consumption. This resulted in an increased myocardial oxygen extraction ratio, indicating that calcium has a coronary vasoconstricting effect in vivo. This vasoconstricting effect impairs the coupling of coronary blood flow to the augmented myocardial oxygen demand by metabolic vascular control mechanisms (PUBMED:9523818).
Instruction: Do cancer patients tweet? Abstracts: abstract_id: PUBMED:26977205 Qualitative Twitter analysis of participants, tweet strategies, and tweet content at a major urologic conference. Introduction: The microblogging social media platform Twitter is increasingly being adopted in the urologic field. We aimed to analyze participants, tweet strategies, and tweet content of the Twitter discussion at a urologic conference. Methods: A comprehensive analysis of the Twitter activity at the European Association of Urology Congress 2013 (#eau2013) was performed, including characteristics of user profiles, engagement and popularity measurements, characteristics and timing of tweets, and content analysis. Results: Of 218 Twitter contributors, doctors (45%) were the most frequent, ahead of associations (15%), companies (10%), and journals (3%). However, journals had the highest tweet/participant rate (22 tweets/participant), profile activity (median: 1177 total tweets, 1805 followers, 979 following), and profile popularity (follower/following ratio: 2.1; retweet rank percentile: 96%). Links in a profile were associated with higher engagement (p<0.0001) and popularity (p<0.0001). Of 1572 tweets, 57% were original tweets, 71% contained mentions, 20% contained links, and 25% included pictures. The majority of tweets (88%) were posted during conference hours, with an average of 24.7 tweets/hour and a peak activity of 71 tweets/hour. Overall, 59% of tweets were informative, led by the topics uro-oncology (21%), urologic research (21%), and urotechnology (12%). Limitations include the analysis of a single conference, the assessment of global profile rather than domain-specific activity, and the rapid evolution in Twitter-using habits. Conclusion: Results of this single conference qualitative analysis are promising for an enrichment of the scientific discussions at urologic conferences through the use of Twitter.
abstract_id: PUBMED:29114591 Tweet for the cure: A snapshot of Twitter usage by 3 U.S. oncologic professional societies. Purpose: Medical societies are incorporating Twitter to communicate with their members and connect with patients. This study compares the online presence of 3 major oncologic societies. Methods And Materials: All available tweets in 2014 by the American Society for Radiation Oncology (ASTRO), American Society of Clinical Oncology (ASCO), and Society of Surgical Oncology (SSO) were collected. We analyzed whether posts were original content or retweets. The monthly tweet rate was followed to assess trends. We created 2 new metrics, supporter ratio and tweet density, to correlate online presence and engagement with offline membership breadth. The supporter ratio is the number of people following the organization divided by the number of registered members of each society. The tweet density is the total number of posts divided by the number of registered members of each society. Results: In February 2015, ASCO, ASTRO, and SSO had 36,385; 10,899; and 2721 members, respectively. ASCO's Twitter handle had 33,974 followers, with a supporter ratio of 0.93. A total of 2563 original tweets and 1416 retweets were estimated, which represents a tweet density of 0.11. @ASTRO_org had 5445 followers and a supporter ratio of 0.50. In 2014, ASTRO posted 415 original content tweets and 9 retweets, with a tweet density of 0.039. SSO had a supporter ratio of 0.91 on the basis of 2481 followers. In 2014, SSO posted 207 original tweets and 190 retweets, with a tweet density of 0.15. An increase in tweets and retweets was seen during the month of each society's annual meeting. ASTRO's 61% increase in September 2014 was smaller than SSO's 462% and ASCO's 84%. Conclusion: ASTRO's use of Twitter lags behind ASCO and SSO. Although all 3 societies show increased Twitter use during their annual meetings, they should work toward more meaningful engagement throughout the year. 
The new metrics of tweet density and supporter ratio will serve as benchmarks for member engagement in future studies. abstract_id: PUBMED:27036217 Activity, content, contributors, and influencers of the Twitter discussion on urologic oncology. Objectives: To analyse the activity, content, contributors, and influencers of the Twitter discussion on urologic oncology. Materials And Methods: We performed a comprehensive quantitative and qualitative Twitter analysis for the hashtags #prostatecancer, #bladdercancer, #kidneycancer, and #testicularcancer. Symplur was used to analyse activity over different time periods and the top influencers of the Twitter discussion. Tweet Archivist and Twitonomy analysis tools were used to assess characteristics of content and contributors. Results: Twitter discussion on urologic oncology in 2014 contained 100,987 tweets created by 39,326 participants. Mean monthly tweet activity was 6,603±2,183 for #prostatecancer, 866±923 for #testicularcancer, 457±477 for #bladdercancer, and 401±504 for #kidneycancer. Twitter activity increased by 41% in 2013 and by 122% in 2014. The content analysis detected awareness, cancer, and risk as frequently mentioned words in urologic oncology tweets. Prevalently used related hashtags were the general hashtag #cancer, awareness hashtags, and the respective cancer/urology tag ontology hashtags. Contributors originated from 41 countries on 6 continents and had a mean of 5,864±4,747 followers. They tweeted from exclusively mobile devices (39%) more frequently than from desktop devices (29%). Health care organizations accounted for 58% of the top influencers in all cancers. The largest proportion of physicians was among the #prostatecancer and #kidneycancer (each 9%) influencers, and individual contributors were most frequent in the discussion on #kidneycancer (57%) and #testicularcancer (50%).
Conclusion: There is a significant and growing activity in the Twitter discussion on urologic oncology, particularly on #prostatecancer. The Twitter discussion is global, social, and mobile, and merits attention of stakeholders in health care as a promising communication tool. abstract_id: PUBMED:37074770 Scanxiety Conversations on Twitter: Observational Study. Background: Scan-associated anxiety (or "scanxiety") is commonly experienced by people having cancer-related scans. Social media platforms such as Twitter provide a novel source of data for observational research. Objective: We aimed to identify posts on Twitter (or "tweets") related to scanxiety, describe the volume and content of these tweets, and describe the demographics of users posting about scanxiety. Methods: We manually searched for "scanxiety" and associated keywords in cancer-related, publicly available, English-language tweets posted between January 2018 and December 2020. We defined "conversations" as a primary tweet (the first tweet about scanxiety) and subsequent tweets (interactions stemming from the primary tweet). User demographics and the volume of primary tweets were assessed. Conversations underwent inductive thematic and content analysis. Results: A total of 2031 unique Twitter users initiated a conversation about scanxiety from cancer-related scans. Most were patients (n=1306, 64%), female (n=1343, 66%), from North America (n=1130, 56%), and had breast cancer (449/1306, 34%). There were 3623 Twitter conversations, with a mean of 101 per month (range 40-180). Five themes were identified. The first theme was experiences of scanxiety, identified in 60% (2184/3623) of primary tweets, which captured the personal account of scanxiety by patients or their support person. Scanxiety was often described with negative adjectives or similes, despite being experienced differently by users. Scanxiety had psychological, physical, and functional impacts. 
Contributing factors to scanxiety included the presence and duration of uncertainty, which was exacerbated during the COVID-19 pandemic. The second theme (643/3623, 18%) was the acknowledgment of scanxiety, where users summarized or labeled an experience as scanxiety without providing emotive clarification, and advocacy of scanxiety, where users raised awareness of scanxiety without describing personal experiences. The third theme was messages of support (427/3623, 12%), where users expressed well wishes and encouraged positivity for people experiencing scanxiety. The fourth theme was strategies to reduce scanxiety (319/3623, 9%), which included general and specific strategies for patients and strategies that required improvements in clinical practice by clinicians or health care systems. The final theme was research about scanxiety (50/3623, 1%), which included tweets about the epidemiology, impact, and contributing factors of scanxiety as well as novel strategies to reduce scanxiety. Conclusions: Scanxiety was often a negative experience described by patients having cancer-related scans. Social media platforms like Twitter enable individuals to share their experiences and offer support while providing researchers with unique data to improve their understanding of a problem. Acknowledging scanxiety as a term and increasing awareness of scanxiety is an important first step in reducing scanxiety. Research is needed to guide evidence-based approaches to reduce scanxiety, though some low-cost, low-resource practical strategies identified in this study could be rapidly introduced into clinical care. abstract_id: PUBMED:25993158 Using social media to learn and communicate: it is not about the tweet. Social media can be defined as the use of media to interact with social networks. Social media is not about the content of the tweet, inasmuch as the technologies and social media platforms influence how content is generated, disseminated, and used. 
Social media is not dead, but rather it offers rapid incoming and outgoing forms of communication, which may be utilized in a variety of "use cases" in medicine and oncology. abstract_id: PUBMED:37531163 Perspectives and Experiences of Patients With Thyroid Cancer at a Global Level: Retrospective Descriptive Study of Twitter Data. Background: Twitter has become a popular platform for individuals to broadcast their daily experiences and opinions on a wide range of topics and emotions. Tweets from patients with cancer could offer insights into their needs. However, limited research has been conducted using Twitter data to understand the needs of patients with cancer despite the substantial amount of health-related data posted on the platform daily. Objective: This study aimed to uncover the potential of using Twitter data to understand the perspectives and experiences of patients with thyroid cancer at a global level. Methods: This retrospective descriptive study collected tweets relevant to thyroid cancer in 2020 using the Twitter scraping tool. Only English-language tweets were included, and data preprocessing was performed to remove irrelevant tweets, duplicates, and retweets. Both tweets and Twitter users were manually classified into various groups based on the content. Each tweet underwent sentiment analysis and was classified as either positive, neutral, or negative. Results: A total of 13,135 tweets related to thyroid cancer were analyzed. The authors of the tweets included patients with thyroid cancer (3225 tweets, 24.6%), patient's families and friends (2449 tweets, 18.6%), medical journals and media (1733 tweets, 13.2%), health care professionals (1093 tweets, 8.3%), and medical health organizations (940 tweets, 7.2%), respectively. The most discussed topics related to living with cancer (3650 tweets, 27.8%), treatment (2891 tweets, 22%), diagnosis (1613 tweets, 12.3%), risk factors and prevention (1137 tweets, 8.7%), and research (953 tweets, 7.3%). 
An average of 36 tweets pertaining to thyroid cancer were posted daily. Notably, the release of a film addressing thyroid cancer and the public disclosure of a news reporter's personal diagnosis of thyroid cancer resulted in a significant escalation in the volume of tweets. From the sentiment analysis, 53.5% (7025/13,135) of tweets were classified as neutral statements and 32.7% (4299/13,135) of tweets expressed negative emotions. Tweets from patients with thyroid cancer had the highest proportion of negative emotion (1385/3225 tweets, 42.9%), particularly when discussing symptoms. Conclusions: This study provides new insights on using Twitter data as a valuable data source to understand the experiences of patients with thyroid cancer. Twitter may provide an opportunity to improve patient and physician engagement or apply as a potential research data source. abstract_id: PUBMED:24867458 Do cancer patients tweet? Examining the twitter use of cancer patients in Japan. Background: Twitter is an interactive, real-time media that could prove useful in health care. Tweets from cancer patients could offer insight into the needs of cancer patients. Objective: The objective of this study was to understand cancer patients' social media usage and gain insight into patient needs. Methods: A search was conducted of every publicly available user profile on Twitter in Japan for references to the following: breast cancer, leukemia, colon cancer, rectal cancer, colorectal cancer, uterine cancer, cervical cancer, stomach cancer, lung cancer, and ovarian cancer. We then used an application programming interface and a data mining method to conduct a detailed analysis of the tweets from cancer patients. Results: Twitter user profiles included references to breast cancer (n=313), leukemia (n=158), uterine or cervical cancer (n=134), lung cancer (n=87), colon cancer (n=64), and stomach cancer (n=44). 
A co-occurrence network is seen for all of these cancers, and each cancer has a unique network conformation. Keywords included words about diagnosis, symptoms, and treatments for almost all cancers. Words related to social activities were extracted for breast cancer. Words related to vaccination and support from public insurance were extracted for uterine or cervical cancer. Conclusions: This study demonstrates that cancer patients share information about their underlying disease, including diagnosis, symptoms, and treatments, via Twitter. This information could prove useful to health care providers. abstract_id: PUBMED:30343997 Numbers needed to tweet: social media and impact on surgery. Surgeons have taken to social media using Facebook, Twitter, LinkedIn and Instagram for both personal and professional purposes. In an age of global access to everything there is a risk for information overload, and thus a need to collate and curate. Hashtags going viral have had a huge social influence, e.g. #ILookLikeASurgeon. SoMe has become an instrument for connecting, sharing and mentoring. It is a tool for educating the next generation of surgeons. For researchers and journals the question remains whether the input required to tap into SoMe platforms is rewarded with a similar gain in output, reputation and exposure. Summarized information in visual abstracts may help disseminate study-results to a wider audience, but the effect of a #visualabstract may be specialty-specific. Currently, there is little knowledge as to the 'numbers needed to tweet' in order to make impact on figures such as downloads, citations, and eventually impact factor. abstract_id: PUBMED:1394073 Patients and solicitude Patients are not only passive and vulnerable beings who need to be cared for. They are also autonomous, active and capable of caring for others as well as for themselves. 
In this study, the author has completed lengthy interviews with five women in order to identify different types of caring. Complicity-based caring is characterized by verbal and nonverbal behavior of patients who are sharing a common experience. This type of caring can be encouraged by a third-party who brings patients together with similar experiences. Action-oriented caring consists of intervening for other patients as an antidote for one's own fear, panic and anguish. Solidarity-based caring corresponds with assisting fellow patients in finding and maintaining their own identity, pride and dignity. Co-operative caring can be observed in group-type settings such as the behavior observed during support groups or vegetarian cooking classes for cancer patients. Self-centred caring refers to the various forms of concern one has for one's self. The experience of suffering can determine how caring is expressed. Concern can force individuals to examine their way of acting and interacting more carefully. Conscious choices are made to avoid empty and exhausting relationships and encourage the growth of positive relationships. Paradoxically, this change in perspective brings about an open mind and heart. It is at this time that caring for healthy individuals appears. Unfortunately, there is very little distinction between patients and healthy individuals, but rather sensitive relationships between individuals facing difficult life challenges. abstract_id: PUBMED:37387665 Patients' contribution to the review of research protocols Involvement of patients in the ethical management of research protocols began by patients' associations involved in the fight against AIDS in France in the 1990s. It was the first step towards recognizing the major role of patients in research that concerns them. 
This article aims to describe this emancipation and its consequences for the evolution of research by drawing on two experiences: 1) that of the « Comité de patients pour la recherche clinique » founded in 1998 by the « Ligue nationale contre le cancer » and by the « Fédération nationale des centres de lutte contre le cancer »; 2) that of the « Collège des relecteurs de l'Inserm » set up in 2007. Answer: Yes, cancer patients do tweet. A study examining the Twitter use of cancer patients in Japan found that cancer patients share information about their underlying disease, including diagnosis, symptoms, and treatments, via Twitter (PUBMED:24867458). Another study analyzing Twitter data related to thyroid cancer revealed that patients with thyroid cancer were among the authors of the tweets, discussing topics related to living with cancer, treatment, diagnosis, risk factors, prevention, and research (PUBMED:37531163). Additionally, an observational study on "scanxiety" conversations on Twitter identified that most of the users initiating conversations about scanxiety from cancer-related scans were patients (PUBMED:37074770). These findings indicate that cancer patients are active on Twitter and use the platform to share their experiences, seek support, and connect with others.
Instruction: Does adjunctive family therapy enhance recovery from bipolar I mood episodes? Abstracts: abstract_id: PUBMED:15555694 Does adjunctive family therapy enhance recovery from bipolar I mood episodes? Background: Family therapy is sometimes used as adjunctive treatment to pharmacotherapy to help patients recover from mood episodes of bipolar I disorder. However, the efficacy of this practice is not known. Methods: Ninety-two patients meeting criteria for a current bipolar I mood episode were randomly assigned to family therapy plus pharmacotherapy, multifamily psychoeducational group therapy plus pharmacotherapy, or pharmacotherapy alone. Time to recovery was analyzed with survival analysis. Results: The proportion of subjects within each treatment group who recovered did not significantly differ, nor did time to recovery. Limitations: The analyses did not include other outcomes such as psychosocial functioning, prophylaxis against recurrences of mood episodes, or compliance with pharmacotherapy. Conclusions: Neither adjunctive family therapy nor adjunctive multifamily psychoeducational group therapy significantly improves the rate of recovery from mood episodes of bipolar I disorder, compared to treatment with pharmacotherapy alone. abstract_id: PUBMED:19032711 Preventing recurrence of bipolar I mood episodes and hospitalizations: family psychotherapy plus pharmacotherapy versus pharmacotherapy alone. Objectives: This study compared the efficacy of three treatment conditions in preventing recurrence of bipolar I mood episodes and hospitalization for such episodes: individual family therapy plus pharmacotherapy, multifamily group therapy plus pharmacotherapy, and pharmacotherapy alone. Methods: Patients with bipolar I disorder were enrolled if they met criteria for an active mood episode and were living with or in regular contact with relatives or significant others. 
Subjects were randomly assigned to individual family therapy plus pharmacotherapy, multifamily group therapy plus pharmacotherapy, or pharmacotherapy alone, which were provided on an outpatient basis. Individual family therapy involved one therapist meeting with a single patient and the patient's family members, with the content of each session and number of sessions determined by the therapist and family. Multifamily group psychotherapy involved two therapists meeting together for six sessions with multiple patients and their respective family members, with the content of each session preset. All subjects were prescribed a mood stabilizer, and other medications were used as needed. Subjects were assessed monthly for up to 28 months. Following recovery from the index mood episode, subjects were assessed for recurrence of a mood episode and for hospitalization for such episodes. Results: Of a total of 92 subjects that were enrolled in the study, 53 (58%) recovered from their intake mood episode. The analyses in this report focus upon these 53 subjects, 42 (79%) of whom entered the study during an episode of mania. Of the 53 subjects who recovered from their intake mood episode, the proportion of subjects within each treatment group who suffered a recurrence by month 28 did not differ significantly between the three treatment conditions. However, only 5% of the subjects receiving adjunctive multifamily group therapy required hospitalization, compared to 31% of the subjects receiving adjunctive individual family therapy and 38% of those receiving pharmacotherapy alone, a significant difference. Time to recurrence and time to hospitalization did not differ significantly between the three treatment groups. Conclusion: For patients with bipolar I disorder, adjunctive multifamily group therapy may confer significant advantages in preventing hospitalization for a mood episode. 
abstract_id: PUBMED:18794208 Adjunctive psychotherapy for bipolar disorder: state of the evidence. Objective: Psychotherapy has long been recommended as adjunctive to pharmacotherapy for bipolar disorder, but it is unclear which interventions are effective for which patients, over what intervals, and for what domains of outcome. This article reviews randomized trials of adjunctive psychotherapy for bipolar disorder. Method: Eighteen trials of individual and group psychoeducation, systematic care, family therapy, interpersonal therapy, and cognitive-behavioral therapy are described. Relevant outcome variables include time to recovery, recurrence, duration of episodes, symptom severity, and psychosocial functioning. Results: The effects of the treatment modalities varied according to the clinical condition of patients at the time of random assignment and the polarity of symptoms at follow-up. Family therapy, interpersonal therapy, and systematic care appeared to be most effective in preventing recurrences when initiated after an acute episode, whereas cognitive-behavioral therapy and group psychoeducation appeared to be most effective when initiated during a period of recovery. Individual psychoeducational and systematic care programs were more effective for manic than depressive symptoms, whereas family therapy and cognitive-behavioral therapy were more effective for depressive than manic symptoms. Conclusions: Adjunctive psychotherapy enhances the symptomatic and functional outcomes of bipolar disorder over 2-year periods. The various modalities differ in content, structure, and associated mediating mechanisms. Treatments that emphasize medication adherence and early recognition of mood symptoms have stronger effects on mania, whereas treatments that emphasize cognitive and interpersonal coping strategies have stronger effects on depression. 
The placement of psychotherapy within chronic care algorithms and its role as a preventative agent in the early stages of the disorder deserve investigation. abstract_id: PUBMED:37701540 Role of adjunctive nonpharmacological strategies for treatment of rapid-cycling bipolar disorder. Rapid-cycling bipolar disorder (RCBD) is a phase of bipolar disorder defined by the presence of ≥ 4 mood episodes in a year. It is a common phenomenon characterized by greater severity, a predominance of depression, higher levels of disability, and poorer overall outcomes. It is resistant to treatment by conventional pharmacotherapy. The existing literature underlines the scarcity of evidence and the gaps in knowledge about the optimal treatment strategies for RCBD. However, most reviews have considered only pharmacological treatment options for RCBD. Given the treatment-refractory nature of RCBD, nonpharmacological interventions could augment medications but have not been adequately examined. This review carried out an updated and comprehensive search for evidence regarding the role of nonpharmacological therapies as adjuncts to medications in RCBD. We identified 83 reviews and meta-analyses concerning the treatment of RCBD. Additionally, we found 42 reports on adjunctive nonpharmacological treatments in RCBD. Most of the evidence favoured concomitant electroconvulsive therapy as an acute and maintenance treatment. There was preliminary evidence to suggest that chronotherapeutic treatments can provide better outcomes when combined with medications. The research on adjunctive psychotherapy was particularly scarce but suggested that psychoeducation, cognitive behavioural therapy, family interventions, and supportive psychotherapy may be helpful. The overall quality of evidence was poor and suffered from several methodological shortcomings.
There is a need for more methodologically sound research in this area, although clinicians can use the existing evidence to select and individualize nonpharmacological treatment options for better management of RCBD. Patient summaries are included to highlight some of the issues concerning the implementation of adjunctive nonpharmacological treatments. abstract_id: PUBMED:31940011 Effects of Family-Focused Therapy vs Enhanced Usual Care for Symptomatic Youths at High Risk for Bipolar Disorder: A Randomized Clinical Trial. Importance: Behavioral high-risk phenotypes predict the onset of bipolar disorder among youths who have parents with bipolar disorder. Few studies have examined whether early intervention delays new mood episodes in high-risk youths. Objective: To determine whether family-focused therapy (FFT) for high-risk youths is more effective than standard psychoeducation in hastening recovery and delaying emergence of mood episodes during the 1 to 4 years after an active period of mood symptoms. Design, Settings, And Participants: This multisite randomized clinical trial included referred youths (aged 9-17 years) with major depressive disorder or unspecified (subthreshold) bipolar disorder, active mood symptoms, and at least 1 first- or second-degree relative with bipolar disorder I or II. Recruitment started from October 6, 2011, and ended on September 15, 2016. Independent evaluators interviewed participants every 4 to 6 months to measure symptoms for up to 4 years. Data analysis was performed from March 13 to November 3, 2019. Interventions: High-risk youths and parents were randomly allocated to FFT (12 sessions in 4 months of psychoeducation, communication training, and problem-solving skills training; n = 61) or enhanced care (6 sessions in 4 months of family and individual psychoeducation; n = 66). Youths could receive medication management in either condition. 
Main Outcomes And Measures: The coprimary outcomes, derived using weekly psychiatric status ratings, were time to recovery from prerandomization symptoms and time to a prospectively observed mood (depressive, manic, or hypomanic) episode after recovery. Secondary outcomes were time to conversion to bipolar disorder I or II and longitudinal symptom trajectories. Results: All 127 participants (82 [64.6%] female; mean [SD] age, 13.2 [2.6] years) were followed up for a median of 98 weeks (range, 0-255 weeks). No differences were detected between treatments in time to recovery from pretreatment symptoms. High-risk youths in the FFT group had longer intervals from recovery to the emergence of the next mood episode (χ2 = 5.44; P = .02; hazard ratio, 0.55; 95% CI, 0.48-0.92), and from randomization to the next mood episode (χ2 = 4.44; P = .03; hazard ratio, 0.59; 95% CI, 0.35-0.97) than youths in enhanced care. Specifically, FFT was associated with longer intervals to depressive episodes (log-rank χ2 = 6.24; P = .01; hazard ratio, 0.53; 95% CI, 0.31-0.88) but did not differ from enhanced care in time to manic or hypomanic episodes, conversions to bipolar disorder, or symptom trajectories. Conclusions And Relevance: Family skills-training for youths at high risk for bipolar disorder is associated with longer times between mood episodes. Clarifying the relationship between changes in family functioning and changes in the course of high-risk syndromes merits future investigation. Trial Registration: ClinicalTrials.gov identifier: NCT01483391. abstract_id: PUBMED:18677198 Psychosocial interventions as adjunctive therapy for bipolar disorder. Evidence suggests that adjunctive psychosocial interventions in bipolar disorder are clinically beneficial and cost effective when used in conjunction with pharmacotherapy.
Appropriate adjunctive psychosocial interventions in bipolar disorder have been found to be associated with improved treatment adherence, greater stability, fewer hospitalizations, fewer days hospitalized, less need for crisis interventions, decreased relapse risk, and fewer acute episodes. Specific types of therapy that have shown efficacy include bipolar-specific cognitive-behavioral therapy, family-focused therapy, interpersonal and social rhythm therapy, and systematic care management. A positive impact on medication adherence is a major goal of these adjunctive treatments. Studies show variable efficacy depending on patient characteristics, phase of illness, and presence of comorbid conditions so that therapies should be selected and administered on an individualized basis that takes into account each specific patient's current presentation and treatment history. For example, psychosocial interventions have been found to be more effective in patients with depressive than manic symptoms and during maintenance treatment. Given findings showing benefits of psychosocial interventions in the treatment of bipolar disorder, the STAndards for BipoLar Excellence (STABLE) project has developed and tested a performance measure to evaluate the use of this critical treatment component. abstract_id: PUBMED:17392295 Effectiveness of adjunctive antidepressant treatment for bipolar depression. Background: Episodes of depression are the most frequent cause of disability among patients with bipolar disorder. The effectiveness and safety of standard antidepressant agents for depressive episodes associated with bipolar disorder (bipolar depression) have not been well studied. Our study was designed to determine whether adjunctive antidepressant therapy reduces symptoms of bipolar depression without increasing the risk of mania. 
Methods: In this double-blind, placebo-controlled study, we randomly assigned subjects with bipolar depression to receive up to 26 weeks of treatment with a mood stabilizer plus adjunctive antidepressant therapy or a mood stabilizer plus a matching placebo, under conditions generalizable to routine clinical care. A standardized clinical monitoring form adapted from the mood-disorder modules of the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, fourth edition, was used at all follow-up visits. The primary outcome was the percentage of subjects in each treatment group meeting the criterion for a durable recovery (8 consecutive weeks of euthymia). Secondary effectiveness outcomes and rates of treatment-emergent affective switch (a switch to mania or hypomania early in the course of treatment) were also examined. Results: Forty-two of the 179 subjects (23.5%) receiving a mood stabilizer plus adjunctive antidepressant therapy had a durable recovery, as did 51 of the 187 subjects (27.3%) receiving a mood stabilizer plus a matching placebo (P=0.40). Modest nonsignificant trends favoring the group receiving a mood stabilizer plus placebo were observed across the secondary outcomes. Rates of treatment-emergent affective switch were similar in the two groups. Conclusions: The use of adjunctive, standard antidepressant medication, as compared with the use of mood stabilizers, was not associated with increased efficacy or with increased risk of treatment-emergent affective switch. Longer-term outcome studies are needed to fully assess the benefits and risks of antidepressant therapy for bipolar disorder. (ClinicalTrials.gov number, NCT00012558 [ClinicalTrials.gov].). abstract_id: PUBMED:25933099 Adjunctive armodafinil for major depressive episodes associated with bipolar I disorder. Background: In a previous study, adjunctive armodafinil 150 mg/day significantly improved depressive symptoms associated with bipolar I disorder. 
Methods: Multicenter, double-blind study of patients with a major depressive episode despite bipolar I disorder maintenance therapy randomized to adjunctive placebo or adjunctive armodafinil 150 or 200 mg/day for 8 weeks; for logistical reasons, assignment to armodafinil 200 mg/day was discontinued early. Primary efficacy was measured by change from baseline to week 8 in 30-Item Inventory of Depressive Symptomatology-Clinician-Rated (IDS-C30) total score. Results: Patients were randomized to adjunctive placebo (n=230), adjunctive armodafinil 150 mg/day (n=232), or adjunctive armodafinil 200 mg/day (n=30; analyzed for safety only). Least-squares mean change in IDS-C30 total score was numerically superior for adjunctive armodafinil 150 mg/day vs adjunctive placebo, but was not statistically significant (p=0.13). Armodafinil was well-tolerated. Adverse events (AEs) observed in >5% with adjunctive armodafinil 150 mg/day and more frequently than with adjunctive placebo were headache (16% [38/231] vs 13% [30/229]) and nausea (7% [17/231] vs 2% [5/229]). The most common AEs with adjunctive armodafinil 200 mg/day were diarrhea and dry mouth (17% [5/30] each vs 6% [13/229] and 1% [3/229], respectively, with adjunctive placebo). Limitations: Early study discontinuation for logistical reasons by the sponsor limited adjunctive armodafinil 200-mg/day assessment. Conclusions: FDA-approved bipolar I depression treatments are limited. Adjunctive armodafinil 150 mg/day reduced depressive symptoms associated with bipolar I disorder to a greater extent than adjunctive placebo, although the difference failed to reach statistical significance. Safety data indicate treatment with adjunctive armodafinil was well-tolerated. abstract_id: PUBMED:22554110 Quetiapine for the treatment of acute bipolar mania, mixed episodes and maintenance therapy. Introduction: Bipolar disorder is characterized by mood instability, which can be challenging to manage.
First-line pharmacological approaches usually involve lithium, anticonvulsants and antipsychotics. Over the past fifteen years, several second-generation antipsychotics have demonstrated benefits for various phases of this disorder. Areas Covered: This article examines the pharmacodynamics and pharmacokinetics of quetiapine; its evidence base as an acute and maintenance monotherapy or adjunctive therapy for bipolar manic or mixed episodes is also discussed, along with the related issues of its safety and tolerability. Expert Opinion: In the context of bipolar disorder, quetiapine is the only agent approved as a monotherapy or adjunct therapy for acute manic/mixed episodes in adults and adolescents; as a monotherapy for acute depressive episodes in adults; and as an adjunctive maintenance therapy for bipolar I and II disorder in adults. In addition to its antipsychotic properties, this broad mood-stabilizing potential may simplify the management of select patients. abstract_id: PUBMED:32077116 Traumatic experiences, family functioning, and mood disorder development in bipolar offspring. Objectives: Children of patients affected with bipolar disorder (BD; bipolar offspring) are at high risk of developing mood disorders. Our aim is to investigate how environmental factors such as childhood trauma and family functioning relate to the development of mood disorders in offspring at familial risk for BD. Design: The current study is part of a longitudinal prospective cohort study among offspring of parents with BD. Methods: The current study is part of the Dutch Bipolar Offspring Study, an ongoing prospective cohort study among adolescent offspring of a parent with BD. Bipolar offspring were psychiatrically evaluated at baseline and at 1-, 5-, and 12-year follow-up. Complete follow-up data over the 12-year follow-up were available for 102 offspring. Childhood trauma was measured with the Childhood Trauma Questionnaire (CTQ) and filled out by the offspring.
Family functioning was reported by the mother with the 130-item Questionnaire for Family Problems (QFP). Results: Emotional maltreatment was significantly associated (HR = 1.82, CI 1.18-2.82, p = .007) with mood disorder onset in bipolar offspring. No association was found with the family functioning total score (HR = 1.04, CI 0.94-1.15, p = .085) or its subscales. Conclusions: The current study suggests that emotional maltreatment is associated with mood disorder development in bipolar offspring. Remarkably, the association of offspring-reported emotional maltreatment and mood disorder onset was not reflected in parent-reported family functioning (e.g., support and communication, openness or involvement). Possible explanations are discussed and warrant further study. Practitioner Points: Offspring of bipolar patients are at increased risk of developing mood disorders across the lifetime. Emotional trauma contributes to the likelihood of developing mood disorders in bipolar offspring. In the daily treatment of bipolar patients having children, attention should be given to parental style and difficulties. Further research using multiple informant methods on childhood trauma and family functioning is needed to further disentangle the effects of these variables on the onset of psychopathology in bipolar offspring. Answer: Based on the provided abstracts, adjunctive family therapy does not appear to significantly enhance recovery from bipolar I mood episodes when compared to pharmacotherapy alone in terms of the rate of recovery. A study with 92 patients did not find a significant difference in the proportion of subjects who recovered or the time to recovery between those who received family therapy plus pharmacotherapy and those who received pharmacotherapy alone (PUBMED:15555694).
However, another study suggested that while adjunctive family therapy did not significantly prevent recurrence of bipolar I mood episodes, adjunctive multifamily group therapy was associated with a significant reduction in hospitalizations compared to both individual family therapy and pharmacotherapy alone (PUBMED:19032711). Furthermore, a review of randomized trials indicated that the effectiveness of adjunctive psychotherapy, including family therapy, for bipolar disorder varies depending on the clinical condition of patients at the time of random assignment and the polarity of symptoms at follow-up. Family therapy was found to be more effective in preventing recurrences when initiated after an acute episode and was more effective for depressive than manic symptoms (PUBMED:18794208). For symptomatic youths at high risk for bipolar disorder, family-focused therapy (FFT) was associated with longer intervals between mood episodes compared to enhanced usual care, suggesting a potential benefit in delaying the emergence of mood episodes (PUBMED:31940011). Overall, while adjunctive family therapy may not enhance the rate of recovery from bipolar I mood episodes compared to pharmacotherapy alone, it may offer other benefits such as reducing hospitalizations and extending the time between mood episodes, particularly when used as part of a multifamily group therapy approach or with high-risk youths (PUBMED:19032711, PUBMED:31940011).
Instruction: Is there a trans-abdominal testicular descent during the second gestational trimester? Abstracts: abstract_id: PUBMED:27286121 Is there a trans-abdominal testicular descent during the second gestational trimester? Study in human fetuses between 13 and 23 weeks post conception. Objectives: To confirm if a real inner descent of the testis occurs, correlating the testicular position with fetal parameters and analyzing the position of the testes relative to the internal ring. Material And Methods: Twenty-nine human fetuses between 13 and 23 weeks post conception (WPC) were studied. The fetuses were carefully dissected with the aid of a stereoscopic lens with 16/25X magnification and testicular position observed. With the aid of a digital pachymeter the distance between the lower pole of the kidney and the upper extremity of the testis (DK-T) was measured to show the position of the testis. During the dissection we also indicated the position of the testes relative to the internal ring. Means were statistically compared using simple linear regression and the paired T-test. Results: All 58 testes had an abdominal position. The DK-T on the right side measured between 0.17 and 1.82 cm (mean = 0.79 cm) and on the left side it was between 0.12 and 1.84 cm (mean = 0.87 cm), without statistically significant differences (p = 0.0557). The linear regression analysis indicated that DK-T on both sides correlated significantly and positively with fetal age. All fetuses with more than 20 WPC, heavier than 350 g and with CRL over 22 cm had a greater distance than the average DK-T. We observed that the 58 testes remained adjacent to the internal ring throughout the period studied. Conclusions: The testes remain adjacent to the internal ring throughout the period studied, indicating that there is no real trans-abdominal testicular descent during the second gestational trimester.
abstract_id: PUBMED:23913766 The usefulness of gestational sac and placental sonographic morphology in differentiating between second-trimester tubal and abdominal pregnancy: case report and a review of literature. Second-trimester tubal pregnancy is rarely encountered and published cases are limited. There are no established sonographic criteria to differentiate it from abdominal pregnancy; however, differences in their medical management make this distinction important. We report a case of a 14-week 5-day tubal pregnancy that presented near rupture. Sonography demonstrated many overlapping features with abdominal pregnancy. In addition, the extrauterine gestational sac was rounded and well defined and the placenta was crescent-shaped. We propose that these sonographic features represent a second-trimester tubal pregnancy rather than an abdominal pregnancy. abstract_id: PUBMED:30482067 Fetal abdominal circumference in the second trimester and prediction of small for gestational age at birth. Background: Infants that are small for gestational age (SGA) at birth are at increased risk for morbidity and mortality. Unfortunately, the antenatal prediction of SGA is suboptimal. Objectives: We sought to: (1) examine the association of second trimester fetal abdominal circumference < 10% (2T-AClag) with SGA and other gestational and neonatal adverse outcomes; (2) assess 2T-AClag as a predictor of SGA. Study design: Retrospective study of 212 singleton gestations with 2T-AClag on routine ultrasound between 18-24 weeks. The study group was compared to 424 gestations without 2T-AClag for maternal characteristics as well as pregnancy and neonatal adverse outcomes. A multivariate logistic regression was used to determine the predictive value of 2T-AClag for SGA, adjusting for maternal and pregnancy characteristics. The screening model accuracy was assessed through receiver operating characteristic (ROC) curves.
Fetal growth restriction (FGR) was defined as an estimated fetal weight (EFW) less than the 10th percentile. Results: Gestations with 2T-AClag had higher rates of SGA (35.7 versus 11.6%, p < .0001), FGR (17 versus 1.7%, p < .0001), pregnancy induced hypertension (31.1 versus 17%, p < .0001), preeclampsia (14.6 versus 7.8%, p = .01), abnormal umbilical artery Doppler (30 versus 5.1%, p < .0001), indicated preterm birth (5.7 versus 1.9%, p = .01), primary cesarean birth (29.6 versus 20.1%, p = .01) and NICU admission (12.9 versus 6.4%, p = .009). After adjusting for maternal and gestational risk factors, 2T-AClag remained an independent risk factor for SGA (OR 4.53, 95% CI 2.91-7.05, p < .0001) and FGR (OR 11.57, 95% CI 5.02-26.65, p < .0001). The inclusion of 2T-AClag in a regression model with traditional risk factors significantly improved the model's predictability for SGA and FGR (area under ROC curve increased from 0.618 to 0.723 and 0.653 to 0.819, respectively, p < .0001). Conclusions: Second trimester abdominal circumference (AC) lag is associated with an increased risk of SGA, FGR and other adverse outcomes. The inclusion of 2T-AClag in a screening model for prediction of SGA and FGR may improve the identification of this at-risk group and assist in customizing surveillance plans. Brief rationale: Screening for newborns that are small for gestational age (SGA) at birth is currently suboptimal. Our study shows that second trimester abdominal circumference (AC) lag, using a parameter already routinely assessed during anatomic survey, is associated with SGA at birth and can improve current screening for growth restriction and other gestational, fetal and neonatal complications. abstract_id: PUBMED:33393689 Prostate and testicular growth analysis in human fetuses during the second gestational trimester. Background: The second gestational trimester is a very important period for male genital development.
We analyzed the testicular and prostatic volume growth and compared them to the biometric parameters in human fetuses. Methods: We studied 64 testes and 32 prostates from 32 fetuses, aged 10-22 weeks postconception. Fetuses were evaluated regarding total length, crown-rump length, and bodyweight. The same observer performed all the measurements. After testicular and prostatic dissection, the prostate and testicular length, width and thickness were recorded with the aid of computer programs (Image Pro and ImageJ software, version 1.46r). Testicular volume (TV) and prostatic volume (PV) were calculated using the ellipsoid formula. Statistical analysis was performed with the GraphPad Prism program (version 6.01). Results: The fetuses presented PV between 6.1 and 297.18 mm3 (mean = 77.98 mm3). Linear regression analysis indicated that the PV in these fetuses increased significantly and positively with fetal age (r2 = .3120; p < .0001). We did not observe significant differences between the TV (right testis: 0.39-63.94 mm3, mean = 19.84 mm3; left testis: 0.52-55.37 mm3, mean = 17.25 mm3). Linear regression analysis also indicated that the right and left TV (right: r2 = .6649; p < .0001 and left: r2 = .6792; p < .001) increased significantly and positively with fetal age. Conclusion: The prostatic growth was slower during the second gestational trimester, with significant correlations with fetal biometric parameters. The testicular growth was moderate and showed a significant correlation with fetal parameters during the studied period in human fetuses. abstract_id: PUBMED:1950881 Sonographic prediction of gestational age: accuracy of second- and third-trimester fetal measurements. We measured the accuracy of second- and third-trimester sonographic predictors of gestational age against a highly reliable gold standard (crown-rump length) in a group of fetuses.
Using a prospectively collected computerized data base, we selected 460 fetal sonograms obtained at 14-42 weeks of gestation in which age could be reliably established on the basis of crown-rump length in the first trimester. We used data obtained from these sonograms to compare several predictors of fetal age. The accuracy of all predictors worsened progressively as pregnancy proceeded. In the second trimester, corrected biparietal diameter and head circumference were more accurate predictors of gestational age than were biparietal diameter, femoral length, and abdominal circumference (p less than .05, F test). In the third trimester, the corrected biparietal diameter, head circumference, and femoral length were the best predictors, significantly better than biparietal diameter and abdominal circumference (p less than .05, F test). Prediction of gestational age that relies on a single sonographic measurement should be based on the head circumference or corrected biparietal diameter in the second trimester and on one of these two predictors or the femoral length in the third trimester. abstract_id: PUBMED:18611973 Insulin-like factor 3 levels in second-trimester amniotic fluid. Background: According to animal studies, the testicular Leydig cell hormone insulin-like factor 3 (Insl3) exerts a fundamental role in abdominal testis translocation, which occurs in the beginning of the second trimester in humans. Despite this, human prenatal INSL3 production has been poorly investigated. Methods: Amniotic fluid from 91 pregnant women undergoing amniocentesis was analyzed for INSL3 and testosterone (T) levels. Data were related to gestational age (15-25 wk) at amniocentesis and to sex (45 males and 48 females). Results: INSL3 was present in amniotic fluid from all but one of the investigated male fetuses (range: <0.02-0.36 ng/ml; mean +/- sd: 0.12 +/- 0.07), whereas the hormone was undetectable in the female fetuses. 
T was significantly higher in male amniotic fluid (range: 0.54-1.71 nmol/liter; mean +/- sd: 1.04 +/- 0.30) compared with female amniotic fluid (range: 0.19-0.50 nmol/liter; mean +/- sd: 0.34 +/- 0.06) (P < 0.001). In males there was no correlation between INSL3 and T. A statistically borderline negative association was found between INSL3 and gestational age (P = 0.07), whereas the corresponding association was not significant for T (P = 0.12). In contrast, T in females correlated positively with gestational age (P = 0.02). Conclusions: INSL3 is clearly present in human male amniotic fluid in the second trimester, where abdominal testis translocation takes place. In contrast, the hormone is undetectable in female amniotic fluid. The prenatal presence of INSL3 supports the hypothesis that this hormone is essential for testicular descent in humans. abstract_id: PUBMED:15467575 First- and second-trimester ultrasound assessment of gestational age. Objective: The purpose of this study was to evaluate and compare the accuracy of first- and second-trimester ultrasound assessment of fetal age. Study Design: Differences between ultrasound estimated gestational age and true gestational age were calculated at 11 to 14 weeks of gestation and at 18 to 22 weeks of gestation for singleton, twin, and triplet pregnancies that were conceived by in vitro fertilization. Results: One hundred four singleton pregnancies, 81 twin pregnancies, and 33 triplet pregnancies were included. First trimester ultrasound estimates overestimated gestational age by a mean (+/- SEM) of 1.3 +/- 0.2 days (P < .0001) for singleton pregnancies, 1.4 +/- 0.2 days (P < .0001) for twin pregnancies, and 0.8 +/- 0.4 days (P = .027) for triplet pregnancies. Second trimester estimates differed from true gestational age by a mean of -0.1 +/- 0.4 days (P = .87) for singletons, -0.6 +/- 0.3 days (P = .083) for twins, and -0.6 +/- 0.5 days (P = .21) for triplets.
Conclusion: Ultrasound assessment of gestational age is very accurate and is marginally better in the first trimester compared with the second. abstract_id: PUBMED:8389509 First- and second-trimester hemoglobin levels. Relation to birth weight and gestational age. The computerized records of 3074 women who were delivered during the period 1987-1990 were analysed. We studied possible relations of birth weight and gestational age to maternal first- and second-trimester hemoglobin levels. Birth weight and gestational age were taken as outcome variables in a multiple stepwise regression analysis, including factors that might interact with hemoglobin levels. A strong independent inverse correlation between the lowest second-trimester hemoglobin and birth weight was observed. First-trimester hemoglobin levels explained little with regard to birth weight, but had a small negative correlation to gestational age. For gestational age < 259 days we found a strong positive correlation between the first- to second-trimester decrease in hemoglobin and birth weight. abstract_id: PUBMED:35341453 Evaluation of ultrasound measurements of abdominal fat for the prediction of gestational diabetes in the first and second trimesters of pregnancy. Objective: To evaluate whether ultrasound abdominal fat measurements in the first and second trimesters can predict adverse gestational outcomes, particularly gestational diabetes mellitus (GDM), and identify early those patients at higher risk for complications. Methods: A prospective cohort study of 126 pregnant women at 11-14 and 20-24 weeks of gestation with normal fasting glucose levels during early pregnancy. Of the 126 participants with complete data, 13.5% were diagnosed with GDM, based on the cutoffs established for peripheral blood glucose. Subcutaneous, visceral, and maximum preperitoneal abdominal fat were measured using ultrasound techniques.
GDM status was determined by oral glucose tolerance test (OGTT) with 75 g glucose overload, and the following values were considered abnormal: fasting glucose ≥92 mg/dl and/or 1 h after overload ≥180 mg/dl and/or 2 h after overload ≥153 mg/dl. The receiver operator characteristic (ROC) curve was used to determine the optimal threshold to predict GDM. Results: Maximum preperitoneal fat measurement was predictive of GDM, and subcutaneous and visceral abdominal fat measurements did not show significant differences in the prediction of GDM. According to the ROC curve, a threshold of 45.25 mm of preperitoneal fat was identified as the optimal cutoff point, with 87% sensitivity and 41% specificity to predict GDM. The raw and adjusted odds ratios for age and pre-pregnancy body mass index were 0.730 (95% confidence interval [CI], 0.561-0.900) and 0.777 (95% CI, 0.623-0.931), respectively. Conclusion: The use of a 45.25 mm threshold for maximum preperitoneal fat, measured by ultrasound to predict the risk of GDM, appears to be a feasible, inexpensive, and practical alternative to incorporate into clinical practice during the first trimester of pregnancy. abstract_id: PUBMED:35905541 The association between second trimester ultrasound fetal biometrics and gestational diabetes. Objective: Gestational diabetes mellitus (GDM) is the most common metabolic complication of pregnancy. The incidence of GDM is increasing worldwide and 5-25% of pregnancies are diagnosed with GDM depending on screening strategies and diagnostic criteria. GDM may lead to obstetric complications and increases the risk of adult metabolic disease in the offspring. Timely identification of GDM allows for regulation of maternal glucose levels which may reduce the obstetric complications considerably. The aim of this study is to investigate the association between second trimester ultrasound biometrics and GDM. 
Study Design: This is a retrospective cohort study including 2697 singleton pregnancies attending a second trimester ultrasound scan at 20 + 0 to 20 + 6 weeks' gestation and giving birth at Aalborg University Hospital in the year 2020. Ultrasound measurements included head circumference (HC), abdominal circumference (AC), femur length (FL) and estimated fetal weight (EFW) by Hadlock's formula. Women with pregestational diabetes were excluded. GDM screening was performed on indication using an oral glucose tolerance test (OGTT) with 75 g glucose, and a 2-hour serum glucose value ≥ 9 mmol/L was considered diagnostic. The association between fetal biometrics and GDM was investigated by logistic regression. Results: A total of 174 (6.5%) were diagnosed with GDM. The incidence of GDM in pregnancies with biometrics above the 90th centile was: FL, 10.5%; HC, 8.8%; AC, 7.6%; EFW, 9.3%. Fetal biometrics above the 90th centile were significantly associated with GDM: OR(FL) = 2.07, p = 0.001; OR(HC) = 1.89, p = 0.001; OR(AC) = 1.63, p = 0.033; OR(EFW) = 1.64, p = 0.036. This association remained significant for HC and FL when adjusted for maternal obesity (Body Mass Index ≥ 27): OR(HC, adj) = 1.56, p = 0.019; OR(FL, adj) = 1.57, p = 0.049. Conclusion: At the second trimester scan, fetal biometrics above the 90th centile increase the risk of GDM. In pregnancies that are later diagnosed with GDM, fetal growth is already increased at the second trimester scan. Such knowledge underlines the importance of early identification of GDM. Answer: No, there is no real trans-abdominal testicular descent during the second gestational trimester. A study conducted on human fetuses between 13 and 23 weeks post conception found that the testes remained adjacent to the internal ring throughout the period studied, indicating that there is no significant descent of the testes within the abdomen during this time (PUBMED:27286121).
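The testicular and prostatic volumes in the abstracts above (PUBMED:33393689) were calculated with the ellipsoid formula, V = π/6 × length × width × thickness. A minimal sketch of that calculation follows; the function name and the example axis values are illustrative assumptions, not measurements taken from the study:

```python
import math

def ellipsoid_volume(length: float, width: float, thickness: float) -> float:
    """Ellipsoid volume from three axis lengths: V = pi/6 * L * W * T.

    All axes must share one unit (e.g. mm); the result is in that unit cubed.
    """
    return math.pi / 6.0 * length * width * thickness

# Hypothetical fetal testis axes in mm (illustrative only, not study data):
print(round(ellipsoid_volume(5.0, 3.0, 2.5), 2))  # -> 19.63
```

The same helper applies to the prostatic volume, since the study reports using one ellipsoid formula for both organs.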
Instruction: Echocardiographic findings in NATO pilots: do acceleration (+Gz) stresses damage the heart? Abstracts: abstract_id: PUBMED:9215464 Echocardiographic findings in NATO pilots: do acceleration (+Gz) stresses damage the heart? AGARD, Neuilly-sur-Seine, France. Background: Based on physiologic considerations, observations in animal experiments and the results of a preliminary French study, there has been an aeromedical concern that repeated exposure to high sustained G-forces might have a deleterious effect on the heart. The AGARD Aerospace Medical Panel initiated a multi-national study to address the question. Hypothesis: The study addressed the null hypothesis that "there is no difference in cardiac chamber dimensions, wall thickness or echocardiographic functional parameters between pilots who fly high sustained G (HSG) aircraft and pilots who fly primarily rotary wing or transport aircraft." Methods: The study was a cross-sectional design comparing echocardiographic parameters in NATO active duty male pilots of HSG aircraft with a control group of transport and rotary wing pilots (CNTL). Some 13 NATO nations participated using a detailed protocol which included specific echocardiographic technical instructions, and procedures for collecting quantitative data on demographic variables including exercise, smoking and flying hours. Data was forwarded on a specially-designed software program to a central data registry. Careful quality control was carried out. Results: Comparing data from 289 HSG pilots with 254 CNTL pilots, when corrected for the covariates, there were no differences for any of 16 echocardiographic parameters including right and left ventricular dimensions and wall thickness, aortic and left atrial dimensions, and tricuspid and mitral inflow velocities. Conclusions: The results support the null hypothesis. 
The conclusions are limited to the resolution of the technology employed and to the flight envelopes and +Gz exposure in the current generation of fighter aircraft. abstract_id: PUBMED:38027396 Echocardiographic characterization of cardiac chambers and vasculatures in buffaloes (Bubalus bubalis) during diastolic and systolic phases. Background: Ultrasonography has diagnostic importance in the evaluation of different diseases in buffaloes, including cardiovascular diseases. Aim: The current work describes the normal echocardiographic findings in healthy buffaloes, along with establishing reference values for echocardiographic dimensions for both sides of the heart, i.e., the left and right ones. Methods: About 30 healthy adult buffaloes that belonged to private farms in Assiut, Egypt, were included in this study. Each animal underwent a complete clinical evaluation as well as hematological analyses, lipid profile indices, liver function tests, cardio-thoracic radiography, and echocardiography to confirm that no diseased conditions were present. The study was conducted on healthy buffaloes (n = 30) in Assiut Governorate, Egypt. Results: All buffaloes showed normal clinical findings, with hematological indices and serum biochemical values within the reference intervals. Radiography revealed a free reticulum and a well-defined diaphragm. The heart was seen as a typical radio-opaque organ. Ultrasonographically, using grayscale B-mode and M-mode, the heart was commonly imaged from the left fourth intercostal space. Different echocardiographic views were described, including the four-chamber view, i.e., right atrium (RA), right ventricle (RV), tricuspid valve (TCV), left atrium, left ventricle, mitral valve, and interventricular septum (IVS), and the right ventricular outflow tract, i.e., RA, TCV, RV, pulmonary artery (PA), and pulmonary valve. Cross-sections at the apex and base of the heart were described.
Echocardiographic dimensions during cardiac diastole and systole, including diameters and wall thickness of each of the atria and ventricles, were demonstrated. Interventricular septal wall thickness, as well as the diameters of the aorta and PA, was also reported. Conclusion: This work establishes reference values for normal echocardiographic dimensions using 2-D B-mode gray scale ultrasonography in healthy adult buffaloes. These echocardiographic reference dimensions, together with normal echocardiographic imaging, will be very helpful in enhancing the diagnostic efficacy of ultrasound for recognizing abnormal findings related to cardiac disorders. abstract_id: PUBMED:26759509 Echocardiographic pattern of acquired heart diseases in Nigeria. Background: Acquired heart diseases (AHDs) are present from childhood to old age, and the frequency of pathology differs according to age and the geographical region of the patients. The aim of this study was to document the echocardiographic patterns of AHDs in our setting. Materials And Methods: Retrospective analysis of echocardiographic diagnosis of AHD was done for age, sex, and echocardiographic pattern. Results: There were 190 diagnoses in 163 patients, with 27 patients having a double diagnosis, consisting of 88 (54%) males and 75 (46%) females. The mean age was 50.4 years (age range 9-85 years). Ten types of acquired heart pathologies were identified and they included hypertensive heart disease in 49.47%, rheumatic heart disease in 26.32%, cardiomyopathy in 11.05%, endomyocardial fibrosis in 4.74%, and pericarditis in 3.68%. Others were cor pulmonale, pulmonary hypertension, intracardiac thrombi, left atrial myxoma and degenerative heart disease, which accounted for the remaining 4.74%. Conclusion: This study identifies 10 types of AHDs among the study population.
The predominance of hypertensive heart disease and rheumatic heart disease points to a sub-optimal level of healthcare in the country. abstract_id: PUBMED:35433946 Changes in echocardiographic parameters of the donor's heart before and after heart transplantation and their relationship with post-transplant survival. Background: Heart transplantation is now widely performed in China, but the current entry criteria for heart transplantation donors are simple and lack reliable standards; there are still no studies on the effect of preoperative echocardiographic parameters of heart transplantation donors on the prognosis of the recipient. Therefore, the aim of this study is twofold: (I) to observe the characteristics of changes in echocardiographic parameters of the donor's heart before and after heart transplantation and (II) to study the relationship between the changes in echocardiographic parameters of the donor's heart pre- and post-transplant and transplant recipient survival. Methods: A total of 29 patients who underwent orthotopic heart transplant in our hospital from October 2016 to October 2019 were enrolled in this study. All recipients were followed up until April 2020. Echocardiographic data were collected pre-transplantation (donor) and at 1 week and 1 month post-transplant (recipient). Results: Compared with the pretransplanted donor's heart, there was an increase in the interventricular septal diameter, left ventricular posterior wall diameter, right atrium diameter and right ventricular diameter, while a decrease was found in the tricuspid annular systolic displacement and tricuspid annular peak systolic velocity at 1 week post-transplant. The survival curve showed that the postoperative survival time in the right atrium/ventricle ≥32 mm group was better than that in the right atrium/ventricle <32 mm group.
The survival curve also showed that the postoperative survival time in the right atrium/ventricle enlargement <2 mm group was better than that in the right atrium/ventricle enlargement ≥2 mm group. Conclusions: In the early post-transplant period, ventricular myocardial thickening, right heart enlargement and a decrease in right heart function were observed in the post-transplanted heart compared to the pretransplanted donor's heart. Postoperative survival was higher among heart transplant patients who received a heart with right atrium/ventricle ≥32 mm. Postoperative survival was lower among heart transplant patients whose right atrium/ventricle was dilated ≥2 mm at 1 month postoperatively compared with the pretransplanted donor's heart. abstract_id: PUBMED:7046723 An analysis of the risk of human cardiac damage during +Gz stress: a review. The available information concerning the subendocardial hemorrhage, myofibrillar degeneration, and necrosis observed in miniature swine after acute +Gz exposure, is reviewed and evaluated for any possible occurrence of similar pathology in humans. It is concluded that +Gz exposure poses no significant risk for cardiac damage in humans. Three primary considerations lead to this conclusion: 1) The lesions in swine probably result from very high (toxic) levels of both sympathetic adrenergic tone to the heart and circulating plasma catecholamines acting on the cardiac cells. Most of these catecholamines appear to be released as a result of the overall stress involved in exposing conscious miniature swine to +Gz on the centrifuge, and not directly as the result of the +Gz per se. Thus, the lesions in miniature swine appear to develop as a consequence of a somewhat unique form of the porcine stress syndrome. 2) +Gz exposure is not as psychologically stressful for humans. 
Therefore, humans would not be expected to have, and do not appear to have, catecholamine levels (cardiac or systemic) as high as those observed in miniature swine during +Gz stress. This conclusion is supported by direct comparisons of the heart rate and plasma catecholamine levels in men and miniature swine during +Gz exposure. 3) Although a large amount of clinical cardiologic data exists from humans who have been exposed to +Gz stress, none of these data indicates any degree of cardiac damage. Even more conclusive is the absence of any cardiac damage in the heart of a human subject who had many significant +Gz exposures over a 2-year period. Thus, the pathology in miniature swine does not appear to be an acceleration phenomenon, and probably does not occur in humans exposed to +Gz stress. abstract_id: PUBMED:34304669 Comparison of echocardiographic findings in women with late preeclampsia and healthy pregnant women: a case-control study. Background: Preeclampsia is one of the most common types of pregnancy disorders, and it is essential to know its cardiovascular complications. Therefore, we aimed to compare echocardiographic findings in women with late preeclampsia and healthy pregnant women. Method: In this case-control study, 80 women with late preeclampsia and 80 healthy pregnant women were randomly selected and echocardiographic findings were determined and compared in the two groups. Results: Evaluating the relationship between late preeclampsia and echocardiographic variables in pregnant women showed a significant relationship between diastolic disorder (15% in the case group vs. 5% in the control group) (p = .035), systolic artery pressure (p = .002), and TR gradient (p = .003) with late preeclampsia; However, this relationship was not observed with other echocardiographic variables (p > .05). 
Conclusion: Since the role of transthoracic echocardiography in the management of women with preeclampsia has been emphasized by international groups, the results of this study suggest that increased TR gradient and diastolic dysfunction are the most important echocardiographic findings in patients with late preeclampsia. abstract_id: PUBMED:36366247 On the Role of Damage Evolution in Finite Element Modeling of the Cutting Process and Sensing Residual Stresses. This study focuses on the role of damage evolution in estimating the failure behavior of AISI 1045 steel for sensing and measuring metal cutting parameters. A total of five Lagrangian explicit models are established to investigate the effect of applying damage evolution techniques. The Johnson-Cook failure model is introduced once to fully represent damage behavior, i.e., no damage evolution is considered, and as a damage initiation criterion in the remaining approaches. A fracture energy-based model is included to model damage propagation with different evolution rates. Temperature-dependent and temperature-independent fracture energy models are also investigated. Dry orthogonal cutting and residual stress measurements of AISI 1045 are conducted for validation. The significance of damage evolution is investigated using honed-tool and sharp-tool models. Including damage evolution led to a prediction of higher workpiece temperatures, plastic strains, cutting forces, and residual stresses, with no clear differences between linear and exponential evolution rates. The role of damage evolution is more evident when temperature-dependent evolution models are used. abstract_id: PUBMED:25108873 A systematic review and meta-analysis of cohort studies of echocardiographic findings in OSA children after adenotonsilectomy. Context: There is evidence that OSA in children can be associated with acute and chronic effects on the cardiovascular system due to repetitive episodes of apnea and hypoxemia.
Objective: To assess whether there is an association between OSA and echocardiographic findings in children and whether that association persists after adenotonsillectomy. Data Sources: A literature search was conducted based on PUBMED, EMBASE and LILACS. Study Selection: Children with OSA and children who did not have OSA, who were aged ≤12 years. Data Extraction: Two reviewers extracted data independently; the risk of bias was assessed by examining the selected sample, the recruitment method, completeness of follow up, and blinding. Results: Seven studies met all the inclusion criteria and methodological requirements. There was a significant difference with elevated mean pulmonary arterial pressure levels in OSA participants compared to those without OSA at preoperative assessment [mean difference (MD) 8.67; confidence interval (CI) 95% 6.09, 11.25]. OSA participants showed a statistically significantly increased interventricular septum (IVS) thickness (mm) [MD 0.60; CI 95% 0.09, 1.11]; and right ventricular (RV) dimension (cm/m) [MD 0.19; CI 95% 0.10, 0.28]. There was also a significant increase in right ventricular (RV) dimension (cm/m) [MD 0.10; CI 95% 0.05, 0.14] in OSA children. Conclusion: There is moderate-quality evidence regarding a possible association between OSA and right heart repercussions. More prognostic studies are needed to allow the combination of results in a meta-analysis. abstract_id: PUBMED:33338371 NATO and evidence-based military and disaster medicine: case for Vigorous Warrior Live Exercise Series. The North Atlantic Treaty Organization (NATO) is the premier and only security alliance uniting 30 countries, and growing, with many partner states in the provision of collective security against threats posed by conflict and natural disasters.
Security of countries and communities is increasingly threatened by a broad spectrum of unconventional types of war and disease threats - from hybrid and asymmetric to multi-domain and peer-to-peer/near-peer conflict. The NATO Centre of Excellence for Military Medicine (MILMED COE) is the centre of gravity for medical best practices and promotion of medical doctrine across the NATO alliance. Disaster medicine is multidisciplinary and, in NATO, multinational, requiring best practices that are driven by data and evidence to prevent death on the battlefield and prepare for future conflicts. "Vigorous Warrior" is a live military and disaster medicine exercise series using both civilian and military actors across all sectors of health, focused on health security and on identifying lessons learned to ready the alliance for future threats. In this brief report, we make the case that the Vigorous Warrior exercise exposes gaps, highlights challenges and generates an evidence base to make NATO military medicine systems more robust, more efficient, and better able to provide best medical practices. We specifically argue that clinical data capture must be duplicated and continuous across the alliance to ensure evidence-based medicine stays current in NATO military medical doctrine. abstract_id: PUBMED:23935209 Sword, Shield and Buoys: A History of the NATO Sub-Committee on Oceanographic Research, 1959-1973. In the late 1950s the North-Atlantic Treaty Organization (NATO) made a major effort to fund collaborative research between its member states. One of the first initiatives following the establishment of the alliance's Science Committee was the creation of a sub-group devoted to marine science: the Sub-committee on Oceanographic Research. This paper explores the history of this organization, charts its trajectory over the 13 years of its existence, and considers its activities in light of NATO's naval defence strategies.
In particular it shows how the alliance's naval commands played a key role in the sub-committee's creation due to the importance of oceanographic research in the tracking of enemy submarines. The essay also scrutinizes the reasons behind the committee's dissolution, with a special focus on the changing landscape of scientific collaboration at NATO. The committee's fall maps onto a more profound shift in the alliance's research agenda, including the re-organization of defence research and the rise of environmentalism. Answer: The study conducted on NATO pilots aimed to determine whether repeated exposure to high sustained G-forces (HSG) might have a deleterious effect on the heart. The null hypothesis of the study was that there is no difference in cardiac chamber dimensions, wall thickness, or echocardiographic functional parameters between pilots who fly HSG aircraft and those who fly primarily rotary wing or transport aircraft (CNTL) (PUBMED:9215464). The cross-sectional study compared echocardiographic parameters in 289 HSG pilots with 254 CNTL pilots, with careful quality control and a detailed protocol for data collection and analysis. The results showed no differences for any of the 16 echocardiographic parameters examined, including right and left ventricular dimensions and wall thickness, aortic and left atrial dimensions, and tricuspid and mitral inflow velocities, when corrected for covariates (PUBMED:9215464). Therefore, the study supports the null hypothesis, suggesting that repeated exposure to +Gz stresses does not damage the heart in the context of the current generation of fighter aircraft and within the resolution of the technology employed in the study (PUBMED:9215464).
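The core comparison in the NATO pilot study is a two-group test of continuous echocardiographic parameters (adjusted for covariates in the original analysis). As an illustrative sketch only, with synthetic numbers rather than study data and without the covariate adjustment, a plain Welch two-sample t statistic can be computed with the standard library:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Unbiased sample variances (divide by n - 1).
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical (made-up) aortic root diameters in mm for two pilot groups:
hsg = [31.2, 29.8, 30.5, 32.0, 30.9]
cntl = [30.7, 31.1, 29.9, 30.4, 31.5]
t, df = welch_t(hsg, cntl)
```

The resulting t would then be compared against a t distribution with df degrees of freedom; with 16 parameters tested, the study's "no differences" conclusion also implicitly involves a multiple-comparison consideration that this sketch does not cover.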
Instruction: Does the extraction-site location in laparoscopic colorectal surgery have an impact on incisional hernia rates? Abstracts: abstract_id: PUBMED:36219253 Choice of specimen's extraction site affects wound morbidity in laparoscopic colorectal cancer surgery. Background: The choice of an ideal site for specimen extraction following laparoscopic colorectal surgery remains debatable. However, a midline incision (MI) is usually employed for right- and left-sided colonic resections, while a left iliac fossa or suprapubic transverse incision (STI) is reserved for sigmoid and rectal cancer resections. Objective: To compare the incidence of surgical site infection (SSI) and incisional hernia (IH) in elective laparoscopic colorectal surgery for cancer with specimen extraction via MI or STI. Method: Prospectively collected data of elective laparoscopic colorectal cancer resections between January 2017 and December 2019 were retrospectively reviewed. MI was employed for right- and left-sided colonic resections while STI was used for sigmoid and rectal resections. SSI was defined according to the US CDC criteria. IH was diagnosed clinically and confirmed by CT scan at 1 year. Results: A total of 168 patients underwent elective laparoscopic colorectal resections. MI was used in 90 patients while 78 patients had STI as an extraction site. Demographic and preoperative data were similar for the two groups. The rate of IH was 13.3% for MI and 0% for STI (p = 0.001). SSI was seen in 16.7% of MI vs 11.5% of STI (p = 0.34). Univariate and multivariate analyses showed that the choice of extraction site was associated with a statistically significantly higher incisional hernia rate. Conclusion: MI for specimen extraction is associated with a higher incidence of both SSI and IH. The choice of incision for the extraction site is an independent predictive factor for significantly higher IH and increased SSI rates.
abstract_id: PUBMED:18347858 Does the extraction-site location in laparoscopic colorectal surgery have an impact on incisional hernia rates? Background: Incisional hernias are a common postoperative complication with abdominal surgery. The major risk factors for their development include wound infection, obesity, and age. This study aimed to evaluate the impact of extraction-site location and technique on incisional hernia rates in laparoscopic colorectal surgery. Methods: A prospective study of 208 consecutive patients who underwent laparoscopic colorectal surgery between March 2002 and July 2006 was performed. The study included only patients who had an extraction site on the abdominal wall. Patients were excluded if they were lost to follow-up evaluation or underwent conversion to an open procedure. For the 166 patients included in the study, the mean follow-up period was 20.2 +/- 14.4 months. Extraction-site incisions were classified into two groups: midline or off-midline. Midline wounds involved sharp division of the linea alba and were closed with a single layer of no. 1 Vicryl. Off-midline incisions involved sharp division of the anterior and posterior sheaths with blunt spreading of the muscular layers and were closed in two layers with no. 1 Vicryl. Risk factors including wound infection, body mass index (BMI), age, and diabetes were analyzed. Results: The incisional hernia rate for the entire series was 7.8%. The incisional hernia rate was 17.6% for the midline group (n = 74) and 0% for the off-midline group (n = 92) (p = 0.0002, statistically significant). There was no statistically significant difference in age, BMI, diabetes, follow-up time, or wound infection rate between the two groups. Conclusion: In this series, the midline extraction site resulted in a statistically significantly higher incisional hernia rate than the off-midline extraction sites.
The authors therefore have adopted an off-midline blunt muscle-splitting extraction site when performing laparoscopic colorectal surgery. abstract_id: PUBMED:31617098 Extraction-site incisional hernia after laparoscopic colorectal surgery: should we carry out a study about prophylactic mesh closure? Background: Prophylactic mesh closure has only scarcely been studied as a means of avoiding extraction-site incisional hernia after laparoscopic colorectal surgery. The aim was to analyze extraction-site incisional hernia incidence after laparoscopic colorectal surgery to assess if prophylactic mesh closure should be studied. Methods: A retrospective analytic cohort study was conducted in patients who had undergone laparoscopic colorectal surgery with an extraction-site incision. Extraction-site incisional hernia was diagnosed during clinical examination or imaging. Risk factors for extraction-site incisional hernia were analyzed. Results: Two hundred and twenty-five patients were included. More than 80% of the patients had a malignant disease. Ninety-two patients (40.9%) underwent right colectomy. A midline extraction-site incision was used in 86 (38.2%) patients. After a mean follow-up of 2.4 years, 39 (17.3%) patients developed an extraction-site incisional hernia. Midline extraction-site incision was associated with incisional hernia when compared to transverse and Pfannenstiel incisions (39.5% vs. 3.6%, OR 17.5, p < 0.001). Surgery to repair an extraction-site incisional hernia was also more frequent in the group of patients with a midline incision (10.5% vs. 1.4%, OR 8.0, p = 0.002). In the multivariate analysis, incisional hernia was associated with body mass index, high blood pressure, and midline incision. Conclusions: Extraction-site incisional hernia was mainly related to midline incisions; therefore, midline incision should be avoided whenever possible.
Studying prophylactic mesh closure for Pfannenstiel or transverse incisions is unnecessary, as these incisions have a low incisional hernia risk. abstract_id: PUBMED:37056082 Comparison of Midline and Off-midline specimen extraction following laparoscopic left-sided colorectal resections: A systematic review and meta-analysis. Aims: This study aims to evaluate comparative outcomes following midline versus off-midline specimen extraction following laparoscopic left-sided colorectal resections. Methods: A systematic search of electronic information sources was conducted. Studies comparing 'midline' versus 'off midline' specimen extraction following laparoscopic left-sided colorectal resections performed for malignancies were included. The rates of incisional hernia formation, surgical site infection (SSI), total operative time and blood loss, anastomotic leak (AL) and length of hospital stay (LOS) were the evaluated outcome parameters. Results: Five comparative observational studies reporting a total of 1187 patients comparing midline (n = 701) and off-midline (n = 486) approaches for specimen extraction were identified. Specimen extraction performed through an off-midline incision was not associated with a significantly reduced rate of SSI (odds ratio [OR]: 0.71; P = 0.68), occurrence of AL (OR: 0.76; P = 0.66), or future development of incisional hernias (OR: 0.65; P = 0.64) compared to the conventional midline approach. No statistically significant difference was observed in total operative time (mean difference [MD]: 0.13; P = 0.99), intraoperative blood loss (MD: 2.31; P = 0.91), or LOS (MD: 0.78; P = 0.18) between the two groups. Conclusions: Off-midline specimen extraction following minimally invasive left-sided colorectal cancer surgery is associated with similar rates of SSI and incisional hernia formation compared to the vertical midline incision.
Furthermore, there were no statistically significant differences observed between the two groups for evaluated outcomes such as total operative time, intra-operative blood loss, AL rate and LOS. As such, we did not find any advantage of one approach over the other. Future high-quality well-designed trials are required to make robust conclusions. abstract_id: PUBMED:27384092 Impact of the Specific Extraction-Site Location on the Risk of Incisional Hernia After Laparoscopic Colorectal Resection. Background: The impact of the specific incision used for specimen extraction during laparoscopic colorectal surgery on incisional hernia rates relative to other contributing factors remains unclear. Objective: This study aimed to assess the relationship between extraction-site location and incisional hernia after laparoscopic colorectal surgery. Design: This was a retrospective cohort study (January 2000 through December 2011). Settings: The study was conducted at a high-volume, specialized colorectal surgery department. Patients: All of the patients undergoing elective laparoscopic colorectal resection were identified from our prospectively maintained institutional database. Main Outcome Measures: Extraction-site and port-site incisional hernias clinically detected by physician or detected on CT scan were collected. Converted cases, defined as the use of a midline incision to perform the operation, were kept in the intent-to-treat analysis. Specific extraction-site groups were compared, and other relevant factors associated with incisional hernia rates were also evaluated with univariate and multivariate analyses. Results: A total of 2148 patients (54.0% with abdominal and 46.0% with pelvic operations) with a mean age of 51.7 ± 18.2 years (52% women) were reviewed. Used extraction sites were infraumbilical midline (23.7%), stoma site/right or left lower quadrant (15%), periumbilical midline (22.5%), and Pfannenstiel (29.6%) and midline converted (9.2%). 
Overall crude extraction-site incisional hernia rate during a mean follow-up of 5.9 ± 3.0 years was 7.2% (n = 155). Extraction-site incisional hernia crude rates were highest after periumbilical midline (12.6%) and a midline incision used for conversion to open surgery (12.0%). Independent factors associated with extraction-site incisional hernia were any extraction site compared with Pfannenstiel (periumbilical midline HR = 12.7; midline converted HR = 13.1; stoma site HR = 28.4; p < 0.001 for each), increased BMI (HR = 1.23; p = 0.002), synchronous port-site hernias (HR = 3.66; p < 0.001), and postoperative superficial surgical-site infection (HR = 2.11; p < 0.001). Limitations: This study was limited by its retrospective nature, incisional hernia diagnoses based on clinical examination, and a heterogeneous surgical population. Conclusions: Preferential extraction sites to minimize incisional hernia rates should be Pfannenstiel or incisions off the midline. Midline incisions should be avoided when possible. abstract_id: PUBMED:37460849 Randomized controlled trial comparing cosmetic results of midline incision versus off-midline incision for specimen extraction in laparoscopic colectomy. Purpose: A notable advantage of laparoscopic colorectal surgery is that only a small incision at the extraction site is necessary, which is considered to be cosmetically beneficial. Meanwhile, the optimal extraction site for the resected specimen in laparoscopic colectomy is controversial in terms of cosmetic benefit. This randomized controlled trial compares midline and off-midline extraction sites in laparoscopic colectomy in patients with colon cancer, with cosmetic benefit as the primary endpoint. Methods: Included were patients who underwent elective laparoscopic colectomy at WMUH between October 2014 and February 2017. Patients were randomly assigned to either the midline incision group or the off-midline incision group.
Prospectively collected data included cosmetic results (patient and observer assessment scales) and complications including the incidence of incisional hernia, SSI, and pain. This trial was registered with UMIN Clinical Trials (UMIN000028943). Results: Finally, 98 patients with colorectal cancer were analyzed. No significant differences were found between the two groups in patient and observer assessment scales of cosmetic results (midline 8 ± 1.1 vs off-midline 11 ± 5.9, p = 0.16; midline 13.5 ± 6.6 vs off-midline 15 ± 11, p = 0.58, respectively) or in postoperative pain. However, incisional hernia occurred in four cases in the midline group (8%), which was significantly higher than in the off-midline group (no cases, 0%). Conclusion: There was no significant difference in terms of cosmetic benefit, the primary endpoint, between the two groups. In this study, only the extraction-site location was compared; future studies will examine differences depending on the incisional direction, including the incidence of incisional hernia. abstract_id: PUBMED:26541723 Incisional and port-site hernias following robotic colorectal surgery. Background: The association between extraction-site location, robotic trocar size, and the incidence of incisional hernias in robotic colorectal surgery remains unclear. Laparoscopic literature reports variable rates of incisional hernias versus open surgery, and variable rates of trocar-site hernias. However, conclusions from these studies are confusing due to heterogeneity in closure techniques and may not be generalizable to robotic cases. This study evaluates the effect of extraction-site location on incisional hernia rates, as well as trocar hernia rates, in robotic colorectal surgery. Materials And Methods: A retrospective review of multiport and single-incision robotic colorectal surgeries from a single institution was performed.
Patients underwent subtotal, segmental, or proctocolectomies, and were compared based on the extraction site through either a muscle-splitting (MS) or midline (ML) incision. Hernias were identified by imaging and/or physical exam. Demographics and risk factors for hernias were assessed. Groups were compared using a multivariate logistic regression analysis. Results: The study included 259 colorectal surgery patients comprising 146 with MS and 113 with ML extraction sites. Postoperative computed tomograms were performed on 155 patients (59.8%) with a mean follow-up of 16.5 months. The overall incisional hernia rate was 5.8%. A significantly higher hernia rate was found in the ML group compared to the MS group (12.4 vs. 0.68%, p < 0.0001). Of the known risk factors assessed, only increased BMI was associated with incisional hernias (OR 1.18). No trocar-site hernias were found. Conclusion: Midline extraction sites are associated with a significantly increased rate of incisional hernias compared to muscle-splitting extraction sites. There is little evidence to recommend fascial closure of 8-mm trocar sites. abstract_id: PUBMED:25307082 Hernia incidence following single-site vs standard laparoscopic colorectal surgery. Aim: Compared with standard laparoscopic (SDL) approaches, less is known about the incidence of hernias after single-site laparoscopic (SSL) colorectal surgery. This study hypothesized that SSL colorectal surgery was associated with an increased risk of hernia development. Method: Institutional retrospective chart review (September 2008-June 2013) identified 276 evaluable patients who underwent laparoscopic colorectal procedures. The following data were collected: demographic data, risk factors for the development of a hernia, operative details, and postoperative course including the development of a hernia. Patients were stratified by laparoscopic technique to compare the characteristics of those undergoing SDL and SSL.
Patients were subsequently stratified by the presence or absence of a hernia to identify associated factors. Results: One hundred and nineteen patients (43.1%) underwent SDL and 157 patients (56.9%) underwent SSL surgery. The development of an incisional hernia was observed in 7.6% (9/119) of SDL patients compared with 17.0% (18/106) of SSL patients (P = 0.03) over a median 18-month follow-up. Similar proportions of patients developed parastomal hernias in both groups [SDL 16.7% (10/60) vs SSL 15.9% (13/80)]. Hernias were diagnosed at a median of 8.1 (SDL) and 6.5 (SSL) months following the index operation and were less likely to be incarcerated in the SSL group [SDL 38.9% (7/18) vs SSL 6.5% (2/31), P = 0.01]. Conclusion: SSL colorectal surgery is associated with an increase in the incidence of incisional hernias but not parastomal hernias. The site of specimen extraction in SSL may contribute to the development of an incisional hernia. abstract_id: PUBMED:28444496 Incidence of incisional hernia in the specimen extraction site for laparoscopic colorectal surgery: systematic review and meta-analysis. Introduction: The incidence of incisional hernia (IH) may be affected by the choice of specimen extraction incision. The objective of this study was to perform a systematic review and meta-analysis comparing the incidence of IH after midline and off-midline incisions in patients undergoing laparoscopic colorectal surgery. Methods: A systematic search was performed according to PRISMA guidelines to identify all comparative studies from January 1991-August 2016 on the incidence of IH after midline and off-midline (transverse or Pfannenstiel) incisions in patients undergoing laparoscopic colorectal surgery. Case series and studies reporting IH after stoma-site extraction, SILS, or NOTES were excluded. The MINORS instrument was used for quality assessment of observational studies. Weighted estimates were calculated using a random-effects model.
Results: A total of 17 articles were identified and included for meta-analysis, 16 of which were observational studies and 1 was an RCT. The mean MINORS score for observational studies was 12.9 (SD 3.2, range 7-17). Sample sizes in the midline (mean 185, range 20-995) and off-midline (mean 184, range 20-903) groups were similar. Follow-up ranged from 17.3 to 42 months. The pooled incidence of IH was 10.6% (338/3177) in midline, 3.7% (48/1314) in transverse, and 0.9% (9/956) in Pfannenstiel incisions. IH was significantly higher in the midline compared to off-midline groups (weighted OR 4.1, 95% CI 2.0-8.3, I² = 79.7%, p for heterogeneity < 0.001). Midline incisions were also at higher risk of IH versus transverse (weighted OR 3.0, 95% CI 1.4-6.7, I² = 72.7%, p for heterogeneity < 0.001) and Pfannenstiel (weighted OR 8.6, 95% CI 3.0-24.6, I² = 43.5%, p for heterogeneity = 0.101) incisions. There was no publication bias according to the funnel plot or statistical testing (Egger's p = 0.336). Conclusions: Midline incisions for specimen extraction in laparoscopic colorectal surgery are at significantly higher risk of IH compared to off-midline (transverse or Pfannenstiel) incisions, but these data are of poor quality and heterogeneous. abstract_id: PUBMED:28964931 Does site of specimen extraction affect incisional hernia rate after robot assisted laparoscopic radical prostatectomy? Background: Robot assisted laparoscopic radical prostatectomy (RALP) has spread in the last decade as a minimally invasive alternative to open radical prostatectomy for men with localized prostate cancer. It is associated with excellent surgical, functional and oncological results with less postoperative pain and shorter convalescence. However, the development of an incisional hernia (IH) may negate known benefits as it can lead not only to bothersome symptoms but also to severe complications, such as bowel obstruction, strangulation and perforation.
Port-site or extraction site hernias, which are likely underdiagnosed, have become more common after minimally invasive surgery, but the IH rate after robot-assisted radical prostatectomy has not been well characterized. This study aimed to evaluate the impact of extraction-site location (vertical supra-umbilical incision versus an off-midline incision) on incisional hernia rates in robotic prostatectomy. Materials And Methods: We included 800 patients who had undergone RALP, 400 with a supra-umbilical incision for specimen extraction and 400 with an off-midline incision. All were followed up for at least 3 years. The main study end point was IH occurrence at the extraction site (midline versus off-midline). Results: The IH rate for the entire series was 4.75%, in particular 5% for the midline group and 4.5% for the off-midline group. The hernias were diagnosed at a mean of 20.2 and 18.2 months after surgery, respectively, in the two groups. There were no statistically significant differences in baseline characteristics; however, larger prostate weight, wound infection, and a history of prior cholecystectomy were associated with a higher proportion of IH. Conclusion: Extraction site hernias are a rare but potentially serious complication following RALP. In our series, midline extraction did not result in a significantly higher IH rate in comparison with the off-midline extraction site. Answer: Yes, the extraction site location in laparoscopic colorectal surgery does have an impact on incisional hernia rates. Several studies have found that midline incisions are associated with a higher incidence of incisional hernias compared to off-midline incisions. For instance, one study reported an incisional hernia rate of 17.6% for midline incisions versus 0% for off-midline incisions (PUBMED:18347858).
Another study found that midline extraction-site incisions were associated with a significantly higher rate of incisional hernias compared to transverse and Pfannenstiel incisions (PUBMED:31617098). Similarly, a retrospective cohort study indicated that midline incisions should be avoided when possible, as they are associated with higher incisional hernia rates compared to Pfannenstiel or incisions off the midline (PUBMED:27384092). Furthermore, a randomized controlled trial found that incisional hernias occurred in 8% of cases with midline incisions, which was significantly higher than the 0% incidence in the off-midline group (PUBMED:37460849). Another retrospective review of robotic colorectal surgeries showed a significantly higher hernia rate among the midline group compared to the muscle-splitting group (12.4% vs. 0.68%, p < 0.0001) (PUBMED:26541723). However, it is important to note that one systematic review and meta-analysis did not find a statistically significant difference in incisional hernia rates between midline and off-midline specimen extractions following laparoscopic left-sided colorectal resections (PUBMED:37056082). Despite this, the overall evidence suggests that off-midline incisions are preferable to midline incisions in terms of reducing the risk of incisional hernias after laparoscopic colorectal surgery.
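The comparisons above are expressed as odds ratios. As a rough illustration only (not the meta-analysis's random-effects model — the function name and Wald confidence-interval method are our own choices), a crude odds ratio can be computed from the pooled counts reported in the meta-analysis (PUBMED:28444496): 338/3177 midline hernias versus 48/1314 + 9/956 = 57/2270 off-midline hernias.

```python
import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Crude odds ratio of group A vs group B with a Wald 95% CI."""
    a, b = events_a, total_a - events_a   # group A: events / non-events
    c, d = events_b, total_b - events_b   # group B: events / non-events
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

# Pooled counts from the abstract: midline 338/3177, off-midline 57/2270.
or_, lo, hi = odds_ratio_ci(338, 3177, 57, 2270)
print(f"crude OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

This crude, unweighted estimate (about 4.6) lands in the same range as the study's weighted random-effects OR of 4.1; the gap reflects study weighting and heterogeneity.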
Instruction: A prospective comparison of two expanded polytetrafluoroethylene grafts for linear forearm hemodialysis access: does the manufacturer matter? Abstracts: abstract_id: PUBMED:9208965 A prospective comparison of two expanded polytetrafluoroethylene grafts for linear forearm hemodialysis access: does the manufacturer matter? Background: The function and patency of standard 6-mm Goretex (W.L. Gore and Associates, Flagstaff, AZ) and Impra (Impra, Inc., Tempe, AZ) expanded polytetrafluoroethylene (e-PTFE) grafts used as radial-antecubital linear arteriovenous fistulae for hemodialysis are compared. Study Design: A randomized clinical trial was conducted in two community dialysis centers and in one hospital-based center serviced by one vascular surgical practice, which performed the access surgery. Selection of linear forearm access, as opposed to other hemodialysis graft configurations, was at the discretion of the surgeon. Candidates for linear grafts had palpable radial pulses with a normal Allen test and normal digital Doppler flow in the hand. Linear grafts were placed using end-to-side anastomoses to the artery and vein, and the graft type was determined by randomization. Primary patency was determined by first episode of thrombosis, first revision, or angioplasty of the graft. Secondary patency after thrombectomy, revision, or angioplasty was determined when the graft was no longer clinically usable, and a new graft needed to be placed as a parallel conduit in the forearm or in another site. Statistical analysis was by actuarial life-table methods. Results: There were 131 linear forearm grafts in 117 patients. The Impra and Goretex groups were equally matched for gender and major risk factors, except for smoking, which was more common in the Goretex group. Minimum follow-up was 24 months.
Life table primary patencies at 1 year (Impra 43%, Goretex 47%) and at 2 years (Impra 30%, Goretex 26%) were not statistically different (p = 0.78); secondary patency was also statistically similar at 1 year (Impra 49%, Goretex 69%) and at 2 years (Impra 33%, Goretex 41%) (p = 0.15). Discontinuance of use of a patent graft, complications, episodes of thrombosis, and the need to replace the original graft occurred in the two groups without a statistically significant difference. Conclusions: In the linear forearm position from the radial artery to an antecubital vein, there is no difference in the performance of 6-mm standard e-PTFE grafts on the basis of manufacturer, whether Goretex or Impra. On the basis of performance, linear forearm dialysis grafts are an acceptable method for hemodialysis access. abstract_id: PUBMED:749941 Reinforced expanded polytetrafluoroethylene (Gore-Tex) grafts for haemodialysis. Twenty-five vascular grafts made of expanded reinforced polytetrafluoroethylene (PTFE) were inserted as arteriovenous fistulae in 20 patients requiring intermittent haemodialysis. Thirteen grafts were placed in the forearm, either as a looped or straight configuration, and 12 straight grafts were placed in the thigh. Whereas only 45% of the forearm grafts functioned adequately, 92% of the thigh grafts supported haemodialysis satisfactorily. It is suggested that when a vascular graft for haemodialysis is required, then an arteriovenous connection of expanded PTFE placed in the thigh achieves reasonable success. abstract_id: PUBMED:8472233 A comparison between expanded polytetrafluoroethylene and plasma tetrafluoroethylene grafts for hemodialysis access. Animal studies have indicated that plasma tetrafluoroethylene (TFE) may be a better prosthetic material than expanded polytetrafluoroethylene (ePTFE) for arteriovenous access in patients who require hemodialysis because it combines the advantages of both Dacron and Teflon.
A randomized clinical trial to compare the two materials was conducted between May 1987 and January 1989. Forty-four patients were enrolled, 22 in each group. The status of the grafts was monitored for at least 18 months. The patency rate for plasma TFE was 59% and for ePTFE was 64%. Kaplan-Meier analysis and Wilcoxon testing revealed no statistically significant differences between the two groups. Four grafts became infected, two in each group, and one aneurysm developed. Despite its theoretical advantages, plasma TFE was found to be similar to ePTFE as a graft material for hemodialysis. abstract_id: PUBMED:29807489 Hyaluronan enhancement of expanded polytetrafluoroethylene cardiovascular grafts. Heart disease continues to be the leading cause of death in the United States. The demand for cardiovascular bypass procedures increases annually. Expanded polytetrafluoroethylene is a popular material for replacement implants, but it does have drawbacks such as high thrombogenicity and low patency, particularly in small diameter grafts. Hyaluronan, a naturally occurring polysaccharide in the human body, is known for its wound healing and anticoagulant properties. In this work, we demonstrate that treating the luminal surface of expanded polytetrafluoroethylene grafts with hyaluronan improves hemocompatibility without notably changing its mechanical properties and without significant cytotoxic effects. Surface characterization techniques such as ATR-FTIR and contact angle goniometry demonstrate that hyaluronan treatment successfully changes the surface chemistry and increases hydrophilicity. Tensile properties such as elastic modulus, tensile strength, yield stress and ultimate strain are unchanged by hyaluronan enhancement. Durability data from flow loop studies demonstrate that hyaluronan is durable on the expanded polytetrafluoroethylene inner lumen. Hemocompatibility tests reveal that hyaluronan-treated expanded polytetrafluoroethylene reduces blood clotting and platelet activation.
Together our results indicate that hyaluronan-enhanced expanded polytetrafluoroethylene is a promising candidate material for cardiovascular grafts. abstract_id: PUBMED:22835566 Pasteurella multocida-infected expanded polytetrafluoroethylene hemodialysis access graft. Infections are among the risks related to prosthetic hemodialysis access grafts. However, dialysis access graft infections caused by Pasteurella multocida have not been reported previously. We report a case of a P. multocida-infected nonfunctioning expanded polytetrafluoroethylene graft in the forearm after a cat bite. At surgery, the graft was completely unincorporated and was completely excised. Operative culture results were positive for P. multocida, a common oral flora found in cats and dogs. The patient was treated with intravenous ceftriaxone, and the wounds healed with local care. abstract_id: PUBMED:7100308 Comparison of expanded polytetrafluoroethylene microvascular grafts to autogenous vein grafts. The use of 1-mm ID by 1-cm-long expanded polytetrafluoroethylene microvascular grafts in various positions in two experimental animals did not compare favorably with the use of autogenous vein interposition grafts in controls. Light microscopy and scanning electron microscopy showed that early fibrin deposition at the anastomosis lines is followed by fully activated coagulation of the grafts. Use of antiplatelet and anticoagulant drugs, changes in techniques, and alterations in the graft material are possible future directions for improved patency with expanded polytetrafluoroethylene microvascular grafts. abstract_id: PUBMED:21609797 A prospective, randomized comparison of bovine carotid artery and expanded polytetrafluoroethylene for permanent hemodialysis vascular access. Background: Many patients on hemodialysis do not have adequate anatomy for native arteriovenous fistulas. In these patients, synthetic conduits remain an alternative option for permanent hemodialysis access. 
We sought to compare the standard cuffed expanded polytetrafluoroethylene (ePTFE) graft with the bovine carotid artery (BCA) graft. Methods: This was a prospective, randomized controlled trial that was set in an academic medical center. We enrolled 26 patients in the BCA group and 27 patients in the ePTFE group. Primary, assisted primary, and secondary patency were calculated using the Kaplan-Meier method. Complications were monitored and are reported. Results: Although there was no significant difference in secondary patency rates, primary and assisted primary patency rates were significantly higher in BCA than in the ePTFE grafts (60.5% vs 10.1% and 60.5% vs 20.8% at 1 year, respectively). The BCA graft survival advantage was most profound in the upper arm grafts with significantly higher primary and assisted patency rates (P < .0001 and .0005, respectively). The total number of interventions (upper arm grafts) and total number of angioplasties (overall and upper arm) required to maintain patency were significantly fewer in the BCA group. The most common complication was graft thrombosis, which occurred 0.34 ± 0.09 times per patient year in the BCA group compared to 0.77 ± 0.16 times per patient year in the ePTFE group, P = .01. Conclusion: The BCA graft is an excellent option for patients on hemodialysis who are not suitable for native arteriovenous fistulas, as these grafts required fewer interventions than the ePTFE grafts to maintain patency. abstract_id: PUBMED:699409 Hemodynamic studies in patients with expanded polytetrafluoroethylene (PTFE) forearm grafts. 21 patients with polytetrafluoroethylene grafts inserted for hemodialysis access were studied to assess the hemodynamic parameters. Mean graft flow was 1.9 l/min with a range of 0.2-3.8 l/min. Intragraft pressure varied from 1 to 83 mmHg and was dependent on systemic pressure, graft flow and the presence of stenotic lesions.
Mean cardiac output was 7.1 l/min (cardiac index 4.0 l/min/m²) and was positively correlated with graft flow. On the basis of these data, surgical corrective action was taken prospectively in several patients, removing the abnormalities and ensuring continued graft patency. The techniques described are reproducible and simple and can be carried out in less than five minutes with minimal patient discomfort or hazard. abstract_id: PUBMED:20535250 Outcomes of two different polytetrafluoroethylene graft sizes in patients undergoing maintenance hemodialysis. Arteriovenous access creation is mandatory for maintenance hemodialysis. If native fistula placement was not possible or failed, a prosthetic conduit would be the best substitute. The purpose of this prospective study was to compare outcomes of two different sizes of polytetrafluoroethylene (PTFE) grafts, in hemodialysis patients, at the Mustafa Khomeini Hospital in Iran. The study population consisted of 586 end-stage renal disease referrals for vascular access construction (January 2003 to January 2007), of which eventually 102 subjects were candidates for a PTFE graft and were followed for one year. Data were collected by a questionnaire and analyzed using the SPSS, life table, Kaplan-Meier and Log-Rank tests. Out of 102 PTFE implantation candidates (mean age 51.7 +/- 17.06 years), 56% were male and 44% female. PTFE grafts of 8 mm and 6 mm sizes were randomly placed in 57 and 45 subjects, with distribution of 83%, 12% and 5% in arm, forearm and thigh. The most common underlying diseases were hypertension and diabetes. There was a significant difference in complication rates between patients with and without underlying diseases [42% vs. 10% (P = 0.03)]. One-year patency rates were 42.2% and 36.5% for 6 mm and 8 mm grafts and 28.2% vs. 52% in patients with and without underlying diseases respectively.
Despite a higher complication frequency in 8-mm grafts, the patency and complication rates of the two graft groups did not significantly differ. Hypertension and diabetes could have contributory roles in graft complication rate, which may be preventable. Non-tapered grafts of 6-mm and 8-mm sizes did not have significantly different outcomes. Further research is recommended with a larger sample size and longer duration. abstract_id: PUBMED:8843271 Comparison of survival of an expanded polytetrafluoroethylene graft designed for early cannulation to standard wall polytetrafluoroethylene grafts. Background: Placement and maintenance of a well-functioning vascular access are essential for delivery of adequate hemodialysis. Newly placed polytetrafluoroethylene (PTFE) arteriovenous grafts require a period of wound healing and incorporation of fibrous tissue before use, a period typically lasting two to three weeks. An ideal PTFE graft would be one that can be used for vascular access immediately, obviating the need for temporary dialysis catheters. Recently an expanded PTFE (ePTFE) graft with a mesh cannulation segment (Diastat graft) has been proposed for early cannulation. Study Design: This is a retrospective single-center study comparing ePTFE graft survival to contemporaneously placed standard wall PTFE (GORE-TEX) grafts. Results: Forty-seven consecutive new or established patients receiving chronic hemodialysis had grafts (25 ePTFE, 22 standard PTFE) placed between November 1994 and July 1995. There were no significant differences between the groups in age, race, gender, incidence of diabetes mellitus, or peripheral vascular disease. By the end of the study, 21 of 25 ePTFE grafts had clotted, compared with 11 of the 22 patients receiving a standard PTFE graft. Median time to first clotting was 53 days for the ePTFE grafts and 164 days for the standard PTFE grafts (p < 0.0001). Nine patients with ePTFE grafts required a temporary catheter after their first clotting episode.
Conclusions: The ePTFE grafts thrombosed at a significantly higher rate than standard wall PTFE grafts. Further experience with the Diastat graft might improve graft survival. However, early experience does not suggest that the avoidance of short-term temporary access outweighs the problem of a high clotting rate and its attendant morbidity. Answer: The prospective comparison of two expanded polytetrafluoroethylene (e-PTFE) grafts for linear forearm hemodialysis access investigated whether the manufacturer of the grafts mattered in terms of function and patency. The study compared standard 6-mm Goretex (W.L. Gore and Associates) and Impra (Impra, Inc.) e-PTFE grafts used as radial-antecubital linear arteriovenous fistulae for dialysis. The randomized clinical trial included 131 linear forearm grafts in 117 patients across two community dialysis centers and one hospital-based center. The primary patency was determined by the first episode of thrombosis, first revision, or angioplasty of the graft, while secondary patency was determined when the graft was no longer clinically usable, and a new graft needed to be placed. The results showed that there were no statistically significant differences in primary patency at 1 year (Impra 43%, Goretex 47%) and at 2 years (Impra 30%, Goretex 26%), with a p-value of 0.78. Secondary patency was also statistically similar at 1 year (Impra 49%, Goretex 69%) and at 2 years (Impra 33%, Goretex 41%) with a p-value of 0.15. There were no statistically significant differences in discontinuance of use of a patent graft, complications, episodes of thrombosis, and the need to replace the original graft between the two groups. The conclusion of the study was that in the linear forearm position from the radial artery to an antecubital vein, there is no difference in the performance of 6-mm standard e-PTFE grafts based on the manufacturer, whether Goretex or Impra.
Therefore, linear forearm dialysis grafts are an acceptable method for hemodialysis access regardless of the manufacturer (PUBMED:9208965).
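The patency figures in this answer come from actuarial life-table and Kaplan-Meier methods. As a minimal sketch of the product-limit estimator underlying such patency curves (the graft follow-up durations below are invented for illustration, not study data):

```python
def kaplan_meier(durations, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    durations: follow-up time for each graft (e.g. months to thrombosis
               or censoring); events: 1 = failure observed, 0 = censored.
    Returns a list of (time, survival) steps at each failure time.
    """
    at_risk = len(durations)
    survival, steps = 1.0, []
    # Sort by time; at tied times, process failures before censorings.
    for t, e in sorted(zip(durations, events), key=lambda p: (p[0], -p[1])):
        if e:  # a failure at time t lowers the survival estimate
            survival *= (at_risk - 1) / at_risk
            steps.append((t, survival))
        at_risk -= 1  # failures and censored cases both leave the risk set
    return steps

# Hypothetical follow-up of five grafts: months and event flag.
print(kaplan_meier([2, 3, 4, 5, 7], [1, 1, 0, 1, 0]))
```

Note how the censored grafts (event flag 0) shrink the risk set without dropping the curve; that is what lets studies with staggered enrollment report 1- and 2-year patency despite incomplete follow-up.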
Instruction: Is it safe to perform dental and cardiac valve surgeries concomitantly? Abstracts: abstract_id: PUBMED:23540425 Is it safe to perform dental and cardiac valve surgeries concomitantly? Purpose: Conventionally, dental surgery and cardiac valve replacement surgery (VRS) have been performed separately. Disadvantages of this approach include increased anesthetic and infection risks and increased costs. The authors hypothesized that dental surgeries performed immediately before VRS would have similar mortality and morbidity outcomes and significantly decreased costs compared with those performed independently of VRS. Materials And Methods: An institutional review board-approved retrospective study was completed comparing outcomes for 17 patients undergoing concomitant cardiac VRS and invasive dental procedures with outcomes for 16 patients undergoing similar procedures by a conventional approach. Results: The conventional group had a significant increase in ejection fraction (11% vs 6.7%; P < .05) and no difference in the incidence of prosthetic valve endocarditis or other cardiac complications. The concomitant group had longer overall operating room time (389 vs 328 min) but significantly shorter anesthesia time (428 vs 553 min) than the conventional group. Length of stay in the intensive care unit was similar (6.7 days) and overall hospital stays were shorter in the concomitant group (14.5 vs 18.2 days). Cost analysis showed the concomitant group's overall costs were significantly lower than those for the conventional group. Conclusion: There was no significant difference in cardiac outcomes between the concomitant and conventional groups. In addition, each patient in the concomitant approach saved an average of $6,669. Thus, concomitant dental surgery and cardiac VRS may be considered a safe and cost-effective approach that may lead to decreases in overall health care costs.
abstract_id: PUBMED:27295774 MITRAL VALVE REPAIR WITH MINIMALLY INVASIVE CARDIAC SURGERY APPROACH AS ROUTINE PRACTICE Although minimally invasive cardiac surgery (MICS) via right minithoracotomy is attracting attention as a minimally invasive approach in cardiac surgery, it has not become a standard, routine approach for mitral valve repair. Although it has spread rapidly in Germany (43%) and the USA (20.4%), the proportion of MICS in isolated mitral valve repair still comprises only 15.6% of mitral valve repair surgeries in Japan. For safe, assured introduction of MICS as a routine approach under quality control for good surgical and mid- and long-term results, surgeons experienced in mitral valve repair who perform at least 10 mitral valve repairs per year are necessary. A team approach with surgeons, anesthesiologists, perfusionists, and nurses who are highly motivated is also important. abstract_id: PUBMED:27833481 Cardiac and renal protective effects of dexmedetomidine in cardiac surgeries: A randomized controlled trial. Background: Cardiac and renal injuries are common insults after cardiac surgeries that contribute to perioperative morbidity and mortality. Dexmedetomidine has been shown to protect several organs against ischemia/reperfusion (I/R)-induced injury. We performed a randomized controlled trial to assess the effect of dexmedetomidine on cardiac and renal I/R injury in patients undergoing cardiac surgeries. Materials And Methods: Fifty patients scheduled for elective cardiac surgeries were randomized to a dexmedetomidine group that received a continuous infusion of dexmedetomidine initiated 5 min before cardiopulmonary bypass (1 μg/kg over 15 min, followed by 0.5 μg/kg/h) until 6 h after surgery, whereas the control group received an equivalent volume of physiological saline.
Primary outcome measures included myocardial-specific proteins (troponin-I, creatine kinase-MB), urinary-specific kidney proteins (N-acetyl-beta-D-glucosaminidase, alpha-1-microglobulin, glutathione transferase-pi, glutathione transferase alpha), serum proinflammatory cytokines (tumor necrosis factor alpha and interleukin-1 beta), norepinephrine, and cortisol levels. They were measured within 5 min of starting anesthesia (T0), at the end of surgery (T1), 12 h after surgery (T2), 24 h after surgery (T3), 36 h postoperatively (T4), and 48 h postoperatively (T5). Furthermore, creatinine clearance and serum cystatin C were measured before starting surgery as a baseline, and on days 1, 4, and 7 after surgery. Results: Dexmedetomidine reduced cardiac and renal injury as evidenced by lower concentrations of myocardial-specific proteins, kidney-specific urinary proteins, and pro-inflammatory cytokines. Moreover, it caused higher creatinine clearance and lower serum cystatin C. Conclusion: Dexmedetomidine provided cardiac and renal protection during cardiac surgery. abstract_id: PUBMED:12400743 Is it safe to train residents to perform cardiac surgery? Background: The impact of surgical training on patient outcomes in cardiac surgery is unknown. Methods: All cases performed by residents from 1998 to 2001 were compared to staff surgeon cases using prospectively collected data. Operative mortality and a composite morbidity of reoperation for bleeding, perioperative myocardial infarction, infection, stroke, or ventilation for more than 24 hours were compared using multivariate analysis. Results: Four residents performed 584 cases. The cases were as follows: coronary artery bypass grafting (CABG), 366 cases; aortic valve replacement (AVR) with or without CABG (AVR +/- CABG), 86 cases; mitral valve replacement, 31 cases; mitral valve repair, 25 cases; thoracic aneurysm/dissection, 22 cases; aortic root, 20 cases; transplantations, 14 cases; and adult congenital defect repairs, 20 cases.
There were 2,638 CABGs and 363 AVR +/- CABG performed by the staff during the same period. Crude operative mortality in CABG patients was 2.5% (resident) and 2.9% (staff) (p = 0.62). In multivariate analysis, resident status was not associated with operative mortality (OR 0.59, p = 0.19). Resident cases had a higher incidence of the composite morbidity outcome for CABG cases (19.4% vs 13.6% for staff; p = 0.003). However, in multivariate analysis, resident status was not associated with increased morbidity (OR = 1.23, p = 0.16). The AVR +/- CABG crude mortality was 3.6% (resident) and 2.8% (staff) (p = 0.69). Because of the small number of cases (n = 447), operative mortality was combined with the composite morbidity outcome for the AVR +/- CABG model. In all, 16.7% of resident cases and 19.8% of staff cases had the composite outcome or died (p = 0.51). In multivariate analysis, resident status was not associated with this outcome (OR = 0.74, p = 0.35). Conclusions: In this analysis of our experience with residency training, the operative morbidity and mortality in CABG and AVR patients were similar for residents and staff. Training residents to perform cardiac surgery appears to be safe. abstract_id: PUBMED:17761432 Need for dental treatment following cardiac valve surgery: a clinical study. Aim: The aim of the present study was to evaluate the long-term need for dental treatment following non-radical treatment modes prior to cardiac valve surgery. Patients: From 1995 to 2001, a total of 305 patients were screened prior to cardiac surgery. After an average period of 36 months, 80 of these patients could be re-evaluated clinically (26%). Another 117 patients (38%) and their family doctors were contacted by telephone. Methods: Dental evaluation prior to cardiac valve replacement was performed clinically and radiographically.
Tooth extraction was recommended in cases of carious or periodontal destruction, root remnants, partial retention or apical osteolysis despite endodontic treatment with poor prognosis for apicectomy. Periodontal therapy was recommended if attachment loss was less than 1/2 of the root length. No measures were undertaken for endodontically treated teeth without apical osteolysis and impacted teeth. In October 2002, oral health was re-evaluated in 80 patients. Dental treatment carried out in the follow-up period was documented and compared with the current findings. Results: At the time of re-evaluation, 60 of the total of 80 patients required dental treatment in 155 teeth; oral surgery was indicated in 51 of these 60 patients, mostly due to periodontal pathology. During the follow-up period, only 99 of the 142 dental interventions performed had been carried out under antibiotic prophylaxis. Conclusion: From the results of this study it may be concluded that non-radical dental treatment modes prior to cardiac valve replacement can only be successful over the long term if adequate postoperative dental care is provided. To achieve this aim, common follow-up monitoring forms, similar to those used for care of cancer patients, could facilitate communication. abstract_id: PUBMED:28009954 Outcomes of Minimally Invasive Valve Surgery in Patients with Multiple Previous Cardiac Operations. Background: The study aim was to evaluate the outcomes of minimally invasive valve surgery, performed via a right anterior thoracotomy approach, in patients with a history of multiple (more than two) prior cardiac surgeries. Methods: A retrospective review was conducted of all minimally invasive valve operations performed in patients with a prior history of two or more cardiac surgeries, including coronary artery bypass grafting (CABG) and/or valve surgery, at the authors' institution between January 2008 and November 2014.
Results: A total of 38 consecutive patients (23 males, 15 females; mean age 65.8 ± 14.6 years) were identified. Nine patients (24%) had two prior CABG operations, 18 (47%) had more than two prior valve surgeries, and 11 (29%) had a cardiac operative history that included both CABG and valve surgery. A total of 34 (89.5%) isolated valve procedures was identified; these consisted of 24 (64%) mitral valve operations, nine (23.7%) aortic valve replacements, and one (2.6%) tricuspid valve repair. Four patients (10.5%) underwent combined mitral and tricuspid valve surgery. Postoperatively, two patients (5.3%) had cerebrovascular accidents, three (7.9%) required reoperation for bleeding, and three (7.9%) had acute kidney injury. The median hospital length of stay was 9.5 days (IQR: 7-16 days). The 30-day mortality was 7.9%. The cumulative survival was 82% at one year and 72% at five years. Conclusions: Minimally invasive reoperative valve surgery after multiple prior cardiac operations is safe and feasible, with good perioperative outcomes and mid-term survival. abstract_id: PUBMED:28208977 Comparison of Levosimendan, Milrinone and Dobutamine in treating Low Cardiac Output Syndrome Following Valve Replacement Surgeries with Cardiopulmonary Bypass. Introduction: Low Cardiac Output Syndrome (LCOS) following Cardiopulmonary Bypass (CPB) is common and associated with increased mortality. Maintenance of adequate cardiac output is one of the primary objectives in management of such patients. Aim: To compare Levosimendan, Milrinone and Dobutamine for the treatment of LCOS after CPB in patients who underwent valve replacement surgeries.
Materials And Methods: Sixty eligible patients meeting LCOS criteria were allocated into three treatment groups: Group A-Levosimendan (loading dose 10 μg/kg over 10 minutes, followed by 0.1 μg/kg/min); Group B-Milrinone (loading dose 50 μg/kg over 10 minutes, followed by 0.5 μg/kg/min); and Group C-Dobutamine at 5 μg/kg/min to achieve a target cardiac index (CI) of > 2.5 L/min/m². In case of failure, other drugs were added as required. Hemodynamic parameters were monitored using the EV1000™ clinical platform until 30 minutes post CPB. INSTAT software was used for statistics and p < 0.05 was considered significant. Results: The mean ± standard deviation of the time taken by Dobutamine, Levosimendan and Milrinone to bring the CI to target was 11.1 ± 8.79, 11.3 ± 6.34 and 16.62 ± 9.33 minutes, respectively (p = 0.064). Levosimendan was equally effective in increasing and maintaining adequate CI as compared to Dobutamine (p > 0.05). Levosimendan and Milrinone increased MAP (Mean Arterial Pressure) equally, while Dobutamine was more effective than both Levosimendan and Milrinone from the 20th minute onwards (p < 0.01). Milrinone was less effective in increasing the stroke volume as compared to Dobutamine and Levosimendan, while Dobutamine and Levosimendan were equally effective. There was no difference in the HR (Heart Rate) achieved with all these three drugs. Conclusion: Levosimendan is as effective as Dobutamine and better than Milrinone for the treatment of LCOS following CPB in patients undergoing valve replacement surgeries.
Hospital computer records and phone interviews were used to assess morbidity, and mortality was determined through the Social Security Death Index. Ninety-eight patient charts were reviewed. Patients in Group A (n = 47) were not at a significantly greater risk for developing infective endocarditis (IE) within 6 months of cardiac surgery compared to patients in Groups B (n = 17) and C (n = 34). Also, patients in Group A did not have a significantly higher rate of mortality compared to the other groups (p = .09). The results suggest that there is no need to treat chronic oral infections in patients with compromised cardiac function within 24 to 48 hours prior to cardiac valve replacement surgery, since this will not lower the risk of IE and death following cardiac valve surgery. Multicenter prospective case-controlled studies are needed to address this question definitively. abstract_id: PUBMED:20151351 The influence of surgical and restorative dental treatment prior to cardiac valve surgery on the long-term demand of dental treatment: a prospective clinical study. The study objective was to evaluate the long-term influence of non-radical surgical and restorative dental treatment modalities prior to elective cardiac valve replacement on the subsequent dental treatment demand. A total of 305 patients preceding cardiac valve surgery were screened and the appropriate dental treatment was initiated. After 36 months, 80 patients were re-evaluated clinically, of which 60 required dental restorations of 155 teeth, mostly due to periodontal pathology. Independent of the sub-group, there was a statistically significant increase in treatment demand compared to the time of initial examination. In addition, at the time of final re-evaluation the definitive treatment need had increased significantly beyond anticipation, potentially due to inadequate dental procedures during the follow-up interval.
Irrespective of any dental treatment or antibiotic application, endocarditis did not occur in any patient. Non-radical dental restoration prior to cardiac valve replacement can only be successful if a standardized dental follow-up with common monitoring forms is provided. The risk of prosthetic valve infective endocarditis based on a dental focus is probably overestimated. abstract_id: PUBMED:11060950 Patients with cardiac disease: considerations for the dental practitioner. The provision of dental treatment under both local anaesthesia and sedation has an excellent safety record, although medical problems may occur. The high prevalence of cardiac disease in the population, particularly ischaemic heart disease, makes it the most common medical problem encountered in dental practice. Additionally, the increasing survival of children with congenital heart disease makes them a significant proportion of those attending for dental treatment. While most dental practitioners feel confident in performing cardio-pulmonary resuscitation, treating patients with co-existent cardio-vascular disease often causes concern over potential problems during treatment. This article aims to allay many of these fears by describing the commoner cardiac conditions and how they may affect dental treatment. It outlines prophylactic and remediable measures that may be taken to enable safe delivery of dental care. Answer: Based on the information provided in the abstracts, it appears that performing dental and cardiac valve surgeries concomitantly can be considered safe and cost-effective. A retrospective study comparing outcomes for patients undergoing concomitant cardiac valve replacement surgery (VRS) and invasive dental procedures with those undergoing similar procedures by a conventional approach found no significant difference in cardiac outcomes between the two groups.
Additionally, the concomitant approach resulted in significantly lower overall costs, with each patient saving an average of $6,669. The concomitant group had longer overall operating room time but significantly shorter anesthesia time than the conventional group. Length of stay in the intensive care unit was similar, and overall hospital stays were shorter in the concomitant group (PUBMED:23540425). Furthermore, another study suggested that non-radical dental treatment modalities prior to cardiac valve replacement can be successful over the long term if adequate postoperative dental care is provided (PUBMED:17761432). This indicates that with proper dental follow-up, the risk of complications such as prosthetic valve infective endocarditis based on a dental focus is probably overestimated (PUBMED:20151351). In conclusion, based on the outcomes of the studies mentioned, concomitant dental surgery and cardiac VRS may be a safe and cost-effective approach, provided that there is adequate postoperative dental care and follow-up. However, it is important to note that individual patient risk factors and the specifics of each case should be carefully considered when planning such procedures.
Instruction: Ambulatory teaching: do approaches to learning predict the site and preceptor characteristics valued by clerks and residents in the ambulatory setting? Abstracts: abstract_id: PUBMED:16225666 Ambulatory teaching: do approaches to learning predict the site and preceptor characteristics valued by clerks and residents in the ambulatory setting? Background: In a study to determine the site and preceptor characteristics most valued by clerks and residents in the ambulatory setting, we wished to confirm whether these would support effective learning. The deep approach to learning is thought to be more effective for learning than surface approaches. In this study we determined how the approaches to learning of clerks and residents predicted the valued site and preceptor characteristics in the ambulatory setting. Methods: Postal survey of all medical residents and clerks in training in Ontario determining the site and preceptor characteristics most valued in the ambulatory setting. Participants also completed the Workplace Learning questionnaire, which includes 3 approaches to learning scales and 3 workplace climate scales. Multiple regression analysis was used to predict the preferred site and preceptor characteristics as the dependent variables by the average scores of the approaches to learning and perception of workplace climate scales as the independent variables. Results: There were 1642 respondents, yielding a 47.3% response rate. Factor analysis revealed 7 preceptor characteristics and 6 site characteristics valued in the ambulatory setting. The Deep approach to learning scale predicted all of the learners' preferred preceptor characteristics (beta = 0.076 to beta = 0.234, p < .001). Valuing preceptor Direction was more strongly associated with the Surface Rational approach (beta = .252, p < .001) and with the Surface Disorganized approach to learning (beta = .154, p < .001) than with the Deep approach.
The Deep approach to learning scale predicted the valued site characteristics of Office Management, Patient Logistics, Objectives and Preceptor Interaction (p < .001). The Surface Rational approach to learning predicted valuing Learning Resources and Clinic Set-up (beta = .09, p = .001; beta = .197, p < .001). The Surface Disorganized approach to learning weakly negatively predicted Patient Logistics (beta = -.082, p = .003) and weakly positively predicted Learning Resources (beta = .088, p = .003). Climate factors were not strongly predictive of any studied characteristics. Role Modeling and Patient Logistics were predicted by a Supportive Receptive climate (beta = .135, p < .001; beta = .118, p < .001). Conclusion: Most site and preceptor characteristics valued by clerks and residents were predicted by their Deep approach to learning scores. Some characteristics reflecting the need for good organization and clear direction are predicted by learners' scores on less effective approaches to learning. abstract_id: PUBMED:15298710 Medical Students' and Residents' preferred site characteristics and preceptor behaviours for learning in the ambulatory setting: a cross-sectional survey. Background: Medical training is increasingly occurring in the ambulatory setting for final year medical students and residents. This study seeks to identify whether gender, school, level of training, or specialty affects learners' (final year medical students and residents) preferred site characteristics and preceptor behaviours for learning in the ambulatory setting. Methods: All final year medical students and residents at the five medical schools in Ontario (N = 3471) were surveyed about the site characteristics and preceptor behaviours most enhancing their learning in the ambulatory setting. Preferred site characteristics and preceptor behaviours were rank-ordered.
Factor analysis grouped the site characteristics and preceptor behaviours into themes, which were then correlated with gender, school, level of training, and specialty. Results: Having an adequate number and variety of patients while being supervised by enthusiastic preceptors who give feedback and are willing to discuss their reasoning processes and delegate responsibility are site characteristics and preceptor behaviours valued by almost all learners. Some teaching strategies recently suggested to improve efficiency in the ambulatory teaching setting, such as structuring the interview for the student and teaching and reviewing the case in front of the patient, were found not to be valued by learners. There was a striking degree of similarity in what was valued by all learners, but there were also some educationally significant differences, particularly between learners at different levels and in different specialties. Key findings between the different levels include preceptor interaction being most important for medical students, as opposed to residents, who most value issues pertaining to patient logistics. Learning resources are less valued early and late in training. Teaching and having the case reviewed in front of the patient become increasingly less valued as learners advance in their training. As one approaches the end of one's training, office management instruction becomes increasingly valued. Differences between specialties pertain most to the type of practice residents will ultimately end up in (i.e., office-based specialties particularly value instruction in office management and health care system interaction). Conclusions: Preceptors need to be aware of, and make efforts to provide, teaching strategies, such as feedback and discussing clinical reasoning, that learners have identified as being helpful for learning.
If strategies identified as not being valued for learning, such as teaching in front of the patient, must continue, it will be important to explore the barriers they present to learning. Although what all learners want from their preceptors and clinic settings to enhance their learning is remarkably similar, being aware of the educationally significant differences, particularly for learners at different levels and in different specialties, will enhance teaching in the ambulatory setting. abstract_id: PUBMED:36898896 Improving learning using a layered learning model in the ambulatory care setting. Introduction: Layered learning models (LLMs) are becoming increasingly popular at various clinical practice sites and give rotational student pharmacists the opportunity to learn from pharmacist preceptors and resident mentors. The purpose of this article is to give additional insight into implementation of a LLM in an ambulatory care clinical practice setting. Given the expanding services of ambulatory care pharmacy practice sites, this is an excellent opportunity to train both current and future pharmacists through utilization of a LLM. Commentary: The LLM employed at our institution gives student pharmacists an opportunity to work within a unique team consisting of a pharmacist preceptor and a postgraduate year one and/or postgraduate year two resident mentor, if applicable. The LLM gives student pharmacists the opportunity to apply clinical knowledge in practice while refining soft skills that many student pharmacists might struggle with during pharmacy school or may not have the chance to develop prior to graduation. Embedding a resident within a LLM provides an ideal environment for the resident to gain preceptorship experience with student pharmacists while developing the skills and attributes required to become an effective educator.
The pharmacist preceptor in the LLM is able to teach the resident how to precept student pharmacists by tailoring their rotational experience to enhance learning. Implications: LLMs are continuing to grow in popularity in clinical practice settings. This article offers additional insight into how a LLM can improve the learning experience of everyone involved, including student pharmacists, resident mentors, and pharmacist preceptors. abstract_id: PUBMED:36908741 Learning in Tension: A Case Study Examining What Internal Medicine Residents Learn in the Ambulatory Care Setting. Introduction: Medical care of patients with complex conditions has shifted to the ambulatory setting, whereas current knowledge of resident learning is primarily based on studies from inpatient settings. Preparing trainees to adapt to this shift necessitates an understanding of what internal medicine (IM) residents currently learn during ambulatory rotations. The aim of this study is to identify what residents learn during their ambulatory care experience. Methods: Using a qualitative instrumental case study design, the authors conducted separate focus groups with IM trainees (n = 15), supervisors (n = 16), and program directors (n = 5) from two IM programs in New York City, USA, in 2019. Participants were invited via email, and focus group sessions were complemented by document analysis of ambulatory syllabi. Results: Based on focus group commentary and document analysis, content learned in the ambulatory setting encompassed three domains: 1) patient needs, 2) the resident's role within a healthcare team, and 3) health system opportunities and limitations. Residents also learned about tensions within and between these domains, including the skills needed to care for patients versus the skills acquired, a desire for ownership of patient care versus fragmented care, and time allotted versus time required.
Discussion: This study revealed two key findings about what residents learn during their ambulatory care experience. First, learning content largely fell into three domains. Second, residents learned about the tensions between ideal care delivery and the realities of practice. These results highlight the imperative to better align curricula with clinical environments to meet the learning needs of residents. abstract_id: PUBMED:12390554 Faculty development seminars based on the one-minute preceptor improve feedback in the ambulatory setting. Objective: While several models of medical student instruction in the ambulatory setting exist, few have been formally studied. We wished to assess the impact of a faculty development workshop based on the One-Minute Preceptor model on the amount and quality of feedback in the outpatient setting. Design: Ambulatory teaching behaviors were studied during consecutive outpatient precepting sessions before and after 3 faculty development workshops. Student-teacher interactions were assessed using audiotapes of teaching encounters coded through qualitative techniques, and surveys of teacher, learner, and patient satisfaction. Setting: Ambulatory internal medicine clinic in a tertiary care medical center. Patients/participants: Nine board-certified internist faculty preceptors and 44 third-year medical students. Interventions: Three 90-minute faculty development seminars based on the One-Minute Preceptor teaching model. Measurements And Main Results: Ninety-four encounters with 18,577 utterances were recorded, half before and half after the seminars. After the workshops, the proportion of utterances that contained feedback increased from 17% to 22% (P = .09), and the feedback was more likely to be specific (9% vs 15%; P = .02).
After the workshops, teachers reported that the learning encounters were more successful (P = .03) and that they were better at letting the students reach their own conclusions (P = .001), at evaluating the learners (P = .03), and at creating plans for post-encounter learning (P = .02). The workshops had no effect on the duration of the student-teacher encounter or on student or patient satisfaction with the encounters. Conclusions: Brief, interactive faculty development workshops based on the One-Minute Preceptor model of clinical teaching resulted in modest improvements in the quality of feedback delivered in the ambulatory setting. abstract_id: PUBMED:10632833 Did I answer your question? Attending physicians' recognition of residents' perceived learning needs in ambulatory settings. Accurately recognizing the learning goals of trainees should enhance teachers' effectiveness. We sought to determine how commonly such recognition occurs and whether it improves residents' satisfaction with the teaching interaction. In a cross-sectional survey of 97 internal medicine residents and 42 ambulatory clinic preceptors in five ambulatory care clinics in Washington and Oregon, we systematically sampled 236 dyadic teaching interactions. Each dyad participant independently indicated the residents' perceived learning needs from a standardized list. Overall, the preceptors' recognition of the residents' learning needs, as measured by percentage of agreement between preceptors and residents on the learning topics, was modest (kappa = 0.21, p = .02). The percentage of agreement for all topics was 43%, ranging from 8% to 66%. Greater time pressures were associated with lower agreement (38% vs 56% for the highest and lowest strata of resident-reported time pressure; 15% vs 43% for the highest and lowest strata of preceptor-reported time pressure).
Agreement increased as the number of sessions the pair had worked together increased (62% for pairs with > 20 vs 17% for pairs with 0 previous sessions). Satisfaction with teaching encounters was high (4.5 on a 5-point scale) and unrelated to the degree of agreement (p = .92). These findings suggest that faculty development programs should emphasize precepting skills in recognizing residents' perceived learning needs and that resident clinics should be redesigned to maximize preceptor-resident continuity and minimize time pressure. abstract_id: PUBMED:33117888 Resident as Preceptor: An Ambulatory Internal Medicine Curriculum for Third-Year Resident Precepting. Introduction: The role of outpatient preceptor is a core component of many careers within internal medicine (both general internal medicine and subspecialty practice), yet opportunities to learn and practice this skill during residency training are limited. The purpose of this initiative was to introduce outpatient clinic precepting into the third-year ambulatory experience with a didactic and experiential curriculum. Methods: Internal medicine third-year residents received a 1-hour didactic on outpatient precepting and participated in a precepting session in their ambulatory block rotation during the 2017-2018 academic year. During this session, third-year residents precepted their first- and second-year colleagues in the residency clinic with faculty supervision. Residents were surveyed before the didactic and after the precepting experience to assess precepting comfort level, preparedness to supervise others, and satisfaction with the initiative in the ambulatory curriculum. Results: A total of 38 third-year residents were eligible to participate in the initiative, and 36 (94%) participated in the precepting session. Survey response rates were 76% and 78% for pre- and postsurveys, respectively.
Resident survey scores of self-assessed teaching, precepting, and supervision skills increased from a mean of 25 out of 42 to 34 out of 42 (p < .001), with an average enjoyment score of 10.3 out of a possible 12, suggesting high learner satisfaction. Discussion: The introduction of outpatient clinic precepting into the third-year internal medicine residency ambulatory curriculum was met with high participation and satisfaction and can be a successful approach to augmenting the outpatient residency experience. abstract_id: PUBMED:7830146 Orientation of OB/GYN residents to ambulatory care. A nurse-midwifery approach. Although certified nurse-midwives collaborate with ob/gyn residents in teaching settings, they are rarely given the opportunity to have input into how residents learn to deliver ambulatory care. This study is a preliminary evaluation of a program in which one nurse-midwife was assigned responsibility for the orientation and preceptorship of all post-graduate year 1 residents to the ambulatory care setting in an inner-city women's health center. At the end of the academic year, all of the post-graduate year 1s completed an evaluation of their orientation. Findings included positive responses to the preceptorship orientation and agreement that the residents were comfortable with the competence and educational background of the CNM preceptor. The recurring theme was that orientation by a CNM was an appropriate learning experience. Suggestions for improvement of the program are included. abstract_id: PUBMED:28891393 Improving Ambulatory Care Resident Training: Preparing for Opportunities to Treat Mental Illness in the Primary Care Setting. Purpose: The development of an outpatient psychiatry clinical practice learning experience for PGY2 ambulatory care pharmacy residents in preparation for the treatment of psychiatric disorders in the primary care setting is described.
Summary: With the increased prevalence of psychiatric disorders, significant mortality, and limited access to care, integration of mental health treatment into the primary care setting is necessary to improve patient outcomes. Given that the majority of mental health treatment occurs in the primary care setting, pharmacists in patient-centered medical homes (PCMHs) are in a unique position, with direct access to patients, to effectively manage these illnesses. Accordingly, the increased need for pharmacist education and training in psychiatry prompted a large Midwestern academic health system to develop an outpatient psychiatry learning experience for PGY2 (Postgraduate Year 2) ambulatory care pharmacy residents in 2015. The goal of this learning experience is to introduce the PGY2 ambulatory care residents to the role and impact of psychiatric clinical pharmacists and to orient the residents to the basics of psychiatric pharmacotherapy to be applied to their future practice in the primary care setting. Conclusion: The development of an outpatient psychiatry learning experience for PGY2 ambulatory care pharmacy residents will allow for more integrated and comprehensive care for patients with psychiatric conditions, many of whom are treated and managed in the PCMH setting. abstract_id: PUBMED:9347715 The effect of incorporating women's health into a PBL curriculum on students' tendencies to identify learning issues in an ambulatory care setting. Purpose: To investigate whether the incorporation of women's health into problem-based learning (PBL) cases affects students' tendency to identify learning issues related to women's health as they encounter patients in an ambulatory care setting.
Method: Students in the PBL curriculum at the Allegheny University of the Health Sciences, MCP-Hahnemann School of Medicine, participate in a nine-week primary care practicum at the end of their first year, during which they spend three half-days per week in an ambulatory setting examining patients and completing patient logs that include any learning issues identified. Patient logs from 23 first-year PBL students who had not been exposed to a new women's health education program prior to their practicum in 1993 and from 22 first-year PBL students who had been exposed to the program prior to their practicum in 1994 were reviewed. For each women's health learning issue identified, the sex of the student and the sex, specialty, and practice setting of the student's preceptor were recorded. Data were analyzed with several statistical methods. Results: There was no statistically significant difference in the numbers of men and women students or preceptors between the two years. In 1993 an average of 59% of the patients seen per student were women; in 1994 the average was 61%. The mean numbers of total learning issues identified (including women's health learning issues) were similar in the two years, but the mean percentage of clinical women's health learning issues identified increased significantly between 1993 and 1994, as did the mean percentage of community/preventive health women's health learning issues identified. There was a significant student-sex-by-preceptor-sex interaction for the total number of women's health learning issues identified (p = .024): for both years, the students paired with a preceptor of the same sex identified a higher number of women's health learning issues than did the students paired with a preceptor of the opposite sex. Conclusion: The results suggest that PBL is an effective way to increase students' awareness of women's health issues in a primary care clinical setting. 
More studies are needed to define the effect of PBL on the kind of reading and learning students will do when they get to the clinical setting. Answer: Yes, approaches to learning do predict the site and preceptor characteristics valued by clerks and residents in the ambulatory setting. A study found that the Deep approach to learning scale predicted all of the learners' preferred preceptor characteristics, such as Office Management, Patient Logistics, Objectives, and Preceptor Interaction. In contrast, valuing preceptor Direction was more strongly associated with the Surface Rational approach and the Surface Disorganized approach to learning. The Surface Rational approach to learning also predicted valuing Learning Resources and Clinic Set-up, while the Surface Disorganized approach weakly negatively predicted Patient Logistics and positively predicted Learning Resources. Climate factors were not strongly predictive for any studied characteristics. However, Role Modeling and Patient Logistics were predicted by a Supportive Receptive climate (PUBMED:16225666).
Instruction: Is dedicated extremity 1.5-T MRI equivalent to standard large-bore 1.5-T MRI for foot and knee examinations? Abstracts: abstract_id: PUBMED:25415708 Is dedicated extremity 1.5-T MRI equivalent to standard large-bore 1.5-T MRI for foot and knee examinations? Objective: The purpose of this study was to prospectively compare dedicated extremity MRI and standard large-bore MRI of the lower extremities in the same patients. Subjects And Methods: Sixty-nine patients (27 feet and 42 knees) were examined both with extremity 1.5-T MRI and standard 1.5-T MRI. Scanning duration was measured, and patients completed a detailed questionnaire after each examination (4-point scale). Two readers assessed image quality parameters. Data were analyzed with the paired Student t test, Wilcoxon signed rank test, and chi-square test. Results: Scanning duration was significantly longer for extremity MRI (foot, 29.9±5.5 minutes; knee, 30.4±5.6 minutes) than for standard MRI (foot, 21.9±5.0 minutes; knee, 20.5±3.9 minutes) (p < 0.001 for all comparisons). Acoustic noise reported by the patient was significantly lower on extremity MRI (foot, 1.9±0.9; knee, 2.1±0.7) compared with standard MRI (foot, 2.9±1.0; knee, 2.9±0.8) (p < 0.001 for all). Patient satisfaction for both systems was high (1.4-1.6 for all; p = 0.2-1.0). Image quality and fat suppression were equally good for both scanners for foot examinations (p≥0.48). Knee examinations generally featured good image quality on both systems, but standard MRI had superior image quality (p≤0.01) and fat suppression (p≤0.001) compared with extremity MRI. More motion artifacts were present on extremity MRI than on standard MRI, which was significant for the knee (p≤0.04) but not for the foot (p≥0.32). Conclusion: Extremity MRI featured less acoustic noise than standard MRI, but examination duration was longer on extremity MRI. Patient satisfaction was high for both scanners.
For foot examinations, image quality on extremity MRI was equivalent to standard MRI. Image quality for knee examinations was better on standard MRI compared with extremity MRI, but overall it was good on both systems. abstract_id: PUBMED:30580045 High permittivity ceramics improve the transmit field and receive efficiency of a commercial extremity coil at 1.5 Tesla. Objective: The purpose of this work is to investigate the use of ceramic materials (based on BaTiO3 with ZrO2 and CeO2-additives) with very high relative permittivity (εr ∼ 4500) to increase the local transmit field and signal-to-noise ratio (SNR) for commercial extremity coils on a clinical 1.5 T MRI system. Methods: Electromagnetic simulations of transmit efficiency and specific absorption rate (SAR) were performed using four ferroelectric ceramic blocks placed around a cylindrical phantom, as well as placing these ceramics around the wrist of a human body model. Results were compared with experimental scans using the transmit body coil of the 1.5 T MRI system and an eight-element extremity receive array designed for the wrist. SNR measurements were also performed for both phantom and in vivo scans. Results: Electromagnetic simulations and phantom/in vivo experiments showed an increase in the local transmit efficiency from the body coil of ∼20-30%, resulting in an ∼50% lower transmit power level and a significant reduction in local and global SAR throughout the body. For in vivo wrist experiments, the SNR of a commercial eight-channel receive array, integrated over the entire volume, was improved by ∼45% with the ceramic. Conclusion: The local transmit efficiency as well as the SNR can be increased for 1.5 T extremity MRI with commercial array coils by using materials with very high permittivity. abstract_id: PUBMED:37580304 GE Signa HDxt 1.5 T MRI PAC Module Troubleshooting and Repair. Through the analysis of PAC module faults in two GE Signa HDxt 1.5 T MRI cases, the fault manifestations were analyzed.
Based on the working principle of the PAC module, the faults were analyzed, the suspect components were located and repaired, and the equipment was returned to normal operation. Finally, the operating principles of the PAC module in terms of power supply, signal transmission and data transmission were summarized so that fault points can be located quickly and accurately in subsequent maintenance. abstract_id: PUBMED:19766580 A comparison of dedicated 1.0 T extremity MRI vs large-bore 1.5 T MRI for semiquantitative whole organ assessment of osteoarthritis: the MOST study. Objective: To date, semiquantitative whole-organ scoring of knee osteoarthritis (OA) has relied on 1.5 Tesla (T) magnetic resonance imaging (MRI) systems. Less costly 1.0 T extremity systems have been introduced that offer superior patient comfort, but may have limitations concerning field-of-view and image quality. The aim of this study was to compare semi-quantitative (SQ) scoring on a 1.0 T system using 1.5 T MRI as the standard of reference. Methods: The Multicenter Osteoarthritis Study (MOST) is a longitudinal study of individuals who have or are at high risk for knee OA. A sample of 53 knees was selected in which MRI was performed on a 1.0 T extremity system as well as on a 1.5 T scanner applying a comparable sequence protocol. MRIs were read according to the Whole Organ Magnetic Resonance Imaging Score (WORMS). Agreement was determined using weighted kappa statistics. Sensitivity, specificity and accuracy were assessed using the 1.5 T readings as the reference standard. In addition, the number of non-readable features was assessed. Results: Agreement (w-kappa) for seven main WORMS features (cartilage, bone marrow lesions (BMLs), osteophytes, meniscal damage and extrusion, synovitis, effusion) ranged between 0.54 (synovitis) and 0.75 (cartilage). Sensitivity ranged between 68.1% (meniscal damage) and 88.1% (effusion). Specificity ranged between 63.6% (effusion) and 96.4% (BMLs).
Although the overall rate of non-readable features was very low, it was higher for the 1.0 T system (1.9% vs 0.2%). Conclusions: Semiquantitative whole organ scoring can be performed using a 1.0 T peripheral scanner with a moderate to high degree of agreement and accuracy compared to SQ assessment using a 1.5 T whole body scanner. Our results are comparable to the published inter- and intra observer exercises obtained from 1.5 T systems. Sensitivity to change of longitudinal scoring was not evaluated in this cross-sectional design and should be investigated in future validation studies. abstract_id: PUBMED:36926212 Using a Dedicated Extremity MRI Scanner for Depicting Anatomic Structures of Common Wrist Pathologies: A Pilot Comparison with a 3-Tesla MRI Scanner. Background This pilot study explored the value of a dedicated extremity magnetic resonance imaging (MRI) scanner by focusing on the qualitative depiction of important wrist structures in common wrist pathologies, the overall image quality, artifacts, and participant experience in undergoing the examinations. Methods Images of the right wrist of 10 healthy adult volunteers were acquired with a 0.31-Tesla (T) dedicated extremity MRI and a 3-T MRI system, using a dedicated wrist coil. Images were separately evaluated by three radiologists. Paired images were randomized and graded for the visibility of anatomical details, including the triangular fibrocartilage complex (TFCC; central disc, meniscus homolog, and ulnar attachment), carpal ligaments (scapholunate [SL] and lunotriquetral [LT] ligaments), intercarpal cartilage, and median and ulnar nerves. Mean values and standard deviations of evaluation results were calculated for each sequence. Participants' experience in undergoing the examination in both MRI scanners was explored using a questionnaire. 
Results The mean values of anatomic structures and overall image quality were significantly in favor of the 3-T MRI scanner, compared with the dedicated extremity MRI scanner. With respect to patient satisfaction in undergoing the examination, the overall trend suggested that patients were more in favor of the dedicated extremity MRI scanner. Conclusion To define the real clinical value of the dedicated MRI scanner in the treatment of hand and wrist pathology, studies focused on specific pathologies are needed; such studies are certainly warranted considering the important benefits of these devices, including lower costs and higher accessibility for both patients and health care providers. abstract_id: PUBMED:37799462 A cross-sectional study to test equivalence of low- versus intermediate-flip angle dynamic susceptibility contrast MRI measures of relative cerebral blood volume in patients with high-grade gliomas at 1.5 Tesla field strength. Introduction: 1.5 Tesla (1.5T) remains a significant field strength for brain imaging worldwide. Recent computer simulations and clinical studies at 3T MRI have suggested that dynamic susceptibility contrast (DSC) MRI using a 30° flip angle ("low-FA") with model-based leakage correction and no gadolinium-based contrast agent (GBCA) preload provides equivalent relative cerebral blood volume (rCBV) measurements to the reference-standard acquisition using a single-dose GBCA preload with a 60° flip angle ("intermediate-FA") and model-based leakage correction. However, it remains unclear whether this holds true at 1.5T. The purpose of this study was to test this at 1.5T in human high-grade glioma (HGG) patients. Methods: This was a single-institution cross-sectional study of patients who had undergone 1.5T MRI for HGG. DSC-MRI consisted of gradient-echo echo-planar imaging (GRE-EPI) with a low-FA without preload (30°/P-); this then served as a preload for the standard intermediate-FA acquisition (60°/P+).
Both normalized (nrCBV) and standardized relative cerebral blood volumes (srCBV) were calculated using model-based leakage correction (C+) with IBNeuro™ software. Whole-enhancing lesion mean and median nrCBV and srCBV from the low- and intermediate-FA methods were compared using Pearson's, Spearman's, and intraclass correlation coefficients (ICC). Results: Twenty-three HGG patients comprising a total of 31 scans were analyzed. The Pearson and Spearman coefficients and ICCs between the 30°/P-/C+ and 60°/P+/C+ acquisitions were high for both mean and median nrCBV and srCBV. Conclusion: Our study provides preliminary evidence that for HGG patients at 1.5T MRI, a low-FA, no-preload DSC-MRI acquisition can be an appealing alternative to the reference-standard higher-FA acquisition that utilizes a preload. abstract_id: PUBMED:36924809 Aperture-patch sandwich metasurface for magnetic field enhancement in 1.5 T MRI. Magnetic resonance imaging (MRI) is an increasingly popular non-invasive technique for clinical diagnosis. Signal-to-noise ratio (SNR) is a crucial performance metric of MRI, improvement of which can be exchanged for increased image resolution or decreased scan time. Besides the progress in various hardware and software techniques for improving SNR in MRI scanners, use of metasurfaces as accessories has recently shown potential towards enhancing SNR by boosting the local magnetic field in the scanned volume. Magnetic field enhancement over a larger depth from the skin is essential for imaging of deeper tissues, which can be facilitated by a specifically designed metasurface. Here we present such a metasurface with complementary-type resonant structures on the two sides of a high-permittivity dielectric, which substantially increases magnetic flux density on the skin (forty-five fold), an enhancement that decays down to unity at a depth of 95 mm from the skin.
This results in boosting of SNR up to forty-fold on the skin in 1.5 T MRI, while keeping tissue heating below the safety limit. An original analytical approach is formulated to readily estimate the SNR enhancement factor of this metasurface. Using the designed metasurface as an accessory for MRI scanners could help make MRI scans more efficient and affordable. abstract_id: PUBMED:23677359 Prospective comparison of 1.5 and 3.0-T MRI for evaluating the knee menisci and ACL. Background: MRI (magnetic resonance imaging) is widely used to diagnose meniscal pathology and ACL (anterior cruciate ligament) tears. Because of the enhanced signal-to-noise ratio and improved image quality at higher field strength, knee MRI equipment is shifting from 1.5 to 3.0 T. To date, objective evidence of improved diagnostic ability at 3.0 T is lacking. The purpose of this prospective study was to assess the accuracy of 1.5 and 3.0-T MRI of the knee, in the same individuals, for diagnosing meniscal pathology and ACL tears, utilizing arthroscopy as the reference standard. Methods: Two hundred patients underwent MRI of the knee at 1.5 and 3.0 T. All MRI examinations consisted of multiplanar turbo spin-echo sequences. One hundred patients underwent subsequent knee arthroscopy. Two blinded independent radiologists assessed all MRI studies to identify meniscal pathology and ACL tears. In patients with MRI results indicating the need for surgical treatment, the sensitivity and specificity of the 1.5 and 3.0-T protocols for detecting these lesions were determined, utilizing arthroscopy as the reference standard, and compared with use of the McNemar test. The kappa statistic for inter-reader agreement in the 200 patients was calculated. Results: For medial meniscal tears, the mean sensitivity and specificity for the two readers were 93% and 90%, respectively, at 1.5 T and 96% and 88%, respectively, at 3.0 T.
For lateral meniscal tears, the mean sensitivity and specificity were 77% and 99%, respectively, at 1.5 T and 82% and 98%, respectively, at 3.0 T. For ACL tears, the mean sensitivity and specificity were 78% and 100%, respectively, at 1.5 T and 80% and 100%, respectively, at 3.0 T. None of the values for either reader differed significantly between the 1.5 and 3.0-T MRI protocols. Inter-reader agreement was almost perfect to perfect (kappa = 0.82 to 1.00). Conclusions: Routine use of a 3.0-T MRI protocol did not significantly improve accuracy for evaluating the knee menisci and ACL compared with a similar 1.5-T protocol. abstract_id: PUBMED:26397347 Closed-Bore Interventional MRI: Percutaneous Biopsies and Ablations. Objective: The purpose of this article is to review clinical applications and technologic development of MRI-guided percutaneous interventions performed in closed-bore MRI scanners. Conclusion: Interventional MRI has rapidly adapted to the closed-bore environment. New tools are being developed to facilitate the use of MRI-guided procedures, and cost-effectiveness studies are exploring the economics of interventional MRI. abstract_id: PUBMED:28952012 Comparison of a fast 5-min knee MRI protocol with a standard knee MRI protocol: a multi-institutional multi-reader study. Purpose: To compare diagnostic performance of a 5-min knee MRI protocol to that of a standard knee MRI. Materials And Methods: One hundred 3 T (100 patients, mean 38.8 years) and 50 1.5 T (46 patients, mean 46.4 years) MRIs, consisting of 5 fast, 2D multi-planar fast-spin-echo (FSE) sequences and five standard multiplanar FSE sequences, from two academic centers (1/2015-1/2016), were retrospectively reviewed by four musculoskeletal radiologists. Agreement between fast and standard (interprotocol agreement) and between standard (intraprotocol agreement) readings for meniscal, ligamentous, chondral, and bone pathology was compared for interchangeability. 
Frequency of major findings, sensitivity, and specificity were also tested for each protocol. Results: Interprotocol agreement using fast MRI was similar to intraprotocol agreement with standard MRI (83.0-99.5%), with no excess disagreement (≤ 1.2; 95% CI, -4.2 to 3.8%), across all structures. Frequency of major findings (1.1-22.4% across structures) on fast and standard MRI was not significantly different (p ≥ 0.215), except more ACL tears on fast MRI (p = 0.021) and more cartilage defects on standard MRI (p < 0.001). Sensitivities (59-100%) and specificities (73-99%) of fast and standard MRI were not significantly different for meniscal and ligament tears (95% CI for difference, -0.08-0.08). For cartilage defects, fast MRI was slightly less sensitive (95% CI for difference, -0.125 to -0.01) but slightly more specific (95% CI for difference, 0.01-0.5) than standard MRI. Conclusion: A fast 5-min MRI protocol is interchangeable with and has similar accuracy to a standard knee MRI for evaluating internal derangement of the knee. Answer: The dedicated extremity 1.5 T MRI is not entirely equivalent to the standard large-bore 1.5 T MRI for foot and knee examinations. A study comparing the two found that while patient satisfaction was high for both systems, the scanning duration was significantly longer for extremity MRI than for standard MRI (PUBMED:25415708). For foot examinations, image quality on extremity MRI was equivalent to that of standard MRI. However, for knee examinations, the standard MRI had superior image quality and fat suppression compared to extremity MRI, although the overall image quality was good on both systems (PUBMED:25415708). Additionally, the use of high permittivity ceramics has been shown to improve the local transmit field and signal-to-noise ratio (SNR) for commercial extremity coils on a clinical 1.5 T MRI system, which could potentially enhance the performance of extremity MRI (PUBMED:30580045).
In terms of depicting anatomic structures of common wrist pathologies, a pilot study indicated that the overall image quality and the visibility of anatomical details were significantly better with a 3-Tesla MRI scanner compared to a dedicated extremity MRI scanner. However, patients tended to prefer the experience of undergoing the examination with the dedicated extremity MRI scanner (PUBMED:36926212). In conclusion, while dedicated extremity MRI scanners offer certain advantages such as less acoustic noise and potentially improved patient experience, they may have limitations in image quality and examination duration compared to standard large-bore MRI, particularly for knee examinations.
Instruction: Disturbances of grip force behaviour in focal hand dystonia: evidence for a generalised impairment of sensory-motor integration? Abstracts: abstract_id: PUBMED:15965201 Disturbances of grip force behaviour in focal hand dystonia: evidence for a generalised impairment of sensory-motor integration? Background: Focal task specific dystonia occurs preferentially during performance of a specific task. There may be an inefficiently high grip force when doing manipulative tasks other than the trigger task, possibly reflecting a generalised impairment of sensory-motor integration. Objective: To examine how well subjects with writer's cramp (n = 4) or musician's cramp (n = 5) adapted their grip force when lifting a new object or catching a weight. Methods: Nine patients with focal hand dystonia and 10 controls were studied. Experiments addressed different motor behaviours: (A) lifting and holding an object; (B) adjusting grip force in anticipation of or in reaction to a change in load force by catching a small weight dropped expectedly or unexpectedly into a hand held receptacle. Results: In (A), patients produced a grip force overshoot during the initial lifts; force overflow was most pronounced in those with writer's cramp. Patients and controls adjusted their grip force to object weight within one or two lifts, though patients settled to a steady force level above normal. In (B), patients with focal hand dystonia and normal controls showed similar predictive grip force adjustments to expected changes in object load, suggesting that this aspect of sensory-motor integration was normal. Patients had a shorter latency of grip force response than controls after an unexpected load increase, reflecting either a greater level of preparatory motor activity or a disinhibited spinal reflex response. 
Conclusions: The overall increased grip force in patients with focal hand dystonia is likely to be a prelearned phenomenon rather than a primary disorder of sensory-motor integration. abstract_id: PUBMED:26164472 Sensory-motor integration in focal dystonia. Traditional definitions of focal dystonia point to its motor component, mainly affecting planning and execution of voluntary movements. However, focal dystonia is tightly linked also to sensory dysfunction. Accurate motor control requires an optimal processing of afferent inputs from different sensory systems, in particular visual and somatosensory (e.g., touch and proprioception). Several experimental studies indicate that sensory-motor integration - the process through which sensory information is used to plan, execute, and monitor movements - is impaired in focal dystonia. The neural degenerations associated with these alterations affect not only the basal ganglia-thalamic-frontal cortex loop, but also the parietal cortex and cerebellum. The present review outlines the experimental studies describing impaired sensory-motor integration in focal dystonia, establishes their relationship with changes in specific neural mechanisms, and provides new insight towards the implementation of novel intervention protocols. Based on the reviewed state-of-the-art evidence, the theoretical framework summarized in the present article will not only result in a better understanding of the pathophysiology of dystonia, but it will also lead to the development of new rehabilitation strategies. abstract_id: PUBMED:37064180 Dystonia-like behaviors and impaired sensory-motor integration following neurotoxic lesion of the pedunculopontine tegmental nucleus in mice. Introduction: The pedunculopontine nucleus (PPTg) is a vital interface between the basal ganglia and cerebellum, participating in modulation of the locomotion and muscle tone. 
Pathological changes of the PPTg have been reported in patients and animal models of dystonia, but its effect on, and mechanism in, the phenotyping of dystonia are still unknown. Methods: In this study, a series of behavioral tests focusing on the specific deficits of dystonia were conducted for mice with bilateral and unilateral PPTg excitotoxic lesion, including evaluation of dystonia-like movements, different types of sensory-motor integration, explorative behaviors and gait. In addition, neural dysfunctions including apoptosis, neuroinflammation, neurodegeneration and neural activation of PPTg-related motor areas in the basal ganglia, reticular formations and cerebellum were also explored. Results: Both bilateral and unilateral lesion of the PPTg elicited dystonia-like behaviors characterized by hyperactivity of the hindlimb flexors. Moreover, proprioceptive and auditory sensory-motor integration were impaired in bilaterally lesioned mice, while no overt alterations were found for tactile sensory-motor integration, explorative behaviors and gait. Similar but milder behavioral deficits were found in the unilaterally lesioned mice, with effective compensation observed for auditory sensory-motor integration. Histologically, no neural loss, apoptosis, neuroinflammation or neurodegeneration was found in the substantia nigra pars compacta and caudate putamen (CPu) following PPTg lesion, while reduced neural activity was found in the dorsolateral part of the CPu and striatal indirect pathway-related structures including the subthalamic nucleus, globus pallidus internus and substantia nigra pars reticulata. Moreover, neural activity was decreased in reticular formations such as the pontine reticular nucleus, parvicellular reticular nucleus and gigantocellular reticular nucleus, while the deep cerebellar nuclei were spared.
Conclusion: Lesion of the PPTg could elicit dystonia-like behaviors through its effect on the balance of the striatal pathways and the reticular formations. abstract_id: PUBMED:30335080 Non-motor disorders in patients with muscular dystonia Non-motor disturbances, represented by sensory, affective and obsessive-compulsive disorders, cognitive dysfunction, and sleep disturbances, are often found in patients with dystonia and have a negative impact on their quality of life. The prevalence of sensory and affective disorders and sleep disturbances is above 50% in patients with cervical dystonia and is 25% in patients with blepharospasm and writing spasm; cognitive dysfunction is found in more than 25% of patients with focal dystonia. A relationship was shown between non-motor disorders, psychiatric disorders in particular, and impairment of social and everyday functioning and worsening of quality of life as a whole. Common pathophysiological mechanisms of non-motor disorders as well as approaches to treatment of these disorders are discussed. The authors present results on the positive effect of botulinum toxin therapy, which reduces cognitive dysfunction, sensory disorders and depressive syndrome. Non-medication treatment of non-motor disorders in patients with dystonia is considered. abstract_id: PUBMED:23555946 Integration of sensory force feedback is disturbed in CRPS-related dystonia. Complex regional pain syndrome (CRPS) is characterized by pain and disturbed blood flow, temperature regulation and motor control. Approximately 25% of cases develop fixed dystonia. The origin of this movement disorder is poorly understood, although recent insights suggest involvement of disturbed force feedback. Assessment of sensorimotor integration may provide insight into the pathophysiology of fixed dystonia. Sensory weighting is the process of integrating and weighting sensory feedback channels in the central nervous system to improve the state estimate.
It was hypothesized that patients with CRPS-related dystonia bias sensory weighting of force and position toward position due to the unreliability of force feedback. The current study provides experimental evidence for dysfunctional sensory integration in fixed dystonia, showing that CRPS patients with fixed dystonia weight force and position feedback differently than controls do. The study shows reduced force feedback weights in CRPS patients with fixed dystonia, making it the first to demonstrate disturbed integration of force feedback in fixed dystonia, an important step towards understanding the pathophysiology of fixed dystonia. abstract_id: PUBMED:23805090 Focal dystonia in musicians: linking motor symptoms to somatosensory dysfunction. Musician's dystonia (MD) is a neurological motor disorder characterized by involuntary contractions of the muscles involved in playing a musical instrument. It is task-specific and initially only impairs the voluntary control of highly practiced musical motor skills. MD can lead to a severe decrement in a musician's ability to perform. While the etiology and the neurological pathomechanism of the disease remain unknown, it is known that MD, like other forms of focal dystonia, is associated with somatosensory deficits, specifically a decreased precision of tactile and proprioceptive perception. The sensory component of the disease is also evident in the patients' use of "sensory tricks" such as touching dystonic muscles to alleviate motor symptoms. The central premise of this paper is that the motor symptoms of MD have a somatosensory origin and are not fully explained as a problem of motor execution. We outline how altered proprioceptive feedback ultimately leads to a loss of voluntary motor control and propose two scenarios that explain why sensory tricks are effective.
They are effective because the sensorimotor system either recruits neural resources normally involved in tactile-proprioceptive (sensory) integration, or utilizes a fully functioning motor efference copy mechanism to align experienced with expected sensory feedback. We argue that an enhanced understanding of how a primary sensory deficit interacts with mechanisms of sensorimotor integration in MD provides helpful insights for the design of more effective behavioral therapies. abstract_id: PUBMED:8800949 Impaired sensory-motor integration during grasping in writer's cramp. Six subjects with writer's cramp and six healthy control subjects were examined while lifting a small instrumented object with variable weight and friction using the precision grip. The isometric grip and load forces were sampled at 400 Hz and stored in a flexible laboratory computer system for later analysis. Quantitative measurement of vibration and temperature sensibility showed normal sensory perception in the writer's cramp subjects. They exhibited an impaired programming of the grip-lift force co-ordination, while the ability to generate unimodal force-rate trajectories and to characterize the pattern of the force output according to memory representations of weight and friction of the object was intact. The capacity to quickly terminate the programmed grip force in lifts erroneously programmed too high was impaired. Writer's cramp subjects also employed excessive grip force during the static phase of the lift, and some patients had a short grip force latency after sudden unpredicted load increases, indicating a disinhibited spinal reflex response. The results indicate an impaired capacity in writer's cramp subjects to integrate sensory information in the motor programming and force regulation during precision grip tasks, despite normal sensibility. abstract_id: PUBMED:23876327 Movement-induced uncoupling of primary sensory and motor areas in focal task-specific hand dystonia.
Introduction: Due to growing evidence of sensorimotor integration impairment in focal task-specific hand dystonia, we aimed to describe primary sensory (S1) and primary motor (M1) cortex source activities and their functional cross-talk during a non-dystonia-inducing sensorimotor task, free of biases generated by interference with the occurrence of dystonic movements. Method: Magnetoencephalographic brain signals and opponens pollicis (OP) electromyographic activities were acquired at rest and during a simple isometric contraction performed either alone or in combination with median nerve stimulation. The task was performed separately with the right and left hand by eight patients suffering from focal task-specific hand dystonia and by eight healthy volunteers. Through an ad hoc Functional Source Separation (FSS) procedure, distinct sources devoted to hand control were identified in S1 (FSS1) and M1 (FSM1). Spectral properties and functional coupling (coherence) between the two sources were assessed in the alpha [8,13]Hz, beta [14,32]Hz and gamma [33,45]Hz frequency bands. Results: No differences were found between spectral properties of patients and controls for either the FSM1 or FSS1 cerebral sources. Functional coupling between FSM1 and FSS1 (gamma band coherence), while comparable between dystonic patients and healthy controls at rest, was selectively reduced in patients during movement. All findings were present in both hemispheres. Discussion: Because previous literature has shown that gamma-band sensory-motor synchronization reflects an efficiency index of sensory-motor integration, our data demonstrate that, in dystonic patients, uncoupling replaces the functional coupling required for efficient sensory-motor control during motor exertion. The presence of bi-hemispheric abnormalities in unilateral hand dystonia supports the presence of an endophenotypic trait.
abstract_id: PUBMED:25538594 Sensory abnormalities in focal hand dystonia and non-invasive brain stimulation. It has been proposed that synchronous and convergent afferent input arising from repetitive motor tasks may play an important role in driving the maladaptive cortical plasticity seen in focal hand dystonia (FHD). This hypothesis receives support from several sources. First, it has been reported that in subjects with FHD, paired associative stimulation produces an abnormal increase in corticospinal excitability, which was not confined to stimulated muscles. These findings provide support for the role of excessive plasticity in FHD. Second, the genetic contribution to the dystonias is increasingly recognized, indicating that repetitive, stereotyped afferent inputs may lead to late-onset dystonia, such as FHD, more rapidly in genetically susceptible individuals. It can be postulated, according to the two-factor hypothesis, that dystonia is triggered and maintained by the concurrence of environmental factors such as repetitive training and subtle abnormal mechanisms of plasticity within the somatosensory loop. In the present review, we examine the contribution of sensory-motor integration to the pathophysiology of primary dystonia. In addition, we discuss the role of non-invasive brain stimulation as a therapeutic approach in FHD. abstract_id: PUBMED:30405710 Spatial Integration of Somatosensory Inputs during Sensory-Motor Plasticity Phenomena Is Normal in Focal Hand Dystonia. Background: Surround inhibition is a system that sharpens sensation by creating an inhibitory zone around the central core of activation. In the motor system, this mechanism probably contributes to the selection of voluntary movements, and it seems to be lost in dystonia. Objectives.
To explore whether sensory information is abnormally processed and integrated in focal hand dystonia (FHD) and whether surround inhibition phenomena operate during sensory-motor plasticity and somatosensory integration in normal humans and in patients with FHD. Methods. We looked at the MEP facilitation obtained after 5 Hz repetitive paired associative stimulation of the median (PAS M), ulnar (PAS U), and median + ulnar (PAS MU) nerves in 8 normal subjects and 8 FHD patients. We evaluated the ratio MU/(M + U) ∗ 100 and the spatial and temporal somatosensory integration by recording the somatosensory evoked potentials (SEPs) evoked by a dual nerve input. Results: FHD patients had two main abnormalities: first, the amount of facilitation was larger than in normal subjects; second, the spatial specificity was lost. The MU/(M + U) ∗ 100 ratio was similar in healthy subjects and in FHD patients, and somatosensory integration was normal in this subset of patients. Conclusions. The inhibitory integration of somatosensory inputs and the somatosensory inhibition are normal in patients with focal dystonia, as are lateral surround inhibition phenomena during sensory-motor plasticity in FHD. Answer: The evidence suggests that disturbances of grip force behavior in focal hand dystonia may indeed reflect a generalized impairment of sensory-motor integration. In a study examining subjects with writer's cramp and musician's cramp, it was found that patients produced a grip force overshoot during initial lifts and settled to a steady force level above normal, indicating a prelearned phenomenon rather than a primary disorder of sensory-motor integration (PUBMED:15965201). This suggests that while certain aspects of sensory-motor integration, such as predictive grip force adjustments to expected changes in object load, may remain normal, there is an overall increased grip force which could be indicative of a generalized impairment.
Further supporting this notion, a review of experimental studies indicates that sensory-motor integration is impaired in focal dystonia, with neural degenerations affecting not only the basal ganglia-thalamic-frontal cortex loop but also the parietal cortex and cerebellum (PUBMED:26164472). Additionally, a study on mice with lesions in the pedunculopontine tegmental nucleus, which is involved in locomotion and muscle tone modulation, showed that dystonia-like behaviors and impaired proprioceptive and auditory sensory-motor integrations can result from such lesions (PUBMED:37064180). Moreover, non-motor disorders, including sensory dysfunctions, are prevalent in patients with dystonia and can negatively impact their quality of life (PUBMED:30335080). In CRPS-related dystonia, there is evidence of disturbed integration of sensory force feedback, with patients showing reduced force feedback weights (PUBMED:23555946). Similarly, musician's dystonia is linked to somatosensory deficits, and the motor symptoms are believed to have a somatosensory origin (PUBMED:23805090). In writer's cramp, there is an impaired capacity to integrate sensory information in motor programming and force regulation during precision grip tasks, despite normal sensibility (PUBMED:8800949). Furthermore, movement-induced uncoupling of primary sensory and motor areas has been observed in focal task-specific hand dystonia, suggesting a deficiency in the functional coupling required for efficient sensory-motor control during motor exertion (PUBMED:23876327).
Instruction: Do regular dental visits affect the oral health care provided to people with HIV? Abstracts: abstract_id: PUBMED:12403536 Do regular dental visits affect the oral health care provided to people with HIV? Background: Financial factors related to income and insurance coverage have been found to limit access to, and influence use of, oral health care services by people with human immunodeficiency virus, or HIV. Methods: The authors determined if visiting a dentist regularly affected the oral health services provided to people with HIV when financial barriers were eliminated as an impediment to access. They analyzed dental claims data for services submitted for payment to the Minnesota Access to Dental Care Program. The analyses focused on comparisons of dental utilization patterns among 273 people classified as regular patients, or RPs, and 222 people classified as nonregular patients, or NRPs. Results: RPs were found to have been provided more diagnostic and preventive care, and less restorative, endodontic, periodontic, removable prosthodontic and oral surgical treatment than were NRPs. Although the mean submitted cost per patient visit was much higher for NRPs, total mean submitted costs per patient for RPs and NRPs were not significantly different. Even though NRPs underwent fewer procedures and had fewer clinic visits than did RPs, the procedures provided to NRPs were more complex and costly. As indicated by differences in the mix of dental care services provided to RPs vs. NRPs, continuity of primary oral health care for RPs led to a better oral health result at no increase in cost over that for NRPs. Conclusions: The study findings provide substantial evidence regarding the value of regular oral health care for people with HIV. Clinical Implications: This study reinforces the need for dentists to educate and encourage people with HIV to integrate regular oral health care into the ongoing maintenance of their overall health and well-being. 
abstract_id: PUBMED:31123439 Examining the relationship between oral health-promoting behavior and dental visits. Objective: The objective of this study was to explore the relationship between a number of health-promoting behaviors and dental visits. Methods: A stratified sample from 16 primary schools in Riyadh was selected. A total of 1087 students aged 6-12 years were included in the study between October 2017 and January 2018. The World Health Organization (WHO) criteria for assessing dental caries were used to collect clinical data. Information on dental visits and health-promoting behaviors was collected through a modified WHO questionnaire. An aggregate variable of eleven health-related behaviors was created. A logistic regression model was used to examine the relationship between regular dental visits and the aggregate health behavior variable, accounting for age, sex, parental education, family income, and caries experience. Results: Only 6.8% of the sample reported regular dental visits. The logistic regression showed that the aggregate variable of health-promoting behavior was significantly related to regular dental visits, with an odds ratio of 1.23 (95% confidence interval 1.10-1.39). Other variables significantly related to regular dental visits included sex (female), higher family income, and lower mean caries experience. Conclusion: Despite the availability of free dental services in Saudi Arabia, most dental visits are symptomatic. The observed association between health-promoting behaviors and regular dental visits implies that those at higher risk of oral diseases are less likely to visit a dentist regularly. abstract_id: PUBMED:32032835 Regular dental visits may prevent severe functional disability: A community-based prospective study. Background: Previous studies have shown that regular dental visits can affect the relationship of tooth loss with mortality and functional disability. 
However, the independent association between regular dental visits and incident functional disability is unclear. Methods: Our study participants were community-dwelling individuals aged ≥65 years, without disability at baseline. The outcome was the level of incident functional disability, as defined in a new certification of the public long-term care insurance. We defined no disability as no certification at follow-up, mild disability as support levels 1-2 and care level 1 (i.e., independent in basic ADL, but requiring some help in daily activities), and severe disability as care levels 2-5 (i.e., dependent in basic ADL). The exposure variable, based on the questionnaire, was regular dental visits at baseline. Covariates included gender, age, socio-economic status, health status, lifestyle habits, physical and mental functioning, and oral health variables. Using multinomial logistic regression, we calculated adjusted odds ratio (aOR) and 95 % confidence interval (CI) for mild disability and severe disability, with no disability as a reference category. Results: Among 8,877 participants, the 33-month cumulative incidence of mild and severe disability was 6.0 % and 1.8 %, respectively. After controlling for all covariates, regular dental visits at baseline were significantly associated with a lower risk of incident severe disability (aOR 0.65; 95 % CI, 0.46-0.91) but not the incidence of mild disability (aOR 0.96; 95 % CI, 0.79-1.17). Conclusions: Encouraging dental visits may contribute to prevention of severe functional disability and extension of healthy life expectancy among community-dwelling older adults. abstract_id: PUBMED:33240359 Access to oral health care for people living with HIV/AIDS attending a community-based program. Objective: People living with HIV/AIDS (PLWHA) have difficulty accessing oral health services primarily due to HIV-related stigma and discrimination. 
In 2011, the University of British Columbia (UBC) Dental Hygiene Degree Program implemented a preventive oral health services program at the Positive Living Society of British Columbia (PLSBC), a non-profit organization supporting PLWHA. This study aims to assess the perception of how this type of service delivery influenced access to oral health care for members of PLSBC. Methods: Personal interviews with 10 members and one focus group comprising 12 staff were conducted. Audiorecordings were transcribed verbatim and coded thematically. Emerging themes were identified using the interpretative phenomenology approach following Penchansky and Thomas' theory of access. Results: The program helped members maximize their dental coverage to receive other types of dental services. Members who were influenced by past traumatic experiences appreciated that services were delivered in a safe manner and in a stigma-free setting. Members valued the opportunity to educate future dental professionals to reduce HIV-related stigma. However, dental needs that could not be addressed by the program remained untreated for some members who continued to face barriers to care at referral clinics. Conclusion: This community-based preventive dental program provided affordable dental care, a stigma-free setting, care delivered in a safe manner, an educational opportunity, and accessible location, which all seemed to have a positive influence on access to oral health care for members of PLSBC. However, the limited availability of the program prevented many members from accessing comprehensive oral health care and is a factor that should be addressed. abstract_id: PUBMED:12816137 The impact of HIV on oral health and subsequent use of dental services. Objective: This study examined differences in health and access to dental services among a nationally representative sample of patients with HIV using Andersen's Behavioral Model of Health Services Use. 
Methods: This investigation is a longitudinal study that used structural equation modeling to analyze data from the HIV Cost and Services Utilization Study, a probability sample of 2,864 adults under treatment for HIV infection. Key predisposing variables included sex, drug use, race/ethnicity, education, and age. Enabling factors included income, insurance, and regular source of care. Need factors included mental, physical, and oral health. Dependent variables included whether a respondent utilized dental services and number of visits. Results: More education, dental insurance, usual source of dental care, and poor oral health predicted a higher probability of having a dental visit. African Americans, Hispanics, those exposed to HIV through drug use or heterosexual contact, and those in poor physical health were less likely to have a dental visit. Of those who visited dental professionals, older persons, those with dental insurance, and those in worse oral health had more visits. African Americans and persons in poor mental health had fewer visits. Conclusions: Persons with more HIV-related symptoms and a diagnosis of AIDS have a greater need for dental care than those with fewer symptoms and without AIDS, but more pressing needs for physical and mental health services limit their access to dental services. Providers should better attend to the oral health needs of persons with HIV who are in poor physical and mental health. abstract_id: PUBMED:22547876 Retention of people living with HIV/AIDS in oral health care. Objective: We identified factors associated with retention in oral health care for people living with HIV/AIDS (PLWHA) and the impact of care retention on oral health-related outcomes. Methods: We collected interview, laboratory value, clinic visit, and service utilization data from 1,237 HIV-positive patients entering dental care from May 2007 to August 2009, with at least an 18-month observation period. 
Retention in care was defined as two or more dental visits at least 12 months apart. We conducted multivariate regression using generalized estimating equations to explore factors associated with retention in care. Results: In multivariate analysis, patients who received oral health education were 5.91 times as likely (95% confidence interval 3.73, 9.39) as those who did not receive this education to be retained in oral health care. Other factors associated with care retention included older age, taking antiretroviral medications, better physical health status, and having had a dental visit in the past two years. Patients retained in care were more likely to complete their treatment plans and attend a recall visit. Those retained in care experienced fewer oral health symptoms and less pain, and better overall health of teeth and gums. Conclusions: Retention in oral health care was associated with positive oral health outcomes for this sample of PLWHA. The strongest predictor of retention was the receipt of oral health education, suggesting that training in oral health education is an important factor when considering competencies for new dental professionals, and that patient education is central to the development of dental homes, which are designed to engage and retain people in oral health care over the long term. abstract_id: PUBMED:21972461 A typology of models for expanding access to oral health care for people living with HIV/AIDS. Objectives: This article describes a typology of program models for expanding access to dental services for people living with HIV/AIDS (PLWHA). These programs serve communities with limited access and high unmet need for oral health care, such as rural areas, low-income and racial/ethnic minorities. Methods: Interviews and site visits with dental and program directors were conducted at participating sites, including AIDS service organizations, community health centers, and university-affiliated medical centers or hospitals. 
Results: Despite the differences across organizational structure, similar models and approaches were developed to engage and retain PLWHA in dental care. These approaches included: using mobile dental units; expanding the type and availability of previous dental services provided; providing training opportunities for dental residents and hygienists; establishing linkages with medical providers; providing transportation and other ancillary services; using dental case managers and peer navigators to coordinate care; and patient education. Conclusions: This typology can assist program planners and medical and dental care providers with service delivery strategies for addressing the unmet need for oral health care in their area. abstract_id: PUBMED:31597796 Impact of Dental Visits on Oral Health Awareness in Saudi Arabia. Aim: The aim of this study is to assess the impact of dental visits on oral health awareness among the Saudi population. Materials And Methods: A cross-sectional survey was conducted in Saudi Arabia. An online self-administered questionnaire was distributed from January 2011 to June 2013. The questionnaire consisted of 21 multiple choice questions, which were categorized into four sections: demographic data, dental care status, oral hygiene behavior, and oral health knowledge (OHK). Responses to the questions in the OHK category were grouped and scored according to their percentage of correct answers: 1 = very poor (0 to <25% correct), 2 = poor (25% to <50% correct), 3 = acceptable (50 to <75% correct), and 4 = good (75-100% correct). The sample was divided based on dental visit behavior into poor, average, and good dental visitors. Descriptive statistics, group comparisons, correlations, and linear regressions were conducted using SPSS (V16.0). Significance was set at p < 0.05. 
Results: A total of 845 participants were included: 73.4% were females, 85.1% were older than 19 years, 76.9% had a college-level education, and 75.3% were from western Saudi Arabia. A total of 14.1% were poor dental visitors, 67.5% were average dental visitors, and 18.5% were good dental visitors. The good dental visitor group showed significantly better results than the other two groups in terms of dental services received, brushing habits, interdental cleansing habits, use of miswak, and OHK (p < 0.05). The best predictors with significant but weak effects on dental visit habits were brushing habits, interdental cleansing habits, use of miswak, and OHK (F(8,836) = 7.212, R2 = 0.065, p < 0.0001). Conclusion: Dental visits were significantly correlated with OHK. However, the impact of dental visits on oral health awareness was found to be weak. Clinical Significance: Well-designed preventive and educational dental visit programs need to be widely implemented as part of governmental oral health plans. abstract_id: PUBMED:34690240 Regular dental visits and changes in oral health among high school students during the coronavirus disease pandemic: A study using data from medical checkups at a school. Objective: Continuous oral health care and regular dental visits are important for maintaining oral health. As a result of the high communicability of the coronavirus disease (COVID-19), individuals are reluctant to visit medical institutions. It is as yet unclear how the oral health of the population has been affected by the interruption of regular medical services during the COVID-19 pandemic. 
This study aimed to examine the association between the status of regular dental visits and changes in oral health among high school students during the pandemic using data from routine medical checkups conducted at a school, as well as data obtained from a questionnaire-based survey conducted simultaneously with the medical checkups. Methods: This study included 878 participants (comprising freshmen and sophomores) during the 2019 academic year from a high school in Fukuoka prefecture, Japan. Data on the frequency of dental checkups and anxiety related to dental visits during the pandemic were obtained through a questionnaire-based survey. Changes in dental health and gingival status between 2019 and 2020 were determined using data from routine medical checkups at the school. Poisson regression analysis with robust standard errors was used to evaluate the association between changes in dental health and gingival status and the status of regular dental visits. Results: Of 878 participants, 417 (47.5%) did not have regular dental visits, 320 (36.4%) continued to have regular dental visits, and 141 (16.1%) had interrupted regular dental visits during the pandemic. In the interrupted regular dental visit group, 30.5% of participants were anxious about visiting a dental institution, a significantly higher proportion than in the other groups. Of the 521 participants who did not have gingival inflammation at the time of the dental checkup in 2019, gingival inflammation at the time of the dental checkup in 2020 was observed in 31.0% of participants having no regular dental visits, 20.2% of participants having regular dental visits, and 38.2% of participants having interrupted regular dental visits. The multivariable adjusted incidence rate ratios were 1.95 [95% confidence interval (CI), 1.34-2.84] in the interrupted regular dental visit group and 1.50 (95% CI, 1.07-2.10) in the no regular dental visit group. 
There was no significant association between interruption of regular dental visits and changes in dental health status. Conclusion: The study results showed that there was a significant association between interruptions in regular dental visits and anxiety related to dental visits during the COVID-19 pandemic, and that a higher proportion of participants who had interruptions in regular dental visits had gingival inflammation at the time of the medical checkup at the school. abstract_id: PUBMED:34432307 Interruption of regular dental visits during the COVID-19 pandemic due to concerns regarding dental visits was associated with periodontitis in Japanese office workers. Objective: To investigate the interrelationships among concerns regarding dental visits, the status of regular dental visits, and periodontal health during the coronavirus disease 2019 (COVID-19) pandemic. Background: Continuous oral health care and regular dental visits are important for maintaining periodontal health. Due to the possibility of contracting COVID-19, individuals have been reluctant to visit medical institutions. It is unclear how the periodontal health of the Japanese population has been affected by the interruption of regular dental visits during the COVID-19 pandemic and how concerns regarding dental visits have affected attendance at regular dental visits. Methods: This study included 199 Japanese office workers in one municipal office in Fukuoka Prefecture, Japan (average age = 42.6 years; age range = 19-77 years; 123 men and 76 women). Periodontitis was defined based on a full-mouth periodontal examination. The status of regular dental visits during the COVID-19 pandemic and concerns regarding dental visits were obtained via questionnaire. We tested the hypothesis that concerns regarding dental visits would indirectly affect periodontal health through the interruption of regular dental visits during the COVID-19 pandemic. 
We used mediation analysis, in which concerns regarding dental visits (present or absent) were set as the exposure, periodontitis (present or absent) was set as the outcome, and the status of regular dental visits (continued during the COVID-19 pandemic or not) was set as the mediator. Results: Of the 199 study participants, 108 had a habit of attending regular dental visits. Of these, 31 (28.7%) discontinued regular dental visits during the COVID-19 pandemic. Compared to the individuals who continued regular dental visits, those who discontinued regular dental visits had a higher prevalence of periodontitis (49.4% vs 77.4%, p < 0.05) and concerns regarding dental visits (22.1% vs 64.5%, p < 0.05). Discontinuing regular dental visits significantly mediated the association between concerns regarding dental visits and periodontitis (natural indirect effect: odds ratio = 1.68, 95% confidence interval = 1.02-2.79, proportion mediated = 64.3%). Conclusion: The study results showed that individuals who discontinued regular dental visits during the COVID-19 pandemic due to concerns regarding dental visits had relatively poor periodontal health. Answer: Yes, regular dental visits do affect the oral health care provided to people with HIV. A study analyzing dental claims data for services submitted for payment to the Minnesota Access to Dental Care Program found that regular patients (RPs) with HIV were provided more diagnostic and preventive care, and less restorative, endodontic, periodontic, removable prosthodontic, and oral surgical treatment than nonregular patients (NRPs). This suggests that continuity of primary oral health care for RPs led to better oral health outcomes at no increase in cost over that for NRPs (PUBMED:12403536). Additionally, retention in oral health care was associated with positive oral health outcomes for people living with HIV/AIDS (PLWHA). 
The strongest predictor of retention was the receipt of oral health education, indicating that patient education is central to the development of dental homes, which are designed to engage and retain people in oral health care over the long term (PUBMED:22547876). Furthermore, a typology of program models for expanding access to dental services for PLWHA, which serve communities with limited access and high unmet need for oral health care, includes approaches such as using mobile dental units, providing training opportunities for dental residents and hygienists, and patient education. These models aim to engage and retain PLWHA in dental care (PUBMED:21972461). In summary, regular dental visits play a crucial role in the oral health care of people with HIV, leading to better preventive care and overall oral health outcomes, and patient education is a key component in retaining PLWHA in consistent oral health care.
Instruction: Is it cost-effective to require recreational ice hockey players to wear face protection? Abstracts: abstract_id: PUBMED:18791529 Is it cost-effective to require recreational ice hockey players to wear face protection? Objective: The purpose of this study was to determine the cost-effectiveness of requiring all recreational hockey players to wear facial protection. Methods: The authors randomly surveyed recreational hockey players at two indoor hockey rinks in Evendale, Ohio. Data were collected on face protection, injuries, demographic variables and attitudes about protective gear from 11/2005 to 03/2006. Results: We surveyed 190 players. The mean age was 34 +/- 8.7 years and 99% were male. The average hockey experience was 17 years. Forty-six percent of respondents reported at least one serious hockey injury in the last five years. Twenty-four percent of the surveyed population chose to not wear face protection (46/190). The average cost of face protection was $48. The cost to purchase two shields for the 46 players would be $4416. Individuals with face protection reported significantly more sprains and strains that resulted in significantly more physician office visits and specialty physician visits. The extra physician visits would add approximately $4590 for the 46 people needing face protection. Those with face protection reported significantly fewer facial lacerations and facial bone fractures. Requiring face protection should prevent seven facial lacerations and three facial bone fractures over five years. The savings would be approximately $15,000. The net savings by requiring face protection would be $6,000/5 years. In our population, with 24% choosing to not wear face protection, requiring face protection would save $250/5 years/person needing protection. Conclusion: It is cost-effective to require facial protection in all recreational hockey players. abstract_id: PUBMED:17602320 Face protection in recreational hockey players. 
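As a rough sanity check on the cost-effectiveness arithmetic reported in the abstract above (PUBMED:18791529), the rounded dollar figures can be recombined in a few lines of Python. The inputs below are the abstract's approximate estimates, not exact claims data:

```python
# Sanity check of the cost figures quoted in PUBMED:18791529.
# All dollar values are the rounded estimates from the abstract.

players_needing_protection = 46   # 24% of the 190 surveyed players
shield_cost = 48                  # average cost of one face shield, USD
shields_per_player = 2            # the abstract prices two shields per player

equipment_cost = players_needing_protection * shields_per_player * shield_cost
extra_visit_cost = 4590           # added physician visits over 5 years
injury_savings = 15000            # approximate savings from prevented injuries

net_savings = injury_savings - equipment_cost - extra_visit_cost

print(equipment_cost)  # 4416, matching the abstract
print(net_savings)     # 5994, i.e. roughly the reported $6,000 per 5 years
```

Note that the abstract's per-person figure ($250 per 5 years per player needing protection) does not fall straight out of these rounded inputs (5994/46 is closer to $130), which presumably reflects the authors working from unrounded data.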
Background And Objectives: Our primary objective was to determine the percentage of recreational hockey players who do not wear face protection while playing hockey. Methods: We randomly surveyed recreational hockey players at two indoor hockey rinks in Evendale, Ohio. Data were collected on face protection, injuries, demographic variables, and attitudes about protective gear. Results: We surveyed 190 players. Their mean age was 34 +/- 8.7 years, and 99% were male. The average years of hockey experience were 17. Twenty percent of respondents reported using no facial protection either at the time of a past serious injury or currently. Those without face protection were significantly more likely to report having had a past facial laceration (odds ratio [OR]=3.40, 95% confidence interval [CI]=2.0-5.8) or facial bone fracture (OR=10.1, 95% CI=1.23-83.4). On the other hand, 69% of those wearing face protection reported that they felt they could "play more aggressive" with the protection, and they had a higher rate of past injuries other than facial injuries. Conclusions: Twenty percent of the recreational hockey players reported using no facial protection at the time of a prior serious injury, and players not using facial protection were more likely to report a facial laceration or facial bone fracture. Most players who wore face protection, on the other hand, reported that they played more aggressively and, over time, experienced more serious injuries. abstract_id: PUBMED:24431962 Recreational ice hockey injuries in adult non-checking leagues: a United States perspective. The purpose of this study was to analyze injuries among adult recreational ice hockey players. This was an observational prospective cohort study with data collected on injuries sustained during one season in the adult recreational ice hockey leagues of Oneida County, NY. The injury incidence rate was found to be 12.2/1000 player-exposures. 
The most common anatomic region injured was the head/neck/face (35%). Collisions were most often reported as the mechanism of injury (44%). Fracture was the most common diagnosis. Of players wearing face protection (full cage or shield, or partial visor/half shield), none suffered facial injuries, while all facial injuries reported were to players not wearing facial protection. The concussion rate was 1.1/1000 player-exposures. A lack of protective equipment was associated with 38% of injuries and 24% of injuries involved penalties. A history of prior injuries was found in 89% of injured players with 28% re-injuring the same body part. This study's findings suggested various strategies to address player injuries such as mandatory full facial protection and shoulder pads, strict enforcement of game rules, and game rule modifications (no body checking). Further research is needed on the role of preventive rehabilitation in players with previous injury history. Key Points: The injury incidence rate was found to be 12.2/1000 player-exposures, similar to previous Canadian literature. The concussion rate was 1.1/1000 player-exposures. 38% of injuries involved a lack of protective equipment and 24% of injuries involved penalties. Full facial protection and shoulder pads should be compulsory. Strict enforcement of game rules is necessary. A history of prior injuries was found in 89% of injured players. abstract_id: PUBMED:23311956 Attitudes towards the use of mouth and face guards in Swedish ice hockey: part 2. Results. Background/aim: The yearly cost of sports injuries, which affects Swedish society, is estimated at 3 billion SEK (460 million USD). Injuries in ice hockey represent at least 270 million SEK (42 million USD). Despite the high number of injuries, mouth and face guards are rarely used in Swedish ice hockey. The major aim of this study was to examine attitudes towards mouth and face guards in two ice hockey clubs in Sweden (one elite and one division 3 club). 
A second purpose was to determine why some players use mouth and face guards, while others do not. A third goal was to present material that ice hockey clubs could use for further discussions. Materials And Methods: A phenomenographic analysis of focus group interviews. Results: The phenomenographic analysis of the data resulted in 12 categories. Within each category, issues, activities and engagement of the participants were described. Further, similarities and differences in the discussions between the elite club and the division 3 club were described. The following categories were found to engage the participants the most: 'Ice hockey is a high-velocity collision sport in which injuries are expected', 'Attitudes towards personal protection guards' and 'Suggested measures'. Conclusions: The participants were aware of the risks of playing ice hockey, but they knew little about the consequences of a dental injury. Although ice hockey players wish to protect themselves, they refuse to accept just any mouth or face guard. Through the discussions about reducing dental and jaw injuries by routine use of protection devices, many reform proposals were presented that could be useful in future discussions. abstract_id: PUBMED:28828080 Morphological, Physiological and Skating Performance Profiles of Male Age-Group Elite Ice Hockey Players. The purpose of this study was to describe the evolution of morphological, physiological and skating performance profiles of elite age-group ice hockey players based on repeated measures spread over one season. In addition, the results of fitness tests and training programs performed in off-ice conditions and their relationship with skating performance were analyzed. Eighteen high-level age-group ice hockey players (13.1 ± 0.6 years) were assessed off and on-ice at the beginning and at the end of the hockey season. A third evaluation was also conducted at the beginning of the following hockey season. 
The players were taller, heavier, and showed bone breadths and muscle girths above the reference population of the same age. Muscular variables improved significantly during and between the two hockey seasons (p < 0.05). However, maximal aerobic power improved only during the off-season. All skating performance tests exhibited significant enhancements during the hockey season, but not during the off-season where some degradation was observed. Finally, weak observed variances (generally <20% of the explained variance) between physiological variables measured off-ice and on-ice skating performance tests indicated important gaps, both in the choice of the off-ice assessment tools as well as in training methods conventionally used. The reflection on the best way to assess and train hockey players certainly deserves to be continued. abstract_id: PUBMED:30304812 The Effect of Training in the Preparatory and Competitive Periods on Trunk Rotational Power in Canoeists, Ice-Hockey Players, and Tennis Players. This study evaluates changes in trunk rotational power at different weights and velocities after the preparatory and competitive periods in ice-hockey players, tennis players, and canoeists. The subjects performed trunk rotations to each side with a barbell of different weights placed on the shoulders (6, 10, 12, 16, 20, 22, and 26 kg) prior to and after 6 weeks of the preparatory period and 6 weeks of the competitive period. The results showed that mean power produced in the acceleration phase of trunk rotations increased significantly at weights from 10 to 26 kg or 6 to 26 kg after the preparatory and competitive periods in tennis players. The values obtained during trunk rotations with weights ≥12 kg also increased significantly after the preparatory period in ice-hockey players, whereas there were no significant changes after the competitive period. 
Similarly, the mean power during trunk rotations with weights ≥10 kg increased significantly only after the preparatory period in canoeists. Similar changes were observed for the peak power. These findings demonstrate that changes in trunk rotational power reflect the specificity of their training programs. This information may provide a basis for designing exercises focused on improvements of power produced during trunk rotations under loading conditions. abstract_id: PUBMED:38398795 Cross-Sectional Study of the Anthropometric Profile and Nutrient Status of Elite Female Ice Hockey Players: Differences by Play Position. Both the characteristics of ice hockey and the environmental conditions in which it is played affect the maintenance of the player's nutritional status and, therefore, their state of health and performance. The primary aim of this work was to examine the anthropometric profile, estimated energy expenditure, and macronutrient and micronutrient dietary intake of elite female ice hockey players by play position. As a secondary aim, their dietary intakes were compared with the recommendations. Hypotheses suggest variations in body composition based on ice hockey players' positions, with the expectation that these athletes may not align with energy and nutrient recommendations. Fifteen elite female ice hockey players were anthropometrically measured, basal metabolic rate and total energy expenditure were estimated, a 3-day, 24 h recall questionnaire was registered, and the results were compared with the recommended dietary intake for the Spanish population. Each player's position on the field requires an individualized physical and nutritional approach. 
There are no significant imbalances (p > 0.05) between energy and nutrient intake in relation to the Recommended Daily Intake (RDI); however, increasing the consumption of vegetables and seafood while reducing meat and fat intake would assist these athletes in maintaining a healthier energy balance, optimizing body composition, and aligning with nutritional parameters that are better suited to enhance physical performance. abstract_id: PUBMED:10949002 Epidemiology of women's recreational ice hockey injuries. Introduction: Participation in ice hockey by women is increasing in many parts of North America; however, research into injuries and the patterns of injury among female players associated with this activity is limited. Purpose: The purpose of this research was to examine the incidence and nature of injuries suffered by female recreational ice hockey players. Methods: This prospective study followed 314 female players from 33 teams in Edmonton, Canada, during the 1997-1998 hockey season. Injury and game attendance data were collected using monthly telephone interviews throughout the season. Diagnostic information for individuals who received medical treatment was solicited from the attending health professional. Results: A total of 102 players reported a total of 125 injuries for a rate of 7.5 injuries/1000 player exposures. The anatomic region most often injured was the lower extremity (31.2%), and the most common diagnosis was sprain/strain (52.0%). The predominant injury mechanism was player contact, either as a result of collision with another player or a body check (40.0%). Of all injuries, 65.6% occurred during league games, 27.2% during play-off, tournament, or exhibition games, and 7.2% during practices. Although less than 1% of injuries resulted in hospitalization, 17.6% of injuries resulted in an absence from hockey of 8 or more days. 
Conclusion: The diagnostic and anatomic distribution of injury in the women's hockey league was similar to that in leagues where full facial protection is mandatory. The observed injury rate was lower than the rates reported for male recreational and collegiate ice hockey players. Female recreational ice hockey players are at risk for injuries and further research is required to identify areas for injury prevention. abstract_id: PUBMED:24744472 CATECHOLAMINES AND β2-ADRENOCEPTOR GENE EXPRESSION BEFORE AND AFTER MAXIMAL INCREMENTAL CYCLE TEST IN YOUNG ICE HOCKEY PLAYERS: RELATION TO WORK PERFORMED. The aim of this study was to assess the plasma adrenaline and noradrenaline concentrations as well as whole blood β2-adrenoceptor gene (ADRB2) expression in young ice hockey players before and immediately after exercise in relation to performed work. Nineteen Youth National Team ice hockey players were subjected to the maximal incremental cycloergometer exercise. The test was done in the pre-competitive phase of training. Among many parameters the plasma adrenaline and noradrenaline concentrations and ADRB2 gene expression in peripheral blood mononuclear cells (PBMC) were determined before and after exercise. The average performed work was 3261.3 ± 558.3 J · kg(-1) and maximal oxygen consumption (VO2max) for all players was 53.85 ± 3.91 mL · kg(-1) min(-1). The geometric mean of the ADRB2 gene expression was statistically significantly different before and after exercise (P ≤ 0.05), while adrenaline and noradrenaline levels in plasma significantly increased after exercise. In the analysed group of athletes we found that initial level of plasma noradrenaline correlated with the performed work (r = -0.55, P < 0.014) and normalized ADRB2 expression before the exercise correlated with the work done by them (r = 0.48, P < 0.039).
However, no statistically significant correlations were found between the plasma adrenaline or noradrenaline concentrations and ADRB2 gene expression in peripheral blood of the players. The performed work in the maximal incremental exercise test of regularly training young ice hockey players depends on the initial levels of noradrenaline in plasma and ADRB2 mRNA in PBMC. abstract_id: PUBMED:28931992 Knee injury and osteoarthritis outcome score of Korean national ice hockey players. [Purpose] To investigate sports injuries in Korean national ice hockey players by surveying the sites, timing, types, frequency, treatment, and prevention of sports injuries, and to provide basic data for injury prevention and performance improvement of ice hockey players. Another purpose of this study was to evaluate the incidence of ice hockey injuries according to age and the relationship between etiological factors and injuries in high school students. [Subjects and Methods] This was a cross-sectional study. Eighteen elite female Korean national ice hockey players were recruited for this study. This study was conducted by a self-administered questionnaire survey using the Knee Injury and Osteoarthritis Outcome Score (KOOS) in national ice hockey players. [Results] Participants were injured mainly during training. Injuries were caused by skate contact, puck contact, and body checks. Five subscales of the KOOS were significantly correlated with each other, except that the correlation between activities of daily living and quality of life was insignificant. [Conclusion] For injury prevention in national team ice hockey players, full gear is recommended. In addition, therapists in the field need to conduct injury prevention through consistent observations and counseling in order to prevent injury and improve performance. Ice hockey players also need sufficient rest with systematic and scientific training for injury prevention and performance improvement.
Answer: Yes, it is cost-effective to require recreational ice hockey players to wear face protection. A study that surveyed 190 recreational hockey players found that requiring face protection would prevent seven facial lacerations and three facial bone fractures over five years, resulting in savings of approximately $15,000. The net savings by requiring face protection would be $6,000 over five years. In the surveyed population, with 24% choosing not to wear face protection, requiring it would save $250 over five years per person needing protection (PUBMED:18791529). Additionally, players not using facial protection were significantly more likely to report a facial laceration or facial bone fracture (PUBMED:17602320). Furthermore, in a study of adult recreational ice hockey players, none of the players wearing face protection suffered facial injuries, while all reported facial injuries were to players not wearing facial protection (PUBMED:24431962). These findings support the cost-effectiveness of mandating face protection for recreational ice hockey players to prevent facial injuries and reduce associated healthcare costs.
Instruction: Is polycystic ovary syndrome another risk factor for venous thromboembolism? Abstracts: abstract_id: PUBMED:22959762 Is polycystic ovary syndrome another risk factor for venous thromboembolism? United States, 2003-2008. Objective: We sought to determine the prevalence and likelihood of venous thromboembolism (VTE) among women with and without polycystic ovary syndrome (PCOS). Study Design: We performed a cross-sectional analysis using Thomson Reuters MarketScan Commercial databases for the years 2003 through 2008. The association between VTE and PCOS among women aged 18-45 years was assessed using age-stratified multivariable logistic regression models. Results: Prevalence of VTE per 100,000 was 374.2 for PCOS women and 193.8 for women without PCOS. Compared with women without PCOS, those with PCOS were more likely to have VTE (adjusted odds ratio [aOR] 18-24 years, 3.26; 95% confidence interval [CI], 2.61-4.08; aOR 25-34 years, 2.39; 95% CI, 2.12-2.70; aOR 35-45 years, 2.05; 95% CI, 1.84-2.38). A protective association (odds ratio, 0.8; 95% CI, 0.73-0.98) with oral contraceptive use was noted for PCOS women. Conclusion: PCOS might be a predisposing condition for VTE, particularly among women aged 18-24 years. Oral contraceptive use might be protective. abstract_id: PUBMED:35044114 Venous thrombosis risk factors in pregnant women. Venous thromboembolism (VTE) is the third most common cause of death worldwide after myocardial infarctions and strokes, according to the World Health Organization (WHO). Pregnancy is a unique condition in a woman's life, when enormous changes occur in the functioning of the most important systems of homeostasis in a relatively short time. These include physiological hypercoagulation, slowing of blood flow, increase in circulating blood volume, etc. However, while physiological, these changes increase the risk of venous thromboembolism by almost 6 times.
In some cases, an imbalance or dissociation appears between the functioning of natural antithrombotic systems and the activation of coagulation, as a consequence of genetic or acquired causes (genetic thrombophilia, antiphospholipid syndrome, comorbidities, obstetric complications, and other exogenous and endogenous factors). Accordingly, identification of risk factors, their systematization, and determination of VTE risks in pregnancy and the puerperium is one of the most important tasks of clinical medicine. Various recommendations based on risk factor analysis have appeared for practitioners during the last 10-15 years in order to prevent VTE in pregnant women more effectively. Nevertheless, none of these recommendations can yet take into account all risk factors, although convenient scoring systems for risk assessment and clear recommendations on antithrombotic prophylaxis regimens in risk groups have emerged in recent years. This article reviews the historical understanding of thrombosis in pregnant women, progress in understanding VTE risk factors in pregnant women, and available reserves in identifying new risk factors during pregnancy and the puerperium in order to stratify risks more efficiently. abstract_id: PUBMED:23449010 Oral contraceptives and cardiovascular risk in women with polycystic ovary syndrome. Because women affected by polycystic ovary syndrome (PCOS) present an increased cardiovascular risk, the safety of estroprogestin treatment is debated and contrasting data have been reported. However, cardiovascular risk is not the same in all PCOS women, and individual cardiovascular risk should be assessed before starting any estroprogestin treatment. The available data show that products containing both 2nd-generation and 3rd-generation progestins (including drospirenone and cyproterone acetate) represent a safe treatment in PCOS patients with regular cardiovascular risk.
In PCOS patients with increased cardiovascular risk, a careful choice of estroprogestin product is needed and cardiovascular risk should be monitored during treatment. In obese PCOS patients with normal glucose tolerance and lipid profile, products containing 2nd-generation progestins may be preferred because of lower venous thromboembolism risk. In PCOS patients with an altered lipid profile or glucose intolerance, 3rd-generation progestins should be used but, during treatment, cardiovascular risk should be periodically re-assessed. In special situations, metformin or statins may be added to estroprogestin treatment. abstract_id: PUBMED:33518188 Cardiometabolic Risk in Polycystic Ovary Syndrome: Current Guidelines. Polycystic ovary syndrome is a common endocrine disorder in reproductive-aged women and is associated with an increased risk of metabolic abnormalities, including obesity, impaired glucose tolerance, diabetes, dyslipidemia, metabolic syndrome, venous thromboembolism, and subclinical atherosclerosis. Clinicians and patients alike need to be aware of these increased risks as well as new international guidelines that recommend frequent screening and active management of metabolic abnormalities. Given that the data on the risk of cardiovascular events, such as myocardial infarction and stroke, in women with PCOS are mixed, future large-scale, longitudinal studies are needed to clarify these potential risks. abstract_id: PUBMED:31054196 Endogenous sex hormones and risk of venous thromboembolism in young women. Background: The risk of venous thromboembolism (VTE) in young women can predominantly be attributed to exogenous hormone use. The influence of (abnormalities in) endogenous sex hormones, as in polycystic ovary syndrome (PCOS) or primary ovarian insufficiency (POI), on VTE risk is uncertain. Objectives: To assess the association between endogenous sex hormone levels and VTE risk.
Methods: Women aged ≤45 years from the MEGA case-control study who provided a blood sample in the absence of exogenous hormone exposure or pregnancy were included. Sex hormone-binding globulin (SHBG), estradiol, follicle-stimulating hormone (FSH) and testosterone were measured. The free androgen index (FAI) and estradiol to testosterone ratio (E:T) were calculated. VTE risk was assessed according to quartiles (Qs) of levels and clinical cut-offs as proxies for PCOS (FAI > 4.5) and POI (FSH > 40 U/L). Logistic regression models were used to estimate adjusted odds ratios (ORs) with 95% confidence intervals (CIs). Results: Six hundred and sixty-five women (369 cases; 296 controls) were eligible for the analyses. Testosterone and FSH levels, E:T and POI (FSH > 40 U/L vs FSH ≤ 40 U/L) were not associated with VTE risk. For estradiol, VTE risk was increased with levels in Q4 vs Q1 (OR 1.6; 95% CI 1.0-2.5). There was a dose-response relationship between SHBG levels and VTE risk, with the highest OR at Q4 vs Q1: 2.0 (95% CI 1.2-3.3). FAI > 4.5 (PCOS proxy) vs FAI ≤ 4.5 was associated with increased VTE risk (OR 3.3; 95% CI 0.9-11.8). Conclusions: Estradiol, SHBG and FAI were associated with VTE risk, suggesting a role for endogenous sex hormones in the pathophysiology of VTE in young women. abstract_id: PUBMED:31308416 Oral contraceptive use increases risk of inflammatory and coagulatory disorders in women with Polycystic Ovarian Syndrome: An observational study. Polycystic ovarian syndrome (PCOS) is a multispectral disorder requiring lifelong management. Its pathophysiology is still being explored, which keeps treatment options limited. The present study explores the impact of oral contraceptive treatment on the metabolic, hormonal, inflammatory, and coagulation profile of PCOS women.
Fifty subjects diagnosed by the Rotterdam criteria and receiving no drug treatment served as controls, whereas 50 subjects receiving only OCPs (ethinyl estradiol 0.03 mg, levonorgestrel 0.15 mg) as the mode of treatment for at least six months served as cases. Ferriman-Gallwey score and hormonal profile improved on OCP treatment. However, parameters such as weight, body mass index, waist-hip ratio, oral glucose tolerance test, lipid profile, insulin, HOMA-IR, adiponectin, interleukin-1β, visfatin, resistin, tissue factor, PT, and APTT showed considerable derangements in the OCP group. All the above parameters are associated with the risk of diabetes mellitus, dyslipidemia, coronary vascular disease, cancers, a hypercoagulable state, venous thromboembolism, and thrombotic events. Long-term use of OCPs needs to be considered carefully for PCOS patients who are already burdened with associated risk factors. This study was conducted in a region where women do not have much access to high-end screening and diagnostic facilities, which further worsens their clinical outcomes. Large-scale, long-term studies need to be designed to further evaluate the safe use of OCPs in PCOS women.
Oral contraception has improved greatly since its introduction, but remains a risk for venous thromboembolism and stroke, particularly in smokers. Fetal nutritional and metabolic requirements in pregnancy impose high vascular demand on the placenta and lead to escalating maternal triglycerides and cholesterol, especially in the 3rd trimester. Triglycerides may require special management. Adverse pregnancy outcomes associated with placental dysfunction signal subsequent increased risk for maternal atherosclerotic disease. Early menopause has long been recognized as a risk-enhancing factor for atherosclerosis, with the pathophysiology remaining unclear. The menopause transition represents a period when cardiovascular risk for women increases rapidly and approaches that of men. Current studies are evaluating hormonal changes and even clonal hematopoiesis as potential causes. At the same time, lifestyle habits and routine chronic conditions such as hypertension and obesity/diabetes/metabolic syndrome play a large role and need attention. abstract_id: PUBMED:23209115 Risk of venous thromboembolism in women with polycystic ovary syndrome: a population-based matched cohort analysis. Background: There is an increased risk of venous thromboembolism among women taking oral contraceptives. However, whether there is an additional risk among women with polycystic ovary syndrome (PCOS) is unknown. Methods: We developed a population-based cohort from the IMS LifeLink Health Plan Claims Database, which includes managed care organizations in the United States. Women aged 18-46 years taking combined oral contraceptives and who had a claim for PCOS (n = 43 506) were matched, based on a propensity score, to control women (n = 43 506) taking oral contraceptives. Venous thromboembolism was defined using administrative coding and use of anticoagulation.
We used Cox proportional hazards models to assess the relative risk (RR) of venous thromboembolism among users of combined oral contraceptives with and without PCOS. Results: The incidence of venous thromboembolism among women with PCOS was 23.7/10 000 person-years, while that for matched controls was 10.9/10 000 person-years. Women with PCOS taking combined oral contraceptives had an RR for venous thromboembolism of 2.14 (95% confidence interval [CI] 1.41-3.24) compared with other contraceptive users. The incidence of venous thromboembolism was 6.3/10 000 person-years among women with PCOS not taking oral contraceptives; the incidence was 4.1/10 000 person-years among matched controls. The RR of venous thromboembolism among women with PCOS not taking oral contraceptives was 1.55 (95% CI 1.10-2.19). Interpretation: We found a 2-fold increased risk of venous thromboembolism among women with PCOS who were taking combined oral contraceptives and a 1.5-fold increased risk among women with PCOS not taking oral contraceptives. Physicians should consider the increased risk of venous thromboembolism when prescribing contraceptive therapy to women with PCOS. abstract_id: PUBMED:33853290 An Update on Contraception in Polycystic Ovary Syndrome. Polycystic ovary syndrome (PCOS) is a common endocrine disorder in reproductive-aged women, characterized by hyperandrogenism, oligo/anovulation, and polycystic ovarian morphology. Combined oral contraceptives (COCs), along with lifestyle modifications, represent the first-line medical treatment for the long-term management of PCOS. Containing low doses of estrogen and different types of progestin, COCs restore menstrual cyclicity, improve hyperandrogenism, and provide additional benefits such as reducing the risk of endometrial cancer. However, potential cardiometabolic risk associated with these agents has been a concern. 
COCs increase the risk of venous thromboembolism (VTE), related both to the dose of estrogen and the type of progestin involved. Arterial thrombotic events related to COC use occur much less frequently and are usually not a concern for young patients. All patients diagnosed with PCOS should be carefully evaluated for cardiometabolic risk factors at baseline, before initiating a COC. Age, smoking, obesity, glucose intolerance or diabetes, hypertension, dyslipidemia, thrombophilia, and family history of VTE should be recorded. Patients should be re-assessed at consecutive visits, more closely if any baseline cardiometabolic risk factor is present. Individual risk assessment is key to avoiding unfavorable outcomes related to COC use in women with PCOS.
Results: Our analysis revealed no potential causal association between genetically predicted PCOS and the risk of CVDs. These CVDs include peripheral artery disease, atrial fibrillation, arrhythmia, cardiovascular diseases, heart failure, peripheral vascular disease, hypertension, ischemic stroke, myocardial infarction, and venous thromboembolism. Associations could not be found even after the SNPs linked to these possible confounders (body mass index, waist-to-hip ratio, and serum testosterone) were deleted. Sensitivity analysis demonstrated no horizontal pleiotropy or heterogeneity. Conclusion: The present Mendelian randomization study suggests that genetically predicted PCOS may not be associated with the risk of CVDs. Answer: Yes, polycystic ovary syndrome (PCOS) appears to be a risk factor for venous thromboembolism (VTE). A cross-sectional analysis using Thomson Reuters MarketScan Commercial databases for the years 2003 through 2008 found that women with PCOS had a higher prevalence of VTE compared to women without PCOS. The study reported that women with PCOS were more likely to have VTE across different age groups, with the highest adjusted odds ratio observed in women aged 18-24 years (PUBMED:22959762). Additionally, a population-based matched cohort analysis using the IMS LifeLink Health Plan Claims Database indicated that women with PCOS who were taking combined oral contraceptives had a two-fold increased risk of VTE compared to other contraceptive users, and even those with PCOS not taking oral contraceptives had a 1.5-fold increased risk (PUBMED:23209115). However, it is important to note that while there is an association between PCOS and VTE, the causality is not definitively established. A Mendelian randomization study suggested that genetically predicted PCOS may not be associated with the risk of CVDs, including VTE (PUBMED:37668928).
This indicates that further research is needed to clarify the potential causal relationship between PCOS and VTE risk. Despite the lack of consensus on causality, clinicians are advised to consider the increased risk of VTE when prescribing contraceptive therapy to women with PCOS (PUBMED:23209115) and to assess individual cardiovascular risk before starting any estroprogestin treatment in PCOS patients (PUBMED:23449010). Additionally, current guidelines recommend frequent screening and active management of metabolic abnormalities in women with PCOS due to their increased risk of metabolic and cardiovascular issues (PUBMED:33518188).
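As a rough cross-check of the figures quoted in this answer, the crude, unadjusted ratios implied by the raw numbers in PUBMED:22959762 and PUBMED:23209115 can be recomputed in a few lines. The sketch below is written for this summary and is not part of any cited study; the published estimates were age-stratified or propensity-matched, so these crude ratios only approximate them.

```python
# Crude ratios implied by the figures quoted in the answer above.
# Unadjusted sanity checks only: the published aORs (2.05-3.26, age-stratified)
# and RRs (2.14 and 1.55, propensity-matched Cox models) differ from these.

# PUBMED:22959762 - VTE prevalence per 100,000 women aged 18-45
prev_pcos, prev_no_pcos = 374.2, 193.8
crude_prevalence_ratio = prev_pcos / prev_no_pcos

# PUBMED:23209115 - VTE incidence per 10,000 person-years
rr_on_coc = 23.7 / 10.9   # PCOS vs matched controls, both taking COCs
rr_off_coc = 6.3 / 4.1    # PCOS vs matched controls, neither taking COCs

print(round(crude_prevalence_ratio, 2))  # 1.93
print(round(rr_on_coc, 2))               # 2.17, near the reported RR of 2.14
print(round(rr_off_coc, 2))              # 1.54, near the reported RR of 1.55
```

That the crude incidence-rate ratios land close to the reported Cox-model RRs is expected given the propensity matching, while the crude prevalence ratio of about 1.9 sits below the age-stratified aORs because those were estimated within narrower age bands.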
Instruction: Shortcomings and deficits in surgical treatment of gynecological cancers: a German problem only? Abstracts: abstract_id: PUBMED:12217757 Shortcomings and deficits in surgical treatment of gynecological cancers: a German problem only? Objectives: The objective of this study was to assess the quality of preoperative diagnostic, primary surgical, and postoperative treatment of ovarian, endometrial, and cervical cancers in women in Hesse, Germany, in relation to current international recommendations. Methods: Data on all diagnostic, surgical, and postoperative gynecological procedures undertaken in Hesse in 1997-2001 were collected in a standardized form and validated for clinical quality. Databases were generated for cases of endometrial, ovarian, and cervical cancer, and details of treatment were analyzed. Results: There were 1119 cases of endometrial, 824 cases of ovarian, and 472 cases of cervical cancer. The malignancy remained undiagnosed until after surgery in 17.8% (199/1119) of endometrial cancers, 28.5% (245/824) of ovarian cancers, and 15.5% (73/472) of cervical cancers. There was evidence of suboptimal surgical treatment. Lymphadenectomy rates were low in endometrial and ovarian cancers (about 32%), and omentectomy rates were low in ovarian cancer (about 50%). Furthermore, 10.7% (31/289) of patients with cervical cancer diagnosed before hospital admission did not undergo radical surgery. Conclusion: Discrepancies between guidelines and treatment of gynecological cancers in Hesse were striking, particularly for endometrial and ovarian cancer, and this situation may be mirrored internationally. The fact that many guidelines are not supported by results from clinical studies may be a factor in this apparently suboptimal treatment. Clinical collaborative trials are needed to provide the necessary evidence to support current recommendations, and benchmark surveys are required to facilitate future quality assessment.
abstract_id: PUBMED:38414948 Gynaecologic oncology surgical cancellations in Zambia. Introduction: Cancellations of elective surgery in low- and middle-income countries (LMIC) are common and a major hindrance for patients who are in need of surgical therapeutic modalities. This is especially important in the context of scaling up needed surgical interventions for gynaecological cancer care. There is a knowledge gap in the literature related to cancellation of gynaecologic oncology surgeries in LMIC, where there is enormous need for this specific cancer surgical capacity. We report, in an observational descriptive fashion, our experience at the UTH/CDH in Lusaka, Zambia, on the causes of surgical cancellations in gynaecologic oncology. Methods: From January 1, 2021 through June 30, 2023, we retrospectively evaluated the surgical registry for gynaecologic oncology at the UTH/CDH in Lusaka, Zambia to assess the number and causes of surgical cancellations. Results: There were a total of 66 (16.96%) surgical cancellations out of 389 scheduled gynaecologic oncology cases. Lack of available blood and/or low haemoglobin was the most frequent cause of surgical cancellations, 27 cases (40.90%). Conclusion: We highlight in our series that the lack of blood, leading to surgical cancellations, was the most frequent impediment to performing scheduled gynaecologic oncology surgical procedures. As gynaecologic oncology services scale up in LMIC, given the radical nature of surgery and its association with blood loss, it is incumbent on the entire clinical ecosystem to address this issue and to develop mitigating strategies specific to their respective resource setting.
Objectives: Surgical site infection (SSI) following gynecologic oncology surgery is a severe problem for both patient and surgeon in terms of increased morbidity, length of hospital stay, anxiety, and costs. In this prospective, randomized, controlled study we investigated the effect of subcutaneous rifampicin and povidone-iodine on incisional SSI following gynecologic oncology surgery. Material And Methods: Three hundred patients scheduled for abdominal surgery due to any malignant gynecological pathology were randomly assigned into one of three groups of 100 members each, as follows: the subcutaneous tissue was irrigated with saline in Group 1; saline + 10% povidone-iodine in Group 2; and saline + rifampicin in Group 3. Patients were invited to follow-up once every 10 days in a 30-day period for evaluation. Patients who developed a superficial incisional SSI were recorded. Results: No significant relationship was observed between SSI and the subcutaneous agents used (p = 0.332). The rate of incisional surgical site infections increased significantly with longer hospitalization (p = 0.044), older patient age (p = 0.003), existence of comorbidities (p = 0.001), and perioperative blood transfusion (p = 0.021). Conclusions: Subcutaneous agents are not effective in preventing surgical site infections after gynecologic oncology surgeries. Further large-scale prospective randomized controlled studies may provide other options to prevent SSIs.
Patients: Patients were identified via Current Procedural Terminology codes for MIH: vaginal, laparoscopic, or robotic. Exclusion criteria included a gynecologic cancer diagnosis or concomitant major procedure at the time of hysterectomy. One thousand six hundred thirty-one patients underwent a benign MIH performed by a gynecologic generalist or a subspecialist in minimally invasive gynecologic surgery, urogynecology and pelvic reconstructive surgery, or gynecologic oncology; 125 hysterectomies were vaginal, 539 were conventional laparoscopic, and 967 were robotic. Measurements And Main Results: Surgical outcomes, including intraoperative complications, operative outcomes, and postoperative readmissions and reoperations, were compared between generalists and subspecialists and were stratified by surgeon volume status, with high-volume (HV) defined as performing 12 or more hysterectomies annually. Odds ratios for the primary outcome, Clavien-Dindo Grade III complications (which included visceral injuries, conversions, and reoperations within 90 days), were calculated to evaluate the impact of subspecialty training while accounting for surgeon volume status. Of 1631 MIHs, 855 (52.4%) were performed by generalists and 776 (47.6%) by subspecialists. HV generalists performed 618 (37.9%) of MIHs, and 237 (14.5%) were performed by low-volume generalists. All subspecialists were HV surgeons; 38.1% of generalists were HV. The odds ratio of a Clavien-Dindo Grade III complication was 0.39 (0.25-0.62) for hysterectomies performed by subspecialists compared to HV generalists after adjusting for potential confounding variables (p < .001). Subspecialists and HV surgeons had significantly lower incidences of visceral injuries, transfusions, blood loss over 500 mL, and conversions compared with generalists and low-volume surgeons, respectively.
Conclusion: Both subspecialty training and high surgeon volume status are associated with a lower risk of surgical complications in benign MIH. Subspecialty training is associated with a reduction in surgical complications even after accounting for surgeon volume. abstract_id: PUBMED:4765925 Bladder carcinoma as a problem in gynecologic diagnosis N/A abstract_id: PUBMED:32456991 Surgical training in gynecologic oncology: Past, present, future. The purpose of this paper is to review the surgical care related to training in gynecologic oncology, from past, present and future perspectives. A marked decline in the incidence of cervical cancer as well as improvements in radiation therapy have led to a reduction in the numbers of radical hysterectomies and exenterations being performed. Utilization of neoadjuvant chemotherapy is reducing the extent of cytoreductive operations, including intestinal surgery. The incorporation of sentinel lymphatic mapping has reduced the number of pelvic, paraaortic and inguinal lymphadenectomies being performed. Coupled with these changes are other factors limiting time for surgical training, including an explosion in targeted anticancer therapies and more individualized options beyond simple cytotoxic therapy. With what is likely to be a sustained impact on training, gynecologic oncologists will still provide a broad range of care for women with gynecologic cancer but may be quite limited in surgical scope and rely on colleagues from other surgical disciplines. Enhancement of surgical training by off-service rotations, simulation, attendance at advanced surgical training courses, and/or a longer duration of training is currently incorporated into some programs. Programs must ensure that fellows take full advantage of the clinical materials available, particularly those related to the potential deficiencies described. Changing required research training to an additional elective year could also be considered.
Based on the perspectives noted, we believe it is time for our subspecialty to reevaluate its scope of surgical training and practice. abstract_id: PUBMED:30306588 Essential surgical skills for a gynecologic oncologist. Gynecologic oncology has seen tremendous growth as a surgical specialty over the past four decades. However, many regions of the world still lack structured training programs in this discipline. The aim of this article is to identify the essential skills for a gynecologic oncologist to be able to provide optimal care to women diagnosed with gynecologic cancer. Where the evidence exists in the literature, we identify the necessary learning curve. Identifying essential skills required for the practice of gynecologic oncology should assist in standardizing care provision globally, and could be a starting point for health systems beginning structured training programs. Development of surgical skills requires adequate training, mentorship, and self-evaluation as an ongoing process beyond the years spent in training programs. abstract_id: PUBMED:1752447 Nongynecological surgical procedures performed by an oncological gynecology service. From March 1987 to July 1990, 1,568 surgical procedures were performed at the Oncological Gynecology Service; 206 of them (13.1%) were considered "non-gynecological" surgical procedures, and 182 (88.3%) of these were secondary to a diagnosis of malignancy, the most frequent being ovarian carcinoma (54.4%). Among benign lesions, 11.7% led to "non-gynecological" surgery, the most frequent being severe pelvic adhesions (41.6%) and severe endometriosis (25%). The "non-gynecological" surgical procedure carried out most frequently was pelvic lymphadenectomy. Morbidity was 3.8%, and mortality was 0.48%, due to acute myocardial infarction. In this series, the inclusion of the oncologic gynecologist in a multidisciplinary team is justified on the basis of survival and morbidity-mortality outcomes.
abstract_id: PUBMED:31373380 Minimally Invasive Surgery for the Obese Patient Among the Spectrum of Gynaecologic Surgery-A Surgical Viewpoint. The prevalence of obesity has increased, reaching epidemic status. Obesity has surgical and medical implications for a woman's health. A minimally invasive surgical approach has several advantages and is considered the preferred approach for various procedures in obese women. The spectrum of gynaecologic surgical care spans three main domains: benign gynaecologic surgery, reconstructive pelvic surgery, and gynaecologic cancer surgery. In this viewpoint, we chose a signature procedure for each main domain to compare minimally invasive surgery (MIS) trends for obese patients across all domains. A discrepancy was found in minimally invasive surgical trends for obese patients across different gynaecologic surgical domains. Fellowship training or maintaining high surgical volume might help to bridge this gap in the domain of benign gynaecologic surgery and improve the quality of care offered to obese patients. abstract_id: PUBMED:28410284 European Surgical Education and Training in Gynecologic Oncology: The impact of an Accredited Fellowship. Objective: The aim of this study was to understand the current situation of surgical education and training in Europe among members of the European Society of Gynecological Oncology (ESGO) and its impact on the daily surgical practice of those who have completed an accredited fellowship in gynecologic oncology. Methods: A questionnaire addressing topics of interest in surgical training was designed and sent to ESGO members with surgical experience in gynecologic oncology. The survey was completely confidential and could be completed in less than 5 minutes. Responses were obtained from 349 members in 42 European countries, 38% of the potential target population.
The respondents were divided into 2 groups depending on whether they had undergone an official accreditation process. Results: Two thirds of respondents said they had received a good surgical education. However, accredited gynecologists felt that their overall surgical training was significantly better. Surgical self-confidence among accredited specialists was significantly higher regarding most surgical oncological procedures than it was among their peers without such accreditation. However, the rate of self-assurance in ultraradical operations and bowel and urinary reconstruction was quite low in both groups. There was a general request for standardizing surgical education across the ESGO area. Respondents demanded further training in laparoscopy, ultraradical procedures, bowel and urinary reconstruction, and postoperative management of complications. Furthermore, they requested the creation of fellowship programs in places where they are not now accredited and the promotion of rotations and exchanges in centers of excellence. Finally, respondents wanted support for training in disadvantaged countries of the ESGO area. Conclusions: Specialists in gynecologic oncology who have obtained formal accreditation received a significantly better surgical education than those who have not. The ESGO respondents recognize that their society should lead the standardization of surgical training and promote ways of improving members' surgical skills. Answer: The shortcomings and deficits in the surgical treatment of gynecological cancers are not unique to Germany. The study from Hesse, Germany, revealed significant discrepancies between guidelines and the actual treatment of gynecological cancers, particularly for endometrial and ovarian cancer, with low rates of lymphadenectomy and omentectomy, and a considerable number of malignancies remaining undiagnosed until after surgery (PUBMED:12217757).
However, these issues are mirrored internationally, suggesting that such deficits are not confined to Germany alone. In Zambia, for example, there are challenges in gynecologic oncology surgical services, with a high rate of surgical cancellations (16.96%) due to factors such as the lack of available blood or low hemoglobin levels (PUBMED:38414948). This indicates systemic issues in the provision of gynecologic oncology surgery in low-and middle-income countries (LMICs), which are not limited to Germany. Furthermore, the need for improved surgical training and education in gynecologic oncology is a global concern. Studies have highlighted the importance of subspecialty training in reducing surgical complications (PUBMED:38151094), the evolution of surgical training due to changes in clinical practice (PUBMED:32456991), and the need for essential surgical skills and standardized training programs worldwide (PUBMED:30306588). The European Society of Gynecological Oncology (ESGO) has also recognized the need for standardization of surgical education and training across Europe, with accredited specialists reporting better surgical education and self-confidence in surgical procedures (PUBMED:28410284). In summary, the issues related to the surgical treatment of gynecological cancers, including adherence to guidelines, surgical training, and systemic challenges, are not confined to Germany but are present in various forms across different regions and healthcare systems.
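Several of the abstracts above report adjusted odds ratios with confidence intervals (for example, 0.39 with a 95% CI of 0.25-0.62 for subspecialists). Those estimates come from regression models fitted to the original data, but the basic arithmetic behind an unadjusted odds ratio and its Wald interval can be sketched from a 2x2 table. A minimal illustration in Python; the counts below are hypothetical and are not taken from any of the cited studies:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: complications among subspecialist vs generalist cases.
or_, lo, hi = odds_ratio_ci(20, 756, 48, 570)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```

Note that this sketch gives only the crude (unadjusted) ratio; the adjusted figures quoted in the abstracts additionally control for confounders within a regression model.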
Instruction: Inverted papilloma of the urinary bladder: Rigorous surveillance needed? Abstracts: abstract_id: PUBMED:16904447 Inverted papilloma of urinary bladder: is long-term cystoscopic surveillance needed? A single center's experience. Objectives: To review all cases of urinary bladder inverted papilloma (IP) in our institution and determine the need for cystoscopic surveillance. IP is an uncommon benign tumor of the urinary tract. Its multiplicity, recurrence, and association with transitional cell carcinoma (TCC) suggest possible malignant potential, leading to conflicting clinical conclusions regarding the need for surveillance. Methods: All consecutive patients from January 1991 to December 2004 with IP were included in this study. A single pathologist performed the histologic review. The patients had undergone cystoscopy and ultrasound evaluation of the kidneys every 6 months. Results: Of the 52 patients, 45 were men and 7 were women. The average age at presentation was 58.9 +/- 11.8 years (range 30 to 79). No patient had a synchronous or previous bladder tumor. The most common complaint was macroscopic hematuria. Ten cases were incidental findings during bladder ultrasonography or cystoscopy. All were solitary tumors, most commonly found at the bladder neck. The average follow-up period was 62 +/- 23 months, with no recurrence. One case of subsequent noninvasive papillary TCC developed 15 months later. The initial histologic findings had revealed cytologic atypia, with suspicious urine cytology findings. On review, it was more compatible with TCC with an inverted pattern. Conclusions: Although our cases exhibited benign biologic behavior, the presence of cytologic atypia and suspicious urine cytology require exclusion of TCC with an inverted pattern. Thus, in histologically proven solitary bladder IP with no associated TCC, cystoscopic surveillance may not be necessary. 
abstract_id: PUBMED:15839967 Is surveillance necessary for inverted papilloma in the urinary bladder and urethra? Background: A case series of inverted papilloma of the urinary bladder and urethra is presented, together with a review of the literature with respect to multiplicity, recurrence rate and association with transitional cell carcinoma, and a discussion on surveillance of the lesion. Methods: Cases of inverted papilloma of the lower urinary tract in a single centre were reviewed. Patient and tumour characteristics, recurrence and associated transitional cell carcinoma are reported. Results: Twenty patients were included (18 male, two female). The mean age was 60.8 years (range 35-78 years). All had solitary tumours ranging from 3 mm to 30 mm in size. Median cystoscopic follow-up was 30 months (range 2-140 months). There was no recurrence. One patient developed subsequent transitional cell carcinoma, 44 and 76 months later. Together with the present 20 cases, review of the English literature with respect to inverted papilloma of the lower urinary tract identified a total of 322 cases reported, with a recurrence rate of 3.85%. Moreover, 1.55%, 5.90% and 1.54% were associated with previous, simultaneous and subsequent transitional cell carcinoma, respectively. Conclusions: Recurrence is not uncommon and the risk of subsequent transitional cell carcinoma is not rare, such that non-invasive surveillance with flexible cystoscopy is recommended for inverted papilloma of the lower urinary tract. abstract_id: PUBMED:21131756 Inverted papilloma of the urinary bladder: Rigorous surveillance needed? An Indian experience. Aims: Inverted papilloma (IP) is an uncommon benign neoplasm of the urinary tract. Its multiplicity, recurrence, and association with transitional cell carcinoma (TCC) lead to conflicting clinical conclusions regarding its biological behavior and, hence, the need for rigorous follow-up protocols.
In this study, we review all cases of urinary bladder IP in our institution and determine the need for strict follow-up. Materials And Methods: We included in this study consecutive patients with IP of the urinary bladder from August 2004 to August 2008 who did not have prior or concurrent urothelial carcinoma. A single pathologist performed the histologic review. The patients underwent a strict follow-up schedule every 6 months. Results: In our study of 24 patients, the mean age at presentation was 53.5 (range 22-81) years. The mean follow-up period was 25.8 months (range 6-58 months). Of the 24 patients, 21 were men and 3 were women. No patient had a synchronous or previous bladder tumor. The most common presenting symptoms were macroscopic hematuria and dysuria. All were solitary tumors except one, most commonly found at the bladder neck and trigone. The average follow-up period was 2.5 years without any evidence of recurrence. Conclusions: We conclude that, when diagnosed by strictly defined criteria, IP is a benign urothelial neoplasm with an extremely low incidence of recurrence and a good prognosis. It does not seem to be a risk factor for TCC, especially if located in the bladder. Therefore, a thorough transurethral resection is adequate therapy, and follow-up protocols as rigorous as those for TCC may not be necessary. abstract_id: PUBMED:24074985 Long-term clinical outcome of inverted urothelial papilloma including cases with focal papillary pattern: is continuous surveillance necessary? Objective: To investigate the necessity for continuous cystoscopic surveillance of inverted papilloma (IP), including tumors exhibiting mixed morphology (IP with focal papillary architecture). Methods: We retrieved all cases of de novo ("primary") IP, diagnosed in our institution during 10 years (from January 2000 to December 2009), from the information database. Patients with a history of urothelial carcinoma or concurrent urothelial carcinoma were excluded.
Surveillance was performed by routine cystoscopy, and follow-up was obtained from our institutional and regional clinical and pathology databases. Results: We identified 35 patients with IP, including 3 with focal papillary architecture. Mean patient age was 60 years (range, 26-88), with a male-to-female ratio of 1.9:1. The most common tumor location was the urinary bladder (86%), followed by the urethra (14%). Focal papillary architecture was identified in 3 patients (aged 51, 52, and 78 years). Mean follow-up was 66 months (median 68; range, 11-132). Only 1 male patient (age 81) had a subsequent diagnosis of IP on follow-up cystoscopy at 9 months; no recurrence or progression was documented in the other patients diagnosed with IP. Conclusion: The absence of progression of IP on long-term follow-up in this study strongly argues against the need for continuous surveillance for patients in whom (1) strict diagnostic criteria are followed, (2) a complete resection can be ascertained, and (3) no previous or concurrent urothelial malignancies are documented. In this study, the 3 patients with IP showing focal papillary architecture had a benign course, similar to the previously documented cases. abstract_id: PUBMED:3694828 Inverted papillary carcinoma of the urinary bladder. A 47-year-old man presented with an inverted papillary carcinoma of the urinary bladder. Cystoscopy revealed an approximately 2 cm mass, with a small stalk and smooth surface, which was situated just medial to the right ureteral orifice. Transurethral resection of the tumor was performed on July 25, 1983. Malignant changes were recognized in the epithelial cords in the resected specimen. This is in contrast to the general view that inverted papilloma of the urinary bladder is benign in nature. The authors believe that careful pathologic examination should be done when an inverted-type tumor of the urinary tract is found. No recurrence has been recognized as of September 11, 1987.
abstract_id: PUBMED:32321559 Inverted urothelial papilloma of the upper urinary tract: description of two cases with systematic literature review. Background: Inverted urothelial papilloma (IUP) of the upper urinary tract is an uncommon benign tumour that occasionally presents as a polypoid mass causing urinary obstruction. Histologically, IUP is characterised by a proliferating urothelium arranged in cords and trabeculae, in continuity with overlying intact epithelium, and extending into the lamina propria in a non-invasive, endophytic manner. Cytological atypia is minimal or absent. Top differential diagnoses include urothelial carcinoma with inverted growth pattern and florid ureteritis cystica. Although urothelial carcinomas of the upper urinary tract with prominent inverted growth pattern commonly harbour microsatellite instability, the role of the mutator phenotype pathway in IUP development is still unclear. The aim of this study was to describe two additional cases of IUP of the upper urinary tract, along with an extensive literature review. Case Presentation: We observed two polypoid tumours originating in the renal pelvis and the distal ureter, respectively. Both patients, a 76-year-old woman and a 56-year-old man, underwent surgery because of the increased likelihood of malignancy. Histology was consistent with IUP and patients are alive and asymptomatic after long-term follow-up (6 years for the renal pelvis lesion and 5 years for the ureter lesion). The tumours retained the expression of the mismatch-repair protein MLH1, MSH2, and PMS2 whereas loss of MSH6 was found in both cases. Conclusions: When completely resected, IUP does not require rigorous surveillance protocols, such as those for urothelial carcinoma and exophytic urothelial papilloma. It is therefore important for the surgical pathologist to be aware of this rare entity in order to ensure correct patient management. 
abstract_id: PUBMED:281841 Urinary cytology in inverted papilloma of the lower urinary tract. Urinary cytology is reported in six cases of inverted papilloma of the bladder. Abnormal features were found in five of the six cases. These were essentially an increase in the number of single or small clusters of degenerate transitional cells. As their cytologic features fell within the range of normal but degenerate transitional epithelium, a diagnosis of transitional cell tumor was not possible in these cases. Histologic features suggest that the desquamating cells may be derived from the surface epithelium of the papilloma, the underlying cords and trabeculae, or from the adjacent transitional epithelium. abstract_id: PUBMED:21492372 Inverted papilloma of the urinary tract. Objectives: • To compare the clinical and pathological details of inverted papilloma (IP) of the urinary tract diagnosed in Western Australia with those published. • To determine whether urinary tract IP requires post-treatment cystoscopic follow-up. Patients And Methods: • Clinical and pathological details were summarized for 41 cases of IP of the urinary tract diagnosed in Western Australia between 1998 and 2010. • Publications on IP of the urinary tract were reviewed and summarized. Results: • IP of the urinary tract is a rare benign tumour most commonly diagnosed in older men presenting with haematuria or symptoms of lower urinary tract obstruction. • IP is most frequently identified in the bladder neck or trigone as a polypoid growth with a smooth surface. • The major differential diagnosis is transitional cell carcinoma (TCC) with an inverted growth pattern, with differentiation based mainly on morphological criteria. • Treatment involves transurethral resection for lower urinary tract lesions whereas upper urinary tract IPs are resected by ureteroscopy, percutaneous endoscopy, partial ureterectomy or nephroureterectomy. 
• IP is weakly associated with a history of TCC and with increased risk of concomitant or subsequent TCC. Conclusions: • Based on the association with TCC, post-treatment follow-up for IP of the urinary tract should include cystoscopic follow-up. abstract_id: PUBMED:8301506 Inverted papilloma of the urinary bladder in a boy. Inverted papilloma of the urinary bladder is not uncommon in adults. In children, only two cases had been reported to our knowledge. Another pediatric case is described herein, in which the lesion occurred in a 9-year-old boy; he is the youngest patient reported to have this condition. abstract_id: PUBMED:531531 Urinary bladder tumours in children. Case report of inverted papilloma. A short review is given of tumours of the urinary bladder in children, and the first case of an inverted papilloma in a child is reported. The tumour was situated in the trigone of the bladder and was successfully extirpated, with no recurrence at follow-up 9 months later. It was histologically typical of these tumours. The 105 patients described in the literature had an average age of 55 years, whereas the present case is the first to be described arising during the first two decades of life. The most frequent symptoms of such tumours are haematuria and urinary obstruction. It is suggested that the treatment of choice is transurethral excision. The term Brunnian adenoma is recommended for use with this tumour, in order to avoid the impression given by the name papilloma of the bladder, that the tumour is malignant. Answer: The necessity for rigorous surveillance of inverted papilloma (IP) of the urinary bladder is debated in the literature. Some studies suggest that long-term cystoscopic surveillance may not be necessary for patients with histologically proven solitary bladder IP with no associated transitional cell carcinoma (TCC). 
For instance, one study reported that of 52 patients with an average follow-up of 62 months, there was no recurrence of IP, suggesting benign biologic behavior and questioning the need for surveillance (PUBMED:16904447). Another study with 24 patients found no evidence of recurrence over an average follow-up of 25.8 months, leading to the conclusion that rigorous follow-up protocols similar to those for TCC may not be necessary (PUBMED:21131756). However, other studies advocate for surveillance due to the potential association of IP with TCC and the possibility of recurrence. A review of 322 cases from the literature found a recurrence rate of 3.85% and associations with previous, simultaneous, and subsequent TCC, supporting the recommendation for non-invasive surveillance with flexible cystoscopy (PUBMED:15839967). Another study, which excluded patients with a history of urothelial carcinoma, found no progression of IP on long-term follow-up, suggesting that continuous surveillance might not be necessary if strict diagnostic criteria are followed, a complete resection can be ascertained, and no previous or concurrent urothelial malignancies are documented (PUBMED:24074985). The association of IP with TCC is also highlighted by a study that emphasizes the need for post-treatment cystoscopic follow-up based on this association (PUBMED:21492372). Moreover, the presence of cytologic atypia and suspicious urine cytology findings in some cases of IP necessitates careful differentiation from TCC with an inverted pattern (PUBMED:16904447). In conclusion, while some evidence suggests that rigorous surveillance may not be necessary for all patients with IP, particularly in the absence of associated TCC and with strict adherence to diagnostic and treatment criteria, other studies recommend surveillance due to the potential for recurrence and association with TCC. 
The decision for surveillance should be individualized based on patient-specific factors, including the presence of atypia, completeness of resection, and history of urothelial carcinoma.
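The pooled figures cited in the answer above (a 3.85% recurrence rate across 322 reported cases) are point estimates, and a confidence interval conveys how precise such a pooled proportion is. A minimal sketch using the Wilson score interval follows; the event count of 12 is an approximate back-calculation from the reported percentage, not a figure taken from the source:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion (k events out of n trials)."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return p, centre - half, centre + half

# Approximately 3.85% of 322 pooled cases corresponds to about 12 recurrences.
p, lo, hi = wilson_ci(12, 322)
print(f"recurrence = {p:.2%}, 95% CI ({lo:.2%}-{hi:.2%})")
```

The Wilson interval is preferred over the simple Wald interval for small proportions like this one, since it cannot extend below zero.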
Instruction: Childhood socio-economic position and adult smoking: are childhood psychosocial factors important? Abstracts: abstract_id: PUBMED:21131349 Childhood socio-economic position and adult smoking: are childhood psychosocial factors important? Evidence from a British birth cohort. Background: Childhood socio-economic position (SEP) is associated with adult smoking status. Previous studies have investigated mediation by educational attainment. The aim of this study is to examine whether childhood psychosocial factors (cognitive ability, psychosocial adjustment and parental involvement) are important in the association between childhood SEP and adult smoking status over and above educational attainment in a large prospective birth cohort study. Methods: Data on 7709 participants from the National Child Development Study birth cohort from Great Britain were used in this study. Multinomial logistic regression was used to examine the associations of childhood SEP and childhood psychosocial factors with adult smoking status, both bivariate and mutually adjusted, and then additionally adjusted for confounders and participant's educational attainment. Analyses were conducted separately for men and women. Results: Childhood SEP is an important determinant of adult smoking status, even after adjustment for childhood psychosocial factors and educational attainment. Parental involvement, cognitive ability and psychosocial adjustment were all associated with adult smoking status for both men and women. Also parental involvement for men, and both parental involvement and psychosocial adjustment for women, remain important determinants of adult smoking status over and above childhood SEP, other childhood psychosocial factors and educational attainment. 
Conclusions: These findings add to the evidence base that childhood disadvantage is associated with adult smoking behaviours and highlight the importance of the early childhood social environment for the development of these behaviours. abstract_id: PUBMED:24604918 Childhood socio-economic status and ethnic disparities in psychosocial outcomes in New Zealand. Objective: The present study examined the extent to which childhood socio-economic status (SES) could account for differences in adult psychosocial outcomes between Māori and non-Māori individuals in a birth cohort of more than 1000 individuals studied to age 30. Methods: Data were gathered on three measures of childhood SES (family SES, family living standards, family income) and adult psychosocial outcomes including mental health, substance use, criminal offending, and education/welfare dependence outcomes, as part of a longitudinal study of a New Zealand birth cohort (the Christchurch Health and Development Study). Results: Those reporting Māori ethnicity had significantly (p < 0.0001) poorer scores on the three measures of childhood SES, with estimates of Cohen's d indicating a moderate effect size. Māori cohort members also had significantly (p < 0.05) greater rates of adverse psychosocial outcomes in adulthood. Controlling for childhood SES reduced the magnitude of the ethnic differences in psychosocial outcomes, but did not fully explain the differences between Māori and non-Māori. Adjustment for childhood SES had the strongest effect on education/welfare dependence, but weaker effects on mental health, substance use, and criminal offending. Conclusions: Improvements in SES among Māori in New Zealand may, to some extent, ameliorate the long-standing disparities in psychosocial well-being between Māori and non-Māori. However, efforts to improve Māori well-being will require an approach that moves beyond a sole focus on rectifying socio-economic disadvantage.
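The abstract above summarizes group differences with Cohen's d, a standardized mean difference. As a minimal illustration of how that statistic is computed with a pooled standard deviation, the scores below are hypothetical and are not drawn from the cited study:

```python
import math

def cohens_d(xs, ys):
    """Cohen's d: difference in group means divided by the pooled SD."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    # Sample variances (n - 1 denominator) for each group.
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

# Hypothetical SES scores for two groups (illustrative only).
group_a = [48, 52, 50, 47, 53, 49]
group_b = [44, 46, 45, 43, 47, 45]
print(round(cohens_d(group_a, group_b), 2))
```

By the usual rule of thumb, d around 0.2 is a small effect, 0.5 moderate, and 0.8 large.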
abstract_id: PUBMED:23267372 Association of adult obesity and blood pressure with adulthood socio-economic position. Background: The purpose of this study is to investigate the effect of childhood and adulthood socio-economic position on selected cardiovascular risk factors including obesity, blood pressure level and smoking behavior. Materials And Methods: This is a cross-sectional study performed on 479 individuals, selected by random cluster sampling from men and women aged 30-50 years living in Esfahan. Their demographic characteristics, education, occupation and smoking behavior were questioned. Their weight, height and blood pressure were also measured, and their BMI (Body Mass Index) was calculated. The data were analyzed with SPSS 19 software. Results: In men, the odds ratio for ever smoking versus never smoking at higher levels of education in comparison with the lower levels was 6.08 (2.65-14.11); for manual occupation versus non-manual occupation, it was 3.55 (1.88-6.68). The odds ratio for obesity and overweight versus no overweight, for manual occupation versus non-manual occupation, was 3.12 (1.81-5.40) in men, and for father's occupation it was 2.03 (1.10-3.74). In women, education, with an odds ratio of 2.11 (1.17-3.82), and father's occupation, with an odds ratio of 6.63 (3.50-12.58), altered the chance of being obese or overweight. Also, in women, the mean systolic blood pressure was significantly lower at higher educational levels and in those whose fathers' occupations were manual, but lower in manual workers. Conclusion: Current socio-economic position is associated with obesity and smoking behavior, particularly in men. Childhood socio-economic position increases the chance of obesity and higher blood pressure, particularly in women. abstract_id: PUBMED:25827289 Diet quality in older age: the influence of childhood and adult socio-economic circumstances. Socio-economic gradients in diet quality are well established.
However, the influence of material socio-economic conditions, particularly in childhood, and the use of multiple disaggregated socio-economic measures on diet quality have been little studied in the elderly. In the present study, we examined childhood and adult socio-economic measures, and social relationships, as determinants of diet quality cross-sectionally in 4252 older British men (aged 60-79 years). A FFQ provided data on daily fruit and vegetable consumption and the Elderly Dietary Index (EDI), with higher scores indicating better diet quality. Adult and childhood socio-economic measures included occupation/father's occupation, education and household amenities, which were combined to create composite scores. Social relationships included social contact, living arrangements and marital status. Both childhood and adult socio-economic factors were independently associated with diet quality. Compared with non-manual social class, men of childhood manual social class were less likely to consume fruit and vegetables daily (OR 0.80, 95% CI 0.66, 0.97), as were men of adult manual social class (OR 0.65, 95% CI 0.54, 0.79), and less likely to be in the top EDI quartile (OR 0.73, 95% CI 0.61, 0.88), similar to men of adult manual social class (OR 0.66, 95% CI 0.55, 0.79). Diet quality decreased with increasing adverse adult socio-economic scores; however, the association with adverse childhood socio-economic scores diminished with adjustment for adult social class. A combined adverse childhood and adulthood socio-economic score was associated with poor diet quality. Diet quality was most favourable in married men and those not living alone, but was not associated with social contact. Diet quality in older men is influenced by childhood and adulthood socio-economic factors, marital status and living arrangements.
abstract_id: PUBMED:25220689 Socio-economic position and cardiovascular risk in rural Indian adolescents: evidence from the Andhra Pradesh children and parents study (APCAPS). Objectives: This study examined the association between socio-economic position and cardiovascular risk factors in adolescents to investigate whether childhood socio-economic position is a risk factor for future cardiovascular disease, independently of adult behaviours. Study Design And Methods: Participants (n = 1128, 46% girls, aged 13-18 years) were members of a birth cohort (Andhra Pradesh Children and Parents Study or APCAPS) established to investigate long-term effects of a pregnancy and childhood nutritional supplementation trial conducted in 29 villages near Hyderabad in South India. Cross-sectional associations between socio-economic position and cardiovascular risk factors were examined using linear regression models. Results: The mean BMI was 16.7 kg/m(2) for boys and 17.8 kg/m(2) for girls. Socio-economic position was positively associated with fat mass index (0.15 kg/m(2); 95% CI: 0.05-0.25) and inversely associated with central-peripheral skinfold ratio (-0.04; 95% CI: -0.06 to -0.01) and, in boys, fasting triglycerides (-0.05; 95% CI: -0.09 to -0.01). The association of socio-economic position with other risk factors (blood pressure, arterial stiffness, fasting glucose, insulin and cholesterol) was weak and inconsistent, and did not persist after adjustment for potential confounders, including age, sex, pubertal stage, height, adiposity and nutrition supplementation. Conclusions: The study thus showed that lower socio-economic position may be associated with greater central adiposity and higher triglyceride levels in these settings. The socio-economic gradient in cardiovascular risk may strengthen in the future with later economic and lifestyle changes. Cardiovascular disease prevention strategies should therefore focus on youth from the low-income group.
abstract_id: PUBMED:18245036 Childhood and adulthood socio-economic position and midlife depressive and anxiety disorders. This paper investigates how childhood socio-economic position influences the risk for midlife depressive and anxiety disorders at 45 years of age, assessed by the Clinical Interview Schedule in 9377 participants of the 1958 British Birth Cohort. Socio-economic position was measured by Registrar General Social Class in childhood and adulthood. The association of paternal manual socio-economic position with any diagnosis at 45 years of age was accounted for after adjustment for adult socio-economic position. Manual socio-economic position in women at 42 years of age was associated with midlife depressive disorder and any diagnosis; these associations were diminished by adjustment for childhood psychological disorders. Effects of childhood socio-economic position on adult depressive disorders may be mediated through adult socio-economic position. abstract_id: PUBMED:28362354 Socio-Economic Position and Suicidal Ideation in Men. People in low socio-economic positions are over-represented in suicide statistics and are at heightened risk for non-fatal suicidal thoughts and behaviours. Few studies have tried to tease out the relationship between individual-level and area-level socio-economic position, however. We used data from Ten to Men (the Australian Longitudinal Study on Male Health) to investigate the relationship between individual-level and area-level socio-economic position and suicidal thinking in 12,090 men. We used a measure of unemployment/employment and occupational skill level as our individual-level indicator of socio-economic position. 
We used the Index of Relative Socio-Economic Disadvantage (a composite multidimensional construct created by the Australian Bureau of Statistics that combines information from a range of area-level variables, including the prevalence of unemployment and employment in low skilled occupations) as our area-level indicator. We assessed suicidal thinking using the Patient Health Questionnaire (PHQ-9). We found that even after controlling for common predictors of suicidal thinking, low individual-level and area-level socio-economic position heightened risk. Individual-level socio-economic position appeared to exert the greater influence of the two, however. There is an onus on policy makers and planners from within and outside the mental health sector to take individual- and area-level socio-economic position into account when they are developing strategic initiatives. abstract_id: PUBMED:23813840 Childhood and adult socio-economic position and social mobility as determinants of low back pain outcomes. Background: Low back pain (LBP) is a prevalent problem and tends to be socio-economically patterned. Relatively little is known about life-course socio-economic circumstances as determinants of different types of LBP. Our aim was to examine whether childhood and adult socio-economic position and social mobility are associated with radiating and non-specific LBP and sciatica. Method: Data were derived from the Young Finns Study (n = 2231). Childhood socio-economic position was based on parental education, occupational class and family income at baseline in 1980. Data on own education and LBP outcomes were collected at the end of follow-up in 2007. Social mobility was based on parental and own education. Covariates were composed of age, parental body mass index and smoking. Results: Both childhood and own socio-economic position remained associated with radiating LBP and sciatica after adjustments. However, the associations varied by socio-economic indicator and gender. 
Stable lower socio-economic position and downward mobility were associated with radiating LBP. Conclusion: Childhood socio-economic circumstances affect the risk of radiating LBP and sciatica in adulthood. To prevent low back disorders, early socio-economic circumstances need to be considered alongside own socio-economic position. abstract_id: PUBMED:30970149 The impact of childhood cancer on parents' socio-economic situation-A systematic review. Objective: Taking care of children diagnosed with cancer may have considerable consequences on parents' socio-economic situation. Our systematic review aimed to evaluate and synthesise the evidence on the impact of childhood cancer on parents' socio-economic situation. Methods: Systematic literature searches for articles published between January 2000 and January 2019 were performed in PubMed, Scopus, and PsycINFO. Findings of eligible articles were narratively synthesised and quality appraised. Results: Our systematic review included 35 eligible articles. Childhood cancer had a substantial impact on parents' socio-economic situation across all studies. This impact varied largely by geographical region. We observed a high prevalence of disruptions in parental employment such as job quitting or job loss, particularly among mothers. The associated income losses further contributed to families' perceived financial burden in addition to increased cancer-related expenses. Adverse socio-economic consequences were most pronounced shortly after diagnosis but persisted into early survivorship for certain groups of parents. We identified families of children diagnosed with haematological cancers, younger age at diagnosis, and lower parental socio-economic position to be at particular risk for adverse socio-economic consequences. Conclusions: Following the child's cancer diagnosis, parents experience a broad range of adverse socio-economic consequences. 
Further effort is needed to systematically implement an assessment of financial hardship in paediatric oncology together with appropriate support services along the cancer trajectory. abstract_id: PUBMED:12878101 Socio-economic adversity and psychosocial adjustment: a developmental-contextual perspective. The aim of this paper is twofold: firstly to investigate whether the association between childhood and adult psychosocial adjustment can be explained by socio-economic adversity experienced during childhood, and secondly to explore the role of family socio-economic disadvantage and psychological development in explaining adult social inequality in psychological well-being. A developmental-contextual perspective is adopted to analyse the pathways linking childhood experiences to adult functioning in a changing socio-historical context. The study draws on longitudinal data collected for two cohorts of about 30,000 individuals born in Great Britain 12 years apart. Structural equation modelling is used to assess the long-term influence of socio-economic adversity on psychosocial adjustment, and to compare different explanatory models of health inequalities. The results reject a simple selection or social causation argument, suggesting that both dynamics operate in life course development. The effects of social risk cumulate throughout the life course, influencing both behaviour adjustment during childhood and adult psychosocial functioning. It is concluded that the explanation of health differences in adult life must account for the reciprocal interaction between individual behaviour and social circumstances. Answer: Childhood socio-economic position (SEP) is a significant determinant of adult smoking status, and childhood psychosocial factors such as cognitive ability, psychosocial adjustment, and parental involvement also play a role in this association. 
In a study using data from the National Child Development Study birth cohort from Great Britain, it was found that even after adjusting for educational attainment, childhood SEP remained an important determinant of adult smoking status. Additionally, parental involvement for men and both parental involvement and psychosocial adjustment for women were significant determinants of adult smoking status, over and above childhood SEP, other childhood psychosocial factors, and educational attainment (PUBMED:21131349). This suggests that childhood psychosocial factors are indeed important in the relationship between childhood SEP and adult smoking behavior, and highlights the influence of the early childhood social environment on the development of smoking habits.
Instruction: Does a new surgical residency program increase operating room times? Abstracts: abstract_id: PUBMED:24209643 Does a new surgical residency program increase operating room times? Background: Our country faces a shortage of surgeons; hence, we may anticipate the development of new surgery residencies. Therefore, the question of the effect of a new program on operating room times (ORT) is important. Our primary aim was to compare ORT of 3 common procedures done by attendings alone vs ORT of cases with residents. Methods: We queried records of 1458 patients from the JFK Medical Center database for laparoscopic cholecystectomy, open inguinal hernia repair, and laparoscopic appendectomy from July 2010 to July 2012. We divided the sample into 2 groups: "attending alone" (2010-2011) and "with residents" (2011-2012). The ORT was calculated from "Cut time" and "Close time," as recorded in the OR. ORT for the two groups was compared using the unpaired t test. Results: Of the total number of patients, 778 underwent laparoscopic cholecystectomy, 407 underwent open inguinal hernia repair, and 273 underwent laparoscopic appendectomy; of these, 620, 315, and 211 procedures, respectively, were done by the attending alone and 158, 92, and 62, respectively, were done with residents. Differences in ORT for the 3 types of surgery were statistically significant (p < 0.001). There was no statistically significant difference between the first and second half of the academic year in residents' ORT. Conclusions: Resident involvement increases ORT. Cost analysis considering OR time and anesthesia time vs federal funding for Graduate Medical Education is complicated. The benefit of new programs in diminishing the shortage of surgeons cannot be underestimated. abstract_id: PUBMED:36113130 Surgical Residents' Awareness of the Costs of Common Operating Room Supplies. 
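The group comparison in the abstract above used an unpaired t test on ORT. As an illustration only, a minimal pure-Python sketch of the Student's two-sample (pooled-variance) t statistic is shown below; the ORT values are hypothetical placeholders, not study data.

```python
import statistics
from math import sqrt

def unpaired_t(sample_a, sample_b):
    """Student's two-sample (unpaired) t statistic with pooled variance."""
    n1, n2 = len(sample_a), len(sample_b)
    m1, m2 = statistics.fmean(sample_a), statistics.fmean(sample_b)
    v1, v2 = statistics.variance(sample_a), statistics.variance(sample_b)
    # Pooled variance weights each group's variance by its degrees of freedom
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / sqrt(pooled * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return t, df

# Hypothetical ORT values in minutes (illustrative only, not study data)
attending_alone = [52, 48, 55, 50, 47, 53]
with_residents = [61, 58, 65, 60, 63, 59]

t, df = unpaired_t(attending_alone, with_residents)
```

A negative t here simply reflects that the hypothetical "with residents" times are longer; in practice one would also look up the p-value for t at df degrees of freedom.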
Background: The Accreditation Council for Graduate Medical Education mandates that residency programs incorporate cost awareness into patient care. This presents a challenge for surgical residents because they must understand operating room costs in addition to other expenses. Trainees' understanding of operating room supply costs is not well understood. Methods: A survey was distributed to surgical residents (N = 73) at an urban, university-based residency program. Residents estimated the costs of 21 single-use operating room items. Descriptive statistics and a regression analysis were calculated. Results: The response rate was 62%. Respondents accurately estimated costs for a median of 7/21 items, with error ranging from 26% to 5438%. They substantially underestimated the three highest-cost items. Increasing post-graduate year did not improve estimation accuracy (β = .233, P = .138). Discussion: Residents have a poor understanding of single-use item costs, and this does not improve with post-graduate training, suggesting inefficiencies. There is opportunity to educate residents and ultimately decrease surgical health care costs. abstract_id: PUBMED:32741693 Operating Room Educational Environment in Canada: Perceptions of Surgical Residents. Objective: The educational experience in operating rooms (OR) plays a central role in the transformation of a trainee into a surgeon. As Canadian residency programs transition to competency-based medical education, and since most surgical competencies are attained in the OR, we investigated the perceptions of Canadian surgical residents about their OR learning environment. Design: Cross-sectional, national survey. Setting: An online questionnaire, including the validated 40-item operating room educational environment measure (OREEM) inventory, was sent to residency programs of all surgical specialties in Canada. 
The OREEM assesses the trainees' perceptions of the "trainer and training," "learning opportunities," "atmosphere in the operating room," and "supervision, workload, and support." Each individual item was rated using a 5-point Likert scale ranging from "strongly disagree" (score 1) to "strongly agree" (score 5). The mean score of each individual item was calculated, along with the mean score of each subscale and an overall score. Participants: Surgical residents in Canada. Results: Four hundred thirty residents were included for final analysis. The overall mean OREEM score was 3.72 ± 0.4. "Atmosphere in the OR" was the subscale with the highest mean score (3.87 ± 0.5), while "supervision, workload, and support" had the lowest subscale mean score (3.49 ± 0.5). The overall OREEM mean score for junior and senior residents was 3.67 ± 0.4 and 3.80 ± 0.4, respectively (p = 0.001). No significant differences were seen in the mean OREEM score between males and females or different surgical programs. Nevertheless, general surgery had the lowest "supervision, workload, and support" subscale score (3.27 ± 0.5; p < 0.001). Conclusions: The overall educational environment in the OR may be considered satisfactory as perceived by Canadian surgical residents; however, several areas for potential improvement are identified and suggestions for optimizing the learning environment are described. abstract_id: PUBMED:23520071 Operative time and cost of resident surgical experience: effect of instituting an otolaryngology residency program. Objective: Describe the procedure length difference between surgeries performed by an attending surgeon alone compared with the resident surgeon supervised by the same attending surgeon. Study Design: Case series with chart review. Setting: Tertiary care center and residency program. 
Subjects And Methods: Six common otolaryngologic procedures performed between August 1994 and May 2012 were divided into 2 cohorts: attending surgeon alone or resident surgeon. This division coincided with our July 2006 initiation of an otolaryngology-head and neck surgery residency program. Operative duration was compared between cohorts with confounding factors controlled. In addition, the direct result of increased surgical length on operating room cost was calculated and applied to departmental and published resident case log report data. Results: Five of the 6 procedures evaluated showed a statistically significant increase in surgery length with resident involvement. Operative time increased 6.8 minutes for a cricopharyngeal myotomy (P = .0097), 11.3 minutes for a tonsillectomy (P < .0001), 27.4 minutes for a parotidectomy (P = .028), 38.3 minutes for a septoplasty (P < .0001), and 51 minutes for tympanomastoidectomy (P < .0021). Thyroidectomy showed no operative time difference. Cost of increased surgical time was calculated per surgery and ranged from $286 (cricopharyngeal myotomy) to $2142 (mastoidectomy). When applied to reported national case log averages for graduating residents, this resulted in a significant increase of direct training-related costs. Conclusion: Resident participation in the operating room results in increased surgical length and additional system cost. Although residency is a necessary part of surgical training, associated costs need to be acknowledged. abstract_id: PUBMED:36189098 Improving Operating Room Efficiency: Relocating a Surgical Oncology Program Within a Health Care System. 
Background: To meet increased community and regional needs for quality services, our hospital system concluded that its established surgical oncology program, consisting of gynecologic oncology (4 physicians), surgical oncology (2 physicians), and otolaryngologic oncology (2 physicians), would be best served by the transition of the comprehensive surgical oncology program to a new oncology-naive hospital. We describe the overall strategy and approach involved with this move, its implementation, operating room efficiency results, and physician satisfaction associated with the relocation. Methods: The purpose of the systematic plan for relocation, which was developed and refined during the 2 years preceding the move, was to facilitate a collective awareness and understanding of important patient-centered concepts and essential workflow. All parties involved in direct patient cancer care participated in multiple workgroups to successfully transition the surgical oncology practice. Following the transition to the oncology-naive hospital, components of the operative cases and surgical data were prospectively collected for the initial 6 weeks and compared to retrospective data from the last 8 weeks at the established hospital. The surgical day for each surgeon was deconstructed, and measured variables included total surgical cases, total surgical hours, surgical minutes per case, total anesthesia hours, first case on-time surgical starts, surgical stretcher wheels out to surgical stretcher wheels in, surgical stretcher wheels out to next case start, case end to postanesthesia care unit (PACU), and case end to case start. Results: Five hundred twenty-nine surgical cases encompassing 1,076 anesthesia hours and 710 surgical hours were completed during the 14-week evaluation period. The gynecologic oncologists completed the majority of surgical procedures in both settings. 
The percentage of first case on-time surgical starts initially decreased during the 6-week interval at the oncology-naive hospital, but interval subset analysis suggested a return to the pre-move norm. Surgical stretcher wheels out to surgical stretcher wheels in had a wide range (9 minutes to 305 minutes) for all surgical sections, but no statistically significant difference was seen overall or for any surgical section. Case end to PACU significantly increased for gynecologic oncology but not for surgical oncology or otolaryngologic oncology. Overall case end to case start times decreased nonsignificantly (63.7 ± 3.1 mean minutes vs 60.3 ± 1.7 mean minutes) following the move. A physician survey found that physicians' expectations were met in terms of the move occurring smoothly without major issues, surgical scheduling and accommodation, anesthesia services, and surgical personnel. Physicians indicated less satisfaction with quality and availability of instrumentation. Conclusion: The transfer of established surgical oncology services to an oncology-naive hospital was associated with early surgeon and operating room staff support, as well as process and programmatic alignment among stakeholders. The success of this transition required transparency, open and honest communication, and problem solving at all levels. The move of a surgical oncology program to an oncology-naive hospital was deemed successful without deterioration of time-related variables associated with operating room efficiency and physician satisfaction. The breakdown and analysis of key components of the surgical day offered additional opportunities for quality improvement in operating room efficiency. abstract_id: PUBMED:29936982 Reducing cost and improving operating room efficiency: examination of surgical instrument processing. Background: Operating room efficiency can be compromised because of surgical instrument processing delays. 
We observed that many instruments in a standardized tray were not routinely used during thyroid and parathyroid surgery at our institution. Our objective was to create a streamlined instrument tray to optimize operative efficiency and cost. Materials And Methods: Head and neck surgical instrument trays were evaluated by operating room team leaders. Instruments were identified as either necessary or unnecessary based on use during thyroidectomies and parathyroidectomies. The operating room preparation time, tray weights, number of trays, and number of instruments were recorded for the original and new surgical trays. Cost savings were calculated using estimated reprocessing cost of $0.51 per instrument. Results: Three of 13 head and neck trays were converted to thyroidectomy and parathyroidectomy trays. The starting head and neck surgical set was reduced from two trays with 98 total instruments to one tray with 36 instruments. Tray weight decreased from 27 pounds to 10 pounds. Tray preparation time decreased from 8 min to 3 min. The new tray saved $31.62 ($49.98 to $18.36) per operation in reprocessing costs. Projected annual savings with hospitalwide implementation is over $28,000.00 for instrument processing alone. Unmeasured hospital savings include decreased instrument wear and replacement frequency, quicker operating room setup, and decreased decontamination costs. Conclusions: Optimizing surgical trays can reduce cost, physical strain, preparation time, decontamination time, and processing times, and streamlining trays is an effective strategy for hospitals to reduce costs and increase operating room efficiency. abstract_id: PUBMED:26952688 Estimating Surgical Procedure Times Using Anesthesia Billing Data and Operating Room Records. Objective: The median time required to perform a surgical procedure is important in determining payment under Medicare's physician fee schedule. 
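The per-case savings reported in the instrument-tray abstract above follow directly from the stated $0.51 per-instrument reprocessing rate and the instrument counts; a few lines of arithmetic reproduce the reported figures.

```python
COST_PER_INSTRUMENT = 0.51  # reprocessing cost per instrument, from the abstract

original_tray = 98 * COST_PER_INSTRUMENT   # two trays, 98 instruments total
new_tray = 36 * COST_PER_INSTRUMENT        # one streamlined tray, 36 instruments
savings_per_case = original_tray - new_tray

# Reproduces the reported $49.98, $18.36, and $31.62 per operation
print(f"${original_tray:.2f} -> ${new_tray:.2f}, saving ${savings_per_case:.2f} per case")
```

The projected annual figure (over $28,000) additionally depends on the hospital-wide case volume, which the abstract does not break out, so it is not recomputed here.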
Prior studies have demonstrated that the current methodology of using physician surveys to determine surgical times results in overstated times. To measure surgical times more accurately, we developed and validated a methodology using available data from anesthesia billing data and operating room (OR) records. Data Sources: We estimated surgical times using Medicare 2011 anesthesia claims and New York Statewide Planning and Research Cooperative System 2011 OR times. Estimated times were validated using data from the National Surgical Quality Improvement Program. We compared our time estimates to those used by Medicare in the fee schedule. Study Design: We estimate surgical times via piecewise linear median regression models. Principal Findings: Using 3.0 million observations of anesthesia and OR times, we estimated surgical time for 921 procedures. Correlation between these time estimates and directly measured surgical time from the validation database was 0.98. Our estimates of surgical time were shorter than the Medicare fee schedule estimates for 78 percent of procedures. Conclusions: Anesthesia and OR times can be used to measure surgical time and thereby improve the payment for surgical procedures in the Medicare fee schedule. abstract_id: PUBMED:16696227 Education in postgraduate surgical schools: the role of the surgical tutor as supervisor in the operating room Postgraduate surgical education is in an era of transition, aiming to produce physicians with the skills and attitudes needed by modern health care. Many studies have examined the impact of surgical tutoring in surgical residency programs in US medical schools, while few experiences have been reported from European universities. 
The new Italian guidelines for postgraduate education require structured clinical learning under the supervision of a tutor (the "attending surgeon" in surgical residency); it is a challenge to describe the role of this teacher and educator and to implement an effective evaluation of operating room teachers. A confidential survey was administered to 14 surgical residents at the authors' university. Questions concerned the residents' surgical activity, their perception of the educational role of tutors in the operating room, and tutors' teaching behaviors. Residents identified five behaviors they perceive as signs of tutor excellence in the clinical and operating room setting. Consistent with studies from other universities, residents need a tutor who is not only competent but also has good teaching skills and a mature self-perception as an educator. Faculties should provide training programs to help surgeons improve their teaching skills and behaviors. abstract_id: PUBMED:33632184 A structured program for teaching pancreatojejunostomy to surgical residents and fellows outside the operating room: a pilot study. Background: Pancreatojejunostomy (PJ) is one of the most difficult and challenging abdominal surgical procedures. There are no appropriate training systems available outside the operating room (OR). We developed a structured program for teaching PJ outside the OR. We describe its development and results of a pilot study. Methods: We have created this structured program to help surgical residents and fellows acquire both didactic knowledge and technical skills to perform PJ. A manual was created to provide general knowledge about PJ and the specific PJ procedure used in our institution. Based on questionnaires completed by trainers and trainees, the procedure for PJ was divided into twelve steps and described in detail. After creating the manual, we developed organ models, needles and a frame box for simulation training. 
Three residents (PGY3-5) and three fellows (PGY6 or above) participated in a pilot study. Objective and subjective evaluations were performed. Results: Trainees learn about PJ by reading the procedure manual, acquiring both general and specific knowledge. We conducted simulation training outside the OR using the training materials created for this system. They simulate the procedure with surgical instruments as both primary and assistant surgeon. In this pilot study, as objective assessments, the fellow-group took less time to complete one anastomosis (36 min vs 48 min) and had higher scores in the objective structured assessment of technical skill (average score: 4.1 vs 2.0) compared to the resident-group. As a subjective assessment, the confidence to perform a PJ anastomosis increased after simulation training (from 1.6 to 2.6). Participants considered that this structured teaching program is useful. Conclusion: We developed a structured program for teaching PJ. By implementing this program, learning opportunities for surgical residents and fellows can be increased as a complement to training in the OR. abstract_id: PUBMED:7986513 Operating room start times and turnover times in a university hospital. Study Objective: To measure the start time for the first case of the day and the turnover times for subsequent cases in the operating rooms (ORs) at an academic hospital. Design: Prospective study. Setting: ORs at a university medical center. Patients: All patients undergoing an operative procedure that started between 7 A.M. and 5 P.M. weekdays for the period January 1, 1989, through June 30, 1989. Interventions: For each patient, the following times were recorded: OR ready, patient enters OR, anesthesia induction complete, surgery start, surgery end, patient leaves OR. Measurements And Main Results: Patients were brought into the OR just before the scheduled start time. Surgical incision was made 21 to 49 minutes after the patient was brought into the OR. 
Room turnover time (time from patient in to patient out) was almost uniformly 36 minutes. Patient turnover time (time from end of surgery in one patient to end of induction of next patient) was generally 1 hour. Turnover times were shorter for those ORs in which primarily monitored anesthesia care was provided and longer in ORs in which patients routinely required invasive monitoring. Conclusions: The scheduled start time for the first case of the day was generally the time the patient was brought into the OR. Because of the variable amount of time required for anesthesia induction and surgical preparation and draping, incision occurred 21 to 49 minutes later. The time between cases when no surgery was occurring was significantly longer than room turnover time because of the need to wake up one patient and induce the following patient. Because of a lack of standardized definitions, there is probably a strong perceptual difference among anesthesiologists, OR nurses, and surgeons when viewing start and turnover times. At our own teaching institution, shortening turnover times would increase the amount of elective OR time available, but the impact would not be significant because the number of procedures done per OR each day is low. Answer: Yes, a new surgical residency program does increase operating room times. A study that compared operating room times (ORT) for three common procedures performed by attending surgeons alone versus cases with resident involvement found that resident involvement led to statistically significant increases in ORT for all three types of surgery (laparoscopic cholecystectomy, open inguinal hernia repair, and laparoscopic appendectomy) (PUBMED:24209643). Additionally, another study examining the effect of instituting an otolaryngology residency program showed that five out of six evaluated procedures had a statistically significant increase in surgery length when residents were involved (PUBMED:23520071). 
These findings suggest that the participation of residents in the operating room results in longer surgical times, which can lead to additional system costs. However, the importance of residency programs in addressing the shortage of surgeons and providing essential training cannot be underestimated (PUBMED:24209643).
Instruction: Is PSA related to serum cholesterol and does the relationship differ between black and white men? Abstracts: abstract_id: PUBMED:26332663 Is PSA related to serum cholesterol and does the relationship differ between black and white men? Background: Previously, in prostate cancer-free men, we found that statin initiation resulted in a 4.1% decline in PSA. This decline in PSA was proportional to the decline in cholesterol, suggesting a link between cholesterol and PSA levels. Whether these associations vary by race has not been explored and therefore we examined the association between pre-statin serum cholesterol and PSA in black and white prostate cancer-free men. Methods: We conducted a retrospective, cross-sectional analysis of 1,163 men (709 white and 454 black) without prostate cancer who initiated a statin between 1994 and 2006. Linear regression was used to test the association between pre-statin serum cholesterol and PSA levels, adjusting for potential confounders and stratifying by race. Results: Black men were younger, had higher low-density lipoprotein (LDL) and high-density lipoprotein (HDL) levels (both P < 0.05) and a trend toward higher total cholesterol (P = 0.063). There was no difference in PSA or year of statin prescription by race (P = 0.900 and P = 0.217, respectively). On multivariable analysis, we found a positive correlation between serum PSA and total cholesterol (P = 0.005) and LDL (P = 0.003) in white men, but no association among black men. HDL was not significantly related to PSA levels in black or white men (both P > 0.5). Conclusions: Among prostate cancer-free men about to begin a statin, pre-statin total cholesterol and LDL were correlated with PSA levels in white, but not black men. If confirmed in future studies, these findings suggest that serum cholesterol may be related to prostate biology, and that this effect may vary by race. 
abstract_id: PUBMED:1342269 Serum cholesterol and 20-year mortality in black and white men and women aged 65 and older in the Evans County Heart Study. Serum cholesterol and 20-year mortality rates were studied in 396 Evans County black and white men and women who were 65 years and older and free of prevalent coronary heart disease (CHD) at baseline examination in 1960 to 1962. Previous reports on Evans County men and women younger than 65 found cholesterol levels to be significantly associated with all-cause and CHD mortality in white men, with CHD mortality in black men, and with cardiovascular disease mortality in white women. The independent role of total serum cholesterol as a predictor of CHD and all-cause mortality in the 65-and-older age group was evaluated using Cox proportional hazards models. Among white men, serum cholesterol level was positively associated with CHD mortality (relative risk of 1.54, P < 0.05 for an increment of 40 mg/dL [1.03 mmol/L], or one standard deviation in cholesterol). A significant J-shaped relationship of cholesterol with all-cause mortality was found among white men. Among black women, cholesterol was negatively associated with all-cause mortality. Neither all-cause nor CHD mortality was related to serum cholesterol among black men or white women. Although based on small numbers, the results of this study suggest that in Evans County, total serum cholesterol is an independent predictor of mortality in white men aged 65 and over, while these results should not be generalized to other race-gender groups in this cohort. abstract_id: PUBMED:9836538 Reference ranges for serum prostate-specific antigen in black and white men without cancer. Objectives: To determine the age- and race-specific prostate-specific antigen (PSA) distributions in healthy men in central South Carolina and to compare these to data from other studies. 
Methods: Two thousand ninety-two black men aged 40 to 69 years and white men aged 50 to 69 years from the general population in 11 counties of central South Carolina participated in a prostate cancer educational program. Seventy-two percent of the participants were black (about double the proportion in the general population) and 63% of the men (1319 of 2092) subsequently obtained a PSA determination from their own physician. The distribution of serum PSA was compared with distributions from the Olmsted County study and from the Walter Reed Army Medical Center/Center for Prostate Disease Research study. Results: Older men without cancer had higher PSA levels. Regression analyses yielded an associated increase of about 3.3% per year. Reference ranges for normal PSA in men without cancer (based on their sample 95th percentiles) were zero to 1.9, 3.8, and 5.7 ng/mL in black men aged 40 to 49, 50 to 59, and 60 to 69 years, and zero to 2.7 and 4.9 ng/mL in white men aged 50 to 59 and 60 to 69 years, respectively. Conclusions: Reference ranges for normal serum PSA levels should take into account the population from which they are derived and to which they will be applied. Reference ranges that are useful in the general population can differ from those that are appropriate in a hospital setting. For the general population in central South Carolina, reference ranges for serum PSA levels are lower than previously published reference ranges, particularly among black men. abstract_id: PUBMED:3877458 Menopausal change in serum cholesterol. Black/white differences in Evans County, Georgia. The relationship between menopausal status and total serum cholesterol was examined cross-sectionally in a population of 40- to 54-year-old black and white women of Evans County, Georgia. The relationship differed in blacks and whites as assessed by a significant interaction term in linear regression.
Controlling for age, Quetelet index, and smoking status, white postmenopausal women showed an estimated increase in total serum cholesterol of 25 mg/dl (p less than 0.001) compared with premenopausal whites, whereas black postmenopausal women showed no significant increase in total serum cholesterol compared with premenopausal blacks. The authors were unable to attribute the lack of menopausal effect in blacks to selection or measurement bias, nor did confounding or effect modification by obesity or social status negate their findings. Of six other populations in which menopausal influence on total serum cholesterol has been examined, only Pima Indians failed to show a significant increase. abstract_id: PUBMED:2403106 Differences between black and white men in correlates of high density lipoprotein cholesterol. Although socioeconomic status has been positively related to levels of high density lipoprotein (HDL) cholesterol in white Americans, limited evidence suggests that an inverse association may exist among blacks. These associations were further examined using data collected in 1985-1986 from 3,562 white and 500 black men who ranged in age from 31 to 45 years. Overall, mean levels of HDL cholesterol were 8 mg/dl higher among blacks than among whites and, in each group, levels were related positively to alcohol consumption and negatively to Quetelet index and cigarette smoking. A statistically significant interaction, however, was observed between race and educational achievement: no association was seen between educational achievement and levels of HDL cholesterol among white men, but there was an inverse association among blacks. For example, a 13 mg/dl (55 vs. 42 mg/dl) difference in levels of HDL cholesterol was observed between black and white men who did not complete high school, but only a 3 mg/dl black excess (47 vs. 44 mg/dl) was seen among college graduates. 
About 40 percent of this reduction could be attributed to the effects of cigarette smoking, alcohol consumption, and relative weight. These results indicate that the higher levels of HDL cholesterol frequently found among black men in the United States may not apply to all social strata. Other characteristics, such as physical activity and diet, may account for the differing race-specific associations with educational achievement. abstract_id: PUBMED:10463983 A prospective study of the serum prostate specific antigen concentrations and Gleason histologic scores of black and white men with prostate carcinoma. Background: The stage specific survival rates of black American men with prostate carcinoma are less favorable than those of white American men. The authors conducted a prospective study of the serum prostate specific antigen (PSA) concentrations and Gleason histologic scores of black and white men with newly diagnosed prostate carcinoma to determine whether there were racial differences in these prognostic variables. Methods: At a Veterans Affairs Medical Center between January 1, 1992, and December 31, 1997, clinical stage, Gleason histologic score, serum PSA concentration, prostate volume, and PSA density were determined for 796 consecutive men (465 black and 331 white) who had biopsy-detected prostate carcinoma. Results: The percentages, respectively, of black and white men with local, regional, and metastatic carcinoma were 58 and 72; 22 and 17; and 20 and 11 (P < 0.0001). Of 271 black and 329 white men with local stage cancer, 20% and 12%, respectively, had Gleason 8-10 tumors (P = 0.02), and the age-adjusted risk of Gleason 8-10 cancer was 1.39 times greater for black men (95% confidence interval [CI] = 1.09-2.93). Gleason 8-10 cancer was found in 12 of 68 black (18%) and 5 of 87 white (6%) men with local cancer who were age 65 years or younger (P = 0.02).
Among black and white men with local stage cancer, the mean PSA was 12.9 (95% CI = 11.5-14.4) and 8.5 (95% CI = 7.6-9.4) ng/mL, respectively (P < 0.0001), and among black and white men with regional stage cancer the mean PSA was 53.3 (95% CI = 42.7-63.9) and 35.0 (95% CI = 27.3-42.6) ng/mL, respectively (P = 0.02). The mean PSA of black and white men with local cancer who were age 65 years or younger was 11.6 (95% CI = 8.8-14.4) and 6.9 (95% CI = 5.9-8.0) ng/mL, respectively (P = 0.0009). Conclusions: Disparities in the risk of Gleason score 8-10 cancer for black and white men with local stage disease and in the serum PSA concentrations of black and white men with local and regional stage disease help to explain racial differences in cancer survival. Racial differences in the risk of Gleason 8-10 cancer and in the serum PSA concentrations of men age 65 years or younger have implications regarding the potential benefits of screening for prostate carcinoma in the African American community. abstract_id: PUBMED:11870801 Clinical stage T1c prostate cancer: pathologic outcomes following radical prostatectomy in black and white men. Background: The incidence of prostate cancer in black men is 50% to 70% higher than among age-matched white men. Black men have a twofold higher mortality rate and overall tend to have higher serum prostate-specific antigen (PSA) levels than white men. To determine whether racial differences exist in men whose prostate cancer was diagnosed based solely on an elevated serum PSA level, we compared clinical and pathologic features in black and white men undergoing radical prostatectomy (RP) for clinical stage T1c prostate cancer. Methods: We used a prospectively collected database to identify all men undergoing RP for clinical T1c prostate cancer between July 1995 and October 2000.
A total of 129 consecutive men (56 black men and 73 white men) were compared for age at diagnosis, serum PSA level, biopsy Gleason score, pathologic stage, RP specimen Gleason score, incidence of lymph node metastasis, and incidence of positive surgical margins. Results: Statistically significant differences were not found by race in patients' ages, serum PSA levels, biopsy Gleason score, pathologic stage, incidence of lymph node metastases, or incidence of positive surgical margins. The RP specimen Gleason score was more heterogeneous in black men than white men (P = 0.02). Conclusions: Racial differences in the incidence and mortality rate of prostate cancer are well known, but differences in the clinical and pathologic features between black and white men with prostate cancer identified solely based on an elevated serum PSA level with negative results on digital rectal examination (clinical stage T1c) have been poorly studied. Our results suggest that men with clinical stage T1c prostate cancer have similar clinical and pathologic findings regardless of race. These results suggest that early-detection programs using serum PSA testing for prostate cancer in black men potentially can result in improvements in prostate cancer outcomes in this high-risk group. abstract_id: PUBMED:11068308 Prostate-specific antigen levels in young white and black men 20 to 45 years old. Objectives: To determine the prostate-specific antigen (PSA) levels and PSA change over time in young white and black men 20 to 45 years old. Methods: The Department of Defense Serum Repository, a serum bank that stores all residual serum from the military human immunodeficiency virus screening program at -25 degrees C, was sampled to obtain a total of 588 black and 588 white subjects 20 to 45 years old. This was a retrospective study with only demographic data available on the studied subjects. The samples used for this study were collected between June 24, 1988 and June 12, 1996.
Individuals with a history of prostate disease were excluded by query of a centralized Department of Defense diagnosis database. Three serum specimens evenly distributed over a mean of 6 years were selected for each individual to determine the free and total PSA levels and PSA velocity. The Hybritech Tandem-E PSA assay was used for the total PSA measurement, and the Hybritech Tandem-R assay was used for the free PSA measurement. Results: The baseline serum PSA levels differed by race (P = 0.04). The median (25th, 75th percentile) baseline serum PSA levels for black men 20 to 29, 30 to 39, and 40 to 45 were 0.38 ng/mL (0.26, 0.61), 0.45 ng/mL (0.32, 0.67), and 0.52 ng/mL (0.37, 0.73), respectively. The median baseline serum PSA levels for the same decade groups in white men were 0.38 ng/mL (0.27, 0.57), 0.45 ng/mL (0.28, 0.68), and 0.40 ng/mL (0.26, 0.64), respectively. The PSA velocity was higher in white men than in black men (mean 2.8%/yr and 1.6%/yr, respectively, P = 0.032). Conclusions: These results suggest that although black men 20 to 45 years old have higher baseline serum PSA levels than white men of the same age, the PSA velocity is greater in young white than in young black men. Additional work is needed to determine the clinical significance of these findings. abstract_id: PUBMED:35466781 Allostatic Load, Income, and Race Among Black and White Men in the United States. Research indicates that income is significantly associated with allostatic load (AL) and that this association may differ between White and Black Americans. Most existing income-AL link work focuses on women and less is known about this association among men. Using data from the National Health and Nutrition Examination Survey (NHANES), we examined whether race moderates the association between income and AL among Black and White men in the United States (n = 5,685).
We find that, regardless of income levels, Black men have significantly higher prevalence of being in the high-AL group compared with high-income White men. Our findings suggest that Black men do not receive the same health benefits for increased income relative to their White counterparts. abstract_id: PUBMED:9186343 Prostate specific antigen in black and white men after hormonal therapies for prostate cancer. Purpose: Prostate cancer deaths usually result from proliferation of the androgen independent malignant phenotype, and in the United States the survival of black men with metastatic cancer is less favorable than that of white men. We compared prostate specific antigen (PSA) functions after hormonal therapies in men of both races to investigate potential differences in the biology of androgen independent cancer. Materials And Methods: The PSA nadir after gonadal androgen withdrawal was determined in 217 black and 188 white men with localized or metastatic cancer. The time to PSA elevation and PSA doubling time were determined in 62 black and 27 white men with biochemical relapse. Biochemical response to deferred flutamide treatment and flutamide withdrawal was assessed in 87 and 11 black and 30 and 10 white men, respectively. Results: There were no significant racial differences in the PSA nadir when controlled for clinical stage and pretreatment PSA, or in PSA doubling time when controlled for clinical stage, PSA nadir and month of PSA elevation. The biochemical response to deferred flutamide therapy and flutamide withdrawal was the same in black and white men. Conclusions: The burden and growth rate of androgen independent cancer estimated from PSA functions after gonadal androgen withdrawal, and the impact of deferred antiandrogen therapy on the serum PSA are similar in black and white men. 
These findings suggest that racial differences in the biology of androgen independent carcinoma do not contribute to the inferior survival of black men with metastatic prostate cancer. Answer: Yes, PSA is related to serum cholesterol, and the relationship does differ between black and white men. A study found a positive correlation between serum PSA and total cholesterol as well as low-density lipoprotein (LDL) in white men, but no association among black men. High-density lipoprotein (HDL) was not significantly related to PSA levels in either black or white men (PUBMED:26332663). This suggests that serum cholesterol may be related to prostate biology, and that this effect may vary by race.
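The stratified analysis behind this answer (regressing PSA on cholesterol separately within each race group, as in PUBMED:26332663) can be sketched with a minimal ordinary-least-squares fit. The data below are synthetic and purely illustrative, not values from the study; they are shaped only to mimic the reported pattern (a positive slope in one stratum, a flat slope in the other).

```python
# Illustrative sketch: fit PSA ~ total cholesterol separately by stratum.
# All numbers below are hypothetical, not study data.

def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical strata: total cholesterol (mg/dL) and PSA (ng/mL) pairs.
chol_white = [160, 180, 200, 220, 240]
psa_white = [0.9, 1.0, 1.1, 1.2, 1.3]   # PSA rises with cholesterol
chol_black = [160, 180, 200, 220, 240]
psa_black = [1.1, 1.1, 1.1, 1.1, 1.1]   # flat: no association

_, b_white = ols_fit(chol_white, psa_white)
_, b_black = ols_fit(chol_black, psa_black)
print(f"slope (stratum 1): {b_white:.4f} ng/mL per mg/dL")
print(f"slope (stratum 2): {b_black:.4f} ng/mL per mg/dL")
```

In the real analysis the per-stratum slopes would come from multivariable models with confounder adjustment; this sketch only shows the mechanics of stratification.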
Instruction: Cardiometabolic determinants of mortality in a geriatric population: is there a "reverse metabolic syndrome"? Abstracts: abstract_id: PUBMED:19237305 Cardiometabolic determinants of mortality in a geriatric population: is there a "reverse metabolic syndrome"? Aims: Diabetes or insulin resistance, overweight, arterial hypertension, and dyslipidaemia are recognized risk factors for cardiovascular (CV) disease. However, their predictive value and hierarchy in elderly subjects remain uncertain. Methods: We investigated the impact of cardiometabolic risk factors on mortality in a prospective cohort study of 331 elderly high-risk subjects (mean age ± SD: 85 ± 7 years). Results: Two-year total mortality was predicted by age, diabetes, low BMI, low diastolic blood pressure (DBP), low total and HDL cholesterol, and previous CV events. The effect of diabetes was explained by previous CV events. In non-diabetic subjects, mortality was predicted by high insulin sensitivity, determined by HOMA-IR and QUICKI indices. In multivariate analyses, the strongest mortality predictors were low BMI, low HDL cholesterol and previous myocardial infarction. Albumin, a marker of malnutrition, was associated with blood pressure, total and HDL cholesterol, and HOMA-IR. The inflammation marker CRP was associated with low total and HDL cholesterol, and high HOMA-IR. Conclusion: In very old patients, low BMI, low DBP, low total and HDL cholesterol, and high insulin sensitivity predict total mortality, indicating a "reverse metabolic syndrome" that is probably attributable to malnutrition and/or chronic disorders. These inverse associations limit the relevance of conventional risk factors. Previous CV events and HDL cholesterol remain strong predictors of mortality. Future studies should determine if and when the prevention and treatment of malnutrition in the elderly should be incorporated into conventional CV prevention.
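The insulin-sensitivity indices named in the abstract above (HOMA-IR and QUICKI) are computed from fasting glucose and fasting insulin. The formulas below are the standard published definitions, not study-specific values, and the example inputs are hypothetical.

```python
# Standard definitions (with glucose in mg/dL and insulin in uU/mL):
#   HOMA-IR = glucose * insulin / 405      (higher = more insulin resistant)
#   QUICKI  = 1 / (log10(insulin) + log10(glucose))  (higher = more sensitive)
import math

def homa_ir(glucose_mg_dl, insulin_uu_ml):
    """Homeostatic Model Assessment of Insulin Resistance."""
    return glucose_mg_dl * insulin_uu_ml / 405.0

def quicki(glucose_mg_dl, insulin_uu_ml):
    """Quantitative Insulin Sensitivity Check Index."""
    return 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl))

# Hypothetical example: fasting glucose 90 mg/dL, fasting insulin 9 uU/mL.
print(round(homa_ir(90, 9), 2))   # 2.0
print(round(quicki(90, 9), 3))    # 0.344
```

Low HOMA-IR and high QUICKI both indicate high insulin sensitivity, which is the direction the study associated with mortality in non-diabetic subjects.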
abstract_id: PUBMED:27137217 Which parameters affect long-term mortality in older adults: is comprehensive geriatric assessment a predictor of mortality? Background: Determining predictors of mortality among older adults might help identify high-risk patients and enable timely intervention. Aims: The aim of the study was to identify which variables predict geriatric outpatient mortality, using routine geriatric assessment tools. Methods: We analyzed the data of 1141 patients who were admitted to the geriatric medicine outpatient clinic between 2001 and 2004. Comprehensive geriatric assessment was performed by an interdisciplinary geriatric team. Mortality rate was determined in 2015. The parameters predicting survival were examined. Results: Median age of the patients (415 male, 726 female) was 71.7 years (53-95 years). Mean survival time was 12.2 years (95% CI: 12-12.4 years). In multivariate analysis, age (OR: 1.16, 95% CI: 1.09-1.23, p < 0.001), smoking (OR: 2.51, 95% CI: 1.18-5.35, p = 0.017) and metabolic syndrome (OR: 2.20, 95% CI: 1.05-4.64, p = 0.038) were found to be independent risk factors for mortality. MNA-SF scores (OR: 0.84, 95% CI: 0.71-1.00, p = 0.050) and free T3 levels (OR: 0.70, 95% CI: 0.49-1.00, p = 0.052) had borderline significance. Discussion: The present study showed that the risk conferred by metabolic syndrome is beyond its individual components. Our findings confirm previous studies on the prognostic role of nutritional status, as reflected by MNA-SF. Serum fT3, a simple laboratory test, may also be used in geriatric outpatient clinics to identify individuals at risk. Conclusions: The results of the study demonstrated the need for addressing modifiable risk factors such as smoking, metabolic syndrome, and undernutrition in older adults. abstract_id: PUBMED:28710143 Interindividual Variability in Biomarkers of Cardiometabolic Health after Consumption of Major Plant-Food Bioactive Compounds and the Determinants Involved.
Cardiometabolic disease, comprising cardiovascular diseases, type 2 diabetes, and their associated risk factors including metabolic syndrome and obesity, is the leading cause of death worldwide. Plant foods are rich sources of different groups of bioactive compounds, which might not be essential throughout life but promote health and well-being by reducing the risk of age-related chronic diseases. However, heterogeneity in the responsiveness to bioactive compounds can obscure associations between their intakes and health outcomes, resulting in the hiding of health benefits for specific population groups and thereby limiting our knowledge of the exact role of the different bioactive compounds for health. The heterogeneity in response suggests that some individuals may benefit more than others from the health effects of these bioactive compounds. However, to date, this interindividual variation after habitual intake of plant bioactive compounds has been little explored. The aim of this review is to provide an overview of the existing research that has revealed interindividual variability in the responsiveness to plant-food bioactive compound consumption regarding cardiometabolic outcomes, focusing on polyphenols, caffeine and plant sterols, and the identified potential determinants involved. abstract_id: PUBMED:29525513 Metabolically healthy obesity, vitamin D, and all-cause and cardiometabolic mortality risk in NHANES III. Background & Aims: Previous studies assessing the prognosis of metabolically healthy obesity (MHO) have been limited by a lack of a harmonized definition of MHO phenotype. Furthermore, obesity is a risk factor for vitamin D deficiency and low vitamin D status has been associated with a higher risk of mortality; however, few studies have evaluated the joint association between vitamin D, metabolic health phenotype, and mortality risk. 
Using a harmonized definition, we investigated whether MHO is associated with subsequent all-cause and cardiometabolic mortality, and whether serum 25-hydroxyvitamin D [25(OH)D] modifies these associations. Methods: This study included participants aged ≥20 years from the Third National Health and Nutrition Examination Survey (NHANES III). MHO phenotype was defined as a combination of obesity (≥30 kg/m²) and zero component of metabolic syndrome. Multivariable Cox regression was used to assess the risk of mortality across metabolic phenotypes, and the joint association between metabolic phenotype and 25(OH)D. Fine and Gray regression was performed to account for competing risk events. Results: Among 11,333 participants, a total of 2980 deaths (937 cardiometabolic death outcomes) occurred during a median follow-up of 19.1 years. In the absence of any metabolic abnormality, obesity (MHO) was not associated with a higher risk of all-cause (hazard ratio [HR], 0.89 [95% CI, 0.52-1.51]) or cardiometabolic mortality (cause-specific HR, 1.21 [95% CI 0.33-4.46]). Similar results were obtained from competing risk analysis. No significant differences in average 25(OH)D levels were observed between MHO and non-MHO participants; however, there was a significant interaction between metabolic health phenotype and serum 25(OH)D in relation to cardiometabolic mortality such that levels of serum 25(OH)D < 50 nmol/L were associated with increased risk of cardiometabolic mortality, particularly in participants within the normal-weight and obese BMI ranges. Conclusions: Our results support the hypothesis that MHO phenotype is a benign health condition. Vitamin D deficiency may exacerbate the risk of cardiometabolic death outcomes associated with metabolic dysfunction in normal weight and obese individuals. Further research is warranted to validate our findings.
abstract_id: PUBMED:34205293 Cholesterol and Egg Intakes with Cardiometabolic and All-Cause Mortality among Chinese and Low-Income Black and White Americans. We examined the associations of dietary cholesterol and egg intakes with cardiometabolic and all-cause mortality among Chinese and low-income Black and White Americans. Included were 47,789 Blacks, 20,360 Whites, and 134,280 Chinese aged 40-79 years at enrollment. Multivariable Cox models with restricted cubic splines were applied to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) for mortality outcomes using intakes of 150 mg cholesterol/day and 1 egg/week as the references. Cholesterol intake showed a nonlinear association with increased all-cause mortality and a linear association with increased cardiometabolic mortality among Black Americans: HRs (95% CIs) associated with 300 and 600 mg/day vs. 150 mg/day were 1.07 (1.03-1.11) and 1.13 (1.05-1.21) for all-cause mortality (P-linearity = 0.04, P-nonlinearity = 0.002, and P-overall < 0.001) and 1.10 (1.03-1.16) and 1.21 (1.08-1.36) for cardiometabolic mortality (P-linearity = 0.007, P-nonlinearity = 0.07, and P-overall = 0.005). Null associations with all-cause or cardiometabolic mortality were noted for White Americans (P-linearity ≥ 0.13, P-nonlinearity ≥ 0.06, and P-overall ≥ 0.05 for both). Nonlinear inverse associations were observed among Chinese: HR (95% CI) for 300 vs. 150 mg/day was 0.94 (0.92-0.97) for all-cause mortality and 0.91 (0.87-0.95) for cardiometabolic mortality, but the inverse associations disappeared with cholesterol intake > 500 mg/day (P-linearity ≥ 0.12; P-nonlinearity ≤ 0.001; P-overall < 0.001 for both). Similarly, we observed a positive association of egg intake with all-cause mortality in Black Americans, but a null association in White Americans and a nonlinear inverse association in Chinese.
In conclusion, the associations of cholesterol and egg intakes with cardiometabolic and all-cause mortality may differ across ethnicities who have different dietary patterns and cardiometabolic risk profiles. However, residual confounding remains possible. abstract_id: PUBMED:37091807 Schizophrenia and cardiometabolic abnormalities: A Mendelian randomization study. Background: Individuals with a diagnosis of schizophrenia are known to be at high risk of premature mortality due to poor physical health, especially cardiovascular disease, diabetes, and obesity. The reasons for these physical health outcomes within this patient population are complex. Despite well-documented cardiometabolic adverse effects of certain antipsychotic drugs and lifestyle factors, schizophrenia may have an independent effect. Aims: To investigate if there is evidence that schizophrenia is causally related to cardiometabolic traits (blood lipids, anthropometric traits, glycaemic traits, blood pressure) and vice versa using bi-directional two-sample Mendelian randomization (MR) analysis. Methods: We used 185 genetic variants associated with schizophrenia from the latest Psychiatric Genomics Consortium GWAS (n = 130,644) in the forward analysis (schizophrenia to cardiometabolic traits) and genetic variants associated with the cardiometabolic traits from various consortia in the reverse analysis (cardiometabolic traits to schizophrenia), both at genome-wide significance (5 × 10⁻⁸). The primary method was inverse-variance weighted MR, supported by supplementary methods such as MR-Egger, as well as median and mode-based methods.
Results: In the forward analysis, schizophrenia was associated with slightly higher low-density lipoprotein (LDL) cholesterol levels (0.013 SD change in LDL per log odds increase in schizophrenia risk, 95% CI, 0.001-0.024 SD; p = 0.027) and total cholesterol levels (0.013 SD change in total cholesterol per log odds increase in schizophrenia risk, 95% CI, 0.002-0.025 SD; p = 0.023). However, these associations did not survive multiple testing corrections. There was no evidence of a causal effect of cardiometabolic traits on schizophrenia in the reverse analysis. Discussion: Dyslipidemia and obesity in schizophrenia patients are unlikely to be driven primarily by schizophrenia itself. Therefore, lifestyle, diet, antipsychotic drugs side effects, as well as shared mechanisms for metabolic dysfunction and schizophrenia such as low-grade systemic inflammation could be possible reasons for the apparent increased risk of metabolic disease in people with schizophrenia. Further research is needed to examine the shared immune mechanism hypothesis. abstract_id: PUBMED:31767973 Determinants of new onset cardiometabolic risk among normal weight children. Objective: To identify determinants for the development of "normal weight metabolically unhealthy" (NWMU) profiles among previously metabolically healthy normal weight children. Methods: The QUALITY cohort comprises youth 8-10 years of age with a parental history of obesity (n = 630). Of these, normal weight children with no metabolic risk factors were identified and followed up 2 years later (n = 193). Children were classified as NWMU if they remained normal weight but developed at least one cardiometabolic risk factor. They were classified as normal weight metabolically healthy otherwise. 
Multivariable logistic regression models were used to identify whether adiposity (anthropometrics and DXA), lifestyle habits (physical activity, screen time, vegetables and fruit intake, and sugar-sweetened beverage intake), fitness, and family history of cardiometabolic disease were associated with new onset NWMU. Results: Of the 193 normal weight and metabolically healthy children at baseline, 45 (23%) became NWMU 2 years later (i.e., 48% had low HDL cholesterol, 13% had elevated triglycerides, and 4% had impaired fasting glucose). Changes in adiposity between baseline and follow-up were associated with an increased risk of NWMU for all adiposity measures examined (e.g., for ∆zBMI OR = 3.95; 95% CI: 1.76, 8.83). Similarly, a 2-year change in screen time was associated with incident NWMU status (OR = 1.24; 95% CI 1.04, 1.49). Conclusions: Children who increase their adiposity levels as they enter puberty, despite remaining normal weight, are at risk of developing cardiometabolic risk factors. Studies examining long-term consequences of NWMU profiles in pediatrics are needed to determine whether changes in screening practice are warranted. abstract_id: PUBMED:34117867 Adiponectin and cardiometabolic trait and mortality: where do we go? Adiponectin is an adipocyte-derived cytokine known for its cardioprotective effects in preclinical studies. Early epidemiologic studies replicated these findings and drew great interest. Subsequent large-scale prospective cohorts, however, showed that adiponectin levels seemed not to relate to incident coronary artery disease (CAD). Even more surprisingly, a paradoxical increase of all-cause and cardiovascular (CV) mortality with increased adiponectin levels was reported. The adiponectin-mortality paradox has been explained by some groups asserting that adiponectin secretion is promoted by elevated natriuretic peptides (NP).
Other groups have proposed that adiponectin is elevated due to adiponectin resistance in subjects with metabolic syndrome or heart failure (HF). However, there is no unifying theory that can clearly explain this paradox. In patients with HF with reduced ejection fraction (HFrEF), stretched cardiomyocytes secrete NPs, which further promote release of adiponectin from adipose tissue, leading to adiponectin resistance. On the other hand, adiponectin biology may differ in patients with heart failure with preserved ejection fraction (HFpEF), which constitutes 50% of all HF. Most HFpEF patients are obese, a state that promotes inflammation and myocardial stiffness and is therefore likely to prevent myocardial stretch and subsequent NP release. This segment of the patient population may display different adiponectin biology from its HFrEF counterpart. Dissecting the adiponectin-mortality relationship in terms of different HF subtypes may help to comprehensively understand this paradox. Mendelian randomization (MR) analyses claimed that adiponectin levels are not causally related to CAD or metabolic syndrome. Results from MR studies, however, should be interpreted with great caution because the underlying history of CAD or CHF was not taken into account in these analyses, an issue that may substantially confound the results. Here, we discuss many aspects of adiponectin: cardiometabolic traits, therapeutic interventions, and the ongoing debate about the adiponectin paradox, which were recently described in basic, epidemiologic, and clinical studies.
Recent Findings: There are multiple criticisms of how we define and approach obesity in clinical practice and scientific research, and weight stigma further complicates the process of making and communicating weight-related diagnoses. While the goal of diagnosing and managing metabolic syndrome in adolescents would be to identify individuals at elevated future cardiometabolic risk and intervene to reduce the modifiable component of this risk, there is evidence that identifying cardiometabolic risk factor clustering may be more useful in adolescents than establishing a cutoff-based diagnosis of metabolic syndrome. It has also become clear that many heritable factors and social and structural determinants of health contribute more to weight and body mass index than do individual behavioral choices about nutrition and physical activity. Promoting cardiometabolic health equity requires that we intervene on the obesogenic environment and mitigate the compounding effects of weight stigma and systemic racism. The existing options to diagnose and manage future cardiometabolic risk in children and adolescents are flawed and limited. While striving to improve population health through policy and societal interventions, there are opportunities to intervene at all levels of the socioecological model in order to decrease future morbidity and mortality from the chronic cardiometabolic diseases associated with central adiposity in both children and adults. More research is needed to identify the most effective interventions. abstract_id: PUBMED:35120497 Cardiometabolic risk factors, physical activity, and postmenopausal breast cancer mortality: results from the Women's Health Initiative. Background: Higher physical activity levels are associated with lower breast cancer-specific mortality. In addition, the metabolic syndrome is associated with higher breast cancer-specific mortality. 
Whether the physical activity association with breast cancer mortality is modified by the number of metabolic syndrome components (cardiometabolic risk factors) in postmenopausal women with early-stage breast cancer remains unknown. Methods: Cardiometabolic risk factors included high waist circumference, hypertension, high cholesterol, and diabetes. Breast cancers were verified by medical record review. Mortality findings were enhanced by serial National Death Index queries. Cox proportional hazards regression models were used to estimate associations between baseline physical activity and subsequent breast cancer-specific and overall mortality following breast cancer diagnosis in Women's Health Initiative participants. These associations were examined after stratifying by cardiometabolic risk factor group. Results: Among 161,308 Women's Health Initiative (WHI) participants, 8543 breast cancers occurred after a median follow-up of 9.5 years among women with information on cardiometabolic risk factors and physical activity at entry. In multivariable analyses, as measured from cancer diagnosis, higher physical activity levels were associated with lower all-cause mortality risk (hazard ratio [HR] 0.86, 95% confidence interval [CI] 0.78-0.95, trend P < 0.001) but not with breast cancer-specific mortality (HR 0.85, 95% CI 0.70-1.04, trend P = 0.09). The physical activity and all-cause mortality association was not significantly modified by cardiometabolic risk factor number. Conclusions: Among women with early-stage breast cancer, although higher antecedent physical activity was associated with lower risk of all-cause mortality, the association did not differ by cardiometabolic risk factor number.
Answer: In a geriatric population, the concept of a "reverse metabolic syndrome" has been suggested, where traditional cardiometabolic risk factors for mortality, such as diabetes, overweight, arterial hypertension, and dyslipidemia, may not hold the same predictive value as they do in younger populations. A study investigating the impact of cardiometabolic risk factors on mortality in elderly high-risk subjects found that low BMI, low diastolic blood pressure (DBP), low total and HDL cholesterol, and high insulin sensitivity predicted total mortality, indicating a possible "reverse metabolic syndrome" (PUBMED:19237305). This phenomenon is thought to be attributable to malnutrition and/or chronic disorders, which can lead to inverse associations that limit the relevance of conventional risk factors. However, previous cardiovascular (CV) events and HDL cholesterol remained strong predictors of mortality in this population. The study suggests that future research should explore whether prevention and treatment of malnutrition in the elderly should be incorporated into conventional CV prevention strategies. This finding is consistent with other research that has identified age, smoking, and metabolic syndrome as independent risk factors for mortality in older adults, with nutritional status and serum free T3 levels also being important predictors (PUBMED:27137217). The complexity of cardiometabolic health in the elderly is further highlighted by the variability in response to bioactive compounds found in plant foods, which can affect cardiometabolic outcomes (PUBMED:28710143). Additionally, the concept of metabolically healthy obesity (MHO) and its relationship with vitamin D levels has been studied, showing that MHO may not be associated with a higher risk of all-cause or cardiometabolic mortality, but vitamin D deficiency could exacerbate the risk of cardiometabolic death outcomes (PUBMED:29525513). 
Overall, these findings suggest that the traditional cardiometabolic risk factors may not have the same impact on mortality in a geriatric population as they do in younger individuals, and that a "reverse metabolic syndrome" could be a relevant concept in understanding and managing the health of older adults.
Instruction: Paracetamol orodispersible tablets: a risk for severe poisoning in children? Abstracts: abstract_id: PUBMED:21104406 Paracetamol orodispersible tablets: a risk for severe poisoning in children? Purpose: Childhood paracetamol (acetaminophen) ingestion with subsequent risk of hepatotoxicity is a major medical problem. The aim of this study was to investigate the risk of high-dose ingestion of orodispersible, fast-disintegrating paracetamol tablets in children. Methods: A retrospective single-center case study of all accidental self-administrations of solid or orodispersible 500-mg paracetamol tablets occurring in children ≤ 6 years, reported to the Swiss Toxicological Information Centre between June 2003 and August 2009. Results: We found 187 cases with ingestion of solid 500-mg paracetamol tablets and 16 cases with ingestion of orodispersible 500-mg tablets. The mean ingested dose in the orodispersible-tablet group was 59% higher than in the solid-tablet group (p = 0.085). Administration of activated charcoal and/or N-acetylcysteine because of ingestion of a potentially hepatotoxic paracetamol dose (≥ 150 mg/kg body weight) was recommended in 32 patients (17.1%) in the solid-tablet group and in five (31%) in the orodispersible-tablet group. Conclusions: Orodispersible paracetamol formulations may represent an important risk factor for severe paracetamol poisoning in children. Over-the-counter availability may contribute to increasing the use of this galenic formulation and eventually the number of poisonings in children. abstract_id: PUBMED:23210259 Orodispersible paracetamol tablets: danger of poisoning in children. Paracetamol overdose can cause serious and sometimes fatal liver damage. Children are at particular risk of overdose with the orodispersible tablet form. A study conducted by a Swiss poison control centre showed that children ingested twice the dose of paracetamol in cases involving the orodispersible tablet form.
In practice, it is better to keep drugs, including commonly used medicines such as paracetamol, out of reach of children and to warn their relatives and carers that paracetamol is highly toxic in case of overdose. In France, as of 1 March 2012, a number of orodispersible paracetamol tablets are sold in bulk tubes without a child-proof cap, packaging that is particularly dangerous for children. Blister packs sealed with a safety film are safer because it takes longer to remove the tablets. When choosing a drug, the safety of its packaging will remain a key factor to take into consideration until the pharmaceutical industry and drug regulatory agencies place more importance on protecting patients and their relatives. abstract_id: PUBMED:35154529 Acute liver failure secondary to toxic exposure in children. Introduction: Acute liver failure (ALF) is a syndrome defined by jaundice, coagulopathy (INR > 1.5) and hepatic encephalopathy in patients with no evidence of prior liver disease. Toxins and drugs are a frequent cause of ALF in children. Material And Methods: The aim of our study was to establish the causes of toxic ALF in children followed up in our hospital in the period of January 2000 to August 2018. We retrospectively studied all hospital records of patients who developed ALF after mushroom/drug exposure and had been admitted to our hospital, the main pediatric toxicology center in north-western Romania. Results: In the last 18 years, 123 patients were admitted to our clinic with toxic ALF (89 patients secondary to mushroom ingestion and 34 patients after drug exposure). In the 2000-2012 period accidental mushroom poisoning was the leading cause of toxic ALF. Unfortunately, during the last years, voluntary drug ingestions have increased dramatically. The most commonly incriminated drug was acetaminophen (52.94%). Conclusions: ALF in mushroom poisoning is associated with a high mortality in children, despite optimal medical therapy.
This etiology was one of the most important causes of death in our cohort. The difficulty in accessing emergency liver transplantation is an obstacle common to many Eastern European pediatric centers. Fortunately, in the last 5 years the incidence of mushroom intoxications has decreased in our area. It is worrying that over the last few years there has been an increased incidence of toxic ALF after drug exposure (for suicidal purposes or due to lenient regulations for prescribing hepatotoxic medications). abstract_id: PUBMED:11545233 Nephrotoxicity after acute severe acetaminophen poisoning in adolescents. Objective: To determine the rate of acetaminophen-related nephrotoxicity in adolescents who present after acute severe acetaminophen intoxication and to identify potential predictors of this outcome. Study Design: Retrospective analysis of consecutive patients between the ages of 12 and 18 years who were admitted to a tertiary care children's hospital for treatment of acute severe acetaminophen intoxication with N-acetylcysteine. The main outcome measure was the frequency of acetaminophen-related nephrotoxicity, defined as abnormal blood urea nitrogen (>6.4 mmol/L or >18 mg/dL) and/or elevated creatinine (>97.2 micromol/L or >1.1 mg/dL) in association with one or both of the following: elevated blood pressure (systolic blood pressure >140 mm Hg/diastolic blood pressure >85 mm Hg) or abnormal urinalysis (urinalysis with hematuria or proteinuria). Statistical analyses used were measures of central tendency, Student's t-test, Mann-Whitney, and multivariate logistic regression. Results: Forty-five patients were included. Acetaminophen-related nephrotoxicity occurred in 4 (8.9%) cases. One victim developed severe renal injury in association with elevated hepatic transaminases.
Intergroup analyses revealed no statistically significant association between acetaminophen-related nephrotoxicity and amount/kg of acute severe acetaminophen ingested, delay in treatment with N-acetylcysteine, or measures of hepatic function. Conclusions: Acetaminophen-related nephrotoxicity occurred in 8.9% [95% CI: 4.52, 20.48] of children with severe overdose. There are no obvious predictors of this complication of acetaminophen overdose. Because the occurrence of renal injury cannot be predicted, serial blood pressure, blood urea nitrogen/creatinine, and urinalysis should be considered an integral part of the management of children with acute, severe acetaminophen intoxication. abstract_id: PUBMED:30034421 Evaluation of the children with acute acetaminophen overdose and intravenous N-acetylcysteine treatment. Objective: To evaluate the demographic and clinical features associated with acetaminophen overdose and to identify the clinical use of IV (intravenous) N-acetylcysteine (NAC) treatment in children. Methods: This prospective study was conducted in Kanuni Sultan Suleyman Training and Research Hospital between August 2016 and August 2017. A total of 59 patients with acetaminophen overdose ingestion were included in this study. The toxic dose for acute acetaminophen intake was defined as greater than 150 mg/kg. The Rumack-Matthew nomogram was used to evaluate the risk of acute intoxication and to guide the decision to use the antidote. Results: The mean age of the patients was 8.5±6.4 y and 34 of them (58%) were female. The mean time from ingestion to admission was 4.3±4.7 h. The mean ingested acetaminophen dose was 142.1±80 mg/kg. Twenty-four patients (41%) received NAC and there were significant differences in terms of acetaminophen dose, creatinine and INR between antidote and decontamination therapy groups at admission time (p = 0.00, p = 0.03, p = 0.02, respectively). A complication due to antidote therapy was observed in only 1 patient.
Conclusions: This study confirms that the side effects due to IV NAC therapy are uncommon and it is generally well tolerated in children. abstract_id: PUBMED:29077309 Usefulness of testing for paracetamol concentration in urine when poisoning is suspected in children Objectives: To analyze the diagnostic yield of a cut-point of 3 μg/mL for paracetamol in urine to screen for poisoning in children. Material And Methods: Prospective case-control observational study in a pediatric intensive care unit (PICU). All enrolled patients had been admitted to the PICU. Cases were children receiving a therapeutic dose of intravenous paracetamol. Controls were not receiving paracetamol. Urine samples were collected early in the morning and 4 hours after a dose of paracetamol was received by case patients. Paracetamol concentration was measured in all samples. We compared the percentages of cases and controls who had a concentration of 3 μg/mL or more. The sensitivity, specificity, and predictive values of the cut-point were calculated. Results: Forty children aged between 1 month and 19 years (20 per matched group) were enrolled. Paracetamol was not detected in any of the control samples. The sensitivity of the test in early morning urine was 95% (95% CI, 85.5%-100%); specificity was 100%. The positive predictive value was 100%; the negative predictive value was 95.2% (95% CI, 86.1%-100%). Paracetamol was detected in all of the second samples collected from cases. Conclusion: Measuring the paracetamol concentration in urine within 4 hours of dosing is useful to rule out prior intake of paracetamol and overdosing in PICU patients. Studies to validate the new cut-point of 3 μg/mL for paracetamol in urine are required with a view to possibly including it in a diagnostic protocol for suspected acute poisoning. abstract_id: PUBMED:9473817 Acetaminophen overdose in children and adolescents. 
From 1982 to 1997, 12 Chinese children were admitted to Chang Gung Memorial Hospital with acetaminophen overdose. Six subjects (one young child and five adolescents) developed liver damage, which was severe in three cases (AST > 1000 IU/L). Acetaminophen-induced liver function abnormalities were characterized by elevation of transaminase levels with ALT higher than AST (6/6), coagulopathy (5/6), thrombocytopenia (1/6), but absence of jaundice (6/6). Fortunately, none of the six patients with liver damage developed fulminant liver failure, and all recovered completely. Acetaminophen overdose can cause significant morbidity in children and adolescents. Caretakers should be well instructed to give the drug correctly. So far, acetaminophen is still considered the drug of choice for antipyresis in pediatric practice. However, a multicentered collaborative study is necessary to determine whether acetaminophen intoxication causes less hepatic failure in Chinese children than in Western children. abstract_id: PUBMED:29577416 Retrospective study of paracetamol poisoning in children aged zero to six years found no cases of liver injury. Aim: This study focused on children aged zero to six years with suspected single-dose paracetamol poisoning, which has not been investigated in Denmark. We evaluated the incidence of liver injuries and the use of activated charcoal and N-acetylcysteine treatment. Methods: Our retrospective study was performed in three paediatric hospital centres from 2001 to 2012. Data on symptoms, time of ingestion, blood biochemistry, treatment and adverse reactions were collected. The results were evaluated against the Rumack-Matthew nomogram. Results: We identified 221 children (58% male), with a mean age of 2.67 ± 1.05 years. Activated charcoal treatment was given in 87% of cases, but only 15% of the children received treatment within one hour of the suspected paracetamol poisoning.
Although 80% of the children received N-acetylcysteine treatment, only one case (0.5%) had a toxic plasma paracetamol level according to the treatment nomogram. Abdominal pain or vomiting was associated with higher paracetamol levels in plasma. None of the children developed liver injuries. Conclusion: We found a low incidence of significant poisoning and liberal use of N-acetylcysteine and activated charcoal treatment in Danish children aged zero to six years with suspected paracetamol poisoning. Vomiting or abdominal pain was associated with elevated plasma paracetamol levels. No liver injuries were reported. abstract_id: PUBMED:37580980 Acute drug intoxication in childhood: a 10-year retrospective observational single-centre study and case reports. Background: Medication poisoning in children is a severe condition that can endanger a child's life. Although drug intoxications are easily preventable, awareness of the proper handling of drugs and their safe storage out of the reach of children is not widespread among the general public. In this work, we investigated the demographic and clinical data of children admitted to the Department of Pediatrics of the University Hospital Olomouc for acute drug-induced intoxication. We also selected several case reports to illustrate the wide range of both presentations and outcomes in individual patients. Method: Cases of drug-induced intoxications were selected from a group of patients under the age of 19 years admitted to the hospital for poisoning between January 1, 2010, and December 31, 2019. Medical records of these patients were prospectively evaluated, and overview tables and graphs of predefined research objectives were created. Results: During the given time period, 162 children with suspected drug intoxications were hospitalized at the Department of Pediatrics, University Hospital Olomouc. Of these, 108 cases were reported in girls and 54 in boys (66.7% vs. 33.3%). 
In 16 cases (9.9%), there was a severe intoxication requiring follow-up intensive care. There was also one case of fatal accidental intoxication. Most poisonings were seen in toddlers (65; 40.1%). Intoxication with suicidal ideation was found in 44 cases (27.2%), with a higher incidence of suicide attempts in girls (40 vs. 4). Repeated intoxication was recorded in nine cases. Analgesics were the most common drug group (61; 37.7%), with paracetamol (28; 17.3%) being the leading drug. In 154 cases (95.1%), the drugs were taken orally, most often in the form of tablets. Conclusion: Accidental drug intoxications most frequently occurred in the age group from one to three years old. The second highest incidence was among adolescents most of which were suicide attempts. Analgesics and psychoactive agents accounted for the majority of cases. Medications should be kept in places where children cannot reach them. abstract_id: PUBMED:30121087 Patterns and outcome of acute poisoning among children in rural Sri Lanka. Background: The burden of poisoning among children is largely underexplored in rural Sri Lanka. This study describes the patterns of demographic characteristics, poison related factors, clinical management and outcome following acute poisoning among children (9 months- 12 years) in rural Sri Lanka. Method: This hospital based multi-center study included Anuradhapura Teaching hospital, Polonnaruwa District General hospital, and 34 regional hospitals within Regional Director of Health Services in North Central province of Sri Lanka. The study assessed clinical profiles, poison related factors, clinical management, complications, harmful first aid practices, reasons for delayed management, complications and outcomes following acute poisoning over 7 years. Results: Among 1621 children with acute poisoning, the majority were in preschool age group. Household chemicals were accountable for 489 acute poisonings (30.2%). 
The most common poison was kerosene oil, followed by paracetamol. Most events occurred within their own domestic premises. Potentially harmful first aid measures were practiced by approximately one-third of caregivers. Reasons for delayed presentation at the emergency center included lack of concern by family members regarding the urgency of the situation and lack of knowledge regarding possible complications. Complications were observed in 12.5%, and the most common complication was chemical pneumonitis. Conclusions: Children with acute poisoning in rural Sri Lanka were predominantly preschoolers. They were poisoned mostly within their own domestic premises. Kerosene oil, in addition to being the most common poison, had additional risks of aspiration pneumonia following potentially hazardous first aid measures practiced by the caregivers. Complications, though rare, were potentially preventable through community education and awareness on seeking timely medical care and avoiding harmful first aid practices. Answer: Yes, paracetamol orodispersible tablets pose a risk for severe poisoning in children. Studies have shown that children ingested a significantly higher dose of paracetamol when it was in the orodispersible tablet form compared to the solid tablet form. A Swiss poison control centre study found that the mean ingested dose in the orodispersible-tablet group was 59% higher than in the solid-tablet group, and a higher percentage of children in the orodispersible-tablet group required treatment with activated charcoal and/or N-acetylcysteine due to ingestion of a potentially hepatotoxic dose (PUBMED:21104406). Another study highlighted that children are at particular risk of overdose with orodispersible tablets and that the over-the-counter availability of these formulations may contribute to an increased number of poisonings (PUBMED:23210259).
Furthermore, acetaminophen (paracetamol) was the most commonly incriminated drug in cases of drug-induced acute liver failure (ALF) in children, which underscores the severity of the risk associated with paracetamol overdose (PUBMED:35154529). It is important to note that while orodispersible tablets may be more appealing or easier for children to ingest, they also make it easier for children to accidentally consume a dangerous amount, especially if the packaging is not child-proof (PUBMED:23210259). Therefore, it is crucial to keep drugs, including orodispersible paracetamol tablets, out of reach of children and to use packaging that is child-resistant to reduce the risk of accidental ingestion and severe poisoning (PUBMED:23210259).
Instruction: Does fusion status after posterior lumbar interbody fusion affect patient-based QOL outcomes? Abstracts: abstract_id: PUBMED:26835195 Lifestyle-Related Diseases Affect Surgical Outcomes after Posterior Lumbar Interbody Fusion. Study Design Retrospective study. Objective Hyperlipidemia (HL) and hypertension (HT) lead to systemic atherosclerosis. Not only atherosclerosis but also bone fragility and/or low bone mineral density result from diabetes mellitus (DM) and chronic kidney disease (CKD). The purpose of this study was to examine whether these lifestyle-related diseases affected surgical outcomes after posterior lumbar interbody fusion (PLIF). Methods The subjects comprised 122 consecutive patients who underwent single-level PLIF for degenerative lumbar spinal disorders. The clinical results were assessed using the Japanese Orthopaedic Association (JOA) score before surgery and at 2 years postoperatively. The fusion status was graded as union in situ, collapsed union, or nonunion at 2 years after surgery. The abdominal aorta calcification (AAC) score was assessed using preoperative lateral radiographs of the lumbar spine. Results HL did not significantly affect the JOA score recovery rate. On the other hand, HT and CKD (stage 3 to 4) had a significant adverse effect on the recovery rate. The recovery rate was also lower in the DM group than in the non-DM group, but the difference was not significant. The AAC score was negatively correlated with the JOA score recovery rate. The fusion status was not significantly affected by HL, HT, DM, or CKD; however, the AAC score was significantly higher in the collapsed union and nonunion group than in the union in situ group. Conclusions At 2 years after PLIF, the presence of HT, CKD, and AAC was associated with significantly worse clinical outcomes, and advanced AAC significantly affected fusion status. 
abstract_id: PUBMED:38186530 Does the Number of Interbody Devices Affect the Fusion Outcome in Short-Segment Posterior Lumbar Fusion? Introduction: Interbody devices (IBDs) have been shown to improve outcomes when used in posterior lumbar fusion (PLF) surgery; however, the exact extent of their clinical benefit remains a current topic of interest. Our primary objective in this study was to identify whether the use of an IBD at every level of the fusion construct would affect fusion outcomes such as adjacent segment pathology (ASP) and pseudarthrosis after one- to three-level PLF surgery. Methods: This was a single-institution retrospective study. We studied the association of factors such as smoking status, BMI, gender, age, and number of IBDs with the development of ASP and pseudarthrosis. To study the effect of independent variables on ASP and pseudarthrosis, univariate and multivariate regression analyses were used. Results: The study included 2,061 patients with a history of posterior lumbar fusion who were identified and reviewed. Among these, 363 patients met our inclusion criteria; 247 patients had a minimum follow-up of six months and were finally included in the study. The median follow-up was 30 months. Among the 247 patients, 105 (42.5%) and 24 (9.7%) experienced ASP and pseudarthrosis, respectively. Gender and use of IBD significantly affected the presence of pseudarthrosis (with a higher rate in males and those without any IBDs). Gender, age, BMI, and use of IBDs did not affect ASP. Moreover, using an IBD at each fused level reduced the pseudarthrosis rate significantly compared to when IBDs were not used at all levels (7.3% vs. 27.6%, p < 0.001), while there was no significant difference in the rate of ASP (43.6% vs. 34.5%, p = 0.35). Conclusions: In patients undergoing one- to three-level PLF surgery, the use of an IBD at all levels of the fusion construct significantly reduces the rate of pseudarthrosis.
There was no significant difference in the rates of ASP. Studies with a larger sample size and a longer follow-up time are suggested to validate our results for pseudarthrosis and ASP. Our results support the use of one IBD per fused level in short-segment PLF surgeries. abstract_id: PUBMED:30069505 Biomechanical evaluation of interbody fixation with secondary augmentation: lateral lumbar interbody fusion versus posterior lumbar interbody fusion. Background: Many approaches to the lumbar spine have been developed for interbody fusion. The biomechanical profile of each interbody fusion device is determined by the anatomical approach and the type of supplemental internal fixation. Lateral lumbar interbody fusion (LLIF) was developed as a minimally invasive technique for introducing hardware with higher profiles and wider widths, compared with that for the posterior lumbar interbody fusion (PLIF) approach. However, the biomechanics of the interbody fusion construct used in the LLIF approach have not been rigorously evaluated, especially in the presence of secondary augmentation. Methods: Spinal stability of 21 cadaveric lumbar specimens was compared using standard nondestructive flexibility studies [mean range of motion (ROM), lax zone (LZ), stiff zone (SZ) in flexion-extension, lateral bending, and axial rotation]. Non-paired comparisons were made among four conditions: (I) intact; (II) with unilateral interbody + bilateral pedicle screws (BPS) using the LLIF approach (referred to as the LLIF construct); (III) with bilateral interbody + BPS using the PLIF approach (referred to as the PLIF construct); and (IV) with no lumbar interbody fusion (LIF) + BPS (referred to as the no-LIF construct). Results: With bilateral pedicle screw-rod fixation, stability was equivalent between PLIF and LLIF constructs in lateral bending and flexion-extension.
PLIF and LLIF constructs had similar biomechanical profiles, with a trend toward less ROM in axial rotation for the LLIF construct. Conclusions: LLIF and PLIF constructs had similar stabilizing effects. abstract_id: PUBMED:23798756 Comparison of the early results of transforaminal lumbar interbody fusion and posterior lumbar interbody fusion in symptomatic lumbar instability. Background: Transforaminal lumbar interbody fusion (TLIF) has been preferred to posterior lumbar interbody fusion (PLIF) for various spinal disorders, but no study had compared their outcomes in lumbar instability. A comparative retrospective analysis of the early results of TLIF and PLIF in symptomatic lumbar instability was conducted between 2005 and 2011. Materials And Methods: Records of 102 operated cases of lumbar instability with a minimum 1-year follow-up were reviewed. A total of 52 cases (11 men and 41 women, mean age 46 years, SD 5.88, range 40-59 years) underwent PLIF and 50 cases (14 men and 36 women, mean age 49 years, SD 6.88, range 40-59 years) underwent TLIF. Surgical time, duration of hospital stay, and intraoperative blood loss were compared. Self-evaluated low back pain and leg pain status (using the Visual Analog Scale) and disability outcome (using the Oswestry disability questionnaire) were analyzed. Radiological structural restoration (e.g., disc height, foraminal height, lordotic angle, and slip reduction), stability (using Posner criteria), fusion (using Hackenberg criteria), and overall functional outcome (using MacNab's criteria) were compared. Results: Pain, disability, neurology, and overall functional status were significantly improved in both groups, but PLIF required more operative time and caused more blood loss. Postoperative hospital stay, structural restoration, stability, and fusion showed no significant differences, but neural complications were relatively more frequent with PLIF.
Conclusions: Both methods were effective in relieving symptoms and achieving structural restoration, stability, and fusion, but TLIF was associated with shorter operative time, less blood loss, and lower complication rates, and so may be preferred for symptomatic lumbar instability. abstract_id: PUBMED:27811052 Comparison of posterior lumbar interbody fusion with transforaminal lumbar interbody fusion for treatment of recurrent lumbar disc herniation: A retrospective study. Objective To compare posterior lumbar interbody fusion (PLIF) with transforaminal lumbar interbody fusion (TLIF) for spinal fusion in patients previously treated by discectomy. Methods This retrospective study evaluated pre- and postoperative neurological status via the Japan Orthopaedic Association (JOA) score. Surgical outcome was based on recovery rate percentage (RR%). Adverse event data were reviewed. Results Both PLIF (n = 26) and TLIF (n = 25) significantly improved neurological status. There were no significant between-group differences in postoperative JOA score, RR%, or surgical outcome. Overall, 92.3% of patients in the PLIF group and 84% in the TLIF group had an excellent or good outcome (RR ≥ 65%). No patient had a poor outcome (RR < 50%). There were six cases of dural tear in the PLIF group and two in the TLIF group. Conclusions PLIF and TLIF provided good outcomes for recurrent lumbar disc herniation. TLIF may be preferred because of its shorter operative time and fewer procedure-related complications compared with PLIF.
Methods: We retrospectively reviewed five cases of traumatic lumbosacral spondylolisthesis treated in our hospital from May 2005 to May 2010. There were four male patients and one female patient, all treated surgically using posterior lumbar interbody fusion. The patients' data, including age, neurological status, operation time, blood loss, follow-up periods, radiographs, and fusion status, were collected. Results: All the cases were treated using posterior lumbar interbody fusion to achieve decompression, reduction, and fusion. Solid arthrodesis was found at the 12-month follow-up. No shift or breakage of the instrumentation was found, and all the patients were symptom-free at the last follow-up. Conclusion: Traumatic lumbosacral spondylolisthesis can be treated using posterior lumbar interbody fusion to achieve reduction, decompression, fixation, and fusion. abstract_id: PUBMED:29393010 Comparison of rigid and semi-rigid instrumentation under acute load on vertebrae treated with posterior lumbar interbody fusion/transforaminal lumbar interbody fusion procedures: An experimental study. Rigid and semi-rigid fixation have been investigated several times in order to compare their biomechanical stability. Interbody fusion techniques are also preferred for maintaining sagittal balance by preserving disc height. In this study, a biomechanical comparison of semi-rigid and rigid fixation with posterior lumbar interbody fusion or transforaminal lumbar interbody fusion procedures was conducted under acute traumatic loading. There were four different test groups to analyze the effect of acute load on treated ovine vertebrae. The first and second groups were fixed with polyetheretherketone rods and transforaminal lumbar interbody fusion and posterior lumbar interbody fusion cages, respectively. The third and fourth groups were fixed with titanium rods and posterior lumbar interbody fusion and transforaminal lumbar interbody fusion cages, respectively.
The drop tests were conducted with a 7 kg weight. There were six samples in each group, so the drop test was repeated 24 times in total. The test samples were photographed and X-rayed (laterally and anteroposteriorly) before and after the drop test. Two fractures were observed in group 1. Conversely, no fractures were observed in group 2. There were no anterior element fractures in either group 1 or group 2. However, one fracture seen in group 3 was an anterior element fracture, whereas the other three were posterior element fractures. All three fractures were anterior element fractures in group 4. Treated vertebrae with polyetheretherketone rods and posterior lumbar interbody fusion cages showed the best durability in the drop tests among the groups. Semi-rigid fixation gave better results than rigid fixation according to the failed segments. Posterior lumbar interbody fusion cages seem to be a better option for semi-rigid fixation; however, the surgical disadvantages mentioned must be considered. abstract_id: PUBMED:37443066 Surgical outcomes of anterior lumbar interbody fusion in revision lumbar interbody fusion surgery. Backgrounds: Anterior lumbar interbody fusion (ALIF) is an attractive option for revision lumbar interbody fusion as it provides wide access for implant removal and accommodation of large interbody grafts for fusion. However, revision lumbar interbody fusion surgery has not been found to result in significantly better functional outcomes compared with other approaches. To date, no prognostic factors of anterior lumbar interbody fusion in revision lumbar interbody fusion have been reported. In this study, we investigated the surgical results and possible prognostic factors of anterior lumbar interbody fusion in revision lumbar interbody fusion. Methods: Patients who received revision interbody fusion surgery between January 2010 and May 2018 in our hospital were reviewed.
Clinical outcomes were determined according to whether the improvement in visual analogue scale (VAS) scores for back pain and leg pain reached the minimum clinically important difference (MCID), and by the Macnab criteria. Radiographic outcomes were assessed with fusion rate and preoperative and postoperative lumbar lordosis. Operation-related factors that may affect clinical outcomes, such as BMI, existence of cage migration, cage subsidence, pseudarthrosis, previous procedure, and number of fusion segments, were collected and analyzed. Results: A total of 22 consecutive patients who received ALIF for revision interbody fusion surgery were included and analyzed. There were 9 men and 13 women with a mean age at operation of 56 years (26-78). The mean follow-up was 73 months (20-121). The MCID was reached in 11 (50%) of the patients for back pain and 14 (64%) for leg pain. According to the modified Macnab criteria, 73% of the patients in this study had successful outcomes (excellent or good). Pain and lumbar lordosis showed significant improvement (P < 0.05). A preoperative fusion segment count ≥ 2 was shown to be a poor prognostic factor for back pain improvement reaching the MCID (P = 0.043). Conclusions: ALIF has proven effective for revision lumbar fusion surgery, yielding positive clinical and radiographic results. However, having two or more preoperative fusion segments can negatively impact back pain improvement. Level Of Evidence: IV. abstract_id: PUBMED:30691257 Comparative Radiographic Outcomes of Lateral and Posterior Lumbar Interbody Fusion in the Treatment of Degenerative Lumbar Kyphosis. Study Design: Retrospective case-control study. Purpose: To compare surgical invasiveness and radiological outcomes between posterior lumbar interbody fusion (PLIF) and lateral lumbar interbody fusion (LLIF) for degenerative lumbar kyphosis.
Overview Of Literature: LLIF is a minimally invasive interbody fusion technique; however, few reports have compared the clinical outcomes of conventional PLIF and LLIF for degenerative lumbar kyphosis. Methods: Radiographic data for patients who had undergone lumbar interbody fusion (≥3 levels) using PLIF or LLIF for degenerative lumbar kyphosis (lumbar lordosis [LL] <20°) were retrospectively examined. The following radiographic parameters were evaluated preoperatively and 2 years postoperatively: segmental lordotic angle, LL, pelvic tilt (PT), pelvic incidence (PI), C7 sagittal vertical axis, and T1 pelvic angle. Results: Nineteen consecutive cases with PLIF and 27 cases with LLIF were included. There were no significant differences in patients' backgrounds or preoperative radiographic parameters between the PLIF and LLIF groups. The mean fusion level was 5.5±2.5 levels and 5.8±2.5 levels in the PLIF and LLIF groups, respectively (p=0.69). Although there was no significant difference in surgical times (p=0.58), the estimated blood loss was significantly greater in the PLIF group (p<0.001). Two years postoperatively, comparing the PLIF and LLIF groups, the segmental lordotic angle achieved (7.4°±7.6° and 10.6°±9.4°, respectively; p=0.03), LL (27.8°±13.9° and 39.2°±12.7°, respectively; p=0.006), PI-LL (19.8°±14.8° and 3.1°±17.5°, respectively; p=0.002), and PT (22.6°±7.1° and 14.2°±13.9°, respectively; p=0.02) were significantly better in the LLIF group. Conclusions: LLIF provided significantly better sagittal alignment restoration in the context of degenerative lumbar kyphosis, with less blood loss. abstract_id: PUBMED:27683674 Lumbar interbody fusion: techniques, indications and comparison of interbody fusion options including PLIF, TLIF, MI-TLIF, OLIF/ATP, LLIF and ALIF. Degenerative disc and facet joint disease of the lumbar spine is common in the ageing population, and is one of the most frequent causes of disability.
Lumbar spondylosis may result in mechanical back pain, radicular and claudicant symptoms, reduced mobility and poor quality of life. Surgical interbody fusion of degenerative levels is an effective treatment option to stabilize the painful motion segment, and may provide indirect decompression of the neural elements, restore lordosis and correct deformity. The surgical options for interbody fusion of the lumbar spine include: posterior lumbar interbody fusion (PLIF), transforaminal lumbar interbody fusion (TLIF), minimally invasive transforaminal lumbar interbody fusion (MI-TLIF), oblique lumbar interbody fusion/anterior to psoas (OLIF/ATP), lateral lumbar interbody fusion (LLIF) and anterior lumbar interbody fusion (ALIF). The indications may include: discogenic/facetogenic low back pain, neurogenic claudication, radiculopathy due to foraminal stenosis, and lumbar degenerative spinal deformity including symptomatic spondylolisthesis and degenerative scoliosis. In general, traditional posterior approaches are frequently used with acceptable fusion rates and low complication rates; however, they are limited by thecal sac and nerve root retraction, along with iatrogenic injury to the paraspinal musculature and disruption of the posterior tension band. Minimally invasive (MIS) posterior approaches have evolved in an attempt to reduce approach-related complications. Anterior approaches avoid the spinal canal, cauda equina and nerve roots; however, they have issues with approach-related abdominal and vascular complications. In addition, lateral and OLIF techniques carry potential risks to the lumbar plexus and psoas muscle. The present study aims firstly to comprehensively review the available literature and evidence for different lumbar interbody fusion (LIF) techniques. Secondly, we propose a set of recommendations and guidelines for the indications for interbody fusion options.
Thirdly, this article provides a description of each approach, and illustrates the potential benefits and disadvantages of each technique with reference to indication and spine level performed. Answer: The fusion status after posterior lumbar interbody fusion (PLIF) does indeed affect patient-based quality of life (QOL) outcomes. In a study examining the effects of lifestyle-related diseases on surgical outcomes after PLIF, it was found that the presence of hypertension (HT), chronic kidney disease (CKD), and abdominal aorta calcification (AAC) was associated with significantly worse clinical outcomes at 2 years postoperatively. Additionally, advanced AAC significantly affected fusion status, with a higher AAC score being significantly associated with collapsed union and nonunion compared to union in situ (PUBMED:26835195). This suggests that successful fusion status is correlated with better clinical outcomes and thus likely better QOL for patients. Furthermore, the use of interbody devices (IBDs) at every level of fusion construct has been shown to significantly reduce the rate of pseudarthrosis compared to when IBDs were not used at all levels (PUBMED:38186530). Since pseudarthrosis, or nonunion, can lead to poor surgical outcomes and potentially necessitate revision surgery, it can be inferred that achieving solid fusion status is important for patient satisfaction and QOL. In summary, the available evidence indicates that achieving a successful fusion status after PLIF is important for patient-based QOL outcomes, as nonunion or suboptimal fusion can lead to worse clinical results and may necessitate further intervention.
Instruction: Does intravenous landiolol, a β1-adrenergic blocker, affect stroke volume variation? Abstracts: abstract_id: PUBMED:23653103 Does intravenous landiolol, a β1-adrenergic blocker, affect stroke volume variation? Purpose: There are no reports about the effect of bradycardia on stroke volume variation (SVV), and we hypothesized that induced bradycardia alters the value of SVV. Landiolol, an ultra-short-acting adrenergic β1-receptor blocking agent, has been reported to induce bradycardia without decreasing blood pressure. The initial aim of this prospective study was to investigate changes in SVV values caused by induced bradycardia in patients with good cardiac function. Methods: At 30 min after anesthesia induction, if heart rate (HR) was >80 bpm, the patient was chosen as a subject. Ten ASA physical status I-II patients aged 38-75 years who were scheduled for elective abdominal surgery were included in this study. Baseline values were recorded, and then administration of landiolol was started at 125 μg/kg/min for 1 min and then continued at 40 μg/kg/min. SVV and other parameters were recorded at baseline and 3 min after continuous landiolol injection. Results: Contrary to our expectations, landiolol significantly decreased systolic and diastolic arterial pressure, as well as HR, SVV, cardiac output, stroke volume index, and end-tidal CO2 pressure, whereas systemic vascular resistance increased significantly. Conclusions: SVV decreased after continuous administration of a β1-adrenergic blocker, probably because of a decrease in the difference between maximum stroke volume (SV) and minimum SV, or the downward shift of the Frank-Starling curve that occurred after landiolol administration. We believe that SVV values might be overestimated or misinterpreted when HR is decreased by landiolol and might not necessarily indicate that the patient is hypervolemic or normovolemic.
abstract_id: PUBMED:10192736 The effects of a new ultra-short-acting beta-adrenergic blocker, ONO-1101, on cardiac function during and after cardiopulmonary bypass. The administration of an ultra-short-acting beta-adrenergic antagonist, esmolol, has been introduced as a novel method for beating-heart surgery. In the present study, a new ultra-short-acting beta-blocker, ONO-1101, was administered during cardiopulmonary bypass (CPB) to investigate its effects on cardiac function and hemodynamics. Nine adult mongrel dogs underwent 60 min of CPB during which they were given either ONO-1101 (ONO group; n = 4) or saline (control group; n = 5). In the ONO group, the hearts became flaccid enough for surgery to be performed without cardiac standstill within 10 min after the commencement of ONO-1101, with significant decreases in the heart rate, the preload recruitable stroke work (PRSW), and the slope of the end-systolic left ventricular pressure-volume relationship (Emax). The mean arterial pressure and systemic vascular resistance also decreased, but were maintained above 50 mmHg during CPB without catecholamine support. These indices increased to the control group level 20 min after the discontinuation of ONO-1101. The serum concentration of ONO-1101 decreased from the maximum level of 121 ± 15 µg/ml soon after infusion to 11 ± 5 µg/ml within 30 min after discontinuation. These data suggest that ONO-1101, as an ultra-short-acting beta-adrenergic blocker, may be useful for enabling beating-heart surgery to be performed without aortic cross-clamping. abstract_id: PUBMED:22156314 Randomized study on the efficacy and safety of landiolol, an ultra-short-acting β1-adrenergic blocker, in patients with acute myocardial infarction undergoing primary percutaneous coronary intervention. Background: It is still controversial whether intravenous administration of a β-blocker in the very acute phase of acute myocardial infarction (AMI) is beneficial.
Landiolol is an ultra-short-acting β-blocker that has less effect on blood pressure, but little is known about its efficacy and safety in patients with AMI undergoing primary percutaneous coronary intervention (PCI). Methods And Results: A consecutive series of 96 patients with AMI not manifesting cardiogenic shock were prospectively randomized to a landiolol (n = 47) or a control group (n = 49). Continuous administration of landiolol (3 µg·kg⁻¹·min⁻¹ for 24 h) was started just after PCI in the landiolol group, but not in the control group. Heart rate decreased by 9.4 ± 1.7 beats/min after initiation of landiolol (P<0.01), but was unchanged in the control group. Left ventricular ejection fraction assessed 6 months later was greater than that at 2 weeks in the landiolol group (52.0 ± 1.5 vs. 49.1 ± 1.5%, P = 0.01), but remained unchanged in the control group. Left ventricular end-diastolic volume index assessed 6 months later was increased compared with that at 2 weeks in the control group (78.0 ± 2.7 vs. 72.5 ± 2.8 ml/m², P = 0.02), whereas it was unchanged in the landiolol group. Conclusions: Early intravenous administration of landiolol in patients with AMI undergoing PCI is safe and has the potential to improve cardiac function and inhibit cardiac remodeling in the chronic phase. abstract_id: PUBMED:28505021 Use of a Short-Acting β1 Blocker During Endotoxemia May Reduce Cerebral Tissue Oxygenation if Hemodynamics are Depressed by a Decrease in Heart Rate. Background: A decrease in heart rate (HR) using a short-acting β blocker has potential benefits in sepsis; however, depression of hemodynamics and reduction of cerebral oxygenation may also occur in endotoxemia. Methods: Seventeen swine were allocated to landiolol or control groups. In the landiolol group, the dose was sequentially changed from 0 to 40 to 200 μg·kg⁻¹·min⁻¹, and stopped.
Hemodynamics, blood variables, and the cerebral tissue oxygenation index (TOI) were recorded by near infrared spectroscopy at each dose. Lipopolysaccharide (LPS) was then administered continuously at 1 μg·kg⁻¹·h⁻¹ after a 100 μg bolus administration. After 30 and 150 min, representing two severity stages of endotoxemia (endotoxemia 1 and 2), landiolol was administered as above and measurements were made. In the control group, landiolol was not administered, but measurements were made. Results: LPS increased HR and landiolol decreased HR, with similar effects in each endotoxemia stage. In endotoxemia 1, LPS decreased stroke volume (SV), but landiolol restored SV to a value similar to that before endotoxemia, and did not decrease cardiac output (CO), even at 200 μg·kg⁻¹·min⁻¹. In contrast, landiolol did not restore SV in endotoxemia 2, resulting in a decrease in CO and mean arterial pressure, accompanied by a dose-dependent decrease in TOI. Conclusions: A short-acting β blocker has various hemodynamic effects in endotoxemia. Use of a short-acting β blocker during endotoxemia may reduce cerebral tissue oxygenation if hemodynamics are depressed by a decrease in HR. abstract_id: PUBMED:32921677 Comparison of Landiolol and Digoxin as an Intravenous Drug for Controlling the Heart Rate in Patients with Atrial Fibrillation and Severely Depressed Left Ventricular Function. Clinical experience with landiolol use in patients with atrial fibrillation (AF) and a severely depressed left ventricular (LV) function is limited. We compared the efficacy and safety of landiolol with that of digoxin as an intravenous drug for controlling the heart rate (HR) during AF associated with a very low LV ejection fraction (LVEF). We retrospectively analyzed 53 patients treated with landiolol (n = 34) or digoxin (n = 19) for AF tachycardias with an LVEF ≤ 25%. The landiolol dose was adjusted between 0.5 and 10 μg/kg/minute according to the patient's condition.
The response to treatment was defined as a decrease in the HR to ≤ 110/minute together with a reduction of ≥ 20% from baseline. There were no significant differences between the two groups regarding the clinical characteristics. The responder rate to landiolol at 24 hours was significantly higher than that to digoxin (71.0% versus 41.2%; odds ratio: 4.65, 95% confidence interval: 1.47-31.0, P = 0.048). The percent decrease in the HR from baseline at 1, 2, 12, and 24 hours was greater in the landiolol group than in the digoxin group (P < 0.01, P = 0.071, P = 0.036, and P = 0.016, respectively). The systolic blood pressure (SBP) from baseline within 24 hours after administering landiolol was significantly reduced, whereas digoxin did not decrease the SBP over time. Hypotension (< 80 mmHg) occurred in two patients in the landiolol group and none in the digoxin group (P = 0.53). Landiolol could be more effective in controlling the AF HR than digoxin, even in patients with severely depressed LV function. However, careful hemodynamic monitoring is necessary when administering landiolol. abstract_id: PUBMED:25876017 Evaluation of near infrared spectroscopy for detecting the β blocker-induced decrease in cerebral oxygenation during hemodilution in a swine model. β blockers reduce cerebral oxygenation after acute hemodilution and may contribute to the incidence of stroke when used perioperatively. The goal of the study was to investigate whether cerebral tissue oxygenation using near infrared spectroscopy can detect the β blocker-induced decrease in cerebral oxygenation depending on the severity of hemodilution and/or the dose of β blockers. Animals were anesthetized with 2% isoflurane and randomly assigned to a landiolol or esmolol group. After baseline measurement, landiolol or esmolol was administered at 40 µg/kg/min for 20 min, increased to 200 µg/kg/min for 20 min, and then stopped.
Hemodynamic and arterial variables and the tissue oxygenation index (TOI) were recorded at each β blocker dose. Two stages of hemodilution were sequentially induced by repeated hemorrhage of 600 ml (33% of estimated blood volume) and infusion of the same volume of hydroxyethylstarch. During each stage, landiolol or esmolol was similarly administered and measurements were made. Landiolol and esmolol both dose-dependently decreased heart rate, mean arterial pressure and cardiac output, depending on the severity of hemodilution. Landiolol at 40 µg/kg/min was almost equivalent in potency to 200 µg/kg/min esmolol for decreasing HR before hemodilution. Based on the TOI, short-acting β blockers reduced cerebral oxygenation in a dose-dependent manner during hemodilution, and oxygenation returned to the baseline level after drug infusion was stopped. TOI may be useful for identification of a decrease in cerebral oxygenation for patients receiving β blockade during surgery associated with major bleeding. abstract_id: PUBMED:35322092 Female hormones prevent sepsis-induced cardiac dysfunction: an experimental randomized study. Although epidemiologic research has demonstrated significant differences in incidence and outcomes of sepsis according to sex, their underlying biological mechanisms are poorly understood. Here, we studied the influence of hormonal status by comparing in vivo cardiac performances measured by MRI in non-ovariectomized and ovariectomized septic female rats. Control and ovariectomized rats were randomly allocated to the following groups: sham, sepsis and sepsis plus landiolol. Sepsis was induced by caecum ligation and punction (CLP). Landiolol, a short-acting selective β1-adrenergic blocker improving the in vivo cardiac performance of septic male rats was perfused continuously after sepsis induction. Cardiac MRI was carried out 18 h after induction of sepsis to assess in vivo cardiac function. 
Capillary permeability was evaluated by Evans Blue administration and measurement of its tissue extravasation. Variation in myocardial gene and protein expression was also assessed by qPCR and western blot in the left ventricular tissue. Sepsis reduced indexed stroke volume, cardiac index and indexed end-diastolic volume compared to the sham group in ovariectomized females, whereas it had no effect in control females. This was associated with overexpression of JAK2 and STAT3 phosphorylation at the Ser727 site, and an inhibition of the adrenergic pathways in ovariectomized females. Landiolol increased the indexed stroke volume by reversing the indexed end-diastolic volume reduction after sepsis in ovariectomized females, while it decreased indexed stroke volume and cardiac index in controls. This was supported by an overexpression of genes involved in calcium influx in ovariectomized females, while an inactivation of the β-adrenergic and calcium efflux pathways was observed in control females. Sepsis decreased in vivo cardiac performance in ovariectomized females but not in control females, presumably associated with more pronounced inflammation, inhibition of the adrenergic pathway and calcium efflux defects. Administration of landiolol prevents this cardiac dysfunction in ovariectomized females, with a probable activation of calcium influx, while it has deleterious effects in control females, in which calcium efflux pathways were down-regulated. abstract_id: PUBMED:22481100 Low-dose β-blocker in combination with milrinone safely improves cardiac function and eliminates pulsus alternans in patients with acute decompensated heart failure. Background: The purpose of this study was to determine whether a low-dose β-blocker, in combination with milrinone, improves cardiac function in acute decompensated heart failure (ADHF) with tachycardia.
Methods And Results: Twenty ADHF patients (New York Heart Association classification III, n=1, and IV, n=19; heart rate [HR], 107±12 beats/min; left ventricular ejection fraction, 24±7%; cardiac index [CI], 2.2±0.6 L·min⁻¹·m⁻²; pulmonary capillary wedge pressure [PCWP], 26±8 mmHg) were enrolled in this study. The patients first underwent conventional therapy with milrinone, vasodilators and diuretics; landiolol (1.5-6.0 µg·kg⁻¹·min⁻¹; i.v.), which is an ultra-short-acting β1-selective blocker, was then added to the treatment regimen to study its effect on hemodynamics. Low-dose landiolol (1.5 µg·kg⁻¹·min⁻¹) significantly reduced HR by 11% without changing blood pressure (BP) and CI, whereas higher doses (≥3.0 µg·kg⁻¹·min⁻¹) tended to decrease BP and CI while increasing PCWP and systemic vascular resistance. After treatment with landiolol (1.5 µg·kg⁻¹·min⁻¹), hemodynamic parameters such as PCWP, stroke volume index, SvO2, rate pressure product, filling time/RR, E/e', and Tei index were significantly improved. Conclusions: A low-dose β-blocker in combination with milrinone improved cardiac function in ADHF patients with tachycardia; therefore, it may be considered as an adjunct therapy for use when standard therapy with milrinone is not effective at slowing HR. abstract_id: PUBMED:31137022 Addition of a β1-Blocker to Milrinone Treatment Improves Cardiac Function in Patients with Acute Heart Failure and Rapid Atrial Fibrillation. Background: Tachycardia worsens cardiac performance in acute decompensated heart failure (ADHF). We investigated whether heart rate (HR) optimization by landiolol, an ultra-short-acting β1-selective blocker, in combination with milrinone improved cardiac function in patients with ADHF and rapid atrial fibrillation (AF).
Methods And Results: We enrolled 9 ADHF patients (New York Heart Association classification IV; HR, 138 ± 18 bpm; left ventricular [LV] ejection fraction, 28 ± 8%; cardiac index [CI], 2.1 ± 0.3 L·min⁻¹·m⁻²; pulmonary capillary wedge pressure [PCWP], 24 ± 3 mm Hg), whose HRs could not be reduced using standard treatments, including diuretics, vasodilators, and milrinone. Landiolol (1.5-6.0 µg·kg⁻¹·min⁻¹, intravenous) was added to milrinone treatment to study its effect on hemodynamics. The addition of landiolol (1.5 µg·kg⁻¹·min⁻¹) significantly reduced HR by 11% without changing systolic blood pressure (BP) and resulted in a significant decrease in PCWP and a significant increase in stroke volume index (SVI), suggesting that HR reduction restores incomplete LV relaxation. Administration of more than 3.0 µg·kg⁻¹·min⁻¹ of landiolol decreased BP, CI, and SVI. Conclusion: The addition of landiolol at doses of <3.0 µg/kg/min to milrinone improved cardiac function in decompensated chronic heart failure with rapid atrial fibrillation by selectively reducing HR. abstract_id: PUBMED:20511411 A cardioprotective agent of a novel calpain inhibitor, SNJ-1945, exerts β1 actions on left ventricular mechanical work and energetics. We have previously shown that a newly developed calpain inhibitor, SNJ-1945 (SNJ), with good aqueous solubility prevents the heart from KCl arrest-reperfusion injury associated with the impairment of total Ca²⁺ handling by inhibiting the proteolysis of α-fodrin as a cardioplegia. The aim of the present study was to investigate certain actions of this calpain inhibitor, SNJ, on left ventricular (LV) mechanical work and energetics in cross-circulated excised rat hearts undergoing blood perfusion with 40 µM SNJ. Mean end-systolic pressure at midrange LV volume and systolic pressure-volume area (PVA) at mLVV (a total mechanical energy/beat) were significantly increased by SNJ perfusion (P < 0.01).
Mean myocardial oxygen consumption per beat (VO2) intercepts (VO2 for the total Ca²⁺ handling in excitation-contraction coupling and basal metabolism) of the VO2-PVA linear relations were significantly increased (P < 0.01), with unchanged mean slopes of the VO2-PVA linear relations. Pretreatment with the selective β1-blocker landiolol (10 µM) blocked these effects of SNJ perfusion. There were no significant differences in mean basal metabolic oxygen consumption among the normal, 40 µM SNJ, and 10 µM landiolol + 40 µM SNJ groups. Our results indicate that water-soluble SNJ exerted positive actions on mechanical work and energetics mediated via β1-adrenergic receptors, associated with the enhancement of total Ca²⁺ handling in excitation-contraction coupling and with unchanged contractile efficiency. In clinical settings, this pharmacological action of SNJ is beneficial as an additive agent for cardioplegia. Answer: Yes, intravenous landiolol, a β1-adrenergic blocker, does affect stroke volume variation (SVV). A study reported that continuous administration of landiolol significantly decreased heart rate (HR), SVV, cardiac output, stroke volume index, and end-tidal CO2 pressure, while systemic vascular resistance values increased significantly. The study concluded that SVV decreased after continuous administration of landiolol, likely due to a decrease in the difference between maximum stroke volume (SV) and minimum SV, or a downward shift of the Frank-Starling curve after landiolol administration. It was suggested that SVV values might be overestimated or misinterpreted when HR is decreased by landiolol and might not necessarily indicate that the patient is hypervolemic or normovolemic (PUBMED:23653103).
Instruction: Are healthcare workers in England and Wales at increased risk of tuberculosis? Abstracts: abstract_id: PUBMED:8789976 Are healthcare workers in England and Wales at increased risk of tuberculosis? Objective: To determine whether healthcare workers in England and Wales are at increased risk of tuberculosis and to examine the frequency of drug resistance in this population. Design: Comparison of notification rates by occupation obtained from national tuberculosis notification surveys in 1988 and 1993, with denominators from the 1991 census. Subjects: People with notified tuberculosis in professional and associate professional occupations from the two surveys. Main Outcome Measures: Rates of notified tuberculosis in health professionals (mainly doctors) and health associate professionals (mainly nurses) compared with rates in other professional and associate professional occupations, adjusted for ethnic group, sex, and age. Results: 119 cases of tuberculosis were identified in healthcare workers, including 61 nurses and 42 doctors. The crude notification rate in healthcare workers was 11.8 per 100,000 per year (95% confidence interval 9.8 to 14.1) compared with 3.3 per 100,000 per year (2.9 to 3.6) in other professional and associate professional occupations; rate ratios were higher (range 1.7 to 3.2) in all ethnic groups. The relative risk adjusted for ethnic group, sex, and age was 2.4 (95% confidence interval 2.0 to 3.0), slightly higher for health professionals (2.7 (1.9 to 3.8)) than for associate professionals (2.0 (1.5 to 2.6)). No multiple drug resistant strains of tuberculosis were identified in healthcare workers. 
Conclusions: Better detection and notification of cases of tuberculosis in healthcare workers may account for some of the apparent increased risk, but these findings imply that tuberculosis remains a hazard for healthcare workers and highlight the importance of ensuring that occupational health monitoring and protection of these workers are not neglected. abstract_id: PUBMED:17599921 Survey of tuberculosis incidents in hospital healthcare workers, England and Wales, 2005. Background: Tuberculosis (TB) incidence has increased in England and Wales since 1987, with much of the increase in migrants from high TB incidence countries. The National Health Service increasingly depends on healthcare workers (HCWs) trained abroad, often from areas of high TB incidence. A retrospective survey of TB incidents in hospital-based HCWs was carried out to describe their frequency, distribution and characteristics. Methods: Reports of HCWs with TB in hospitals were identified among routine surveillance of TB incidents. Additional data on these, and reports of further incidents fitting the study definition, were obtained by contacting local and regional Health Protection Units. Results: At least 105 incidents of TB in hospital-based HCWs occurred in England and Wales in 2005. Most involved HCWs from high incidence countries, and most cases had had pre-employment occupational health screening. We found no evidence of onward transmission within hospitals. Conclusions: Pre-employment screening for active disease may not be enough to prevent the occurrence of these incidents. A high index of suspicion among HCWs with TB symptoms is needed. Detection of latent infection with interferon gamma release assays, and the use of preventive treatment, should be evaluated. abstract_id: PUBMED:22117103 Tuberculosis and HIV co-infection in healthcare workers in England and Wales, 1999-2005.
This study used linked national tuberculosis (TB) and HIV surveillance data to investigate recent trends and factors associated with HIV co-infection (TB-HIV) in healthcare workers (HCWs) with TB in England and Wales. The methods applied were the χ2 trend test and logistic regression. Overall, 14% (231/1627) of HCWs with TB were co-infected with HIV, increasing from 8% in 1999 to 14% in 2005 (P<0.001). Most (78%) HCWs were non-UK born, and 74% of these developed TB ≥2 years post-entry. Being born in Sub-Saharan Africa was an independent predictor for TB-HIV, especially for female HCWs (odds ratio 66.5, 95% confidence interval 16.3-271.1), who also had a lower median CD4 count than other co-infected women (106/mm³, interquartile range 40-200, P<0.01). Voluntary HIV testing of new HCWs should be encouraged as an opportunity for early diagnosis. Post-entry, a high index of clinical suspicion for TB in those most at risk remains important. abstract_id: PUBMED:7728031 Increasing incidence of tuberculosis in England and Wales: a study of the likely causes. Objective: To examine factors responsible for the recent increase in tuberculosis in England and Wales. Design: Study of the incidence of tuberculosis (a) in the 403 local authority districts in England and Wales, ranked according to Jarman score, and (b) in one deprived inner city district, according to ethnic origin and other factors. Setting: (a) England and Wales 1980-92, and (b) the London borough of Hackney 1986-93. Main Outcome Measure: Age and sex adjusted rate of tuberculosis. Results: In England and Wales notifications of tuberculosis increased by 12% between 1988 and 1992. The increase was 35% in the poorest 10th of the population and 13% in the next two tenths; in the remaining 70% there was no increase. In Hackney the increase affected traditionally high risk and low risk ethnic groups to a similar extent.
In the "low risk" white and West Indian communities the incidence increased by 58% from 1986-8 (78 cases) to 1991-3 (123), whereas in residents of Indian subcontinent origin the increase was 41% (from 51 cases to 72). Tuberculosis in recently arrived immigrants--refugees (11% of the Hackney population) and Africans (6%)--accounted for less than half of the overall increase, and the proportion of such residents was much higher than in most socioeconomically deprived districts. The local increase was not due to an increase in the proportion of cases notified, to HIV infection, nor to an increase in homeless people. Conclusions: The national rise in tuberculosis affects only the poorest areas. Within one such area all residents (white and established ethnic minorities) were affected to a similar extent. The evidence indicates a major role for socioeconomic factors in the increase in tuberculosis and only a minor role for recent immigration from endemic areas. abstract_id: PUBMED:9441091 The annual risk of infection with Mycobacterium tuberculosis in England and Wales since 1901. Setting: England and Wales. Objective: To estimate the magnitude and trend in the annual risk of infection with Mycobacterium tuberculosis in England and Wales since 1901. Design: Estimates for the prechemotherapy era are derived assuming that 1% of new infections among 0-4 year olds led to fatal tuberculosis meningitis, as found in the Netherlands. The estimates are validated against data from the 1949-1950 national tuberculin survey. We explore the trend thereafter using tuberculous meningitis notifications and data from the 1971-1973 national tuberculin survey, and discuss the utility of data collected under the national bacille Calmette-Guérin (BCG) vaccination scheme for estimating the annual risk of infection. 
Results: Tuberculosis meningitis mortality rates among 0-4 year olds declined at 4% per annum until 1950, and suggest that the annual risk of infection declined from 12% in 1901 to 1.9% in 1949. The decline in the annual risk of infection probably accelerated in 1950, although its magnitude cannot be determined accurately. Conclusion: An accelerated decline in the annual risk of infection in England and Wales from 1950 probably resulted from the introduction of chemotherapy, which dramatically reduced the prevalence of sources of infection in the population. Data collected during the national BCG vaccination scheme were found to be unsuitable for estimating infection risks. abstract_id: PUBMED:32110034 The Prevalence and Demographic Risk Factors for Latent Tuberculosis Infection (LTBI) Among Healthcare Workers in Semarang, Indonesia. Purpose: To determine the prevalence and demographic risk factors for latent tuberculosis infection (LTBI) among healthcare workers in Semarang, Indonesia. Methods: A cross-sectional study involving 195 healthcare workers from 34 primary health centers was conducted from August to October 2019. The relationship between independent variables and dependent variables was analyzed using a multivariable logistic regression analysis. Results: The prevalence of LTBI among healthcare workers in this study was 23.6%. Comorbidities were the only risk factor for LTBI identified among other risk factors (OR=3.39, 95% CI: 0.99-11.62, p=0.04). Other demographic factors such as age (OR=0.93, 95% CI: 0.45-1.92, p=0.839), gender (OR=0.79, 95% CI: 0.23-2.72, p=0.708, smoking habits (OR=2.54, 95% CI: 0.52-12.38, p=0.247), and length of work (OR=1.43, 95% CI: 0.70-2.91, p=0.331) were not significant risk factors for LTBI. Conclusion: Healthcare workers suffering from comorbidity have a high risk for tuberculosis infection, and should not work in areas where they would be exposed to patients with tuberculosis. 
Healthcare workers need to apply occupational safety standards during contact with TB patients or specimens to minimize the disease transmission. abstract_id: PUBMED:19901384 Estimating the risk of cattle exposure to tuberculosis posed by wild deer relative to badgers in England and Wales. Wild deer populations in Great Britain are expanding in range and probably in numbers, and relatively high prevalence of bovine tuberculosis (bTB, caused by infection with Mycobacterium bovis) in deer occurs locally in parts of southwest England. To evaluate the M. bovis exposure risk posed to cattle by wild deer relative to badgers in England and Wales, we constructed and parameterized a quantitative risk model with the use of information from the literature (on deer densities, activity patterns, bTB epidemiology, and pathology) and contemporary data on deer, cattle, and badger (Meles meles) distribution and abundance. The median relative risk score for each of the four deer species studied--red (Cervus elaphus), fallow (Dama dama), and roe (Capreolus capreolus) deer, and muntjac (Muntiacus reevesi)--was lower than unity (the relative risk set for badgers, the putative main wildlife reservoir of M. bovis in England and Wales). However, the 95th percentiles associated with risk estimates were large, and the upper limits for all four deer species exceeded unity. Although M. bovis exposure risks to cattle from deer at pasture are likely to be lower than those from badgers across most areas of England and Wales where cattle are affected by bTB because these areas coincide with high-density badger populations but not high-density deer populations, we predict the presence of localized areas where relative risks posed by deer may be considerable. Moreover, wherever deer are infected, risks to cattle may be additive to those posed by badgers. 
There are considerable knowledge gaps associated with bTB in deer, badgers, and cattle, and data available for model parameterization were generally of low quality and high variability; consequently, model outputs were subject to some uncertainty. Improved estimates of the proportion of time that deer of each species spend at pasture, the likelihood and magnitude of M. bovis excretion, and local badger and deer densities appear most important for improving estimates of relative risk in this system. abstract_id: PUBMED:1477385 HIV-related tuberculosis in England and Wales. Numbers of tuberculosis notifications in England and Wales increased by 8% from 1987 to 1989. An analysis of notifications by age and sex has been undertaken to determine whether this increase has been due to an increase in young male adults, as has occurred in the USA, implying that HIV is largely responsible for the increase in notifications. Though notifications increased by 9.5% in younger males between 1987 and 1989, they also increased by over 10% amongst most age groups of females, and elderly males. These findings would suggest that there are a number of factors causing a rise in tuberculosis notifications, and that HIV is not yet directly implicated in England and Wales. abstract_id: PUBMED:2195535 An assessment of the value of BCG vaccination in England and Wales. The scheme for BCG vaccination at about age 13 years in England and Wales was introduced in 1953. The coverage of the scheme is now 75%, and the efficacy of BCG has been 75-80% throughout, but the incidence of tuberculosis in young white adults has been decreasing steeply since 1950. The benefits of the scheme have consequently been decreasing equally rapidly in recent years. Discontinuation of the scheme would lead to some additional cases of tuberculosis but would not greatly increase the very small current risk of tuberculosis developing in unvaccinated 13-year-olds.
abstract_id: PUBMED:28325904 Abundance of badgers (Meles meles) in England and Wales. The European badger (Meles meles) is of considerable interest in the UK as it is both a protected species and the main wildlife reservoir for bovine tuberculosis infection in cattle. While there have been three national badger surveys in the 1980s, 1990s and 2011-13, using the number of badger main setts as a proxy for the abundance of badger social groups, none has combined contemporary data on social group size at landscape and national scales. We estimated social group size by genotyping hair samples collected at 120 main setts across England and Wales and employing a capture-mark-recapture method based on genotypes. The estimated mean social group size in England and Wales was 6.74 (±0.63) badgers. There was considerable variation in badger social group size among Land Class Groups (LCGs), with a low of 2.67 in LCG3 and a high of 7.92 in LCG4. Combining these results with the recent Badger Sett Survey of England and Wales, we estimate there are approximately 485,000 badgers (95% confidence intervals 391,000-581,000) in England and Wales. Although direct comparison with previous estimates is not ideal owing to methodological differences, our results are consistent with a marked increase in the badger population of England and Wales since the 1980s. Answer: Yes, healthcare workers in England and Wales are at increased risk of tuberculosis (TB). A study comparing notification rates by occupation from national tuberculosis notification surveys in 1988 and 1993 found that healthcare workers had a crude notification rate of 11.8 per 100,000 per year, which was significantly higher than the rate of 3.3 per 100,000 per year in other professional and associate professional occupations. 
The relative risk adjusted for ethnic group, sex, and age was 2.4, indicating that healthcare workers were more than twice as likely to be notified with TB compared to other professionals (PUBMED:8789976). Additionally, a survey of TB incidents in hospital healthcare workers in 2005 revealed that at least 105 incidents of TB occurred among this group in England and Wales. Most of these healthcare workers were from high TB incidence countries, and despite pre-employment occupational health screening, there was no evidence of onward transmission within hospitals. This suggests that pre-employment screening may not be sufficient to prevent TB incidents among healthcare workers (PUBMED:17599921). Furthermore, a study on TB and HIV co-infection in healthcare workers between 1999 and 2005 found that 14% of healthcare workers with TB were co-infected with HIV, with the proportion increasing over the years. The majority of these healthcare workers were non-UK born and developed TB more than two years after entering the UK. Being born in Sub-Saharan Africa was a significant predictor for TB-HIV co-infection, especially for female healthcare workers (PUBMED:22117103). These findings imply that TB remains a significant occupational hazard for healthcare workers in England and Wales, highlighting the importance of effective occupational health monitoring and protection for this group.
Instruction: Does the Child Behavior Checklist predict levels of preoperative anxiety at anesthetic induction and postoperative emergence delirium? Abstracts: abstract_id: PUBMED:24861721 Does the Child Behavior Checklist predict levels of preoperative anxiety at anesthetic induction and postoperative emergence delirium? A prospective cohort study. Background: preoperative anxiety at induction and postoperative emergence delirium (ED) in children are associated with postoperative behavioral changes and adjustment disorders. This study's aim is to assess the value of the Child Behavior Checklist (CBCL) score in order to predict anxiety during induction and emergence delirium after anesthesia in children undergoing elective day-care surgery. Methods: Anxiety at induction, assessed by the modified Yale Preoperative Anxiety Scale (mYPAS), was studied as outcome in 401 children (60.1% male, age range: 1.5-16 years). For 343 of these children (59.8% male, age range: 1.5-16 years) ED could be investigated postoperatively, as assessed by the Pediatric Anesthesia Emergence Delirium scale (PAED). Demographic data, healthcare contacts, anesthesia and surgical data were registered. Preoperative emotional/behavioral problems, during the 6 months prior to surgery, were assessed by the CBCL. Hierarchical, multiple regression was used to test whether anxiety and ED could be predicted by CBCL scores. Results: Children with a higher CBCL score on preoperative internalizing problems (e.g. anxious/depression) showed preoperative more anxiety at induction (P=0.003). A higher CBCL score on preoperative emotional/behavioral problems was not associated with ED. Conclusion: The CBCL predicted anxiety at induction but not ED. abstract_id: PUBMED:19224789 Real-time assessment of perioperative behaviors and prediction of perioperative outcomes. 
Background And Aims: New onset maladaptive behaviors, such as temper tantrums, nightmares, bed-wetting, attention-seeking, and fear of being alone are common in children after outpatient surgery. Preoperative anxiety, fear and distress behaviors of children predict postoperative maladaptive behaviors as well as emergence delirium. Parental anxiety has also been found to influence children's preoperative anxiety. Currently, there is no real-time and feasible tool to effectively measure perioperative behaviors of children and parents. We developed a simple and real-time scale, the Perioperative Adult Child Behavioral Interaction Scale (PACBIS) to assess perioperative child and parent behaviors that might predict postoperative problematic behavior and emergence excitement. Methods: We used the PACBIS to evaluate perioperative behaviors during anesthetic induction and recovery in a sample of 89 children undergoing tonsillectomies and adenoidectomies, and their parents. Preoperative anxiety with the modified Yale Preoperative Anxiety Scale, compliance with induction of anesthesia with Induction Compliance Checklist, and incidence of emergence excitement were also recorded. Results: The PACBIS demonstrated good concurrent validity with modified Yale Preoperative Anxiety Scale and Induction Compliance Checklist and predicted postanesthetic emergence excitement. Discussion: The PACBIS is the first real-time scoring instrument that evaluates children's and parents' perioperative behavior. The specific behaviors identified by the PACBIS might provide targets for interventions to improve perioperative experiences and postoperative outcomes. abstract_id: PUBMED:32306238 Predicting Intense Levels of Child Anxiety During Anesthesia Induction at Hospital Arrival. In children, intense levels of anxiety during anesthetic induction are associated with a higher risk of pain, poor recovery, and emergence delirium. 
Therefore, it is important to identify these high-risk children at hospital arrival. The current study examined internalizing behavior (Child Behavior Checklist, CBCL) and state anxiety measures (modified Yale Preoperative Anxiety Scale, mYPAS, and State Trait Anxiety Inventory for Children, STAIC) at hospital arrival as predictors of anxiety during induction of anesthesia. One hundred children (aged 4 to 12 years) undergoing elective daycare surgery were included. The STAIC and mYPAS at hospital arrival were significant predictors of anxiety during induction, whereas CBCL was not. The STAIC state form at hospital arrival was the strongest predictor and could be used to identify children who will experience intense levels of anxiety during anesthetic induction, with sufficient to good diagnostic accuracy. Using the STAIC at hospital arrival allows targeted interventions to reduce anxiety in children. abstract_id: PUBMED:20359845 Perioperative psychobehavioural changes in children The relevant literature since the 1940s has been collected from the Medline database, using the keywords: child, operation, anxiety, distress, postoperative complications, preparation, premedication, parental presence, prevention. Preoperative anxiety, emergence delirium, and postoperative behavior changes are all manifestations of psychological distress in children undergoing surgery. Preoperative anxiety is most prominent during anaesthesia induction. Emergence delirium is frequent and somewhat independent of pain levels. Postoperative behavior changes most often include separation anxiety, tantrums, fear of strangers, eating problems, nightmares, night terrors and bedwetting. These difficulties tend to resolve themselves with time but can last up to one year in some children. 
The major risk factors for postoperative behavior problems are young age, prior negative experience with hospitals or medical care, certain kinds of hospitalization, postoperative pain, parental anxiety, and certain personality traits of the child. Currently, tools exist for quantifying anxiety (m-YPAS) and postoperative behavior (PHBQ). It is possible to identify those children who are at risk for postoperative complications during the preanaesthesia consultation by paying close attention to children under six years with higher levels of emotionality and impulsivity and poorer socialization skills with anxious parents. Suggested strategies for reducing child distress include preoperative preparation, premedication, parental presence during anaesthesia induction, and interventions affecting the child's environment, such as hypnosis. There are numerous ways to provide preoperative preparation (information, modeling, role playing, encouraging effective coping) and their effectiveness is proven in the preoperative setting but not during anaesthesia induction or in the operating room. Midazolam has been shown to be an effective preoperative sedative for reducing anxiety. Parental presence during induction has been shown to effectively reduce preoperative anxiety in children in certain contexts (when the parent is calm and the child is anxious). It is worthwhile if it is integrated into a family-centered anxiety management program and remains one of several options offered to families. Overall, taking into account the child's psychological needs should be considered an essential part of paediatric anaesthesia. Tools and techniques are available for assessing and managing the perioperative distress experienced by children. abstract_id: PUBMED:31869478 Reducing preoperative anxiety with Child Life preparation prior to intravenous induction of anesthesia: A randomized controlled trial. 
Background: Induction of anesthesia can be stressful: Up to 60% of children suffer significant anxiety immediately before surgery. Anxiety is associated with higher postoperative analgesia requirements, higher incidence of emergence delirium, and detrimental effects on sleep and behavior. Child Life preparation includes role-play, expectation-setting, and teaching coping strategies. Aim: The aim of this trial was to determine whether preoperative Child Life preparation reduces anxiety prior to intravenous induction of anesthesia. Methods: Children aged 3-10 years, with no known preexisting anxiety and no preoperative anxiolytics, undergoing elective day surgery lasting ≤ 2 hours, were enrolled in a randomized controlled trial. Each child's baseline anxiety was assessed in the anesthetic care unit, using the modified Yale Preoperative Anxiety Scale-Short Form (mYPAS-SF, observational scores from 22.9, minimal anxiety, to 100, maximal anxiety) as the primary outcome. The child was randomly assigned to intervention (minimum 15 minutes Child Life preparation) or control (standard practice without Child Life preparation). Participants entered the operating room with one parent. A researcher (blinded to group allocation) scored the child's operating room anxiety using mYPAS-SF, up to the first attempt at intravenous cannulation. Results: Fifty-nine children completed the study, aged median [interquartile range] 5 [3-7] years. Baseline mYPAS-SF anxiety was 29.2 [22.9-37.5] for all children, and operating room anxiety was 29.2 [22.9-49.0]. Operating room anxiety was higher than baseline in 16/31 (52%) children in the control group and 6/28 (21%) in the Child Life preparation group.
ANCOVA revealed a significant effect of baseline mYPAS-SF anxiety and group on operating room anxiety (F = 10.31, P < .001, adjusted R2 = .24); individual parameter estimates indicated that Child Life preparation reduced operating room anxiety by 13.8 (95% CI 4.4-23.1) points compared to control, P = .005. Conclusion: A brief, targeted Child Life preparation session had a statistically significant effect on reducing preoperative anxiety prior to intravenous induction of anesthesia in young children, with no known preexisting anxiety. This effect may be clinically important and suggests that Child Life can be a valuable component of pediatric surgical care. Further research is required in specific populations. abstract_id: PUBMED:15562048 Preoperative anxiety and emergence delirium and postoperative maladaptive behaviors. Based on previous studies, we hypothesized that the clinical phenomena of preoperative anxiety, emergence delirium, and postoperative maladaptive behavioral changes were closely related. We examined this issue using data obtained by our laboratory over the past 6 years. Only children who underwent surgery and general anesthesia using sevoflurane/O(2)/N(2)O and who did not receive midazolam were recruited. Children's anxiety was assessed preoperatively with the modified Yale Preoperative Anxiety Scale (mYPAS), emergence delirium was assessed in the postanesthesia care unit, and behavioral changes were assessed with the Post Hospital Behavior Questionnaire (PHBQ) on postoperative days 1, 2, 3, 7, and 14. Regression analysis showed that the odds of having marked symptoms of emergence delirium increased by 10% for each increment of 10 points in the child's state anxiety score (mYPAS). The odds ratio of having new-onset postoperative maladaptive behavior changes was 1.43 for children with marked emergence status as compared with children with no symptoms of emergence delirium.
A 10-point increase in state anxiety scores led to a 12.5% increase in the odds that the child would have a new-onset maladaptive behavioral change after the surgery. This finding is highly significant to practicing clinicians, who can now predict the development of adverse postoperative phenomena, such as emergence delirium and postoperative behavioral changes, based on levels of preoperative anxiety. abstract_id: PUBMED:24103068 Development and validation of the Pediatric Anesthesia Behavior score--an objective measure of behavior during induction of anesthesia. Background: Measuring perioperative behavior changes requires validated objective rating scales. We developed a simple score for children's behavior during induction of anesthesia (Pediatric Anesthesia Behavior score) and assessed its reliability, concurrent validity, and predictive validity. Methods: Data were collected as part of a wider observational study of perioperative behavior changes in children undergoing general anesthesia for elective dental extractions. One-hundred and two healthy children aged 2-12 were recruited. Previously validated behavioral scales were used as follows: the modified Yale Preoperative Anxiety Scale (m-YPAS); the induction compliance checklist (ICC); the Pediatric Anesthesia Emergence Delirium scale (PAED); and the Post-Hospitalization Behavior Questionnaire (PHBQ). Pediatric Anesthesia Behavior (PAB) score was independently measured by two investigators, to allow assessment of interobserver reliability. Concurrent validity was assessed by examining the correlation between the PAB score, the m-YPAS, and the ICC. Predictive validity was assessed by examining the association between the PAB score, the PAED scale, and the PHBQ. Results: The PAB score correlated strongly with both the m-YPAS (P < 0.001) and the ICC (P < 0.001). PAB score was significantly associated with the PAED score (P = 0.031) and with the PHBQ (P = 0.034).
Two independent investigators recorded identical PAB scores for 94% of children and overall, there was close agreement between scores (Kappa coefficient of 0.886 [P < 0.001]). Conclusion: The PAB score is simple to use and may predict which children are at increased risk of developing postoperative behavioral disturbance. This study provides evidence for its reliability and validity. abstract_id: PUBMED:34309126 Behavioral changes after hospital discharge in preschool children experiencing emergence delirium after general anesthesia: A prospective observational study. Background: Emergence delirium is well known as early postoperative behavioral change after general anesthesia. However, it is unclear whether children with emergence delirium have negative behavioral changes after hospital discharge. Aim: This observational study investigated the association between emergence delirium and posthospital behavioral changes. Methods: One-hundred preschoolers aged 2-7 years undergoing elective surgery were enrolled in 2 tertiary university hospitals. Preoperative anxiety level was assessed using modified Yale preoperative anxiety scale. Emergence delirium was defined via pediatric anesthesia emergence delirium score ≥12 at any time in the recovery room. We divided the delirium score into a delirium-specific score (the sum of the first 3 items: eye contact, purposeful movement, and awareness of surrounding) and a pain-related score (the sum of the last 2 items: restlessness and inconsolability). High delirium scores represent severe emergence delirium. Posthospital behavioral changes were assessed by a change in Child Behavior Checklist 1.5-5 scores before and 1 week after surgery. The primary outcome was the total behavior checklist scores 1 week after surgery. Multiple linear regression was performed to identify risk factors for posthospital behavioral changes.
Results: Children with emergence delirium (n = 58) had higher postoperative behavior checklist scores than children without emergence delirium (n = 42) [mean (SD), 22.8 (17.5) vs. 14.0 (12.1); mean difference (95% CI), 8.8 (1.5-16.2)]. Increases in preoperative anxiety level [regression coefficient (b) (95% CI) =0.241 (0.126-0.356)] and peak delirium-specific score [b = 0.789 (0.137-1.442)] were associated with an increase in behavior checklist score 1 week after surgery, while pain-related score, type of surgery, premedication, and age were not. Conclusion: Children with emergence delirium developed more severe behavior changes 1 week after surgery than those without emergence delirium. High preoperative anxiety level and emergence delirium scores were associated with posthospital behavioral changes. abstract_id: PUBMED:22435270 Postoperative maladaptive behavioral changes in children. Induction of anesthesia can be a very stressful period for a child and his family and can be associated with increased risk of psychological disturbances. These disturbances are categorized as preoperative anxiety, emergence delirium and postoperative behavioral changes. Several tools have been developed to measure these psychological manifestations as well as the baseline personality traits of these patients. Postoperative negative behavioral changes, such as sleep and eating disorders, separation anxiety, temper tantrum, aggression toward authorities, may occur in up to 60% of all children undergoing general anesthesia. Several studies found a strong association between these postoperative behavioral changes, the distress of the child on induction and his individual personality characteristics, although a cause-effect relationship could not be determined. Understanding the risk factors for behavior changes helps us determine the best way for prevention and treatment of these changes in the perioperative period. 
abstract_id: PUBMED:19546692 Preoperative melatonin and its effects on induction and emergence in children undergoing anesthesia and surgery. Background: Studies conducted in adults undergoing surgery reported a beneficial effect of oral melatonin administered before surgery. There is a paucity of such data in children undergoing anesthesia and surgery. Methods: Children undergoing surgery were randomly assigned to receive preoperatively oral midazolam 0.5 mg/kg or oral melatonin 0.05 mg/kg, 0.2 mg/kg, or 0.4 mg/kg. The primary outcome of the study was preoperative anxiety (Yale Preoperative Anxiety Scale). The secondary outcomes were the children's compliance with induction (Induction Compliance Checklist), emergence behavior (Keegan scale), and parental anxiety (State-Trait Anxiety Inventory). Results: Repeated measures ANOVA showed that children who received melatonin at any of the three doses were more anxious compared with children who received midazolam (P < 0.001). Parental anxiety did not differ on the basis of the experimental condition (P = ns). The melatonin groups showed a dose-response effect on emergence behavior. Children who received melatonin developed less emergence delirium compared with those who received midazolam (P < 0.05), and the effect was dose related; the incidence after 0.05 mg/kg melatonin was 25.0%, incidence after 0.2 mg/kg melatonin was 8.3%, and incidence after 0.4 mg/kg melatonin was 5.4%. Conclusions: Midazolam is more effective than melatonin in reducing children's anxiety at induction of anesthesia. Melatonin showed a direct dose-dependent effect on emergence delirium. Answer: The Child Behavior Checklist (CBCL) has been studied to assess its predictive value for preoperative anxiety at anesthetic induction and postoperative emergence delirium (ED) in children.
According to the findings from a prospective cohort study, children with higher CBCL scores on preoperative internalizing problems, such as anxiety or depression, exhibited more preoperative anxiety at induction (PUBMED:24861721). However, the same study found that a higher CBCL score on preoperative emotional/behavioral problems was not associated with ED, indicating that while the CBCL can predict anxiety at induction, it does not predict ED (PUBMED:24861721). Another study examining internalizing behavior and state anxiety measures at hospital arrival as predictors of anxiety during induction of anesthesia found that the CBCL was not a significant predictor of anxiety during induction, whereas the modified Yale Preoperative Anxiety Scale (mYPAS) and the State Trait Anxiety Inventory for Children (STAIC) at hospital arrival were significant predictors (PUBMED:32306238). In summary, the CBCL may predict levels of preoperative anxiety at anesthetic induction but does not appear to predict postoperative emergence delirium. Other tools, such as the mYPAS and STAIC, may be more effective in predicting anxiety during induction of anesthesia.
Instruction: Is complex therapy of achalasia using botulinum toxin combined with balloon dilatation an effective approach? Abstracts: abstract_id: PUBMED:11153166 Is complex therapy of achalasia using botulinum toxin combined with balloon dilatation an effective approach? Background: Achalasia (ACHL) is not a very frequent disease and its etiology is still unclear. In addition to the oldest therapeutical approach represented by myotomy, two conservative methods are commonly used--balloon dilatation and application of botulotoxin. So far, both methods have been used only separately, and their effects have been compared. Literature provides no evidence of the renewal of oesophageal propulsive peristalsis in result of conservative treatment. Objectives: The aim is to use both approaches subsequently within a short period of time, in order to potentiate their effects and at the same time to reduce the risk of possible complications. Methods: The group was formed by 9 patients. Achalasia was diagnosed by flow manometry. Pseudoachalasia was excluded endoscopically and endosonographically. The treatment included application of 250 J of botulotoxin (Dysport) into the region of the lower oesophageal sphincter, and balloon dilatation which was applied 7 days later. Following this treatment, patients were observed for 24 hours. The clinical and manometric examinations were performed in 3-month intervals. Results: All patients felt significantly better after treatment. The clinical state in two patients required the performance of Heller's myotomy. After 1 year, evident clinical and manometric improvement was observed in 7 patients. The longest improvement so far lasted for 36 months. The treatment has renewed the propulsive oesophageal peristalsis in two women. 
Conclusion: Treatment of primary achalasia by combining injection of botulinum toxin (Dysport) with balloon dilatation is effective, and clinical remission can be expected to last longer than after either method used alone. Synchronous treatment of functional blocks in the cervical and thoracic spine and continuous rehabilitation may contribute to the favourable clinical effect. Renewal of primary oesophageal peristalsis was achieved in two of the 9 patients; no such change has previously been verified manometrically or described in the literature. (Fig. 5, Tab. 1, Ref. 39.) abstract_id: PUBMED:37875360 Endoscopic botulinum toxin injection combined with balloon dilatation for treatment of cricopharyngeal achalasia in patient with brainstem stroke. Objectives: In China there are many reports of treating cricopharyngeal achalasia by injecting botulinum toxin type A (BTX-A) into the cricopharyngeal muscle under ultrasound, electromyography or CT guidance, but no report of endoscopically guided BTX-A injection. This study aims to evaluate the efficacy of endoscopic BTX-A injection combined with balloon dilatation in the treatment of cricopharyngeal achalasia after brainstem stroke, and to provide a better method for treating dysphagia after brainstem stroke. Methods: From June to December 2022, 30 patients with cricopharyngeal achalasia due to brainstem stroke were selected from the Department of Rehabilitation Medicine, the First Hospital of Changsha, and randomly assigned to a control group and a combined group, 15 patients in each. Patients in both groups received routine rehabilitation therapy; patients in the control group were treated with balloon dilatation, and patients in the combined group with balloon dilatation plus BTX-A injection.
Before treatment and after 2 weeks of treatment, the patients were examined by video fluoroscopic swallowing study; the Penetration-Aspiration Scale (PAS), Dysphagia Outcome Severity Scale (DOSS), and Functional Oral Intake Scale (FOIS) were used to assess swallowing function. Results: In the combined group, 1 patient withdrew from treatment for personal reasons. Two weeks after treatment, the DOSS, PAS, and FOIS scores in both groups were better than before treatment (all P<0.01), and the combined group was better than the control group (all P<0.001). The effective rate was 85.7% in the combined group and 66.7% in the control group, with no significant difference between the 2 groups (P>0.05). Conclusions: BTX-A injection combined with balloon dilatation is more effective than balloon dilatation alone in improving swallowing function and is worthy of clinical application. abstract_id: PUBMED:35014232 Safety and Efficacy of Graded Gradual Pneumatic Balloon Dilation in Idiopathic Achalasia Patients: A 24-Year Single-Center Experience. Background: Pneumatic balloon dilation (PBD) is a first-line treatment for idiopathic achalasia. Here we report the safety and efficacy of graded gradual PBD on short- and long-term follow-up. Methods: We prospectively evaluated 1370 idiopathic achalasia patients over a period of 24 years (1994-2018). 216 patients did not undergo PBD due to comorbid diseases; ultimately, 1092 achalasia patients were enrolled. All patients underwent graded gradual PBD, with repeat dilation if symptoms relapsed. Response to treatment was evaluated by the Vantrappen scoring system. Results: Of the 1092 achalasia patients, 937 were treated by PBD and 155 by combined therapy (PBD 1 month after botulinum toxin injection).
In short-term follow-up, 728 of 1092 patients underwent one PBD, of whom 77.3% had an excellent or good response (responders); 163 patients (58.6%) who underwent two PBDs were responders, and 44 patients (51.2%) who underwent three PBDs were responders. Overall, 2193 balloon dilations were performed on 1092 patients (mean 2 PBDs/patient). Of 786 patients with long-term follow-up, 259 had an excellent or good response with one PBD; the responders with two, three, and four or more dilations numbered 149, 67, and 67, respectively. The overall response rate was 69%. No serious complications were noted with the graded gradual method. Conclusion: Our results show that graded gradual PBD is a safe and effective method for treating achalasia patients, achieving sufficient short- and long-term symptomatic remission with a high cumulative success rate. abstract_id: PUBMED:19732128 Combined treatment of achalasia - botulinum toxin injection followed by pneumatic dilatation: long-term results. Injection of botulinum toxin (BT) and pneumatic dilatation are available methods for the nonsurgical treatment of achalasia. The authors anticipated a beneficial effect of prior BT injection on the success of pneumatic dilatation and the duration of its effect; no long-term data were available to assess the efficacy of combined treatment. From 1998 to 2007, 51 consecutive patients (20 men and 31 women, age 24-83) with achalasia were included and prospectively followed up. Each patient received an injection of 200 IU of BT into the lower esophageal sphincter (LES) during endoscopy, and 8 days later pneumatic dilatation (PD) under X-ray control was performed. Follow-up took place every 3 months in the first year and then annually, and efficacy was evaluated by a questionnaire on the patient's symptoms and by manometry. Results were compared with 40 historical controls (16 men and 24 women, age 26-80) treated by PD alone using the same method and follow-up.
Fifty-one patients underwent combined treatment; four were lost to follow-up and were not included in the analysis. The mean duration of follow-up was 48 months (range 12-96 months). Thirty-four of forty-seven patients (72%) were satisfied with the results, with no or only very rare and mild complaints at the time of the last visit. Forty-one patients were followed up for more than 2 years, and the effect of therapy lasted in 75% (31/41) of them; of 17 patients followed more than 5 years after treatment, the effect lasted in 12 (70%). Mean LES tone was 29 mmHg (10-80) before therapy and decreased to 14 mmHg (5-26) 3 months after therapy. The cumulative 5-year remission rate (+/-95% CI) in combined-treatment patients (69% +/- 8%) was higher than in controls (50% +/- 9%); however, the difference was not statistically significant (P = 0.07). In the control group, 1 case of perforation (2.5%) occurred. Eight patients (17%) with relapse of dysphagia were referred for laparoscopic Heller myotomy, with no surgical complications. The main adverse effect was heartburn, which appeared in 17 patients (36%).
Primary outcome measures, including the Functional Oral Intake Scale (FOIS), videofluoroscopic dysphagia scale (VDS) and Penetration-Aspiration Scale (PAS), which are quantitative measures for a video fluoroscopic swallowing study (VFSS), and scores on the self-rating anxiety scale (SAS) and self-rating depression scale (SDS), were assessed from baseline to 12 weeks after treatment. Repeated-measures analysis of variance was used to compare the scores between time points. Results: BTX-A injection led to improved dysphagia symptoms and scores in 19 patients (90.48%): 5 cases were cured (23.81%), 11 showed significant improvement (52.38%), and 3 showed improvement (14.29%); in 2 cases (9.52%) treatment was ineffective. Compared with the scores prior to treatment, the FOIS, VDS, PAS, SAS and SDS scores significantly improved beginning at 3 days (p < .05) and lasting for at least 12 weeks after injection. Conclusions: Ultrasound- and balloon-guided BTX-A injection is probably a relatively safe, easy, and effective technique for the treatment of CPA caused by stroke, with better visualization of the injection procedure. A well-designed controlled trial with a larger sample size is needed for more convincing conclusions. abstract_id: PUBMED:31042239 Endoscopic dilatation in a patient with achalasia during pregnancy: case report and review of the literature. Achalasia in pregnancy is an infrequent, poorly understood condition whose treatment is not clearly defined. The repercussions on the patient's nutritional status are serious and, in a pregnant woman, have serious implications for the course of gestation, with high risk of intrauterine growth restriction, preterm delivery and even fetal loss; symptoms can be confused with hyperemesis gravidarum, delaying the diagnosis.
The therapeutic options are medical treatment and endoscopic and surgical interventions; deciding on the best treatment requires taking into account disease severity, gestational age and the patient's condition. Management options include calcium antagonists and nitrates, although these are restricted in pregnancy; other options are botulinum toxin, endoscopic pneumatic dilation, laparoscopic Heller myotomy and, more recently, POEM. In pregnancy there is little evidence in the literature: about 40 cases have been reported to date, some with complications such as fetal loss and maternal death. We present our experience at the San Ignacio University Hospital in Bogotá, Colombia, with a 26-year-old woman with a de novo diagnosis of type II achalasia during the first trimester of pregnancy and a clinical history of severe dysphagia associated with malnutrition. She was managed with enteral nutrition support via nasogastric tube to restore her body mass index (BMI), followed by endoscopic management with Rigiflex balloon dilation. This allowed the pregnancy to proceed successfully, without adverse effects on the mother or the fetus, with adequate evolution and oral tolerance without dysphagia. We consider nutritional support important before deciding on treatment in this type of patient, and endoscopic management with balloon dilation can be safe and effective for the management of achalasia in pregnancy.
Methods: A retrospective study of 301 patients with achalasia treated with PD as first-line therapy. Short-term outcome was measured at 12 months. Long-term outcome was assessed in those who were in remission at 12 months by cumulative relapse rate and by cross-sectional analysis of the long-term remission rate regardless of any interval therapy, using a validated achalasia-specific questionnaire. Results: Eighty-two percent of patients were in remission 12 months following initial PD. Relapse rates thereafter were 18% by 2 years, 41% by 5 years and 60% by 10 years. While 43% of patients underwent additional treatments [PD (29%), myotomy (11%) or botulinum toxin (3%)] beyond 12 months, 32% of those who had not received interval therapy had relapsed at cross-sectional analysis. After a mean follow-up of 9.3 years, regardless of the nature, timing or frequency of any interval therapy, 71% (79/111) of patients were in remission. The perforation rate from PD was 2%. Chest pain had a poor predictive value (24%) for perforation. Conclusions: Long-term relapse is common following pneumatic dilatation. While on-demand pneumatic dilatation for relapse yields a good response, one-third of relapsers neither seek medical attention nor receive interval therapy. Close follow-up with timely repeat dilatation is necessary for a good long-term outcome. Given the poor predictive value of chest pain for perforation, a routine gastrografin swallow is recommended postdilatation. abstract_id: PUBMED:11100311 Restoration of propulsive peristalsis of the esophagus in achalasia. A group of 3 patients with esophageal achalasia diagnosed by manometry, with pseudoachalasia excluded by esophagoscopy and endosonography, was treated with a combined conservative procedure. Botulinum toxin 250 U (Dysport) was applied to the area of the lower esophageal sphincter, and after 7 days balloon dilatation was carried out.
Treatment efficacy was evaluated from data on the patients' subjective condition and by manometry and endoscopy. The condition of the spine was evaluated in all patients before treatment, and functional blockades were released by manual medicine and also by acupuncture. We succeeded in restoring propulsive peristalsis of the esophagus in all of them; this has been objectively documented for as long as 44 months in a patient treated with balloon dilatation. abstract_id: PUBMED:8590521 Esophageal dilatation. Esophageal dilatation as a treatment option in patients with both benign and malignant esophageal strictures is described. Types of dilators available, techniques of passage, complications, redilation rates, and comparative studies between dilating systems are reviewed. The use of proton pump inhibitors to reduce the rates of redilation is discussed in view of the natural history of benign esophageal strictures. Comparisons among blind passage, fluoroscopic guidance and endoscopically directed dilatation are made and discussed in relation to cost effectiveness. Brief mention of new dilating systems, including wall stents and wall-tension-sensing systems, is made. Achalasia treatment with pneumatic dilatation is described in detail and compared to medication and surgical myotomy options. New treatments, including botulinum toxin injection into the lower esophageal sphincter, are briefly mentioned. abstract_id: PUBMED:15361093 Efficacy of botulinum toxin injection before pneumatic dilatation in patients with idiopathic achalasia. Graded pneumatic dilatation (PD) is an appropriate long-term therapy and botulinum toxin injection (BT) is a relatively short-term therapy in idiopathic achalasia. Their combination has not been previously scrutinized. This study aimed to evaluate the role of BT in enhancing the efficacy of PD with 30 mm balloons.
Patients who underwent PD with 30 mm balloons after botulinum toxin injections and a group of age- and sex-matched controls who were treated only with PD were enrolled in the study. Symptom scores were taken before, 1 month after and then every 3 months after PD. There were no significant differences between the two groups in gender, duration or severity of symptoms. One of the 12 patients in the case group relapsed 30 months after PD, but the others were in remission for an average of 25.6 months. In the control group, all the patients relapsed after a mean of 12.6 months and needed a 35-mm PD. The cumulative remission rate was significantly higher in the case group compared with the control group (P < 0.01). The mean symptom score decreased by 76% in the case group (P < 0.001) and 53% in the controls (P < 0.01) at the end of the first month. Neither age, sex, nor duration or severity of symptoms was predictive of patients' responses to treatment. It seems that BT may be a meaningful enhancing factor in the long-term efficacy of PD. PD with a 30 mm balloon after a BT session may resolve the need for future higher-grade PD. Answer: Yes, complex therapy of achalasia using botulinum toxin combined with balloon dilatation appears to be an effective approach. Studies have shown that this combined treatment can lead to significant improvements in patients with achalasia. For instance, one study reported that all patients felt significantly better after treatment, with evident clinical and manometric improvement observed in 7 out of 9 patients after 1 year, and the longest improvement lasting for 36 months. The treatment also renewed propulsive esophageal peristalsis in two women (PUBMED:11153166).
Another study found that botulinum toxin injection combined with balloon dilatation was more effective than balloon dilatation alone in improving swallowing function, with significantly greater improvement in swallowing scores and an effective rate of 85.7% in the combined group versus 66.7% in the control group (PUBMED:37875360). Furthermore, a 24-year single-center experience showed that graded gradual pneumatic balloon dilation (PBD), with or without prior botulinum toxin injection, is a safe and effective method for the treatment of achalasia patients, achieving sufficient short- and long-term symptomatic remission with a high cumulative success rate (PUBMED:35014232). Long-term data from another study indicated that initial injection of botulinum toxin followed by pneumatic dilatation appears effective with fewer complications, although the combined therapy was not significantly superior to PD alone (PUBMED:19732128). Additionally, botulinum toxin A injection using ultrasound combined with balloon guidance for the treatment of cricopharyngeal dysphagia showed improved dysphagia symptoms and scores in 19 out of 21 patients (90.48%) (PUBMED:35213271). Lastly, another study suggested that botulinum toxin may be a meaningful enhancing factor in the long-term efficacy of pneumatic dilatation, with PD after a botulinum toxin session potentially resolving the need for future higher-grade PD (PUBMED:15361093). In summary, the evidence suggests that the combination of botulinum toxin and balloon dilatation can be an effective treatment strategy for achalasia, leading to improved patient outcomes and potentially longer-lasting remission.
Instruction: Oral contraceptive tolerance: does the type of pill matter? Abstracts: abstract_id: PUBMED:29094544 Undertaking an oral contraceptive pill review. This article provides nurses with an understanding of the process for undertaking an annual oral contraceptive pill review effectively. This is commonly referred to as a 'pill check', and is the responsibility of many nurses working in primary care and sexual health services. Undertaking a pill check requires an understanding of the different types of oral contraceptive pills and associated risk factors. This article refers to recommendations from the Faculty of Sexual and Reproductive Healthcare, the lead body in the UK for standards and guidance relating to family planning and sexual health. It enables nurses to consider a variety of factors during a pill check, assisting women to make informed decisions about contraception in partnership with the healthcare practitioner. It advocates using a systematic approach to the pill check and emphasises the importance of collaborating with other healthcare colleagues, such as GPs or independent prescribers. abstract_id: PUBMED:26708301 Maintenance of ovulation inhibition with a new progestogen-only pill containing drospirenone after scheduled 24-h delays in pill intake. Objectives: Traditional progestogen-only pills (POPs) have stringent daily timing and missed pill rules that might affect contraceptive reliability. A new-generation oestrogen-free pill has been developed, containing 4-mg drospirenone with a unique regimen of 24 active treatment days followed by four placebo tablets. A previous study showed that this new drospirenone-only pill effectively inhibited ovulation. Clinical efficacy, however, can be affected by compliance, and delayed or forgotten pill intake often occurs in daily life. The aim of this study was to investigate if inhibition of ovulation was maintained after four scheduled 24-h delays in tablet intake. 
Study Design: One hundred thirty healthy women with proven ovulatory cycles were randomized, and 127 were treated with the drospirenone-only pill during two cycles. In treatment Group A (n=62), 24-h delays in tablet intake were scheduled on days 3, 6, 11 and 22 during Cycle 2 and, in treatment Group B (n=65) during Cycle 1, respectively. Ovulation was defined as disappearance or persistence of a large follicle and progesterone levels higher than 5 ng/mL for at least 5 consecutive days. Results: The overall ovulation rate was 0.8%; only one subject in Group A fulfilled the ovulation criteria in Cycle 2. Follicular diameters in the regular-intake and the delayed-intake cycles were similar. Conclusion: Despite the 4-day hormone-free period and multiple intentional 24-h delays in tablet intake, ovulation inhibition was maintained. This property distinguishes this new-generation oestrogen-free pill from traditional POPs by allowing the same "safety window" or flexibility in intake as combined oral contraceptives without compromising contraceptive reliability. Implications: Delayed or forgotten pill intake is very common. Ovulation inhibition by the new-generation oestrogen-free pill, containing 4-mg drospirenone for 24 days followed by a 4-day treatment-free period, was maintained despite four 24-h delays in tablet intake, so the impact of delayed intake on contraceptive reliability will be low. abstract_id: PUBMED:32551357 The French pill scare and the reshaping of social inequalities in access to medical contraceptives. While the consequences of various "pill scares" have been relatively well-documented in the public health literature revealing a drop in pill use and a rise in unplanned pregnancies and abortion rates, researchers rarely considered that these controversies would affect women contraceptive practices differently according to their social background. 
Indeed, social differentiation in reactions to "pill scares" could help reinforce the social gradient in the use of contraceptive methods and in the choice of which health professionals to visit for prescriptions, and could thereby increase health inequalities in access to contraceptive methods. Using data from three nationally representative cross-sectional surveys conducted in France in 2010, 2013 and 2016, we studied the changes in women's contraceptive use around the French "pill scare" of 2012-2013. We focused on changes in the use of all contraceptives available by medical prescription (called medical contraceptives) on the one hand, and of each specific method (pill, IUD, implant, patch or vaginal ring, and female sterilization) on the other, according to women's social background. We found a social gradient in contraceptive changes. The decline in the use of methods available by medical prescription was particularly marked among women from the lower and upper classes, in whom pill use decreased between 2010 and 2013, whereas the decrease was observed only between 2013 and 2016 among middle-class women. Moreover, while some upper-class women shifted from the pill to the IUD between 2010 and 2013, this was not the case for their less privileged counterparts. As a consequence, it seems that the French "pill scare" led to a reshaping of social inequalities in access to medical contraceptives.
Results: 73% of the women have ever skipped pill-free intervals and 38% do so regularly. Women of Dutch or Western origin and non-religious women skip intervals more frequently, but the differences are small. Discussion: Skipping pill-free intervals is common, even when it is not actively promoted. Implications: Women who use OCPs can and do skip pill-free intervals, either with or without consulting a physician. If bleeding preferences were structurally discussed with patients, the number of women who skip intervals might increase even further. abstract_id: PUBMED:34040887 Adherence to the oral contraceptive pill: the roles of health literacy and knowledge. Objective: The oral contraceptive pill is the most widely used method of contraception and, when adhered to perfectly, is 99% effective at preventing pregnancy. However, adherence to the pill is relatively low. Knowledge has been shown to be important in continuation of the pill, and previous research shows the importance of health literacy in adhering to medication in chronic illnesses, but its role has yet to be explored in this behavior. Methods: This cross-sectional study examined the associations between health literacy, knowledge of the pill and adherence, as well as the ability of these two variables and their interaction to predict adherence. Recruited through CloudResearch, 193 women (Mage = 32.63 years, SD = 5.98) residing in the United States completed the Health Literacy Skills Instrument - Short Form, a previously validated measure of oral contraceptive pill knowledge and the Medication Adherence Report Scale. Results: Results showed a strong positive correlation between health literacy and adherence (r = .76) and moderate associations between health literacy and knowledge (r = .42), and knowledge and adherence (r = .42).
The final model of the hierarchical multiple regression accounted for 59.8% of variance in adherence, with health literacy (β = .69) and length of time taking the pill (β = .13) the only significant predictors of adherence. Conclusion: Family planning clinics should consider assessing the patient's health literacy skills before prescribing the pill to ensure patients fully understand the requirements. abstract_id: PUBMED:25985996 To pill or not to pill in GnRH-antagonist cycles: the answer is in the data already! The planning of IVF treatment by scheduling menstruation and hence initiation of ovarian stimulation using sex-steroid pre-treatment is commonly used. Pooling data from six randomized-controlled trials encompassing 1343 patients, with and without combined oral contraceptive pill pre-treatment, suggests that the ongoing pregnancy rate per randomized woman is significantly lower in patients with oral contraceptive pill pre-treatment (relative risk [RR]: 0.80, 95% confidence interval [CI]: 0.66-0.97; rate difference [RD]: -5%, 95% CI: -10% to -1%; fixed effects model). This finding remains remarkably robust in multiple sensitivity analyses: exclusion of a study on poor responders, exclusion of the three smallest studies or exclusion of studies with a pill-free interval of less than 5 days, results in RR of 0.78 (95% CI: 0.64-0.94), 0.80 (95% CI: 0.65-0.98) and 0.79, (95% CI: 0.64-0.99), respectively. Furthermore, the finding of a significant reduction in ongoing pregnancy rate is not inconsistent with other evidence from the literature. The potential benefit of using oral contraceptive pill pre-treatment for cycle planning should therefore be balanced against its detrimental effect. Further randomized studies should test whether an effect similar to the one observed after combined oral contraceptive pill usage exists after other sex steroid pre-treatment regimens. 
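The fixed-effects pooling behind the relative risk of 0.80 reported above is typically a Mantel-Haenszel calculation over per-trial 2x2 tables of ongoing pregnancies. As an illustrative sketch only: the function below pools hypothetical trial counts; the numbers are invented for demonstration and are not the data from the six cited RCTs.

```python
# Illustrative Mantel-Haenszel fixed-effect pooling of relative risks.
# Trial counts are HYPOTHETICAL, chosen only to show the mechanics.

def mantel_haenszel_rr(trials):
    """Pool a relative risk across trials.

    trials: list of (events_tx, n_tx, events_ctrl, n_ctrl) tuples,
    one per trial (events = ongoing pregnancies per arm here).
    """
    num = den = 0.0
    for a, n1, c, n0 in trials:
        total = n1 + n0
        num += a * n0 / total  # treatment-arm events, weighted by control-arm size
        den += c * n1 / total  # control-arm events, weighted by treatment-arm size
    return num / den

# Hypothetical (events, n) pairs for pill pre-treatment vs. no pre-treatment:
trials = [(30, 100, 38, 100), (25, 90, 30, 88), (40, 150, 52, 149)]
print(round(mantel_haenszel_rr(trials), 2))  # prints 0.78 for these made-up counts
```

A pooled RR below 1 here would indicate a lower ongoing-pregnancy rate with pill pre-treatment, matching the direction of the finding summarized above.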
abstract_id: PUBMED:27343747 The birth control pill, thromboembolic disease, science and the media: a historical review of the relationship. The introduction of the birth control pill (the Pill) in 1960 revolutionized the options for contraception, sparking vibrant discussion in the scientific and social science literature and in the media. Much attention focused on issues of women's rights, including ethics and personal choice. But the Pill also introduced new questions about risk. Shortly after its introduction, the risk of thromboembolic disease was recognized [1]. After more than half a century, controversies about the relationship between the Pill and thromboembolic disease have persisted. The scientific and media communities have been active in the discussion, debate and delivery of information about this risk. Scientific and public attention to thromboembolism and the Pill has had dramatic consequences, both good and bad. The spotlight on risk has helped to change norms regarding the public's right to know and assess dangers; it has sparked Pill scares linked to increased unplanned pregnancy, birth and abortion rates; and it has led to a change in federally mandated policies regarding how new contraceptive products are studied and brought to market. This paper charts the narrative of the thromboembolic risk of the Pill from its introduction in 1960 until today and reviews the corresponding media response to this history. How does the story of the thromboembolic risk of the Pill - explored through the lens of science, media and contemporary social dynamics - frame contemporary understanding of risk for researchers, clinicians, individuals and the public? abstract_id: PUBMED:36130667 The effect of deliberate non-adherence to a norgestrel progestin-only pill: A randomized, crossover study. 
Objectives: To estimate the effects on cervical mucus, ovarian activity and theoretical contraceptive protection of a 6-hour delay and of missing one norgestrel 0.075 mg progestogen-only pill. Study Design: In a prospective, two-site, randomized, crossover study, healthy women aged 18 to 35 with BMI <32.0 kg/m² and regular ovulatory cycles completed a baseline 28-day cycle with correct daily pill use followed by two intervention cycles in which, around mid-cycle, one pill was taken 6 hours late or missed completely. We undertook ovarian ultrasonography, estradiol and progesterone measurement, and cervical mucus assessments every 3 to 4 days (daily around the time of the incorrect use) and based the theoretical contraceptive protection score on ovarian activity status, cervical mucus and their temporal relationship. Results: Of 91 potential participants screened, 52 started the study and 46 provided complete data for each intervention cycle. Fourteen participants (30%) ovulated in each of the two intervention cycles, with four during the delayed pill cycle and two during the missed pill cycle having an abnormal luteal phase. Seven participants in the delayed pill cycle, and six with a missed pill, had elevated cervical mucus scores temporally associated with the intervention. However, only two women, one in the delayed pill cycle and one in the missed pill cycle, had cervical mucus scores in the range considered favorable for fertility. Conclusions: Delayed or missed intake of a single norgestrel 0.075 mg progestogen-only pill appears to have little effect on theoretical contraceptive efficacy. Implications: This biomedical study suggests that taking a norgestrel 0.075 mg progestogen-only pill 6 hours late or missing one pill has little effect on ovarian activity or cervical mucus and may not jeopardize contraceptive efficacy. Correlation with typical use outcomes is necessary to confirm pregnancy risk with delayed or missed norgestrel intake.
abstract_id: PUBMED:29556261 A probe into effect of Shoutai pill on endometrial receptivity. The effect of Shoutai pill on endometrial receptivity was analyzed. A total of 160 patients with spontaneous abortion were enrolled in this study. They were treated in the Fifth People's Hospital of Jinan (Shandong, China) from February 2014 to June 2016. The following treatment regimen was performed for the patients: after menstruation, Shoutai pill was taken as continuous medication for more than 5 menstrual cycles; the pill was taken 20 days per month, medication was stopped during menstruation, and contraceptive measures were taken during this period. Endometrial thickness and morphology, hemodynamic parameters of the uterine spiral artery, including pulsatility index (PI) and resistance index (RI), and endometrial LIF levels were recorded before and after treatment; follicular development and ovulation were strictly monitored, and pregnancies were recorded. The endometrial thickness, type A endometrium rate, endometrial spiral arterial PI and RI levels, and endometrial LIF levels were significantly improved after treatment (P<0.05); at the same time, the pregnancy rate was 85%; during this treatment, patients had no significant side effects. In conclusion, improving endometrial receptivity with Shoutai pills achieves good results, significantly promoting embryo implantation and enhancing the pregnancy rate. abstract_id: PUBMED:33970869 Assessing the Pregnancy Protective Impact of Scheduled Nonadherence to a Novel Progestin-Only Pill: Protocol for a Prospective, Multicenter, Randomized, Crossover Study. Background: Progestin-only contraceptive pills (POP) are commonly reserved for women with medical comorbidities, but in actuality, POPs can be safely used by anyone wanting to prevent pregnancy.
This wide safety profile makes them an ideal candidate for being available over the counter without a prescription, but adherence issues may be more common with over-the-counter use. We need a better understanding of the ability of POPs to prevent pregnancy when adherence issues occur in the form of a missed or delayed pill. Objective: This study aims to determine cervical mucus characteristics following a 6-hour delayed pill intake or after one missed pill as compared to typical daily use of norgestrel 75 mcg. Methods: This prospective, multicenter, randomized, crossover study assesses the effect of norgestrel 75 mcg (Opill) on cervical mucus and ovarian activity during reported compliant daily use, after a 6-hour delayed intake mid cycle, and after a mid-cycle missed pill. Subject participation will last approximately 4.5 months. We will recruit at 2 US sites: Oregon Health & Science University, Portland, Oregon and University of California Davis Health, Sacramento, California. Reproductive-aged subjects with regular menstrual cycles (21-35 days), BMI <32 kg/m², and proven ovulation (screening luteal phase progesterone >3 ng/mL [>10 nmol/L]) are eligible to enroll. Participants cannot be at risk for pregnancy during the study period and must not use other hormonal methods. Norgestrel 75 mcg will be taken at the same time daily except for one day in each of treatment periods 2 and 3, when the pill will be taken either 6 hours late (delayed pill) or omitted completely (missed pill). Every 3-4 days, we will monitor subjects for follicular activity with transvaginal ultrasound (TVUS) examination, cervical mucus, and blood sampling for ovarian hormones and gonadotropins.
Subjects will undergo serial cervical mucus sampling on the days with missed and delayed pill intake at 8 hours after pill intake on the day before the delayed or missed pill, 3 hours following the scheduled time of pill intake if intake was delayed, 6 hours after the scheduled time if intake was omitted, and on the next day 30 minutes before the time of scheduled pill intake. The primary objective of the study is to determine the effect of a delayed or omitted pill intake on cervical mucus characteristics based on a modified Insler score compared to reported daily use. Results: Our protocol was successfully approved by a central institutional review board (Advarra, Columbia, MD), received ethical approval on March 23, 2018, and was registered with ClinicalTrials.gov (NCT03585712). As of January 2020, the study completed enrollment of 52 subjects. Analyses are pending. Conclusions: Our protocol was approved by a central review board, and study procedures were successfully executed with completed proposed enrollment. Trial Registration: ClinicalTrials.gov NCT03585712; https://clinicaltrials.gov/ct2/show/NCT03585712. International Registered Report Identifier (irrid): DERR1-10.2196/29208. Answer: Yes, the type of oral contraceptive pill can affect tolerance, as different pills have different formulations and regimens that can influence individual responses and side effects. Traditional progestogen-only pills (POPs) have stringent daily timing and missed pill rules that might affect contraceptive reliability, but a new-generation oestrogen-free pill containing 4-mg drospirenone has been developed that allows more flexibility in intake without compromising contraceptive reliability (PUBMED:26708301). This suggests that the type of pill and its specific regimen can impact tolerance in terms of ease of use and adherence. Moreover, the choice of pill can be influenced by individual health risks and preferences. 
For example, the combined oral contraceptive pill has been associated with an increased risk of thromboembolic disease, which has been a subject of scientific and media attention, affecting public perception and tolerance of the pill (PUBMED:27343747). Additionally, some women may prefer to skip the pill-free interval, which is common practice even when not actively promoted, indicating a preference for fewer bleeding episodes and potentially better tolerance for continuous-use regimens (PUBMED:24125124). Health literacy and knowledge also play significant roles in adherence to the oral contraceptive pill, with higher health literacy associated with better adherence (PUBMED:34040887). This implies that understanding the pill's requirements and how to manage its intake can affect tolerance and satisfaction with the contraceptive method. Furthermore, social factors such as the "pill scare" can reshape social inequalities in access to medical contraceptives, affecting tolerance and adherence differently across various social backgrounds (PUBMED:32551357). In summary, the type of oral contraceptive pill, its regimen, associated health risks, individual preferences, health literacy, and social factors can all influence tolerance to oral contraceptives.
Instruction: Provocation of atrial fibrillation triggers during ablation: does the use of general anesthesia affect inducibility? Abstracts: abstract_id: PUBMED:25110156 Provocation of atrial fibrillation triggers during ablation: does the use of general anesthesia affect inducibility? Introduction: The autonomic nervous system exerts important effects upon atrial fibrillation (AF) initiation. The strategy of anesthesia used during AF ablation may impact the provocation of AF triggers. We hypothesized that the use of general anesthesia (GA) would reduce the incidence of provokable AF triggers in patients undergoing AF ablation compared to patients studied while receiving only conscious sedation (CS). Methods And Results: We performed a prospective, case-control study comparing the incidence of provokable AF triggers in a consecutive series of patients undergoing AF ablation under GA using a standard trigger induction protocol. We compared the frequency and distribution of AF triggers to a second cohort of historical controls (matched for age, gender, left atrial dimension, and AF phenotype) who underwent ablation while receiving CS. We calculated that 44 total subjects (22 patients in each group) were required to detect a 50% reduction in the incidence of AF triggers in the GA cohort. There was no difference between the 2 groups in the rate of AF trigger inducibility (77% vs. 68%, P = 0.26) or the number of triggers provoked per patient (1.2 ± 0.8 vs. 1.3 ± 0.8, P = 0.38). Patients ablated under GA required higher doses of phenylephrine during the trigger induction protocol (408.3 μg [52-600] vs. 158.3 μg [0-75]; P = 0.003), and tended to require higher doses of isoproterenol to initiate triggers (92.8 μg [20-111] vs. 63.6 μg [6-103]; P = 0.25). Conclusion: AF trigger induction during GA is both safe and efficacious. abstract_id: PUBMED:30829344 Atrial fibrillation inducibility after pulmonary vein isolation under general anaesthesia.
Aims: Atrial fibrillation (AF) inducibility with rapid atrial pacing following AF ablation is associated with higher risk of AF recurrence. The predictive value of AF inducibility in paroxysmal AF patients after pulmonary vein isolation (PVI), done under general anaesthesia (GA), remains questionable since GA might alter AF inducibility and/or sustainability. Methods: Consecutive patients (n = 120) with paroxysmal AF without prior catheter ablation (CA) were enrolled in the study. All patients were ablated under GA. We used point-by-point CA and elimination of dormant conduction after adenosine in all patients. A predefined stimulation protocol was used to induce arrhythmias after PVI. Regular supraventricular tachycardias were mapped and ablated. Patients were divided into 3 subgroups: noninducible, inducible AF with spontaneous termination within five minutes, and inducible AF without spontaneous termination. During 12 months of follow-up, all patients were examined four times with 7-day ECG recordings. Results: There was no statistical difference between the three subgroups in the rate of arrhythmia recurrence (11.1 vs. 27.5 vs. 27.3%, P=0.387), despite a clear trend to a better success rate in the non-inducible group. The subgroups did not differ in left atrial (LA) diameter (41.0±6, 43.0±7, 42.0±5 mm, P=0.962) or in any other baseline parameter. Conclusion: AF inducibility, as well as the presence or absence of its early spontaneous termination after PVI done under general anaesthesia in paroxysmal AF patients, was not useful as a predictor of procedural failure. abstract_id: PUBMED:30455834 Prognostic Value of Atrial Fibrillation Inducibility in Patients Without History of Clinical Atrial Fibrillation. Purpose: During invasive electrophysiological studies (EPS), atrial fibrillation (AF) can be induced in patients without a history of AF. However, the prognostic value is not well evaluated in this population.
Our aim was to investigate whether AF inducibility in those patients is associated with future clinical episodes of AF; whether non-inducibility is predictive of freedom from new-onset AF; and finally, to examine clinical factors associated with inducibility. Methods: Medical records from patients undergoing EPS between the years 2011 and 2014 were analysed retrospectively, with 62 patients matching our inclusion criteria. Patients were divided into subgroups according to their inducibility status and underwent follow-up. Patients were assessed by a structured telephone interview, data from the further treating physicians and ECG recordings. Results: AF was inducible in 19 patients ("induction group") and not inducible in the remaining 43 ("control group"). Inducibility was associated with a higher age (p=0.002), lower GFR (p=0.002), higher CHA2DS2-VASc score (p=0.004) and diagnosis of mitral (p=0.014), tricuspid (p=0.017) and pulmonary (p=0.026) valve insufficiency. Three months after EPS, 89.5% of all inducible patients were free of diagnosed AF, in contrast to 100% of those without inducibility (p=0.031). At three years, no significant difference was left (p=0.162). Conclusion: AF inducibility was found more often in an older population with cardiac comorbidities. While inducibility was associated with an increased rate of diagnosed new-onset clinical AF in the months after testing, non-inducibility seemed to be associated with freedom from AF at least in the short to medium term. However, there was no significant difference in the long-term follow-up. abstract_id: PUBMED:29056544 Lack of prognostic value of atrial arrhythmia inducibility and change in inducibility status after catheter ablation of atrial fibrillation. Background: Previous studies have suggested a role of atrial arrhythmia inducibility as an endpoint of catheter ablation of atrial fibrillation (AF).
The prognostic value of noninducibility after ablation and of a change in inducibility status has not been investigated in large studies. Objective: The purpose of this study was to evaluate the prognostic role of noninducibility and of a change in inducibility status after ablation of AF. Methods: We studied 305 consecutive patients with AF (66% paroxysmal) undergoing antral pulmonary vein (PV) isolation plus non-PV trigger ablation. All patients underwent a standardized induction protocol before and after ablation from the coronary sinus and right atrium: 15-beat burst pacing at 250 ms and decrementing to 180 ms (up to 20 μg/min isoproterenol). Inducibility was defined as any sustained AF or organized atrial tachycardia (AT) lasting >2 minutes. Results: A total of 197 patients (65%) had inducible AF/AT at baseline compared to 118 (39%) after ablation. One hundred seven patients (57%) changed their inducibility status from inducible preablation to noninducible postablation. After 19 ± 7 months of follow-up, 212 patients (70%) remained free from any recurrent AF/AT. Noninducibility of AF/AT postablation (log-rank P = .236) or change in inducibility status (log-rank P = .429) was not associated with reduced risk of recurrent AF/AT. Results were consistent across the paroxysmal and nonparoxysmal subgroups. Conclusion: Noninducibility of atrial arrhythmia or change in inducibility status after PV isolation and non-PV trigger ablation is not associated with long-term freedom from recurrent arrhythmia and should not be used as an ablation endpoint or to support the appropriateness of additional ablation lesion sets. abstract_id: PUBMED:34884237 AF Inducibility Is Related to Conduction Abnormalities at Bachmann's Bundle. We investigated whether patterns of activation at Bachmann's bundle are related to AF inducibility. Epicardial mapping of Bachmann's bundle during sinus rhythm was performed prior to cardiac surgery (192 electrodes, interelectrode distances: 2 mm).
Compared to non-inducible patients (N = 20), patients with inducible AF (N = 34) had longer lines of conduction block (18 (2-164) mm vs. 6 (2-28) mm, p = 0.048), prolonged total activation time (55 (28-143) ms vs. 46 (24-73) ms, p = 0.012), multiple wavefronts entering Bachmann's bundle more frequently (64% vs. 37%, p = 0.046) and more often areas of simultaneous activation (conduction velocity > 1.7 m/s, 45% vs. 16%, p = 0.038). These observations further support a relation between conduction abnormalities at Bachmann's bundle and AF inducibility. The next step is to examine whether Bachmann's bundle activation patterns can also be used to identify patients who will develop AF after cardiac surgery during both short- and long-term follow-up. abstract_id: PUBMED:28259694 Techniques for the provocation, localization, and ablation of non-pulmonary vein triggers for atrial fibrillation. The end point of current catheter-based ablation approaches for the treatment of atrial fibrillation (AF) is the elimination of all the possible triggers with the least amount of ablation necessary. Once all the triggers have been eliminated, the incremental value of any additional lesion sets remains to be proven. Pulmonary vein (PV) isolation is the cornerstone of catheter ablation approaches for eliminating AF triggers. However, up to 11% of patients demonstrate reproducible sustained AF initiation from non-PV foci. In these patients, triggers can typically be elicited using standardized induction protocols, which include cardioversion of spontaneous and/or induced AF and infusion of high-dose isoproterenol. Non-PV triggers typically arise from discrete anatomical structures that include the mitral and tricuspid periannular regions, the crista terminalis and Eustachian ridge, the interatrial septum, the left atrial (LA) posterior wall, the left atrial appendage (LAA), and other thoracic veins such as the superior vena cava, the coronary sinus, and the ligament of Marshall.
Localization of non-PV foci involves a detailed analysis of specific intra-atrial activation sequences using multipolar catheters in standard atrial locations coupled with information from the surface electrocardiogram P wave when possible. Multipolar catheters positioned along the coronary sinus and crista terminalis/superior vena cava region together with direct recordings from the right and left PVs allow a quick localization of non-PV foci. Elimination of non-PV triggers by means of focal ablation at the site of origin or isolation of arrhythmogenic structures (eg, LA posterior wall or superior vena cava) has been associated with improved arrhythmia-free survival. abstract_id: PUBMED:31671042 Comparison of Monitored Anesthesia Care and General Anesthesia for Transcatheter Aortic Valve Replacement. Objective: Transcatheter aortic valve replacement is a safe, minimally invasive treatment for severe aortic stenosis in patients with moderate-to-high surgical risk. Monitored anesthesia is administered by an anesthesiologist. This study compares transcatheter aortic valve outcomes under monitored anesthesia vs general anesthesia. Methods: Data were prospectively collected for 286 patients undergoing transcatheter aortic valve replacement at a single academic hospital from March 2012 to August 2016. The patients were grouped by type of anesthesia: monitored vs general. A propensity score match was performed to compare intraoperative and post-operative outcomes between groups. Results: General anesthesia was used in 102 patients and moderate sedation in 184. Propensity score matching produced 80 pairs. 
Compared to procedures under general anesthesia, patients receiving monitored anesthesia had shorter procedure (1.6 [1.4, 2.0] vs 2.0 [1.6, 2.5] hours; P < 0.001) and fluoroscopy times (17 [14.5, 22.5] vs 25 [17.9, 30.3] minutes; P < 0.001) and shorter hospital length-of-stay (3 [2.0, 4.0] vs 5 [3.0, 7.0] days; P < 0.001) but no difference in intensive care unit length-of-stay. Blood transfusion was more common in patients undergoing general anesthesia, but there was no difference in stroke, renal failure, postoperative atrial fibrillation, or need for permanent pacemaker. More patients were discharged to home after monitored anesthesia (90% vs 64%; P < 0.001). There was no difference in 30-day mortality (0% vs 3%; P = 0.15). Conclusions: Transcatheter aortic valve replacement under monitored anesthesia provides the safety of anesthesia-led sedation without intubation and general anesthetic. We found no compromise in patient safety or clinical outcomes. abstract_id: PUBMED:34805314 Extra-Pulmonary Vein Triggers at de novo and the Repeat Atrial Fibrillation Catheter Ablation. Background: Extra-pulmonary vein triggers can play a significant role in atrial fibrillation recurrence after catheter ablation. We explored the characteristics of the extra-pulmonary vein (PV) triggers in de novo and repeat atrial fibrillation (AF) catheter ablation (AFCA). Methods: We included 2,118 patients who underwent a de novo AFCA (women 27.6%, 59.2 ± 10.9 years old, paroxysmal AF 65.9%), and 227 of them underwent repeat procedures. All included patients underwent isoproterenol provocation tests at the end of the procedure, and then we analyzed extra-PV trigger-related factors. Results: Extra-PV triggers were documented in 11.7% of patients undergoing de novo AFCA (1.22 ± 0.46 foci per patient) and 28.6% undergoing repeat AFCA (1.49 ± 0.73 foci per patient).
Older age and a higher LA volume index in de novo procedures, and female sex, diabetes, and higher parasympathetic nerve activity (heart rate variability) in repeat AFCA, were independently associated with the existence of extra-PV triggers. The septum (19.9%), coronary sinus (14.7%), and superior vena cava (11.2%) were common extra-PV foci. Among 46 patients who were newly found to have mappable extra-PV triggers upon repeat procedures, 15 (32.6%) matched with the previous focal or empirical extra-PV ablation sites. The rate of AF recurrence was significantly higher in patients with extra-PV triggers than in those without after de novo (HR 1.91, 95% CI 1.54-2.38, p < 0.001) and repeat procedures (HR 2.68, 95% CI 1.63-4.42, p < 0.001). Conclusions: Extra-PV triggers were commonly found in AF patients with significant remodeling and previous empirical extra-PV ablation. The existence of extra-PV triggers was independently associated with poorer rhythm outcomes after the de novo and repeat AFCA. abstract_id: PUBMED:38296010 Predictors of nonpulmonary vein triggers for atrial fibrillation: A clinical risk score. Background: Targeting non-pulmonary vein triggers (NPVTs) after pulmonary vein isolation may reduce atrial fibrillation (AF) recurrence. Isoproterenol infusion and cardioversion of spontaneous or induced AF can provoke NPVTs but typically require vasopressor support and increased procedural time. Objective: The purpose of this study was to identify risk factors for the presence of NPVTs and create a risk score to identify higher-risk subgroups. Methods: Using the AF ablation registry at the Hospital of the University of Pennsylvania, we included consecutive patients who underwent AF ablation between January 2021 and December 2022. We excluded patients who did not receive NPVT provocation testing after failing to demonstrate spontaneous NPVTs. NPVTs were defined as non-pulmonary vein ectopic beats triggering AF or focal atrial tachycardia.
We used risk factors associated with NPVTs with P < .1 in a multivariable logistic regression model to create a risk score in a randomly split derivation set (80%) and tested its predictive accuracy in the validation set (20%). Results: In 1530 AF ablations included, NPVTs were observed in 235 (15.4%). In the derivation set, female sex (odds ratio [OR] 1.40; 95% confidence interval [CI] 0.96-2.03; P = .080), sinus node dysfunction (OR 1.67; 95% CI 0.98-2.87; P = .060), previous AF ablation (OR 2.50; 95% CI 1.70-3.65; P < .001), and left atrial scar (OR 2.90; 95% CI 1.94-4.36; P < .001) were risk factors associated with NPVTs. The risk score created from these risk factors (PRE2SSS2 score; [PRE]vious ablation: 2 points, female [S]ex: 1 point, [S]inus node dysfunction: 1 point, left atrial [S]car: 2 points) had good predictive accuracy in the validation cohort (area under the receiver operating characteristic curve 0.728; 95% CI 0.648-0.807). Conclusion: A risk score incorporating predictors for NPVTs may allow provocation of triggers to be performed in patients with greatest expected yield. abstract_id: PUBMED:33686680 Inducibility of atrial fibrillation after catheter ablation predicts recurrences of atrial fibrillation: a meta-analysis. Background: Pulmonary vein isolation (PVI) is a component of standard care for patients with symptomatic atrial fibrillation (AF). Procedural inducibility of AF following PVI has been suggested as a predictor of AF recurrence but is discussed controversially. This meta-analysis aimed at evaluating the relevance of electrophysiological inducibility of AF following PVI for future AF recurrences. Methods: A literature search of MEDLINE and Web of Science was performed until April 2020. Prospective trials of PVI in patients with AF and post-procedural atrial stimulation to test for inducibility of AF as well as adequate follow-up for AF recurrence (defined as AF >10 s to >10 min at follow-up) were included.
Odds ratios (ORs) were analyzed using random-effects models. Results: A total of 11 trials with 1544 patients (follow-up 7-39 months, age 56 ± 6 years, predominantly male 74 ± 6%) were included. Inducibility of AF post-PVI was predictive for AF recurrence during follow-up (OR 2.08; 95% CI 1.25 to 3.46). Prediction for AF recurrence at follow-up was better for patients with paroxysmal AF (OR 4.06; 95% CI 1.39 to 11.91) and for stimulation in the CS (OR 2.82, 95% CI 1.17 to 6.79). A trend towards higher ORs was seen without the use of isoproterenol (OR 2.43; 95% CI 1.17 to 5.07), as well as with few stimulations during induction and a short definition of AF in meta-regression analyses. Conclusions: Electrophysiological inducibility of AF following PVI was predictive for future recurrence of AF, in particular in patients with paroxysmal AF, stimulation in only CS and no use of isoproterenol. Answer: The use of general anesthesia (GA) during atrial fibrillation (AF) ablation does not appear to significantly affect the inducibility of AF triggers. A prospective case-control study comparing the incidence of provokable AF triggers in patients undergoing AF ablation under GA to those receiving conscious sedation (CS) found no difference in the rate of AF trigger inducibility or the number of triggers provoked per patient (PUBMED:25110156). Similarly, another study found that AF inducibility after pulmonary vein isolation (PVI) under GA in paroxysmal AF patients was not a useful predictor of procedural failure, suggesting that GA might not alter AF inducibility and/or sustainability (PUBMED:30829344). Furthermore, a study on the prognostic value of AF inducibility in patients without a history of clinical AF showed that while inducibility was associated with an increased rate of diagnosed new-onset clinical AF in the months after testing, non-inducibility seemed to be associated with freedom from AF at least in the short to medium term.
However, there was no significant difference in the long term follow-up (PUBMED:30455834). Another study also concluded that noninducibility of atrial arrhythmia or change in inducibility status after PV isolation and non-PV trigger ablation is not associated with long-term freedom from recurrent arrhythmia and should not be used as an ablation endpoint (PUBMED:29056544). In summary, the available evidence suggests that the strategy of anesthesia used during AF ablation, whether GA or CS, does not have a significant impact on the provocation of AF triggers or the long-term outcomes of AF ablation procedures.
Instruction: Can the durability of one-step self-etch adhesives be improved by double application or by an extra layer of hydrophobic resin? Abstracts: abstract_id: PUBMED:21116389 Marginal permeability of one step self-etch adhesives: Effects of double application or the application of hydrophobic layer. Aim: The purpose of this in vitro investigation was to evaluate the influence of double application and application of a hydrophobic layer on marginal adaptation of four self-etch adhesive systems (XENOIII, ALLBONDSE, CLEARFIL SE TRI BOND, FUTURA BOND). Materials And Methods: One hundred and twenty class V cavities were prepared on intact, extracted human premolars and were divided into three groups of ten teeth each for all four adhesives. Group 1: Application of bonding agents as per manufacturer directions. Group 2: Double application of bonding agents. Group 3: Application of hydrophobic layer. The specimens were restored with composite and light cured. After thermocycling and immersion in 2% basic fuchsin dye solution, the teeth were sectioned and dye penetration was observed under a stereomicroscope at 20× magnification. All the samples were scored and results were analyzed using Kruskal-Wallis and Mann-Whitney tests. Results: Group 3, in which the adhesive systems were coated with a hydrophobic layer, showed significantly decreased microleakage, followed by Group 1 and Group 2 for all the adhesive systems. There was no significant difference between Group 1 and Group 2. Conclusion: Marginal permeability of one-step adhesives can be minimized by the application of a more hydrophobic resin layer, and the double application of a one-step self-etch system can be safely performed without jeopardizing the performance of adhesives. abstract_id: PUBMED:37308054 The effect of an extra hydrophobic resin layer on the bond strength and durability of one-step universal adhesives used as a primer.
Objectives: This study evaluated the immediate and 6-month resin-dentin microtensile bond strength (μTBS) of one-step self-etch systems (Adper Prompt L-Pop [AD] 3M ESPE; Xeno III [XE] Dentsply De Trey; iBond [iB] Heraeus Kulzer) under different application modes. Materials And Methods: Dentin occlusal surfaces were exposed by grinding with 600-grit SiC paper. The adhesives were applied according to the manufacturer's directions [MD], or with double application of the adhesive layer [DA], or following the manufacturer's directions plus a hydrophobic resin layer coating [HL]. After applying the adhesive resins, composite crowns were built up incrementally. After 24-h water storage, the specimens were serially sectioned in "x" and "y" directions to obtain bonded sticks of about 0.8 mm² to be tested immediately [IM] or after 6 months of water storage [6M] at a crosshead speed of 0.5 mm/min. The data from each adhesive were analyzed by a two-way repeated measures ANOVA (mode of application vs. storage time) and Tukey's test (alpha=0.05). Results: The adhesives performed differently according to the application mode. The DA and HL either improved the immediate performance of the adhesive or did not differ from the MD. The resin-dentin bond strength values observed after 6 months were higher when a hydrophobic resin coat was used compared to those observed under the manufacturer's directions. Conclusions: The double application of a one-step self-etch system can be safely performed; however, the application of an additional hydrophobic resin layer can improve the immediate resin-dentin bonds and reduce the degradation of resin bonds over time.
Objectives: To determine the effect of an extra hydrophobic resin layer (EHL) on the bond strength and durability of three different pH one-step universal adhesives (UAs) utilized in self-etch (SE) mode, and to explore whether UAs can be used as a primer in a two-step bonding system. Methods: Three different pH UAs were utilized: G-Premio Bond (GPB), Scotchbond Universal (SBU), and All-Bond Universal (ABU), while the bonding agent of Clearfil SE Bond 2 (SE2) was selected as the EHL. For the EHL groups, the EHL was applied after the air blow of each UA and before light curing. The microtensile bond strength (μTBS), fracture modes, interfacial structures and nanoleakage (NL) were evaluated after 24 h water storage and after 15,000 thermal cycles (TC). Elastic modulus (EM) and hardness (H) were tested by a nanoindenter after 24 h. Results: Significantly higher μTBS was achieved in the GPB+EHL group compared with GPB both at 24 h and after 15,000 TC, while the additional use of EHL did not improve the μTBS of SBU and ABU significantly either at 24 h or after 15,000 TC. GPB+EHL demonstrated lower NL than GPB. The mean EM and H of the adhesive layer in GPB+EHL were significantly decreased compared with GPB. Conclusions: The results indicated that the bond strength and durability of the low pH one-step UA (GPB) were significantly improved by additional application of the EHL both at 24 h and after 15,000 TC, while no significant improvement was observed for the ultra-mild one-step UAs (SBU and ABU). Clinical Significance: This study indicates that GPB can be used as a primer in a two-step bonding system, while SBU and ABU may not be as effective. These findings may help guide clinicians in selecting the appropriate UAs and bonding techniques for different clinical scenarios.
The aim of this study was to compare universal adhesives and two-step self-etch adhesives in terms of dentin bond fatigue durability in self-etch mode. Three universal adhesives - Clearfil Universal, G-Premio Bond, and Scotchbond Universal Adhesive - and three two-step self-etch adhesives - Clearfil SE Bond, Clearfil SE Bond 2, and OptiBond XTR - were used. The initial shear bond strength and shear fatigue strength of resin composite bonded to adhesive on dentin in self-etch mode were determined. Scanning electron microscopy observations of fracture surfaces after bond strength tests were also made. The initial shear bond strength of universal adhesives was material dependent, unlike that of two-step self-etch adhesives. The shear fatigue strength of Scotchbond Universal Adhesive was not significantly different from that of two-step self-etch adhesives, unlike the other universal adhesives. The shear fatigue strength of universal adhesives differed depending on the type of adhesive, unlike those of two-step self-etch adhesives. The results of this study encourage the continued use of two-step self-etch adhesives over some universal adhesives but suggest that changes to the composition of universal adhesives may lead to a dentin bond fatigue durability similar to that of two-step self-etch adhesives. abstract_id: PUBMED:29233540 Smear layer-deproteinizing improves bonding of one-step self-etch adhesives to dentin. Objectives: Smear layer deproteinizing has been shown to reduce the organic phase of the smear layer covering the dentin surface. It was also shown to eliminate the hybridized smear layer and nanoleakage expression in the resin-dentin bonding interface of a two-step self-etch adhesive. This study aimed to investigate those effects on various one-step self-etch adhesives. Methods: Four different one-step self-etch adhesives were used in this study: SE One (SE), Scotchbond™ Universal (SU), BeautiBond Multi (BB), and Bond Force (BF).
Flat human dentin surfaces with a standardized smear layer were prepared. Smear layer deproteinizing was carried out by the application of 50 ppm hypochlorous acid (HOCl) on the dentin surface for 15 s followed by Accel® (p-toluenesulfinic acid salt) for 5 s prior to adhesive application. No surface pretreatment was used as control. Microtensile bond strength (μTBS) and nanoleakage under TEM observation were investigated. The data were analyzed by two-way ANOVA with Tukey's post-hoc test and t-test at a significance level of 0.05. Results: Smear layer deproteinizing significantly improved the μTBS of SE, SU, and BB (p < 0.001). The hybridized smear layer observed in the control groups of SE, BB, and BF, and the reticular nanoleakage present throughout the hybridized complex in the control groups of BB and BF, were eliminated upon smear layer deproteinizing. Significance: Smear layer deproteinizing by HOCl and Accel® application could enhance the quality of dentin for bonding with one-step self-etch adhesives, improving μTBS, eliminating the hybridized smear layer, and preventing reticular nanoleakage formation at the resin-dentin bonding interface. abstract_id: PUBMED:22025829 In vitro bonding effectiveness of self-etch adhesives with different application techniques: A microleakage and scanning electron microscopic study. Aim: To evaluate and compare the microleakage of self-etch adhesives placed under different clinical techniques and to analyze the resin-dentin interfacial ultrastructure under a scanning electron microscope (SEM). Materials And Methods: 100 extracted human premolars were divided into two groups for different adhesives (Clearfil S(3) and Xeno III). Class V cavities were prepared. Each group was further divided into four subgroups (n = 10) according to the placement technique of the adhesive, i.e.
according to the manufacturer's directions (Group 1), with phosphoric acid etching of enamel margins (Group 2), with hydrophobic resin coat application (Group 3), or with the techniques of both groups 2 and 3 (Group 4). The cavities were restored with composite. Ten samples from each group were subjected to the microleakage study. Five samples each of both adhesives from groups 1 and 3 were used for SEM examination of the micromorphology of the resin-dentin interface. Results: At enamel margins, for both adhesives tested, groups 2 and 4 showed significantly less leakage than groups 1 and 3. At dentin margins, groups 3 and 4 depicted significantly reduced leakage compared with groups 1 and 2 for Xeno III. SEM observation of the resin-dentin interfaces revealed generalized gap and poor resin tag formation in both adhesives. Xeno III showed better interfacial adaptation when an additional hydrophobic resin coat was applied. Conclusions: In enamel, prior phosphoric acid etching reduces microleakage of self-etch adhesives, while in dentin, hydrophobic resin coating over one-step self-etch adhesives decreases the microleakage. abstract_id: PUBMED:25864526 Influence of double application technique on the bonding effectiveness of self-etch adhesive systems. Aim: To evaluate and compare the effect of double application of single-step self-etch adhesives using a microleakage study and to analyze the dentin-adhesive interfacial micromorphology. Methods: In total, 72 extracted human premolars were divided into three groups for different self-etch adhesives (G Bond, GC [GB], Optibond, Kerr [OB], and Xeno V Plus, Dentsply [XV]). Class V cavities were prepared. Each group was further divided into two subgroups (n = 10) according to the placement technique of the adhesive, using the single-application [subgroup (a)] or double-application method [subgroup (b)]. Resin composite (Z 250, 3M ESPE, St. Paul, MN) was used to restore the cavities and light cured for 40 s.
Twenty samples from each group were subjected to the microleakage study. Two samples from both subgroups of the three adhesives were used for scanning electron microscopic examination of the resin-dentin interfacial ultrastructure. Dye leakage scores were subjected to statistical analysis using Kruskal-Wallis and Mann-Whitney U-tests at a significance level of P < 0.05. Results: GB depicted significantly more microleakage than OB and XV. The double application led to a significant decrease in the microleakage of GB, with no significant effect on the microleakage scores of the other two all-in-one adhesives, that is, OB and XV. Conclusion: Double application of all-in-one self-etch adhesives improves the marginal sealing ability in dentin, although the effect appears to be product dependent. abstract_id: PUBMED:15973833 Conversion of one-step to two-step self-etch adhesives for improved efficacy and extended application. Purpose: One-step self-etch adhesives have restricted use due to their acid-base incompatibility with autocured composites and their behavior as permeable membranes after polymerization. This study examined the feasibility of their conversion to two-step self-etch adhesives via the adjunctive use of a non-solvented, relatively hydrophobic resin coating. Methods: iBond, Xeno III and Adper Prompt were used either in multiple coats, or in a single coat followed by the use of a layer of Scotchbond Multi-Purpose Plus bond resin for coupling to light- and auto-cured composites. Four types of experiments were performed. Bonded specimens were examined with TEM after immersion in an ammoniacal silver nitrate tracer. Fluid flow measurements of iBond were conducted using the two application protocols to compare the permeability of the bonded dentin with the original smear layer. The permeability of vital dentin bonded with both application protocols was compared for the transudation of dentin fluid across the bonded dentin.
Microtensile bond strengths of dentin bonded with the two protocols were examined for their compatibility with an auto-cured composite. Results: The results of the four experiments were complementary. iBond and Xeno III exhibited "apparent incompatibility" to auto-cured composites that resulted from their inherent permeability. This was confirmed by the presence of dentin fluid transudate on the adhesive surfaces when they were bonded to vital dentin. Conversely, Adper Prompt exhibited "true incompatibility" to auto-cured composites that was caused by adverse acid-base interaction, masking the inherent permeability of this adhesive. "True" and "apparent" incompatibility issues were eliminated upon their conversion to two-step self-etch adhesives. abstract_id: PUBMED:30949631 Universal Adhesives Benefit from an Extra Hydrophobic Adhesive Layer When Light Cured Beforehand. Purpose: To measure microtensile bond strength (μTBS) of universal adhesives immediately and after 6-month aging, with or without an additional adhesive layer applied on a separately light-cured or non-light-cured universal adhesive. Materials And Methods: Eighty human third molars were randomly assigned to 8 experimental groups. The universal adhesives Clearfil Universal Bond (Kuraray Noritake) and Single Bond Universal (3M Oral Care) were used in self-etch mode (following the manufacturer's directions), and either light cured or not before application of an extra hydrophobic adhesive layer (Clearfil SE Bond bond, Kuraray Noritake). The two-step self-etch adhesives Clearfil SE Bond (Kuraray Noritake) and OptiBond XTR (Kerr) were used as references. After composite buildups were bonded to mid-coronal occlusal dentin surfaces, the specimens were stored in water (37°C/24 h) and sectioned into microspecimens (0.96 ± 0.04 mm2). Half of the specimens were immediately subjected to μTBS testing (1.0 mm/min), while the other half was stored in water (37°C) for 6 months prior to testing. 
Failure analysis was performed using stereomicroscopy and SEM. Data were analyzed with two-way repeated measures ANOVA, Tukey's and paired t-tests (p = 0.05). Results: The immediate μTBS was similar for the universal adhesives when applied following the different application strategies (p > 0.05). Application of an extra layer of hydrophobic adhesive improved the aged μTBS of the universal adhesives, which was statistically significant when the universal adhesives were first light cured (p < 0.05). The reference adhesives Clearfil SE Bond and OptiBond XTR exhibited a significantly higher immediate and aged μTBS to dentin than did the universal adhesives (p < 0.05). Conclusion: The bond durability of universal adhesives, applied in self-etch mode, was found to benefit from the application of an extra hydrophobic adhesive layer when the universal adhesives were first light cured. abstract_id: PUBMED:26005459 Shear bond strength of one-step self-etch adhesives: pH influence. Background: The aim of this study was to compare the shear bond strength of four one-step self-etch adhesives with different pH values to enamel and dentin. Materials And Methods: In this in vitro study, 200 bovine permanent mandibular incisors were used. Four one-step self-etch adhesives with different pH values were tested both on enamel and on dentin: Adper™ Easy Bond Self-Etch Adhesive (pH = 0.8-1), Futurabond NR (pH = 2), G-aenial Bond (pH = 1.5), Clearfil S(3) Bond (pH = 2.7). After application of the adhesive systems, a nanohybrid composite resin was inserted onto the bonded surface. The specimens were placed in a universal testing machine. The shear bond strength test was performed at a crosshead speed of 1 mm/min until sample rupture. The shear bond strength values (MPa) of the different groups were compared with analysis of variance after Kolmogorov-Smirnov tests were applied to assess normality of distributions. P < 0.05 was considered significant.
Results: For enamel, the highest shear bond strength values were reported with Futurabond NR (P < 0.01); however, no significant differences were found with Clearfil S(3) Bond. The other adhesive systems showed lower shear bond strength values, with significant differences between them (P < 0.05). When comparing dentin shear bond strength, the lowest values were reported with Clearfil S(3) Bond (P < 0.05), while there were no significant differences among the other three products (P > 0.05). Conclusion: The pH values of the adhesive systems did not significantly influence their shear bond strength to enamel or dentin. Answer: Yes, the durability of one-step self-etch adhesives can be improved by double application or by an extra layer of hydrophobic resin. Several studies have shown that these techniques can enhance the performance of these adhesives. For instance, one study found that the application of a hydrophobic resin layer significantly decreased microleakage for all the adhesive systems tested, suggesting that marginal permeability of one-step adhesives can be minimized by the application of a more hydrophobic resin layer (PUBMED:21116389). Another study reported that the double application of one-step self-etch systems can be safely performed and that the application of an additional hydrophobic resin layer can improve the immediate resin-dentin bonds and reduce the degradation of resin bonds over time (PUBMED:18353520). Furthermore, research has indicated that the bond strength and durability of low pH one-step universal adhesives were significantly improved by the additional application of an extra hydrophobic resin layer both at 24 hours and after 15,000 thermal cycles (PUBMED:37308054). This suggests that certain universal adhesives can benefit from an extra hydrophobic resin layer, enhancing their performance as a primer in a two-step bonding system.
Additionally, a study comparing universal adhesives and two-step self-etch adhesives found that the shear fatigue strength of one universal adhesive (Scotchbond Universal) was not significantly different from that of two-step self-etch adhesives, unlike the other universal adhesives tested (PUBMED:28452076). This suggests that changes to the composition of universal adhesives could bring their dentin bond fatigue durability closer to that of two-step self-etch adhesives. Lastly, another study showed that the bond durability of universal adhesives, applied in self-etch mode, benefited from the application of an extra hydrophobic adhesive layer, especially when the universal adhesives were first light cured (PUBMED:30949631). In summary, the evidence suggests that the durability of one-step self-etch adhesives can indeed be improved by either double application or the addition of an extra hydrophobic resin layer.
Instruction: Are drugs a risk factor of post-ERCP pancreatitis? Abstracts: abstract_id: PUBMED:29521055 Post-ERCP pancreatitis: Pathophysiology, early identification and risk stratification. Acute pancreatitis is the most common and feared complication of endoscopic retrograde cholangiopancreatography (ERCP). The aim of the study was to review the current knowledge on the nomenclature, etiology, pathophysiology, clinical presentation, diagnostic workup, and risk stratification of post-ERCP pancreatitis (PEP). A structured search in PubMed and Scopus databases was performed using search terms related to the subject of diagnosis, pathophysiology, risk stratification of post-ERCP pancreatitis, including full text articles and abstracts in the English language. Several causes, operating both at a local and systemic level, might play an important role in the pathogenesis of PEP. Different patient-related risk factors can help predict post-ERCP pancreatitis; diagnosis depends on clinical presentation, imaging and laboratory investigations. As an outpatient procedure, post-ERCP pancreatitis may be safe in a selected group of low-risk patients. Further investigation of the etio-pathogenesis of post-ERCP pancreatitis is required in order to improve diagnosis and treatment. Early identification and severity stratification of post-ERCP pancreatitis greatly affects the patient's outcome. There is still controversy concerning the risk factors related to PEP. More studies are needed to clarify early and definite diagnosis, risk and severity stratification, as well as treatment of post-ERCP pancreatitis. abstract_id: PUBMED:23599832 Post-ERCP acute pancreatitis and its risk factors. Introduction: Endoscopic retrograde cholangiopancreatography (ERCP) is a complex endoscopic technique that evolved from a diagnostic to a mainly therapeutic procedure. 
This was due to the identification of post-procedural complications that can follow both simple ERCP and ERCP associated with instrumentation of the biliary and pancreatic ducts. The identification of post-ERCP complications in 5 to 10% of cases, with a mortality rate of 0.33%, has prompted their analysis and the study of the risk factors involved in their occurrence. The significance of post-ERCP complications reveals the necessity of avoiding them by adopting additional measures when risk factors are identified. Materials And Methods: We retrospectively analyzed 900 cases that underwent ERCP in the Surgery Department of "Sf. Ioan" Clinical Hospital over a period of 17 years. The complications of the procedure were studied. Among them, special attention was given to post-ERCP acute pancreatitis (pERCP-AP), the most common complication that occurred in the study group. We also tried to identify and highlight the risk factors for this complication. Results: ERCP is a relatively safe invasive procedure, yet it has complications (8% of cases), some of them potentially fatal (mortality 0.43%). The most common complications after ERCP are acute pancreatitis (3.7%), papillary bleeding (1.04%), retroperitoneal duodenal perforation (0.69%) and biliary septic complications like acute cholecystitis and cholangitis (1.21%). Acute pancreatitis is by far the most common complication. Risk factors for its occurrence are difficult sphincterotomy with precut use, failure of CBD desobstruction, pancreatic sphincterotomy, repeated injection of contrast into the pancreatic ductal system, dysfunction of the sphincter of Oddi, and the absence of changes of chronic pancreatitis. When risk factors are identified, patient selection must be very strict and diagnostic ERCP should be avoided in favor of non-invasive diagnostic methods (MRI-cholangiography, echo-endoscopy).
abstract_id: PUBMED:25547802 Risk factors for post-ERCP pancreatitis: A systematic review and meta-analysis. Background And Aim: Pancreatitis is the most common and serious complication of diagnostic and therapeutic endoscopic retrograde cholangiopancreatography (ERCP). Prevention strategies targeting risk factors could be important to reduce the rate of post-ERCP pancreatitis. However, the risk factors for post-ERCP pancreatitis (PEP) are still debated. This systematic review and meta-analysis was performed to identify risk factors for PEP. Methods: Medline (PubMed and Ovid), Cochrane Central Register of Controlled Trials & Database of Systematic Reviews, Embase, Scopus, ScienceDirect, SpringerLink and Web of Science were searched for published studies in all languages. Inclusion and exclusion criteria were defined a priori. Eighteen probable risk factors were evaluated; outcomes for dichotomous variables were expressed as odds ratios (OR) with 95% confidence intervals (CI). Results: When patient-related risk factors were analyzed, the OR for female gender was 1.46 (95% CI: 1.30-1.64); for previous pancreatitis, 2.03 (95% CI: 1.31-3.14); for previous PEP, 2.90 (95% CI: 1.87-4.48); for Sphincter of Oddi dysfunction (SOD), 2.04 (95% CI: 1.73-2.33); and for intraductal papillary mucinous neoplasm (IPMN), 3.01 (95% CI: 1.34-6.77). Four endoscopy-related factors were confirmed: the OR for difficult cannulation was 3.49 (95% CI: 1.364-8.925); for endoscopic sphincterotomy (EST), 1.39 (95% CI: 1.09-1.79); for precut sphincterotomy, 2.25 (95% CI: 1.70-2.96); and for main pancreatic duct injection, 1.58 (95% CI: 1.21-2.08). Conclusions: Female gender, previous pancreatitis, previous PEP, SOD, IPMN, difficult cannulation, EST, precut sphincterotomy and main pancreatic duct injection are risk factors for post-ERCP pancreatitis.
abstract_id: PUBMED:34924299 Risk factors for post-endoscopic retrograde cholangiopancreatography (ERCP) abdominal pain in patients without post-ERCP pancreatitis. Background: Abdominal pain is often observed after endoscopic retrograde cholangiopancreatography (ERCP). Few studies have focused on the risk factors of post-ERCP abdominal pain without post-ERCP pancreatitis (PEP). This study aimed to identify risk factors of post-ERCP abdominal pain without PEP and investigate characteristics of the abdominal pain in non-PEP patients. Methods: Data from patients who underwent ERCP from August 2019 to January 2020 were retrospectively collected. Characteristics of the abdominal pain after ERCP were recorded and compared between PEP and non-PEP patients. Multivariate analysis was conducted to identify risk factors of non-PEP abdominal pain. Results: A total of 1295 ERCP procedures were investigated in this study, among which 100 (7.72%) patients presented post-ERCP abdominal pain without PEP and 63 (4.86%) patients with PEP. Multivariate analysis found 9 risk factors of non-PEP abdominal pain: age ≤ 65 years [odds ratio (OR): 1.971], primary ERCP (OR: 2.442), dilated extrahepatic bile duct (OR: 1.803), no papilla opening (OR: 2.095), pancreatic guidewire passages (OR: 2.258), white blood cells (WBC) ≤ 6.0 × 10⁹/L (OR: 1.689), platelet (PLT) ≤ 250 × 10⁹/L (OR: 2.505), serum γ-glutamyl transferase (γ-GT) ≤ 35 U/L (OR: 2.190), and albumin ≥ 40 g/L (OR: 1.762). The PEP group had later pain onset, higher pain frequency and longer hospital stay than the non-PEP pain group (P < 0.05). There were no significant differences in pain duration, visual analogue scale score and mortality between the PEP group and the non-PEP pain group (P > 0.05).
Conclusions: This study indicated that age ≤ 65 years, primary ERCP, dilated extrahepatic bile duct, no papilla opening, pancreatic guidewire passages, lower WBC, lower PLT, normal γ-GT and elevated albumin were independent risk factors for post-ERCP abdominal pain without PEP. The pain occurred earlier in non-PEP patients than in PEP patients. abstract_id: PUBMED:33321017 Risk factors for post-ERCP pancreatitis: it depends on the ERCP indication. Background And Aims: Endoscopic retrograde cholangiopancreatography (ERCP) is an invasive modality and has a high risk of causing post-ERCP pancreatitis (PEP). Risk factors for PEP have been investigated, with conflicting results for most of them. The aim of this study was to evaluate the risk factors for PEP and to determine whether the risk factors differ according to the ERCP indication. Patients And Methods: A retrospective study was conducted which included 666 patients with 968 ERCP procedures. Several risk factors were evaluated for PEP, both overall and separately for patients with bile duct stones and patients who underwent ERCP for reasons other than bile duct stones. Results: In patients with bile duct stones detected on ERCP, female gender, lower diameter of the common bile duct, placement of a biliary plastic stent and no history of cholecystectomy were risk factors for PEP, whereas in patients without bile duct stones the only risk factor for PEP was the absence of a prior endoscopic sphincterotomy. Conclusions: Our study revealed that PEP risk factors depend on the indication for ERCP. To the best of our knowledge, our study is the first to define cholecystectomy as a protective factor for PEP in patients with bile duct stones, and endoscopic sphincterotomy history as a protective factor for PEP in patients without bile duct stones.
Our study also showed that female gender, lower diameter of the common bile duct and placement of a plastic biliary stent were risk factors for PEP in patients with bile duct stones. abstract_id: PUBMED:14595303 Are drugs a risk factor of post-ERCP pancreatitis? Background: Pancreatitis is the most severe complication of ERCP. The aim of this study was to assess whether the use of potentially pancreatotoxic drugs is a risk factor for post-ERCP pancreatitis. Methods: Risk factors for post-ERCP pancreatitis and all drugs taken during the month before ERCP were recorded retrospectively in a database. Patients with other causes of acute pancreatitis or chronic pancreatitis were excluded from the analysis. Post-ERCP pancreatitis was defined as abdominal pain and/or vomiting associated with amylase/lipase plasma levels equal to or greater than twice the upper normal value. Results: A total of 173 patients (95 men, 78 women; mean age, 68 [16] years) were included. Post-ERCP pancreatitis occurred in 31 patients (18%). Several risk factors were identified in a multivariate analysis: difficulty in cannulation (p < 0.001), endoscopic sphincterotomy (p < 0.005), and female gender (p = 0.02). Having taken potent pancreatotoxic drugs increased the occurrence of post-ERCP pancreatitis: odds ratio 3.7; 95% confidence interval [1.1, 12.4]; p = 0.04. Conclusions: Use of pancreatotoxic drugs before or during ERCP significantly increased the risk of post-ERCP pancreatitis. Thus, discontinuation of the use of such drugs before ERCP seems justified whenever possible. abstract_id: PUBMED:36143938 Post-ERCP Pancreatitis: Prevention, Diagnosis and Management. Endoscopic retrograde cholangiopancreatography (ERCP) carries a post-ERCP pancreatitis (PEP) rate of 2-10%, which could be as high as 30-50% in high-risk cases.
PEP is severe in up to 5% of cases, with potential for life-threatening complications, including multi-organ failure, peripancreatic fluid collections, and death in up to 1% of cases. The risk of PEP is potentially predictable and may be modified with pharmacological measures and endoscopist technique. This review covers the definition, epidemiology and risk factors for PEP, with a focus on the latest evidence-based medical and endoscopic strategies to prevent and manage PEP. abstract_id: PUBMED:32292447 Post-ERCP Pancreatitis: Risk factors and role of NSAIDs in primary prophylaxis. Objective: To determine the efficacy of diclofenac suppository in reducing post-ERCP pancreatitis (PEP) and to identify risk factors for PEP. Methods: This is a placebo-based prospective study at the Department of Medicine & Gastroenterology, Services Institute of Medical Sciences / Services Hospital, Lahore, performed from January 2018 to June 2019. Patients were randomized to receive a diclofenac suppository or a glycerine suppository before ERCP. Both groups were compared for PEP using the chi-square (χ²) test, while risk factors for PEP were determined using binary logistic regression. Results: A total of 165 patients with mean age 49.1 (±15.2) and male to female ratio 1/1.6 (63/102) were included. Among the 82 (49.7%) patients in the diclofenac group, 8 (9.7%) developed pancreatitis, while 19 (22.9%) of the 83 (50.3%) in the placebo group had PEP (p value 0.02). After multivariate analysis, age > 45 years (p value 0.014, OR 3.2), bilirubin > 3 mg/dl (p value 0.004, OR 3.58), time to cannulation > 5 minutes (p value < 0.001, OR 9.2), use of precut (p value < 0.001, OR 4.9), pancreatic duct cannulation (p value < 0.001, OR 5.46) and total procedure time > 30 minutes (p value 0.01, OR 3.92) were risk factors for PEP. Conclusion: Pre-procedure diclofenac suppository reduces post-ERCP pancreatitis.
Age > 45 years, serum bilirubin > 3 mg/dl, cannulation time > 5 minutes, use of precut, pancreatic duct cannulation and procedure time > 30 minutes are risk factors for post-ERCP pancreatitis. abstract_id: PUBMED:24714755 Risk factors for therapeutic ERCP-related complications: an analysis of 2,715 cases performed by a single endoscopist. Background: Endoscopic retrograde cholangiopancreatography (ERCP) is now the exclusive endoscopic therapeutic modality for biliary as well as pancreatic diseases. The aim of the present study was to investigate patient- and procedure-related risk factors for post-ERCP complications in a large-scale study of procedures performed by a single experienced endoscopist. Methods: This is a retrospective cohort study which included a total of 2,715 therapeutic ERCPs in the final analysis. Potentially important patient- and procedure-related risk factors for overall post-ERCP complications, pancreatitis and post-endoscopic sphincterotomy (ES) bleeding were investigated by univariate and multivariate analyses. Results: Following the first therapeutic ERCP, 327 patients suffered complications; pancreatitis was observed in 132 (4.9%) patients, hemorrhage in 122 (4.5%) patients, cholangitis in 63 (2.3%) patients, perforation in 3 (0.11%) patients, and basket impaction in 7 (0.26%) patients. A history of acute pancreatitis was more common in patients with post-ERCP complications (P < 0.001). Female gender, young age (< 40 years), periampullary diverticulum, suspected sphincter of Oddi dysfunction, metal stent placement, opacification of the main pancreatic duct and suprapapillary fistulotomy were not found to be risk factors for overall post-ERCP complications and post-ERCP pancreatitis (PEP).
Multivariate analysis showed a history of acute pancreatitis, difficult cannulation, needle-knife papillotomy, transpancreatic sphincterotomy, opacification of first- and second-class pancreatic ductules, and acinarization as independent risk factors for overall complications and PEP, whereas antiplatelet and anticoagulation drug use were not found to be independent risk factors for post-ES bleeding. Conclusions: The results of this study demonstrate that the endoscopist's experience reduces patient- and procedure-related risk factors for post-ERCP complications. abstract_id: PUBMED:27349659 Preventing Post-ERCP Pancreatitis: Update 2016. Opinion Statement: Post-ERCP pancreatitis remains a major challenge, although significant progress has been made in predicting risk and in methods of prevention. Facets of post-ERCP pancreatitis can be divided into the four "P's": patient-related factors, procedure-related factors, pancreatic stents, and pharmacoprophylaxis. New information about risk factors includes a description of IPMN as a patient-related risk, with smoking and chronic liver disease as protective factors. Procedure-related factors include one or more deep passages of a guidewire into the pancreatic duct as a salient risk, perhaps outweighing difficult cannulation or contrast injection, but one that can be mitigated by placement of a pancreatic stent. In addition, placement of transpapillary metallic stents has emerged as an independent risk for post-ERCP pancreatitis (PEP). Although there has been a rising wave of enthusiasm for rectal NSAIDs as a kind of panacea for prevention of post-ERCP pancreatitis, the newest data question their efficacy in unselected cohorts. Pancreatic stent placement remains the most proven and reliable method of preventing post-ERCP pancreatitis in both mixed- and high-risk ERCP.
Success at placement of protective pancreatic stents is paramount for safety and efficacy, and technical expertise at placing pancreatic stents varies widely among centers. The use of specialized techniques, including very small caliber guidewires and stents, is necessary to approach 100% success. Answer: Yes, drugs can be a risk factor for post-ERCP pancreatitis. The use of potentially pancreatotoxic drugs has been identified as such a risk factor: a study found that having taken potent pancreatotoxic drugs increased the occurrence of post-ERCP pancreatitis with an odds ratio of 3.7, suggesting that the use of such drugs before or during ERCP significantly increased the risk of developing pancreatitis after the procedure. Therefore, it is recommended to discontinue the use of pancreatotoxic drugs before ERCP whenever possible (PUBMED:14595303).
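As a side note on the arithmetic behind the diclofenac trial above (PUBMED:32292447), the reported protective effect (8/82 PEP cases with diclofenac vs. 19/83 with placebo, p = 0.02) can be cross-checked by computing the unadjusted odds ratio and its Woolf 95% confidence interval directly from the reported counts. This is an illustrative sketch, not an analysis taken from the paper itself:

```python
import math

# Counts reported in the diclofenac trial: PEP / no-PEP per arm
diclofenac_pep, diclofenac_no = 8, 82 - 8    # 8 of 82 patients
placebo_pep, placebo_no = 19, 83 - 19        # 19 of 83 patients

# Unadjusted odds ratio (diclofenac vs. placebo) with Woolf-method 95% CI
or_ = (diclofenac_pep * placebo_no) / (diclofenac_no * placebo_pep)
se = math.sqrt(1 / diclofenac_pep + 1 / diclofenac_no
               + 1 / placebo_pep + 1 / placebo_no)
ci_low = math.exp(math.log(or_) - 1.96 * se)
ci_high = math.exp(math.log(or_) + 1.96 * se)

print(f"OR = {or_:.2f}, 95% CI: {ci_low:.2f}-{ci_high:.2f}")
```

The interval excludes 1, which is consistent with the significant p-value the trial reports for the between-group comparison.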
Instruction: VALIDATION OF THE HUNGARIAN MDS-UPDRS: WHY DO WE NEED A NEW PARKINSON SCALE? Abstracts: abstract_id: PUBMED:26118257 VALIDATION OF THE HUNGARIAN MDS-UPDRS: WHY DO WE NEED A NEW PARKINSON SCALE? Background: The Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) was published in 2008 as the successor of the original UPDRS. The MDS-UPDRS organizing team developed guidelines for the development of official non-English translations consisting of four steps: translation/back-translation, cognitive pretesting, large field testing, and clinimetric analysis. The aim of this paper was to introduce the new MDS-UPDRS and its validation process into Hungarian. Methods: Two independent groups of neurologists translated the text of the MDS-UPDRS into Hungarian and subsequently back-translated it into English. After the review of the back-translated English version by the MDS-UPDRS translation administration team, cognitive pretesting was conducted with ten patients. Based on the results of the initial cognitive pretesting, another round was conducted. For the large field testing phase, the Hungarian official working draft version of the MDS-UPDRS was tested with 357 patients with Parkinson's disease (PD). Confirmatory factor analyses (CFA) determined whether the factor structure for the English-language MDS-UPDRS could be confirmed in data collected using the Hungarian Official Draft Version. To become an official translation, the Comparative Fit Index (CFI) had to be ≥ 0.90 compared to the English-language version. Results: For all four parts of the Hungarian MDS-UPDRS, the CFI was ≥ 0.94. Conclusion: The overall factor structure of the Hungarian version was consistent with that of the English version based on the high CFIs for all four parts of the MDS-UPDRS in the CFA; therefore, this version was designated as the "OFFICIAL HUNGARIAN VERSION OF THE MDS-UPDRS".
abstract_id: PUBMED:38292815 Validation of the Kazakh version of the movement disorder Society-Unified Parkinson's disease rating scale. Background And Purpose: The International Movement Disorder Society revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) is widely used in the assessment of the severity of Parkinson's disease (PD). This study aimed to validate the Kazakh version of the MDS-UPDRS, explore its dimensionality, and compare it to the original English version. Methods: The validation was conducted in three phases: first, the English version of the MDS-UPDRS was translated into Kazakh and thereafter back-translated into English by two independent teams; second, the Kazakh version underwent cognitive pretesting; third, the Kazakh version was tested in 360 native Kazakh-speaking PD patients. Both confirmatory and exploratory factor analyses were performed to validate the scale. We calculated the comparative fit index (CFI) for confirmatory factor analysis and used unweighted least squares for exploratory factor analysis. Results: The CFI was higher than 0.90 for all parts of the scale, thereby meeting the pre-set threshold for the official designation of a validated translation. Exploratory factor analysis also showed that the Kazakh MDS-UPDRS has a factor structure in each part analogous to that of the English version. Conclusions: The Kazakh MDS-UPDRS had an overall structure consistent with that of the English MDS-UPDRS, and it was designated as the official Kazakh MDS-UPDRS, which can reliably be used in Kazakh-speaking populations. Presently, Kazakhstan stands as the sole country in both Central Asia and Transcaucasia with an MDS-approved translated version of the MDS-UPDRS. We expect that other Central Asian and Transcaucasian countries will embark on the MDS Translation Program for MDS-UPDRS in the near future.
abstract_id: PUBMED:32639019 Validation of the Polish version of the Movement Disorder Society-Unified Parkinson's Disease Rating Scale (MDS-UPDRS). Background: In 2008, the Movement Disorders Society (MDS) published a new Unified Parkinson's Disease Rating Scale (MDS-UPDRS) as the official benchmark scale for Parkinson's Disease (PD). We have translated and validated the Polish version of the MDS-UPDRS, explored its dimensionality, and compared it to the original English one. Methods: The MDS-UPDRS was translated into Polish by a team of Polish investigators led by JS and GO. The back-translation was completed by colleagues fluent in both languages (Polish and English) who were not involved in the original translation, and was reviewed by members of the MDS Rating Scales Programme. Then the translated version of the MDS-UPDRS underwent cognitive pretesting, and the translation was modified based on the results. The final translation was approved as the Official Working Document of the MDS-UPDRS Polish version, and was tested on 355 Polish PD patients recruited at movement disorders centres all over Poland (at Katowice, Gdańsk, Łódź, Warsaw, Wrocław, and Kraków). Confirmatory and exploratory factor analyses were applied to determine whether the factor structure of the English version could be confirmed in the Polish version. Results: The Polish version of the MDS-UPDRS showed satisfactory clinimetric properties. The internal consistency of the Polish version was satisfactory. In the confirmatory factor analysis, all four parts had a comparative fit index (CFI) greater than 0.90 compared to the original English MDS-UPDRS. Exploratory factor analysis suggested that the Polish version differed from the English version only within an acceptable range. Conclusions And Clinical Implications: The Polish version of the MDS-UPDRS meets the requirements to be designated as the Official Polish Version of the MDS-UPDRS, and is available on the MDS web page.
We strongly recommend using the MDS-UPDRS instead of the UPDRS for research purposes and in everyday clinical practice. abstract_id: PUBMED:30949554 Pilot Study of the International Parkinson and Movement Disorder Society-sponsored Non-motor Rating Scale (MDS-NMS). Background: Non-motor symptoms (NMS) are integral to Parkinson's disease (PD) and have a detrimental effect on patients and their caregivers. Clinical quantification has been aided by the development of comprehensive assessments such as the Non-Motor Symptoms Questionnaire (NMSQuest) and Scale (NMSS). The NMSS has been widely used in clinical studies and trials; however, since its validation in 2007, our understanding of NMS has changed substantially. With the support of the International Parkinson and Movement Disorder Society (IPMDS), after a detailed peer review, an initiative to develop an updated version of the NMSS, the MDS-NMS, was launched in 2015. Objective: This paper encapsulates the data from the pre-validation phases carried out under the auspices of the IPMDS Non-Motor PD Study Group. Methods: Item selection and wording (formatted as a rater-based tool) were based on the NMSS, literature review, and expert consensus. Neurologists, PD patients, and healthy controls were included in the cognitive pretesting and administration of the preliminary version of the MDS-NMS. Primary data on acceptability and reliability were obtained. Results: The pilot study, carried out in English in the United Kingdom and the United States, demonstrated that the preliminary version of the MDS-NMS was comprehensive, understandable, and appropriate. Data quality was excellent; a moderate floor effect was present in patients for most MDS-NMS domains, with some components showing weak internal consistency. The results led to additional instrument modifications. Conclusion: Qualitative and quantitative research results have led to an updated NMSS, the definitive version of the MDS-NMS, which is currently being validated.
abstract_id: PUBMED:25328906 Official Japanese Version of the Movement Disorder Society-Unified Parkinson's Disease Rating Scale: validation against the original English version. Background: The Movement Disorder Society (MDS)-sponsored revision of the Unified Parkinson's Disease (PD) Rating Scale (UPDRS) (MDS-UPDRS) has been developed and is now available in English. Part of the overall program includes the establishment of official non-English translations of the MDS-UPDRS. We present the process for completing the official Japanese translation of the MDS-UPDRS with clinimetric testing results. Methods: In this trial, the MDS-UPDRS was translated into Japanese, underwent cognitive pre-testing, and the translation was modified after taking the results into account. The final translation was approved as the Official Working Draft of the MDS-UPDRS Japanese version and tested in 365 native-Japanese-speaking patients with PD. Confirmatory analyses were used to determine whether the factor structure for the English-language MDS-UPDRS could be confirmed in data collected using the Official Working Draft of the Japanese translation. As a secondary analysis, we used exploratory factor analyses to examine the underlying factor structure without the constraint of a pre-specified factor organization. Results: Confirmatory factor analysis revealed that the Comparative Fit Index for all parts of the MDS-UPDRS exceeded the minimal standard of 0.90 relative to the English version; therefore, the Japanese translation met the pre-specified criterion to be designated an OFFICIAL MDS TRANSLATION. Secondary analyses revealed some differences between the English-language MDS-UPDRS and the Japanese translation; however, these differences were considered to be within an acceptable range. Conclusions: The Japanese version of the MDS-UPDRS met the criterion as an Official MDS Translation and is now available for use (www.movementdisorders.org).
abstract_id: PUBMED:33029970 Validation Study of the Official Korean Version of the Movement Disorder Society-Unified Parkinson's Disease Rating Scale. Background And Purpose: The Movement Disorder Society-Sponsored Revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) is widely used for estimating the symptoms of Parkinson's disease. Translation and validation of the MDS-UPDRS is necessary for non-English speaking countries and regions. The aim of this study was to validate the Korean version of the MDS-UPDRS. Methods: Altogether, 362 patients in 19 centers were recruited for this study. We translated the MDS-UPDRS to Korean using the translation-back translation method and cognitive pretesting. We performed both confirmatory and exploratory factor analyses to validate the scale. We calculated the comparative fit index (CFI) for confirmatory factor analysis, and used unweighted least squares for exploratory factor analysis. Results: The CFI was higher than 0.90 for all parts of the scale. Exploratory factor analysis also showed that the Korean MDS-UPDRS has the same number of factors in each part as the English version. Conclusions: The Korean MDS-UPDRS has the same overall structure as the English MDS-UPDRS. Our translated scale can be designated as the official Korean MDS-UPDRS. abstract_id: PUBMED:35287261 Validation of the Thai Version of the Movement Disorder Society-Sponsored Revision of the Unified Parkinson's Disease Rating Scale. Objective: This study aims to validate the Thai translation of the Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS). Methods: The English version was translated into Thai and then back-translated into English. The translated version underwent 2 rounds of cognitive pretesting to assess the ease of comprehension, ease of use and comfort with the scale. Then, it underwent large clinimetric testing. Results: The Thai version was validated in 354 PD patients. 
The comparative fit index (CFI) for all four parts of the Thai version of the MDS-UPDRS was 0.93 or greater. Exploratory factor analysis identified isolated item differences in factor structure between the Thai and English versions. Conclusion: The overall factor structure of the Thai version was consistent with that of the English version based on the high CFIs (all CFI ≥ 0.90). Hence, it can be designated the official Thai version of the MDS-UPDRS. abstract_id: PUBMED:28965872 Validation of the Hebrew version of the Movement Disorder Society-Unified Parkinson's Disease Rating Scale. Background: The Movement Disorders Society (MDS) published the new English Unified Parkinson's Disease Rating Scale (MDS-UPDRS) as the official benchmark scale for Parkinson's disease (PD) in 2008. We aimed to validate the Hebrew version of the MDS-UPDRS, explore its dimensionality and compare it to the original English one. Methods: The MDS-UPDRS questionnaire was translated to Hebrew and was tested on 389 patients with PD, treated at the Movement Disorders Unit at Tel-Aviv Medical Center. The MDS-UPDRS is made up of four sections. The higher the score, the worse the clinical situation of the patient is. Confirmatory and exploratory factor analyses were applied to determine if the factor structure of the English version could be confirmed in the Hebrew version. Results: The Hebrew version of the MDS-UPDRS showed satisfactory clinimetric properties. The internal consistency of the Hebrew version was satisfactory, with Cronbach's alpha values 0.79, 0.90, 0.93, 0.80, for parts 1 to 4 respectively. In the confirmatory factor analysis, all four parts had a high (greater than 0.90) comparative fit index (CFI) in comparison to the original English MDS-UPDRS with high factor structure (0.96, 0.99, 0.94, 1.00, respectively), thus confirming the pre-specified English factor structure.
Exploratory factor analysis showed that the Hebrew responses differed from the English ones only within an acceptable range: isolated item differences in factor structure, and a few items with cross-loadings on multiple factors. Conclusions: The Hebrew version of the MDS-UPDRS meets the requirements to be designated as the Official Hebrew Version of the MDS-UPDRS. abstract_id: PUBMED:24084382 Validation and attempts of revision of the MDS-recommended tests for the screening of Parkinson's disease dementia. The Movement Disorders Society (MDS) formulated diagnostic criteria and assessment guidelines for the screening of dementia in Parkinson's disease (PD). We carried out a validation of the cognitive measures suggested in the screening algorithm (i.e. the Mini Mental State Examination - MMSE - total score, serial 7s subtraction, 3-word recall, pentagons copy, and one minute letter fluency) in 86 patients with PD. Thirty-six percent of participants were diagnosed with dementia using the MDS algorithm, but with the Dementia Rating Scale instead of the MMSE. The original MDS procedure misclassified 11 patients (12.8%) as false negatives and 3 (3.5%) as false positives, leading to 65% sensitivity and 95% specificity. The main reason for misdiagnoses was insensitivity of the MMSE total score. Three attempts were made to reach a better screening performance, a context in which high sensitivity matters more than high specificity: 1. exclusion of the MMSE total score as a diagnostic requirement; 2. determination of a better cut-off through Receiver Operating Characteristic curve analysis; 3. replacement of the MMSE with the equally undemanding, but more PD-specific, Mini Mental Parkinson. The first two strategies generally yielded high sensitivity, but poor specificity. The best outcome was achieved using a Mini Mental Parkinson total score <27 as the cognitive criterion: sensitivity was 87% and negative predictive value was 90%; however, specificity was only 67%.
Our findings seem to suggest that MDS practical guidelines are specific, but might benefit from the use of more PD-oriented tools than the MMSE in terms of sensitivity. abstract_id: PUBMED:26096210 Proposing a Parkinson's disease-specific tremor scale from the MDS-UPDRS. Background: This article proposes an International Parkinson and Movement Disorder Society (MDS)-UPDRS tremor-based scale and describes its measurement properties, with a view to developing an improved scale for assessing tremor in Parkinson's disease (PD). Methods: This was a cross-sectional, multicenter study of 435 PD patients. Rasch analysis was performed on the 11 MDS-UPDRS tremor items. Construct validity, precision, and test-retest reliability were also analyzed. Results: After some modifications, which included removal of an item owing to redundancy, the obtained MDS-UPDRS tremor scale showed moderate reliability, unidimensionality, absence of differential item functioning, satisfactory convergent validity with medication, and better precision than the raw sum score. However, the scale displayed a floor effect and a need for more items measuring lower levels of tremor. Conclusions: The MDS-UPDRS tremor scale provides linear scores that can be used to assess tremor in PD in a valid, reliable way. The scale might benefit from modifications and studies that analyze its responsiveness. Answer: The validation of the Hungarian version of the Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) is part of a broader effort to provide standardized, high-quality tools for the assessment of Parkinson's disease (PD) across different languages and cultural contexts. The MDS-UPDRS was published in 2008 as the successor to the original UPDRS, with the goal of improving the assessment of PD symptoms and their impact on patients (PUBMED:26118257). 
The need for a new Parkinson scale, such as the MDS-UPDRS, arises from the evolving understanding of PD and its symptoms, as well as the necessity for a more comprehensive and sensitive tool that can capture the full spectrum of motor and non-motor symptoms associated with the disease. The MDS-UPDRS includes modifications and additions to the original UPDRS to address these needs. The validation process for the Hungarian MDS-UPDRS involved translation and back-translation, cognitive pretesting with patients, large field testing, and clinimetric analysis to ensure that the Hungarian version maintained the integrity and factor structure of the original English-language version. The Comparative Fit Index (CFI) was used as a measure of how well the translated version matched the original, with a CFI ≥ 0.90 indicating an acceptable fit. The Hungarian MDS-UPDRS achieved a CFI ≥ 0.94 for all four parts, confirming its consistency with the English version and leading to its designation as the official Hungarian version of the MDS-UPDRS (PUBMED:26118257). The validation of the Hungarian MDS-UPDRS, along with other language versions, supports the global standardization of PD assessment, allowing for more accurate diagnosis, better tracking of disease progression, and improved comparability of research findings across different populations. This is crucial for advancing the understanding of PD and enhancing patient care worldwide.
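The CFI ≥ 0.90 criterion used throughout these validation studies can be sketched as follows; this is a generic implementation of Bentler's comparative fit index from model and baseline chi-square statistics, with made-up fit statistics rather than values from any of the cited studies.

```python
def cfi(chi2_model, df_model, chi2_baseline, df_baseline):
    """Bentler's comparative fit index:
    CFI = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 0)."""
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_baseline - df_baseline, chi2_model - df_model, 0.0)
    return 1.0 - num / den

# Hypothetical fit statistics for one part of a translated scale
value = cfi(150.0, 100.0, 1200.0, 120.0)
print(f"CFI = {value:.3f}, passes 0.90 threshold: {value >= 0.90}")
```

A translation whose model chi-square does not exceed its degrees of freedom yields a CFI of 1.0, the best possible fit; values below the 0.90 threshold would fail the MDS criterion for official designation.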
Instruction: Do measures of body fat distribution provide information on the risk of type 2 diabetes in addition to measures of general obesity? Abstracts: abstract_id: PUBMED:12941718 Do measures of body fat distribution provide information on the risk of type 2 diabetes in addition to measures of general obesity? Comparison of anthropometric predictors of type 2 diabetes in Pima Indians. Objective: To investigate which anthropometric measurements of obesity best predict type 2 diabetes in a population of Pima Indians and whether additional information on diabetes risk could be obtained by combining measures of general obesity with measures of body fat distribution. Research Design And Methods: We conducted a prospective study of 624 men and 990 nonpregnant women >18 years of age without diabetes. Subjects were followed a mean of 5.25 years for the development of type 2 diabetes (using 1997 American Diabetes Association criteria). Results: A total of 322 new cases of type 2 diabetes (107 men and 215 women) were diagnosed during follow-up. Baseline obesity measurements were highly correlated and predicted diabetes in proportional hazards models adjusted for age. BMI had the highest hazard ratio in men and women, with age-adjusted hazard ratios per SD of 1.73 (95% CI 1.44-2.07) and 1.67 (1.45-1.91), respectively. According to receiver-operating characteristic analysis, BMI and waist-to-height ratio were the best predictors of diabetes in men, while in women BMI, waist-to-height ratio, waist circumference, and waist-to-thigh ratio were the best predictors. The predictive abilities of models containing BMI were not significantly improved by including other measures of general obesity or measures of body fat distribution. Conclusions: Throughout its range, BMI was an excellent predictor of type 2 diabetes risk in Pima Indians and was not significantly improved by combining it with other measures of general adiposity or body fat distribution.
abstract_id: PUBMED:22443340 Impact of body mass index on the predictive ability of body fat distribution for type 2 diabetes risk in Koreans. Aims: The optimal anthropometric measure of obesity or body fat distribution that best predicts the risk of Type 2 diabetes in Asians is unclear. Moreover, it has not been determined whether BMI modifies the effect of body fat distribution on diabetes risk in Asians. Methods: We analysed the anthropometric and laboratory data of 7658 non-diabetic Korean adults (5061 men and 2597 women, aged 20-79 years) who underwent routine medical check-ups at 5-year intervals. BMI, waist circumference, waist-to-height ratio, and bioelectrical impedance (to calculate fat mass and per cent body fat) were measured at baseline. Results: Of the 7658 participants, 278 subjects (3.6%) developed diabetes over 5 years. Each of the anthropometric measures of general obesity (BMI, fat mass, per cent body fat) and central body fat distribution (waist circumference and waist-to-height ratio) was a good predictor of Type 2 diabetes. However, when the areas under the receiver-operating characteristic curves were compared, BMI (0.697; 95% CI, 0.669-0.725), waist circumference (0.709, 0.682-0.736) and waist-to-height ratio (0.718, 0.692-0.743) were better predictors of diabetes risk than fat mass (0.672, 0.643-0.700) or per cent body fat (0.657, 0.628-0.686). In the low- (<23 kg/m²) and mid- (23-27 kg/m²) BMI groups, the addition of waist-to-height ratio or waist circumference to BMI could improve the prediction of diabetes risk. Conclusions: BMI, waist circumference and waist-to-height ratio were good predictors of Type 2 diabetes risk in Koreans. In non-obese or less obese subjects, measures of central body fat distribution can help improve the prediction of Type 2 diabetes risk when added to measures of general obesity. abstract_id: PUBMED:28073971 Sexual dimorphisms in genetic loci linked to body fat distribution.
Obesity is a chronic condition associated with increased morbidity and mortality and is a risk factor for a number of other diseases including type 2 diabetes and cardiovascular disease. Obesity confers an enormous, costly burden on both individuals and public health more broadly. Body fat distribution is a heritable trait and a well-established predictor of adverse metabolic outcomes. Body fat distribution is distinct from overall obesity in measurement, but studies of body fat distribution can yield insights into the risk factors for and causes of overall obesity. Sexual dimorphism in body fat distribution is present throughout life. Though sexual dimorphism is subtle in early stages of life, it is attenuated in puberty and during menopause. This phenomenon could be, at least in part, due to the influence of sex hormones on the trait. Findings from recent large genome-wide association studies (GWAS) for various measures of body fat distribution (including waist-to-hip ratio, hip or waist circumference, trunk fat percentage and the ratio of android and gynoid fat percentage) emphasize the strong sexual dimorphism in the genetic regulation of fat distribution traits. Importantly, sexual dimorphism is not observed for overall obesity (as assessed by body mass index or total fat percentage). Notably, the genetic loci associated with body fat distribution, which show sexual dimorphism, are located near genes that are expressed in adipose tissues and/or adipose cells. Considering the epidemiological and genetic evidence, sexual dimorphism is a prominent feature of body fat distribution. Research that specifically focuses on sexual dimorphism in fat distribution can provide novel insights into human physiology and into the development of obesity and its comorbidities, as well as yield biological clues that will aid in the improvement of disease prevention and treatment. abstract_id: PUBMED:17372330 Measures of adiposity and cardiovascular disease risk factors. 
Objective: To determine which of five measures of adiposity maintains the strongest association with cardiovascular disease risk factors. Research Methods And Procedures: A nationally representative sample of 12,608 adult participants of the third National Health and Nutrition Examination Survey were examined. Waist circumference, total body fat, percent body fat, BMI, and skinfold thickness were measured following a standardized protocol. Results: In multivariable adjusted models including waist circumference and BMI as independent variables, waist circumference was a significantly better predictor. The odds ratios (95% confidence intervals) for each standard deviation higher waist circumference and BMI for men were as follows: 1.88 (1.43, 2.48) and 0.99 (0.76, 1.29), respectively, for hypertension; 1.51 (0.87, 2.59) and 1.23 (0.76, 1.99), respectively, for diabetes; and 1.85 (1.48, 2.32) and 1.00 (0.80, 1.24), respectively, for low high-density lipoprotein-cholesterol. The analogous odds ratios (95% confidence intervals) for women were as follows: 2.28 (1.74, 3.00) and 0.91 (0.69, 1.19), respectively, for hypertension; 2.72 (1.85, 4.00) and 0.82 (0.55, 1.23), respectively, for diabetes; and 1.90 (1.47, 2.47) and 1.07 (0.83, 1.38), respectively, for low high-density lipoprotein-cholesterol. Results were markedly similar for waist circumference in models adjusting for total body fat, percent body fat, and skinfold thickness separately. In contrast, waist circumference was not a significantly better predictor of elevated C-reactive protein than the other measures of adiposity. Discussion: Waist circumference maintains a stronger association with cardiovascular disease risk factors than other measures of adiposity. abstract_id: PUBMED:25849815 Associations between ultrasound measures of abdominal fat distribution and indices of glucose metabolism in a population at high risk of type 2 diabetes: the ADDITION-PRO study. 
Aims: Visceral adipose tissue measured by CT or MRI is strongly associated with an adverse metabolic risk profile. We assessed whether similar associations can be found with ultrasonography, by quantifying the strength of the relationship between different measures of obesity and indices of glucose metabolism in a population at high risk of type 2 diabetes. Methods: A cross-sectional analysis of 1342 participants of the ADDITION-PRO study. We measured visceral adipose tissue and subcutaneous adipose tissue with ultrasonography, anthropometrics and body fat percentage by bioelectrical impedance. Indices of glucose metabolism were derived from a three-point oral glucose tolerance test. Linear regression of obesity measures on indices of glucose metabolism was performed. Results: Mean age was 66.2 years, BMI 26.9 kg/m², subcutaneous adipose tissue 2.5 cm and visceral adipose tissue 8.0 cm. All measures of obesity were positively associated with indicators of glycaemia and inversely associated with indicators of insulin sensitivity. Associations were of equivalent magnitude except for subcutaneous adipose tissue and the visceral/subcutaneous adipose tissue ratio, which showed weaker associations. One standard deviation difference in BMI, visceral adipose tissue, waist circumference, waist/height ratio and body fat percentage corresponded approximately to 0.2 mmol/l higher fasting glucose, 0.7 mmol/l higher 2-hr glucose, 0.06-0.1% higher HbA1c, 30% lower HOMA index of insulin sensitivity, 20% lower Gutt's index of insulin sensitivity, and 100 units higher Stumvoll's index of beta-cell function. After adjustment for waist circumference, visceral adipose tissue was still significantly associated with glucose intolerance and insulin resistance, whereas there was a trend towards inverse or no associations with subcutaneous adipose tissue.
After adjustment, a 1 cm increase in visceral adipose tissue was associated with ~5% lower insulin sensitivity (p≤0.0004) and ~0.18 mmol/l higher 2-hr glucose (p≤0.001). Conclusion: Visceral and subcutaneous adipose tissue assessed by ultrasonography are significantly associated with glucose metabolism, even after adjustment for other measures of obesity. abstract_id: PUBMED:35886153 Torso Shape Improves the Prediction of Body Fat Magnitude and Distribution. Background: As obesity increases throughout the developed world, concern for the health of the population rises. Obesity increases the risk of metabolic syndrome, a cluster of conditions associated with type-2 diabetes. Correctly identifying individuals at risk from metabolic syndrome is vital to ensure interventions and treatments can be prescribed as soon as possible. Traditional anthropometrics have some success in this, particularly waist circumference. However, body size is limited when trying to account for a diverse range of ages, body types and ethnicities. We have assessed whether measures of torso shape (from 3D body scans) can improve the performance of models predicting the magnitude and distribution of body fat. Methods: From 93 male participants (age 43.1 ± 7.4) we captured anthropometrics and torso shape using a 3D scanner, body fat volume using an air displacement plethysmography device (BODPOD®) and body fat distribution using bioelectric impedance analysis. Results: Predictive models containing torso shape had an increased adjusted R² and lower mean square error when predicting body fat magnitude and distribution. Conclusions: Torso shape improves the performance of anthropometric predictive models, an important component of identifying metabolic syndrome risk. Future work must focus on fast, low-cost methods of capturing the shape of the body. abstract_id: PUBMED:34473314 A Matter of Fat: Body Fat Distribution and Cardiometabolic Disease in Africa.
Africa is a complex and diverse continent that faces numerous challenges. It is a region in epidemiological transition which is currently experiencing a dual burden of communicable and non-communicable diseases. The high prevalence of cardiometabolic disease (CMD) on the continent is driven largely by the increasing prevalence of obesity in the more affluent African nations. Although epidemiological studies demonstrate that a greater level of total body fat is associated with a higher risk for CMD, there is a complex association between body fat distribution and CMD risk. Thus, visceral adipose tissue (VAT) is considered a prime etiological agent for CMD, while subcutaneous adipose tissue (SAT) may act as a protective factor. The literature demonstrates positive correlations of VAT with type 2 diabetes, hypertension, and atherogenic dyslipidemia. However, the mechanisms via which VAT and SAT modulate CMD risk in African patients require further investigation. In addition, studies from high-income countries have shown that HIV and antiretroviral therapy (ART) are associated with changes in body fat distribution and higher risk for CMDs. The prevalence of HIV infection is at its highest in sub-Saharan Africa. However, cross-sectional studies from this region have produced contradictory results on the association of HIV and ART with CVD risk factors, and data is required from large prospective studies to clarify these relationships. abstract_id: PUBMED:25226803 The effect of exercise on obesity, body fat distribution and risk for type 2 diabetes. It is well known that obesity is a major risk factor for type 2 diabetes (T2D), while exercise is known to reduce body fatness and attenuate the risk of T2D. The aim of this chapter is to examine the interactions between exercise, obesity and body fat distribution, and the risk for T2D. 
Firstly, we show that body fatness, in particular visceral adipose tissue (VAT) accumulation, is associated with insulin resistance and incident T2D. We then show that aerobic exercise of sufficient intensity and volume results in a decrease in body fat and VAT. Conversely, sedentary behavior and physical inactivity are associated with increased body fat and VAT. Finally, the chapter examines the interaction between physical activity (PA), obesity and risk for T2D and shows that both obesity and PA are significant independent predictors of incident T2D, but the magnitude of risk imparted by high levels of body fat is much greater than that of low levels of PA. Further, we show that obese physically active individuals are at greater risk for incident T2D than normal-weight physically inactive individuals. The mechanisms underlying this complex interaction include the ability of exercise to increase free fatty acid oxidation to match high rates of lipolysis associated with obesity, as well as the effects of exercise on adipokine, cytokine and myokine secretion. Exercise, of sufficient volume and intensity, is therefore recommended to reduce obesity, centralization of body fat, and risk of T2D. abstract_id: PUBMED:26905521 MR-based assessment of body fat distribution and characteristics. The assessment of body fat distribution and characteristics using magnetic resonance (MR) methods has recently gained significant attention as it further extends our pathophysiological understanding of diseases including obesity, metabolic syndrome, or type 2 diabetes mellitus, and allows more detailed insights into treatment response and effects of lifestyle interventions. Therefore, the purpose of this study was to review the current literature on MR-based assessment of body fat distribution and characteristics. PubMed search was performed to identify relevant studies on the assessment of body fat distribution and characteristics using MR methods. 
T1-, T2-weighted MR Imaging (MRI), Magnetic Resonance Spectroscopy (MRS), and chemical shift-encoding based water-fat MRI have been successfully used for the assessment of body fat distribution and characteristics. The relationship of insulin resistance and serum lipids with abdominal adipose tissue (i.e. subcutaneous and visceral adipose tissue), liver, muscle, and bone marrow fat content have been extensively investigated and may help to understand the underlying pathophysiological mechanisms and the multifaceted obese phenotype. MR methods have also been used to monitor changes of body fat distribution and characteristics after interventions (e.g. diet or physical activity) and revealed distinct, adipose tissue-specific properties. Lastly, chemical shift-encoding based water-fat MRI can detect brown adipose tissue which is currently the focus of intense research as a potential treatment target for obesity. In conclusion, MR methods reliably allow the assessment of body fat distribution and characteristics. Irrespective of the promising findings based on these MR methods the clinical usefulness remains to be established. abstract_id: PUBMED:19654593 Body mass index, waist circumference and waist:hip ratio as predictors of cardiovascular risk--a review of the literature. Overweight and obesity have become a major public health problem in both developing and developed countries as they are causally related to a wide spectrum of chronic diseases including type II diabetes, cardiovascular diseases and cancer. However, uncertainty regarding the most appropriate means by which to define excess body weight remains. Traditionally, body mass index (BMI) has been the most widely used method by which to determine the prevalence of overweight in, and across, populations as well as an individual's level of risk. 
However, in recent years, measures of central obesity, principally waist circumference and the waist:hip ratio and to a lesser extent the waist:height ratio, which more accurately describe the distribution of body fat compared with BMI, have been suggested to be more closely associated with subsequent morbidity and mortality. There is also uncertainty about how these measures perform across diverse ethnic groups; earlier, most of the evidence regarding the relationships between excess weight and risk has been derived chiefly from Caucasian populations, and hence, it remains unclear whether the relationships are consistent in non-Caucasian populations. The purpose of this review, therefore, is to provide an overview of the current evidence-base focusing predominantly on three main questions: (1) Which, if any, of the commonly used anthropometric measures to define excess weight is more strongly associated with cardiovascular risk? (2) Which of the anthropometric measures is a better discriminator of risk? and (3) Are there any notable differences in the strength and nature of these associations across diverse ethnic groups? Answer: Yes, measures of body fat distribution do provide information on the risk of type 2 diabetes in addition to measures of general obesity. Studies have shown that while general obesity measures such as body mass index (BMI) are good predictors of type 2 diabetes risk, measures of body fat distribution, such as waist circumference and waist-to-height ratio, can further improve the prediction of diabetes risk, especially in certain populations or BMI groups. For instance, in Pima Indians, BMI was found to be an excellent predictor of type 2 diabetes risk and was not significantly improved by combining it with other measures of general adiposity or body fat distribution (PUBMED:12941718). 
However, in a Korean population, BMI, waist circumference, and waist-to-height ratio were all good predictors of type 2 diabetes risk, and in non-obese or less obese subjects, measures of central body fat distribution could help improve the prediction of diabetes risk when added to measures of general obesity (PUBMED:22443340). Furthermore, sexual dimorphism in body fat distribution is a well-established predictor of adverse metabolic outcomes, and studies have emphasized the strong sexual dimorphism in the genetic regulation of fat distribution traits, which is distinct from overall obesity (PUBMED:28073971). Waist circumference has also been shown to maintain a stronger association with cardiovascular disease risk factors than other measures of adiposity (PUBMED:17372330). Additionally, ultrasonography measures of visceral and subcutaneous adipose tissue have been significantly associated with glucose metabolism, even after adjustment for other measures of obesity (PUBMED:25849815). Torso shape, as measured by 3D body scans, has been found to improve the performance of anthropometric predictive models for body fat magnitude and distribution, which are important components of identifying metabolic syndrome risk (PUBMED:35886153). In Africa, the complex association between body fat distribution and cardiometabolic disease risk has been highlighted, with visceral adipose tissue considered a prime etiological agent for cardiometabolic disease, while subcutaneous adipose tissue may act as a protective factor (PUBMED:34473314). Exercise has been shown to reduce body fatness, particularly visceral adipose tissue, and attenuate the risk of type 2 diabetes (PUBMED:25226803).
Instruction: The trajectory of psychological impact in BRCA1/2 genetic testing: does time heal? Abstracts: abstract_id: PUBMED:18787910 The trajectory of psychological impact in BRCA1/2 genetic testing: does time heal? Background: Most research on adjustment of women undergoing genetic testing for breast cancer susceptibility has not followed women for more than 6 months after result receipt and has not evaluated curvilinear patterns in general and cancer-specific adjustment. Purpose: This study's primary goal was to examine the trajectory of psychological status in women at risk for breast and ovarian cancer prior to undergoing genetic testing through 1 year after BRCA1/2 result receipt. Methods: Women in the UCLA Familial Cancer Registry completed questionnaires assessing psychological status (i.e., depressive symptoms, negative and positive mood, anxiety, and cancer-related distress) prior to testing and at 1, 6, and 12 months after result receipt. Results: Of 155 women tested, 117 were BRCA1/2- (96 uninformative negative and 21 true negative) and 38 were BRCA1/2+. Linear mixed model analyses revealed a consistent pattern in adjustment indicators, such that the groups did not differ at baseline, but mutation carriers endorsed significantly more depressive symptoms, negative mood, and cancer-specific distress relative to non-mutation carriers at 1 and 6 months after test result receipt (and less positive mood at 6 months only). At 12 months, negative and positive mood returned to baseline levels for mutation carriers, and depressive symptoms approached baseline. At 12 months, the groups differed significantly only on cancer-specific distress, owing to declining distress in non-carriers. Neither having a previous cancer diagnosis nor receiving a true negative versus uninformative negative result predicted reactions to genetic testing. 
Conclusions: Genetic testing prompted an increase in general and cancer-specific distress for BRCA1/2+ women, which remitted by 1 year after result receipt. abstract_id: PUBMED:15937976 Psychological impact of genetic testing for cancer susceptibility: an update of the literature. This article presents an overview of the rapidly evolving body of literature on the psychological impact of genetic testing for hereditary breast/ovarian cancer susceptibility, hereditary non-polyposis colorectal cancer (HNPCC) and familial adenomatous polyposis (FAP). Uptake of genetic testing for BRCA1/2 and HNPCC-related mutations is more consistently related to psychological factors, rather than sociodemographic variables. Most studies on the psychological impact of genetic testing amongst individuals who have never been affected by cancer demonstrate that non-carriers derive significant psychological benefits from genetic testing, while no adverse effects have been observed amongst carriers. These benefits are more clear-cut for HNPCC, compared to hereditary breast/ovarian cancer, reflecting differences in risk management options. The few studies available on individuals affected with cancer indicate that the impact of genetic testing is mediated and amplified by their former experience of cancer. Future directions and challenges of research in this area are reviewed. In particular, more empirical data are needed on the broader impact of genetic testing on those with inconclusive results or results of uncertain significance. As genetic testing is becoming available for other types of familial cancer, additional investigations will be needed as there is evidence to suggest that the impact of genetic testing may be unique to each type of familial cancer. abstract_id: PUBMED:26741411 The psychological impact of genetic information on children: a systematic review. 
Purpose: This review assessed the psychological impact that acquiring personal and familial genetic information has on children. We also examined the concordance between the available empirical data and clinical guidance/perspectives articles. Methods: We screened 591 abstracts and identified 13 studies, representing 966 children. Ten studies assessed 386 children tested for familial adenomatous polyposis (n = 171), hereditary cardiac disease (n = 134), and other conditions (n = 81). Three studies addressed the impact of BRCA1/2 testing of a family member on 580 children. Results: Serious adverse psychological outcomes were uncommon. Most studies reported no significant increase in mean anxiety, depression, and distress scores (n = 8, 61.5%); however, some children experienced intrafamilial distress, discrimination, and guilt/regret. Some children were more concerned about their own health or their family members' health. There was limited consistency between anticipated adverse impact and empirical data. Conclusions: The review identified little conclusive evidence of deleterious psychological consequences for children acquiring genetic information. However, there is a lack of data regarding genetic testing for conditions that may not be treatable/modifiable, as well as a dearth of longitudinal studies. Therefore, clinical caution remains essential for the ethical integration of genetic testing into pediatrics. Further research assessing the potential positive and negative effects of genetic testing in childhood is warranted.Genet Med 18 8, 755-762. abstract_id: PUBMED:11584901 Psychological impact of genetic counselling and testing in women previously diagnosed with breast cancer. Background: The recent discovery of susceptibility genes relating to breast cancer, BRCA1 and BRCA2, now allows women with breast cancer and a family history of breast/ovarian cancer to undergo genetic testing to identify a causative germ-line mutation. 
The present study assessed the psychological status over time of women affected by breast cancer requesting genetic testing (cases; n = 32) compared with matched controls (n = 28). Methods: Subjects were recruited through two Sydney-based hospitals. Data were collected via questionnaire and telephone interview at baseline, and 2 weeks and 3-6 months following counselling. Results: Genetic test results were not received by the subjects within the study period. Cases showed a greater increase in knowledge of cancer genetics following counselling compared with controls, and this was maintained over time. Psychological symptoms remained stable over the study period and there were no differences between groups. Conclusions: Increased knowledge following genetic counselling was not accompanied by an increase in anxiety or depression. Further assessment will be required in the long term to determine the psychological impact of receiving a genetic test result. abstract_id: PUBMED:12692658 Attitudes and psychological impact of genetic testing, genetic counseling, and breast cancer risk assessment among women at increased risk. Purpose/objectives: To review research related to the psychological functioning of women with family histories of breast cancer, the impact of genetic counseling on women at increased risk, and their participation in and description of breast cancer risk evaluation programs. Data Sources: Published articles and material from the World Wide Web. Data Synthesis: Findings from these sources suggest an underlying level of psychological distress in women with family histories of breast cancer. This may either increase or decrease their surveillance practices. With the onset of commercial genetic testing for BRCA1 and BRCA2, researchers have studied some of the initial psychosocial effects of genetic information on women at high risk. 
Conclusions: Women with family histories of breast cancer have a very high interest in genetic testing for BRCA1 and BRCA2 mutations. They have an underlying level of psychological distress that is not relieved by genetic counseling. They tend to state reasons for wanting and not wanting testing that are not polar opposites. Women who attend breast cancer risk assessment programs tend to be self-referred, Caucasian, well-educated, and of middle or upper income status. Large gaps exist in the research on women of color and those who are less educated and of lower socioeconomic status. Implications For Nursing: Nurses and other healthcare professionals should tailor care given to women at increased risk for hereditary breast cancer by using the current information related to their emotional and medical needs. Decisions regarding genetic testing, genetic counseling, and breast cancer risk assessment are highly individualized. Thus, healthcare professionals should be cautious regarding any generalizations about women at risk for breast cancer. abstract_id: PUBMED:23475556 Is the psychological impact of genetic testing moderated by support and sharing of test results to family and friends? Receiving the results of genetic tests for a breast and ovarian cancer susceptibility can be a stressful experience. Here we studied the effects of social support (SS) and the sharing of test results on the psychological impact of BRCA1/2 test result disclosure. We also compared carriers and non-carriers on sharing, SS and psychological impact. Five-hundred and twenty-two unaffected women were followed prospectively for 2 years after receiving their test results. Psychological impact was measured on the impact of event scale. Multivariate multi-level models were used, and all the analyses were stratified depending on mutation status (carriers vs non-carriers). 
Two weeks after receiving their BRCA1/2 results, carriers had shared their test results less frequently than non-carriers (p < 0.01). Sharing test results was not significantly associated with psychological impact. Availability of SS was significantly associated with better psychological adjustment across time among carriers (p < 0.01), but not among non-carriers. For female BRCA1/2 mutation carriers, the importance of SS should be stressed, and possible ways of enlisting people in their entourage for this purpose should be discussed in the context of clinical encounters. abstract_id: PUBMED:21382771 Cognitive, emotional and behavioral impact of an uncertain outcome after study of BRCA1/2: review of the literature Background: Recent advances in oncogenetics have enabled the development of tests for predisposition to breast and ovarian cancers. Where no mutation has been identified in the BRCA1 or 2 genes, the proband (first person tested in a family with a genetic risk) can receive an uncertain outcome: negative inconclusive or identification of a variant of unknown clinical significance. From the demonstration of such outcomes, their psychological impact has been studied among women concerned. Objective: The purpose of this article is to summarize the results of studies about the impact of delivering an uncertain BRCA1/2 genetic result on emotional (general or cancer specific distress), cognitive (perception of risk) and behavioral (decisions of medical care) reactions of consultants. It is also to identify factors particularly associated with personal or familial medical history that may alter this impact. Methodology: A literature review was conducted from a key word search on the databases PsycINFO and PubMed (breast*, BRCA*, genetic*, familial, mutation, heredit*) crossed with terms related to the psychological impact and mutation status.
Included papers are quantitative studies focused on the psychological impact of the uncertain genetic test result, compared to the impact resulting from a positive or true negative result, or from test refusal. Results: The results of the eight selected articles generally suggest less emotional distress and a lower perceived risk of predisposition or of developing cancer after an uncertain genetic result compared to a positive result. Intentions of breast cancer surveillance are optimal, indicating the absence of "false reassurance", while demand for prophylactic surgery appears to be less frequent. Nevertheless, risk factors for inappropriate psychological reactions can be identified, such as pretest clinical distress, a personal cancer history, or a multiple family history of cancer. Conclusion: Current data suggest psychological reactions adapted to the clinical significance of uncertain genetic test results. These findings are preliminary given the small number of studies and their restriction to populations with similar sociocultural characteristics. abstract_id: PUBMED:34046315 Uncertainty following an inconclusive result from the BRCA1/2 genetic test: A review about psychological outcomes. Background: An inconclusive result from BRCA1/2 genetic testing indicates that a genetic variant of uncertain significance is detected. This case constitutes the majority of genetic test results, but studies specifically addressing the psychological adjustment of people with inconclusive results are scarce. Aim: To examine psychological outcomes of receiving an uninformative BRCA1/2 test result. Methods: PubMed, PsychInfo, and Cochrane Central Register of Controlled Trials were screened for studies focusing on distress, anxiety, and depression levels in individuals with inconclusive genetic test results. This review is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses method.
Results: Studies on psychological outcomes of inconclusive BRCA1/2 focused on general and specific distress, anxiety, and depression. Overall, they produced mixed results. These inconsistent findings are probably due to the uncertainty caused by this type of result, that may also influence the decisions of individuals about surveillance and prophylactic options, reducing their compliance. In addition, this review highlights specific risk and protective factors that affect psychological adjustment in individuals with an inconclusive genetic testing result. Conclusion: Individuals with inconclusive genetic test results need specific educational programs and support to better understand the meaning of their results in order to be able to make decisions about surveillance and prophylactic options. abstract_id: PUBMED:17917142 Psychosocial and behavioral impact of genetic counseling and testing. Over a decade has passed since the clinical availability of BRCA1/2 mutation testing for Hereditary Breast and Ovarian Cancer (HBOC). The purpose of this article is to review key areas of psychosocial and behavioral research related to genetic counseling and testing for BRCA1/2 mutations. Special attention will be given to understudied issues within each of these key areas. Where appropriate, the article will also highlight the clinical and research experiences of the authors. The first area that will be reviewed is the impact of genetic testing on psychological well-being. This will be followed by a brief discussion of a practical assessment strategy for psychosocial distress in clinical settings. Next, published data on the uptake of risk management options based on genetic testing results as well as the psychosocial impact of these behaviors will be reviewed. Thirdly, research focused on understanding the decision making at various points in the genetic counseling and testing process will also be examined. 
Finally, the available research on genetic counseling and testing in minority communities will be presented. By recognizing and addressing the psychosocial and behavioral issues faced by patients undergoing BRCA1/2 genetic counseling and testing, researchers and providers have the potential to maximize opportunities for prevention, early detection, and healthy coping. abstract_id: PUBMED:22271377 Psychological distress with direct-to-consumer genetic testing: a case report of an unexpected BRCA positive test result. We report a case of a client who discovered she had a BRCA mutation following direct-to-consumer (DTC) genetic testing in the absence of genetic counseling. After testing she presented for genetic counseling with anxiety, distress, and a deficit of knowledge about what the DTC genetic testing revealed. Genetic counseling helped alleviate distress while empowering the client to apply the results of testing to improve medical management. Despite recent studies demonstrating no negative psychological impact of DTC genetic testing on the consumer, this case illustrates that significant psychological distress and confusion can occur as a result of DTC genetic testing for highly penetrant single gene disorders. Pre- and post-test genetic counseling in conjunction with DTC genetic testing may alleviate consumers' distress and empower clients to proactively utilize their result information. Answer: The trajectory of psychological impact in BRCA1/2 genetic testing seems to indicate that time can indeed mitigate some of the initial distress experienced by individuals who test positive for the mutations. A study examining women at risk for breast and ovarian cancer prior to undergoing genetic testing through 1 year after BRCA1/2 result receipt found that mutation carriers endorsed significantly more depressive symptoms, negative mood, and cancer-specific distress relative to non-mutation carriers at 1 and 6 months after test result receipt. 
However, by 12 months, negative and positive mood returned to baseline levels for mutation carriers, and depressive symptoms approached baseline. The groups differed significantly only on cancer-specific distress at 12 months, due to declining distress in non-carriers (PUBMED:18787910). This pattern suggests that while genetic testing can prompt an increase in general and cancer-specific distress for BRCA1/2+ women, this distress tends to remit by 1 year after result receipt. It is important to note that the psychological impact of genetic testing can vary depending on the individual and the type of familial cancer involved (PUBMED:15937976). Moreover, the availability of social support has been associated with better psychological adjustment over time among carriers (PUBMED:23475556). While the psychological impact of receiving an uncertain or inconclusive result from BRCA1/2 genetic testing can also be significant, leading to mixed outcomes in terms of distress, anxiety, and depression, individuals with inconclusive results may benefit from specific educational programs and support to better understand their results and make informed decisions about surveillance and prophylactic options (PUBMED:34046315). Overall, the literature suggests that while the initial psychological impact of BRCA1/2 genetic testing can be significant, particularly for mutation carriers, there is a general trend towards psychological adjustment and a return to baseline levels of mood and distress over time. However, the presence of social support and the provision of appropriate counseling and educational resources are crucial in facilitating this adjustment process.
Instruction: Parturition lines in modern human wisdom tooth roots: do they exist, can they be characterized and are they useful for retrospective determination of age at first reproduction and/or inter-birth intervals? Abstracts: abstract_id: PUBMED:24932749 Parturition lines in modern human wisdom tooth roots: do they exist, can they be characterized and are they useful for retrospective determination of age at first reproduction and/or inter-birth intervals? Background: Parturition lines have been described in the teeth of a number of animals, including primates, but never in modern humans. These accentuated lines in dentine are comprised of characteristic dark and light component zones. Aim: The aim of this study was to review the physiology underlying these lines and to ask if parturition lines exist in the third molar tooth roots of mothers known to have had one or more children during their teenage years. Methods: Brief retrospective oral medical obstetric histories were taken from four mothers and compared with histological estimates for the timing of accentuated markings visible in longitudinal ground sections of their wisdom teeth. Results: Evidence of accentuated markings in M3 root dentine matched the age of the mother at the time their first child was born reasonably well. However, the dates calculated for inter-birth intervals did not match well. Conclusions: Parturition lines corresponding to childbirth during the teenage years can exist in human M3 roots, but may not always do so. Without a written medical history it would not be possible to say with confidence that an accentuated line in M3 root dentine was caused by stress, illness or was a parturition line. abstract_id: PUBMED:28395550 The impact of inter-pregnancy interval on subsequent risk of preterm birth. Objective: The aim of the study was to assess optimal time to conceive after previous delivery associated with smallest risk of preterm birth. 
Methods: We selected all women (n = 2723) with their first and second singleton delivery between the years 2004 and 2012. Inter-pregnancy interval was defined as that between live birth and subsequent conception. We performed logistic regression analyses to assess the risk of preterm birth adjusted for maternal age and body mass index. Results: Association between inter-pregnancy interval and the natural logarithm of the adjusted relative risk of preterm birth had a J-shaped curve with lowest risk at 15 months after last birth. Conclusion: The optimal time to conceive after a previous delivery is 15 months, as longer or shorter intervals are associated with increased risk of preterm birth. Women with short or long inter-pregnancy intervals were 1.6 times more likely to experience preterm birth. abstract_id: PUBMED:36966813 Association between short inter-pregnancy interval and placenta previa and placenta accreta spectrum with respect to maternal age at first cesarean delivery. Objective: To explore the association between inter-pregnancy intervals and placenta previa and placenta accreta spectrum among women who had prior cesarean deliveries with respect to maternal age at first cesarean delivery. Methods: This retrospective study included clinical data from 9981 singleton pregnant women with a history of cesarean delivery at 11 public tertiary hospitals in seven provinces of China between January 2017 and December 2017. The study population was divided into four groups (<2, 2-5, 5-10, ≥10 years of the interval) according to the inter-pregnancy interval. The rate of placenta previa and placenta accreta spectrum among the four groups was compared, and multivariate logistic regression was used to analyze the relationship between inter-pregnancy interval and placenta previa and placenta accreta spectrum with respect to maternal age at first cesarean delivery.
Results: Compared to women aged 30-34 years old at first cesarean delivery, the risk of placenta previa (aRR, 1.48; 95% CI, 1.16-1.88) and placenta accreta spectrum (aRR, 1.74; 95% CI, 1.28-2.35) were higher among women aged 18-24. Multivariate regression results showed that women aged 18-24 with <2-year intervals exhibited a 5.05-fold increased risk for placenta previa compared with those with 2-5-year intervals (aRR, 5.05; 95% CI, 1.13-22.51). In addition, women aged 18-24 with less than 2-year intervals had an 8.44 times greater risk of developing PAS than women aged 30-34 with 2- to 5-year intervals (aRR, 8.44; 95% CI, 1.82-39.26). Conclusions: The findings of this study suggested that short inter-pregnancy intervals were associated with increased risks for placenta previa and placenta accreta spectrum for women under 25 years at first cesarean delivery, which may be partly attributed to obstetrical outcomes. abstract_id: PUBMED:27975042 Predictive values derived from lower wisdom teeth developmental stages on orthopantomograms to calculate the chronological age in adolescence and young adults as a prerequisite to obtain age-adjusted informed patient consent prior to elective surgical procedures in young patients with incomplete or mismatched personal data. Introduction: Surgical procedures require informed patient consent, which is mandatory prior to any procedure. These requirements apply in particular to elective surgical procedures. The communication with the patient about the procedure has to be comprehensive and based on mutual understanding. Furthermore, the informed consent has to take into account whether a patient is of legal age. As a result of large-scale migration, there may be patients planned for medical procedures whose chronological age can't be assessed reliably by physical inspection alone.
Age determination based on assessing wisdom tooth development stages can be used to help determine whether individuals involved in medical procedures are of legal age, i.e., responsible and accountable. At present, the assessment of wisdom tooth developmental stages barely allows a crude estimate of an individual's age. This study explores possibilities for more precise predictions of the age of individuals with emphasis on the legal age threshold of 18 years. Material and Methods: 1,900 dental orthopantomograms (female 938, male 962, age: 15-24 years), taken between the years 2000 and 2013 for diagnosis and treatment of diseases of the jaws, were evaluated. 1,895 orthopantomograms (female 935, male 960) of 1,804 patients (female 872, male 932) met the inclusion criteria. The archives of the Department of Diagnostic Radiology in Dentistry, University Medical Center Hamburg-Eppendorf, and of an oral and maxillofacial office in Rostock, Germany, were used to collect a sufficient number of radiographs. An effort was made to achieve almost equal distribution of age categories in this study group; 'age' was given on a particular day. The radiological criteria of lower third molar investigation were: presence and extension of periodontal space, alveolar bone loss, emergence of tooth, and stage of tooth mineralization (according to Demirjian). Univariate and multivariate general linear models were calculated. Using hierarchical multivariate analyses, a formula was derived quantifying the development of the four parameters of wisdom tooth over time. This model took repeated measurements of the same persons into account and is only applicable when a person is assessed a second time. The second approach investigates a linear regression model in order to predict age. In a third approach, a classification and regression tree (CART) was developed to derive cut-off values for the four parameters, resulting in a classification with estimates for sensitivity and specificity.
Results: No statistically significant differences were found between parameters related to wisdom tooth localization (right or left side). In univariate analyses, being of legal age was associated with consecutive stages of wisdom tooth development, the obliteration of the periodontal space, and tooth emergence, as well as with alveolar bone loss; no association was found with tooth mineralization. Multivariate models without repeated measurements revealed imprecise estimates because of the unknown individual-related variability. The precision of these models is thus not very good, although it improves with advancing age. In the CART analysis, a receiver operating characteristic area under the curve (AUC) of 78% was achieved; when maximizing both specificity and sensitivity, a Youden's index of 47% was achieved (with 73% specificity and 74% sensitivity). Discussion: This study provides a basis to help determine whether a person is 18 years or older in individuals who are assumed to be between 15 and 24 years old. From repeated measurements, we found a linear effect of age on the four parameters in the individuals. However, this information cannot be used for prognosis because of the large intra-individual variability. Thus, although the development of the four parameters can be estimated over time, a direct conclusion with regard to age cannot be drawn from the parameters without prior biographic information about a person. While a single parameter is of limited value for calculating the target age of 18 years, combining several findings that can be determined on a standard radiograph may be a more reliable diagnostic tool for estimating the target age in both sexes. However, a high degree of precision cannot be achieved. The reason for the persistent uncertainty lies in the wide chronological range of wisdom tooth development, which stretches from well below to above the 18th year of life. The regression approach thus does not seem optimal.
Although the sensitivity and specificity of the CART model are moderately high, this model is still not reliable as a diagnostic tool. Our findings could have an impact, e.g., on elective surgeries for young individuals with unknown biography. However, these results cannot replace social engagement, in particular thorough physical examination of patients and careful registration of their histories. Further studies on the use of this calculation method in different ethnic groups would be desirable. abstract_id: PUBMED:37073929 A Retrospective Analysis of Postoperative Abscess Formation Following Wisdom Tooth Removal and Their Clinical Condition and Localization. Aim: As a common procedure in oral surgery, the removal of wisdom teeth (3M) is associated with a variety of postoperative complications. This study reports on deep tissue abscesses after the removal of 3M in correlation with several factors. Materials And Methods: Patients between 2012 and 2017 with removed 3M were retrospectively evaluated in terms of clinical condition and localization and thus assigned to group A (removal of asymptomatic 3M) or group B (removal of symptomatic 3M). Moreover, they were analyzed in terms of abscesses after the removal and correlation with various parameters: localization of the abscess, general diseases, perioperative antibiotic treatment, number of days from removal of the tooth to abscess formation, and postoperative complications after primary abscess incision. Results: A total of 82 patients (male n = 44, female n = 38) were included, accounting for 88 removed wisdom teeth with postoperative abscesses. Postoperative abscesses occurred more frequently in group B (n = 53), with n = 29 in IIB localization, without a significant correlation. Patients in this group were older, and more surgical abscess incisions were needed despite longer treatment with oral and intravenous antibiotics, which correlated with neurologic diseases and age. Younger patients reported significantly more pain.
Conclusions: Detection of potential 3M pathologies at an early and asymptomatic stage is essential to avoid postoperative complications following 3M removal. Additional prospective studies are necessary to develop corresponding guidelines. Clinical Significance: Wisdom tooth extraction is the most common operation in oral surgery, and therefore, adequate risk evaluation is still required. abstract_id: PUBMED:23237623 Levels and correlates of non-adherence to WHO recommended inter-birth intervals in Rufiji, Tanzania. Background: Poorly spaced pregnancies have been documented worldwide to result in adverse maternal and child health outcomes. The World Health Organization (WHO) recommends a minimum inter-birth interval of 33 months between two consecutive live births in order to reduce the risk of adverse maternal and child health outcomes. However, birth spacing practices in many developing countries, including Tanzania, remain scantly addressed. Methods: Longitudinal data collected in the Rufiji Health and Demographic Surveillance System (HDSS) from January 1999 to December 2010 were analyzed to investigate birth spacing practices among women of childbearing age. The outcome variable, non-adherence to the minimum inter-birth interval, constituted all inter-birth intervals <33 months long. Inter-birth intervals ≥33 months long were considered to be adherent to the recommendation. Chi-Square was used as a test of association between non-adherence and each of the explanatory variables. Factors affecting non-adherence were identified using a multilevel logistic model. Data analysis was conducted using STATA (11) statistical software. Results: A total of 15,373 inter-birth intervals were recorded from 8,980 women aged 15-49 years in Rufiji district over the follow-up period of 11 years. The median inter-birth interval was 33.4 months. Of the 15,373 inter-birth intervals, 48.4% were below the WHO recommended minimum length of 33 months between two live births.
Non-adherence was associated with younger maternal age, low maternal education, multiple births from the preceding pregnancy, non-health facility delivery of the preceding birth, being an in-migrant resident, multi-parity and being married. Conclusion: Generally, one in every two inter-birth intervals among 15-49 year-old women in Rufiji district is poorly spaced, with significant variations by socio-demographic and behavioral characteristics of mothers and newborns. Maternal, newborn and child health services should be improved with a special emphasis on community- and health facility-based optimum birth spacing education in order to enhance health outcomes of mothers and their babies, especially in rural settings. abstract_id: PUBMED:36626760 Distinct microbiome profiles in convergent wisdom tooth impactions. Aims: Long-term retention of impacted third molars (wisdom teeth) is associated with plaque stagnation and the development of caries on the adjacent surface of the neighboring second molar. While caries and tooth loss are common outcomes of impaction, there is currently insufficient evidence to support the pre-emptive removal of asymptomatic wisdom teeth. Emerging evidence suggests that convergently growing impactions are associated with caries. We have therefore investigated the composition of dental plaque on the distal surface of the mandibular second molar at various impaction angles. Methods And Results: We have compared the microbiome of these surfaces at four impaction angulations using short-read sequencing of the bacterial 16S rRNA gene: two convergent (horizontal and mesial) and two divergent (distal and vertical) angulations, and in cases where the wisdom tooth is missing. Horizontal angulations exhibited lower microbial diversity than mesial impactions. Amplicon Sequence Variants (ASVs) associated with Veillonella were significantly more abundant at impactions with angulations toward the midline. 
Using machine learning, a random forest classifier trained to distinguish microbiome profiles was used to predict the native angulations for a subset of samples, with samples from the two convergent impactions estimated with the greatest accuracy. Conclusions: Differences in microbial diversity were apparent between caries-associated convergent (horizontal and mesial) impacted wisdom teeth, as well as greater abundances of Veillonella ASVs at horizontal impactions. abstract_id: PUBMED:36981527 Risk Factors for Delayed-Onset Infection after Mandibular Wisdom Tooth Extractions. Wisdom tooth extraction is one of the most commonly performed procedures by oral maxillofacial surgeons. Delayed-onset infection (DOI) is a rare complication of wisdom tooth extraction, and it occurs ~1-4 weeks after the extraction. In the present study, risk factors for DOI were investigated by retrospectively analyzing the cases of 1400 mandibular wisdom tooth extractions performed at Kagawa University Hospital from April 2015 to June 2022. Inclusion criteria were patients aged >15 years with a wisdom tooth extraction per our procedure. The exclusion criteria were patients with insufficient medical records, a >30-mm lesion around the wisdom tooth shown via X-ray, coronectomy, radiotherapy treatment of the mandible, the lack of panoramic images, and lesions other than a follicular cyst. The DOI incidence was 1.1% (16 cases), and univariate analyses revealed that the development of DOI was significantly associated with the Winter classification (p = 0.003), position (p = 0.003), hypertension (p = 0.011), and hemostatic agent use (p = 0.004). A multivariate logistic regression analysis demonstrated that position (OR for B vs. A = 7.75; p = 0.0163), hypertension (OR = 7.60, p = 0.013), and hemostatic agent use (OR = 6.87, p = 0.0022) were significantly associated with DOI development.
Hypertension, hemostatic use, and position were found to be key factors for DOI; long-term observation may thus be necessary for patients with these risk factors. abstract_id: PUBMED:20713288 Relationship between birth weight and time of first deciduous tooth eruption in 143 consecutively born infants. Background: Deciduous teeth play an important role in the proper alignment, spacing and occlusion of permanent teeth. The calcification of deciduous teeth begins during the fourth prenatal month, and calcification of all deciduous teeth begins by the end of the sixth prenatal month. The eruption date varies and is genetically influenced. Delayed eruption of deciduous teeth, especially the first teeth, causes nutritional problems for the infants. It also results in parental concerns. In this study, we compared the timing of eruption of the first deciduous teeth in infants in relation to their birth weight. Methods: A total of 143 infants born at Shariati Hospital in Tehran from December 2004 to December 2005 were included in the study. Data on sex, birth weight, gestational age, and time of first tooth eruption were collected. Results: The mean birth weight was 3220 ± 420 g, with 5.5% of infants weighing less than 2500 g and 19.9% weighing more than 3500 g. Patients were monitored weekly from the third month of age until the time of first tooth eruption. The mean age of first tooth eruption was 7.68 ± 1.84 months. Conclusion: There was a negative linear correlation between the time of first deciduous tooth eruption and birth weight, suggesting that delayed tooth eruption may be related to lower birth weight. abstract_id: PUBMED:14767777 Forensic age estimation in living subjects: the ethnic factor in wisdom tooth mineralization. Radiological assessment of the mineralization stage of third molars is a major criterion for age estimation of living people involved in criminal proceedings.
To date, insufficient knowledge has been obtained about how ethnic origin can influence tooth mineralization. A comparative study of wisdom tooth mineralization was carried out on three population samples: one German, one Japanese and one South African. To this end, 3,652 conventional orthopantomograms were evaluated on the basis of Demirjian's stages. The Japanese subjects were on average 1-2 years older than their German counterparts upon reaching stages D-F, whereas the South African subjects were on average 1-2 years younger than the Germans when displaying stages D-G. To enhance the accuracy of forensic age estimates based on wisdom tooth mineralization, we recommend the use of population-specific standards. Answer: Parturition lines, also known as accentuated lines in dentine, have been described in the teeth of various animals, including primates, but their existence in modern humans, particularly in wisdom tooth roots, has been a subject of investigation. A study aimed to determine if parturition lines exist in the third molar tooth roots of mothers who had children during their teenage years. The study found evidence of accentuated markings in the M3 root dentine that reasonably well matched the mother's age at the time her first child was born. However, the dates calculated for inter-birth intervals did not match well. The study concluded that while parturition lines corresponding to childbirth during teenage years can exist in human M3 roots, they may not always be present. Without a written medical history, it would not be possible to confidently assert that an accentuated line in M3 root dentine was caused by stress, illness, or was a parturition line (PUBMED:24932749). Therefore, while parturition lines in wisdom tooth roots may exist and can be characterized to some extent, they are not reliable for retrospective determination of age at first reproduction or inter-birth intervals without corroborating medical history.
The study suggests that these lines are not consistently present or accurately indicative of specific reproductive events, limiting their usefulness in such retrospective analyses.
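The CART abstract in this item reports a Youden's index of 47% at an operating point of 73% specificity and 74% sensitivity. As a minimal illustration of how that figure follows directly from the reported operating point (the function name is ours, not the study's):

```python
def youden_j(sensitivity: float, specificity: float) -> float:
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

# Operating point reported for the CART cut-off analysis:
# 74% sensitivity and 73% specificity.
j = youden_j(sensitivity=0.74, specificity=0.73)
print(f"{j:.2f}")  # 0.47, i.e., the reported Youden's index of 47%
```

The statistic ranges from 0 (no diagnostic value) to 1 (perfect discrimination), which is why a value of 0.47 is described in the abstract as only moderately useful.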
Instruction: Is there a difference between active opening of the Eustachian tube in a hypobaric surrounding compared to a hyperbaric surrounding? Abstracts: abstract_id: PUBMED:26742258 Is there a difference between active opening of the Eustachian tube in a hypobaric surrounding compared to a hyperbaric surrounding? Introduction: The Eustachian tube (ET) is the key to pressure equalization between the middle ear and ambient pressure. To date, little is known about differences of the opening mechanisms under hyper- or hypobaric conditions. The aim of this study was to compare standard ET opening parameters during standardized hypo- and hyperbaric exposures. Methods: Thirty healthy participants were exposed to a standardized profile of decompression and compression (SPDC) in a hypo-/hyperbaric pressure chamber. Impedance, expressed as tympanic membrane compliance, was recorded at intervals during the excursions from 1 atmosphere absolute (atm abs) to 0.8 and 1.2 atm abs, respectively. Parameters for tubal opening were obtained during SPDC: ET opening pressure (ETOP), ET opening duration (ETOD) and ET opening frequency (ETOF); hypobaric (Phase 1) and hyperbaric (Phase 2) data were compared. Results: Mean value for Valsalva maneuver ETOP was 40.10 ± 19.02 mbar in Phase 2 vs. 42.82 ± 21.75 mbar in Phase 1. For ETOD it was 2.80 ± 2.09 seconds in Phase 2 vs. 2.51 ± 1.90 seconds in Phase 1. For swallowing, mean value for ETOP was 33.47 ± 14.50 mbar in Phase 2 vs. 28.44 ± 14.04 in Phase 1. ETOD was 0.82 ± 0.60 seconds in Phase 2 vs. 0.76 ± 0.55 seconds in Phase 1. There were no statistically significant differences in ETOP, ETOD and ETOF between the two phases. Conclusion: No statistically significant difference was evident for active pressure equalization (Valsalva and swallowing) between a hyperbaric setting (dive) and a hypobaric setting (flight) in healthy subjects.
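The abstract above compares mean opening parameters between the hypobaric and hyperbaric phases without naming the statistical test used. A minimal sketch of one common choice for such within-subject comparisons, the paired t statistic computed on per-participant differences; the data and helper function here are illustrative assumptions, not values from the study:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(phase1, phase2):
    """t statistic for paired samples, computed on per-subject differences."""
    diffs = [b - a for a, b in zip(phase1, phase2)]
    n = len(diffs)
    # mean difference divided by its standard error
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical per-participant ETOP readings (mbar); the study itself
# measured 30 participants and found no significant phase difference.
hypobaric = [42.0, 55.1, 38.4, 61.2, 30.0]
hyperbaric = [40.3, 53.8, 39.9, 58.7, 31.5]
t = paired_t_statistic(hypobaric, hyperbaric)
print(f"{t:.3f}")
```

A t statistic this close to zero, compared against the t distribution with n − 1 degrees of freedom, would be consistent with the abstract's finding of no significant phase difference.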
abstract_id: PUBMED:25226372 Characterizing the active opening of the eustachian tube in a hypobaric/hyperbaric pressure chamber. Objective: Active and passive opening of the Eustachian tube (ET) enables direct aeration of the middle ear and a pressure balance between middle ear and the ambient pressure. The aim of this study was to characterize standard values for the opening pressure (ETOP), the opening frequency (ETOF), and the opening duration (ETOD) for active tubal openings (Valsalva maneuver, swallowing) in healthy participants. Design/participants: In a hypobaric/hyperbaric pressure chamber, 30 healthy participants (19 women, 11 men; mean age, 25.57 ± 3.33 years) were exposed to a standardized profile of compression and decompression. The pressure values were recorded via continuous impedance measurement during the Valsalva maneuver and swallowing. Based on the data, standard curves were identified and the ETOP, ETOD, and ETOF were determined. Results: Recurring patterns of the pressure curve during active tube opening for the Valsalva maneuver and for active swallowing were characterized. The mean value for the Valsalva maneuver for ETOP was 41.21 ± 17.38 mbar; for the ETOD, it was 2.65 ± 1.87 seconds. In the active pressure compensation by swallowing, the mean value for the ETOP was 29.91 ± 13.07 mbar; and for the ETOD, it was 0.82 ± 0.53 seconds. Conclusion: Standard values for the opening pressure of the tube and the tube opening duration for active tubal openings (Valsalva maneuver, swallowing) were described, and typical curve gradients for healthy subjects could be shown. This is another step toward analyzing the function of the tube in compression and decompression. 
abstract_id: PUBMED:23824504 Characterizing the passive opening of the eustachian tube in a hypo-/hyperbaric pressure chamber Background: Besides voluntary and involuntary active pressure equalization mechanisms, there is a passive equalization mechanism via the Eustachian tube (ET) when a pressure difference exists between the epipharyngeal space and the middle ear. The aim of this study was to characterize this passive equalization mechanism in a hypobaric/hyperbaric pressure chamber by continuously measuring tympanic impedance. In contrast to other studies, which measured only in a hypobaric pressure chamber, this setting makes it possible to include participants with Eustachian tube dysfunction (ETD). Material And Methods: Following a fixed pressure profile, 39 participants were exposed to phases of compression and decompression. Continuous measurement of tympanic impedance in the pressure chamber allowed determination of the Eustachian tube opening pressure (ETOP), Eustachian tube closing pressure (ETCP) and Eustachian tube opening duration (ETOD). In addition, it was possible to characterize the pressure gradient during decompression while the ET was open. Results: Beyond the arithmetic means of ETOP (30.2 ± 15.1 mbar), ETCP (9.1 ± 7.7 mbar) and ETOD (0.65 ± 0.38 s), recurring patterns of pressure progression during the phase of tube opening were evident. In general, it is possible to differentiate between complete and partial opening. Conclusion: This fundamental characterization of passive tube opening, including measurement of ETOP, ETCP and ETOD, is a first step toward understanding the physiological and pathophysiological function of the ET. abstract_id: PUBMED:8916865 Eustachian tube function and middle ear barotrauma associated with extremes in atmospheric pressure.
Eustachian tube (ET) function was studied by means of sonotubometry and tubotympano-aerodynamography (TTAG) prior to and following exposure to hypobaric or hyperbaric conditions. Forty normal adults were subjected to hypobaric pressure. Fifty adults who underwent hyperbaric oxygen (HBO) therapy also were studied. Following hypobaric exposure, 14 of 80 ears (17.5%) exhibited middle ear barotrauma. Following hyperbaric exposure, 34 of 100 ears (34%) exhibited middle ear barotrauma. Dysfunction of the ET, characterized by altered active and passive opening capacity, was more prevalent following exposure to extremes in atmospheric pressure compared to baseline. The ET function, which was impaired after the first HBO treatment, improved gradually over the next 2 hours. Overall, however, ET function was worse after the seventh treatment. The patients who developed barotrauma exhibited worse ET function prior to hypobaric or hyperbaric exposure. Thus, abnormal ET function can be used to predict middle ear barotrauma prior to exposure to hypobaric or hyperbaric atmospheric pressure. abstract_id: PUBMED:18257280 Study of morphological alterations in pharyngeal ostium of Eustachian tube and its surrounding tissue and its association with SOM Objective: To observe the morphological alterations in pharyngeal ostium of Eustachian tube and its surrounding tissues and its correlation with secretory otitis media. Method: 72 volunteers (144 ears) served as the control group and 89 patients (151 ears) as the SOM group; all were examined for the shape of the pharyngeal opening of the auditory tube and the morphological changes of its surrounding structures. Result: A statistically significant difference was found between the shape of the pharyngeal ostium of the Eustachian tube of adult patients with SOM and that of the control group, but for children, there was no difference. This indicated that there is a difference between adults and children in the morphological changes of the pharyngeal ostium of the Eustachian tube when suffering from SOM.
Conclusion: Morphological changes of the pharyngeal ostium of the Eustachian tube play an important role in the genesis and development of adult SOM. In pediatric patients, the primary change is inflammation of the nasopharynx, followed by hypertrophy of the adenoid and tubal tonsil. Nasopharyngeal fibroscopy is a very useful tool in the diagnosis and treatment of SOM. abstract_id: PUBMED:28374944 Reliability of Eustachian tube function measurements in a hypobaric and hyperbaric pressure chamber. Objectives: Measurement of the Eustachian tube (ET) function is a challenge. The demand for a precise and meaningful diagnostic tool increases, especially because more and more operative therapies are being offered without objective evidence. The measurement of the ET function by continuous impedance recording in a pressure chamber is an established method, although the reliability of the measurements is still unclear. Methods: Twenty-five participants (50 ears) were exposed to phases of compression and decompression in a hypo- and hyperbaric pressure chamber. The parameters reflecting ET function, namely ET opening pressure (ETOP), ET opening duration (ETOD) and ET opening frequency (ETOF), were determined under exactly the same preconditions three times in a row. The intraclass correlation coefficient (ICC) and Bland and Altman plot were used to assess test-retest reliability. Results: ICCs revealed a high correlation for ETOP and ETOF in phases of decompression (passive equalisation) as well as ETOD and ETOP in phases of compression (actively induced equalisation). Very high correlation could be shown for ETOD in decompression and ETOF in compression phases. The Bland and Altman graphs showed that measurements provide results within a 95 % confidence interval in compression and decompression phases. Conclusions: We conclude that measurements in a pressure chamber are a very valuable tool in terms of estimating the ET opening and closing function.
Measurements show some variance across participants, but provide reliable results within a 95 % confidence interval on retest. This study is the basis for enabling efficacy measurements of ET treatment modalities. abstract_id: PUBMED:34095963 Prospective study on the Eustachian tube function during Frenzel maneuver in a hypobaric/hyperbaric pressure chamber. Introduction: The Frenzel maneuver describes a technique for middle ear equalizing which is frequently used by apnea divers. It offers advantages compared to the most commonly used techniques such as the Valsalva or Toynbee maneuver. To date, there is insufficient literature about the pressure dynamics and Eustachian tube (ET) function during the Frenzel maneuver. The aim of the present study was to characterize the ET function during the Frenzel maneuver.
Materials And Methods: By means of an established standardized profile of compression and decompression in a hypo/hyperbaric pressure chamber, we examined different parameters such as the ET opening pressure (ETOP), ET opening duration (ETOD), and ET opening frequency (ETOF) in 11 experienced apnea divers and compared them to the parameters during the Valsalva and Toynbee maneuver. Results: Standard values for ETOP, ETOD, and ETOF could be established for the Frenzel maneuver under standardized conditions in a hypo/hyperbaric pressure chamber. Compared to the Frenzel maneuver, ETOP was higher and ETOD longer (both p < 0.001) during the Valsalva maneuver whereas ETOP was lower and ETOD shorter (both p < 0.001) during the Toynbee maneuver. No difference regarding ETOF was observed between the Frenzel, Valsalva, and Toynbee maneuver. Discussion: The Frenzel maneuver was shown to be at least as effective as the Valsalva maneuver concerning ET opening. We believe that knowledge of the Frenzel technique might facilitate the pressure equalization during diving and recommend implementation of an appropriate equalization training in apnea and scuba diving education. abstract_id: PUBMED:30570609 Anatomy of the Surrounding Tissue of the Eustachian Tube in Patulous Eustachian Tube: 3 Tesla Magnetic Resonance Imaging Approach. Objectives: To clarify the possible pathological conditions of cartilaginous portion of the Eustachian tube (ET) and its surrounding tissues. Study Design: Retrospective review. Patients: Twenty-five normal subjects (control group) and 25 patients with patulous ET (patulous group). Methods: We reconstructed cross-sectional 3 Tesla magnetic resonance (MR) images using the multiplanar reconstruction (MPR) technique.
Cross-section area widths that were measured included the following: 1) inferior head of lateral pterygoid muscle (PM); 2) pterygoid venous plexus (PVP); 3) medial soft tissues of ET, including ET cartilage; and 4) maximum cross-sectional area of high intensity surrounding the medial ET lumen. Results: A statistically significant difference was found in the thickness of the inferior head of the lateral PM between male and female participants; it was thicker in the male participants. Ostmann's fatty tissue (OF) was significantly smaller in the patulous ET group. Conclusion: Smaller size of the surrounding tissues of the ET, viz., the OF, may contribute to the pathogenesis of patulous ET. abstract_id: PUBMED:27435594 Eustachian tube evaluation in aviators. The aim of the study was to evaluate Eustachian tube function in a group of aviators in order to identify subclinical dysfunction and the related risk of ear fullness or barotitis. A hypobaric chamber makes it possible to simulate the pressure variations of flight. This prospective study enrolled 42 aviation pilots, members of the Italian Air Force, in whom Eustachian tube function was evaluated by tubomanometry, subjective assessment of the feasibility of Valsalva's and Toynbee's maneuvers and related clinical symptoms, tympanometry, and objective Valsalva testing before and after exposure to the hypobaric chamber. The new Eustachian tube score (ETS-7) was also calculated for each pilot before and after chamber exposure. Before chamber exposure, an ETS-7 of eight or more was found bilaterally in 92.8 % of pilots. In three (7.2 %) cases, a unilateral ETS-7 ≤ 7 was found, and two of those had a positive history. After the hypobaric chamber session, only 19 % of pilots showed an ETS-7 score ≤ 7, in particular three bilateral and five unilateral cases.
Three of those pilots were clinically positive: two airmen reported persistent fullness, while the other had barotitis. Therefore, the combined use of TMM and ETS-7 before and after hypobaric chamber exposure appears to be a reliable method for assessing the functional capacity of the Eustachian tube in aviators and the stressful effect of flight on it, and for excluding subjects at increased risk of ear pain, fullness, or barotrauma. Answer: Based on the provided abstracts, there is no statistically significant difference in the active opening of the Eustachian tube (ET) between hypobaric and hyperbaric surroundings. A study involving 30 healthy participants exposed to standardized hypo- and hyperbaric pressures found no significant difference in Eustachian tube opening pressure (ETOP), Eustachian tube opening duration (ETOD), and Eustachian tube opening frequency (ETOF) between the two phases (PUBMED:26742258). This suggests that the mechanisms for active pressure equalization, such as the Valsalva and swallowing maneuvers, function similarly in both hypobaric (flight) and hyperbaric (dive) settings in healthy subjects.
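One abstract in this item (PUBMED:28374944) assesses test-retest reliability of three repeated ET measurements with the intraclass correlation coefficient, without stating which ICC form was used. As a minimal sketch, assuming the one-way random-effects form ICC(1,1) and hypothetical triplicate ETOP readings (both the data and the function are illustrative, not from the study):

```python
from statistics import mean

def icc_1_1(measurements):
    """One-way random-effects ICC(1,1) for n subjects with k repeats each.

    measurements: list of per-subject lists, e.g. [[m1, m2, m3], ...].
    """
    n = len(measurements)
    k = len(measurements[0])
    subject_means = [mean(row) for row in measurements]
    grand_mean = mean(subject_means)
    # Between-subjects and within-subjects mean squares
    msb = k * sum((m - grand_mean) ** 2 for m in subject_means) / (n - 1)
    msw = sum(
        (v - m) ** 2 for row, m in zip(measurements, subject_means) for v in row
    ) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical triplicate ETOP readings (mbar) for four ears
triplicates = [[41, 43, 42], [30, 29, 31], [55, 54, 56], [38, 40, 39]]
print(f"{icc_1_1(triplicates):.3f}")  # close to 1: repeats agree closely
```

Values near 1 indicate that repeated measurements under identical preconditions agree closely, which is the pattern the reliability abstract reports for most parameters.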
Instruction: Can digit ratio (2D:4D) studies be helpful in explaining the aetiology of idiopathic gynecomastia? Abstracts: abstract_id: PUBMED:25543532 Can digit ratio (2D:4D) studies be helpful in explaining the aetiology of idiopathic gynecomastia? Background: The aetiology of the idiopathic form of gynecomastia is unknown, and it has not been established whether it is related to factors present in the prenatal period or caused by as yet unidentified environmental conditions. Objective: The aim of this study is to compare digit ratio (2D:4D) in men with idiopathic gynecomastia and unaffected male and female controls from the general population. Methods: The study involved 250 subjects (50 men with idiopathic gynecomastia, 100 control men and 100 control women). Eight measurements were taken: body height, waist and hip circumferences, II and IV digits' lengths (right and left), and body weight, and 4 indices were calculated: BMI, WHR, and 2D:4D for the right and left hand. Results: 2D:4D in men with gynecomastia ([median (I-III quartiles)]: 1.03 (1.01-1.04) for right hand and 1.03 (1.01-1.03) for left hand) was significantly different from the ratio in control men [0.97 (0.95-0.99) for right and left hand] (p<0.0001) and similar to the ratio found in control women [1.02 (1-1.03) for right and left hand] (p=1). Men with gynecomastia were characterized by higher values of the following variables: weight, waist and hip circumferences, BMI and WHR. Their body height was significantly greater than in women (p<0.0001) but similar to the values observed in control men (p=0.2687). Conclusion: The findings presented in this study may influence the aetiology-related classification of gynecomastia and justify introducing the new group: "gynecomastia related to the prenatal sex hormones disturbances". abstract_id: PUBMED:29772574 Is There a Rationale behind Pharmacotherapy in Idiopathic Gynecomastia?
Background/aims: The aim of this research was to analyze digit ratio in relation to estrogen receptor (ER) and progesterone receptor (PR) expression and to verify digit ratio (2D:4D) as a marker of ER and PR overexpression in the male breast. Methods: This study included 35 patients who underwent breast reduction due to the idiopathic form of gynecomastia. The average age of the studied individuals was 25.7 years (SD = 7.8). ER and PR expression was detected in breasts, and digit ratios were calculated in patients with idiopathic gynecomastia. Results: ER expression did not correlate with the right (p = 0.51) or left 2D:4D (p = 0.97). Also, there was no correlation between PR expression and 2D:4D. The lack of correlation between these variables may result from the fact that the analyzed group of men with idiopathic gynecomastia was small in number, but at the same time, it appeared to be homogenous in these aspects (positive ER and/or PR expression and high digit ratio). Conclusion: A high digit ratio in men with gynecomastia may tend to be a marker of overexpression of ER and PR. This may justify an early use of tamoxifen in men with gynecomastia and a high digit ratio. abstract_id: PUBMED:8024242 Hormonal profile of idiopathic gynecomastia in young adults. Apropos of 488 cases Gynecomastia, a very frequent disorder, is present in almost 40% of young men. In this population the investigations often fail to find any aetiology, therefore defining idiopathic gynecomastia. The aim of this work is to compare clinical and hormonal characteristics of 488 subjects with gynecomastia to 41 healthy controls. There are many explanations for the occurrence of idiopathic gynecomastia, including modification of hormonal balance, change of aromatase activity, or a receptor anomaly.
Our work demonstrates a significant decrease in mean testosteronemia, linked to a high prevalence of incipient hypogonadism in the studied population, especially in patients with a history of testicular disease. abstract_id: PUBMED:31333925 Hand Pattern Based on Second-to-Fourth Digit Ratio: Its Relationship to Sex, Body Mass Index, and Physical Activity. Background: The second-to-fourth digit ratio is measured by dividing the length of the index finger by the length of the ring finger. A ratio <1 means that the ring finger is longer; this is known as the male pattern or "A" pattern. A ratio >1 means that the index finger is longer; this is known as the female pattern or "C" pattern. When both fingers are equal in length (ratio = 1), the pattern is labeled as "B" pattern; this may be seen in both sexes. None of the previous studies on this topic appeared in the plastic or hand surgery literature. Our aim is to increase the awareness of plastic and hand surgeons of these types of hand patterns and to correlate such patterns with sex, body mass index (BMI), and physical activity in a group of healthy medical students and interns. Methods: We conducted a cross-sectional study on 160 healthy medical students/interns. There were 82 females and 78 males between the ages of 20 and 30. A male pattern (pattern A) was defined as pattern A in both hands or pattern A in one hand and pattern B in the other hand. A female pattern (pattern C) was defined as pattern C in both hands or pattern C in one hand and pattern B in the other hand. The correlation between hand pattern and sex, BMI, and physical activity was assessed using the chi-square test and Fisher's exact test. A P value of <0.05 was considered statistically significant. Results: Five participants (3.1%) had pattern B in both hands. Another 15 participants (9.4%) had a male pattern in one hand and a female pattern in the other hand. Both of these groups were excluded from the statistical analysis.
Pattern A was significantly higher in males, and pattern C was significantly higher in females (P < 0.001). BMI and physical activity did not significantly affect the hand pattern in males. However, the correlations between hand pattern and both BMI and physical activity were significant in females (P = 0.005 for BMI) (P = 0.042 for physical activity). Conclusions: About 12.5% of the study group did not fit into a male or female pattern. In the remaining 87.5% of the participants, sex correlated with hand pattern. BMI and physical activity significantly affected the hand pattern in females. We believe that our study will encourage plastic surgeons to further investigate the correlation of different hand patterns with other variables of interest in plastic surgery such as masculine facial anthropometric values, hypoplastic breasts, gynecomastia, and lipodystrophy. abstract_id: PUBMED:12496773 Gynaecomastia: aetiology and treatment options. This paper reviews the aetiology, prevalence and treatment of gynaecomastia. Gynaecomastia is a proliferation of male breast tissue apparently caused by alterations in the oestrogen:androgen ratio. It is common in the general population, resulting from normal physiological changes and various pathological causes. Gynaecomastia is also induced by many drug treatments, including hormonal therapies for prostate cancer. Prophylactic radiotherapy appears to be effective against gynaecomastia in this population. Surgery is an option for intractable cases, and some medical treatments have been reported, although there is a paucity of data evaluating such therapies. abstract_id: PUBMED:33411368 Biochemical features of eugonadal patients with idiopathic gynaecomastia: A retrospective cross-sectional study. Patients with idiopathic gynaecomastia have greater BMI and an unfavourable lipid profile compared with age-matched controls.
Twenty-five adult eugonadal patients with idiopathic gynaecomastia and 50 age- and BMI-matched controls were selected. Clinical and biochemical parameters and ultrasound testis volume were reviewed retrospectively. Patients and controls did not differ in any biochemical parameter except LH levels, which were 31% higher in patients (p = 0.019), although still within the normal range. Compared with controls, patients had a threefold greater rate of elevated LDL-c (p = 0.025). Patients ≥ 25 years had higher levels of serum LDL-c compared with either patients < 25 years (p = 0.006) or controls ≥ 25 years (p = 0.012). In patients, both at bivariate analysis and at linear regression, age correlated positively with total cholesterol and LDL-c; the latter correlated inversely with total testosterone. Negative interactions were found for age and total testosterone with LDL-c, for LH and the estradiol to testosterone ratio (E2:T) with LDL-c, and for age and E2:T with total cholesterol. Our data suggest inadequate local androgen action in patients with idiopathic gynaecomastia. This partial androgen resistance might blunt the beneficial effects of testosterone on lipid metabolism. Further studies are needed to verify whether this metabolic derangement impacts the cardiovascular health of these patients. abstract_id: PUBMED:10651345 Comparison of tamoxifen with danazol in the management of idiopathic gynecomastia. Idiopathic gynecomastia, unilateral or bilateral, is a common physical finding in normal men. Successful treatment using tamoxifen (antiestrogen) and danazol (antiandrogen) has recently been reported. We compared the efficacy of tamoxifen and danazol in the treatment of idiopathic gynecomastia. We reviewed the clinical records of patients with idiopathic gynecomastia presenting to the Department of Surgery, University of Hong Kong, between August 1990 and September 1995.
Medical treatment with either tamoxifen (20 mg/d) or danazol (400 mg/d) was offered and continued until a static response was achieved. The treatment response was compared. Sixty-eight patients with idiopathic gynecomastia were seen in the Breast Clinic. The median age was 39.5 years (range, 13-82), with a median duration of symptoms of 3 months (range, 1-90). The median size was 3 cm (range, 1-7). Twenty-three patients were treated with tamoxifen and 20 with danazol. Complete resolution of the gynecomastia was recorded in 18 patients (78.2%) treated with tamoxifen, whereas only 8 patients (40%) in the danazol group had complete resolution. Five patients, all from the tamoxifen group, developed recurrence of the breast mass. In conclusion, hormonal manipulation is effective in the treatment of patients with idiopathic gynecomastia. Although the effect is more marked for tamoxifen than for danazol, the relapse rate is higher for tamoxifen. Further prospective randomized studies would be useful in defining the role of these drugs in the management of patients with idiopathic gynecomastia. abstract_id: PUBMED:31117335 Clinical and Biochemical Phenotype of Adolescent Males with Gynecomastia Objective: Gynecomastia is defined as a benign proliferation of male breast glandular tissue. Its prevalence during puberty varies between 50% and 60%, and it is also common in neonatal and elderly males. It develops mainly due to a disequilibrium between estrogen and androgen activity in breast tissue, where estradiol (E2) binds to estrogen receptors and stimulates ductal and glandular cells. The aim of this work was to investigate the relationship between sex hormone alterations and the natural history of gynecomastia. Methods: Participants in this study were young males referred to an outpatient clinic with breast enlargement between January 2011 and February 2016.
Thyroid function, liver function, hormone concentrations and tumor markers were measured and anthropometric assessment was conducted. Results: Subjects comprised 93 males, aged 9 to 18 (mean±standard deviation age 13.8±2.6) years. In 63 of 93 (67.7%) the gynecomastia was confirmed and 28 were followed up for a median period of three months. None of the boys showed any reduction in breast size during follow-up. There was no correlation between body mass index Z-score and breast size. Breast enlargement progressed in nine boys (32.1%). A positive correlation between the estrogen to testosterone (E2/TTE) ratio and Tanner B stage (r=0.47; p=0.034) was observed. Conclusion: The E2/TTE ratio may be a helpful tool in diagnosing gynecomastia. An altered E2/TTE ratio might be responsible for a proportion of cases described previously as idiopathic. Additionally, weight loss does not imply reduction of breast size in boys. Nonetheless it should be the first step in the management of prolonged gynecomastia. abstract_id: PUBMED:6354523 Studies on the treatment of idiopathic gynaecomastia with percutaneous dihydrotestosterone. We have studied clinical and endocrine parameters in a group (group A) of forty men referred to us because of persistent idiopathic gynaecomastia (of more than 18 months duration), before and during the administration of percutaneous dihydrotestosterone (DHT). The endocrine parameters (testosterone (T), 17 beta-oestradiol (E2), DHT, gonadotrophins (FSH and LH) and prolactin (PRL)) were compared to those of control groups of 12 healthy men on DHT therapy (group B) and 10 on placebo (group C). Local administration of DHT was followed by the complete disappearance of gynaecomastia in 10 patients, partial regression in 19 and no change in 11 patients after 4 to 20 weeks of percutaneous DHT (125 mg twice daily).
Before treatment the T + DHT/E2 ratio was significantly (P less than 0.001) lower in group A (244 +/- 21 (SEM)) than in groups B and C (361 +/- 21), while T, DHT and E2 concentrations were all within the normal range. During DHT treatment plasma hormone levels were measured in 26 patients from group A: DHT levels increased significantly (day 0: 1.63 +/- 0.14 nmol/l; day 15: 12.8 +/- 1.6 nmol/l, P less than 0.001) while T and E2 levels fell significantly (T: day 0: 22.6 +/- 1.2 nmol/l; day 15: 11.0 +/- 1.5 nmol/l, P less than 0.001; E2: day 0: 110.5 +/- 7.12 pmol/l; day 15: 86.79 +/- 9.4 pmol/l, P less than 0.01). The T/E2 ratio decreased from 231 +/- 20 to 164 +/- 27 (P less than 0.05) while the T + DHT/E2 ratio increased significantly (P less than 0.02) to a normal mean value (day 15: 354 +/- 57).(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:15212645 Prepubertal gynaecomastia: aetiology, course and outcome. Background: Palpable and measurable glandular breast tissue is a rare finding in prepubertal boys and warrants thorough evaluation to rule out an underlying pathology. Objective: To characterize the course and outcome of prepubertal gynaecomastia, examine its effect on puberty, and try to identify its underlying cause. Patients And Methods: Twenty-nine out of 581 (5%) boys referred to our clinic for evaluation of gynaecomastia between 1980 and 2000 were prepubertal at diagnosis. Data on age at appearance of gynaecomastia, general and endocrine evaluation, course of growth and puberty, and treatment were collected from the medical files. Results: Prepubertal gynaecomastia was diagnosed at a mean age of 8.9 +/- 2.2 years. In 27 of the 29 boys (93.2%) no underlying cause was identified. The remaining two boys (6.8%) had hyperaromatase syndrome. Nine boys (31%) were obese. Spontaneous resolution was recorded in six boys (20.5%), no change in 15 (52%), and further breast enlargement in eight (including the two with hyperaromatase syndrome) (27.5%).
Accelerated growth and bone maturation rates were noted only in the two boys with hyperaromatase syndrome. Pubertal onset was documented in 13 boys at 11.8 +/- 1.1 years. Conclusions: According to our large single tertiary care centre experience, 5% of boys referred for evaluation of gynaecomastia were prepubertal. The development of gynaecomastia was the only abnormality and was not associated with other pubertal signs or accelerated growth rate. In most cases, the gynaecomastia was idiopathic. Further prospective studies are needed to evaluate the pathophysiological mechanisms responsible for this phenomenon. Answer: Digit ratio (2D:4D) studies have been explored as a potential tool to understand the aetiology of idiopathic gynecomastia. One study found that men with idiopathic gynecomastia had a significantly different digit ratio compared to control men, with their 2D:4D ratios being similar to those found in control women (PUBMED:25543532). This suggests that the aetiology of idiopathic gynecomastia may be related to prenatal sex hormone disturbances, as the digit ratio is considered an indicator of prenatal androgen exposure. Another study aimed to analyze digit ratio in relation to estrogen receptor (ER) and progesterone receptor (PR) expression in men with idiopathic gynecomastia. Although no correlation was found between ER or PR expression and 2D:4D, the study suggested that a high digit ratio in men with gynecomastia might be a marker of ER and PR overexpression. This could justify the early use of tamoxifen, a selective estrogen receptor modulator, in men with gynecomastia and a high digit ratio (PUBMED:29772574). However, it is important to note that the relationship between digit ratio and gynecomastia is not yet fully understood, and more research is needed to confirm these findings. 
The studies mentioned provide preliminary evidence that digit ratio may be related to the development of idiopathic gynecomastia, potentially through a connection with prenatal hormone exposure and receptor expression in breast tissue. Nonetheless, the exact mechanisms and the clinical utility of 2D:4D as a predictive or diagnostic tool for idiopathic gynecomastia require further investigation.
Instruction: Liver Transplantation for Hereditary Transthyretin Amyloidosis: After 20 Years Still the Best Therapeutic Alternative? Abstracts: abstract_id: PUBMED:26308415 Liver Transplantation for Hereditary Transthyretin Amyloidosis: After 20 Years Still the Best Therapeutic Alternative? Background: Until recently, liver transplantation (Ltx) was the only available treatment for hereditary transthyretin (TTR) amyloidosis; today, however, several pharmacotherapies are being tested. Herein, we present survival data from the largest available database on transplanted hereditary TTR patients to serve as a base for comparison. Methods: Liver transplantation was evaluated in a 20-year retrospective analysis of the Familial Amyloidosis Polyneuropathy World Transplant Registry. Results: From April 1990 until December 2010, data were accumulated from 77 liver transplant centers. The Registry contains 1940 patients, and 1379 are alive. Eighty-eight Ltx were performed in combination with a heart and/or kidney transplantation. Overall, 20-year survival after Ltx was 55.3%. Multivariate analysis revealed modified body mass index, early onset of disease (<50 years of age), disease duration before Ltx, and TTR Val30Met versus non-TTR Val30Met mutations as independent significant survival factors. Early-onset patients had an expected mortality rate of 38% that of the late-onset group (P < 0.001). Furthermore, Val30Met patients had an expected mortality rate of 61% that of non-TTR Val30Met patients (P < 0.001). With each year of duration of disease before Ltx, expected mortality increased by 11% (P < 0.001). With each 100-unit increase in modified body mass index at Ltx, expected mortality decreased to 89% of its previous value (P < 0.001). Cardiovascular death was markedly more common than that observed in patients undergoing Ltx for end-stage liver disease.
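As a rough illustration of how the registry's survival factors combine, the sketch below assumes the reported percentages act as independent, multiplicative hazard-ratio-style factors (as in a Cox model); the function name and the example patient values are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: composes the multiplicative expected-mortality
# factors reported in the registry analysis. All names are hypothetical.

def expected_mortality_multiplier(early_onset, val30met,
                                  years_duration, mbmi_units_above_ref):
    """Relative expected mortality vs. a late-onset, non-Val30Met reference
    patient with zero disease duration at the reference mBMI.

    early_onset: True if disease onset < 50 years of age
    val30met: True for TTR Val30Met carriers
    years_duration: disease duration before Ltx, in years
    mbmi_units_above_ref: (mBMI - reference mBMI) / 100
    """
    m = 1.0
    if early_onset:
        m *= 0.38   # early-onset mortality is 38% that of late-onset
    if val30met:
        m *= 0.61   # Val30Met mortality is 61% that of non-Val30Met
    m *= 1.11 ** years_duration        # +11% per year of disease duration
    m *= 0.89 ** mbmi_units_above_ref  # x0.89 per 100-unit mBMI increase
    return m

# Hypothetical example: early-onset Val30Met patient, 3 years of symptoms,
# mBMI 200 units above the reference value.
print(round(expected_mortality_multiplier(True, True, 3, 2.0), 3))  # prints 0.251
```

Under this multiplicative reading, the protective effects of early onset, Val30Met genotype, and higher mBMI compound with each other, while each year of pre-transplant disease duration erodes them.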
Conclusions: Long-term survival after Ltx, especially for early-onset TTR Val30Met patients, is excellent. The risk of delaying Ltx by testing alternative treatments, especially in early-onset TTR Val30Met patients, requires consideration. abstract_id: PUBMED:26558262 Ocular Manifestations and Therapeutic Options in Patients with Familial Amyloid Polyneuropathy: A Systematic Review. Purpose: This paper aims to review the morphological and functional characteristics of patients affected by familial amyloid polyneuropathy (FAP), with greater focus on type I and its progression after liver transplantation. We also analyse therapeutic options for the ophthalmic manifestations. Methods: The literature from 2002 through 2015 was reviewed, with a total of 45 articles studied, using the key terms related to amyloidosis and its therapeutic approaches. Information was collated, evaluated, critically assessed, and then summarised in its present form. PATHOPHYSIOLOGY AND TREATMENT: FAP results from mutation of the transthyretin gene, with Val30Met being the most frequent substitution. The symptoms are those typical of a sensorimotor autonomic neuropathy and can be halted with liver transplantation. Nowadays there are new medical therapies that delay the progression of the systemic neuropathy. However, there are still no options to avoid ocular disease. Conclusion: The main ocular manifestations in patients with FAP type I are amyloid deposition in the vitreous, dry eye, and secondary glaucoma. Despite liver transplantation, eye synthesis of amyloid persists and is associated with progressive ocular manifestations, which require continued ophthalmologic follow-up. New therapeutic strategies are therefore needed, particularly to target the ocular synthesis of the abnormal protein. abstract_id: PUBMED:25069833 Familial amyloid polyneuropathy: elaboration of a therapeutic patient education programme, "EdAmyl". 
Background: Transthyretin-related amyloidosis (ATTR) is an autosomal dominant disease affecting the peripheral and autonomic nervous system, heart, eyes and kidneys. It is the most disabling hereditary polyneuropathy in adults. The French National Reference centre for this disease was accredited in 2005 with 10 lines of action. One of them is to inform and educate patients about their disease to improve their care and reduce morbidities. We thus decided to elaborate a therapeutic patient education (TPE) programme, starting with an assessment of patients' needs. Methods: A qualitative research study was conducted with one-to-one semi-structured interviews of selected individuals. Recorded interviews were analysed to identify the skills that patients need to acquire. A TPE programme was elaborated on the basis of these findings. Results: Seven patients, one asymptomatic carrier and two healthy spouses were interviewed. Analysis of the interviews showed that interviewees had a good knowledge of the disease and its symptoms but they had difficulties explaining the disease mechanism and did not have an adequate knowledge of the available treatment options, although they knew that liver transplant might halt progression of the disease. ATTR amyloidosis appeared to have a major negative impact on the patient's physical and mental well-being. Patients feared loss of autonomy and having to require assistance from their relatives and spouses. All interviewees were keen to participate in a TPE programme. Based on this needs assessment, we identified seven skills that patients need to acquire and several pedagogical goals to be achieved during the education programme. An interdisciplinary team then elaborated a complete TPE programme. Conclusion: Elaboration of a TPE programme for ATTR amyloidosis required obtaining useful information from the patients themselves, and their relatives, concerning their perception of their disease.
This needs assessment constituted the basis for designing the first TPE programme, to our knowledge, for ATTR amyloidosis. After translation, this programme could be applied in other EU countries and worldwide for this rare disease. abstract_id: PUBMED:37132627 Central nervous system involvement in two siblings affected by hereditary transthyretin amyloidosis 30 years after liver transplantation: a model for gene-silencing therapies. Hereditary transthyretin amyloidosis (ATTRv) is a genetic, autosomal dominant, severe disease characterized by progressive sensory-motor polyneuropathy, cardiomyopathy, dysautonomia, and renal and ocular involvement, provoked by the deposition of the mutated and unstable transthyretin protein. In past decades, liver transplant, which avoids the synthesis of the pathologic protein, has been a good, though not curative, treatment. In this report we describe two siblings affected by ATTRv, who developed their first symptoms of disease at a young age and underwent a liver transplant with prompt resolution of clinical manifestations. After several years, central nervous system and ocular symptoms relapsed despite treatment, because the synthesis of the mutated protein continues in the choroid plexus, a site where current therapies are unable to act. In our opinion, these cases represent a long-term prognostic model for the novel gene silencers approved for ATTRv, because they share a similar therapeutic effect with liver transplant: blocking mutated protein synthesis only in the main transthyretin (TTR) production organ can prevent the progression of disease for some years, but cannot avoid long-term clinical worsening due to extra-hepatic production of TTR. Novel therapeutic strategies are needed to guarantee better long-term stabilization of symptomatology.
abstract_id: PUBMED:12884739 Analyses of pathogenesis and therapeutic approaches for hereditary amyloidosis Amyloidosis is a disorder of protein metabolism in which normally soluble autologous proteins are deposited in tissues as abnormal insoluble fibrils, causing structural and functional disruptions. We have recently identified a novel localized amyloidosis accompanied by trichiasis. The precursor protein of the amyloid deposits was mutated lactoferrin, and all the patients had lactoferrin Glu561Asp. The disease was classified as a hereditary amyloidosis whose risk factor is trichiasis. We examined the therapeutic possibilities for mutated transthyretin (ATTR)-related familial amyloidotic polyneuropathy (FAP), one of the systemic amyloidoses. Cr3+ suppressed amyloid formation by stabilizing the ATTR structure in vitro. BSB is a useful new diagnostic tool to detect amyloid deposits both in vitro and in vivo and may have therapeutic potential for preventing amyloid deposition. Gene therapy using single-stranded oligonucleotides (SSOs) may become a promising therapeutic tool instead of liver transplantation: SSOs with atelocollagen effectively replaced the TTR gene both in vitro and in vivo. abstract_id: PUBMED:27663057 Familial amyloid polyneuropathy: When does it stop to be asymptomatic and need a treatment? Transthyretin familial amyloid polyneuropathy (FAP) is a rare disease with autosomal transmission due to point mutation of the transthyretin (TTR) gene. It is the most disabling hereditary neuropathy affecting sensory, motor and autonomic nerves, and is irreversible and fatal within 7 to 12 years of onset in the absence of therapy. Diagnosis is usually delayed for 1-5 years because the onset is usually insidious, and a positive family history is lacking in 50% of late-onset cases. Penetrance is variable, and depends on the age of the carrier and the age of onset in family members.
Two treatments are available: liver transplantation, to suppress the main source of systemic production of mutant TTR; and TTR tetramer stabilizer drugs, to avoid the release of highly amyloidogenic monomers and oligomers. These therapies are able to stop or slow the progression of the disease in its early stages. Genetic counseling is crucial to detect carriers at risk of developing the disease. The European network for TTR-FAP recommends careful baseline assessment by questionnaire, clinical examination and neurophysiological tests, and periodic consultations to detect the onset of disease in time to start anti-amyloid therapy after biopsy findings of amyloid deposition. A therapeutic educational program is important for improving patients' awareness. Patients are considered symptomatic and ill when they themselves perceive symptoms or changes, including changes from baseline measurements on neurophysiological tests, followed by findings of amyloid deposition on biopsy. The most sensitive biopsies are from the labial salivary gland and skin. abstract_id: PUBMED:29073801 Management of gastrointestinal complications in hereditary transthyretin amyloidosis: a single-center experience over 40 years. Introduction: Hereditary transthyretin amyloidosis (ATTRm amyloidosis) is a rare disease caused by the deposition and accumulation of insoluble non-native transthyretin fibrils in the body. The disease inevitably results in widespread organ disruption, and poor life expectancy. The GI tract is one organ system vulnerable to disruption and, although the clinical presentation of the disease varies, GI involvement affects most patients with ATTRm amyloidosis. Areas covered: This article presents our experience with diagnosing and treating the GI symptoms of ATTRm amyloidosis patients at our center over the last 40 years, in the Swedish clustering area of the disease. 
Our aim is to help other physicians to better manage GI complications in patients with this rare but widespread condition. Expert commentary: GI symptoms are debilitating complications for ATTRm amyloidosis patients, yet with appropriate questioning and diagnostic methods, symptomatic treatment can be implemented to provide relief. Further, patients with fewer GI complications and a good nutritional status are also better candidates for liver transplantation which, in selected cases, is the best disease-modifying treatment of ATTRm amyloidosis to date. abstract_id: PUBMED:19075702 Pathogenesis of and therapeutic strategies to ameliorate the transthyretin amyloidoses. Transthyretin (TTR) is a homotetrameric serum and cerebrospinal fluid protein that transports both thyroxine (T(4)) and the retinol-retinol binding protein complex (holoRBP). Rate-limiting tetramer dissociation and rapid monomer misfolding and misassembly of variant TTR result in familial amyloid polyneuropathy (FAP), familial amyloid cardiomyopathy (FAC), or familial central nervous system amyloidosis. Analogous misfolding of wild-type TTR results in senile systemic amyloidosis (SSA), characterized by sporadic amyloidosis in elderly populations. With the availability of genetic and immunohistochemical diagnostic tests, patients with TTR amyloidosis have been found in many nations worldwide. Recent studies indicate that TTR amyloidosis is not a rare endemic disease as previously thought. The only effective treatment for the familial TTR amyloidoses is liver transplantation; however, this strategy has a number of limitations, including a shortage of donors, a requirement for surgery for both the recipient and living donor, and the high cost. Furthermore, a large number of patients are not good transplant candidates.
Recent studies focused on the TTR gene and protein have provided insight into the pathogenesis of TTR amyloidosis and suggested new strategies for therapeutic intervention. TTR tetramer (native state) kinetic stabilization by small molecule binding, immune therapy, and gene therapy with small interfering RNAs, antisense oligonucleotides, and single-stranded oligonucleotides are promising strategies based on our understanding of the pathogenesis of TTR amyloidosis. Among these, native state kinetic stabilization by diflunisal and Fx-1006A, a novel therapeutic strategy against protein misfolding diseases, is currently in Phase II/III clinical trials. abstract_id: PUBMED:30847674 Genetic neuromuscular disorders: living the era of a therapeutic revolution. Part 1: peripheral neuropathies. Recent advances in the pathophysiological and genetic mechanisms of some neuromuscular diseases and rapid progress in new pharmacological technologies have led to an accelerated development of innovative treatments, generating an unexpected therapeutic revolution. In part 1, we report already commercially available drugs, just-approved drugs and new therapeutic promises in the treatment of peripheral neuropathies. Hereditary transthyretin amyloidosis (hATTR) is a devastating disease due to amyloid accumulation in peripheral nerves, heart and autonomic system. The first specific drug approved for hATTR was tafamidis, a TTR tetramer stabilizer. In 2018, the positive results of two phase 3 trials were reported, leading to the start of the regulatory approval route for inotersen, an antisense oligonucleotide, and patisiran, the first-ever RNA interference (RNAi) therapeutic. A systems biology targeting approach has indicated baclofen, naltrexone and sorbitol in combination (PXT3003) as candidate drugs for Charcot-Marie-Tooth disease type 1A. This hypothesis was confirmed in experimental models and in phase 2 and 3 clinical trials.
Givosiran, another RNAi therapeutic, targeting 5-aminolevulinic acid synthase, has been positively tested in acute intermittent porphyria in phase 1/2 and ongoing phase 3 trials. Although allogeneic hematopoietic stem cell transplantation has recently proved to be a long-term therapy in mitochondrial neurogastrointestinal encephalomyopathy (MNGIE), a new strategy is liver transplantation, which is able to reverse the severe biochemical and clinical imbalance of the disease. Recently, a gene therapy has been tested in a MNGIE murine model, indicating that it may become a new therapeutic option. abstract_id: PUBMED:32107564 Amyloid cardiomyopathy. Cardiac amyloidosis is a heterogeneous group of diseases characterized by extracellular deposition of amyloid fibrils in many different organs, finally resulting in organ failure. Cardiac involvement is common for immunoglobulin light chain amyloidosis (AL) or transthyretin amyloidosis (ATTR); the latter is caused by a transthyretin gene variant or wild-type protein. Precise diagnostic assessment including laboratory tests, electrocardiography, echocardiography, cardiac magnetic resonance imaging, biopsy, and/or bone scintigraphy is mandatory for definition of the amyloid type and finally for treatment initiation. Treatment of cardiac amyloidosis includes symptomatic therapy of heart failure as well as treatment of the underlying disease. Causative treatment of AL amyloidosis follows the regimens used for multiple myeloma. For many years, orthotopic liver transplantation was the only treatment available for hereditary ATTR amyloidosis, but important advances have been made after the approval of a novel class of medication, namely RNA silencers. However, currently no treatment is available to remove amyloid deposited in the tissue. Thus, early diagnosis is still critical to afford the best efficacy of available therapies.
Answer: Liver transplantation (Ltx) has been a primary treatment option for hereditary transthyretin (TTR) amyloidosis, particularly effective for early-onset TTR Val30Met patients, with a 20-year survival rate of 55.3% (PUBMED:26308415). This procedure aims to halt the progression of the disease by replacing the liver, which is the main source of mutant TTR production. However, despite its effectiveness, liver transplantation has limitations, including donor shortages, surgical risks, and high costs (PUBMED:19075702). Recent advancements have introduced alternative pharmacotherapies, such as TTR tetramer stabilizers like tafamidis, and gene-silencing therapies like inotersen (an antisense oligonucleotide) and patisiran (an RNA interference therapeutic) (PUBMED:30847674). These new treatments have shown promise in clinical trials and are beginning to offer alternatives to liver transplantation. Moreover, while liver transplantation can halt systemic neuropathy progression, it does not prevent the continued synthesis of amyloid in the eyes, leading to progressive ocular manifestations (PUBMED:26558262). Additionally, there are cases where central nervous system involvement occurs many years after liver transplantation, as the mutated protein continues to be produced in the choroid plexus, an area not affected by the transplantation (PUBMED:37132627). Given these considerations, while liver transplantation has been the best therapeutic alternative for many years, the emergence of new medical therapies that delay the progression of systemic neuropathy and the limitations of liver transplantation in preventing ocular and central nervous system involvement suggest that it may no longer be the sole best option. The risk of delaying liver transplantation to test alternative treatments, especially in early-onset TTR Val30Met patients, requires careful consideration (PUBMED:26308415). 
The therapeutic landscape for hereditary TTR amyloidosis is evolving, and the best treatment option may vary depending on individual patient factors, including the stage of the disease, the specific mutation, and the presence of symptoms.
Instruction: Is TOMPOOL (gated blood-pool SPECT processing software) accurate to diagnose right and left ventricular dysfunction in a clinical setting? Abstracts: abstract_id: PUBMED:24875579 Is TOMPOOL (gated blood-pool SPECT processing software) accurate to diagnose right and left ventricular dysfunction in a clinical setting? Background: The assessment of right ventricular function is crucial for the management of heart disease. TOMPOOL is software that processes data acquired with Tomographic Equilibrium Radionuclide Ventriculography. In this report, TOMPOOL's diagnostic accuracy and inter-observer reproducibility were assessed in a cohort of patients with various etiologies of ventricular dysfunction. Methods And Results: End-diastolic volume (EDV), ejection fraction (EF), and cardiac output (CO) were calculated for the right ventricle (RV) and the left ventricle (LV) using TOMPOOL in 99 consecutive patients. Thirty-five patients underwent cardiac magnetic resonance imaging (CMR), considered the reference standard for measuring EDV and EF; Spearman's rho correlation coefficients were r = 0.73/0.80 and 0.67/0.73 for right/left EF and EDV, respectively. Twenty-one patients had thermodilution measurements of right CO (reference standard); the correlation was r = 0.57. The best cut-off points (sensitivity/specificity) for diagnosing ventricular dysfunction or enlargement were 46% for RVEF (67%/89%), 62% for LVEF (100%/90%), 94 mL for RVEDV (77%/73%), and 84 mL for LVEDV (100%/91%). The areas under the ROC curve were, respectively, 0.79, 0.91, 0.83, and 0.99. Inter-observer reproducibility was r = 0.81/0.94, 0.77/0.90, and 0.78/0.75 for right/left EF, EDV, and CO, respectively. Conclusion: TOMPOOL is accurate: measurements of EDV, EF, and CO are reproducible and correlate with CMR and thermodilution. However, thresholds must be adjusted. abstract_id: PUBMED:12607866 Functional assessment of the right ventricle with gated myocardial perfusion SPECT.
Background: The evaluation of right ventricular function can provide valuable information in a variety of cardiac and noncardiac conditions. Functional assessment of the right ventricle is difficult because of its anatomy and geometry. The authors describe a method for assessing right ventricular function using gated myocardial perfusion SPECT. Methods: In 20 patients, right and left ventricular ejection fractions (RVEF, LVEF) were determined using gated blood-pool scintigraphy (GBPS) and gated myocardial perfusion SPECT (GSPECT). To avoid contamination with right atrial activity, the two-frame method was adopted for gated blood-pool data when RVEF was measured. In nine patients with normal right ventricles, an index of wall thickening for the right ventricle was derived from the peak systolic and diastolic counts in the free wall. Results: Linear correlation between the two methods adopted for calculation of LVEF and RVEF was good. Bland-Altman analysis revealed good agreement between the two methods with no specific bias. The mean LVEF was 47.9 +/- 12% (GBPS) and 47.3 +/- 12.4% (GSPECT). The mean RVEF was 43.2 +/- 9.6% (GBPS) and 44.2 +/- 8.5% (GSPECT). In both cases, the values were not significantly different. The mean wall motion index was 35%. There was no correlation between the wall thickness index and ejection fraction, but the index was greater in patients with a normal right ventricle compared with those with reduced RVEF. Conclusions: Gated SPECT offers an alternative to GBPS for the functional assessment of the right ventricle. Using GSPECT will allow the simultaneous assessment of both the right and left ventricles. abstract_id: PUBMED:9856927 Diagnosis of arrhythmogenic right ventricular cardiomyopathy by Fourier analysis of gated blood pool single-photon emission tomography.
To evaluate the diagnostic performance of Fourier phase analysis of gated blood pool single-photon emission computed tomography (GBP SPECT) in arrhythmogenic right ventricular (RV) cardiomyopathy, 18 patients with confirmed arrhythmogenic RV cardiomyopathy underwent GBP SPECT and x-ray cineangiography. Results were compared with data obtained with GBP SPECT in 10 control subjects. This 3-dimensional method demonstrated good correlation with cineangiography for measurements of RV enlargement and extent of the disease; RV and left ventricular segments were analyzed with the same accuracy. Tomographic abnormalities were significantly decreased RV ejection fraction, RV dilatation, nonsynchronized contraction of the ventricles, increased RV contraction dispersion, presence of segmental RV wall motion disorders and/or phase delays, and occasionally regional left ventricular abnormalities. RV-delayed phase areas were always present in our population. A scoring system with RV criteria was proposed to diagnose RV disease. Because Fourier analysis of GBP SPECT provides ventricular morphologic information for the right ventricle with the same accuracy as for the left ventricle, it may replace planar radionuclide studies. Therefore, this method is helpful in patients with a strong clinical suspicion of arrhythmogenic RV cardiomyopathy, and should be used as a screening method before right ventriculography. abstract_id: PUBMED:25144559 Comparison of LVEF assessed by 2D echocardiography, gated blood pool SPECT, 99mTc tetrofosmin gated SPECT, and 18F-FDG gated PET with ERNV in patients with CAD and severe LV dysfunction. Introduction: Left ventricular ejection fraction (LVEF) is the single most important predictor of prognosis in patients with coronary artery disease (CAD) and left ventricular (LV) dysfunction. Equilibrium radionuclide ventriculography (ERNV) is considered the most reliable technique for assessing LVEF.
Most of these patients undergo two-dimensional (2D) echocardiography and a myocardial viability study using gated myocardial perfusion imaging (MPI) or gated 18F-fluorodeoxyglucose (18F-FDG) PET. However, the accuracy of LVEF assessed by these methods is not clear. This study was designed to assess the correlation and agreement of the LVEF measured by 2D echocardiography, gated blood pool single photon emission computed tomography (SPECT), 99mTc tetrofosmin gated SPECT, and 18F-FDG gated PET with ERNV in CAD patients with severe LV dysfunction. Patients And Methods: Patients with CAD and severe LV dysfunction [ejection fraction (EF) <35% assessed by 2D echocardiography] were prospectively included in the study. These patients underwent ERNV along with gated blood pool SPECT, 99mTc tetrofosmin gated SPECT, and 18F-FDG gated PET as per the standard protocol for myocardial viability assessment and LVEF calculation. Spearman's coefficient of correlation (r) was calculated for the different sets of values, with the significance level kept at a P-value less than 0.05. Bland-Altman plots were inspected to visually assess the agreement between measurements from the different methods. Results: Forty-one patients were prospectively included. LVEF calculated by the various radionuclide methods showed good correlation with ERNV as follows: gated blood pool SPECT, r=0.92; MPI gated SPECT, r=0.85; and 18F-FDG gated PET, r=0.76. However, the correlation between 2D echocardiography and ERNV was poor (r=0.520). The Bland-Altman plot for LVEF measured by all radionuclide methods showed good agreement with ERNV. However, agreement between 2D echocardiography and ERNV was poor, as most of the values in this plot gave a negative difference for low EF and a positive difference for high EF.
The mean difference between the various techniques [2D echocardiography (a), gated blood pool SPECT (b), MPI gated SPECT (c), 18F-FDG gated PET (d)] and ERNV (e) was as follows: (a)-(e), 3.3; (b)-(e), 5; (c)-(e), 1.1; and (d)-(e), 2.9. The best correlation and agreement were found between MPI gated SPECT and ERNV. Conclusion: This study showed good correlation and agreement of MPI gated SPECT and 18F-FDG gated PET with ERNV for LVEF calculation in CAD patients with severe LV dysfunction. Thus, subjecting patients who undergo viability assessment by MPI gated SPECT or 18F-FDG gated PET to a separate procedure like ERNV for LVEF assessment may not be warranted. As gated blood pool SPECT also showed good correlation and agreement with ERNV for LVEF assessment in CAD patients with severe LV dysfunction, with better characteristics than ERNV, it can be routinely used whenever accurate LVEF assessment is needed. abstract_id: PUBMED:15094446 The value of a completely automatic ECG gated blood pool SPECT processing method for the estimation of global systolic left ventricular function. Objectives: Electrocardiographically gated blood pool SPECT (GBPS) is an interesting method for measuring left ventricular (LV) ejection fraction (LVEF) and volume. Recently, the availability of completely automatic GBPS processing software has been reported. We aimed to evaluate its reliability in measuring global LV systolic function. In addition, using the same population, we compared its reliability to that of three previously reported methods for processing GBPS. Methods: We studied the performance of the new GBPS system for the evaluation of LVEFs and volumes in 29 patients. The LVEF provided by planar equilibrium radionuclide angiography (planarLAO) and the LV volumes provided by radiological LV contrast angiography (X-rays) were used as 'gold standards'. Results: The new GBPS system failed in one patient.
It shows good reproducibility for the measurement of both LVEF and volume. LVEF provided by this system is moderately correlated to planarLAO (r = 0.62; P < 0.001). The new GBPS constantly overestimates LVEF (P < 0.05). Results for LV volumes are moderately correlated to those obtained by X-ray investigation (r = 0.7; P < 0.001) but are significantly lower (P < 0.0001). There is a linear correlation between the average and the paired absolute difference for LV volumes (r = 0.52, P = 0.0001). Conclusions: The new, completely automatic, GBPS processing software is an interesting, moderately reliable method for measuring LVEF and volume. The performance of the method is lower than that previously reported for the same population for the other three GBPS processing methods. abstract_id: PUBMED:15173775 Model dependence of gated blood pool SPECT ventricular function measurements. Background: Calculation differences between various gated blood pool (GBP) single photon emission computed tomography (SPECT) (GBPS) algorithms may arise as a result of different modeling assumptions. Little information has been available thus far regarding differences for right ventricular (RV) function calculations, for which GBPS may be uniquely well suited. Methods And Results: Measurements of QBS (Cedars-Sinai Medical Center, Los Angeles, Calif) and BP-SPECT (Columbia University, New York, NY) algorithms were evaluated. QBS and BP-SPECT left ventricular (LV) ejection fraction (EF) correlated strongly with conventional planar-GBP LVEF for 422 patients (r = 0.81 vs r = 0.83). QBS correlated significantly more strongly with BP-SPECT for LVEF than for RVEF (r = 0.80 vs r = 0.41). Both algorithms demonstrated significant gender differences for 31 normal subjects.
BP-SPECT normal LVEF (67% +/- 9%) was significantly closer to values in the magnetic resonance imaging (MRI) literature (68% +/- 5%) than QBS (58% +/- 9%), but both algorithms underestimated normal RVEF (52% +/- 7% and 50% +/- 9%) compared with the MRI literature (64% +/- 9%). For 21 patients, QBS correlated with MRI similarly to BP-SPECT for LVEF (r = 0.80 vs r = 0.85), but the RVEF correlation was significantly weaker (r = 0.47 vs r = 0.81). For 16 dynamic phantom simulations, QBS LVEF correlated similarly to BP-SPECT (r = 0.81 vs r = 0.91), but the QBS RVEF correlation was significantly weaker (r = 0.62 vs r = 0.82). Volumes were lower by QBS than by BP-SPECT for all data types. Conclusions: Both algorithms produced LV parameters that correlated strongly with all forms of image data, but all QBS RV relationships were significantly different from BP-SPECT RV relationships. Differences between the two algorithms were attributed to differences in their underlying ventricular modeling assumptions. abstract_id: PUBMED:12679390 Left ventricular ejection fraction and volumes from gated blood-pool SPECT: comparison with planar gated blood-pool imaging and assessment of repeatability in patients with heart failure. Unlabelled: Gated blood-pool SPECT (GBPS) has several potential advantages over planar radionuclide ventriculography (PRNV), including the possibility of greater repeatability of left ventricular ejection fraction (LVEF) and the noninvasive calculation of left ventricular end-systolic volume and left ventricular end-diastolic volume (LVEDV). The aim of this study was to assess the repeatability of LVEF and LVEDV from GBPS and to compare LVEF with that from PRNV. Methods: Fifty patients underwent PRNV and GBPS, 23 of whom also had repeated studies in the same session. GBPS studies were processed using the Cedars Sinai Quantitative Blood-Pool SPECT (QBS) software that automatically calculates LVEF and LVEDV.
Automatic processing with QBS was successful in 70% of the GBPS studies, with the remaining studies processed using the manual option in QBS. All PRNV studies were processed using a manual processing technique. Results: Comparison of LVEF from PRNV and GBPS yielded a correlation coefficient of 0.80. Bland-Altman analysis demonstrated a mean difference of 0.74% +/- 7.62% (mean +/- SD) between LVEF from the 2 techniques. The 95% limits of agreement are therefore -14.50% to +15.98%. The correlation between repeated measurements was 0.87 for GBPS and 0.95 for PRNV. Bland-Altman analysis revealed poorer repeatability for GBPS (95% limits of agreement, -9.63% to +14.97% vs. -4.66% to +5.92%; P = 0.003). The mean LVEDV was 198 +/- 94 mL, with a mean difference of 9 +/- 47 mL between repeated measurements. The 95% limits of agreement are therefore -85 to +103 mL. Conclusion: GBPS provides a less repeatable measurement of LVEF than PRNV. Repeatability of LVEDV measurements from GBPS is poor. abstract_id: PUBMED:11391308 Optimal 3-dimensional method for right and left ventricular fourier phase analysis in electrocardiography-gated blood-pool SPECT. Background: To overcome some of the limitations imposed by planar imaging, we aimed to optimize the use of first harmonic Fourier phase analysis (FPA) in electrocardiography-gated blood-pool single photon emission computed tomography (GBPS) by comparing different quantitative, 3-dimensional methods. Methods And Results: Three groups of patients who underwent GBPS were evaluated: group 1, 8 patients with no heart disease; group 2, 10 patients with left ventricular disease; and group 3, 6 patients with right ventricular disease. Six different methods for FPA were compared: surface, cylindrical, spherical, and hybrid methods with fixed thresholding and spherical and hybrid methods with multiple thresholding. 
The hybrid method with multiple thresholding for the left ventricle and the spherical method for the right ventricle provided the highest discrimination score (phase) between normal and abnormal ventricles. Among methods with similar discrimination scores, the cylindrical and hybrid methods for the left ventricle and the spherical method for the right ventricle provided the best homogeneity of the phase distribution histogram in normal ventricles. These were considered the optimal methods for FPA. Conclusions: The hybrid or cylindrical method for the left ventricle and the spherical method for the right ventricle with fixed thresholding are the optimal methods for FPA in GBPS. abstract_id: PUBMED:11752072 Planar imaging versus gated blood-pool SPECT for the assessment of ventricular performance: a multicenter study. Unlabelled: Gated blood-pool SPECT (GBPS), inherently 3-dimensional (3D), has the potential to replace planar equilibrium radionuclide angiography (ERNA) for computation of left ventricular ejection fraction (LVEF), analysis of regional wall motion (RWM), and analysis of right heart function. The purpose of this study was to compare GBPS and ERNA for the assessment of ventricular function in a large, multicenter cohort of patients. Methods: One hundred seventy-eight patients referred in the usual manner for nuclear medicine studies underwent ERNA followed by GBPS. Each clinical site followed a GBPS acquisition protocol that included 180 degrees rotation, a 64 by 64 matrix, and 64 or 32 views using single- or double-head cameras. Transverse GBPS images were reconstructed with a Butterworth filter (cutoff frequency, 0.45-0.55 Nyquist; order, 7), and short-axis images were created. All GBPS studies were processed with a new GBPS program, and LVEF was computed from the isolated left ventricular chamber and compared with standard ERNA LVEF.
Reproducibility of GBPS LVEF was evaluated, and right ventricular ejection fraction (RVEF) was computed in a subset of patients (n = 33). Using GBPS, RWM and image quality from 3D surface-shaded and volume-rendered cine displays were evaluated qualitatively in a subset of patients (n = 30). Results: The correlation between GBPS LVEF and planar LVEF was excellent (r = 0.92). Mean LVEF was 62.2% for GBPS and 54.1% for ERNA. The line of linear regression was GBPS LVEF = (1.04 x ERNA LVEF) + 6.1. Bland-Altman plotting revealed an increasing bias in GBPS LVEF with increasing LVEF (Y = 0.13x + 0.61; r = 0.30; mean difference = 8.1% +/- 7.0%). Interoperator reproducibility of GBPS LVEF was good (r = 0.92). RVEF values averaged 59.8%. RWM assessment using 3D cine display was enhanced in 27% of the studies, equivalent in 67%, and inferior in 7%. Conclusion: GBPS LVEF was reproducible and correlated well with planar ERNA. GBPS LVEF values were somewhat higher than planar ERNA, likely because of the exclusion of the left atrium. abstract_id: PUBMED:24844547 Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras. Objective: To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. Methods: The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. Results: The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001).
The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, by operator 2 ≤ 2.1, and by operator 3 ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0, and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). Conclusion: Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results. Answer: Yes, TOMPOOL gated blood-pool SPECT processing software is accurate for diagnosing right and left ventricular dysfunction in a clinical setting. The diagnostic accuracy and inter-observer reproducibility of TOMPOOL were assessed in a cohort of patients with various etiologies of ventricular dysfunction. The study found that measurements of end-diastolic volume (EDV), ejection fraction (EF), and cardiac output (CO) for both the right ventricle (RV) and the left ventricle (LV) using TOMPOOL were reproducible and correlated well with cardiac magnetic resonance imaging (CMR) and thermodilution, which are considered reference standards for these measurements. The Spearman's rho correlation coefficients were r = 0.73/0.80 and 0.67/0.73 for right/left EF and EDV, respectively, and r = 0.57 for right CO. The areas under the ROC curve for diagnosing ventricular dysfunction or enlargement were 0.79 for RVEF, 0.91 for LVEF, 0.83 for RVEDV, and 0.99 for LVEDV, indicating good diagnostic performance.
The study concluded that while TOMPOOL is accurate, thresholds must be adjusted for optimal diagnostic use (PUBMED:24875579).
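The cut-offs quoted above (e.g. a 62% LVEF threshold with 100% sensitivity and 90% specificity) and the Bland-Altman agreement analyses used throughout these abstracts rest on standard computations. As an illustration only, the following Python sketch shows one common way such a cut-off can be selected, by maximizing Youden's J statistic, alongside the Bland-Altman mean difference and 95% limits of agreement; every patient value in it is synthetic, not data from the cited studies.

```python
from statistics import mean, stdev

def youden_cutoff(values, labels):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    labels: 1 = dysfunction (low EF expected), 0 = normal.
    A patient is called positive (dysfunction) when value < cutoff."""
    best = None
    for c in sorted(set(values)):
        # Confusion-matrix counts for the rule "positive if value < c".
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v < c)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v >= c)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v >= c)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v < c)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best[1], best[2], best[3]

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement (mean +/- 1.96 * SD)."""
    diffs = [x - y for x, y in zip(a, b)]
    m, s = mean(diffs), stdev(diffs)
    return m, m - 1.96 * s, m + 1.96 * s

# Synthetic LVEFs (%): the dysfunction group clusters below the normals.
ef = [30, 40, 45, 50, 55, 58, 60, 63, 65, 70]
dysfunction = [1, 1, 1, 1, 1, 0, 1, 0, 0, 0]
cutoff, sens, spec = youden_cutoff(ef, dysfunction)
print(f"cut-off < {cutoff}%: sensitivity {sens:.0%}, specificity {spec:.0%}")

# Synthetic paired LVEF measurements from two hypothetical methods.
method_a = [50, 55, 60, 65]
method_b = [48, 54, 61, 62]
bias, lo, hi = bland_altman_limits(method_a, method_b)
print(f"bias {bias:+.2f}%, limits of agreement [{lo:.2f}%, {hi:.2f}%]")
```

A real analysis would report the full ROC curve and its AUC (as the TOMPOOL study does) using a library such as scikit-learn or SciPy; this stdlib-only version simply makes the arithmetic behind the reported sensitivity/specificity pairs and limits of agreement concrete.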