Instruction: Hospital admissions for ischemic stroke: does long-term exposure to air pollution interact with major risk factors? Abstracts: abstract_id: PUBMED:21196728 Hospital admissions for ischemic stroke: does long-term exposure to air pollution interact with major risk factors? Background: The aim was to investigate whether the effects of major risk factors for ischemic stroke were modified by long-term exposure to air pollution in Scania, southern Sweden. Methods: Cases were defined as first-ever ischemic strokes in patients born between 1923 and 1965 during 2001-2006 (n = 7,244). Data were collected from The Swedish National Stroke Register (Riks-stroke) and the Malmö and Lund Stroke Registers. Population controls were matched on age and sex. Modeled outdoor annual mean NOx concentrations were used as a proxy for long-term exposure to air pollution. Heterogeneity across NOx categories was tested for smoking, hypertension, diabetes mellitus, atrial fibrillation and physical inactivity. Data were analyzed as case-control data and to some extent as case-only data, with logistic regression analysis. Results: The case-control odds ratios for ischemic stroke in association with diabetes were 1.3 [95% confidence interval (CI): 1.1-1.6] and 2.0 (95% CI: 1.2-3.4) in the lowest and highest NOx category, respectively (p value for testing heterogeneity across the categories = 0.056). The case-only approach gave further support for an increase in the diabetes-associated risk with increasing NOx (p for trend = 0.033). We observed no main effect of mean NOx or any conclusive effect modifications between NOx and smoking, hypertension, atrial fibrillation or physical inactivity. Conclusions: In a low-level air pollution area, the risk for ischemic stroke associated with diabetes seemed to increase with long-term exposure to air pollution. abstract_id: PUBMED:28930181 Ambient Air Pollution and Risk for Ischemic Stroke: A Short-Term Exposure Assessment in South China. Data on the association between air pollution and risk of ischemic stroke in China are still limited. This study aimed to investigate the association between short-term exposure to ambient air pollution and risk of ischemic stroke in Guangzhou, the most densely populated city in south China, using a large-scale multicenter database of stroke hospital admissions. Daily counts of ischemic stroke admissions over the study years 2013-2015 were obtained from the Guangzhou Cardiovascular and Cerebrovascular Disease Event Surveillance System. Daily particulate matter <2.5 μm in diameter (PM2.5), sulfur dioxide (SO₂), nitrogen dioxide (NO₂), ozone (O₃), and meteorological data were collected. The associations between air pollutants and hospital admissions for stroke were examined using relative risks (RRs) and their corresponding 95% confidence intervals (CIs) based on time-series Poisson regression models, adjusting for temperature, public holiday, day of week, and temporal trends in stroke. Ischemic stroke admissions increased from 27,532 to 35,279 between 2013 and 2015, an increase of 28.14%. Parameter estimates for NO₂ exposure were robust regardless of the model used. The association between same-day NO₂ (RR = 1.0509, 95% CI: 1.0353-1.0668) exposure and stroke risk was significant when accounting for other air pollutants, day of the week, public holidays, temperature, and temporal trends in stroke events.
Overall, we observed a borderline significant association between NO₂ exposure modeled as an averaged lag effect and ischemic stroke risk. This study provides data on air pollution exposures and stroke risk, and contributes to better planning of clinical services and emergency contingency response for stroke. abstract_id: PUBMED:20068360 Estimation of short-term effects of air pollution on stroke hospital admissions in southern Sweden. Background: Short-term exposure to high levels of air pollution can increase stroke risk. In this study we investigated the short-term effects of air pollution on hospital admissions for stroke in a setting where pollutant levels are rather low. We also addressed methodological issues in evaluating the short-term effects of air pollution. Methods: Daily admissions of ischemic (n = 11,267) and hemorrhagic (n = 1,681) stroke were obtained from a Swedish quality register for stroke, Riks-Stroke. We used two types of exposure data: (1) daily measured background levels of ozone, temperature and particles with a diameter <10 μm (PM10) and (2) modeled levels of a mixture of NO and NO2 (NOx) at the residential address of each individual. Results: We estimated a 13% (95% confidence interval, 4-22%) increased risk for hospital admissions for ischemic stroke for levels of PM10 above 30 μg/m³ compared to <15 μg/m³, whereas temperature above 16 °C decreased the risk. No consistent associations were found for hemorrhagic stroke or for ischemic stroke and ozone or NOx. Conclusion: Particulate air pollution and temperature seemed to be associated with ischemic stroke hospital admissions. Individual exposure modeling facilitates a detailed exposure assessment but may also be more prone to misclassification errors. The time series and case crossover approaches yielded similar effect estimates. abstract_id: PUBMED:24692477 Cardiovascular emergency hospital visits and hourly changes in air pollution. Background And Purpose: Few studies have examined the effect of hourly changes in air pollution on cardiovascular disease morbidity. We evaluated the associations between hourly changes in air pollution and the risks of several types of cardiovascular disease. Methods: We used a time-stratified case-crossover design. Study participants were 10,949 residents of the city of Okayama, Japan, aged ≥65 years who were taken to hospital emergency rooms between January 2006 and December 2010 for onset of cardiovascular disease. We calculated city-representative hourly average concentrations of air pollutants from several monitoring stations and examined the associations between air pollution exposure before the case event, focusing mainly on suspended particulate matter, and disease onset. Results: Suspended particulate matter exposure 0 to <6 hours before the case events was associated with risks of onset of cardiovascular and cerebrovascular disease; odds ratios after 1 interquartile range increase in suspended particulate matter exposure were 1.04 (95% confidence interval, 1.01-1.06) for cardiovascular disease and 1.04 (95% confidence interval, 1.00-1.08) for cerebrovascular disease. We observed an elevated risk of hemorrhagic as well as ischemic stroke, but the risk was slightly higher for hemorrhagic stroke, and this elevation was persistent. Women tended to have higher effect estimates.
Conclusions: This study provides further evidence that particulate matter exposure increases the risks of onset of cardiovascular and cerebrovascular disease (including hemorrhagic stroke) shortly after exposure. abstract_id: PUBMED:35701697 Short-term associations of ambient air pollution with hospital admissions for ischemic stroke in 97 Japanese cities. The short-term association between ambient air pollution and hospital admissions for ischemic stroke is not fully understood. We examined the association between four regularly measured major ambient air pollutants, i.e., sulfur dioxide (SO2), nitrogen dioxide (NO2), photochemical oxidants (Ox), and particulate matter with aerodynamic diameters ≤ 2.5 μm (PM2.5), and hospital admissions for ischemic stroke by analyzing 3 years of nationwide claims data from 97 cities in Japan. We first estimated city-specific results by using generalized additive models with a quasi-Poisson regression, and we obtained the national average by combining city-specific results with the use of random-effects models. We identified a total of 335,248 hospital admissions for ischemic stroke during the 3-year period. Our analysis results demonstrated that interquartile range increases in the following four ambient air pollutants were significantly associated with hospital admissions for ischemic stroke on the same day: SO2 (1.05 ppb), 1.05% (95% CI: 0.59-1.50%); NO2 (6.40 ppb), 1.10% (95% CI: 0.61-1.59%); Ox (18.32 ppb), 1.43% (95% CI: 0.81-2.06%); and PM2.5 (7.86 μg/m³), 0.90% (95% CI: 0.35-1.45%). When the data were stratified by the hospital admittees' medication use, we observed stronger associations with SO2, NO2, and PM2.5 among the patients who were taking antihypertensive drugs and weaker associations with SO2, NO2, and Ox among those taking antiplatelet drugs. Short-term exposure to ambient air pollution was associated with increased hospital admissions for ischemic stroke, and medication use and season may modify the association. abstract_id: PUBMED:28711336 Short-term air pollution exposure and cardiovascular events: A 10-year study in the urban area of Como, Italy. Background: In recent years, several studies have investigated the effects of some air pollutants on cardiovascular diseases (CVD), but the results are not conclusive. The aim of this study was to investigate the association between short-term exposure to ambient air pollution and CVD events in a long-term observational period. Methods: All CVD events (myocardial infarction - MI and ischemic stroke - IS) which occurred in residents of Como between January 2005 and December 2014 were included. Demographics and main vascular risk factors were registered for each patient. Principal meteorological variables and four ambient pollutants (O3, SO2, NO2, PM10) were recorded. Statistical analysis was performed using linear regression (GLM) and a generalized additive model (GAM) estimating Poisson distribution and adjusted for the main risk factors and ambient meteorological variables. Results: A total of 4110 events were registered, with a slight predominance of MI (51.6%). In GLM analysis we observed a continuative association between CVD events and SO2 (p<0.05), while PM10 was associated with CVD occurrences at two-day lag time (p<0.05). Using GAM we observed a positive association between CVD and PM10 on the same day and at three- and four-day lag time with HRs ranging between 1.025 and 1.039 (p<0.001). These results were observed for both MI and IS, with an earlier effect for MI than for IS.
Conclusions: These data confirm a possible association between some ambient air pollutants and CVD events, namely MI and IS, with different temporal and cumulative effects. abstract_id: PUBMED:32777785 Short-Term Air Pollution as a Risk for Stroke Admission: A Time-Series Analysis. Background: The harmful effects of outdoor air pollution on stroke incidence are becoming increasingly recognised. We examined the impact of different air pollutants (PM2.5, PM10, NO2, ozone, and SO2) on admission for all strokes in two Irish urban centres from 2013 to 2017. Methods: Using an ecological time series design with Poisson regression models, we analysed daily hospitalisation for all strokes and ischaemic stroke by residence in Dublin or Cork, with air pollution level monitoring data with a lag of 0-2 days from exposure. Splines of temperature, relative humidity, day of the week, and time were included as confounders. Analysis was also performed across all four seasons. Data are presented as relative risks (RRs) and 95% confidence intervals (95% CI) per interquartile range (IQR) increase in each pollutant. Results: There was no significant association between all-stroke admissions and any individual air pollutant. On seasonal analysis, during winter in the larger urban centre (Dublin), we found an association between all stroke cases and an IQR increase in NO2 (RR 1.035, 95% CI: 1.003-1.069), PM10 (RR 1.032, 95% CI: 1.007-1.057), PM2.5 (RR 1.024, 95% CI: 1.011-1.039), and SO2 (RR 1.035, 95% CI: 1.001-1.071). There was no significant association found in the smaller urban area of Cork. On meta-analysis, there remained a significant association between NO2 (RR 1.013, 95% CI: 1.001-1.024) and PM2.5 (1.009, 95% CI 1.004-1.014) per IQR increase in each. Discussion: Short-term air pollution in winter was found to be associated with hospitalisation for all strokes in a large urban centre in Ireland. As Ireland has relatively low air pollution internationally, this highlights the need to introduce policy changes to reduce air pollution in all countries. abstract_id: PUBMED:36990953 Air pollution impacts on in-hospital case-fatality rate of ischemic stroke patients. Background: A growing body of evidence suggests that air pollution exposure is associated with an increased risk for cardiovascular diseases. Data regarding the impact of long-term air pollution exposure on ischemic stroke mortality are sparse. Methods: The German nationwide inpatient sample was used to analyse all cases of hospitalized patients with ischemic stroke in Germany 2015-2019, which were stratified according to their residency. Data from the German Federal Environmental Agency regarding average values of air pollutants were assessed from 2015 to 2019 at district level. Data were combined and the impact of different air pollution parameters on in-hospital case-fatality was analyzed. Results: Overall, 1,505,496 hospitalizations of patients with ischemic stroke (47.7% females; 67.4% ≥70 years old) were counted in Germany 2015-2019, of whom 8.2% died during hospitalization. When comparing patients with residency in federal districts with high vs.
low long-term air pollution, enhanced levels of benzene (OR 1.082 [95% CI 1.034-1.132], P = 0.001), ozone (O3, OR 1.123 [95% CI 1.070-1.178], P < 0.001), nitric oxide (NO, OR 1.076 [95% CI 1.027-1.127], P = 0.002) and PM2.5 fine particulate matter concentrations (OR 1.126 [95% CI 1.074-1.180], P < 0.001) were significantly associated with increased case-fatality independently of age, sex, cardiovascular risk factors, comorbidities, and revascularization treatments. Conversely, enhanced carbon monoxide, nitrogen dioxide, PM10, and sulphur dioxide (SO2) concentrations were not significantly associated with stroke mortality. However, SO2 concentrations were significantly associated with a stroke case-fatality rate of >8% independent of residence area type and area use (OR 1.518 [95% CI 1.012-2.278], P = 0.044). Conclusion: Elevated long-term air pollution levels in residential areas in Germany, notably of benzene, O3, NO, SO2, and PM2.5, were associated with increased stroke mortality of patients. Research In Context: Evidence before this study: Besides typical, established risk factors, increasing evidence suggests that air pollution is an important and growing risk factor for stroke events, estimated to be responsible for approximately 14% of all stroke-associated deaths. However, real-world data regarding the impact of long-term exposure to air pollution on stroke mortality are sparse. Added value of this study: The present study demonstrates that long-term exposure to the air pollutants benzene, O3, NO, SO2, and PM2.5 is independently associated with increased case-fatality of hospitalized patients with ischemic stroke in Germany. Implications of all the available evidence: The results of our study support the urgent need to reduce the exposure to air pollution by tightening emission controls to reduce the stroke burden and stroke mortality. abstract_id: PUBMED:23208275 Long-term exposure to traffic-related air pollution and the risk of death from hemorrhagic stroke and lung cancer in Shizuoka, Japan. A number of studies have linked exposure to long-term outdoor air pollution with cardiopulmonary disease; however, the evidence for stroke is limited. Furthermore, evidence regarding the risk for lung cancer (LC) is still inconsistent. We, therefore, evaluated the association between long-term exposure to traffic-related air pollution and cause-specific mortality. Individual data were extracted from participants of an ongoing cohort study in Shizuoka, Japan. A total of 14,001 elderly residents completed questionnaires and were followed from December 1999 to January 2009. Annual individual nitrogen dioxide (NO2) exposure data, as an index for traffic-related exposure, were modeled using a Land Use Regression model and assigned to the participants. We then estimated the adjusted hazard ratios (HRs) and their confidence intervals (CIs) associated with a 10 μg/m³ elevation in NO2 for all-cause or cause-specific mortality using time-varying Cox proportional hazards models. We found positive associations of NO2 levels with all-cause (HR=1.12, 95% CI: 1.07-1.18), cardiopulmonary disease (HR=1.22, 95% CI: 1.15-1.30), and LC mortality (HR=1.20, 95% CI: 1.03-1.40). Among cardiopulmonary disease mortality, not only the risk for ischemic heart disease (HR=1.27, 95% CI: 1.11-1.47) but also the risks for stroke were elevated: intracerebral hemorrhage (HR=1.28, 95% CI: 1.05-1.57) and ischemic stroke (HR=1.20, 95% CI: 1.04-1.39).
The present study supports the existing evidence that long-term exposure to traffic-related air pollution increases the risk of cardiopulmonary as well as LC mortality, and provides additional evidence for adverse effects on intracerebral hemorrhage as well as ischemic stroke. abstract_id: PUBMED:34390672 Ambient air pollution and the risk of ischaemic and haemorrhagic stroke. Stroke is a leading cause of disability and the second most common cause of death worldwide. Increasing evidence suggests that air pollution is an emerging risk factor for stroke. Over the past decades, air pollution levels have continuously increased and are now estimated to be responsible for 14% of all stroke-associated deaths. Interpretation of previous literature is difficult because stroke was usually not distinguished as ischaemic or haemorrhagic, nor by cause. This Review summarises the evidence on the association between air pollution and the different causes of ischaemic stroke and haemorrhagic stroke, to clarify which people are most at risk. The risk for ischaemic stroke is increased after short-term or long-term exposure to air pollution. This effect is most pronounced in people with cardiovascular burden and stroke due to large artery disease or small vessel disease. Short-term exposure to air pollution increases the risk of intracerebral haemorrhage, a subtype of haemorrhagic stroke, whereas the effects of long-term exposure are less clear. Limitations of the current evidence are that studies are prone to misclassification of exposure, often rely on administrative data, and have insufficient clinical detail. In this Review, we provide an outlook on new research opportunities, such as those provided by the decreased levels of air pollution due to the current COVID-19 pandemic. Answer: Yes, long-term exposure to air pollution does interact with major risk factors for ischemic stroke. A study conducted in Scania, southern Sweden, suggested that the risk for ischemic stroke associated with diabetes seemed to increase with long-term exposure to air pollution, as indicated by modeled outdoor annual mean NOx concentrations used as a proxy for long-term exposure to air pollution (PUBMED:21196728). This finding was supported by logistic regression analysis, which showed that the case-control odds ratios for ischemic stroke in association with diabetes were higher in the highest NOx category compared to the lowest. The case-only approach further supported the risk associated with diabetes to increase with NOx (p for trend = 0.033). Moreover, other studies have found associations between air pollution and the risk of ischemic stroke. For instance, a study in South China found that short-term exposure to ambient air pollution, particularly NO2, was associated with an increased risk of ischemic stroke hospital admissions (PUBMED:28930181). Similarly, a study in southern Sweden estimated a 13% increased risk for hospital admissions for ischemic stroke associated with higher levels of PM10 compared to lower levels (PUBMED:20068360). In Japan, short-term associations of ambient air pollution with hospital admissions for ischemic stroke were observed, with significant associations found for SO2, NO2, Ox, and PM2.5 (PUBMED:35701697). Furthermore, a study in Germany found that elevated long-term air pollution levels, notably of benzene, O3, NO, SO2, and PM2.5, were associated with increased stroke mortality of patients hospitalized with ischemic stroke (PUBMED:36990953). 
This suggests that not only does air pollution interact with risk factors for ischemic stroke, but it also impacts the case-fatality rate of ischemic stroke patients. In summary, evidence from multiple studies indicates that long-term exposure to air pollution interacts with major risk factors for ischemic stroke and can increase the risk and severity of the condition.
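Several of the studies summarized above (e.g., PUBMED:28930181 and PUBMED:32777785) estimate relative risks from time-series Poisson regressions of daily admission counts on pollutant concentrations, adjusted for temperature, day of week, holidays, and a temporal trend. The following is a minimal illustrative sketch of that type of model on synthetic data; the column names, trend terms, and robust-variance choice are assumptions made for demonstration and do not reproduce any of the original analyses.

```python
# Illustrative sketch: time-series Poisson regression relating daily
# ischemic-stroke admission counts to an air pollutant (here NO2), adjusted
# for temperature, day of week, holidays, and a smooth time trend.
# Synthetic data and hypothetical variable names; not the original analyses.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_days = 3 * 365
df = pd.DataFrame({
    "day": np.arange(n_days),
    "no2": rng.gamma(shape=4.0, scale=10.0, size=n_days),          # ug/m3
    "temp": 20 + 8 * np.sin(2 * np.pi * np.arange(n_days) / 365)
            + rng.normal(0, 2, n_days),                             # deg C
    "holiday": rng.binomial(1, 0.03, n_days),
})
df["dow"] = df["day"] % 7
# Simulate admission counts with a small positive NO2 effect.
log_mu = 3.0 + 0.002 * df["no2"] - 0.01 * df["temp"]
df["admissions"] = rng.poisson(np.exp(log_mu))

# Poisson GLM with confounders and a quadratic time trend; robust standard
# errors are used here as a simple safeguard against overdispersion.
model = smf.glm(
    "admissions ~ no2 + temp + C(dow) + holiday + day + I(day**2)",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC0")

iqr = df["no2"].quantile(0.75) - df["no2"].quantile(0.25)
rr_per_iqr = np.exp(model.params["no2"] * iqr)
ci = np.exp(model.conf_int().loc["no2"] * iqr)
print(f"RR per IQR ({iqr:.1f} ug/m3) increase in NO2: "
      f"{rr_per_iqr:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```

The printed quantity mirrors how several of the abstracts report their effects: a relative risk per interquartile-range increase in the pollutant, with a 95% confidence interval.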
Instruction: Does the Miller blade truly provide a better laryngoscopic view and intubating conditions than the Macintosh blade in small children? Abstracts: abstract_id: PUBMED:24690084 Does the Miller blade truly provide a better laryngoscopic view and intubating conditions than the Macintosh blade in small children? Background: Both Miller and Macintosh blades are widely used for laryngoscopy in small children, though the Miller blade is more commonly recommended in pediatric anesthetic literature. The aim of this study was to compare laryngoscopic views and ease and success of intubation with Macintosh and Miller blades in small children under general anesthesia. Materials And Method: One hundred and twenty children aged 1-24 months were randomized for laryngoscopy to be performed in a crossover manner with either the Miller or the Macintosh blade first, following induction of anesthesia and neuromuscular blockade. The tips of both the blades were placed at the vallecula. Intubation was performed following the second laryngoscopy. The glottic views with and without external laryngeal maneuver (ELM) and ease of intubation were observed. Results: Similar glottic views with both blades were observed in 52/120 (43%) children, a better view observed with the Miller blade in 35/120 (29%) children, and with the Macintosh blade in 33/120 (28%). Laryngoscopy was easy in 65/120 (54%) children with both the blades. Restricted laryngoscopy was noted in 55 children: in 27 children with both the blades, 15 with Miller, and 13 with Macintosh blade. Laryngoscopic view improved following ELM with both the blades. Conclusion: In children aged 1-24 months, the Miller and the Macintosh blades provide similar laryngoscopic views and intubating conditions. When a restricted view is obtained, a change of blade may provide a better view. Placing the tip of the Miller blade in the vallecula provides satisfactory intubating conditions in this age group. abstract_id: PUBMED:33072220 Comparison of the Laryngoscopic View using Macintosh and Miller Blades in Children Less than Four Years Old. This study aimed to compare Miller and Macintosh laryngoscopes in zero to 4-year-old children. A total of 72 children with a score of I and II, according to the American Society of Anesthesiologists (ASA) physical status classification, who were candidates for elective surgery with general anesthesia and tracheal intubation were enrolled in the study. The children were divided into two equal groups (36 persons) according to used laryngoscope: Miller laryngoscope (group 1) and Macintosh laryngoscope (group 2). Observations and all laryngoscopies were performed by a single experienced anesthesiologist. Heart rate, systolic blood pressure, non-invasive arterial blood pressure, and hemoglobin saturation were measured and recorded. The number of endotracheal intubation attempts and complications were also recorded for both groups. In terms of gender, the first group consisted of 88.9% boys and 11.1% girls, and the second group consisted of 66.6% boys and 33.3% girls (p-value=0.05). The mean age was 16.7 months in the first group and 17.7 months in the second group (p-value=0.5). The mean weight of the children was 16988.5 g and 16300 g in the Miller and Macintosh groups, respectively (p-value=0.9). Regarding the Cormack-Lehane classification system, 5 patients were classified as grade 1 (13.9%), 14 patients as grade 2 (38.9%), 15 patients as grade 3 (41.7%), and 2 patients as grade 4 (5.6%) in the Macintosh group. 
In contrast, in the Miller group, 5 patients were classified as grade 1 (13.9%), 27 patients as grade 2 (75%), and 4 patients as grade 3 (11.1%) (p-value=0.004). These results can provide more data about the tracheal intubation method with the Macintosh and Miller laryngoscopes, the ease of intubation, and the best laryngoscopic view with each blade. abstract_id: PUBMED:28974373 Effect of the Macintosh curved blade size on direct laryngoscopic view in edentulous patients. Objective: In the present study, we compared the laryngoscopic view depending on the size of the Macintosh curved blade in edentulous patients. Methods: Thirty-five edentulous adult patients scheduled for elective surgery were included in the study. After induction of anesthesia, two direct laryngoscopies were performed alternately using a standard-sized Macintosh curved blade (No. 4 for men and No. 3 for women) and smaller-sized Macintosh curved blade (No. 3 for men and No. 2 for women). During direct laryngoscopy with each blade, two digital photographs of the lateral view were taken when the blade tip was placed in the valleculae; the laryngoscope was lifted to achieve the best laryngeal view. Then, the best laryngeal views were assessed using the percentage of glottic opening (POGO) score. On the photographs of the lateral view of direct laryngoscopy, the angles between the line extending along the laryngoscopic handle and the horizontal line were measured. Results: The POGO score was improved with the smaller-sized blade compared with the standard-sized blade (87.3% [11.8%] vs. 71.3% [20.0%], P<0.001, respectively). The angles between the laryngoscopic handle and the horizontal line were greater with the smaller-sized blade compared to the standard-sized blade when the blade tip was placed on the valleculae and when the laryngoscope was lifted to achieve the best laryngeal view (both P<0.001). Conclusions: Compared to a standard-sized Macintosh blade, a smaller-sized Macintosh curved blade improved the laryngeal exposure in edentulous patients. abstract_id: PUBMED:38337375 A Comparison of Miller Straight Blade and Macintosh Blade Laryngoscopes for Intubation in Morbidly Obese Patients. The primary objective of this study was to demonstrate whether the Miller blade laryngoscope could provide better visualization of the vocal cords in morbidly obese patients than the Macintosh blade laryngoscope. The secondary objective was to identify the patient-measured factors associated with better visualization of the vocal cords when using the Miller vs. Macintosh blade, as well as whether the application of external pressure might improve the visibility of the glottis during intubation. A prospective, observational study encompassing 110 patients with a BMI > 40 undergoing elective bariatric surgery and intubation procedure was performed. The evaluation of the vocal cords was performed according to the Cormack-Lehane scale and POGO scale in the same patient during intubation, performed with a Miller and a Macintosh blade laryngoscope, in a random matter. The following parameters were assessed: body weight, height, BMI, neck circumference, thyromental distance, sternomental distance, mouth opening, and Mallampati scale and their impact on visualization of the vocal cords using the Miller blade without the application of external pressure. 
The Miller blade provides an improved view of the glottis compared to the Macintosh blade measured with both the Cormack-Lehane scale (45 (40.91%) without external pressure application on the larynx, and 18 (16.36%) with external pressure application on the larynx) and the POGO scale (45 (40.91%) without external pressure application on the larynx, and 19 (17.27%) with external pressure application on the larynx). The application of laryngeal pressure improved the view of the glottis. Among the measured features, a significant improvement in the visibility of the glottis could be found in patients with a BMI over 44.244 kg/m² and a neck circumference over 46 cm. To conclude, the use of the Miller blade improves the visibility of the glottis compared to the Macintosh blade in morbidly obese patients. The recommendation to use the Miller blade in this group of patients requires further investigation, taking into account the effectiveness of the intubation. Trial Registration: NCT05494463. abstract_id: PUBMED:30745607 Comparison between Macintosh, Miller and McCoy laryngoscope blade size 2 in paediatric patients - A randomised controlled trial. Background And Aims: The paediatric airway needs special consideration as it is not a miniature replica of the adult airway; rather, it has a different anatomy with different proportions and angulations. This study was conducted with the aim of finding a laryngoscope blade that provides the best laryngoscopic and intubation conditions in paediatric patients aged 2-6 years. Methods: This trial was conducted in a total of 75 children aged 2-6 years, of either gender, with American Society of Anesthesiologists grade I or II scheduled for elective surgery under general anaesthesia. They were randomly allocated to groups A, B and C to be intubated with Macintosh, Miller and McCoy blades, respectively. Intubation Difficulty Score (IDS) was considered as the primary outcome, and Cormack-Lehane grade and Percentage of Glottic Opening (POGO) score were taken as secondary outcomes. Data were compared by ANOVA, Kruskal-Wallis or chi-square test using Statistica, SPSS and GraphPad Prism software. P < 0.05 was considered statistically significant. Results: IDS score was significantly lower (P = 0.002) in group B (0.6 ± 0.7) as compared to group A (1.4 ± 0.9) and group C (1.3 ± 1.1); the majority of patients in group B (48%) had Cormack-Lehane grade I (P = 0.002) unlike group A (0%) and group C (20%), and the POGO score (P < 0.001) was higher in group B (86 ± 23.4) when compared with groups A (68.2 ± 20.5) and C (59.8 ± 28.9). Haemodynamic changes and other intubation parameters were comparable among the groups. Conclusion: The Miller blade may be considered superior to Macintosh and McCoy blades in terms of glottic visualisation and ease of intubation in paediatric patients.
Aim: This study aimed to compare laryngoscopic view and ease of intubation (EOI) using McCoy blade elevated tip and Miller's straight blade paraglossal technique. Materials And Methods: A prospective single-blind study was conducted on 170 patients undergoing elective surgery under general anesthesia. They were randomly allotted to two groups. In Group A, laryngoscopy was performed by Miller's blade paraglossal approach, whereas in Group B, laryngoscopy was performed by McCoy blade with an elevated tip. Laryngeal view was graded using the modified Cormack-Lehane grading, and EOI was graded using EOI score. These were compared with preoperative intubation prediction score. Statistical analysis was done using "Medcalc" version 19.0.3. Numerical and categorical data were analyzed by Student's t-test and Chi-square test, respectively. A P < 0.05 was considered statistically significant. Results: The paraglossal approach with Miller's blade offered better laryngoscopic view as compared with McCoy blade with an elevated tip in normal (54.1% vs. 25.9%) and difficult airway (44.7% vs. 11.8%). Tracheal intubation was easier with McCoy blade with an elevated tip although the success rate of intubation improved with the assistance of a bougie with Miller's straight blade paraglossal approach. Conclusion: The laryngeal view was significantly better with the paraglossal approach of Miller's straight blade even in difficult airway. McCoy blade with an elevated tip was also found to be a useful tool to have in difficult airway, as EOI is significantly higher. The study also highlights the usefulness of adjuncts such as a gum elastic bougie while intubating. abstract_id: PUBMED:35698670 Comparison of Hemodynamic Response to Laryngoscopy Using Miller and McCoy Blade. Background: The most commonly used equipment to aid endotracheal intubation is a laryngoscope, and the procedure performed is known as laryngoscopy, which leads to profound cardiovascular effects. The process of laryngoscopy causes the release of catecholamines, thereby leading to marked pressor responses and tachycardia. The process of laryngoscopy can be made easier by the use of various types of laryngoscopic blades. The McCoy blade is a modification of the standard Macintosh blade that incorporates a hinged tip blade. It allows elevation of the epiglottis while decreasing overall laryngeal movement. A Miller blade is a straight blade with a slight upward curve near the tip. It is found that the force exerted, head extension, and cervical spine movement are less with the Miller blade. This study was undertaken to compare changes in haemodynamic parameters before, during, and after laryngoscopy using these two blades. Materials And Methods: Following institutional ethical committee approval and obtaining informed written consent, 100 patients of American Society of Anesthesiologists (ASA) grades I and II in the age group of 18-45 years of either sex undergoing elective surgeries under general anaesthesia were included in the study. The patients were randomly allocated into two groups of 50 patients each. Group Mc - laryngoscopy was performed using a no. 3 McCoy blade. Group Ml - laryngoscopy was performed using a no. 2 Miller blade. The laryngoscopic view was compared using Cormack and Lehane grading. Haemodynamic parameters before, during, and after laryngoscopy were recorded. 
Results: Hemodynamic parameters including heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) were increased in both groups, but the increases were statistically and clinically significant in the Miller group (p≤0.001). Conclusion: The McCoy blade is associated with a significantly more stable hemodynamic response to laryngoscopy in comparison with the Miller blade. abstract_id: PUBMED:31920236 Comparison of the C-MAC video laryngoscope size 2 Macintosh blade with size 2 C-MAC D-Blade for laryngoscopy and endotracheal intubation in children with simulated cervical spine injury: A prospective randomized crossover study. Background And Aims: The C-MAC video laryngoscope size 2 D-Blade has been recently introduced for management of the pediatric difficult airway. Our primary outcome was to compare glottic view, intubation time, and ease of intubation with the size 2 Macintosh versus D-Blade of the C-MAC video laryngoscope in simulated cervical injury in children. Material And Methods: This randomized crossover study was conducted in a tertiary care hospital of Northern India. Forty children of 4-14 years of age were enrolled in this study. After induction of anesthesia, video laryngoscopy was performed either with the size 2 C-MAC Macintosh (group M) or D-Blade (group D) with manual in-line stabilization. After removal of the first blade, a second video laryngoscopy was performed with the alternative blade. Endotracheal intubation was done with the second laryngoscopy. Best glottic view, time for best glottic view, and difficulty in blade insertion were recorded during both video laryngoscopies. During the second video laryngoscopy, difficulty of tube insertion and time for intubation were noted. Results: The glottic view grade was significantly better in group D compared with group M (P = 0.0002). Insertion of the D-Blade was more difficult than the Macintosh blade (P = 0.0007). There was no statistical difference in terms of time for best glottic view in group M and group D (13.40 ± 4.90 vs 13.62 ± 5.60 s) and endotracheal tube insertion time (24.80 ± 7.90 vs 27.90 ± 10.90 s), respectively. The number of intubation attempts was similar in both groups. Conclusions: The size 2 D-Blade of the C-MAC video laryngoscope provided a better glottic view in children with simulated cervical spine injury as compared with the C-MAC Macintosh blade. Success of intubation, intubation time, and ease of intubation were comparable with both blades. abstract_id: PUBMED:30625689 The effects of Callander modification of laryngoscopic blade on hemodynamic changes according to the degree of difficult airway. Background: Laryngoscopy and tracheal intubation are known to have profound cardiovascular effects. The Callander modification of the Macintosh blade is associated with a greater field of laryngoscopic view and a decreased risk of dental contact. The purpose of this study was to compare the hemodynamic responses to laryngoscopy and tracheal intubation according to the degree of airway difficulty, and to evaluate the usefulness of the Callander modification of the Macintosh blade for attenuating the hemodynamic responses. Methods: One hundred forty-eight patients scheduled for elective surgery were divided into an Easy group and a Difficult group by Wilson's risk sum score. Laryngoscopy was performed using either an ordinary Macintosh No. 3 blade or the modified Macintosh blade. The modification consisted of reducing the height of the flange by partial removal, as described by Callander et al.
Hemodynamic variables (systolic, diastolic, mean blood pressure, heart rate and rate pressure product) were noted before induction (baseline) and immediately after intubation. Results: The hemodynamic changes after tracheal intubation in Difficult group were significantly greater than those in Easy group (P < 0.05). When using the modified blade, systolic, diastolic and mean blood pressure after tracheal intubation were lower than those using the conventional blade regardless of Wilson's risk sum score, but no statistical significances could be found. Conclusions: The hemodynamic changes after tracheal intubation increased as the degree of airway difficulty increased. Laryngoscopy with the Callander's modified blade did not reduce the degree of hemodynamic stimulation compared with the conventional Macintosh blade. abstract_id: PUBMED:29359076 Effect of Cricoid Pressure on Laryngeal View During Macintosh, McGrath MAC X-Blade and GlideScope Video Laryngoscopies. Objective: Cricoid pressure is useful in fasted patients requiring emergency intubation. We compared the effect of cricoid pressure on laryngeal view during Macintosh, McGrath MAC X-Blade and GlideScope video laryngoscopy. Methods: After obtaining approval from the Human Research Ethics Committee and written informed consent from patients, we enrolled 120 patients (American Society of Anesthesiologists I-II, age 18-65 years) undergoing elective surgery that required endotracheal intubation in this prospective randomised study. Patients were divided into three groups (Macintosh, McGrath MAC X-Blade and GlideScope). Results: Demographic and airway variables were similar in the groups. Cormack-Lehane grades were improved or unchanged on using cricoid pressure in Macintosh and McGrath MAC X-Blade groups. However, laryngeal views worsened in 12 patients (30%), remained unchanged in 26 patients (65%) and improved in 2 patients (5%) in the GlideScope group (p<0.001). Insertion and intubation times for Macintosh and McGrath MAC X-Blade video laryngoscopes were similar. Insertion times for GlideScope and Macintosh video laryngoscopes were similar, but were longer than those for the McGrath MAC X-Blade video laryngoscope (p=0.02). Tracheal intubation took longer with the GlideScope video laryngoscope than with the other devices (p<0.001 and p=0.003). Mean arterial pressures after insertion increased significantly in Macintosh and GlideScope groups (p=0.004 and p=0.001, respectively) compared with post-induction values. Heart rates increased after insertion in all three groups compared with post-induction values (p<0.001). Need for optimisation manoeuvres and postoperative minor complications were comparable in all three groups. Conclusion: Although all three devices are useful for normal or difficult intubation, cricoid pressure improved Cormack-Lehane grades of Macintosh and McGrath MAC X-Blade video laryngoscopes but statistically significantly worsened that of the GlideScope video laryngoscope. Answer: The comparison between the Miller and Macintosh blades for laryngoscopic view and intubating conditions in small children has been the subject of several studies, and the results are somewhat mixed. One study found that in children aged 1-24 months, the Miller and Macintosh blades provided similar laryngoscopic views and intubating conditions. 
When a restricted view was obtained, changing the blade could provide a better view, and placing the tip of the Miller blade in the vallecula provided satisfactory intubating conditions for this age group (PUBMED:24690084). Another study comparing the two blades in children less than four years old found that the Miller group had a higher percentage of patients with a better Cormack-Lehane classification, suggesting that the Miller blade may provide a better laryngoscopic view in this age group (PUBMED:33072220). A randomized controlled trial comparing Macintosh, Miller, and McCoy laryngoscope blades in pediatric patients aged 2-6 years found that the Miller blade was superior to the Macintosh and McCoy blades in terms of glottic visualization and ease of intubation (PUBMED:30745607). However, a study comparing the hemodynamic response to laryngoscopy using Miller and McCoy blades found that the McCoy blade was associated with a more stable hemodynamic response compared to the Miller blade, suggesting that while the Miller blade may offer better visualization, it could also be associated with more significant hemodynamic changes (PUBMED:35698670). In conclusion, while some studies suggest that the Miller blade may provide a better laryngoscopic view in small children compared to the Macintosh blade, the evidence is not entirely consistent, and factors such as hemodynamic stability must also be considered. It appears that both blades can be effective, and the choice may depend on the specific circumstances and the anesthesiologist's preference and experience.
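As an aside on the statistics behind these comparisons: PUBMED:33072220 above reports a comparison of Cormack-Lehane grade distributions between the Macintosh and Miller groups (p = 0.004). A small illustrative sketch of such a comparison, using the group counts quoted in that abstract, is shown below; the exact test used in the published analysis may have differed (e.g., an exact or trend test), so this is a demonstration of the idea rather than a reproduction of the original work.

```python
# Illustrative sketch: chi-square comparison of Cormack-Lehane grade
# distributions between Macintosh and Miller groups, using the counts
# reported in PUBMED:33072220 (grades 1-4, 36 children per group).
import numpy as np
from scipy.stats import chi2_contingency

#                      grade 1  grade 2  grade 3  grade 4
macintosh = np.array([5, 14, 15, 2])
miller = np.array([5, 27, 4, 0])

table = np.vstack([macintosh, miller])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```

With these counts, the contrast is driven mainly by the grade 2 and grade 3 cells, which is where the two distributions differ most.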
Instruction: Do hemodialysis patients prefer renal-specific or standard oral nutritional supplements? Abstracts: abstract_id: PUBMED:19218047 Do hemodialysis patients prefer renal-specific or standard oral nutritional supplements? Objective: We investigated whether hemodialysis (HD) patients prefer standard or renal-specific oral nutritional supplements (ONS). Design: Standard ONS Fortisip (Nutricia Clinical Care, Wiltshire, Trowbridge, UK) and renal ONS Renilon (Nutricia Clinical Care) and Nepro (Abbott Laboratories, Ltd., Maidenhead, Berkshire, UK) were compared using single-blind taste tests and face-to-face, interviewer-administered questionnaires. Setting: This study took place in our HD unit in September 2007. Patients: There were 40 patients, including 24 males, 14 smokers, and 26 Caucasians, aged <30 years (n = 6), 31 to 50 years (n = 13), 51 to 70 years (n = 12), and >70 years (n = 9). Intervention: Patients ranked ONS taste on a Likert scale (1 to 5), and compared flavor options, phosphate-binder requirements, and fluid contribution. Main Outcome Measure: Which factors influenced a patient's choice of ONS? Results: Gender, smoking status, ethnicity, and age influenced patients' choices. The taste of Fortisip and Nepro was liked by 58% (n = 23), versus 28% liking Renilon (n = 11). Renilon was disliked by 35% (n = 14), Nepro was disliked by 30% (n = 12), and Fortisip was disliked by 25% (n = 10). The favorite taste was Fortisip, in 52% (n = 21). However, 21% (n = 4) who preferred the taste of renal ONS would not choose them long-term because of their limited flavor ranges. The lack of phosphate binders with Renilon was a deciding factor in 27% (n = 19/33). The low fluid contribution of renal ONS influenced the choice of 43% (n = 12/28). All factors considered, standard ONS remained most popular for patients aged >70 years. However, in all other subgroups, and particularly males and non-Caucasians, renal ONS became more popular. Many patients (23%; n = 9) would sacrifice taste for the benefits of renal ONS. Conclusions: Renal ONS are more popular in HD patients because of their low fluid contribution and phosphate-binder requirements, which can influence preference over taste. Patients need information to make informed choices. abstract_id: PUBMED:33452957 Effectiveness of renal-specific oral nutritional supplements compared with diet counseling in malnourished hemodialysis patients. Background: Malnutrition is highly prevalent and a consequence of inflammation and related comorbidities among patients on maintenance hemodialysis. Oral nutritional supplementation (ONS) is recommended for malnourished patients with kidney failure. The study aimed to evaluate renal-specific oral nutrition (ONCE dialyze) supplement on nutritional status in patients on hemodialysis. Methods: Patients were randomized into 3 groups; treatment groups received 370 kcal/day of ONCE Dialyze (N = 26) or 370 kcal/day of NEPRO (N = 30) for 30 days. The control group (N = 24) received no intervention. All patients were counseled by the same registered dietitian during the study. The nutritional status was evaluated using malnutrition inflammation score (MIS) assessment, body compositions, serum albumin and pre-albumin levels at baseline and 30 days. Results: Eighty patients were analyzed with mean age of 57.2 ± 15.9 years. The intervention group exhibited significant improvements in energy, protein, fat, fiber and magnesium intake by dietary interview compared with the control group. 
The percentage change in MIS was -29.0% (95% CI -40.5 to -17.4), -23.9% (95% CI -37.2 to -10.6) and 12.1% (95% CI -19.2 to 43.4) for the ONCE Dialyze, NEPRO and control groups, respectively (overall P = 0.006). The percentage change in serum albumin was 5.3% (95% CI 1.9-8.7), 3.3% (95% CI -0.1 to 6.7) and -0.8% (95% CI -4.3 to 2.7) for the ONCE Dialyze, NEPRO, and control groups, respectively (overall P = 0.039; P = 0.043 for ONCE Dialyze vs. control). No serious adverse effects were reported in any group. Conclusion: Dietary advice combined with ONS, especially ONCE Dialyze, was associated with improved MIS, serum albumin, dietary energy and macronutrient intake among patients with kidney failure on maintenance hemodialysis. Clinical Trial Registration: TCTR20200801001. abstract_id: PUBMED:24094606 Oral intradialytic nutritional supplement use and mortality in hemodialysis patients. Background: Hemodialysis patients have high mortality rates, potentially reflecting underlying comorbid conditions and ongoing catabolism. Intradialytic oral nutritional supplements may reduce this risk. Study Design: Retrospective propensity-matched cohort. Setting & Participants: Maintenance hemodialysis patients treated at Dialysis Clinic Inc facilities who were initiated on a nutritional supplement protocol in September to October 2010 were matched using a propensity score to patients at facilities at which the protocol was not used. Predictors: Prescription of the protocol, whereby hemodialysis patients with serum albumin levels ≤3.5 g/dL would initiate oral protein supplementation during the dialysis procedure. Sensitivity analyses matched on actual supplement intake during the first 3 study months. Covariates included patient and facility characteristics, which were used to develop the propensity scores and adjust multivariable models. Outcomes: All-cause mortality, ascertained through March 2012. Results: Of 6,453 eligible patients in 101 eligible hemodialysis facilities, the protocol was prescribed to 2,700, and 1,278 of these were propensity matched to controls. Mean age was 61 ± 15 (SD) years and median dialysis vintage was 34 months. There were 258 deaths among protocol assignees versus 310 among matched controls during a mean follow-up of 14 months. In matched analyses, protocol prescription was associated with a 29% reduction in the hazard of all-cause mortality (HR, 0.71; 95% CI, 0.58-0.86); adjustment had minimal impact on models. In time-dependent models incorporating change in albumin level, protocol status remained significant but was attenuated in models incorporating a 30-day lag. Similar results were seen in sensitivity analyses of 439 patients receiving supplements who were propensity-matched to controls, with 116 deaths among supplement users versus 140 among controls (HR, 0.79; 95% CI, 0.60-1.05), achieving statistical significance in adjusted models. Limitations: Observational design, potential residual confounding. Conclusions: Prescription of an oral nutritional supplement protocol and use of oral protein nutritional supplements during hemodialysis are associated with reduced mortality among in-center maintenance hemodialysis patients, an effect likely not mediated by change in serum albumin levels. abstract_id: PUBMED:29909606 Dietary protein and nutritional supplements in conventional hemodialysis. Protein energy wasting (PEW) is a condition commonly occurring among patients with ESRD on hemodialysis.
PEW is characterized by depletion of protein and energy stores and is caused by multiple factors related to chronic kidney disease, acute and chronic comorbidities, and renal replacement therapy itself. Anorexia is central to the pathogenesis of PEW; it is frequently observed in these patients, whose protein and energy intakes are typically lower than guideline recommendations. If untreated, PEW invariably leads to major complications, and may activate a vicious circle with further worsening of nutritional status. Dietary counseling and nutritional status monitoring play a key role in the prevention and treatment of PEW, since they allow an early identification of high-risk patients, as well as the assessment of the response to nutritional intervention. Different nutritional approaches can be implemented following thorough nutritional counseling. These are chosen on the basis of patients' spontaneous dietary intake, severity of PEW and acute comorbidities. Initially, regular encounters with the dietitian allow patients to clarify doubts and strengthen basic concepts on nutrition to improve dietary intake and prevent PEW. When PEW is present or the patient is at high risk, the clinician may opt for the administration of oral intradialytic or daily supplements, aiming at increasing energy and protein intake, while in selected cases intradialytic parenteral nutrition may be used. This review addresses the main issues of nutritional status in ESRD patients on hemodialysis, namely its evaluation and monitoring, as well as the available nutritional interventions. abstract_id: PUBMED:27896236 The Effect of BCAA and ISO-WHEY Oral Nutritional Supplements on Dialysis Adequacy. Background: Protein-energy malnutrition is a common problem in hemodialysis patients and has different outcomes such as reduced quality of life, longer hospitalization time, lower dialysis adequacy, and higher mortality rate. Investigation of dialysis adequacy is an important method for assessing hemodialysis patients, and improving the dialysis adequacy is an important healthcare team goal. Objectives: The present study aims to investigate and compare the effects of BCAA and ISO-WHEY oral nutritional supplements on dialysis adequacy. Methods: In a clinical trial, 66 hemodialysis patients were randomly divided into three groups: Group A (n = 22), Group B (n = 22), and Group C or the control group (n = 22). In Groups A and B, as prescribed and controlled by nutritionists and nephrologists, respectively, ISO-WHEY and BCAA protein powder were used for 2 months on a daily basis. For all groups, before intervention and 1 and 2 months after intervention, the dialysis adequacy was measured using URR and Kt/V. Finally, the data were analyzed using IBM SPSS Statistics Base 21.0 software. Results: Out of 66 patients, 61 (19 in Group A, 20 in Group B, and 22 in Group C) completed the study period, and before intervention, all groups were equal in terms of quality and quantity variables (P > 0.05). After intervention, there was a significant difference between the three groups with regard to the dialysis adequacy variables based on Kt/V and URR according to the independent t-test and repeated-measures ANOVA (P < 0.05). Conclusions: Results show that the intake of oral nutritional supplements leads to an improvement in the dialysis adequacy of hemodialysis patients.
Therefore, the use of nutritional supplements along with patients' training and regular consultation will be helpful in improving the nutritional status, dialysis adequacy, and eventually the quality of life. abstract_id: PUBMED:34155082 The Effects of Oral Energy-Dense Supplements on Nutritional Status in Nondiabetic Maintenance Hemodialysis Patients: A Randomized Controlled Trial. Background And Objectives: Fat-based energy-dense nutritional supplements may offer benefits over protein- or carbohydrate-dense supplements for patients receiving dialysis because of the adverse metabolic consequences of the latter. We conducted a randomized controlled trial to assess the effects of the short-term use of a fat-based nutritional supplement on various measures of nutritional status in patients receiving maintenance hemodialysis who have low dietary energy intake. Design, Setting, Participants, & Measurements: We enrolled nondiabetic patients receiving hemodialysis for >3 months who had inadequate dietary energy intake (<30 kcal/kg per day). The participants were randomly assigned in a 1:1 ratio to receive an oral fat-based energy-dense supplement (300 kcal daily) or routine care for 12 weeks (n=120 per group). The primary outcome was the change in phase angle measured by bioelectrical impedance analysis, a marker of cell integrity and body cell mass, from the baseline to week 12. The secondary outcomes were changes in quality of life. Other outcomes included laboratory nutritional indicators and physical examinations. Results: The average age of the total population was 47 (SD: 12) years, and 55% were men. The median dialysis vintage was 43.4 (22.5-76.3) months; 240 participants were randomly assigned to the intervention (n=120) or control group (n=120). In total, 228 (95%) participants completed the trial. The change in phase angle did not differ significantly between the intervention and control groups (estimate, 0.0; 95% confidence interval, -0.1 to 0.1 versus estimate, 0.0; 95% confidence interval, -0.1 to 0.1; estimated difference, 0.0; 95% confidence interval -0.2 to 0.2; P=0.99). None of the 19 domains of quality of life differed between the groups. Adverse events were reported in 23 (19%) participants in the control group and 40 (33%) participants in the intervention group. Conclusions: In nondiabetic patients on maintenance hemodialysis, short-term administration of a fat-based energy-dense nutritional supplement has no clinically significant effect on nutritional status as measured by phase angle. Podcast: This article contains a podcast at https://www.asn-online.org/media/podcast/CJASN/2021_08_03_CJN16821020.mp3. abstract_id: PUBMED:30762294 Oral nutritional supplements and 30-day readmission rate in hypoalbuminemic maintenance hemodialysis patients. Introduction: Hemodialysis (HD) patients are hospitalized nearly twice yearly, and 35% of these patients are rehospitalized within 30 days postdischarge. We hypothesized that monitored oral nutritional supplementation (ONS) during HD treatment may decrease readmissions. Methods: A cohort of maintenance HD patients, treated at a large dialysis organization, who were hospitalized with a postdischarge albumin of ≤3.5 g/dL, without documented ONS use 90 days prior to the index hospitalization, was identified. Individuals who received monitored intradialytic ONS postdischarge were compared to those without receipt of ONS. The outcome of interest was 30-day hospital readmissions.
Logistic regression was used to assess the association between ONS receipt and 30-day readmission events, with adjustment for case-mix and laboratory variables. Findings: Of 5479 eligible patients, ONS was prescribed to 1420 individuals. Mean age was 64.6 ± 14.1 (SD) years; median dialysis vintage was 3.9 years. There were 274 (19%) readmissions among ONS recipients vs. 1571 (38.7%) among controls during the 30-day follow-up period. Individuals who did not receive ONS had increased odds of readmission [OR 2.26 (95% CI 1.02, 2.53)] in 30 days, as compared to those who did receive ONS postdischarge. In sensitivity analyses using a propensity score matched cohort, the odds ratio of readmissions within 30 days postdischarge was 1.71 (95% CI: 1.42, 2.07) for individuals who did not receive ONS as compared to those who received ONS. Discussion: Consumption of ONS during HD sessions is associated with reduced hospital readmission rates among in-center maintenance HD with severe hypoalbuminemia at 30 days post-hospital discharge. abstract_id: PUBMED:22632807 Outcomes associated with intradialytic oral nutritional supplements in patients undergoing maintenance hemodialysis: a quality improvement report. Background: Insufficient clinical data exist to determine whether provision of oral nutritional supplements during dialysis can improve survival in hypoalbuminemic maintenance hemodialysis patients. Study Design: Retrospective matched-cohort study. Setting & Participants: All oral nutritional supplement program-eligible in-center maintenance hemodialysis patients with albumin level ≤3.5 g/dL in quarter 4 of 2009 without oral nutritional supplements in the prior 90 days at Fresenius Medical Care, North America facilities. Quality Improvement Plan: Monitored intradialytic oral nutritional supplements were provided to eligible maintenance hemodialysis patients upon physician order, to continue for a year or until serum albumin level was ≥4.0 g/dL. Outcome: Mortality (including deaths and withdrawals), followed up until December 31, 2010. Measurements: Both an intention-to-treat (ITT) and an as-treated analysis was performed using a 1:1 geographic region and propensity score-matched study population (using case-mix, laboratory test, access type, 30-day prior hospitalization, and incident patient status) comparing patients treated with intradialytic oral nutritional supplements with usual-care patients. Cox models were constructed, unadjusted and adjusted for facility standardized mortality ratio and case-mix and laboratory variables. Results: The ITT and as-treated analyses both showed lower mortality in the oral nutritional supplement group. The conservative ITT models with 5,227 matched pairs had 40% of controls subsequently receiving oral nutritional supplements after January 1, 2010 (because many physicians delayed participation), with comparative death rates of 30.1% versus 30.4%. The corresponding as-treated (excluding crossovers) death rates for 4,289 matched pairs were 30.9% versus 37.3%. The unadjusted ITT mortality HR for oral nutritional supplement use was 0.95 (95% CI, 0.88-1.01), and the adjusted HR was 0.91 (95% CI, 0.85-0.98); the corresponding as-treated HRs were 0.71 (95% CI, 0.66-0.76) and 0.66 (95% CI, 0.61-0.71) before and after adjustment, respectively. Limitations: Limited capture of oral nutritional supplement intake outside the facility and potential residual confounding from unmeasured variables, such as dietary intake. 
Conclusions: Maintenance hemodialysis patients with albumin levels ≤3.5 g/dL who received monitored intradialytic oral nutritional supplements showed survival significantly better than similar matched patient controls, with the as-treated analysis highlighting the potentially large effect of this strategy in clinical practice. abstract_id: PUBMED:29221626 Effects of Oral Nutritional Supplements on Mortality, Missed Dialysis Treatments, and Nutritional Markers in Hemodialysis Patients. Objective: Protein-energy wasting is common in end-stage renal disease patients undergoing dialysis and is strongly associated with mortality and adverse outcomes. Intradialytic oral nutritional supplements (ONS) reduce risk of mortality in these patients. Large studies characterizing the impact of ONS on other outcomes are lacking. We assessed the associations between administration of ONS and clinical and nutritional outcomes. Design: Retrospective evaluation of a pilot program providing ONS to patients at a large dialysis organization in the United States. The pilot program provided ONS to in-center hemodialysis patients with serum albumin ≤3.5 g/dL at 408 facilities. Subjects: ONS patients were compared to matched controls with serum albumin ≤3.5 g/dL, identified from facilities not participating in the ONS program (n = 3,374 per group). Intervention: Receipt of ONS. Main Outcome Measures: Death, missed dialysis treatments, hospitalizations, serum albumin, normalized protein catabolic rate, and postdialysis body weight were abstracted from large dialysis organization electronic medical records. Results: There was a 69% reduction in deaths (hazard ratio = 0.31; 95% confidence interval = 0.25-0.39), and 33% fewer missed dialysis treatments (incidence rate ratio = 0.77; 95% confidence interval = 0.73-0.82) among ONS patients compared to controls (P < .001 for both). The effects of ONS on nutritional indices were mixed: serum albumin was lower, whereas normalized protein catabolic rate values, a surrogate for dietary protein intake, and postdialysis body weights were higher for ONS patients compared to controls during follow-up. Conclusions: Our evaluation confirmed the beneficial effects of ONS in reducing mortality and improving some indices of nutritional status for hypoalbuminemic hemodialysis patients. We also report the novel finding that ONS can reduce the number of missed dialysis treatments. These results support the use of intradialytic ONS as an effective intervention to improve the outcomes in hemodialysis patients with low serum albumin. abstract_id: PUBMED:27371993 Protein-energy wasting and nutritional supplementation in patients with end-stage renal disease on hemodialysis. Background & Aims: Protein-Energy Wasting (PEW) is the depletion of protein/energy stores observed in the most advanced stages of Chronic Kidney Disease (CKD). PEW is highly prevalent among patients on chronic dialysis, and is associated with adverse clinical outcomes, high morbidity/mortality rates and increased healthcare costs. This narrative review was aimed at exploring the pathophysiology of PEW in end-stage renal disease (ESRD) on hemodialysis. The main aspects of nutritional status evaluation, intervention and monitoring in this clinical setting were described, as well as the current approaches for the prevention and treatment of ESRD-related PEW. 
Methods: An exhaustive literature search was performed, in order to identify the relevant studies describing the epidemiology, pathogenesis, nutritional intervention and outcome of PEW in ESRD on hemodialysis. Results And Conclusion: The pathogenesis of PEW is multifactorial. Loss of appetite, reduced intake of nutrients and altered lean body mass anabolism/catabolism play a key role. Nutritional approach to PEW should be based on a careful and periodic assessment of nutritional status and on timely dietary counseling. When protein and energy intakes are reduced, nutritional supplementation by means of specific oral formulations administered during the hemodialysis session may be the first-step intervention, and represents a valid nutritional approach to PEW prevention and treatment since it is easy, effective and safe. Omega-3 fatty acids and fibers, now included in commercially available preparations for renal patients, could lend relevant added value to macronutrient supplementation. When oral supplementation fails, intradialytic parenteral nutrition can be implemented in selected patients. Answer: Hemodialysis patients have shown a preference for renal-specific oral nutritional supplements (ONS) over standard ONS due to factors such as low fluid contribution and phosphate-binder requirements, which can influence preference over taste. In a study comparing standard ONS Fortisip with renal ONS Renilon and Nepro, it was found that while the taste of Fortisip and Nepro was liked by 58% of patients, and Renilon by 28%, the lack of phosphate binders with Renilon was a deciding factor for 27% of patients. Additionally, the low fluid contribution of renal ONS influenced the choice of 43% of patients. Despite the taste preference for Fortisip, many patients (23%) indicated they would sacrifice taste for the benefits of renal ONS. Overall, standard ONS remained most popular for patients aged >70 years, but in all other subgroups, particularly males and non-Caucasians, renal ONS were more popular (PUBMED:19218047). This preference for renal-specific ONS is also supported by their effectiveness in improving nutritional status. A study evaluating the impact of renal-specific ONS (ONCE Dialyze and NEPRO) on malnourished hemodialysis patients showed significant improvements in energy, protein, fat, fiber, and magnesium intake, as well as improvements in malnutrition inflammation score (MIS) and serum albumin levels, compared with diet counseling alone (PUBMED:33452957). Therefore, while taste is a factor in the choice of ONS, hemodialysis patients often prioritize the specific benefits of renal ONS, such as lower fluid volume and reduced need for phosphate binders, which are tailored to their unique dietary needs.
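For readers unfamiliar with the adequacy metrics cited in the BCAA/ISO-WHEY trial above (PUBMED:27896236), URR and Kt/V are both derived from pre- and post-dialysis urea measurements. The short Python sketch below illustrates the usual calculations; the input values are hypothetical, and the second-generation Daugirdas formula is assumed for single-pool Kt/V because the abstract does not state which formula was applied.

    from math import log

    def urea_reduction_ratio(pre_bun, post_bun):
        # URR (%) = (pre - post) / pre * 100
        return (pre_bun - post_bun) / pre_bun * 100.0

    def sp_kt_v(pre_bun, post_bun, session_hours, uf_litres, post_weight_kg):
        # Second-generation Daugirdas estimate (assumed here, not stated in the abstract):
        # spKt/V = -ln(R - 0.008 * t) + (4 - 3.5 * R) * UF / W, with R = post/pre BUN.
        r = post_bun / pre_bun
        return -log(r - 0.008 * session_hours) + (4.0 - 3.5 * r) * uf_litres / post_weight_kg

    # Hypothetical session: pre-BUN 60 mg/dL, post-BUN 18 mg/dL, 4 h, 2.5 L ultrafiltration, 70 kg.
    print(urea_reduction_ratio(60, 18))           # 70.0 (%)
    print(round(sp_kt_v(60, 18, 4, 2.5, 70), 2))  # about 1.42

Commonly cited adequacy targets are a URR of roughly 65% and a single-pool Kt/V of at least about 1.2, although the trial abstract does not report which thresholds its nephrologists used.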
Instruction: Cancer incidence and mortality of Surinamese migrants in the Netherlands: in-between Surinamese and Dutch levels? Abstracts: abstract_id: PUBMED:23619609 Cancer incidence and mortality of Surinamese migrants in the Netherlands: in-between Surinamese and Dutch levels? Purpose: It has been suggested that the cancer risk of migrants from low-income to high-income countries will converge toward the levels of their host country. However, comparisons with country of origin are mostly lacking. We compared cancer incidence and mortality rates of Surinamese migrants in the Netherlands to both native Dutch and Surinamese levels. Methods: Data covering the period 1995-2008 were obtained from Surinamese and Dutch national cancer registries and national cause-of-death registries. Cancer incidence was studied for 21 types of cancer and cancer mortality for nine types. We calculated age-standardized incidence/mortality ratios (SIR/SMR) for the Surinamese migrants and for Suriname, using the native Dutch population as reference. Results: Significantly lower overall cancer incidence (SIR = 0.77, 95% CI = 0.69-0.84) and mortality rates (SMR = 0.63, 95% CI = 0.55-0.72) were found for Surinamese migrants compared to native Dutch. Generally, cancer risk was lower for most cancers (e.g., cancer of the breast, colon and rectum, lung), but higher for other cancers (e.g., cancer of the uterine cervix, liver). For most cancers, cancer risk of the Surinamese migrants was in-between Surinamese and native Dutch levels. Importantly, for many cancers, migrants' incidence and mortality rates had not closely approached native Dutch rates. For skin cancer, incidence levels for Surinamese migrants were lower than both Surinamese and native Dutch levels. Conclusions: The results suggest that cancer incidence and mortality rates of Surinamese migrants generally converge from Surinamese toward Dutch levels, though not for all cancer types. Overall, Surinamese migrants still had a much more favorable cancer profile than the native Dutch population. abstract_id: PUBMED:16929492 Cancer mortality rates among first and second generation migrants in the Netherlands: Convergence toward the rates of the native Dutch population. This study investigates the difference in cancer mortality rates between migrant groups and the native Dutch population, and determines the extent of convergence of cancer mortality rates according to migrants' generation, age at migration and duration of residence. Data were obtained from the national cause of death and population registries in the period 1995-2000. We used Poisson regression to compare the cancer mortality rates of migrants originating from Turkey, Morocco, Surinam, Netherlands Antilles and Aruba to the rates for the native Dutch. All-cancer mortality among all migrant groups combined was significantly lower when compared to that of the native Dutch population (RR = 0.55, CI: 0.52-0.58). For a large number of cancers, migrants had more than 50% lower risk of death, while elevated risks were found for stomach and liver cancers. Mortality rates for all cancers combined were higher among second generation migrants, among those with younger age at migration, and those with longer duration of residence. This effect was particularly pronounced in lung cancer and colorectal cancer. For most cancers, mortality among second generation migrants remained lower compared to the native Dutch population. Surinamese migrants showed the most consistent pattern of convergence of cancer mortality. 
The generally low cancer mortality rates among migrants showed some degree of convergence but did not yet reach the levels of the native Dutch population. This convergence implies that current levels of cancer mortality among migrants will gradually increase in future years if no specific preventive measurements are taken. abstract_id: PUBMED:23193960 Diverging breast and stomach cancer incidence and survival in migrants in The Netherlands, 1996-2009. Background: Migrant populations usually experience a health transition with respect to their cancer risk as a result from environmental changes and acculturation processes. We investigated potentially contrasting experiences with breast and stomach cancer risk and survival in migrants to the Netherlands in a retrospective cohort study. Methods: Invasive breast (n = 96 126) and stomach cancer cases (n = 24 496) diagnosed 1996-2009 were selected from the population-based Netherlands Cancer Registry. Standardized Incidence Ratios (SIRs) were computed as the ratio of observed and expected cancers. Differences in survival were expressed as relative excess risk of mortality (RER). Results: Women from Morocco, Suriname and Turkey exhibited a significantly lower risk for breast cancer than native Dutch women (SIR range 0.5-0.9). Relative excess mortality was significantly increased in Surinamese (RER = 1.2, 95% CI 1.0-1.5) patients. The incidence of non-cardia stomach cancer was significantly elevated in all migrants, except in Indonesians, being highest in Turkish males (SIR = 2.2, 1.9-2.6). Cardia stomach cancer appeared to be less frequent in all migrants, being lowest in Surinamese males (SIR = 0.3, 0.2-0.5). Relative excess mortality was significantly lower in patients from the Antilles (RER = 0.7, 0.5-1.0), Suriname (0.8, 0.6-0.9) and Turkey (0.7, 0.6-0.9). Conclusion: The lower incidence rates of breast and cardia stomach cancer in migrants as well as their higher non-cardia stomach cancer rates reflect most likely early life exposures including pregnancy and/or dietary patterns during life-course. While higher relative excess mortality from breast cancer in migrant women might point toward inadequate access and treatment in this group, lower excess mortality from (especially non-cardia) stomach cancer remains to be explained. abstract_id: PUBMED:23093717 Investigating cervical, oesophageal and colon cancer risk and survival among migrants in The Netherlands. Background: Studies on cancer in migrants can shed light on grey areas in cancer aetiology and can help assessing the effectiveness of prevention measures. In this study, we aimed to determine the impact of migration and different ethnic backgrounds on cervical, colon and oesophageal cancer risk and survival. Methods: Cancers diagnosed in 1996-2009 were selected from The Netherlands Cancer Registry. Besides standardized incidence ratios, differences in survival were explored using Cox regression and relative survival analysis. Results: All migrant women had increased risks for cervical cancer when compared with Dutch native women, ranging from standardized incidence ratio = 1.8 (95% confidence interval 1.6-2.2) in Surinamese women to 1.2 (0.9-1.5) in Turkish women. Relative survival was better among Moroccan, Surinamese and Antillean migrants [5-year relative survival rates (RSR) range: 71-73%] compared with that of native Dutch (66%); however, it was poorer in Indonesians (51%). 
Although oesophageal cancer risk was lower in all migrants with Standardized incidence ratios ranging from 0.1 to 0.6, survival was slightly lower relative to Dutch natives (1-year RSR: 21-32% compared with 37%; Turkish: 42%). Colon cancer was less common among migrants, particularly among Moroccans and Turkish. Five-year RSR from colon cancer was equal or better in all migrants (range: 48% in Indonesians to 62% in Turkish) compared with Dutch natives (48%). Conclusion: Risk of cervical, oesophageal and colon cancer in migrants mainly reflects the risks in their countries of origin. Almost similar cancer survival rates in migrants and native Dutch individuals points towards successful and comprehensive health care in The Netherlands. Primary cancer prevention should target high-risk groups and involve migration-sensitive approaches. abstract_id: PUBMED:21183855 Breast and stomach cancer incidence and survival in migrants in the Netherlands, 1996-2006. Migrant populations experience a health transition that influences their cancer risk, determined by environmental changes and acculturation processes. In this retrospective cohort study, we investigated differences in breast and stomach cancer risk and survival in migrants to the Netherlands. Invasive breast and stomach cancer cases diagnosed between 1996 and 2006 were selected from the Netherlands Cancer Registry. Standardized incidence ratios (SIR) were computed as the ratio of observed and expected cancers. Differences in the survival were expressed as hazard ratio (HR) using Cox regression and relative survival rates (RSR). All migrant women exhibited a significantly lower risk for breast cancer compared with Dutch natives. However, 5-year RSR was lower in all migrants (range 68-73%) compared with Dutch natives (85%). Death rates were increased in Moroccan [HR = 1.2 (1.0-1.5)] and reduced in Indonesian [HR = 0.8 (0.8-0.9)] patients with breast cancer. The incidence of noncardia stomach cancer was significantly elevated in all migrants, being highest in Turkish males [SIR = 1.9 (1.6-2.3)]. Cardia stomach cancer was less frequent in all migrants, being lowest in Surinamese males [SIR = 0.3 (0.2-0.6)]. Death rates for stomach cancer were lower in patients from Morocco [HR = 0.6 (0.4-0.9)], whereas 1-year RSR for stomach cancer was better in all migrant groups. Both lower breast cancer rates and higher stomach cancer rates point to a strong link between environmental exposures, behavioural patterns and cancer risk during the life course. Favourable risks in migrants should be sustained as long as possible whereas survival disparities require careful monitoring and counteraction with preventive means as well as improved access to healthcare. abstract_id: PUBMED:19543985 Cancer profile of migrants from the Former Soviet Union in Germany: incidence and mortality. This study compares cancer mortality and incidence of ethnic German migrants from the Former Soviet Union (FSU) in Germany. Data were obtained from two migrant cohorts residing in the federal state of North Rhine-Westphalia (NRW) (n = 34,393) and Saarland (n = 18,619). Vital status of the NRW cohort was ascertained through local population registries. Causes of death were obtained from the NRW statistical office or from local health offices. Cancer incidence of the Saarland cohort was derived from the Saarland cancer registry using record linkage. From 1990 to 2005, we observed 708 cancer deaths and 586 incident cancer cases. 
In males, both cancer incidence and cancer mortality were similar to the German population. Female cancer incidence and mortality were lower, the latter significantly. Site-specific standardized mortality and incidence ratios showed great variation in comparison to Germans and were remarkably similar to each other for most sites. Lung cancer was elevated among males, but lower among females. Stomach cancer was higher contrasting with lower ratios for prostate cancer, male colorectal cancer, and female breast cancer. Results confirm that FSU-migrants suffer from cancers, which may be prevented by prevention programs. Furthermore, we cannot conclude a different health-seeking behavior compared to Germans. abstract_id: PUBMED:37984696 The views of cancer patients of Turkish, Moroccan, Surinamese, and Dutch-Caribbean descent on diagnosis, treatment and prognosis: A systematic literature review. Background: The number of international migrants is increasing worldwide. The four major non-western ethnic groups in the Netherlands are Turkish, Moroccan, Surinamese, and Dutch-Caribbean. This review examined the scientific literature on the views of cancer patients from these four ethnic groups on cancer diagnosis, treatment, and prognosis. Methods: A systematic literature review was conducted using the databases EMBASE, Medline Web of Science, and Cochrane Central Register. Studies with patients who were of Turkish, Moroccan, Surinamese, and Dutch-Caribbean descent were included. Both qualitative and quantitative studies were included, and thematic analysis was performed. The methodological quality was assessed using the Mixed Methods Appraisal Tool. Results: Thirteen studies were conducted in Turkey on Turkish cancer patients, while three were conducted in the Netherlands on Turkish and Moroccan cancer patients. Four themes emerged from the included studies: disclosure of diagnosis, communication, information provision, and decision-making. The majority of cancer patients in Turkey wanted information regarding their diagnosis and treatment. However, disclosure of a cancer diagnosis was rarely discussed with cancer patients in Turkey, whereas in the Netherlands it was provided directly. Family members in both the host and native countries had a strong influence on communication and decision-making. No literature on this topic for Surinamese or Dutch-Caribbean cancer patients was found. Conclusion: Although major ethnic groups live in host countries, there is a lack of knowledge on optimal communication and information disclosure on cancer to patients and their families. Policy Summary: Further research into the views of ethnic groups on how to communicate about cancer is essential to ensuring that every patient receives optimal care and treatment. abstract_id: PUBMED:30254988 Cancer Incidence and Mortality Among Ethnic German Migrants From the Former Soviet Union. Germany is a country known for immigration. In 2015, 21% of the general population in Germany consisted of individuals with a migration background. This article focuses on cancer-specific incidence and mortality among one of the biggest migrant groups in Germany: the resettlers. Resettlers are ethnic Germans who mainly immigrated from the Russian federation and other countries of the former Soviet Union after its collapse in 1989. 
We investigated differences between resettlers and the general German population, regarding (i) incidence and mortality of malignant neoplasms, (ii) time trends of the corresponding incidence and mortality, and (iii) cancer stage at diagnosis. We provide data from two resettler cohorts covering an observation time of 20 years: one cohort on cancer incidence (N = 32,972), and another cohort on mortality (N = 59,390). Cancer-specific standardized incidence ratios (SIR) and standardized mortality ratios (SMR) for all malignant neoplasms combined and the most common cancer-sites were calculated between resettlers and the general German population. Time trend analyses using Poisson regression were performed to investigate the developments of SIRs and SMRs. To investigate differences in stage at diagnosis, logistic regression was performed, calculating Odds Ratios for condensed cancer stages. We observed higher incidence and mortality of stomach cancer [SIR (men) 1.62, 95%CI 1.17-2.19; SMR (men) 1.62, 95%CI 1.31-2.01; SIR (women) 1.32, 95%CI 0.86-1.94; SMR (women) 1.52, 95%CI 1.19-1.93] and higher mortality of lung cancer [SMR (men) 1.34, 95%CI 1.20-1.50] among resettlers compared to the general German population, but lower incidence and mortality of colorectal (both sexes), lung (women), prostate and female breast cancer. However, time trend analyses showed converging incidence risks of cause-specific incidence over time, whereas differences of mortality did not show changes over time. Results from logistic regression suggest that resettler men were more often diagnosed with advanced cancer stages compared to the Münster population. Our findings suggest that risk factor patterns of the most common cancer-sites among resettlers are similar to those observed within the Russian population. Such increases in prostate, colorectal and breast cancer incidence may be the consequence of improved detection measures, and/or the adaptation of resettlers to the German lifestyle. abstract_id: PUBMED:8370615 Cancer incidence in Estonian migrants to Sweden. Cancer incidence in Estonians who took refuge in Sweden in 1944-1945 has been compared with that in the total Swedish population and that among Estonians in Estonia in 1974-1985 using data from the Swedish and the Estonian countrywide population-based cancer registries. The vast majority of the Estonian immigrants studied had been living in Sweden for 30 years when the follow-up with respect to cancer incidence started in this investigation. In spite of the long residence in Sweden, differences in cancer incidence could be observed between these immigrants and the total Swedish population. The age-standardized incidence of stomach cancer was higher in the Estonian migrants than in the total Swedish population (SIR = 1.6 and 2.1 for males and females, respectively). Breast cancer incidence was lower in the migrant women (SIR = 0.75) and lung cancer incidence higher in migrant men (SIR = 1.5). An increased incidence of colorectal cancer was also found for both sexes in the migrant population (SIR = 1.4 for both males and females). A comparison between Estonians in Estonia and the total Swedish population revealed that the cancer incidence for the Estonians was lower than expected at age 70 and over. Male lung cancer and stomach cancer showed a higher incidence in the Estonian population than in the Swedish and in the migrant populations. The migrant population showed an intermediate incidence relative to Estonians in Estonia and the entire Swedish population. 
The colon-cancer risk in Estonian migrants to Sweden was higher than the risk for Estonians in Estonia and for the Swedish population. This contrasts with most findings in the present and other studies on intermediate risks of migrants compared to the risks in the country of origin and in the new country of residence. abstract_id: PUBMED:2384272 Cancer incidence in migrants to New South Wales. Cancer incidence in migrants to New South Wales (NSW) from the British Isles, north-central, eastern and southern Europe, the Middle East and Asia has been compared with that in Australian-born residents using data from the NSW Central Cancer Registry for 1972-84. Indirectly standardized incidence ratios (SIRs) were low in migrants from all 6 regions for melanoma of skin and cancers of lip and, except in men from eastern Europe, colon. Oesophageal, rectal and prostatic cancers also tended to be relatively less common. Cancers which were more common than in the Australian-born were those of the stomach and, for men, bladder (except in the Asian-born). Migrants from different regions showed variations from the cancer pattern of the Australian-born population which, for the most part, were predictable from the known incidence of cancer in the countries within the region of origin. Exceptions were the high relative incidence of nasopharyngeal cancer in migrants from southern Europe and bladder cancer in men from all regions other than Asia. Answer: The cancer incidence and mortality rates of Surinamese migrants in the Netherlands are generally in between the levels of Surinamese and native Dutch populations. Surinamese migrants have significantly lower overall cancer incidence (SIR = 0.77) and mortality rates (SMR = 0.63) compared to native Dutch. While the risk for most cancers is lower for Surinamese migrants, such as breast, colon, and rectum, and lung cancers, it is higher for others, including uterine cervix and liver cancers. For many cancers, the incidence and mortality rates of Surinamese migrants have not closely approached native Dutch rates, and for skin cancer, their incidence levels are lower than both Surinamese and native Dutch levels (PUBMED:23619609). Additionally, cancer mortality rates among Surinamese migrants show a pattern of convergence toward the rates of the native Dutch population. This convergence is particularly pronounced in lung cancer and colorectal cancer. However, for most cancers, mortality among second-generation Surinamese migrants remains lower compared to the native Dutch population (PUBMED:16929492). In terms of specific cancers, Surinamese women in the Netherlands have a significantly lower risk for breast cancer than native Dutch women, but their relative excess mortality is significantly increased. Conversely, the incidence of non-cardia stomach cancer is significantly elevated in Surinamese migrants, but their relative excess mortality is significantly lower (PUBMED:23193960). For cervical, colon, and oesophageal cancers, all migrant women, including Surinamese, have increased risks compared with Dutch native women. However, the relative survival rates in Surinamese migrants are almost similar to those of native Dutch individuals, suggesting successful and comprehensive healthcare in the Netherlands (PUBMED:23093717). 
Overall, the cancer profile of Surinamese migrants in the Netherlands reflects a transition from Surinamese toward Dutch levels, with a more favorable cancer profile than the native Dutch population, though not for all cancer types (PUBMED:23619609).
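Most of the migrant studies summarized above (e.g., PUBMED:23619609, PUBMED:23193960, PUBMED:8370615) express risk as standardized incidence or mortality ratios, i.e. observed cases divided by the cases expected if the reference population's age-specific rates applied to the migrant person-years. A minimal Python sketch of this indirect standardization follows; the age bands, rates and counts are invented for illustration and are not taken from the registries cited above.

    from math import exp, sqrt

    # Hypothetical migrant person-years and reference (native Dutch) rates per 100,000.
    person_years    = {"40-49": 120000, "50-59": 90000, "60-69": 60000}
    reference_rates = {"40-49": 35.0, "50-59": 110.0, "60-69": 240.0}
    observed_cases  = 210

    expected_cases = sum(person_years[band] * reference_rates[band] / 100000
                         for band in person_years)
    sir = observed_cases / expected_cases

    # Rough 95% confidence interval on the log scale (reasonable when counts are not small).
    half_width = 1.96 / sqrt(observed_cases)
    ci_low, ci_high = sir * exp(-half_width), sir * exp(half_width)

    print(round(expected_cases, 1), round(sir, 2), round(ci_low, 2), round(ci_high, 2))
    # 285.0 0.74 0.64 0.84  -> incidence about 26% below the reference population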
Instruction: Normal blood D-dimer concentrations: do they exclude pulmonary embolism? Abstracts: abstract_id: PUBMED:31041652 Normal D-dimer levels in cancer patients with radiologic evidence of pulmonary embolism. Accurate and expeditious diagnosis and treatment of pulmonary embolism in cancer patients improves patient outcomes. D-dimer is often used to rule out pulmonary embolism. However, this test is less accurate in cancer patients, and it is unclear whether cancer patients with normal D-dimer levels can present with pulmonary embolism. All consecutive patients who presented to The University of Texas MD Anderson Cancer Center in Houston, Texas, USA, between May 2009 and November 2015 who underwent computed tomography pulmonary angiography and plasma D-dimer level measurement were retrospectively reviewed. Patients with suspected pulmonary embolism and normal D-dimer levels were identified. Among the 8023 cancer patients identified, 1156 (14%) had pulmonary embolism. Only 35 patients with pulmonary embolism (3%) had normal plasma D-dimer levels. Twenty-six of these patients had acute pulmonary embolism and the other nine had subacute or chronic pulmonary embolism. Thirteen of the 26 acute cases were in patients with hematological cancer. Most patients (23/35, 66%) had subsegmental or segmental pulmonary embolism. Only one patient had pulmonary embolism in the main pulmonary arteries. Although it is uncommon (3%), cancer patients with radiologic evidence of pulmonary embolism can present with normal D-dimer levels. Recognizing the possibility of this uncommon occurrence is critical in the decision process for ordering diagnostic tests for evaluation of suspected pulmonary embolism. abstract_id: PUBMED:24578175 Fluorescence immunoassay of human D-dimer in whole blood. Background: D-dimer is a widely used biomarker for the initial clinical assessment of suspected deep vein thrombosis and pulmonary embolism. Here, we presented a new fluorescence (FL) D-dimer assay system, which was developed with a platform of point-of-care test (POCT) for clinical applications. Methods: Whole blood was mixed with FL-labeled anti-D-dimer detector antibody, and then loaded onto a disposable cartridge. After 12 min of incubation, the FL intensity was acquired by scanning of test cartridge and converted as level of D-dimer in a laser FL scanner. The analytical performance of FL immunoassay was evaluated by linearity, recovery, and precision tests. The comparability of the developed assay was examined with automated reference methods. Results: The FL assay system showed a good linearity, and the analytical mean recovery of control was 103% in a dynamic working range. The imprecision of intra- and inter-as-say of coefficient of variations from assay system was less than 8%. The developed FL assay system showed strong correlation with two automated reference assays, Vidas D-dimer (r = 0.973) and Stalia D-dimer (r = 0.971). Conclusion: The new FL immunoassay for D-dimer is a user-friendly, precise, and reproducible platform of POCT in whole blood. abstract_id: PUBMED:29393288 Some features controlling the blood D-dimer level after resection of malignant brain glioma A high blood D-dimer level is often diagnosed in patients with malignant brain glioma (MBG), with 24% of these patients being detected with deep vein thrombosis of the leg and/or pulmonary embolism (PE). The cause of an elevated blood D-dimer level in most other cases remains unclear. 
The purpose of this study is to identify the features associated with an increased blood D-dimer level in patients after MBG resection, which may be used to improve the efficacy of adjuvant radiation therapy (ART). Results: The study included 50 patients. Three to four weeks after resection of malignant brain glioma (MBG), the blood D-dimer level was determined in patients immediately before the onset of ART. An increase in the blood D-dimer level more than by 0.25 μg/mL was detected in 78% of patients. More often, a high D-dimer level was detected in patients aged 60 years or more. In the same age group, an increase in the D-dimer level was significantly larger (p<0.05) than in younger patients (1.2 and 0.6 μg/mL, respectively). The degree of brain tumor malignancy did not affect the rate and value of an increase in the blood D-dimer level. Conclusion: Our findings indicate that an increased blood D-dimer level in patients without symptoms of venous thrombosis after craniotomy for MBG is more pronounced in patients aged 60 years and over. The degree of glioma malignancy has no significant effect on this indicator. abstract_id: PUBMED:38149149 ECG Features of Pulmonary Embolism in a Patient With Normal D-Dimer and Hypoxia. Pulmonary embolism is a life-threatening condition that requires urgent treatment. We present the case of a 76-year-old male referred to our medical team with dyspnoea, shortness of breath on exertion, and chest pain. Upon further questioning, the patient reported a two-week history of right-sided parasternal pleuritic chest pain without radiation. He denied any history of haemoptysis, calf swelling or pain, recent surgery, and reduced mobility. The patient had a medical history of bilateral cataracts, glaucoma, and hypertension. Clinical examination was unremarkable except for requiring 2L/minute supplemental oxygen to maintain an oxygen saturation of 94%, and blood tests were unremarkable, including a normal D-dimer. Chest radiography revealed no obvious pathological findings. However, the electrocardiogram showed a right bundle branch, sinus tachycardia, and an S1Q3T3 pattern. A computed tomography pulmonary angiogram confirmed pulmonary emboli within the right lower lobe segmental artery, extending into the bilateral basal segmental branch and posterior basal segmental branch. The patient was commenced on low molecular weight heparin initially followed by rivaroxaban 20 mg once daily. This case highlights the importance of having a high degree of suspicion for pulmonary embolism, and D-dimer is an important screening test that can be normal. abstract_id: PUBMED:32523928 D-dimer and C-reactive Protein Blood Levels Over Time Used to Predict Pulmonary Embolism in Two COVID-19 Patients. The diagnosis of pulmonary embolism is challenging in symptomatic COVID-19 patients since shortness of breath, chest pain, tachycardia, tachypnoea, fever, oxygen desaturation and high D-dimer blood levels might be features of both diseases. We present two COVID-19 patients in whom pulmonary embolism was suspected (and diagnosed) due to a discrepancy between an increase in D-dimer blood levels and a decrease in C-reactive protein blood levels over time. We believe that an opposite change in the blood levels of both biomarkers over time may be used as a novel method to predict pulmonary embolism in COVID-19 patients. 
Learning Points: The diagnosis of pulmonary embolism is challenging in COVID-19 patients since symptoms, signs and high D-dimer blood levels might be similar in both diseases. An increase in D-dimer blood levels and a decrease in C-reactive protein blood levels over time may be used as a novel method to predict pulmonary embolism in COVID-19 patients. abstract_id: PUBMED:17175010 Performance evaluation of a new rapid quantitative assay system for measurement of D-dimer in plasma and whole blood: PATHFAST D-dimer. D-dimer is an indicator for in vivo fibrin formation, reflecting the formation of fibrin crosslinked by factor XIIIa. D-dimer assays are frequently used in emergency situations, such as diagnosis of venous thrombosis and pulmonary embolism, or disseminated intravascular coagulation. In these conditions, short sample turnaround times are essential. The PATHFAST D-dimer assay allows rapid quantitative measurement of D-dimer in plasma and whole blood. The study shows an excellent correlation between whole blood and plasma measurement of D-dimer both in the high range and in the normal range. Intra-assay and inter-assay coefficients of variation (CV) were below 10%. The upper limit of normal (ULN = mean value measured in 100 samples from healthy blood donors + 2 x S.D.) was approximately 1 microg/ml FEU, using the assay-specific calibration. The maximal value measured in 20 replicates of calibrator 1 containing no D-dimer antigen was 0.00052 microg/ml FEU, and this is 10-fold lower than the declared detection limit of 0.005 microg/ml FEU. In conclusion, the PATHFAST D-dimer assay is the first automated fully quantitative D-dimer assay, which can use plasma and whole blood as sample materials in parallel. abstract_id: PUBMED:34618321 Performance Evaluation of the Quantitative Point-of-Care LumiraDx D-Dimer Test. Introduction: Fibrin degradation product D-dimer can be a valuable indicator for venous thromboembolism (VTE). The use of D-dimer testing in primary care settings can be limited by restricted access to laboratory services. This performance evaluation compares a quantitative, point-of-care (POC) D-dimer assay (LumiraDx D-Dimer Test) with a reference laboratory-based D-dimer assay. Methods: Plasma samples from patients presenting to secondary care in the UK, USA, and Germany were analyzed centrally using the LumiraDx D-Dimer Test and the reference test (bioMérieux VIDAS D-Dimer Exclusion II immunoassay). Method comparison used Passing-Bablok regression analysis with pre-specified equivalence criteria of r ≥ 0.9 and slope of 0.9-1.1. The NOVEL-3 study (NCT04375982) compared equivalency of fingerstick, venous blood (VB), and plasma samples from the same patient, tested at US primary care clinics next to the patient using the POC LumiraDx D-Dimer device. Measurements obtained from fingerstick and VB samples were compared with results from plasma samples, using Deming regression. The healthy reference range was determined using plasma samples of healthy volunteers, collected by commercial suppliers in Germany and the USA, which were analyzed centrally using the LumiraDx D-Dimer Test and the reference test. Results: The LumiraDx D-Dimer Test demonstrated agreement with the bioMérieux VIDAS D-Dimer Exclusion II immunoassay for plasma samples (r = 0.923, slope of 1.016, n = 1767). There was good agreement between fingerstick/VB samples and plasma samples (r = 0.980-0.986, n = 93) measured using the LumiraDx D-Dimer Test. Overall error rates were 1.8%.
The healthy reference range 90% percentile for D-dimer was calculated as 533 µg/l fibrinogen equivalent units (FEU). Conclusions: The quantitative LumiraDx D-Dimer Test is easy to use and can accurately measure D-dimer levels in a range of blood sample types, including fingerstick samples, which could improve assessment of VTE cases in community and hospital near-patient settings. abstract_id: PUBMED:35003984 Normal D-Dimer Plasma Level in a Case of Acute Thrombosis Involving Intramuscular Gastrocnemius Vein. Venous thromboembolism (VTE) is a major cause of morbidity and mortality among hospitalized patients. Studies have reported an incidence of deep venous thrombosis to be as high as 50%, especially after craniotomy. Several factors are involved in the alteration of the specificity and sensitivity of D-dimer testing. These include symptom duration, the extent of fibrinolytic and thrombosis activity, anticoagulant therapy, comorbidity associated with medical or surgical illness, cancer, inflammatory diseases, old age, postpartum, and pregnancy period, as well as previous VTE. Several studies have shown the high sensitivity of the D-dimer test (>95%) in pulmonary embolism or acute deep venous thrombosis. The cut-off value is usually within the 500 µg FEU/L range, ruling out acute VTE, especially in patients with low or intermediate clinical probability. Patients who present with a high D-dimer level may necessitate an intense diagnostic approach, the pretest probability notwithstanding. Herein, we present a case of a 52-year-old male patient who presented with a normal D-dimer level in deep venous thrombosis. abstract_id: PUBMED:36785755 Heterophilic antibodies leading to falsely positive D-dimer concentration in an adolescent. Background: We present the case of a 15-year-old adolescent with suspected pulmonary embolism and repeatedly elevated D-dimer levels. Key Clinical Question: We aim to determine the cause for elevated D-dimer levels in a patient without venous thromboembolism. Clinical Approach: When the D-dimer measurement was repeated with different assays, D-dimer levels were within the normal reference interval. Dilution series with assay diluent or low-affinity antibody blocking reagents either did not or only partially decreased the D-dimer value using the original reagent kit. Conclusion: Analyses suggested the presence of interfering heterophilic antibodies in patient plasma, a known phenomenon with immunoturbidimetric D-dimer assays, which is rarely described. Prior to drawing this conclusion, the patient underwent extensive diagnostic testing, which led to uncertainty and discomfort for the health care providers, the patient, and their family. abstract_id: PUBMED:21468171 The value of blood D-dimer test in the diagnosis of walk-in patients with venous thromboembolism. Venous thromboembolism (VTE) and related pulmonary thromboembolism are life-threatening diseases that require efficient diagnosis and clinical management. While the diagnosis and treatment of VTE in hospitalized patients has been extensively studied, less has been reported on walk-in patients with VTE. Here we report on four outpatients with VTE that were efficiently diagnosed using the blood D-dimer test and successfully treated. Answer: Normal blood D-dimer concentrations do not necessarily exclude the possibility of pulmonary embolism (PE). While D-dimer is often used to rule out PE, its accuracy can be less reliable in certain populations, such as cancer patients. 
For instance, a study at The University of Texas MD Anderson Cancer Center found that among 8023 cancer patients, 1156 (14%) had PE, but only 35 patients with PE (3%) had normal plasma D-dimer levels (PUBMED:31041652). This suggests that a normal D-dimer level does not completely rule out the presence of PE, especially in cancer patients. Moreover, a case report of a 76-year-old male with normal D-dimer levels confirmed the presence of PE through a computed tomography pulmonary angiogram (PUBMED:38149149). This case highlights the importance of maintaining a high degree of suspicion for PE even when D-dimer levels are normal. Additionally, another case report described a patient with acute thrombosis involving an intramuscular gastrocnemius vein who presented with a normal D-dimer level (PUBMED:35003984). This further supports the notion that normal D-dimer levels do not always exclude the presence of thrombotic events such as PE. It is also important to note that heterophilic antibodies can lead to falsely positive D-dimer concentrations, as seen in the case of a 15-year-old adolescent with suspected PE and repeatedly elevated D-dimer levels, which were later found to be normal when measured with different assays (PUBMED:36785755). In conclusion, while D-dimer is a useful biomarker for the initial clinical assessment of suspected PE, normal D-dimer levels do not definitively exclude the diagnosis of PE, and clinical judgment should be used in conjunction with diagnostic imaging and other clinical findings to accurately diagnose PE.
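Two of the assay abstracts above spell out the arithmetic behind a D-dimer decision threshold: the PATHFAST evaluation (PUBMED:17175010) defines the upper limit of normal as the healthy-donor mean plus two standard deviations, and the gastrocnemius-vein case report (PUBMED:35003984) cites the conventional exclusion cut-off of about 500 µg FEU/L. The Python sketch below reproduces both steps with invented healthy-donor values; as the answer above stresses, a result under the cut-off argues against VTE only in the right clinical context.

    from statistics import mean, stdev

    # Hypothetical healthy-donor D-dimer values in µg/L FEU (not data from the cited studies).
    healthy_donors = [220, 310, 180, 260, 400, 150, 330, 290, 240, 370]

    # Upper limit of normal as defined in the PATHFAST evaluation: mean + 2 x SD.
    uln = mean(healthy_donors) + 2 * stdev(healthy_donors)

    def classify(d_dimer_ug_per_l, cutoff=500.0):
        # ~500 µg FEU/L is the conventional exclusion threshold for acute VTE in patients
        # with low or intermediate pretest probability; it does not rule out PE in every case.
        return "below cut-off" if d_dimer_ug_per_l < cutoff else "above cut-off"

    print(round(uln))              # about 436 for this toy sample
    print(classify(420), classify(980))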
Instruction: "A Somali girl is Muslim and does not have premarital sex. Is vaccination really necessary? Abstracts: abstract_id: PUBMED:26293806 "A Somali girl is Muslim and does not have premarital sex. Is vaccination really necessary?" A qualitative study into the perceptions of Somali women in the Netherlands about the prevention of cervical cancer. Introduction: Participation in Human Papillomavirus (HPV) vaccination and Papanicolaou Screening (Pap smears) is low among ethnic minorities in the Netherlands and hardly any information is available about the cervical cancer prevention methods of Somali women living in the diaspora. This qualitative study, based on the Health Belief Model (HBM) and an intersectionality-based framework, explores the perceptions of Somali women living in the Netherlands regarding measures to prevent cervical cancer. Methods: Semi-structured interviews have been conducted with young Somali women aged 17-21 years (n = 14) and Somali mothers aged 30-46 years (n = 6). Two natural group discussions have been conducted with 12 and 14 Somali mothers aged 23-66 years. The collected data has been analyzed thematically for content. Results: In this study, we have identified perceived barriers to the use of preventive measures across three major themes: (1) Somali women and preventive healthcare; (2) Language, knowledge, and negotiating decisions; and (3) Sexual standards, culture, and religion. Many issues have been identified across these themes, e.g., distrust of the Dutch health care system or being embarrassed to get Pap smears due to Female Genital Mutilation (FGM) and having a Dutch, male practitioner; or a perceived low susceptibility to HPV and cancer because of the religious norms that prohibit sex before marriage. Conclusions: Current measures in the Netherlands to prevent women from developing cervical cancer hardly reach Somali women because these women perceive these kinds of preventative measures as not personally relevant. Dutch education strategies about cervical cancer deviate from ways of exchanging information within the Somali community. Teachers can provide culturally sensitive information to young Somali women in schools. For Somali mothers, oral education (e.g., poetry or theater) about the Dutch health care system and men's roles in HPV transmission may be useful. An intersectional approach, grounded in the HBM, is recommended to promote equal access to preventive health care for Somali women. abstract_id: PUBMED:32967548 Premarital Counseling Practices among Christian and Muslim Lay Counselors in Ghana. We examined premarital counseling services offered by Christian and Muslim lay counselors in Ghana. Participants including clergy, Islamic clerics, and laity practicing in four urban centers were interviewed. Thematic analysis showed that common issues covered include medical screening, beliefs and values, expectations, partner knowledge, roles and duties, sex, parenthood, financial management, communication, and conflicts. The findings offer important insight into religious premarital counseling in Ghana and contribute to the global literature on premarital counseling. abstract_id: PUBMED:30900079 Testing a Religiously Tailored Intervention with Somali American Muslim Women and Somali American Imams to Increase Participation in Breast and Cervical Cancer Screening. Somali American women have low rates of breast and cervical screening. 
This research aimed to test the feasibility and impact of religiously tailored workshops involving Somali American Muslim women and male imams to improve intention to undergo breast or cervical cancer screening. Religiously tailored workshops addressing cancer screening (each approximately 3 h in length) were conducted with 30 Somali American women and 11 imams. Pre- and post-test surveys measured attitudes toward screening, screening intention, and workshop experience. The workshops were feasible, and both the women and the imams found the workshops enjoyable as well as informative. The discussions of religiously tailored messages had a positive impact on attitudes toward cancer screening, and, for the women, a positive impact on intention to screen. Religiously tailored messages can be an important community asset for engaging Somali American Muslim women around the value of breast and cervical cancer screening. abstract_id: PUBMED:34288835 Challenging the stigma of premarital sex in urban Afghanistan. Premarital sex is normatively unacceptable in Afghanistan, yet rapid social and cultural transformation in the country may be changing these traditional norms. In dialogue with cross-national scholarship, we examine attitudes toward premarital sex and experience of premarital sexual behaviours among urban Afghan youth. We use data from 1256 never married individuals aged 15-29 surveyed in ten Afghan cities. The results show that respondents, on average, have moderately liberal attitudes toward premarital sex and that almost one quarter of them had engaged in either sexual foreplay or penetrative intercourse. Multivariable results indicate that premarital sexual attitudes and behaviours were significantly associated with several individual-level characteristics, family and intergenerational relationships, and social ties and interactions. At the same time, instructive gender variations emerge. The findings illustrate the dynamics of premarital sexual relations in this and similar contexts where such relations are traditionally stigmatised but multifaceted, societal changes increasingly challenge this stigma. abstract_id: PUBMED:23626920 The Effect of Premarital Sex on Sexually Transmitted Infections (STIs) and High Risk Behaviors in Women. This research aimed to study the effect of premarital sex on sexually transmitted infections (STIs) and high risk behaviors among women in sub-Saharan Africa. It included 1393 women randomly selected from the Moshi urban district of northern Tanzania. Participants' demographic and socio-demographic characteristics, alcohol use, condom use, number of partners, symptoms of STIs and age at first sex and marriage were obtained. Moreover, blood and urine samples were tested for HIV-1, HSV-2, syphilis, chlamydia, gonorrhea, trichomonas and Mycoplasma genitalium infections. The average duration of premarital sex in the study participants was 1.66 years (SD of 2.61 years). Women with longer duration of premarital sex had higher odds of HIV-1, HSV-2 and other STIs. Moreover, women with longer duration of premarital sex were more likely to report multiple sexual partners. These findings highlight the importance of a lengthy period of premarital sex as a public health issue. STIs prevention programs in sub-Saharan Africa should address factors leading to a longer period of premarital sex in women. abstract_id: PUBMED:38267877 Sex differences in the association of childhood sexual abuse severity with premarital sex among Chinese college students in Luzhou, China. 
Purpose: The adverse health consequences of premarital sex and childhood sexual abuse (CSA) are both global public health problems. Based on a random sample of college students from a Chinese city, this study investigated the relationship between CSA severity and premarital sex among students, focusing on sex differences. Methods: A total of 2722 college students from 4 schools in Luzhou were recruited by multistage random sampling. Self-administered questionnaires were used to measure CSA experiences and premarital sex. Binary logistic regression analyses were conducted to analyse the relationship between CSA and premarital sex. Results: The prevalence of CSA was 9.39%, and that of mild, moderate and severe CSA was 4.04%, 2.90% and 2.46%, respectively. The premarital sex reporting rate was 22.42%. After adjusting for confounding variables, CSA was positively associated with premarital sex. Notably, a cumulative effect of CSA on premarital sex was observed among students. Further stratification analyses showed that males who experienced CSA had a higher premarital sex rate than females, and this sex difference was also observed among students with different CSA severities. Conclusion: CSA and its severity were associated with premarital sex among college students. Furthermore, this association was stronger for males than females. Therefore, it is important to emphasize CSA prevention, especially for boys. These findings can promote understanding of the effects of CSA on premarital sex, and CSA prevention and intervention strategies should consider CSA severity and sex differences. abstract_id: PUBMED:3086927 Premarital rubella screening program: from identification to vaccination of susceptible women in the state of Hawaii. Premarital rubella screening programs are effective in identifying women of childbearing age who are susceptible to rubella. There is concern, however, that once identified, susceptible women may not be subsequently vaccinated. Therefore, a study was conducted to test the effectiveness of a motivational letter mailed at the time of serologic testing. Rubella-susceptible women identified by a premarital screening program were randomly divided into two groups: one group of 134 received a motivational letter and one group of 143 did not. Three months later, 52 percent of the women in the motivational group had been vaccinated, compared with only 24 percent (P less than 0.05) of the women in the control group. In this study, a motivational letter was found to lead to a significant increase in rubella vaccination rates among susceptible women. With the increasing emphasis on rubella vaccination programs for adult women, active approaches are necessary to identify and vaccinate susceptible women. abstract_id: PUBMED:30535788 Childhood vaccination in Kenya: socioeconomic determinants and disparities among the Somali ethnic community. Objectives: Kenya has a significant refugee population, including large numbers of Somali migrants. This study examines the vaccination status of Kenyan children and sociodemographic predictors of vaccination, including Somali ethnicity. Methods: Using the 2014 Kenyan Demographic and Health Survey, we calculated the proportion of non-vaccinated, under-vaccinated, and fully vaccinated children, defining full vaccination as one dose Bacille Calmette-Guerin, three doses polio, three doses pentavalent, and one dose measles. 
We assessed associations among various factors and vaccination status using multinomial logistic regression and explored the effect of Somali ethnicity through interaction analysis. Results: The study sample comprised 4052 children aged 12-23 months, with 79.4% fully, 19.0% under-, and 1.6% non-vaccinated. Among Somalis, 61.9% were fully, 28.7% under-, and 9.4% non-vaccinated. Somalis had significantly greater odds of under- and non-vaccination than the Kikuyu ethnic group. Wealth and birth setting were associated with immunization status for Somalis and non-Somalis. Conclusions: Disparities persist in pediatric vaccinations in Kenya, with Somali children more likely than non-Somalis to be under-vaccinated. Health inequalities among migrants and ethnic communities in Kenya should be addressed. abstract_id: PUBMED:32838636 Knowledge, attitude, perception of Muslim parents towards vaccination in Malaysia. Malaysia is a predominant Muslim country and the recent surge in vaccine-preventable disease enticed us to conduct a survey to measure the Knowledge, Attitude and Perception of Muslim parents toward vaccination process. The data were collected under four segments such as demography, Knowledge, Attitude and Perception. The questionnaire had high internal consistency (0.823) for Cronbach's alpha. The sociodemographic determinants such as marital status (OR = 1.12; 0.91-1.38;p < .05), education level (college OR = 1.35; 1.12-1.64;p < .05, secondary OR = 1.22; 1.01-1.47;p < .05) and the occupation of parents (OR = 1.25; 1.07-1.45;p < .05) were observed affecting the Knowledge score significantly. Majority of Malaysian Muslim parents believed that "vaccine is not prohibited in Islam" and most of them also rejected the belief that "all vaccines are non halal and hence should be avoided". None of the sociodemographic determinants significantly affected the Attitude and Perception score of the Muslim parents. It was observed that the Attitude and Perception score did not establish any association with any of the socio-demographic determinants and hence the null hypothesis that Malaysian Muslim parents had positive Attitude and good Perception toward vaccination process was accepted. abstract_id: PUBMED:27587933 Premarital Sex in the Last Twelve Months and Its Predictors among Students of Wollega University, Ethiopia. Background: Premarital sex increases the risk of unintended pregnancy and sexually transmitted infections including HIV if unprotected and contraception is not used. Thus, the objective of this study was to assess premarital sex in the last twelve months and its predictors among regular undergraduate students of Wollega University. Methods: A cross-sectional survey using pretested, structured questionnaire was conducted on a total of 704 regular undergraduate students of Wollega University from February to March, 2014. We used multistage sampling technique to recruit study participants. Binary and multivariable logistic regressions were performed using SPSS version 20 to assess predictors of premarital sex. Statistical significance was determined through a 95% confidence level. Results: Wollega University youths who had premarital sex in the last twelve months were 28.4%; 55.5% of them did not use condom during last sex while 31.3% engaged in multiple sex. 
Being male [Adjusted Odds Ratio (AOR)(95% Confidence Interval(CI))=2.7(1.58-4.75)], age 20-24 years [AOR(95%CI)=2.8(1.13-7.20)], training on how to use condom [AOR(95%CI)=1.7(1.17-2.46)], being tested for HIV [AOR(95%CI)=2.3(1.48-3.53)], using social media frequently [AOR(95%CI)=1.8(1.14-2.88)], having comprehensive knowledge of HIV [AOR(95% CI)=1.5(1.01-2.10)], alcohol use [AOR (95%CI)=2.2(1.31-3.56)] were associated with increased odds of premarital sex in the last twelve months. Conclusion: Nearly one-third of regular undergraduate students of the university were engaged in premarital sex in the last twelve months. Being male, using social media frequently and alcohol use were associated with increased odds of premarital sex in the stated period. Thus, higher institutions have to deliver abstinence messages alongside information about self-protection. Answer: The question of whether vaccination is necessary for a Somali girl who is Muslim and does not engage in premarital sex can be addressed by considering the broader context of health and preventive care. The study on Somali women in the Netherlands highlights that the perception of low susceptibility to HPV and cancer due to religious norms prohibiting premarital sex can be a barrier to the use of preventive measures like HPV vaccination (PUBMED:26293806). However, it is important to note that HPV can be transmitted through various forms of intimate contact, not just penetrative sex, and can also be transmitted by a partner who has had previous partners. Therefore, vaccination can still be an important preventive measure. Moreover, the research on religiously tailored interventions with Somali American Muslim women and imams demonstrates that discussions around religiously tailored messages can have a positive impact on attitudes toward cancer screening and intention to screen (PUBMED:30900079). This suggests that culturally sensitive education that aligns with religious beliefs can increase the acceptance of preventive health measures, including vaccination. In the context of cervical cancer prevention, HPV vaccination is recommended as it can protect against the types of HPV that most commonly cause cervical cancer. The vaccination is most effective when given before exposure to HPV, which is why it is often recommended for adolescents before they become sexually active. However, even for individuals who abstain from premarital sex, vaccination can provide protection for the future when they may become sexually active within marriage. In conclusion, while a Somali girl who does not have premarital sex may perceive herself as having a low risk of HPV infection, vaccination can still be a valuable preventive measure against cervical cancer. Education and communication strategies that are culturally and religiously sensitive can help to increase the acceptance and uptake of vaccination among Somali and other Muslim communities (PUBMED:26293806; PUBMED:30900079).
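Several of the abstracts above report crude and adjusted odds ratios with 95% confidence intervals from logistic regression (for example the AORs for premarital sex). As a purely illustrative aside, the sketch below shows how a crude odds ratio and its Woolf (log-method) confidence interval are computed from a 2x2 table in Python; the counts are hypothetical and not taken from any of the cited studies, and the adjusted odds ratios quoted above would additionally require a multivariable regression model.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf (log-method) 95% CI for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    Assumes no zero cells."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts: 90 of 250 students reporting CSA also report premarital
# sex, versus 550 of 2450 students without CSA (illustrative numbers only).
or_, lower, upper = odds_ratio_ci(90, 160, 550, 1900)
print(f"OR = {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")

An adjusted estimate, like the AORs quoted in the abstracts, would instead come from fitting a logistic regression with the confounders included as covariates.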
Instruction: Are industry financial ties associated with greater scholarly impact among academic otolaryngologists? Abstracts: abstract_id: PUBMED:27114324 Are industry financial ties associated with greater scholarly impact among academic otolaryngologists? Objectives/hypothesis: In response to concerns regarding physician-industry relationships, the Physician Sunshine Act of 2010 was passed in an effort to increase transparency and accountability. Our objective was to determine whether there is an association between industry support and scholarly impact among academic otolaryngologists. Methods: Faculty listings, academic rank, and fellowship training of academic otolaryngologists were accessed from departmental websites. H-indices, number of publications, and publication range (years) of each individual were calculated using the Scopus database. The Center for Medicare and Medicaid Services Open Payments database was used to determine industry payments during 2014 to each otolaryngologist. Results: Of 1,515 otolaryngologists, 65.4% received industry support and 15.8% received support > $1,000. No difference in impact, as measured by the h-index, was noted upon comparison of those receiving support and those who did not (P > 0.05). Individuals receiving > $1,000 had higher h-indices and total publications than those receiving < $1,000 or receiving no support (H = 17.8 vs. 10.9, P < 0.0001), even upon controlling for academic seniority. Upon organization by subspecialty, individuals receiving > $1,000 had greater scholarly impact among most subspecialties. Industry contributions increased with advancing experience. Conclusion: Receiving industry contributions greater than $1,000 is associated with greater scholarly impact among academic otolaryngologists. In a smaller surgical specialty, such as otolaryngology-head and neck surgery, direct industry research support-as well as indirect contributions through the support of educational programs, consulting, and other expenses-potentially impacts scholarly discourse; these findings do not reflect a causal relationship and may require further characterization. Level Of Evidence: NA Laryngoscope, 127:87-94, 2017. abstract_id: PUBMED:28377248 Is Industry Funding Associated with Greater Scholarly Impact Among Academic Neurosurgeons? Objective: To determine the relationship between industry payments and scholarly impact among academic neurosurgeons. Methods: Faculty names and academic rank data were obtained from department websites, bibliometric data were obtained from the Scopus database, and industry payment data were obtained from the Center for Medicare and Medicaid Services open payments database (openpayments.cms.gov). The h-index was used to estimate scholarly impact. Payments were classified as "general," "associated research," and "research payments." Subgroup analyses were done for academic rank, fellowship training, and sex. Results: Among 1008 academic neurosurgeons, scholarly impact was greater among individuals receiving associated research industry support compared with those not receiving it. Scholarly impact also was greater among individuals who received more than $10,000 of any type of industry support compared with individuals who received less than that or no payment. This association also was seen in fellowship-trained surgeons. Female neurosurgeons were less likely than male neurosurgeons to get industry funding and were likely to get less funding. 
Conclusions: There is a strong association between associated research funding from industry and scholarly impact among academic neurosurgeons. It's unclear whether this association is a result of funding facilitating more research projects that eventually lead to more high-impact publications, if industry is providing more funding to academic neurosurgeons with greater scholarly impact, or whether it represents intrinsic academic activity among a group of neurosurgeons who are more likely to be academically productive and procure funding from all potential sources to increase this activity. abstract_id: PUBMED:28570741 Association of Gender With Financial Relationships Between Industry and Academic Otolaryngologists. Importance: Gender disparities continue to exist in the medical profession, including potential disparities in industry-supported financial contributions. Although there are potential drawbacks to industry relationships, such industry ties have the potential to promote scholarly discourse and increase understanding and accessibility of novel technologies and drugs. Objectives: To evaluate whether gender disparities exist in relationships between pharmaceutical and/or medical device industries and academic otolaryngologists. Design, Setting, And Participants: An analysis of bibliometric data and industry funding of academic otolaryngologists. Main Outcomes And Measures: Industry payments as reported within the CMS Open Payment Database. Methods: Online faculty listings were used to determine academic rank, fellowship training, and gender of full-time faculty otolaryngologists in the 100 civilian training programs in the United States. Industry contributions to these individuals were evaluated using the CMS Open Payment Database, which was created by the Physician Payments Sunshine Act in response to increasing public and regulatory interest in industry relationships and aimed to increase the transparency of such relationships. The Scopus database was used to determine bibliometric indices and publication experience (in years) for all academic otolaryngologists. Results: Of 1514 academic otolaryngologists included in this analysis, 1202 (79.4%) were men and 312 (20.6%) were women. In 2014, industry contributed a total of $4.9 million to academic otolaryngologists. $4.3 million (88.5%) of that went to men, in a population in which 79.4% are male. Male otolaryngologists received greater median contributions than did female otolaryngologists (median [interquartile range (IQR)], $211 [$86-$1245] vs $133 [$51-$316]). Median contributions were greater to men than women at assistant and associate professor academic ranks (median [IQR], $168 [$77-$492] vs $114 [$55-$290] and $240 [$87-$1314] vs $166 [$58-$328], respectively). Overall, a greater proportion of men received industry contributions than women (68.0% vs 56.1%,). By subspecialty, men had greater median contribution levels among otologists and rhinologists (median [IQR], $609 [$166-$6015] vs $153 [$56-$336] and $1134 [$286-$5276] vs $425 [$188-$721], respectively). Conclusions And Relevance: A greater proportion of male vs female academic otolaryngologists receive contributions from industry. These differences persist after controlling for academic rank and experience. 
The gender disparities we have identified may be owing to men publishing earlier in their careers, with women often surpassing men later in their academic lives, or as a result of previously described gender disparities in scholarly impact and academic advancement. abstract_id: PUBMED:24357395 AAO-HNSF CORE grant acquisition is associated with greater scholarly impact. Objective: To determine whether receiving funding from the American Academy of Otolaryngology--Head and Neck Surgery Foundation (AAO-HNSF) Centralized Otolaryngology Research Efforts (CORE) grant program is associated with career choice (in terms of practice setting) and scholarly impact. Study Design And Setting: Examination of bibliometrics among academic otolaryngologists, including CORE grants funding history. Methods: An Internet search was conducted to determine the current practice setting and, for academic otolaryngologists, academic rank of individuals receiving CORE grants since 1985. The Scopus database was used to determine scholarly impact, as measured by the h-index, and publication experience (in years) of these practitioners along with a "control" cohort of nonfunded academic otolaryngologists. Results: Of 432 unique individuals receiving CORE grant funding since 1985, 44.4% are currently academicians. This cohort had a higher h-index (mean, 11.9; median, 10; interquartile range [IQR], 6-18) than their non-CORE grant-funded academic peers (mean, 9.2; median, 7; IQR, 3-13; P = .002) and colleagues who are not currently in academic practice (mean, 4.4; median, 3; IQR, 0-6; P < .001). CORE grant-funded academic otolaryngologists had a statistically higher scholarly impact on controlling for academic rank and among practitioners with greater than 10 years of publication experience. No statistical differences in academic promotion patterns were noted between those with and those without a CORE grant funding history. Conclusions: Procurement of an AAO-HNSF CORE grant is associated with greater scholarly impact, as measured by the h-index. This relationship persists among practitioners with more than 10 years of publication experience, as well as upon comparison of CORE grant-funded and non-CORE grant-funded otolaryngologists at all academic ranks. Practitioners awarded these grants may be more likely to go into and remain in academic practice. abstract_id: PUBMED:16279755 Attitudes of academic and clinical researchers toward financial ties in research: a systematic review. Involvement of industry in academic research is widespread and associated with favorable outcomes for industry. The objective of this study was to review empirical data on the attitudes of researchers toward industry involvement and financial ties in research. A review of the literature for quantitative data from surveys on the attitudes of researchers to financial ties in research, reported in English, resulted in the 17 studies included. Review of these studies revealed that investigators are concerned about the impact of financial ties on choice of research topic, research conduct and publication, but this concern is less among investigators already involved with industry. Researchers approve of industry collaboration and financial ties when the ties are indirectly related to the research, disclosure is upfront, and results and ideas are freely publicized. However, their trust in disclosure as a way to manage conflicts may reveal a lack of awareness of the actual impact of financial incentives on themselves and other researchers. 
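The payment and bibliometric comparisons in this record (median contributions with interquartile ranges, h-index distributions) involve heavily skewed data, which is why medians rather than means are reported. A rank-based test such as the Mann-Whitney U is the usual way to compare two such groups; the sketch below is illustrative only, with made-up payment values rather than the published figures.

from scipy.stats import mannwhitneyu

# Hypothetical per-physician industry payment totals in USD; the published
# analyses report only medians and IQRs, so these raw values are invented.
payments_group_a = [211, 86, 1245, 540, 95, 3200, 168, 77]
payments_group_b = [133, 51, 316, 114, 290, 60, 166]

u_stat, p_value = mannwhitneyu(payments_group_a, payments_group_b,
                               alternative="two-sided")
print(f"U = {u_stat}, two-sided p = {p_value:.3f}")

Because payment distributions have long right tails, a rank-based comparison is far less sensitive to a few very large consulting or royalty payments than a t-test would be.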
abstract_id: PUBMED:32006547 Gender Gap in Industry Relationships and Scholarly Impact Among Academic Urologists in the United States. Objectives: To examine the distribution of industry payments to male and female academic urologists and the relationship between industry funding, academic rank, and scholarly impact. Material And Methods: Academic urologists from 131 programs with publicly available websites were compiled. Gender, rank, fellowship training, and scholarly impact metrics were recorded. Data from the 2016 Centers for Medicare and Medicaid Services Open Payments database were paired with faculty names. Comparisons were made using Fisher's Exact, Wilcoxon Rank Sum, and Spearman's Rank-Order tests. Multivariable logistic regression modeling identified predictors of receiving payments in the top quintile. Results: Among 1,657 academic urologists, males comprised 84%. While there were no gender differences in the number of urologists listed in the Open Payments Database, males received more total funding (P < .001) and higher median general payments per capita (P < .03). Males also received higher proportions of research funding (P = .002), speaker fees (P = .03), education fees (P = .03) and higher median consulting fees (P = .003). Overall, males had higher scholarly impact (P < .001), which correlated with total industry payments (rho = 0.27, P < .001). Predictors of accepting the top quintile payments include male gender, associate professorship and H-index score ≥10. Conclusion: Most academic urologists accepted at least one industry payment in 2016, but males received more funding than females. There is a positive correlation between total industry payments, H-index, and total publications. More research is needed to understand why gender and scholarly productivity are associated with higher payouts. This is another important area that may influence career advancement and compensation for female urologists. abstract_id: PUBMED:34881644 Trends in Academic Achievement Within Otolaryngology: Does Fellowship Training Impact Research Productivity? Background: Scholarly productivity and research output vary among different subspecialties. The h-index was developed as a more wholesome metric that measures an author's contribution to literature. Objective: Through a web-based cross-sectional analysis, we investigated the differences in scholarly impact and influence of both fellowship and nonfellowship-trained academic otolaryngologists in the United States. A secondary objective was to further understand the output among the larger fellowship fields. Methods: A cross-sectional analysis was performed for active faculty otolaryngologists. A total of 1704 otolaryngologists were identified as faculty in residency training programs across the United States. Their h-index and publication data were gathered using the Scopus database. The data were obtained in August 2019 and analysis occurred in January 2020. Results: Head and neck surgical faculty (25.5%) had the highest representation with fellowship experience. Among all faculty, there was no statistical difference in the overall average h-index scores when comparing faculty that had fellowship training with those who did not (12.6 and 12.1, respectively, P = .498). Rhinologists had the highest publication output per year at 3.90. Among fellowship-trained faculty, the highest average h-index and total publications were seen in head & neck surgery, while facial plastics had the lowest averages (P < .001). 
Conclusions: In this study, fellowship-trained faculty had a greater but not significant scholarly impact than nonfellowship faculty. Furthermore, there were significant variations in output among the various subspecialties of otolaryngology. Growing fields, as academic rhinology, are continuing to flourish in robust research productivity and output. This study further demonstrates the potential, growing influence of fellowship training on research involvement and academic advancement within the otolaryngology subspecialties. abstract_id: PUBMED:22991270 Is NIH funding predictive of greater research productivity and impact among academic otolaryngologists? Objectives/hypothesis: The h-index is an accurate and reliable indicator of scholarly productivity that takes into account relevance, significance, and influence of research contributions. As such, it is an effective, objective bibliometric that can be used to evaluate academic otolaryngologists for decisions regarding appointment and advancement. In this study, we evaluate the impact of NIH funding on scholarly productivity in otolaryngology. Study Design: Analysis of bibliometric data of academic otolaryngologists. Methods: Funding data for the 20 otolaryngology departments with the largest aggregate total of NIH grants for the fiscal years (FY) 2011 and 2012 was obtained using the National Institutes of Health Research Portfolio Online Reporting Tools Expenditures and Reports (RePORTER) Database. H-indices were calculated using the Scopus online database, and then compared to funding data at both the departmental and individual level. Results: Faculty members in otolaryngology departments who received NIH funding had significantly greater research productivity and impact, as measured by the h-index, than their nonfunded peers. H-indices increased with greater NIH funding levels, and investigators with MD degrees tended to have higher mean NIH funding levels than those with PhDs. While there was no correlation between average h-index and NIH funding totals at the level of departments, there was greater correlation upon examination of NIH funding levels of individual investigators. Conclusions: The h-index has a strong relationship with, and may be predictive of, grant awards of NIH-funded faculty members in otolaryngology departments. This bibliometric may be useful in decisions regarding appointment and advancement of faculty members within academic otolaryngology departments. abstract_id: PUBMED:24356877 Regional differences in gender promotion and scholarly productivity in otolaryngology. Objectives: To identify whether regional differences exist in gender disparities in scholarly productivity and faculty rank among academic otolaryngologists. Study Design And Setting: Academic otolaryngologists' bibliometric data analyses. Methods: Online faculty listings from 98 otolaryngology departments were organized by gender, academic rank, fellowship training status, and institutional location. The Scopus database was used to assess bibliometrics of these otolaryngologists, including the h-index, number of publications, and publication experience. Results: Analysis included 1127 otolaryngologists, 916 men (81.3%) and 211 women (18.7%). Female faculty comprised 15.4% in the Midwest, 18.8% in the Northeast, 21.3% in the South, and 19.0% in the West (P = .44). Overall, men obtained significantly higher senior academic ranks (associate professor or professor) compared to women (59.8% vs. 40.2%, P < .0001). 
Regional gender differences in senior faculty were found in the South (59.8% men vs. 37.3% women, P = .0003) and Northeast (56.4% men vs. 24.1% women, P < .0001) with concomitant gender differences in scholarly impact, as measured by the h-index (South, P = .0003; Northeast, P = .0001). Among geographic subdivisions, female representation at senior ranks was lowest in the Mid-Atlantic (22.0%), New England (30.8%), and West South Central (33.3%), while highest in Pacific (60.0%) and Mountain (71.4%) regions. No regional gender differences were found in fellowship training patterns (P-values > .05). Conclusion: Gender disparities in academic rank and scholarly productivity exist most notably in the Northeast, where women in otolaryngology are most underrepresented relative to men at senior academic ranks and in scholarly productivity. abstract_id: PUBMED:27578559 Does formal research training lead to academic success in otolaryngology? Objectives/hypothesis: To evaluate whether formalized research training is associated with higher researcher productivity, academic rank, and acquisition of National Institutes of Health (NIH) grants within academic otolaryngology departments. Methods: Each of the 100 civilian otolaryngology program's departmental websites were analyzed to obtain a comprehensive list of faculty members credentials and characteristics, including academic rank, completion of a clinical fellowship, completion of a formal research fellowship, and attainment of a doctorate in philosophy (PhD) degree. We also recorded measures of scholarly impact and successful acquisition of NIH funding. Results: A total of 1,495 academic physicians were included in our study. Of these, 14.1% had formal research training. Bivariate associations showed that formal research training was associated with a greater h-index, increased probability of acquiring NIH funding, and higher academic rank. Using a linear regression model, we found that otolaryngologists possessing a PhD had an associated h-index of 1.8 points higher, and those who completed a formal research fellowship had an h-index of 1.6 points higher. A PhD degree or completion of a research fellowship was not associated with a higher academic rank; however, a higher h-index and previous acquisition of an NIH grant were associated with a higher academic rank. The attainment of NIH funding was three times more likely for those with a formal research fellowship and 8.6 times more likely for otolaryngologists with a PhD degree. Conclusion: Formalized research training is associated with academic success in otolaryngology. Such dedicated research training accompanies greater scholarly impact, acquisition of NIH funding, and a higher academic rank. Level Of Evidence: NA Laryngoscope, 127:E15-E21, 2017. Answer: Yes, industry financial ties are associated with greater scholarly impact among academic otolaryngologists. A study found that receiving industry contributions greater than $1,000 is associated with greater scholarly impact, as measured by the h-index, among academic otolaryngologists. This association was observed even after controlling for academic seniority and was consistent across most subspecialties. The study suggests that direct industry research support, as well as indirect contributions through the support of educational programs, consulting, and other expenses, potentially impacts scholarly discourse. 
However, the authors note that these findings do not reflect a causal relationship and may require further characterization (PUBMED:27114324). Additionally, gender disparities were identified in the relationships between industry and academic otolaryngologists, with a greater proportion of male otolaryngologists receiving contributions from industry compared to female otolaryngologists. These differences persisted after controlling for academic rank and experience (PUBMED:28570741). It is important to note that while there is an association between industry funding and scholarly impact, the nature of this relationship is complex and multifaceted. The studies do not necessarily imply causation, and further research is needed to fully understand the dynamics at play.
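Since the h-index is the scholarly-impact measure used throughout this record, a minimal sketch of its computation may be helpful: it is the largest h such that the author has h papers with at least h citations each. The citation counts below are invented for illustration.

def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-paper citation counts for one author
print(h_index([25, 19, 12, 9, 7, 7, 3, 1, 0]))  # prints 6

In practice the studies above pulled these counts from the Scopus database rather than computing them by hand.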
Instruction: Is a laparoscopic approach useful for treating complications after primary laparoscopic colorectal surgery? Abstracts: abstract_id: PUBMED:23805353 Synchronous resections of primary colorectal tumor and liver metastasis by laparoscopic approach. Liver metastasis of colorectal cancer is common. Resection of solitary tumors of primary and metastatic colorectal cancer can have a favorable outcome. Open resection of primary colorectal tumor and liver metastasis in one operation or in separate operations is currently common practice. Reports have shown that synchronous resections do not jeopardize short or long-term surgical outcomes and that this is a safe and effective approach in open surgery. The development of laparoscopic colorectal surgery and laparoscopic hepatectomy has made a minimally invasive surgical approach to treating colorectal cancer with liver metastasis feasible. Synchronous resections of primary colorectal tumor and liver metastasis by laparoscopy have recently been reported. The efficacy and safety of laparoscopic colorectal resection and laparoscopic hepatectomy have been proven separately but synchronous resections by laparoscopy are in hot debate. As it has been shown that open resection of primary colorectal tumor and liver metastasis in one operation results in an equally good short-term outcome when compared with that done in separate operations, laparoscopic resection of the same in one single operation seems to be a good option. Recent evidence has shown that this new approach is a safe alternative with a shorter hospital stay. Large scale randomized controlled trials are needed to demonstrate the effectiveness of this minimally invasive approach. abstract_id: PUBMED:19279423 Is a laparoscopic approach useful for treating complications after primary laparoscopic colorectal surgery? Purpose: Although the use of laparoscopy for the management of postoperative complications has been previously well documented for different pathologies, there is scarce information regarding its use after laparoscopic colorectal surgery. Methods: Data were prospectively collected from all patients undergoing laparoscopic colorectal surgery between June 2000 to October 2007. Patients were divided into two groups according to the approach used for the reoperation: laparoscopy (Group I) or laparotomy (Group II). Data were statistically analyzed by using Student's t-test and chi-squared test. Results: In all, 510 patients were analyzed. Twenty-seven patients (5.2 percent), 14 men and 13 women (men/women Group I: 10/7 vs. Group II: 4/6; P = not significant (NS)), required a second surgery because of postoperative complications (Group I: 17 (63 percent); Group II: 10 (37 percent)). Mean age was 60 +/- 17 years (Group I: 61.7 +/- 17.7 vs. Group II: 57.1 +/- 16 years; P = NS). Fifteen patients (55.5 percent) had anastomotic leaks (Group I 13/17 (76.5 percent) vs. Group II 2/13 (15 percent); P = 0.004). The were no differences between the groups regarding the length of stay or postoperative complications (Group I: 11.9 +/- 9.6 vs. Group II: 18.1 +/- 19.7 days: P = NS; Group I: 1 vs. Group II: 3; P = NS). Conclusions: Laparoscopic approach is a useful tool for treating complications after laparoscopic colorectal surgery, especially anastomotic leaks. Randomized, controlled trials are necessary to validate these findings. abstract_id: PUBMED:23582883 Re-laparoscopy in the diagnosis and treatment of postoperative complications following laparoscopic colorectal surgery. 
Background: Laparoscopic colorectal surgery has increasingly become the standard of care in the management of both benign and malignant colorectal disease. We herein describe our experience with laparoscopy in the management of complications following laparoscopic colorectal surgery. Methods: Between November 2010 and July 2012, data were prospectively collected for all patients requiring surgical intervention for colorectal cancer. This was performed by a full-time colorectal cancer data manager. Results: A total of 203 patients had surgery for colorectal cancer during this period, of which 154 (75.9%) were performed laparoscopically and 49 (24.1%) by open surgery. Ten patients (4.9%) underwent surgery for complications, of which 7 followed laparoscopic surgery. Two of these 7 patients had an exploratory laparotomy due to abdominal distension and haemodynamic instability. Laparoscopic surgical intervention was successful in diagnosing and treating the remaining 5 patients. Three of these patients developed small bowel obstruction which was managed by re-laparoscopy, while in 2 patients there was a significant suspicion of an anastomotic leakage despite appropriate diagnostic imaging, which was ruled out at laparoscopy. Conclusions: Laparoscopy can frequently be used to diagnose and treat complications following laparoscopic colorectal surgery. This is a rarely described benefit of laparoscopic colorectal surgery and allows the advantages associated with the laparoscopic approach to be maintained. abstract_id: PUBMED:29290377 Reoperation after laparoscopic colorectal surgery. Does the laparoscopic approach have any advantages? Introduction: The laparoscopic approach in colorectal complications is controversial because of its difficulty. However, it has been proven that it can provide advantages over open surgery. The aim of this study is to compare the laparoscopic approach in reoperations for complications after colorectal surgery with the open approach, taking into account the severity of the patient prior to reoperation. Methods: Patients who underwent laparoscopic colorectal surgery from January 2006 to December 2015 were retrospectively reviewed. Patients requiring urgent surgical procedures for complications in the postoperative period were divided into two groups: laparoscopic surgery (LS) and open surgery (OS). To control for clinical severity prior to reoperation, the Mannheim Peritonitis Index (MPI) was calculated. Results: A total of 763 patients were studied, of whom 40 required urgent surgery (24 OS/16 LS). More ileostomies were performed in the LS group (68.7% vs. 29.2%) and more colostomies in the OS group (37.5% vs. 6.2%), p<0.05. MPI was higher in the OS group (27.31±6.47 [19-35] vs. 18.36±7.16 [11-24], p<0.001). Hospital stay after re-intervention, oral tolerance and surgical wound infection were all more favorable in LS (p<0.05 in all cases). In patients with an MPI score ≤26, the laparoscopic approach showed a shorter hospital stay after re-intervention, a shorter stay in the critical care unit after re-intervention, earlier start of oral tolerance and less surgical wound infection (p<0.05). Conclusions: A laparoscopic approach in re-intervention for complications after laparoscopic colorectal surgery is associated with a faster recovery, reflected in a shorter hospital stay, earlier start of oral tolerance and a lower abdominal wall complication rate in patients with a low severity index.
abstract_id: PUBMED:31101548 Highlights, limitations and future challenges of laparoscopic resection for colorectal liver metastases. The liver is the most common site for metastatic colorectal cancer (CRLM). Despite advances in oncologic treatment, resection of metastases is still the only curative option. Although laparoscopic surgery for primary colorectal cancer is well documented and widely used, laparoscopic surgery for liver metastases has developed more slowly. However, in spite of some difficulties, laparoscopic approach demonstrated strong advantages including minimal parietal damage, decreased morbidity (reduced blood loss and need for transfusion, fewer pulmonary complications), and simplification of subsequent iterative hepatectomy. Up to now, more than 9 000 laparoscopic procedures have been reported worldwide and long-term results in colorectal liver metastases seem comparable to the open approach. Only one recent randomized controlled trial has compared the laparoscopic and the open approach. The purpose of the present update was to identify the barriers limiting widespread acceptance of laparoscopic approach, the benefits and the limits of laparoscopic hepatectomies in CRLM. abstract_id: PUBMED:28261695 Laparoscopic combined colorectal and liver resections for primary colorectal cancer with synchronous liver metastases. Backgrounds/aims: Synchronous liver metastases (SLMs) are found in 15-25% of patients at the time of diagnosis with colorectal cancer, which is limited to the liver in 30% of patients. Surgical resection is the most effective and potentially curative therapy for metastatic colorectal carcinoma (CRC) of the liver. The comparison of simultaneous resection of primary CRC and synchronous liver metastases with staged resections is the subject of debate with respect to morbidity. Laparoscopic surgery improves postoperative recovery, diminishes postoperative pain, reduces wound infections, shortens hospitalization, and yields superior cosmetic results, without compromising the oncological outcome. The aim of this study is therefore to evaluate our initial experience with simultaneous laparoscopic resection of primary CRC and SLM. Methods: Currently, laparoscopic resection of primary CRC is performed in more than 53% of all patients in our surgical department. Twenty-six patients with primary CRC and a clinical diagnosis of SLM underwent combined laparoscopic colorectal and liver surgery. Six of them underwent laparoscopic colorectal resection combined with major laparoscopic liver resection. Results: The surgical approaches were total laparoscopic (25 patients) or hybrid technique (1 patients). The incision created for the extraction of the specimen varied between 5 and 8cm. The median operation time was 223 minutes (100 to 415 min.) with a total blood loss of 180 ml (100-300 ml). Postoperative hospital stay was 6.8 days (6-14 days). Postoperative complications were observed in 6 patients (22.2%). Conclusions: Simultaneous laparoscopic colorectal and liver resection appears to be safe, feasible, and with satisfying short-term results in selected patients with CRC and SLM. abstract_id: PUBMED:34919765 Adhesion-related readmissions after open and laparoscopic colorectal surgery in 16 524 patients. Aim: Colorectal surgery is associated with a high risk of adhesion formation and subsequent complications. Laparoscopic colorectal surgery reduces adhesion formation by 50%; however, the effect on adhesion-related complications is still unknown. 
This study aims to compare differences in incidence rates of adhesion-related readmissions after laparoscopic and open colorectal surgery. Method: Population data from the Scottish National Health Service were used to identify patients who underwent colorectal surgery between June 2009 and June 2011. Readmissions were registered until December 2017 and categorized as being either directly or possibly related to adhesions, or as reoperations potentially complicated by adhesions. The primary outcome measure was the difference in incidence of directly adhesion-related readmissions between the open and laparoscopic cohort. Results: Colorectal surgery was performed in 16 524 patients; 4455 (27%) underwent laparoscopic surgery. Patients undergoing laparoscopic surgery were readmitted less frequently for directly adhesion-related complications, 2.4% (95% CI 2.0%-2.8%) versus 7.5% (95% CI 7.1%-7.9%) in the open cohort. Readmissions for possibly adhesion-related complications were less frequent in the laparoscopic cohort, 16.8% (95% CI 15.6%-18.0%) versus 21.7% (95% CI 20.9%-22.5%), as well as reoperations potentially complicated by adhesions, 9.7% (95% CI 8.9%-10.5%) versus 16.9% (95% CI 16.3%-17.5%). Conclusion: Overall, any adhesion-related readmissions occurred in over one in three patients after open colorectal surgery and one in four after laparoscopic colorectal surgery. Compared with open surgery, incidence rates of adhesion-related complications decrease but remain substantial after laparoscopic surgery. abstract_id: PUBMED:32424686 The Trends in Adoption, Outcomes, and Costs of Laparoscopic Surgery for Colorectal Cancer in the Elderly Population. Background: The elderly constitute the majority of both colorectal cancer and surgical volume. Despite established safety and feasibility, laparoscopy may remain underutilized for colorectal cancer resections in the elderly. With proven benefits, increasing laparoscopy in elderly colorectal cancer patients could substantially improve outcomes. Our goal was to evaluate utilization and outcomes for laparoscopic colorectal cancer surgery in the elderly. Methods: A national inpatient database was reviewed for elective inpatient resections for colorectal cancer from 2010 to 2015. Patients were stratified into elderly (≥ 65 years) and non-elderly cohorts (< 65 years), then grouped into open or laparoscopic procedures. The main outcomes were trends in utilization by approach and total costs, length of stay (LOS), readmission, and complications by approach in the elderly. Multivariable regression models were used to control for differences across platforms, adjusting for patient demographics, comorbidities, and hospital characteristics. Results: Laparoscopic adoption for colorectal cancer in the elderly increased gradually until 2013, then declined, with simultaneously increasing rates of open surgery. Laparoscopy significantly improved all primary outcomes compared to open surgery (all p < 0.01). From the adjusted analysis, laparoscopy reduced complications by 30%, length of stay by 1.99 days, and total costs by $3276/admission. Laparoscopic patients were 34% less likely to be readmitted; when readmitted, the episodes were less expensive when index procedure was laparoscopic. Conclusion: The adoption of laparoscopy for colorectal cancer surgery in the elderly is slow and even declining recently. In addition to the clinical benefits, there are reduced overall costs, creating a tremendous value proposition if use can be expanded. 
Precis: This national contemporary study shows the slow uptake and recent decline in adoption of laparoscopic surgery for colorectal cancer in the elderly, despite the demonstrated benefits in clinical outcomes and costs. These data can be used to target education, regionalization, and quality improvement efforts in this expanding population. abstract_id: PUBMED:25960797 Laparoscopic colorectal surgery for colorectal polyps: single institution experience. Introduction: Because of their difficult location or size, some polyps are impossible to remove with a flexible colonoscope and must be surgically removed. Laparoscopy is a great alternative. Aim: To assess outcomes of a laparoscopic approach for the management of difficult colorectal polyps. Material And Methods: From 2006 to 2014, patients with polyps that could not be treated by endoscopy were included. Demographic data, histology of the biopsy, type of surgery, length of postoperative stay, complications and final pathology were reviewed prospectively. Results: Forty-two patients with a mean age of 64.9 ±8.4 years underwent laparoscopic polypectomy. Laparoscopic mobilization of the colonic segment and colotomy with removal of the polyp was performed for 12 (28.6%) polyps. Laparoscopic segmental bowel resection was performed in 30 (71.4%) cases: anterior rectal resection with partial total mesorectal excision in 12 (28.6%), left hemicolectomy in 7 (16.6%), sigmoid resection in 6 (14.3%), ileocecal resection in 2 (4.76%), resection of transverse colon in 2 (4.76%) and sigmoid resection with transanal retrieval of specimen in 1 (2.38%). Mean postoperative hospital stay was 5.9 ±2.6 days. There were 4 complications (9.5%). All patients recovered after conservative treatment. Mean polyp size was 3.6 ±2.2 cm. Final pathology revealed polyps (n = 2), tubular adenoma (n = 6), tubulovillous adenoma (n = 20), carcinoma in situ (n = 10) and invasive cancer (n = 4). Two of these patients underwent laparoscopic left hemicolectomies 14 and 10 days after laparoscopic colotomy and polypectomy. Conclusions: For the management of endoscopically unresectable polyps, laparoscopic polypectomy is currently the technique of choice. abstract_id: PUBMED:28745700 Laparoscopic surgery for colon cancer Aim: To assess laparoscopic surgery in the treatment of colon cancer patients. Material And Methods: The results of laparoscopic treatment of patients with colorectal cancer are presented in the article. The influence of various clinical parameters, including age, gender, comorbidities, tumor localization and stage, and complications, on the laparoscopic management of these patients was estimated. Conclusion: The efficiency of laparoscopic surgery in patients with colon cancer was found to be affected by tumor stage and the presence of complications. Answer: Yes, a laparoscopic approach is useful for treating complications after primary laparoscopic colorectal surgery. Studies have shown that laparoscopy can be a valuable tool for managing postoperative complications, particularly anastomotic leaks. One study found that 63 percent of patients who required a second surgery due to postoperative complications underwent a laparoscopic approach, and it was especially effective for anastomotic leaks (76.5 percent in the laparoscopic group versus 15 percent in the open surgery group) (PUBMED:19279423).
Another study reported that laparoscopy was successful in diagnosing and treating complications following laparoscopic colorectal surgery, maintaining the benefits associated with the laparoscopic approach (PUBMED:23582883). Additionally, a laparoscopic approach in re-intervention for complications after laparoscopic colorectal surgery was associated with a faster recovery, reflected in a shorter hospital stay, earlier start of oral tolerance, and a lower abdominal wall complication rate in patients with a low severity index (PUBMED:29290377). These findings suggest that laparoscopic surgery is not only useful but also advantageous in managing complications following primary laparoscopic colorectal procedures.
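The reoperation series summarized in this answer compare small cohorts on categorical outcomes (for example, whether an anastomotic leak was the indication for a laparoscopic versus an open reoperation) using Fisher's exact or chi-squared tests. The sketch below shows how such a comparison is run; the 2x2 counts are hypothetical and are not a reproduction of any published analysis.

from scipy.stats import fisher_exact

# Rows: reoperation approach (laparoscopic, open)
# Columns: anastomotic leak was the indication (yes, no)
# Hypothetical counts chosen only to illustrate the test.
table = [[13, 4],
         [2, 8]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, two-sided p = {p_value:.4f}")

Fisher's exact test is preferred over the chi-squared test for a table like this one because several expected cell counts fall below 5.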
Instruction: Is routine postoperative diuresis renography indicated in all adult patients after pyeloplasty for ureteropelvic junction obstruction? Abstracts: abstract_id: PUBMED:25530392 Is routine postoperative diuresis renography indicated in all adult patients after pyeloplasty for ureteropelvic junction obstruction? Objective: To determine if routine follow-up diuresis renography is indicated in all adult patients after pyeloplasty for ureteropelvic junction obstruction (UPJO). Methods: A multicenter retrospective analysis was conducted in adults who underwent pyeloplasty for symptomatic UPJO between January 2002 and August 2012. Patients with unilateral UPJO demonstrated on diuresis renography, treated with pyeloplasty, and aged >18 years at the time of surgery were included in the study. Patients with contralateral renal abnormalities, genitourinary anomalies, and those who declined renography during follow-up were excluded. All eligible patients underwent diuresis renography approximately 3 months postoperatively. Minimum follow-up was 12 months. Patients were divided into 2 groups: patients with persistent pain at 3 months after pyeloplasty and patients who became asymptomatic. Treatment failures in each cohort were identified. Comparisons were performed using the Fisher exact test. Results: A total of 100 pyeloplasties were performed. Of them, 90 were eligible for the study. Mean age was 40 years. Mean follow-up was 21 months. Seventy-three patients (81.1%) became pain free after pyeloplasty. One patient (1.4%) had worsening of differential renal function despite unobstructed drainage on diuresis renogram. None of the patients in the asymptomatic cohort was identified to have unequivocal drainage obstruction on postoperative renogram. Seventeen patients (18.9%) remained symptomatic with pain at 3 months after pyeloplasty; 3 (17.6%) of those patients with loin pain after pyeloplasty were confirmed to have persistent obstructed drainage postoperatively on diuresis renogram (P <.001). All 3 patients required insertion of ureteric stents and/or revision surgery (P <.007). Conclusion: In our series, adult patients who became pain free after unilateral pyeloplasty for UPJO did not have persistent obstruction of renal drainage on renography. Routine diuresis renogram to assess drainage and differential renal function in patients who become pain free after pyeloplasty for UPJO may not be necessary. If objective evidence of postoperative outcome is required, then a single renogram at 3 months is recommended. abstract_id: PUBMED:2729785 Diuresis renography in the postoperative course of pyeloureteral stenosis Ureteropelvic junction (UPJ) obstruction constitutes the most common form of upper urinary tract obstruction in children. We studied the role of diuretic renography and its correlation with other diagnostic methods for postoperative evaluation of this malformation. We reviewed 13 patients, 10 males and 3 females aged 1 month to 15 years, with right UPJ obstruction in six and left obstruction in seven. In all of them, pre- and postoperative IVP, abdominal ultrasound, diuretic renography, and pressure flow studies were performed. Our results demonstrated a lack of correlation between the findings of the excretory urogram, abdominal ultrasound and diuretic renography, and a close relation between pressure flow studies and diuretic renography. These last two methods are good tools for the assessment of UPJ obstruction.
abstract_id: PUBMED:29430113 Comparison of the F+20 and F-15 Diuresis Technetium-99m Diethylenetriaminepentacetate Renography Protocols for Diagnosis of Ureteropelvic Junction Obstruction in Adult Patients with Hydronephrosis. Introduction: Hydronephrosis (HDN) in adults is a common presentation and may be due to congenital ureteropelvic junction obstruction. Diuresis renography is the method of choice for differentiating a dilated unobstructed urinary system from a true obstruction. The aim of this study was to compare the F+20 protocol with the F-15 protocol and to determine whether the latter can reduce inconclusive results. Materials And Methods: This was a prospective, comparative study that included 51 adult patients with primary HDN. Each patient underwent both the F+20 and F-15 diuretic Technetium-99m diethylenetriaminepentacetate (Tc-DTPA) renography protocol studies. The results of diuretic Tc-DTPA renography were classified as obstructive, nonobstructive, or equivocal. Results: A total of 60 renal units (RU) with HDN were included in this study. The mean age of patients was 25.25 years. 36 RU presented with specific symptoms and 24 RU with vague symptoms or were diagnosed incidentally. Equivocal results were significantly fewer with the F-15 protocol than with the F+20 protocol. Equivocal results with the F+20 protocol were significantly more frequent in incidentally diagnosed HDN and in patients presenting with vague symptoms. Most of the equivocal results of the F+20 protocol were converted into either obstructive or nonobstructive by the F-15 protocol. Conclusion: The F-15 diuretic renogram protocol was associated with significantly fewer equivocal results than the F+20 protocol. Moreover, the F-15 protocol clarified cases that were equivocal on the F+20 protocol. Therefore, we suggest the F-15 diuresis protocol as a single test for confirmation or exclusion of obstruction, especially in adult patients with HDN who present with vague symptoms or are diagnosed incidentally. abstract_id: PUBMED:3910440 Role of the diuresis renogram in the study of the pelviureteric junction. A consecutive series of 29 patients (33 pelviureteric units) with urographic signs of altered passage through the pelviureteric junction underwent diuresis renography performed by means of an intravenous injection of 99mTc-diethylenetriamine pentaacetic acid/kg body weight followed by intravenous administration of frusemide. Anderson-Hynes pyeloplasty was performed in 19 pelviureteric units with a modest or almost absent fall in the renographic curve after diuretic injection; 2 other symptomatic patients, in whom a moderate fall of the same curve was found, also underwent surgery. Complete normalization of the renogram was observed in 17 of 21 operated patients. Severe functional deficit and polymegacalicosis were evident in 3 and 1 patient, respectively, with unchanged postoperative renograms. During follow-up (6 months to 2 years) diuresis renography remained unaltered in non-operated units. Our experience indicates that the diuresis renogram not only gives functional confirmation of a pelviureteric junction obstruction, but is also helpful in resolving situations in which there is a discrepancy between symptoms and urographic signs. Furthermore, it provides an accurate method for evaluating the results of pyeloplasty. However, the use of this investigation is not, in our opinion, indicated in the presence of severe functional deficit of the kidney or pyelectasis.
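In the F+20 versus F-15 comparison above, every renal unit underwent both protocols, so the equivocal-result rates are paired rather than independent. A McNemar test on the discordant pairs is the standard way to compare such paired proportions; the cross-tabulation below is hypothetical (the abstract reports only that equivocal results were significantly fewer with F-15 among the 60 renal units), so treat this purely as a sketch of the method.

from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired cross-tabulation of 60 renal units
# (rows = F+20 result, columns = F-15 result):
#                    F-15 conclusive   F-15 equivocal
# F+20 conclusive          38                 1
# F+20 equivocal           18                 3
table = [[38, 1],
         [18, 3]]
result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")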
abstract_id: PUBMED:8986040 Long-term outcome of kidneys with initially poor drainage or no drainage following pyeloplasty. Pyeloplasty for congenital ureteropelvic junction (UPJ) obstruction enjoys a 90-95% success rate. Although treatment of the failed pyeloplasty has been addressed in the literature, management of the poorly draining or nondraining renal unit in the immediate postoperative period has not received any attention. For this purpose the medical records of 33 consecutive children (37 renal units) treated by dismembered pyeloplasty between 1986 and 1992 were reviewed. All of our pyeloplasties were stented and urine was diverted via a nephrostomy tube. All patients underwent a nephrostogram following stent removal 1 week postoperatively. These studies showed poor drainage, or no, across the newly reconstructed anastomosis in 7 of 37 renal units (19%). The ages of these 4 boys and 3 girls at the time of pyeloplasty ranged between 7 weeks and 5 years (mean 22 months). In four patients, good drainage occurred without intervention by 2-4 weeks postoperation. In two patients, percutaneous balloon dilation of the anastomosis via the intraoperatively placed nephrostomy tube was required at 3 and 6 weeks, respectively. The remaining patient failed percutaneous dilation, necessitating a ureterocalycostomy at 9 weeks following pyeloplasty. The long-term follow-up for the entire group of 33 children averaged 30 months and consisted of radionuclide diuresis renography in 84% of cases or intravenous pyelography in the remainder. All patients had excellent long-term outcomes as assessed by comparison of the postoperative studies with the baseline studies obtained preoperatively. Our results show that kidneys with initially poor drainage, or even no drainage, across the newly reconstructed anastomosis following pyeloplasty can be salvaged with an excellent long-term outcome comparable with that of the group with initially good drainage. In addition, intervention was necessary in only 43% of renal units with initial compromise and was facilitated by the intraoperatively placed nephrostomy tube. We recommend that percutaneous dilation be done at between 4 and 6 weeks postpyeloplasty, as the waiting period was long enough to allow for spontaneous improvement without precluding a successful outcome if drainage failed to occur. Ureterocalycostomy was rarely necessary. abstract_id: PUBMED:11171502 Diuretic renography in the diagnostic approach of prenatal hydronephrosis Objective: To verify the utility of diuretic renography using 99mTc-MAG3 in the evaluation of the urinary tract obstruction in patients with prenatal diagnosis of hydronephrosis. In a neonate with upper urinary tract dilatation, it is difficult to differentiate a true obstruction from a dilated non-obstructed system. Material And Methods: The retrospective study (january 1993-december 1998) included 40 consecutive patients selected from 181 newborns with a prenatal diagnosis of hydronephrosis. The mean age of the performance of the first renography was 2.6 months. Results: The final diagnosis (once the vesicoureteral reflux was ruled out) was: ureteropelvic junction obstruction (UPJ) 16, megaureter 15, ureterocele 3 and stasis 6. Conservative treatment was applied in most of the patients. Nevertheless, 17 of them needed surgical treatment due to the high risk of the permanent renal damage. Conclusions: 1) UPJ is the most frequent cause of neonatal hydronephrosis (NH) and consequently, of the request for diuretic renographies. 
The renography with 99mTc-MAG3 has the greatest influence in the therapeutic decision (identifying who should be operated on and when this should be done), due to the fact that it is the disease with the highest rate of surgical interventions. 2). The surgical indication in NH is based on the diuretic renography parameters and/or on the presence of symptomatology. In our series of surgical patients, none of those who had a differential renal function <20% with regard to the contralateral one showed recuperation after surgery (control 6 months later). abstract_id: PUBMED:23198954 Laparoscopic retroperitoneal dismembered pyeloplasty: single-center experience in China. Objective: To report on our experience with laparoscopic retroperitoneal dismembered pyeloplasty in the treatment of ureteropelvic junction (UPJ) obstruction. Special attention was paid to the technical features associated with the procedure. Subjects And Methods: From May 2004 to April 2012, in total, 117 consecutive patients (64 men and 53 women) with a mean age of 33.5 years (range, 12-60 years) underwent laparoscopic retroperitoneal dismembered pyeloplasty for symptomatic and radiologically proved UPJ obstruction. Follow-up studies were performed with diuretic renography at 3 and 6 months postoperatively and annually thereafter. Success was defined as symptomatic relief and improvement of the diuresis renogram without evidence of obstructed drainage. Results: The mean operative time was 170 minutes (range, 90-310 minutes), and the mean estimated blood loss was 40 mL (range, 20-100 mL). Crossing vessels were encountered in 21 patients (18.0%). In all cases, the ureter was transposed anteriorly. From 9 patients, including 4 cases of horseshoe kidney, coexisting renal calculi were successfully removed. The mean hospital stay was 7 days (range, 5-14 days). There were no intraoperative complications. Postoperative complication was recorded in 2 patients with leakage at the anastomosis. Seven patients developed anastomotic strictures, which required open surgery or ureteral stent. The success rate was 94% (110/117) at a mean follow-up of 43 months (range, 3-95 months). Conclusions: In our experience, laparoscopic retroperitoneal dismembered pyeloplasty is effective and feasible. With the improvement of technique, we believe that it would become a new standard treatment for UPJ obstruction. abstract_id: PUBMED:27939966 Prediction of Clinical Outcomes in Prenatal Hydronephrosis: Importance of Gravity Assisted Drainage. Purpose: In infants with SFU (Society for Fetal Urology) grade 3-4 congenital hydronephrosis, 99mTc-mercaptoacetyltriglycine diuretic renography assesses differential function and drainage half-time. We routinely also include the percent of radiotracer drained after 30 minutes of diuresis as well as after 15 minutes with the patient in the upright position. We investigated whether any 1 or more of these parameters on initial diuretic renography predicts persistent or worsening drainage parameters. Materials And Methods: Infants 6 months or younger with grade 3-4 congenital hydronephrosis who presented between January 2009 and December 2014 were identified from billing data and included in analysis if they underwent at least 1 baseline diuretic renography. Those with structural anomalies were excluded from study. 
Baseline and follow-up differential function, diuresis half-time, clearance at 30 minutes and clearance with the patient upright were abstracted, and comparisons were made between those with initially indeterminate diuresis half-time who underwent pyeloplasty vs those showing spontaneous improvement. Results: A total of 74 patients (82 renal units) with presumed ureteropelvic junction obstruction met inclusion/exclusion criteria. All 10 renal units with initial diuresis half-time less than 5 minutes resolved spontaneously and all 25 renal units with initial diuresis half-time greater than 75 minutes underwent pyeloplasty. Therefore, we defined the indeterminate group as the 47 renal units with initial half-time between 5 and 75 minutes. Of those 47 renal units with indeterminate initial diuresis half-time, 23 (47%) underwent pyeloplasty and 25 (53%) resolved spontaneously. Indications for pyeloplasty included worsening in 17 cases, persistent obstruction in 4 and urinary tract infection in 1. Among renal units with indeterminate drainage, clearance while upright and clearance at 30 minutes were the only variables that differed significantly between surgical cases and those that resolved spontaneously. Conclusions: Radiotracer clearance with the patient upright and clearance at 30 minutes are more predictive of surgical management than diuresis half-time or differential function for renal units with indeterminate drainage. They should be included in the standard assessment of ureteropelvic junction obstruction. abstract_id: PUBMED:21527234 Observation of infants with SFU grades 3-4 hydronephrosis: worsening drainage with serial diuresis renography indicates surgical intervention and helps prevent loss of renal function. Purpose: Early pyeloplasty is indicated for ureteropelvic junction (UPJ) obstructions with reduced differential renal function (DRF) and/or no drainage on diuretic renography (DR). Optimal management of Society for Fetal Urology (SFU) Grades 3 and 4 hydronephrosis with preservation of DRF and indeterminate drainage is less straightforward. We review our experience using serial DR to guide the management of kidneys with high-grade hydronephrosis, emphasizing preservation of DRF. Methods: After IRB approval, we reviewed the charts of 1398 patients <1-year-old referred for prenatal hydronephrosis. Only patients with SFU Grades 3 and 4 hydronephrosis without ureterectasis were included in the study. Initial evaluation included a baseline DR. Follow-up included DR or ultrasound (US). Results: 115 patients (125 kidneys) were eligible for study inclusion. 27 kidneys underwent early surgery (median 64 days) due to reduced DRF and/or severely impaired drainage. 98 kidneys were initially observed. Of these, 21 underwent delayed surgery (median 487 days) due to worsening drainage. Only 2 patients had an irreversible decrease in DRF of >5%. 77 kidneys demonstrated improved drainage and stable DRF. Comparison of observation (n = 77) and surgery groups (n = 48) revealed more kidneys with SFU Grade 3 hydronephrosis in the observation group (p = 0.0001). Conclusion: Infants with Grades 3 and 4 hydronephrosis and preserved DRF may be safely followed with serial DR. Patients with SFU Grade 4 hydronephrosis are more likely to require surgery. Worsening drainage on serial DR is a useful indicator for surgical intervention that limits the number of pyeloplasties while preserving DRF.
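The half-time cut-offs reported in PUBMED:27939966 above (<5 minutes: all units resolved spontaneously; >75 minutes: all units underwent pyeloplasty; 5-75 minutes: indeterminate, where upright and 30-minute clearance were more informative) lend themselves to a simple triage sketch. The function below only restates those observed cut-offs and is not a validated clinical rule.

def triage_half_time(t_half_min):
    """Triage a diuresis half-time (minutes) using the cut-offs observed in
    PUBMED:27939966; illustrative only, not a clinical decision rule."""
    if t_half_min < 5:
        return "rapid drainage - resolved spontaneously in the reported series"
    if t_half_min > 75:
        return "markedly delayed drainage - all such units underwent pyeloplasty"
    return "indeterminate - upright and 30-minute clearance were more predictive"

for t in (3, 20, 90):
    print(f"{t} min -> {triage_half_time(t)}")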
abstract_id: PUBMED:9475805 Obstruction score of ureteropelvic junction investigated by modified F-15 diuresis renography. Aim: Introduction and assessment of an obstruction score in patients with obstructive uropathy, based on a diuresis renography time-activity curve. Methods: Twenty-eight adults with unilateral pelviureteric junction obstruction were assessed by ultrasound scan, furosemide-enhanced intravenous urography and modified (F-15) 99mTc-DTPA diuresis renography. Renal unit drainage pattern was assessed by the obstruction score (sum of the time to peak, the index of excretion, and the tracer washout 40 min after furosemide administration). Renal units were classified as normal (0-3), equivocal (4-7) or obstructed (8-10). Renal function was determined by a split renal function and individual kidney glomerular filtration rate. The whole kidney minimum transit time was assessed as an interval on time scale between tracer input and output curve. Results: After a 28-month mean follow-up, ultrasound findings were improved in 23 and unchanged in 5 patients. Regarding the response to furosemide, intravenous urography showed improvement in 19, no change in 8, and deterioration in 1 patient. The upper urinary tract on the affected side was obstructed in 26 and equivocal in 2 patients. Postpyeloplasty outcome was normal in 12 and equivocal in 16 cases. Overall drainage function was improved in 27 and unchanged in 1 patient. Parenchymal function was improved in 20, unchanged in 2, and deteriorated in 6 patients. Whole kidney minimum transit time was significantly reduced (5.55 +/- 1.56 to 4.41 +/- 0.83 min; p<0.001). Conclusion: The F-15 diuresis renography supplemented with the obstruction score system may be used for assessing the upper urinary tract urodynamics as initial diagnosis and in long-term follow-ups. Answer: Routine postoperative diuresis renography may not be indicated in all adult patients after pyeloplasty for ureteropelvic junction obstruction (UPJO). A multicenter retrospective analysis indicated that adult patients who became pain-free after unilateral pyeloplasty for UPJO did not have persistent obstruction of renal drainage on renography. The study suggested that routine diuresis renogram to assess drainage and differential renal function in patients who become pain-free after pyeloplasty for UPJO may not be necessary. If objective evidence of postoperative outcome is required, then a single renogram at 3 months is recommended (PUBMED:25530392). Additionally, a study comparing the F+20 and F-15 diuresis renography protocols for diagnosis of UPJO in adult patients with hydronephrosis found that the F-15 protocol was associated with significantly fewer equivocal results than the F+20 protocol. This suggests that the F-15 diuresis protocol could be a more reliable single test for confirming or excluding obstruction, especially in adult patients with vague symptoms or incidentally diagnosed hydronephrosis (PUBMED:29430113). In contrast, another study emphasized the importance of diuresis renography in the postoperative course of pyeloureteral stenosis, indicating that it is a good tool for assessment of UPJ obstruction (PUBMED:2729785). Furthermore, diuresis renography has been shown to be helpful in resolving situations where there is a discrepancy between symptoms and urographic signs and in evaluating the results of pyeloplasty (PUBMED:3910440). 
In conclusion, while diuresis renography can be a valuable tool for assessing UPJO and guiding postoperative management, routine use in all adult patients post-pyeloplasty may not be necessary, particularly in those who become asymptomatic. Individual patient assessment and the use of more specific protocols like the F-15 may optimize the utility of postoperative diuresis renography.
Instruction: Is combined inner preputial inlay graft with tubularized incised plate in hypospadias repair worth doing? Abstracts: abstract_id: PUBMED:26119452 Is combined inner preputial inlay graft with tubularized incised plate in hypospadias repair worth doing? Background: It has been noted that after tubularized incised plate urethroplasty (TIP) repair, the final meatal position is glanular but not at the optimum position. Inner preputial inlay graft combined with tubularized incised plate (G-TIP) has been proposed for redo urethroplasty. We extended this indication to be the standard technique for primary hypospadias repair. We conducted this prospective study to obtain a wide, slit-like neomeatus at the optimum position in the glans proper and to judge whether the complication rate differs from that reported for TIP repair in the published literature. Patients And Methods: This prospective study included 230 consecutive patients who underwent this technique. The study was conducted from November 2011 to August 2014 for all hypospadias cases to be repaired in a single stage regardless of the width and depth of the urethral plate or the glans size and shape. Localization of the meatus was glanular in 13 patients, coronal in 75, distal penile in 112, mid penile in 25 and proximal in five. The urethral plate was incised deeply and extended distally beyond the end of the plate by 3 mm in the glans proper. The mucosal graft was harvested from the inner prepuce, inlaid and quilted in the incised urethral plate. The neourethra was created over a urethral catheter in two layers. The vascular dartos flap was mobilized dorsally and moved ventrally to cover the neourethral suture line as a barrier. Results: The follow-up period ranged from 5 to 36 months. Excellent cosmetic and functional results were achieved in 221 of 230 patients (96.09%). Neither meatal stenosis nor urethral diverticulum was encountered. An excellent glanular position of a wide slit-like neomeatus was achieved using this technique. Nine patients (3.91%) developed urethrocutaneous fistula. Excellent urinary stream was reported by parents. Conclusions: Combined inner preputial graft with TIP urethroplasty secures the optimal glanular position of a wide slit-like neomeatus because of extension of the incision beyond the end of the plate, thus optimizing functional and cosmetic outcome with no meatal stenosis. abstract_id: PUBMED:37664098 Dorsal inlay inner preputial graft repair versus ventral-only preputial graft repair in primary distal penile hypospadias with narrow urethral plate. Background: Tubularized incised plate (TIP) is the most common technique used for distal hypospadias repair, with good outcomes but a high rate of urethral stricture. An inner preputial free graft can be used as an inlay graft in the incised area of the narrow urethral plate; it can also be used as an onlay graft for urethroplasty in hypospadias repair to avoid this complication. Patients And Methods: A comparative prospective randomized study was conducted on two groups of hypospadias patients with a narrow urethral plate. Group A: dorsal inlay inner preputial graft repair was performed (grafted TIP [G-TIP]) and Group B: ventral onlay preputial graft repair was performed. The assessment of outcome and hypospadias objective scoring evaluation (HOSE) score was done at 2 weeks and 6 months.
Results: Group A included 55 patients in whom dorsal inlay inner preputial graft repair (G-TIP) was performed; Group B was planned to include 55 patients repaired with an onlay preputial graft but was terminated after 15 cases because of a high failure rate (33%). Group A showed a better success rate (96%) and a better HOSE score (score 16) at 2 and 6 months (83.6% and 88.2%, versus 26.7% and 33.3% in Group B). Postoperative complications showed a statistically significant difference: glans dehiscence (3.6% vs. 40%), wound infection (1.8% vs. 33.3%), and skin sloughing (3.6% vs. 26.7%) in Groups A and B, respectively. Conclusion: G-TIP is a good technique for the management of distal hypospadias with a narrow urethral plate, with a good success rate and cosmetic outcome and fewer complications compared with the onlay graft. abstract_id: PUBMED:21369381 Preputial reconstruction and tubularized incised plate urethroplasty in proximal hypospadias with ventral penile curvature. Aims: The objective of this study was to assess the feasibility and results of preputial reconstruction and tubularized incised plate urethroplasty (TIP) in patients with proximal hypospadias and ventral penile curvature. Materials And Methods: Twenty-seven patients with proximal hypospadias who underwent preputioplasty with TIP were evaluated retrospectively. Ventral curvature was corrected by mobilization of the urethral plate with the corpus spongiosum and the proximal urethra; dorsal plication was added according to the severity of curvature. Feasibility of preputial reconstruction was assessed by applying 3 stay sutures: the first to fix the skin at the corona, the second at the junction of the inner and outer preputial skin for pulling up the skin over the glans, and the third stay on penile skin at the level of the corona for retracting the skin. Preputial reconstruction consisted of a standard 3 layered re-approximation of the margins of the dorsal hood. Results: Age of the patients varied from 10 months to 21 years with an average of 6 years and 4 months. Ventral curvature (mild 10, moderate 13, and severe 4 cases) was corrected by the mobilization of the urethral plate and spongiosum in 14 patients, 11 cases had mobilization of the proximal urethra in addition and 2 patients required single stitch dorsal plication with the above-mentioned steps. Two patients developed urethral fistula and 1 had preputial dehiscence. Conclusions: Preputioplasty with TIP is feasible in proximal hypospadias with curvature without increasing the complication rate. Postoperative phimosis can be prevented by on-table testing of the adequacy of preputial skin by 3 stay sutures. abstract_id: PUBMED:35975363 Double dartos flap layer in tubularized incised plate urethroplasty to prevent urethrocutaneous fistula in uncircumcised patients with distal hypospadias. Urethrocutaneous fistula may complicate hypospadias repair. We noticed that double-layered preputial dartos flaps added to tubularized incised plate urethroplasty can reduce the risk of urethrocutaneous fistula. The aim of this study was to compare the outcomes of tubularized incised plate urethroplasty with double-layered preputial dartos flaps versus single-layered local fascial flaps in preventing urethrocutaneous fistula. A retrospective cohort study was conducted between January 2017 and December 2020 at Jordan University Hospital (Amman, Jordan). Boys who were aged between 6 months and 5 years, diagnosed with distal hypospadias, and not circumcised were included.
The primary outcome was the occurrence of urethrocutaneous fistula in patients who underwent tubularized incised plate urethroplasty with a double-layered fascial flap. The results showed a total of 163 boys with distal hypospadias; among them, 116 patients underwent tubularized incised plate urethroplasty with a single-layered fascial flap, and 47 underwent tubularized incised plate urethroplasty with a double-layered fascial flap. The development of urethrocutaneous fistula was higher in the group receiving tubularized incised plate urethroplasty with a single-layered fascial flap than in the group receiving tubularized incised plate urethroplasty with a double-layered fascial flap after 1 month, 6 months, and 12 months (6.9% vs 0, 10.3% vs 0, and 5.2% vs 0, respectively), and the difference after 6 months was statistically significant (P = 0.02). abstract_id: PUBMED:37947847 Comparative outcomes among inlay grafted incised plate, onlay preputial flap and tubularized incised plate urethroplasty for the repair of distal penile hypospadias with a narrow urethral plate. Purpose: We conducted this study comparing the outcomes among Transverse Onlay Island Flap, inlay grafted incised plate and our previous records of tubularized incised plate urethroplasty (TIPU) in patients with narrow urethral plates, aiming to determine which method of repair provides a good outcome. Methods: This hybrid study included two datasets. The first was from a prospective randomized study evaluating the outcomes of two treatment modalities, inlay graft and onlay flap, for distal hypospadias with a shallow urethral plate, with 80 patients included (40 in each group); the second was based on our previous records of TIPU in 40 patients with primary distal hypospadias with a narrow urethral plate. Results: The success rate in the inlay graft urethroplasty group (n = 40) was 87.5%; glandular dehiscence occurred in one case (2.5%), fistulas occurred in 2 cases (5%), and narrow meatus occurred in two cases (5%). The success rate in the onlay flap urethroplasty group (n = 40) was 82.5%; glandular dehiscence occurred in two cases (5%), fistulas occurred in two cases (5%), and narrow meatus occurred in three cases (7.5%). The TIPU group (n = 40) had a success rate of 62.5%; glandular dehiscence occurred in eight cases (20%), fistulas occurred in five cases (12.5%), and narrow meatus occurred in seven cases (17.5%), with five cases exhibiting both a narrow meatus and a fistula. Conclusion: Inlay graft and onlay flap urethroplasty for repair of distal penile hypospadias with a narrow urethral plate had a higher success rate and fewer complications than traditional TIPU. Moreover, operative time was shorter in TIPU. abstract_id: PUBMED:19468401 Tubularized incised plate urethroplasty for distal hypospadias: A literature review. The tubularized incised plate (TIP) urethroplasty or Snodgrass procedure has gained worldwide acceptance for distal hypospadias repair due to its low complication rate, good cosmetic result, and technical simplicity. As a result, several articles have been published concerning various aspects and subtle variations of this procedure. The aim of this review is to critically and systematically analyze the published complication rates of TIP repair for distal hypospadias in children. We also review the surgical modifications that have been introduced to the original technique and discuss their potential impact on the final outcome of the Snodgrass procedure.
abstract_id: PUBMED:32061491 Comparing the outcomes of tubularized incised plate urethroplasty and dorsal inlay graft urethroplasty in children with hypospadias: a systematic review and meta-analysis. Background: Excellent outcomes have been reported following the widely accepted tubularized incised plate urethroplasty (TIPU) and its relatively recent modification, the dorsal inlay graft urethroplasty (DIGU). However, there is a lack of consensus on which technique offers more favorable postoperative outcomes. Aims: To systematically compare the reported outcomes of the TIPU and DIGU techniques in children undergoing primary hypospadias repair. Design: A systematic review and meta-analysis of randomized and observational studies. Methods: An electronic database search was conducted up to May 2018. Sources included Medline, Embase, Cochrane library, CINAHL, Web of Science, and Google Scholar as well as trial registries and grey literature sources. Studies were selected if they compared the postoperative complications of TIPU and DIGU in children. Secondary outcomes included standardized cosmetic scores and urinary flow studies. A meta-analysis of reported complications was performed using a random-effects model. Results: Two randomized, two prospective, and two retrospective studies met the inclusion criteria. TIPU and DIGU were performed in 350 and 267 patients, respectively. Pooled analysis did not demonstrate a significant difference regarding postoperative urethrocutaneous fistula, meatal/urethral stenosis, wound dehiscence, or total complications. Subgroup analysis according to hypospadias severity did not alter initial findings. Statistical analysis of secondary outcomes was not feasible due to insufficient data. Most studies were of low methodological quality with a high risk of bias. Conclusions: There is no strong evidence to suggest that either technique offers more favorable outcomes. Until more robust randomized trials exist, decisions regarding the appropriate repair should be based on the surgeon's experience and outcomes. abstract_id: PUBMED:21862065 Comparative histological and functional controlled analysis of tubularized incised plate urethroplasty with and without dorsal inlay graft: a preliminary experimental study in rabbits. Purpose: We analyzed the histological and functional characteristics of the tubularized incised plate vs dorsal inlay graft urethroplasty in an experimental rabbit model. Materials And Methods: A total of 24 New Zealand male rabbits were randomly allocated into 4 groups, including sham operation, urethroplasty, tubularized incised plate urethroplasty and dorsal inlay graft urethroplasty. In the urethroplasty group the anterior urethral wall was half excised and the dorsal aspect was tubularized. In the tubularized incised plate group the same steps were followed but tubularization followed a longitudinal midline incision in the dorsal wall. In the dorsal inlay graft group the defect created by the dorsal incision was covered with an inner preputial graft. The animals were sacrificed at 4 and 8 weeks, respectively. The penis was immediately harvested for standardized passive flowmetry and subsequently fixed for histological staining. Results: The grafts took in all animals. The tubularized incised plate defect was bridged by urothelium while in the dorsal inlay graft group the preputial graft kept its original histological characteristics. 
There was a significant decrease in average flow in the urethroplasty group (1.6 ml per second) compared to that in the sham operated group (3.4 ml per second) and to the other groups (p <0.05). However, no significant difference in average flow was found for the tubularized incised plate and dorsal inlay graft groups (2.4 and 2.2 ml per second, respectively, p = 0.7). Conclusions: In this short-term rabbit model dorsal inlay graft urethroplasty was feasible with good graft take and integration. Simple tubularization of a reduced urethral plate led to significantly decreased flow. Incision of the reduced plate with or without grafting improved the average flow. Findings in this experimental model do not support the superiority of dorsal inlay graft urethroplasty over tubularized incised plate urethroplasty in terms of urethral flow dynamics. abstract_id: PUBMED:24309516 Tubularized incised plate urethroplasty for the treatment of penile fistulas after hypospadias repair. Objective: Urethrocutaneous fistula is the most common complication of hypospadias repair. Tubularized incised plate urethroplasty (TIPU) has been used for the management of distal fistulas. This study reports the usage of TIPU in the treatment of large penile fistulas. Materials And Methods: Between April 2002 and September 2012, 15 patients with large penile fistulas who were managed with TIPU were included in the study. The fistulas were sited along the penile shaft from proximal to distal penile localization. Glanular and coronal fistulas were excluded. The surgical technique was completed according to the standard TIPU technique. The surrounding scar tissue of the fistula was circumferentially excised, and the urethral plate at the level of the fistula was incised to provide performance of loose urethral tubularization. A urethral stent was kept for 5-7 days. Results: The mean age of the patients was 7.3 ± 3.1 years. Primary operation of these patients was tubularized preputial island flap (n = 6), on-lay preputial island flap (n = 4), and TIPU (n = 5). The sites of the hypospadias fistulas were as follows; penoscrotal (three), mid-penile (eight) and subcoronal (four). Fistulas recurred in two patients after fistula repair. The postoperative follow up of the patients was 12.4 ± 7.7 months. Conclusion: TIPU may be used safely for the treatment of fistulas after hypospadias repair. abstract_id: PUBMED:33276974 Self-reported outcomes after the onset of puberty in patients undergoing primary distal hypospadias repair by the tubularized incised plate technique combined with preputial reconstruction vs. circumcision: A norm related study. Purpose: We assessed self-reported outcomes after the onset of puberty in patients undergoing tubularized incised plate (TIP) repair in combination with preputial reconstruction (PR) or circumcision for primary distal hypospadias, in comparison to normal controls. Material And Methods: Out of 498 patients undergoing primary distal hypospadias repairs between 2001 and 2012, 83 underwent TIP repair associated with PR or circumcision before age 10-year, and had a Tanner stage ≥2 at study. Outcomes were assessed by online self-administration of validated questionnaires and additional non-validated questions. Results were compared between patients undergoing TIP repair vs. normal age-matched controls (healthy volunteers), and, between patients undergoing PR vs. circumcision. Results: Forty-one of the 83 eligible patients completed the study (response rate 49.3%). Of these, 26 had undergone PR. 
Median post-operative follow-up was 11.8 (range 6.5-16.2) years. The control group included 45 healthy boys. Comparing groups, there were no statistically significant differences in the cosmetic aspect of the penis, patient perception of penile appearance, and quality of life. Irrespective of preputial management, patients reported fear of being teased for the appearance of their genitalia much more commonly than controls (p = 0.02), but this did not affect their relationship patterns and quality of life. Conclusion: Self-reported outcomes after the onset of puberty were not significantly different between patients undergoing primary TIP repair of distal hypospadias in childhood and normal controls. Patients significantly more commonly feared being teased about their genital appearance, but this did not interfere with their social skills. Preputial management, namely PR vs. circumcision, did not affect the outcome. Answer: The combined inner preputial inlay graft with tubularized incised plate (TIP) in hypospadias repair appears to be worth doing based on the results from several studies. The technique has been shown to secure the optimal glanular position of a wide slit-like neomeatus, optimizing functional and cosmetic outcomes with a low incidence of meatal stenosis (PUBMED:26119452). Additionally, a comparative prospective randomized study found that the dorsal inlay inner preputial graft repair (G-TIP) had a better success rate and fewer complications compared to ventral onlay preputial graft repair in primary distal penile hypospadias with a narrow urethral plate (PUBMED:37664098). Furthermore, a systematic review and meta-analysis comparing the outcomes of TIP urethroplasty and dorsal inlay graft urethroplasty (DIGU) in children undergoing primary hypospadias repair did not find a significant difference in postoperative complications between the two techniques, suggesting that the addition of the inlay graft does not increase the risk of complications (PUBMED:32061491). Another study comparing inlay grafted incised plate, onlay preputial flap, and TIP urethroplasty for the repair of distal penile hypospadias with a narrow urethral plate found that inlay graft and onlay flap urethroplasty had higher success rates and fewer complications than traditional TIP urethroplasty (PUBMED:37947847). Moreover, a study on the use of TIP urethroplasty for the treatment of penile fistulas after hypospadias repair reported that TIP could be safely used for the treatment of fistulas, indicating the versatility of the TIP technique (PUBMED:24309516). Lastly, self-reported outcomes after puberty in patients who underwent TIP repair with preputial reconstruction or circumcision for primary distal hypospadias showed no significant differences in the cosmetic aspect of the penis, patient perception of penile appearance, and quality of life compared to normal controls, suggesting that the long-term outcomes of the procedure are satisfactory (PUBMED:33276974). In conclusion, the evidence suggests that the combined inner preputial inlay graft with TIP urethroplasty is a valuable technique for hypospadias repair, providing good cosmetic and functional results with a low complication rate.
Instruction: Are sterile water and brushes necessary for hand washing before surgery in Japan? Abstracts: abstract_id: PUBMED:16046831 Are sterile water and brushes necessary for hand washing before surgery in Japan? Purpose: To examine whether sterile water and brushes are necessary for hand washing before surgery. Method: Twenty-two operating room nurses were randomly divided into two groups as follows: 11 nurses who used 7.5% povidone iodine (PVI group) and another 11 nurses who used 4% chlorhexidine gluconate (CHG group) to wash their hands using the rubbing method. All the nurses were examined for bacterial contamination of their hands before and after surgical hand rubbing. We used tap water to wash the hands at the sink used for washing surgical instruments in the operating room and non sterilized plastic brushes. Results: No bacteria were detected in the tap water. Before washing the hands, the number of bacteria detected was 5.0 x 10(3) cfu/H in the PVI group and 4.0 x 10(3) cfu/H in the CHG group, which were similar in both groups. After washing the hands, the median value of the bacteria decreased to 8.7 x 10(2) cfu/H in the PVI group and 0 cfu/H in the CHG group. Conclusions: Sterile water and brushes are not necessary for preoperative scrubbing up. When using tap water for surgical hand washing, 1) the hand-rubbing method should be used; 2) a quick-alcohol-based disinfectant scrub should be used; 3) the concentration of free chloride in the water should be maintained at over 0.1 PPM; 4) the bacterial contamination of the water should be checked; and 5) the faucet should be routinely cleaned and sterilized. abstract_id: PUBMED:11452482 The effectiveness of ozonated water for hand washing before surgery Using an ozonated water-dispensing machine for sterilization of hands (Mediaqua MA-III; Core Medical Co., Ltd, Tokyo, Japan), we investigated the effectiveness of ozonated water as a disinfectant for hand washing before surgery. The effectiveness of this new hand-washing method, using 4 ppm of ozonated water, which is expected to have a short-term bactericidal effect, and 0.2% benzalkonium chloride/83% ethanol solution (Welpas), which is expected to have a long-term bactericidal effect, was compared with that of the conventional hand-washing method (Fürbringer's method using a scrubbing agent containing povidone-iodine). The results showed no significant differences in the numbers of live bacteria and exponential reduction rates in live bacteria. Thus, this new method for hand washing using ozonated water is an effective method for sterilization of the hands before surgery. abstract_id: PUBMED:33707922 Hand Surgery in Japan. Japan has faced the most challenging times in the past. Through precise diligence by stalwarts and doyens of initial hand surgeons, it led an incredible path for the most significant moments of hand surgery. This article describes the early phase of development of Japanese Society for Surgery of the hand, substantial and innovative contributions from surgeons. A noteworthy and significant achievement in the hand surgery is microsurgery and its utilities for all hand-related diseases. The first replantation of the thumb, toe transfers and wrap-around flaps are the effective surgical techniques developed and imparted to the fellow hand surgeons worldwide. We had a particular interest in congenital hand surgery and developed a modification of congenital hand classifications and introduced many surgical techniques. 
Besides, we grew ourselves refining more in hand and microsurgery, innovating flexor tendon repair, peripheral nerve surgeries, wrist arthroscopy, joint replacements, external fixators, and implant arthroplasty for rheumatoid hand. We share our health care information, insurance working model and hand surgery training schedule in Japan. abstract_id: PUBMED:10991818 The development of hand surgery in Japan and the Japanese Society for Surgery of the Hand. Japanese hand surgeons have contributed articles to our journal for many years. This article describes advances in Hand Surgery in Japan and introduces a section of this issue which is devoted to articles from Japan. (The Editor) abstract_id: PUBMED:21416914 Surgical team satisfaction levels between two preoperative hand-washing methods Introduction: Recently, there have been new antiseptics for surgical scrub that do not require brushing. One of them contains 1% chlorhexidine gluconate and 61% ethyl alcohol; within its benefits, it may offer a low potential for skin sensitization, as well as cost savings and less use of water. Objectives: To evaluate satisfaction levels, washing time, safety, cost and amount of water between the traditional surgical scrub technique (group A) and brush-free surgical scrub procedure (group B). Material And Methods: One hundred clean and clean-contaminated surgeries with four hundred members of surgical teams were included. Satisfaction levels, hand-washing time, skin disorders and problems associated with placement of gloves were evaluated. Hands cultures were taken in 20% of the population and the amount of water used by patients in group A was measured. Total costs and wound infections were analyzed. Results: Satisfaction scale in group A was 9.1 +/- 1.39 and 9.5 +/- 1.54 in group B (p = 0.004). The mean hand-washing time was 3.9 +/- 1.07 min in group A and 2.0 +/- 0.47 min in group B (p = 0.00001). Thirteen patients had dry skin in group A and four in group B (6.5% vs. 2%; p = 0.02). There were ten positives cultures in group A and five in group B (25% vs. 12.5%, p = 0.152). Wound infection rate was 3%. On average, five-hundred eighty liters of water were used by the former group, and the estimated hand-washing cost was lower in the second group. Conclusions: The handwashing technique with CGEA is as effective as traditional surgical scrub technique, and it is associated with less washing time, dry skin, cost and use of water. abstract_id: PUBMED:37719618 Hand Scrubbing and Donning of Sterile Surgical Gloves: An Observational Clinical Audit of Novice Dental Surgeons. Background The most critical factors in the satisfactory recovery of a patient post-surgery are obedience to sterilization and aseptic protocol. Using aseptic principles, the standard hand scrubbing and gloving procedure prevents contamination of the surgical site and aids in infection control. Methods Eighty dental interns were observed during minor oral surgical procedures for hand scrubbing and donning sterile surgical gloves, following the steps and guidelines provided by World Health Organization (WHO). The dental interns were evaluated, and in order to enhance their understanding of hand scrubbing and donning surgical gloves, desensitization programs were conducted through lectures using PowerPoint presentations. After one week, the participants were observed and evaluated again. This program made the participants aware of asepsis and infection control in clinical practice. 
Results Prior to intervention, only 37.14% of young dental surgeons performed proper conventional hand hygiene practices. After the intervention, this percentage increased to 62.142%, indicating a significant improvement. Regarding the donning of sterile surgical gloves, 43.75% of participants followed the standard steps before the intervention. After the intervention, the percentage rose to 86.25%, indicating substantial growth. Conclusion Observations before and after the evaluation demonstrated significant changes in the acceptance rates for the fundamental criteria of hand hygiene and donning sterile surgical gloves. Adhering to both procedures according to WHO guidelines will help to reduce the risk of infections and raise awareness about asepsis in the practice among young dental surgeons. abstract_id: PUBMED:6705363 History, current status, and aspects of hand surgery in Japan. The history of modern surgery of the hand in Japan, along with the establishment of the Japanese Society for Surgery of the Hand in 1957, is described in the following article. Major and unique contributions of Japanese surgeons in various aspects of hand surgery are introduced. These contributions encompass microvascular and replantation surgery, tendon repair and grafting, neurotization in brachial plexus injury, reconstruction of nail-bed and paronychial tissues for nail regrowth and that of post-traumatic contracture, congenital anomalies, and osteonecrosis of the lunate. These contributions are not well known internationally because most of the authors have been published in Japanese. abstract_id: PUBMED:33487846 Hand Bacterial Repopulation Dynamics Following Two Methods of Surgical Hand Preparation during Elective Orthopedic Surgeries. Context: Antiseptic hand preparations are routine prior to surgical procedures to reduce microbial load on the operating gloved hands. Two methods of surgical hand preparations available are the antibacterial detergent hand wash and an alcohol-based hand rub. Aim: The aim of the study was to compare quantitatively the efficacy of the two methods in hand bacterial reductions during elective orthopedic surgeries. Setting And Design: This comparative study was conducted at the orthopedic surgery department of a tertiary health facility. Subjects And Methods: One-hundred and sixty dominant hands of operating surgeons and nurses involved in forty elective orthopedic surgeries were studied. The subjects were randomly assigned to either the antibacterial detergent hand-washing (HW) or the alcohol-based hand-rubbing (HR) groups. Swab samples were obtained from the hands before and after hand preparations and at the end of surgeries following removal of the operating gloves. These samples were then subjected to culture. The bacterial counts on these were then obtained through an automated colony counter, and the results were expressed in logarithmic values (log10). Statistical Analysis Used: The analysis was done using IBM SPSS software version 20. The mean results obtained were subjected to an independent t-test analysis with the statistical significance level set at P < 0.05. Results: Both methods of hand antisepsis showed comparable efficacies in attaining surgical hand hygiene at 1-min postapplication (P = 0.73). The HR group, however, showed greater sustained effects during the period of surgeries, though not statistically significant (P = 0.18). Conclusion: Scrubbing using the HR method is a viable alternative to the HW method during elective orthopedic surgery.
abstract_id: PUBMED:32539867 Hand hygiene in surgery in Benin: opportunities and challenges. Background: Hand Hygiene (HH) has been described as the cornerstone and starting point in all infection control. Compliance to HH is a fundamental quality indicator. The aim of this study was to investigate the HH compliance among Health-care Workers (HCWs) in Benin surgical care units. Methods: A multicenter prospective observational study was conducted for two months. The World Health Organization (WHO) Hand Hygiene Observation Tool was used in obstetric and gastrointestinal surgery through six public hospitals in Benin. HH compliance was calculated by dividing the number of times HH was performed by the total number of opportunities. HH technique and duration were also observed. Results: A total of 1315 HH opportunities were identified during observation period. Overall, the compliance rate was 33.3% (438/1315), without significant difference between professional categories (nurses =34.2%; auxiliaries =32.7%; and physicians =32.4%; p = 0.705). However, compliance rates differed (p < 0.001) between obstetric (49.4%) and gastrointestinal surgery (24.3%). Generally, HCWs were more compliant after body fluid exposure (54.5%) and after touching patient (37.5%), but less before patient contact (25.9%) and after touching patient surroundings (29.1%). HCWs were more likely to use soap and water (72.1%) compared to the alcohol based hand rub solution (27.9%). For all of the WHO five moments, hand washing was the most preferred action. For instance, hand rub only was observed 3.9% after body fluid exposure and 16.3% before aseptic action compared to hand washing at 50.6 and 16.7% respectively. Duration of HH performance was not correctly adhered to 94% of alcohol hand rub cases (mean duration 9 ± 6 s instead of 20 to 30 s) and 99.5% of hand washing cases (10 ± 7 s instead of the recommended 40 to 60 s). Of the 432 HCWs observed, 77.3% followed HH prerequisites (i.e. no artificial fingernails, no jewellery). We also noted a lack of permanent hand hygiene infrastructures such as sink, soap, towels and clean water. Conclusion: Compliance in surgery was found to be low in Benin hospitals. They missed two opportunities out of three to apply HH and when HH was applied, technique and duration were not appropriate. HH practices should be a priority to improve patient safety in Benin. abstract_id: PUBMED:25934209 Infection after hand surgery. Postoperative infections are uncommon after hand surgery. Infection can delay recovery and contribute to scarring and stiffness. Measures intended to reduce the risk of infection after hand surgery include hand washing, skin preparation, sterile technique, and prophylactic antibiotics. The role of prophylactic antibiotics for small, clean, elective hand surgery procedures lasting less than 2 hours is debated. Answer: Based on the provided abstracts, sterile water and brushes are not deemed necessary for hand washing before surgery in Japan. A study involving operating room nurses who used either povidone iodine or chlorhexidine gluconate for hand washing with tap water and non-sterilized plastic brushes found that after washing, the bacterial count significantly decreased, suggesting that sterile water and brushes are not required for preoperative scrubbing (PUBMED:16046831). 
Additionally, the use of ozonated water for hand washing before surgery was found to be an effective method for sterilization of the hands, comparable to the conventional method using povidone-iodine (PUBMED:11452482). Furthermore, a study comparing traditional surgical scrub technique with a brush-free surgical scrub procedure found that the latter was associated with less washing time, dry skin, cost, and use of water, while being as effective as the traditional method (PUBMED:21416914). Another study indicated that alcohol-based hand rub is a viable alternative to antibacterial detergent hand-washing during elective orthopedic surgery, with comparable efficacy in reducing hand bacterial load (PUBMED:33487846). These findings suggest that alternative methods to using sterile water and brushes can be effective for surgical hand preparation in Japan.
Instruction: Does allograft size really matter in the long-term outcome of living donor kidney transplantation? Abstracts: abstract_id: PUBMED:25769574 Does allograft size really matter in the long-term outcome of living donor kidney transplantation? Background: Increased allograft mass in living donor kidney transplantation has been recognized as a predictor factor of better short-term allograft function. We evaluated whether donor kidney volume adjusted for recipient body weight is associated with long-term allograft function in living donor kidney transplantation. Methods: We analyzed 67 living donors and their recipients who underwent transplantation between 2003 and 2007. Estimated glomerular filtration rate (eGFR) and serum creatinine levels at 1, 2, 3, 4, and 5 years post-transplantation were recorded for all recipients. Transplanted kidney volumes were measured using 3-D helical computed tomography scanning. A transplant kidney volume-recipient body weight (Vol/Wt) ratio was calculated for each donor-recipient pair. The subjects were divided into tertiles according to Vol/Wt ratios: low (<2.16), medium (2.16-2.88), and high (>2.88). Results: Vol/Wt ratio significantly correlated with recipient eGFR and serum creatinine levels at 1, 2, 3, and 4 years post-transplantation (r = .48, P < .0001; r = .46, P < .0001; r = .47, P < .0001; r = .26, P = .037, respectively, for eGFR; r = -.53, P < .0001; r = -.50, P < .0001; r = -.44, P < .0001; r = -.37, P = .003, respectively, for serum creatinine) but not at 5 years (r = .12, P = .406 for eGFR; r = -.21, P = .110 for serum creatinine). Whereas recipient eGFR increased significantly in a graded fashion among low to high Vol/Wt ratio groups during 1 to 3 years post-transplantation, there was no difference in eGFR values between Vol/Wt ratio groups at 4 and 5 years (P = .21 and .71, respectively). Conclusion: Vol/Wt ratio is not associated with long-term allograft function in living donor kidney transplantation. abstract_id: PUBMED:34895720 Long-term outcomes of living-donor lobar lung transplantation. Objective: Although living-donor lobar lung transplantation (LDLLT) enables an intermediate survival similar to cadaveric lung transplantation, the long-term outcome remains unknown. We examined the long-term outcomes of 30 patients who received LDLLT more than 16 years previously. Methods: We retrospectively reviewed the clinical data of 30 patients who underwent LDLLT (bilateral LDLLT, 29 patients; single LDLLT, 1 pediatric patient) between October 1998 and April 2004. Results: LDLLT was performed for 25 female and 5 male patients ranging in age from 8 to 55 years. The diagnoses included pulmonary hypertension (n = 11), pulmonary fibrosis (n = 7), bronchiolitis obliterans (n = 5), and others (n = 7). At a median follow-up of 205 months, 22 patients were alive and 8 were dead. The causes of death were infection (n = 3), malignancy (n = 2), acute rejection (n = 2), and chronic lung allograft dysfunction (CLAD; n = 1). Unilateral CLAD occurred in 17 patients (56.7%), but only 1 of these patients subsequently developed bilateral CLAD. Two patients underwent bilateral cadaveric lung retransplantations. The 5-, 10-, and 15-year CLAD-free survival rates were 80.0%, 62.8%, and 44.3%, respectively. Malignancy occurred in 7 patients. Two of 5 patients with chronic kidney disease requiring hemodialysis underwent living-donor kidney transplantation. The 5-, 10-, and 15-year overall survival rates were 96.7%, 86.7%, and 73.3%, respectively. 
Conclusions: Although only 2 lobes are implanted, LDLLT provides encouraging long-term outcomes. In patients with unilateral CLAD, the functioning contralateral graft might contribute to a favorable long-term outcome. abstract_id: PUBMED:29266831 Inferior long-term allograft and patient outcomes among recipients of offspring living donor kidneys. While offspring-to-parent living donor kidney transplantations may represent an ideal donor-recipient combination to optimize long-term transplantation outcomes, the sex-specific long-term success of these transplantations remains unclear. We hypothesize that allograft and recipient survivals in offspring-to-parent living donor kidney transplantation differ between men and women due to donor-specific alloimmunization during pregnancy. We retrospectively analyzed long-term allograft and patient survival among men and women who received an offspring living donor kidney compared with those who received other haplotype-matched living donor kidneys. Based on multivariable Cox proportional hazards modeling of Organ Procurement and Transplantation Network data from 2001 to 2015, we found that both men and women who received offspring living donor kidneys had significantly increased mortality compared with recipients who received nonoffspring living donor kidneys. While male recipients of any living donor kidney had a greater risk of mortality and allograft failure than female recipients, there was no significant difference in all-cause allograft failure or mortality in male versus female recipients of offspring living donor kidney transplantations. Our analysis demonstrated no significant interaction between recipient sex and donor offspring status. We conclude that nonoffspring living donors should be considered whenever feasible for both men and women with multiple donor options. abstract_id: PUBMED:37213488 Living Donor Kidney Transplant in Recipients With Glomerulonephritis: Donor Recipient Biologic Relationship and Allograft Outcomes. Using the Scientific Registry of Transplant Recipients, we examined the association between donor-recipient biologic relationship and long-term recipient and allograft survival among glomerulonephritis (GN) patients. Four GN types were studied: membranous nephropathy, IgA, lupus-associated nephritis, and focal segmental glomerulosclerosis (FSGS). We identified all adult primary living-donor recipients between 2000 and 2018 (n = 19,668): related (n = 10,437); unrelated (n = 9,231). Kaplan-Meier curves were generated for recipient survival, death-censored graft survival and death with a functioning graft through ten years post-transplant. Multivariable Cox proportional hazard models were used to examine the association between the donor-recipient relationship and outcomes of interest. There was an increased risk for acute rejection by 12 months post-transplant among the unrelated compared to the related group in IgA (10.1% vs. 6.5%, p<0.001), FSGS (12.1% vs. 10%, p=0.016), and lupus nephritis (11.8% vs. 9.2%; p=0.049). The biological donor-recipient relationship was not associated with a worse recipient or graft survival or death with functioning graft in the multivariable models. These findings are consistent with the known benefits of living-related-donor kidney transplants and counter the reports of the potential adverse impact of the donor-recipient biologic relationship on allograft outcomes.
abstract_id: PUBMED:23991890 Impact of donor and recipient age difference on long-term allograft survival after living donor renal transplantation: analysis of 478 cases. Introduction: Either deceased or living-related renal transplantation constitutes the best therapeutic option for patients with end-stage renal disease. In this retrospective study, an attempt to identify parameters that affect allograft survival in living donor renal transplantation was made. Methods: Between January 2000 and July 2012, 478 adult patients received a renal transplant from a living-related donor in our center and their records were retrospectively reviewed in November 2012. Data concerning donor age, recipient age, donor/recipient age difference, donor/recipient gender, and ABO compatibility/incompatibility were recorded and associated with renal allograft survival rate. Results: Renal allograft survival rate was 96%, 89.5%, and 77.7% in the first, fifth, and 10th yr after transplantation, respectively. Only the difference between donor and recipient age was statistically significant in relation to graft survival. In cases with age difference >13 yr, graft survival rate was lower from the third yr onward. Conclusions: Only the age difference between donor and recipient exerts an adverse impact on graft outcome after living donor renal transplantation, whereas donor age, recipient age, donor/recipient gender, and ABO incompatibility do not significantly influence renal allograft survival. abstract_id: PUBMED:33194009 Long-term outcomes of laparoscopic versus open donor nephrectomy for kidney transplantation: a meta-analysis. Laparoscopic surgery is widely used for living donor nephrectomy and has demonstrated superiority over open surgery by improving several outcomes, such as length of hospital stay and morphine requirements. The purpose of the present study was to compare the long-term outcomes of open donor nephrectomy (ODN) versus laparoscopic donor nephrectomy (LDN) using meta-analytical techniques. The Web of Science, PubMed and Cochrane Library databases were searched, for relevant articles published between 1980 and January 20, 2020. Lists of reference articles retrieved in primary searches were manually screened for potentially eligible studies. Outcome parameters were explored using Review Manager version 5.3. The evaluated outcomes included donor serum creatinine levels, incidence of hypertension or proteinuria at 1 year postoperative, donor health-related quality of life, donation attitude, and graft survival. Thirteen of the 111 articles fulfilled the inclusion criteria. The LDN group demonstrated similar 1 year outcomes compared with ODN with respect to serum creatinine levels (weighted mean difference [WMD] -0.02 mg/dL [95% confidence interval (CI) -0.18-0.13]; P=0.77); hypertension (odds ratio [OR] 1.21 [95% CI 0.48-3.08]; P=0.68); proteinuria (OR 0.28 [95% CI 0.02-3.11]; P=0.30); and donation attitude (OR 4.26 [95% CI 0.06-298.27]; P=0.50). Donor health-related quality of life and recipient graft survival were also not significantly different between the groups analyzed. Thus, the long-term outcomes between LDN and ODN for living donor kidney transplantation are similar. abstract_id: PUBMED:24175190 Long-term renal function, complications and life expectancy in living kidney donors. Living kidney transplantation is now a widely accepted treatment for end stage renal disease (ESRD) because it provides excellent outcomes for recipients. 
However, long-term outcomes of living kidney donors have not been well understood. Because securing the safety of the donor is essential to the continued success of this procedure, we reviewed articles discussing long-term outcomes of living kidney donors. Most studies found no decrease in long-term survival or progressive renal dysfunction in previous kidney donors. Moreover, the prevalence of hypertension was comparable to that expected in the general population, although some did report otherwise. Urinary protein showed small increases in this population and was associated with hypertension and a lower glomerular filtration rate. Quality of life following living kidney donation seems to be better than the national norm. We also encountered several reports of ESRD in previous living kidney donors. Regular follow-up of kidney donors is recommended and future controlled, prospective studies will better delineate risk factors which cause health problems following living kidney donation. abstract_id: PUBMED:32416641 Predictors of long-term renal allograft survival after second kidney transplantation. Introduction: Few studies investigate significant perioperative predictors for long-term renal allograft survival after second kidney transplant (SKT). We compared long-term survival following SKT with primary kidney transplant and determined predictors of renal allograft failure after SKT. Methods: Outcomes of all primary or second kidney transplant recipients at a national kidney transplant center between 1993 and 2017 were reviewed. The primary outcomes measurements were renal allograft survival for both first and second kidney transplants. Secondary outcome measurements were incidence of delayed graft function (DGF), incidence of acute rejection (AR), and predictors for renal allograft survival in SKT recipients. Results: In total, there were 392 SKTs and 2748 primary kidney transplants performed between 1993 and 2017. The 1-, 5-, and 10-year death-censored graft survival for deceased-donor recipients was 95.3%, 88.7%, and 78.2% for primary kidney transplant and 94.9%, 87.1%, and 74.9% for SKT (P = .0288). Survival of primary renal allograft <6 years (HR 0.6, P = .017), AR episodes (HR 1.6, P = .031), DGF (HR 2.0, P = .005), and HLA-DR MM (HR 1.7, P = .018) were independent predictors of long-term renal allograft failure after SKT. Conclusion: These findings may provide important information on long-term survival outcomes after SKT and for identifying patients at risk for long-term renal allograft failure after SKT. abstract_id: PUBMED:35100577 Long-Term Outcomes of Left versus Right Laparoscopic Living Donor Nephrectomy with Multiple Renal Arteries. Objectives: The objective of this study was to compare the long-term graft outcomes of left-versus-right donor nephrectomy with multiple renal arteries (MRAs), thereby creating a reference for the expansion of the potential living kidney donor pool. Methods: Laparoscopic live donor nephrectomy cases between May 2010 and October 2020 were included in this retrospective cross-sectional study. The data relating to donor and recipient demographics, surgical and anatomical characteristics, and recipient and graft status were retrieved and compared using nonparametric statistical methods and multivariate regression. Analyses were fit for survival factors. Results: A total of 1,009 recipients were included in this retrospective cross-sectional study with their donors.
16.7% of the donors had been discovered to have more than one renal artery supplying the donated kidney. The acute rejection rate was 12.8%. Death-censored graft survival at postoperative year 5 for single renal artery (SRA) transplants was 89.6%, 89.5% for left-sided MRAs, and 88.2% for right-sided ones. Conclusions: Both right donor nephrectomy and left donor nephrectomy are safe procedures, with no significant negative impact on long-term recipient survival or complication rates compared with SRA transplants. abstract_id: PUBMED:31478267 The temporal and long-term impact of donor body mass index on recipient outcomes after kidney transplantation - a retrospective study. The impact of increasing body mass index (BMI) on development and progression of chronic kidney disease is established. Even implantation kidney biopsies from obese living donors demonstrate subtle histologic changes despite normal function. We hypothesized that kidneys from obese living (LD) and deceased donors (DD) would have inferior long-term allograft outcomes. In a study utilizing the US transplant registry, we studied adult kidney transplant recipients from 2000 to 2014. Donors were categorized as BMI <20 (underweight), 20-25 (normal), 25-30 (overweight), 30-35 (mildly obese), and >35 kg/m2 (very obese). Our outcome of interest was death-censored graft failure (DCGF). Cox proportional hazards models were fitted separately for recipients of DD and LD kidneys, and adjusted for donor, recipient, and transplant characteristics, including donor and recipient size mismatch ratio.
In summary, while allograft size may have some short-term correlations with better outcomes, it does not appear to be a significant factor in the long-term function of living donor kidney transplants. Other factors, such as age difference between donor and recipient and donor BMI, may have more substantial impacts on long-term allograft survival (PUBMED:25769574, PUBMED:23991890, PUBMED:31478267).
Instruction: Does simple hysterectomy alter bowel function? Abstracts: abstract_id: PUBMED:9474423 Does simple hysterectomy alter bowel function? Background And Aims: Hysterectomy is believed to be associated with disturbed defecation, mainly constipation. This study longitudinally describes bowel function in women submitted for hysterectomy. Material And Methods: Rectoanal manovolumetry, whole gut transit time and detailed interviews on bowel function and dyspareunia were performed preoperatively and at 3 and 11-18 months after hysterectomy in 42 women. Twenty healthy women matched for age and parity served as manovolumetry controls. Results: No significant changes in anal sphincter pressures could be demonstrated, either early or late after hysterectomy. Transit time was unaffected. All but one of the patients claimed that they had been suffering from one or more of the following symptoms: abdominal pain, distension, constipation and dyspareunia. While postoperative interviews revealed a significant improvement with respect to abdominal pain and dyspareunia (P < 0.01) after 3 and 11-18 months, improvement of abdominal distension and constipation proved to be transient only. Conclusion: Simple abdominal hysterectomy appears not to interfere adversely with bowel function. On the contrary, many patients were relieved of abdominal pain present before the operation. abstract_id: PUBMED:22374890 Robotic nerve-sparing radical parametrectomy: feasibility and technique. Background: The objective of this research was to evaluate the feasibility of robotic nerve-sparing radical parametrectomy for cervical cancer after simple hysterectomy. Methods: A 41-year-old patient was diagnosed with invasive cervical adenocarcinoma after simple hysterectomy. Hysterectomy margins were negative. A robotic nerve-sparing radical parametrectomy was offered and performed 5 weeks later. Results: Total operating time was 330 min, blood loss was 145 ml and length of hospitalization 2 days. Pathology revealed no residual tumour. Normal bladder function resumed on postoperative day 9. At a follow-up of 16 months, the patient remains with no evidence of disease and with normal bladder and bowel function. Conclusions: Robotic nerve-sparing radical parametrectomy is safe and feasible and can be offered to patients with indications for radical parametrectomy. abstract_id: PUBMED:12592421 A study of non-closure of the peritoneum at vaginal hysterectomy. The aim of this case-control study was to determine whether non-closure of the peritoneum is detrimental in vaginal hysterectomy. 233 patients who underwent total vaginal hysterectomy (TVH) or laparoscopically assisted vaginal hysterectomy (LAVH) at the University of Vienna/Austria were analyzed. Cohorts of patients were formed according to their peritonealization status (open, n=117, vs closed peritoneum, n=116) and further stratified according to the type of surgical procedure: simple TVH (n=115), TVH with concurrent vaginal repair and/or urinary incontinence surgery (n=91) and LAVH (n=27). No significant differences could be observed in the analyzed surgical outcomes (operation time, blood loss and analgesia). Complications (fever, infection, hemorrhage or revision) were similar whether the peritoneum was closed or not. After simple TVH, resumption of bowel function took place earlier in patients with an open peritoneum than in those where it had been sutured (1.9 vs 2.4 days, P=0.001). No readmission for prolapse of the vaginal vault was recorded.
Non-closure of the peritoneum at vaginal hysterectomy appears to be safe. Omission of peritoneal closure reduces the potential risk of injury and has a beneficial effect on bowel function. abstract_id: PUBMED:12013162 Development and validation of a questionnaire for the assessment of bowel and lower urinary tract symptoms in women. Objective: To develop a simple but sensitive instrument to evaluate and document symptoms of both bowel and urinary dysfunction in women. Design: A 22-item questionnaire covering a range of bowel and urinary symptoms was developed and underwent rigorous psychometric testing. Setting: The gynaecology departments of three hospitals, a urogynaecology clinic, a functional bowel clinic and a general practice. Population: Six hundred and thirty women, comprising four groups: 1. women awaiting hysterectomy (n = 379), 2. women following hysterectomy (n = 45), 3. women referred with functional bowel and/or urinary symptoms (n = 65), 4. asymptomatic controls (n = 141). Main Outcome Measures: The content, construct and criterion validity, internal consistency, reliability and responsiveness of the questionnaire were measured. Results: Peer and patient reports and missing data patterns supported face and content validity. Factor analysis showed a clinically relevant four-factor structure with low content replication able to distinguish between patient groups, indicating good internal structure. Comparison with clinical, anorectal physiological, videoproctographic, transit time and urodynamic test results provided a provisional indication of criterion validity. Key domain question analysis and Cronbach's alphas showed internal consistency. Kappa values demonstrated good test-retest reliability and key question correlation over time proved responsiveness. Conclusions: Our findings support the suitability, clinical validity, reliability and responsiveness of a simple questionnaire, which is sensitive to the constraints of clinical practice. The authors recommend its use in health care evaluation research assessing the effects of pelvic surgery and as a useful tool in comparing treatment efficacy. abstract_id: PUBMED:31609137 The impact of chewing gum on postoperative bowel activity and postoperative pain after total laparoscopic hysterectomy. We aimed to investigate the effects of chewing gum on bowel activity and postoperative pain in patients undergoing laparoscopic hysterectomy. Patients were randomised into two groups (n = 58, study; n = 51, control). In the study group, patients started chewing sugarless gum every 2 h for 15 min, beginning at the second postoperative hour. The control group did not chew gum, and they received standard postoperative care. Both groups were compared primarily in terms of the amount of time until the first bowel movement, the time of the first passage of flatus and the time of first defaecation. The amount of time until the first bowel movement, the time of the first passage of flatus and the time of the first defaecation were found to be significantly shorter in the chewing gum group (p < .001). The amount of postoperative analgesics needed and the VAS scores at 6 hours and 24 hours postoperatively were found to be lower in the study group than in the control group (p < .001). Chewing gum was found to have beneficial effects on bowel motility and postoperative pain in patients undergoing laparoscopic hysterectomy.
This affordable and simple method could be recommended to patients after total laparoscopic hysterectomy. Impact statement: What is already known on this subject? Postoperative gastrointestinal dysfunction remains a source of morbidity and the major determinant of length of stay after abdominal operations. The mechanism of enhanced recovery from postoperative gastrointestinal dysfunction with the help of chewing gum is believed to be the cephalic-vagal stimulation of digestion, which increases the activity of neural and humoral factors that act on different parts of the gastrointestinal tract. What do the results of this study add? The findings of previous randomised controlled studies have been inconsistent regarding the effect of chewing gum on postoperative bowel function following abdominal gynecological surgery. In this randomised prospective study, we found that chewing gum early in the postoperative period after total laparoscopic hysterectomy hastened time to bowel motility and flatus. To our knowledge, this is the first study of the impact of chewing gum on bowel motility after total laparoscopic hysterectomy. What are the implications of these findings for clinical practice and/or further research? Chewing gum early in the postoperative period following laparoscopic hysterectomy hastens time to bowel motility and flatus. The use of chewing gum is a simple and cheap strategy for promoting the recovery of gastrointestinal functions. abstract_id: PUBMED:19410279 Long-term assessment of bladder and bowel dysfunction after radical hysterectomy. Objective: To determine the long-term effects of radical hysterectomy on bladder and bowel function. Methods: Subjects included women who underwent radical hysterectomy for early stage cervical cancer between 1993 and 2003. Two contemporary controls who underwent extrafascial abdominal hysterectomy for benign disease were identified for each subject. Identified subjects and controls were surveyed. The Urogenital Distress Inventory (UDI) was used to assess symptoms of incontinence, and the Incontinence Impact Questionnaire (IIQ) was used to examine the impact of incontinence on quality of life. The Manchester Health Questionnaire and Fecal Incontinence Quality of Life Scale (FIQL) were used to assess anorectal symptoms. Results: Surveys were returned by 66 of 209 (32%) subjects and 152 of 428 (36%) controls. Overall, 50% of subjects and 42% of controls reported mild incontinence symptoms; 34% of subjects and 35% of controls reported moderate-severe symptoms (p=0.72). Incontinence was associated with moderate-severe impairment in 18% of subjects and 14% of controls (p=0.74). Fecal incontinence symptoms were uncommon, not differing between subjects and controls. Conclusion: Urinary incontinence is relatively common after radical hysterectomy, but severe anorectal dysfunction is uncommon. Radical hysterectomy does not appear to be associated with more long-term bladder or anorectal dysfunction than simple hysterectomy. abstract_id: PUBMED:26628070 The Outcomes of Ultralow Anterior Resection or an Abdominoperineal Pull-Through Resection and Coloanal Anastomosis for Radiation-Induced Recto-Vaginal Fistula Patients. Purpose: The purpose of this study was to evaluate the outcomes of patients who underwent colorectal resections and coloanal anastomosis for radiation-induced recto-vaginal fistulas (RVFs). The effect of the surgical treatment technique on bowel function, fecal continence, and quality of life of patients was also evaluated.
Methods: Twenty-one female patients who had received adjuvant chemotherapy and external beam pelvic radiation for cervical carcinoma after radical hysterectomy with pelvic/paraaortic lymph node dissection, and who had RVF without tumor recurrence, were included. All patients underwent an ultralow anterior resection (n = 11) or an abdominoperineal pull-through resection and straight coloanal anastomosis (n = 10). A bowel function questionnaire and a Fecal Incontinence Quality of Life (FIQLI) questionnaire were applied to patients pre-operatively and also 6 months after the ileostomy closure procedures. Results: No recurrent RVF was observed in a mean follow-up period of 20 months after ostomy reversal procedures. The FIQLI depression, lifestyle, and embarrassment scores were significantly improved on the follow-up questionnaire. The mean pre- and post-operative incontinence scores were not significantly different. Conclusions: The spontaneous closure rate after a simple diverting stoma is quite low and local repair procedures usually result in failure. In selected patients, performing a nearly total rectum resection and maintaining intestinal continuity with a coloanal anastomosis may be accepted as a safe and curative option. The recurrence-free outcome and the improvement in patients' quality of life demonstrate the efficacy of this treatment modality. abstract_id: PUBMED:9689560 The effect of hysterectomy on ano-rectal physiology. Hysterectomy is associated with severe constipation in a subgroup of patients, and an adverse effect on colonic motility has been described in the literature. The onset of irritable bowel syndrome and urinary bladder dysfunction has also been reported after hysterectomy. In this prospective study, we investigated the effect of simple hysterectomy on ano-rectal physiology and bowel function. Thirty consecutive patients were assessed before and 16 weeks after the operation. An abdominal hysterectomy was performed in 16 patients, and a vaginal procedure was performed in 14. The parameters measured included the mean resting and maximal forced voluntary contraction anal pressures, the recto-anal inhibitory reflex, and rectal sensation to distension. In 8 patients, the terminal motor latency of the pudendal nerve was assessed bilaterally. Pre-operatively, 8 patients were constipated. This improved following hysterectomy in 4, worsened in 2, and was unchanged in 2. Symptomatology did not correlate with changes in manometry. Although the mean resting pressure was reduced after hysterectomy (57 mmHg to 53 mmHg, P = 0.0541), the maximal forced voluntary contraction pressure was significantly decreased (115 mmHg to 105 mmHg, P = 0.029). This effect was more pronounced in those with five or more previous vaginal deliveries (P = 0.0244, n = 9). There was no significant change in the number of patients with an intact ano-rectal inhibitory reflex after hysterectomy. There was no change in rectal sensation to distension, and the right and left pudendal nerve terminal motor latencies were unaltered at follow-up. Our results demonstrate that hysterectomy causes a decrease in the maximal forced voluntary contraction pressure, and this appears to be due to a large decrease in a small group of patients with previous multiple vaginal deliveries. abstract_id: PUBMED:27780709 Effects of coffee consumption on gut recovery after surgery of gynecological cancer patients: a randomized controlled trial.
Background: Paralytic ileus that develops after elective surgery is a common and uncomfortable complication and is considered inevitable after an intraperitoneal operation. Objective: The purpose of this study was to investigate whether coffee consumption accelerates the recovery of bowel function after complete staging surgery of gynecologic cancers. Study Design: In this randomized controlled trial, 114 patients were allocated preoperatively to either postoperative coffee consumption 3 times daily (n=58) or routine postoperative care without coffee consumption (n=56). Total abdominal hysterectomy and bilateral salpingo-oophorectomy with systematic pelvic and paraaortic lymphadenectomy were performed on all patients as part of complete staging surgery for endometrial, ovarian, cervical, or tubal cancer. The primary outcome measure was the time to the first passage of flatus after surgery. Secondary outcomes were the time to first defecation, time to first bowel movement, and time to tolerance of a solid diet. Results: The mean time to flatus (30.2±8.0 vs 40.2±12.1 hours; P<.001), mean time to defecation (43.1±9.4 vs 58.5±17.0 hours; P<.001), and mean time to the ability to tolerate food (3.4±1.2 vs 4.7±1.6 days; P<.001) were reduced significantly in patients who consumed coffee compared with control subjects. Mild ileus symptoms were observed in 17 patients (30.4%) in the control group compared with 6 patients (10.3%) in the coffee group (P=.01). Coffee consumption was well-tolerated and well-accepted by patients, and no intervention-related side-effects were observed. Conclusion: Coffee consumption after total abdominal hysterectomy and systematic paraaortic lymphadenectomy expedites the time to bowel motility and the ability to tolerate food. This simple, cheap, and well-tolerated treatment should be added as an adjunct to the postoperative care of gynecologic oncology patients. abstract_id: PUBMED:22453316 Bowel function after minimally invasive urogynecologic surgery: a prospective randomized controlled trial. Objectives: The goals of this study were to assess the effect of a standardized postoperative bowel regimen of over-the-counter medications on (1) time to first bowel movement (BM) and (2) pain level associated with the first BM in subjects undergoing minimally invasive urogynecologic surgery. Methods: Eligible patients scheduled to undergo minimally invasive urogynecologic surgery were offered participation. Enrolled subjects were randomized by computerized schedule. Demographic and perioperative data were collected. Subjects completed a validated questionnaire preoperatively and postoperatively assessing preexisting constipation, frequency and consistency of bowel movements, use of pain medications, mean daily pain level (using a visual analog scale), stool consistency, and pain associated with the first postoperative bowel movement. The control group was instructed to take docusate sodium twice daily postoperatively. The treatment group took docusate sodium plus Miralax, fiber wafers, and bisacodyl suppositories as directed by protocol. Wilcoxon or t testing was used to compare continuous variables; chi-square testing was used for categorical relationships, and backward-elimination multiple regression was used to assess independent effects. Results: Seventy-two subjects were enrolled and randomized. Twelve subjects withdrew, leaving 60 (30 per group) completing the study. There were no statistically significant differences between groups in baseline characteristics.
Mean (SD) age was 63 (9) years for the control group and 58 (10) for the study group (P = 0.06). Mean pelvic organ prolapse stage was III in each group. The mean (SD) operating room time was 198 (65) minutes for the controls and 216 (74) for the study subjects. Sixty-five percent underwent robot-assisted surgery (50% hysterectomy and 63% sacrocolpopexy). Ninety-eight percent of surgeries were performed under general anesthesia. Before adjustment, the mean (SD) time to first BM was 77 (24) hours in controls versus 64 (21) in the study patients (P = 0.03). Using multiple regression, baseline frequency of defecation (1-2 BMs/wk) was directly associated with the time to first BM (added 25.2 hours; P = 0.009) and being in the study group was inversely associated (first BM, 11.7 hours sooner; P = 0.04). No other variables were retained. There was no difference in pain associated with first postoperative BM (visual analog scale, 3.6 (3.2) vs 3.7 (2.8); P = 0.98), but those with prior complaints of vaginal or rectal splinting had higher pain scores (1.9 and 2.8 points higher, respectively; P = 0.04 for both). There was a trend toward higher pain scores with higher postoperative daily narcotic intake (P = 0.06). No other variables were retained. There was a significant difference in recorded compliance between control versus study regimens (94% vs 81%, respectively; P = 0.002). Conclusions: Mean time to first postoperative BM after minimally invasive urogynecologic surgery is more than 3.5 days with use of docusate sodium alone and is only slightly shorter when combination therapy is used. First BM after surgery is considered to be painful despite the use of medications. Future studies targeting postoperative discomfort/pain with defecation could target preoperative bowel regimens or more aggressive postoperative interventions. Regimens should remain simple to increase compliance. Answer: Based on the provided abstracts, simple hysterectomy does not appear to adversely interfere with bowel function. A study longitudinally describing bowel function in women submitted for hysterectomy found no significant changes in anal sphincter pressures, transit time, or bowel function after simple abdominal hysterectomy. In fact, many patients reported relief from abdominal pain present before the operation (PUBMED:9474423). Another study on non-closure of the peritoneum at vaginal hysterectomy indicated that non-closure may have a beneficial effect on bowel function, with earlier resumption of bowel function observed in patients with open peritoneum compared to those with closed peritoneum after simple total vaginal hysterectomy (PUBMED:12592421). Furthermore, a long-term assessment of bladder and bowel dysfunction after radical hysterectomy showed that severe anorectal dysfunction was uncommon and that radical hysterectomy did not appear to be associated with more long-term bladder or anorectal dysfunction than simple hysterectomy (PUBMED:19410279). Another study investigating the effect of hysterectomy on ano-rectal physiology found that while there was a decrease in maximal forced voluntary contraction pressure after hysterectomy, this effect was more pronounced in patients with multiple previous vaginal deliveries, and there was no significant change in other ano-rectal physiology parameters (PUBMED:9689560).
In summary, the evidence from these studies suggests that simple hysterectomy does not have a significant negative impact on bowel function, and in some cases, it may even lead to an improvement in symptoms related to bowel function.
Instruction: Do clinical, histological or immunohistochemical primary tumour characteristics translate into different (18)F-FDG PET/CT volumetric and heterogeneity features in stage II/III breast cancer? Abstracts: abstract_id: PUBMED:26140849 Do clinical, histological or immunohistochemical primary tumour characteristics translate into different (18)F-FDG PET/CT volumetric and heterogeneity features in stage II/III breast cancer? Purpose: The aim of this retrospective study was to determine if some features of baseline (18)F-FDG PET images, including volume and heterogeneity, reflect clinical, histological or immunohistochemical characteristics in patients with stage II or III breast cancer (BC). Methods: Included in the present retrospective analysis were 171 prospectively recruited patients with stage II/III BC treated consecutively at Saint-Louis hospital. Primary tumour volumes were semiautomatically delineated on pretreatment (18)F-FDG PET images. The parameters extracted included SUVmax, SUVmean, metabolically active tumour volume (MATV), total lesion glycolysis (TLG) and heterogeneity quantified using the area under the curve of the cumulative histogram and textural features. Associations between clinical/histopathological characteristics and (18)F-FDG PET features were assessed using one-way analysis of variance. Areas under the ROC curves (AUC) were used to quantify the discriminative power of the features significantly associated with clinical/histopathological characteristics. Results: T3 tumours (>5 cm) exhibited higher textural heterogeneity in (18)F-FDG uptake than T2 tumours (AUC <0.75), whereas there were no significant differences in SUVmax and SUVmean. Invasive ductal carcinoma showed higher SUVmax values than invasive lobular carcinoma (p = 0.008) but MATV, TLG and textural features were not discriminative. Grade 3 tumours had higher FDG uptake (AUC 0.779 for SUVmax and 0.694 for TLG), and exhibited slightly higher regional heterogeneity (AUC 0.624). Hormone receptor-negative tumours had higher SUV values than oestrogen receptor-positive (ER-positive) and progesterone receptor-positive tumours, while heterogeneity patterns showed only low-level variation according to hormone receptor expression. HER-2 status was not associated with any of the image features. Finally, SUVmax, SUVmean and TLG significantly differed among the three phenotype subgroups (HER2-positive, triple-negative and ER-positive/HER2-negative BCs), but MATV and heterogeneity metrics were not discriminative. Conclusion: SUV parameters, MATV and textural features showed limited correlations with clinical and histopathological features. The three main BC subgroups differed in terms of SUVs and TLG but not in terms of MATV and heterogeneity. None of the PET-derived metrics offered high discriminative power. abstract_id: PUBMED:35158904 Prognostic Value of Metabolic, Volumetric and Textural Parameters of Baseline [18F]FDG PET/CT in Early Triple-Negative Breast Cancer. (1) Background: triple-negative breast cancer (TNBC) remains a clinical and therapeutic challenge primarily affecting young women with poor prognosis. TNBC is currently treated as a single entity but presents a very diverse profile in terms of prognosis and response to treatment. Positron emission tomography/computed tomography (PET/CT) with 18F-fluorodeoxyglucose ([18F]FDG) is gaining importance for the staging of breast cancers. 
TNBCs often show high [18F]FDG uptake and some studies have suggested a prognostic value for metabolic and volumetric parameters, but no study to our knowledge has examined textural features in TNBC. The objective of this study was to evaluate the association between metabolic, volumetric and textural parameters measured at the initial [18F]FDG PET/CT and disease-free survival (DFS) and overall survival (OS) in patients with nonmetastatic TNBC. (2) Methods: all consecutive nonmetastatic TNBC patients who underwent a [18F]FDG PET/CT examination upon diagnosis between 2012 and 2018 were retrospectively included. The metabolic and volumetric parameters (SUVmax, SUVmean, SUVpeak, MTV, and TLG) and the textural features (entropy, homogeneity, SRE, LRE, LGZE, and HGZE) of the primary tumor were collected. (3) Results: 111 patients were enrolled (median follow-up: 53.6 months). In the univariate analysis, high TLG, MTV and entropy values of the primary tumor were associated with lower DFS (p = 0.008, p = 0.006 and p = 0.025, respectively) and lower OS (p = 0.002, p = 0.001 and p = 0.046, respectively). The discriminating thresholds for two-year DFS were calculated as 7.5 for MTV, 55.8 for TLG and 2.6 for entropy. The discriminating thresholds for two-year OS were calculated as 9.3 for MTV, 57.4 for TLG and 2.67 for entropy. In the multivariate analysis, lymph node involvement in PET/CT was associated with lower DFS (p = 0.036), and the high MTV of the primary tumor was correlated with lower OS (p = 0.014). (4) Conclusions: textural features associated with metabolic and volumetric parameters of baseline [18F]FDG PET/CT have a prognostic value for identifying high-relapse-risk groups in early TNBC patients. abstract_id: PUBMED:26834096 ¹⁸F-FDG PET/CT for Staging and Restaging of Breast Cancer. Many studies have pointed out the role of (18)F-FDG PET/CT (or (18)F-FDG PET) in patients with clinical stage III or II breast cancer. (18)F-FDG PET/CT might advantageously replace other staging procedures, such as bone scanning and possibly contrast-enhanced CT of the thorax or abdomen-pelvis. We discuss the findings, locoregional or distant, that can be expected in different categories of breast cancer and their impact on prognosis and management. We also discuss the role of (18)F-FDG PET/CT in restaging and how (18)F-FDG PET/CT compares with conventional techniques in restaging for patients with suspected disease recurrence. We conclude with some recommendations for clinical practice and future research. abstract_id: PUBMED:24899946 (18)F-FDG PET/CT with Contrast Enhancement for Evaluation of Axillary Lymph Node Involvement in T1 Breast Cancer. Background: (18)F-fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET) safely predicts axillary status in patients with breast cancer, but is not sufficiently accurate in early breast cancer patients. This study analyzed the value of (18)F-FDG PET/computed tomography (CT) with contrast enhancement in detecting axillary lymph node involvement in T1 breast cancer patients. Methods: Contrast-enhanced (18)F-FDG PET/CT was performed within 20 days of surgery in 143 breast cancer patients with tumors ≤2 cm in size. The patients underwent either axillary lymph node dissection (ALND) or sentinel lymph node biopsy (SLNB), and histopathology reports were used to provide the definitive diagnosis against which the contrast-enhanced (18)F-FDG PET/CT study results were compared.
Results: The sensitivity, specificity, and negative and positive predictive values of contrast-enhanced (18)F-FDG PET/CT in detecting axillary involvement were 70.0%, 92.2%, 88.8%, and 77.8%, respectively, in the entire series of 143 patients, with eight false-positive and 12 false-negative results. The false-negative results were associated with the number of metastatic lymph nodes and the rate of FDG uptake. Conclusion: Contrast-enhanced (18)F-FDG PET/CT cannot replace histologic staging using SLNB in patients with breast cancer, but (18)F-FDG PET/CT increases the sensitivity for predicting axillary node metastasis, and allows for a selective approach to either ALND or SLNB, even in patients with T1 breast cancer. abstract_id: PUBMED:24900104 Dynamic (18)F-FDG PET for Assessment of Tumor Physiology in Two Breast Carcinoma Xenografts. Purpose: To compare dynamic 2-deoxy-2-[(18)F]fluoro-D-glucose positron emission tomography ((18)F-FDG PET) parameters in two selected human breast cancer xenografts and to evaluate associations with immunohistochemistry and histology. Procedures: Dynamic (18)F-FDG PET of luminal-like MAS98.06 and basal-like MAS98.12 xenografts was performed, and the compartmental transfer rates (k1, k2, k3), blood volume fraction (vB) and metabolic rate of (18)F-FDG (MRFDG) were estimated from pharmacokinetic model analysis. After sacrifice, analyses of hypoxia (pimonidazole), proliferation (Ki-67), vascularization (CD31), glucose transport receptor (GLUT1) and necrosis (HE) were performed. The level of hexokinase 2 (HK2) was estimated from Western blot analysis. Results: The (18)F-FDG uptake curves for the two xenografts were significantly different (p < 0.05). k1 and vB were higher for MAS98.12 (p < 0.01), while k3 was higher for MAS98.06 (p < 0.01). MAS98.12 had a higher fraction of stromal tissue and higher microvessel density (MVD), and it was less necrotic and hypoxic than MAS98.06. MAS98.12 had stronger positive GLUT1 staining and lower Ki-67 than MAS98.06. In both models, significant correlations were found between k1 and the GLUT1 score, between k3 and the level of HK2, and between vB and MVD. Conclusions: Significant differences in dynamic (18)F-FDG parameters between the two human breast cancer xenografts were found. The differences could be explained by underlying histological and physiological characteristics. abstract_id: PUBMED:25741051 Correlation between PET/CT results and histological and immunohistochemical findings in breast carcinomas. Objective: To correlate the results of (18)F-fluoro-2-deoxy-D-glucose ((18)F-FDG) positron emission tomography/computed tomography (PET/CT) performed with a specific protocol for assessment of breasts with histological/immunohistochemical findings in breast carcinoma patients. Materials And Methods: Cross-sectional study with prospective data collection, in which patients with biopsy-confirmed breast carcinomas were studied. The patients underwent PET/CT examination in the prone position, with a specific protocol for assessment of breasts. PET/CT findings were compared with histological and immunohistochemical data. Results: The authors identified 59 malignant breast lesions in 50 patients. The maximum diameter of the lesions ranged from 6 to 80 mm (mean: 32.2 mm). Invasive ductal carcinoma was the most common histological type (n = 47; 79.7%). At PET/CT, 53 (89.8%) of the lesions demonstrated anomalous concentrations of (18)F-FDG, with maximum SUV ranging from 0.8 to 23.1 (mean: 5.5).
A statistically significant association was observed between higher values of maximum SUV and histological type, histological grade, molecular subtype, tumor diameter, mitotic index and Ki-67 expression. Conclusion: PET/CT performed with a specific protocol for assessment of the breasts demonstrated good sensitivity and was associated with relevant histological/immunohistochemical factors related to aggressiveness and prognosis of breast carcinomas. abstract_id: PUBMED:26521995 Basal (18)F-FDG PET/CT as a predictive biomarker of tumor response for neoadjuvant therapy in breast cancer. Purpose: To explore the relation between tumor kinetics assessed by (18)F-FDG PET and final neoadjuvant chemotherapy (NC) response within a molecular phenotype perspective. Material And Methods: This prospective study included 144 women with breast cancer. All patients underwent a dual-time point (18)F-FDG PET/CT prior to NC. The retention index (RI) between SUV-1 and SUV-2 was calculated. Molecular subtypes were re-grouped into low-, intermediate- and high-risk biological phenotypes. After NC, all residual primary tumor specimens were histopathologically classified into tumor regression grades (TRG) and response groups. The relation of SUV-1, SUV-2 and RI with the TRG and response groups was evaluated in all molecular subtypes and in accordance with the risk categories. Results: Responders' lesions showed significantly greater SUVmax compared to non-responders. The RI value did not show any significant relation with response. Regarding molecular phenotypes, statistical differences were observed, with greater SUV for responders having high-risk molecular subtypes. Conclusion: Glycolytic tumor characteristics showed a significant correlation with NC response and a dependence on risk phenotype. abstract_id: PUBMED:31427247 Global heterogeneity assessed with 18F-FDG PET/CT. Relation with biological variables and prognosis in locally advanced breast cancer. Aim: To analyze the relationship between measurements of global heterogeneity, obtained from 18F-FDG PET/CT, with biological variables, and their predictive and prognostic role in patients with locally advanced breast cancer (LABC). Material And Methods: 68 patients from a multicenter and prospective study, with LABC and a baseline 18F-FDG PET/CT, were included. The immunohistochemical profile [estrogen receptors (ER) and progesterone receptors (PR), expression of the HER-2 oncogene, Ki-67 proliferation index and tumor histological grade], response to neoadjuvant chemotherapy (NC), overall survival (OS) and disease-free survival (DFS) were obtained as clinical variables. Three-dimensional segmentation of the lesions, providing SUV, volumetric [metabolic tumor volume (MTV) and total lesion glycolysis (TLG)] and global heterogeneity variables [coefficient of variation (COV) and SUVmean/SUVmax ratio], as well as sphericity, was performed. The correlation of the results obtained with the immunohistochemical profile, the response to NC and survival was also analyzed. Results: Of the patients included, 62 received NC. Only 18 responded. 13 patients relapsed and 11 died during follow-up. ER-negative tumors had a lower COV (p=0.018), as did those with high Ki-67 (p=0.001) and a high-risk phenotype (p=0.033), compared to the rest. No PET variable showed an association with the response to NC or OS. There was an inverse relationship between sphericity and DFS (p=0.041), so that for every tenth that sphericity increases, the risk of recurrence decreases by 37%.
Conclusions: Breast tumors in our LABC dataset behaved as homogeneous and spherical lesions. Larger volumes were associated with a lower sphericity. Global heterogeneity variables and sphericity do not seem to have a predictive role in response to NC or in OS. More spherical tumors with less variation in gray intensity between voxels showed a lower risk of recurrence. abstract_id: PUBMED:35275444 Heterogeneity derived from 18F-FDG PET/CT predicts immunotherapy outcome for metastatic triple-negative breast cancer patients. Background: Recently, immunotherapy has been used to treat metastatic triple-negative breast cancer (mTNBC). Basic research has indicated a relation between tumor heterogeneity and the immune response. Tumor heterogeneity derived from 18F-FDG PET/CT is a potential predictor of chemotherapy results; however, few studies have focused on immunotherapy. This study aims to develop a convenient and efficient measurement of tumor heterogeneity for the prediction of immunotherapy in mTNBC patients. Methods: We enrolled mTNBC patients who received immunotherapy (PD-1/PD-L1 antibody) plus chemotherapy as first-line treatment and underwent 18F-FDG PET/CT scans before treatment. We defined novel indices representing intratumor and intertumor heterogeneity, calculated from the standard uptake value (SUV), as IATH and IETH, respectively. Optimal cutoffs were determined using time-dependent receiver operating characteristic (ROC) analysis. Results: A total of 32 patients were enrolled and analyzed in this trial. A significantly longer median PFS was observed in the low SUVmax group than in the high SUVmax group (9.4 vs. 5.8 months, HR = 0.3, 95% CI 0.1-0.9, p = 0.025). The median PFS of low-IATH patients was significantly longer than that of high-IATH patients (HR = 0.3, 95% CI 0.1-0.8, p = 0.022). Similarly, patients with low IETH had significantly longer PFS than patients with high IETH (9.4 vs. 4.9 months, HR = 0.3, 95% CI 0.1-0.7, p = 0.01). Multivariate analysis demonstrated IETH as an independent predictor of PFS. Conclusions: This study proposed a novel method to assess intratumor and intertumor heterogeneity among metastatic breast cancer patients and determined that baseline IETH derived from 18F-FDG PET/CT could represent a simple and promising predictor for first-line immunotherapy among mTNBC patients. abstract_id: PUBMED:24895506 Usefulness of Combined Metabolic-Volumetric Indices of (18)F-FDG PET/CT for the Early Prediction of Neoadjuvant Chemotherapy Outcomes in Breast Cancer. Purpose: The purpose of this study was to investigate the usefulness of metabolic-volumetric indices of (18)F-fluorodeoxy-D-glucose ((18)F-FDG) positron emission tomography/computed tomography (PET/CT) for the evaluation of neoadjuvant chemotherapy outcomes in breast cancer. Methods: Twenty-four patients with locally advanced breast cancer were enrolled in the study. They underwent a baseline (18)F-FDG PET/CT scan and received four or six cycles of neoadjuvant chemotherapy; an interim (18)F-FDG PET/CT was done after the second cycle of chemotherapy. Maximum standardized uptake value (SUVmax), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) of the primary lesions were calculated. Reduction rates of these parameters were obtained between the baseline and interim (18)F-FDG PET/CT. Chemotherapy outcomes were assessed using the tumor size reduction rate and a histological grading system (the Miller and Payne system). Reduction rates of SUVmax, MTV, and TLG were correlated with chemotherapy outcomes.
Results: MTV and TLG reduction rates showed significant correlation with tumor size reduction rate (R = 0.68, P = 0.0004; R = 0.62, P = 0.002, respectively). However, SUVmax reduction rate showed no significant correlation. MTV and TLG reduction rates were significantly higher in responders than nonresponders, as determined by the Miller and Payne system (P < 0.0007, P < 0.002). However, SUVmax reduction rate showed no significant difference. On ROC analysis, the area under the MTV and TLG curves was 0.886, and that of SUVmax was 0.743. Sensitivity, specificity, positive predictive value, and negative predictive value to predict histopathologic response were the same for MTV and TLG, and the values were 100%, 85.7%, 83.3%, and 100%, respectively (at reduction rates of 93.2% for MTV and 95.8% for TLG). Conclusion: Changes in metabolic-volumetric indices successfully reflected the neoadjuvant chemotherapy outcomes. MTV and TLG could be more robust indices than SUVmax in discriminating pathologic responders after neoadjuvant chemotherapy. Answer: Clinical, histological, or immunohistochemical primary tumor characteristics do show some correlation with different ^18F-FDG PET/CT volumetric and heterogeneity features in stage II/III breast cancer, but the correlations are limited and the discriminative power of PET-derived metrics is not high (PUBMED:26140849). For instance, T3 tumors (>5 cm) exhibited higher textural heterogeneity in ^18F-FDG uptake than T2 tumors, but there were no significant differences in SUVmax and SUVmean. Invasive ductal carcinoma showed higher SUVmax values than invasive lobular carcinoma, but other metrics like metabolically active tumor volume (MATV), total lesion glycolysis (TLG), and textural features were not discriminative. Grade 3 tumors had higher FDG uptake and exhibited slightly higher regional heterogeneity. Hormone receptor-negative tumors had higher SUV values than estrogen receptor-positive and progesterone receptor-positive tumors, while heterogeneity patterns showed only low-level variation according to hormone receptor expression. HER-2 status was not associated with any of the image features. The three main breast cancer subgroups (HER2-positive, triple-negative, and ER-positive/HER2-negative) differed in terms of SUVs and TLG but not in terms of MATV and heterogeneity metrics (PUBMED:26140849).
Instruction: Can hip abduction and external rotation discriminate sacroiliac joint pain? Abstracts: abstract_id: PUBMED:26299325 Can hip abduction and external rotation discriminate sacroiliac joint pain? Aim: The primary aim of the study is to determine if the Hip Abduction and External Rotation (HABER) test is capable of reproducing familiar pain in individuals with low back pain (LBP) of sacroiliac joint (SIJ) origin (SIJ-positive) when compared with LBP of non-SIJ origin (SIJ-negative). If so, the secondary aim is to determine the diagnostic accuracy of the HABER test against the reference standard of pain provocation tests, and to determine which increments of the HABER test have the highest sensitivity and specificity for identifying SIJ-positive individuals. Design: Single-blinded diagnostic accuracy study. Method: Participants [n(122)] between the ages of 18 and 50 years, suffering from chronic non-specific LBP (≥3 months), volunteered for the study. An experienced musculoskeletal physiotherapist evaluated and classified participants into either SIJ-positive [n(45)] or SIJ-negative [n(77)], based on the reference standard of pain provocation tests [≥3 positive tests = SIJ-positive]. Another musculoskeletal physiotherapist, blinded to clinical groups, evaluated participants for reproduction of familiar pain during each increment (10°, 20°, 30°, 40°, and 50°) of the HABER test. Results: The HABER test reproduced familiar pain in SIJ-positive individuals when compared with SIJ-negative individuals [p (0.001), R(2) (0.38), Exp(β) (5.95-10.32)], and demonstrated a moderate level of sensitivity (67%-78%) and specificity (71%-72%) for identifying SIJ-positive individuals. Receiver operator curve analysis demonstrated that HABER increments of ≥30° have the highest sensitivity (83%-100%) and specificity (52%-64%). Conclusions: The HABER test is capable of reproducing familiar pain in SIJ-positive LBP individuals and has moderate levels of sensitivity and specificity for identifying SIJ-positive LBP individuals. abstract_id: PUBMED:9589539 Unilateral hip rotation range of motion asymmetry in patients with sacroiliac joint regional pain. Study Design: A cross-sectional study was used to determine whether limited range of motion in the hip was present in 100 patients--one group with unspecified low back pain and another group with signs suggesting sacroiliac joint dysfunction. Objectives: To determine whether a characteristic pattern of range of motion in the hip is related to low back pain in patients and to determine whether such a pattern is associated with and without signs of sacroiliac joint dysfunction. Summary Of Background Data: The sacroiliac joint is often considered a potential site of low back pain. Problems with the sacroiliac joint, as well as with the low back, have often been related to reduced or asymmetric range of motion in the hip. The correlation between sacroiliac joint dysfunction and hip range of motion, however, has not been thoroughly evaluated with reliable tests in a population of patients with low back pain. Methods: Passive hip internal and external rotation goniometric measurements were taken by a blinded examiner, while a separate examiner evaluated the patient for signs of sacroiliac joint dysfunction. Patients with sacroiliac joint dysfunction were further classified as having a left or a right posteriorly tilted innominate.
Results: The patients with low back pain but without evidence of sacroiliac joint dysfunction had significantly greater external hip rotation than internal rotation bilaterally, whereas those with evidence of sacroiliac joint dysfunction had significantly more external hip rotation than internal rotation unilaterally, specifically on the side of the posterior innominate. Conclusions: Clinicians should consider evaluating for unilateral asymmetry in range of motion in the hip in patients with low back pain. The presence of such asymmetry in patients with low back pain may help identify those with sacroiliac joint dysfunction. abstract_id: PUBMED:152619 Symphyseal and sacroiliac joint pain associated with pubic symphysis instability. Fifty patients presenting with lumbosacral and inguinal pain were examined by routine clinical radiographic and electromyographic evaluations. All were without antecedent history of major pelvic trauma or spinal surgery but demonstrated evidence of pubic symphysis instability. Slip between the pubic rami in excess of 2 mm could be demonstrated in each with alternate leg weight bearing. Asymmetry of hip mobility on the symptomatic side with a reduction in abduction and external rotation was present in 20 patients. An approach to treatment of pain associated with pubic symphyseal and associated sacroiliac joint instability is described, combining both intraarticular steroids, lumbosacral supports and physical therapy modalities. abstract_id: PUBMED:38263079 In vivo interrelationships between the gluteus minimus and hip joint capsule in the hip internal rotation position with flexion. Background: The flexion adduction internal rotation (FADIR) test is performed by the combined motions of hip flexion (with knee flexion), adduction, and internal rotation, and can often reproduce anterior hip pain consistent with an individual's presenting pain. Since it has high sensitivity for intraarticular pathology diagnosis but low specificity, understanding the extraarticular pathology that can induce anterior hip pain in the FADIR test may also be essential. This study hypothesized that the interrelationships between the joint capsule and gluteus minimus differ in individuals with and without FADIR-positive pain and aimed to elucidate the in vivo interrelationships at hip internal rotation in 90°-flexion, which is also often restricted in individuals with FADIR-positive pain. Methods: Ten hips were included in the FADIR-positive group, and ten hips without hip pain in the FADIR test were included in a control group. Based on the ultrasound images at the four hip rotation conditions (20° and 10° external rotations, 0° external/internal rotation, and 10° internal rotation), orientation measurements of the gluteus minimus (muscle belly portion) and joint capsule were performed and quantitatively compared between the FADIR-positive and control groups. Additionally, 3 hips of 3 participants were randomly selected from each of the control and FADIR-positive groups for magnetic resonance imaging analysis. Results: At 0°-external/internal and 10°-internal rotation, on ultrasound images, fibers of the gluteus minimus and joint capsule in the FADIR-positive group were significantly more oriented in the same direction than those in the control group. Magnetic resonance imaging showed that the loose connective tissue between the gluteus minimus and joint capsule was prominent at 10°-internal rotation in the control group, although this was not apparent in the FADIR-positive group. 
Conclusions: At hip internal rotation in 90° flexion, the muscular belly portion of the gluteus minimus and the joint capsule were oriented in the same direction to a greater extent in the FADIR-positive group than in the control group, owing to a morphological change in the loose connective tissue between them. The pathological changes in the loose connective tissue may inhibit smooth movement of the gluteus minimus relative to the joint capsule in individuals with FADIR-positive pain. abstract_id: PUBMED:18198787 Hip and sacroiliac disease: selected disorders and their management with physical therapy. Many problems in the hip area show movement dysfunctions of the hip joint in combination with the lumbar spine, sacroiliac joint, neurodynamic structures, and the muscular systems. Muscle strain injuries pertinent to the canine hip have been reported in the iliopsoas, pectineus, gracilis, sartorius, tensor fasciae latae, rectus femoris, and semitendinosus muscles. Physical diagnoses of this type of injury require palpation skills and the ability to specifically stretch the suspected musculotendinous tissue. Treatments shall incorporate modalities, stretches, specific exercises, and advisement on return to normal activity. Canine hip dysplasia (CHD) is a common finding in many large breed dogs. Physical treatments, preventative therapies, and rehabilitation could have a large role to play in the management of nonsurgical CHD patients, with the goal of creating the best possible musculoskeletal environment for pain-free hip function and delaying or preventing the onset of degenerative joint disease. Osteoarthritic hip joints can benefit from early detection and subsequent treatment. Physical therapists have long utilized manual testing techniques and clinical reasoning to diagnose early-onset joint osteoarthritis, and therapeutic treatments consisting of correcting muscle dysfunctions, relieving pain, joint mobilizations, and advisement on lifestyle modifications could be equally beneficial to the canine patient. As well, sacroiliac joint dysfunctions may also afflict the dog. An understanding of the anatomy and biomechanics of the canine sacroiliac joint and application of clinical assessment and treatment techniques from the human field may be substantially beneficial for dogs suffering from lumbopelvic or hindlimb issues. abstract_id: PUBMED:38240723 Assessment of the sacroiliac joint with computed tomography motion analysis: a diagnostic study of 12 patients. Background And Purpose: Pain in the sacroiliac joint may be caused by abnormal joint motion. Diagnosis is mainly based on clinical tests. The aims of this study were to examine whether low-dose computed tomography with provocation of the hip could detect sacroiliac joint motion, and to study whether provocation of the hip results in greater sacroiliac joint motion in the ipsilateral than in the contralateral sacroiliac joint. Patients And Methods: 12 patients with sacroiliac joint pain were examined with low-dose computed tomography scans of the sacroiliac joint, one with the hips in the neutral position, and one each in provocation with the left or the right hip in a figure-of-4 position. Accuracy was tested by comparing internal rotation of the sacrum with internal rotation in the sacroiliac joint. Motion in the sacroiliac joint was assessed by comparing the position of each of the ilia with the reference, the sacrum. Data are shown as means with 95% confidence intervals (CI).
Results: We observed greater motion in the sacroiliac joint than internally in the sacrum, i.e., 0.57° (CI 0.43-0.71) vs. 0.20° (CI 0.11-0.28). The motion of the geometric center of the moving object for the sacroiliac joint was larger on the provoked side; mean difference 0.17 mm (CI 0.01-0.33), P = 0.04. Corresponding figures for rotation were mean difference 0.19° (CI 0.10-0.28), P < 0.001. Compared with the sacrum, the largest motion was seen at the anterior superior iliac spine; mean difference 0.38 mm (CI 0.10-0.66), P = 0.001. Conclusion: Provocation in the figure-of-4 position of the hip results in sacroiliac joint motion measurable with computed tomography motion analysis. Provocation of the hip induces larger motion on the ipsilateral than on the contralateral sacroiliac joint. abstract_id: PUBMED:35763889 Do femoral version abnormalities play a role in hip function of patients with hip pain? Background: A high prevalence of femoral version abnormalities has been observed in hip pain patients, with an impact on hip range of motion and muscle strength that should be elucidated. Methods: Cross-sectional study. Thirty-one patients with hip pain (16 men and 15 women) underwent biplanar X-rays to quantify femoral version using three-dimensional measurements. The 62 hips were divided into normal version (10-20°, n = 18), anteverted (>20°, n = 19), and retroverted (<10°, n = 25). Joint range of motion for flexion, internal rotation, and external rotation was assessed through digital goniometry. Maximal isometric hip strength (flexion, extension, internal rotation, external rotation at 0° and 30°, abduction, adduction) was evaluated through hand-held dynamometry. The hip rotation index was calculated as external rotation minus internal rotation. Findings: Anteverted hips had greater internal rotation, while retroverted hips had greater external rotation (p = 0.001). Anteverted hips were weaker than retroverted hips for external rotation at 30° (p < 0.001), abduction (p = 0.006) and adduction (p < 0.001), and weaker than normal version hips for extension (p = 0.018). All three groups had different rotation indices: retroverted > normal > anteverted (p < 0.001). Ordinal logistic regression found that higher rotation index values were associated with a higher probability of being retroverted (common odds ratio = 1.20). There was a strong correlation between femoral version group and rotation index (rS = 0.76, p < 0.001). There was a probability >70% of a hip being anteverted if the rotation index was <11°, and of being retroverted if the index was >40°. Interpretation: Range of motion and muscle strength differed in hips with different femoral versions. The hip rotation index was a strong femoral version predictor.
Methods And Measures: Hip abduction and external rotation isometric strength measurements were recorded for the injured side of 15 female subjects with patellofemoral joint pain (mean ± SD age, 15.7 ± 2.7 years; age range, 12-21 years). These were compared with strength measurements from the corresponding hip of 15 age-matched female control subjects (mean ± SD age, 15.7 ± 2.7 years; age range, 12-21 years). All strength measurements were made using hand-held dynamometers. Results: Subjects with patellofemoral pain demonstrated 26% less hip abduction strength (P<.001) and 36% less hip external rotation strength (P<.001) than similar age-matched controls. Conclusions: The results indicate that young women with patellofemoral pain are more likely to demonstrate weakness in hip abduction as well as external rotation than age-matched women who are not symptomatic. abstract_id: PUBMED:28298064 Radiographic Hip Anatomy Correlates With Range of Motion and Symptoms in National Hockey League Players. Background: Hip disorders in athletes have been increasingly recognized. Purpose: To characterize radiographic hip anatomy for National Hockey League (NHL) players and correlate it with hip range of motion and hip symptoms and/or surgery. Study Design: Cross-sectional study; Level of evidence, 3. Methods: Fifty-nine professional hockey players (118 hips) with 1 NHL organization (mean age, 24.2 years; range, 18-36) prospectively underwent history and physical examination by 2 independent orthopaedic surgeons. Current or previous groin and/or hip pain or surgery was noted. Anteroposterior (AP) pelvis and bilateral Dunn lateral radiographs were obtained for all players, with assessment of hip morphology by 2 blinded independent orthopaedic surgeons. Results: Good to very good reliability of radiographic assessments was noted (intraclass correlation coefficients = 0.749-0.958). Sixty-four percent of athletes had a positive crossover sign, while 86% and 60% had a positive posterior wall sign and a prominent ischial spine sign, respectively. Twenty-one percent of hips demonstrated dysplastic acetabular features (lateral center edge angle <25°). Eighty-five percent and 89% of hips demonstrated cam-type morphology based on alpha angle (>50° Dunn lateral) and head-neck offset, respectively. Good to very good reliability was noted for ROM assessments (intraclass correlation coefficient >0.69). Mean hip flexion was 107.4° ± 6.7°, and mean hip internal rotation was 26.1° ± 6.6°. Thirty-one percent of hips had a history of hip-related pain and/or surgery. Higher AP, Dunn lateral, and maximal alpha angles correlated with decreased hip internal rotation (P = .004). Greater AP alpha angle correlated with decreased hip extension/abduction (P = .025), and greater Dunn lateral and maximal alpha angle correlated with decreased hip flexion/abduction (P = .001). A positive posterior wall sign correlated with increased straight hip abduction, while other radiographic acetabular parameters were not predictive of range of motion. Only decreased hip external rotation and total arc of motion correlated with an increased risk for current or prior hip symptoms or surgery (P < .001). Conclusion: Hip anatomy in NHL players is characterized by highly prevalent cam-type morphology (>85%) and acetabular retroversion (>60%). In addition, acetabular dysplasia (21%) was relatively common.
Greater cam-type morphology correlated with decreased hip range of motion, and a positive crossover sign correlated with increased hip abduction. Decreased hip external rotation and total arc of motion were predictive of hip-related pain and/or surgery. abstract_id: PUBMED:16253048 Patellofemoral pain and asymmetrical hip rotation. Background And Purpose: Patellofemoral joint problems are the most common overuse injury of the lower extremity, and altered femoral or hip rotation may play a role in patellofemoral pain. The purpose of this case report is to describe the evaluation of and intervention for a patient with asymmetrical hip rotation and patellofemoral pain. Case Description: The patient was a 15-year-old girl with an 8-month history of anterior right knee pain, without known trauma or injury. Prior to intervention, her score on the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) was 24%. Right hip medial (internal) rotation was less than left hip medial rotation, and manual muscle testing showed weakness of the right hip internal rotator and abductor muscles. The intervention was aimed at increasing right hip medial rotation, improving right hip muscle strength (eg, the muscle force exerted by a muscle or a group of muscles to overcome a resistance), and eliminating anterior right knee pain. Outcomes: After 6 visits (14 days), passive left and right hip medial rotations were symmetrical, and her right hip internal rotator and abductor muscle grades were Good plus. Her WOMAC score was 0%. Discussion: The patient had right patellofemoral pain and an uncommon pattern of asymmetrical hip rotation, with diminished hip medial rotation and excessive hip lateral (external) rotation on the right side. The patient's outcomes suggest that femoral or hip joint asymmetry may be related to patellofemoral joint pain. Answer: Yes, hip abduction and external rotation can discriminate sacroiliac joint pain. The Hip Abduction and External Rotation (HABER) test has been shown to be capable of reproducing familiar pain in individuals with low back pain (LBP) of sacroiliac joint (SIJ) origin, as compared to those with LBP of non-SIJ origin. The HABER test demonstrated moderate levels of sensitivity (67%-78%) and specificity (71%-72%) for identifying SIJ-positive individuals, with the increments of ≥30° showing the highest sensitivity (83%-100%) and specificity (52%-64%) (PUBMED:26299325). Additionally, a study found that patients with sacroiliac joint dysfunction had significantly more external hip rotation than internal rotation unilaterally, specifically on the side of the posterior innominate, suggesting that clinicians should evaluate for unilateral asymmetry in hip range of motion in patients with low back pain, as this may help identify those with sacroiliac joint dysfunction (PUBMED:9589539). Furthermore, asymmetry of hip mobility on the symptomatic side with a reduction in abduction and external rotation was present in patients with pubic symphysis instability, which was associated with sacroiliac joint pain (PUBMED:152619). These findings indicate that hip abduction and external rotation measurements and tests can be useful in differentiating sacroiliac joint pain from other causes of low back pain.
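As a side note on how the diagnostic accuracy figures cited in the answer above are obtained: sensitivity and specificity are simple ratios taken from a 2x2 table of test results against the reference diagnosis. The short Python sketch below illustrates the calculation with invented counts (not data from the cited studies); the numbers are chosen only so the output lands in the ranges quoted for the HABER test.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 diagnostic table.

    tp: test positive, condition present    fn: test negative, condition present
    fp: test positive, condition absent     tn: test negative, condition absent
    """
    sensitivity = tp / (tp + fn)  # share of true cases the test detects
    specificity = tn / (tn + fp)  # share of non-cases the test correctly clears
    return sensitivity, specificity

# Hypothetical counts (illustration only): 28 SIJ-positive and 25 SIJ-negative patients.
sens, spec = diagnostic_accuracy(tp=21, fp=7, fn=7, tn=18)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 75% and 72%
```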
Instruction: Is anybody talking to physicians about acquired immunodeficiency syndrome and sex? Abstracts: abstract_id: PUBMED:8252135 Is anybody talking to physicians about acquired immunodeficiency syndrome and sex? A national survey of patients. Objective: We wanted to know what proportion of the US population had spoken with a physician about sex and the acquired immunodeficiency syndrome (AIDS). In particular, we wanted to know whether patients who were at risk for sexually transmitted diseases, including human immunodeficiency virus (HIV), had had such discussions. Design: A telephone survey of a US nationwide random probability sample of adults was conducted in the summer of 1991. Patients: The survey was completed by 1350 adults; of these, 1312 were patients, defined as those who reported that they had been to a physician within the last 5 years. Main Outcome Measure: We assessed whether patients reported having had discussions about sex and AIDS with physicians. Results: Only 259 (20%) of patients in our survey reported that they had talked with a physician about AIDS. Fifty-five (21%) of those who had talked with a physician about AIDS reported that the physician started the discussion. Few patients reported that they had spoken with a physician even when it appeared vital that they do so: 46 (26%) of those who reported that their chances of getting the AIDS virus were "high" or "medium" and 25 (23%) of those who reported being at behavioral risk for AIDS had spoken to a physician about AIDS. Conclusions: Few patients reported having discussed sex and AIDS with a physician, even if the patients considered themselves to be at risk for contracting HIV or another sexually transmitted disease. More of these discussions must take place to prevent the spread of sexually transmitted diseases and AIDS and to facilitate testing and early treatment of HIV-infected individuals. abstract_id: PUBMED:1571882 The Talking Sex Project: descriptions of the study population and correlates of sexual practices at baseline. The Talking Sex Project is an HIV risk reduction education project for gay and bisexual men. It was developed to evaluate two types of small discussion groups using a randomized control study design. This paper reports on the baseline measures of knowledge, attitudes and sexual practices. The 612 subjects had a mean age of 32 years, were all English speaking and were relatively highly educated. Bivariate analyses found that younger men, those in a relationship, those reporting bisexual activity and those with higher levels of substance use were significantly more likely to have unprotected anal intercourse. While overall knowledge was not associated with sexual behaviour, a significant relationship was found with knowledge of risk of anal sex. Attitudes, in particular, the belief in the efficacy of condoms and the enjoyment of safer sex were found to have the strongest associations with protected anal intercourse. The paper provides an understanding of some of the factors which may be important to behaviour change and clues for the targeting of programs. abstract_id: PUBMED:35579404 Awareness and perception of accuracy of the Undetectable=Untransmittable message (U=U) in Italy: results from a survey among PLWHA, infectious-diseases physicians and people having unprotected sex. 
Evidence on the absence of risk of sexual transmission of HIV by persons living with HIV/AIDS (PLWHA) with undetectable plasma HIV-RNA (HIV-RNA <200 copies/ml) led to the worldwide campaign "U = U" (undetectable = untransmittable). The purpose of this study was to evaluate the perceived accuracy of this message among PLWHA, HIV-negative people having unprotected sex (PHUS) and infectious diseases (ID) physicians in Italy. A nationwide survey was conducted using three different anonymous questionnaires (for ID physicians, PLWHA and PHUS). A total of 1121 participants completed the questionnaires: 397 PLWHA; 90 physicians; 634 PHUS. Awareness of the U = U message was reported in 74%, 92% and 47% of PLWHA, ID physicians and PHUS, respectively. The perception of accuracy of the U = U message among those aware was reported as high in 80.4%, 79.5% and 67.3% of PLWHA, ID physicians and PHUS, respectively. Physicians perceived that 11% of PLWHA have a high rate of perception of U = U, whereas among PLWHA, only 34% reported definitive positive messages from physicians. Discrepancies between awareness and perception of accuracy of the U = U message in PLWHA and physicians were found, suggesting that confidence in the message within the community remains low. abstract_id: PUBMED:2251525 AIDS, minority patients, and doctors: what's the risk? Who's talking? We asked 39 physicians providing primary care for a mostly minority patient population to respond to a questionnaire concerning their attitudes and behavior toward AIDS risk assessment and preventive counseling and to indicate their beliefs concerning patients' knowledge and behavior. Most of the 36 physicians who responded (92%) agreed that physicians must educate their patients about AIDS. They also reported that patients who engage in risk-taking behavior may not know much about AIDS transmission and prevention. Despite these beliefs, these doctors reported that they gave advice to only 11% of their male patients and 14% of their female patients. More than one third of physicians reported feeling uncomfortable talking about patients' sexual preferences and practices. To identify patients at risk and to help prevent AIDS, methods must be found to make physicians more comfortable discussing sexual issues with their patients, especially their minority patients. abstract_id: PUBMED:2356491 Why doctors have difficulty with sex histories. Studies have shown that physicians' performance has not been as good as it should be in detecting sexually transmitted diseases (STDs) and in counseling patients about their transmission. The AIDS pandemic has underscored the need to find out why this is true. In our study, we identified the major reasons physicians believe other doctors fail to take adequate sex histories. Scales were then developed to measure the three principal reasons given by these physicians: embarrassment, belief that the sex history is not relevant to the patient's chief complaint, and belief by the physicians that they are not adequately trained. When 350 senior medical students were surveyed, 93% thought that knowledge of a patient's sexual practices is an important part of their patient's medical history, but 50% felt poorly trained to take this history and 25% felt embarrassed to ask the necessary questions. To learn why some students score well on these three dimensions and others do not, a limited number of personal attributes were measured and correlated with the scores on these three measures. 
Shyness and social anxiety as a personal trait predicted which student was most likely to experience embarrassment in taking a sex history. A nonsympathetic view of patients' psychosocial problems was the variable most closely related to the belief that the sex history was of little importance in understanding a patient's problem. Students who believed this most strongly were the same ones who were most homophobic, authoritarian, and had the greatest fear of AIDS infection. The sense of not feeling adequately trained to take a sex history related most strongly to low self-esteem. How these barriers to STD risk assessment might be overcome is discussed. abstract_id: PUBMED:17701337 Start Talking About Risks: development of a Motivational Interviewing-based safer sex program for people living with HIV. The epidemiology of HIV infection in the US in general, and in the southeast, in particular, has shifted dramatically over the past two decades, increasingly affecting women and minorities. The site for our intervention was an infectious diseases clinic based at a university hospital serving over 1,300 HIV-infected patients in North Carolina. Our patient population is diverse and reflects the trends seen more broadly in the epidemic in the southeast and in North Carolina. Practicing safer sex is a complex behavior with multiple determinants that vary by individual and social context. A comprehensive intervention that is client-centered and can be tailored to each individual's circumstances is more likely to be effective at reducing risky behaviors among clients such as ours than are more confrontational or standardized prevention messages. One potential approach to improving safer sex practices among people living with HIV/AIDS (PLWHA) is Motivational Interviewing (MI), a non-judgmental, client-centered but directive counseling style. Below, we describe: (1) the development of the Start Talking About Risks (STAR) MI-based safer sex counseling program for PLWHA at our clinic site; (2) the intervention itself; and (3) lessons learned from implementing the intervention. abstract_id: PUBMED:25170453 Talking About Sex in Malawi: Toward a Better Understanding of Interpersonal Communication for HIV Prevention. Abstract: The generalised AIDS epidemic in Malawi presents many challenges. As communication and advice from parents, peers, and partners are important factors in influencing sexual behaviour, understanding communication may provide insights into behaviour change programming. This mixed-method study used a household survey (n=1812) and 15 focus group discussions from the southern districts of Malawi to explore communication about sex and sexuality. Quantitative study findings point to the idea that self-efficacy, perceived benefits, and injunctive norms about talking about condom use are important factors influencing intentions to discuss condom use with partners. Qualitative study findings found that communication regarding sex between parents and children, partners, and peers was not common, and when there was communication, messages about sex focused on negative consequences of sexual activity. In Malawi, there is a need to increase efficacy in talking about sex and protective sexual behaviours, including condom use. Interventions should include components to increase communication skills, shift norms about sexual communication, and provide alternative mechanisms for individuals to gather pertinent information regarding their sexual behaviour. 
abstract_id: PUBMED:12348237 HIV and safe, healthy sex. If a woman wants to become pregnant, how can she reduce her risk of HIV and other sexually transmitted infections? When a young man is growing up and worried about his future, how can you expect him to be concerned about HIV? Many HIV prevention projects focus on HIV and other sexually transmitted infections without considering people's broader reproductive and sexual health concerns. HIV prevention depends on people being able to make choices about their sexual behavior. This means understanding how their bodies work, knowing what choices are available to them, and having the confidence and skills to discuss and make changes in their sexual and reproductive lives. HIV educators, family planning workers, youth counselors, and others need to be able to respond to a range of questions and concerns in a sensitive and supportive way. This special, double issue of AIDS Action provides basic facts about the reproductive system, fertility, sexually transmitted infections and contraception, and looks at the links between HIV, sex, and reproduction. Talking about sex can be difficult. Sex is a private matter and people often feel embarrassed talking about it. This issue also contains tips for communication and activities to find out what people know already and help them learn. You may find that not everything in this issue is necessary for the people you are working with. This is a 'pick and mix' issue for you to pick out what is useful, adapting it if you wish. abstract_id: PUBMED:32104950 Exploration of clients living with HIV needs for reporting on experiences with sex. Talking about sex with people living with HIV receives insufficient attention in health care. A cross-sectional and exploratory study describes the preferences of people living with HIV to talk about sex with specialized HIV physicians and nurses in a clinic in Barcelona (Spain). A 27-item self-administered questionnaire was used between June 2017 and May 2018. One hundred fourteen people agreed to participate. Data were analyzed using multivariate logistic regression. Most of the participants reported "never or almost never" having been asked about sex practices in visits with the HIV specialist physician (n = 65 [57.0%]) or nurse (n = 74 [64.9%]). Older participants stated that neither physicians nor nurses talked about sex during visits. Women felt that physicians hardly ever asked about their sexual practices. Men who had sex with other men indicated that their physicians always asked about their sex practices compared with heterosexuals living with HIV. Health professionals should reformulate how to talk about sex with people living with HIV to facilitate communication and provide adequate care. abstract_id: PUBMED:15673481 Stigma, discrimination, and HIV/AIDS knowledge among physicians in Barbados. Objective: To determine the extent of clinical knowledge of HIV/AIDS that physicians in Barbados have and their attitudes towards persons living with HIV/AIDS. Methods: In 2000 the Barbados Association of Medical Practitioners conducted a survey in order to assess its members' views on HIV/AIDS issues. Over a two-month period 203 physicians (76% of all those practicing in the country) were interviewed. The survey included physicians working in private practice and the public sector. 
They were surveyed individually concerning their attitudes towards counseling as well as their clinical knowledge, perception of safe practices, fear of occupational exposure, views on ethical issues, experience treating HIV/AIDS patients, and background with HIV/AIDS continuing education. Results: In comparison to physicians who had graduated in later years, physicians who had graduated in 1984 or earlier had seen fewer HIV/AIDS clients, had lower levels of knowledge about the disease, were more likely to test for HIV/AIDS without informed consent, and were less likely to have ever attended a continuing education training course on HIV/AIDS. Overall, knowledge of the clinical indications of HIV/AIDS was low, and 76% of the physicians did not think they had adequate counseling skills. Over 80% of the physicians were comfortable looking after HIV/AIDS patients. While 95% of the physicians would not release HIV test results without a patient's consent, 33% would test, without consent, a seriously ill patient, and 15% would test without consent a patient upon whom they had to perform an invasive procedure if they perceived the patient to be from a high-risk population such as gay men or commercial sex workers. Only 53% of the physicians had attended an HIV/AIDS in-service training program between 1995 and 1999. Conclusions: Physician training in Barbados should focus on all aspects of HIV/AIDS care, including clinical and emotional factors. Attendance at such training should be mandatory for public sector physicians, and medical school curricula need to be examined to ensure their HIV/AIDS content is current and comprehensive. Answer: Based on the provided abstracts, it appears that communication between physicians and patients regarding acquired immunodeficiency syndrome (AIDS) and sex is not as frequent or effective as it could be. A national survey in the US found that only 20% of patients had talked with a physician about AIDS, and even among those who considered themselves at high or medium risk, the discussion rates were only 26% and 23%, respectively (PUBMED:8252135). This suggests that more discussions must take place to prevent the spread of sexually transmitted diseases and facilitate testing and early treatment of HIV-infected individuals. Physicians themselves have reported difficulties in discussing sexual practices with patients. More than one-third of physicians felt uncomfortable talking about patients' sexual preferences and practices, and they reported giving advice to only a small percentage of their patients (PUBMED:2251525). The reasons for this discomfort include embarrassment, belief that the sex history is not relevant, and feeling inadequately trained (PUBMED:2356491). Efforts to improve communication have been made, such as the development of the Start Talking About Risks (STAR) program, which is a Motivational Interviewing-based safer sex counseling program for people living with HIV/AIDS (PUBMED:17701337). Additionally, in Malawi, there is a need to increase efficacy in talking about sex and protective sexual behaviors, including condom use, and interventions should include components to increase communication skills and shift norms about sexual communication (PUBMED:25170453). 
However, discrepancies still exist between awareness and perception of accuracy of messages such as "Undetectable=Untransmittable" (U=U) among patients living with HIV/AIDS, physicians, and people having unprotected sex, suggesting low confidence in the community regarding the message itself (PUBMED:35579404). In summary, while there are efforts to improve communication about AIDS and sex between physicians and patients, there is still a significant gap that needs to be addressed to enhance prevention and treatment efforts.
Instruction: Policy on global warming: fiddling while the globe burns? Abstracts: abstract_id: PUBMED:38097442 Burn related globe and adnexal trauma at trauma centres. Introduction: Prompt management of burn-related globe trauma can prevent long-term complications. Delays in diagnosis may occur when globe trauma is associated with life-threatening injuries. We aimed to improve the understanding of the epidemiology, acute assessment and management of burns-related globe and adnexal trauma admitted to two trauma centres in Sydney, Australia. Method: Admitted patients with burns-related globe and/or adnexal trauma were retrospectively reviewed at Royal North Shore Hospital (RNSH) and Royal Prince Alfred Hospital (RPAH) between January 2015 and December 2019. International Classification of Diseases, Tenth Revision codes were used to search for and identify patients. Medical records were reviewed to extract data on demographics, injuries, ocular examination and ophthalmology involvement. Results: Over the 5 years, 101 patients with globe and/or adnexal burns-related trauma were admitted to RNSH or RPAH. Median age was 37 years. Most patients were male (76%) and were injured while at home or work (74%). Patients with chemical exposure were more likely to have globe trauma (100% vs 72%, p < 0.001) and severe globe trauma (54% vs 32%, p = 0.028). On initial review by emergency staff, 14 patients were not referred to ophthalmology; in 2 of these patients the diagnosis was delayed. Conclusion: Globe trauma is common in patients with chemical exposure. Thorough ocular assessment within the acute setting is vital to diagnose globe trauma. We investigated hospitals with specialised burn staff; further research is required to understand the management of globe trauma in hospitals without such resources. Synopsis: Chemicals in household products can cause severe globe trauma. Globe trauma can occur alongside large burns, leading to delay in its diagnosis and management. Ophthalmology can assist in the early diagnosis and management of globe trauma. abstract_id: PUBMED:37563048 Globe and adnexal trauma at Australian trauma centres. Introduction: In multisystem trauma, the assessment and management of globe and adnexal trauma is often complex. Ophthalmology input may assist in managing such patients. To understand the role of ophthalmology in tertiary trauma centres, we report on the management of globe and adnexal trauma at two tertiary trauma centres in Sydney, Australia. Method: A retrospective case series was completed at Royal North Shore Hospital (RNSH) and Royal Prince Alfred Hospital (RPAH) on patients admitted between January 2015 and December 2019. International Classification of Diseases, Tenth Revision codes, diagnostic and procedural coding data were used to identify patients admitted with globe and/or adnexal trauma. Data extracted from medical records included demographics, mechanism of injury, ocular examination and specialist ophthalmic referral. Results: Over 5 years, 773 patients (average age 53.2 years, 62% male) were admitted to RNSH and RPAH with globe and/or adnexal trauma. Most patients (83%) first presented to RNSH or RPAH. The most common mechanism of injury was falls (45%) followed by burns (13%). Two hundred and thirty-five patients had multisystem trauma; of these, 121 (51%) suffered globe trauma, with 49 (21%) classified as severe. Three patients were not diagnosed initially due to delayed ophthalmology referral. 
Conclusion: Falls followed by burns were common causes of globe and adnexal trauma in Sydney, Australia. The presence of orbital/mid-facial injury may indicate a patient has globe trauma. In multisystem trauma, globe trauma may be diagnosed late or not identified. Ophthalmology review has an important role in diagnosing and managing globe trauma in multisystem trauma. abstract_id: PUBMED:28363304 International Disease Burden of Hand Burns: Perspective from the Global Health Arena. Measuring the extent and impact of a health problem is key to being able to address it appropriately. This review uses available information within the framework of the Global Burden of Disease studies to estimate the disease burden due to burn injuries of the hands. The GBD indicates that since 1990 there has been an approximately 30% decrease in the disease burden related to burn injuries. The GBD methods have not been applied specifically to hand burns, but from available data, it is estimated that about 18 million people in the world suffer from sequelae of burns to the hands. abstract_id: PUBMED:38102041 Global burden of burns and its association with socio-economic development status, 1990-2019. Background: Burns represent an important global health problem. However, many studies are limited by the difficulties in estimating the burden of burns and instead focus on the causes of burns, such as fire, heat, and hot substances. Therefore, a complete assessment of the burden of all injuries leading to burns is essential to developing reasonable global intervention strategies. Methods: Data on three classes of burns, namely "<20% total burned surface area without lower airway burns" (Moderate injury), "≥20% total burned surface area, or ≥10% burned surface area if the head/neck or hands/wrist are involved, without lower airway burns" (Major injury), and "Lower airway burns" (Inhalation injury), were collected from the Global Burden of Disease 2019 database. Age-standardized incidence rates (ASR-I) and Years Lived with Disability rates (ASR-YLDs) for burns were standardized by removing the influence of population size and age structure. They were extracted and stratified by cause, year, sex, age, socio-demographic index, country, and territory. Results: In terms of ASR-I and ASR-YLDs, burns showed a significant decrease from 1990 to 2019, especially for moderate and major injury. In 2019, the burden of moderate injury was positively correlated with socio-demographic index while major injury was negatively correlated (P < 0.05). We found no correlation between socio-demographic index and the burden for inhalation injury (P > 0.05). Fire, heat, and hot substances were the most important cause of burns except for inhalation injury. The most common association with inhalation injury was falls, which were also a major cause of moderate and major injury. Conclusions: The Global Burden of Disease 2019 database data can be used to guide the allocation of resources to reduce ASR-I and ASR-YLDs of different burn classes. abstract_id: PUBMED:34916089 Global burden of thermal burns, 1990-2017: Unbalanced distributions and temporal trends assessed from the Global Burden of Disease Study 2017. Background: Many studies demonstrate that being burned has both physical and psychological sequelae that affect quality of life. Further, these effects may be more prevalent in some regions and populations. We sought to assess the unbalanced distributions and temporal trends concerning the health burden of thermal burns. 
Methods: Data were collected from the Global Burden of Disease Study 2017, and the disability-adjusted life year (DALY) was used as a measure of health burden. Linear regression was used to evaluate the relationship between the age-standardized DALY rate and socio-demographic index. Joinpoint regression analysis and comparison line charts were all applied to assess the temporal trends of burns. Results: The age-standardized DALY rate of global thermal burns decreased by 43.7%, from 197 (95% CI: 152-228) per 100,000 in 1990 to 111 (95% CI: 93-129) per 100,000 in 2017. The burden was borne mainly by children 1-4 years of age and people over 80 years. Socio-demographic index was negatively correlated with the age-standardized DALY rate. In low-middle and low socio-demographic index regions, the decreasing trends were slower than other regions with an average annual percentage change of -2.1% (95% CI: -2.2 to -2.0) and -2.1% (95% CI: -2.1 to -2.0), respectively. Among six geographical regions, Africa presented the highest age-standardized DALY rates of 352 (95% CI: 275-410) per 100,000 in 1990 and 208 (95% CI: 175-236) per 100,000 in 2017, and also the slowest average decreasing trend, with an average annual percentage change of -1.9% (95% CI: -2 to -1.8). Conclusions: The global burden of thermal burns shows a downward trend from 1990 to 2017, and regions with lower socio-demographic index and Africa show greater burdens and smaller downward trends. abstract_id: PUBMED:21321644 Research in burns - Present and future. There have been tremendous advances in burns care over the past 50 years. Much of this, but not all, can be attributed to basic science and clinically related research. In the best centres in the world, centres that are fully funded and richly resourced, best practice guidelines result in impressive outcomes not only in terms of survival but also in terms of quality of survival. Indeed, the remaining clinical challenges in these centres are the elderly, inhalational burns, and very extensive burns. There are, however, other challenges when looking at burns care in a global context, in particular the provision of even minimal standards of acceptable care for burns patients in many parts of the world. Whilst the justification for research funding in the wealthy countries becomes increasingly esoteric, for example looking at the immunology of face transplantation, the global health challenges of burns care still remain. Perhaps the greatest research challenge in burns care in the 21st century lies not in furthering our understanding of the phenomenon we observe but in the global application of the knowledge we already possess. abstract_id: PUBMED:20423936 Global health and foreign policy. Health has long been intertwined with the foreign policies of states. In recent years, however, global health issues have risen to the highest levels of international politics and have become accepted as legitimate issues in foreign policy. This elevated political priority is in many ways a welcome development for proponents of global health, and it has resulted in increased funding for and attention to select global health issues. However, there has been less examination of the tensions that characterize the relationship between global health and foreign policy and of the potential effects of linking global health efforts with the foreign-policy interests of states. 
In this paper, the authors review the relationship between global health and foreign policy by examining the roles of health across 4 major components of foreign policy: aid, trade, diplomacy, and national security. For each of these aspects of foreign policy, the authors review current and historical issues and discuss how foreign-policy interests have aided or impeded global health efforts. The increasing relevance of global health to foreign policy holds both opportunities and dangers for global efforts to improve health. abstract_id: PUBMED:21991061 The Policy of our Clinic in Early Escharectomy in Burns of the Hand. Early escharectomy in deep partial- and full-thickness burns of the hand has proved to have an advantage over late surgical treatment. It provides an improved functional and cosmetic result. Hand deformities, the need for secondary reconstructive procedures, and morbidity are significantly reduced. In our clinic we prefer to use early escharectomy in all full-thickness burns of the hand unless the patient's condition prevents it or unless the extent of the burns in the rest of the body would prevent improvement in the general outcome. abstract_id: PUBMED:19689589 Policy on global warming: fiddling while the globe burns? Objective: To assess the extent that the health consequences of global warming and the responses to it take due account of its impact on poverty and inequality. Method: Reviewing the relevant literature on global warming, proposed solutions and the impact. Results: To date, too little attention has been paid to the health consequences arising from the increased poverty and inequality that global warming will bring. When these are combined with issues arising from the economic melt-down, food shortages, peak oil, etc. we are heading for a global public health crisis of immeasurable magnitude. Conclusion: Solutions lie in rethinking the global economic system that we have relied upon over the past several decades and the global institutions that have led and fed off that global system - the IMF, the World Bank and so on. Implications: Public health practitioners need to look and act globally more often. They need to better recognise the links between global warming and the global financial crisis. How the latter is dealt with will determine whether the former can be resolved. It is in this global political economy arena that future action in public health lies. abstract_id: PUBMED:37579444 From Local Action to Global Policy: A Comparative Policy Content Analysis of National Policies to Address Musculoskeletal Health to Inform Global Policy Development. Background: Global policy to guide action on musculoskeletal (MSK) health is in a nascent phase. Lagging behind other non-communicable diseases (NCDs) there is currently little global policy to assist governments to develop national approaches to MSK health. Considering the importance of comparison and learning for global policy development, we aimed to perform a comparative analysis of national MSK policies to identify areas of innovation and draw common themes and principles that could guide MSK health policy. Methods: Multi-modal search strategy incorporating a systematic online search targeted at the 30 most populated nations; a call to networked experts; a specified question in a related eDelphi questionnaire; and snowballing methods. Extracted data were organised using an a priori framework adapted from the World Health Organization (WHO) Building Blocks and further inductive coding. 
Subsequently, texts were open coded and thematically analysed to derive specific sub-themes and principles underlying texts within each theme, serving as abstracted, transferable concepts for future global policy. Results: The search yielded 165 documents with 41 retained after removal of duplicates and exclusions. Only three documents were comprehensive national strategies addressing MSK health. The most common conditions addressed in the documents were pain (non-cancer), low back pain, occupational health, inflammatory conditions, and osteoarthritis. Across eight categories, we derived 47 sub-themes with transferable principles that could guide global policy for: service delivery; workforce; medicines and technologies; financing; data and information systems; leadership and governance; citizens, consumers and communities; and research and innovation. Conclusion: There are few examples of national strategic policy to address MSK health; however, many countries are moving towards this by documenting the burden of disease and developing policies for MSK services. This review found a breadth of principles that can add to this existing work and may be adopted to develop comprehensive system-wide MSK health approaches at national and global levels. Answer: The abstract "Policy on global warming: fiddling while the globe burns?" (PUBMED:19689589) addresses the health consequences of global warming and its impact on poverty and inequality. The review suggests that there has been insufficient attention to the health consequences arising from the increased poverty and inequality that global warming will bring. The authors argue that the combination of global warming with other global crises such as economic meltdown, food shortages, and peak oil could lead to a public health crisis of immeasurable magnitude. The solutions proposed involve rethinking the global economic system and the global institutions that have supported it, such as the IMF and the World Bank. The abstract concludes by emphasizing the need for public health practitioners to act globally and recognize the links between global warming and the global financial crisis, suggesting that resolving the latter will determine the success in addressing the former. The implications point towards the importance of global political economy in future public health actions.
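A note on the trend statistics used throughout the burden-of-disease abstracts above: the "average annual percentage change" of an age-standardized rate is conventionally obtained from a log-linear fit of the rate against calendar year. The Python sketch below illustrates that calculation on invented rates shaped like the 1990-2017 decline described for thermal burns; it is not the GBD or joinpoint methodology itself, and the input values are placeholders.

```python
import numpy as np

# Invented age-standardized DALY rates per 100,000 (illustrative only), roughly
# shaped like the decline from 197 in 1990 to 111 in 2017 reported above.
years = np.arange(1990, 2018)
rates = 197.0 * np.exp(-0.021 * (years - 1990))

# Log-linear trend: ln(rate) = a + b * year.  The annual percent change is then
# (exp(b) - 1) * 100, the usual convention for APC-style trend summaries.
slope, intercept = np.polyfit(years, np.log(rates), 1)
apc = (np.exp(slope) - 1.0) * 100.0
print(f"average annual percent change ≈ {apc:.1f}% per year")  # about -2.1%
```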
Instruction: Up to 16 years follow-up of aortic valve reconstruction with pericardium: a stentless readily available cheap valve? Abstracts: abstract_id: PUBMED:16039963 Up to 16 years follow-up of aortic valve reconstruction with pericardium: a stentless readily available cheap valve? Objective: There is a lack of information regarding the long-term behavior of aortic valve reconstruction with pericardium (AoR). A 16-year follow-up is reported here. Methods: Between 1988 and 1995, 92 consecutive patients had AoR with bovine (Group I, n=27) or glutaraldehyde-treated autologous pericardium (Group II, n=65). The mean age was 30 years (range 12-68). There were 65% males, 92% in sinus rhythm, 84% had rheumatic etiology and 36% had 'other valve' surgery. Mitral valve replacement with a mechanical prosthesis is a contraindication to the operation. Results: Hospital mortality was 2%. The reconstructed aortic valve performed well with excellent hemodynamics. The mean follow-up interval was 10.5+/-4 years, range 9-16 years (longer for group I, 12 versus 10 years) with 4% late deaths and seven patients lost to follow-up. Survival rate was 85+/-4%. There were no episodes of thromboembolism. Freedom from reoperation for the whole group was 68+/-5% at 10 years and 47+/-6% at 16 years. For group I, it was 68+/-9% at 10 years and 48+/-10% at 16 years, while for group II it was 72+/-6 and 45+/-8% at 10 and 15 years, respectively. Excluding endocarditis (one in group I and seven in group II) and 'other' reasons for reoperation (two in group I and three in group II), the freedom from structural valve degeneration (SVD) at 10 and 16 years was 78+/-1 and 55+/-10% for group I. For group II, it was 80+/-5% at 10 years and 58+/-9% at 15 years. The mean interval at which the valve degenerated was 8.8+/-3.6 years and did not differ between the two groups. Conclusions: AoR is feasible with good hemodynamics, low mortality and thromboembolic rate. Its behavior at 10 years is comparable to that of stentless aortic valve bioprosthesis. It can be performed with either xenopericardium or glutaraldehyde-treated autologous pericardium, but the latter has the advantage of being inexpensive and readily available. abstract_id: PUBMED:33941469 Trileaflet Aortic Valve Reconstruction Using Bovine Pericardium. Aim: For several years now, trileaflet aortic valve reconstruction has been performed in patients with various aortic valve diseases. This study aimed to explore the feasibility and durability of trileaflet aortic valve reconstruction with bovine pericardium. Method: Trileaflet aortic valve reconstruction with bovine pericardium was performed in 519 patients with various aortic valve diseases from April 2008 to December 2019. The results for all 519 patients were reviewed retrospectively. Mean age was 48.04±19.08 years (range, 13-80 years), and 40 patients were younger than 18 years of age. Thirty-four (34) patients had aortic stenosis, 344 patients had aortic regurgitation, and 141 patients had both aortic stenosis and regurgitation. One hundred and fifty-four (154) patients had bicuspid aortic valves, and three patients had quadricuspid aortic valves. The size of the pericardial leaflets was designed to be individually proportional to the size of the aortic root. Results: Mean length of follow-up was 41.97±22.68 months (range, 1-127 months). In total, 448 patients were reviewed, and the follow-up rate was 86.4%. All procedures were successful without conversion to prosthetic valve replacement during the procedure. 
Six (6) patients died after the procedure (in-hospital mortality, 1.2%). All-cause mortality occurred in 11 patients during the follow-up period. The 10-year cumulative survival rate was 94.7%±2.9%. Redo aortic valve surgery was performed in 11 patients after the procedure. The actuarial freedom from redo aortic valve surgery after the procedure was 88.2%±5.2%. At the end of the follow-up, the mean peak was 29.1±9.6 mmHg and mean gradient was 15.6±6.2 mmHg. Conclusions: The mid- and long-term outcomes in patients with various aortic valve diseases undergoing trileaflet aortic valve reconstruction with bovine pericardium are encouraging. The engineered pericardial leaflets should be individually proportional to the size of the aortic root. abstract_id: PUBMED:35187728 Aortic valve reconstruction surgery using autologous pericardium: The experience in Vietnam. Objective: The study aimed to evaluate the indications and describe the aortic valve reconstruction techniques by Ozaki's procedure in Vietnam and report mid-term outcomes of this technique in Vietnam. Methods: Between June 2017 and December 2019, 72 patients diagnosed with isolated aortic valve disease, with a mean age of 52.9 (19-79 years old), and a male:female ratio of 3:1 underwent aortic valve reconstruction surgery by Ozaki's technique at Cardiovascular Center, E Hospital, Vietnam. Results: The aortic valve diseases consisted of aortic stenosis (42%), aortic regurgitation (28%), and a combination of both (30%). In addition, the proportion of aortic valves with bicuspid morphology and small annulus (≤21 mm) was 28% and 38.9%, respectively. The mean aortic cross-clamp time was 106 ± 13.8 min, mean cardiopulmonary bypass time was 136.7 ± 18.5 min, and 2.8% of all patients required conversion to prosthetic valve replacement surgery. The mean follow-up time was 26.4 months (12-42 months), the survival rate was 95.8%, the reoperation rate was 2.8%, and rate of postoperative moderate or higher aortic valve regurgitation was 4.2%. Postoperative valvular hemodynamics was favorable, with a peak pressure gradient of 16.1 mmHg and an effective orifice area index of 2.3 cm2 . Conclusions: This procedure was safe and effective, with favorable valvular hemodynamics and a low rate of valvular degeneration. However, more long-term follow-up data are needed. abstract_id: PUBMED:29049705 Follow-up of a simple method for aortic valve reconstruction with fixed pericardium in children. In 2013, we published a simple method for aortic valve reconstruction in children using fixed pericardium in this journal. This was not expected to be a durable solution but a temporizing measure in the absence of other solutions, in order to buy time for growth that would allow subsequent prosthetic valve insertion. As such, this was implemented in 3 patients with excellent immediate results, as reported in our original publication. We are now writing to provide longer term information. In 2 cases, a newborn and a 12-year-old boy, the valve calcified heavily at 4 years' follow-up. By then, the valvar annulus had grown sufficiently to accept a prosthetic valve in both cases. In these 2 cases, the material used for valve reconstruction was autologous pericardium fixed in 0.6% gluteraldehyde. In the third patient, a 3-year-old girl, the valve failed by dehiscence of the suture line 6 months following surgery. In this case, fixed bovine pericardium had been used, which is more rigid than autologous pericardium, thus possibly explaining the dehiscence. 
However, the pericardial patch was not calcified. Thus, the described method lived up to our expectation of buying time for growth for later insertion of a prosthetic valve in 2 of the 3 cases. abstract_id: PUBMED:9796279 Aortic and mitral valve replacement in an 83-year-old female. 3 years follow-up of 16 mm bileaflet valve at the aortic position. An 83-year-old female with aortic and mitral stenosis showed orthopnea. The aortic valve pressure gradient was 139 mmHg. The mitral valve orifice was 0.92-1.05 cm2 and the right ventricular pressure was 70 mmHg. The body surface area of the patient was 1.23 m2. Double valve replacement was performed with 16 mm CarboMedics pediatric bileaflet valve for the aortic position and 25 mm CarboMedics mitral valve for the mitral position. The patient leads a normal daily life with NYHA class II three years after surgery. abstract_id: PUBMED:8969568 Clinical and echocardiographic follow-up after aortic valve reconstruction with bovine or autologous pericardium. Eighty-six patients, mean age 29 +/- 15 years, underwent aortic valve reconstruction with bovine or autologous pericardial tissue. Mean clinical follow-up was 35 months. Echocardiographic data were assessed in 65 patients with follow-up > or = 6 months. There were two in-hospital and three late deaths. Warfarin was not given, and no thromboembolic events occurred. Five (6%) patients needed reoperation because of severe aortic regurgitation. Peak aortic valve gradients remained low (26 +/- 14 mm Hg for the bovine group and 16 +/- 16 mm Hg for the autologous group). One patient is awaiting surgery for aortic stenosis after 76 months. Leaflet thickening at latest follow-up was marked in six (9%) patients. Left ventricular dimensions normalized postoperatively and showed only insignificant increase during follow-up. This technique is a promising alternative to valve prosthesis in selected patients; however, longer follow-up is necessary to assess long-term results. abstract_id: PUBMED:32949127 Reconstruction of the aortic valve leaflet with autologous pulmonary artery wall. Leaflet reconstruction outcomes in young patients can be compromised by treated autologous pericardium utilization. We present a new and simple unicuspid/unicommissural aortic valve repair technique with an autologous pulmonary artery wall graft. With comparative and longitudinal follow-up studies, this technique could constitute a living reconstruction of the aortic valve that does not preclude a future Ross procedure. abstract_id: PUBMED:38075996 Aortic valve neocuspidization with the Ozaki procedure in congenital aortic valve disease: Early results. Background: In this study, we present our early results with the Ozaki procedure in the treatment of congenital aortic valve disease. Methods: Between July 2021 and July 2023, a total of 14 patients (10 males, 4 females; median age: 13.9 years; range, 8.5 to 15 years) who underwent neoaortic valve reconstruction of three leaflets using Ozaki procedure were retrospectively analyzed. Preoperative, postoperative, and follow-up echocardiogram images were evaluated. Results: Preoperative indications were aortic regurgitation (n=3) or combined aortic stenosis and regurgitation (n=11). The median aortic annular diameter was 23 (range, 19.5 to 25) mm on preoperative echocardiography. The median preoperative peak systolic aortic valve gradient for patients with aortic stenosis was 60 (range, 45 to 93) mmHg. The median preoperative aortic valve regurgitation grade was 4 (range, 3 to 4). 
Autologous pericardium and bovine pericardium were used in 12 and 2 patients, respectively. There was no conversion to valve replacement, myocardial infarction, or mortality in the early postoperative period. The median follow-up time was 8.5 (range, 6 to 19) months. One patient who had undergone the Ozaki procedure with bovine pericardium required valve replacement eight months later. Conclusion: The Ozaki procedure can be performed safely and effectively in congenital aortic valve stenosis and insufficiency with promising early results. abstract_id: PUBMED:33641690 Mid-term follow-up of aortic valve replacement for bicuspid aortic valve. Objective: The purpose of this study was to evaluate the mid-term outcome of aortic valve replacement for bicuspid aortic valve and tricuspid aortic valve and the related risk factors. Methods: From January 2014 to June 2019, 177 tricuspid aortic valve patients and 101 bicuspid aortic valve patients who underwent aortic valve replacement in our hospital were collected. 1:1 propensity score matching analysis was used to control the bias in patient selection. The perioperative and follow-up data between the two groups were compared. Independent risk factors which were associated with the continued dilatation of the ascending aorta were identified by univariate or multivariate logistic regression analysis. Results: After the matching procedure, 160 patients were included in the analysis (80 in each group). Baseline characteristics, intraoperative, and perioperative outcomes were similar between the two groups (all p > 0.05). Moreover, 67 patients in the tricuspid aortic valve group and 70 in the bicuspid aortic valve group completed the follow-up. The ascending aorta change, annual change rate, and the proportion of continuous dilation of the ascending aorta in the bicuspid aortic valve group were significantly higher than those in the tricuspid aortic valve group (p < 0.05). Multivariate logistic regression analysis showed that type 1 in bicuspid aortic valve (OR 5.173; 95% CI 1.772, 15.101; p = 0.003), aortic regurgitation (OR 3.673; 95% CI 1.133, 11.908; p = 0.030), and aortic valve stenosis with regurgitation (OR 6.489; 95% CI 1.726, 24.404; p = 0.006) were independent risk factors for the continued dilatation of the ascending aorta in all AV patients. Furthermore, the multivariate logistic regression analysis showed that type 1 in bicuspid aortic valve (OR 5.157; 95% CI 1.053, 25.272; p = 0.043), age ≥ 40 years (OR 6.956; 95% CI 1.228, 39.410; p = 0.028), and aortic regurgitation (OR 4.322; 95% CI 1.174, 15.911; p = 0.028) were independent risk factors for the continued dilatation of the ascending aorta in bicuspid aortic valve patients. Conclusion: Compared with tricuspid aortic valve patients, the ascending aorta of bicuspid aortic valve patients is more likely to continue to enlarge after aortic valve replacement. Type 1 in bicuspid aortic valve, age ≥ 40 years, and aortic regurgitation were the independent risk factors. abstract_id: PUBMED:8186568 Aortic valve reconstruction in the young. Aortic valve replacement in the young patient, and particularly in women of child-bearing age, still represents a problem. Between July 1988 and August 1993, 644 aortic valve patients (mean age 32.5 years) were operated on in our institution. Aortic valve reconstruction was performed in 274 (42.5%). A variety of repair techniques (valvuloplasty) was used in 202 patients (mean age 21.5 years). Concomitant mitral surgery was performed in 103 (51%). 
In 72 patients (mean age 27.7 years), a cusp extension was undertaken with glutaraldehyde-treated bovine (27 pts) or autologous (45 pts) pericardium. In the "plasty" group, there were 8 (4%) hospital deaths and 8 (4.1%) late deaths with an actuarial survival of 86.05% +/- 3.97%. No thromboembolic events were detected in patients with isolated aortic surgery. There were 32 reoperations without mortality, 22 due to progressive rheumatic disruption of the mitral repair. There was severe aortic dysfunction in 17 (8.76%) cases. There was no hospital mortality among the 72 patients with cusp extensions. There were two (2.8%) late deaths and no thromboembolic events. No patient was anticoagulated. Four patients required reoperation on the aortic valve without mortality. The last echocardiographic follow-up showed stability of the reconstruction. These techniques offer a valid alternative to valve replacement in this difficult category of patients. Answer: Aortic valve reconstruction (AoR) with pericardium has been evaluated for up to 16 years, providing insights into its long-term performance as a stentless, readily available, and inexpensive valve option. Between 1988 and 1995, 92 patients underwent AoR with either bovine pericardium (Group I, n=27) or glutaraldehyde-treated autologous pericardium (Group II, n=65). The mean follow-up was 10.5 years, with a survival rate of 85%. The freedom from reoperation was 68% at 10 years and 47% at 16 years, with no significant difference in structural valve degeneration (SVD) between the two groups. The mean interval before valve degeneration was 8.8 years. The study concluded that AoR with pericardium is feasible, with good hemodynamics, low mortality, and a low thromboembolic rate, and its 10-year performance is comparable to that of a stentless aortic valve bioprosthesis (PUBMED:16039963). Further studies have supported the use of pericardium for AoR. A study involving 519 patients who underwent trileaflet aortic valve reconstruction with bovine pericardium reported a 10-year cumulative survival rate of 94.7%, with a mean follow-up of 41.97 months. The study concluded that mid- and long-term outcomes are encouraging (PUBMED:33941469). Another study in Vietnam using Ozaki's procedure for AoR with autologous pericardium in 72 patients reported a survival rate of 95.8% and a reoperation rate of 2.8% over a mean follow-up of 26.4 months, indicating favorable mid-term outcomes (PUBMED:35187728). However, the durability of pericardial valves in children may be limited. A follow-up study of aortic valve reconstruction in children using fixed pericardium reported heavy calcification and the need for prosthetic valve insertion after 4 years in two cases (PUBMED:29049705). In conclusion, AoR with pericardium offers a stentless, inexpensive, and readily available valve option with good hemodynamics and low mortality. While the long-term performance is comparable to stentless bioprostheses, the durability may vary, and more long-term data are needed to fully assess its viability as a cheap valve alternative.
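For context on the "actuarial survival" and "freedom from reoperation" percentages quoted in these valve series: such figures are product-limit (Kaplan-Meier) estimates that account for patients censored at their last follow-up. The sketch below implements the estimator on a small invented cohort purely for illustration; the follow-up times and events are placeholders, not data from the cited studies.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate of freedom from an event (e.g. reoperation).

    times:  follow-up duration in years for each patient
    events: 1 if the event occurred at that time, 0 if censored (event-free
            at last follow-up)
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = len(times)
    estimate, curve = 1.0, []
    for t, e in zip(times, events):
        if e:  # an event: multiply by the conditional survival at this time
            estimate *= (at_risk - 1) / at_risk
        curve.append((t, estimate))
        at_risk -= 1
    return curve

# Invented cohort (illustration only): follow-up in years and reoperation events.
followup = [2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
reop = [0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
curve = kaplan_meier(followup, reop)
freedom_10y = min(s for t, s in curve if t <= 10)
print(f"freedom from reoperation at 10 years ≈ {freedom_10y:.0%}")  # about 73%
```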
Instruction: Is the presence of a non-cleaved embryo on day 3 associated with poorer quality of the remaining embryos in the cohort? Abstracts: abstract_id: PUBMED:25772647 Is the presence of a non-cleaved embryo on day 3 associated with poorer quality of the remaining embryos in the cohort? Purpose: Morphological evaluation is currently considered the single most important predictive measure for assessing embryo quality. The aim of this study was to investigate whether cycles with at least one non-cleaved embryo (i.e., a 1-cell embryo on day 3) have different outcomes compared with cycles in which all embryos had cleaved by day 3. Methods: All autologous IVF/ICSI cycles with a fresh day 3 transfer and without using a gestational carrier performed at our center between 1/1/2010 and 12/31/2011 were analyzed retrospectively. Those cycles with at least one non-cleaved embryo on day 3 were compared with all other autologous cycles that had 100% cleaved embryos performed during the study period. Results: Eight hundred and forty two cycles were included. Of them, 144 cycles comprised the non-cleaved group, and 698 cycles comprised the cleaved group. Cycles in the non-cleaved group had more oocytes retrieved (15.4 ± 7.1 vs. 12.5 ± 7.1, p < 0.001), more zygotes obtained (10.0 ± 5.3 vs. 7.9 ± 5.2, p = <0.001), but the embryos exhibited lower cleavage rates and higher rates of fragmentation and asymmetry compared with controls (p < 0.001). However, spontaneous abortion rates, ectopic pregnancies rates as well as delivery rates were similar between the two groups. Conclusions: Our results show that the presence of a non-cleaved embryo on day 3 is associated with a more exuberant response to controlled ovarian stimulation as reflected by more oocytes retrieved. Despite the significant decrease in quality of the whole cohort in the non-cleaved group, implantation, delivery rates and number of embryos frozen were not adversely affected by the presence of a non-cleaved embryo. abstract_id: PUBMED:30602163 Over Ten-Cell Good Embryo Transfers on Day Three have Equivalent Clinical Outcomes with Those of Eight-Cell Embryos in Female Patients Aged ≤35 Years: A Retrospective Cohort Study. Background/aim: To compare the clinical outcomes between > 10- and 8-cell good quality embryos on day 3. Methods: A retrospective study of a cohort of 459 patients was included in the fresh embryo transfer (ET) cycle group from January 2009 to April 2016. In this group, 2 good quality embryos on day 3, were transferred on corresponding dates, in 75 patients (> 10-cell embryos), and in 384 patients (8-cell embryos). Seven hundred and forty four patients, with 1 blastocyst transferred derived from > 10-cell (n = 183) or 8-cell (n = 561) good quality embryos on day 3, were assigned in the frozen ET (FET) group. Results: In the ET group, the clinical pregnancy and live birth rates of the > 10-cell transfer patients were comparable with those of the 8-cell transfer group (62.67 vs. 69.27%, 60.00 vs. 59.90%, respectively); however, the miscarriage rate of > 10-cell transfers was significantly lower than that of 8-cell transfers (4.26 vs. 13.53%, p < 0.05). In the FET group, there were no statistically significant differences found in the clinical pregnancy, live birth, and miscarriage rates between one > 10-cell-derived and one 8-cell-derived blastocyst transfers (71.04 vs. 65.78%, 59.02 vs. 54.19%, and 16.92 vs. 17.62%, respectively). 
Conclusions: The results suggested that > 10- and 8-cell good-quality embryos on day 3 had comparable viability and clinical outcomes. abstract_id: PUBMED:26206457 Is the presence of an uncleaved embryo on day 3 a useful predictor of outcomes following day 5 transfer? Purpose: This study aimed to determine whether the presence of an uncleaved embryo on day 3 is predictive of cycle outcome after day 5 transfer (D5 ET). Methods: In vitro fertilization (IVF)/intracytoplasmic sperm injection (ICSI) cycles from January 2013 to November 2014 with D5 ET were analyzed for the presence of at least one uncleaved embryo on day 3 (D3). Each index cycle (n = 70) was compared with two matched control cycles without uncleaved embryos. The main outcome measures included embryo quality, implantation rate, and clinical pregnancy rate. Results: Fifty-nine of 3896 total embryos in this study were uncleaved on D3 (1.5 %). Cycles with uncleaved embryos had more oocytes retrieved (20.6 vs. 17.5), lower proportions of good quality embryos on D3 (52.4 vs. 66.1 %), and fewer usable embryos (transferred or frozen) on D5 (42.4 vs. 50.8 %). However, there were no significant differences in the incidence of cycles with a positive hCG, or in the rates of implantation, clinical pregnancy, or live birth. Conclusions: Although an uncleaved embryo on D3 is associated with reduced conversion of sibling embryos to the blastocyst stage on D5, overall quality of those embryos forming blastocysts is not markedly decreased and clinical outcomes are not compromised. abstract_id: PUBMED:37291503 Prediction model for day 3 embryo implantation potential based on metabolites in spent embryo culture medium. Background: Metabolites in spent embryo culture medium correlate with the embryo's viability. However, there is no widely accepted method using metabolite data to predict successful implantation. We sought to combine metabolomic profiling of spent embryo culture medium and clinical variables to create an implantation prediction model as an adjunct to morphological screening of day 3 embryos. Methods: This investigation was a prospective, nested case-control study. Forty-two day 3 embryos from 34 patients were transferred, and the spent embryo culture medium was collected. Twenty-two embryos implanted successfully, and the others failed. Metabolites in the medium relevant to implantation were detected and measured by Liquid Chromatography-Mass Spectrometry. Clinical signatures relevant to embryo implantation were subjected to univariate analysis to select candidates for a prediction model. Multivariate logistic regression of the clinical and metabolomic candidates was used to construct a prediction model for embryo implantation potential. Results: The levels of 13 metabolites were significantly different between the successful and failed groups, among which five were the most relevant and interpretable, as selected by Least Absolute Shrinkage and Selection Operator (LASSO) regression analysis. None of the clinical variables significantly affected day 3 embryo implantation. The most relevant and interpretable set of metabolites was used to construct a prediction model for day 3 embryo implantation potential with an accuracy of 0.88. Conclusions: Day 3 embryos' implantation potential could be noninvasively predicted by the spent embryo culture medium's metabolites measured by LC-MS. This approach may become a useful adjunct to morphological evaluation of day 3 embryos.
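PUBMED:37291503 builds its model in two steps: LASSO screening of the LC-MS metabolite levels, then multivariate logistic regression on the retained candidates. The sketch below is only a rough illustration of that workflow under stated assumptions — the feature matrix, labels and planted signal are synthetic, not the study's data or code:

```python
# Sketch of LASSO-based metabolite selection followed by logistic regression (synthetic data).
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(42, 13))                    # 42 embryos, 13 candidate metabolites
signal = X[:, 0] - X[:, 1]                       # plant a signal in two metabolites
y = (signal + rng.normal(scale=0.5, size=42) > 0).astype(int)  # 1 = implanted, 0 = failed

X_std = StandardScaler().fit_transform(X)

# Step 1: LASSO keeps only metabolites with non-zero coefficients.
selected = np.flatnonzero(LassoCV(cv=5).fit(X_std, y).coef_)
if selected.size == 0:                           # fall back to all features if nothing survives
    selected = np.arange(X_std.shape[1])
print("selected metabolite indices:", selected)

# Step 2: logistic regression on the selected metabolites; report cross-validated accuracy.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X_std[:, selected], y, cv=5, scoring="accuracy").mean()
print("cross-validated accuracy:", round(acc, 2))
```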
abstract_id: PUBMED:16595212 Early cleavage morphology affects the quality and implantation potential of day 3 embryos. Objective: To assess the development and implantation potential of early-cleaved embryos displaying various morphological patterns. Design: Retrospective analysis. Setting: Private IVF center. Patient(s): Embryos obtained from 1,556 transfer cycles were assessed. Early-cleaved embryos were grouped according to their cleavage patterns as: even (1,490); uneven (3,238); and fragmented (768), or according to nuclear morphologies as: mononucleation (2,008) and other nuclear morphologies; nonmononucleation (3,488). Seven thousand four hundred forty-five embryos were late cleaved. Intervention(s): None. Main Outcome Measure(s): Embryo quality, pregnancy (PR), and implantation rates. Result(s): Day 3 embryo quality was highest in evenly early-cleaved embryos and in those displaying mononucleation. Early-cleaved embryos displaying fragmentation and late-cleaved embryos yielded the poorest day 3 quality. Early cleavage cycles displayed higher PR and implantation rate than late cleavage with the exception of other nuclear morphologies, in which similar outcome was obtained. Mononucleated early-cleaved embryos implanted at a higher frequency than early-cleaved embryos displaying other nuclear morphologies. Conclusion(s): The morphology of early cleavage correlates to day 3 embryo quality and implantation rate. abstract_id: PUBMED:37308804 Does Day 3 embryo status matter to reproductive outcomes of single blastocyst transfer cycles? A cohort study. Objective: To investigate whether Day 3 (D3) embryo status matter to reproductive outcomes of blastocyst transfer cycles. Design: Retrospective cohort study. Setting: Assisted Reproduction Department of Shanghai Ninth People's Hospital, Shanghai, China. Population: A total of 6906 vitrified-thawed single blastocyst transfer cycles in 6502 women were included. Methods: Generalised estimated equation regression models were used to calculate adjusted odds ratios (aOR) and 95% confidence intervals (CI) for the associations between embryo status and pregnancy outcomes. Main Outcome Measures: Biochemical pregnancy, miscarriage, live birth. Results: High-quality blastocysts derived from poor-grade D3 embryos had comparable pregnancy outcomes to those derived from high-grade D3 embryos (40.0% versus 43.2%, aOR 1.00, 95% CI 0.85-1.17 for live birth rate; 8.3% versus 9.5%, aOR 0.82, 95% CI 0.63-1.07 for miscarriage rate). Cycles with low D3 cell number (five cells or fewer) had significantly higher miscarriage rate (9.2% versus 7.6%, aOR 1.33, 95% CI 1.02-1.75) compared with cycles with eight cells on D3. Conclusions: Poor-quality cleavage embryos should be cultivated to the blastocyst stage because high-quality blastocysts derived from poor-grade D3 embryos had acceptable pregnancy outcomes. When the blastocyst grade is identical, choosing embryos with higher D3 cell number (eight or more cells) for transfer could reduce the risk of early miscarriage. abstract_id: PUBMED:36149236 Developmental potential of different embryos on day 3: a retrospective study. To investigate how different quality of day 3 (D3) embryos affect blastocyst formation and clinical outcomes. This retrospective study analysed 699 patients undergoing assisted reproductive technology (ART) between January 2017 and February 2021. A total of 2517 D3 embryos were transferred to blastocyst medium for extended culture. D3 embryos were divided into five groups. 
Grade A, 6-10 cells, symmetrical blastomeres and <20% fragmentation; grade B, 6-10 cells, uneven blastomeres and ≥20% fragmentation; grade C, >10 cells, symmetrical blastomeres and <20% fragmentation; grade D, >10 cells, uneven blastomeres and ≥20% fragmentation; grade E, <6 cells. Status of day 5 (D5) and day 6 (D6) blastocysts and the clinical outcomes (blastocyst transfer) of each D3 embryo were recorded. The grade C group showed higher D5 blastocyst formation and high-quality blastocyst rates than the other groups (p<.05). However, the clinical pregnancy rates in the grade A group were higher than in the other groups (p<.05). Embryos with a low speed of development (grade E group) showed considerable clinical outcomes that were still worth investigating. D3 embryos with less fragmentation and ≥6 symmetrical blastomeres revealed a higher developmental potential, while embryos with 6-10 blastomeres showed the ideal clinical outcomes. Impact statement: What is already known on this subject? Accurate embryo evaluation can effectively reflect the developmental potential of different embryos. The number of blastomeres, proportion of fragmentation, and blastomere symmetry are three important and popular morphologic parameters used for evaluating day 3 (D3) embryos. However, in existing reports, combining these three parameters for embryo evaluation often yields differing results, because different researchers have chosen different criteria for these three parameters. What do the results of this study add? In this retrospective study, we summarised the medical records of our reproductive centre in the past three years, redefined the evaluation method of the D3 embryos, and analysed the corresponding developmental potential and clinical outcomes. We conclude that although the embryonic development potential of grade C embryos (>10 cells, symmetrical blastomeres and/or <20% fragmentation) is relatively good, the results of grade A embryos (6-10 cells, symmetrical blastomeres and/or <20% fragmentation) are better in terms of clinical outcomes. What are the implications of these findings for clinical practice and/or further research? We believe this is meaningful for embryologists to choose embryos for transfer and predict the clinical outcome of IVF cycles. abstract_id: PUBMED:32252570 Day 3 time lapse selection is beneficial for the patients with no good-quality embryos. We aimed to evaluate whether or not time lapse selection was beneficial for cleavage-stage embryo transfers. The study included 838 infertile women with good ovarian reserve (obtaining more than 8 oocytes) from January 2018 to August 2019. Based on the transferred embryos with different grades (grade I, II and III), the patients were divided into day 3 selection with conventional morphology (CM) and day 3 selection with time lapse (TL) groups. For the grade I and II embryos, we observed that CM and TL had similar implantation, clinical pregnancy and ongoing pregnancy (p > .05) rates. For the grade III embryos, we observed that the CM group showed slightly lower implantation (36.74 versus 41.03%, p = .261) and clinical pregnancy (56.82 versus 64.10%, p = .182) rates than the TL group. The CM group showed a significantly lower ongoing pregnancy (47.35 versus 59.83%, p = .025) rate than the TL group. We also observed that the CM group had a significantly higher blastulation (38.93 versus 26.61%, p = .019) rate than the TL group.
We concluded that TL selection was beneficial to the patients with no good-quality embryos in the first cleavage-stage embryo transfers. abstract_id: PUBMED:27475526 Effect of embryo morphology and morphometrics on implantation of vitrified day 3 embryos after warming: a retrospective cohort study. Background: Characteristics routinely used to evaluate embryo quality after thawing include number of blastomeres survived and presence of mitosis resumption after overnight culture. It is unknown to which extent symmetry and fragmentation affect implantation after warming and whether application of stricter criteria either before vitrification or after warming would improve implantation rate (IR) of vitrified/warmed embryos. This study aimed to find new parameters to improve selection criteria for vitrification and for transfer after warming. Methods: Firstly, we evaluated standard morphological characteristics (intact survival, mitosis resumption, number of blastomeres, symmetry and fragmentation) of 986 warmed day 3 embryos and, from a subset of 654, we evaluated morphometric characteristics (fragmentation, symmetry and volume change). Secondly, we tested the hypothesis that IR of day 3 vitrified/warmed embryos is influenced by morphometric characteristics. IR per embryo transferred was calculated using embryos that were transferred in a single embryo transfer (SET) or a double embryo transfer (DET) with either 0 or 100 % implantation (830/986). We investigated the significant differences in IR between the different categories of a specific characteristic. These categories were based on our standard embryo evaluation system. The statistical tests Chi-square, Fisher's exact or Cochrane-Armitage were used according to the type and/or categories of the variable. Results: The 986 embryos were transferred in 671 FET cycles with 16.9 % (167/986) IR. After exclusion of DET with 1 embryo implanted, IR per embryo transferred was 12.4 % (103/830). Embryo symmetry, fragmentation and volume change in vitrified/warmed day 3 embryos were not associated with IR. However, when mitosis resumption was present after overnight culture, intact embryos reached significantly higher IR than non-intact embryos and only when the embryo compacted after overnight culture the number of cells damaged after warming had no effect on IR. Concretely, embryos with 8 cells after warming or >9 cells after overnight culture-including compacted embryos-reached the highest IR (>15 %) while embryos with <6 cells after warming or with ≤6 cells after overnight culture had extremely low IR (<1 %). Conclusions: IR of vitrified embryos is determined by the number of cells lost, by the occurrence of mitosis resumption, and by the specific number of blastomeres present but not by fragmentation, blastomere symmetry or volume change. Unselecting embryos for cryopreservation because of fragmentation >10 % and/or symmetry < 75 % only leads to unwanted loss of embryos with acceptable implantation potential. Trial Registration: Retrospectively registered NCT02639715 . abstract_id: PUBMED:26396565 Successful pregnancy following the transfer of vitrified blastocyst which developed from poor quality embryos on day 3. Background: The selection of pre-embryos for transferred is based on morphological appearance. But some poor quality cleaved embryos also can be cultured to the blastocyst stage and implanted. Objective: To assess the clinical pregnancy outcomes of blastocyst transfer which developed from poor quality embryos. 
Materials And Methods: A total of 109 cleaved embryos with poor quality were cultured to day 5/day 6, and 27 (24.8%) blastocysts were collected from the 15 cycles/patients undergoing conventional IVF. All the blastocysts were cooled by fast-freezing. The blastocysts were then warmed for transfer. Results: Twenty-five of the 27 vitrified blastocysts (92.6%) survived after warming and were transferred to 15 patients. Five of the women became pregnant. Conclusion: Our results suggest that vitrified human day 5/day 6 blastocyst transfers which developed from poor-quality embryos at day 3 can contribute to increasing cumulative pregnancy rates in assisted reproduction. Answer: Yes, the presence of a non-cleaved embryo on day 3 is associated with poorer quality of the remaining embryos in the cohort. The study by PUBMED:25772647 found that cycles with at least one non-cleaved embryo on day 3 exhibited lower cleavage rates and higher rates of fragmentation and asymmetry compared with controls. Despite this decrease in quality, the study noted that implantation, delivery rates, and the number of embryos frozen were not adversely affected by the presence of a non-cleaved embryo. Additionally, PUBMED:26206457 reported that cycles with an uncleaved embryo on day 3 had lower proportions of good quality embryos on day 3 and fewer usable embryos on day 5. However, the study also found that the overall quality of those embryos forming blastocysts was not markedly decreased and clinical outcomes were not compromised. These findings suggest that while the presence of non-cleaved or uncleaved embryos on day 3 is indicative of a general decrease in embryo quality within the cohort, it does not necessarily predict poor clinical outcomes, as other factors may contribute to the viability of the embryos that do reach the blastocyst stage or are selected for transfer.
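The group comparisons underpinning this answer (for example, oocytes retrieved of 15.4 ± 7.1 vs. 12.5 ± 7.1 and similar delivery rates in PUBMED:25772647) rest on standard two-group tests: a t-test for continuous outcomes and a chi-square test for rates. A hedged sketch is given below; the per-cycle arrays and delivery counts are placeholders, since the raw data are not available here:

```python
# Illustrative two-group comparison: continuous outcome (Welch t-test) and rate outcome (chi-square).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder per-cycle oocyte counts for the non-cleaved (n=144) and cleaved (n=698) groups.
oocytes_noncleaved = rng.normal(15.4, 7.1, size=144)
oocytes_cleaved = rng.normal(12.5, 7.1, size=698)
t_stat, p_mean = stats.ttest_ind(oocytes_noncleaved, oocytes_cleaved, equal_var=False)
print("Welch t-test p-value for oocytes retrieved:", p_mean)

# Placeholder delivery counts per group: [delivered, not delivered].
table = np.array([[50, 94], [240, 458]])
chi2, p_rate, dof, expected = stats.chi2_contingency(table)
print("chi-square p-value for delivery rate:", p_rate)
```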
Instruction: Does a full bladder assist insertion of intrauterine contraception? Abstracts: abstract_id: PUBMED:23349532 Does a full bladder assist insertion of intrauterine contraception? A randomised trial. Background: Bladder distension is thought to cause the uterine axis to become more aligned with the cervical canal. Among women undergoing assisted conception it has been demonstrated that having a full bladder facilitates the passage of an intrauterine catheter for embryo transfer. Objective: To determine if insertion of intrauterine contraceptive devices is easier in women who have a full bladder at the time of insertion. Methods: 200 women requesting intrauterine contraception with a pre-filled bladder were randomised to delayed emptying (after insertion; n=100) or immediate emptying (before insertion; n=100). Comparisons were made between doctors' reported ease of insertion and women's pain scores. Results: There was no significant difference with reported ease of insertion between the groups. Doctors reported that insertions were either 'very easy' or 'quite easy' in 82% and 83% of women in the immediate and delayed emptying groups, respectively (95% CIs for difference -10% to +11%). There was no significant difference in reported pain scores, with mean pain scores (out of 10) of 3.8 and 4.4 in the delayed and immediate emptying groups, respectively. Conclusions: Bladder filling does not have a significant effect on ease of insertion of an intrauterine method of contraception. Insertion is mostly associated with low levels of discomfort, even in the presence of a full bladder. abstract_id: PUBMED:33312880 Erosion of an intrauterine contraceptive device into the urinary bladder: A case report. Intrauterine contraceptive devices may rarely erode into the urinary bladder, usually shortly after insertion. This case report describes the presentation and management of a copper-bearing intrauterine device which had eroded into the bladder. The patient presented with dysuria, dyspareunia and groin pain. The device had been inserted 10 years previously following a termination of pregnancy. A bladder stone had formed on the arm of the T-shaped device. The calculus was successfully lasered transurethrally and the intrauterine device was removed transvaginally. A urinary catheter was left on free drainage for four weeks and a follow-up cystogram showed no leak. Most complications related to intrauterine devices occur within days or weeks of insertion but in this case the complications presented 10 years later. abstract_id: PUBMED:31304041 Cystoscopic removal of a migrated intrauterine device to the bladder; a case report. Background: An intrauterine device (IUD) is a well-accepted means of reversible contraception. Migration of IUD to the bladder through partial or complete perforation has been rarely reported. This phenomenon could be strongly associated with history of prior cesarean sections (C-section) or early insertion of the device in the postpartum period. Case Presentation: In this study, a case of copper IUD migration through cesarean scar defect is presented, in such a way that was successfully managed by cystoscopic removal. A 31-year-old female with a history of lower urinary symptoms referred to the clinic for her secondary infertility work-up. A copper IUD outside the uterus in the bladder was found using hysterosalpingraphy. A plain abdominal radiography also confirmed the presence of a T-shaped IUD in the pelvis. 
According to ultrasound, the copper IUD was partly in the bladder lumen and within the bladder wall. The patient had a history of an intrauterine device insertion eight years ago following her second cesarean delivery. Three years later, her IUD was expelled, and another copper IUD was inserted. The second copper IUD was also removed when she decided to become pregnant. The patient finally underwent a hysteroscopic cystoscopy. The intrauterine device with its short arms embedded in the bladder wall was successfully extracted through the urethra. Conclusions: IUD insertion seems to be more challenging in women with prior uterine incisions and requires more attention. Cystoscopic removal should be considered as a safe and effective minimally invasive approach to manage a migrated intrauterine device in the bladder. abstract_id: PUBMED:29058619 A large bladder stone caused by the intravesical migration of an intrauterine contraceptive device: a case report. Background: A wide variety of complications due to the extrauterine migration of intrauterine contraceptive devices have been reported in the literature. Here we describe the case of a large bladder stone formed around a migrated Copper T380A device that was neglected and detected 15 years after insertion. Case Presentation: A 48-year-old Sri Lankan woman underwent a workup for lower urinary tract symptoms and recurrent urinary tract infections over the previous 6 months. The radiographs showed a large bladder stone with an imprint of an intrauterine contraceptive device in the center of it. The device had been inserted 15 years previously. Two years after the insertion, it was considered to be missing, but our patient did not comply with the recommended follow-up. She had been completely asymptomatic until she developed lower urinary tract symptoms. After confirming the location of the stone via ultrasonography, a vesicolithotomy was performed, revealing a stone with three limbs corresponding to the shape of the Copper T380A device. The device and the threads were fully covered with the stone material. Our patient was asymptomatic following the surgery. Conclusions: A migrated intrauterine contraceptive device can act as the nidus for the formation of a secondary bladder stone. The detailed imprint of the device inside the stone and the laminated appearance of the stone material were characteristic of a secondary bladder stone formed around an intrauterine contraceptive device. Radiography and ultrasonography are adequate for the diagnosis of intravesical migration of intrauterine contraceptive devices. abstract_id: PUBMED:18709823 Bladder lithiasis secondary to intrauterine device migration. Case report Objective: To report one clinical case of bladder lithiasis secondary to the migration of an intrauterine device, its symptoms, diagnostic tests employed and treatment. Methods/results: A 30-year-old female patient presented with lower urinary tract irritative symptoms. One year earlier she had undergone insertion of an intrauterine device (copper T) after endometrial curettage for pregnancy interruption. She received treatment for infectious cystitis over two years, with various antibiotic cycles. The persistence of symptoms led her to the urology clinics, and a 5 x 3 cm intravesical lithiasis with a copper T inside was detected by ultrasound and pelvic anterior-posterior x-ray. Suprapubic cystolithotomy was performed with a good outcome and disappearance of the symptoms.
Conclusions: The migration of an intrauterine device to the bladder is an infrequent cause of bladder lithiasis secondary to a foreign body. Nevertheless, we must consider this possibility whenever chronic lower urinary tract irritative symptoms occur in a woman using this birth control method. abstract_id: PUBMED:18797898 Migration of an intrauterine device into the bladder: a rare case. Introduction: Intrauterine device (IUD) application has been used for over 30 years and is a widely accepted contraception method among women because of its low-complication rates. The use of intrauterine devices may cause complications but migration of the IUD into an adjacent organ is rarely encountered. Case: In the present report, we present a 26-year-old patient to whom an IUD had been applied 2 years earlier and whose examination, performed due to pain and urinary complaints, revealed migration of the uterine device into the bladder. Conclusion: Patients with pelvic pain and chronic irritative urinary symptoms whose vaginal examination and ultrasonography reveal a dislocated IUD should be carefully examined for the migration of the IUD into the bladder. In order to avoid this rare complication, patients should be evaluated physically and ultrasonographically for uterine position, thinness of the uterine wall and inflammatory disease before the insertion. The patient should be evaluated with sonography immediately after insertion and periodically. abstract_id: PUBMED:37879705 Intrauterine device migration into the bladder leading to stone formation. An intrauterine device (IUD) is a popular method of contraception mainly used in developing countries. Perforation is one of the most serious, though rare, complications secondary to the insertion of an IUD, while perforation into the intravesical organs such as the bladder is even more rare. A 30-year-old multipara, with two previous caesarean sections (CS) and one curettage, was found to have her IUD puncturing the bladder during a cystoscopy procedure to remove her bladder stones. Transvesical migration of an IUD is an uncommon complication with a high rate of calculi formation, which is thought to be caused by the IUD's lithogenic potential. Imaging approaches such as ultrasound and pelvic X-rays are considered imperative in the accurate diagnosis. Any migrated IUD should be removed regardless of location. Prompt and continual monitoring of women using an IUD is essential, and in a case where the IUD has migrated, removal using the endoscopic approach is a safe and effective method. abstract_id: PUBMED:15074788 Double bladder lithiasis from an intrauterine device. Report one case Objectives: To report a new case of an intrauterine device causing bladder lithiasis, because of its rarity as a complication of the intrauterine device contraceptive method. Methods/results: A 27-year-old female patient admitted to the urology department of the "Hospital Medico-quirurgico 10 de Octubre" who, while having an intrauterine device, became pregnant, underwent a caesarean section and a new intrauterine device implant, and a few months later presented with a lower urinary tract infectious syndrome; she was diagnosed with double bladder lithiasis secondary to intrauterine device migration, for which cystolithotomy was performed. Conclusions: We performed a short review of the bladder foreign bodies reported in the literature, and present this case, which is very infrequent because of the finding of 2 stones in the bladder and an intrauterine device inside one of them.
The intrauterine device was detected after crushing one of the calculi, 2 cm in size. It seems that first intrauterine device had migrated through the uterine wall into the bladder, creating a calcareous concretion around. abstract_id: PUBMED:24612897 Best practices to minimize risk of infection with intrauterine device insertion. Background: Intrauterine devices provide an extremely effective, long-term form of contraception that has the benefit of being reversible. Historically, the use of certain intrauterine devices was associated with increased risk of pelvic inflammatory disease. More recent evidence suggests that newer devices do not carry the same threat; however, certain risk factors can increase the possibility of infection. Objectives: To review the risk of infection with the insertion of intrauterine devices and recommend strategies to prevent infection. Outcomes: The outcomes considered were the risk of pelvic inflammatory disease, the impact of screening for bacterial vaginosis and sexually transmitted infections including chlamydia and gonorrhea; and the role of prophylactic antibiotics. Evidence: Published literature was retrieved through searches of PubMed, Embase, and The Cochrane Library on July 21, 2011, using appropriate controlled vocabulary (e.g., intrauterine devices, pelvic inflammatory disease) and key words (e.g., adnexitis, endometritis, IUD). An etiological filter was applied in PubMed. The search was limited to the years 2000 forward. There were no language restrictions. Grey (unpublished) literature was identified through searching the web sites of national and international medical specialty societies. Values: The quality of evidence in this document was rated using the criteria described in the Report of the Canadian Task Force on Preventative Health Care (Table). Recommendations 1. All women requesting an intrauterine device should be counselled about the small increased risk of pelvic inflammatory disease in the first 20 days after insertion. (II-2A) 2. All women requesting an intrauterine device should be screened by both history and physical examination for their risk of sexually transmitted infection. Women at increased risk should be tested prior to or at the time of insertion; however, it is not necessary to delay insertion until results are returned. (II-2B) 3. Not enough current evidence is available to support routine screening for bacterial vaginosis at the time of insertion of an intrauterine device in asymptomatic women. (II-2C) 4. Routine use of prophylactic antibiotics is not recommended prior to intrauterine device insertion, although it may be used in certain high-risk situations. (I-C) 5. Standard practice includes cleansing the cervix and sterilizing any instruments that will be used prior to and during insertion of an intrauterine device. (III-C) 6. In treating mild to moderate pelvic inflammatory disease, it is not necessary to remove the intrauterine device during treatment unless the patient requests removal or there is no clinical improvement after 72 hours of appropriate antibiotic treatment. In cases of severe pelvic inflammatory disease, consideration can be given to removing the intrauterine device after an appropriate antibiotic regimen has been started. (I-B) 7. An intrauterine device is a safe, effective option for contraception in an HIV-positive woman. (I-B) 8. An intrauterine device can be considered a first-line contraceptive agent in adolescents. (I-A). 
abstract_id: PUBMED:3952918 Perforation of urinary bladder by intrauterine device. Perforation of the uterus by an intrauterine device (IUD) is not an uncommon complication of this means of birth control. Comparatively, however, perforation of the bladder is uncommon, having been reported only seven times previously in the literature. This additional case is presented with review of the current literature and discussion of uterine and bladder perforation by an IUD. Answer: According to the randomized trial reported in PUBMED:23349532, having a full bladder does not significantly assist in the insertion of intrauterine contraception. The study involved 200 women who were randomized to either delayed emptying (after insertion) or immediate emptying (before insertion) of their bladder. The results showed no significant difference in the reported ease of insertion between the groups, with doctors reporting that insertions were 'very easy' or 'quite easy' in 82% and 83% of women in the immediate and delayed emptying groups, respectively. Additionally, there was no significant difference in reported pain scores between the two groups. Therefore, the conclusion was that bladder filling does not have a significant effect on the ease of insertion of an intrauterine method of contraception.
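The reported 95% confidence interval of roughly -10% to +11% for the difference in 'very easy or quite easy' insertion rates (82% vs. 83%, about 100 women per arm) can be approximated with a standard Wald interval for a difference of two proportions. The few lines below are a worked illustration, not the trial's own analysis; the exact bounds depend on rounding and on which group is subtracted from which:

```python
# Wald 95% CI for the difference between two proportions (82% vs. 83%, ~100 women per arm).
import math

n_immediate, p_immediate = 100, 0.82   # immediate bladder emptying
n_delayed, p_delayed = 100, 0.83       # delayed bladder emptying

diff = p_delayed - p_immediate
se = math.sqrt(p_immediate * (1 - p_immediate) / n_immediate
               + p_delayed * (1 - p_delayed) / n_delayed)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:+.2f}, 95% CI = ({lo:+.3f}, {hi:+.3f})")
# Prints approximately (-0.095, +0.115), i.e. close to the -10% to +11% interval quoted in the abstract.
```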
Instruction: Does concurrent radiochemotherapy affect cosmetic results in the adjuvant setting after breast-conserving surgery? Abstracts: abstract_id: PUBMED:17448869 Does concurrent radiochemotherapy affect cosmetic results in the adjuvant setting after breast-conserving surgery? Results of the ARCOSEIN multicenter, Phase III study: patients' and doctors' views. Purpose: To evaluate the cosmetic results of sequential vs. concurrent adjuvant chemotherapy with radiotherapy after breast-conserving surgery for breast cancer, and to compare ratings by patients and physicians. Methods And Materials: From 1996 to 2000, 716 patients with Stage I-II breast cancers were included in a multicenter, Phase III trial (the ARCOSEIN study) comparing, after breast-conserving surgery with axillary dissection, sequential treatment with chemotherapy first followed by radiotherapy vs. chemotherapy administered concurrently with radiotherapy. Cosmetic results with regard to both the overall aspect of the breast and specific changes (color, scar) were evaluated in a total of 214 patients (107 in each arm) by means of questionnaires to both the patient and a physician whose rating was blinded to treatment allocation. Results: Patients' overall satisfaction with cosmesis was not statistically different between the two arms, with approximately 92% with at least satisfactory results (p = 0.72), although differences between the treated and untreated breasts were greater after the concurrent regimen (29% vs. 14% with more than moderate differences; p = 0.0015). Physician assessment of overall cosmesis was less favorable, with lower rates of at least satisfactory results in the concurrent arm (60% vs. 85%; p = 0.001). Consequently, the concordance for overall satisfaction with cosmesis between patients and doctors was only fair (kappa = 0.62). Conclusion: After breast-conserving surgery, the concurrent use of chemotherapy with radiotherapy is significantly associated with greater differences between the breasts. These differences do not translate into patients' lessened satisfaction with cosmesis. abstract_id: PUBMED:24623360 Assessing cosmetic results after breast conserving surgery. "Taking less treating better" has been one of the major improvements of breast cancer surgery in the last four decades. The application of this principle translates into equivalent survival of breast cancer conserving treatment (BCT) when compared to mastectomy, with a better cosmetic outcome. While it is relatively easy to evaluate the oncological results of BCT, the cosmetic outcome is more difficult to measure due to the lack of an effective and consensual procedure. The assessment of cosmetic outcome has been mainly subjective, undertaken by a panel of expert observers or/and by patient self-assessment. Unfortunately, the reproducibility of these methods is low. Objective methods have higher values of reproducibility but still lack the inclusion of several features considered by specialists in BCT to be fundamental for cosmetic outcome. The recent addition of volume information obtained with 3D images seems promising. Until now, unfortunately, no method is considered to be the standard of care. This paper revises the history of cosmetic evaluation and guides us into the future aiming at a method that can easily be used and accepted by all, caregivers and caretakers, allowing not only the comparison of results but the improvement of performance. 
abstract_id: PUBMED:37795008 Impact of Radiotherapy Volumes on Late-Term Cosmetic Outcomes and Quality of Life in Patients With Unifocal and Multifocal/Multicentric Breast Cancer After Breast-Conserving Surgery. Objective: Breast-conserving surgery (BCS) followed by radiotherapy (RT) is the standard treatment for early-stage breast cancer. The use of an additional RT dose (boost) to the tumour bed improves local control but may worsen quality of life (QOL) and cosmetic results. Multifocal/multicentric tumours (MMTs) pose a challenge as they require larger boost volumes. This study investigated the impact of RT volumes on late-term cosmetic outcomes and QOL in patients with unifocal and MMTs who underwent adjuvant RT after BCS. Materials And Methods: Retrospective data of 367 patients who underwent BCS between 2012 and 2014 were reviewed. A cohort of 121 patients with at least six months of completed RT were prospectively included in the study. Cosmetic results were evaluated using a modified scoring system, and QOL was assessed using The European Cancer Treatment and Organization Committee tools. Results: The results showed that the inclusion of regional lymphatics in the RT treatment field significantly affected QOL, particularly in terms of role functioning and social functioning. Higher boost volume ratios were associated with increased pain-related symptoms. However, the presence of MMTs did not significantly affect cosmetic outcomes compared to unifocal tumours. Conclusion: The size of the boost and inclusion of regional lymphatics in RT significantly impact QOL in patients undergoing BCS. Tumour foci number does not affect cosmetic outcomes. These findings emphasize the need for careful consideration of RT volumes to minimize long-term adverse effects on QOL. Future prospective studies should evaluate early side effects and baseline QOL scores to provide a comprehensive assessment. abstract_id: PUBMED:31857762 Correlation of Specimen/Breast Volume Ratio to Cosmetic Outcome After Breast Conserving Surgery. Breast conserving surgery (BCS) is currently the standard of care for early breast cancer. One of the key determinants for the line of treatment in breast cancer is the size of the tumor in relation to the breast size. The aim of this study is to determine the effect of the ratio of the excised specimen to breast volume on the cosmetic outcome after conventional BCS. This is a retrospective study conducted on female patients with early stage breast cancer who underwent BCT at National Cancer Institute, Cairo University. The study included 41 patients with stage I and II breast cancer. Breast volume was calculated using mammography, and ratio of the specimen to breast volume was determined. This ratio was correlated with the cosmetic outcome using the BCCT.core software. Thirty-six out of the 41 breast cancer patients completed the study. Favorable outcome (excellent + good) was detected in 52.7% of patients, while 47.3% had unfavorable outcome (fair + poor). Breast volume, tumor site, patients' age, and weight did not seem to alter the cosmetic result. The only statistically significant factors affecting the cosmetic outcome were the specimen volume and the ratio of the specimen to the normal breast volume (p = 0.006 and 0.019 respectively). In order to obtain a satisfactory cosmetic outcome after conventional BCS, the ratio of the excised specimen to breast volume has to be seriously considered. 
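PUBMED:31857762 relates the ratio of excised specimen volume to mammographically estimated breast volume to a favorable versus unfavorable BCCT.core outcome. One plausible way to model such an association is a univariate logistic regression of the binary outcome on the ratio; the sketch below is purely illustrative, with hypothetical ratios and outcomes rather than the study's 36 patients:

```python
# Univariate logistic regression of cosmetic outcome on specimen/breast volume ratio (hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
ratio = rng.uniform(0.02, 0.30, size=36)                   # excised specimen volume / breast volume
# In this placeholder generator, larger ratios make an unfavorable outcome more likely.
unfavorable = (rng.random(36) < np.clip(ratio * 3, 0, 1)).astype(int)

X = sm.add_constant(ratio)
model = sm.Logit(unfavorable, X).fit(disp=0)
print(model.summary())
print("odds ratio per unit increase in the ratio:", float(np.exp(model.params[1])))
```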
abstract_id: PUBMED:29271636 Adjuvant Photothermal Therapy Inhibits Local Recurrences after Breast-Conserving Surgery with Little Skin Damage. Adjuvant treatments following breast-conserving surgery (BCS) are essential to reduce the risk of local recurrences in patients with breast cancer. However, current adjuvant treatments are based on ionizing radiation, which brings radiation-induced damage and amplifies the risk of death. Here we explore the feasibility of using non-ionizing light to induce photothermal therapy as an adjuvant treatment to BCS. In an orthotopic breast cancer mice model, we demonstrate that adjuvant photothermal therapy (aPTT) decreases the incidence of local recurrences after BCS with no expense of cosmetic outcome. In comparison with conventional photothermal therapy, the technique used in aPTT provides more uniformly distributed light energy and less risk of skin burns and local recurrences. Overall, this work represents a departure from the traditional concept of using PTT as an alternative to surgery and reveals the potential of using PTT as an alternative to adjuvant radiation therapy, which is valuable especially for patients susceptible to radiation damage. abstract_id: PUBMED:26260375 The influence of simultaneous integrated boost, hypofractionation and oncoplastic surgery on cosmetic outcome and PROMs after breast conserving therapy. Introduction: We retrospectively investigated the possible influence of a simultaneous integrated boost (SIB), hypofractionation and oncoplastic surgery on cosmetic outcome in 125 patients with stage I-II breast cancer treated with breast conserving therapy (BCT). Patients And Methods: The boost was given sequentially (55%) or by SIB (45%); fractionation was conventional (83%) or hypofractionated (17%); the surgical technique was a conventional lumpectomy (74%) or an oncoplastic technique (26%). We compared cosmetic results subjectively using a questionnaire independently completed by the patient and by the physician and objectively with the BCCT.core software. Independent-samples T-tests were used to compare outcome in different groups. Patients also completed the EORTC QLQ C30 and BR23. Results: Univariate analyses indicated no significant differences of the cosmetic results (P ≤ 0.05) for the type of boost or fractionation. However, the conventional lumpectomy group scored significantly better than the oncoplastic group in the BCCT.core evaluation, without a significant difference in the subjective cosmetic evaluation. Quality of life outcome was in favour of SIB, hypofractionation and conventional surgery. Conclusion: Our study indicates that the current RT techniques seem to be safe for cosmetic outcome and quality of life. Further investigation is needed to verify the possible negative influence of oncoplastic surgery on the cosmetic outcome and the quality of life as this technique is especially indicated for patients with an unfavourable tumour/breast volume ratio. abstract_id: PUBMED:29473960 Breast-specific factors determine cosmetic outcome and patient satisfaction after breast-conserving therapy: Results from the randomized COBALT study. Background And Objectives: To identify breast-specific factors and the role of tumor, treatment, and patient-related items in influencing patient opinion on cosmesis and satisfaction after breast-conserving therapy (BCT). Methods: Data from the randomized COBALT study was used. 
At 3, 12, and 36 months, 128 patients with T1-T2 breast cancer completed a questionnaire on breast-specific factors, overall cosmetic outcome and patient satisfaction, using a 4-point Likert scale. Results: There was a strong positive correlation between breast-specific factors, overall cosmetic outcome, and satisfaction at all time-points. Excellent/good cosmetic outcomes and satisfaction decreased during follow-up. A shift was noted in the degree of influence of the various breast-specific factors. At 3 years, symmetry factors such as size, shape, and nipple position largely determined a patient's opinion on the final cosmesis, followed by firmness. The risk of an unacceptable outcome was associated with young age and large excision volumes. Conclusion: A questionnaire including breast-specific questions provides important information on final cosmetic results and satisfaction after BCT. These outcomes can also be of great value as quality indicators and for pre-operative counseling. The major influence of breast-specific factors on asymmetry underlines the importance of achieving an optimal excision volume at the initial procedure. abstract_id: PUBMED:29688847 The COSMAM TRIAL: a prospective cohort study of quality of life and cosmetic outcome in patients undergoing breast conserving surgery. Background: Cosmetic result in breast cancer surgery is gaining increasing interest. Currently, some 30-40% of the patients treated with breast conserving surgery (BCS) are dissatisfied with their final cosmetic result. In order to prevent disturbing breast deformity, oncoplastic surgical techniques have been introduced. The extent of different levels of oncoplastic surgery incorporated in breast conserving surgery and its value with regard to cosmetic outcome, patient satisfaction and quality of life remains to be defined. The aim of this prospective cohort study is to investigate quality of life and satisfaction with cosmetic result in patients with breast cancer, undergoing standard lumpectomy versus level I or II oncoplastic breast conserving surgery. Methods: Female breast cancer patients scheduled for BCS, from 18 years of age, referred to our outpatient clinic from July 2015, are asked to participate in this study. General, oncologic and treatment information will be collected. Patient satisfaction will be scored preceding surgery, and at 1 month and 1 year follow-up. Photographs of the breast will be used to score cosmetic result both by the patient, an independent expert panel and BCCT.Core software. Quality of life will be measured by using the BREAST-Q BCT, EORTC-QLQ and EQ-5D-5L questionnaires. Discussion: The purpose of this prospective study is to determine the clinical value of different levels of oncoplastic techniques in breast conserving surgery, with regard to quality of life and cosmetic result. Analysis will be carried out by objective measurements of the final cosmetic result in comparison with standard breast conserving surgery. The results of this study will be used for the development of a clinical decision model to guide the use of oncoplastic surgery in future BCS. Trial Registration: Central Commission of Human Research (CCMO), The Netherlands: NL54888.015.15. Medical Ethical Commission (METC), Maxima Medical Centre, Veldhoven, The Netherlands: 15.107. Dutch Trial Register: NTR5665, retrospectively registered, 02-25-2016.
abstract_id: PUBMED:32867462 Safety and short-term efficacy analysis of breast-conserving surgery combined with intraoperative radiotherapy for early-stage breast cancer Objective: To analyze the safety and short-term efficacy of breast-conserving surgery combined with intraoperative radiotherapy for early-stage breast cancer. Methods: A total of 101 consecutive patients who received breast-conserving surgery plus intraoperative radiotherapy were recruited to summarize the recent follow-up results and clinicopathological data. Univariate analysis and Logistic regression model were used to evaluate the factors affecting the postoperative adverse reactions and cosmetic effects. Results: Among 101 patients, 4 patients had recurrence or metastasis. The 3-years disease free survival rate was 94.9%, and the 3-years cumulative recurrence rate was 5.1%. Univariate analysis showed that the menstrual status and postoperative whole breast radiotherapy were associated with the postoperative adverse reactions (P<0.05). The T stage and applicator diameter were associated with the cosmetic effect (P<0.05). Multivariate analysis indicated that the diameter of the applicator (OR=3.701, P=0.026) and postoperative whole breast radiotherapy (OR=5.962, P=0.005) were independent factors for the postoperative adverse reactions, and the diameter of the applicator (OR=2.522, P=0.037) was an independent factor for the cosmetic effect. Conclusion: Breast-conserving surgery combined with intraoperative radiotherapy shows safety and good short-term efficacy in low-risk early-stage breast cancer. abstract_id: PUBMED:27538776 Single dose IOERT versus whole breast irradiation : Cosmetic results in breast-conserving therapy. Background: Scarce data are available about the cosmetic result of single dose intraoperative electron radiotherapy (IOERT) in breast-conserving radiotherapy. Methods And Materials: We included 71 breast cancer patients. Breast-conserving surgery and sentinel node procedure had started almost 3 years earlier. Subsequently, 26 patients were treated with IOERT and 45 patients received postoperative whole breast irradiation (WBI). For both groups we determined seven dimensionless asymmetry features. We compared the subjectively and the objectively derived cosmetic scores with each other. Results: For four asymmetry features we noted significantly smaller differences for patients treated with IOERT when compared to those treated with WBI: relative breast contour difference, relative breast area difference and relative breast overlap difference. After correcting for excision volume a significant difference was noticed also for relative lower breast contour. For the IOERT group the cosmetic scores "excellent or good" as determined by each patient and one physician were 88 and 96 %, respectively. When the overall cosmetic scores for patients treated with IOERT and WBI were compared to those of the objectively derived scores, there was a fair level of agreement. Conclusion: For patients treated with IOERT we noted less asymmetry and high rates of "good or excellent" subjectively derived cosmetic scores. The level of agreement between the subjectively and the objectively derived cosmetic scores was limited. Due to the small sample size and the design of the study no definitive conclusions can be drawn. Answer: Yes, concurrent radiochemotherapy does affect cosmetic results in the adjuvant setting after breast-conserving surgery. 
According to the ARCOSEIN multicenter, Phase III study, the concurrent use of chemotherapy with radiotherapy after breast-conserving surgery is significantly associated with greater differences between the treated and untreated breasts. However, these differences did not translate into patients' lessened satisfaction with cosmesis. Patients' overall satisfaction with the cosmetic outcome was not statistically different between the sequential and concurrent arms, with approximately 92% reporting at least satisfactory results. In contrast, physician assessment of overall cosmesis was less favorable in the concurrent arm, with lower rates of at least satisfactory results (60% vs. 85%; p = 0.001). The concordance for overall satisfaction with cosmesis between patients and doctors was only fair (kappa = 0.62) (PUBMED:17448869).
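The patient-physician concordance quoted above (kappa = 0.62) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A brief, purely illustrative sketch with made-up satisfactory/not-satisfactory ratings is shown below; it is not the ARCOSEIN data:

```python
# Cohen's kappa for paired patient vs. physician cosmesis ratings (made-up labels).
from sklearn.metrics import cohen_kappa_score

# 1 = at least satisfactory, 0 = less than satisfactory; one pair of ratings per woman.
patient_rating   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
physician_rating = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

kappa = cohen_kappa_score(patient_rating, physician_rating)
print(f"Cohen's kappa = {kappa:.2f}")  # the study reports kappa = 0.62 across 214 assessed patients
```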
Instruction: Specific work-related problems: do they matter in access to work-related medical rehabilitation? Abstracts: abstract_id: PUBMED:24217877 Specific work-related problems: do they matter in access to work-related medical rehabilitation? Background: The study determined if need for rehabilitation in work-related medical rehabilitation (WMR) is more frequently characterised by specific work-related problems than in conventional medical rehabilitation (MR). Methods: In 6 rehabilitation centres, persons with back pain (M50, M51, M53, M54) were surveyed about work-related restrictions of work ability at begin of their rehabilitation. Results: Differences in work ability of WMR and MR patients confirmed need-related access. However, these differences were exclusively explained by screening-based access decisions. If access was not supported by a screening, WMR and MR patients did not differ. Decisions by rehabilitation centres compared with decisions by insurance agencies resulted in similar differences between WMR and MR patients. Conclusion: Screening-based decisions about access enable a more need-related access to WMR. As there were no differences between access decisions by rehabilitation centres and insurance agencies, access decisions can be realised already by the insurance agency. abstract_id: PUBMED:31451849 Work-related medical rehabilitation in neurology : Effective on the basis of individualized rehabilitant identification Background: Evidence for the effectiveness of work-related medical rehabilitation (WMR) for a successful return to work (RTW) is lacking for neurological diseases. The aim of this study was therefore to correlate the cross-indication screening instrument for the identification of the demand of work-related medical rehabilitation (SIMBO‑C) with the individualized clinical anamnestic determination of severe restrictions of work ability (SRWA) as a required access criterion for admittance to neurological WMR. A further aim was to compare the rate of successful RTW in rehabilitants with and without WMR measures 6 months after inpatient rehabilitation. Methods: On admission SRWA were routinely screened by an individualized clinical anamnestic determination with subsequent assignment to WMR or conventional rehabilitation. At the beginning of rehabilitation the SIMBO-C was applied and 6 months after the rehabilitation the RTW status was surveyed. Results: Of the 80 rehabilitants 44 (55%) received WMR. On admission they showed a higher SIMBO-C score (41.3 ± 15.7 vs. 26.2 ± 18.6 points, p = 0.002), on discharge more often locomotor and psychomental disorders (55% vs. 36%, p = 0.10 and 46% vs. 22%, p = 0.03, respectively) and longer incapacitation times after rehabilitation of > 4 weeks (66% vs. 33%, p = 0.02) compared to those without WMR. At 6 months follow-up after discharge the 2 groups did not significantly differ with respect to successful RTW (61% vs. 66%, p = 0.69). The SIMBO-C (cut-off ≥ 30 points) showed a medium correlation with the individualized clinical anamnestic determination of SRWA (r = 0.33, p = 0.01). Conclusion: The applied neurological WMR concept accomplished a comparable RTW rate between rehabilitants with SRWA by a WMR and those without SRWA and conventional rehabilitation. The SIMBO-C should only be used in combination with the individualized anamnesis to identify SRWA. abstract_id: PUBMED:27465148 Effectiveness of work-related medical rehabilitation in cancer patients: study protocol of a cluster-randomized multicenter trial. 
Background: Work is a central resource for cancer survivors as it not only provides income but also impacts health and quality of life. Additionally, work helps survivors to cope with the perceived critical life event. The German Pension Insurance provides medical rehabilitation for working-age patients with chronic diseases to improve and restore their work ability, and support returning to or staying at work, and thus tries to sustainably avoid health-related early retirement. Past research showed that conventional medical rehabilitation programs do not support returning to work sufficiently and that work-related medical rehabilitation programs report higher return-to-work rates across several health conditions, when compared to medical rehabilitation. Therefore, the current study protocol outlines an effectiveness study of such a program for cancer survivors. Methods: To evaluate the effectiveness of work-related medical rehabilitation in cancer patients we conduct a cluster-randomized multicenter trial. In total, 504 rehabilitation patients between 18 and 60 years with a Karnofsky Performance Status of ≥70 %, a preliminary positive social-medical prognosis of employability for at least 3 h/day within the next 6 months and an elevated risk of not returning to work will be recruited in four inpatient rehabilitation centers. Patients are randomized to the work-related medical rehabilitation program or the conventional medical rehabilitation program based on their week of arrival at each rehabilitation center. The work-related medical rehabilitation program comprises additional work-related diagnostics, multi-professional team meetings, an introductory session as well as work-related functional capacity training, work-related psychological groups, and social counseling. All additional components are aimed at the adjustment of the patients' capacity in relation to their individual job demands. Role functioning defines the main study outcome and will be assessed with the EORTC-QLQ30. Secondary outcome measures are the remaining scales of the EORTC-QLQ30, fatigue, self-rated work ability, disease coping, participation in working life, realization of work-related goals and therapies during rehabilitation, and treatment satisfaction. Discussion: A positive evaluation of work-related medical rehabilitation in cancer patients is expected due to the promising findings on the effectiveness of such programs for patients with other health conditions. Results may support the dissemination of work-related medical rehabilitation programs in German cancer rehabilitation. Trial Registration: German Clinical Trials Register DRKS00007770 . Registered 13 May 2015. abstract_id: PUBMED:33941123 Work-related medical rehabilitation in patients with mental disorders: the protocol of a randomized controlled trial (WMR-P, DRKS00023175). Background: Various rehabilitation services and return-to-work programs have been developed in order to reduce sickness absence and increase sustainable return-to-work. To ensure that people with a high risk of not returning to work can participate in working life, the model of work-related medical rehabilitation was developed in Germany. The efficacy of these programs in patients with mental disorders has been tested in only a few trials with very specific intervention approaches. To date, there is no clear evidence of the effectiveness of work-related medical rehabilitation implemented in real-care practice. 
Methods/design: Our randomized controlled trial will be conducted in six rehabilitation centers across Germany. Within 15 months, 1800 patients with mental disorders (300 per rehabilitation center) will be recruited and assigned one-to-one either to a work-related medical rehabilitation program or to a conventional psychosomatic rehabilitation program. Participants will be aged 18-60 years. The control group will receive a conventional psychosomatic rehabilitation program without additional work-related components. The intervention group will receive a work-related medical rehabilitation program that contains at least 11 h of work-related treatment modules. Follow-up data will be assessed at the end of the rehabilitation and 3 and 12 months after completing the rehabilitation program. The primary outcome is a stable return to work. Secondary outcomes cover several dimensions of health, functioning and coping strategies. Focus groups and individual interviews supplement our study with qualitative data. Discussion: This study will determine the relative effectiveness of a complex and newly implemented work-related rehabilitation strategy for patients with mental disorders. Trial Registration: German Clinical Trials Register ( DRKS00023175 , September 29 2020). abstract_id: PUBMED:30985930 Work-related medical rehabilitation in patients with cancer-Postrehabilitation results from a cluster-randomized multicenter trial. Background: Effective multidisciplinary rehabilitation programs supporting the return to work have become increasingly relevant for cancer survivors. In Germany, inpatient work-related medical rehabilitation programs consider treatment modules of work-related diagnostics, work-related functional capacity training, psychosocial groups, and intensified social counseling. The authors tested the effectiveness of a work-related medical rehabilitation program compared with conventional medical rehabilitation using a cluster-randomized multicenter trial (German Clinical Trial Register: DRKS00007770). Methods: In total, 484 patients with cancer were recruited at 4 rehabilitation centers. Patients at a center who started their rehabilitation in the same week represented a cluster. These clusters were randomly assigned using computer-generated randomization schedules either to an intervention group (IG) or to a control group (CG). The primary outcome was role functioning. Secondary outcomes were other quality-of-life domains and the return to work. Results: In total, 425 patients (210 in the IG) were included in the analysis at the 3-month follow-up. There was no significant difference between the IG and CG in role functioning (b = 3.55; 95% CI, -1.18 to 8.29; P = .142). Participants in the IG reported better physical functioning (b = 5.99; 95% CI, 3.33-8.65; P < .001), less physical fatigue (b = -5.09; 95% CI, -9.62 to -0.56; P = .028), and less pain (b = -6.24; 95% CI, -11.24 to -1.23; P = .015). Conclusions: Work-related medical rehabilitation had no effect on the primary outcome compared with conventional medical rehabilitation but may enhance physical functioning and reduce physical fatigue and pain. abstract_id: PUBMED:28219096 Work-Related Medical Rehabilitation Work-related medical rehabilitation (WMR) is a strategy to improve work participation in patients with poor work ability. This review summarizes the state of knowledge on WMR. The prevalence of poor work ability and corresponding need for WMR is high (musculoskeletal disorders: 43%; mental disorders: 57%). 
The meta-analysis of randomized controlled trials in patients with musculoskeletal disorders shows better return to work outcomes after one year in favor of WMR patients compared to patients participating in usual medical rehabilitation. The amount of work-related measures in rehabilitation has clearly increased in recent years. A direct involvement of the workplace and a closer cooperation with employers and occupational health physicians may further improve the outcomes of WMR. abstract_id: PUBMED:27534527 Work-related medical rehabilitation in patients with musculoskeletal disorders: the protocol of a propensity score matched effectiveness study (EVA-WMR, DRKS00009780). Background: Musculoskeletal disorders are one of the most important causes of work disability. Various rehabilitation services and return-to-work programs have been developed in order to reduce sickness absence and increase sustainable return-to-work. As the effects of conventional medical rehabilitation programs on sickness absence duration were shown to be slight, work-related medical rehabilitation programs have been developed and tested. While such studies proved the efficacy of work-related medical rehabilitation compared with conventional medical rehabilitation in well-conducted randomized controlled trials, its effectiveness under real-life conditions has yet to be proved. Methods/design: The cohort study will be performed under real-life conditions with two parallel groups. Participants will receive either a conventional or a work-related medical rehabilitation program. Propensity score matching will be used to identify controls that are comparable to treated work-related medical rehabilitation patients. Over a period of three months, about 18,000 insured patients with permission to undergo a musculoskeletal rehabilitation program will be contacted. Of these, 15,000 will receive a conventional and 3,000 a work-related medical rehabilitation. We expect a participation rate of 40 % at baseline. Patients will be aged 18 to 65 years and have chronic musculoskeletal disorders, usually back pain. The control group will receive a conventional medical rehabilitation program without any explicit focus on work, work ability and return to work in diagnostics and therapy. The intervention group will receive a work-related medical rehabilitation program that in addition to common rehabilitation treatments contains 11 to 25 h of work-related treatment modules. Follow-up data will be assessed three and ten months after patients' discharge from the rehabilitation center. Additionally, department characteristics will be assessed and administrative data records used. The primary outcomes are sick leave duration, stable return to work and subjective work ability. Secondary outcomes cover several dimensions of health, functioning and coping strategies. Discussion: This study will determine the relative effectiveness of a complex, newly implemented work-related rehabilitation strategy for patients with musculoskeletal disorders. Trial Registration: German Clinical Trials Register ( DRKS00009780 , February 10, 2016).
abstract_id: PUBMED:3569731 Work and health: work-related disease. Work and health interact with one another in the following ways: when work is adapted to human goals and capacities, work can play an important role in promoting both physical and mental health; exposure to chemical, physical and biological risk factors can lead to occupational diseases; both the work environment and working conditions can play a role, together with other risk factors, in the development of diseases characterized by a multiple etiology. Such diseases have been commonly called work-related. In occupational disease, there is a direct cause-effect relationship between hazard and disease, whereas in work-related disease, the work environment contributes, but as one of a number of factors, to the development of the disease. Because of their multifactorial and complex etiology, work-related diseases represent an interesting subject for epidemiological surveys. At present, epidemiological studies of work-related diseases can take the following forms. Descriptive studies relating morbidity to occupation, work area, task, or specific exposure can be concerned with anything from "hard" indicators of illness (such as death) to "soft" indicators (such as non-specific symptoms). Descriptive studies of work-related diseases utilizing the latter indicators of disease are usually ad hoc studies, since morbidity registers do not exist for any disease in our country. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:34740350 Satisfying medical and rehabilitation needs positively influences returning to work after a work-related injury: an analysis of national panel data from 2018 to 2019. Background: This study examined how meeting the medical needs of injured workers after initial treatment may affect their return to work, using data from the Panel Study of Workers' Compensation Insurance. Methods: This study was designed as a longitudinal study, which used data from the second-year follow-up survey conducted in the secondary cohort of the Panel Study of Workers' Compensation Insurance. The odds ratio (OR) and 95% confidence interval were estimated through binomial and multinomial logistic regression analyses to examine the effects of unmet medical needs on workers' return to original work and return to work overall (including reemployment). Results: The return-to-original-work OR of workers whose rehabilitation needs were met was 1.35 (1.12-1.63), while the return-to-work OR was 1.20 (1.03-1.41). The return-to-original-work OR of workers whose medical needs were met was 1.64 (1.18-2.27), while the return-to-work OR was 1.39 (1.07-1.80). In terms of disability rating, the return-to-work ORs of workers with mild disabilities whose medical/rehabilitation needs were not met and those of workers without disabilities were 1.71 (1.17-2.49) and 1.97 (1.27-3.08), respectively. In the case of regular/temporary workers, the return-to-work ORs of workers whose medical/rehabilitation needs were not met were 1.54 (1.12-2.13) and 1.27 (1.03-1.56), respectively. Conclusions: For workers who sustained work-related injuries, providing medical accessibility and meeting rehabilitation needs were found to be important predictors of return to work after initial treatment. abstract_id: PUBMED:30912528 Development and implementation of work-related medical rehabilitation in cancer patients using organizational ethnography and action research methodology.
Objectives: To develop a work-related medical rehabilitation (WMR) program for cancer patients based on the best available evidence, the expertise of rehabilitation professionals and the perspective of the patients, to ensure the fidelity of its implementation and to prepare its subsequent outcome evaluation. Material And Methods: The implementation study was based on organizational ethnography and action research, and followed a multimethod, participatory and iterative approach to data collection and analysis. The authors carried out observations in 4 rehabilitation centers and conducted focus groups with rehabilitation professionals and patients. The obtained data were subjected to qualitative content analysis. All findings were discussed promptly with the rehabilitation centers at feedback meetings that contributed to the further development of the program. Results: The following WMR modules were defined based on the findings: additional work-related diagnostics, multi-professional team meetings, an introductory session, work-related functional capacity training, work-related psychological groups and intensified social counseling. Process descriptions for the subsequent evaluation of the program via a cluster-randomized trial were also developed, containing, e.g., instructions for patient information and recruitment. Conclusions: Implementation studies can help to prepare for valid trials as they facilitate ensuring the feasibility, acceptability and fidelity of program implementation and evaluation. Organizational ethnography and action research are suitable methods for carrying out such studies. Int J Occup Med Environ Health. 2019;32(2):217-28. Answer: Yes, specific work-related problems do matter in access to work-related medical rehabilitation (WMR). The study identified in PUBMED:24217877 found that differences in work ability between WMR and conventional medical rehabilitation (MR) patients confirmed need-related access. However, these differences were exclusively explained by screening-based access decisions. When access was not supported by a screening, WMR and MR patients did not differ. This suggests that specific work-related problems, as identified through screening, are significant in determining the need for WMR. Moreover, the study in PUBMED:31451849 supports the importance of specific work-related problems by showing that rehabilitants with severe restrictions of work ability (SRWA) on admission to neurological WMR had higher SIMBO-C scores, indicating a greater need for work-related rehabilitation. The study concluded that the SIMBO-C should be used in combination with individualized anamnesis to identify SRWA, which further emphasizes the role of specific work-related issues in accessing WMR. Additionally, the abstract from PUBMED:28219096 highlights that the prevalence of poor work ability and the corresponding need for WMR is high among patients with musculoskeletal and mental disorders. The meta-analysis mentioned in this abstract shows better return-to-work outcomes for WMR patients compared to those participating in usual medical rehabilitation, suggesting that addressing specific work-related problems through WMR can lead to more successful outcomes. In summary, specific work-related problems are indeed important in determining access to WMR, as they help to identify individuals who may benefit most from such rehabilitation programs. 
Screening tools and individual assessments are used to ascertain these specific problems and guide access decisions to WMR (PUBMED:24217877, PUBMED:31451849, PUBMED:28219096).
Instruction: Does satisfaction with abilities mediate the relationship between the impact of rheumatoid arthritis on valued activities and depressive symptoms? Abstracts: abstract_id: PUBMED:11409668 Does satisfaction with abilities mediate the relationship between the impact of rheumatoid arthritis on valued activities and depressive symptoms? Objective: Both the impact of rheumatoid arthritis (RA) on valued life activities and dissatisfaction with abilities have been linked to depression among individuals with RA. We integrated these concepts by examining the hypothesis that satisfaction with one's abilities may explain the mechanism by which the impact of RA on valued activities leads to depression. Methods: Data were collected over 2 years (1997 and 1998) through interviews with the University of California, San Francisco, RA panel. Analyses examined whether activity impairment in 1997 predicted later (1998) dissatisfaction with abilities and depression. Results: Greater impact on activities predicted dissatisfaction with abilities, which in turn was associated with higher depression scores. There was no direct relationship between activity impact and depression when satisfaction with abilities was considered. Conclusion: Satisfaction with abilities mediated the relationship between the impact of RA on valued activities and an increase in depressive symptoms, suggesting a need to assess not only physical decline but also individuals' interpretation of the decline. abstract_id: PUBMED:12790252 Effect of valued activity disability, social comparisons, and satisfaction with ability on depressive symptoms in rheumatoid arthritis. This longitudinal study identified a model through which function affects the psychological well-being of individuals with rheumatoid arthritis (RA). Results of hierarchical linear regression analyses (N = 436) demonstrated that greater physical impairment resulted in greater disability in valued activities and engagement in unfavorable social comparisons. All 3 factors--greater physical impairment, greater disability in valued activities, and unfavorable comparison evaluations--predicted low satisfaction with abilities. Low satisfaction with abilities was the most important predictor of higher depressive symptoms and mediated the impact of physical impairment, valued activity disability, and unfavorable comparisons on depressive symptoms. Results highlight the role of personal meaning attached to changes in functional status in predicting the long-term psychological well-being of individuals with chronic illnesses such as RA. abstract_id: PUBMED:9313397 Satisfaction with abilities and well-being: development and validation of a questionnaire for use among persons with rheumatoid arthritis. Objective: To develop a questionnaire to measure satisfaction with abilities and well-being of persons with rheumatoid arthritis (RA). Methods: We used data from a panel study of persons with RA (n = 446). The 13 items of the Satisfaction with Abilities and Well-Being Scale (SAWS) were based on problematic aspects of RA and domains of life activities. Results: The SAWS demonstrated internal consistency (Cronbach's alpha = 0.93). Two subscales were identified, Satisfaction with Abilities and Satisfaction with Well-Being, which also demonstrated internal consistency (alpha = 0.91 and alpha = 0.82, respectively). Construct validity was supported by significant correlations with psychosocial and clinical variables.
After controlling for demographic, psychosocial, and clinical characteristics, there was a significant inverse association between SAWS scores and depressive symptoms scores. Conclusions: The SAWS appears to be a valid and reliable measure of satisfaction with abilities and well-being among persons with RA. Longitudinal studies will examine the role of satisfaction in the development of depressive symptoms. abstract_id: PUBMED:15334431 Impact of social support on valued activity disability and depressive symptoms in patients with rheumatoid arthritis. Objective: To examine the impact of instrumental and emotional support on valued life activity (VLA) disability and depressive symptoms. Instrumental support was expected to affect VLA disability; emotional support was expected to be associated with depressive symptoms and moderate the impact of VLA disability on depressive symptoms. Methods: Data were collected over 3 years through interviews with the University of California, San Francisco, Rheumatoid Arthritis Panel. Analyses assessed whether instrumental support predicted later VLA disability and whether emotional support predicted both concurrent and later depressive symptoms. Results: Receiving adequate instrumental support was associated with less subsequent VLA disability. Strong associations were noted between both VLA disability and emotional support with concurrent depressive symptoms. No relationship was found between emotional support and later depression. No evidence was found for the hypothesis that emotional support moderated the impact of VLA disability on depressive symptoms. Conclusion: Results highlight the need to assess different types of support and their unique impact on critical outcomes. Instrumental support is beneficial to the maintenance of valued activities, a critical factor in the psychological adjustment of individuals living with rheumatoid arthritis. Emotional support has a significant short-term impact on depression, although it may not buffer the impact of VLA disability on future depression. abstract_id: PUBMED:8605266 The impact of rheumatoid arthritis on life activities. Objective: To examine the proportion of valued activities performed by persons with rheumatoid arthritis (RA) and the change in this proportion over a 5-year period and to compare the performance and loss of valued activities of persons with and without RA. Methods: Data were drawn from a longitudinal panel study of persons with RA. Valued activities were assessed using a standardized questionnaire. Results: Over a 5-year period, persons with RA lost approximately 10% of the activities they had valued. Losses were noted in every domain of activity. The greatest losses were seen in work-related, service, nurturant, cultural and leisure, and social participation activities. Compared to persons without arthritis, persons with RA performed fewer valued activities at baseline, and lost more valued activities over the 5-year period. Conclusions: RA takes a considerable toll on the ability of people with the disease to perform valued life activities. Other research has shown that the loss of valued activities is a strong risk factor for the development of depressive symptoms among women with RA. abstract_id: PUBMED:7857996 Life activities of persons with rheumatoid arthritis with and without depressive symptoms. Objective: Persons with rheumatoid arthritis (RA) report more activity losses than persons without RA. 
Persons with RA who report depressive symptoms experience lower levels of functioning. We integrated these findings and examined the association of depressive symptoms with the activities in which persons with RA participate. Methods: We surveyed 726 persons with RA and 192 matched controls. Within each group, we examined the perceived importance and actual performance of 65 life activities by individuals with and without depressive symptoms. Results: Significantly more individuals with RA reported depressive symptoms. Among persons with RA, those who had depressive symptoms performed fewer activities, particularly fewer valued activities. In contrast, among controls, those who were depressed rated fewer activities as important, but exhibited few significant differences in the performance of activities. Conclusions: Depressive symptoms and life activities are associated differently among persons with RA and controls. The higher rate of depression among persons with RA may be partially due to the inability to perform valued activities. abstract_id: PUBMED:23478596 Impact of fatigue on psychological outcomes in adults living with rheumatoid arthritis. Background: Fatigue has been shown to be a prevalent symptom in patients with rheumatoid arthritis, having a negative impact on health-related quality of life. However, aside from depression, related fatigue has not been linked to specific psychological outcomes and is not a common target in treatment programs. Objectives: The aim of the study was to examine the unique influence of rheumatoid arthritis-related fatigue on three psychological outcomes: depressive symptoms, perceived health impairment, and satisfaction with abilities. Methods: In a large (n = 200) convenience sample of older adults who provided informed consent, multiple regression analyses were conducted to determine the direct and unique impact of demographic variables, functional impairment, pain, and fatigue on each of three psychological outcomes: depressive symptoms, perceived health impairment, and satisfaction with ability. Results: Fatigue significantly and uniquely contributed to each of the psychological outcomes, above and beyond pain and functional impairment. The variances of depressive symptoms, perceived health impairment, and satisfaction with ability accounted for by fatigue were highly significant. Discussion: The study indicates that rheumatoid arthritis-related fatigue contributes to diminished psychological well-being in older adults aging with rheumatoid arthritis and suggests the need for psychoeducational and management strategies that specifically target fatigue as part of an overall rheumatoid arthritis management program. Future research should attempt to obtain a larger sample of male and younger patients to determine if there are significant gender and age differences in the impact of fatigue on psychological outcomes. abstract_id: PUBMED:33119380 Relationship of medical comorbidities to psychological health at 2 and 5 years following traumatic brain injury (TBI). Objective: To examine the relationship between medical comorbidities and psychological health outcomes at 2 and 5 years following traumatic brain injury (TBI). Method: Veterans Affairs (VA) TBI Model System participants who completed a 2-year (n = 225) and/or 5-year (n = 283) follow-up with a comorbidities interview were included in the current study. 
Psychological health outcomes were assessed using the Patient Global Impression of Change (PGIC), Patient Health Questionnaire-9 (PHQ-9), and Satisfaction with Life Scale (SWLS). While controlling for known predictors of outcome, the relationship of overall comorbidity burden to psychological outcomes was examined cross-sectionally using generalized linear regression at 2 and 5 years post-TBI. Lasso regularization was used to examine relationships of specific comorbid conditions to outcome. Results: Greater comorbidity burden was significantly associated with lower satisfaction with life at 2 and 5 years post-TBI and was associated with greater depressive symptomatology at 5 years post-TBI. Chronic pain was associated with lower satisfaction with life and greater depressive symptoms at both 2- and 5-year follow-up. Sleep apnea was associated with lower satisfaction with life and greater depressive symptoms at 5-year follow-up. Rheumatoid arthritis was associated with lower satisfaction with life and lower levels of perceived improvement in health and well-being at the 5-year follow-up. Implications: Results suggest that medical comorbidities may have a cumulative impact on adverse psychological health outcomes in chronic stages of TBI. This study further highlights the complexity of patients with TBI and the importance of identifying medical comorbidities as they provide potential targets for intervention. (PsycInfo Database Record (c) 2021 APA, all rights reserved). abstract_id: PUBMED:31049848 Shared Decision-Making and Patient Satisfaction in Japanese Rheumatoid Arthritis Patients: A New "Preference Fit" Framework for Treatment Assessment. Introduction: We have developed a new framework to assess shared decision-making (SDM) as a tool to improve patient satisfaction. This framework is based on a "preference fit" index that relates SDM to patient treatment preferences and patient satisfaction in a sample of rheumatoid arthritis (RA) patients in Japan. Methods: We surveyed 500 RA patients in Japan and explored the interactions between the treatment preference fit index, SDM, and overall patient satisfaction. Results: Our new preference fit index reveals significant impact on patient satisfaction: the better the fit between SDM and patient preferences, the higher the patient satisfaction with the current treatment. Patients treated with biologic agents were more satisfied. Patients suffering from depression or migraines scored significantly lower both on our preference fit measure and for overall patient satisfaction. Conclusion: The association between depression and a low treatment preference fit suggests that depression may pose challenges to SDM and that doctors in Japan are less attuned to the SDM preferences of depressed patients. Funding: Janssen Pharmaceutical KK. abstract_id: PUBMED:11352254 Activity loss and the onset of depressive symptoms: do some activities matter more than others? Objective: This study continues an investigation into the role of decline in performance of valued life activities in the development of depressive symptoms among persons with rheumatoid arthritis (RA). We examined whether declines in specific types of activities are important in the onset of depressive symptoms or whether the important factor is simply the overall burden of activity decline. Methods: Data from a longitudinal study of persons with RA, for which individuals are interviewed annually, were used. 
Two analyses (n = 344 and 310) were conducted because of differences in the way life activities were assessed over time. Each analysis covered 4 interviews (1989-1992 and 1995-1998). Analyses were structured so that the decline in performance of life activities clearly preceded the development of depressive symptoms. The outcome variable was the presence of depressive symptoms at time 4; primary independent variables were activity decline between time 2 and time 3. Individuals with high levels of depressive symptoms prior to time 4 were excluded from the analyses. Results: In both analyses, total decline in performance of life activities was an important predictor of subsequent high levels of depressive symptoms. However, some activity domains were more closely linked to the onset of new depressive symptoms than others. In particular, declines in the ability to perform recreational activities and engage in social interactions were linked to the onset of new depressive symptoms. Conclusion: Declines in the ability to engage in recreational activities and social interactions appear to significantly increase the risk of new depressive symptoms. These findings can give direction to both clinical inquiries into patients' functioning and interventions intended to enhance functioning. Answer: Yes, satisfaction with abilities mediates the relationship between the impact of rheumatoid arthritis (RA) on valued activities and depressive symptoms. This was demonstrated in a study where data were collected over two years through interviews with the University of California, San Francisco, RA panel. The analyses showed that greater impact on activities predicted dissatisfaction with abilities, which in turn was associated with higher depression scores. There was no direct relationship between activity impact and depression when satisfaction with abilities was considered, indicating that satisfaction with abilities is a key mediator in this relationship (PUBMED:11409668).
Instruction: Body weight and weight loss: are health messages reaching their target? Abstracts: abstract_id: PUBMED:30221167 Extreme body messages: themes from Facebook posts in extreme fitness and nutrition online support groups. Background: Extreme fitness and nutrition programs, which are said to "jump start" a healthy lifestyle, are becoming more pervasive. While some research has explored the harmful impact of fitspiration, thinspiration, and pro-anorexia/bulimia websites, no studies have examined the content associated with these extreme programs that are advertised as promoting a healthy lifestyle. The objective of this study was to analyze posts found on extreme fitness and nutrition program support pages on Facebook. As these programs have become more popular, along with Facebook platforms for their participants, examining online user-generated posts related to these programs can provide information about the types of messages being promoted, and whether these can contribute to disordered eating and other harmful health behaviors. Methods: User data were selected from one month of public posts on two popular Facebook fitness groups. Both groups were short-term programs that set extreme guidelines for fitness and nutrition. Researchers examined the data for harmful health messages using an individual coding strategy with inter-rater reliability (Kappa = 0.92). Results: The majority (88.6%) of messages analyzed promoted harmful health messages. Common categories of messages included losing weight/fat, promoting dieting/restraint, and harmful body messages, which promoted unsafe ways to attain certain body types. Conclusions: The data analyzed in this study represent overwhelmingly negative commentary relating to harmful health and body messages. Messages in this study normalized dysfunctional behaviors and promoted fixating on certain body parts and objectifying bodies. These Facebook groups, though intended to be a sort of online support forum, provide an open space for body negativity and promotion of extreme behaviors for the sake of thinness. Most concerning, participants of these programs, which are advertised as promoting health, are expressing unhealthy thoughts and behaviors surrounding food, exercise, and body image. Given the ease of accessibility to this content using online platforms, these harmful messages can reach large groups of people and continue adding to a culture that values physical appearance to the detriment of health. abstract_id: PUBMED:18042529 Body weight and weight loss: are health messages reaching their target? Objective: To investigate lay people's knowledge of health risks of overweight, accuracy of self-perception of body weight and perceived benefits of weight loss. Method: A nine-item questionnaire was administered in a cross-sectional survey of adults in metropolitan shopping centres; height and weight were measured. Results: Two hundred and nine (57% female) adults completed the survey. Thirty-eight percent had a healthy BMI (18.5-24.9), 38% were overweight (BMI 25-29.9) and a further 22% were obese (BMI > 30). However, only 46% perceived themselves 'overweight', 50% considered themselves 'just about right' and 4% considered themselves 'underweight'. Of those with a BMI of 25 or greater, 28% considered their weight 'just about right'.
Over 80% thought 'being overweight' was 'likely' or 'very likely' to be a risk factor for cardiovascular disease, hypertension, diabetes and stroke; however, 20% of overweight or obese individuals did not think their health would improve if they lost weight. Conclusion: A significant proportion of overweight or obese individuals do not accurately perceive their body weight and do not recognise the health advantages of weight loss despite recognising excess body weight as a risk factor for chronic diseases. Implications: Increasing the awareness of an individual's BMI and promoting the benefits of modest weight loss may be two underutilized strategies for population level weight control. abstract_id: PUBMED:33719629 Non-stigmatising alternatives to anti-obesity public health messages: Consequences for health behaviour and well-being. This project investigated how alternative non-stigmatising public health messages influence people's health behaviours and well-being, relative to traditional stigmatising weight-loss messages. We conducted three experimental studies (total N = 1281) that compared traditional weight-loss messages to weight-neutral messages (Study 1), weight-inclusive messages (Study 2) and size acceptance messages (Study 3). Results revealed that public health messages have differential effects on health behaviours and well-being, depending on the audience's BMI or perceived weight. However, campaigns that challenge weight stigma and promote body positivity have positive effects on some psychological indicators of health and well-being for people of all body sizes. abstract_id: PUBMED:19507724 Messages about physical activity and nutrition offered by Quebec mass media? As social elements of our environment, mass media are regarded as determinants of individual and population beliefs, social norms and habits. Since it is recognized that they influence population health, this study aims to obtain a better portrait of Quebec media content regarding physical activity and nutrition messages on a public health level. First, we analyzed the content of fictional television shows (n = 13) and advertisements broadcast during those shows (n = 68). Second, we reviewed the content of La Presse newspaper and of French television Société Radio-Canada from 1986 to 2005 with regard to physical activity and nutrition messages. Our results indicate a difference between how men and women are portrayed on French television, with women more often being shown as underweight and men as at or above healthy body weight. The results also show that during the 20-year period of the reviewed content, there were fewer messages about physical activity than about nutrition. To be successful in their goal of improving population health, mass media should address both subjects together in their messages. abstract_id: PUBMED:23844558 Weighing women down: messages on weight loss and body shaping in editorial content in popular women's health and fitness magazines. Exposure to idealized body images has been shown to lower women's body satisfaction. Yet some studies found the opposite, possibly because real-life media (as opposed to image-only stimuli) often embed such imagery in messages that suggest thinness is attainable. Drawing on social cognitive theory, the current content analysis investigated editorial body-shaping and weight-loss messages in popular women's health and fitness magazines. About five thousand magazine pages published in top-selling U.S.
women's health and fitness magazines in 2010 were examined. The findings suggest that body shaping and weight loss are a major topic in these magazines, contributing to roughly one-fifth of all editorial content. Assessing standards of motivation and conduct, as well as behaviors promoted by the messages, the findings reflect overemphasis on appearance over health and on exercise-related behaviors over caloric reduction behaviors and the combination of both behaviors. These accentuations are at odds with public health recommendations. abstract_id: PUBMED:30143478 A Church-Based Weight Loss Intervention in African American Adults using Text Messages (LEAN Study): Cluster Randomized Controlled Trial. Background: African American adults experience a high prevalence of obesity and its associated comorbidities, including diabetes. Church-based interventions have been shown to be effective in decreasing weight in this population. mHealth interventions can address two needs for obesity treatment in this community, including enhancing weight loss and providing wide dissemination. Objective: This study aimed to assess the feasibility and efficacy of a church-based weight loss intervention that incorporates mHealth technology. Methods: In this study, 8 churches (n=97) were randomly assigned to the intervention or delayed intervention condition (control group). We recruited participants through their respective church. Volunteer church members were trained by study staff to deliver the 10-session, 6-month intervention. Participants in the intervention group attended group sessions and received automated short message service (SMS) text messages designed to reinforce behavioral strategies. Conversely, participants in the delayed intervention condition received SMS text messages related to health conditions relevant for African American adults. We obtained measures of body composition, blood pressure, blood glucose, and cholesterol. Results: We successfully recruited 97 African American adults, with a mean age of 56.0 (SE 10.3) years and a mean body mass index of 38.6 (SE 6.4) kg/m2 (89/97, 91.8% females), who attended the churches that were randomized to the intervention (n=68) or control (n=29) condition. Of these, 74.2% (72/97) of the participants (47/68, 69.1% intervention; 25/29, 86.2% delayed intervention) completed the 6-month assessment. The average intervention group attendance was 55%. There was a significant difference in weight loss (P=.04) between participants in the intervention (-1.5 (SE 0.5) kg) and control (0.11 (SE 0.6) kg) groups. Among participants in the intervention group, the correlation between the number of SMS text messages sent and the percent body fat loss was r=.3 with P=.04. The participants reported high satisfaction with the automated SMS text messages. Conclusions: Automated SMS text messages were well-received by participants, suggesting that more enhanced mHealth technologies are a viable option for interventions targeting African American adults. Trial Registration: ClinicalTrials.gov NCT02863887; https://clinicaltrials.gov/ct2/show/NCT02863887 (Archived by WebCite at http://www.webcitation.org/71JiYzizO). abstract_id: PUBMED:27611629 An exploration of appearance and health messages present in pregnancy magazines. This study explored appearance-related images and messages present in pregnancy magazines (e.g. Fit Pregnancy, Pregnancy & Newborn, and Pregnancy Magazine). 
A substantial portion of advertisements and articles promoted appearance-related products or products for postpartum weight loss. Health messages and messages related to diet and exercise were also present. Images also sexualized pregnant women's bodies. Appearance-related messages aimed toward pregnant women may be problematic and may contribute to body dissatisfaction in these women. abstract_id: PUBMED:31017585 Effect of Serial Anthropometric Measurements and Motivational Text Messages on Weight Reduction Among Workers: Pilot Randomized Controlled Trial. Background: Obesity is an endemic problem with significant health and financial consequences. Text messaging has been shown to be a simple and effective method of facilitating weight reduction. In addition, waist-to-hip ratio (WHR) has emerged as a significant anthropometric measure. However, few studies have examined the effect of serial anthropometric self-measurement combined with text messaging. Objective: The primary aim of this study was to assess whether an 8-week program, consisting of weekly serial self-measurements of waist and hip circumference, combined with motivational text messages, could reduce WHR among Australian workers. Methods: This was a community-based, participant-blinded, staggered-entry, parallel group study. Adult workers with access to mobile phones were eligible and recruited through an open access Web-based survey. Participants were randomly allocated to receive intervention or control messages for 8 weeks. Outcome data were self-assessed through a Web-based survey. Results: A total of 60 participants were randomized with 30 participants each allocated to a control and an intervention group. There was no significant change in WHR (P=.43), and all secondary outcome measures did not differ between the intervention group and the control group at the end of the 8-week intervention. Both groups, however, showed a significant decrease in burnout over time (mean [SE]: pre 4.80 [0.39] vs post 3.36 [0.46]; P=.004). The intervention uptake followed a downward trend. Peak participant replies to weekly self-measurements were received in week 3 (14/23, 61%) and the least in week 8 (8/23, 35%). No harm was found to result from this study. Conclusions: This study is an innovative pilot trial using text messaging and serial anthropometric measurements in weight management. No change was detected in WHRs in Australian workers over 8 weeks; therefore, it could not be concluded whether the intervention affected the primary outcome. However, these results should be interpreted in the context of limited sample size and decreasing intervention uptake over the course of the study. This pilot trial is useful for informing and contributing to the design of future studies and the growing body of literature on serial self-measurements combined with text messaging. Trial Registration: Australian New Zealand Clinical Trials Registry ACTRN12616001496404; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=371696&isReview=true (Archived by WebCite at http://www.webcitation.org/73UkKFjSw). abstract_id: PUBMED:24081719 Role of media and peers on body change strategies among adult men: is body size important? There has been limited previous research that has examined the role of sociocultural influences on body change strategies among adult men. The current study investigated the role of specific types of messages (encouragement, teasing and modelling) from peers and the media on the strategies to change weight among adult men. 
Differences were evaluated between 526 men aged from 18 to 60 years from three groups (normal weight, overweight and obese) on body image, body change strategies and messages about their body received from peers and the media. Men were primarily drawn from United States, Australia and Europe. Results showed that messages received by men regarding losing weight or increasing muscle size differed according to weight. Body image and media messages were the strongest predictors of losing weight, whereas body image importance and messages from peers were the strongest predictors of increasing muscles. These findings highlight the importance of sociocultural influences on body change strategies among adult males. abstract_id: PUBMED:21326398 Body image and body change strategies among Tongan adolescents in Tonga and New Zealand. Aim: The aim of this study was to evaluate the body image and body change strategies of adolescents from Tonga, and Tongans who are resident New Zealand. Method: In total, 598 Tongans from Tonga and 388 Tongans from New Zealand completed measures of body image, body change strategies and messages about their body. Results: Tongans in Tonga were more likely to receive positive messages about a larger body from adults at school, church and the media, and losing weight from the media. They were also more likely to adopt strategies to lose weight, increase weight and increase muscles. Conclusion: The large body ideal appears to be still valued in Tonga, whereas Tongans also want to lose weight because of media messages to achieve a healthy body weight. Answer: The question "Body weight and weight loss: are health messages reaching their target?" can be addressed by examining the findings from the provided abstracts. One study found that a significant proportion of overweight or obese individuals do not accurately perceive their body weight and do not recognize the health advantages of weight loss despite recognizing excess body weight as a risk factor for chronic diseases (PUBMED:18042529). This suggests that health messages regarding the risks of being overweight and the benefits of weight loss may not be effectively reaching or influencing this demographic. Another study indicated that extreme fitness and nutrition program support pages on Facebook often promote harmful health messages, including unsafe ways to attain certain body types, which can contribute to disordered eating and other harmful health behaviors (PUBMED:30221167). This implies that the messages intended to promote health are, in fact, promoting unhealthy thoughts and behaviors surrounding food, exercise, and body image. Research also shows that non-stigmatising public health messages, such as those promoting body positivity, have positive effects on some psychological indicators of health and well-being for people of all body sizes (PUBMED:33719629). This suggests that the framing of health messages is crucial and that messages which challenge weight stigma may be more effective in reaching their target audience. Additionally, content analysis of women's health and fitness magazines revealed an overemphasis on appearance over health and on exercise-related behaviors over caloric reduction behaviors (PUBMED:23844558). This could indicate that the messages being disseminated are not aligned with public health recommendations and may not be effectively promoting healthy weight loss strategies. 
Moreover, a church-based weight loss intervention using text messages was found to be effective in African American adults, suggesting that tailored mHealth interventions can be a viable option for reaching specific populations with health messages (PUBMED:30143478). In contrast, a pilot randomized controlled trial found no significant change in waist-to-hip ratio among Australian workers who received motivational text messages and performed serial self-measurements, indicating that this particular approach may not be effective in reaching its target for weight management (PUBMED:31017585). Overall, the evidence from these studies suggests that while some health messages about body weight and weight loss are reaching their target and having positive effects, others may not be as effective due to issues with message framing, delivery method, and the influence of sociocultural factors. It is clear that more work is needed to ensure that health messages are accurately perceived, positively framed, and tailored to the specific needs of different populations to effectively promote healthy weight loss and body image.
Instruction: Is performance on the Wechsler test of adult reading affected by traumatic brain injury? Abstracts: abstract_id: PUBMED:17535531 Is performance on the Wechsler test of adult reading affected by traumatic brain injury? Objectives: The validity of the National Adult Reading Test (NART) as a predictor of premorbid IQ when used with patients who have sustained a traumatic brain injury (TBI) has been questioned in recent years. This study examined whether performance on the Wechsler Test of Adult Reading (WTAR) is similarly affected by TBI in the first year after an injury. Design And Method: The WTAR scores of participants who had sustained a mild TBI (N=82), moderate TBI (N=73), severe TBI (N=61) or an orthopaedic injury (N=95) were compared (cross-sectional study). A subset of 21 mild TBI, 31 moderate TBI, 26 severe TBI and 21 control group participants were additionally reassessed 6 months later to assess the impact of recovery on WTAR scores (longitudinal study). Results: The severe TBI group had significantly lower scores on the WTAR than the mild TBI, moderate TBI and control groups in the cross-sectional study, despite being matched demographically. The findings from the longitudinal study revealed a significant group difference and a small improvement in performance over time but the interaction between group and time was not significant, suggesting that the improvements in WTAR performance over time were not restricted to more severely injured individuals whose performance was temporarily suppressed. Conclusions: These findings suggest that reading performance may be affected by severe TBI and that the WTAR may underestimate premorbid IQ when used in this context, which may cause clinicians to underestimate the cognitive deficits experienced by these patients. abstract_id: PUBMED:27799224 The Wechsler Test of Adult Reading as a Measure of Premorbid Intelligence Following Traumatic Brain Injury. Objective: The current study sought to determine whether the Wechsler Test of Adult Reading (WTAR) provides a stable estimate of premorbid intellectual ability in acutely injured patients recovering from traumatic brain injury (TBI). Method: A total of 135 participants (43 mild TBI [mTBI], 40 moderate/severe TBI [msevTBI], 52 healthy controls) were administered the WTAR at 1 and 12 months post-injury. Results: Despite similar demographic profiles, participants with msevTBI performed significantly worse than controls on the WTAR at both time points. Moreover, the msevTBI group had a significant improvement in WTAR performance over the 1-year period. In contrast, those participants with mTBI did not significantly differ from healthy controls and both the mTBI and control groups demonstrated stability on the WTAR over time. Conclusions: Results indicate that word-reading tests may underestimate premorbid intelligence during the immediate recovery period for patients with msevTBI. Clinicians should consider alternative estimation measures in this TBI subpopulation. abstract_id: PUBMED:18213530 Measuring premorbid IQ in traumatic brain injury: an examination of the validity of the Wechsler Test of Adult Reading (WTAR). Estimation of premorbid IQ in traumatic brain injury (TBI) is clinically and scientifically valuable because it permits the quantification of the cognitive impact of injury. This is achieved by comparing performances on tests of current ability to estimates of premorbid IQ, thereby enabling current capacity to be interpreted in light of preinjury ability. 
However, the validity of premorbid IQ tests that are commonly used for TBI has been questioned. In the present study, we examined the psychometric properties of a recently developed test, the Wechsler Test of Adult Reading (WTAR), which has yet to be examined for TBI. The cognitive performance of a group of 24 patients recovering from TBI (with a mean Glasgow Coma Scale score in the severely impaired range) was measured at 2 and 5 months postinjury. On both occasions, patients were administered three tests that have been used to measure premorbid IQ (the WTAR and the Vocabulary and Matrix Reasoning subtests of the Wechsler Adult Intelligence Scale 3rd Edition, WAIS-III) and three tests of current ability (Symbol Digit Modalities Test-Oral and Similarities and Block Design subtests of the WAIS-III). We found that performance significantly improved on tests of current cognitive ability, confirming recovery. In contrast, stable performance was observed on the WTAR from Assessment 1 (M = 34.25/50) to Assessment 2 (M = 34.21/50; r = .970, p < .001). Mean improvement across assessments was negligible (t = -0.086, p = .47; Cohen's d = -.005), and minimal individual participant change was observed (modal scaled score change = 0). WTAR scores were also highly similar to scores on a demographic estimate of premorbid IQ. Thus, converging evidence (high stability during recovery from TBI and IQ estimates similar to those of a demographic equation) suggests that the WTAR is a valid measure of premorbid IQ for TBI. Where word pronunciation tests are indicated (i.e., in patients for whom English is spoken and read fluently), these results endorse the use of the WTAR for patients with TBI. abstract_id: PUBMED:23858087 Education quality, reading recognition, and racial differences in the neuropsychological outcome from traumatic brain injury. Ethnically diverse examinees tend to perform lower on neuropsychological tests. The practice of adjusting normative comparisons for education level and/or race to prevent overpathologizing low scores is problematic. Education quality, as measured by reading recognition, appears to be a more accurate benchmark for premorbid functioning in certain populations. The present study aimed to extend this line of research to traumatic brain injury (TBI). We hypothesized that a measure of reading recognition, the Wechsler Test of Adult Reading (WTAR), would account for racial differences in neuropsychological performance after TBI. Fifty participants (72% African American, 28% Caucasian) with moderate to severe TBI underwent neuropsychological testing at 1-year post-injury. Reading recognition accounted for all the same variance in neuropsychological performance as race and education (together), as well as considerable additional variance. Estimation of premorbid functioning in African Americans with TBI could be refined by considering reading recognition. abstract_id: PUBMED:31696203 Stronger Together: The Wechsler Adult Intelligence Scale-Fourth Edition as a Multivariate Performance Validity Test in Patients with Traumatic Brain Injury. Objective: This study was designed to evaluate the classification accuracy of a multivariate model of performance validity assessment using embedded validity indicators (EVIs) within the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). Method: Archival data were collected from 100 adults with traumatic brain injury (TBI) consecutively referred for neuropsychological assessment in a clinical setting.
The classification accuracy of previously published individual EVIs nested within the WAIS-IV and a composite measure based on six independent EVIs were evaluated against psychometrically defined non-credible performance. Results: Univariate validity cutoffs based on age-corrected scaled scores on Coding, Symbol Search, Digit Span, Letter-Number-Sequencing, Vocabulary minus Digit Span, and Coding minus Symbol Search were strong predictors of psychometrically defined non-credible responding. Failing ≥3 of these six EVIs at the liberal cutoff improved specificity (.91-.95) over univariate cutoffs (.78-.93). Conversely, failing ≥2 EVIs at the more conservative cutoff increased and stabilized sensitivity (.43-.67) compared to univariate cutoffs (.11-.63) while maintaining consistently high specificity (.93-.95). Conclusions: In addition to being a widely used test of cognitive functioning, the WAIS-IV can also function as a measure of performance validity. Consistent with previous research, combining information from multiple EVIs enhanced the classification accuracy of individual cutoffs and provided more stable parameter estimates. If the current findings are replicated in larger, diagnostically and demographically heterogeneous samples, the WAIS-IV has the potential to become a powerful multivariate model of performance validity assessment. Brief Summary: Using a combination of multiple performance validity indicators embedded within the subtests of the Wechsler Adult Intelligence Scale, the credibility of the response set can be established with a high level of confidence. Multivariate models improve classification accuracy over individual tests. Relying on existing test data is a cost-effective approach to performance validity assessment. abstract_id: PUBMED:31698991 Word-reading ability as a "hold test" in cognitively normal young adults with history of concussion and repetitive head impact exposure: A CARE Consortium Study. Objective: Neuropsychological evaluations include hold tests like word-reading ability as estimates of premorbid intellect thought to be resilient to the effects of neurologic insult. We tested the alternative hypothesis that exposure to concussion or repetitive subclinical head impacts throughout early life may stunt acquisition of word-reading skills. Method: Data were obtained from student-athletes within the CARE Consortium that completed the Wechsler Test of Adult Reading (WTAR). Measures of head trauma burden included self-reported concussion history and cumulative years of exposure to collision sports. We evaluated the effects of head trauma, sociodemographic (race, SES), and academic (SAT/ACT scores, learning disorder) variables on WTAR standard score using linear regression. Analyses were repeated in a football-only subsample estimating age of first exposure to football as a predictor. Results: We analyzed data from 6,598 participants (72.2% white, 39.6% female, mean ± SD age = 18.8 ± 1.2 years). Head trauma variables collectively explained 0.1% of the variance in WTAR standard scores, with years of collision sport exposure weakly predicting lower WTAR standard scores (β = .026-.035, very small effect). In contrast, sociodemographic and academic variables collectively explained 20.9-22.5% of WTAR standard score variance, with strongest effects noted for SAT/ACT scores (β = .313-.337, medium effect), LD diagnosis (β = -.115 to -.131, small effect), and SES (β = .101-.108, small effect).
Age of first exposure to football did not affect WTAR scores in a football-only sample. Conclusion: Wechsler Test of Adult Reading performance appears unrelated to history of self-reported concussion(s) and/or repetitive subclinical head trauma exposure in current collegiate athletes. Sociodemographic and academic variables should be incorporated in test score interpretations for diverse populations like athletes. abstract_id: PUBMED:14591458 Detecting malingered performance on the Wechsler Adult Intelligence Scale. Validation of Mittenberg's approach in traumatic brain injury. This study assesses the effectiveness of the Wechsler Adult Intelligence Scale (WAIS) performance validity markers devised by Mittenberg et al. [Prof. Psychol.: Res. Pract. 26 (1995) 491] in the detection of malingered neurocognitive dysfunction (MND). Subjects were 65 traumatic brain injury (TBI) patients referred for neuropsychological evaluation. Twenty-eight met the Slick et al. [Clin. Neuropsychol. 13 (1999) 545] criteria for at least probable MND. The control group was comprised of 37 patients without external incentive and who thus did not meet the Slick et al. criteria. All subjects completed the Wechsler Adult Intelligence Scale-Revised (WAIS-R or WAIS-III). The discriminant function score (DFS) and the vocabulary-digit span (VDS) difference score were calculated and sensitivity, specificity, and predictive power were examined for several cut-offs for each marker individually and the two combined. Classification accuracy for the DFS was acceptable and better than for VDS. The use of the two markers in combination resulted in no incremental increase in classification accuracy. Issues related to the clinical application of these techniques are discussed. abstract_id: PUBMED:12677478 Complutense Verbal Learning Test versus Wechsler Memory Scale-Revised. Introduction: We present a comparative study (sensitivity and convergent validity) between a new verbal memory test, the Complutense Verbal Learning Test (TAVEC), and the Wechsler Memory Scale-Revised (WMSr) in a sample of patients with acquired brain injury. Patients And Methods: Twenty-eight patients with acquired brain injury were included in this study. Twelve patients were reevaluated six months after their inclusion in a cognitive multidisciplinary rehabilitation program. All patients were assessed with the WMSr (verbal memory, general memory and long-term memory) and the TAVEC (verbal learning, immediate memory and delayed memory). Results obtained with both scales were compared (chi-square) and correlated (Pearson correlation). Results: Globally, learning and memory deficits were detected with both scales. Initially 32.1 % of our sample showed general memory deficits, and 44.4 % long-term memory deficits when assessed with WMSr. These percentages were significantly lower compared to the 57.1 %, 75 % and 78.5 % of our sample who showed learning, recent memory and delayed memory difficulties when evaluated with the TAVEC. The results obtained by the 12 patients included in a cognitive rehabilitation program significantly improved. Numerous strong correlations between the TAVEC and the WMSr were found, suggesting a high degree of convergence between the two instruments. Conclusions: The TAVEC is a sensitive measure of learning and memory after acquired brain injury. Due to its sensitivity, ecological value and multifactorial structure we recommend this test for the evaluation of learning and memory of these patients.
abstract_id: PUBMED:22277127 Exploration of malingering indices in the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span subtest. Prior research shows that Digit Span is a useful embedded measure of malingering. However, the Wechsler Adult Intelligence Scale-IV (Wechsler, 2008) altered Digit Span in meaningful ways, necessitating another look at Digit Span as an embedded measure of malingering. Using a simulated malingerer design, we examined the predictive accuracy of existing Digit Span validity indices and explored whether patterns of performance utilizing the new version would provide additional evidence for malingering. Undergraduates with a history of mild head injury performed with best effort or simulated impaired cognition and were also compared with a large sample of non-head-injured controls. Previously established cutoffs for the age-corrected scaled score and Reliable Digit Span (RDS) performed similarly in the present samples. Patterns of RDS length using all three subscales of the new scale were different in malingerers when compared with both head-injured and non-head-injured controls. Two potential alternative RDS scores were introduced, which showed better sensitivity than the traditional RDS, while retaining specificity to malingering. abstract_id: PUBMED:31608766 Clinical utility of WAIS-IV 'excessive decline from premorbid functioning' scores to detect invalid test performance following traumatic brain injury. Objective: Excessive Decline from Premorbid Functioning (EDPF), an atypical discrepancy between demographically predicted and obtained Wechsler Adult Intelligence Scale-4th Edition (WAIS-IV) scores, has been recently proposed as a potential embedded performance validity test (PVT). This study examined the clinical utility of EDPF scores to detect invalid test performance following traumatic brain injury (TBI).Methods: Participants were 194 U.S. military service members who completed neuropsychological testing on average 2.4 years (SD = 4.0) following uncomplicated mild, complicated mild, moderate, severe, or penetrating TBI (Age: M = 34.0, SD = 9.9). Using TBI severity and PVT performance (i.e., PVT Pass/Fail), participants were classified into three groups: Uncomplicated Mild TBI-PVT Fail (MTBI-Fail; n = 21), Uncomplicated Mild TBI-PVT Pass (MTBI-Pass; n = 94), and Complicated Mild to Severe/Penetrating TBI-PVT Pass (CM/STBI-Pass; n = 79). Seven EDPF measures were calculated by subtracting WAIS-IV obtained index scores from the demographically predicted scores from the Test of Premorbid Functioning (TOPF). Cutoff scores to detect invalid test performance were examined for each EDPF measure separately.Results: The MTBI-Fail group had higher scores than the MTBI-Pass and CM/STBI-Pass groups on five of the seven EDPF measures (p<.05). Overall, the EDPF measure using the Processing Speed Index (EDPF-PSI) was the most useful score to detect invalid test performance. However, sensitivity was only low to moderate depending on the cutoff score used.Conclusions: These findings provide support for the use of EDPF as an embedded PVT to be considered along with other performance validity data when administering the WAIS-IV. Answer: Yes, performance on the Wechsler Test of Adult Reading (WTAR) can be affected by traumatic brain injury (TBI). Studies have shown that individuals with severe TBI performed significantly worse on the WTAR compared to those with mild TBI, moderate TBI, and control groups with orthopedic injuries (PUBMED:17535531). 
This suggests that reading performance may be compromised by severe TBI, potentially leading to an underestimation of premorbid IQ when using the WTAR in this context. Further research indicates that the WTAR may provide a stable estimate of premorbid intellectual ability in patients recovering from TBI, but it may underestimate premorbid intelligence during the immediate recovery period for patients with moderate to severe TBI (PUBMED:27799224). This is supported by findings that participants with moderate/severe TBI showed significant improvement in WTAR performance over a 1-year period, suggesting that initial WTAR scores may not be reliable indicators of premorbid ability immediately following injury. Additionally, the WTAR has been found to have high stability during recovery from TBI and to provide similar IQ estimates to those of a demographic equation, suggesting its validity as a measure of premorbid IQ for TBI patients (PUBMED:18213530). However, it is important to note that word-reading tests like the WTAR may not be sensitive to the effects of concussion or repetitive subclinical head trauma exposure in young adults with a history of such injuries (PUBMED:31698991). In summary, while the WTAR can be a useful tool for estimating premorbid IQ in individuals with TBI, its accuracy may be compromised in the case of moderate to severe TBI, particularly during the early stages of recovery. Clinicians should consider these limitations and potentially use alternative estimation measures for this subpopulation of TBI patients (PUBMED:27799224; PUBMED:17535531).
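Separately, the WAIS-IV validity abstract earlier in this block flags a protocol as non-credible when it fails a minimum number of embedded validity indicators (EVIs), and reports how sensitivity and specificity shift as that threshold changes. A minimal sketch of that aggregation logic, with entirely invented failure flags and criterion labels, is shown below:

```python
import numpy as np

def multivariate_flag(evi_failures: np.ndarray, k: int) -> np.ndarray:
    """Flag examinees (rows) who fail at least k of the EVI columns."""
    return evi_failures.sum(axis=1) >= k

# Hypothetical data: 100 examinees x 6 EVIs (1 = failed its cutoff), plus a
# criterion label (True = psychometrically defined non-credible performance).
rng = np.random.default_rng(1)
evi_failures = rng.integers(0, 2, size=(100, 6))
non_credible = rng.integers(0, 2, size=100).astype(bool)

flagged = multivariate_flag(evi_failures, k=3)   # e.g., "fail >= 3 of 6 EVIs"
sensitivity = (flagged & non_credible).sum() / non_credible.sum()
specificity = (~flagged & ~non_credible).sum() / (~non_credible).sum()
print(f"fail >= 3 rule: sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```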
Instruction: 'Is he awake? Abstracts: abstract_id: PUBMED:35789808 How to start an awake spine program: Protocol and illustrative cases. Surgical techniques and technology are steadily improving, thereby expanding the pool of patients amenable for spine surgery. The growing and aging population in the United States further contributes to the increase in spine surgery cases. Traditionally, spine surgery is performed under general anesthesia. However, awake spinal surgery has recently gained traction due to evidence of decreased perioperative risks, postoperative opioid consumption, and costs, specifically in lumbar spine procedures. Despite the potential for improving outcomes, awake spine surgery has received resistance and has yet to become adopted at many healthcare systems. We aim to provide the fundamental steps in facilitating the initiation of awake spine surgery programs. We also present case reports of two patients who underwent awake spine surgery and reported improved clinical outcomes. abstract_id: PUBMED:35915692 The Awake Craniotomy: A Patient's Experience and A Literature Review. We present a case report and a literature review of the awake craniotomy procedure for mass resection, with emphasis on the historical aspects, anatomical and surgical considerations, and, uniquely, a patient's experience undergoing this procedure. This procedure is a safe and effective method for lesion resection when working in and around eloquent brain. We have described our process of guiding a patient through an awake craniotomy procedure and detailed the patient's experience in this study. We also conducted a systematic literature review of studies involving awake craniotomy over three years, 2018-2021. Lastly, we compared the methodology used by our institution and the current mostly used methods within the neurosurgical community. Several studies were identified using PubMed and Google Scholar. Awake craniotomy is a safe and effective method of achieving a high rate of resection of lesions located in and around the eloquent cortex with a low degree of postoperative neurological deficit. abstract_id: PUBMED:30140530 Anaesthesiologist's Approach to Awake Craniotomy. Awake craniotomy, which was initially used for the surgical treatment of epilepsy, is performed for the resection of tumours in the vicinity of some eloquent areas of the cerebral cortex which is essential for language and motor functions. It is also performed for stereotactic brain biopsy, ventriculostomy, and supratentorial tumour resections. In some institutions, avoiding risks of general anaesthesia, shortened hospitalization and reduced use of hospital resources may be the other indications for awake craniotomy. Anaesthesiologists aim to provide safe and effective surgical status, maintaining a comfortable and pain-free condition for the patient during surgical procedure and prolonged stationary position and maintaining patient cooperation during intradural interventions. Providing anaesthesia for awake craniotomy require scalp blockage, specific sedation protocols and airway management. Long-acting local anaesthetic agents like bupivacaine or levobupivacaine are preferred. More commonly, propofol, dexmedetomidine and remifentanyl are used as sedative agents. A successful anaesthesia for awake craniotomy depends on the personal experience and detailed planning of the anaesthetic procedure. The aim of this review was to present an anaesthetic technique for awake craniotomy under the light of the literature. 
abstract_id: PUBMED:33446979 Awake Proning: Current Evidence and Practical Considerations. Prone positioning has been shown to improve oxygenation for decades. However, proning in awake, non-intubated patients gained acceptance in the last few months since the onset of the coronavirus (COVID-19) pandemic. To overcome the shortage of ventilators, to decrease the overwhelming burden on intensive care beds in the pandemic era, and also because invasive ventilation was associated with poor outcomes, proning of awake, spontaneously breathing patients gathered momentum. Being an intervention with minimal risk and requiring minimum assistance, it is now a globally accepted therapy to improve oxygenation in acute hypoxemic respiratory failure in COVID-19 patients. We thus reviewed the literature on awake proning in non-intubated patients and described a safe protocol for practicing it. How to cite this article: Sodhi K, Chanchalani G. Awake Proning: Current Evidence and Practical Considerations. Indian J Crit Care Med 2020;24(12):1236-1241. abstract_id: PUBMED:24223378 Awake craniotomy for tumor resection. Surgical treatment of brain tumors, especially those located in eloquent areas such as the anterior temporal and frontal lobes, language and memory areas, and near the motor cortex, carries a high risk of eloquent impairment. Awake craniotomy plays a major role in achieving maximal resection of the tumor with minimal functional impairment of the central nervous system. These case reports discuss the use of awake craniotomy during brain surgery at Alzahra Hospital, Isfahan, Iran. A 56-year-old woman with left-sided body hypoesthesia for the previous 3 months and a 25-year-old with severe headache of 1 month duration underwent craniotomy for brain tumor resection. An awake craniotomy was planned to allow maximal intraoperative testing during tumor resection and avoidance of neurologic morbidity. The method of anesthesia should offer sufficient analgesia, hemodynamic stability, sedation, adequate respiratory function, and an awake, cooperative patient for the various neurological tests. Airway management is the most important part of anesthesia during awake craniotomy. Tumor surgery with awake craniotomy is a safe technique that allows maximal resection of lesions in close relationship to the eloquent cortex and has a low risk of neurological deficit. abstract_id: PUBMED:37369489 Awake Breast Surgery: A Systematic Review. Background/aim: Awake surgery has become a valid alternative to general anesthesia in many surgical fields. This technique played a very important role during the COVID-19 period. The growing use of this technique has many advantages. We performed a systematic review to study the potential of awake breast surgery. Materials And Methods: We searched the PubMed, Embase, and Cochrane Library databases and retrieved a total of 109 records. Forty-nine of them were excluded as unsuitable. Finally, we selected a total of 12 records concerning different types of studies for topic appropriateness. Three reviewers independently reviewed each record. Results: Five articles analyzing the sustainability of awake surgery during the COVID-19 period were selected. In addition, one article analyzing the impact on the immune system and six articles and eight case reports analyzing anesthetic techniques were also selected. The studies analyzing awake breast surgery during the COVID-19 period showed advantages in terms of sustainability and length of hospitalization.
The study analyzing the immune response after awake breast surgery showed lesser lymphocyte response than the general anesthesia group. The studies analyzing anesthetic techniques in awake breast surgery showed that the nerve blocks allow good level of safety and postoperative pain control. Conclusion: The awake breast surgery and fast track implementation shortened hospital stays and reduced costs, without influencing the surgical results. Furthermore, awake breast surgery reduced surgical stress compared to general anesthesia. Among the various anesthetic techniques, nerve blocks are the most advantageous in terms of safety and efficacy compared to epidural anesthesia. abstract_id: PUBMED:33329824 Anesthetic considerations for awake craniotomy. Awake craniotomy is a gold standard of care for resection of brain tumors located within or close to the eloquent areas. Both asleep-awake-asleep technique and monitored anesthesia care have been used effectively for awake craniotomy and the choice of optimal anesthetic approach is primarily based on the preferences of the anesthesiologist and surgical team. Propofol, remifentanil, dexmedetomidine, and scalp nerve block provide the reliable conditions for intraoperative brain mapping. Appropriate patient selection, adequate perioperative psychological support, and proper anesthetic management for individual patients in each stage of surgery are crucial for procedural safety, success, and patient satisfaction. abstract_id: PUBMED:27688563 Awake Craniotomy for Tumour Excision. Background: Craniotomy and excision of tumours can produce neurological deficits if the tumour is located close to eloquent areas of the brain. One technique of overcoming this problem is to keep the patient 'awake' during surgery. Methods: Eight patients with intra cranial space occupying lesions (ICSOL) were operated 'awake', using a combination of skull block with sedation and analgesia. A mixture of 0.125% bupivacaine and 0.5% lignocaine was used for various nerve and field blocks. Midazolam, fentanyl and propofol in titrated doses were used to achieve conscious sedation. Result: The procedure was successful in all the patients. They tolerated the procedure well and were able to follow the commands intraoperatively as desired. There were no significant complications. Conclusion: Awake craniotomy with skull blocks with sedation and analgesia is a well established procedure. It requires a good rapport between surgeon, anaesthesiologist and the patient. abstract_id: PUBMED:38233157 Analysis of definite awake bruxism using a portable electromyography device. Purpose: This study aimed to compare awake bruxism events between subjective and objective evaluations using a questionnaire survey and a modified portable electromyography (EMG) device, and to examine correlations between sleep quality and awake bruxism. Methods: The Epworth Sleepiness Scale (ESS), Pittsburgh Sleep Quality Index (PSQI), and awareness of awake bruxism as clarified via interviews were conducted on 34 participants as subjective evaluations. The EMG device was used to record left temporal muscle activity for 6.5 h (from 09:00 to 15:30) and the number of awake bruxism episodes per hour. The participants were then classified into "bruxer" and "non-bruxer" groups based on the number of awake bruxism episodes. 
Results: The mean number of awake bruxism episodes per hour was 33.6 ± 21.4, and 23% of the participants who reported having no awareness of awake bruxism in the interviews were defined as "bruxers" in the objective evaluations. In the bruxer group, positive correlations were found between the number of awake bruxism episodes and both ESS and PSQI scores. Conclusion: These findings suggest that objective measurements using a portable EMG device can increase the diagnostic accuracy for awake bruxism, and that sleep quality is a major risk factor for awake bruxism. abstract_id: PUBMED:23439673 Patients selection for awake neurosurgery. Background: Based upon the surgical location and indication, including redundant regions, eloquent areas, deep brain stimulation, and epilepsy foci, some patients will benefit from an awake craniotomy, which allows completion of neurocognitive testing during the intra-operative period. This paper suggests patient selection criteria through a new decision algorithm. Methods: We completed a retrospective chart review at Tampa General Hospital after IRB approval; data were obtained concerning total number of craniotomies, indications, and problems experienced for selection of awake vs. general anesthetic techniques. Results: A total of 397 craniotomies were performed during the two years 2005 and 2006: among those 79 patients received an awake craniotomy (20%). We have utilized a sedation sequence which includes dexmedetomidine, propofol and LMA placement. A skull block is then performed to anesthetize pin placement, and desflurane and remifentanil are used for maintenance until the dural incision. At this time the inhalation agent is stopped and the LMA is removed while breathing spontaneously: the patient remains sedated on dexmedetomidine and remifentanil for the duration of the operation and can communicate effectively if closely coached. Analysis of all patient data led us to a decision tree to guide the surgeon and anesthesiologist in selecting the awake patients. Discussion: We describe the sequence of steps and anesthetic agents which has proved successful for our group. Finally, the use of the proposed decision algorithm simplifies preoperative anesthetic selection and prevents erroneous assignment of inappropriate patients to an awake technique. Answer: The question "Is he awake?" in the context of the provided abstracts seems to refer to the practice of awake surgeries, where the patient remains conscious during the procedure. Awake surgeries, such as awake craniotomies and awake spine surgeries, are performed for various reasons, including maximizing tumor resection while preserving neurological function, reducing perioperative risks, and minimizing postoperative opioid consumption (PUBMED:35789808, PUBMED:35915692, PUBMED:30140530, PUBMED:24223378, PUBMED:33329824, PUBMED:27688563, PUBMED:23439673). These procedures require careful patient selection, psychological support, and specific anesthetic techniques to ensure patient comfort and cooperation during the surgery (PUBMED:30140530, PUBMED:33329824, PUBMED:27688563, PUBMED:23439673). Awake surgeries have also been adapted in other fields, such as breast surgery, where they have been shown to reduce hospital stays and costs without compromising surgical outcomes (PUBMED:37369489). Additionally, the concept of awake proning has been utilized for non-intubated COVID-19 patients to improve oxygenation (PUBMED:33446979). In summary, the question "Is he awake?" 
can be affirmatively answered in the context of these surgeries, as the patient is indeed awake, conscious, and sometimes actively participating in the procedure by providing feedback to the surgical team.
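Of the studies summarized above, the portable-EMG bruxism report is the most algorithmic: episodes per hour are counted from temporal-muscle EMG, participants are split into "bruxer" and "non-bruxer" groups, and episode counts are correlated with ESS and PSQI scores. A sketch of that post-processing, using invented numbers and an assumed classification threshold (the abstract does not state the exact criterion), might look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant data: awake-bruxism episodes per hour from a
# 6.5-hour EMG recording, plus ESS and PSQI questionnaire totals.
rng = np.random.default_rng(7)
episodes_per_hour = rng.normal(33.6, 21.4, size=34).clip(min=0)
ess = rng.integers(0, 24, size=34)
psqi = rng.integers(0, 21, size=34)

# Assumed threshold for illustration only; not reported in the abstract.
BRUXER_THRESHOLD = 20.0
is_bruxer = episodes_per_hour >= BRUXER_THRESHOLD

# Correlate episode counts with sleep questionnaires within the bruxer group.
r_ess, p_ess = stats.pearsonr(episodes_per_hour[is_bruxer], ess[is_bruxer])
r_psqi, p_psqi = stats.pearsonr(episodes_per_hour[is_bruxer], psqi[is_bruxer])
print(f"bruxers: n = {is_bruxer.sum()}")
print(f"episodes vs ESS:  r = {r_ess:.2f} (p = {p_ess:.3f})")
print(f"episodes vs PSQI: r = {r_psqi:.2f} (p = {p_psqi:.3f})")
```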
Instruction: Is the speed of chronic compression an important factor for chronic spinal cord injury rat model? Abstracts: abstract_id: PUBMED:23632138 Is the speed of chronic compression an important factor for chronic spinal cord injury rat model? Objective: To evaluate the effect of expansion speed on chronic compressive spinal cord injury in the rat. Methods: Thirty-six Sprague-Dawley rats were divided into four groups: a control group, a group receiving compressor in the C5-C6 epidural space with instant compression (group 1), and two other groups receiving water-absorbing polyurethane polymer sheets with two expansion speeds, which reached maximum volume in 2 h (group 2: fast expansion) or 24 h (group 3: slow expansion). A C6 laminectomy was performed in the control group. Neurological function, MRI, large motoneuron number in the ventral horn, and myelin staining intensity in the posterior funiculus were evaluated. Results: In the instant compression group, compression was confirmed on T2-weighted images by a hypointense signal change in the intramedulla. In the gradual compressive injury groups, large motoneuron number (p<0.001), but not myelin staining intensity, was significantly decreased in both the fast and slow expansion groups compared with the instant compression group. However, there was no difference in Basso Beattie Bresnahan score, cord distortion in T2-weighted image, large motoneuron numbers, or myelin staining between the fast and slow expansion groups. Conclusion: Instant spinal cord compression caused acute injury. Gradual expansion compression induced reliable pathology and MRI characteristics consistent with chronic compressive spinal cord injury. The speed of expansion is not a significant problem for establishing a reliable model if the chronic compression is induced by gradual expansion. abstract_id: PUBMED:29375327 Ultrastructural Features of Neurovascular Units in a Rat Model of Chronic Compressive Spinal Cord Injury. Chronic spinal cord compression is the most common cause of spinal cord impairment worldwide. Objective of this study is to assess the ultrastructural features of the neurovascular unit (NVU) in a rat model of chronic compressive spinal cord injury, 24 SD rats were divided into two groups: the control group (n = 12), and the compression group (n = 12). A C6 semi-laminectomy was performed in the control group, whereas a water-absorbent polyurethane polymer was implanted into the C6 epidural space in the compression group. The Basso Beattie Bresnahan (BBB) scores and the somatosensory evoked potentials (SEP) were used to evaluate neurological functions. Transmission Electron Microscopy (TEM) was performed to investigate the change of NVU at the 28th day after modeling. Compared with the control group, the compression group shows a significant reduction (P < 0.05) of BBB score and a significant severity (P < 0.05) of abnormal SEP. TEM results of the compression group showed a striking increase in endothelial caveolae and vacuoles; a number of small spaces in tight junctions; a significant increase in pericyte processing area and vessel coverage; an expansion of the basement membrane region; swollen astrocyte endfeet and mitochondria; and the degeneration of neurons and axons. Our study revealed that damage to NVU components occurred followed by chronic compressive spinal cord injury. Several compensatory changes characterized by thicker endothelium, expansive BM, increased pericyte processing area and vessel coverage were also observed. 
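Both abstracts above compare outcome measures (BBB scores, SEP, large-motoneuron counts) across compression groups and report p-values without naming the specific statistical tests. As a generic, purely illustrative sketch of that kind of small-group comparison, with invented counts rather than the studies' data:

```python
import numpy as np
from scipy import stats

# Hypothetical ventral-horn large-motoneuron counts per animal in three
# compression conditions (all values invented for illustration).
instant = np.array([14, 12, 15, 13, 11, 14, 12, 13, 15])
fast    = np.array([ 8,  9,  7, 10,  8,  9,  7,  8,  9])   # 2-h expansion
slow    = np.array([ 9,  8,  8,  9, 10,  7,  8,  9,  8])   # 24-h expansion

# A Kruskal-Wallis test followed by a pairwise Mann-Whitney comparison is one
# common way to analyse small-animal count data like this.
h, p = stats.kruskal(instant, fast, slow)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

u, p_fs = stats.mannwhitneyu(fast, slow, alternative="two-sided")
print(f"fast vs slow expansion: U = {u:.1f}, p = {p_fs:.3f}")
```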
abstract_id: PUBMED:27482226 Establishment of a rat model of chronic thoracolumbar cord compression with a flat plastic screw. Previous studies of animal models of chronic mechanical compression of the spinal cord have mainly focused on cervical and thoracic lesions, but few studies have investigated thoracolumbar injury. The specific pathophysiological mechanism of chronic thoracolumbar cord injury has not yet been elucidated. The purpose of this study was to improve animal models of chronic thoracolumbar cord compression using the progressive screw. A custom-designed flat plastic screw was implanted in the spinal cord between thoracic vertebrae 12 and lumbar 1 of rats. The screw was tightened one complete turn (0.5 mm) every 7 days for 4 weeks to create different levels of chronic spinal cord compression. Following insertion of the screw, there was a significant decline in motor function of the hind limbs, and severe stenosis of micro-computed tomography parameters in the spinal cord. Cortical somatosensory evoked potential amplitudes were reduced remarkably, and latencies were prolonged at 30 minutes after surgery. The loss of motor neurons in the gray matter was marked. Demyelination and cavitation were observed in the white matter. An appropriate rat model of chronic thoracolumbar cord compression was successfully created using the progressive screw compression method, which simulated spinal cord compression injury. abstract_id: PUBMED:30142996 Time course of diffusion tensor imaging metrics in the chronic spinal cord compression rat model. Background: Diffusion tensor imaging (DTI) provides information about water molecule diffusion in spinal cord. Purpose: This study was aimed to investigate DTI changes in the different stages of compressive spinal cord induced by water-absorbing material implantation. Material And Methods: The spinal cord compression was administered over the fourth cervical vertebral level in rat. Rat models were divided into five subgroups according to compression stages: sham group, group A: three-day compression rat models; group B: 12-day compression rat models; group C: 20-day compression rat models; group D: 60-day compression rat models. DTI including fractional anisotropy (FA) and apparent diffusion coefficient (ADC) in the compressive spinal cord were collected. The relationship between the Basso, Beattie, and Bresnahan (BBB) scores and DTI metrics was further explored. Results: Compared with the sham group, BBB scoring of rat model showed a decreased tendency from group A ( P < 0.05) to group B ( P < 0.05). Then the motor function of rat model hindlimbs was recovered in some degree from group C ( P < 0.05) to group D ( P < 0.05) but had significant motor defects when compared with the normal level ( P < 0.05). The DTI metrics results revealed that chronic spinal cord compression resulted in lower FA value and higher ADC value at the compressive spinal cord level assessed at all four time-points ( P < 0.05). DTI metrics also showed a close correlation with motor function ( P < 0.05). Conclusion: DTI is an optimal pre-clinical imaging tool to reflect locomotor performance and pathological status of compressive spinal cord epicenter in chronic spinal cord compression rat model. abstract_id: PUBMED:37497210 Suppression of microglial Ccl2 reduces neuropathic pain associated with chronic spinal compression. Introduction: Chronic spinal compression is a common complication of spinal cord injury (SCI), which can lead to spinal stenosis or herniated discs. 
The ensuing neuropathic pain is often associated with the activation of microglia. In this investigation, our objective was to explore whether modifying the levels of chemokine (C-C motif) ligand 2 (Ccl2) in microglia could alleviate neuropathic pain resulting from chronic spinal compression. Methods: We used a public database to look for the major altered genes associated with an SCI model established in rats. We then employed adeno-associated virus (AAV) vectors, expressing siRNA for the identified significantly altered gene under a microglia-specific TMEM119 promoter. We also tested the impact of this treatment in microglia in vivo on the severity of chronic spinal compression and associated pain using a ttw mouse model for progressive spinal compression. Results: We identified chemokine (C-C motif) ligand 2 (Ccl2) as the primary gene altered in microglia within a rat SCI model, utilizing a public database. Microglial Ccl2 levels were then found to be significantly elevated in disc specimens from SCI patients diagnosed with chronic spinal compression and strongly correlated with the Thompson classification of the degeneration level and pain score. AAV vectors expressing Ccl2 siRNA under the microglia-specific TMEM119 promoter were used to deplete Ccl2 in transfected mouse microglia in vitro, resulting in a proinflammatory-to-anti-inflammatory phenotypic adaptation. In vivo depletion of Ccl2 in microglia mitigated the severity of chronic spinal compression and related pain in ttw mice, likely due to significant changes in pain-associated cytokines and factors. Conclusion: Disc microglia expressing high levels of Ccl2 may contribute to chronic spinal compression and SCI-associated pain. Therapeutically targeting Ccl2 in microglia could offer a potential avenue for treating chronic spinal compression and SCI-associated pain. abstract_id: PUBMED:6866207 A model of chronic spinal cord compression in the cat. An experimental method of producing chronic compression of the cat spinal cord is described. A ligature placed around the lumbar spinal cord of 3-month-old kittens restricts the growth of the spinal cord to produce compression with a slow onset and an insidious progression. The methods of following the clinical progress of affected animals and of sampling the spinal cord after perfusion fixation are presented. The sampling method used allowed analysis of the three dimensional distribution of the pathological changes caused by the compression. These changes were not symmetrically distributed: the spinal cord caudal to the ligature became swollen and extensive partial demyelination occurred under the ligature and caudal to it, in the swollen region of the cord, whereas cranial to the ligature there was only minimal damage. It is concluded that the method produces a useful model of chronic compression of the spinal cord, which will be of value in studying partial demyelination. abstract_id: PUBMED:31132056 Diffusion Tensor Magnetic Resonance Imaging in Chronic Spinal Cord Compression. Chronic spinal cord compression is the most common cause of spinal cord impairment in patients with nontraumatic spinal cord damage. Conventional magnetic resonance imaging (MRI) plays an important role in both confirming the diagnosis and evaluating the degree of compression. However, the anatomical detail provided by conventional MRI is not sufficient to accurately estimate neuronal damage and/or assess the possibility of neuronal recovery in chronic spinal cord compression patients.
In contrast, diffusion tensor imaging (DTI) can provide quantitative results according to the detection of water molecule diffusion in tissues. In the present study, we develop a methodological framework to illustrate the application of DTI in chronic spinal cord compression disease. DTI fractional anisotropy (FA), apparent diffusion coefficients (ADCs), and eigenvector values are useful for visualizing microstructural pathological changes in the spinal cord. Decreased FA and increases in ADCs and eigenvector values were observed in chronic spinal cord compression patients compared to healthy controls. DTI could help surgeons understand spinal cord injury severity and provide important information regarding prognosis and neural functional recovery. In conclusion, this protocol provides a sensitive, detailed, and noninvasive tool to evaluate spinal cord compression. abstract_id: PUBMED:26327142 Three-dimensional alteration of cervical anterior spinal artery and anterior radicular artery in rat model of chronic spinal cord compression by micro-CT. Objective: To investigate the spatial and temporal changes of anterior spinal artery (ASA) and anterior radicular artery (ARA) of chronic compressive spinal cord injury on rat model by three-dimensional micro-CT. Methods: 48 rats were divided into two groups: sham control group (n=24) and compressive spinal cord injury group (n=24). A C6 semi-laminectomy was performed in the sham control group, while a water-absorbable polyurethane polymer was implanted into C6 epidural space in the compression group. The Basso Beattie Bresnahan (BBB) score and somatosensory evoked potentials (SEP) were used to evaluate neurological function. Micro-CT scanning was used to investigate the change of ASA and ARA after perfusion at the 1th (n=6), 28th (n=6), 42th (n=6) and 70th (n=6) day of post operation. The diameter, angle-off and vascular index (VI) was measured by 3D micro-CT. Results: In comparison with sham control, BBB score have a significant reduction at the 28th day (p<0.05) and abnormal SEP have a significant severity at the 28th day (p<0.05). Both of them have a significant improvement at the 70th day compared with that of the 28th day (p<0.05). VI shows the amount of microvessels reduced at the 28th day (p<0.05) and increased at the 70th day (p<0.05). The diameter and angle-off of ASA and ARA also changed significantly at the 28th, 42th, 70th day (p<0.05). Conclusion: There was a significant alteration of cervical anterior spinal artery and anterior radicular artery after chronic cervical spinal cord compression. Alteration of ASA and ARA may affect the vascular density of spinal cord and play an important role in neural functional change of chronic cervical spinal cord compression through 3D micro-CT. abstract_id: PUBMED:37286109 Analysis of gene expression profiles and experimental validations of a rat chronic cervical cord compression model. Cervical spondylotic myelopathy (CSM) is a severe non-traumatic spinal cord injury (SCI) wherein the spinal canal and cervical cord are compressed due to the degeneration of cervical tissues. To explore the mechanism of CSM, the ideal model of chronic cervical cord compression in rats was constructed by embedding a polyvinyl alcohol-polyacrylamide hydrogel in lamina space. Then, the RNA sequencing technology was used to screen the differentially expressed genes (DEGs) and enriched pathways among intact and compressed spinal cords. 
A total of 444 DEGs were filtered out based on the value of log2(Compression/Sham); these were associated with IL-17, PI3K-AKT, TGF-β, and Hippo signaling pathways according to the GSEA, KEGG, and GO analyses. Transmission electron microscopy indicated the changes in mitochondrial morphology. Western blot and immunofluorescence staining revealed neuronal apoptosis, astrogliosis and microglial neuroinflammation in the lesion area. Specifically, the expression of apoptotic indicators, such as Bax and cleaved caspase-3, and inflammatory cytokines, such as IL-1β, IL-6, and TNF-α, were upregulated. The activation of IL-17 signaling pathway was observed in microglia instead of neurons or astrocytes, the activation of TGF-β and inhibition of Hippo signaling pathways were detected in astrocytes instead of neurons or microglia, and the inhibition of PI3K-AKT signaling pathway was discovered in neurons rather than microglia of astrocytes in the lesion area. In conclusion, this study indicated that neuronal apoptosis was accompanied by inhibiting of the PI3K-AKT pathway. Then, the activation of microglia IL-17 pathway and NLRP3 inflammasome effectuated the neuroinflammation, and astrogliosis was ascribed to the activation of TGF-β and the inhibition of the Hippo pathway in the chronic cervical cord of compression. Therefore, therapeutic methods targeting these pathways in nerve cells could be promising CSM treatments. abstract_id: PUBMED:24715058 Enhanced p62 expression triggers concomitant autophagy and apoptosis in a rat chronic spinal cord compression model. Chronic spinal cord compression is the result of mechanical pressure on the spinal cord, which in contrast to traumatic spinal cord injury, leads to slowly progressing nerve degeneration. These two types of spinal cord injuries may trigger similar mechanisms, including motoric nerve cell apoptosis and autophagy, however, depending on differences in the underlying injury severity, nerve reactions may predominantly involve the conservation of function or the initiation of functions for the removal of irreversibly damaged cells. p62 is a multidomain adapter protein, which is involved in apoptosis and cell survival as well as autophagy, and is a common component of protein aggregations in neurodegenerative diseases. In the present study, a rat chronic spinal cord compression model was used, in which the spinal cord was progressively compressed for six weeks and then constantly compressed for another 10 weeks. As a result Basso, Beattie and Bresnahan locomotor scaling revealed a gradual score decrease until the 6th week followed by constant recovery until the 16th week after spinal cord compression was initiated. During the first eight weeks of the experiment, p62 and nuclear factor-κB (NF-κB) were increasingly expressed up to a constant plateau at 12-16 weeks, whereas caspase 3 exhibited a marginally enhanced expression at 8 weeks, however, reached a constant maximum peak 12-16 weeks after the beginning of spinal cord compression. It was hypothesized that, in the initial phase of spinal cord compression, enhanced p62 expression triggered NF-κB activity, directing the cell responses mainly to cell survival and autophagy, whereas following eight weeks of spinal cord compression, caspase 3 was additionally activated indicating cumulative elimination of irreversibly damaged nerve cells with highly activated autophagy. 
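The RNA-sequencing abstract above (PUBMED:37286109) filters differentially expressed genes (DEGs) on log2(Compression/Sham) before GO/KEGG/GSEA enrichment. A minimal pandas sketch of that filtering step is given below; the gene names, values, and cutoffs are invented, since the abstract reports the resulting count of 444 DEGs but not the exact thresholds used:

```python
import pandas as pd

# Hypothetical differential-expression table (all values invented).
results = pd.DataFrame({
    "gene":   ["gene_a", "gene_b", "gene_c", "gene_d", "gene_e", "gene_f"],
    "log2fc": [2.10, 1.35, -1.60, -0.40, 0.95, 0.05],   # log2(Compression/Sham)
    "padj":   [0.001, 0.004, 0.010, 0.020, 0.300, 0.900],
})

# Assumed cutoffs for illustration only.
LOG2FC_CUTOFF = 1.0
PADJ_CUTOFF = 0.05

degs = results[(results["log2fc"].abs() >= LOG2FC_CUTOFF) &
               (results["padj"] <= PADJ_CUTOFF)]
print(degs)
print(f"{len(degs)} DEGs retained for downstream GO/KEGG/GSEA enrichment")
```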
Answer: The speed of chronic compression does not appear to be a significant factor for establishing a reliable chronic spinal cord injury rat model, as long as the compression is induced gradually. A study by PUBMED:23632138 found that both fast and slow expansion groups, which reached maximum volume in 2 hours and 24 hours respectively, showed a significant decrease in large motoneuron number compared with the instant compression group. However, there was no difference in neurological function, cord distortion, large motoneuron numbers, or myelin staining intensity between the fast and slow expansion groups. This suggests that the speed of expansion is not a critical issue for modeling chronic compressive spinal cord injury, provided that the compression is applied gradually.
Instruction: Is comorbidity of posttraumatic stress disorder and borderline personality disorder related to greater pathology and impairment? Abstracts: abstract_id: PUBMED:12411235 Is comorbidity of posttraumatic stress disorder and borderline personality disorder related to greater pathology and impairment? Objective: The authors examined whether patients with comorbid borderline personality disorder and posttraumatic stress disorder (PTSD) have a more severe clinical profile than patients with either disorder without the other. Method: Outpatients with borderline personality disorder without PTSD (N=101), PTSD without borderline personality disorder (N=121), comorbid borderline personality disorder and PTSD (N=48), and major depression without PTSD or borderline personality disorder (N=469) were assessed with structured interviews for psychiatric disorders and for degree of impairment. Results: Outpatients with diagnoses of comorbid borderline personality disorder and PTSD were not significantly different from outpatients with borderline personality disorder without PTSD, PTSD without borderline personality disorder, or major depression without PTSD or borderline personality disorder in severity of PTSD-related symptoms, borderline-related traits, or impairment. Conclusions: The additional diagnosis of PTSD or borderline personality disorder does little to augment the pathology or dysfunction of patients who have either disorder without the other. abstract_id: PUBMED:34581461 Correlates of mild, moderate, and severe Alcohol Use Disorder among adults with problem substance use: Validity implications for DSM-5. Background: The DSM-5 definition of alcohol use disorder (AUD) has been well validated, but information is lacking on the extent to which alcohol use, comorbidity, and impairment are associated with the 3 DSM-5-defined AUD severity levels: mild, moderate, and severe. This study examined clinical and functional characteristics as predictors (validators) of these severity levels. Methods: Participants aged ≥18 years reporting current problem substance use (N = 588) were recruited between 2016 and 2019 and assessed for DSM-5 AUD and a set of potential validators: indicators of alcohol use severity (i.e., craving, binge drinking frequency, problematic use, and harmful drinking), psychiatric disorders, and functional impairment. Multinomial logistic regression models examined the association between the predictors and the 3 AUD severity levels (mild, moderate, and severe) vs the reference group, no AUD, controlling for sociodemographic characteristics and other substance use. Results: All alcohol use validators were associated with a greater likelihood of all 3 AUD severity levels compared with the no-AUD group. However, psychiatric disorders were associated only with severe AUD and participants with major depression (aOR = 2.44), posttraumatic stress disorder (aOR = 1.65), borderline personality disorder (aOR = 1.99), and antisocial personality disorder (aOR = 1.78) had a greater likelihood of severe AUD than the no-AUD group. Functioning validators were also associated only with severe AUD and participants with social (aOR = 1.87), physical (aOR = 1.62), or mental (aOR = 1.84) impairment had a greater likelihood of severe AUD than the no-AUD group. Many alcohol-related, psychiatric, and functioning validators were associated with greater odds of severe AUD than mild or moderate AUD. Conclusion: This study supports the criterion validity of the DSM-5 tri-categorical measure of AUD. 
Specifically, results fully supported the validity of severe AUD by its associations with all predictors, whereas the validity of mild and moderate AUD was supported only by alcohol use predictor variables. Findings suggest the value of using severity-specific interventions utilizing the DSM-5 AUD. abstract_id: PUBMED:26401313 The comorbidity of borderline personality disorder and posttraumatic stress disorder: revisiting the prevalence and associations in a general population sample. Background: The comorbidity of borderline personality disorder (BPD) and posttraumatic stress disorder (PTSD) is frequent, yet not well understood. The influence of childhood sexual abuse (CSA) in the development of this comorbidity has been a focus of prior clinical studies, but empirical evidence to generalize this focus to the broader population is lacking. Primary aims of the present study included evaluation of: (a) the association of this comorbidity with decrements in health-related quality of life (HRQOL) and (b) the importance of CSA as a predictive factor for this comorbidity in a general population sample. Methods: We utilized data from Wave 2 of the National Epidemiological Survey on Alcohol and Related Conditions, a nationally representative face-to-face survey evaluating mental health in the non-institutionalized adult population of the United States. Data from respondents who met criteria for BPD and/or PTSD were analyzed (N = 4104) to assess potential associations between and among lifetime BPD-PTSD comorbidity, CSA, gender, healthcare usage, and mental and physical HRQOL. Results: Lifetime comorbidity of BPD and PTSD was associated with more dysfunction than either individual disorder; and the factors of gender, age, and CSA exhibited significant effects in the prediction of this comorbidity and associated decrements in HRQOL. Conclusions: Results support the measured focus on CSA as an important, but not necessary, etiologic factor and emphasize this comorbidity as a source of greater suffering and public health burden than either BPD or PTSD alone. The differential impact of these disorders occurring alone versus in comorbid form highlights the importance of diagnosing both BPD and PTSD and attending to lifetime comorbidity. abstract_id: PUBMED:18557663 Prevalence, correlates, disability, and comorbidity of DSM-IV narcissistic personality disorder: results from the wave 2 national epidemiologic survey on alcohol and related conditions. Objectives: To present nationally representative findings on prevalence, sociodemographic correlates, disability, and comorbidity of narcissistic personality disorder (NPD) among men and women. Method: Face-to-face interviews with 34,653 adults participating in the Wave 2 National Epidemiologic Survey on Alcohol and Related Conditions conducted between 2004 and 2005 in the United States. Results: Prevalence of lifetime NPD was 6.2%, with rates greater for men (7.7%) than for women (4.8%). NPD was significantly more prevalent among black men and women and Hispanic women, younger adults, and separated/divorced/widowed and never married adults. NPD was associated with mental disability among men but not women. High co-occurrence rates of substance use, mood, and anxiety disorders and other personality disorders were observed. With additional comorbidity controlled for, associations with bipolar I disorder, post-traumatic stress disorder, and schizotypal and borderline personality disorders remained significant, but weakened, among men and women. 
Similar associations were observed between NPD and specific phobia, generalized anxiety disorder, and bipolar II disorder among women and between NPD and alcohol abuse, alcohol dependence, drug dependence, and histrionic and obsessive-compulsive personality disorders among men. Dysthymic disorder was significantly and negatively associated with NPD. Conclusions: NPD is a prevalent personality disorder in the general U.S. population and is associated with considerable disability among men, whose rates exceed those of women. NPD may not be as stable as previously recognized or described in the DSM-IV. The results highlight the need for further research from numerous perspectives to identify the unique and common genetic and environmental factors underlying the disorder-specific associations with NPD observed in this study. abstract_id: PUBMED:9842784 Axis I comorbidity of borderline personality disorder. Objective: The purpose of this study was to assess the lifetime rates of occurrence of a full range of DSM-III-R axis I disorders in a group of patients with criteria-defined borderline personality disorder and comparison subjects with other personality disorders. Method: The axis I comorbidity of 504 inpatients with personality disorders was assessed by interviewers who were blind to clinical diagnosis and who used a semistructured research interview of demonstrated reliability. Results: Four new findings emerged from this study. First, anxiety disorders were found to be almost as common among borderline patients (N=379) as mood disorders but far more discriminating from axis II comparison subjects (N=125). Second, posttraumatic stress disorder (PTSD) was found to be a common but not universal comorbid disorder among borderline patients, a finding inconsistent with the view that borderline personality disorder is actually a form of chronic PTSD. Third, male and female borderline patients were found to differ in the type of disorder of impulse in which they "specialized." More specifically, substance use disorders were significantly more common among male borderline patients, while eating disorders were significantly more common among female borderline patients. Fourth, a lifetime pattern of complex comorbidity (i.e., met DSM-III-R criteria for both a disorder of affect and a disorder of impulse at some point before the patients' index admission) was found to have strong positive predictive power for the borderline diagnosis as well as a high degree of sensitivity and specificity. Conclusions: These results suggest that the lifetime pattern of axis I comorbidity characteristic of borderline patients and distinguishing for the disorder is a particularly good marker for borderline personality disorder. abstract_id: PUBMED:25214370 Trauma-related psychiatric comorbidity of somatization disorder among women in eastern Turkey. Objective: This study sought to determine the trauma-related psychiatric comorbidity of somatization disorder among women who applied to an outpatient psychiatric unit of a general hospital in eastern Turkey. Methods: Forty women with somatization disorder and 40 non-clinical controls recruited from the same geographic region participated in the study. 
Somatization disorder and posttraumatic stress disorder (PTSD) sections of the Structured Clinical Interview for DSM-IV (including its criterion A traumatic events checklist), Dissociative Disorders Interview Schedule, Dissociative Experiences Scale (Taxon), Hamilton Depression Rating Scale, and Childhood Abuse and Neglect Questionnaire were administered to all participants. Results: A significant proportion of the women with somatization disorder had the concurrent diagnoses of major depression, PTSD, dissociative disorder, and borderline personality disorder. Women with somatization disorder reported traumatic experiences of childhood and/or adulthood more frequently than the comparison group. A significant proportion of these patients reported possession and/or paranormal experiences. Binary logistic regression analysis demonstrated that current major depression, being married, total number of traumatic events in adulthood, and reports of possession and/or paranormal experiences were independent risk factors for somatization disorder diagnosis. Conclusions: Among women with endemically high exposition to traumatic stress, multiple somatic complaints were in a significant relationship with major depressive disorder and lifelong cumulative traumatization. While accompanying experiences of possession and paranormal phenomena may lead to seeking help by paramedical healers, the challenge of differential diagnosis may also limit effective service to this group of somatizing women with traumatic antecedents and related psychiatric comorbidities. abstract_id: PUBMED:8742671 Axis I and II comorbidity and childhood trauma history in chemical dependency. The authors replicated and extended two former studies on Axis I and II comorbidity and childhood trauma histories in 106 patients admitted to a chemical dependency treatment unit. Sixty-nine subjects reported a history of physical and/or sexual abuse during childhood and 26.4% met criteria for posttraumaic stress disorder. Patients who reported a history of childhood abuse showed more symptoms of depression, dissociation, and borderline personality disorder than those who denied childhood trauma. Dissociative disorders, mood and anxiety disorders, personality disorders, and histories of childhood trauma appear to be common in chemical dependency subjects, and should be inquired about routinely. abstract_id: PUBMED:24299094 Borderline personality disorder, exposure to interpersonal trauma, and psychiatric comorbidity in urban primary care patients. Objective: Few data are available on interpersonal trauma as a risk factor for borderline personality disorder (BPD) and its psychiatric comorbidity in ethnic minority primary care populations. This study aimed to examine the relation between trauma exposure and BPD in low-income, predominantly Hispanic primary care patients. Method: Logistic regression was used to analyze data from structured clinical interviews and self-report measures (n = 474). BPD was assessed with the McLean screening scale. Trauma exposure was assessed with the Life Events Checklist (LEC); posttraumatic stress disorder (PTSD) was assessed with the Lifetime Composite International Diagnostic Interview, other psychiatric disorders with the SCID-I, and functional impairment with items from the Sheehan Disability Scale and Social Adjustment Scale Self-Report (SAS-SR). Results: Of the 57 (14%) patients screening positive for BPD, 83% reported a history of interpersonally traumatic events such as sexual and physical assault or abuse. 
While interpersonal trauma experienced during adulthood was as strongly associated with BPD as interpersonal trauma experienced during childhood, noninterpersonal trauma was associated with BPD only if it had occurred during childhood. The majority (91%) of patients screening positive for BPD met criteria for at least one current DSM-IV Axis I diagnosis and exhibited significant levels of functional impairment. Conclusion: Increased awareness of BPD in minority patients attending primary care clinics, high rates of exposure to interpersonal trauma, and elevated risk for psychiatric comorbidity in this population may enhance physicians' understanding, treatment, and referral of BPD patients. abstract_id: PUBMED:7599374 Axis I and personality comorbidity in adolescents with conduct disorder. This study was undertaken to investigate psychiatric comorbidity in male and female adolescents with conduct disorder diagnoses. Twenty-five hospitalized adolescents (11 females, 14 males) with conduct disorder were evaluated using structured diagnostic interviews for Axis I and personality disorders. The most common Axis I comorbid diagnoses were: depressive disorders (major depression and/or dysthymia), 64 percent; anxiety disorders (separation anxiety disorder, overanxious disorder, panic disorder, obsessive-compulsive disorder, phobias, and/or posttraumatic stress disorder), 52 percent; substance abuse, 48 percent; and attention-deficit hyperactivity disorder, 28 percent. Common Axis II disorders included passive-aggressive personality disorder, 56 percent, and borderline personality disorder, 32 percent. When compared with the male subjects, the females had significantly more total Axis I disorders and a trend toward more total personality disorders, anxiety disorders, depression, and borderline personality disorders. These findings support conduct disorder as a complex illness with extensive Axis I and II involvement as well as some gender differences in presentation. abstract_id: PUBMED:10428182 Axis I diagnostic comorbidity and borderline personality disorder. Borderline personality disorder (PD) has been the most studied PD. Research has examined the relationship between borderline PD and most axis I diagnostic classes such as eating disorders, mood disorders, and substance use disorders. However, there is little information regarding the relationship of borderline PD and overall comorbidity with all classes of axis I disorders assessed simultaneously. In the present study, 409 patients were evaluated with semistructured diagnostic interviews for axis I and axis II disorders. Patients with a diagnosis of borderline PD versus those who did not receive the diagnosis were assigned significantly more current axis I diagnoses (3.4 v 2.0). Borderline PD patients were twice as likely to receive a diagnosis of three or more current axis I disorders (69.5% v 31.1%) and nearly four times as likely to have a diagnosis of four or more disorders (47.5% v 13.7%). In comparison to nonborderline PD patients, borderline PD patients more frequently received a diagnosis of current major depressive disorder (MDD), bipolar I and II disorder, panic disorder with agoraphobia, social and specific phobia, posttraumatic stress disorder (PTSD), obsessive-compulsive disorder (OCD), eating disorder NOS, and any somatoform disorder. Similar results were observed for lifetime diagnoses.
Overall, borderline PD patients were more likely to have multiple axis I disorders than nonborderline PD patients, and the differences between the two groups were present across mood, anxiety, substance use, eating, and somatoform disorder categories. These findings highlight the importance of performing thorough evaluations of axis I pathology in patients with borderline PD in order not to overlook syndromes that are potentially treatment-responsive. Answer: The comorbidity of posttraumatic stress disorder (PTSD) and borderline personality disorder (BPD) has been examined in various studies, with some findings indicating that this comorbidity is associated with greater pathology and impairment. For instance, one study found that the lifetime comorbidity of BPD and PTSD was associated with more dysfunction than either individual disorder, with factors such as gender, age, and childhood sexual abuse (CSA) significantly affecting the prediction of this comorbidity and associated decrements in health-related quality of life (HRQOL) (PUBMED:26401313). This suggests that the comorbidity of BPD and PTSD can lead to greater suffering and public health burden than either disorder alone. Another study indicated that patients with somatization disorder who also had a history of trauma showed significant comorbidity with major depression, PTSD, dissociative disorder, and BPD (PUBMED:25214370). This further supports the notion that trauma-related psychiatric comorbidity, including BPD and PTSD, can lead to more complex clinical presentations and potentially greater impairment. However, one study specifically examining outpatients with BPD without PTSD, PTSD without BPD, comorbid BPD and PTSD, and major depression without either disorder found that the additional diagnosis of PTSD or BPD did little to augment the pathology or dysfunction of patients who had either disorder without the other (PUBMED:12411235). This suggests that the presence of comorbid BPD and PTSD may not necessarily result in a more severe clinical profile than having either disorder alone. In summary, while some research supports the idea that comorbidity of PTSD and BPD is related to greater pathology and impairment, other studies suggest that the additional diagnosis may not significantly increase the severity of symptoms or impairment compared to having just one of the disorders. It is important to consider the individual variability and the specific characteristics of the comorbid presentation when assessing the impact of these disorders on pathology and impairment.
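Several abstracts in this block rely on multivariable logistic regression; the DSM-5 AUD study, for example, models mild, moderate, and severe AUD against a no-AUD reference group while adjusting for covariates. A minimal statsmodels sketch of that model form, with an entirely hypothetical data frame and variable names, might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis data set: one row per participant.
rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    # 0 = no AUD (reference), 1 = mild, 2 = moderate, 3 = severe
    "aud_severity": rng.integers(0, 4, size=n),
    "binge_days":   rng.poisson(3, size=n),       # alcohol-use validator
    "craving":      rng.integers(0, 2, size=n),   # 0/1 indicator
    "mdd":          rng.integers(0, 2, size=n),   # major depression, 0/1
    "age":          rng.integers(18, 65, size=n),
})

X = sm.add_constant(df[["binge_days", "craving", "mdd", "age"]])
model = sm.MNLogit(df["aud_severity"], X)   # multinomial logit, category 0 as reference
fit = model.fit(disp=False)
print(fit.summary())

# Exponentiated coefficients approximate adjusted odds ratios for each
# severity level relative to the no-AUD reference group.
print(np.exp(fit.params))
```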
Instruction: Is employment status in adults over 25 years old associated with nonmedical prescription opioid and stimulant use? Abstracts: abstract_id: PUBMED:27858120 Is employment status in adults over 25 years old associated with nonmedical prescription opioid and stimulant use? Purpose: Nonmedical use of prescription opioids and stimulants (NMUPO and NMUPS, respectively) has declined in recent years, but remains an important public health problem. Evidence regarding their relationships with employment status remains unclear. We determined the relationship between employment status and NMUPO and NMUPS. Methods: We analyzed a cross-sectional, nationally representative, weighted sample of 58,486 adults, ages 26 years and older, using combined 2011-2013 data from the National Survey on Drug Use and Health (NSDUH). We fit two crude and two adjusted multivariable logistic regression models to assess the relationship between our two different outcomes of interest: (1) past-year NMUPO and (2) past-year NMUPS, and our exposure of interest: employment status, categorized as (1) full time, (2) part time, (3) unemployed, and (4) not in the workforce. Our adjusted models featured the following covariates: sex, race, age, marital status, psychological distress, and other nonmedical use. Results: Prevalence of NMUPO was higher than NMUPS (3.48 vs. 0.72%). Unemployed participants had the highest odds of NMUPO [aOR 1.45, 95% CI (1.15-1.82)], while those not in the workforce had the highest odds of NMUPS [aOR 1.71, 95% CI (1.22-2.37)]. Additionally, part-time and unemployed individuals had increased odds of NMUPS [aORs, 95% CI 1.59 (1.09-2.31) and 1.67 (1.11-2.37) respectively], while those not in the workforce had decreased odds of NMUPO [aOR 0.82, 95% CI (0.68-0.99)] relative to full-time participants. Conclusions: There is a need for adult prevention and deterrence programs that target nonmedical prescription drug use, especially among those unemployed or not in the workforce. abstract_id: PUBMED:37123433 Associations between prescription stimulant use as prescribed, nonmedical use, and illicit stimulant use among adults evaluated for substance use treatment, 2017-2021. Background: Limited data exist on risk factors for illicit stimulant use, including associations between prescription stimulant use/nonmedical use (NMU) and illicit stimulant use. Methods: We used 2017-2021 data from adults assessed for substance use disorder (SUD) treatment using the National Addictions Vigilance Intervention and Prevention Program Addiction Severity Index-Multimedia Version® tool. Multivariable Poisson regression models analyzed associations between past 30-day prescription stimulant use as prescribed or NMU and past 30-day illicit stimulant use. Separate models examined past 30-day illicit stimulant, methamphetamine, and cocaine use. We explored problem severity across seven biopsychosocial domains (e.g., drug, psychiatric, family) by past 30-day prescription stimulant use/NMU and illicit stimulant use. Results: Among 218,981 assessments, 1.8% reported prescription stimulant NMU; 1.6% reported use as prescribed. Past 30-day prescription stimulant NMU (vs. no use) was associated with past 30-day illicit stimulant use (adjusted prevalence ratio [aPR] [95% CI]: 2.67 [2.59, 2.75]), methamphetamine use (aPR: 2.81 [2.71, 2.92]), and cocaine use (aPR: 3.53 [3.33, 3.74]). Prescription stimulant use as prescribed (vs. no use) was associated with lower prevalence of past 30-day illicit stimulant use.
Assessments reporting prescription stimulant NMU (vs. no use, or use as prescribed) appeared more likely to have moderate-to-extreme problem scores across biopsychosocial domains, indicating greater need for treatment or assistance. Assessments reporting prescription stimulant use as prescribed or NMU frequently reported opioids, alcohol, or other substances as their primary substance problem. Conclusions: Adults using illicit stimulants/nonmedically using prescription stimulants may benefit from care addressing polysubstance use, mental health, social, and recovery support services. abstract_id: PUBMED:29747875 Young adults' opioid use trajectories: From nonmedical prescription opioid use to heroin, drug injection, drug treatment and overdose. Introduction: Recent research has begun to explore the transition from nonmedical use of prescription opioids (POs) to heroin and injection drug use, adding to earlier literature identifying factors that influence the transition from intranasal to injection use of heroin. However, little research has explored how these transitions are embedded within young people's broader opioid use trajectories - individual pathways that may also include experiences of nonfatal overdose and drug treatment. Methods: Data are from a study of 539 18-29 year-old New York City residents, recruited via Respondent-Driven Sampling, who reported past-month nonmedical use of POs and/or heroin use. Participants completed structured, computer-assisted interviews that included assessment of their ages at a series of "benchmark" events and experiences, including first use of a drug or route of administration, the onset of "regular" use of a drug (i.e., 1 or more times a week for at least 1 month), first overdose and first drug treatment. Results: Results suggest a predictable, ordered pathway by which opioid use tends to progress in this cohort of young adults. Participants initiated nonmedical PO use at age 16.8, on average, and most transitioned to heroin use (83%) and heroin injection (64%), generally within 4 years of first PO misuse. Drug treatment was not typically accessed until after participants had progressed to heroin use. First overdose occurred <1 year after first heroin use, on average. Conclusions: Findings may help inform the optimal timing for delivery of primary, secondary and tertiary prevention efforts targeting young opioid users. abstract_id: PUBMED:34938836 Characterizing prescription stimulant nonmedical use (NMU) among adults recruited from Reddit. Objective: Increased prescription stimulant nonmedical use (NMU) is part of a growing polysubstance use landscape. The purpose of the present study was to characterize prescription stimulant NMU among adults reporting past 5-year non-oral prescription stimulant NMU. Methods: Adults who reported non-oral prescription stimulant NMU within the last 5 years were recruited by banner ads placed on the Reddit website between February and September 2019. Types of prescription stimulants used, routes of administration, preferred routes of administration, motivations for prescription stimulant NMU, concurrent substances used simultaneously during prescription stimulant NMU, illicit substances used and factors impacting prescription stimulant NMU were queried. Results: Respondents (n = 225) were male (86.2%), non-Hispanic (92.4%), white (78.2%), between 18 and 24 (48.0%) or 25-34 (43.1%) years with some amount of college education (81.3%). Most reported lifetime intranasal (93.8%) or oral use (85.2%). 
Prescription stimulants were diverted: 64.5% reported the prescription stimulants were given to them by a family member or a friend and 10.5% reported that they had stolen these medications from a family member or friend. Preferred route of administration was oral use (70.2%). Motivations to use were stratified by route of administration: intranasal (55.6%) or oral (63.0%) use was primarily endorsed as an attempt to enhance performance at work or at school; use by injection (57.1%) or smoking (62.5%) was primarily endorsed to get high. Most of the sample reported concurrent drug use (79.1%) including tobacco (57.3%), marijuana (52.0%), caffeine (47.6%) or alcohol (41.8%), among others. When excluding licit substances, 30.7% reported using 1 illicit substance concurrently with prescription stimulants and 25.3% reported using 2 or more illicit substances concurrently with prescription stimulants. Whether participants would undertake prescription stimulant NMU was determined by their work/school schedules or the location of the NMU (48.9%), whereas the route of administration employed was primarily influenced by the desired feeling or effect (56.9%). Conclusions: Adults reporting lifetime non-oral prescription stimulant NMU engage in substantial risky behaviors that, in addition to alternate routes of administration, include polysubstance use, diversion and concurrent substance use. abstract_id: PUBMED:37101598 Quantitative Survey on Prevalence of Prescription Pain Medications and Stimulants Use in Young Adults. Background: Prescription drug use including pain relievers and stimulants has been a growing problem in young adults and the issue has been a public health concern for years. Methods: This quantitative cross-sectional study aimed to collect preliminary data regarding prescription opioid use, prescription stimulant drug use and overdose treatment knowledge in young adults between 18 and 24 years old in a southern New Jersey university through administering an online survey. Results: Of the 1663 students who completed the survey, 33% reported using prescription pain relievers and 15% reported using prescription stimulant drugs. Stimulant drug users were found to be more likely to use prescription pain relievers (49%) as compared to non-stimulant users (30%). Furthermore, students with overdose treatment knowledge of opioids were more likely to report misuse of prescription drugs (15%) than students with less knowledge (8%). Conclusions: This study reiterates the increasing usage of prescription drugs and stimulant use among college students. There is a need for effective education strategies to inform students about prescription medication use and misuse to reduce nonmedical use of prescription medications. abstract_id: PUBMED:25024105 Family ties: maternal-offspring attachment and young adult nonmedical prescription opioid use. Background: Nonmedical prescription drug use is prevalent among young adults, yet little is known about modifiable determinants of use. We examined whether maternal-offspring attachment reported at mean age 21 was associated with nonmedical prescription opioid use at mean age 26, and investigated whether a history of depressive symptoms and substance use played a role in associations between maternal-offspring attachment and nonmedical prescription opioid use. Methods: We used data from the Growing Up Today Study, a longitudinal cohort of United States adolescents followed into young adulthood.
Maternal-offspring attachment was reported by young adults and their mothers, and defined as mutual low, mutual medium or high, and dissonant. Analyses were carried out in the full sample using generalized estimating equation models, and in a sibling subsample, using conditional fixed effects models to control for stable aspects of the family environment. Results: Analyses with the full sample and the sibling subsample both showed that mutual medium/high maternal-offspring attachment at age 21 was associated with lower odds of nonmedical prescription opioid use at age 26 (RR=0.74; 95% CI=0.57-0.97 in full sample). The association was partly mediated by mean age 23 offspring smoking, heavy episodic drinking, and illicit drug use. Conclusions: Promoting reciprocal attachment in the maternal-offspring dyad should be investigated as a strategy to prevent nonmedical prescription opioid use by young adulthood. Even in young adulthood, programs that target both parents and offspring may have greater impact on offspring substance use than programs that target offspring alone. abstract_id: PUBMED:31606591 Sex differences in nonmedical prescription tranquilizer and stimulant use trends among secondary school students in Argentina, Chile, and Uruguay. Background: Little is known about recent nonmedical prescription tranquilizer and stimulant use trends in Latin America. We tested whether recent trends among students in three South American countries differed by sex over time. Methods: Three countries independently collected National School Students Survey on Drugs. Students in 8th, 10th, and 12th grades were sampled in Argentina (2007-2014, N = 328,202), Chile (2007-2015, N = 136,379), and Uruguay (2007-2016, N = 32,371). Weighted linear regression models predicted the prevalences and trends over time of past-year nonmedical tranquilizer and stimulant use by country, and tested whether trends differed by sex, adjusting for school type and grade. Results: In Argentina from 2007 to 2014, past-year nonmedical prescription tranquilizer (girls: 2.8 to 2.6%, boys: 2.5 to 2.3%) and stimulant (girls: 1.7 to 1.3%, boys: 1.9 to 1.5%) use trends did not differ by sex. In Chile from 2007 to 2015, nonmedical prescription tranquilizer use trends significantly differed comparing girls (3.9 to 10%) with boys (3.2 to 6.9%); stimulant use trends did not differ comparing girls (1.6 to 2.0%) with boys (2.0 to 1.3%). In Uruguay from 2007 to 2014 and 2014-2016, past-year nonmedical prescription tranquilizer (girls: 5.1 to 6.6%; boys: 2.8 to 4.2%) and stimulant (girls: 1.8 to 0.7%; boys: 1.8 to 0.7%) use trends did not differ by sex. Conclusions: Trends of nonmedical prescription tranquilizer use recently increased in Chile and Uruguay, widening by sex over time in Chile only. The drivers of increasing tranquilizer use among girls in Chile and Uruguay merit further investigation. abstract_id: PUBMED:27163188 Trends in older adult nonmedical prescription drug use prevalence: Results from the 2002-2003 and 2012-2013 National Survey on Drug Use and Health. Background: Based on projections of increasing older adult nonmedical prescription drug use (NMPDU) prevalence, we investigated whether increases had occurred in opioid, tranquilizer and stimulant NMPDU in older adults from 2002-2003 to 2012-2013, using the National Survey on Drug Use and Health (NSDUH). 
Methods: The NSDUH is a nationally representative survey of the US population, with assessments of lifetime, past-year and past 30-day NMPDU from opioids, tranquilizers and stimulants. Weighted cross-tabulations were used to compute prevalence rates, and design-based logistic regressions were used to examine change in NMPDU. Regressions controlled for gender, race/ethnicity and population density of respondent residence. Results: Across medication classes, lifetime NMPDU rates increased in all older adults and two sub-groups: those aged 50 to 64 and those 65 years and older. Rates of past-year opioid NMPDU also increased from 2002-2003 to 2012-2013 in all examined age ranges. Trend-level results were also found for past 30-day opioid NMPDU and past-year tranquilizer NMPDU in adults aged 50 years and older. Conclusions: The results support projections of increasing older adult NMPDU rates. As NMPDU in older adults may impart greater risk for adverse events, public health efforts are needed to reverse the increases in older adult NMPDU. abstract_id: PUBMED:37728974 Differing Behaviors Around Adult Nonmedical Use of Prescription Stimulants and Opioids: Latent Class Analysis. Background: The availability of central nervous system stimulants has risen in recent years, along with increased dispensing of stimulants for treatment of, for example, parent-reported attention-deficit/hyperactivity disorder in children and new diagnoses during adulthood. Typologies of drug use, as has been done with opioids, fail to include a sufficient range of behavioral factors to contextualize person-centric circumstances surrounding drug use. Understanding these patterns across drug classes would bring public health and regulatory practices toward precision public health. Objective: The objective of this study was to quantitatively delineate the unique behavioral profiles of adults who currently nonmedically use stimulants and opioids using a latent class analysis and to contrast the differences in findings by class. We further evaluated whether the subgroups identified were associated with an increased Drug Abuse Screening Test-10 (DAST-10) score, which is an indicator of average problematic drug use. Methods: This study used a national cross-sectional web-based survey, using 3 survey launches from 2019 to 2020 (before the COVID-19 pandemic). Data from adults who reported nonmedical use of prescription stimulants (n=2083) or prescription opioids (n=6127) in the last 12 months were analyzed. A weighted latent class analysis was used to identify the patterns of use. Drug types, motivations, and behaviors were factors in the model, which characterized unique classes of behavior. Results: Five stimulant nonmedical use classes were identified: amphetamine self-medication, network-sourced stimulant for alertness, nonamphetamine performance use, recreational use, and nondiscriminatory behaviors. The drug used nonmedically, acquisition through a friend or family member, and use to get high were strong differentiators among the stimulant classes. The latter 4 classes had significantly higher DAST-10 scores than amphetamine self-medication (P<.001). In addition, 4 opioid nonmedical use classes were identified: moderate pain with low mental health burden, high pain with higher mental health burden, risky behaviors with diverse motivations, and nondiscriminatory behaviors. There was a progressive and significant increase in DAST-10 scores across classes (P<.001).
The potency of the opioid, pain history, the routes of administration, and psychoactive effect behaviors were strong differentiators among the opioid classes. Conclusions: A more precise understanding of how behaviors tend to co-occur would improve efficacy and efficiency in developing interventions and supporting the overall health of those who use drugs, and it would improve communication with, and connection to, those at risk for severe drug outcomes. abstract_id: PUBMED:25012553 Nonmedical prescription stimulant use among youth in the emergency department: prevalence, severity and correlates. This study examined the prevalence, severity and correlates of nonmedical prescription stimulant use (NPSU) among youth in the emergency department (ED). Participants 14-20 years old presenting to the ED completed a survey. A multinomial logistic regression was used to compare those without NPSU, with mild NPSU and with moderate/severe NPSU on demographics, risk factors and ED utilization. There were 4389 participants; 8.3% reported past-year NPSU and 44% of those with past 3-month NPSU reported at least monthly use. After controlling for demographics, participants with mild NPSU or moderate/severe NPSU had higher odds of all substance use risk factors compared to those with no NPSU. Also, those with moderate/severe NPSU were more likely to report dating violence and nonmedical use of opioids or sedatives and less likely to use marijuana compared to those with mild NPSU. Healthcare setting screening and intervention efforts should consider NPSU concomitant with other substance use and explore the association of dating violence with NPSU. Answer: Yes, employment status in adults over 25 years old is associated with nonmedical prescription opioid and stimulant use. According to the analysis of data from the National Survey on Drug Use and Health (NSDUH), unemployed participants had the highest odds of nonmedical use of prescription opioids (NMUPO), while those not in the workforce had the highest odds of nonmedical use of prescription stimulants (NMUPS). Additionally, part-time and unemployed individuals had increased odds of NMUPS, while those not in the workforce had decreased odds of NMUPO relative to full-time participants (PUBMED:27858120).
Instruction: Pre-deployment stress briefing: does it have an effect? Abstracts: abstract_id: PUBMED:35866414 Military personnel's stress reactivity during pre-deployment in a war zone. Many studies have been conducted on the numerous negative post-deployment outcomes for military personnel. However, data on service members' pre-deployment stress reactivity are absent. This is a serious gap in existing research, as stress has an important regulatory role. This study aimed to determine possible manifestations of military personnel's stress reactivity during pre-deployment in a war zone in eastern Ukraine. The study involved 270 Ukrainian service members (all male, aged 18 to 58 years). Sample 1 (n = 108) were preparing to be deployed for the first time, sample 2 (n = 84) were preparing to be deployed and had previous experience of deployment, and sample 3 (n = 108) were not preparing to deploy and had no previous deployment experience. We used the Ukrainian adaptation of the Giessen Subjective Complaints List (GBB-24), the Symptom Checklist-90-Revised (SCL-90-R), the Short Screening Scale for DSM-IV posttraumatic stress disorder and the Combat Exposure Scale (CES). We found that indicators for physical complaints, psychological problems and psychopathological and posttraumatic symptoms among service members from samples 1 and 2 were significantly higher than those of sample 3 in 15 of 18 cases (p < 0.001-0.05). The indicators obtained for sample 2 were higher than those of sample 1 in six of nine cases (p < 0.001-0.05). Both our study hypotheses were confirmed. The present findings can be used to develop efficient psychological interventions for military personnel during pre-deployment in a war zone. abstract_id: PUBMED:29707166 Pre-deployment dissociation and personality as risk factors for post-deployment post-traumatic stress disorder in Danish soldiers deployed to Afghanistan. Objective: This study investigated whether pre-deployment dissociation was associated with previously identified post-traumatic stress disorder (PTSD) symptom trajectories from before to 2.5 years after military deployment. Furthermore, it examined whether the tendency to dissociate, pre-deployment personality factors, conceptualized by the Big Five model, and previous trauma represented independent risk factors for post-deployment PTSD symptoms. Method: This prospective study included the entire team of 743 soldiers from the Danish Contingent of the International Security Assistance Force 7 deployed to Afghanistan in 2009. Data consisted of self-report measures and were collected six times: before deployment; during deployment; and 1-3 weeks, 2 months, 7 months and 2.5 years after homecoming. Results: The findings indicate significant associations between pre-deployment dissociation and six PTSD trajectories (p < 0.001, η2 = 0.120). Based on mean differences in dissociation for the six trajectories, two main groups emerged: a group with high dissociation scores at pre-deployment, which had moderate PTSD symptom levels at pre-deployment and fluctuated over time; and a group with low dissociation scores at pre-deployment, which had low initial PTSD symptom levels and diverged over time. Our study also confirmed previous findings of a positive association between neuroticism and dissociation (r = 0.31, p < 0.001). This suggests that negative emotionality may be a vulnerability that enhances dissociative experiences, although a causal link cannot be concluded from the findings. 
Finally, pre-deployment dissociation, pre-deployment neuroticism and a history of traumatic events, as independent factors, were significant predictors of post-deployment PTSD (p < 0.001, R2 = 0.158). Conclusions: The study emphasizes the multiplicity of factors involved in the development of PTSD, and group differences in dissociative symptoms support the heterogeneity in PTSD. Further, this study points to specific aspects of personality that may be targeted in a clinical setting and in pre-deployment assessments in the military. abstract_id: PUBMED:26921867 The influence of pre-deployment cognitive ability on post-traumatic stress disorder symptoms and trajectories: The Danish USPER follow-up study of Afghanistan veterans. Objective: New trajectories of PTSD symptoms have recently been identified in war exposed army veterans. The aim of this army veterans study was to examine whether pre-deployment cognitive ability is associated with the risk of developing PTSD symptoms or non-resilient PTSD trajectories. Method: Follow up study in 428 Danish soldiers, deployed to Afghanistan in 2009, who were assessed at six occasions from pre-deployment to three years post-deployment. Pre-deployment vulnerabilities, deployment and homecoming stressors were measured. Pre-deployment cognitive test scores on Børge Priens Prøve (based on logical, verbal, numerical and spatial reasoning) were converted to a mean of 100 and with a standard deviation of 15. Results: Higher pre-deployment cognitive ability scores were associated with lower risk of PTSD symptoms as assessed by the Posttraumatic Stress Disorder Checklist-Civilian Version (PCL-C) 2.5 years post-deployment (OR=0.97; 95% CI 0.95-1.00) after adjustment for educational length, baseline PCL-C score and perceived war-zone stress. Compared to a resilient trajectory, a non-resilient relieved-worsening trajectory (high baseline mental symptoms, being symptom free during deployment and a drastic increase in PTSD symptoms at the final assessments of PTSD symptoms) had significantly lower cognitive scores by a mean difference of 14.5 (95% CI 4.7-24.3). This trajectory (n=9) comprised 26.5% of soldiers with moderate-severe PTSD symptoms 2.5 years post-deployment. Conclusion: We confirmed an inverse association between pre-deployment cognitive ability and risk of PTSD symptoms, and observed significantly lower mean pre-deployment cognitive scores in one non-resilient PTSD trajectory. If replicated, this might inform relevant prevention efforts for soldiers at pre-deployment. abstract_id: PUBMED:37209616 Early prediction of mental health problems following military deployment: Integrating pre- and post-deployment factors in neural network models. Military personnel deployed to war zones are at increased risk of mental health problems such as posttraumatic stress disorder (PTSD) or depression. Early pre- or post-deployment identification of those at highest risk of such problems is crucial to target intervention to those in need. However, sufficiently accurate models predicting objectively assessed mental health outcomes have not been put forward. In a sample consisting of all Danish military personnel who deployed to war zones for the first (N = 27,594), second (N = 11,083) and third (N = 5,161) time between 1992 and 2013, we apply neural networks to predict psychiatric diagnoses or use of psychotropic medicine in the years following deployment. 
Models are based on pre-deployment registry data alone or on pre-deployment registry data in combination with post-deployment questionnaire data on deployment experiences or early post-deployment reactions. Further, we identified the most central predictors of importance for the first, second, and third deployment. Models based on pre-deployment registry data alone had lower accuracy (AUCs ranging from 0.61 (third deployment) to 0.67 (first deployment)) than models including pre- and post-deployment data (AUCs ranging from 0.70 (third deployment) to 0.74 (first deployment)). Age at deployment, deployment year and previous physical trauma were important across deployments. Post-deployment predictors varied across deployments but included deployment exposures as well as early post-deployment symptoms. The results suggest that neural network models combining pre- and early post-deployment data can be utilized for screening tools that identify individuals at risk of severe mental health problems in the years following military deployment. abstract_id: PUBMED:22253233 Combat experiences, pre-deployment training, and outcome of exposure therapy for post-traumatic stress disorder in Operation Enduring Freedom/Operation Iraqi Freedom veterans. The association between exposure to multiple potentially traumatic events (PTEs) and subsequent increased risk of post-traumatic stress disorder (PTSD) is well established. However, less is known about the relation between exposure to numerous PTEs, as is typical with military service, and treatment outcome. Furthermore, there has been little research examining military specific protective factors, such as pre-deployment preparedness, on PTSD treatment response. The current study investigated combat exposure and potential moderators of treatment outcome for exposure therapy in Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) veterans with PTSD. One hundred and eleven OEF/OIF veterans diagnosed with PTSD participated in 8 weeks of exposure therapy. Results indicated that increased combat exposure was associated with a reduced rate of change in PTSD symptoms but not depression symptoms. These findings were consistent across two measures of combat exposure. There was preliminary support for the moderating effect of pre-deployment preparedness on the association between combat exposure and treatment response. Together, these findings suggest that increased combat exposure is associated with poor treatment response in veterans with PTSD; however, this can be reduced by elevated pre-deployment preparedness. abstract_id: PUBMED:31317295 Finite element evaluation of artery damage in deployment of polymeric stent with pre- and post-dilation. Using finite element method, this paper evaluates damage in an arterial wall and plaque caused by percutaneous coronary intervention. Hyperelastic damage models, calibrated with experimental results, are used to describe stress-stretch responses of arterial layers and plaque; these models are capable to simulate softening behaviour of the tissue due to damage. Abaqus CAE is employed to create the finite element models for the artery wall (with media and adventitia layers), a symmetric uniform plaque, a bioresorbable polymeric stent and a tri-folded expansion balloon. The effect of percutaneous coronary intervention on vessel damage is investigated by simulating the processes of vessel pre-dilation, stent deployment and post-stenting dilation. 
Energy dissipation density is used to assess the extent of damage in the tissue. Softening of the plaque and the artery, due to the pre-dilation-induced damage, can facilitate the subsequent stent deployment process. The plaque and the artery experienced heterogeneous damage behaviour after the stent deployment, caused by non-uniform deformation. The post-stenting dilation was effective to achieve a full expansion of the stent, but caused additional damage to the artery. The continuous and discontinuous damage models yielded similar results in the percutaneous coronary intervention simulations, while the incorporation of plaque rupture affected the simulated outcomes of stent deployment. The computational evaluation of the artery damage can be potentially used to assess the risk of in-stent restenosis after percutaneous coronary intervention. abstract_id: PUBMED:25128222 The effect of deployment to a combat zone on testosterone levels and the association with the development of posttraumatic stress symptoms: A longitudinal prospective Dutch military cohort study. Objective: There is limited evidence on the association of the activity of the HPG axis with stress and symptoms of stress-related disorders. The aim of the current study was to assess the effect of deployment to a combat zone on plasma testosterone levels, and the possible association with the development of symptoms of posttraumatic stress disorder (PTSD). Methods: A total of 918 males were included in the study before deployment to a combat zone in Afghanistan. The effect of deployment on testosterone was longitudinally assessed, starting prior to deployment, and follow-up assessments were performed at 1 and 6 months after return. Furthermore, the association with PTSD symptoms reported at 1 and 2 years post-deployment was assessed. Results: Plasma testosterone levels were significantly increased after deployment compared with pre-deployment levels. Although no difference was found between individuals reporting high or low levels of PTSD symptoms, pre-deployment testosterone levels predicted the development of PTSD symptoms at 1 and 2 years post-deployment. Conclusion: This study provides evidence that not the alterations in testosterone levels shortly after deployment, but the pre-deployment testosterone levels are associated with PTSD symptoms, which is of value in the identification of biological vulnerability factors for the development of PTSD. abstract_id: PUBMED:18032390 Pre-deployment stress briefing: does it have an effect? Background: The role of giving information about stress and stress reactions to people about to be exposed to hazardous situations remains unclear. Such information might improve coping and hence resilience. Alternatively, it might increase the expectancy of experiencing adverse psychological consequences following exposure to a hazard. Aim: To determine the effect of a pre-operational stress briefing on health and occupational indices among Naval and Marine personnel who were subsequently deployed to the 2003 Iraq War. Method: Controlled, non-randomized, parallel group study. Mental health outcomes post-deployment were compared between those who received a pre-operational stress briefing and those who did not receive such a briefing. Results: Stress briefing attendees were slightly younger, more likely to be marines and to have been exposed to traumatic events than non-attendees.
There were no significant differences between the two groups for the health outcomes of common mental health disorders, post-traumatic stress disorder or alcohol misuse. Attendees reported higher morale/cohesion but these differences disappeared following adjustment for demographic and military factors. No differences between the two groups were apparent for experiencing problems during or post-deployment or for marital satisfaction. Conclusions: We found no evidence that a pre-deployment stress briefing reduced subsequent medium-term psychological distress. On the other hand, we found no evidence of harm either. While only a randomized trial can give genuinely unbiased results, at present stress briefing must be regarded as an unproven intervention, and it remains a matter of judgement as to whether or not it is indicated. abstract_id: PUBMED:38049477 Cumulative trauma load and timing of trauma prior to military deployment differentially influences inhibitory control processing across deployment. Military personnel experience high trauma load that can change brain circuitry leading to impaired inhibitory control and posttraumatic stress disorder (PTSD). Inhibitory control processing may be particularly vulnerable to developmental and interpersonal trauma. This study examines the differential role of cumulative pre-deployment trauma and timing of trauma on inhibitory control using the Go/NoGo paradigm in a military population. The Go/NoGo paradigm was administered to 166 predominately male army combat personnel at pre- and post-deployment. Linear mixed models analyze cumulative trauma, trauma onset, and post-deployment PTSD symptoms on NoGo-N2 and NoGo-P3 amplitude and latency across deployment. Here we report NoGo-N2 amplitude increases and NoGo-P3 amplitude and latency decreases in those with high prior interpersonal trauma across deployment. Increases in NoGo-P3 amplitude following adolescent-onset trauma and NoGo-P3 latency following childhood-onset and adolescent-onset trauma are seen across deployment. Arousal symptoms positively correlated with conflict monitoring. Our findings support the cumulative trauma load and sensitive period of trauma exposure models for inhibitory control processing in a military population. High cumulative interpersonal trauma impacts conflict monitoring and response suppression and increases PTSD symptoms, whereas developmental trauma differentially impacts response suppression. This research highlights the need for tailored strategies for strengthening inhibitory control that consider the timing and type of trauma in military personnel. abstract_id: PUBMED:33845256 Pre-deployment personality traits predict prescription opioid receipt over 2-year post-deployment period in a longitudinal cohort of deployed National Guard soldiers. Background: While military service members are at risk for pain conditions, receipt of prescribed opioids is associated with a range of serious adverse outcomes. The goal of this study is to examine the association between pre-deployment personality traits and receipt of prescription opioids after return from deployment. Method: Data were drawn from the Readiness and Resilience in National Guard Soldiers (RINGS) cohort study, an ongoing study of post-deployment health. Participants (N = 522) completed baseline assessments one month prior to deploying to Iraq (2006-2007).
At baseline, we assessed personality traits using abbreviated versions of the Personality Psychopathology Five scales from the Minnesota Multiphasic Personality Inventory-2. Follow-up assessments were conducted three months, one year, and two years post-deployment. The primary outcome was total amount of prescribed opioids dispensed from Department of Veterans Affairs outpatient pharmacies in the two-year period following soldiers' return from deployment. Unadjusted and adjusted negative binomial regression models examined the relationships of pre-deployment personality traits, demographics (age, gender, and rank), baseline trauma symptoms, deployment-related risk factors (difficult living/working environment, deployment injury, combat exposure), and post-deployment trauma symptoms with post-deployment opioid prescribing. Results: Disconstraint, negative emotionality, and introversion/low positive emotionality were associated with receipt of more prescribed opioids over the two years after return from deployment. Personality traits measured at baseline remained statistically significant after adjusting for all eight baseline and deployment risk factors of interest. Conclusions: Understanding how pre-deployment personality traits contribute to post-deployment prescription opioid use could inform efforts to improve veterans' health. Answer: The effect of pre-deployment stress briefing on health and occupational indices among military personnel deployed to a combat zone was investigated in a study that aimed to determine whether such briefings could improve coping and resilience or, conversely, increase the expectancy of experiencing adverse psychological consequences following exposure to a hazard. The study was a controlled, non-randomized, parallel group study comparing mental health outcomes post-deployment between those who received a pre-operational stress briefing and those who did not. The findings indicated that there were no significant differences between the two groups for the health outcomes of common mental health disorders, post-traumatic stress disorder, or alcohol misuse. Attendees of the stress briefing reported higher morale/cohesion, but these differences disappeared after adjusting for demographic and military factors. There were also no differences between the two groups for experiencing problems during or post-deployment or for marital satisfaction. The study concluded that there was no evidence that a pre-deployment stress briefing reduced subsequent medium-term psychological distress, nor was there evidence of harm. The intervention remains unproven, and whether or not it is indicated is a matter of judgment (PUBMED:18032390).
Instruction: Microscopic neoplastic thrombosis in localised nephroblastoma: does it influence outcome? Abstracts: abstract_id: PUBMED:21129949 Microscopic neoplastic thrombosis in localised nephroblastoma: does it influence outcome? Introduction: Microscopic neoplastic thrombosis (MNT) is reported to occur frequently in Wilms tumour (WT). The aim of this study is to determine whether MNT influences prognosis in localised WT. Patients And Methods: Records and slides of 80 consecutive, unselected, localised WT patients were retrospectively reviewed. All patients received chemotherapy before surgery according to SIOP Protocol. The median follow-up was 9 years (range 0.5-25.8). The Kaplan-Meier method and the Cox proportional hazard model were applied. Results: MNT was present in 14 (18%) cases. Out of 14 patients with MNT, 6 presented macroscopic thrombosis and 5 had either blastemal predominance or anaplastic histology. The 5-year overall survival (OS) and progression-free survival (PFS) for the whole population were 95% (95% confidence interval, CI, 87-98%) and 91% (95% CI 82-96%), respectively. The 5-year OS and PFS for MNT positive patients were 92% (95% CI 57-99%) and 77% (95% CI 44-92%), while the 5-year OS and PFS for MNT negative patients were 96% (95% CI 87-99%) and 94% (95% CI 85-98%), respectively; the difference was statistically significant (p<0.05) for PFS. In multivariate analysis, only the presence of anaplasia retained significance with a hazard ratio (HR) of 14.8 and 12.9 (p<0.05) for recurrence and death, respectively. Conclusion: These data suggest that the presence of MNT increases the risk of recurrence. MNT is associated with well-known prognostic factors, such as macroscopic thrombosis (possibly representing regression of macroscopic involvement) and anaplasia. Further prospective studies are needed to clarify the role of MNT as independent prognostic factor. abstract_id: PUBMED:2155790 Renal vein involvement with nephroblastoma: a report of the National Wilms' Tumor Study-3. The records of children enrolled in the National Wilms' Tumor Study-3 who had Wilms' tumor and thrombosis into the renal vein were reviewed. There were 164 patients with gross involvement of the extrarenal vein and 47 patients with microscopic involvement, an incidence of 11.3% (211 of 1,865 evaluable patients). Two-year survival rates were 90, 79 and 72% for stages II, III and IV, respectively. The most important predictors of survival were histologic pattern and stage. Complete en bloc excision of the tumor and thrombus continues to be the most effective initial management. abstract_id: PUBMED:8600341 Wilms' tumours with intracaval involvement. Since Gross established the basic rules for nephrectomy in Wilms' tumour (WT) in 1953, the management of nephroblastoma has been more straightforward. However, some cases with intravascular involvement, currently detected by ultrasound (US), may represent a daunting challenge for the surgeon. Inferior vena cava with tumour thrombus, induced by WT, can be asymptomatic and, if undetected, can contribute to poorer prognosis for two main reasons: possible neoplastic cells inside the thrombus and higher morbidity risk of surgery. From 1979 to 1993, 81 WT were studied by routine US. Intracaval thrombosis was diagnosed in four (5%), in one of which the thrombus extended to the right atrium. 
In our experience, the surgical strategy in each of the four cases (100% survival) depended on the length of the thrombus and whether or not it infiltrated the vena cava wall. If the thrombus can be easily removed: complete resection. However, in cases of atrial thrombus, and more particularly if the thrombus involves the intima, we suggest the thrombus not be touched since the problem may be solved by preoperative and postoperative chemotherapy. Thus the favourable prognosis would be maintained and superfluous risky surgery avoided. abstract_id: PUBMED:7974082 Neoplastic caval and intracardiac thrombosis secondary to reno-adrenal tumors. One-stage surgical treatment in deep hypothermia and cardiocirculatory arrest. Due to the considerable progress made by instrumental total body diagnostics (ECO, CAT, RMN, angiography, etc.) in recent years, heart surgery has increasingly often been used to treat pathologies which are not primarily cardiac but which see the involvement of the heart and large vessels in the advanced stages of cancer and non-cancer diseases of other organs or apparatus. This is the case of malignant renal or adrenal tumours which infiltrate along the caval lumen until they reach the right atrium. In these cases caval and atrial involvement must be seen as a prolapse of the tumour and not a long-distance metastasis: prognosis only appears to be linked to the hemodynamic impairment caused by the obstacle to systemic lower venous drainage. On the basis of this observation radical surgery may be justified at a renal, caval and cardiac level. The authors report their preliminary experience in 6 patients with renal cancer (4 renal carcinoma, 1 Wilms' tumour, 1 adrenal carcinoma) who underwent combined surgery, in a single stage, involving enlarged nephrectomy and caval and atrial thrombectomy, the latter performed in profound hypothermia and cardiocirculatory arrest. Two patients died later and 4 are living, in good condition and with perviousness of the lower caval venous drainage. Similar to other analogous experience reported in the literature, the authors suggest taking a combined approach performed in a single stage into consideration for these patients. abstract_id: PUBMED:31575285 Wilms tumor with extensive cardiac thrombus: a multidisciplinary approach. N/A abstract_id: PUBMED:12478107 Experience with an elective vacuum assisted cardiopulmonary bypass in the surgical treatment of renal neoplasms extending into the right atrium. Purpose: We evaluate the results of an elective cardiopulmonary bypass conceived to minimize the surgical risk related to its use with temporary circulatory arrest and deep hypothermia in the treatment of patients with renal tumor extending into the right atrium. Materials And Methods: From July 1996 to December 2000, 19 patients with renal neoplasm and venous involvement were admitted to our department. Three patients, 4, 57 and 58 years old, with a right (2) and left (1) renal tumor extending into the right atrium underwent radical nephrectomy and tumor thrombus removal using a normothermic cardiopulmonary bypass. The bypass circuit was connected with a vacuum assisted venous drainage giving a negative pressure of 20 to 40 mm Hg. Neither circulatory arrest nor hypothermia was used. Tumor thrombus was extracted through a longitudinal "cavotomy" and removed along with the kidney. Results: Total cardiopulmonary bypass time was 14, 19 and 22 minutes, respectively. No intraoperative or postoperative complications due to surgical technique occurred.
No significant bleeding was observed at the time of cavotomy and all neoplastic tissue was removed. Pathological examination documented renal cell carcinoma in 2 cases and Wilms tumor in 1. All the patients are alive 30, 42 and 15 months, respectively, after the operation. Conclusions: Normothermic cardiopulmonary bypass with vacuum assisted venous drainage makes circulatory arrest and hypothermia unnecessary and avoids the potential complications associated with these procedures. With respect to veno-venous shunts this technique guarantees complete surgical control of the thrombus and avoids the need for extensive dissection of the retrohepatic vena cava and Pringle maneuver. abstract_id: PUBMED:8232079 Wilms' tumor involving the inferior vena cava: preoperative evaluation and management. Neoplastic invasion of the inferior vena cava due to renal tumors (especially Wilms' tumor) is uncommon in children. The tumor thrombus, according to the aggressiveness of the original neoplasm, can extend in diverse ways, obliterate the vascular lumen completely, and even reach the right atrium. The luminal thrombus might be accompanied by the involvement of the caval wall, which requires wide vascular resection. The purpose of this paper is to present our experience with 7 children, aged between 18 months and 6 years, affected by caval invasion due to Wilms' tumor. Furthermore, the diagnostic techniques and the surgical treatment in simple caval thrombosis and in associated invasion of the caval wall are described. abstract_id: PUBMED:11093345 Enormous tumour in the right atrium of a Wilms' tumour patient. Formation of neoplastic thrombus in vena cava inferior (VCI) is well known in Wilms' tumour. We demonstrate a case of a large neoplastic mass in the right atrium as a prolongation from the vena cava inferior in a girl aged 2 years and 2 months with Wilms' tumour. The long preoperative chemotherapy (7 months) reduced the tumour and the thrombus. The surgical removal of the tumour and the thrombus was followed by further chemotherapy. During postoperative chemotherapy the control USG examinations showed the presence of a stable mass in the caval wall (a neoplastic mass or a scar?). The extension of the thrombus into the right atrium remains a therapeutic problem which has not yet been resolved. abstract_id: PUBMED:11051151 Inferior vena cava and right atrial thrombosis in children with nephroblastoma: diagnostic and therapeutic problems. Background: The neoplastic thrombus in Wilms' tumor can rarely extend into the inferior vena cava or to the right atrium. The neoplastic thrombus usually is diagnosed concurrently with the tumor, although in some cases the diagnosis of the thrombus may precede the diagnosis of nephroblastoma. Methods: Among 90 children with Wilms' tumor who were treated in the authors' unit, 4 had extensive tumor thrombosis of the inferior vena cava or the right atrium. One of these patients was found with a life-threatening thrombosis of the inferior vena cava and the right atrium, which was treated surgically; in this case, the diagnosis of nephroblastoma was made postoperatively. As for the 3 remaining patients, the diagnosis of neoplastic thrombosis and Wilms' tumor was made simultaneously. Results: In the first case, the patient underwent surgical excision of the thrombus with cardiopulmonary bypass and a short period of hypothermic cardiopulmonary arrest. In the other 3 cases the thrombus resolved with chemotherapy only.
Conclusions: Surgical excision of extensive neoplastic thrombosis is suggested in the case of life-threatening thrombosis even with cardiopulmonary bypass. Chemotherapy is suggested in cases lacking clinical symptoms of thrombosis. abstract_id: PUBMED:23414872 Management of nine cases of Wilms' tumour with intracardiac extension - a single centre experience. Aim: The aim of this study was to review the management of children with Wilms' tumour who have intracardiac extension. Patients And Methods: Data were collected from patient notes regarding presentation, operative details, and outcome. Results: From 1984 through 2011, 264 children with Wilms' tumour were treated at our hospital. Nine (3.4%) had cavo-atrial extension of the tumour thrombus. The thrombus extended into the right ventricle in two children and involved the hepatic veins in both of them, as well as in two others. Pre-operative chemotherapy was administered in eight children with complete regression of the intra-cardiac tumour thrombus in two cases. One child died preoperatively of septicaemia and respiratory failure after two doses of chemotherapy. Six children with intra-cardiac tumour were operated on under cardiopulmonary bypass (CPB) with deep hypothermia and circulatory arrest (DHCA). The mean ischemic time was 30 min. There was one peri-operative death in a child with hepatic vein involvement and Budd-Chiari syndrome. All others made a good postoperative recovery. All tumours were of favourable histology. To date, four children are still alive and disease free. Three children have died as a result of pulmonary metastases. Conclusion: Intracardiac extension of Wilms' tumour is rare, and the management is technically challenging. Pre-operative chemotherapy is effective. CPB and DHCA for excision of the cavo-atrial tumour thrombus may be necessary. Distant metastatic disease is common and determines long-term prognosis. Hepatic vein extension complicates surgery and remains challenging. Answer: Microscopic neoplastic thrombosis (MNT) in localized nephroblastoma, also known as Wilms tumor (WT), has been reported to occur frequently and is a subject of interest regarding its influence on patient outcomes. A study that aimed to determine whether MNT influences prognosis in localized WT found that MNT was present in 18% of cases (14 out of 80 patients). These patients received chemotherapy before surgery according to the SIOP Protocol, and the median follow-up was 9 years. The study revealed that the 5-year overall survival (OS) and progression-free survival (PFS) for MNT positive patients were 92% and 77%, respectively, while for MNT negative patients, the figures were 96% and 94%, respectively. The difference in PFS was statistically significant, suggesting that the presence of MNT increases the risk of recurrence. MNT was also associated with other well-known prognostic factors, such as macroscopic thrombosis and anaplasia. In multivariate analysis, the presence of anaplasia was significant, with a hazard ratio (HR) of 14.8 and 12.9 for recurrence and death, respectively. The study concluded that further prospective studies are needed to clarify the role of MNT as an independent prognostic factor (PUBMED:21129949). In summary, the presence of MNT in localized nephroblastoma is associated with an increased risk of recurrence and is linked to other adverse prognostic factors. However, its role as an independent prognostic factor requires further investigation.
Instruction: Can carotid endarterectomy improve metabolic status in patients with asymptomatic internal carotid artery flow lesion? Abstracts: abstract_id: PUBMED:12218960 Can carotid endarterectomy improve metabolic status in patients with asymptomatic internal carotid artery flow lesion? Studies with localized in vivo proton magnetic resonance spectroscopy. Purpose: Carotid endarterectomy (CEA) has proven to be effective in the prevention of stroke in patients with significant internal carotid artery (ICA) stenosis. However, whether increased cerebral blood flow after CEA improves the cerebral metabolism in patients with asymptomatic ICA flow lesions is unknown. Localized in vivo proton magnetic resonance spectroscopy ((1)H-MRS) has been used to measure the metabolic status of the human brain in a totally noninvasive manner. The aim of this study was to investigate the cerebral metabolism after CEA in patients with asymptomatic ICA flow lesions and no visible infarction on magnetic resonance imaging (MRI). Materials And Methods: We designed a prospective study to investigate the metabolic changes in the middle cerebral artery (MCA) territory with (1)H-MRS for 18 consecutive patients with asymptomatic severe stenosis of the ICA (>70% reduction in diameter) and for 16 healthy control subjects. The 18 patients with ICA flow lesion and no visible infarction on MRI who underwent CEA were evaluated before and 7 days after surgery (CEA group). The 16 control subjects had never had a cerebral event, and brain MRI and carotid duplex scan study results were normal in all (control group). Results: Preoperative ICA volume flow was severely decreased to less than 150 mL/min in all 18 patients, in comparison with our laboratory normal value of matched age group of 250 to 300 mL/min. After CEA, ICA volume flow was increased to greater than 300 mL/min in all patients (P =.00). For patients in the CEA group, preoperative N-acetylaspartate/creatine and choline/creatine ratios in the MCA territory were slightly decreased compared with the healthy subjects in the control group but were within normal limits. However, the postoperative values of N-acetylaspartate/creatine and choline/creatine ratios in the ipsilateral MCA territory were significantly increased as compared with the preoperative values (P <.05). In the contralateral side, the postoperative increase of choline/creatine ratio and the decrease of myo-inositol/creatine ratio were statistically significant as compared with the preoperative values (P <.05). Conclusion: CEA seems to improve the cerebral metabolic status in patients with asymptomatic ICA flow lesions and no visible infarction on MRI. abstract_id: PUBMED:16781876 Cerebral hemodynamics in symptomatic and asymptomatic patients with severe unilateral carotid stenosis before and after carotid endarterectomy. Background: Data concerning hemodynamic status prior to and after carotid endarterectomy (CEA) in symptomatic and asymptomatic patients is insufficient. Transcranial Doppler (TCD) provides information regarding compensatory collateral flow as well as mechanisms of cerebral autoregulation in patients with carotid stenosis. Patients And Methods: Forty eight symptomatic and 81 asymptomatic patients with unilateral severe carotid stenosis were examined by TCD before and in early postoperative period after CEA. Results: Cigarette smoking was the only risk factor significantly more frequent in symptomatic patients. 
Preoperative anterior cerebral artery (ACA) and middle cerebral artery (MCA) asymmetry, basilar artery velocity and number of ophthalmic arteries with reversed flow, were not significantly different between the two groups. Pulsatility index, cerebrovascular reactivity and flow acceleration on the side of stenosis were significantly lower in symptomatic patients. After surgery there was a significant improvement of all TCD parameters in symptomatic as well as asymptomatic patients. Conclusions: The exhausted ability of cerebral autoregulation is an important factor differentiating between symptomatic and asymptomatic patients with severe carotid stenosis. Successful surgery provides good recovery of cerebral hemodynamics in both symptomatic and asymptomatic patients. abstract_id: PUBMED:25160766 Asymptomatic dissecting intimal lesions of common carotid arteries after carotid endarterectomy. Purpose: Carotid endarterectomy is a standard treatment for symptomatic high-degree internal carotid artery stenosis. The aim of this article is to present possible intimal lesions after carotid endarterectomy. These lesions could be manifested as intimal flaps, intimal steps or dissections with or without occlusion or stenosis of the artery. Methods: The evaluation of the frequency and characteristics of the asymptomatic dissecting intimal lesions of the common carotid arteries was performed in a sample of 100 patients who underwent endarterectomy for symptomatic high-grade stenosis of the internal carotid artery. Results: We found five patients with asymptomatic dissecting intimal lesions of the common carotid arteries. Conclusion: The most common causes of these intimal lesions were shunting and prolongation of the clamping time. Routine carotid ultrasound follow-up exams are necessary because of the potential need for a change in the antithrombotic therapy or due to a need to perform an endovascular treatment. abstract_id: PUBMED:29950789 Comparison of long-term results of carotid endarterectomy for asymptomatic carotid artery stenosis. This article summarizes the current study situation on treatment of asymptomatic carotid artery stenosis and discusses the evidence situation in the literature. The 10-year results of the ACST study have shown that in comparison to conservative treatment, carotid endarterectomy (CEA) has retained a positive long-term effect on the reduction of all forms of stroke. All multicenter randomized controlled trials comparing CEA with carotid artery stenting (CAS) and, in particular the SAPHIRE and CAVATAS studies, have in common that despite a basic evidence level of Ib, the case numbers of asymptomatic patients are too small for a conclusive therapy recommendation. In the overall assessment of the CREST study the resulting difference in the questionable endpoint of "perioperative myocardial infarction" in favor of the CAS methods, could not be confirmed for exclusively asymptomatic patients. In the long-term course of the CREST study, both methods were classified as equivalent, even when the 4‑year results of periprocedural and postprocedural stroke rates in the separate assessment of the asymptomatic study participants clearly favored the CEA. The results of the ACST-1 study showed an equivalent effect of both treatment methods with respect to all investigated endpoints; however, the unequal sizes of the groups in addition to the statistically insufficient case numbers put a question mark on the validity of the study results. 
The results of the ASCT-2 and CREST-2 studies are to be awaited, which also investigate the significance of "CEA versus CAS" (ASCT-2) and "CEA/CAS + best medical treatment (BMT) versus BMT alone" in only asymptomatic stenoses. The current S3 guidelines allow operative therapy to be considered in patients with a 60-99% asymptomatic carotid artery stenosis, because the risk of stroke is statistically significantly reduced. abstract_id: PUBMED:7751850 Carotid endarterectomy for patients with asymptomatic internal carotid artery stenosis. National Institute of Neurological Disorders and Stroke. On September 28, 1994, the investigators of the Asymptomatic Carotid Atherosclerosis Study (ACAS) reported the interim results of a randomized controlled clinical trial of carotid endarterectomy in patients who have asymptomatic carotid stenosis of greater than 60% reduction in diameter. In addition to aspirin and aggressive management of modifiable risk factors, one-half of the patients were randomly assigned to receive surgery after angiographic confirmation of the lesion. Carotid endarterectomy is beneficial with a statistically significant absolute reduction of 5.8% in the risk of the primary end point of stroke within 5 years and a relative risk reduction of 55%. As a consequence of the trial reaching statistical significance in favor of endarterectomy, and on the recommendation of the study's data monitoring committee, physicians participating in the study were immediately notified and advised to reevaluate patients who did not receive surgery. It is important to note that the success of the operation is dependent on medical centers and surgeons who have a documented perioperative morbidity and mortality of less than 3%, careful selection of patients, and postoperative management of modifiable risk factors. abstract_id: PUBMED:7974602 Clinical advisory: carotid endarterectomy for patients with asymptomatic internal carotid artery stenosis. The investigators of the Asymptomatic Carotid Atherosclerosis Study (ACAS) are reporting the interim results of a randomized controlled clinical trial of carotid endarterectomy in patients who have asymptomatic carotid stenosis of greater than 60% reduction in diameter. In addition to aspirin and aggressive management of modifiable risk factors, one half of the patients were randomly assigned to receive surgery after angiographic confirmation of the lesion. Carotid endarterectomy is beneficial with a statistically significant absolute reduction of 5.8% in the risk of the primary end point of stroke within 5 years and a relative risk reduction of 55%. As a consequence of the trial reaching statistical significance in favor of endarterectomy, and on the recommendation of the study's data monitoring committee, physicians participating in the study were immediately notified and advised to reevaluate patients who did not receive surgery. It is important to note that the success of the operation is dependent on medical centers and surgeons who have a documented perioperative morbidity and mortality of less than 3%, careful selection of patients, and postoperative management of modifiable risk factors. 
abstract_id: PUBMED:22764541 Ultrasonographic monitoring of hemodynamic parameters in symptomatic and asymptomatic patients with high-grade carotid stenosis prior to and following carotid endarterectomy Background/aim: Doppler ultrasonography is now a reliable diagnostic tool for noninvasive examination of the morphology and hemodynamic parameters of extracranial segments of blood vessels that participate in the brain vascularisation. In recent years this diagnostic modality has become the only diagnostic tool prior to surgery. The aim of the study was to determine hemodynamic status in symptomatic and asymptomatic patients with severe carotid stenosis prior to and after carotid endarterectomy (CEA). Methods: A total of 124 symptomatic and 94 asymptomatic patients who had undergone CEA at the Clinic for Cardiovascular Disease "Dedinje" in Belgrade were included in this study. Doppler ultrasonography examinations were performed one day before CEA and seven days after it. The peak systolic velocity (PSV), end-diastolic velocity (EDV), time-averaged maximum blood flow velocity (MV), resistance index (RI) and the blood flow volume (BFV) of the ipsilateral and the contralateral internal carotid artery (ICA) were measured. Results: Diabetes was the only risk factor found to be significantly more frequent in symptomatic patients. There were significantly more occluded contralateral ICAs in the group of symptomatic patients. There was a significant increase in PSV, EDV, MV and BFV of the ipsilateral ICA after CEA and a significant decrease in PSV, EDV, MV and BFV of the contralateral ICA after CEA. RI was the only hemodynamic parameter without significant changes after CEA in both groups of patients. Comparing the values of hemodynamic parameters after CEA between the group of symptomatic and the group of asymptomatic patients, no significant differences were found. Conclusion: The occlusion of the contralateral ICA is an important factor differentiating between symptomatic and asymptomatic patients with severe carotid stenosis. Successful surgery provides good recovery of cerebral hemodynamics in both symptomatic and asymptomatic patients. abstract_id: PUBMED:7563396 Redistribution of blood flow after carotid endarterectomy. Purpose: We wanted to characterize the immediate effect of endarterectomy on flow of the arteries composing the extracranial carotid artery system. Methods: Transit time ultrasound probes were used to measure flow through the carotid bifurcation in 48 patients undergoing endarterectomy. Maximum single-diameter stenosis affecting the internal carotid artery (ICA) was determined by angiography. The significance of differences between means was determined by t tests and analysis of variance; linear and nonparametric correlation analyses were also applied to analyze the relation between stenosis and several flow-derived parameters. Results: Common carotid artery flow significantly increased (p = 0.0043) from a mean value of 264 +/- 99 ml/min to 314 +/- 98 ml/min, corresponding to an average percent increase of 34.3% +/- 71.3%. ICA flow increased from 128 +/- 69 ml/min to 173 +/- 66 ml/min (p < 0.0001), with an average percent increase of 74.9% +/- 114.9%. External carotid artery (ECA) flow decreased from 129 +/- 61 ml/min to 106 +/- 49 ml/min (p = 0.0098), representing an average percent decrease of -5.2% +/- 48.2%. The difference between ECA and ICA mean flow changes is highly significant (p < 0.001). The percent change in ECA flow did not correlate with preoperative stenosis.
We noted, however, a positive correlation between stenosis and the ECA/ICA flow ratio before endarterectomy (Spearman r = 0.31, p = 0.032), indicating that more severe stenosis led to a greater distribution of blood into the ECA. The ECA/ICA flow ratio fell from an initial value (ECFbef/ICFbef) of 1.52 +/- 1.74 before endarterectomy to 0.69 +/- 0.37 (ECFaft/ICFaft) after endarterectomy (p = 0.0006). Conclusions: The data are consistent with the ECA being an important collateral path for cerebral perfusion when ICA stenosis exists. When endarterectomy relieves bifurcation stenosis, common carotid artery blood flow is redistributed preferentially to the ICA at the expense of ECA flow, consistent with a change in the relative resistances of the two vessels resulting from operative reconstruction. abstract_id: PUBMED:8173679 Status of carotid endarterectomy. The randomized clinical trial has been brought to bear on the indications and the search for benefit for the surgical procedure of carotid endarterectomy. Progress has been made and some answers are available. It is not too optimistic to state that in a few years' time we will be able to state with some degree of finality which patients with symptomatic disease will benefit from carotid endarterectomy, which ones will be better treated with medical therapy alone, and whether or not there is a group of patients, despite being asymptomatic, who should be advised to have this procedure. abstract_id: PUBMED:36808052 Visual outcomes in carotid stenosis patients treated using endarterectomy. Introduction: Carotid atherosclerosis can be a cause of visual impairment. It has been observed that carotid endarterectomy has a positive effect on ophthalmic parameters. The aim of this study was to evaluate the impact of endarterectomy on the optic nerve function. Materials and methods: 54 asymptomatic patients (19 women and 35 men - 108 eyes) with unilateral carotid stenosis >70% of the internal carotid artery were recruited to the study. All of them were qualified for the endarterectomy procedure. The whole study group underwent Doppler ultrasonography of the internal carotid arteries and ophthalmic examination before the surgery, and 22 of them (11 women and 11 men) were examined after the endarterectomy. The ophthalmic examination included: distant best-corrected visual acuity, measurement of the intraocular pressure, electrophysiology (pattern visual evoked potentials), perimetry, and optical coherence tomography (the retinal nerve fiber layer thickness). Discussion: The carotid arteries supply the brain and face with blood. Extensive research has observed a concomitant improvement in eyesight after undergoing carotid endarterectomy in patients with artery stenosis. This effect was associated with better blood flow in the ophthalmic artery and its branches, the central retinal artery and the ciliary artery, the major blood supply of the eye. Results: The present study showed that carotid endarterectomy has a positive impact on the function of the optic nerve. The visual field parameters and amplitude of pattern visual evoked potentials significantly improved. Preoperative and postoperative values of intraocular pressure and the retinal nerve fiber layer thickness remained stable. Answer: Yes, carotid endarterectomy (CEA) can improve the metabolic status in patients with asymptomatic internal carotid artery (ICA) flow lesions.
A study using localized in vivo proton magnetic resonance spectroscopy ((1)H-MRS) found that after CEA, patients with asymptomatic severe stenosis of the ICA showed a significant increase in N-acetylaspartate/creatine and choline/creatine ratios in the middle cerebral artery (MCA) territory compared to preoperative values. This suggests an improvement in cerebral metabolism post-surgery (PUBMED:12218960).
Instruction: Is contralateral exploration of the kidney necessary in patients with Wilms tumor? Abstracts: abstract_id: PUBMED:8683762 Is contralateral exploration of the kidney necessary in patients with Wilms tumor? Purpose: Surgical exploration of the contralateral kidney in patients with Wilms tumor is standard practice. The introduction of modern imaging techniques (ultrasound, computerized tomography and magnetic resonance imaging) in place of excretory urography has led to a more accurate definition of intrarenal pathology. Is contralateral kidney exploration still necessary in patients with Wilms tumor? Materials And Methods: We reviewed the records of 75 patients from 3 medical centers who were evaluated and treated for Wilms tumor in the last 14 years. All children underwent preoperative excretory urography, computerized tomography, ultrasound and/or magnetic resonance imaging. Results: Seven bilateral Wilms tumors were diagnosed preoperatively and confirmed surgically, whereas extensive operative exploration of the contralateral kidney in the other 69 patients revealed no additional pathological condition. This concurred with preoperative radiological findings with 100% sensitivity and specificity. Furthermore, 1 month to 12 years of followup (mean 4.3 years) revealed no tumor in the contralateral kidney. The major postoperative complication was small bowel obstruction in 5 patients 15 to 180 days (mean 77.2) after explorative laparotomy. Conclusions: In all patients preoperative radiological investigation was accurate in excluding bilaterality. With modern imaging techniques and effective chemotherapy extensive contralateral renal exploration may no longer be mandatory for managing Wilms tumor. abstract_id: PUBMED:2165185 The necessity of contralateral surgical exploration in Wilms tumor with modern noninvasive imaging technique: a reassessment. Surgical exploration of the contralateral kidney in unilateral Wilms tumor has been the standard of surgical practice. During the last decade advances in noninvasive imaging techniques (ultrasound, computerized tomography and magnetic resonance imaging) have led to more accurate definition of intrarenal pathological conditions. The intuitive question presently is whether contralateral exploration remains mandatory in Wilms tumor patients. We reviewed the records of 52 consecutive children at our institutions who underwent radiological and operative staging of the Wilms tumor from 1979 to 1988. All 48 evaluable patients underwent either preoperative computerized tomography, ultrasound and/or magnetic resonance imaging. Five bilateral Wilms tumors were diagnosed preoperatively and confirmed surgically, whereas extensive operative exploration of the contralateral kidney in the other 43 cases of radiologically diagnosed unilateral disease failed to reveal any contralateral abnormality. Thus, in all patients preoperative radiological investigation was accurate to stage the disease regarding bilaterality. With modern imaging techniques and effective chemotherapy, extensive contralateral renal exploration may no longer be mandatory to manage Wilms tumor. abstract_id: PUBMED:9146646 Re: Is contralateral exploration of the kidney necessary in patients with Wilms tumor? N/A abstract_id: PUBMED:9186352 Re: Is contralateral exploration of the kidney necessary in patients with Wilms tumor? N/A abstract_id: PUBMED:2991613 Adult nephroblastoma with glomerular intracapillary deposits of IgM in the contralateral kidney. 
A 43-year-old man who presented with hematuria had a nephroblastoma in the right kidney. After nephrectomy the patient was treated with irradiation and chemotherapy. Despite the development of pulmonary metastases he remained reasonably well for several years. He died of metastatic disease 8 years after presentation. Histological study of the contralateral kidney at autopsy revealed intracapillary glomerular deposits of IgM. abstract_id: PUBMED:8863617 Enucleative surgery for stage I nephroblastoma with a normal contralateral kidney. Purpose: Tumor enucleation is not recommended for children with nephroblastoma and a normal contralateral kidney. However, in adults with unilateral low stage renal cell carcinoma tumor enucleation may offer an alternative to radical nephrectomy, since functioning renal tissue is preserved without a greater risk of residual microscopic disease. Enucleative surgery may be more reasonable in children with nephroblastoma, because the risk of relapse can be reduced with chemotherapy. Therefore, we prospectively evaluated the feasibility of enucleative surgery in children with stage I unilateral nephroblastoma. Materials And Methods: Between 1992 and 1995, 13 children with nephroblastoma and a normal contralateral kidney were consecutively admitted to our surgical unit. Possible candidates for tumor enucleation were evaluated according to certain criteria, including stage I disease at diagnosis, well-defined margins on post-contrast computerized tomography and at least 50% of the functioning kidney could be preserved. Preoperative and postoperative chemotherapy was given in all cases. Results: Of 4 children with preservation of more than 50% of the functioning kidney 3 were considered eligible for enucleation. The tumors, which were confined to the mid kidney in 2 children and upper renal pole in 1, were successfully enucleated without hypothermia or vascular occlusion. All 3 children are disease-free at 49, 48 and 26 months of followup, respectively. Renal function has been almost completely restored postoperatively. Conclusions: These preliminary data suggest that enucleative surgery may be a reasonable option in select children with stage I nephroblastoma and a normal contralateral kidney. abstract_id: PUBMED:8158788 Partial nephrectomy in a metachronous multilocular cyst of the kidney (cystic nephroma). Cystic nephroma, that is multilocular cyst, is an uncommon renal neoplasm once thought to occur only unilaterally. We report a rare case of bilateral metachronous cystic nephroma in a child, which was treated by partial nephrectomy. Because of its benign nature this lesion is best managed by nephron sparing surgery. Exploration of the contralateral kidney is not necessary if modern radiological techniques are used. Magnetic resonance imaging angiography is an excellent noninvasive method to assess renal vessels before partial nephrectomy. Periodic ultrasound or computerized tomography screening of the remaining renal unit is recommended. abstract_id: PUBMED:10572902 Unexpected regeneration of a congenital unilateral hypoplastic- dysplastic kidney after a contralateral nephrectomy for Wilms tumor The congenital hypoplastic-dysplastic kidney is characterized by a significant reduction of renal mass, an abnormal parenchymal differentiation and is associated with anomalies of the whole urinary tract. Not much is known about the ability for regeneration in such a kidney. 
We report on a 2 year 11 months old boy with an anaplastic Wilms tumor of the left kidney, in whom after tumor-nephrectomy an unexpected functional and sonographically documented morphological regeneration of the contralateral hypoplastic-dysplastic kidney occurred. The regeneration of this kidney is especially notable, because it occurred during nephrotoxic chemotherapy and radiotherapy. abstract_id: PUBMED:8653763 Nephroblastoma and renal dysplasia in a 40 year old Nigerian man with an atrophic contralateral foetal kidney. This report concerns a 40 year old Nigerian man whose initial presentation mimicked acute pyelonephritis. He was subsequently found to have a renal tumour which turned out to be a nephroblastoma. In addition, there was marked renal dysplasia and the contralateral kidney was atrophic and foetal in configuration. The possible link between these observations is discussed. abstract_id: PUBMED:8683761 Unilateral nephrectomy induces the expression of the Wilms tumor gene in the contralateral kidney of the adult rat. Purpose: The tumor suppressor gene WT-1 encodes a nuclear deoxyribonucleic acid binding protein that is a transcriptional regulator. This gene is commonly deleted or defective in Wilms tumors and the Denys-Drash syndrome. Recently WT-1 was demonstrated to be essential for the development of the urogenital tract. We determined whether we could induce WT-1 expression in mature kidneys induced to grow by performing contralateral nephrectomy in mature rats. Materials And Methods: Northern analysis with a 32phosphorus-labeled antisense riboprobe synthesized by in vitro transcription of a 731 bp complementary deoxyribonucleic acid insert spanning exons 1 to 7 of the rat WT-1 in a pT7 Blue vector was used to demonstrate the expression of WT-1 in the developing and adult Sprague-Dawley rat kidney. Results: Transcript levels of WT-1 in the rat kidney decreased from day 0 (day of birth) to day 16, after which WT-1 transcripts were undetectable in the normal rat kidney. Unilateral nephrectomy in the adult male Sprague-Dawley rat (250 to 300 gm.) induced the expression of WT-1 ribonucleic acid in the contralateral kidney to detectable levels by Northern analysis 0.25 hours after nephrectomy. Subsequently levels of WT-1 ribonucleic acid decreased progressively to undetectable by 3 hours after nephrectomy. Expression of this gene was not detected in the normal kidneys of adult rats or sham operated adult rats. Conclusions: These data suggest that the WT-1 gene product is involved in normal renal growth in the adult and developing rat kidney. Answer: Contralateral exploration of the kidney in patients with Wilms tumor has been standard practice, but with the advent of modern imaging techniques such as ultrasound, computerized tomography (CT), and magnetic resonance imaging (MRI), the necessity of this practice has been called into question. Studies have shown that these imaging techniques can accurately define intrarenal pathology, potentially making extensive operative exploration of the contralateral kidney unnecessary. A review of 75 patients treated for Wilms tumor across three medical centers found that preoperative imaging (excretory urography, CT, ultrasound, and/or MRI) was accurate in excluding bilaterality in all cases. 
Seven bilateral Wilms tumors were diagnosed preoperatively and confirmed surgically, while exploration of the contralateral kidney in the remaining 69 patients revealed no additional pathological condition, aligning with preoperative radiological findings with 100% sensitivity and specificity. Follow-up ranging from 1 month to 12 years (mean 4.3 years) revealed no tumor in the contralateral kidney, suggesting that with modern imaging techniques and effective chemotherapy, extensive contralateral renal exploration may no longer be mandatory for managing Wilms tumor (PUBMED:8683762). Similarly, a review of 52 children who underwent radiological and operative staging of Wilms tumor from 1979 to 1988 showed that preoperative imaging was accurate in staging the disease regarding bilaterality. Five bilateral Wilms tumors were diagnosed preoperatively and confirmed surgically, while no contralateral abnormalities were found in the other 43 cases of radiologically diagnosed unilateral disease. This further supports the view that extensive contralateral renal exploration may not be necessary when modern imaging techniques are employed (PUBMED:2165185). In conclusion, the evidence suggests that with the use of modern imaging techniques, contralateral exploration of the kidney may not be necessary in patients with Wilms tumor, as these noninvasive methods can accurately assess the presence of bilateral disease.
Instruction: Are hepatomas a good target for suicide gene therapy? Abstracts: abstract_id: PUBMED:9457219 Are hepatomas a good target for suicide gene therapy? An experimental study in rats using retroviral-mediated transfer of thymidine kinase gene. Background: Suicide gene transfer into tumor cells has been proposed for the treatment of various tumors. The most commonly used suicide gene encodes the herpes simplex virus type I thymidine kinase (HSV1-TK), which converts nontoxic nucleoside analogs such as ganciclovir into toxic triphosphated compounds. This study evaluated the potential of this treatment for gene therapy of liver tumors. Methods: The sensitivity of different colon carcinoma and hepatoma cell lines to infection by recombinant retroviruses was evaluated. Next, HSV1-TK-expressing derivatives of these cells were generated to analyze their sensitivity to ganciclovir. Finally, these cells were used to generate experimental hepatomas in rats after injection under the liver capsule, and the efficacy and safety of a ganciclovir treatment on tumor growth and survival were evaluated. Results: All the different cell lines analyzed were sensitive to retroviral-mediated gene transfer, although the susceptibility of individual cell lines to this transfer varied significantly. HSV1-TK derivatives were about 1000-fold more sensitive to the toxic effects of ganciclovir than parental cells. Tumors with HSV1-TK-expressing MCA-RH8994 hepatoma cells were then generated. Intraperitoneal injection of 75 mg/kg ganciclovir twice daily for 5 days dramatically reduced the size of HSV1-TK-positive tumors compared with tumor size in untreated control rats (0.4 mm3 versus 65 mm3, p < 0.02). A long-term study demonstrated that this reduction of tumor volume was associated with a significant increase in survival (p < 0.01). Pathologic examination 26 days after the end of ganciclovir injections showed that complete tumor regression was observed in two of five rats. Most importantly, there was no toxicity associated with these tumors. Conclusions: From a clinical perspective, the good tolerance to treatment and the significant clinical effect observed were encouraging. Gene transfer methods should be established to allow efficient targeting of the tumor cells in vivo. abstract_id: PUBMED:26909111 Image-aided Suicide Gene Therapy Utilizing Multifunctional hTERT-targeting Adenovirus for Clinical Translation in Hepatocellular Carcinoma. A trans-splicing ribozyme can sense and reprogram a target RNA into a therapeutic transgene and thereby serves as a sensing device for the detection of cancer cells, judged from transgene expression. Previously we proposed PEPCK-Rz-HSVtk (PRT), an hTERT-targeting trans-splicing ribozyme (Rz) driven by the liver-specific phosphoenolpyruvate carboxykinase (PEPCK) promoter with a downstream suicide gene, herpes simplex virus thymidine kinase (HSVtk), for hepatocellular carcinoma (HCC) gene therapy. Here, we describe the success of a re-engineered adenoviral vector harboring PRT in obtaining greater antitumor activity with fewer off-target effects for clinical application as a theranostic. We introduced a liver-selective apolipoprotein E (ApoE) enhancer at the distal region of the PRT unit to augment the activity and liver selectivity of the PEPCK promoter, and achieved better transduction into liver cancer cells by replacing the fiber knob with that of serotype 35 on an E1&E3-deleted serotype 5 backbone carrying an additional E4orf1-4 deletion.
We demonstrated that our refined adenovirus harboring PEPCK/ApoE-Rz-HSVtk (Ad-PRT-E) achieved great anti-tumor efficacy and an improved ability to specifically target HCC without damaging normal hepatocytes. We also showed that noninvasive imaging modalities were successfully employed to monitor both how well the therapeutic gene (HSVtk) was expressed inside the tumor and how effectively the gene therapy acted on tumor growth. Collectively, this study suggests that the advanced therapeutic adenovirus Ad-PRT-E and its image-aided evaluation system may lead to a powerful strategy for successful clinical translation and the development of clinical protocols for HCC therapy. abstract_id: PUBMED:29858076 RNA Structure Design Improves Activity and Specificity of trans-Splicing-Triggered Cell Death in a Suicide Gene Therapy Approach. Spliceosome-mediated RNA trans-splicing enables correction or labeling of pre-mRNA, but therapeutic applications are hampered by issues related to the activity and target specificity of trans-splicing RNA (tsRNA). We employed computational RNA structure design to improve both on-target activity and specificity of tsRNA in a herpes simplex virus thymidine kinase/ganciclovir suicide gene therapy approach targeting alpha fetoprotein (AFP), a marker of hepatocellular carcinoma (HCC), or human papillomavirus type 16 (HPV-16) pre-mRNA. While unstructured, mismatched target binding domains significantly improved 3' exon replacement (3'ER), 5' exon replacement (5'ER) correlated with the thermodynamic stability of the tsRNA 3' end. Alternative on-target trans-splicing was found to be a prevalent event. The specificity of trans-splicing with the intended target splice site was improved 10-fold by designing tsRNA that harbors secondary target binding domains shielding alternative on-target and blinding off-target splicing events. Such rationally designed suicide RNAs efficiently triggered death of HPV-16-transduced or hepatoblastoma-derived human tissue culture cells without evidence for off-target cell killing. The highest cell death activities were observed with novel dual-targeting tsRNAs programmed for trans-splicing toward AFP and a second HCC pre-mRNA biomarker. Our observations suggest trans-splicing represents a promising approach to suicide gene therapy. abstract_id: PUBMED:27391603 Effects of Cationic Microbubble Carrying CD/TK Double Suicide Gene and αVβ3 Integrin Antibody in Human Hepatocellular Carcinoma HepG2 Cells. Objective: Hepatocellular carcinoma (HCC), mostly derived from hepatitis or cirrhosis, is one of the most common types of liver cancer. The T-cell-mediated immune response elicited by the CD/TK double suicide gene has shown a substantial antitumor effect in HCC. Integrin αVβ3 overexpression has been suggested to regulate the biological behavior of HCC. In this study, we investigated the strategy of incorporating the CD/TK double suicide gene and anti-αVβ3 integrin monoclonal antibodies into cationic microbubbles (CMBsαvβ3), and evaluated its killing effect in HCC cells. Methods: To improve the transfection efficiency of the targeted CD/TK double suicide gene, we adopted cationic microbubbles (CMBs), a cationic delivery agent with enhanced DNA-carrying capacity. The ultrasound and high-speed shearing method was used to prepare the non-targeting cationic microbubbles (CMBs). Using the biotin-avidin bridge method, αVβ3 integrin antibody was conjugated to the CMBs, and CMBsαvβ3 was generated to specifically target HepG2 cells.
The morphology and physicochemical properties of the CMBsαvβ3 were examined by optical microscopy and a zeta potential detector. The conjugation of the plasmid and the antibody in CMBsαvβ3 was examined by immunofluorescent microscopy and flow cytometry. The binding capacities of CMBsαvβ3 and CMBs to HCC HepG2 and normal L-02 cells were compared using a rosette formation assay. To detect EGFP fluorescence and examine the transfection efficiencies of CMBsαvβ3 and CMBs in HCC cells, fluorescence microscopy and contrast-enhanced sonography were adopted. mRNA and protein levels of the CD/TK gene were detected by RT-PCR and Western blot, respectively. To evaluate the anti-tumor effect of CMBsαvβ3, HCC cells with CMBsαvβ3 were exposed to 5-fluorocytosine/ganciclovir (5-FC/GCV). Then, the cell cycle distribution after treatment was detected by PI staining and flow cytometry. Apoptotic cell death was detected by optical microscopy and assessed by MTT assay and TUNEL-staining assay. Results: CMBsαvβ3 had a regular shape and good dispersion. Compared to CMBs, CMBsαvβ3 had more stable concentrations of αVβ3 ligand and pEGFP-KDRP-CD/TK, and CMBsαvβ3 adhered much more strongly to HepG2 HCC cells than to normal liver L-02 cells. Moreover, after exposure to anti-αVβ3 monoclonal antibody, the adhesion of CMBsαvβ3 to HepG2 cells and L-02 cells was significantly reduced. Also, CMBsαvβ3 demonstrated a substantially higher efficiency in pEGFP-KDRP-CD/TK plasmid transfection in HepG2 cells than CMBs. In addition, CMBsαvβ3 could significantly facilitate 5-FC/GCV-induced cell cycle arrest in S phase. Moreover, treatment with 5-FC/GCV combined with CMBsαvβ3 resulted in marked apoptotic cell death in HepG2 and SK-Herp-1 HCC cells. In vitro, treatment with 5-FC/GCV combined with CMBsαvβ3 suppressed cell proliferation. In a nude mouse model, 5-FU + GCV combined with plasmid + CMBsαvβ3 were able to significantly suppress tumor volumes. Conclusion: Through a biotin-avidin mediation system, CMBsαvβ3 were successfully generated to specifically target HCC HepG2 cells. More importantly, CMBsαvβ3 could significantly facilitate 5-FC/GCV-induced cell cycle arrest and apoptotic cell death in HepG2 cells. Our study demonstrated a potential strategy that could be translated clinically to improve liver tumor gene delivery. abstract_id: PUBMED:10072906 Several new targets of antitumor agents. Alpha-fetoprotein (AFP), as a hepatoma-promoting factor, has become a new target of anti-hepatoma agents. Inhibiting or blocking oncogene expression is a new approach for the treatment of tumors. Informational drugs are being developed for gene therapy applications as inhibitors of oncogene expression. The induction of tumor cell differentiation is another new strategy of drug therapy of tumors. A common mode of action of many antitumor drugs is to induce apoptosis of tumor cells. Suicide genes, as a targeted therapy for tumors, improve on present chemotherapy and show broad application prospects. abstract_id: PUBMED:19809268 Cancer cell targeting with mouse TERT-specific group I intron of Tetrahymena thermophila. Telomerase reverse transcriptase (TERT), which prolongs the replicative life span of cells, is highly upregulated in 85-90% of human cancers, whereas most normal somatic tissues in humans express limited levels of telomerase activity. Therefore, TERT has been a potential target for anticancer therapy. Recently, we described a new approach to human cancer gene therapy, which is based on the group I intron of Tetrahymena thermophila.
This ribozyme can specifically mediate RNA replacement of the human TERT (hTERT) transcript with a new transcript harboring anticancer activity through a trans-splicing reaction, resulting in selective regression of hTERT-positive cancer cells. However, to validate the therapeutic potential of the ribozyme in animal models, ribozymes targeting inherent transcripts of the animal should be developed. In this study, we developed a Tetrahymena-based trans-splicing ribozyme that can specifically target and replace the mouse TERT (mTERT) RNA. This ribozyme can trigger transgene activity not only in mTERT-expressing cells but also in hTERT-positive cancer cells. Importantly, the ribozyme could selectively induce activity of the suicide gene, a herpes simplex virus thymidine kinase gene, in cancer cells expressing the TERT RNA and thereby specifically hamper the survival of these cells when treated with ganciclovir. The mTERT-targeting ribozyme will be useful for evaluation of the RNA replacement approach as a cancer gene therapeutic tool in mouse models with syngeneic tumors. abstract_id: PUBMED:14608942 Gene therapy of liver tumors: results of the first clinical studies. Only a small percentage of primary and secondary liver tumours is suitable for surgical resection. Gene therapy represents a novel strategy that seems to be effective both in vitro and in vivo. The use of tumour suppressor gene p53 therapy, suicide gene therapy, immune gene therapy and therapy with replication-competent oncolytic adenoviruses in liver tumours has already entered the first clinical trials. In patients with hepatocellular carcinoma, the first clinical trials in phase I and II showed good tolerance and low toxicity of gene therapy. However, the clinical benefit for the patients treated with either wild-type p53 or E1B-deleted adenoviruses was marginal. abstract_id: PUBMED:22790964 Synergistic effects of genetically engineered stem cells expressing cytosine deaminase and interferon-β via their tumor tropism to selectively target human hepatocarcinoma cells. Stem cells have received a great deal of attention for their clinical and therapeutic potential for treating human diseases and disorders. Recent studies have shown that it is possible to genetically engineer stem cells (GESTECs) to produce suicide enzymes that convert non-toxic prodrugs to toxic metabolites, selectively migrate toward tumor sites and reduce tumor growth. In this study, we evaluated whether these GESTECs are capable of migrating to hepatocarcinoma cells and examined the potential therapeutic efficacy of gene-directed enzyme prodrug therapy against liver cancer cells in cellular and animal models. A modified transwell migration assay was performed to determine the migratory capacity of GESTECs to Hep3B hepatocarcinoma cells. GESTECs, that is, HB1.F3.CD or HB1.F3.CD.interferon-β (IFN-β) cells, engineered to express a suicide gene, cytosine deaminase (CD), selectively migrated toward liver cancer cells. Treatment of Hep3B, human liver cancer cells, with the prodrug 5-fluorocytosine (5-FC) in the presence of HB1.F3.CD or HB1.F3.CD.IFN-β cells resulted in the inhibition of Hep3B cell growth. In a xenografted mouse model injected with hepatocarcinoma, we investigated the therapeutic effect of these stem cells. For 9 weeks, the xenografted mice were treated with HB1.F3.CD or HB1.F3.CD.IFN-β in the presence of 5-FC. Growth of the tumor mass was inhibited by about 40-50% in the mice treated with GESTECs and a prodrug.
In addition, we further confirmed the cytotoxic effect on tumor cells by histological analysis and migratory effect of therapeutic stem cells. Taken together, GESTECs expressing a fusion gene encoding CD and IFN-β may exert a synergistic antitumor effect on this type of tumor. abstract_id: PUBMED:21390816 Gene Targeting to Hepatomas (AFP). Hepatocellular carcinoma (HCC) is one of the most common malignancies with poor prognosis worldwide, especially in eastern Asia and Africa (1). Recent advances in delivering genes to mammalian cells stimulate the possibility of gene therapy for human diseases, including cancer gene therapy (2). One approach of gene therapy for cancers is the transduction of the herpes simplex virus thymidine kinase (HSV-tk) gene in tumor cells, because the killing effect of the HSV-tk product on the virus-infected cells is seen in only proliferating cells (3). HSV-tk can efficiently phosphorylate nucleoside analogs, and the phosphorylated products act as a chain terminator of DNA synthesis, leading to cell death (4). In addition, successful application of suicide gene therapy for cancer, in part, relies on the bystander effect, where the active chemotherapeutic agent produced in target cells diffuses from cells to neighboring malignant cells in sufficient concentrations to induce growth inhibition (5). abstract_id: PUBMED:31181889 Combination Suicide Gene Delivery with an Adeno-Associated Virus Vector Encoding Inducible Caspase-9 and a Chemical Inducer of Dimerization Is Effective in a Xenotransplantation Model of Hepatocellular Carcinoma. Current treatment approaches for hepatocellular carcinoma (HCC) have a narrow therapeutic index and alternate modes of treatment are thus required. We have utilized a gene delivery vector containing inducible caspase 9 (iCasp9) gene, which is a synthetic analogue based on the mammalian caspase 9 and fused to a human FK506 binding protein that allows its conditional dimerization to a synthetic, small molecule [chemical inducer of dimerization, AP20187] and results in target cell apoptosis. In our studies, we have tested these synthetic vectors based on an adeno-associated virus platform for their potential anti-tumorigenic effect in human HCC cells in vitro and in a HCC tumor model developed in nude mice. Our data demonstrates that the iCasp9-AP20187 bioconjugate is able to trigger terminal effectors of cellular apoptosis and presents a viable approach for the potential treatment of HCC. Answer: Yes, hepatomas appear to be a good target for suicide gene therapy. Several studies have demonstrated the potential effectiveness of suicide gene therapy in treating liver tumors, particularly hepatocellular carcinoma (HCC). One study showed that retroviral-mediated transfer of the herpes simplex virus type I thymidine kinase (HSV1-TK) gene into hepatoma cells followed by treatment with ganciclovir resulted in a dramatic reduction in tumor size and increased survival in rats, without associated toxicity (PUBMED:9457219). Another study described the development of an advanced adenoviral vector harboring a liver-specific promoter and a suicide gene (HSVtk), which showed great anti-tumor efficacy and improved specificity for targeting HCC without damaging normal hepatocytes (PUBMED:26909111). RNA structure design has been used to improve the activity and specificity of trans-splicing RNA (tsRNA) in a suicide gene therapy approach targeting hepatocellular carcinoma, resulting in efficient cell death without off-target effects (PUBMED:29858076). 
Cationic microbubbles carrying a CD/TK double suicide gene and αVβ3 integrin antibody have been shown to specifically target HCC cells and facilitate cell cycle arrest and apoptotic cell death when combined with prodrug treatment (PUBMED:27391603). Other studies have identified additional targets for suicide gene therapy in hepatomas, such as alpha-fetoprotein (AFP) and telomerase reverse transcriptase (TERT), which are highly expressed in HCC (PUBMED:10072906, PUBMED:19809268). Clinical studies have also been conducted, indicating good tolerance and low toxicity of gene therapy in patients with hepatocellular carcinoma, although the clinical benefit has been marginal in some cases (PUBMED:14608942). Furthermore, genetically engineered stem cells expressing suicide enzymes have shown the ability to selectively migrate toward tumor sites and reduce tumor growth in HCC models (PUBMED:22790964). In summary, the evidence suggests that hepatomas are a promising target for suicide gene therapy, with multiple studies demonstrating the potential for specific targeting and effective treatment of HCC cells while minimizing harm to normal cells.
Instruction: Is C-reactive protein a useful adjunct in selecting patients for emergency cholecystectomy by predicting severe/gangrenous cholecystitis? Abstracts: abstract_id: PUBMED:24856179 Is C-reactive protein a useful adjunct in selecting patients for emergency cholecystectomy by predicting severe/gangrenous cholecystitis? Unlabelled: Gangrenous cholecystitis is difficult to diagnose clinically and carries a risk of morbidity and mortality if not treated urgently. Complex scoring systems exist to predict the condition. C-reactive protein may be a single independent predictor, as suggested in this small study. Background: Gangrenous cholecystitis (GC) is a serious sequel of acute cholecystitis occurring in 2-30% of patients and has a mortality of 0.2-0.5%. Urgent surgical intervention is important to reduce morbidity and mortality; it is therefore important to distinguish patients with GC from those with non-severe cholecystitis. The aim of this study is to determine the biochemical and radiological markers that are associated with the development of GC, and the value of C-reactive protein (CRP) at different cut-offs in predicting GC. Methods: This is an observational cohort study of all consecutive patients who presented with biliary symptoms to the emergency department in a large NHS Hospital in the UK, from January to December 2012. They had cholecystectomies performed either during the index admission or electively at a later date by a team of 4 upper gastrointestinal surgeons. The gangrenous nature of the gallbladder was determined by operative findings and/or histopathology results. Parameters including age, gender, albumin, jaundice, gallbladder wall thickness on ultrasound scan, highest preoperative white blood count (WBC) and CRP value were examined for their predictive value. Results: 141 patients presented with acute biliary problems. 22 underwent emergency cholecystectomy and 119 were discharged and called back for elective surgery. Of these, 16 were gangrenous (11%). Patients with GC were significantly older (p = 0.016), had significantly higher CRP (p < 0.001) and WBC (p = 0.001), significantly lower albumin levels (p < 0.001) and a higher percentage with a thick-walled gallbladder (p < 0.001). We found that a CRP value of more than 200 mg/dL has a 50% positive predictive value and 100% negative predictive value in predicting gangrenous cholecystitis with 100% sensitivity and 87.9% specificity. Conclusions: In this study CRP on its own has been shown to have a high predictive value in predicting GC, but larger studies are needed to validate this finding. Monitoring the trend of CRP in patients with acute cholecystitis may help early diagnosis and the decision for early surgical intervention. abstract_id: PUBMED:29464491 Do I Need to Operate on That in the Middle of the Night? Development of a Nomogram for the Diagnosis of Severe Acute Cholecystitis. Background: Some authors have proposed different predictive factors of severe acute cholecystitis, but generally, the results of risk analyses are expressed as odds ratios, which makes them difficult to apply in the clinical practice of the acute care surgeon. The severe form of acute cholecystitis should include both gangrenous and phlegmonous cholecystitis, due to their severe clinical course, and cholecystectomy should not be delayed. The aim of this study was to create a nomogram to obtain a graphical tool to compute the probability of having severe acute cholecystitis.
Methods: This is a retrospective study on 393 patients who underwent emergency cholecystectomy between January 2010 and December 2015 at the Acute Care Surgery Service of the S. Anna University Hospital of Ferrara, Italy. Patients were classified as having non-severe acute cholecystitis or severe acute cholecystitis (i.e., gangrenous and phlegmonous) based on the final pathology report. The baseline characteristics, pre-operative signs, and abdominal ultrasound (US) findings were assessed with a stepwise multivariate logistic regression analysis to predict the risk of severe acute cholecystitis, and a nomogram was created. Results: Age as a continuous variable, WBC count ≥ 12.4 × 10³/μl, CRP ≥ 9.9 mg/dl, and the presence of US thickening of the gallbladder wall were significantly associated with severe acute cholecystitis at the final pathology report. A significant interaction between the effect of age and CRP was found. Four risk classes were identified based on the nomogram total points. Conclusions: Patients with a nomogram total point ≥ 74 should be considered at high risk of severe acute cholecystitis (at 74 total points, sensitivity = 78.5%; specificity = 78.2%; accuracy = 78.3%) and this finding could be useful for surgical planning once confirmed in a prospective study comparing the risk score stratification and clinical outcomes. abstract_id: PUBMED:27891173 Laparoscopic cholecystectomy for acute calculous cholecystitis: a retrospective study assessing risk factors for conversion and complications. Background: The purpose of the study was to identify risk factors for conversion of laparoscopic cholecystectomy and risk factors for postoperative complications in acute calculous cholecystitis. The most common complications arising from cholecystectomy were also to be identified. Methods: A total of 499 consecutive patients, who had undergone emergent cholecystectomy with a diagnosis of cholecystitis at Meilahti Hospital in 2013-2014, were identified from the hospital database. Of the identified patients, 400 had acute calculous cholecystitis; 27 patients whose surgery was initiated as open cholecystectomy were excluded, resulting in 373 patients for the final analysis. The Clavien-Dindo classification of surgical complications was used. Results: Laparoscopic cholecystectomy was initiated in 373 patients, of whom 84 (22.5%) were converted to open surgery. Multivariate logistic regression identified C-reactive protein (CRP) over 150 mg/l, age over 65 years, diabetes, gangrene of the gallbladder and an abscess as risk factors for conversion. Complications were experienced by 67 (18.0%) patients. Multivariate logistic regression identified age over 65 years, male gender, impaired renal function and conversion as risk factors for complications. Conclusions: Advanced cholecystitis with high CRP, gangrene or an abscess increases the risk of conversion. The risk of postoperative complications is higher after conversion. Early identification and treatment of acute calculous cholecystitis might reduce the number of patients with advanced cholecystitis and thus improve outcomes. abstract_id: PUBMED:34904358 Pre-operative predictive factors for gangrenous cholecystitis at an Australian quaternary cardiothoracic centre. Introduction: The aim of this study was to determine associations between pre-operative factors and gangrenous cholecystitis. Gangrenous cholecystitis represents a severe spectrum of cholecystitis and is associated with an increased risk of complications.
Early recognition and diagnosis of gangrenous cholecystitis can optimize prioritization for surgery and improve outcomes. Methods: A retrospective case-control study was undertaken. Patient demographics, medical comorbidities, pre-operative biochemical and ultrasound findings as well as post-operative outcomes were assessed. Four hundred and fourteen patients who underwent an emergency cholecystectomy between 2018 and 2020 were reviewed. Results: One hundred and nine (26%) patients who underwent emergency cholecystectomy during this 3-year period had gangrenous cholecystitis. These patients were older (median age 65 versus 45, P < 0.001) and more likely to be male (58.7% versus 32.5%, P < 0.001). Patients with coronary disease (OR 2.55, P < 0.001), hypertension (OR 2.25, P = 0.001), or arrhythmias (OR 3.64, P = 0.001) were two-to-three times more likely to have gangrenous cholecystitis. Patients with renal disease (OR 1.92, P = 0.004) or using antiplatelet medication (OR 2.27, P = 0.003) were twice as likely to have gangrenous cholecystitis. Patients who had gangrenous cholecystitis presented with a higher white cell count (median 13 × 10⁹/L versus 8 × 10⁹/L, P < 0.001) and a higher C-reactive protein (median 196 versus 18 mg/L, P < 0.001). Patients with gangrenous cholecystitis had poorer post-operative outcomes. Conclusion: We verified predictive factors from the existing literature and identified a number of novel pre-operative predictive factors for gangrenous cholecystitis which could help with the development of a scoring system for clinical use. abstract_id: PUBMED:30581347 Prognostic Value of Preoperative Neutrophil-to-lymphocyte Ratio for Prediction of Severe Cholecystitis. Background: The predictive value of the preoperative neutrophil-to-lymphocyte ratio (NLR) in patients with cholecystitis has not been established. The aim of this study was to investigate the preoperative NLR in patients with cholecystitis and to identify a relevant NLR value that discriminates between simple and severe cholecystitis. Methods: This study included 136 patients who underwent laparoscopic cholecystectomy due to cholecystitis. Receiver Operating Characteristic (ROC) analysis was performed to identify the most useful NLR cut-off value in relation to the severity of cholecystitis. The patients were divided into two groups according to the cut-off NLR value: high NLR group (≥4.18, n=23) and low NLR group (<4.18, n=113). Severe cholecystitis was defined as a state which includes inflammation, empyema, gangrene, perforation of the gallbladder, adhesions or difficulty in dissecting Calot's triangle. Results: In the high NLR group, severe cholecystitis (p<0.0001) and higher C-reactive protein (CRP) levels and white blood cell (WBC) counts (p<0.0001) were significantly more frequent. There was no difference in the homeostatic model assessment-insulin resistance index (HOMA-IR) between the two groups before the operation (p<0.634). The incidence of severe cholecystitis was 16.9%. An NLR of 4.18 could predict severe cholecystitis with 78.3% sensitivity and 74.3% specificity. Spearman's correlation revealed a significant association between the preoperative NLR and HOMA-IR on day 1 (r=0.254, p=0.030) and between the preoperative NLR and CRP on day 1 (ρ=0.355; p<0.0001). Conclusions: NLR ≥4.18 was significantly associated with severe cholecystitis. The preoperative NLR in patients undergoing cholecystectomy due to cholecystitis could be a useful surrogate marker of severe cholecystitis.
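The NLR abstract above (PUBMED:30581347) selects its 4.18 cut-off from a ROC analysis. The short sketch below illustrates, with synthetic data and scikit-learn, how such a cut-off is commonly chosen by maximizing Youden's J statistic (sensitivity + specificity - 1). The marker values, group sizes and random seed are invented for illustration; this is not the study's dataset or code.

```python
# Illustration of ROC-based cut-off selection (e.g., an NLR threshold),
# assuming NumPy and scikit-learn are available. Data are synthetic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# synthetic neutrophil-to-lymphocyte ratios: higher on average in severe cases
nlr_non_severe = rng.normal(loc=3.0, scale=1.0, size=113)   # label 0
nlr_severe     = rng.normal(loc=5.5, scale=1.5, size=23)    # label 1

scores = np.concatenate([nlr_non_severe, nlr_severe])
labels = np.concatenate([np.zeros(113), np.ones(23)])

fpr, tpr, thresholds = roc_curve(labels, scores)
auc = roc_auc_score(labels, scores)

# Youden's J picks the threshold that maximizes sensitivity + specificity - 1
j = tpr - fpr
best = np.argmax(j)

print(f"AUC = {auc:.2f}")
print(f"Best cut-off ≈ {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```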
abstract_id: PUBMED:11143321 Guidelines to laparoscopic management of acute cholecystitis. Objective: The present work aims at identifying preoperative variables that may help the surgeon choose the most appropriate operative approach in the various clinical presentations of acute cholecystitis. Summary Background Data: Conversion rates of up to 60% have been reported for laparoscopic management of acute cholecystitis. Previous works indicate that the severity of the gallbladder inflammatory process represents the main cause of laparoscopic conversion. The influence of converting a laparoscopic cholecystectomy for acute cholecystitis to open surgery on the post-operative course remains questionable. Objective preoperative variables correlated with the severity of cholecystitis and predictive of laparoscopic cholecystectomy failure remain to be identified. Methods: Seven preoperative and five post-operative variables were compared among a consecutive series of 62 patients operated on for acute cholecystitis, either by a successful laparoscopic procedure (37 patients), by a laparoscopic procedure converted to open surgery (14 patients) or directly by open surgery (11 patients). The post-operative outcome of patients in the 3 groups was compared. Determinant preoperative factors correlated with the need to convert a laparoscopic procedure were sought with the chi-square test for independence and a multivariate logistic regression analysis. Results: Converting a laparoscopic cholecystectomy for acute cholecystitis to open surgery does not worsen the patient's early post-operative course as long as conversion is decided rapidly and before peroperative complications arise. Three preoperative independent variables predictive of the need to convert a laparoscopic cholecystectomy for acute cholecystitis were identified: the thickness of the gallbladder wall, the preoperative serum C-reactive protein level and, finally, the delay between the start of acute symptoms of cholecystitis and surgery. Conclusions: The post-operative outcome of patients operated on for acute cholecystitis depends more on the severity of the disease than on the type of the surgical procedure. Patients with perforated cholecystitis (grade III) are better managed immediately by open surgery. Patients with acute edematous cholecystitis (grade I) or with empyema or gangrenous cholecystitis (grade II) coming early to surgery (within 72 hrs) and having preoperative serum CRP levels of less than 10 mg% represent the best candidates for laparoscopic surgery. abstract_id: PUBMED:33796428 Complicated Acute Cholecystitis: The Role of C-Reactive Protein and Neutrophil-Lymphocyte Ratio as Predictive Markers of Severity. Objectives The clinical diagnosis of complicated acute cholecystitis (CAC) remains difficult, with several pathological or ultrasonography criteria used to differentiate it from uncomplicated acute cholecystitis (UAC). This study aims to evaluate the use of the combined inflammatory markers C-reactive protein (CRP) and neutrophil-to-lymphocyte ratio (NLR) as surrogate markers to differentiate between UAC and CAC. Methods We identified 600 consecutive patients admitted with biliary symptoms during an acute surgical take from our prospectively maintained electronic database over a period of 55 months. Only patients undergoing emergency cholecystectomy performed during the index admission were included. The primary outcome was the finding of CAC versus UAC.
Results A total of 176 patients underwent emergency laparoscopic cholecystectomy (ELC) during the index admission, including 118 (67%) females with a median age of 51 years (range: 21-97 years). The proportion of UAC (130 [74%]) and CAC (46 [26%]) was determined along with demographic data. Multivariate regression analysis showed that patient age (OR=1.047; p=0.003), higher CRP (OR=1.005; p=0.012) and NLR (OR=1.094; p=0.047) were significant independent factors associated with severity of cholecystitis. Receiver operating characteristic (ROC) analysis for CRP showed an AUC (area under the curve) of 0.773 (95% CI: 0.698-0.849). Using a cut-off value of 55 mg/L for CRP, sensitivity was 73.9% and specificity was 73.1% in predicting CAC. The median post-operative length of stay was four days. The conversion rate from laparoscopic cholecystectomy to open surgery was 2% (4/176), and 5% (9/176) of patients suffered post-operative complications with no mortality at 30 days. Conclusion CRP, NLR and age were independent factors associated with the severity of acute cholecystitis. NLR and CRP can be used as surrogate markers to predict patients at risk of CAC during emergency admission, which can inform future guidelines. Moreover, ELC for CAC can be safely performed under the supervision of dedicated upper GI surgeons. abstract_id: PUBMED:33733804 The inclusive use of effect size conversion and Bayes factor in digestive disease research. An important article, which evaluated statistically significant associations (p < 0.05) using the odds ratio (OR), was published in the first issue of volume 113 of this journal. The study evaluated a relationship between C-reactive protein levels (CRP > 100) and the histological diagnosis of gangrenous acute cholecystitis in patients who underwent cholecystectomy (OR = 3.1). abstract_id: PUBMED:31768828 C-reactive Protein Is the Best Biomarker to Predict Advanced Acute Cholecystitis and Conversion to Open Surgery. A Prospective Cohort Study of 556 Cases. Background: White blood cell count (WBC) is the only biological determinant criterion in the severity assessment of acute cholecystitis (AC) in the revised Tokyo Guidelines 2018 (TG18). The aims of this study were to evaluate the discriminative powers of common inflammatory markers (neutrophil-to-lymphocyte ratio (NLR) and C-reactive protein (CRP)) compared with WBC for the severity of AC, and the risk for conversion to open surgery and to determine their diagnostic cutoff levels. Methods: This was a prospective cohort study. Over 3 years, 556 patients underwent laparoscopic cholecystectomy for AC. Patients were classified into two groups: 139 cases of advanced acute cholecystitis (AAC) (gangrenous cholecystitis, pericholecystic abscess, hepatic abscess, biliary peritonitis, emphysematous cholecystitis), and 417 cases of non-advanced acute cholecystitis (NAAC). Multiple logistic regression and receiver-operating characteristic curve analysis were employed to explore which variables (WBC, CRP, and neutrophil-to-lymphocyte ratio (NLR)) were statistically significant in predicting AAC and conversion to open surgery. Results: On multivariable logistic regression analysis, male gender (OR = 0.4; p = 0.05), diabetes mellitus (OR = 7.8; p = 0.005), 3-4 ASA score (OR = 5.34; p = 0.037), body temperature (OR = 2.65; p = 0.014), and CRP (OR = 1.01; p = 0.0001) were associated independently with AAC.
The value of the area under the curve (AUC) of the CRP (0.75) was higher than that of WBC (0.67) and NLR (0.62) for diagnosing AAC. CRP was the only predictive factor of conversion in multivariate analysis (OR = 1.008 [1.003-1.013]). Comparing areas under the receiver operating characteristic curves, CRP had the highest discriminative power for conversion. Conclusion: CRP is the best inflammatory marker predictive of AAC and of conversion to open surgery. We think that our results would support a multicenter international study to confirm the findings, and if supported, CRP should be considered as a severity criterion of acute cholecystitis in the next revision of the Tokyo Guidelines. abstract_id: PUBMED:30665720 Association of laboratory and radiologic parameters in the diagnosis of acute cholecystitis. Introduction And Objectives: There is currently more evidence suggesting that early surgery should be the treatment of choice for acute calculous cholecystitis, although initial conservative treatment is also reported to be safe. The treatment decision depends on the condition of the patient, surgical experience, and hospital infrastructure, given that early surgery cannot always be carried out. The aim of the present study was to correlate C-reactive protein values with other variables to determine those situations in which surgery cannot be delayed. Materials And Methods: A retrospective study was conducted on patients admitted to the hospital from the emergency service with the diagnosis of acute calculous cholecystitis. The patients were divided into 2 groups: 1) patients that required urgent cholecystectomy and 2) patients that responded well to conservative medical treatment and later underwent deferred cholecystectomy. Results: A total of 238 patients (♂ 54.6%, ♀ 45.4%) were analyzed. Urgent surgery was performed on 158 patients, whereas the remaining 80 patients were released from the hospital following conservative treatment. The odds ratio for C-reactive protein in predicting gangrenous cholecystitis among patients with acute cholecystitis was calculated by logistic regression analysis, yielding an OR of 1.088 (95% CI: 1.031-1.121). Conclusion: In patients diagnosed with acute calculous cholecystitis, the combination of elevated C-reactive protein levels, gallbladder wall thickness, and leukocyte count was correlated with less favorable clinical and gallbladder histologic states, resulting in a greater need for urgent surgical treatment. Answer: Yes, C-reactive protein (CRP) appears to be a useful adjunct in selecting patients for emergency cholecystectomy by predicting severe gangrenous cholecystitis. Several studies have indicated that elevated CRP levels are significantly associated with gangrenous cholecystitis (GC) and can be used as a predictive marker for the condition. One study found that a CRP value of more than 200 mg/dL has a 50% positive predictive value and 100% negative predictive value in predicting gangrenous cholecystitis with 100% sensitivity and 87.9% specificity, suggesting that CRP on its own has a high predictive value in predicting GC (PUBMED:24856179). Another study identified CRP as the best biomarker to predict advanced acute cholecystitis and conversion to open surgery, with a higher area under the curve (AUC) for diagnosing advanced acute cholecystitis compared to white blood cell count and neutrophil-to-lymphocyte ratio (PUBMED:31768828).
Furthermore, a retrospective study assessing risk factors for conversion and complications during laparoscopic cholecystectomy for acute calculous cholecystitis identified CRP over 150 mg/l as a risk factor for conversion (PUBMED:27891173). Another study also confirmed the predictive value of CRP, showing that patients with gangrenous cholecystitis presented with a higher CRP (PUBMED:34904358). Additionally, a study evaluating the use of combined inflammatory markers CRP and neutrophil-to-lymphocyte ratio (NLR) as surrogate markers to differentiate between uncomplicated and complicated acute cholecystitis found that higher CRP was a significant independent factor associated with the severity of cholecystitis (PUBMED:33796428). In summary, the evidence suggests that CRP is a valuable marker in predicting the severity of acute cholecystitis, particularly gangrenous cholecystitis, and can aid in the decision-making process for emergency cholecystectomy. However, it is important to note that while CRP is a strong predictor, it should be used in conjunction with other clinical assessments and diagnostic tools for optimal patient management.
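The predictive values quoted in the answer above (for example, a 50% positive predictive value and 100% negative predictive value for CRP above 200 with 100% sensitivity and 87.9% specificity) depend on how common gangrenous cholecystitis is in the cohort. The sketch below is a non-authoritative Python illustration of how PPV and NPV follow from sensitivity, specificity, and prevalence via Bayes' rule; the 10% prevalence figure is an assumption, not a number reported by the cited studies.

# Illustrative only: positive and negative predictive value from sensitivity,
# specificity and an assumed prevalence.

def predictive_values(sensitivity, specificity, prevalence):
    # Proportions of a notional cohort falling into each cell of the 2x2 table.
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

# Sensitivity/specificity as quoted for the CRP threshold; prevalence assumed at 10%.
ppv, npv = predictive_values(sensitivity=1.00, specificity=0.879, prevalence=0.10)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")

With these inputs the output (PPV roughly 0.48, NPV 1.0) is consistent with the figures quoted above, which suggests, as an inference rather than a reported value, that gangrenous cholecystitis affected roughly one in ten patients in that cohort.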
Instruction: Can research influence mental health policy? Abstracts: abstract_id: PUBMED:11531721 Can research influence mental health policy? Objective: This paper describes the processes involved in policy development and implementation with examples of how this can be influenced by the outcomes of research. Method: The author draws on his experience in the development and implementation of Australia's National Mental Health Policy and on the literature describing public policy analysis. Results: A five-step process of problem identification, policy development, political decision, policy implementation and evaluation is described. This process identifies how issues are considered, adopted and implemented by governments. Conclusion: An understanding of this process can inform mechanisms by which scientific research can impact on the issues considered and the decisions made in each step of policy analysis and development. abstract_id: PUBMED:16917656 From research evidence to policy: Mental health care in Viet Nam. Problem: The use of evidence-based policy is gaining attention in developing countries. Frameworks to analyse the process of developing policy and to assess whether evidence is likely to influence policy-makers are now available. However, the use of evidence in policies on caring for people with mental illness in developing countries has rarely been analysed. Approach: This case study from Viet Nam illustrates how evidence can be used to influence policy. We summarize evidence on the burden of mental illness in Viet Nam and describe attempts to influence policy-makers. We also interviewed key stakeholders to ascertain their views on how policy could be affected. We then applied an analytical framework to the case study; this framework included an assessment of the political context in which the policy was developed, the links between organizations needed to influence policy, external influences on policy-makers and the nature of evidence required to influence policy-makers. Local Setting: The burden of mental illness among various population groups was large but there were few policies aimed at providing care for people with mental illness, apart from policies for providing hospital-based care for people with severe mental illness. Relevant Changes: The national plan proposes to incorporate screening for mental illness among women and children in order to implement early detection and treatment. Lessons Learned: Evidence on the burden of mental ill-health in Viet Nam is patchy and research in this area is still relatively undeveloped. Nonetheless the policy process was influenced by the evidence from research because key links between organizations and policy-makers were established at an early stage, the evidence was regarded as rigorous and the timing was opportune. abstract_id: PUBMED:32517640 Dissemination Strategies to Accelerate the Policy Impact of Children's Mental Health Services Research. The United States is in the midst of a children's mental health crisis, with rates of depression, anxiety, and suicide increasing precipitously. Evidence produced by children's mental health services research can help address this crisis by informing public policy decisions about service delivery, system design, and investments in the social determinants of mental health. 
Unfortunately, the policy impact of children's mental health services research is limited because evidence often fails to reach policy makers, be responsive to their needs, resonate with their worldview, or reflect the contexts in which they make decisions. Dissemination strategies-defined as the development and targeted distribution of messages and materials about research evidence pertaining to a specific issue or intervention-can help address these challenges. Yet, limited integrated guidance exists to inform the design of such strategies. This article addresses this need by synthesizing the results of empirical studies to provide guidance about how to enhance the dissemination of children's mental health services research to policy makers. The article provides four recommendations about the content of policy maker-focused dissemination materials, discusses how strategic framing and message tailoring can increase the chances that evidence is persuasive to policy makers, and highlights strategies to ensure that evidence reaches policy makers. abstract_id: PUBMED:15276934 The mental health policy template: domains and elements for mental health policy formulation. Mental disorders are a major and rising cause of disease burden in all countries. Even when resources are available, many countries do not have the policy and planning frameworks in place to identify and deliver effective interventions. The World Health Organization (WHO) and the World Bank have emphasized the need for ready access to the basic tools for mental health policy formulation, implementation and sustained development. The Analytical Studies on Mental Health Policy and Service Project, undertaken in 1999-2001 by the International Consortium for Mental Health Services and funded by the Global Forum for Health Research aims to address this need through the development of a template for mental health policy formulation. A mental health policy template has been developed based on an inventory of the key elements of a successful mental health policy. These elements have been validated against a review of international literature, a study of existing mental health policies and the results of extensive consultations with experts in the six WHO regions of the world. The Mental Health Policy Template has been revised and its applicability will be tested in a number of developing countries during 2001-2002. The Mental Health Policy Template and the work of the Consortium for Mental Health Services will be presented and the future role of the template in mental health policy development and reform in developing countries will be discussed. abstract_id: PUBMED:35999953 Ageing and Mental Health in Canada: Perspectives from Law, Policy, and Longitudinal Research. Canada is a relatively young, geographically-diverse country, with a larger proportion of the population aged over 65 than under 15. Increasing alongside the number of ageing Canadians is the number of older adults that live with mental health challenges. Across the life course, one in five Canadians will experience a mental health disorder with many more living with subclinical symptoms. For these individuals, their lived experience may be directly impacted by the contemporary laws and policies governing mental illness. 
Examining and reviewing the historical context of mental health and older adults, we provide insights into the evolving landscape of Canadian mental health law and policy, from paternalistic roots in the infancy of the country to modern foci on equity and diversity. Progressing in parallel to changes in mental health policy has been the advancement of mental health research, particularly through longitudinal studies of ageing. Although acting through different mechanisms, the evolution of Canadian mental health law, policy, and research has had, and continues to have, considerable impacts on the substantial proportion of Canadians living with mental health challenges. abstract_id: PUBMED:35378994 Building Bridges Between Evidence and Policy in Mental Health Services Research: Introducing the Policy Review Article Type. Although it is widely accepted that patients do better when evidence-based health care practices are used, there is less acknowledgment of the positive outcomes associated with evidence-based policy making. To address the need for high-quality evidence to inform mental health policies, Psychiatric Services has recently launched a new article format: the Policy Review. This review type defines a specific policy-relevant issue affecting behavioral health systems, describes current knowledge and limitations, and discusses policy implications. Reviews can focus on mental health policies or examine how other health or social policies affect people with mental illness or substance use disorders. This brief overview of the need for a policy review article type describes differences between evidence-based policy making and practices and looks at research approaches focused on evidence-based policy making, as well as legislative and other efforts to support it. Broad guidelines for potential submissions are also provided. abstract_id: PUBMED:33468166 Determinants of using children's mental health research in policymaking: variation by type of research use and phase of policy process. Background: Research use in policymaking is multi-faceted and has been the focus of extensive study. However, virtually no quantitative studies have examined whether the determinants of research use vary according to the type of research use or phase of policy process. Understanding such variation is important for selecting the targets of implementation strategies that aim to increase the frequency of research use in policymaking. Methods: A web-based survey of US state agency officials involved with children's mental health policymaking was conducted between December 2019 and February 2020 (n = 224, response rate = 33.7%, 49 states responding (98%), median respondents per state = 4). The dependent variables were composite scores of the frequency of using children's mental health research in general, specific types of research use (i.e., conceptual, instrumental, tactical, imposed), and during different phases of the policy process (i.e., agenda setting, policy development, policy implementation). The independent variables were four composite scores of determinants of research use: agency leadership for research use, agency barriers to research use, research use skills, and dissemination barriers (e.g., lack of actionable messages/recommendations in research summaries, lack of interaction/collaboration with researchers). Separate multiple linear regression models estimated associations between determinant and frequency of research use scores.
Results: Determinants of research use varied significantly by type of research use and phase of policy process. For example, agency leadership for research use was the only determinant significantly associated with imposed research use (β = 0.31, p < 0.001). Skills for research use were the only determinant associated with tactical research use (β = 0.17, p = 0.03) and were only associated with research use in the agenda-setting phase (β = 0.16, p = 0.04). Dissemination barriers were the most universal determinants of research use, as they were significantly and inversely associated with frequency of conceptual (β = -0.21, p = 0.01) and instrumental (β = -0.22, p = 0.01) research use and during all three phases of policy process. Conclusions: Decisions about the determinants to target with policy-focused implementation strategies-and the strategies that are selected to affect these targets-should reflect the specific types of research use that these strategies aim to influence. abstract_id: PUBMED:30739487 E-mental health: Promising advancements in policy, research, and practice. The increasing need for mental health services in the population is posing significant challenges for the health system. It is therefore important to identify new approaches to delivering care that are sustainable and scalable in terms of reach and impact. E-mental health is one approach that shows promise in addressing the treatment gap in mental healthcare. E-mental health involves leveraging the Internet and related technologies such as smartphone apps, web sites, and social media to deliver mental health services. Over the past decade, this field has made significant advancements in Canada and internationally. In this article, the author introduces the e-mental health field and provides an overview of promising Canadian developments in relation to policy, research, and practice. In addition, the article discusses some of the challenges facing the wide-scale implementation of e-mental health and identifies priority areas of focus for health leaders to advance the field. abstract_id: PUBMED:31739813 Reflections on the next ten years of research, policy and implementation in global mental health. Since the World Health Report 2001 focused on mental health for the first time, the field of global mental health has seen unprecedented growth in policy commitments and research. Yet many challenges remain, including a lack of substantial new financial investments from governments, ongoing human rights abuses suffered by people living with mental illness, weak health systems in low resource settings and large gaps in our knowledge regarding aetiology, prevention of mental illness and mental health promotion. Stark inequalities persist between high-income countries and low- and middle-income countries (LMIC) in research knowledge and service resources. This editorial sets out to reflect on progress to date, and suggest priorities and possible future trends for research, policy and service implementation, especially in LMIC. abstract_id: PUBMED:18441659 Using qualitative research to inform mental health policy. This article offers examples of the ways in which qualitative methods have informed, and may inform, mental health policy in Canada and beyond. 
Three initial uses of these methods are identified: to generate hypotheses to be tested by other means; to explore the subjective experiences and everyday lives of people with mental illnesses; and to investigate processes of recovery and the active role of the individual in recovery. Given the recent focus in mental health policy in Canada, the United States, and around the world on transforming mental health systems to promote recovery and the emphasis recovery places on the individual's own first-hand perspective, we argue that qualitative methods will become increasingly useful as psychiatry shifts away from symptom reduction to enabling people to live satisfying, hopeful, and meaningful lives in the community. Answer: Yes, research can influence mental health policy. The process of policy development and implementation can be influenced by the outcomes of research, as described in a five-step process that includes problem identification, policy development, political decision, policy implementation, and evaluation (PUBMED:11531721). Research evidence has been used to influence mental health care policy in developing countries, such as Vietnam, where evidence on the burden of mental illness helped to shape national plans for early detection and treatment (PUBMED:16917656). Dissemination strategies are important for ensuring that the evidence from children's mental health services research reaches policymakers and is persuasive to them (PUBMED:32517640). The development of a mental health policy template based on key elements of successful policies can aid countries in formulating effective mental health policies (PUBMED:15276934). In Canada, the evolution of mental health law, policy, and research has had considerable impacts on individuals living with mental health challenges (PUBMED:35999953). The introduction of the Policy Review article type in psychiatric services is an effort to provide high-quality evidence to inform mental health policies (PUBMED:35378994). The determinants of using children's mental health research in policymaking can vary by the type of research use and phase of the policy process, indicating that implementation strategies should be tailored accordingly (PUBMED:33468166). E-mental health advancements in policy, research, and practice show promise in addressing the treatment gap in mental healthcare (PUBMED:30739487). Reflections on the future of global mental health research emphasize the importance of research in informing policy and service implementation, especially in low- and middle-income countries (PUBMED:31739813). Finally, qualitative research methods have been used to inform mental health policy by exploring subjective experiences and processes of recovery, which are increasingly relevant as mental health systems focus on promoting recovery (PUBMED:18441659).
Instruction: Does universal active MRSA surveillance influence anti-MRSA antibiotic use? Abstracts: abstract_id: PUBMED:34487964 Evaluation of MRSA Surveillance Nasal Swabs for Predicting MRSA Infection in Surgical Intensive Care Unit Patients. Background: We aimed to examine the clinical value of serial MRSA surveillance cultures to rule out a MRSA diagnosis on subsequent cultures during a patient's surgical intensive care unit (SICU) admission. Material And Methods: We performed a retrospective cohort study to evaluate patients who received a MRSA surveillance culture at admission to the SICU (n = 6,915) and collected and assessed all patient cultures for MRSA positivity during their admission. The primary objective was to evaluate the transition from a MRSA negative surveillance on admission to MRSA positive on any subsequent culture during a patient's SICU stay. Percent of MRSA positive cultures by type following MRSA negative surveillance cultures was further analyzed. Measurements And Main Results: 6,303 patients received MRSA nasal surveillance cultures at admission with 21,597 clinical cultures and 7,269 MRSA surveillance cultures. Of the 6,163 patients with an initial negative, 53 patients (0.87%) transitioned to MRSA positive. Of the 139 patients with an initial positive, 30 (21.6%) had subsequent MRSA positive cultures. An initial positive MRSA surveillance status on admission predicted higher MRSA positivity rates in qualitative lower respiratory cultures (64.3% versus 3.1%), superficial wound (60.0% versus 1.6%), deep wound (39.0% versus 0.8%), tissue (26.3% versus 0.6%), and body fluid (20.8% versus 0.7%) cultures when compared with patients who were MRSA negative on admission. Conclusion: Following negative MRSA nasal surveillance cultures, patients showed a low likelihood of MRSA infection, suggesting that empiric anti-MRSA treatment is unnecessary for specific patient populations. SICU patients' MRSA status at admission should guide empiric anti-MRSA therapy. abstract_id: PUBMED:25999747 Polymerase chain reaction-based active surveillance of MRSA in emergency department patients. Conventional culture methods to detect methicillin-resistant Staphylococcus aureus (MRSA) take a few days, and their sensitivity and usefulness also need to be improved. In this study, active screening was performed using the polymerase chain reaction (PCR) for colonization with MRSA on admission and follow-up surveillance after admission to an emergency department between June 2012 and August 2012, and the backgrounds of PCR and/or culture-method-positive patients were compared. Among 95 patients, 15 (15.8%) patients were positive for MRSA on PCR and/or culture; 6.3% (6/95) of patients were positive on admission, and 9.5% (9/95) became positive during the stay after admission. The major primary diagnoses in MRSA-positive patients were trauma and cerebrovascular diseases. Nine (60%) of 15 patients were MRSA-positive on both PCR and culture, compared with three (20%) of 15 who were PCR-positive but culture-negative. The other three (20%) of 15 patients were PCR-negative but culture-positive. Furthermore, there was a tendency for younger age and shorter stay to be associated with PCR-positive but culture-negative results. These findings suggest that active surveillance with PCR may be highly sensitive and useful for the early diagnosis of MRSA colonization to prevent nosocomial transmission from the emergency department to the regular inpatient wards of the hospital.
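To make the surgical ICU numbers in the first abstract above (PUBMED:34487964) concrete: 53 of 6,163 patients with a negative admission nasal swab later had an MRSA-positive culture, versus 30 of 139 patients with a positive admission swab. The short Python sketch below, an illustration rather than the study's analysis, turns those reported counts into the negative and positive predictive values of the admission swab for any subsequent MRSA-positive culture.

# Illustrative only: predictive values of the admission MRSA nasal swab for any
# subsequent MRSA-positive culture, using counts reported in PUBMED:34487964.

neg_swab_total, neg_swab_later_positive = 6163, 53
pos_swab_total, pos_swab_later_positive = 139, 30

npv = 1 - neg_swab_later_positive / neg_swab_total   # probability of staying MRSA negative
ppv = pos_swab_later_positive / pos_swab_total       # probability of a later positive culture
print(f"NPV of a negative admission swab: {npv:.3f}")   # about 0.991
print(f"PPV of a positive admission swab: {ppv:.3f}")   # about 0.216

These values mirror the 0.87% and 21.6% transition rates reported in the abstract and illustrate why a negative admission swab argues against empiric anti-MRSA therapy in this population.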
abstract_id: PUBMED:25103488 Does universal active MRSA surveillance influence anti-MRSA antibiotic use? A retrospective analysis of the treatment of patients admitted with suspicion of infection at Veterans Affairs Medical Centers between 2005 and 2010. Objectives: After the implementation of an active surveillance programme for MRSA in US Veterans Affairs (VA) Medical Centers, there was an increase in vancomycin use. We investigated whether positive MRSA admission surveillance tests were associated with MRSA-positive clinical admission cultures and whether the availability of surveillance tests influenced prescribers' ability to match initial anti-MRSA antibiotic use with anticipated MRSA results from clinical admission cultures. Methods: Analyses were based on barcode medication administration data, microbiology data and laboratory data from 129 hospitals between January 2005 and September 2010. Hospitalized patient admissions were included if clinical cultures were obtained and antibiotics started within 2 days of admission. Mixed-effects logistic regression was used to examine associations between positive MRSA admission cultures and (i) admission MRSA surveillance test results and (ii) initial anti-MRSA therapy. Results: Among 569,815 included admissions, positive MRSA surveillance tests were strong predictors of MRSA-positive admission cultures (OR 8.5; 95% CI 8.2-8.8). The negative predictive value of MRSA surveillance tests was 97.6% (95% CI 97.5%-97.6%). The diagnostic OR between initial anti-MRSA antibiotics and MRSA-positive admission cultures was 3.2 (95% CI 3.1-3.4) for patients without surveillance tests and was not significantly different for admissions with surveillance tests. Conclusions: The availability of nasal MRSA surveillance tests in VA hospitals did not seem to improve the ability of prescribers to predict the necessity of initial anti-MRSA treatment despite the high negative predictive value of MRSA surveillance tests. Prospective trials are needed to establish the safety and effectiveness of using MRSA surveillance tests to guide antibiotic therapy. abstract_id: PUBMED:28289460 The evaluation of MRSA surveillance cultures by the number and combinations of anatomical sites. Introduction: The identification of patients infected and/or colonised by methicillin resistant Staphylococcus aureus (MRSA) is necessary for the timely introduction of measures for infection control. We compared the diagnostic efficacy of combinations of MRSA surveillance swabs routinely taken by health institutions in the country. Methods: All surveillance samples, which were sent for a microbiological analysis to detect MRSA with the culture method in 2014, in the three departments for medical microbiology of the National Laboratory for Health, Environment and Food, were included in this study. Results: Among 65,251 surveillance cultures from 13,274 persons, 1,233 (2.1%) were positive (490 positive persons). Prevailing positive surveillance cultures were throat swabs (31.3%), followed by nose swab (31.2%), skin swab (18.9%), perineum (16.4%) and wound swabs (1.4%). The contribution of other samples, such as aspirate, urine and excreta, was under 1%. We found no statistically significant differences in the frequency of detection of a positive patient, if the combination of samples NTS (nose, throat, skin) or NTP (nose, throat, perineum) was analysed. 
However, statistically significant differences were confirmed when any of the anatomic sites was omitted from the sets of NTP and NTS (chi-square; p<0.01). Adding additional samples resulted in only 24 additional positive patients (4.9%). Conclusions: The results indicate that increasing the number of surveillance cultures above three does not add much to the sensitivity of MRSA surveillance; the exception could be wound swabs. The swabs from the perineum and from the skin are interchangeable. abstract_id: PUBMED:33812940 Routine antiseptic baths and MRSA decolonization: diverse approaches across Singapore's acute-care hospitals. To determine the variation in practices on meticillin-resistant Staphylococcus aureus (MRSA) surveillance and management of MRSA-colonized patients amongst 17 acute healthcare facilities in Singapore, the Ministry of Health convened a sharing session with Infection Prevention and Control Leads. All hospitals practised close to universal MRSA entry swabbing in keeping with national policy. There were, however, major variations in the response to both positive and negative surveillance swabs across facilities, including the role of routine antiseptic bathing and MRSA decolonization. Most undertaking decolonization considered its role to be in 'bioburden reduction' rather than longer-term clearance. abstract_id: PUBMED:25433010 Relationship between consumption of MRSA-active antibiotics and burden of MRSA in acute care hospitals in Catalonia, Spain. Objectives: To analyse the possible relationship between consumption of old and new MRSA-active antibiotics and burden of MRSA in acute care hospitals in Catalonia during the period 2007-12. Methods: Fifty-four hospitals participating in the VINCat Programme were included. Proportion of MRSA (resistant isolates of Staphylococcus aureus per 100 isolates of S. aureus tested), incidence of new cases of infection [new cases of MRSA per 1000 occupied bed-days (OBD)] and incidence of cases of bacteraemia (MRSA bacteraemia cases per 1000 OBD) were determined to estimate the annual MRSA burden. Antibiotic consumption was calculated in DDD/100 OBD. Cost was expressed in euros/100 OBD. Results: MRSA rates remained stable over the study period, with the proportion of MRSA ranging from 20% to 22.82% in 2007 and 2012, respectively (P=0.864). Consumption of old MRSA-active antibiotics (vancomycin and teicoplanin) did not change significantly, with values from 1.51 to 2.07 DDD/100 OBD (P=0.693). Consumption of new MRSA-active antibiotics (linezolid and daptomycin) increased significantly, with values rising from 0.24 to 1.49 DDD/100 OBD (P<0.001). Cost increased by almost 200%. Conclusions: A widespread and steady increase in consumption of new MRSA-active antibiotics was observed among acute care hospitals in Catalonia, in spite of a stable MRSA burden. At the same time, consumption of old drugs remained stable. Such trends resulted in a significant increase in cost. Our findings suggest that factors other than the proportion of methicillin resistance among S. aureus may influence the use of old and new MRSA-active antibiotics in the clinical setting. abstract_id: PUBMED:36830164 Implementation of MRSA Nasal Swabs as an Antimicrobial Stewardship Intervention to Decrease Anti-MRSA Therapy in COVID-19 Infection. In the early stages of treating patients with SARS-CoV-2, limited information was available to guide antimicrobial stewardship interventions.
The COVID-19 Task Force and Antimicrobial Stewardship Committee, at a 988-bed academic medical center, implemented the use of methicillin-resistant Staphylococcus aureus (MRSA) nasal swab polymerase chain reaction (PCR) testing to assist with the de-escalation of anti-MRSA therapy in patients with suspected superimposed bacterial pneumonia in COVID-19. A retrospective study was conducted to evaluate the impact of MRSA nasal swab PCR testing on the rate of anti-MRSA therapy between 13 April 2020 and 26 July 2020. A total of 122 patients were included in the analysis. Of the patients included in the final analysis, 58 (47.5%) had anti-MRSA therapy discontinued and 41 (33.6%) avoided anti-MRSA therapy completely due to a negative swab result. With the implementation of MRSA nasal swab PCR testing in COVID-19 patients, anti-MRSA therapy was reduced in 81% of patients in this study. In patients who continued with anti-MRSA therapy, nasal swabs were either positive for MRSA or an alternative indication for anti-MRSA therapy was noted. Only three patients in the cohort had MRSA identified in a sputum culture, all of whom had anti-MRSA therapy continued. MRSA nasal swab PCR testing may serve as an effective antimicrobial stewardship tool in COVID-19 pneumonia. abstract_id: PUBMED:31940297 Incidence and antibiotic susceptibility of MRSA infections in a Saudi Arabian Hospital: a 10-year surveillance study. Introduction: Methicillin-resistant Staphylococcus aureus (MRSA) infections remain prevalent and are associated with significant morbidity and mortality. The aim of the present study was to investigate the epidemiology of MRSA infections and antibiotic susceptibility in Qatif, Saudi Arabia. Methodology: All patients who had a positive culture for S. aureus from January 1, 2006 through December 31, 2015 were enrolled and analyzed in WHONET, free database software developed by the World Health Organization (WHO). Patients' data were collected from electronic medical records and traditional chart reviews to determine whether MRSA acquisition was likely to have been in the community or in the healthcare facility. Susceptibility results for community-associated (CA)-MRSA were compared with those of isolates from the healthcare setting. Results: A total of 3395 patients with S. aureus infections were analyzed, with an overall annual MRSA incidence of 25 cases per 100,000 patients (27% of total S. aureus isolates). While the majority (64%) of MRSA infections occurred in the healthcare setting, CA-MRSA isolation increased steadily from 23% in 2006 to 60% in 2015, exceeding the rate of isolation of healthcare-associated (HA)-MRSA. Skin and soft tissue, the lungs and the bloodstream were the most common sites of infection, with 20% to 35% of MRSA infections occurring in pediatric patients. In the inpatient setting, the majority of infections due to MRSA were in surgical wards and critical care units. Compared with CA-MRSA, HA-MRSA isolates were more frequently resistant to ciprofloxacin, clindamycin, erythromycin, tetracycline, and trimethoprim/sulfamethoxazole. Conclusions: Staphylococcus aureus continues to cause infections at multiple sites with a relatively stable methicillin-resistance rate, but the isolation of MRSA from the community is increasing. abstract_id: PUBMED:38246975 Antimicrobial activity and mechanism of anti-MRSA of phloroglucinol derivatives.
Background: In previous studies, the authors completed the total synthesis of several phloroglucinol natural products and synthesized a series of their derivatives, which showed good biological activities when tested. Objectives: To discover anti-MRSA lead compounds and study their mechanism of action. Methods: Phloroglucinol derivatives were tested to investigate their activities against several gram-positive strains including methicillin-resistant Staphylococcus aureus (MRSA). The mechanism study was conducted by determining the extracellular potassium ion concentration, intracellular NADPH oxidase content, SOD activity, ROS amount in MRSA and MRSA survival rate under A5 treatment. The in vitro cytotoxicity test of A5 was conducted. Results: The activity of monocyclic compounds was stronger than that of bicyclic compounds, and compound A5 showed the best MIC value of 0.98 μg/mL and MBC value of 1.95 μg/mL, which were 4-8 times lower than those of vancomycin. The mechanism study of A5 showed that it achieved its anti-MRSA effect through membrane damage, as evidenced by the increased extracellular potassium ion concentration after A5 treatment. Another possible mechanism is cell death induced by ROS overproduction, suggested by the observed alteration of several reactive oxygen species (ROS)-related indicators, including NADPH concentration, superoxide dismutase (SOD) activity, ROS content and bacterial survival rate after A5 treatment. The cytotoxicity results in vitro showed that A5 was basically non-toxic to cells. Conclusion: Acylphloroglucinol derivative A5 showed good anti-MRSA activity, possibly via membrane damage and a ROS-mediated oxidative stress mechanism. It deserves further exploration as a potential lead for the development of new anti-MRSA agents. abstract_id: PUBMED:27303395 A Snapshot on MRSA Epidemiology in a Neonatal Intensive Care Unit Network, Palermo, Italy. Objectives: We performed a 1-year prospective surveillance study on MRSA colonization within the five NICUs of the metropolitan area of Palermo, Italy. The purpose of the study was to assess the epidemiology of MRSA in NICUs from a network perspective. Methods: Transfer of patients between NICUs during 2014 was traced based on the annual hospital discharge records. In the period February 2014-January 2015, in the NICU B, at the University teaching hospital, nasal swabs from all infants were collected weekly, whereas in the other four NICUs (A, C, D, E) at 4-week intervals. MRSA isolates were submitted to antibiotic susceptibility testing, SCCmec typing, PCR to detect lukS-PV and lukF-PV (lukS/F-PV) genes and the gene encoding the toxic shock syndrome toxin (TSST-1), multilocus variable number tandem repeat fingerprinting (MLVF), and multilocus sequence typing (MLST). Results: In the period under study, 587 nasal swabs were obtained from NICU B, and 218, 180, 157, and 95 from NICUs A, C, D, and E, respectively. Two groups of NICUs at high prevalence and low prevalence of MRSA colonization were recognized. Overall, 113 isolates of MRSA were identified from 102 infants. Six MLVF types (A-F) were detected, with type C being subdivided into five subtypes. Five sequence types (STs) were found with ST22-IVa being the most frequent type in all NICUs. All the MRSA molecular subtypes, except for ST1-IVa, were identified in NICU B. Conclusions: Our findings support the need to approach surveillance and infection control in NICUs from a network perspective, prioritizing referral healthcare facilities.
Answer: Universal active MRSA surveillance does not seem to significantly influence the prescribers' ability to match initial anti-MRSA antibiotic use with anticipated MRSA results from clinical admission cultures. A retrospective analysis of the treatment of patients admitted with suspicion of infection at Veterans Affairs Medical Centers between 2005 and 2010 found that the availability of nasal MRSA surveillance tests did not improve the ability of prescribers to predict the necessity of initial anti-MRSA treatment, despite the high negative predictive value of MRSA surveillance tests. Prospective trials were suggested to establish the safety and effectiveness of using MRSA surveillance tests to guide antibiotic therapy (PUBMED:25103488). Furthermore, the implementation of MRSA nasal swabs as an antimicrobial stewardship intervention at a 988-bed academic medical center showed that with the use of MRSA nasal swab PCR testing, anti-MRSA therapy was reduced in 81% of patients in the study. This suggests that MRSA nasal swab PCR testing can serve as an effective antimicrobial stewardship tool (PUBMED:36830164). However, it's important to note that the impact of MRSA surveillance on antibiotic use may vary depending on the setting and the specific practices of the healthcare facility. For instance, in Singapore's acute-care hospitals, there were major variations in the response to both positive and negative surveillance swabs, including the role of routine antiseptic bathing and MRSA decolonization (PUBMED:33812940). In summary, while MRSA surveillance has the potential to influence anti-MRSA antibiotic use, the actual impact on prescribing practices may not be significant without additional interventions or changes in clinical guidelines.
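Several of the associations summarized above are reported as odds ratios from 2x2 relationships, for example the OR of 8.5 between a positive surveillance test and an MRSA-positive admission culture (PUBMED:25103488). The sketch below shows the standard Woolf (log-scale) calculation of an odds ratio with a 95% confidence interval in Python; the cell counts are hypothetical and are not taken from the study.

import math

# Illustrative only: odds ratio and Woolf 95% CI from a 2x2 table.
# Rows: surveillance test positive / negative; columns: admission culture positive / negative.
# These counts are invented for the example.
a, b = 120, 380     # test positive: culture positive, culture negative
c, d = 150, 4350    # test negative: culture positive, culture negative

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")

The published analyses used mixed-effects logistic regression rather than a raw 2x2 table, so this is only a simplified view of how such an odds ratio and interval are formed.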
Instruction: Internal Spermatic Vein Insufficiency in Varicoceles: A Different Entity in Adults and Adolescents? Abstracts: abstract_id: PUBMED:26295656 Internal Spermatic Vein Insufficiency in Varicoceles: A Different Entity in Adults and Adolescents? Objective: The purpose of this study was to determine whether phlebographic features can be used to discriminate adult from adolescent varicocele. Materials And Methods: Left and right internal spermatic venograms of 191 adolescents (< 17 years) and 224 adults (≥ 25 years) were anonymized and evaluated. Phlebographic radioanatomic features (valves, duplications, collaterals, and classifications) were compared and analyzed with univariate tests. Results: Insufficiency of the left internal spermatic vein (ISV) was confirmed in 409 of the 415 (99%) patients. Adults had no spontaneous opacification of the ISV during venography twice as frequently as adolescents (p = 0.001), a complex outflow into the renal vein 2.2 times as often (p = 0.021), and significantly more collaterals (p = 0.030). Adolescents had a significantly lower number of competent valves and significantly more instances of nutcracker phenomenon (p = 0.001). According to the Bähren classification, the distribution of the types of ISVs was significantly different between adults and adolescents (p = 0.009). Insufficiency of the right ISV was encountered 2.4 times as frequently in adults as in adolescents (p < 0.001). In adults the maximum diameter of the ISV was significantly larger (p = 0.023). Bilateral ISV insufficiency was 2.2 times as frequent in adults (p < 0.001) as in adolescents. Conclusion: Left-sided varicoceles in adults are distinct from those in adolescents. In adults, reflux is likely to be induced via collateral pathways, whereas in adolescents congenital venous abnormalities are predominantly present. The higher prevalence and the greater diameter of a right insufficient ISV in adults, combined with the absence of venous anatomic differences, supports the hypothesis that right-sided varicocele is an evolutive disease. abstract_id: PUBMED:1393441 Laparoscopic ligation of internal spermatic vein. Laparoscopic procedures have long been a standard form of treatment for gynaecological disorders but have only recently shown promise in the evaluation and treatment of urogenital diseases, such as pelvic lymphadenectomy. We performed laparoscopic ligation of the bilateral internal spermatic veins in 15 male pigs. The average operative time was 20 min and operative morbidity was minimal, comprising mild subcutaneous emphysema around the trocars. Engorgement of the spermatic vein proximal to the endoclip site was noted. There was no operative mortality. Laparoscopic ligation of the internal spermatic veins seems to be a feasible method for the treatment of varicoceles, especially bilateral lesions. abstract_id: PUBMED:7966798 The correlation of internal spermatic vein palpability with ultrasonographic diameter and reversal of venous flow. A gold standard that defines the presence of a subclinical varicocele does not exist. No one has accurately demonstrated at what diameter an internal spermatic vein becomes palpable. We correlated the clinical examination of the scrotum with ultrasonographic measurements of internal spermatic vein diameter and reversal of venous flow in 156 testicles. The internal spermatic vein became palpable at diameters of 3.0 to 3.5 mm. Reversal of flow was found in all veins larger than 3.5 mm. and did not occur in any veins smaller than 2.5 mm. 
In veins with diameters of 2.5 to 3.5 mm, neither palpability nor internal spermatic vein diameter correlated with the reversal of flow by duplex Doppler color ultrasound. If the internal spermatic vein is nonpalpable during the diagnostic evaluation of the infertile man, we do not advocate further radiographic studies. If the palpability of the internal spermatic vein is questionable, duplex Doppler color ultrasound is indicated. abstract_id: PUBMED:2287470 Spermatic-saphenous vein bypass in the treatment of essential varicocele. A new technique for surgical treatment of idiopathic varicocele is described. This technique consists of ligation and resection of the internal spermatic vein and microvascular anastomosis between the proximal tract of the internal spermatic vein and the saphenous vein. In this way a communication is established between a high-pressure regimen and a lower one. Fourteen idiopathic varicoceles were successfully treated with this new technique. abstract_id: PUBMED:29738177 Microscopic spermatic vein ligation for the treatment of varicocele. Objective: To explore the effect of spermatic vein ligation under the microscope in the treatment of varicocele (VC). Methods: A total of 120 VC patients admitted to our department from September 2011 to February 2015 were randomly divided into an experimental and a control group of equal number, the former treated by microscopic spermatic vein ligation and the latter by conventional open high ligation. Comparisons were made between the two groups of patients in the internal diameters of the spermatic vein during eupnea and Valsalva maneuver, the reflux time of the spermatic vein, blood flow parameters of the testicular artery, and semen quality before and at 3 months after surgery. Results: At 3 months after surgery, the experimental group, as compared with the control, showed significantly decreased reflux time of the spermatic vein ([0.41 ± 0.10] vs [1.08 ± 0.10] s, P <0.05) and peak systolic velocity (9.26 ± 1.35 vs 10.64 ± 1.28, P <0.05) and resistance index (0.52 ± 0.03 vs 0.61 ± 0.03, P <0.05) of the testicular artery but markedly increased internal diameters of the spermatic vein during eupnea ([1.63 ± 0.07] vs [1.59 ± 0.06] mm, P <0.05) and Valsalva maneuver ([1.72 ± 0.05] vs [1.68 ± 0.07] mm, P <0.05), sperm concentration ([46.84 ± 5.24] vs [35.35 ± 4.26] ×10⁶/ml, P <0.05), sperm motility ([63.75 ± 7.73] vs [53.87 ± 6.46] %, P <0.05), and total sperm count ([89.54 ± 7.95] vs [75.24 ± 8.43] ×10⁶/ml, P <0.05). Conclusions: Microscopic spermatic vein ligation has a definite effect in the treatment of varicocele, which can significantly improve the testicular blood flow and semen quality of the patient. abstract_id: PUBMED:16779541 Phlebographic classification of anatomic variants in the right internal spermatic vein confluence. Purpose: Male varicocele is a clinical dysfunction caused by a pathological venous reflux. Knowledge of anatomic variants of the internal spermatic vein confluence is fundamental for the technical success of percutaneous treatment. While numerous studies have analysed the phlebographic anatomy of the left internal spermatic vein, no exhaustive description exists for the right internal spermatic vein. Materials And Methods: From a retrospective review of 3229 patients treated percutaneously between 1988 and 2003, we extrapolated the phlebographic images of patients with incontinence of the right internal spermatic vein only. Mean patient age was 24.6 (range 14-46) years.
Indication for treatment was presence of pain in the right inguinal region and absence of a history of trauma and/or seminal-fluid alterations. Phlebography had been performed with transbrachial access using a tilt table and a multipurpose angiographic catheter. Contrast medium was injected into both the inferior vena cava and the renal vein. Selective catheterisation of the internal spermatic vein was then performed to assess the radiological characteristics of the vessels prior to sclerosis. Results: There were 93 cases of incontinence of the right internal spermatic vein only (2.8%). In the first group of patients (seven cases, 7.5%), the right internal spermatic vein drained exclusively into the renal vein; the injection of contrast medium during a Valsalva manoeuvre allowed visualisation of the vein almost as far as the iliac level. In most cases, the vein appeared uniformly dilated and without valvular systems along its course. In the second group (21 cases, 22.5%), the vein drained into both the renal vein and the inferior vena cava, with one branch showing functional predominance over the other: selective catheterisation was easier to perform on the first branch. Selective catheterisation confirmed dilatation of the vein as well as the absence of valvular systems. In most patients (65 cases, 69.8%), the internal spermatic vein drained into the inferior vena cava; the confluence was double in five patients and single in 60 patients. Visualisation of incontinence was limited to the initial 5-10 cm of the vein in 13 cases; however, vein dilatation and absence of valvular systems were confirmed beyond the semicontinent valve. Conclusions: Interventional treatment is one of the therapeutic options for male varicocele, but the method is limited by the presence of anatomic variants or aberrant supplying vessels, which make catheterisation and sclerosis of the internal spermatic vein difficult if not impossible. Interventional radiologists must have a thorough knowledge of anatomic variants of the right internal spermatic vein to be able to perform the procedure within a reasonable amount of time and reduce radiation exposure. abstract_id: PUBMED:24659569 Analysis of testicular-internal spermatic vein variation and the recreation of varicocoele in a Sprague-Dawley rat model. Many laboratories that have tried to recreate the varicocoele model have met with varied success. The aim was to recreate a consistent varicocoele model by exploring the anatomic variability of the testicular-spermatic venous system in Sprague-Dawley (SD) rats. Seventy-two sexually mature SD male rats were randomly divided into three groups of 24 rats each. The interventions were partial ligation of the left renal vein with ligation of the internal spermatic vein (ISV) communicating branches to the common iliac vein and of the ISV communicating branches (RVISVCBCIV), or partial ligation of the left renal vein with ISV communicating branch ligation (RVISVCB). The results showed that the mean diameter of the left ISV was significantly increased in the RVISVCBCIV group compared with the control and RVISVCB groups (p < 0.001). Using ISV as the reference, the sensitivity of varicocoele was 71.43%, and the specificity was 80%. In addition, the positive predictive value was 83.33%, and the negative predictive value was 66.67%. Sperm count, motility, Johnsen score and the spermatogenic cell density were lower in the RVISVCBCIV group compared with the control (p < 0.01). The apoptotic index was higher in the RVISVCBCIV group compared with control groups (p < 0.01).
The RVISVCBCIV procedure provides a more effective method for establishing an induced varicocoele model. abstract_id: PUBMED:36104092 Internal Spermatic Vein to Superficial Epigastric Vein Microsurgical Bypass in Varicocele Treatment. Introduction: Identification and preservation of testicular artery and lymphatic vessels during microsurgical varicocelectomy can be tedious if an adherent, encompassing venous network is encountered. A venous bypass from the internal spermatic vein to the saphenous or inferior epigastric vein, which has been described for varicocele treatment, may be used in such situations. This paper describes a simplified modification of the venous bypass technique that reroutes the testicular blood to the superficial epigastric vein, which can easily be found in the incisional wound. Surgical technique and anastomotic patency test are described, and indications and results are discussed. Materials And Methods: During 2020 and 2021, 32 adolescent patients underwent microsurgical varicocelectomy. In eight patients an additional microsurgical testicular vein-superficial epigastric vein microvascular bypass was performed. The indication for bypass was difficult identification of testicular artery and/or lymphatic vessels due to an adherent venous plexus. Results: Varicocele resolution was noted in all eight patients with clinical and/or semen analysis improvement. There were no complications or recurrences. The average length of the procedure was 65 minutes. All patients were discharged within 24 hours and no antiplatelet or anticoagulant therapy was used. Conclusion: Testicular vein to superficial epigastric vein anastomosis is a useful and simplified venous bypass technique that reroutes the blood from the pampiniform plexus to the femoral vein. It can be done as an adjunct to microsurgical varicocelectomy in selected patients through a standard incision. abstract_id: PUBMED:7426951 The Doppler assessment of varicoceles and internal spermatic vein reflux in infertile men. The Doppler technique has been used to evaluate venous reflux in the spermatic cord. Valsalva-induced reflux occurred on the left side in 83% and on the right side in 59% of 118 patients without clinical varicoceles and there was no difference in incidence between fertile and infertile men. The significance of Valsalva-induced reflux should be questioned. Greater importance should be attributed to the spontaneous venous reflux that occurred during quiet respiration in the majority of patients with varicoceles. Seven velocity waveform patterns are described and these are thought to represent increasing degrees of internal spermatic vein reflux and provide a basis on which it is possible to grade varicoceles. The Doppler grades correlated with the size of the varicocele, and with the internal spermatic vein diameter and testosterone concentrations. abstract_id: PUBMED:24314819 Impact of internal spermatic artery preservation during laparoscopic varicocelectomy on recurrence and the catch-up growth rate in adolescents. Objective: To investigate the effectiveness of laparoscopic varicocelectomy (LV) in adolescents with varicocele and analyze the impact of internal spermatic artery (ISA) preservation on surgical outcomes. Materials And Methods: Data on 92 adolescents with left varicocele who underwent LV between December 1998 and January 2011 were retrospectively analyzed. The mean age of the patients was 13.2 ± 2.1 years.
Age, grade of disease, number of ligation veins, recurrence rates, and catch-up growth were analyzed in patients who underwent ISA preservation and ligation. The median duration of the follow-up was 21 months. Results: ISA preservation was performed on 50 patients (54%). There were no significant inter-group differences in terms of age, varicocele grade, number of ligation veins, and catch-up growth (93% vs. 90%). The patients who received artery preservation demonstrated a higher recurrence rate (22%) than those who received artery ligation (5%; p = 0.032). Among 13 patients who had persistent or recurrent varicocele, nine were treated with embolization and one was treated with magnification-assisted subinguinal varicocelectomy. None of these 10 patients demonstrated recurrence or testicular atrophy. Conclusions: LV with ISA ligation can reduce the recurrence rate and results in the same catch-up growth rate in comparison with LV with ISA preservation. Answer: Yes, internal spermatic vein (ISV) insufficiency in varicoceles appears to be a different entity in adults and adolescents. A study that aimed to determine whether phlebographic features can discriminate adult from adolescent varicocele found that left-sided varicoceles in adults are distinct from those in adolescents. In adults, reflux is likely to be induced via collateral pathways, whereas in adolescents, congenital venous abnormalities are predominantly present. The study also noted that absence of spontaneous opacification of the ISV during venography was twice as frequent in adults as in adolescents, that adults more often showed a complex outflow into the renal vein, and that adults had significantly more collaterals. Adolescents had a significantly lower number of competent valves and significantly more instances of the nutcracker phenomenon. The distribution of the types of ISVs was significantly different between adults and adolescents according to the Bähren classification. Insufficiency of the right ISV was encountered more frequently in adults than in adolescents, and in adults, the maximum diameter of the ISV was significantly larger. Bilateral ISV insufficiency was also more frequent in adults (PUBMED:26295656). These findings suggest that the pathophysiology and anatomical characteristics of varicoceles may evolve with age, indicating that the approach to diagnosis and treatment might need to be tailored differently for adults and adolescents.
Instruction: A genome-wide association study of autism using the Simons Simplex Collection: Does reducing phenotypic heterogeneity in autism increase genetic homogeneity? Abstracts: abstract_id: PUBMED:25534755 A genome-wide association study of autism using the Simons Simplex Collection: Does reducing phenotypic heterogeneity in autism increase genetic homogeneity? Background: Phenotypic heterogeneity in autism has long been conjectured to be a major hindrance to the discovery of genetic risk factors, leading to numerous attempts to stratify children based on phenotype to increase power of discovery studies. This approach, however, is based on the hypothesis that phenotypic heterogeneity closely maps to genetic variation, which has not been tested. Our study examines the impact of subphenotyping of a well-characterized autism spectrum disorder (ASD) sample on genetic homogeneity and the ability to discover common genetic variants conferring liability to ASD. Methods: Genome-wide genotypic data of 2576 families from the Simons Simplex Collection were analyzed in the overall sample and phenotypic subgroups defined on the basis of diagnosis, IQ, and symptom profiles. We conducted a family-based association study, as well as estimating heritability and evaluating allele scores for each phenotypic subgroup. Results: Association analyses revealed no genome-wide significant association signal. Subphenotyping did not increase power substantially. Moreover, allele scores built from the most associated single nucleotide polymorphisms, based on the odds ratio in the full sample, predicted case status in subsets of the sample equally well and heritability estimates were very similar for all subgroups. Conclusions: In genome-wide association analysis of the Simons Simplex Collection sample, reducing phenotypic heterogeneity had at most a modest impact on genetic homogeneity. Our results are based on a relatively small sample, one with greater homogeneity than the entire population; if they apply more broadly, they imply that analysis of subphenotypes is not a productive path forward for discovering genetic risk variants in ASD. abstract_id: PUBMED:31289270 A genome-wide scan statistic framework for whole-genome sequence data analysis. The analysis of whole-genome sequencing studies is challenging due to the large number of noncoding rare variants, our limited understanding of their functional effects, and the lack of natural units for testing. Here we propose a scan statistic framework, WGScan, to simultaneously detect the existence, and estimate the locations of association signals at genome-wide scale. WGScan can analytically estimate the significance threshold for a whole-genome scan; utilize summary statistics for a meta-analysis; incorporate functional annotations for enhanced discoveries in noncoding regions; and enable enrichment analyses using genome-wide summary statistics. Based on the analysis of whole genomes of 1,786 phenotypically discordant sibling pairs from the Simons Simplex Collection study for autism spectrum disorders, we derive genome-wide significance thresholds for whole genome sequencing studies and detect significant enrichments of regions showing associations with autism in promoter regions, functional categories related to autism, and enhancers predicted to regulate expression of autism associated genes. abstract_id: PUBMED:32807774 Clustering by phenotype and genome-wide association study in autism. 
Autism spectrum disorder (ASD) has phenotypically and genetically heterogeneous characteristics. A simulation study demonstrated that attempts to categorize patients with a complex disease into more homogeneous subgroups could have more power to elucidate hidden heritability. We conducted cluster analyses using the k-means algorithm with a cluster number of 15 based on phenotypic variables from the Simons Simplex Collection (SSC). As a preliminary study, we conducted a conventional genome-wide association study (GWAS) with a data set of 597 ASD cases and 370 controls. In the second step, we divided cases based on the clustering results and conducted GWAS in each of the subgroups vs controls (cluster-based GWAS). We also conducted cluster-based GWAS on another SSC data set of 712 probands and 354 controls in the replication stage. In the preliminary study, which was conducted in conventional GWAS design, we observed no significant associations. In the second step of cluster-based GWASs, we identified 65 chromosomal loci, which included 30 intragenic loci located in 21 genes and 35 intergenic loci that satisfied the threshold of P < 5.0 × 10-8. Some of these loci were located within or near previously reported candidate genes for ASD: CDH5, CNTN5, CNTNAP5, DNAH17, DPP10, DSCAM, FOXK1, GABBR2, GRIN2A5, ITPR1, NTM, SDK1, SNCA, and SRRM4. Of these 65 significant chromosomal loci, rs11064685 located within the SRRM4 gene had a significantly different distribution in the cases vs controls in the replication cohort. These findings suggest that clustering may successfully identify subgroups with relatively homogeneous disease etiologies. Further cluster validation and replication studies are warranted in larger cohorts. abstract_id: PUBMED:19455147 Genome-wide linkage in Utah autism pedigrees. Genetic studies of autism over the past decade suggest a complex landscape of multiple genes. In the face of this heterogeneity, studies that include large extended pedigrees may offer valuable insights, as the relatively few susceptibility genes within single large families may be more easily discerned. This genome-wide screen of 70 families includes 20 large extended pedigrees of 6-9 generations, 6 moderate-sized families of 4-5 generations and 44 smaller families of 2-3 generations. The Center for Inherited Disease Research (CIDR) provided genotyping using the Illumina Linkage Panel 12, a 6K single-nucleotide polymorphism (SNP) platform. Results from 192 subjects with an autism spectrum disorder (ASD) and 461 of their relatives revealed genome-wide significance on chromosome 15q, with three possibly distinct peaks: 15q13.1-q14 (heterogeneity LOD (HLOD)=4.09 at 29 459 872 bp); 15q14-q21.1 (HLOD=3.59 at 36 837 208 bp); and 15q21.1-q22.2 (HLOD=5.31 at 55 629 733 bp). Two of these peaks replicate earlier findings. There were additional suggestive results on chromosomes 2p25.3-p24.1 (HLOD=1.87), 7q31.31-q32.3 (HLOD=1.97) and 13q12.11-q12.3 (HLOD=1.93). Affected subjects in families supporting the linkage peaks found in this study did not reveal strong evidence for distinct phenotypic subgroups. abstract_id: PUBMED:32624584 Quantitative genome-wide association study of six phenotypic subdomains identifies novel genome-wide significant variants in autism spectrum disorder. Autism spectrum disorders (ASD) are highly heritable and are characterized by deficits in social communication and restricted and repetitive behaviors. Twin studies on phenotypic subdomains suggest a differing underlying genetic etiology. 
Studying genetic variation explaining phenotypic variance will help to identify specific underlying pathomechanisms. We investigated the effect of common variation on ASD subdomains in two cohorts including >2500 individuals. Based on the Autism Diagnostic Interview-Revised (ADI-R), we identified and confirmed six subdomains with a SNP-based genetic heritability h2SNP = 0.2-0.4. The subdomains nonverbal communication (NVC), social interaction (SI), and peer interaction (PI) shared genetic risk factors, while the subdomains of repetitive sensory-motor behavior (RB) and restricted interests (RI) were genetically independent of each other. The polygenic risk score (PRS) for ASD as categorical diagnosis explained 2.3-3.3% of the variance of SI, joint attention (JA), and PI, 4.5% for RI, 1.2% of RB, but only 0.7% of NVC. We report eight genome-wide significant hits-partially replicating previous findings-and 292 known and novel candidate genes. The underlying biological mechanisms were related to neuronal transmission and development. At the SNP and gene level, all subdomains showed overlap, with the exception of RB. However, no overlap was observed at the functional level. In summary, the ADI-R algorithm-derived subdomains related to social communication show a shared genetic etiology in contrast to restricted and repetitive behaviors. The ASD-specific PRS overlapped only partially, suggesting an additional role of specific common variation in shaping the phenotypic expression of ASD subdomains. abstract_id: PUBMED:26314684 Genome-wide Association Study of Autism Spectrum Disorder in the East Asian Populations. Autism spectrum disorder is a heterogeneous neurodevelopmental disorder with strong genetic basis. To identify common genetic variations conferring the risk of ASD, we performed a two-stage genome-wide association study using ASD family and healthy control samples obtained from East Asian populations. A total of 166 ASD families (n = 500) and 642 healthy controls from the Japanese population were used as the discovery cohort. Approximately 900,000 single nucleotide polymorphisms (SNPs) were genotyped using Affymetrix Genome-Wide Human SNP array 6.0 chips. In the replication stage, 205 Japanese ASD cases and 184 healthy controls, as well as 418 Chinese Han trios (n = 1,254), were genotyped by TaqMan platform. Case-control analysis, family based association test, and transmission/disequilibrium test (TDT) were then conducted to test the association. In the discovery stage, significant associations were suggested for 14 loci, including 5 known ASD candidate genes: GPC6, JARID2, YTHDC2, CNTN4, and CSMD1. In addition, significant associations were identified for several novel genes with intriguing functions, such as JPH3, PTPRD, CUX1, and RIT2. After a meta-analysis combining the Japanese replication samples, the strongest signal was found at rs16976358 (P = 6.04 × 10(-7)), which is located near the RIT2 gene. In summary, our results provide independent support to known ASD candidate genes and highlight a number of novel genes warranted to be further investigated in a larger sample set in an effort to improve our understanding of the genetic basis of ASD. abstract_id: PUBMED:35225680 Identification of Chromosomal Regions Linked to Autism-Spectrum Disorders: A Meta-Analysis of Genome-Wide Linkage Scans. Background: Autism spectrum disorder (ASD) is a clinically and genetically heterogeneous group of pervasive neurodevelopmental disorders with a strong hereditary component. 
Although, genome-wide linkage scans (GWLS) and association studies (GWAS) have previously identified hundreds of ASD risk gene loci, the results remain inconclusive. Method: We performed a heterogeneity-based genome search meta-analysis (HEGESMA) of 15 genome scans of autism and ASD. Results: For strictly defined autism, data were analyzed across six separate genome scans. Region 7q22-q34 reached statistical significance in both weighted and unweighted analyses, with evidence of significantly low between-scan heterogeneity. For ASDs (data from 12 separate scans), chromosomal regions 5p15.33-5p15.1 and 15q22.32-15q26.1 reached significance in both weighted and unweighted analyses but did not reach significance for either low or high heterogeneity. Region 1q23.2-1q31.1 was significant in unweighted analyses with low between-scan heterogeneity. Finally, region 8p21.1-8q13.2 reached significant linkage peak in all our meta-analyses. When we combined all available genome scans (15), the same results were produced. Conclusions: This meta-analysis suggests that these regions should be further investigated for autism susceptibility genes, with the caveat that autism spectrum disorders have different linkage signals across genome scans, possibly because of the high genetic heterogeneity of the disease. abstract_id: PUBMED:35994476 A genome-wide association study of total child psychiatric problems scores. Substantial genetic correlations have been reported across psychiatric disorders and numerous cross-disorder genetic variants have been detected. To identify the genetic variants underlying general psychopathology in childhood, we performed a genome-wide association study using a total psychiatric problem score. We analyzed 6,844,199 common SNPs in 38,418 school-aged children from 20 population-based cohorts participating in the EAGLE consortium. The SNP heritability of total psychiatric problems was 5.4% (SE = 0.01) and two loci reached genome-wide significance: rs10767094 and rs202005905. We also observed an association of SBF2, a gene associated with neuroticism in previous GWAS, with total psychiatric problems. The genetic effects underlying the total score were shared with common psychiatric disorders only (attention-deficit/hyperactivity disorder, anxiety, depression, insomnia) (rG > 0.49), but not with autism or the less common adult disorders (schizophrenia, bipolar disorder, or eating disorders) (rG < 0.01). Importantly, the total psychiatric problem score also showed at least a moderate genetic correlation with intelligence, educational attainment, wellbeing, smoking, and body fat (rG > 0.29). The results suggest that many common genetic variants are associated with childhood psychiatric symptoms and related phenotypes in general instead of with specific symptoms. Further research is needed to establish causality and pleiotropic mechanisms between related traits. abstract_id: PUBMED:31200929 Evidence of Assortative Mating in Autism Spectrum Disorder. Background: Assortative mating is a nonrandom mating system in which individuals with similar genotypes and/or phenotypes mate with one another more frequently than would be expected in a random mating system. Assortative mating has been hypothesized to play a role in autism spectrum disorder (ASD) in an attempt to explain some of the increase in the prevalence of ASD that has recently been observed. ASD is considered to be a heritable neurodevelopmental disorder, but there is limited understanding of its causes. 
Assortative mating can be explored through both phenotypic and genotypic data, but up until now it has never been investigated through genotypic measures in ASD. Methods: We investigated genotypically similar mating pairs using genome-wide single nucleotide polymorphism data on trio families (Autism Genome Project data [1590 parents] and Simons Simplex Collection data [1962 parents]). To determine whether or not an excess in genetic similarity was present, we employed kinship coefficients and examined spousal correlation between the principal components in both the Autism Genome Project and Simons Simplex Collection datasets. We also examined assortative mating using phenotype data on the parents to detect any correlation between ASD traits. Results: We found significant evidence of genetic similarity between the parents of ASD offspring using both methods in the Autism Genome Project dataset. In the Simons Simplex Collection, there was also significant evidence of genetic similarity between the parents when explored through spousal correlation. Conclusions: This study gives further support to the hypothesis that positive assortative mating plays a role in ASD. abstract_id: PUBMED:26176695 A Pooled Genome-Wide Association Study of Asperger Syndrome. Asperger Syndrome (AS) is a neurodevelopmental condition characterized by impairments in social interaction and communication, alongside the presence of unusually repetitive, restricted interests and stereotyped behaviour. Individuals with AS have no delay in cognitive and language development. It is a subset of Autism Spectrum Conditions (ASC), which are highly heritable and have a population prevalence of approximately 1%. Few studies have investigated the genetic basis of AS. To address this gap in the literature, we performed a genome-wide pooled DNA association study to identify candidate loci in 612 individuals (294 cases and 318 controls) of Caucasian ancestry, using the Affymetrix GeneChip Human Mapping version 6.0 array. We identified 11 SNPs that had a p-value below 1 x 10^-5. These SNPs were independently genotyped in the same sample. Three of the SNPs (rs1268055, rs7785891 and rs2782448) were nominally significant, though none remained significant after Bonferroni correction. Two of our top three SNPs (rs7785891 and rs2782448) lie in loci previously implicated in ASC. However, investigation of the three SNPs in the ASC genome-wide association dataset from the Psychiatric Genomics Consortium indicated that these three SNPs were not significantly associated with ASC. The effect sizes of the variants were modest, indicating that our study was not sufficiently powered to identify causal variants with precision. Answer: The genome-wide association study (GWAS) using the Simons Simplex Collection (SSC) aimed to determine if reducing phenotypic heterogeneity in autism spectrum disorder (ASD) would increase genetic homogeneity and aid in the discovery of common genetic variants conferring liability to ASD (PUBMED:25534755). The study analyzed genome-wide genotypic data from 2576 families, conducted a family-based association study, estimated heritability, and evaluated allele scores for each phenotypic subgroup defined on the basis of diagnosis, IQ, and symptom profiles. The results showed that subphenotyping did not substantially increase power, and allele scores predicted case status in subsets of the sample equally well.
Heritability estimates were very similar for all subgroups, suggesting that reducing phenotypic heterogeneity had at most a modest impact on genetic homogeneity in the SSC sample (PUBMED:25534755). This indicates that analysis of subphenotypes may not be a productive path forward for discovering genetic risk variants in ASD, at least within the context of the SSC sample.
Instruction: Does lumbar puncture at night prevent post-dural puncture headache? Abstracts: abstract_id: PUBMED:24825589 Does lumbar puncture at night prevent post-dural puncture headache? Objectives: The aim of this study was to identify the patient-related risk factors for post-dural puncture headache with the same standardized procedures. Materials And Methods: The inclusion criteria were patients (i) who underwent lumbar puncture for diagnostic purposes, (ii) with ≥10 years of age, and (iii) with no structural lesions that could cause headache on brain-computed tomography or magnetic resonance images. The primary endpoint for this study was post-dural-puncture headache as a dependent variable. The differences were analyzed with demographic and cerebrospinal fluid profiles as independent variables. Results: Four hundred and thirteen patients met the inclusion criteria for this study, and 36 patients developed post-dural puncture headache. Patients with post-dural puncture headache were younger, had lower body mass index, and had less diabetes and hypertension. In the cerebrospinal fluid profile, the counts of white blood cells and protein, and cerebrospinal fluid pressure were lower in patients with post-dural puncture headache than in those without post-dural puncture headache, but the glucose ratio was higher. Interestingly, patients who underwent puncture at daytime developed more post-dural puncture headache than those who underwent puncture at nighttime. After adjusting for the clinical variables, multiple logistic regression analysis showed that younger age, lower cerebrospinal fluid pressure, and puncture at daytime were independently significant variables for predicting post-dural puncture headache. Conclusions: We confirmed known risk factors for post-dural puncture headache, such as young age, and newly found that patients who underwent puncture at daytime developed more post-dural puncture headache than those who underwent puncture at nighttime. abstract_id: PUBMED:29911422 Therapeutic lumbar puncture for headache in idiopathic intracranial hypertension: Minimal gain, is it worth the pain? Background: Headache is disabling and prevalent in idiopathic intracranial hypertension. Therapeutic lumbar punctures may be considered to manage headache. This study evaluated the acute effect of lumbar punctures on headache severity. Additionally, the effect of lumbar puncture pressure on post-lumbar puncture headache was evaluated. Methods: Active idiopathic intracranial hypertension patients were prospectively recruited to a cohort study; lumbar puncture pressure and papilloedema grade were noted. Headache severity was recorded using a numeric rating scale (NRS) 0-10, pre-lumbar puncture and following lumbar puncture at 1, 4 and 6 hours and daily for 7 days. Results: Fifty-two patients were recruited (mean lumbar puncture opening pressure 32 (28-37) cmCSF). At any point in the week post-lumbar puncture, headache severity improved in 71% (but with a small reduction of -1.1 ± 2.6 numeric rating scale) and exacerbated in 64%, with 30% experiencing a severe exacerbation ≥ 4 numeric rating scale. Therapeutic lumbar punctures are typically considered in idiopathic intracranial hypertension patients with severe headaches (numeric rating scale ≥ 7). In this cohort, the likelihood of improvement was 92% (a modest reduction of headache pain by -3.0 ± 2.8 numeric rating scale, p = 0.012, day 7), while 33% deteriorated.
Idiopathic intracranial hypertension patients with mild (numeric rating scale 1-3) or no headache (on the day of lumbar puncture, prior to lumbar puncture) had a high risk of post- lumbar puncture headache exacerbation (81% and 67% respectively). Importantly, there was no relationship between lumbar puncture opening pressure and headache response after lumbar puncture. Conclusion: Following lumbar puncture, the majority of idiopathic intracranial hypertension patients experience some improvement, but the benefit is small and post-lumbar puncture headache exacerbation is common, and in some prolonged and severe. Lumbar puncture pressure does not influence the post-lumbar puncture headache. abstract_id: PUBMED:23298181 Unique case of "post-lumbar puncture headache". Lumbar puncture (LP) is associated with complications that include post-LP orthostatic headache, local bleeding, and subdural hematoma. We report a unique case of a spontaneous frontal epidural hematoma following a therapeutic lumbar puncture in a patient with a history of idiopathic intracranial hypertension. This case highlights the importance of symptomatology in patients following LPs by revealing a rare intracranial presentation that would be devastating if not discovered promptly and appropriately managed. abstract_id: PUBMED:38116072 A controlled lumbar puncture procedure improves the safety of lumbar puncture. Background: In order to improve the safety of lumbar puncture (LP), we designed a new type of LP needle, that is, an integrated and controlled LP needle, which can actively and accurately control the flow rate and retention of cerebrospinal fluid (CSF) during puncture, so as to achieve a controlled LP procedure. Objective: To evaluate whether a controlled LP procedure can improve the comfort of LP and reduce the risk of complications associated with LP. Methods: Patients requiring LP (n = 63) were pierced with an integrated and controlled LP needle or a conventional LP needle. The differences in vital signs, symptom score, comfort, operation time, CSF loss, CSF pressure fluctuation and back pain before and after puncture were analyzed. Results: An integrated and controlled LP needle (n = 35) significantly improved patients' headache symptoms before and after puncture. In addition, a controlled LP procedure significantly reduced the amount of unnecessary CSF loss (p < 0.001), shortened the time of puncture (p < 0.001), improved patient comfort (p = 0.001) and reduced the incidence of back pain (p < 0.001). For patients with high intracranial pressure (HICP), the fluctuations in pressure of the CSF were also reduced while obtaining similar amounts of CSF (p = 0.009). Conclusion: A controlled LP procedure avoids unnecessary CSF loss, prevents rapid fluctuations in CSF pressure in patients with HICP, and reduces the risks associated with LP. abstract_id: PUBMED:22091245 Evaluation of pre lumbar puncture position on post lumbar puncture headache. Background: The most common complication of lumbar puncture (LP) occurring in over thirty percent of patients is headache. The position after lumbar puncture, needle type and size, and volume of the extracted cerebrospinal fluid (CSF) have been evaluated as contributory factors in occurrence of post lumbar puncture headache (PLPH), but the position before lumbar puncture has not been evaluated. 
Methods: The occurrence of post lumbar puncture headache was evaluated in 125 patients undergoing lumbar puncture, divided randomly into sitting and lateral decubitus groups in the following five days. Chi-square test was used for statistical analysis. Results: Thirty eight patients (30.4%) reported headache after lumbar puncture in the two groups, and post lumbar puncture headache was significantly lower in the lateral decubitus position (p = 0.001). There was no significant difference between genders in the post lumbar puncture headache occurrence (p = 0.767). Conclusions: Lumbar puncture in sitting position could produce more post lumbar puncture headache in comparison with lateral decubitus position. abstract_id: PUBMED:24928757 Evidence-Based Practice Recommendations to Prevent/Manage Post-Lumbar Puncture Headaches in Pediatric Patients Receiving Intrathecal Chemotherapy. Post-lumbar puncture headaches (PLPHs) are a known complication of lumbar puncture procedures. Children undergoing treatment for cancer often undergo multiple lumbar punctures, placing them at increased risk for PLPHs. There are currently no guidelines for the prevention or management of PLPHs in children. A team was therefore assembled to conduct a systematic review of the evidence in relationship to PLPHs in the pediatric population. Clinical questions were developed and used to guide the literature review. Twenty-four articles were deemed appropriate for use and were evaluated using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria. Based on the review of evidence, strong recommendations are made for the use of smaller needle sizes and for the use of pencil point needles during lumbar puncture procedures. Weak recommendations are made for needle orientation and positioning following the procedure as well as for interventions used to treat PLPHs once they occur. There is a need for additional, pediatric-specific studies to further examine the issue of PLPH prevention and treatment. abstract_id: PUBMED:31627321 Post-Lumbar Puncture Headache-Does Hydration before Puncture Prevent Headache and Affect Cerebral Blood Flow? Headache is a common complication after diagnostic lumbar puncture (DLP). We aimed to check whether hydration before puncture influences the incidence of post-lumbar puncture headache (PLPH) and affects cerebral blood flow. Ninety-nine patients enrolled for puncture were assigned to a group with (n = 40) or without hydration (n = 59). In the hydration group, 1000 mL 0.9% NaCl was infused and a minimum of 1500 mL oral fluids was recommended within the 24 h before puncture. A Transcranial Doppler (TCD) was performed before and after DLP. Mean velocity (Vm) and pulsatility index (PI) were measured in the middle cerebral arteries (MCAs). PLPH occurred in 28 patients (28.2%): six (15.4%) from the hydrated and 22 (37.3%) from the non-hydrated group (p < 0.023). Patients with PLPH were younger (p < 0.014) and with headaches in their histories (p < 0.036) compared with the non-headache group. Vm values in both MCAs after puncture were significantly lower than before puncture in all patients. In the PLPH group, Vm in MCAs before puncture were significantly higher and the PI was lower than in the non-headache group. Our findings suggest that hydration of patients within 24 h before puncture prevented PLPH. Twenty-four hours after puncture, significant decreases in Vm were observed in the MCAs of all patients. 
Low baseline values of PI and high Vm predisposed patients to PLPH. abstract_id: PUBMED:32095469 Post lumbar puncture headache: Case report of a serious adverse event in first-in-human study. A lumbar puncture can be used to measure the concentrations of drugs and/or pharmacodynamic biomarkers during clinical trials of central nervous system drugs. We report a case of a post lumbar puncture headache (PLPH) in a first-in-human study, which was reported as a serious adverse event. A 20-year-old man received 200 mg of the investigational product (IP) for 7 days and underwent a lumbar puncture for cerebrospinal fluid sampling before IP administration (Day 1, pre-dose) and after 7 days and multiple IP administrations (Day 7, 1 hour post-dose). After discharge on Day 8, the subject complained of headache, nausea, vomiting, neck stiffness, and numbness of the extremities. The symptoms occurred when he got up and disappeared after he remained in the supine position for several minutes. Five days later, he visited the neurology clinic of the main hospital. The neurologist recommended hospitalization for further evaluation and symptom management, and the subject was then admitted to the hospital. There were no abnormal findings in vital signs, laboratory results, or brain-computed tomography. His symptoms disappeared during the hospitalization period. It was important to distinguish whether the headache was IP-related or lumbar puncture-related. Therefore, knowledge of clinical characteristics and differential diagnosis of PLPH is paramount. Furthermore, if severe PLPH occurs, a consultation with a neurologist and imaging studies should be considered for a differential diagnosis of PLPH. abstract_id: PUBMED:32934040 A nudge towards better lumbar puncture practice. Background: Despite a body of evidence demonstrating reduced incidence of post-lumbar puncture headache associated with pencil-point (vs bevelled-edge) needles, their use remains variable in the UK. Methods: A multimodal longitudinal intervention was performed over a 12-month period at a tertiary neurology referral centre. In addition to simulation training using pencil-point needles and an electronic documentation pro forma, a change in the default needles presented in clinical environments was performed. Results: Prior to the intervention, pencil-point needle usage was minimal. Documentation significantly improved throughout the intervention period. Simulation training interventions only resulted in transient, moderate improvements in pencil-point needle usage. However, changing the default produced a marked increase in use that was sustained. No significant changes in operator success rate were found. Conclusions: In the context of wider literature on the power of default options in driving behavioural choices, changing defaults may be an effective, inexpensive and acceptable intervention to improve lumbar puncture practice. abstract_id: PUBMED:29497669 Failure of lumbar puncture in a patient with spinal epidural lipomatosis: a case report. We report a case of difficult lumbar puncture due to the inability to obtain adequate cerebrospinal fluid (CSF) in a patient later diagnosed with spinal epidural lipomatosis (SEL). A 76-year-old man with a body mass index (BMI) of 24.1 kg/m2 was scheduled for transurethral resection of a bladder tumor for superficial bladder cancer under spinal anesthesia. The patient had a 3-year history of inhaled steroid use for the management of chronic obstructive pulmonary disease. 
After placing the patient in the right lateral position, a lumbar puncture was performed via the median approach. However, CSF could not be tapped adequately despite repeated attempts at lumbar puncture, so general anesthetic was administered instead. Subsequently, both anesthesia and surgery proceeded uneventfully. On the first postoperative day, the patient developed mild postdural puncture headache (PDPH), which was treated conservatively. No postoperative neurological complications related to spinal anesthesia were observed. Approximately 2 months after discharge, the patient reported progressive lower back pain and was diagnosed with SEL by magnetic resonance imaging (MRI). A lumbar laminectomy and removal of excessive adipose tissue was performed. After surgery, the patient's symptoms resolved. The pathogenesis of SEL involves excess fat tissue deposition in the spinal canal, which can lead to obliteration of the spinal subarachnoid space. Therefore, in this patient, the SEL was thought to have caused the inability to obtain adequate CSF during lumbar puncture, and was associated with difficult spinal anesthesia. Answer: Yes, according to the study with PUBMED:24825589, patients who underwent lumbar puncture at daytime developed more post-dural puncture headache than those who had the procedure performed at nighttime. After adjusting for clinical variables, multiple logistic regression analysis indicated that puncture at daytime was an independently significant variable for predicting post-dural puncture headache, along with younger age and lower cerebrospinal fluid pressure. This suggests that performing a lumbar puncture at night may reduce the risk of developing post-dural puncture headache.
Instruction: Testicular catch-up growth after varicocelectomy: does surgical technique make a difference? Abstracts: abstract_id: PUBMED:37394305 Testicular catch-up growth in the non-operative management of the adolescent varicocele. Introduction: Adolescent varicocele is a common urologic condition with a spectrum of outcomes, leading to variations in management. Testicular hypotrophy is a common indication for surgery. Routine monitoring may be an appropriate form of management for many adolescents with testicular hypotrophy, as studies have shown that a large proportion of these patients may experience catch-up growth of the ipsilateral testis. Furthermore, there are few longitudinal studies which have correlated patient-specific factors to catch-up growth. We aimed to determine the frequency of testicular catch-up growth in adolescents with varicocele while also examining whether patient-specific factors such as BMI, BMI percentile, or height correlated with testicular catch-up growth. Methods: A retrospective chart review found adolescent patients who presented to our institution with varicocele from 1997 to 2019. Patients between the ages of 9 and 20 years with left-sided varicocele, a clinically significant testicular size discrepancy, and at least two scrotal ultrasounds at least one year apart were included in analysis. Testicular size discrepancy of greater than 15% on scrotal ultrasound was considered clinically significant. Testicular size was estimated in volume (mL) via the Lambert formula. Statistical relationships between testicular volume differential and height, body mass index (BMI), and age were described with Spearman correlation coefficients (ρ). Results: 40 patients had a testicular volume differential of greater than 15% at some point during their clinical course and were managed non-operatively with observation and serial testicular ultrasounds. On follow-up ultrasound, 32/40 (80%) had a testicular volume differential of less than 15%, with a mean age at catch-up growth of 15 years (SD 1.6, range 11-18 years). There were no significant correlations between baseline testicular volume differential and baseline BMI (ρ = 0.00, 95% CI [-0.32, 0.32]), baseline BMI percentile (ρ = 0.03, 95% CI [-0.30, 0.34]), or change in height over time (ρ = 0.05, 95% CI [-0.36, 0.44]). Discussion: The majority of adolescents with varicocele and testicular hypotrophy exhibited catch-up growth with observation, suggesting that surveillance is an appropriate form of management in many adolescents. These findings are consistent with previous studies and further indicate the importance of observation for the adolescent varicocele. Further research is warranted to determine patient-specific factors that correlate with testicular volume differential and catch-up growth in the adolescent varicocele. abstract_id: PUBMED:28717535 Evaluation of testicular catch-up growth in adolescent microsurgical varicocelectomy. Objective: The incidence of varicocele is approximately 15% in adolescent men. Early treatment of varicocele is aimed at resolving testicular hypotrophy and ensuring catch-up growth. The aim of this study was to evaluate ipsilateral catch-up growth rates relative to contralateral testicular growth in adolescents with varicocele undergoing microsurgical subinguinal varicocelectomy. Material And Methods: Fifty adolescents with unilateral grade 2-3 varicoceles were included in the study.
All patients underwent microsurgical subinguinal left varicocelectomies performed by the same experienced surgeon. All patients were evaluated clinically and using orchidometric measurements to define the grade of varicocele and testicular volume at presentation and follow-up. The number of internal and external spermatic veins, testicular arteries and lymphatic vessels preserved during the subinguinal microsurgical varicocelectomy was recorded. The mean follow-up period was 26 months (range 6-48 months). Results: At presentation, mean patient age was 12.9±2.1 years. Mean testicular preoperative volumes were 7.1±4.3 mL for the right and 5.4±3.4 mL for the left testis. There were significant differences between mean volumes of the right and left testis (p=0.002). At the final postoperative follow-up visit, mean testicular volumes were 10.8±5.1 mL (range 3-25) for the right and 9.9±4.3 mL (range 2-20) for the left, and the difference between the right and left testicular volumes was insignificant (p=0.47). In our series, catch-up growth was observed in 70% (35/50) of our patients. Conclusion: Adolescent varicocelectomy is associated with a higher percentage of patients showing testicular catch-up growth. In our study, similar to the available literature, the catch-up growth rate was 70%, and adolescent varicocelectomy was observed to have positive effects on testicular growth. abstract_id: PUBMED:18817958 Testicular catch-up growth after varicocelectomy: does surgical technique make a difference? Objectives: Catch-up growth of the affected testis in adolescents after varicocele repair has been well documented. Many investigators have found evidence that testicular hypotrophy related to varicocele can be reversed by early intervention. The aim of this study was to analyze the testicular catch-up growth rate in pediatric patients, correlating it with patient age at surgery, varicocele size, procedures used, and semen quality. Methods: Between March 1990 and September 2006, a total of 465 varicocelectomies were performed at our department. We evaluated the mean testicular volume before and after varicocelectomy in patients aged 9-14 years. Two procedures were used: laparoscopic artery-preserving varicocelectomy (group 1) and open inguinal microscopic artery-preserving varicocelectomy with a venous-venous bypass (group 2). The testicular volume was measured before and after surgery using ultrasonography, and the mean testicular catch-up growth was recorded. Results: Although the overall catch-up growth rate for both groups was 80%, after 18 months, only 45% of patients in group 1 and 34% of patients in group 2 had equal bilateral testicular volume. None of these procedures showed a statistically significant correlation with age at surgery, varicocele size, or catch-up rate. The semen analysis results did not show statistically significant differences between the 2 groups. Conclusions: Although 80% of patients demonstrated testicular catch-up, with a different distribution depending on the procedure type used but without statistically significant differences, only 32% of patients had complete and real testicular volume catch-up. abstract_id: PUBMED:30604693 Effects of percutaneous varicocele repair on testicular volume: results from a 12-month follow-up. Varicocele is a common finding in men. Varicocele correction has been advocated for young patients with testicular hypotrophy, but there is a lack of morphofunctional follow-up data.
We assessed whether percutaneous treatment of left varicocele is associated with testicular "catch-up growth" in the following 12 months by retrospectively reviewing data from an electronic database of 10 656 patients followed up in our clinic between 2006 and 2016. We selected all young adults (<35 years) with left varicocele who underwent percutaneous treatment, had a minimum of 12 months' ultrasound imaging follow-up, and had no other conditions affecting testicular volume. One hundred and fourteen men (mean±standard deviation [s.d.] of age: 22.8 ± 5.4 years) met the inclusion and exclusion criteria. Left testicular hypotrophy (LTH), defined as a ≥20% difference between left and right testicular volume at baseline, was observed in 26 (22.8%) men. Participants with LTH (mean±s.d.: 14.5 ± 2.7 ml) had lower baseline testicular volume compared to those without LTH (mean±s.d.: 15.7 ± 3.8 ml; P = 0.032). Repeated measures mixed models showed a significant interaction between LTH and time posttreatment when correcting for baseline left testicular volume (β = 0.114, 95% confidence interval [CI]: 0.018-0.210, P = 0.020), resulting in a catch-up growth of up to 1.37 ml per year (95% CI: 0.221-2.516). Age at intervention was also associated with reduced testicular volume (-0.072 ml per year, 95% CI: -0.135--0.009; P = 0.024). Percutaneous treatment of left varicocele in young adults with LTH can result in catch-up growth over 1 year of follow-up. The reproductive and psychological implications of these findings need to be confirmed in longer and larger prospective studies. abstract_id: PUBMED:21346830 The validity of testicular catch-up growth and serum FSH levels in the long-term postoperative assessment of laparoscopic varicocele correction in adolescents. Background: Postoperative assessment after varicocele surgery in adolescence is commonly centred around catch-up growth of the testis. There is paucity of evidence on the correlation of catch-up growth with underlying testicular function in these patients. Aims: To assess the reliability of catch-up growth of the testis as an indicator of normalization of testicular function and the utility of serum FSH levels in the long-term postoperative assessment of varicocele surgery in adolescence. Materials And Methods: Prospective cohort study of young adults (18-27 years) who had laparoscopic varicocele correction in adolescence (11-16 years). Evaluation included serum FSH levels, scrotal ultrasonography and semen analysis. Analysis: Anatomical and functional parameters of participants with equal and normal testicular size were compared to those of participants with persistent testicular hypotrophy or hypertrophy. Sensitivity and positive predictive value of postoperative serum FSH levels were estimated and elevated levels of serum FSH were checked for association with suboptimal outcomes of varicocele correction. Results: The serum FSH levels of participants with unequal testicular sizes (n=6, median 6.65 IU/l), which included testicular hypertrophy (n=3, median 7.2 IU/l) and persistent testicular hypotrophy (n=3, median 6.1 IU/l), were significantly higher than the group with equal testicular sizes (n=8, median 3.5 IU/l; P=0.014, Mann-Whitney U test). Postoperative elevated serum FSH levels were significantly associated with suboptimal outcomes of varicocele surgery (P=0.015, Fisher's exact test). The test also had a high positive predictive value. 
Conclusions: Testicular catch-up growth may not be a reliable postoperative assessment criterion by itself. Serum FSH levels may be of value in detecting suboptimal outcomes of varicocele surgery in adolescents. abstract_id: PUBMED:31293512 Postnatal Catch-Up Growth After Suspected Fetal Growth Restriction at Term. Objective: The aim of this study was to study growth patterns of children born after suspected fetal growth restriction (FGR) at term and to compare the effect of induction of labor (IoL) and expectant management (EM), also in relation to neurodevelopmental and behavioral outcome at age 2. Methods: We performed a 2 years' follow-up of growth of children included in the Disproportionate Intrauterine Growth Restriction Trial at Term (DIGITAT) study, a Randomized Controlled Trial (RCT) comparing IoL with EM in pregnancies with suspected FGR at term. We collected data on child growth until the age of 2 years. Standard deviation scores (SDSs) for height and weight were calculated at different ages. We assessed the effects of IoL compared with EM and the effects of a birth weight below or above the 3rd or 10th centile on catch-up growth. Target height SDSs were calculated using the height of both parents. Results: We found a significant increase in SDS in the first 2 years. Children born after EM showed more catch-up growth in the first month [height: mean difference -0.7 (95% CI: 0.2; 1.3)] and weight [mean difference -0.5 (95% CI: 0.3; 0.7)]. Children born with a birth weight below the 3rd and 10th centiles showed more catch-up growth after 1 year [mean difference -0.8 SDS (95% CI: -1.1; -0.5)] and after 2 years [mean difference -0.7 SDS (95% CI: -1.2; -0.2)] as compared to children with a birth weight above the 3rd and 10th centiles. SDS at birth had the strongest effect on adverse neurodevelopmental outcome at 2 years of age. Conclusion: After FGR at term, postnatal catch-up growth is generally present and associated with the degree of FGR. Obstetric management in FGR influences postnatal growth. Longer-term follow-up is therefore needed and should be directed at growth and physical health. Clinical Trial Registration: www.ClinicalTrials.gov, identifier SRCTN10363217. abstract_id: PUBMED:36907727 Evaluation of catch-up growth in severe pediatric Hashimoto's hypothyroidism. Background: We aimed to evaluate catch-up growth in children with severe Hashimoto's hypothyroidism (HH) after thyroid hormone replacement therapy (HRT). Methods: A multicenter retrospective study was conducted including children referred for growth slowdown that led to the diagnosis of HH between 1998 and 2017. Results: A total of 29 patients were included, with a median age of 9.7 years (13-172 months). Median height at diagnosis was -2.7 [-4.6; -0.1] standard deviation score (SDS), with a height loss of 2.5 [0.7; 5.4] SDS compared to height before growth deflection (p<0.0001). At diagnosis, the median TSH level was 819.5 mIU/L [100; 1844], the median FT4 level was 0 pmol/L [undetectable; 5.4], and the median anti-thyroperoxidase antibody level was 1601 UI/L [47; 25,500]. In the 20 patients treated only with HRT, there were significant differences between height at diagnosis and height at 1 year (n = 19, p<0.0001), 2 years (n = 13, p = 0.0005), 3 years (n = 9, p = 0.0039), 4 years (n = 10, p = 0.0078), and 5 years (n = 10, p = 0.0018) of treatment but not in the case of final height (n = 6, p = 0.0625). 
Median final height was -1.4 [-2.7; 1.5] SDS (n = 6), with a significant difference between height loss at diagnosis and total catch-up growth (p = 0.003). The other nine patients were also given growth hormone (GH). They were smaller at diagnosis (p = 0.01); however, there was no difference in final height between those two groups (p = 0.68). Conclusion: Severe HH can lead to a major height deficit, and catch-up growth seems to be insufficient after treatment with HRT alone. In the most severe cases, administration of GH may enhance this catch-up. abstract_id: PUBMED:28481072 Evaluation of testicular growth after varicocele treatment in early childhood and adolescence based on the technique used (Palomo, Ivanissevich and embolization). Objectives: To analyze, depending on the technique employed, recurrence, symptomatic improvement and testicular growth following treatment of testicular varicocele. Material And Methods: Descriptive retrospective study of 69 pediatric and adolescent males diagnosed with varicocele treated in our center by open surgery according to the Ivanissevich technique (IT) or the Palomo technique (PT), or by percutaneous embolization (PE), between 2000 and 2014. Variables analyzed were age, symptoms, differential testicular volume (RV), technique employed, recurrence, symptomatic improvement and RV after treatment. Association between qualitative variables was evaluated (chi-square test or Fisher's exact test). Results: 69 patients with a median age of 14 years (7-19) were studied. PE was performed in 37 patients (53.6%), PT in 23 (33.3%) and IT in 9 (13%). Recurrence occurred in 16 patients (23.2%); 80% of them had been treated with PE. Eleven patients had pain (15.9%); there was improvement in 100% of patients treated with PE, but none of those treated by PT or IT improved. At diagnosis, 37 patients (53.6%) had decreased testicular volume (left testicular hypotrophy); in 28 cases the RV was >20%. After treatment, the RV was normalized in 11 cases (39.2%). Conclusions: The choice of therapeutic technique in pediatric varicocele should be based on patient characteristics, symptoms, the center's experience with embolization and previous recurrence. Regardless of the chosen technique, 39.2% of cases of testicular hypotrophy with RV >20% present at diagnosis normalized after treatment. abstract_id: PUBMED:31363301 Is there Catch-Up Growth? Evidence from Three Continents. The ability to correct deficiencies in early childhood malnutrition, what is known as catch-up growth, has widespread consequences for economic and social development. While clinical evidence of catch-up has been observed, less clear is the ability to correct for chronic malnutrition found in impoverished environments in the absence of extensive and focused interventions. This paper investigates whether nutritional status at early age affects nutritional status a few years later among children using panel data from China, South Africa and Nicaragua. The key research question is the extent to which state dependence in linear growth exists among young children, and what family and community level factors mediate state dependency. The answer to this question is crucial for public policy due to the long term economic consequences of poor childhood nutrition. Results show strong but not perfect persistence in nutritional status across all countries, indicating that catch-up growth is possible though unobserved household behaviors tend to worsen the possibility of catch-up growth.
Public policy that can influence these behaviors, especially when children are under 24 months old, can significantly alter nutrition outcomes in South Africa and Nicaragua. abstract_id: PUBMED:23523290 Elevation of testicular temperature predicts testicular catch-up growth and hypotrophy after varicocelectomy and observation in adolescent varicocele. Objective: To predict testicular catch-up growth after varicocelectomy, we investigated the usefulness of testicular temperature (TT) measurements for adolescent varicocele. Materials And Methods: The present study included 110 patients with left varicocele (age 9-17 years, mean ± standard deviation 12.5 ± 2.4) who had undergone varicocelectomy (87 patients) or observation (23 patients). In addition to the physical examination and duplex/color Doppler ultrasonography, the pre- and postoperative TT was measured using a CoreTemp CTM204 device (Terumo, Tokyo, Japan). Testicular catch-up growth was evaluated 2 years after the operation or initial visit. Univariate and multivariate statistical analyses were performed to determine whether patient age, varicocele grade, testicular asymmetry, TT, and endocrinologic results could be used as determinants of catch-up growth. Results: Left testicular hypotrophy was noted in 34 patients (31%), and TT elevation was observed in 58 patients (53%). In the observation group, 9 patients (39%) showed new-onset testicular hypotrophy. Patients with TT elevation in this group had a significantly greater incidence of new-onset hypotrophy compared with those without TT elevation (P <.05). Patients with catch-up growth after varicocelectomy had a greater grade of varicocele, more preoperative testicular hypotrophy, and TT elevation (P <.05, P <.05, and P <.0001, respectively). Multivariate statistical analyses revealed that TT elevation was significantly associated with catch-up growth after varicocelectomy (P <.05). Conclusion: The measurement of scrotal temperature is a helpful approach to identify patients at a greater risk of progressive testicular hypotrophy and catch-up growth after varicocelectomy. Answer: The available literature suggests that testicular catch-up growth after varicocelectomy in adolescents with varicocele is a well-documented phenomenon, but the impact of the surgical technique on this outcome is less clear. One study found that the overall catch-up growth rate for two different surgical groups was 80%, but there was no statistically significant correlation between the surgical technique used (laparoscopic artery-preserving varicocelectomy versus open inguinal microscopic artery-preserving varicocelectomy with a venous-venous bypass) and the catch-up rate or semen analysis results (PUBMED:18817958). This suggests that while catch-up growth is common after varicocelectomy, the specific surgical technique may not significantly influence the likelihood of catch-up growth. Another study evaluating microsurgical subinguinal varicocelectomy in adolescents reported a catch-up growth rate of 70% (PUBMED:28717535). Although this study did not directly compare different surgical techniques, it supports the notion that a high percentage of patients experience testicular catch-up growth after varicocelectomy. Additionally, a study comparing different techniques (Palomo, Ivanissevich, and embolization) for treating pediatric varicocele found that regardless of the chosen technique, 39.2% of cases with testicular hypotrophy at diagnosis normalized after treatment (PUBMED:28481072).
This indicates that testicular catch-up growth can occur with various surgical approaches. In summary, while testicular catch-up growth is a common outcome following varicocelectomy for adolescent varicocele, the available evidence does not strongly suggest that the type of surgical technique significantly affects the rate or success of catch-up growth. However, further research may be needed to fully understand the potential nuances and long-term outcomes associated with different surgical techniques.
Instruction: Snodgrass hypospadias repair with vascularized dartos flap: the perfect repair for virgin cases of hypospadias? Abstracts: abstract_id: PUBMED:12352344 Snodgrass hypospadias repair with vascularized dartos flap: the perfect repair for virgin cases of hypospadias? Purpose: Since its introduction, the Snodgrass hypospadias repair has been applied to virtually all forms of hypospadias repair. However, fistula rates have still been reported to be as high as 5% from large center, multiple surgeon studies and 16% from smaller center studies. We report on the use of the Snodgrass repair in conjunction with routine use of a vascularized dartos flap and 2-layer closure of the neourethra from multiple institutions. Materials And Methods: Records of patients who underwent a primary 1-stage hypospadias repair with the Snodgrass technique in conjunction with vascularized dartos flap coverage were reviewed. Nearly identical surgical technique was used by all 6 surgeons in each case, which included a 2-layer closure of the neourethra, preservation of the well vascularized periurethral tissue and routine use of vascularized dartos flap coverage. A total of 514 cases were identified, including 414 with distal and 100 with midshaft or proximal hypospadias. Stents were used in 292 of the 514 repairs. Results: Of the 414 distal cases there were no fistulas and 1 case of meatal stenosis. Of the 100 proximal cases there were 3 fistulas and 1 case of meatal stenosis. The overall complication rate was less than 1% for all cases combined. Conclusions: This series represents the largest reported multi-institutional experience with the Snodgrass technique. When used in conjunction with vascularized dartos flap coverage, 2-layer closure of the neourethra and special attention to preservation of the periurethral vascular supply, this repair can be performed with a near 0 complication rate. We believe that this is the optimal repair for routine cases of hypospadias. abstract_id: PUBMED:29118537 Choosing an ideal vascular cover for Snodgrass repair. Aim: The aim of this study is to compare tunica vaginalis (TV), dorsal dartos, and ventral dartos flap as a second layer vascular cover during Snodgrass repair. Materials And Methods: Data of 83 patients who underwent primary hypospadias repair with Snodgrass technique (age range: 1.6-12 years) were retrospectively collected and compared. They were divided into three groups. Group A (26 patients) included cases using TV flap, Group B (36 patients) included those where dorsal dartos from prepuce was used as second cover, and Group C (21 patients) included those with ventral dartos as cover. Results: In Group A, no complications recorded. Mild scrotal edema was present in 5 patients which was conservatively managed. In Group B, there were 8 fistulas, 2 glans breakdown, and 1 meatal stenosis. In Group C, there were 3 fistulas and 1 glans breakdown. Conclusion: TV flap is better than dorsal dartos and ventral dartos as vascular cover for primary hypospadias repair with Snodgrass technique. abstract_id: PUBMED:19857999 Snodgrass hypospadias repair with onlay overlapping double-layered dorsal dartos flap without urethrocutaneous fistula: experience of 156 cases. Objective: To evaluate the neourethra covering created by a vascularized overlapping double-layered dorsal dartos flap for preventing urethrocutaneous fistula in the Snodgrass hypospadias repair (tubularized incised plate). 
Patients And Methods: Between March 2003 and January 2008, 156 boys (mean age, 4.5 years) were enrolled for hypospadias repair. Preoperative position of the urethral meatus was subcoronal in 37, at the distal shaft in 61 and mid-shaft in 58 boys. All patients underwent the Snodgrass hypospadias repair. The neourethra was then covered with an overlapping double-layered dorsal dartos flap before glans and skin closure. Results: All 156 patients underwent successful reconstruction. With a mean follow-up of 23 months (range 6-42), all boys had a satisfactory subjective cosmetic and functional result with a vertically oriented, slit-like meatus at the tip of the glans. No urethrocutaneous fistula or urethral stenosis occurred. Conclusion: As the neourethra covering is an integral part of the Snodgrass hypospadias repair, a dorsal well-vascularized double-layered dartos flap is a good choice for preventing urethrocutaneous fistula formation. abstract_id: PUBMED:35197700 Modified Snodgrass hypospadias repair using the Lembert suturing technique. Background: Surgical repair of hypospadias deformity is a challenging and complex issue. However, the technique of Snodgrass and its modifications attain promising functional and cosmetic results. Purpose: To assess the effectiveness of the Lembert suturing technique as a modification of the Snodgrass technique in lowering the urethrocutaneous fistula (UCF) rate. Patients And Methods: Fifty boys with mid-shaft and distal hypospadias were repaired using the modified Snodgrass technique utilizing the interrupted Lembert suturing technique in neourethroplasty. Patients with previous failed operations or severe chordee were excluded from the study. Results: None of the patients developed UCF apart from one who developed complete wound dehiscence following severe wound infection. Mild meatal stenosis was seen in two patients, and one patient experienced a small area of superficial skin sloughing. abstract_id: PUBMED:15925087 Snodgrass hypospadias repair with ventral based dartos flap in combination with mucosal collars. Objective: In this study, we report our results of Snodgrass hypospadias repair in conjunction with use of ventral based vascularized dartos flaps. In all repairs, mucosal collars were created and incorporated into the repair to create a more normal appearing circumcised penis, as described by Firlit. Patients And Methods: Records of 60 patients with distal or mid-penile hypospadias who underwent standard Snodgrass repair were evaluated. During the circumscribing incision, mucosal collars were preserved and used to create a normal appearing circumcision line. A ventral based vascular dartos tissue was preserved as a flap and used as a second layer to cover the entire neourethra before glans closure. Results: All repairs were completed in 1 stage. Mean follow-up was 10.5 months (3-37 months). There were 5 (8.3%) cases of fistula and 6 (10%) cases of meatal stenosis. All patients with fistula formation had meatal stenosis. All patients, except for those with fistula and/or meatal stenosis, were voiding a straight stream and had a slit-like meatus with a cosmetically normal-looking circumcised penis. Conclusion: Ventral based flaps are easier to harvest and transpose to cover the neourethra. Combining repairs with mucosal collars enhances the cosmetic results and makes the ventral flap harvesting process easier. abstract_id: PUBMED:15892822 Dorsal dartos flap for preventing fistula in the Snodgrass hypospadias repair.
Objective: To evaluate the importance of urethral covering using vascularized dorsal subcutaneous tissue for preventing fistula in the Snodgrass hypospadias repair. Patients And Methods: The study included 67 children (aged 1-11 years) who had hypospadias repaired between April 1998 and May 2003, including 51 with distal and 16 with midshaft hypospadias. In all children, a standard tubularized incised-plate urethroplasty was followed by reconstruction of new surrounding urethral tissue. A longitudinal dartos flap was harvested from excessive dorsal preputial and penile hypospadiac skin, and transposed to the ventral side by a buttonhole manoeuvre; it was sutured to the glans wings around the neomeatus, and to the corpora cavernosa over the neourethra. Thus the neourethra was completely covered with well-vascularized subcutaneous tissue. Results: At a mean (range) follow-up of 21 (6-65) months, the result was successful, with no fistula or urethral stenosis, in all 67 children. Conclusion: We suggest that urethral covering should be part of the Snodgrass procedure. A dorsal well-vascularized dartos flap, buttonholed ventrally, is a good choice for preventing fistula. Redundancy of the flap and its excellent vascularization depend on the harvesting technique. abstract_id: PUBMED:24917716 Vascularized Dartos Flap in Conjunction with Tubularized Incised Plate Urethroplasty: Single versus Double Flaps for Management of Distal Hypospadias. Objective: To compare the results of Snodgrass repair with single versus double vascularized dartos pedicle flap for neourethral coverage. Patients And Methods: Fifty-eight pediatric patients with a mean age of 41.5 ± 21 months (range 9-96 months) underwent primary hypospadias repair, 39 uncircumcised and 19 circumcised, classified into 2 groups according to coverage of the neourethra. Single-layer coverage (Group I) was performed in 28 (48.3%) cases, while double-layer coverage (Group II) was done in 30 (51.7%) cases. Results: After a mean follow-up of 8.6 ± 5.4 months (range 3-24 months), good cosmetic results with a vertical slit-like meatus at the tip of a conical-shaped glans were obtained in 53 cases (91.4%). In total, 5 cases developed postoperative complications: 4 in Group I [2 (7.1%) with urethrocutaneous fistula, 1 (3.6%) with meatal stenosis and 1 (3.6%) with skin necrosis] and 1 in Group II [glanular dehiscence (3.3%)]. No fistula was observed in Group II. Conclusion: The Snodgrass tubularized incised-plate repair in conjunction with double-layer coverage of the neourethra could be considered the procedure of choice for distal hypospadias, with a low complication rate, especially for urethrocutaneous fistula. abstract_id: PUBMED:37811069 Assessment of functional and cosmetic outcomes of Extended-Tubularized incised plate for mid-shaft and distal hypospadias as a modification of Snodgrass repair: a cross-sectional study. Background: Hypospadias is a congenital abnormality of anterior urethral and penile development where the urethral meatus is ectopically located on the ventral aspect of the penis. It is a relatively common condition affecting ~1 in 250 male births. Extended-Tubularized Incised Plate (E-TIP) is a modification of Snodgrass repair by extending the midline incision to the apical part of the glans, resulting in a more normal appearing location of the meatus and a straight urinary stream without increasing the risk of complications.
Aim Of Study: To evaluate functional and cosmetic outcomes of E-TIP repair for mid-shaft and distal hypospadias as a modification of Snodgrass repair. Patient And Method: A prospective cross-sectional study including 53 cases of mid-to-distal hypospadias repaired with the E-TIP technique between November 2019 and February 2022 in Baghdad. The authors started with the standard technique described by Snodgrass, but the midline incision of the urethral plate was extended up to the apical part of the glans tip, and tubularization started distally, creating a slit-like neomeatus. Outcome assessment depended on an objective evaluation of the following parameters: maximum flow rate, post-void residual, Hypospadias Objective Score Evaluation (HOSE) score, and direction of the urinary stream. Results: The mean age was 3.4±2.1 years, ranging from 0.6 to 9 years. The mean duration of follow-up was 14.4±6.9 months, and the location of the meatus was coronal in 16 patients (30%), subcoronal in 25 (47%), and mid-distal shaft in 12 (23%). The glans width was greater than or equal to 14 mm in 46 patients (87%) and less than 14 mm in 7 (13%). Q.max was above the fifth percentile of age-related uroflowmetry nomograms for healthy children 5-15 years old in 29 boys (93.5%) and below the fifth percentile in 2 (6.5%). Post-void residual urine was less than 10% of voided volume in 28 out of 31 boys (90%) and greater than 10% in 3 (10%). Regarding cosmetic outcomes, 50 boys (94%) had a good penile appearance and a HOSE score of 14 or greater, with a slit-like meatus located at the tip of the glans, while 3 (6%) had a score of less than 14. The overall mean HOSE score postoperatively for all patients was 15±0.9. Six patients (11%) had a deviated urinary stream and 47 (89%) had a straight stream. Conclusion: E-TIP repair is a good alternative to standard Snodgrass repair for mid-shaft to distal hypospadias, with good functional and cosmetic outcomes, particularly the neomeatus position at the tip of the glans, which is similar to the normal one, without increasing the risk of meatal stenosis. abstract_id: PUBMED:18758223 Dorsal dartos flap in Snodgrass hypospadias repair: how to use it? Purpose: To introduce a new technique for covering the neourethra with dorsal dartos subcutaneous tissue in the Snodgrass hypospadias repair. Material And Methods: The study included 26 patients with primary hypospadias, aged 2-22 years (average 7.86), operated on from June 2002 to August 2006. Of the patients, 21 had distal, 3 midshaft and 2 penoscrotal hypospadias. The standard technique of tubularized incised plate (TIP) with double-layer covering of the neourethra by subcutaneous tissue was used in all cases of reconstruction. The mean follow-up period was 4.5 months (range 3-12 months). Results: A successful result of a normal-looking penis without fistula was achieved in all patients. One patient had meatal stenosis (3.84%) in the early postoperative period, which was corrected by urethral dilatation of the external meatus at an interval of up to 2 months postoperatively. Conclusion: Our technique represents a reasonable option for utilizing dorsal dartos subcutaneous tissue in TIP urethroplasty. The neourethra is covered symmetrically with a double layer of well-vascularized tissue and the penis is kept without rotation. Redundancy of the flap and its excellent vascularization depend on the harvesting technique. Further follow-up and a larger number of patients are needed before a final conclusion can be made.
abstract_id: PUBMED:36344393 A novel triple dartous fixation in hypospadias repair. Objective: This study investigates the impact of the use of dartos covering to augment the neourethra on functional and cosmetic results, and evaluates a novel technique demonstrating how to fix the dartos flap to cover the neourethra as a barrier in hypospadias repair. Patients And Methods: This study comprised 204 male patients with different degrees of hypospadias (DPH = 132, coronal hypospadias = 46, MPH = 26). Their ages ranged from 1 to 23 years (mean age = 2 years). Penile chordee was present (DPH = 45, coronal hypospadias = 33, MPH = 26). All patients had an abnormal, downward-directed urinary stream. Hypospadias repair was performed by the classic TIP technique in addition to our novel modification of dartos covering. Patients were submitted to 5 years of follow-up including clinical examination of the functional and cosmetic parameters. Results: Success was reported in 200 patients; 3 patients had complications with subcoronal urethrocutaneous fistula and one patient had complete repair disruption. Conclusion: Triple dartos fixation is a simple technique for all hypospadiologists to minimize urethrocutaneous fistula, a common complication of hypospadias repair, with good functional and cosmetic results. Answer: The Snodgrass hypospadias repair, also known as the tubularized incised plate (TIP) urethroplasty, has been widely applied to various forms of hypospadias. The use of a vascularized dartos flap in conjunction with the Snodgrass technique has been reported to significantly reduce the rates of complications such as fistula formation. A large multi-institutional experience with the Snodgrass technique, which included the routine use of a vascularized dartos flap and a 2-layer closure of the neourethra, reported an overall complication rate of less than 1% for all cases combined, suggesting that this approach could be the optimal repair for routine cases of hypospadias (PUBMED:12352344). Comparative studies have also been conducted to determine the ideal vascular cover for the Snodgrass repair. One study found that the tunica vaginalis (TV) flap was superior to both dorsal and ventral dartos flaps, with no complications recorded in the group using the TV flap (PUBMED:29118537). Another study reported no urethrocutaneous fistula or urethral stenosis with the use of an overlapping double-layered dorsal dartos flap, indicating that a well-vascularized dorsal dartos flap is a good choice for preventing fistula formation (PUBMED:19857999). Additional modifications to the Snodgrass technique have been explored, such as the Lembert suturing technique, which also showed promising results with a low rate of urethrocutaneous fistula (PUBMED:35197700). The use of ventral-based dartos flaps in combination with mucosal collars has been reported to enhance cosmetic results and make the flap harvesting process easier, although there were cases of fistula and meatal stenosis (PUBMED:15925087).
Instruction: Regional versus general anesthesia in surgical patients with chronic obstructive pulmonary disease: does avoiding general anesthesia reduce the risk of postoperative complications? Abstracts: abstract_id: PUBMED:25526396 Regional versus general anesthesia in surgical patients with chronic obstructive pulmonary disease: does avoiding general anesthesia reduce the risk of postoperative complications? Background: Surgical patients with chronic obstructive pulmonary disease (COPD) are at increased risk of perioperative complications. In this study, we sought to quantify the benefit of avoiding general anesthesia in this patient population. Methods: Data from the National Surgical Quality Improvement Program database (2005-2010) were used for this review. Patients who met the National Surgical Quality Improvement Program definition for COPD and underwent surgery under general, spinal, epidural, or peripheral nerve block anesthesia were included in this study. For each primary current procedural terminology code with ≥ 1 general and ≥ 1 regional (spinal, epidural, or peripheral nerve block) anesthetic, regional patients were propensity score--matched to general anesthetic patients. Propensity scoring was calculated using all available demographic and comorbidity data. This match yielded 2644 patients who received regional anesthesia and 2644 matched general anesthetic patients. These groups were compared for morbidity and mortality. Results: Groups were well matched on demographics, comorbidities, and type of surgery. Compared with matched patients who received regional anesthesia, patients who received general anesthesia had a higher incidence of postoperative pneumonia (3.3% vs 2.3%, P = 0.0384, absolute difference with 95% confidence interval = 1.0% [0.09, 1.88]), prolonged ventilator dependence (2.1% vs 0.9%, P = 0.0008, difference = 1.2% [0.51, 1.84]), and unplanned postoperative intubation (2.6% vs 1.8%, P = 0.0487, difference = 0.8% [0.04, 1.62]). Composite morbidity was 15.4% in the general group versus 12.6% (P = 0.0038, difference = 2.8% [0.93, 4.67]). Composite morbidity not including pulmonary complications was 13.0% in the general group versus 11.1% (P = 0.0312, difference = 1.9% [0.21, 3.72]). Thirty-day mortality was similar (2.7% vs 3.0%, P = 0.6788, difference = 0.3% [-1.12, 0.67]). As a test for validity, we found a positive association between pulmonary end points because patients with 1 pulmonary complication were significantly more likely to have additional pulmonary complications. Conclusions: The use of regional anesthesia in patients with COPD is associated with lower incidences of composite morbidity, pneumonia, prolonged ventilator dependence, and unplanned postoperative intubation. abstract_id: PUBMED:30792062 Utilization of regional versus general anesthesia and its impact on lower extremity bypass outcomes. Objective: Previous studies evaluating general anesthesia (GA) vs regional (epidural/spinal) anesthesia (RA) for infrainguinal bypass have produced conflicting results. The purpose of this study was to analyze the factors associated with contemporary use of RA and to determine whether it is associated with improved outcomes after infrainguinal bypass in patients with critical limb ischemia. Methods: Using the Vascular Quality Initiative infrainguinal database, a retrospective review identified all critical limb ischemia patients who received an infrainguinal bypass from 2011 through 2016. Patients were then separated by GA or RA. 
Primary outcomes were perioperative mortality, complications, and length of stay. Predictive factors for RA and perioperative outcomes were analyzed using a mixed-effects model to adjust for center differences. Results: There were 16,052 patients identified to have a lower extremity bypass during this time frame with 572 (3.5%) receiving RA. There was a wide variation in the use of RA, with 31% of participating centers not using it at all. Age (67.2 vs 70.3 years; P < .001), chronic obstructive pulmonary disease (25.7% vs 30.9%; P < .001), and urgency of the operation (75.7% vs 80.4%; P = .01) were found to be independently associated with receiving a regional anesthetic. Univariate and multivariate analysis demonstrated that length of stay (6.8 days vs 5.7 days; P < .01), postoperative congestive heart failure (2.3% vs 1.1%; P = .040), and change in renal function (5.7% vs 2.9%; P = .005) were all significant outcomes in favor of RA. There was a trend toward lower mortality rates; however, this did not reach statistical significance. Rates of myocardial infarction, pulmonary complications, and stroke were not found to be statistically different. Coarsened exact matching continued to demonstrate a difference in length of stay and rates of new-onset congestive heart failure in favor of RA. Conclusions: RA is an infrequent but effective form of anesthesia for infrainguinal bypass surgery. Elderly patients and those with underlying respiratory problems may benefit from this form of anesthesia. Further evaluation within institutions should be performed to identify which patients would most benefit from RA or GA. abstract_id: PUBMED:36263279 Anesthesia Type and Postoperative Outcomes for Patients Receiving Arthroscopic Rotator Cuff Repairs. Background: As the indications for and the volume of arthroscopic rotator cuff repair increase, it is important to optimize perioperative care to minimize postoperative complications and health care costs. Purpose: We sought to investigate if the anesthesia type used affects the rate of postoperative complications in patients undergoing arthroscopic rotator cuff repairs. Methods: We conducted a retrospective cohort study using the American College of Surgeons National Surgical Quality Improvement Program database to identify patients undergoing arthroscopic rotator cuff repair from 2014 to 2018. Patients were divided into 3 cohorts: general anesthesia, regional anesthesia, and combined general plus regional anesthesia. Bivariate and multivariate analyses with stepwise technique were performed on data related to patient demographics, smoking history, functional status, medical comorbidities (ie, bleeding disorders, chronic obstructive pulmonary disease, and dialysis), and postoperative outcomes within 30 days of discharge. To assess the independent risk factors for postoperative complications, demographics and medical comorbidities were included in the multivariate analyses for any variables that derived P values <.20. Results: Of 24,677 total patients undergoing arthroscopic rotator cuff repair, 15,661 (63.5%) had general anesthesia, 889 (3.6%) had regional anesthesia, and 8127 (32.9%) received combined general plus regional anesthesia. Patients who received general anesthesia rather than regional anesthesia were more frequently white (76.8% vs 74.8%, respectively) and had a medical history of hypertension (47.9% vs 41.8%, respectively), smoking (14.9% vs 12.4%, respectively), and chronic obstructive pulmonary disease (3.4% vs 1.6%, respectively). 
Compared with patients receiving general anesthesia, those receiving combined general plus regional were more likely to have higher American Society of Anesthesiologists class and a smoking history. Following adjustment, patients who underwent regional anesthesia had a decreased risk for postoperative admission compared with patients who had general anesthesia. Patients who underwent combined regional plus general anesthesia had decreased rates of wound complications and readmission compared with those who received general anesthesia. Conclusion: Among patients undergoing arthroscopic rotator cuff repair, this retrospective study found a significantly higher rate of respiratory and cardiac comorbidities with general anesthesia compared with regional anesthesia. When controlling for these confounders, the use of regional anesthesia was still associated with lower rates of postoperative readmission compared with the general and combined subgroups. Patients receiving combined general plus regional anesthesia had decreased rates of wound complications and readmission compared with general anesthesia. These findings may influence anesthetic choice in minimizing postoperative complications for rotator cuff repairs. abstract_id: PUBMED:30243870 The Use of Regional or Local Anesthesia for Carotid Endarterectomies May Reduce Blood Loss and Pulmonary Complications. Objective: Over 150,000 carotid endarterectomy (CEA) procedures are performed each year. Perioperative anesthetic management may be complex due to multiple patient and procedure-related risk factors. The authors aimed to determine whether the use of general anesthesia (GA), when compared with regional anesthesia (RA), would be associated with reduced perioperative morbidity and mortality in patients undergoing a CEA. Design: Retrospective analysis of the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database. Setting: The authors evaluated patients undergoing a CEA at multiple university- and community-based settings. Participants: A total of 43,463 patients were identified; 22,845 patients were propensity matched after excluding for missing data. Interventions: The study population was divided into 2 groups: patients undergoing RA or GA. The RA group included regional anesthesia performed by the anesthesiologist or surgeon, monitored anesthesia care, and local infiltration. Methods: The primary endpoint was 30-day mortality. Secondary endpoints included surgical site infection, pulmonary complications, return to the operating room, acute kidney injury, cardiac arrest, urinary tract infection, myocardial infarction, thromboembolism, perioperative transfusion, sepsis, and days to discharge. Measurements And Main Results: Younger age, Hispanic ethnicity, body mass index <18.5, dyspnea, chronic obstructive pulmonary disease, and smoking history were associated with receiving GA. Patients with low hematocrit and low platelets were more likely to get RA. There was no mortality difference. GA was associated with a significantly higher rate of perioperative transfusions (p = 0.037) and perioperative pneumonia (p = 0.027). Conclusion: The use of RA over GA in CEA is associated with decreased risk of postoperative pneumonia and a reduced need for perioperative blood transfusions. abstract_id: PUBMED:29095750 The Use of Regional or Neuraxial Anesthesia for Below-Knee Amputations May Reduce the Need for Perioperative Blood Transfusions.
Background And Objectives: Amputations of the lower extremity remain a common procedure in a high-risk population. Perioperative morbidity and mortality reach as high as 14.1% in below-knee amputations. We aimed to determine whether regional, or neuraxial, anesthesia, when compared with general anesthesia (GA), would be associated with reduced perioperative morbidity and mortality. Methods: We queried the American College of Surgeons National Surgical Quality Improvement Program data set. The study population was divided into 2 groups: patients undergoing regional anesthesia (RA) and those undergoing GA. The primary end point for our study was 30-day mortality. The secondary end points were return to the operating room, surgical site infections, pulmonary complications, acute kidney injury, urinary tract infection, cardiac arrest, myocardial infarction, perioperative transfusions, thromboembolisms, sepsis, composite measure of postoperative complications, and days from operation to discharge. Results: Twelve thousand seven hundred twenty-three patients were identified. Older patients, white patients, patients with a higher body mass index, patients without dyspnea, patients with independent functional status, smokers, patients with sepsis, and patients with bleeding disorders were associated with receiving GA. Hispanic patients, patients with chronic obstructive pulmonary disease, and patients with congestive heart failure were associated with receiving RA. Our study did not reveal a 30-day mortality difference between RA and GA. Regional anesthesia was associated with a significantly decreased need for perioperative blood transfusions (11.8% vs 16.5%, P < 0.001) and a decrease in the composite measure of postoperative complications (25.7% vs 29.1%, P < 0.04). Conclusions: Regional anesthesia does not offer a mortality advantage over GA, but RA may reduce the need for perioperative blood transfusions. abstract_id: PUBMED:30715953 Office-Based Sedation/General Anesthesia for COPD Patients, Part I. The safe treatment of patients with chronic obstructive pulmonary disease (COPD) in dental office-based settings can be quite complex without a current understanding of the etiology, course, severity, and treatment modalities of the disease. The additional concerns of providing sedation and/or general anesthesia to patients with COPD in settings outside of a hospital demand thorough investigation of individual patient presentation and realistic development of planned treatment that patients suffering from this respiratory condition can tolerate. Along with other comorbidities, such as advanced age and potential significant cardiovascular compromise, the dental practitioner providing sedation or general anesthesia must tailor any treatment plan to address multiple organ systems and mitigate risks of precipitating acute respiratory failure from inadequate pain and/or anxiety control. Part I of this article will cover the epidemiology, etiology, and pathophysiology of COPD. Patient evaluation in the preoperative period will also be reviewed. Part II will cover which patients are acceptable for sedation/general anesthesia in the dental office-based setting as well as sedation/general anesthesia techniques that may be considered. abstract_id: PUBMED:35272341 Vaginal Hysterectomy Performed Under General Versus Neuraxial Regional Anesthesia: Comparison of Patient Characteristics and 30-Day Outcomes Using Propensity Score-Matched Cohorts. 
Objectives: Compared with surgery under general anesthesia (GA), surgery under neuraxial regional anesthesia (RA) has been associated with economic and clinical benefits in certain populations. Our aim was to compare preoperative and postoperative characteristics and 30-day outcomes, including intraoperative complications, for patients undergoing benign vaginal hysterectomy under GA versus RA. Methods: This is a retrospective cohort study of patients who underwent vaginal hysterectomy for benign indications between 2015 and 2019 using the American College of Surgeons National Surgical Quality Improvement Program database. Patients were identified using Current Procedural Terminology codes and stratified into GA and RA groups. Propensity score matching was performed to account for selection bias between anesthesia groups. Results: Of 18,030 vaginal hysterectomies performed during this study period, 17,472 (96.9%) were performed under GA and 558 (3.1%) under RA. The RA group was older, more likely to be White, and more likely to have a history of chronic obstructive pulmonary disease and chronic steroid use (P < 0.01 for all); they were less likely to be discharged the same day (8.6% vs 12.2%, P = 0.01). In the matched cohort, there were similar proportions of major, minor, and composite complications between RA and GA groups (major: odds ratio [OR], 0.95; 95% confidence interval [CI], 0.51-1.78; minor: OR, 1.18; 95% CI, 0.74-1.88; composite: OR, 1.10; 95% CI, 0.75-1.64). Similar proportions of same-day discharge were observed (OR, 0.72; 95% CI, 0.47-1.10). Conclusions: Although RA comprises only 3% of the anesthetic modalities used for benign vaginal hysterectomies, it is associated with a similar incidence of postoperative complications compared with general anesthesia. abstract_id: PUBMED:22944010 Anesthesia-based evaluation of outcomes of lower-extremity vascular bypass procedures. Background: This report examines the effects of regional versus general anesthesia for infrainguinal bypass procedures performed in the treatment of critical limb ischemia (CLI). Methods: Nonemergent infrainguinal bypass procedures for CLI (defined as rest pain or tissue loss) were identified using the 2005 to 2008 American College of Surgeons National Surgical Quality Improvement Program database using International Classification of Disease, ninth edition, and Current Procedure Terminology codes. Patients were classified according to National Surgical Quality Improvement Program data as receiving either general anesthesia or regional anesthesia. The regional anesthesia group included those specified as having regional, spinal, or epidural anesthesia. Demographic, medical, risk factor, operative, and outcomes data were abstracted for the study sample. Individual outcomes were evaluated according to the following morbidity categories: wound, pulmonary, venous thromboembolic, genitourinary, cardiovascular, and operative. Length of stay, total morbidity, and mortality were also evaluated. Associations between anesthesia types and outcomes were evaluated using linear or logistic regression. Results: A total of 5,462 inpatient hospital visits involving infrainguinal bypasses for CLI were identified. Mean patient age was 69 ± 12 years; 69% were Caucasian; and 39% were female. In all, 4,768 procedures were performed using general anesthesia and 694 with regional anesthesia. 
Patients receiving general anesthesia were younger and significantly more likely to have a history of smoking, previous lower-extremity bypass, previous amputation, previous stroke, and a history of a bleeding diathesis including the use of warfarin. Patients receiving regional anesthesia had a higher prevalence of chronic obstructive pulmonary disease. Tibial-level bypasses were performed in 51% of procedures, whereas 49% of procedures were popliteal-level bypasses. Cases performed using general anesthesia demonstrated a higher rate of resident involvement, need for blood transfusion, and operative time. There was no difference in the rate of popliteal-level and infrapopliteal-level bypasses between groups. Infrapopliteal bypass procedures performed using general anesthesia were more likely to involve prosthetic grafts and composite vein. Mortality occurred in 157 patients (3%). The overall morbidity rate was 37%. Mean and median lengths of stay were 7.5 days (± 8.1) and 6.0 days (Q1: 4.0, Q3: 8.0), respectively. Multivariate analyses demonstrated no significant differences by anesthesia type in the incidence of morbidity, mortality, or length of stay. Conclusion: These results provide no evidence to support the systematic avoidance of general anesthesia for lower-extremity bypass procedures. These data suggest that anesthetic choice should be governed by local expertise and practice patterns. abstract_id: PUBMED:36361140 The Incidence of Acute Respiratory Infection Was Higher in the Older Adults with Lower Limb Fracture Who Receive Spinal Anesthesia Than Those Who Receive General Anesthesia. Introduction: Acute respiratory infection (ARI) can significantly reduce postoperative quality of life and impair the recovery of older adult patients with lower-limb fractures, and its relationship with methods of anesthesia remains inconclusive. Using data from the National Health Insurance Research Database (NHIRD) of Taiwan, this study examined the data of patients who received surgical management for lower-limb fractures and compared those who underwent general anesthesia (GA) with those who underwent regional anesthesia (RA) in terms of their incidence of acute upper and lower respiratory infection during the one-month postoperative period. The study also identified related risk factors. Material And Methods: Approximately two million patients were randomly sampled from the NHIRD registry. We identified and enrolled patients with lower-limb fractures who were over 60 years old and underwent GA or RA during surgeries conducted between 2010 and 2017. We divided these patients into two groups for further analysis. The outcome of this study was the development of ARI during the one-month postoperative period. Results: In total, 45,032 patients (GA group, 19,580 patients; RA group, 25,452 patients) with a mean age of 75.0 ± 8.9 years were included in our study. The incidence of postoperative ARI within one month of surgery was 8.0% (1562 patients) in the GA group and 9.5% (2412 patients) in the RA group, revealing a significant difference. The significant risk factors for the incidence of ARI were the application of RA for surgery, older age, hypertension, liver disease, and chronic obstructive pulmonary disease (COPD). A subgroup analysis revealed that the RA method was associated with a significantly higher ARI incidence relative to the GA method among patients aged between 60 and 80 years, among male patients, among the patients with or without any comorbidity and among the patients without COPD. 
Conclusion: The incidence of postoperative ARI within one month of surgery was higher among older patients with lower-limb fractures who received RA for surgery than among those who received GA for surgery. The other major risk factors for ARI were older age, hypertension, liver disease, and COPD. Therefore, we should focus on patients with a high risk of developing ARI, especially during the COVID-19 pandemic. abstract_id: PUBMED:31473112 Local Versus General Anesthesia in Nonemergency Endovascular Abdominal Aortic Aneurysm Repair: A Systematic Review and Meta-Analysis. The purpose of this systematic review was to assess the perioperative clinical outcomes in using local/regional anesthesia (LA/RA) or general anesthesia (GA) in patients undergoing endovascular abdominal aortic aneurysm repair. A comprehensive electronic literature search was undertaken from inception to September 2018, identifying all randomized and nonrandomized studies comparing LA/RA versus GA in patients with abdominal aortic aneurysm who underwent endovascular repair. A total of 12,024 patients (n = 1,664 LA/RA, n = 10,360 GA) were analyzed from 12 observational studies included in this analysis. No difference in mean age between the LA/RA and GA groups was noted (73.8 ± 7.8 y v 72.4 ± 7.6 y, 95% confidence interval 0.85 [-0.08 to 1.79]; p = 0.07). No differences in preoperative rate of chronic obstructive pulmonary disease, ischemic heart disease, diabetes mellitus, and American Society of Anesthesiologists grades were noted between the 2 groups (p = 0.21, p = 0.85, p = 0.46, and p = 0.67, respectively). Shorter total surgical time in LA/RA patients was reported (135 ± 40 min v 164 ± 43 min; p < 0.00001). Shorter hospital stay was observed in LA/RA patients (3.6 ± 3.3 d v 4.6 ± 5 d; p = 0.002). No difference in cardiac or renal complications was noted between the LA/RA and GA groups postoperatively (2.7% v 2.5%; p = 0.46 and 1.2% v 1.6%; p = 0.13). Similarly, no difference in vascular complications was noted in LA/RA versus GA patients (8.4% v 7.7%; p = 0.44). Thirty-day mortality was not different between the 2 cohorts (2% v 1.7%; p = 0.97). Use of LA/RA in selective endovascular abdominal aortic aneurysm repair procedures provides satisfactory perioperative outcomes comparable with those of GA, with the advantage of a shorter hospital stay. A large randomized controlled trial or multicenter study is required to confirm the present study's findings.
In carotid endarterectomy procedures, regional anesthesia was linked to decreased risk of postoperative pneumonia and a reduced need for perioperative blood transfusions (PUBMED:30243870). Similarly, for below-knee amputations, regional anesthesia did not offer a mortality advantage over general anesthesia but was associated with a significantly decreased need for perioperative blood transfusions and a decrease in the composite measure of postoperative complications (PUBMED:29095750). However, it is important to note that the choice of anesthesia should be tailored to individual patient needs and local expertise. For example, a study on lower-extremity vascular bypass procedures did not find significant differences in morbidity, mortality, or length of stay between regional and general anesthesia, suggesting that anesthetic choice should be governed by local expertise and practice patterns (PUBMED:22944010). Moreover, in the case of older adults with lower limb fractures, the incidence of acute respiratory infection was higher in those who received spinal anesthesia compared to those who received general anesthesia (PUBMED:36361140). In summary, while regional anesthesia may offer benefits in reducing certain postoperative complications for patients with COPD, the decision between regional and general anesthesia should consider the specific surgical context, patient comorbidities, and the expertise of the medical team.
Instruction: Atypical aortic thrombus: should nonoperative management be first line? Abstracts: abstract_id: PUBMED:24704850 Atypical aortic thrombus: should nonoperative management be first line? Background: Aortic thrombus in the absence of atherosclerotic plaque or aneurysm is rare, and its optimal management remains unclear. Although atypical aortic thrombus (AAT) has been historically managed operatively, successful nonoperative strategies have been recently reported. Here, we report our experience in treating patients with AAT that has evolved from a primarily operative approach to a first-line, nonoperative strategy. Methods: Records of patients treated for AAT between 2008 and 2011 at our institution were reviewed. Results: Ten female and three male patients with ages ranging from 27 to 69 were identified. Seven were treated operatively and 6 nonoperatively. Initial presentation was variable and included limb thromboembolic events (n = 6), visceral ischemia (n = 5), and stroke (n = 1). Associated risk factors included hypercoagulability (76%; n = 10) and hyperlipidemia (38%, n = 5). In the nonoperative group, complete thrombus resolution was obtained via anticoagulation (n = 5) or systemic thrombolysis (n = 1). Complete thrombus extraction was achieved in all operative patients. There were 11 significant complications in 5 of the 7 patients (71%) in the operative group, including intraoperative lower extremity embolism, pericardial effusion, stroke, and 1 death. There was 1 complication in the patients treated nonoperatively. The median hospital length of stay was 9 days (range 3-49) for those treated nonoperatively and 30 days (range 4-115) for those undergoing operative thrombectomy. Conclusions: Although AAT has traditionally been treated operatively, nonoperative management of AAT with anticoagulation or thrombolysis is feasible in selected patients and may lessen morbidity and length of hospitalization in those patients for whom it is appropriate. abstract_id: PUBMED:30294946 Atypical Hemolytic-Uremic Syndrome: An Update on Pathophysiology, Diagnosis, and Treatment. Atypical hemolytic uremic syndrome (aHUS), a rare variant of thrombotic microangiopathy, is characterized by microangiopathic hemolytic anemia, thrombocytopenia, and renal impairment. The condition is associated with poor clinical outcomes with high morbidity and mortality. Atypical HUS predominantly affects the kidneys but has the potential to cause multi-organ system dysfunction. This uncommon disorder is caused by a genetic abnormality in the complement alternative pathway resulting in over-activation of the complement system and formation of microvascular thrombi. Abnormalities of the complement pathway may be in the form of mutations in key complement genes or autoantibodies against specific complement factors. We discuss the pathophysiology, clinical manifestations, diagnosis, complications, and management of aHUS. We also review the efficacy and safety of the novel therapeutic agent, eculizumab, in aHUS, pregnancy-associated aHUS, and aHUS in renal transplant patients. abstract_id: PUBMED:31928535 Atypical hemolytic uremic syndrome: a case report. Background: Thrombotic microangiopathy is a pathological condition comprised of microvascular thrombosis involving any organ of the body leading to thrombocytopenia, Coombs-negative hemolytic anemia, and end-organ damage. 
The most common forms of thrombotic microangiopathies are Shiga toxin-producing Escherichia coli-mediated hemolytic uremic syndrome, thrombotic thrombocytopenic purpura, and atypical hemolytic uremic syndrome. The atypical hemolytic uremic syndrome occurs due to genetic and acquired mutations in complement regulatory factors and to complement activation factors in the immune system, mainly the alternative pathway. Clinical manifestations and outcomes differ with the prevalent mutations of the patient. Currently, available treatment modalities are therapeutic plasma exchange and a monoclonal antibody against C5, eculizumab. We report a case of a Sri Lankan girl diagnosed with atypical hemolytic uremic syndrome complicated with septicemia, hemolytic anemia, acute kidney injury, pulmonary hemorrhage with respiratory failure, and hypertension who had a complete remission following long-term (30 months) therapeutic plasma exchange. Case Presentation: A 15-year-old Sri Lankan girl was transferred from a local hospital with the features of septicemia and acute kidney injury for specialized management. She had high blood pressure (180/100 mmHg) on admission. She underwent appendicectomy based on suspicion of acute appendicitis as the cause of sepsis. Following surgery, her condition deteriorated, and intensive care unit management was warranted because she developed pulmonary hemorrhages and respiratory failure requiring mechanical ventilation and renal replacement therapy in the form of hemodialysis. Her blood investigations showed microangiopathic hemolytic anemia, thrombocytopenia, elevated lactate dehydrogenase, and reduced human complement C3 levels, together with a normal coagulation profile. She was diagnosed with atypical hemolytic uremic syndrome and was initiated on therapeutic plasma exchange and other supportive therapy, including corticosteroids. Following a lengthy course of plasma exchange, complete recovery was achieved. Conclusion: The atypical hemolytic uremic syndrome is a rare disease entity requiring a high index of suspicion to diagnose. It is a diagnosis of exclusion. Early diagnosis with prompt treatment will render a better outcome. The atypical hemolytic uremic syndrome needs to be considered in all patients with thrombotic microangiopathy. abstract_id: PUBMED:25280590 A mechanistic approach to the diagnosis and management of atypical hemolytic uremic syndrome. Until recently, atypical hemolytic uremic syndrome (aHUS), conventionally defined in the pediatric literature as a syndrome of the triad of renal failure, microangiopathic hemolytic anemia, and thrombocytopenia without a prodrome of hemorrhagic diarrhea, has received little attention in adult practice because the patients are commonly given the diagnosis of thrombotic thrombocytopenic purpura (TTP) or TTP/HUS and treated as TTP with plasma exchange, augmented in refractory cases with rituximab and sometimes even splenectomy. Molecular studies have shown that the regulation of the alternative complement pathway is defective in many patients with conventionally defined aHUS. With this new knowledge and the findings of ADAMTS13 autoinhibitors or mutations in TTP, it is time to redefine aHUS as a disorder with propensity to the development of thrombotic microangiopathy due to defective regulation of the alternative complement pathway and TTP as a disorder with propensity to arteriolar and capillary thrombosis due to ADAMTS13 deficiency. 
This new definition provides a clear distinction of aHUS from TTP, encompasses patients without all 3 components of the triad, and provides the rationale for management with anticomplement therapy. abstract_id: PUBMED:24681522 Atypical Hemolytic-Uremic Syndrome: A Clinical Review. Atypical hemolytic-uremic syndrome (HUS) is a rare life-threatening disorder characterized by microangiopathic hemolytic anemia, thrombocytopenia, and ischemic injury to organs, especially the kidneys. Microvascular injury and thrombosis are the dominant histologic findings. Complement activation through the alternative pathway plays a critical role in the pathogenesis of atypical HUS. Genetic abnormalities involving complement regulatory proteins and complement components form the molecular basis for complement activation. Endothelial cell dysfunction, probably because of the effects of complement activation, is an intermediate stage in the pathophysiologic cascade. Atypical HUS has a grave prognosis. Although mortality approaches 25% during the acute phase, end-stage renal disease develops in nearly half of patients within a year. Atypical HUS has a high recurrence rate after renal transplantation, and recurrent disease often leads to graft loss. Plasma therapy in the form of plasma exchange or infusion has remained the standard treatment for atypical HUS. However, many patients do not respond to plasma therapy and some require prolonged treatment. Approved by the Food and Drug Administration in the treatment of atypical HUS, eculizumab is a humanized monoclonal antibody that blocks cleavage of complement C5 into biologically active mediators of inflammation and cytolysis. Although case reports have shown the efficacy of eculizumab, randomized clinical trials are lacking. Therapeutic strategies targeting endothelial cells have demonstrated promising results in experimental settings. Therefore, inhibitors of angiotensin-converting enzyme, HMG-CoA reductase, and xanthine oxidase as well as antioxidants, such as ascorbic acid, may have salutary effects in patients with atypical HUS. abstract_id: PUBMED:31170773 First-Line Therapy for Immune Thrombocytopenia. Immune thrombocytopenia (ITP) is an autoimmune disease affecting blood platelets that causes thrombocytopenia and an increased risk of bleeding. First-line therapy is indicated for patients with bleeding complications or who are at increased risk of bleeding, and the decision to initiate therapy depends not only on the platelet count, but also on other endpoints including quality of life. The choice of first-line therapy depends primarily on how quickly a platelet count response is required, with intravenous immune globulin providing the more rapid response, followed by high-dose dexamethasone and prednisone. In this narrative review, we discuss key issues with first-line therapy in ITP including when to initiate therapy, treatment options and special considerations for children. Evidence-based guidelines are lacking for the emergency management of patients with ITP who present with significant bleeding; we provide our approach to this critical situation. abstract_id: PUBMED:30499520 Ocular thrombotic microangiopathy in atypical hemolytic-uremic syndrome (a clinical case study) The article presents a clinical observation of ocular thrombotic microangiopathy in atypical hemolytic-uremic syndrome - a rare genetic disease characterized by systemic thrombosis caused by uncontrolled activation of alternative complement pathway. 
A typical ocular manifestation of this disease is bilateral Purtscher-like retinopathy. Timely diagnosis of atypical hemolytic-uremic syndrome, including ophthalmologic examination, enables the early start of highly effective pathogenetic therapy with the complement inhibitor eculizumab. abstract_id: PUBMED:33539596 Management of early hepatic artery thrombosis following living-donor liver transplantation: feasibility, efficacy and potential risks of endovascular therapy in the first 48 hours post-transplant-a retrospective cohort study. This retrospective cohort study aims to review our 18-year experience with early hepatic artery thrombosis (e-HAT) following living-donor liver transplantation (LDLT), as well as to assess the feasibility, efficacy and potential risks of endovascular management of e-HAT in the first 48 hours (hrs) post-LDLT. Medical records of 730 patients who underwent LDLT were retrospectively reviewed. In all cases who had developed e-HAT, the treatment modalities employed and their outcomes were evaluated. Thirty-one patients developed e-HAT (4.2%). Definite technical success and 1-year survival rates of surgical revascularization [11/31 cases (35.5%)] were 72.7% & 72.7%, whereas those of endovascular therapy [27/31 cases (87.1%)] were 70.4% & 59.3%, respectively. Endovascular therapy was carried out in the first 48 hrs post-transplant in 9/31 cases (29%) [definite technical success: 88.9%, 1-year survival: 55.6%]. Four procedure-related complications were reported in 3 of those 9 cases (33.3%). In conclusion, post-LDLT e-HAT can be treated by surgical revascularization or endovascular therapy, with comparable results. Endovascular management of e-HAT in the first 48 hrs post-LDLT appears to be feasible and effective, but is associated with a relatively higher risk of procedure-related complications, compared to surgical revascularization. Hence, it can be reserved as a second-line therapeutic option in certain situations where surgical revascularization is considered futile, potentially too complex, or potentially more risky. abstract_id: PUBMED:31312560 Atypical Hemolytic Uremic Syndrome: A Case Report. Hemolytic uremic syndrome (HUS) is a rare type of thrombotic microangiopathy. Manifestations include thrombocytopenia, microangiopathic hemolytic anemia, and thrombi in small blood vessels. The prognosis is poor. Herein, we present a case of atypical HUS, which is very rare. abstract_id: PUBMED:33346497 The experience of using the Russian biosimilar of the original drug eculizumab for the treatment of patients with atypical hemolytic-uremic syndrome. Atypical hemolytic-uremic syndrome (aHUS) is a chronic systemic disease of a genetic nature, which is based on uncontrolled activation of the alternative complement pathway, leading to generalized thrombosis in the vessels of the microvasculature (complement-mediated thrombotic microangiopathy). To date, therapy with eculizumab is the most effective and pathogenetically substantiated method of treating patients with aHUS. Using the example of three clinical cases of patients with a verified diagnosis of aHUS, the high efficacy and safety of the world's first biosimilar of eculizumab in the treatment of adult patients with aHUS (complement-mediated thrombotic microangiopathy) were demonstrated.
Answer: The management of atypical aortic thrombus (AAT) has historically been operative, but recent reports suggest that nonoperative strategies can be successful and may be considered as a first-line approach in selected patients. A study that reviewed the treatment of patients with AAT between 2008 and 2011 at a particular institution found that nonoperative management, including anticoagulation and systemic thrombolysis, led to complete thrombus resolution in the majority of cases. This approach was associated with fewer complications and a shorter hospital stay compared to operative management, which involved complete thrombus extraction but resulted in significant complications in a high percentage of patients (71%) and a longer median hospital length of stay. The study concluded that nonoperative management of AAT with anticoagulation or thrombolysis is feasible and may reduce morbidity and hospitalization duration for appropriate patients (PUBMED:24704850). Therefore, based on the evidence from this study, nonoperative management should be considered as a first-line strategy for the treatment of AAT in selected patients, as it may offer a safer and more efficient alternative to traditional operative approaches.
Instruction: Does the removal of retained staples really improve postoperative chronic sequelae after transanal stapled operations? Abstracts: abstract_id: PUBMED:24819108 Does the removal of retained staples really improve postoperative chronic sequelae after transanal stapled operations? Background: Transanal stapled procedures are increasingly being used. Several postoperative complications can be attributed to their application, including those related to the presence of retained staples at the level of the staple line. Objective: This study was conducted to assess whether the removal of the retained staples is a useful approach to improve some of the most common postoperative complications of these surgical techniques. Design: This is a retrospective study. Settings: The study was conducted at the One-Day Surgery Unit of St. Andrea Hospital. Patients: All of the patients who underwent a stapled transanal procedure from January 2003 to December 2011 were included in the study. Patients included in the study were followed for 1 year after surgery to identify the presence of retained staples. Interventions: If identified, the retained staples were removed endoscopically or transanally. Main Outcome Measures: After staple removal, patients were followed with biweekly office visits for 2 months to evaluate the progression of symptoms. Results: Of the 566 patients included in the study, 165 experienced postoperative complications, and in 66 of these cases, retained staples were found and removed. With the removal of retained staples, symptoms almost all resolved or improved. In only 1 case did the removal of retained staples not modify the symptoms. Limitations: The study design may have introduced potential selection bias. In addition, the study was limited by the lack of a specific questionnaire for the evaluation of symptom improvement. Conclusions: The removal of retained staples is an efficacious and safe procedure to resolve or improve postoperative complications and should always be considered. abstract_id: PUBMED:30117358 A Modification of the Stapled TransAnal Rectal Resection (STARR) Procedure for Rectal Prolapse. Purpose: This study was designed to assess the safety, efficacy, and postoperative outcomes of the modified Stapled TransAnal Rectal Resection (modified STARR) in patients presenting with limited external rectal prolapse. Methods: A prospective cohort of patients with mild rectal prolapse undergoing rectal resection with the Tissue-Selecting Technique Stapled TransAnal Rectal Resection Plus (TSTStarr Plus) stapler between February 2014 and September 2016 was reviewed retrospectively. Results: Twenty-five eligible patients underwent rectal resection with the TSTStarr Plus stapler. The median vertical height of the resected specimen was 5.0 cm (range = 3.1-10 cm), with all cases confirmed histologically as full-thickness resections. Over a follow-up of 33.6 ± 9.4 months, only 1 case (4%) of recurrence was encountered. The postoperative Wexner score was significantly improved compared with the preoperative score (preoperative: median = 3, range = 0-20 vs postoperative: median = 2, range = 0-20; P = .010). The median Symptom Severity Score and Obstructed Defecation Score both decreased postoperatively compared with preoperative values (P = .001). Conclusions: Modified STARR in the management of mild rectal prolapse appears to be a safe and effective technique.
The initial results would encourage a more formal prospective assessment of this technique as part of a randomized trial for the management of mild rectal prolapse. abstract_id: PUBMED:22195257 Stapled mucosectomy: an alternative technique for the removal of retained rectal mucosa after ileal pouch-anal anastomosis. Restorative proctocolectomy (RPC), when performed with a stapled ileal pouch-anal anastomosis (IPAA), allows the retention of the rectal mucosa above the dentate line and can result in disease persistence or recurrence, as well as neoplastic lesions in patients with ulcerative colitis (UC). We report the case of a patient with chronic UC who underwent stapled mucosectomy, an alternative technique that evolved from stapled hemorrhoidopexy, rather than more traditional procedures. The patient had undergone laparoscopic RPC with a stapled IPAA 2 cm above the dentate line and a temporary loop ileostomy. Because the histopathology showed low-grade dysplasia in the proximal rectum, stapled mucosectomy with a 33-mm circular stapler kit at the time of ileostomy closure was scheduled. Following the application of a purse-string suture 1 cm above the dentate line, the stapler was inserted with its anvil beyond the purse-string and was fired. The excised rectal tissue was checked to ensure that it was a complete cylindrical doughnut. Histopathology of the excised tissue showed chronic inflammation. There were no complications during a follow-up period of 5 months. Because it preserves the normal rectal mucosal architecture and avoids a complex mucosectomy surgery, stapled mucosectomy seems to be a technically feasible and clinically acceptable alternative for the removal of rectal mucosa retained after RPC. abstract_id: PUBMED:38160748 Combined Robotic Transanal Transection Single-stapled Technique in Ultralow Rectal Endometriosis Involvement Associated with Parametrial and Vaginal Infiltration. Objective: To describe a combined robotic and transanal technique used to treat ultralow rectal endometriosis involving multiple pelvic compartments in a 36-year-old patient, which was responsible for infertility, dyspareunia, left sciatic pain, and severe dyschezia. Design: Surgical video article. Setting: The achievement of a perfect bowel anastomosis in patients with low rectal endometriosis can be challenging owing to technical and anatomic limitations [1]. By allowing a right-angle rectotomy with a single-stapled anastomosis, the transanal transection single-stapled technique overcomes these technical difficulties, ensuring a good-quality anastomosis with easier correction of postoperative anastomotic leakage when it occurs [2,3]. Interventions: The surgery starts by splitting the nodule into 3 components according to the different anatomic structures involved (parametrium, vagina, and rectum). Parametrial and vaginal fragments are excised as previously described (Supplemental Videos 1) [4].
The rectal involvement is approached in several steps: isolation and division of the inferior mesenteric vessels (inferior mesenteric artery and inferior mesenteric vein) and the left colic artery to obtain proper colon mobilization; transanal rectotomy immediately below the lower limit of the nodule; extraction of the specimen through the anus (Supplemental Videos 2); transection of the proximal bowel segment 1 cm above the upper limit of the nodule; introduction of the circular stapler anvil into the sigmoid colon; placement of 2 purse-string sutures, to secure the anvil and at the distal rectal cuff, respectively; connection of the anvil to the shoulder of the circular stapler; stapler closing and firing with formation of the coloanal anastomosis; staple line reinforcement by stitching; and anastomosis integrity testing (Supplemental Videos 3). In accordance with our policy, no preventive diverting stoma was performed [5]. Conclusions: Although no data are yet available in patients with endometriosis, the use of the transanal transection single-stapled technique may be an interesting approach in patients with very low rectal endometriosis involvement. abstract_id: PUBMED:25398818 Long-term outcome of stapled transanal rectal resection (STARR) versus stapled hemorrhoidopexy (STH) for grade III-IV hemorrhoids: preliminary results. Circular stapled transanal hemorrhoidopexy (STH) was first introduced by A. Longo for the correction of internal mucosal prolapse and obstructed defecation and, in 1998, was proposed as an alternative to conventional excisional hemorrhoidectomy. More recently, stapled transanal rectal resection (STARR), like the Longo procedure, has gradually gained popularity in the treatment of hemorrhoids. The aim of our study was to evaluate the usefulness of STARR as an alternative to STH in patients with grade III (n=218, 68.1%) and IV (n=102, 31.9%) hemorrhoids. A group of 320 consecutive patients (median age=51 years; range=16-85) underwent the STH (n=281) or STARR (n=39) procedure. The rate of postoperative bleeding (53.8% vs. 74.4%, p<0.01) was significantly reduced in patients who underwent the STARR procedure, which required a longer operative time (45 ± 22 vs. 26 ± 11 min, p<0.01). There were no differences between groups with regard to the use of painkillers, postoperative pain intensity, short- (three months) and long-term (one and three years) residual pain, soiling, incontinence and urgency. Patients treated with the STARR procedure had a lower recurrence rate of hemorrhoids and a lower incidence of prolapse, both at one year (none vs. 1.4%, p=0.593 and 2.6% vs. 5.3%, p=0.396, respectively) and at two years (none vs. 6.8%, p=0.078 and none vs. 13.2%, p=0.012, respectively). One-year (9.0 ± 1.8 vs. 9.4 ± 0.7, p=0.171) and two-year (9.6 ± 0.8 vs. 9.1 ± 1.7, p=0.072) general satisfaction was similar in the two groups, although slightly higher in STARR patients than in the STH group. In conclusion, according to our preliminary results, the STARR procedure leads to a lower incidence of complications and recurrences and should be considered, as a promising alternative to STH, for patients with grade III or IV hemorrhoids previously selected for stapled hemorrhoidectomy. abstract_id: PUBMED:20070341 The fate and significance of retained staples after stapled haemorrhoidopexy. Aim: We studied the fate of staples after stapled haemorrhoidectomy (SH) and reviewed the complications they may cause. Method: A survey was carried out among 16 surgeons experienced in performing SH to assess their knowledge regarding retained staples.
In addition, plain pelvic radiology was used to visualize residual staples at different time intervals after SH in 45 patients. Results: Ten (63%) surgeons informed their patients that the staples would be shed within 3 months of SH. Regardless of the timing of surgery, part of the staple line was radiologically visible in all patients, and in 19 patients (45%) the staple line was completely visible. Conclusion: Contrary to common belief, the majority of staples are retained after SH. abstract_id: PUBMED:30943327 Integration of transanal techniques for precise rectal transection and single-stapled anastomosis: a proof of concept study. Aim: Transection of the distal rectum and subsequent anastomosis differ between the open, minimally invasive and transanal approaches. With the transanal technique, there is direct control of the transection level and the single-stapled anastomosis, thus overcoming two crucial limitations of rectal surgery. This study describes a technique for precise transanal rectal transection with a single-stapled (TTSS) colorectal, coloanal or ileoanal anastomosis in 20 consecutive patients undergoing low rectal surgery. Methods: After completing rectal dissection by the preferred technique (open or minimally invasive), TTSS was created. The detailed video describes this technique. Results: TTSS was feasible in all patients: 13 underwent total mesorectal excision + TTSS for low rectal cancer and seven underwent ileoanal pouch + TTSS for benign disease. Complications included one Grade IIIa and three Grade I, according to the Clavien-Dindo classification (median follow-up 6 months). Conclusion: TTSS represents a technique which can be applied regardless of the preferred approach (open, minimally invasive or transanal) for low rectal dissection. The adoption of TTSS could well allow for a more consistent comparison of the outcomes following the differing approaches to rectal surgery. abstract_id: PUBMED:29363847 Clinical outcomes of stapled transanal rectal resection for obstructed defaecation syndrome at 10-year follow-up. Aim: The long-term efficacy of stapled transanal rectal resection (STARR) for surgical management of obstructed defaecation syndrome (ODS) has not been evaluated. Therefore, we investigated the long-term efficacy (> 10 years) of STARR for treatment of ODS related to rectocele or rectal intussusception and the factors that predict treatment outcome. Method: This study was a retrospective cohort analysis conducted on prospectively collected data. Seventy-four consecutive patients who underwent STARR for ODS between January 2005 and December 2006 in two Italian hospitals were included. Results: Seventy-four patients [66 women; median age 61 (29-77) years] underwent STARR for ODS. No serious postoperative complications were recorded. Ten years postoperatively, 60 (81%) patients completed the expected follow-up. Twenty-three patients (38%) reported persistent perineal pain and 13 (22%) experienced the urge to defaecate. ODS symptoms recurred in 24 (40%) patients after 10 years. At the 10-year follow-up, 35% of patients were very satisfied and 28% would recommend STARR and undergo the same procedure again if necessary. In contrast, 21% of patients would not select STARR again. Previous uro-gynaecological or rectal surgery and high constipation scores were identified as risk factors for recurrence. Conclusions: Stapled transanal rectal resection significantly improves the symptoms of ODS in the short term.
In the long term, however, STARR is less effective. abstract_id: PUBMED:34148318 Efficacy of transanal hand-sewn reinforcement in low rectal stapled anastomosis in preventing anastomotic leak after transanal total mesorectal excision Objective: To explore the efficacy and feasibility of transanal hand-sewn reinforcement of low stapled anastomosis in preventing anastomotic leak after transanal total mesorectal excision (taTME). Methods: A descriptive cohort study was conducted. Clinical data of 51 patients with rectal cancer who underwent taTME with transanal hand-sewn reinforcement of low stapled anastomosis at the Department of Colorectal Surgery, the Sixth Affiliated Hospital of Sun Yat-sen University from January 2019 to December 2020 were retrospectively collected. Inclusion criteria: (1) age >18 years old; (2) rectal cancer confirmed by preoperative pathology; (3) distance from tumor to anal verge ≤ 8 cm according to pelvic MR; (4) the lesion was evaluated to be resectable before operation; (5) with or without neoadjuvant chemotherapy and radiotherapy; (6) taTME with end-to-end stapled anastomosis and interrupted reinforcement of the anastomosis with absorbable sutures was performed, and the distance between the anastomosis and the anal verge was ≤ 5 cm. Exclusion criteria: (1) previous history of colorectal cancer surgery; (2) emergency surgery due to intestinal obstruction, bleeding or perforation; (3) patients with local recurrence or distant metastasis; (4) postoperative follow-up of less than 3 months. The procedure of transanal hand-sewn reinforcement was as follows: first, the anastomosis was checked to confirm the absence of bleeding. Then, the anastomosis was reinforced with interrupted figure-of-eight sutures through the rectal muscle layer using single 3-0 Vicryl. The entry site of each suture was placed immediately next to the exit site of the previous one. Any weak point of the anastomosis could also be reinforced according to the specimen from the circular stapler. The primary outcomes were the incidence of anastomotic leak, the method of any secondary operation, anastomotic infection, anastomotic stricture, and intraoperative and postoperative conditions. Results: All 51 enrolled patients completed surgery successfully without any conversion to open surgery. The median operative time was 169 (109-337) minutes, and the median intraoperative blood loss was 50 (10-600) ml. The median postoperative hospital stay was 8 (5-16) days. The mesorectum was complete and the distal resection margin was negative in all patients. A positive circumferential resection margin was observed in 1 patient (2.0%). Twelve (23.5%) patients underwent prophylactic ileostomy. One patient developed an anastomotic stricture, which was cured by digital dilatation of the anastomosis. ISREC grade C anastomotic leak was observed in 3 (5.9%) male patients, of whom 2 had not received a prophylactic ileostomy during the operation and were cured by a second operation with ileostomy and anastomotic repair. The other one healed after transanal repair of the anastomosis and anti-infective therapy. One (2.0%) patient suffered from a perianal infection that healed with sitz baths and anti-infective therapy. No death was reported within 30 days after operation. Conclusion: Transanal hand-sewn reinforcement of low rectal stapled anastomoses is a safe and feasible approach to preventing anastomotic leak after taTME.
abstract_id: PUBMED:18060361 Proctalgia in a patient with staples retained in the puborectalis muscle after STARR operation. Stapled transanal rectal resection (STARR) is a novel surgical technique for the treatment of intussusception and rectocele causing obstructed defecation. In this procedure, a double full-thickness rectal resection is performed transanally using two circular staplers. We describe the case of a patient complaining of persistent pain, tenesmus and fecal urgency after STARR. The patient also had an external rectal prolapse requiring an Altemeier rectosigmoid resection; during this operation we found and removed several staples that had stuck to the puborectalis muscle during STARR. Some degree of muscle inflammation was found at histological analysis. The patient recovered fully after this reintervention. Among the complications reported after STARR, the present one had not previously been described. The retained staples might have caused proctalgia in this patient. Answer: Yes, the removal of retained staples does appear to improve postoperative chronic sequelae after transanal stapled operations. A retrospective study conducted at the One-Day Surgery Unit of St. Andrea Hospital included 566 patients who underwent a stapled transanal procedure, with 165 experiencing postoperative complications. In 66 of these cases, retained staples were found and removed, leading to the resolution or improvement of symptoms in almost all cases. Only one case did not show symptom improvement after staple removal. The study concluded that the removal of retained staples is an efficacious and safe procedure to solve or improve postoperative complications and should always be considered (PUBMED:24819108). Furthermore, the study on the long-term efficacy of stapled transanal rectal resection (STARR) for obstructed defaecation syndrome (ODS) indicated that while STARR significantly improves symptoms of ODS in the short term, its long-term effectiveness is less certain. However, this study did not directly address the issue of retained staples (PUBMED:29363847). Another study reported a case where a patient experienced persistent pain, tenesmus, and fecal urgency after a STARR operation. During a subsequent Altemeier rectosigmoid resection for external rectal prolapse, several staples retained in the puborectalis muscle were found and removed, which led to full recovery of the patient. This suggests that retained staples can cause complications such as proctalgia and their removal can lead to symptom resolution (PUBMED:18060361). In summary, the evidence from the provided abstracts supports the notion that the removal of retained staples can improve postoperative chronic sequelae following transanal stapled operations.
Instruction: Is age of the patient an independent predictor influencing the management of cranio-maxillo-facial trauma? Abstracts: abstract_id: PUBMED:24842446 Is age of the patient an independent predictor influencing the management of cranio-maxillo-facial trauma? A retrospective study of 308 patients. Objective: To determine whether patient age is an independent predictor of indication for surgical treatment, realization of the surgical procedure, and care management modality. Study Design: We analyzed data of 308 patients with facial fractures. Patients were classified into four age groups: (1) 17 to 40 years old; (2) 41 to 64 years old; (3) 65 to 80 years old; (4) more than 81 years old. Results: Multivariate analysis showed that the patient's age was associated with a decreased indication and realization rate for surgery (P = .03 in Group 3; P = .05 in Group 4) and with an increased hospitalization rate (P = .004 in Group 3; P = .02 in Group 4). The presence of concomitant injuries and the fracture's location were associated with a decreased indication and realization rate for surgery regardless of age. Conclusion: The current study demonstrated that the patient's age was an independent predictor influencing the treatment decision and realization process as well as the care management modality. abstract_id: PUBMED:17003763 A therapeutic approach: the "Maxillo Family" For patients with severe facial injuries, global management from trauma to "cure" is the leitmotif in the department of maxillofacial surgery of Besançon. The progressive development of therapy groups led to the creation of the "Maxillo Family", which is more or less modelled after the well-known French association "Gueules Cassées". We also created the "Journal of Maxillo", in which every injured patient can write his own story. This seems to be a very good therapeutic approach, not only from the patients' point of view but also for practitioners and nurses. The testimony of a young woman, victim of a dramatic accident with severe facial injury, illustrates the interesting aspects of this therapeutic approach. abstract_id: PUBMED:25450733 Patient treated with sub-mental intubation for maxillofacial trauma Maxillo-facial traumas are frequent and most often occur in young patients. Naso-tracheal or orotracheal intubation may be contraindicated in cases of combined occlusal fracture and nasal or ethmoido-nasal fracture. This study reports the clinical case of a patient treated at the Lille University Hospital for maxillofacial trauma combining fractures of the nose and maxilla. The purpose was to assess the reliability of submental intubation as an alternative to tracheotomy. Submental intubation is a reliable, simple and safe technique allowing one-stage surgical treatment of complex combinations of fractures without resorting to tracheotomy. Its use should be implemented on a larger scale. abstract_id: PUBMED:37222981 Printing in Time for Cranio-Maxillo-Facial Trauma Surgery: Key Parameters to Factor in. Study Design: retrospective cohort study. Objective: 3D printing is used extensively in cranio-maxillo-facial (CMF) surgery, but difficulties remain for surgeons to implement it in an acute trauma setting because critical information is often omitted from reports. Therefore, we developed an in-house printing pipeline for a variety of cranio-maxillo-facial fractures and characterized each step required to print a model in time for surgery.
Methods: All consecutive patients requiring in-house 3D printed models in a level 1 trauma center for acute trauma surgery between March and November 2019 were identified and analyzed. Results: Sixteen patients requiring the printing of 25 in-house models were identified. Virtual Surgical Planning time ranged from 0h 08min to 4h 41min (mean = 1h 46min). The overall printing phase per model (pre-processing, printing, and post-processing) ranged from 2h 54min to 27h 24min (mean = 9h 19min). The overall success rate of prints was 84%. Filament cost was between $0.20 and $5.00 per model (mean = $1.56). Conclusions: This study demonstrates that in-house 3D printing can be done reliably in a relatively short period of time, thereby allowing 3D printing to be used for acute facial fracture treatment. When compared to outsourcing, in-house printing shortens the process by avoiding shipping delays and by providing better control over the printing process. For time-critical prints, other time-consuming steps need to be considered, such as virtual planning, pre-processing of 3D files, post-processing of prints, and the print failure rate. abstract_id: PUBMED:16274131 Fabrication of 3-dimensional skull model with rapid prototyping technique and its primary application in repairing one case of cranio-maxillo-facial trauma Objective: To investigate methods of establishing a 3-dimensional skull model using electron beam CT (EBCT) data and a rapid prototyping technique, and to discuss its application in repairing cranio-maxillo-facial trauma. Methods: The data were obtained by EBCT continuous volumetric scanning with 1.0 mm slice thickness. The data were transferred to a workstation for 3-dimensional surface reconstruction by computer-aided design software, and the images were saved as an STL file. These data can be used to control a laser rapid-prototyping device (AFS-320QZ) to construct a geometric model. The material for model construction is a kind of laser-sensitive resin powder, which solidifies into a mass when scanned by the laser beam. The design and simulation of the operation can be done on the model. The image data were transferred to the device slice by slice; thus, a geometric model is constructed from the image data by repeating this process. Preoperative analysis, surgery simulation and implant design for the bone defect could be done on this computer-aided manufactured 3D model. One case of cranio-maxillo-facial bone defect resulting from trauma was reconstructed with this method. The EBCT scanning showed that the defect area was 4 cm x 6 cm. The nose was flat and deviated to the left. Results: The 3-dimensional skull model was reconstructed with EBCT data and the rapid prototyping technique. The model can display the 3-dimensional anatomical structures and their relationships. The implant prefabricated on the 3-dimensional model was well matched to the defect. The deformities of the flat and deviated nose were corrected. The clinical result was satisfactory after a follow-up of 17 months. Conclusion: The 3-dimensional skull model can replicate the disease morphology and plays an important role in diagnosis and operation simulation for repairing cranio-maxillo-facial trauma. abstract_id: PUBMED:30512753 Diagnostic evaluation and management of a cranio-facial trauma abstract_id: PUBMED:7984132 Fractures of maxillo-facial bones in children. Our experience A clinico-statistical study of maxillo-facial fractures in a series of 52 children is reported.
This group showed a higher involvement of the mandibular body and a lower M:F ratio than reported in the literature. Factors responsible for the increase of maxillo-facial traumas in young girls and in school-age children are analyzed. abstract_id: PUBMED:6625527 Management of severe maxillo-facial injuries. Maxillo-facial injuries have increased in incidence in Singapore due mainly to road traffic accidents. Approximately 450 cases are seen annually in the Singapore General Hospital. A review of 50 consecutive cases of severe maxillo-facial injuries seen in the Department of Plastic Surgery showed that the majority were Lefort II type fractures (64%) followed by Lefort I fractures (14%) and Lefort III fractures (8%). There were seven cases which had a combination of multiple facial fractures. The significant associated injuries occurred in the limbs (32%), the head (30%) and the chest (8%). The emergency management of maxillo-facial trauma is discussed in some detail, and some of the problems in the treatment of severe or multiple facial fractures are also highlighted in this paper. abstract_id: PUBMED:7254449 Problems of technique and operative tactics in cranio-facial trauma. 229 cases (author's transl) The main difficulty in the diagnosis and treatment of fronto-basal skull fractures can be summarized in two propositions: --on the one hand, the diagnosis of cerebro-spinal fluid fistula is sometimes difficult; --on the other hand, treatment requires the intervention of several surgical teams. An anatomo-pathological classification should take into consideration the point of frontal impact, which may be medio-facial, medio-cranio-facial or latero-cranio-facial, and its sagittal extension, which may be frontal, ethmoidal or sphenoidal. Surgical indications are either urgent or based on clinical and radiological findings (rhinorrhea, pneumocephalus, meningitis, radiological suspicion of an osteo-meningeal breach). Surgical treatment aims at primary and complete repair of the neuro-surgical and maxillo-facial injuries. abstract_id: PUBMED:7212620 The pattern of maxillo-facial injuries in Singapore. This is a study of 1,404 patients with maxillo-facial injuries treated at the Dental Clinic, Singapore General Hospital during the period January 1977 to December 1979. The significant features in the pattern of injuries in this group of patients are: 1. the commonest causes of maxillo-facial injuries are falls from height and automobile accidents. 2. the age group most affected is 0 to 39 years. 3. a high incidence in males (72.2%). The various modalities of treatment are also reviewed. Recent trends in the management of complex maxillo-facial fractures have significantly reduced the period of hospitalisation. Answer: Yes, the age of the patient is an independent predictor influencing the management of cranio-maxillo-facial trauma. A retrospective study of 308 patients with facial fractures found that patient age was associated with a decreased indication and realization rate for surgery, particularly in older age groups (65 to 80 years old and more than 81 years old), as well as an increased hospitalization rate for these groups (PUBMED:24842446). This suggests that as patients age, the likelihood of them undergoing surgery for cranio-maxillo-facial trauma decreases, and they are more likely to be managed with hospitalization instead.
Other factors such as the presence of concomitant injuries and the fracture's location were also associated with a decreased indication and realization rate for surgery regardless of age.
Instruction: Does nerve examination improve diagnostic efficacy of the WHO classification of leprosy? Abstracts: abstract_id: PUBMED:18797051 Does nerve examination improve diagnostic efficacy of the WHO classification of leprosy? Background: In 1998, the WHO proposed that the clinical criterion of counting skin lesions alone should decide whether a patient receives Multibacillary (MB) or Paucibacillary (PB) MDT. There is a concern that a significant number of patients may be incorrectly treated under these guidelines. Aims: This study aims to determine whether the sensitivity and the specificity of the latest WHO criteria can be enhanced by the addition of nerve examination in place of slit skin smears. Methods: 150 patients with untreated leprosy reporting at a TLM Hospital in Delhi from January to December 2006 were registered for the study. After physical examination, the number of skin lesions and nerves involved were counted and slit skin smears performed. Two groups were created: those with > 5 skin lesions and those with 5 or fewer skin lesions. The diagnostic efficacy of the current WHO classification was calculated with and without the addition of nerve examination. Results: The sensitivity and the specificity of the current WHO operational classification are 76.6% and 73.7%, respectively, using slit skin smear as a standard. When the number of nerves was added to the diagnosis, the sensitivity increased to 94.4% for more than 5 lesions and to 90.9% for five or fewer lesions. Conclusions: Nerve examination can significantly improve the sensitivity of the WHO criterion in the determination of MB versus PB leprosy. abstract_id: PUBMED:3373086 Diagnostic efficacy of cutaneous nerve biopsy in primary neuritic leprosy. The role of nerve biopsy in the diagnosis of primary neuritic leprosy was evaluated in a study of 77 patients who had symptoms of peripheral neuropathy without hypopigmented patches, positive skin smears, or a skin biopsy consistent with leprosy. A biopsy of a representative cutaneous nerve near the site of the neurological deficit was taken for histopathological examination and acid-fast staining. Nearly half of the patients had leprosy confirmed by nerve biopsy, and the entire spectrum of leprosy was represented. No significant relationship was seen by age or sex or type of neuropathy. The duration of symptoms did not correlate with the severity of nerve damage as seen histologically. The probability of false-positive or false-negative results is discussed in light of clinical management. Being a relatively simple office procedure, a cutaneous nerve biopsy is strongly recommended as an important diagnostic tool, particularly for primary neuritic leprosy. abstract_id: PUBMED:9801901 Dacryocystographic examination does not identify early seventh nerve failure in leprosy patients. Dacryocystographic examination was performed in two groups of patients, those with leprosy and those without, in order to look for early failure of the facial nerve. The results of this study show that this kind of examination does not improve the diagnosis of leprosy. abstract_id: PUBMED:7500818 Repeatability of nerve thickness assessment in the clinical examination for leprosy. The assessment of the thickness of the superficial peripheral nerve trunks to document nerve involvement is an important aspect of clinical examination in case finding for leprosy, and is usually done by trained paramedical workers (PMWs).
This assessment is subject to variability and has implications for the outcome of the survey. The present study proposes to quantify this variability. In this study, 242 individuals, consisting of 50 neuritic cases, 143 nonneuritic cases of leprosy and 49 normal controls, selected from the records of the trial of BCG prophylaxis in leprosy in South India, were examined by a doctor and paramedical workers. Repeatability of nerve thickness assessment for ulnar and popliteal nerves between the medical officer (MO) and the PMWs was quantified using Kappa statistics. The Kappa values for repeatability between the MO and the PMWs ranged from 0.45 to 0.54 and 0.52 to 0.69 for ulnar and popliteal nerves, respectively. The implications of the variability in nerve assessment are discussed. abstract_id: PUBMED:6277344 Nerve biopsy. Advantages and limitations of modern examination techniques (author's transl) The main techniques of peripheral nerve biopsy examination, the diagnostic value of each of them, and their limits are given. The chief pathological processes and changes and the usual causes of peripheral neuropathies, the diagnosis of which can be provided by nerve biopsy, are analysed. The most frequent errors and misleading changes are emphasized. An examination scheme is set up. abstract_id: PUBMED:38000811 Epithelioid malignant peripheral nerve sheath tumour of the ulnar nerve around the elbow: a diagnostic and therapeutic challenge. Epithelioid malignant peripheral nerve sheath tumour (EMPNST) is a rare histological subtype of malignant peripheral nerve sheath tumour (MPNST), accounting for 5% to 17% of MPNSTs. The clinical and MRI findings of EMPNST mimic those of nerve abscesses, similar to the presentation in Hansen's disease. We present one such case with this kind of diagnostic dilemma. Intraoperative findings suggestive of a tumour subsequently changed the course of management. The development of neurological deficits after tumour resection posed a reconstructive challenge. To provide motor power and sensation through a procedure offering a complete functional outcome for a young patient, distal nerve transfers were chosen. This improved the quality of life and hastened the neurological recovery of the involved limb. Level of evidence: V. abstract_id: PUBMED:22415265 Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency. Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group.
Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data. abstract_id: PUBMED:8341113 Concurrent skin and nerve histology in leprosy and its role in the classification of leprosy. Concurrent skin and nerve histology was evaluated in 60 leprosy patients (25 BT, 28 BL and 7 LL). The twin aims were to study the comparative histology and the usefulness of nerve histology in the classification of the disease. In BT patients, clinical and histological classification was in agreement in 11 (44%) skin and 17 (68%) nerve biopsies. Concurrent skin and nerve histology was in consonance in 14 (56%) BT patients, while in 6 (24%) patients, only nerve histology was helpful in the classification of the disease, the skin histology being non-specific. Nerve histology was classified as BL in 3 (12%) BT patients, the skin histology was non-specific. In the BL group, the histology of 23 (82.4%) nerve biopsies correlated with the clinical classification, in contrast to skin histology which correlated with clinical assessment in 19 (68%) patients only. In the LL patients, the histology of nerve correlated with the clinical classification in 5 patients (71.4%), compared to histology of the skin in 4 (57%) patients only. The GF was higher in the nerves than in the skin throughout the leprosy spectrum (BT, BL, LL); the difference was, however, marginal in BL leprosy. The average bacteriological index (BI) was higher in nerves (4+) compared to that of skin histology and slit skin smears (3+) in BL leprosy. There was, however, no difference in the BI of the slit skin smears, skin and nerve biopsies in lepromatous leprosy.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:33769731 High-resolution ultrasound in the assessment of peripheral nerves in leprosy: A comparative cross-sectional study. Background: Detection of peripheral nerve thickening and nerve function impairment is crucial in the diagnosis and the management of leprosy. Aims And Objectives: (1) To document the cross-sectional area, echotexture and blood flow of peripheral nerves in healthy controls and leprosy cases using high-resolution ultrasound, (2) to compare the sensitivities of clinical examination and high-resolution ultrasound in detecting peripheral nerve thickening in leprosy. Methods: Peripheral nerves of 30 leprosy patients and 30 age- and sex-matched controls were evaluated clinically and by high-resolution ultrasound. When the cross-sectional area of a peripheral nerve on high-resolution ultrasound in a leprosy patient was more than the calculated upper bound of the 95% confidence interval for mean for that specific nerve in controls, that particular peripheral nerve was considered to be enlarged. Results: Cross-sectional areas more than 7.1 mm2 for the radial nerve, 8.17 mm2 for ulnar, 10.17 mm2 for median, 9.50 mm2 for lateral popliteal and 11.21mm2 for the posterior tibial nerve were considered as nerve thickening on high-resolution ultrasound. High-resolution ultrasound detected 141/300 (47%) nerves enlarged in contrast to the 60 (20%) diagnosed clinically by palpation (P < 0.001). 
Clinical examination identified thickening in 31/70 (44.3%) nerves in cases with impairment of nerve function and in 29/230 (12.6%) in the absence of nerve function impairment. High-resolution ultrasound detected thickening in 50/70 (71.4%) nerves with impairment of function and in 91/230 (39.6%) nerves without any impairment of function. Limitation: A single-centre study design was the major study limitation. Conclusion: High-resolution ultrasound showed greater sensitivity than clinical examination in detecting peripheral nerve thickening in leprosy cases. High-resolution ultrasound may therefore improve the sensitivity of the diagnostic criterion of peripheral nerve enlargement in the diagnosis and classification of leprosy. abstract_id: PUBMED:33935387 Cytomorphological Patterns of Nerve Aspirates in Pure Neuritic Leprosy-A Single Centre Cross-Sectional Observational Study. Background: Pure neuritic leprosy (PNL) poses a diagnostic challenge because of the absence of skin patches and the inconclusive nature of skin biopsies and nerve conduction studies. Nerve biopsy, though the diagnostic gold standard, is invasive, requires expertise, and may not be feasible in all cases. Fine needle aspiration cytology (FNAC) of accessible thickened nerves can be utilized as a minimally invasive diagnostic modality in PNL. This study was carried out to describe the cytomorphological patterns of nerve aspirates in patients with PNL for the diagnosis and classification of leprosy and to study its advantage, if any, over skin biopsy. Methods: Twenty-seven treatment-naive, clinically diagnosed patients with PNL were included in this cross-sectional study carried out from January 2017 to December 2018 at a tertiary care centre in Western India. FNAC was performed on a clinically involved nerve, and aspirates were evaluated for cytomorphological characteristics and the presence of Acid-Fast Lepra bacilli. Results: Nerve aspirates were diagnostic in 10 (37%) patients, while 17 (63%) aspirates showed non-specific or no inflammation. Of the diagnostic aspirates, six (22.2%) were classified as tuberculoid leprosy, three (11.1%) as lepromatous and one (3.7%) as borderline leprosy. Mycobacterium leprae was demonstrated in three (11.1%) of these aspirates. In comparison, only three (11.1%) skin biopsies were diagnostic of leprosy, with features of the indeterminate spectrum. The remaining 24 skin biopsies showed findings ranging from normal histology in 20 (74.1%) cases to perivascular lymphocytic infiltrate in four (14.8%) cases. Conclusion: Our study demonstrates that FNAC of clinically thickened nerves has a better diagnostic yield than skin biopsy in PNL and can show the full spectrum of leprosy. It also offers the advantage of sampling major nerve trunks without the fear of residual neurological deficit. However, most of the smears were paucicellular, and a negative aspirate does not rule out leprosy.
Specifically, the sensitivity increased to 94.4% for patients with more than 5 lesions and to 90.9% for patients with 5 or fewer lesions, compared to the sensitivity and specificity of the current WHO operational classification, which are 76.6% and 73.7% respectively, using slit skin smear as a standard (PUBMED:18797051). This indicates that nerve examination can significantly improve the accuracy of determining whether a patient should be treated for Multibacillary (MB) or Paucibacillary (PB) leprosy under the WHO guidelines.
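Note on the figures quoted in this record: the sensitivity and specificity of the lesion-count classification against a slit-skin-smear standard (PUBMED:18797051), the kappa statistic for inter-observer repeatability of nerve palpation (PUBMED:7500818), and the ultrasound enlargement cut-off defined as the upper bound of the 95% confidence interval for the control mean (PUBMED:33769731) all follow standard arithmetic. The sketch below is purely illustrative; the helper names and all counts are hypothetical and are not data from the cited studies.

    # Illustrative only: standard diagnostic-agreement arithmetic; all counts are invented.

    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    def cohens_kappa(a, b, c, d):
        """Cohen's kappa for two raters on a 2x2 table.
        a = both rate positive, b = rater 1 only, c = rater 2 only, d = both negative."""
        n = a + b + c + d
        p_obs = (a + d) / n                                       # observed agreement
        p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # agreement expected by chance
        return (p_obs - p_exp) / (1 - p_exp)

    def ci95_upper_bound(mean, sd, n, z=1.96):
        """Upper bound of the 95% CI for a mean (the nerve-enlargement cut-off rule)."""
        return mean + z * sd / n ** 0.5

    # Hypothetical counts for a classification compared against slit skin smears
    sens, spec = sensitivity_specificity(tp=80, fn=20, tn=70, fp=30)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")

    # Hypothetical agreement table for two examiners palpating the same nerves
    print(f"kappa = {cohens_kappa(a=30, b=10, c=8, d=52):.2f}")

    # Hypothetical control data for one nerve: cut-off above which it is called enlarged
    print(f"cut-off = {ci95_upper_bound(mean=7.0, sd=1.5, n=30):.2f} mm^2")

For orientation only, kappa values in the 0.45-0.69 range reported above would correspond to moderate-to-substantial agreement under the commonly used Landis and Koch bands.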
Instruction: Do novel genes drive morphological novelty? Abstracts: abstract_id: PUBMED:30135167 A morphological novelty evolved by co-option of a reduced gene regulatory network and gene recruitment in a beetle. The mechanisms underlying the evolution of morphological novelties have remained enigmatic but co-option of existing gene regulatory networks (GRNs), recruitment of genes and the evolution of orphan genes have all been suggested to contribute. Here, we study a morphological novelty of beetle pupae called gin-trap. By combining the classical candidate gene approach with unbiased screening in the beetle Tribolium castaneum, we find that 70% of the tested components of the wing network were required for gin-trap development. However, many downstream and even upstream components were not included in the co-opted network. Only one gene was recruited from another biological context, but it was essential for the anteroposterior symmetry of the gin-traps, which represents a gin-trap-unique morphological innovation. Our data highlight the importance of co-option and modification of GRNs. The recruitment of single genes may not be frequent in the evolution of morphological novelties, but may be essential for subsequent diversification of the novelties. Finally, after having screened about 28% of annotated genes in the Tribolium genome to identify the genes required for gin-trap development, we found none of them are orphan genes, suggesting that orphan genes may have played only a minor, if any, role in the evolution of gin-traps. abstract_id: PUBMED:27216622 Do novel genes drive morphological novelty? An investigation of the nematosomes in the sea anemone Nematostella vectensis. Background: The evolution of novel genes is thought to be a critical component of morphological innovation but few studies have explicitly examined the contribution of novel genes to the evolution of novel tissues. Nematosomes, the free-floating cellular masses that circulate through the body cavity of the sea anemone Nematostella vectensis, are the defining apomorphy of the genus Nematostella and are a useful model for understanding the evolution of novel tissues. Although many hypotheses have been proposed, the function of nematosomes is unknown. To gain insight into their putative function and to test hypotheses about the role of lineage-specific genes in the evolution of novel structures, we have re-examined the cellular and molecular biology of nematosomes. Results: Using behavioral assays, we demonstrate that nematosomes are capable of immobilizing live brine shrimp (Artemia salina) by discharging their abundant cnidocytes. Additionally, the ability of nematosomes to engulf fluorescently labeled bacteria (E. coli) reveals the presence of phagocytes in this tissue. Using RNA-Seq, we show that the gene expression profile of nematosomes is distinct from that of the tentacles and the mesenteries (their tissue of origin) and, further, that nematosomes (a Nematostella-specific tissue) are enriched in Nematostella-specific genes. Conclusions: Despite the small number of cell types they contain, nematosomes are distinct among tissues, both functionally and molecularly. We provide the first evidence that nematosomes comprise part of the innate immune system in N. vectensis, and suggest that this tissue is potentially an important place to look for genes associated with pathogen stress. 
Finally, we demonstrate that Nematostella-specific genes comprise a significant proportion of the differentially expressed genes in all three of the tissues we examined and may play an important role in novel cell functions. abstract_id: PUBMED:25351750 Large-scale coding sequence change underlies the evolution of postdevelopmental novelty in honey bees. Whether coding or regulatory sequence change is more important to the evolution of phenotypic novelty is one of biology's major unresolved questions. The field of evo-devo has shown that in early development changes to regulatory regions are the dominant mode of genetic change, but whether this extends to the evolution of novel phenotypes in the adult organism is unclear. Here, we conduct ten RNA-Seq experiments across both novel and conserved tissues in the honey bee to determine to what extent postdevelopmental novelty is based on changes to the coding regions of genes. We make several discoveries. First, we show that with respect to novel physiological functions in the adult animal, positively selected tissue-specific genes of high expression underlie novelty by conferring specialized cellular functions. Such genes are often, but not always taxonomically restricted genes (TRGs). We further show that positively selected genes, whether TRGs or conserved genes, are the least connected genes within gene expression networks. Overall, this work suggests that the evo-devo paradigm is limited, and that the evolution of novelty, postdevelopment, follows additional rules. Specifically, evo-devo stresses that high network connectedness (repeated use of the same gene in many contexts) constrains coding sequence change as it would lead to negative pleiotropic effects. Here, we show that in the adult animal, the converse is true: Genes with low network connectedness (TRGs and tissue-specific conserved genes) underlie novel phenotypes by rapidly changing coding sequence to perform new-specialized functions. abstract_id: PUBMED:32359002 A multidimensional framework for measuring biotic novelty: How novel is a community? Anthropogenic changes in climate, land use, and disturbance regimes, as well as introductions of non-native species can lead to the transformation of many ecosystems. The resulting novel ecosystems are usually characterized by species assemblages that have not occurred previously in a given area. Quantifying the ecological novelty of communities (i.e., biotic novelty) would enhance the understanding of environmental change. However, quantification remains challenging since current novelty metrics, such as the number and/or proportion of non-native species in a community, fall short of considering both functional and evolutionary aspects of biotic novelty. Here, we propose the Biotic Novelty Index (BNI), an intuitive and flexible multidimensional measure that combines (a) functional differences between native and non-native introduced species with (b) temporal dynamics of species introductions. We show that the BNI is an additive partition of Rao's quadratic entropy, capturing the novel interaction component of the community's functional diversity. Simulations show that the index varies predictably with the relative amount of functional novelty added by recently arrived species, and they illustrate the need to provide an additional standardized version of the index. 
We present a detailed R code and two applications of the BNI by (a) measuring changes of biotic novelty of dry grassland plant communities along an urbanization gradient in a metropolitan region and (b) determining the biotic novelty of plant species assemblages at a national scale. The results illustrate the applicability of the index across scales and its flexibility in the use of data of different quality. Both case studies revealed strong connections between biotic novelty and increasing urbanization, a measure of abiotic novelty. We conclude that the BNI framework may help building a basis for better understanding the ecological and evolutionary consequences of global change. abstract_id: PUBMED:24376428 Novelty or surprise? Novelty and surprise play significant roles in animal behavior and in attempts to understand the neural mechanisms underlying it. They also play important roles in technology, where detecting observations that are novel or surprising is central to many applications, such as medical diagnosis, text processing, surveillance, and security. Theories of motivation, particularly of intrinsic motivation, place novelty and surprise among the primary factors that arouse interest, motivate exploratory or avoidance behavior, and drive learning. In many of these studies, novelty and surprise are not distinguished from one another: the words are used more-or-less interchangeably. However, while undeniably closely related, novelty and surprise are very different. The purpose of this article is first to highlight the differences between novelty and surprise and to discuss how they are related by presenting an extensive review of mathematical and computational proposals related to them, and then to explore the implications of this for understanding behavioral and neuroscience data. We argue that opportunities for improved understanding of behavior and its neural basis are likely being missed by failing to distinguish between novelty and surprise. abstract_id: PUBMED:30294344 Taxonomically Restricted Genes Are Fundamental to Biology and Evolution. Genes limited to particular clades, taxonomically restricted genes (TRGs), are common in all sequenced genomes. TRGs have recently become associated with the evolution of novelty, as numerous studies across the tree of life have now linked expression of TRGs with novel phenotypes. However, TRGs that underlie ancient lineage specific traits have been largely omitted from discussions of the general importance of TRGs. Here it is argued that when all TRGs are considered, it is apparent that TRGs are fundamental to biology and evolution and likely play many complementary roles to the better understood toolkit genes. Genes underlying photosynthesis and skeletons, for example, are examples of commonplace fundamental TRGs. Essentially, although basic cell biology has a highly conserved genetic basis across the tree of life, most major clades also have lineage specific traits central to their biology and these traits are often based on TRGs. In short, toolkit genes underlie what is conserved across organisms, while TRGs define in many cases what is unique. An appreciation of the importance of TRGs will improve our understanding of evolution by triggering the study of neglected topics in which TRGs are of paramount importance. abstract_id: PUBMED:27493475 Computational Identification of Novel Genes: Current and Future Perspectives. 
While it has long been thought that all genomic novelties are derived from the existing material, many genes lacking homology to known genes were found in recent genome projects. Some of these novel genes were proposed to have evolved de novo, ie, out of noncoding sequences, whereas some have been shown to follow a duplication and divergence process. Their discovery called for an extension of the historical hypotheses about gene origination. Besides the theoretical breakthrough, increasing evidence accumulated that novel genes play important roles in evolutionary processes, including adaptation and speciation events. Different techniques are available to identify genes and classify them as novel. Their classification as novel is usually based on their similarity to known genes, or lack thereof, detected by comparative genomics or against databases. Computational approaches are further prime methods that can be based on existing models or leveraging biological evidences from experiments. Identification of novel genes remains however a challenging task. With the constant software and technologies updates, no gold standard, and no available benchmark, evaluation and characterization of genomic novelty is a vibrant field. In this review, the classical and state-of-the-art tools for gene prediction are introduced. The current methods for novel gene detection are presented; the methodological strategies and their limits are discussed along with perspective approaches for further studies. abstract_id: PUBMED:26852869 Exploration of a novel object in late adolescence predicts novelty-seeking behavior in adulthood: Associations among behavioral responses in four novelty-seeking tests. Unlabelled: The sensation/novelty seeking behavioral trait refers to the exploration/preference for a novel environment. Novelty seeking increases during late adolescence and it has been associated with several neurobehavioral disorders. In this experiment, we asked whether inbred Roman high- and low-avoidance (RHA-I, RLA-I) rats (1) differ in novelty seeking in late adolescence and (2) whether late adolescent novelty seeking predicts this trait in adulthood. Thirty six male RHA-I and 36 RLA-I rats were exposed to a novel object exploration (NOE) test during late adolescence (pnd: 52-59; Dependent Variables: contact latency, contact time, contact frequency). Head-dipping (hole-board, HB), time and visits to a novel-arm (Y-maze), and latency-in and emergence latency (emergence test) were registered in adulthood (pnd: 83-105). The results showed strain differences in all these tests (RHA-I>RLA-I). Factor analysis (RHA-I+RLA-I) revealed two clusters. The first one grouped HB and emergence test measures. The second one grouped NOE and Y-maze variables. Time exploring a novel object (NOE) was a significant predictor of novel arm time (RHA-I+RLA, RHA-I); contact latency was a significant predictor of novel arm frequency (RLA-I). Present results show consistent behavioral associations across four novelty-seeking tests and suggest that late adolescent novelty seeking predicts this genetically-influenced temperamental trait in adult Roman rats. abstract_id: PUBMED:35659164 Clade-specific genes and the evolutionary origin of novelty; new tools in the toolkit. Clade-specific (a.k.a. lineage-specific) genes are very common and found at all taxonomic levels and in all clades examined. 
They can arise by duplication of previously existing genes, which can involve partial truncations or combinations with other protein domains or regulatory sequences. They can also evolve de novo from non-coding sequences, leading to potentially truly novel protein domains. Finally, since clade-specific genes are generally defined by lack of sequence homology with other proteins, they can also arise by sequence evolution that is rapid enough that previous sequence homology can no longer be detected. In such cases, where the rapid evolution is followed by constraint, we consider them to be ontologically non-novel but likely novel at a functional level. In general, clade-specific genes have received less attention from biologists but there are increasing numbers of fascinating examples of their roles in important traits. Here we review some selected recent examples, and argue that attention to clade-specific genes is an important corrective to the focus on the conserved developmental regulatory toolkit that has been the habit of evo-devo as a field. Finally, we discuss questions that arise about the evolution of clade-specific genes, and how these might be addressed by future studies. We highlight the hypothesis that clade-specific genes are more likely to be involved in synapomorphies that arose in the stem group where they appeared, compared to other genes. abstract_id: PUBMED:32174872 The Experience of Novelty and the Novelty of Experience. In cognitive psychology novelty is an antecedent of attention, emotion, memory, and behavior. However, the relationship between novelty and experience memorability remains conceptually underdeveloped in tourism. This research applies cognitive appraisal theory (CAT) to explore the contribution of novelty and emotion to memorable tourism experiences (MTEs). Seventy-five novel travel episodes were identified through semi-structured interviews. Analysis focused on the antecedent and consequent conditions of novelty. Novel experiences, whether positive or negative, were identified as critical to experience memorability. Novelty could be segmented into trip-related and event-related dimensions. Novelty contributes to how spatial, temporal, and contextual details of tourism experiences are remembered and reconstructed due to the elicitation of intense emotions. Analysis revealed negative experiences deemed as novel were found to be re-evaluated and often remembered as a positive experience. A conceptual model titled "cognitive appraisal of novelty in memorable tourism experiences" is presented for consideration in future research. By applying a retrospective and prospective approach the conceptual model explores the role of novelty through the process of cognitive appraisal, identifying goals, attention, and prior experiences as central for the experience of novelty. Future research should consider the application of recent advance in CAT to advance inquiry on tourism experiences as a psychological phenomenon. Answer: The relationship between novel genes and morphological novelty is a subject of ongoing research and debate in evolutionary biology. Several studies have investigated this relationship in different organisms and contexts. In the case of the beetle Tribolium castaneum, researchers found that the development of a morphological novelty called gin-traps involved the co-option of a reduced gene regulatory network (GRN) and the recruitment of a single gene from another biological context. 
This single recruited gene was essential for the anteroposterior symmetry of the gin-traps, which represents a unique morphological innovation. However, the study did not find any orphan genes (genes without known homologs) involved in the development of gin-traps, suggesting that orphan genes may have played only a minor role, if any, in their evolution (PUBMED:30135167). In the sea anemone Nematostella vectensis, researchers investigated nematosomes, a novel tissue, and found that they are enriched in Nematostella-specific genes. This suggests that lineage-specific genes may play an important role in the evolution of novel cell functions and tissues (PUBMED:27216622). A study on honey bees revealed that novel physiological functions in the adult animal are underpinned by positively selected tissue-specific genes of high expression, which confer specialized cellular functions. These genes include both taxonomically restricted genes (TRGs) and conserved genes, indicating that changes to the coding regions of genes are important for the evolution of postdevelopmental novelty (PUBMED:25351750). Another study highlighted the importance of TRGs in biology and evolution, arguing that they are fundamental to the evolution of novelty and often underlie lineage-specific traits that are central to the biology of major clades (PUBMED:30294344). In summary, while novel genes, including TRGs and lineage-specific genes, are implicated in the evolution of morphological novelties, the extent to which they drive such novelties can vary. Co-option of existing GRNs, gene recruitment, and coding sequence changes all contribute to the evolution of novel morphologies. The role of orphan genes appears to be less significant in some cases, but this may not be universally applicable across all organisms and evolutionary events.
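For readers who want to see the classification logic described in the gene-identification and taxonomically restricted gene abstracts above in concrete form, the short Python sketch below labels a gene as taxonomically restricted, and hence a candidate novel gene, when no sufficiently strong homolog is detected outside the focal clade. The hit table and the E-value cutoff are hypothetical placeholders, not values from any cited study.

# Illustrative only: classify genes as putatively taxonomically restricted
# when their best similarity hit outside the focal clade is weaker than a cutoff.
# The data structure and the 1e-5 E-value threshold are assumptions for this sketch.

best_hits = {
    # gene_id: best E-value of any hit found outside the focal clade
    # (None means no hit was returned at all)
    "gene_001": 1e-42,
    "gene_002": None,
    "gene_003": 3e-4,
}

EVALUE_CUTOFF = 1e-5  # hits weaker than this are treated as "no detectable homology"

def is_taxonomically_restricted(best_outside_evalue, cutoff=EVALUE_CUTOFF):
    """Return True when no sufficiently strong homolog exists outside the clade."""
    return best_outside_evalue is None or best_outside_evalue > cutoff

for gene, evalue in best_hits.items():
    label = "restricted/putatively novel" if is_taxonomically_restricted(evalue) else "conserved"
    print(gene, label)

As the clade-specific-gene review above notes, absence of detectable homology can also reflect rapid sequence divergence, so a filter like this flags candidates rather than demonstrating de novo origin.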
Instruction: Increased glutathione conjugate transport: a possible compensatory protection mechanism against oxidative stress in obesity? Abstracts: abstract_id: PUBMED:26851532 Oxidative stress and metabolic disorders: Pathogenesis and therapeutic strategies. Increased body weight and metabolic disorder including insulin resistance, type 2 diabetes and cardiovascular complications together constitute metabolic syndrome. The pathogenesis of metabolic syndrome involves multitude of factors. A number of studies however indicate, with some conformity, that oxidative stress along with chronic inflammatory condition pave the way for the development of metabolic diseases. Oxidative stress, a state of lost balance between the oxidative and anti-oxidative systems of the cells and tissues, results in the over production of oxidative free radicals and reactive oxygen species (ROS). Excessive ROS generated could attack the cellular proteins, lipids and nucleic acids leading to cellular dysfunction including loss of energy metabolism, altered cell signalling and cell cycle control, genetic mutations, altered cellular transport mechanisms and overall decreased biological activity, immune activation and inflammation. In addition, nutritional stress such as that caused by high fat high carbohydrate diet also promotes oxidative stress as evident by increased lipid peroxidation products, protein carbonylation, and decreased antioxidant system and reduced glutathione (GSH) levels. These changes lead to initiation of pathogenic milieu and development of several chronic diseases. Studies suggest that in obese person oxidative stress and chronic inflammation are the important underlying factors that lead to development of pathologies such as carcinogenesis, obesity, diabetes, and cardiovascular diseases through altered cellular and nuclear mechanisms, including impaired DNA damage repair and cell cycle regulation. Here we discuss the aspects of metabolic disorders-induced oxidative stress in major pathological conditions and strategies for their prevention and therapy. abstract_id: PUBMED:37569463 Role of Oxidative Stress on Insulin Resistance in Diet-Induced Obesity Mice. Insulin resistance is the link between obesity and type 2 diabetes mellitus. The molecular mechanism by which obese individuals develop insulin resistance has not yet been fully elucidated; however, inconclusive and contradictory studies have shown that oxidative stress may be involved in the process. Thus, this study aimed to evaluate the effect of reactive species on the mechanism of insulin resistance in diet-induced obese mice. Obese insulin-resistant mice were treated with N-acetylcysteine (NAC; 50 mg/kg per day, for 15 days) by means of oral gavage. Twenty-four hours after the last NAC administration, the animals were euthanized and their tissues were extracted for biochemical and molecular analyses. NAC supplementation induced improved insulin resistance and fasting glycemia, without modifications in food intake, body weight, and adiposity. Obese mice showed increased dichlorofluorescein (DCF) oxidation, reduced catalase (CAT) activity, and reduced glutathione levels (GSH). However, treatment with NAC increased GSH and CAT activity and reduced DCF oxidation. The gastrocnemius muscle of obese mice showed an increase in nuclear factor kappa B (NFκB) and protein tyrosine phosphatase (PTP1B) levels, as well as c-Jun N-terminal kinase (JNK) phosphorylation compared to the control group; however, NAC treatment reversed these changes. 
Considering the molecules involved in insulin signaling, there was a reduction in insulin receptor substrate (IRS) and protein kinase B (Akt) phosphorylation. However, NAC administration increased IRS and Akt phosphorylation and IRS/PI3k (phosphoinositide 3-kinase) association. The results demonstrated that oxidative stress-associated obesity could be a mechanism involved in insulin resistance, at least in this animal model. abstract_id: PUBMED:29452783 Oxidative stress is increased in sarcopenia and associated with cardiovascular disease risk in sarcopenic obesity. Objectives: To define whether circulating markers of oxidative stress correlate with sarcopenia in terms of glutathione balance and oxidative protein damage, and whether these biomarkers are associated with risk of cardiovascular disease (CVD). Study Design: Population-based cross-sectional study. 115 out of 347 elderly subjects were classified as non-sarcopenic non-obese (NS-NO), sarcopenic non-obese (S-NO), non-sarcopenic obese (NS-O), and sarcopenic obese (S-O). Main Outcome Measurements: Sarcopenia was defined as a relative skeletal muscle mass index (RASM) <7.25kg/m2 for men or <5.67kg/m2 for women, while obesity was diagnosed in those presenting with% fat >27 for men or >38 for women. The CVD risk was estimated by the carotid intima-media thickness (IMT) and the Framingham score. Blood reduced glutathione (GSH), oxidized glutathione (GSSG), plasma malondialdehyde-(MDA) and 4-hydroxy-2,3-nonenal-(HNE) protein adducts were analyzed. Results: Significantly greater blood GSSG/GSH ratio and plasma MDA/HNE protein adducts were observed in sarcopenic than in non-sarcopenic patients. A logistic regression model showed a close relationship between serum HNE and MDA adducts and sarcopenia (OR=1.133, 95% CI 1.057-1.215, and OR=1.592, 95% CI 1.015-1.991, respectively). Linear and logistic regression analysis evidenced strong associations between the IMT or the Framingham CVD risk category and blood GSSG/GSH or serum HNE protein adducts in the S-O group. Conclusion: Circulating markers of oxidative stress are increased in sarcopenia and related to CVD risk in sarcopenic obesity, suggesting that redox balance analysis would be a useful part of a multidimensional evaluation in aging. Further research is encouraged to support interventional strategies to correct redox imbalance, which might contribute to the prevention or at least limitation of sarcopenia and its co-morbidities. abstract_id: PUBMED:22574914 A matter of fat: insulin resistance and oxidative stress. Background: Obesity is linked to insulin resistance (IR), which can lead to type 2 diabetes mellitus. Oxidative stress present in early obesity may favor the progression to comorbid conditions. Objective: To examine the relationship between oxidative stress biomarkers and the severity of IR in a group of obese children. Methods: Forty obese children with a body mass index (BMI) Z-score ≥ 2 were divided into two groups using the median obtained for the homeostasis model assessment of IR (HOMA-IR). Anthropometric parameters (including body fat composition by bioelectrical impedance) and biochemical parameters were assessed. The following biomarkers of oxidative stress were measured: malondialdehyde (MDA), carbonyl groups (CG), reduced glutathione, oxidized low-density lipoprotein, and vitamin E. Comparisons were adjusted for gender and Tanner stage. Results: Children with high values of HOMA-IR were more likely to have high body fat percentage and waist circumferences. 
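For reference, the homeostasis model assessment of insulin resistance (HOMA-IR) used to split the cohort above is conventionally calculated as fasting insulin (µU/mL) multiplied by fasting glucose (mmol/L) and divided by 22.5 (with glucose in mg/dL the divisor becomes 405). The small sketch below only illustrates that arithmetic with made-up values; it is not data from the cited study.

def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    # Conventional HOMA-IR formula: (glucose [mmol/L] * insulin [uU/mL]) / 22.5
    return (fasting_glucose_mmol_l * fasting_insulin_uU_ml) / 22.5

# Hypothetical example values, not taken from the cited cohort
print(round(homa_ir(5.2, 14.0), 2))  # -> 3.24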
However, the BMI Z-score did not correlate to the level of IR. Children with higher values of HOMA-IR presented increased levels of markers of oxidative stress in lipids (MDA, p = 0.005) and proteins (CG, p = 0.015). Moreover, MDA increased with increasing levels of HOMA-IR (r = 0.50, p = 0.002), suggesting that lipoperoxidation increases as IR worsens. In a multivariate regression model, only HOMA-IR was predictive of MDA values, irrespective of adiposity parameters and other metabolic risk factors (r2 = 0.22, p = 0.002). Conclusions: Oxidative stress increases in obese children according to the severity of IR, which could be linked to the development of comorbidities. abstract_id: PUBMED:24698544 Oxidative stress and maternal obesity: feto-placental unit interaction. Objective: To determine oxidative stress markers in maternal obesity during pregnancy and to evaluate feto-placental unit interaction, especially predictors of fetal metabolic alterations. Patients And Methods: 40 obese pregnant women (prepregnancy BMI > 30 kg/m²) were compared to 50 control pregnant women. Maternal, cord blood and placenta samples were collected at delivery. Biochemical parameters (total cholesterol and triglycerides) and oxidative stress markers (malondialdehyde, carbonyl proteins, superoxide anion expressed as reduced Nitroblue Tetrazolium, nitric oxide expressed as nitrite, reduced glutathione, catalase, superoxide dismutase) were assayed by biochemical methods. Results: Maternal, fetal and placental triglyceride levels were increased in obese group compared to control. Maternal malondialdehyde, carbonyl proteins, nitric oxide and superoxide anion levels were high while reduced glutathione concentrations and superoxide dismutase activity were low in obesity. In the placenta and in newborns of these obese mothers, variations of redox balance were also observed indicating high oxidative stress. Maternal and placental interaction constituted a strong predictor of fetal redox variations in obese pregnancies. Discussion: Maternal obesity compromised placental metabolism and antioxidant status which strongly impacted fetal redox balance. Oxidative stress may be one of the key downstream mediators that initiate programming of the offspring. Conclusion: Maternal obesity is associated with metabolic alterations and dysregulation of redox balance in the mother-placenta - fetus unit. These perturbations could lead to maternal and fetal complications and should be carefully considered. abstract_id: PUBMED:28569623 Increased total DNA damage and oxidative stress in brain are associated with decreased longevity in high sucrose diet fed WNIN/Gr-Ob obese rats. Background: Obesity and Type 2 Diabetes (T2D) are chronic nutrient-related disorders that occur together and pose a grave burden to society. They are among the most common causes of ageing and death. Obesity and T2D per se accelerate ageing albeit the underlying mechanisms are unclear yet. Also, it is not clear whether or not superimposing T2D on obesity accelerates ageing. Present study validated the hypothesis, 'super-imposing T2D on obesity accelerates ageing' in WNIN/Gr-Ob, the impaired glucose tolerant, obese rat as the model and evaluated probable underlying mechanisms. Objectives: To estimate the survival analysis of WNIN/Gr-Ob rats induced with T2D. 
To determine the extent of DNA damage and oxidative stress in the brain, the master controller of the body, in WNIN/Gr-Ob rats with/without high sucrose induced T2D/aggravation of insulin resistance (IR) after 3 and 6 months of feeding. Methods: T2D was induced/IR was aggravated by feeding high sucrose diet (HSD) to 9-10 weeks old, male WNIN/Gr-Ob rats. Survival percentage was determined statistically by Kaplan-Meier estimator. Neuronal DNA damage was quantified by the Comet assay while the oxidative stress and antioxidant status were evaluated from the levels of malonaldialdehyde, reduced glutathione, and superoxide dismutase (SOD) activity. Results And Discussion: HSD feeding decreased longevity of WNIN/Gr-Ob rats and was associated with significantly higher total neuronal DNA damage after three months of feeding but not later. In line with this was the increased neuronal oxidative stress (lipid peroxidation) and decreased antioxidant status (reduced glutathione and SOD activity) in HSD than Starch-based diet (SBD) fed rats. The results suggest that HSD feeding decreased the longevity of WNIN/Gr-Ob obese rats probably by increasing oxidative stress and aggravating IR, a condition that precedes T2D. abstract_id: PUBMED:29616223 Geraniin Protects High-Fat Diet-Induced Oxidative Stress in Sprague Dawley Rats. Geraniin, a hydrolysable polyphenol derived from Nephelium lappaceum L. fruit rind, has been shown to possess significant antioxidant activity in vitro and recently been recognized for its therapeutic potential in metabolic syndrome. This study investigated its antioxidative strength and protective effects on organs in high-fat diet (HFD)-induced rodents. Rats were fed HFD for 6 weeks to induce obesity, followed by 10 and 50 mg/kg of geraniin supplementation for 4 weeks to assess its protective potential. The control groups were maintained on standard rat chows and HFD for the same period. At the 10th week, oxidative status was assessed and the pancreas, liver, heart and aorta, kidney, and brain of the Sprague Dawley rats were harvested and subjected to pathological studies. HFD rats demonstrated changes in redox balance; increased protein carbonyl content, decreased levels of superoxide dismutase, glutathione peroxidase, and glutathione reductase with a reduction in the non-enzymatic antioxidant mechanisms and total antioxidant capacity, indicating a higher oxidative stress (OS) index. In addition, HFD rats demonstrated significant diet-induced changes particularly in the pancreas. Four-week oral geraniin supplementation, restored the OS observed in the HFD rats. It was able to restore OS biomarkers, serum antioxidants, and the glutathione redox balance (reduced glutathione/oxidized glutathione ratio) to levels comparable with that of the control group, particularly at dosage of 50 mg geraniin. Geraniin was not toxic to the HFD rats but exhibited protection against glucotoxicity and lipotoxicity particularly in the pancreas of the obese rodents. It is suggested that geraniin has the pharmaceutical potential to be developed as a supplement to primary drugs in the treatment of obesity and its pathophysiological sequels. abstract_id: PUBMED:16814277 Effects of N-acetylcysteine on sucrose-rich diet-induced hyperglycaemia, dyslipidemia and oxidative stress in rats. This study examined whether sucrose-rich diet (SRD)-induced hyperglycaemia, dyslipidemia and oxidative stress may be inhibited by N-acetylcysteine (C(5)H(9)-NO(3)S), an organosulfur from Allium plants. 
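The WNIN/Gr-Ob abstract above reports survival estimated with the Kaplan-Meier method. A minimal illustration of that kind of analysis in Python, assuming a per-animal table with follow-up time, an event indicator and a diet label (all column names and values here are hypothetical, not the authors' data or code), could look like this:

import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical layout: one row per animal
df = pd.DataFrame({
    "weeks": [40, 55, 62, 70, 48, 66, 72, 80],
    "died":  [1, 1, 0, 1, 1, 0, 1, 0],   # 1 = death observed, 0 = censored
    "diet":  ["HSD", "HSD", "HSD", "HSD", "SBD", "SBD", "SBD", "SBD"],
})

kmf = KaplanMeierFitter()
for diet, grp in df.groupby("diet"):
    kmf.fit(grp["weeks"], event_observed=grp["died"], label=diet)
    print(diet, "median survival:", kmf.median_survival_time_)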
Forty male Wistar rats were divided into four groups (n=10): (C) given standard chow and water; (N) receiving standard chow and 2 mg/l N-acetylcysteine in its drinking water; (SRD) given standard chow and 30% sucrose in its drinking water; and (SRD-N) receiving standard chow, 30% sucrose and N-acetylcysteine in its drinking water. After 30 days of treatment, SRD rats had obesity with increased abdominal circumference, hyperglycaemia, dyslipidemia and hepatic triacylglycerol accumulation. These adverse effects were associated with oxidative stress and depressed lipid degradation in hepatic tissue. The SRD adverse effects were not observed in SRD-N rats. N-Acetylcysteine reduced the oxidative stress, enhancing glutathione-peroxidase activity, and normalizing lipid hydroperoxide, reduced glutathione and superoxide dismutase in hepatic tissue of SRD-N rats. The beta-hydroxyacyl coenzyme-A dehydrogenase and citrate-synthase activities were increased in SRD-N rats, indicating enhanced lipid degradation in hepatic tissue as compared to SRD. SRD-N rats had reduced serum oxidative stress and diminished glucose, triacylglycerol, very-low-density lipoprotein (VLDL), oxidized low-density lipoprotein (ox-LDL) and cholesterol/high-density lipoprotein (HDL) ratio in relation to SRD. In conclusion, NAC offers promising therapeutic value in the prevention of a dyslipidemic profile and the alleviation of hyperglycaemia under high-sucrose intake conditions by improving antioxidant defences. N-Acetylcysteine also had effects preventing metabolic shifts in hepatic tissue, thus enhancing fat degradation and reducing body weight gain in conditions of excess sucrose intake. The application of this agent in food systems via exogenous addition may be feasible and beneficial for antioxidant protection. abstract_id: PUBMED:18298345 Evaluation of oxidative stress and inflammation in obese adults with metabolic syndrome. Background: Obesity and metabolic syndrome increase the risk of cardiovascular morbidity and mortality. Oxidative stress seems to be involved in the pathophysiology of diabetes and cardiovascular complications of metabolic syndrome. The aim of our study was to evaluate the level of oxidative stress and inflammation in obese adults with and without metabolic syndrome. Methods: Oxidative stress and inflammation markers (total amount of free radicals, malondialdehyde, allantoin, alpha1-antiproteinase, oxidized/reduced glutathione ratio, high-sensitivity C-reactive protein, fibrinogen), total antioxidant capacity and lipid-standardized alpha-tocopherol were determined in obese subjects fulfilling at least three criteria of metabolic syndrome according to the National Cholesterol Education Program-Adult Treatment Panel III guidelines (n=20 patients), in obese subjects without metabolic syndrome (n=20 patients) and in 48 healthy controls. Results: Oxidative stress and inflammation markers were significantly elevated in the obese subjects, especially in those exhibiting metabolic syndrome. According to multidimensional statistical analysis, oxidative stress was independently related to triacylglyceride concentration, abdominal fat, low high-density lipoprotein cholesterol and low lipid-standardized alpha-tocopherol in the patients with metabolic syndrome. Conclusions: High levels of free radicals together with low antioxidant capacity detected in obese adults indicate elevated oxidative stress, which is, together with systemic inflammation, further potentiated in the case of obese patients with metabolic syndrome.
This imbalance in oxidative/antioxidative status and subclinical inflammatory state leads to higher risk of atherosclerotic and diabetic complications. abstract_id: PUBMED:32086249 Long-term sucrose solution consumption causes metabolic alterations and affects hepatic oxidative stress in Wistar rats. As the number of overweight and obese people has risen in recent years, there has been a parallel increase in the number of people with metabolic syndrome, diabetes and non-alcoholic fatty liver disease. The consumption of artificially sweetened beverages contributes to these epidemics. This study investigated the long-term effects of ingestion of a 40% sucrose solution on serum and hepatic parameters in male Wistar rats (Rattus norvegicus). After 180 days, the glycemic response, lipid profile and hepatic oxidative stress were compared to those of rats maintained on water. Sucrose ingestion led to higher body weight, increased fat deposits, reduced voluntary food intake and reduced feeding efficiency. Rats that received sucrose solution showed early signs of glucose intolerance and insulin resistance, such as hyperinsulinemia. Serum triacylglycerol (TG), very-low density lipoprotein (VLDL), cholesterol, ALT and AST levels increased after sucrose consumption. Elevated malondialdehyde and superoxide dismutase (SOD) levels and reduced glutathione levels characterize the hepatic oxidative stress due to sucrose ingestion. Liver sample histology showed vacuolar traces and increased fibrotic tissue. Our data showed the harmful effects of chronic consumption of sucrose solution, which can cause alterations that are found frequently in obesity, glucose intolerance and non-alcoholic hepatic disease, characteristics of metabolic syndrome. Answer: The provided abstracts do not directly address the concept of increased glutathione conjugate transport as a compensatory protection mechanism against oxidative stress in obesity. However, they do discuss the role of glutathione (GSH) in the context of oxidative stress and obesity, which may suggest potential compensatory mechanisms. Several abstracts mention the importance of glutathione in combating oxidative stress in obesity and related metabolic disorders. For instance, oxidative stress is characterized by an imbalance between the production of reactive oxygen species (ROS) and the antioxidant defense system, where glutathione plays a crucial role as a major antioxidant (PUBMED:26851532). In obesity, oxidative stress can lead to cellular dysfunction, inflammation, and the development of chronic diseases such as diabetes and cardiovascular diseases. In the context of diet-induced obesity in mice, treatment with N-acetylcysteine (NAC), a precursor to glutathione, improved insulin resistance and fasting glycemia, suggesting that bolstering the glutathione system can counteract oxidative stress and its metabolic consequences (PUBMED:37569463). Similarly, increased oxidative stress markers were observed in sarcopenic obesity, and the study suggested that redox balance analysis, which includes glutathione balance, could be useful in evaluating and potentially intervening in sarcopenia and its comorbidities (PUBMED:29452783). In obese children, higher levels of insulin resistance were associated with increased markers of oxidative stress, and the severity of insulin resistance correlated with the degree of lipoperoxidation, a process that glutathione can help mitigate (PUBMED:22574914). 
Moreover, maternal obesity during pregnancy was linked to compromised placental metabolism and antioxidant status, including alterations in glutathione levels, which impacted the fetal redox balance (PUBMED:24698544). The administration of geraniin, a polyphenol with antioxidant activity, restored oxidative stress biomarkers and the glutathione redox balance in high-fat diet-induced obese rats (PUBMED:29616223). N-acetylcysteine supplementation also reduced oxidative stress and improved antioxidant defenses, including glutathione-peroxidase activity, in rats fed a sucrose-rich diet (PUBMED:16814277). Overall, while the abstracts do not explicitly discuss increased glutathione conjugate transport, they do highlight the role of glutathione in protecting against oxidative stress in obesity.
Instruction: Does cognitive behaviour therapy for insomnia reduce clinical levels of fatigue, anxiety and depression in cancer patients? Abstracts: abstract_id: PUBMED:24458543 Does cognitive behaviour therapy for insomnia reduce clinical levels of fatigue, anxiety and depression in cancer patients? Objectives: This secondary analysis of data from a randomised controlled trial explores associations between common symptom clusters and evaluates pre-treatment to post-treatment changes in clinical levels of these symptoms following cognitive behaviour therapy for insomnia (CBT-I). Methods: Baseline data from 113 participants with insomnia were explored to establish rates of and associations between clinical levels of fatigue, anxiety and depression across the sample. Effects of CBT-I on this symptom cluster were also explored by examining changes in pre-treatment to post-treatment levels of fatigue, anxiety and depression. Results: At baseline, the most common symptom presentation was insomnia + fatigue, and 30% of the sample reported at least three co-morbid symptoms. Post-CBT, the number of those experiencing clinical insomnia and clinical fatigue decreased. There were no changes in anxiety rates from baseline to post-treatment in the CBT group and modest reductions in rates of those with clinical depression. Seven individuals (9.6%) from the CBT group were completely symptom free at post-treatment compared with 0% from the treatment as usual condition. Chi-square analysis revealed a significant relationship between group allocation and changes in symptoms of insomnia and fatigue. No such relationship was found between group allocation and mood variables. Conclusions: These findings confirm the high rate of symptom co-morbidities among cancer patients and highlight strong associations between sleep and fatigue. CBT-I appears to offer generalised benefit to the symptom cluster as a whole and, specifically, is effective in reducing fatigue, which exceeded clinical cut-offs prior to implementation of the intervention. This has implications for the diagnosis/management of common symptoms in cancer patients. abstract_id: PUBMED:36123640 Clinical and genetic factors associated with self-reported cognitive deficits in women with breast cancer: the "CAGE-Cog" study. Background: Breast cancer patients undergoing chemotherapy treatment are at particular risk of experiencing acute cognitive impairment leading to daily challenges in decision-making and reduced quality of life and functional autonomy. The aim was to assess the relationship between clinical and genetic factors and cognitive function in a sample of patients with breast cancer undergoing chemotherapy. Methods: A cross-sectional study was carried out between November 2017 and June 2019 on women (N = 112) treated for breast cancer by intravenous chemotherapy at the oncology outpatient unit of Hôtel-Dieu de France Hospital, Beirut. Patients were evaluated with the 37-item Functional Assessment of Cancer Therapy-Cognitive Function (FACT-Cog). Other validated scales were also used to assess depression, anxiety, sleep disorders, pain, and fatigue. DNA was obtained by a buccal swab (FTA®technology) for genotyping of different genes (ABCB1, COMT, DRD2, OPRM1, CLOCK, CRY2, and PER2) using the Lightcycler®(Roche). Results: The mean age of participants was 56.04 years. 
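The randomised-trial abstract above tests whether symptom change depends on group allocation with a chi-square analysis. A minimal sketch of such a test on a 2x2 contingency table, using invented counts rather than the trial's data, is shown below.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = CBT-I vs treatment-as-usual,
# columns = fatigue resolved vs still above the clinical cut-off.
table = np.array([[28, 12],
                  [15, 25]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")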
Multivariable analysis, taking the four FACT-Cog subscores as the dependent variables, showed that the mean cognitive score decreased with higher depression, anxiety, and insomnia scores. Patients with university education levels had better perceived cognitive abilities than those with primary education. Moreover, carrying the G allele for the OPRM1 polymorphism (c.118A > G;rs197791) was significantly associated with a better cognitive function compared to AA patients (B = 2.05; p = 0.038). Conclusions: A comprehensive oncological care plan should include a personalized assessment of all factors related to cognitive functioning in cancer patients, particularly anxiety and depression, to achieve an optimal patient outcome. abstract_id: PUBMED:16135475 Randomized study on the efficacy of cognitive-behavioral therapy for insomnia secondary to breast cancer, part I: Sleep and psychological effects. Purpose: Chronic insomnia is highly prevalent in cancer patients. Cognitive-behavioral therapy (CBT) is considered the treatment of choice for chronic primary insomnia. However, no randomized controlled study has been conducted on its efficacy for insomnia secondary to cancer. Using a randomized controlled design, this study conducted among breast cancer survivors evaluated the effect of CBT on sleep, assessed both subjectively and objectively, and on hypnotic medication use, psychological distress, and quality of life. Patients And Methods: Fifty-seven women with insomnia caused or aggravated by breast cancer were randomly assigned to CBT (n = 27) or a waiting-list control condition (n = 30). The treatment consisted of eight weekly sessions administered in a group and combined the use of stimulus control, sleep restriction, cognitive therapy, sleep hygiene, and fatigue management. Follow-up evaluations were carried out 3, 6, and 12 months after the treatment. Results: Participants who received the insomnia treatment had significantly better subjective sleep indices (daily sleep diary, Insomnia Severity Index), a lower frequency of medicated nights, lower levels of depression and anxiety, and greater global quality of life at post-treatment compared with participants of the control group after their waiting period. Results were more equivocal on polysomnographic indices. Therapeutic effects were well maintained up to 12 months after the intervention and generally were clinically significant. Conclusion: This study supports the efficacy of CBT for insomnia secondary to breast cancer. abstract_id: PUBMED:27536265 Cognitive Behavioral Therapy for Insomnia in Breast Cancer Survivors: A Review of the Literature. Background: Insomnia is a common sleep disorder in patients with breast cancer and studies show a higher frequency than in the general population but it appears to be understudied and the treatment seems to be a neglected problem. There is a growing body of evidence about the efficacy of cognitive behavioral therapy for insomnia (CBT-I) in breast cancer survivors (BCS). The aim of this review is to examine the best available scientific evidence related to CBT-I and insomnia in patients with breast cancer and to assess the effect of CBT-I on their psychosocial functioning, sleep, quality of life, and mood. Methods: Empirical articles published in peer-reviewed journals from the earliest reports available until August 2015 were considered. The research on PubMed generated 18 papers, three of which did not meet the inclusion criteria. 
Another paper was retrieved by screening the reference list of the previously selected papers. Results: A total of 16 studies were found that evaluated the effects of CBT-I in breast cancer patients. CBT-I appears to be an effective therapy for insomnia in BCS, improving mood, general and physical fatigue, and global and cognitive dimensions of quality of life. CBT-I may also reduce menopausal symptoms, such as hot flushes and night sweat problems, frequency of medicated nights, level of depression, and anxiety. Conclusions: CBT-I seems to be an eligible intervention for improving sleep in BCS. Improvements concerning insomnia and sleep quality are durable (usually up to 12 months) and statistically significant. abstract_id: PUBMED:28557043 A psychological intervention programme for patients with breast cancer under chemotherapy and at a high risk of depression: A randomised clinical trial. Aims And Objectives: To develop a nurse-led psychological intervention programme and to evaluate its effects on psychological distress and quality of life in patients with breast cancer undergoing chemotherapy and at a high risk of depression. Background: Depression is common among patients with breast cancer undergoing chemotherapy. Psychological intervention programmes that improve psychological distress and quality of life have previously been lacking in South Korea. Design: This was a pre- and post-test randomised controlled trial. Methods: The nurse-led psychological intervention programme comprised seven weekly counselling sessions delivered face to face and telephonically. These aimed to provide emotional support to patients and to enable them to express their feelings. Patients at a high risk of depression were recruited from an oncology outpatient clinic in a university hospital. Sixty participants were evenly and randomly allocated to either the intervention group or the control group. The effects of the intervention on psychological distress (mood disturbance, anxiety and depression) and quality of life were examined using linear mixed models. Results: Compared with the control group, the intervention group reported significantly lower mood disturbance, anxiety and depression and showed an improved global health status and physical, role and emotional functions. They also reported fewer symptoms such as fatigue, nausea/vomiting, pain and insomnia. Conclusions: Our nurse-led psychological intervention programme might reduce patients' uncertainty and encourage them to be proactive and self-controllable. Relevance To Clinical Practice: Nurse-led psychological intervention programmes should be implemented to reduce psychological distress and improve quality of life in patients with breast cancer, particularly those at a high risk of depression. abstract_id: PUBMED:34330229 Clinical and genetic factors associated with anxiety and depression in breast cancer patients: a cross-sectional study. Background: Despite the progress in assessment and treatment of breast cancer, being diagnosed with it or receiving chemotherapy treatment is still conceived as a traumatic experience. Women develop negative thoughts about life and death with detrimental effects on their daily physical functioning/activities, emotional state and overall quality of life. The aim of our study was to evaluate the level of anxiety and depression among breast cancer patients receiving chemotherapy and explore the correlation between these psychological disorders, clinical, sociodemographic and genetic factors. 
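The nurse-led intervention abstract above analyses repeated measurements of distress and quality of life with linear mixed models. The sketch below shows the general shape of such a model, with a random intercept per patient and fixed effects for group, time and their interaction; the data frame, column names and values are invented for illustration and do not reproduce the authors' analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per assessment.
long_df = pd.DataFrame({
    "pid":      [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8],
    "group":    ["intervention"] * 8 + ["control"] * 8,
    "time":     [0, 1] * 8,
    "distress": [60, 41, 55, 39, 62, 45, 58, 40, 59, 55, 57, 53, 61, 58, 56, 54],
})

# Random intercept per patient; fixed effects for group, time, and their interaction.
model = smf.mixedlm("distress ~ group * time", data=long_df, groups=long_df["pid"])
result = model.fit()
print(result.params)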
Methods: A cross-sectional study was conducted among breast cancer patients undergoing intravenous chemotherapy at the oncology outpatient unit of Hôtel-Dieu de France hospital (November 2017-June 2019; Ethical approval number: CEHDF1016). All patients gave their written informed consent and completed several validated scales, including the Hospital Anxiety and Depression scale (HADS) for the assessment of anxiety and depression. Sleep quality, insomnia, cognitive function, fatigue and pain were also evaluated. Genotyping for certain gene polymorphisms (CLOCK, PER2, CRY2, OPRM1, ABCB1, COMT, DRD2) was performed using the Lightcycler® (Roche). Results: A total of 112 women was included. The prevalence of depression was 43.4%, and 56.2% of the patients reported anxiety (based on the HADS classification). Multivariable analysis showed that higher cognitive scores and taking fosaprepitant were significantly associated with lower depression and anxiety scores. Moreover, being married compared to single was also associated with lower depression scores, whereas higher PSQI scores (worse sleep quality) and having the PER2 AA variant genotype compared to GG were significantly associated with higher depression scores. Finally, reporting a more severe insomnia and having the COMT Met/Met genotype were significantly associated with a higher anxiety score. Conclusions: Our study demonstrated a strong relationship between depression scores and cognitive impairment, sleep quality, marital status, fosaprepitant intake, and PER2 polymorphism, while anxiety scores were correlated to cognitive impairment, insomnia severity, fosaprepitant intake, and COMT polymorphism. The association with PER polymorphism was not previously reported. Identification of genetic and clinical risk factors for anxiety and depression would help clinicians implement an individualized management therapy aiming at preventing and alleviating the burden of these symptoms in breast cancer patients, hence improving their overall quality of life. abstract_id: PUBMED:34228123 Efficacy of a stepped care approach to deliver cognitive-behavioral therapy for insomnia in cancer patients: a noninferiority randomized controlled trial. Study Objectives: Cognitive-behavioral therapy for insomnia (CBT-I) is the recommended first-line treatment for cancer-related insomnia, but its accessibility is very limited in routine care. A stepped care approach has been recommended as a cost-effective way to make CBT-I more widely accessible. However, no controlled study has yet been published about the efficacy of this approach. The goal of this noninferiority randomized controlled trial (RCT) was to compare the short and long-term efficacy of a stepped care CBT-I (StepCBT-I) to a standard face-to-face CBT-I (StanCBT-I). Methods: A total of 177 cancer patients were randomized to: (1) StanCBT-I (6 face-to-face CBT-I sessions; n = 59) or (2) StepCBT-I (n = 118). In the StepCBT-I group, patients with less severe insomnia first received a web-based CBT-I (n = 65), while those with more severe insomnia received 6 face-to-face CBT-I sessions (n = 53). In both cases, patients could receive up to three booster sessions of CBT-I if they still had insomnia symptoms following this first step. Results: Results indicated that the Step-CBT-I group showed an Insomnia Severity Index score reduction and a sleep efficiency (on a sleep diary) increase that was not significantly inferior to that of StanCBT-I at all post-treatment time points. 
Analyses of secondary outcomes indicated significant time effects (ps < .001) and no significant group-by-time interactions (ps from .07 to .91) on other sleep diary parameters, sleep medication use, depression, anxiety, fatigue, and quality of life scores. Conclusion(s): The efficacy of stepped care CBT-I is not inferior to that of a standard face-to-face intervention and is a valuable approach to making this treatment more widely accessible to cancer patients. Trial Registration: ClinicalTrials.gov Identifier: NCT01864720 (https://clinicaltrials.gov/ct2/show/NCT01864720?term=Savard&draw=2&rank=6; Stepped Care Model for the Wider Dissemination of Cognitive-Behavioural Therapy for Insomnia Among Cancer Patients). abstract_id: PUBMED:34330793 Androgen deprivation therapy and radiation for prostate cancer-cognitive impairment, sleep, symptom burden: a prospective study. Objectives: This paper (1) sought to compare sleep, mood and physical symptom profiles of men with prostate cancer (PCa) who experienced subjective and objective cancer-related cognitive impairment (CRCI) during the first year of treatment and (2) examine if fluctuations in mood and physical symptoms are associated with change in subjective or objective CRCI. Methods: This prospective observational cohort study examined 24 new patients with PCa receiving androgen deprivation therapy (ADT) and radiation therapy (RT) during the first 12 months of treatment. Participants completed subjective and objective assessments of cognition, sleep continuity and self-report measures of insomnia, fatigue, depression and anxiety. Independent sample t-tests, correlations and hierarchical regressions were used to compare groups, explore associations, and assess change over time. Effects are reported as corrected Cohen's d (dc). Results: Men with objective CRCI reported worse subjective time asleep (dc=0.47) and more depression (dc=0.55). Men with subjective CRCI reported worse insomnia (dc=0.99), hot flashes (dc=0.76), sleep quality (dc=0.54), subjective total sleep time (dc=0.41), wake after sleep onset (dc=0.71), sleep efficiency (dc=0.49), fatigue (dc=0.67) and objectively estimated sleep latency (dc=0.72) than men without subjective CRCI. Declines in perceived cognition was associated with higher anxiety (p=0.05), fatigue (p≤0.01) and symptoms of insomnia (p=0.01). Finally, subjective time awake during the night (p=0.03) and fatigue (p=0.02) were associated with subjective cognitive decline, controlling for objective change. Conclusions: Subjective concerns of CRCI appear more critical to patient experience than objective measurements in men with PCa who have received RT and ADT. Interventions to improve sleep may result in an improved perception of cognition. abstract_id: PUBMED:37103999 Telehealth-Based Music Therapy Versus Cognitive Behavioral Therapy for Anxiety in Cancer Survivors: Rationale and Protocol for a Comparative Effectiveness Trial. Background: Cancer survivors represent one of the fastest growing populations in the United States. Unfortunately, nearly 1 in 3 survivors experience anxiety symptoms as a long-term consequence of cancer and its treatment. Characterized by restlessness, muscle tension, and worry, anxiety worsens the quality of life; impairs daily functioning; and is associated with poor sleep, depressed mood, and fatigue. Although pharmacological treatment options are available, polypharmacy has become a growing concern for cancer survivors. 
Music therapy (MT) and cognitive behavioral therapy (CBT) are evidence-based, nonpharmacological treatments that have demonstrated effectiveness in treating anxiety symptoms in cancer populations and can be adapted for remote delivery to increase access to mental health treatments. However, the comparative effectiveness of these 2 interventions delivered via telehealth is unknown. Objective: The aims of the Music Therapy Versus Cognitive Behavioral Therapy for Cancer-related Anxiety (MELODY) study are to determine the comparative effectiveness of telehealth-based MT versus telehealth-based CBT for anxiety and comorbid symptoms in cancer survivors and to identify patient-level factors associated with greater anxiety symptom reduction for MT and CBT. Methods: The MELODY study is a 2-arm, parallel-group randomized clinical trial that aims to compare the effectiveness of MT versus CBT for anxiety and comorbid symptoms. The trial will enroll 300 English- or Spanish-speaking survivors of any cancer type or stage who have experienced anxiety symptoms for at least 1 month. Participants will receive 7 weekly sessions of MT or CBT delivered remotely via Zoom (Zoom Video Communications, Inc) over 7 weeks. Validated instruments to assess anxiety (primary outcome), comorbid symptoms (fatigue, depression, insomnia, pain, and cognitive dysfunction), and health-related quality of life will be administered at baseline and at weeks 4, 8 (end of treatment), 16, and 26. Semistructured interviews will be conducted at week 8 with a subsample of 60 participants (30 per treatment arm) to understand individual experiences with the treatment sessions and their impact. Results: The first study participant was enrolled in February 2022. As of January 2023, 151 participants have been enrolled. The trial is expected to be completed by September 2024. Conclusions: This study is the first and largest randomized clinical trial to compare the short- and long-term effectiveness of remotely delivered MT and CBT for anxiety in cancer survivors. Limitations include the lack of usual care or placebo control groups and the lack of formal diagnostic assessments for psychiatric disorders among trial participants. The study findings will help guide treatment decisions for 2 evidence-based, scalable, and accessible interventions to promote mental well-being during cancer survivorship. International Registered Report Identifier (irrid): DERR1-10.2196/46281. abstract_id: PUBMED:30247968 Complementary and Integrative Medicine to Reduce Adverse Effects of Anticancer Therapy. Background: To address the side effects of anticancer treatments, the Clinic for Complementary Medicine and Diet in Oncology was opened, in collaboration with the oncology department, at the Hospital of Lucca (Italy) in 2013. Aim: To present the results of complementary medicine treatment targeted toward reducing the adverse effects of anticancer therapy and cancer symptoms, and improving patient quality of life. Dietary advice was aimed at the reduction of foods that promote inflammation in favor of those with antioxidant and anti-inflammatory properties. Methods: This is a retrospective observational study on 357 patients consecutively visited from September 2013 to December 2017. The intensity of symptoms was evaluated according to a grading system from G0 (absent) to G1 (slight), G2 (moderate), and G3 (strong). The severity of radiodermatitis was evaluated with the Radiation Therapy Oncology Group (RTOG) scale. 
Almost all the patients (91.6%) were receiving or had just finished some form of conventional anticancer therapy. Results: The main types of cancer were breast (57.1%), colon (7.3%), lung (5.0%), ovary (3.9%), stomach (2.5%), prostate (2.2%), and uterus (2.5%). Comparison of clinical conditions before and after treatment showed a significant amelioration of nausea, insomnia, depression, anxiety, fatigue, mucositis, hot flashes, joint pain, dysgeusia, neuropathy, and all symptoms. Moreover, in a subgroup of 17 patients in radiotherapy undergoing integrative treatment, the level of toxicities and the severity of radiodermatitis were much lower than in the 13 patients without integrative treatment. Twenty-one cancer patients (6.2%) either refused (18) or discontinued (3) conventional anticancer treatment against the recommendation of their oncologist; after the integrative oncology (IO) visit, 7 (41.2%) out of 17 patients with follow-up decided to accept standard oncologic treatments. Conclusions: An IO clinic may contribute to reducing the adverse effects of anticancer therapy and improving the quality of life of cancer patients. Answer: Cognitive behavior therapy for insomnia (CBT-I) has been shown to be effective in reducing clinical levels of fatigue in cancer patients. A study reported that post-CBT-I, the number of those experiencing clinical insomnia and clinical fatigue decreased, indicating that CBT-I offers a generalized benefit to the symptom cluster as a whole, specifically in reducing fatigue (PUBMED:24458543). Additionally, CBT-I has been found to improve mood, general and physical fatigue, and global and cognitive dimensions of quality of life in breast cancer survivors (PUBMED:27536265). Regarding anxiety and depression, the results are somewhat mixed. One study found no changes in anxiety rates from baseline to post-treatment in the CBT group, although there were modest reductions in rates of clinical depression (PUBMED:24458543). Another study supported the efficacy of CBT for insomnia secondary to breast cancer, reporting lower levels of depression and anxiety, and greater global quality of life at post-treatment compared with participants of the control group (PUBMED:16135475). Furthermore, a nurse-led psychological intervention program, which included elements of CBT, significantly reduced mood disturbance, anxiety, and depression in patients with breast cancer undergoing chemotherapy and at a high risk of depression (PUBMED:28557043). In a noninferiority randomized controlled trial, a stepped care approach to deliver CBT-I was found to be not inferior to standard face-to-face CBT-I, with significant time effects on depression, anxiety, fatigue, and quality of life scores (PUBMED:34228123). This suggests that CBT-I, whether delivered traditionally or through a stepped care approach, can be effective in managing these symptoms. In summary, CBT-I appears to be effective in reducing fatigue and has shown benefits in reducing depression and improving quality of life in cancer patients. The evidence for its effect on anxiety is less consistent, with some studies reporting no change and others reporting reductions in anxiety levels.
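The stepped-care trial cited above rests on a noninferiority comparison: the new delivery model is accepted if it is at most a prespecified margin worse than standard face-to-face CBT-I. The abstract does not state the margin, so the sketch below only illustrates the usual decision rule with hypothetical summary statistics; noninferiority is concluded when the upper confidence limit for the between-group difference stays below the margin.

import math

# Hypothetical summary statistics (NOT the trial's data):
# mean ISI reduction, SD, and n per arm, with an assumed margin of 2 points.
mean_step, sd_step, n_step = 8.2, 4.8, 118
mean_stan, sd_stan, n_stan = 8.4, 5.0, 59
margin = 2.0  # stepped care may be at most 2 ISI points less effective

diff = mean_stan - mean_step                      # how much larger the standard-care reduction was
se = math.sqrt(sd_step**2 / n_step + sd_stan**2 / n_stan)
upper_95 = diff + 1.96 * se                       # upper bound of the 95% CI for the difference

print(f"difference = {diff:.2f}, 95% CI upper bound = {upper_95:.2f}")
print("non-inferior" if upper_95 < margin else "inconclusive")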
Instruction: Nonfebrile illness seizures: a unique seizure category? Abstracts: abstract_id: PUBMED:15946338 Nonfebrile illness seizures: a unique seizure category? Purpose: To describe the clinical characteristics of children with a first-time nonfebrile seizure in the setting of mild illness and to test the hypothesis that these seizures are associated with illness characterized by diarrhea. Methods: This retrospective cohort study was performed in a pediatric emergency department. Patients ages 6 months to 6 years who were evaluated with first-time seizures were eligible for inclusion. Subjects were divided into three groups on the basis of symptoms accompanying their seizure: febrile (temperature, >38.0 degrees C with seizure), unprovoked (no symptoms of illness), and nonfebrile illness (no fever at the time of seizure, but other symptoms of illness present). Results: Of the 323 children with first-time seizures, 247 (76%) had febrile seizure, 37 (12%) had unprovoked seizures, and 39 (12%) had nonfebrile illness seizures. Children with nonfebrile illness seizures were more likely than children with febrile seizures to have diarrheal illnesses accompanying their seizure (44 vs. 16%; p=0.001). Frequency of cough, rhinorrhea, and rash did not differ significantly between children with febrile and nonfebrile illness seizures. Diagnostic testing for infectious etiologies was not performed frequently in either group. Conclusions: Nonfebrile illness seizures may represent a distinct group of seizures with unique epidemiology. Further study to define this seizure group better is warranted. abstract_id: PUBMED:10356374 Acute symptomatic seizures - incidence and etiological spectrum: a hospital-based study from South India. We analysed the incidence and etiological spectrum of acute symptomatic seizures in 2531 patients with seizure disorder, both in-patients and out-patients, seen in a university hospital in South India. Seizure(s) occurred in close temporal association with an acute systemic, metabolic, or toxic insult or in association with an acute central nervous system (CNS) insult in 22.5% of patients. Of the 572 patients, 8% could be grouped under the International League Against Epilepsy (ILAE) category 4.1 and 92% under category 1.2. The seizure type was generalized in all the patients included in category 4.1 and 78% of patients grouped in category 2.1 had simple or complex partial seizure(s) with or without secondary generalization. Sixteen (3%) patients developed status epilepticus during the acute phase of illness and 7% of patients had only single seizure. Infections of the central nervous system (CNS) and single CT enhancing lesions (SCTEL) together accounted for 77% of the provoking factors in patients grouped under category 2.1. These two etiological factors together accounted for 95% of etiologies in patients aged under 16 years. SCTEL and neurocysticercosis together accounted for 67% of the provoking factors. In 14% of patients cerebrovascular diseases were the etiological factors and 60% of the patients were aged over 40 years. In patients with cerebrovascular diseases, aged under 40 years, cortical sinovenous thrombosis accounted for 37%. SCTEL was the provoking factor in 61% of patients with isolated seizure. Infections of CNS and SCTEL together accounted for 62.5% of etiological factors for status epilepticus. 
This study illustrates that the etiological spectrum of acute symptomatic seizures in this part of the world is different from that described in developed countries, with CNS infections accounting for a significant number of cases. abstract_id: PUBMED:33183389 Executive Functions and Attention in Childhood Epilepsies: A Neuropsychological Hallmark of Dysfunction? Objective: Patients with epilepsy are at risk for several lifetime problems, in which neuropsychological impairments may represent an impacting factor. We evaluated the neuropsychological functions in children suffering from three main epilepsy categories. Further, we analyzed the longitudinal evolution of the neuropsychological profile over time. Methods: Patients undergoing neuropsychological evaluation at our Department from 2012 to 2018 were identified retrospectively. We selected patients aged 6-16 years and with at least two evaluations. Three epilepsy categories were considered: focal/structural, focal self-limited, and idiopathic generalized. Each evaluation included the same structured assessment of main neuropsychological domains. The effect of the epilepsy category, illness duration, seizure status, and medication was computed in multilevel models. Results: We identified 103 patients (focal self-limited = 27; focal/structural = 51; and idiopathic generalized = 25), for 233 evaluations. The majority of deficits were reported in attention and executive functions (>30% of patients); the results were dichotomized to obtain global indexes. Multilevel models showed a trend toward statistical significance of the category of epilepsy on the global executive index and of illness duration on the global attention index. Illness duration predicted the scores of executive and attention tasks, while category and medication predicted executive task performance. Focal/structural epilepsies mostly affected the executive domain, with deficits persisting over time. By contrast, an ameliorative effect of illness duration for attention was documented in all epilepsies. Conclusions: This study provides previously lacking information about the evolution of deficits over time, the role of epilepsy category, and the possible psychological implications for higher-order cognitive skills, which are central to several social and academic problems. abstract_id: PUBMED:28079875 Early postnatal illness severity scores predict neurodevelopmental impairments at 10 years of age in children born extremely preterm. Objective: A neonatal illness severity score, the Score for Neonatal Acute Physiology-II (SNAP-II), predicts neurodevelopmental impairments at two years of age among children born extremely preterm. We sought to evaluate to what extent SNAP-II is predictive of cognitive and other neurodevelopmental impairments at 10 years of age. Study Design: In a cohort of 874 children born before 28 weeks of gestation, we prospectively collected clinical, physiologic and laboratory data to calculate SNAP-II for each infant. When the children were 10 years old, examiners who were unaware of the child's medical history assessed neurodevelopmental outcomes, including neurocognitive, gross motor, social and communication functions, diagnosis and treatment of seizures or attention deficit hyperactivity disorder (ADHD), academic achievement, and quality of life. We used logistic regression to adjust for potential confounders.
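The SNAP-II abstract above adjusts for potential confounders with logistic regression and reports the results as odds ratios with 95% confidence intervals. The sketch below shows how such adjusted odds ratios are typically obtained, as exponentiated model coefficients; the variable names and the simulated data are placeholders, not the study's records.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Simulated stand-in data (NOT study data): exposure, confounders, binary outcome.
df = pd.DataFrame({
    "snap_high": rng.integers(0, 2, n),
    "gest_age":  rng.normal(26, 1.5, n),
    "male":      rng.integers(0, 2, n),
})
logit_p = -1.0 + 0.7 * df["snap_high"] - 0.1 * (df["gest_age"] - 26) + 0.2 * df["male"]
df["impaired"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

res = smf.logit("impaired ~ snap_high + gest_age + male", data=df).fit(disp=False)
print(np.exp(res.params))      # adjusted odds ratios (OR = exp(beta))
print(np.exp(res.conf_int()))  # 95% confidence intervals on the OR scale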
Results: An undesirably high SNAP-II (⩾30), present in 23% of participants, was associated with an increased risk of cognitive impairment (IQ, executive function, language ability), adverse neurological outcomes (epilepsy, impaired gross motor function), behavioral abnormalities (attention deficit disorder and hyperactivity), social dysfunction (autistic spectrum disorder) and education-related adversities (school achievement and need for educational supports). In analyses that adjusted for potential confounders, Z-scores ⩽-1 on 11 of 18 cognitive outcomes were associated with SNAP-II in the highest category, and 6 of 18 were associated with SNAP-II in the intermediate category. Odds ratios and 95% confidence intervals ranged from 1.4 (1.01, 2.1) to 2.1 (1.4, 3.1). Similarly, 2 of the 8 social dysfunctions were associated with SNAP-II in the highest category, and 3 of 8 were associated with SNAP-II in the intermediate category. Odds ratios and 95% confidence intervals were slightly higher for these assessments, ranging from 1.6 (1.1, 2.4) to 2.3 (1.2, 4.6). Conclusion: Among very preterm newborns, physiologic derangements present in the first 12 postnatal hours are associated with dysfunctions in several neurodevelopmental domains at 10 years of age. We are unable to make inferences about causality. abstract_id: PUBMED:29123561 Mortality prediction in status epilepticus with the APACHE II score. A retrospective study was performed of adults admitted to the intensive care unit in order to determine the utility of the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in predicting in-hospital mortality in intensive care unit patients with non-cardiac status epilepticus. The cohort consisted of 104 subjects, 50 (48.1%) male, 39 (37.5%) aged ≥65 years, with a mean APACHE II score of 17.88. Four models of the APACHE II system were assessed: numerical score, adjusted score mortality, category, and category mortality. All models demonstrated poor calibration and discrimination, even after adjustment for significantly different covariates. There were independent associations between mortality and acute or toxic-metabolic seizure etiologies, myoclonic seizures, and postoperative status. After multivariate adjustment, only the association with toxic-metabolic seizure etiologies remained. The APACHE II score is a poor predictor of mortality in intensive care unit patients with status epilepticus. Further investigation is warranted to develop better measures of acute physiological disease severity in status epilepticus and its impact on mortality. abstract_id: PUBMED:35961352 Infectious aetiologies of neonatal illness in south Asia classified using WHO definitions: a primary analysis of the ANISA study. Background: Globally, neonatal mortality accounts for almost half of all deaths in children younger than 5 years. Aetiological agents of neonatal infection are difficult to identify because the clinical signs are non-specific. Using data from the Aetiology of Neonatal Infections in south Asia (ANISA) cohort, we aimed to describe the spectrum of infectious aetiologies of acute neonatal illness categorised post-hoc using the 2015 WHO case definitions of critical illness, clinical severe infection, and fast breathing only. Methods: Eligible infants were aged 0-59 days with possible serious bacterial infection and healthy infants enrolled in the ANISA study in Bangladesh, India, and Pakistan.
We applied a partial latent class Bayesian model to estimate the prevalence of 27 pathogens detectable on PCR, pathogens detected by blood culture only, and illness not attributed to any infectious aetiology. Infants with at least one clinical specimen available were included in the analysis. We assessed the prevalence of these aetiologies according to WHO's case definitions of critically ill, clinical severe infection, and infants with late onset, isolated fast breathing. For the clinical severe definition, we compared the prevalence of signs by bacterial versus viral aetiology. Findings: There were 934 infants (992 episodes) in the critically ill category, 3769 (4000 episodes) in the clinical severe infection category, and 738 (771 episodes) in the late-onset isolated fast breathing category. We estimated the proportion of illness attributable to bacterial infection was 32·7% in infants in the critically ill group, 15·6% in the clinical severe infection group, and 8·8% among infants with late-onset isolated fast breathing group. An infectious aetiology was not identified in 58-82% of infants in these categories. Among 4000 episodes of clinical severe infection, those with bacterial versus viral attribution had higher proportions of hypothermia, movement only when stimulated, convulsions, and poor feeding. Interpretation: Our modelled results generally support the revised WHO case definitions, although a revision of the most severe case definition could be considered. Clinical criteria do not clearly differentiate between young infants with and without infectious aetiologies. Our results highlight the need for improved point-of-care diagnostics, and further study into neonatal deaths and episodes with no identified aetiology, to ensure antibiotic stewardship and targeted interventions. Funding: The Bill and Melinda Gates Foundation. abstract_id: PUBMED:17565593 Behavioral and psychiatric comorbidities in pediatric epilepsy: toward an integrative model. It is well recognized that children with epilepsy are at heightened risk for developing behavior problems and psychiatric disorders. Studies identifying factors associated with child behavior were reviewed and findings were placed into two broad categories for review and critique: illness-related variables and psychosocial variables. Illness-related variables were seizure frequency and seizure control, type of epilepsy, age of onset, duration of illness, and antiepileptic drugs. Psychosocial variables were reviewed using a family stress framework: stressors, perceptions, adaptive resources, coping, and family adjustment. After the literature on each category is reviewed, an integrated heuristic model that includes key illness-related and psychosocial variables is presented. abstract_id: PUBMED:36525126 Association of body mass index with COVID-19-related neurologic sequelae: a retrospective cohort study. We sought to explore the relationship between body mass index (BMI) and neurologic outcomes following acute COVID-19 infection. We conducted a retrospective electronic medical record-based cohort study enrolling adults with laboratory-confirmed acute COVID-19 infection who presented to 1 of 12 academic and community hospitals in Southwestern Ontario, Canada between April 1, 2020 and July 31, 2021. 
Primary subjective (anosmia, dysgeusia, and/or headache) and objective (aseptic meningitis, ataxia, delirium, encephalopathy, encephalitis, intracranial hemorrhage, ischemic stroke, and/or seizure) composite neurologic outcomes were assessed, comparing obese and overweight individuals to those with underweight/normal BMI indices, adjusting for baseline characteristics. Secondary outcomes (severity of illness, length of hospital stay, SARS-CoV-2 viral load, mortality) were similarly analyzed. A total of 1437 individuals were enrolled, of whom 307 (21%), 456 (32%), and 674 (47%) were underweight/normal, overweight, and obese, respectively. On multivariable analysis, there was no association between BMI category and the composite outcome for subjective (odds ratio [OR] 1.17, 95% CI 0.84-1.64, Bonferroni p = 1.00 for obese; OR 1.02, 95% CI 0.70-1.48; Bonferroni p = 1.00 for overweight) and objective (OR 0.74, 95% CI 0.42-1.30, p = 0.29 for obese; OR = 0.80, 95% CI 0.45-1.43, p = 0.45 for overweight) neurologic manifestations. There was no association between BMI category and any secondary outcome measure and no evidence of effect modification by age or sex. This study demonstrates the absence of an association between BMI and neurologic manifestations following acute COVID-19 illness. Prospective studies using standardized data collection tools and direct measures of body fat are warranted to obtain more valid effect estimates. abstract_id: PUBMED:7435669 Medication use and deaths attributed to asphyxia among psychiatric patients. In a review of the charts of inpatients who died in 1969-1977 the author found 49 whose death could be attributed to asphyxia. Compared with a matched control group, 48 of the asphyxia patients represented three distinct pathologic categories: 1) older patients with a history of serious physical illness whose deaths appeared unrelated to psychotropic medication use (40%), 2) a group whose deaths were associated with seizures (31%), raising questions about subtherapeutic anticonvulsant levels in association with the use of psychotropic drugs, and 3) a group of patients who choked to death (29%). Choking has been theoretically linked to a combination of dopamine blockade plus strong anticholinergic effects leading to impairment of swallowing. The third category appears to have been virtually eliminated by the use of a drug monitoring system and the Heimlich maneuver. abstract_id: PUBMED:34509751 Acute-phase electroencephalography for an infantile atypical teratoid/rhabdoid tumor. Background: Primary brain tumor is a leading cause of death in cancer-bearing children. Acutely progressive patterns of electroencephalography (EEG) remain to be investigated for children with rapidly growing brain tumors. Case Report: A 14-month-old boy was transferred to our department for prolonged seizures and unrecovered consciousness on his fifth day of illness. The EEG recording on admission showed highly disorganized background activity with high-voltage rhythmic delta waves. Serial EEG monitoring revealed a rapid transition of the background activity to the suppression-burst pattern, and then to generalized suppression of cortical activity within a few hours after admission. Magnetic resonance imaging detected a midline tumor at the pineal gland extending to the midbrain and pons. The tumor was pathologically confirmed as atypical teratoid/rhabdoid tumor (AT/RT) with absent expression of SMARCB1. He died of tumor progression on the 20th day after admission.
Conclusion: AT/RT is an additional category of brain tumors that can cause a clinically and electrophysiologically critical condition within a few days of onset. Answer: The concept of nonfebrile illness seizures as a unique seizure category is supported by the clinical characteristics described in children with a first-time nonfebrile seizure in the setting of mild illness. A study found that children with nonfebrile illness seizures were more likely to have diarrheal illnesses accompanying their seizure compared to children with febrile seizures (44% vs. 16%; p=0.001), suggesting that nonfebrile illness seizures may represent a distinct group of seizures with unique epidemiology (PUBMED:15946338). Further study to define this seizure group better is warranted, as the diagnostic testing for infectious etiologies was not performed frequently in either group of children with febrile and nonfebrile illness seizures. The recognition of nonfebrile illness seizures as a unique category could have implications for understanding the etiological spectrum of acute symptomatic seizures, which varies by region and includes a range of systemic, metabolic, toxic, or central nervous system insults (PUBMED:10356374). It may also contribute to the broader understanding of neuropsychological impairments in children with epilepsy, as different epilepsy categories can have varying impacts on cognitive functions such as attention and executive functions (PUBMED:33183389). Moreover, the identification of nonfebrile illness seizures as a unique category could influence the assessment of illness severity and its predictive value for long-term neurodevelopmental outcomes, as seen in studies using scores like SNAP-II for preterm newborns (PUBMED:28079875). It could also affect the evaluation of mortality prediction in critical conditions like status epilepticus, where current scoring systems like APACHE II have shown poor prediction capabilities (PUBMED:29123561). In summary, nonfebrile illness seizures may indeed represent a unique seizure category, with distinct clinical characteristics and epidemiology that warrant further investigation to better understand their etiology, impact on neuropsychological function, and implications for illness severity assessment and mortality prediction.
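As a rough numerical check on the comparison cited above (diarrheal illness in 44% of nonfebrile illness seizures vs. 16% of febrile seizures, p=0.001, PUBMED:15946338), the underlying 2x2 table can be approximately reconstructed from the reported group sizes (39 and 247 children) and re-tested. The Python sketch below is illustrative only; the counts of 17 and 40 are rounded back-calculations from the published percentages, not figures reported by the authors, so the resulting p-value is indicative rather than exact.

# Approximate re-test of the diarrheal illness comparison (PUBMED:15946338).
# Counts are back-calculated from the reported percentages and group sizes.
from scipy.stats import fisher_exact

nonfebrile_diarrhea, nonfebrile_total = 17, 39   # about 44% of 39
febrile_diarrhea, febrile_total = 40, 247        # about 16% of 247

table = [
    [nonfebrile_diarrhea, nonfebrile_total - nonfebrile_diarrhea],
    [febrile_diarrhea, febrile_total - febrile_diarrhea],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.5f}")  # OR near 4, p well below 0.01

With these approximate counts the association remains clearly significant, consistent with the reported p=0.001.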
Instruction: Is a flared stent effective for decreasing stent migration in canine prostatic urethra? Abstracts: abstract_id: PUBMED:18365816 Is a flared stent effective for decreasing stent migration in canine prostatic urethra? Background: Prostatic urethral stents are effective in relieving obstructions caused by benign prostatic hyperplasia (BPH). However, migration of these stents occurs frequently. Purpose: To evaluate the efficacy of a flared stent for decreasing the migration rate in comparison with a straight stent in a canine prostatic urethral model. Material And Methods: The flared stent (15 mm in diameter and 20 mm in length) was flared up to 19 mm at both ends to prevent migration. A straight stent with the same size was straight without flaring. Both stents were made of a nitinol wire filament and covered with an expanded polytetrafluoroethylene (ePTFE) membrane. The flared stent was inserted in the prostatic urethra of 10 dogs (group 1) and the straight stent in the prostatic urethra of 12 dogs (group 2). Follow-up retrograde urethrography (RUG) was performed 1, 4, and 8 weeks after stent placement. Fisher's exact test was used to compare the migration rate between the two groups. When stent migration occurred during the follow-up period, the same type of stent was inserted again. Results: Three of 10 (30.0%) flared stents migrated into the urinary bladder between 1 week and 4 weeks after placement. Seven of 12 (58.3%) straight stents migrated into the urinary bladder 1 (n = 3), 4 (n = 1), and 8 (n = 3) weeks following placement. The flared stent group showed lower migration rate than the straight stent group, but the migration rate did not reach statistical significance (P = 0.231). After the second stent placement, one of three (33.3%) flared stents migrated again after 4 weeks and two of seven straight stents (28.6%) after 1 week. Conclusion: Although the migration rate was not statistically significant, the flaring of the stent seemed to reduce the migration rate in comparison with straight stents in a canine prostatic urethral model. However, the migration rate of the flared stent was still high, and further developments are required to decrease migration rate. abstract_id: PUBMED:29975854 Migration resistance of esophageal stents: The role of stent design. Objective: Stenting is one of the major treatments for malignant esophageal cancer. However, stent migration compromises clinical outcomes. A flared end design of the stent diminishes its migration. The goal of this work is to quantitatively characterize stent migration to develop new strategies for better clinical outcomes. Methods: An esophageal stent with flared ends and a straight counterpart were virtually deployed in an esophagus with asymmetric stricture using the finite element method. The resulted esophagus shape, wall stress, and migration resistance force of the stent were quantified and compared. Results: The lumen gain for both the flared stent and the straight one exhibited no significant difference. The flared stent induced a significantly larger contact force and thus a larger stress onto the esophagus wall. In addition, more migration resistance force was required to pull the flared stent through the esophagus. This force was inversely related to the occurrence rate of stent migration. A doubled strut diameter also increased the migration resistance force by approximately 56%. An increased friction coefficient from 0.1 to 0.3 also boosted the migration resistance force by approximately 39%. 
Summary: The mechanical advantage of the flared stent was unveiled by the significantly increased contact force, which provided the anchoring effect to resist stent migration. Both the strut diameter and friction coefficient positively correlated with the migration resistance force, and thus were associated with a lower likelihood of stent migration. abstract_id: PUBMED:17436035 Evaluation of the effects of temporary covered nitinol stent placement in the prostatic urethra: short-term study in the canine model. Purpose: To evaluate the effect of temporary stent placement on the canine prostatic urethra. Methods: Retrievable PTFE-covered nitinol stents were placed in the prostatic urethras of 8 beagle dogs under fluoroscopic guidance. Retrograde urethrography was obtained before and after stenting. Retrograde urethrography and endoscopy were performed 1 and 2 months after deployment. The endoscopic degree of hyperplasia was rated on a scale of 0 to 4 (0 = absence, 4 = occlusion). On day 60, stents were removed and urethrography was performed immediately before euthanasia. Pathologic analysis was performed to determine the degree of glandular atrophy, periurethral fibrosis, and urethral dilation. Results: Stent deployment was technically successful in 7 animals, and failed in 1 dog due to a narrow urethral lumen. Complete migration was seen in 2 animals at 1 month, and an additional stent was deployed. On day 30, endoscopy showed slight hyperplasia (grade 1) in 3 animals. On day 60, moderate hyperplasia (grade 2) was evidenced in 4 cases. No impairment of urinary flow was seen during follow-up. Retrieval was technically easy to perform, and was successful in all dogs. The major histologic findings were chronic inflammatory cell infiltrates; prostate glandular atrophy, with a mean value of 1.86 (SD 0.90); periurethral fibrosis, with a mean ratio of 29.37 (SD 10.41); and dilatation of the prostatic urethra, with a mean ratio of 6.75 (SD 3.22). Conclusion: Temporary prostatic stent placement in dogs is safe and feasible, causing marked enlargement of the prostatic urethral lumen. Retrievable covered stents may therefore be an option for bladder outlet obstruction management in men. abstract_id: PUBMED:1706138 Prostatic urethra dilatation with the Gianturco self-expanding metallic stent: a feasibility study in cadaver specimens and dogs. In an effort to develop a transcatheter technique for dilatation of the prostatic urethra without the use of balloons, the feasibility of using Gianturco self-expanding stents was evaluated. Initially, eight human cadaveric prostatic urethras were stented to evaluate the ability of the stent to dilate the lumen. In all cases, the device attained its unconstrained diameter immediately on placement. Subsequently, stents were placed in the prostatic urethra of 12 dogs and followed up for 1 month (four dogs), 3 months (one dog), and 6 months (seven dogs). Five bare straight-end prostheses, one nylon-covered straight-end device, and six flared-end stents were used. Three of the bare straight-end stents migrated during the follow-up, whereas the nylon-covered and flared-end stents did not. Stent diameters greater than 1.3 times the urethral diameter caused moderate to marked edema and inflammation. After 6 months, white deposits were found on the solder points, presumably from electrolysis.
Our experience suggests that placement of Gianturco self-expanding stents may be a useful method of dilating and maintaining the luminal diameter of the prostatic urethra, although care must be taken to select the proper stent size. abstract_id: PUBMED:21116367 Removal of migrated metallic prostatic stent by holmium laser. A 90-year-old male with prostatic hyperplasia with a history of ischemic heart disease and right-sided hemiplegia had undergone a Urolume stent placement because of acute urinary retention 9 months earlier. The stent had migrated into the bladder causing dysuria and a poor stream of urine. We fragmented the prostatic stent by holmium (Ho:YAG) laser followed by a laser prostatectomy. After the procedure, the patient voided satisfactorily. abstract_id: PUBMED:20688535 Temporary placement of a covered, retrievable, barbed stent for the treatment of hormone-induced benign prostatic hyperplasia: technical feasibility and histologic changes in canine prostates. Purpose: To evaluate the technical feasibility of a covered, retrievable, barbed prostatic stent and to determine stent-induced histologic changes of the prostate in canine benign prostatic hyperplasia (BPH). Materials And Methods: BPH was induced in 8 male beagles by administration of dihydrotestosterone and beta-estradiol. Under fluoroscopy guidance, covered, retrievable, barbed stents were placed in the prostatic urethras. Urethrograms were obtained every three to 4 weeks until sacrifice. The stents were removed 8 weeks after placement. The dogs were sacrificed immediately (group A, n = 4) and 8 weeks after stent removal (group B, n = 4). Histologic changes of prostates were microscopically evaluated. Results: Stent placement was technically successful in all dogs. The diameters of the prostatic urethra were 14.1 mm ± 0.5 and 13.7 mm ± 0.5 on urethrograms obtained immediately after and 8 weeks after stent removal (P = .268). Stent removal was successful in all dogs except one, in which the stent was obstructed by stone formation. Small sandy stones were identified attached to the removed stents of two dogs. Histologic examination revealed extensive prostatic glandular atrophy involving 95.1% ± 2.2 (group A) and 94.5% ± 2.4 (group B) of the entire prostate (P = .773). Conclusions: A covered, retrievable, barbed prostatic stent seems to be feasible for use in hormone-induced canine BPH. The stent induced prostatic urethral dilatation, and extensive prostatic glandular atrophy persisted up to 8 weeks after stent removal. abstract_id: PUBMED:21814326 Migration of forgotten stent into renal pelvis. Stent migration is a well recognized complication of forgotten stents, but migration into the renal pelvis is rarely documented. We present a case of migration and coiling of a forgotten stent in the renal pelvis, and discuss briefly the etiological factors for the phenomenon and associated problems in management. abstract_id: PUBMED:22523616 Covered self expandable metallic stent with flared plastic one inside for pancreatic pseudocyst avoiding stent dislodgement. Endoscopic ultrasound-guided drainage has recently been recommended for increasing the drainage rate of endoscopically managed pancreatic fluid collections and decreasing the morbidity associated with conventional endoscopic trans-mural drainage. The type of stent used for endoscopic drainage is currently a major area of interest.
A covered self expandable metallic stent (CSEMS) is an alternative to conventional drainage with plastic stents because it offers the option of providing a larger-diameter access fistula for drainage, and may increase the final success rate. One problem with CSEMS is dislodgement, so a metallic stent with flared or looped ends at both extremities may be the best option. An 85-year-old woman with severe co-morbidity was treated with a percutaneous approach for a large (20 cm) pancreatic pseudocyst with corpusculated material inside. This approach failed. The patient was transferred to our institute for EUS-guided transmural drainage. EUS confirmed a large, anechoic cyst with hyperechoic material inside. Because the cyst was large and contained mixed and corpusculated fluid, we used a metallic stent for drainage. To avoid migration of the stent and potential mucosal growth above the stent, a plastic prosthesis (7 cm, 10 Fr) with flaps at the tips was inserted inside the CSEMS. Two months later an esophagogastroduodenoscopy was done, and showed patency of the SEMS and plastic stents, which were then removed with a polypectomy snare. The patient experienced no further problems. abstract_id: PUBMED:21455345 Stent-in-stent through a side hole to prevent biliary metallic stent migration. The covered self-expandable metallic stent (SEMS) has been developed to overcome the problem of tissue in-growth. However, stent migration is a well-known complication of covered SEMS placement. Use of a double pigtail stent to lock the movement of the SEMS and prevent migration has been advised by many experts. Unfortunately, in our case this technique led to an incidental upward migration of the SEMS. We used APC to create a side hole in the SEMS for plastic stent insertion as stent-in-stent. This led to a successful prevention of stent migration. abstract_id: PUBMED:25610053 Biliary stent migration with duodenal perforation. Intestinal perforation from a migrated biliary stent is a known complication of endoscopic biliary stent placement. We present a case of stent migration and resultant duodenal perforation after stent placement for a malignant biliary stricture in a 52-year-old woman. We review the current literature on the diagnosis and management of stent migration and intestinal perforation after endoscopic stent placement for biliary strictures. A plain abdominal radiograph is necessary for early diagnosis of biliary stent migration. If a stent becomes lodged in the gastrointestinal tract, endoscopic or operative extraction of the stent is necessary to prevent subsequent intestinal perforation and peritonitis. Intestinal perforation secondary to biliary stent dislocation should be considered in all patients presenting with fever and abdominal pain after biliary stent insertion. Any abnormality that prevents stent migration through the intestinal tract such as gastroenterostomy, abdominal wall hernia, extensive adhesions or colonic diverticula may be a contraindication for insertion of a plastic biliary stent because of increased perforation risk. Answer: The efficacy of a flared stent for decreasing migration in the canine prostatic urethra has been evaluated, and while the results suggest that flaring of the stent may reduce the migration rate compared to straight stents, the difference was not statistically significant. In a study where a flared stent was inserted in the prostatic urethra of 10 dogs, 30% of the flared stents migrated into the urinary bladder between 1 and 4 weeks after placement.
This was lower than the 58.3% migration rate observed with straight stents in 12 other dogs. However, the difference in migration rates between the flared and straight stent groups did not reach statistical significance (P = 0.231) (PUBMED:18365816). In a separate context, research on esophageal stents has shown that a flared end design can diminish stent migration by inducing a significantly larger contact force onto the esophagus wall, which provides an anchoring effect to resist migration. This suggests that the mechanical advantage of a flared stent design could potentially be applied to prostatic stents as well (PUBMED:29975854). Overall, while the flared stent design seems to have potential for reducing migration rates in the canine prostatic urethra, the evidence from the canine model study indicates that the reduction in migration was not statistically significant, and further developments may be required to effectively decrease the migration rate (PUBMED:18365816).
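Because the abstract reports the raw counts, the non-significant comparison summarised above (3 of 10 flared vs. 7 of 12 straight stents migrating, P = 0.231, PUBMED:18365816) can be reproduced directly with a two-sided Fisher's exact test. A minimal Python sketch, assuming only the counts given in the abstract, is shown below.

# Fisher's exact test on the stent migration counts from PUBMED:18365816.
from scipy.stats import fisher_exact

#                 migrated, did not migrate
flared_stents   = [3, 7]    # 3 of 10 flared stents migrated
straight_stents = [7, 5]    # 7 of 12 straight stents migrated

odds_ratio, p_value = fisher_exact([flared_stents, straight_stents], alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")  # p = 0.231, matching the abstract

The wide uncertainty that comes with only 22 animals explains why a roughly halved migration rate still fails to reach statistical significance.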
Instruction: Can endosonographers evaluate on-site cytologic adequacy? Abstracts: abstract_id: PUBMED:17531627 Can endosonographers evaluate on-site cytologic adequacy? A comparison with cytotechnologists. Background: On-site determination of cytologic adequacy increases the accuracy of EUS-guided FNA (EUS-FNA); however, on-site cytotechnologists are not available to all endosonographers. We hypothesize that experienced endosonographers can accurately assess whether an on-site FNA specimen is adequate. Objective: To determine the accuracy of on-site cytopathology interpretation of EUS-FNA specimens by comparing endosonographers with a cytotechnologist. Design: Prospective double-blind controlled trial. Setting: Academic medical center with a high-volume EUS practice. Patients: Consecutive patients undergoing EUS-FNA of lymph nodes or pancreas tumors. Main Outcome Measurements: Accuracy, sensitivity, and specificity of 3 endosonographers and 1 cytotechnologist for interpretation of cytologic specimen adequacy and diagnosis compared with a criterion standard of a board-certified cytopathologist. Results: There were 59 lymph node, 49 pancreas, and 9 liver specimens (117 total). For determination of adequacy, none of the endosonographers were statistically equivalent to the cytotechnologist (P=.004). For determination of suspicious/malignant versus benign specimens, all 3 endosonographers were inferior (P<.001) to the cytotechnologist. Limitations: This study represents a small group of trained endosonographers in a high-volume practice and may not be applicable to other settings. The sample size does not allow an accurate evaluation of different biopsy sites (eg, pancreas vs lymph node). Conclusions: Even trained endosonographers have variable and, in some cases, inferior abilities to interpret on-site cytologic adequacy compared with cytotechnologists. abstract_id: PUBMED:24896656 An expanded training program for endosonographers improved self-diagnosed accuracy of endoscopic ultrasound-guided fine-needle aspiration cytology of the pancreas. Objective: Rapid on-site evaluation (ROSE) of cytologic adequacy improves the diagnostic yield of endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA). However, on-site advice from a cytotechnologist or cytopathologist is not always available during EUS-FNA. To enhance endosonographers' ability to assess the adequacy of EUS-FNA specimens, we designed an intensive, 2-h interactive training program. The aim of this study was to determine the usefulness of the program. Methods: Four cytological pictures were selected by a trained cytotechnologist and board-certified cytopathologist from each of the seven patients who underwent EUS-FNA for pancreatic mass in Okayama University Hospital. In total, 28 pictures were used in this study. Twenty endosonographers and 14 cytologists with different levels of EUS-FNA experience evaluated cytological pictures independently before and after the training program. Results: Endosonographers' skill in evaluating the adequacy of EUS-FNA specimens was significantly improved after the completion of the training program (p < 0.001). In contrast, almost all cytologists correctly judged the adequacy of the specimens before taking the training program. Conclusions: This intensive, 2-h interactive training program is useful for endosonographers and capable of improving ROSE of EUS-FNA specimens. abstract_id: PUBMED:28966833 Successful Secure High-definition Streaming Telecytology for Remote Cytologic Evaluation. 
Background: The use of minimally invasive procedures to obtain material for diagnostic purposes has become more prevalent in recent years. As such, there is increased demand for immediate cytologic adequacy assessment of minimally invasive procedures. The array of different locations in which rapid on-site evaluation (ROSE) is expected requires an ever-increasing number of cytology personnel to provide support for adequacy assessment. In our study, we describe the implementation process of a telecytology (TC) system in a high case volume setting and evaluate the performance of this activity. Methods: We performed retrospectively an analysis of all consecutive remote TC ROSE evaluations obtained for 15 months. The specimens were evaluated using a TC system. The ROSE adequacy assessment obtained at the time of the procedure was compared to the final cytopathologist-rendered adequacy assessment when all the material was available for review, including the alcohol-fixed preparations. Results: A total of 8106 distinct cases were analyzed. TC-assisted preliminary adequacy assessment was highly concordant with the final cytopathologist-rendered adequacy assessment. Perfect concordance or accuracy was at 93.1% (7547/8106). The adequacy upgrade rate (inadequate specimen became adequate) was 6.8% (551/8106), and the initial adequacy downgrade (adequate specimen became inadequate) was <0.1% (8/8106). Conclusions: The TC outcome demonstrates high concordance between the initial adequacy assessment and final cytopathologist-rendered adequacy assessment. Adequacy upgrades were minor but, more importantly, our results demonstrate a minimal adequacy downgrade. The process implemented effectively eliminated the need for an attending pathologist to be physically present onsite during a biopsy procedure. abstract_id: PUBMED:34515217 The Association of "GOOP" on Gross Examination of Fine Needle Aspiration Samples and On-Site Adequacy. Background: Rapid on-site cytologic evaluation (ROSE) is not always available for fine needle aspiration (FNA) specimens. We have examined the relationships between the presence of "GOOP" (defined as gooey white material) on FNA aspirates, on-site adequacy (OSA), and diagnosis. Methods: Consecutive FNA samples obtained over the study interval were included. Samples were assessed macroscopically for the presence or absence of GOOP (GOOP+ or GOOP-). GOOP+ samples were further characterized as shiny (G+S) or cheesy (G+C). Gross descriptors were correlated with OSA and final diagnoses. Results: Of the 204 sites biopsied, 102 were malignant, 94 benign, and 8 nondiagnostic. The presence of GOOP was highly predictive for adequacy (positive predictive value 98%). While these correlations for GOOP positivity were significant, the absence of GOOP did not rule out adequacy or malignancy. The presence of GOOP was also significantly correlated with a malignant diagnosis. Conclusions: We have prospectively demonstrated that the presence of GOOP correlates strongly with adequacy. This may be of value for those for whom ROSE is not available and, when available, may help prioritize specimens for on-site review. abstract_id: PUBMED:33540414 Rapid On-Site Cytologic Evaluation: A Feasibility Study Using Ancillary Interventional Pulmonary Personnel. Background: Ancillary health professionals helping in a procedural service is a common practice everywhere. 
Objectives: This was a proof-of-concept study to assess feasibility of using ancillary personnel for rapid on-site cytologic evaluation (ROSE) at interventional pulmonary procedures. Methods: After a training interval, a respiratory therapist (RT) performed ROSE on consecutive interventional pulmonary specimens. Sample sites included lymph nodes, lung, liver, and the left adrenal gland. RT findings were subsequently correlated with blinded cytopathology-performed ROSE and with final histopathology results, with primary foci of adequacy and the presence or absence of malignancy. Results: Seventy consecutive cases involved 163 separate sites for ROSE analysis. Adequacy: There was a high level of concordance between RT-performed ROSE (RT-ROSE) and cytopathology ROSE (CYTO-ROSE). They agreed upon the adequacy of 159 specimens. The Cohen's κ coefficient ± asymptotic standard error (ASE) was 0.74 ± 0.175, with p < 0.0001. Malignancy: RT-ROSE concurred highly with CYTO-ROSE, with agreement on 150 (92%) of the 163 specimens. Cohen's κ coefficient ± ASE was 0.83 ± 0.045, with p < 0.0001. When the comparison was for malignancy by case rather than individual site, Cohen's κ coefficient ± ASE was 0.68 ± 0.08, with p < 0.0001. Conclusion: This study demonstrates that ancillary personnel supporting an interventional pulmonary service can be trained to perform initial ROSE. Cytopathology can be called after sampling and staining have produced adequate samples. This setup streamlines ROSE evaluation with regard to time and cost. abstract_id: PUBMED:24255627 Cytotechnologists and on-site evaluation of adequacy. While fine needle aspiration (FNA) is certainly not a new biopsy technique, recent developments in advanced imaging techniques, molecular testing, and targeted therapies have coincided with a rapid increase in the number of FNA procedures being performed. Concurrently, the demand for on-site evaluation of adequacy (OSEA) has also increased, outstripping the capacity of available cytopathologists at some institutions. Among the several alternatives to cytopathologist-performed OSEA, cytotechnologist-attended OSEA stands out because it preserves the representation of the pathology service at the time of the procedure. Herein, we review the current literature about OSEA and the necessity of cytotechnologists to expand access of this useful pathology service to a broader patient population. We also examine how cytotechnologists are likely to fit into the emerging practice of telecytology. abstract_id: PUBMED:34975043 Endosonographers performing on-site evaluation of solid pancreatic specimens for EUS-guided biopsy: A formal training method and learning curves. Objectives: This study aimed to examine the effects of a training program combining formal cytological curriculum and practical assessments on endosonographers and to determine how many operations were needed for training through learning curves. Methods: A formal cytological curriculum was implemented in November 2019 for four endosonographers. The competency of endosonographers before and after the curriculum was judged by quantitative scores. From December 2019 to October 2020, trainees independently examined the adequacy and atypia grade of pancreatic specimens acquired by EUS-guided-fine-needle biopsy based on specific atypical grading criteria. The accuracy of the adequacy and atypia assessment of each trainee was calculated, and learning curves were drawn to evaluate the improvement. 
Results: The median posttraining score improved by 87% from 49 to 91.5 out of 100. Ninety patients were enrolled in the practical assessments. The accuracy for specimen adequacy and atypia assessment of each trainee was 91.7%, 92.8%, 91.0%, and 89.3% and 80.0%, 82.1%, 81.0%, and 78.9%, respectively. The learning curves of each trainee showed a steady and significant improvement, and ninety cases were sufficient for satisfactory adequacy assessment. Conclusions: Basic knowledge of on-site cytopathologic evaluation can be gained through standardized and systematic training. Practical assessments showed that, at the completion of ninety cases, trained endosonographers reached a satisfactory level of recognizing specimen adequacy, but continued practice was needed for atypia assessment. abstract_id: PUBMED:28103654 Rapid On-Site Evaluation by Endosonographers during Endoscopic Ultrasonography-Guided Fine-Needle Aspiration for Diagnosis of Gastrointestinal Stromal Tumors. Background/aims: Endoscopic ultrasonography-guided fine-needle aspiration (EUS-FNA) has been used to diagnose gastrointestinal submucosal tumors (SMTs). Although rapid on-site evaluation (ROSE) has been reported to improve the diagnostic accuracy of EUS-FNA for pancreatic lesions, on-site cytopathologists are not routinely available. Given this background, the usefulness of ROSE by endosonographers themselves for pancreatic tumors has also been reported. However, ROSE by endosonographers for diagnosis of SMT has not been reported. The aim of this study was to evaluate the diagnostic accuracy of EUS-FNA with ROSE by endosonographers for SMT, focusing on diagnosis of gastrointestinal stromal tumor (GIST), compared with that of EUS-FNA alone. Methods: Twenty-two consecutive patients who underwent EUS-FNA with ROSE by endosonographers for SMT followed by surgical resection were identified. Ten historical control subjects who underwent EUS-FNA without ROSE were used for comparison. Results: The overall diagnostic accuracy for SMT was significantly higher in cases with than without ROSE (100% vs. 80%, p=0.03). The number of needle passes by FNA with ROSE by endosonographers tended to be fewer, although accuracy was increased (3.3±1.3 vs. 5.9±3.8, p=0.06). Conclusions: ROSE by endosonographers during EUS-FNA for SMT is useful for definitive diagnosis, particularly for GIST. abstract_id: PUBMED:31051702 Cytotechnologist-attended on-site evaluation of adequacy for fine-needle aspiration of bone and soft tissue neoplasms. Introduction: On-site evaluation of adequacy (OSEA) is commonly used to increase the adequacy rate of fine-needle aspiration (FNA) procedures. OSEA is increasingly necessary with the widespread use of image-guided procedures, which are expensive to perform and repeat. The increased demand for OSEA has prompted an increasing reliance on cytotechnologists for OSEA in many practices, including ours. However, the performance of cytotechnologists has not been compared with that of cytopathologists for OSEA in FNA of bone and soft tissue tumors. Materials And Methods: We retrospectively analyzed 10 years of data in which both cytotechnologists and cytopathologists performed OSEA for 632 bone and soft tissue tumor FNA. We calculated adequacy and accuracy statistics in conjunction with other variables including imaging modality, biopsy site, accompanying core biopsy, tissue type, final diagnosis, and number of smears.
Results: Of the 632 bone and soft tissue tumor FNA procedures, OSEA was performed in 223 (35.3%) by cytotechnologists and in 409 (64.7%) by cytopathologists. There was no difference in the adequacy downgrade rate for cytotechnologists versus cytopathologists during this study period (4.5 versus 2.4%, P = 0.23). The adequacy rates for procedures in which OSEA was performed by cytotechnologists and cytopathologists did not differ significantly (78.9 versus 84.1%, P = 0.13). Conclusions: Cytotechnologists and cytopathologists are comparably accurate in OSEA of bone and soft tissue neoplasms. abstract_id: PUBMED:31964550 Rapid on-site cytologic evaluation by pulmonologist improved diagnostic accuracy of endobronchial ultrasound-guided transbronchial biopsy. Background/purpose: Rapid on-site cytologic evaluation (ROSE) has been shown to improve the diagnostic accuracy of endobronchial ultrasound-guided transbronchial biopsy (EBUS-TBB). However, ROSE by a cytopathologist or cytotechnologist is not always available during the procedure. The purposes of this study were to verify that a pulmonologist, after receiving training in cytology, could accurately assess an EBUS-TBB specimen on-site, and to evaluate the contribution of ROSE to EBUS-TBB. Methods: A retrospective chart review of patients who underwent EBUS-TBB for diagnosis of peripheral pulmonary lesions (PPLs) from January 2014 to June 2017 was performed. PPLs without a malignant diagnosis were excluded. The ROSE result determined by a pulmonologist was compared to the formal imprint cytologic report and pathologic report. The diagnostic accuracy of EBUS-TBB was also compared between those with and without ROSE. Results: Two hundred ninety-three patients who underwent 336 EBUS-TBB procedures for PPL diagnosis and were found to have proven malignancy were enrolled. Eighty-six procedures were performed with ROSE. With the formal imprint cytologic diagnosis as the standard, ROSE had 96.9% sensitivity, 68.2% specificity, 89.9% positive predictive value (PPV), 88.2% negative predictive value (NPV), and 89.5% diagnostic accuracy. With the formal pathologic result as the standard, ROSE had 88.2% sensitivity, 80% specificity, 97.1% PPV, 47.1% NPV, and 87.2% diagnostic accuracy, respectively. The diagnostic accuracy was significantly higher when ROSE was performed during EBUS-TBB (88.4% vs 68.0%, P < 0.001). Conclusion: A trained pulmonologist can interpret adequately cytologic smears on-site and effectively improve the accuracy of EBUS-TBB in the diagnosis of PPLs. Answer: Endosonographers can evaluate on-site cytologic adequacy, but their ability to do so accurately may vary and is generally not equivalent to that of cytotechnologists. A study found that even trained endosonographers had variable and sometimes inferior abilities to interpret on-site cytologic adequacy compared with cytotechnologists (PUBMED:17531627). However, training programs have been shown to improve endosonographers' skills in evaluating the adequacy of endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) specimens. An intensive, 2-hour interactive training program significantly improved the endosonographers' ability to assess specimen adequacy (PUBMED:24896656). Additionally, a formal training method combining a cytological curriculum and practical assessments demonstrated that endosonographers could reach a satisfactory level of recognizing specimen adequacy after completing ninety cases (PUBMED:34975043). 
Moreover, other studies have shown that with proper training, non-cytopathologist personnel, such as respiratory therapists (PUBMED:33540414) and pulmonologists (PUBMED:31964550), can perform rapid on-site cytologic evaluation (ROSE) with high levels of accuracy. The presence of "GOOP" (gooey white material) on FNA aspirates has also been correlated with on-site adequacy and may be of value for those without ROSE availability (PUBMED:34515217). In summary, while endosonographers may not naturally possess the same level of skill as cytotechnologists in evaluating cytologic adequacy, with targeted training and experience, they can improve their accuracy and effectively perform on-site evaluations.
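Several of the agreement statistics quoted above, for example the respiratory therapist study reporting agreement on 159 of 163 adequacy calls with a Cohen's kappa of about 0.74 (PUBMED:33540414), rest on chance-corrected agreement rather than raw concordance. The Python sketch below shows the standard kappa calculation for a two-rater, two-category table; the cell counts are hypothetical, chosen only to be consistent with the published totals (163 sites, 159 agreements), since the abstracts do not report the full confusion matrices.

# Cohen's kappa for two raters making a binary call (adequate vs. inadequate).
# The table is hypothetical but consistent with 163 sites and 159 agreements.
def cohen_kappa(table):
    (a, b), (c, d) = table                      # rows: rater 1, columns: rater 2
    n = a + b + c + d
    p_observed = (a + d) / n                    # raw agreement
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

hypothetical_table = [[153, 2],   # both adequate / rater 1 adequate, rater 2 inadequate
                      [2, 6]]     # rater 1 inadequate, rater 2 adequate / both inadequate
print(round(cohen_kappa(hypothetical_table), 2))  # about 0.74 despite 97.5% raw agreement

The point of the example is that raw agreement close to 98% can still correspond to a kappa in the 0.7 range once chance agreement on the dominant "adequate" category is removed, which is why these studies report kappa alongside simple concordance.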
Instruction: Does neighbourhood deprivation cause poor health? Abstracts: abstract_id: PUBMED:30472036 Comparing subjective and objective neighbourhood deprivation and their association with health over time among older adults in England. The importance of neighbourhood on individual health is widely documented. Less is known about the relative role of objective and subjective reports of neighbourhood conditions, how their effect on health changes as people age, and whether they moderate each other's impact on health. This study uses the English Longitudinal Study of Ageing (ELSA) to determine whether older adults report worse self-rated health as they age, and whether this differs between objective and subjective measures of neighbourhood. ELSA data contain 53,988 person-years across six waves collected biannually between 2002-03 and 2012-13. Objective neighbourhood conditions are measured by the 2004 Index of Multiple Deprivation, and subjective neighbourhood conditions are captured by a summative neighbourhood dissatisfaction score. We find both objective and subjective neighbourhood composite scores independently predict poor health. There is no change over time in the probability of reporting poor health by baseline objective or subjective neighbourhood scores, suggesting neighbourhood effects do not compound as older adults age. There is no moderating effect of area dissatisfaction on the relationship between neighbourhood deprivation and health. The findings provide little support for causal neighbourhood effects operating in later life and indicate different causal pathways through which objective and subjective neighbourhood deprivation impact on health. abstract_id: PUBMED:36612807 Association between Neighbourhood Deprivation Trajectories and Self-Perceived Health: Analysis of a Linked Survey and Health Administrative Data. Life course exposure to neighbourhood deprivation may have a previously unstudied relationship with health disparities. This study examined the association between neighbourhood deprivation trajectories (NDTs) and poor reported self-perceived health (SPH) among Quebec's adult population. Data of 45,990 adults with complete residential address histories from the Care-Trajectories-Enriched Data cohort, which links Canadian Community Health Survey respondents to health administrative data, were used. Accordingly, participants were categorised into nine NDTs (T1 (Privileged Stable)-T9 (Deprived Stable)). Using multivariate logistic regression, the association between trajectory groups and poor SPH was estimated. Of the participants, 10.3% (95% confidence interval [CI]: 9.9-10.8) had poor SPH status. This proportion varied considerably across NDTs: From 6.4% (95% CI: 5.7-7.2) for Privileged Stable (most advantaged) to 16.4% (95% CI: 15.0-17.8) for Deprived Stable (most disadvantaged) trajectories. After adjustment, the likelihood of reporting poor SPH was significantly higher among participants assigned to a Deprived Upward (odds ratio [OR]: 1.77; 95% CI: 1.48-2.12), Average Downward (OR: 1.75; CI: 1.08-2.84) or Deprived trajectory (OR: 1.81; CI: 1.45-2.86), compared to the Privileged trajectory. Long-term exposure to neighbourhood deprivation may be a risk factor for poor SPH. Thus, NDT measures should be considered when selecting a target population for public-health-related interventions.
abstract_id: PUBMED:34639611 Life Course Neighbourhood Deprivation and Self-Rated Health: Does It Matter Where You Lived in Adolescence and Do Neighbourhood Effects Build Up over Life? There is an overreliance on concurrent neighbourhood deprivation as a determinant of health. Only a small section of the literature focuses on the cumulative exposure of neighbourhood deprivation over the life course. This paper uses data from the 1958 National Child Development Study, a British birth cohort study, linked to 1971-2011 Census data at the neighbourhood level to longitudinally model self-rated health between ages 23 and 55 by Townsend deprivation score between ages 16 and 55. Change in self-rated health is analysed using ordinal multilevel models to test the strength of association with neighbourhood deprivation at age 16, concurrently and cumulatively. The results show that greater neighbourhood deprivation at age 16 predicts worsening self-rated health between ages 33 and 50. The association with concurrent neighbourhood deprivation is shown to be stronger compared with the measurement at age 16 when both are adjusted in the model. The concurrent association with change in self-rated health is explained by cumulative neighbourhood deprivation. These findings suggest that neglecting exposure to neighbourhood deprivation over the life course will underestimate the neighbourhood effect. They also have potential implications for public policy suggesting that neighbourhood socioeconomic equality may bring about better population health. abstract_id: PUBMED:30069793 The relationship of neighbourhood-level material and social deprivation with health-related quality of life. Purpose: To examine the relationship of neighbourhood-level material and social deprivation with health-related quality of life, measured by the EQ-5D-5L, in the general adult population. Methods: A sample of 11,835 adults living in Alberta, Canada was drawn from three combined annual Health Quality Council of Alberta Satisfaction and Experiences with Health Care Services surveys from 2012 to 2016. Neighbourhood-level material and social deprivation indices were derived using the Pampalon index and the 2006 Canadian census. The EQ-5D-5L dimensions, index and VAS scores were compared across the deprivation indices quintiles in the overall sample and by participants' sub-groups. Differences were tested using ANOVA or Chi-square test as appropriate. Multivariate linear regression models were conducted to examine the independent association of material and social deprivation with the EQ-5D-5L index and VAS scores, and multinomial logistic regression models with each of the EQ-5D-5L dimensions. Results: Respondents in higher material or social deprivation categories had lower EQ-5D-5L index and VAS scores than those in the least deprived categories. Additionally, respondents with higher material deprivation were more likely to report problems on mobility, usual activities, and pain/discomfort; those with higher social deprivation were more likely to report problems on mobility, self-care, usual activities, and anxiety/depression. Conclusion: Higher neighbourhood-level material and social deprivation is significantly associated with lower health-related quality of life in the general adult population. Examining the factors leading to this inequity in health between individuals living in the least and most deprived neighbourhoods is imperative to mitigating these inequities. 
abstract_id: PUBMED:25878354 Does neighbourhood deprivation cause poor health? Within-individual analysis of movers in a prospective cohort study. Background: Neighbourhood deprivation has been associated with poor health. The evidence for social causation, however, remains scarce because selective residential mobility may also create neighbourhood differences. The present study examined whether individuals had poorer health when they were living in a deprived neighbourhood compared to another time when the same individuals were living in a less deprived neighbourhood. Methods: Participants were from the British Household Panel Survey prospective cohort study with 18 annual measurements of residential location and self-reported health outcomes between 1991 and 2009 (n=137 884 person-observations of 17 001 persons in England). Neighbourhood deprivation was assessed concurrently with health outcomes using the Index of Multiple Deprivation at the geographically detailed level of Lower Layer Super Output Areas. The main analyses were replicated in subsamples from Scotland (n=4897) and Wales (n=4442). Multilevel regression was used to separate within-individual and between-individuals associations. Results: Neighbourhood deprivation was associated with poorer self-rated health, and with higher psychological distress, functional health limitations and number of health problems. These associations were almost exclusively due to differences between different individuals rather than within-individual variations related to different neighbourhoods. By contrast, poorer health was associated with lower odds of moving to less deprived neighbourhoods among movers. The analysis was limited by the restricted within-individual variation and measurement imprecision of neighbourhood deprivation. Conclusions: Individuals living in deprived neighbourhoods have poorer health, but it appears that neighbourhood deprivation is not causing poorer health of adults. Instead, neighbourhood health differentials may reflect the more fundamental social inequalities that determine health and ability to move between deprived and non-deprived neighbourhoods. abstract_id: PUBMED:33495056 Neighbourhood deprivation effects on young people's mental health and well-being: A systematic review of the literature. Since the growth of research into neighbourhood effects on young people's health in the 1980s, there have been major societal changes and scientific methodological advancements. In this systematic review we will, therefore, discuss the recent (>2009) literature on the association between neighbourhood deprivation and young people's (0-19 years old) mental health and well-being. We focus on whether neighbourhood deprivation effects exist, and how and for whom the neighbourhood matters. Together, the thirty studies included in the review indicate that overall there are neighbourhood effects on young people's mental health and well-being. The comparison of results from these studies suggests that such associations were more commonly found for well-being and externalising problem behaviour rather than internalising problem behaviour. Also, mental health and well-being seemed to be more often associated with the neighbourhood social environment than neighbourhood socio-economic status and neighbourhood disorder. 
Studies investigating mediating processes underlying the linkage between neighbourhood deprivation and mental health and well-being were rare, although there was some evidence that processes within the family and peer context are important mechanisms in this linkage. Inconsistent evidence was found regarding the moderating role of age, gender, and ethnicity. There are ongoing challenges of researching the how and for whom neighbourhoods are important. We should work towards rigorous theory and evidence on how different features of residential contexts matter and on differential exposure and vulnerability to these contexts. abstract_id: PUBMED:12777420 Neighbourhood deprivation and health: does it affect us all equally? Background: Neighbourhood socioeconomic status (SES) may affect rich and poor residents differentially. Two models are proposed. Model 1: living in a non-deprived neighbourhood is better for health because better collective material and social resources are available. Model 2: being poor (rich) relative to the neighbourhood average is associated with worse (better) health because of the discrepancy between an individual's situation and those around them. Methods: Individual data from the Whitehall II study covering health, SES, and perceived status were linked to census data on neighbourhood deprivation. Results: Both individual and neighbourhood deprivation increased the risk of poor general and mental health. There was a suggestion that the effect of living in a deprived area was more marked for poorer individuals, although interactions were not statistically significant. Poor people in poor neighbourhoods reported more financial and neighbourhood problems and rated themselves lowest on the ladder of society. Conclusions: We found no evidence that personal poverty combined with affluent neighbourhood had negative health consequences. Rather, living in a deprived neighbourhood may have the most negative health effects on poorer individuals, possibly because they are more dependent on collective resources in the neighbourhood. abstract_id: PUBMED:29775832 Stressed out? An investigation of whether allostatic load mediates associations between neighbourhood deprivation and health. Deprived neighbourhoods have long been associated with poorer health outcomes. However, many quantitative studies have not evidenced the mechanisms through which place 'gets under the skin' to influence health. The increasing prevalence of biosocial data provides new opportunities to explore these mechanisms and incorporate them into models of contextual effects. The stress pathway is a key biosocial mechanism; however, few studies have explicitly tested it in neighbourhood associations. This paper addresses this gap by investigating whether allostatic load, a biological response to chronic stress, mediates relationships of neighbourhood deprivation to physical and mental health. Data from UK Understanding Society is used to undertake a multilevel mediation analysis. Allostatic load is found to mediate the association between neighbourhood deprivation and health, substantiating the biological mechanism of the stress pathway. More deprived areas are associated with higher allostatic load, and in turn worse allostatic load relates to poorer physical and mental health. Allostatic load is a stronger mediator of physical health than mental health, suggesting the stress pathway is more pertinent to explaining physical health gradients.
Heterogeneity in the results between physical and mental health suggests more research is needed to disentangle the biosocial processes that could be important to health and place relationships. abstract_id: PUBMED:37565544 Neighbourhood deprivation and type 2 diabetes in patients with bipolar disorder: A nationwide follow-up study. Patients with bipolar disorder have higher rates of type 2 diabetes (T2D) compared to the general population. Neighbourhood deprivation is associated with T2D and bipolar disorder. The aim of this study was to explore the potential effect of neighbourhood deprivation on incident T2D in patients with bipolar disorder. This nationwide open cohort study (1997-2018) included adults in Sweden ≥20 years with bipolar disorder (90,780 patients) to examine the subsequent risk of T2D. The association between neighbourhood deprivation and T2D was explored using Cox regression analysis, with hazard ratios (HRs) and 95% confidence intervals (95% CIs). All models were conducted in both men and women and adjusted for individual-level sociodemographic factors and comorbidities. Neighbourhood deprivation was significantly associated with T2D in patients with bipolar disorder. The HRs were 1.61 (95% CI 1.40-1.86) for men and 1.83 (1.60-2.10) for women living in high deprivation neighbourhoods compared to those from low deprivation neighbourhoods. After adjustment, these results remained significant: 1.35 (1.17-1.56) in men and 1.39 (1.20-1.60) in women living in high deprivation neighbourhoods. The suggested graded association of higher incident T2D among patients with bipolar disorder, observed when levels of neighbourhood deprivation increased, raises important clinical and public health concerns. The results may help develop a contextual approach to prevention of T2D in patients with bipolar disorder that includes the neighbourhood environment. abstract_id: PUBMED:30253698 Heat wave-related mortality in Sweden: A case-crossover study investigating effect modification by neighbourhood deprivation. Aims: The present study aimed to investigate if set thresholds in the Swedish heat-wave warning system are valid for all parts of Sweden and if the heat-wave warning system captures a potential increase in all-cause mortality and coronary heart disease (CHD) mortality. An additional aim was to investigate whether neighbourhood deprivation modifies the relationship between heat waves and mortality. Methods: From 1990 until 2014, in 14 municipalities in Sweden, we collected data on daily maximum temperatures and mortality for the five warmest months. Heat waves were defined according to the categories used in the current Swedish heat-wave warning system. Using a case-crossover approach, we investigated the association between heat waves and mortality in Sweden, as well as a modifying effect of neighbourhood deprivation. Results: On a national as well as a regional level, heat waves significantly increased both all-cause mortality and CHD mortality by approximately 10% and 15%, respectively. While neighbourhood deprivation did not seem to modify heat wave-related all-cause mortality, it did seem to modify the risk of heat wave-related CHD mortality. Conclusions: It may not be appropriate to assume that heat waves in Sweden will have the same impact in a northern setting as in a southern, or that the impact of heat waves will be the same in affluent and deprived neighbourhoods. When designing and implementing heat-wave warning systems, neighbourhood, regional and national information should be incorporated.
Answer: The relationship between neighbourhood deprivation and health outcomes has been extensively studied, but the evidence regarding causation is complex and multifaceted. Several studies have found associations between neighbourhood deprivation and poorer health outcomes. For instance, individuals living in deprived neighbourhoods have been found to have poorer self-rated health, higher psychological distress, functional health limitations, and a greater number of health problems (PUBMED:25878354). Similarly, higher neighbourhood-level material and social deprivation is significantly associated with lower health-related quality of life in the general adult population (PUBMED:30069793). However, the causality of this relationship is not straightforward. Some studies suggest that the associations between neighbourhood deprivation and health are not solely due to the neighbourhood itself causing poorer health. Instead, they may reflect broader social inequalities that influence both health and the ability to move between different neighbourhoods (PUBMED:25878354). Moreover, the effect of neighbourhood deprivation on health does not appear to compound as older adults age, indicating that the impact of neighbourhood conditions may not worsen over time for individuals (PUBMED:30472036). Life course exposure to neighbourhood deprivation has been shown to have a relationship with health disparities, with long-term exposure to neighbourhood deprivation being a risk factor for poor self-perceived health (PUBMED:36612807). Additionally, greater neighbourhood deprivation at a young age predicts worsening self-rated health later in life, suggesting that cumulative exposure to neighbourhood deprivation over the life course is an important factor (PUBMED:34639611). The stress pathway has been proposed as a mechanism through which neighbourhood deprivation could influence health, with allostatic load (a biological response to chronic stress) mediating the association between neighbourhood deprivation and health outcomes (PUBMED:29775832). Furthermore, neighbourhood deprivation has been associated with higher rates of type 2 diabetes in patients with bipolar disorder, suggesting that the neighbourhood environment can have specific health implications for certain populations (PUBMED:37565544). In summary, while there is evidence of an association between neighbourhood deprivation and poorer health outcomes, the causality is not definitive. The relationship is likely influenced by a range of factors, including broader social inequalities, life course exposure, stress pathways, and individual health conditions. The evidence suggests that neighbourhood deprivation is an important factor in health disparities, but it may not be the sole cause of poor health.
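The causal question above hinges on the within-individual versus between-individuals decomposition used in the movers analysis (PUBMED:25878354): person-mean-centred deprivation captures the effect of the same person living in more or less deprived neighbourhoods over time, while the person mean captures stable differences between people. The sketch below illustrates that decomposition on simulated panel data; the variable names and the data-generating assumptions are hypothetical and are not taken from the study.

```python
# Minimal sketch of a within-between ("hybrid") mixed model on simulated
# panel data; illustrative only, not the study's own code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_person, n_wave = 500, 6
person = np.repeat(np.arange(n_person), n_wave)
stable_deprivation = rng.normal(size=n_person)            # between-person component
deprivation = stable_deprivation[person] + rng.normal(scale=0.3, size=person.size)
frailty = rng.normal(size=n_person)                       # unobserved person effect
# By construction, health tracks who the person is (between), not their moves (within)
health = -0.5 * stable_deprivation[person] + frailty[person] + rng.normal(size=person.size)

df = pd.DataFrame({"person": person, "deprivation": deprivation, "health": health})
df["dep_between"] = df.groupby("person")["deprivation"].transform("mean")
df["dep_within"] = df["deprivation"] - df["dep_between"]   # person-mean centring

model = smf.mixedlm("health ~ dep_within + dep_between", df, groups=df["person"]).fit()
print(model.summary())   # expect between-coefficient near -0.5, within-coefficient near 0
```

In this simulation the between-person coefficient is clearly negative while the within-person coefficient is near zero, which is the pattern the movers study interprets as selection and broader social inequality rather than a causal neighbourhood effect.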
Instruction: Can order of report prevent satisfaction of search in abdominal contrast studies? Abstracts: abstract_id: PUBMED:15691728 Can order of report prevent satisfaction of search in abdominal contrast studies? Rationale And Objective: A previous receiver operating characteristic (ROC) study showed a systematic shift in decision thresholds for detecting plain film abnormalities on contrast examinations rather than plain radiographs. A previous eye-position study showed that this shift was based on a relative visual neglect of plain film regions on the contrast studies. We now determine whether an intervention that changes visual search can reduce this search-based satisfaction of search effect in contrast studies of the abdomen. Materials And Methods: The authors measured detection of 23 plain film abnormalities in 44 patients who had plain film and contrast examinations. In 2 experiments, each plain-film and contrast study was examined independently in different sessions with observers providing a confidence rating of abnormality for each interpretation. There were 13 observers in the first experiment and 10 in the second experiment. The intervention required that for the contrast studies, observers first report abnormalities in the noncontrast region of the radiograph before reporting contrast findings. ROC curve areas for each observer in each treatment condition were estimated by using a proper ROC model. The analysis focused on changes in decision thresholds among the treatment conditions. Results: The SOS effect on decision thresholds in abdominal contrast studies was replicated. Although reduced, the shift in decision thresholds in detecting plain film abnormalities on contrast examinations remained when observers were required to report those abnormalities before contrast findings. Conclusion: Reporting plain film abnormalities before reporting abnormalities demonstrated by contrast reduced somewhat the satisfaction of search effect on decision thresholds produced by a visual neglect of noncontrast regions on contrast examinations. This suggests that interventions that direct visual search do not offer protection against satisfaction of search effects that are based on faulty visual search. abstract_id: PUBMED:8034444 Satisfaction of search in the detection of plain-film abnormalities in abdominal contrast studies. Rationale And Objectives: Satisfaction of search (SOS) occurs when a lesion is "missed" after detecting another lesion in the same radiograph. The authors investigated the SOS effect in abdominal contrast studies. Methods: The authors measured detection of 23 plain film abnormalities in 43 patients who had plain film and contrast examinations. Each plain-film and contrast study was examined independently by 10 radiologists in two sessions, with receiver operating characteristic (ROC) curve areas estimated with the computer program RSCORE-J (University of Iowa, Iowa City, IA) for each condition. Results: Observers more often missed plain film abnormalities present on contrast studies but also made fewer false-positive (FP) responses. There was no change in ROC area, but decision criteria grew more conservative. Conclusions: The reduction of detecting plain-film abnormalities in contrast examinations differs from the SOS effect of other imaging studies. The reduction in true-positive (TP) and false-positive (FP) rates suggests that a different cause may underlie these misses. 
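Both abstracts above summarize observer performance with ROC curve areas estimated from confidence ratings and interpret the SOS effect as a shift in the decision criterion rather than a loss of discrimination. The sketch below illustrates that distinction on simulated ratings; it is not the proper ROC model (e.g., RSCORE-J) used in the studies, and the rating distributions are assumed purely for illustration.

```python
# Minimal sketch: ROC area from confidence ratings, and how a stricter decision
# threshold lowers both true- and false-positive fractions without changing the area.
import numpy as np

rng = np.random.default_rng(1)
ratings_abnormal = rng.normal(loc=1.0, size=200)   # ratings on cases with a plain-film lesion
ratings_normal = rng.normal(loc=0.0, size=200)     # ratings on lesion-free cases

def auc(pos, neg):
    """Empirical ROC area via the Mann-Whitney statistic."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

def operating_point(pos, neg, threshold):
    """True- and false-positive fractions at a given report threshold."""
    return (pos >= threshold).mean(), (neg >= threshold).mean()

print("ROC area:", round(auc(ratings_abnormal, ratings_normal), 3))
for thr in (0.0, 1.0):   # a more conservative criterion lowers both TPF and FPF
    tpf, fpf = operating_point(ratings_abnormal, ratings_normal, thr)
    print(f"threshold {thr:+.1f}: TPF={tpf:.2f}, FPF={fpf:.2f}")
```

With these assumed distributions the area stays roughly constant (about 0.76) wherever the report threshold is placed, while moving the threshold upward lowers both the true- and false-positive fractions, which mirrors the more conservative criterion reported for contrast studies.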
abstract_id: PUBMED:8923900 Cause of satisfaction of search effects in contrast studies of the abdomen. Rationale And Objectives: Extraintestinal abnormalities visible without contrast material on abdominal radiographs are reported less frequently when contrast examinations are performed. Gaze dwell time was used to determine whether this difference is due to failure by observers to scan plain-film regions of contrast studies or discounting of plain-film abnormalities that were actually scanned. Methods: Patients were included whose contrast studies had elicited the largest reductions in positive responses compared with their plain-film studies in a previous detection experiment. Gaze of 10 radiologists was studied. Results: Significantly less time was spent gazing at non-contrast regions of contrast examinations than at the corresponding regions of radiographs. Errors with radiographs were based primarily on failures of recognition and decision making, whereas errors with contrast studies were based primarily on faulty scanning. Conclusion: Satisfaction of search errors on contrast examinations are caused by reduction in scanning of noncontrast regions. abstract_id: PUBMED:31300240 Patients' satisfaction after abdominal etching. Introduction: The aim of this study is to analyze patients' satisfaction after aesthetic abdominal etching, alone or in association with an abdominoplasty. Materials And Methods: All records of patients who underwent abdominal etching between 2016 and 2017 were analyzed. Eligible patients were contacted by telephone and completed a satisfaction questionnaire. The patients were then divided into 3 groups according to the operative indication: aesthetic abdominal etching, abdominal etching for breast lipofilling and abdominal etching associated with an abdominoplasty. Results: Twenty-two out of 30 patients agreed to answer the questionnaire: 6 in the aesthetic abdominal etching group, 9 in the breast lipofilling group and 7 in the abdominoplasty group. For 41% of patients, the result was entirely in line with their expectations. It was fairly consistent in 50% of cases and non-compliant in 9% of cases. Fifty percent of the patients were completely satisfied with the stability of the result over time and 45% were quite satisfied, for an overall satisfaction of 95%. Fourteen percent of patients rated their result as excellent, 41% as very good, 32% as good and 14% as average. No patient rated his result as mediocre. For 96% of patients, the choice of intervention was a good or very good decision and 77% would do it again without hesitation. Conclusion: Abdominal etching provides high overall patient satisfaction. It has a positive impact on the patient's life, with a low risk of complications. abstract_id: PUBMED:31227994 "Satisfaction" in search: Individuals' own search expectations predict their errors in multiple-target search. When people search for multiple targets in a display, finding one target hinders their ability to find additional targets. These errors were originally proposed to stem from a "satisfaction" with finding a first target that leads people to prematurely stop searching. However, empirical evidence for this premise has been elusive, prompting consideration of other theories. We returned to the satisfaction proposal and assessed whether people generate expectations regarding the likelihood of multiple targets that lead to search biases that, in turn, predict the rates at which additional targets are missed.
Participants searched for one or two targets among distractors. To measure accuracy, most trials allowed search to progress to completion. The remaining trials terminated when participants had found their first target. In these cases, participants guessed whether an additional (unfound) target was present. The time needed to find a first target was inversely related to the searchers' expectations that a second target would be present. These expectations underestimated objective reality, and the strength of an individual's one-target bias was directly related to his or her likelihood of missing subsequent targets. Thus, people's expectations, based on their own behavior, likely impacted search performance, providing a novel mechanistic explanation for the previously posited "satisfaction-of-search" errors. abstract_id: PUBMED:34455466 From "satisfaction of search" to "subsequent search misses": a review of multiple-target search errors across radiology and cognitive science. For over 50 years, the satisfaction of search effect has been studied within the field of radiology. Defined as a decrease in detection rates for a subsequent target when an initial target is found within the image, these multiple target errors are known to underlie errors of omission (e.g., a radiologist is more likely to miss an abnormality if another abnormality is identified). More recently, they have also been found to underlie lab-based search errors in cognitive science experiments (e.g., an observer is more likely to miss a target 'T' if a different target 'T' was detected). This phenomenon was renamed the subsequent search miss (SSM) effect in cognitive science. Here we review the SSM literature in both radiology and cognitive science and discuss: (1) the current SSM theories (i.e., satisfaction, perceptual set, and resource depletion theories), (2) the eye movement errors that underlie the SSM effect, (3) the existing efforts tested to alleviate SSM errors, and (4) the evolution of methodologies and analyses used when calculating the SSM effect. Finally, we present the attentional template theory, a novel mechanistic explanation for SSM errors, which ties together our current understanding of SSM errors and the attentional template literature. abstract_id: PUBMED:29097008 Study of satisfaction concerning the navel after abdominal dermolipectomy with transposition: A report of 96 cases. Goal Of The Study: The umbilicus has a major role in the aesthetics of the anterior abdominal wall. Many publications deal with abdominal dermolipectomies but few focus on umbilicoplasty. However, these are essential in assessing the aesthetic result. The "aile de mouette" (gull-wing) umbilicoplasty used in our department is reliable and easily reproducible. In this article, we evaluate the satisfaction of patients who underwent abdominal dermolipectomy with this transposition technique. Materials And Methods: In the plastic surgery department of the Saint-Louis Hospital in Paris, we carried out a retrospective study of patients undergoing abdominal dermolipectomy with transposition of the umbilicus, between 1 January 2012 and 31 December 2012. All patients were operated on according to our umbilicoplasty technique: V-shaped disinsertion of the umbilicus, reinsertion of the umbilicus in "aile de mouette", and periumbilical defatting associated with plication of the umbilical stalk. Complications were identified from patients' medical records, and satisfaction was assessed by a telephone questionnaire. Results: Ninety-six patients were included.
No patient presented with umbilical necrosis. The overall result of umbilical transposition was considered good to excellent for 92.7% of patients. Conclusion: Gull-wing ("aile de mouette") umbilicoplasty has many advantages: it is a simple, easily reproducible, and reliable technique, and most patients are very satisfied with the result. abstract_id: PUBMED:28353059 Getting satisfied with "satisfaction of search": How to measure errors during multiple-target visual search. Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies. abstract_id: PUBMED:33500836 The Search Patterns of Abdominal Imaging Subspecialists for Abdominal Computed Tomography: Toward a Foundational Pattern for New Radiology Residents. Objectives: The routine search patterns used by subspecialty abdominal imaging experts to inspect the image volumes of abdominal/pelvic computed tomography (CT) have not been well characterized or rendered in practical or teachable terms. The goal of this study is to describe the search patterns used by experienced subspecialty imagers when reading a normal abdominal CT at a modern picture archiving and communication system workstation, and utilize this information to propose guidelines for residents as they learn to interpret CT during training. Material And Methods: Twenty-two academic subspecialists enacted their routine search pattern on a normal contrast-enhanced abdominal/pelvic CT study under standardized display parameters. Readers were told that the scan was normal and then asked to verbalize where their gaze centered and moved through the axial, coronal, and sagittal image stacks, demonstrating eye position with a cursor as needed. A peer coded the reported eye gaze movements and scrilling behavior. Spearman correlation coefficients were calculated between years of professional experience and the numbers of passes through the lung bases, liver, kidneys, and bowel. Results: All readers followed an initial organ-by-organ approach. Larger organs were examined by drilling, while smaller organs were examined by oscillation or scanning.
Search elements were classified as drilling, scanning, oscillation, and scrilling (scan drilling); these categories were parsed as necessary. The greatest variability was found in the examination of the body wall and bowel/mesentery. Two modes of scrilling were described, and these were classified as roaming and zigzagging. The years of experience of the readers did not correlate with the number of passes made through the lung bases, liver, kidneys, or bowel. Conclusion: Subspecialty abdominal radiologists negotiate through the image stacks of an abdominal CT study in broadly similar ways. Collation of the approaches suggests a foundational search pattern for new trainees. abstract_id: PUBMED:35040101 Contrast-enhanced ultrasound in pediatric blunt abdominal trauma: a systematic review. Purpose: Intra-abdominal injury is a major cause of morbidity in children. Computed tomography (CT) is the reference standard for the evaluation of hemodynamically stable abdominal trauma. CT has an increased risk of long-term radiation-induced malignancies and a possible risk associated with the use of iodinated contrast media. Contrast-enhanced ultrasound (CEUS) might represent an alternative to CT in stable children with blunt abdominal trauma (BAT). Nonetheless, CEUS in pediatrics remains limited by the lack of strong evidence. The purpose of this study was to offer a systematic review on the use of CEUS in pediatric abdominal trauma. Methods: Electronic search of PubMed, EMBASE and Cochrane databases of studies investigating CEUS for abdominal trauma in children. The risk of bias was assessed using the ROBINS-I tool. Results: This systematic review included 7 studies. CEUS was performed with different ultrasound equipment, always with a curvilinear transducer. Six out of seven studies used a second-generation contrast agent. No immediate adverse reactions were reported. The dose of contrast agent and the scanning technique varied between studies. All CEUS exams were performed by radiologists, in the radiology department or at the bedside. No standard training was reported to become competent in CEUS. The ranges of sensitivity and specificity of CEUS were 85.7 to 100% and 89 to 100%, respectively. Conclusion: CEUS appears to be safe and accurate to identify abdominal solid organ injuries in children with BAT. Further research is necessary to assess the feasibility of CEUS by non-radiologists, the necessary training, and the benefit-cost ratio of CEUS as a tool to potentially reduce CT scans. Answer: The order of report can somewhat reduce the satisfaction of search (SOS) effect in abdominal contrast studies, but it does not completely prevent it. A study by Berbaum et al. (PUBMED:15691728) found that when observers were required to report plain film abnormalities before reporting abnormalities demonstrated by contrast, there was a reduction in the SOS effect on decision thresholds. However, the shift in decision thresholds for detecting plain film abnormalities on contrast examinations remained, albeit reduced. This suggests that while interventions that direct visual search can mitigate the SOS effect to some extent, they do not offer full protection against SOS effects that are based on faulty visual search.
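Several of the abstracts in this record (PUBMED:31227994, PUBMED:34455466, PUBMED:28353059) turn on how subsequent search miss (SSM) errors are quantified. One common measurement compares second-target detection on dual-target trials, conditional on the first target having been found, with detection on single-target trials. The sketch below is a minimal illustration of that calculation on hypothetical trial records; it does not reproduce any one study's measurement method.

```python
# Minimal sketch of one way to quantify subsequent search misses (SSM):
# single-target hit rate minus second-target hit rate conditional on the
# first target having been found. Hypothetical trial data only.
from dataclasses import dataclass
from typing import List

@dataclass
class Trial:
    n_targets: int               # 1 or 2 targets present
    first_found: bool            # was a/the first target detected?
    second_found: bool = False   # only meaningful on dual-target trials

def ssm_effect(trials: List[Trial]) -> float:
    """Single-target hit rate minus conditional second-target hit rate."""
    single = [t for t in trials if t.n_targets == 1]
    dual_first_found = [t for t in trials if t.n_targets == 2 and t.first_found]
    single_hit = sum(t.first_found for t in single) / len(single)
    second_hit = sum(t.second_found for t in dual_first_found) / len(dual_first_found)
    return single_hit - second_hit

trials = (
    [Trial(1, True)] * 90 + [Trial(1, False)] * 10 +               # 90% single-target hits
    [Trial(2, True, True)] * 65 + [Trial(2, True, False)] * 30 +   # 65/95 second targets found
    [Trial(2, False)] * 5
)
print(f"SSM effect: {ssm_effect(trials):.2f}")   # positive value = subsequent search misses
```

A positive value indicates that targets found after another target has already been detected are missed more often than comparable targets on single-target trials, which is the effect the review literature above discusses.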
Instruction: Molecular evolutionary rates are not correlated with temperature and latitude in Squamata: an exception to the metabolic theory of ecology? Abstracts: abstract_id: PUBMED:27198179 Molecular evolutionary rates are not correlated with temperature and latitude in Squamata: an exception to the metabolic theory of ecology? Background: The metabolic theory of ecology stipulates that molecular evolutionary rates should correlate with temperature and latitude in ectothermic organisms. Previous studies have shown that most groups of vertebrates, such as amphibians, turtles and even endothermic mammals, have higher molecular evolutionary rates in regions where temperature is high. However, the association between molecular evolutionary rates and temperature or latitude has never been tested in Squamata. Results: We used a large dataset including the spatial distributions and environmental variables for 1,651 species of Squamata and compared the contrast of the rates of molecular evolution with the contrast of temperature and latitude between sister species. Using major axis regressions and a new algorithm to choose independent sister species pairs, we found that temperature and absolute latitude were not associated with molecular evolutionary rates. Conclusions: This absence of association in such a diverse ectothermic group questions the mechanisms explaining current pattern of species diversity in Squamata and challenges the presupposed universality of the metabolic theory of ecology. abstract_id: PUBMED:23116407 Temperature dependence of evolutionary diversification: differences between two contrasting model taxa support the metabolic theory of ecology. Biodiversity patterns are largely determined by variation of diversification rates across clades and geographic regions. Although there are multiple reasons for this variation, it has been hypothesized that metabolic rate is the crucial driver of diversification of evolutionary lineages. According to the metabolic theory of ecology (MTE), metabolic rate - and consequently speciation - is driven mainly by body size and environmental temperature. As environmental temperature affects metabolic rate in ecto- and endotherms differently, its impact on diversification rate should also differ between the two types of organisms. Employing two independent approaches, we analysed correlates of speciation rates and, ultimately, net diversification rates for two contrasting taxa: plethodontid salamanders and carnivoran mammals. Whereas in the ectothermic plethodontids speciation rates positively correlated with environmental temperature, in the endothermic carnivorans a reverse, negative correlation was detected. These findings comply with predictions of the MTE and suggest that similar geographic patterns of biodiversity across taxa (e.g. ecto- and endotherms) might have been generated by different ecological and evolutionary processes. abstract_id: PUBMED:30444656 Metabolic Theory and the Temperature-Size Rule Explain the Temperature Dependence of Population Carrying Capacity. The temperature dependence of highly conserved subcellular metabolic systems affects ecological patterns and processes across scales, from organisms to ecosystems. Population density at carrying capacity plays an important role in evolutionary processes, biodiversity, and ecosystem function, yet how it varies with temperature-dependent metabolism remains unclear. 
Though the exponential effect of temperature on intrinsic population growth rate, r, is well known, we still lack clear evidence that population density at carrying capacity, K, declines with increasing per capita metabolic rate, as predicted by the metabolic theory of ecology (MTE). We experimentally tested whether temperature effects on photosynthesis propagate directly to population carrying capacity in a model species, the mobile phytoplankton Tetraselmis tetrahele. After maintaining populations at a fixed resource supply and fixed temperatures for 43 days, we found that carrying capacity declined with increasing temperature. This decline was predicted quantitatively when models included temperature-dependent metabolic rates and temperature-associated body-size shifts. Our results demonstrate that warming reduces carrying capacity and that temperature effects on body size and metabolic rate interact to determine how temperature affects population dynamics. These findings bolster efforts to relate metabolic temperature dependence to population and ecosystem patterns via MTE. abstract_id: PUBMED:36564965 Coupling temperature-dependent spatial turnover of microbes and plants using the metabolic theory of ecology. There is an urgent need to understand the coupled relationship between belowground microbes and aboveground plants in response to temperature under climate change. The metabolic theory of ecology (MTE) provides a way to predict the metabolic rate and species diversity, but the spatial scale dependence and connections between plants and microorganisms are still unclear. Here, we used two independent datasets to address this question. One is from comprehensive sampling of paddy fields targeting bacteria and microbial functional genes, and the other is a global metadata of spatial turnover for microorganisms (bacteria, fungi and archaea, n = 139) and plants (n = 206). Results showed that spatial turnover of bacterial communities and microbial functional genes increased with temperature and fitted MTE. Through meta-analysis, the temperature-dependent spatial scale pattern was further extended to the global scale, with the spatial turnover of microorganisms and plants being consistent with MTE. Belowground microorganisms and aboveground plants were closely linked with each other even when controlling for temperature, suggesting that factors other than shared relationships with temperature also contribute to their linkages. These results implied a broad application of MTE in biology and have important implications for predicting the ecological consequences of future climate warming. abstract_id: PUBMED:32998077 Evolutionary ecology of Lyme Borrelia. The bacterial genus, Borrelia, is comprised of vector-borne spirochete species that infect and are transmitted from multiple host species. Some Borrelia species cause highly-prevalent diseases in humans and domestic animals. Evolutionary, ecological, and molecular research on many Borrelia species have resulted in tremendous progress toward understanding the biology and natural history of these species. Yet, many outstanding questions, such as how Borrelia populations will be impacted by climate and land-use change, will require an interdisciplinary approach. The evolutionary ecology research framework incorporates theory and data from evolutionary, ecological, and molecular studies while overcoming common assumptions within each field that can hinder integration across these disciplines. 
Evolutionary ecology offers a framework to evaluate the ecological consequences of evolved traits and to predict how present-day ecological processes may result in further evolutionary change. Studies of microbes with complex transmission cycles, like Borrelia, which interact with multiple vertebrate hosts and arthropod vectors, are poised to leverage the power of the evolutionary ecology framework to identify the molecular interactions involved in ecological processes that result in evolutionary change. Using existing data, we outline how evolutionary ecology theory can delineate how interactions with other species and the physical environment create selective forces or impact migration of Borrelia populations and result in micro-evolutionary changes. We further discuss the ecological and molecular consequences of those micro-evolutionary changes. While many of the currently outstanding questions will necessitate new experimental designs and additional empirical data, many others can be addressed immediately by integrating existing molecular and ecological data within an evolutionary ecology framework. abstract_id: PUBMED:17584252 Variation in scorpion metabolic rate and rate-temperature relationships: implications for the fundamental equation of the metabolic theory of ecology. The fundamental equation of the metabolic theory of ecology (MTE) indicates that most of the variation in metabolic rate is a consequence of variation in organismal size and environmental temperature. Although evolution is thought to minimize energy costs of nutrient transport, its effects on metabolic rate via adaptation, acclimatization or acclimation are considered small, and restricted mostly to variation in the scaling constant, b0. This contrasts strongly with many conclusions of evolutionary physiology and life-history theory, making closer examination of the fundamental equation an important task for evolutionary biologists. Here we do so using scorpions as model organisms. First, we investigate the implications for the fundamental equation of metabolic rate variation and its temperature dependence in the scorpion Uroplectes carinatus following laboratory acclimation. During 22 days of acclimation at 25 degrees C, metabolic rates declined significantly (from 127.4 to 78.2 microW; P = 0.0001) whereas mean body mass remained constant (367.9-369.1 mg; P = 0.999). In field-fresh scorpions, metabolic rate-temperature (MRT) relationships varied substantially within and among individuals, and therefore had low repeatability values (tau = 0.02) and no significant among-individual variation (P = 0.181). However, acclimation resulted in a decline in within-individual variation of MRT slopes which subsequently revealed significant differences among individuals (P = 0.0031) and resulted in a fourfold increase in repeatability values (tau = 0.08). These results highlight the fact that MRT relationships can show substantial, directional variation within individuals over time. Using a randomization model we demonstrate that the reduction in metabolic rate with acclimation while body mass remains constant causes a decline both in the value of the mass-scaling exponent and the coefficient of determination. Furthermore, interspecific comparisons of activation energy, E, demonstrated significant variation in scorpions (0.09-1.14 eV), with a mean value of 0.77 eV, significantly higher than the 0.6-0.7 eV predicted by the fundamental equation.
Our results add to a growing body of work questioning both the theoretical basis and empirical support for the MTE, and suggest that alternative models of metabolic rate variation incorporating explicit consideration of life history evolution deserve further scrutiny. abstract_id: PUBMED:30601573 Abyssal deposit-feeding rates consistent with the metabolic theory of ecology. The Metabolic Theory of Ecology (MTE) posits that metabolic rate controls ecological processes, such as the rate of resource uptake, from the individual- to the ecosystem-scale. Metabolic rate has been found empirically to be an exponential function of whole organism body mass. We test a fundamental assumption of MTE, whether resource uptake scales to metabolism, by examining detritivores accessing a single common resource pool, an ideal study case. We used an existing empirical model of ingestion for aquatic deposit feeders adjusted for temperature to test whether ingestion by abyssal deposit feeders conforms to MTE-predicted feeding rates. We estimated the sediment deposit-feeding rates of large invertebrates from two abyssal study sites using time-lapse photography, and related those rates to body mass, environmental temperature, and sediment organic matter content using this framework. Ingestion was significantly related to individual wet mass, with a mass-scaling coefficient of 0.81, with 95% confidence intervals that encompass the MTE-predicted value of 0.75, and the same pattern determined in other aquatic systems. Our results also provide insight into the potential mechanism through which this fundamental assumption operates. After temperature correction, both deep- and shallow-water taxa might be summarized into a single mass-scaled ingestion rate. abstract_id: PUBMED:37695237 Evaluating the Accuracy of Methods for Detecting Correlated Rates of Molecular and Morphological Evolution. Determining the link between genomic and phenotypic change is a fundamental goal in evolutionary biology. Insights into this link can be gained by using a phylogenetic approach to test for correlations between rates of molecular and morphological evolution. However, there has been persistent uncertainty about the relationship between these rates, partly because conflicting results have been obtained using various methods that have not been examined in detail. We carried out a simulation study to evaluate the performance of 5 statistical methods for detecting correlated rates of evolution. Our simulations explored the evolution of molecular sequences and morphological characters under a range of conditions. Of the methods tested, Bayesian relaxed-clock estimation of branch rates was able to detect correlated rates of evolution correctly in the largest number of cases. This was followed by correlations of root-to-tip distances, Bayesian model selection, independent sister-pairs contrasts, and likelihood-based model selection. As expected, the power to detect correlated rates increased with the amount of data, both in terms of tree size and number of morphological characters. Likewise, greater among-lineage rate variation in the data led to improved performance of all 5 methods, particularly for Bayesian relaxed-clock analysis when the rate model was mismatched. We then applied these methods to a data set from flowering plants and did not find evidence of a correlation in evolutionary rates between genomic data and morphological characters. 
The results of our study have practical implications for phylogenetic analyses of combined molecular and morphological data sets, and highlight the conditions under which the links between genomic and phenotypic rates of evolution can be evaluated quantitatively. abstract_id: PUBMED:19666552 Latitudinal variation in lifespan within species is explained by the metabolic theory of ecology. Many ectotherms exhibit striking latitudinal gradients in lifespan. However, it is unclear whether lifespan gradients in distantly related taxa share a common mechanistic explanation. We compiled data on geographic variation in lifespan in ectotherms from around the globe to determine how much of this intraspecific variation in lifespan may be explained by temperature using the simple predictions of the metabolic theory of ecology. We found that the metabolic theory accurately predicts how lifespan varies with temperature within species in a wide range of ectotherms in both controlled laboratory experiments and free-living populations. After removing the effect of temperature, only a small fraction of species showed significant trends with latitude. There was, however, considerable residual intraspecific variation indicating that other, more local factors are likely to be important in determining lifespan within species. These findings suggest that, given predicted increases in global temperature, lifespan of ectotherms may be substantially shortened in the future. abstract_id: PUBMED:26362925 Temperature dependences of growth rates and carrying capacities of marine bacteria depart from metabolic theoretical predictions. Using the metabolic theory of ecology (MTE) framework, we evaluated over a whole annual cycle the monthly responses to temperature of the growth rates (μ) and carrying capacities (K) of heterotrophic bacterioplankton at a temperate coastal site. We used experimental incubations spanning 6°C with bacterial physiological groups identified by flow cytometry according to membrane integrity (live), nucleic acid content (HNA and LNA) and respiratory activity (CTC+). The temperature dependence of μ at the exponential phase of growth was summarized by the activation energy (E), which was variable (-0.52 to 0.72 eV) but followed a seasonal pattern, only reaching the hypothesized value for aerobic heterotrophs of 0.65 eV during the spring bloom for the most active bacterial groups (live, HNA, CTC+). K (i.e. maximum experimental abundance) peaked at 4 × 10^6 cells mL^-1 and generally covaried with μ but, contrary to MTE predictions, it did not decrease consistently with temperature. In the case of live cells, the responses of μ and K to temperature were positively correlated and related to seasonal changes in substrate availability, indicating that the responses of bacteria to warming are far from homogeneous and poorly explained by MTE at our site. Answer: Yes, molecular evolutionary rates in Squamata (lizards, snakes, and amphisbaenians) appear to be an exception to the metabolic theory of ecology (MTE), which posits that these rates should correlate with temperature and latitude in ectothermic organisms. A study using a large dataset of 1,651 species of Squamata found no association between molecular evolutionary rates and temperature or absolute latitude, challenging the presupposed universality of the MTE (PUBMED:27198179).
This absence of association in such a diverse ectothermic group raises questions about the mechanisms explaining the current pattern of species diversity in Squamata and suggests that factors other than temperature and latitude might be more influential in shaping their molecular evolutionary rates.
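For reference, the "fundamental equation" of the metabolic theory of ecology that these abstracts test (the 3/4 mass-scaling exponent and the roughly 0.6-0.7 eV activation energy) is usually written as a joint body-mass and Boltzmann temperature dependence of whole-organism metabolic rate. The form below follows the standard MTE formulation, with Boltzmann's constant k ≈ 8.62 × 10^-5 eV K^-1.

```latex
% Fundamental equation of the metabolic theory of ecology: metabolic rate B
% as a function of body mass M and absolute temperature T.
B = b_{0}\, M^{3/4}\, e^{-E/(kT)}
% Mass-correcting and taking logarithms gives a linear form in 1/(kT),
% so the activation energy E is estimated as the negative of the slope:
\ln\!\left(B\, M^{-3/4}\right) = \ln b_{0} - E \cdot \frac{1}{kT}
```

Under this formulation, the Squamata result is notable precisely because rates that track metabolic rate would be expected to rise with environmental temperature through the exponential Boltzmann term.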
Instruction: Can endoscopic sphincterotomy be performed safely in elderly patients aged 80 years or older with pancreatic and biliary diseases? Abstracts: abstract_id: PUBMED:23340278 Can endoscopic sphincterotomy be performed safely in elderly patients aged 80 years or older with pancreatic and biliary diseases? Background/aims: The purpose of this study was to investigate whether endoscopic sphincterotomy (EST) can be usefully and safely performed in elderly patients aged 80 years or older. Methodology: The eligible patients with pancreatic and biliary diseases who required EST were divided into two groups depending on their age: under 80 (group A) and 80 or older (group B). Patient characteristics, EST success rate and incidence of the related complications were evaluated. Results: Of the 720 patients who required EST, 522 patients were in group A and 198 in group B. The incidences of baseline disease and anticoagulant/antiplatelet therapy were significantly higher in group B than in group A (p<0.05). The EST success rates were 97.1% (507/522) in group A and 96.5% (191/198) in group B. The incidences of the related complications were 8.8% (46/522) in group A and 4% (8/198) in group B, respectively, again without significant difference. Conclusions: The EST success rate and the incidence of related complications were comparable between patients in groups A and B, indicating that EST can be safely performed even in the elderly aged 80 years or older. abstract_id: PUBMED:17140895 ERCP is safe and effective in patients 80 years of age and older compared with younger patients. Background: Biliary disease frequently occurs in the elderly, but there are limited data on ERCP in the elderly population. Patients: A total of 502 patients (group A, 97; group B, 405) underwent 724 ERCP procedures. Main Outcome Measurements: All consecutive ERCPs performed between 2000 and 2002 at a single center were retrospectively reviewed for patients ≥80 years old (group A) and patients <80 years old (group B) to evaluate endoscopic findings, interventions, complications, and mortality related to complications. Results: The number of important chronic concomitant diseases was significantly higher in the older group (average per patient 1.08 vs 0.57, P < .001). Successful cannulation was achieved in 88% in group A versus 86% in group B, and endoscopic sphincterotomy was performed in 63.2% versus 51.4%. Periampullary diverticulum was found significantly more often in patients of group A (39.2%) than in group B (14.1%, P < .001). Stents were used in 24.1% of ERCP procedures in group A and in 22.9% in group B. There was no significant difference in the complication rate between group A (6.8%) and group B (5.1%) and in early mortality (1.03% vs 0.25%), respectively. Conclusion: ERCP is a safe and effective intervention in the elderly because complication and early mortality rates are comparable to those of younger patients, although comorbidity is significantly higher. abstract_id: PUBMED:11139666 Late biliary complications after endoscopic sphincterotomy for common bile duct stones in patients older than 65 years of age with gallbladder in situ. Unlabelled: The aim of this retrospective study was to evaluate the nature and the frequency of biliary complications after endoscopic retrograde cholangiography for common bile duct stones in elderly patients with gallbladder in situ.
Methods: Between 1991 and 1993, 169 consecutive patients with gallbladder in situ, older than 65 years (79 ± 8 years), underwent endoscopic retrograde cholangiography with sphincterotomy for choledocholithiasis. Information on the early (<1 month) and late biliary complications, treatment and mortality was obtained by mail or phone calls from patients and general practitioners. Long-term data were obtained for 139 patients (82%). Mean follow-up was 56.5 months (80 months for patients still alive at the end of the study). Results: Early complications occurred in 13 patients (10.8%). Seven patients had acute cholecystitis, present before the procedure in all cases; all were treated by surgery. Other early complications included cholangitis (n = 7), mild acute pancreatitis (n = 3), bleeding (n = 1), perforation (n = 1), biliary colic (n = 1), pneumopathy (n = 1) and bradycardia (n = 1), all treated medically. Forty patients underwent early cholecystectomy, and 5 died during the first month without biliary disease. Late complications were thus assessed in 94 patients and occurred in 13 (14%), i.e. around 2% per year. Complications were acute cholangitis (n = 4), biliary pain (n = 4), cholecystitis (n = 2), abdominal pain (n = 2) and jaundice due to sphincterotomy stenosis (n = 1). Five patients had cholecystectomy, 1 a radiological drainage and 7 were treated medically. No death due to a biliary complication was observed. The presence of gallstones and the absence of gallbladder opacification at cholangiography were not prognostic factors for the recurrence of biliary symptoms. Sixty-five patients (50%) died without biliary disease during the follow-up (actuarial death rate 10.5% per year). Conclusion: Late biliary complications after endoscopic retrograde cholangiography for choledocholithiasis in patients with gallbladder in situ are rare (2% per year). Prophylactic cholecystectomy after sphincterotomy does not seem warranted in elderly patients, because of rare recurrent biliary symptoms, low mortality rate, and limited life expectancy. abstract_id: PUBMED:8500740 Acute relapsing pancreatitis as a complication of papillary stenosis after endoscopic sphincterotomy. Endoscopic sphincterotomy has proven to be a safe alternative to surgery for selected types of biliary disease. Despite a relatively low morbidity, postprocedure complications are well described. This report presents an experience with three patients in whom acute relapsing pancreatitis developed as a possible complication of papillary stenosis after endoscopic sphincterotomy. None of the patients had a previous history of elevations in serum amylase levels before endoscopic sphincterotomy. After the procedure, pancreatitis and subsequently acute relapsing pancreatitis with documented stricture of the pancreatic duct orifice developed in all three patients. After surgical transduodenal sphincteroplasty, no new episodes of acute relapsing pancreatitis occurred. abstract_id: PUBMED:8635706 Endoscopic sphincterotomy in patients with liver cirrhosis. Background: Nonsurgical alternatives in biliary diseases have not been studied in large series of patients with cirrhosis. Our aim was to determine the indications and results of endoscopic treatment in this subset of patients. Methods: Fifty-two patients (36 men, 16 women; Child-Pugh class A, 18; B, 22; C, 12) aged 63 ± 18 years underwent endoscopic sphincterotomy (ES) between 1988 and 1993. Antibiotic prophylaxis was routinely carried out and coagulopathy was corrected before ES when required.
The data were collected retrospectively up to 30 days after ES. Results: ES succeeded in 98% of the patients (12 cases of needle-knife papillotomy). Twenty-nine patients (55.7%) had choledocholithiasis, 18 had biliary strictures (12 malignant), and 5 had pancreatic or other biliary diseases. Five days after ES, morbidity was 13.5% and mortality was 7.7%. At 1 month, morbidity was 22.9% and mortality 12.5%. Only endoscopic procedures and ineffective drainage were seen to be significant risk factors for morbidity. The results of the subgroup of cirrhotic patients with choledocholithiasis (n = 29) were compared with those of matched noncirrhotic patients (n = 58). The cirrhotic and noncirrhotic patients treated for choledocholithiasis showed similar results for stone clearance, morbidity, and mortality. Conclusion: ES is a safe and effective procedure for treating choledocholithiasis in cirrhotic patients. ES can therefore be considered as an alternative to surgery in Child class A and B patients and must be preferred for Child class C patients with life-threatening biliary complications. abstract_id: PUBMED:9399767 Pancreatic-biliary ascariasis: experience of 300 cases. Background: Infestation with Ascaris lumbricoides is seen worldwide. Recently, there has been much interest in the pancreatic-biliary complications of Ascaris infection. In this study, we present our experience of 300 patients seen in a tertiary referral center. Materials And Methods: Case charts of patients seen in the Department of Gastroenterology, University of Damascus, Syria, were analyzed retrospectively over a 5-yr period (September of 1988 to August of 1993). During this period, 1666 endoscopic retrograde cholangiopancreatographic studies were performed and pancreatic-biliary ascariasis was diagnosed in 300 patients (18%). Results: The most common presenting symptom was abdominal pain, seen in 98% of patients (294 patients). Complications observed were ascending cholangitis (48 patients; 16%), acute pancreatitis (13 patients; 4.3%), and obstructive jaundice (4 patients; 1.3%). History of worm emesis was present in 25% of patients (76 patients). Most patients (240 patients; 80%) had previously undergone a cholecystectomy or an endoscopic sphincterotomy (232 patients; 77%). Worms were successfully extracted endoscopically in all except two patients, and there were no procedure-related complications. Conclusions: In endemic countries, ascariasis should be suspected in patients with pancreatic-biliary disease, especially if a cholecystectomy or sphincterotomy has been performed in the past. Endoscopic management results in rapid resolution of symptoms and prevents development of complications. abstract_id: PUBMED:12488713 Endoscopic retrograde cholangiopancreatography is safe and effective in patients 90 years of age and older. Goals: To review our experience of endoscopic retrograde cholangiopancreatography (ERCP) in patients 90 years and older. Background: ERCP is effective in the investigation and treatment of biliary disease; however, in the very elderly, a perception of high procedural risk and lack of efficacy may limit its use. Study: Retrospective analysis of ERCPs performed on patients 90 years of age and older from one institution. Results: Between 1987 and 2000, 23 ERCPs were performed on patients 90 years of age and older (16 women; age range, 90-96 years). The primary indications were obstructive jaundice (16 patients), pancreatitis (2), cholangitis (1), unexplained abdominal pain (1), and planned follow-up (3).
The main endoscopic findings were common bile duct (CBD) stone (15 patients), pancreatic carcinoma (2), cholangiocarcinoma (2), and dilated duct (only 1). Sixteen sphincterotomies were performed, with successful common duct clearance in 10 patients. Seven biliary stents were inserted for benign disease and three for malignancy. In two patients, CBD cannulation was unsuccessful. Three minor hemorrhages were controlled endoscopically. Three patients died of nonprocedural causes. Conclusions: ERCP is safe and effective in the very elderly. The decision to undergo ERCP should be determined by clinical need. abstract_id: PUBMED:15542528 Endoscopic retreatment of recurrent choledocholithiasis after sphincterotomy. Background: Endoscopic sphincterotomy (ES) carries a substantial risk of recurrent choledocholithiasis but retreatment with endoscopic retrograde cholangiopancreatography (ERCP) is safe and feasible. However, long-term results of repeat ERCP and risk factors for late complications are largely unknown. Aims: To investigate the long-term outcome of repeat ERCP for recurrent bile duct stones after ES and to identify risk factors predicting late choledochal complications. Methods: Eighty-four patients underwent repeat ERCP, combined with ES in 69, for post-ES recurrent choledocholithiasis. Long-term outcomes of repeat ERCP were retrospectively investigated and factors predicting late complications were assessed by multivariate analysis. Results: Complete stone clearance was achieved in all patients. Forty-nine patients had no visible evidence of prior sphincterotomy. Two patients experienced early complications. During a follow-up period of 2.2-26.0 years (median 10.9 years), 31 patients (37%) developed late complications, including stone recurrence (n = 26), acute acalculous cholangitis (n = 4), and acute cholecystitis (n = 1). There were neither biliary malignancies nor deaths attributable to biliary disease. Multivariate analysis identified three independent risk factors for choledochal complications: interval between initial ES and repeat ERCP ≤5 years, bile duct diameter ≥15 mm, and periampullary diverticulum. Choledochal complications were successfully treated with repeat ERCP in 29 patients. Conclusions: Choledochal complications after repeat ERCP are relatively frequent but are endoscopically manageable. Careful follow-up is necessary, particularly for patients with a dilated bile duct, periampullary diverticulum, or early recurrence. Repeat ERCP is a reasonable treatment even for recurrent choledocholithiasis after ES. abstract_id: PUBMED:33959227 Elderly patients (≥ 80 years) with acute calculous cholangitis have similar outcomes as non-elderly patients (< 80 years): Propensity score-matched analysis. Background: Acute cholangitis (AC) is a disease spectrum with varying extent of severity. Age ≥ 75 years forms part of the criteria for moderate (Grade II) severity in both the Tokyo Guidelines (TG13 and TG18). Aging is associated with reduced physiological reserves, frailty, and sarcopenia. However, there is evidence that age itself is not the determinant of inferior outcomes in elective and emergency biliary diseases. There is a paucity of reports comparing clinical outcomes amongst elderly patients vs non-elderly patients with AC. Aim: To investigate the effect of age (≥ 80 years) on AC's morbidity and mortality using propensity score matching (PSM).
Methods: This is a single-center retrospective cohort study of all patients diagnosed with calculous AC (January 2016 to December 2016) and ≥ 80 years old (January 2012 to December 2016) at a tertiary university-affiliated teaching hospital. Inclusion criteria were patients who were treated for suspected or confirmed AC secondary to biliary stones. Patients with AC on a background of hepatobiliary malignancy, indwelling permanent metallic biliary stents, or concomitant pancreatitis were excluded. Elderly patients were defined as ≥ 80 years old in our study. A 1:1 PSM analysis was performed to reduce selection bias and address confounding factors. Study variables include comorbidities, vital parameters, laboratory and radiological investigations, and type of biliary decompression, including the time for endoscopic retrograde cholangiopancreatography (ERCP). Primary outcomes include in-hospital mortality, 30-d and 90-d mortality. Length of hospital stay (LOS) was the secondary outcome. Results: Four hundred fifty-seven patients with AC were included in this study (318 elderly, 139 non-elderly). PSM analysis resulted in a total of 224 patients (112 elderly, 112 non-elderly). The adoption of ERCP between elderly and non-elderly was similar in both the unmatched (elderly 64.8%, non-elderly 61.9%, P = 0.551) and matched cohorts (elderly 68.8% and non-elderly 58%, P = 0.096). The overall in-hospital mortality, 30-d mortality and 90-d mortality were 4.6%, 7.4% and 8.5%, respectively, with no statistically significant differences between the elderly and non-elderly in both the unmatched and matched cohorts. LOS was longer in the unmatched cohort [elderly 8 d, interquartile range (IQR) 6-13, vs non-elderly 8 d, IQR 5-11, P = 0.040], but was comparable in the matched cohort (elderly 7.5 d, IQR 5-11, vs non-elderly 8 d, IQR 5-11, P = 0.982). Subgroup analysis of patients who underwent ERCP demonstrated that the majority of the patients (n = 159/292, 54.5%) had delayed ERCP (> 72 h from presentation). There was no significant difference in LOS, 30-d mortality, 90-d mortality, and in-hospital mortality in patients who had delayed ERCP in both the unmatched and matched cohort (matched cohort: in-hospital mortality [n = 1/42 (2.4%) vs 1/26 (3.8%), P = 0.728], 30-d mortality [n = 2/42 (4.8%) vs 2/26 (7.7%), P = 0.618], 90-d mortality [n = 2/42 (4.8%) vs 2/26 (7.7%), P = 0.618], and LOS (median 8.5 d, IQR 6-11.3, vs 8.5 d, IQR 6-15.3, P = 0.929)). Conclusion: Mortality did not differ between elderly (≥ 80 years old) and non-elderly (< 80 years old) patients with AC. abstract_id: PUBMED:2592941 Endoscopic sphincterotomy in the treatment of biliary tract diseases. Endoscopic sphincterotomy (EST) was performed in 53 patients with biliary disease. Indications for sphincterotomy were: 48 patients with choledocholithiasis, 3 patients with malignant biliary obstruction, and 2 patients with papillary stenosis. The procedure was successful in 52 patients (98.1%). In patients with choledocholithiasis, the stones were successfully removed or passed out spontaneously in 42 patients (87.5%). Endoscopic retrograde biliary drainage (ERBD) was successfully carried out in 2 cases of malignant biliary obstruction after EST. The clinical and biochemical evidence of cholestasis resolved in both patients with papillary stenosis. Complications consisted of hemorrhage (2 patients) and cholangitis (1 patient), resulting in one death. The overall complication and mortality rates were 5.7% and 1.9% respectively.
Thirty-two patients received regular ultrasound examination follow-up with a mean of 13.4 months. Two patients had recurrent common bile duct stones and one had restenosis of papilla. The conditions were managed by endoscopic therapy. Endoscopic sphincterotomy is a relatively safe and effective means of relieving extrahepatic cholestasis; however, its comparison with surgical techniques needs long-term, randomized studies. Answer: Yes, endoscopic sphincterotomy (EST) can be performed safely in elderly patients aged 80 years or older with pancreatic and biliary diseases. Studies have shown that the success rate and incidence of related complications of EST in elderly patients are comparable to those in younger patients. Specifically, one study found that the EST success rates were 97.1% in patients under 80 and 96.5% in patients aged 80 or older, with related complication incidences of 8.8% and 4% respectively, indicating no significant difference between the two age groups (PUBMED:23340278). Another study reported that ERCP, which includes procedures like EST, is safe and effective in patients 80 years of age and older, with complication and early mortality rates comparable to those of younger patients, despite the older group having significantly more chronic concomitant diseases (PUBMED:17140895). Additionally, a retrospective analysis of ERCPs performed on patients 90 years of age and older found that ERCP is safe and effective in the very elderly, suggesting that clinical need, rather than age, should determine the decision to undergo ERCP (PUBMED:12488713). Furthermore, a study on acute calculous cholangitis in elderly patients (≥ 80 years) found that outcomes such as in-hospital mortality, 30-day and 90-day mortality, and length of hospital stay were similar to those in non-elderly patients (< 80 years), even when ERCP was delayed beyond 72 hours from presentation (PUBMED:33959227). These findings collectively suggest that EST can be safely performed in elderly patients with pancreatic and biliary diseases.
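As an aside on methodology: the propensity-score-matched comparison described in PUBMED:33959227 above can be illustrated with a minimal 1:1 nearest-neighbour matching sketch in Python. Everything below is illustrative only — the covariates, the caliper of 0.2 pooled standard deviations, and the synthetic data are assumptions for the example, not details taken from the study.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_1_to_1(df, group_col, covariates, caliper=0.2):
    # Estimate propensity scores P(elderly = 1 | covariates) with logistic regression.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[group_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[group_col] == 1]
    control = df[df[group_col] == 0].copy()
    max_dist = caliper * df["ps"].std()   # caliper on the propensity-score scale
    pairs = []
    # Greedy nearest-neighbour matching without replacement.
    for idx, row in treated.sort_values("ps").iterrows():
        if control.empty:
            break
        dist = (control["ps"] - row["ps"]).abs()
        if dist.min() <= max_dist:
            j = dist.idxmin()
            pairs.append((idx, j))
            control = control.drop(j)
    return pairs

# Synthetic demonstration data (hypothetical covariates, not from the paper).
rng = np.random.default_rng(0)
n = 457
demo = pd.DataFrame({
    "elderly": rng.integers(0, 2, n),
    "creatinine": rng.normal(90, 25, n),
    "bilirubin": rng.normal(30, 12, n),
})
print(len(match_1_to_1(demo, "elderly", ["creatinine", "bilirubin"])), "matched pairs")

In the study itself the matching yielded 112 pairs; a sketch like this only shows the mechanics, and a real analysis would also check covariate balance after matching.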
Instruction: Does colonization with methicillin-susceptible Staphylococcus aureus protect against nosocomial acquisition of methicillin-resistant S. aureus? Abstracts: abstract_id: PUBMED:24709721 Does colonization with methicillin-susceptible Staphylococcus aureus protect against nosocomial acquisition of methicillin-resistant S. aureus? Objective: To test the hypothesis that methicillin-susceptible Staphylococcus aureus (MSSA) carriage may protect against nosocomial methicillin-resistant S. aureus (MRSA) acquisition by competing for colonization of the anterior nares. Design: Prospective cohort and nested case-control study. Setting: Swiss university hospital. Patients: All adult patients admitted to 14 wards of the general medicine division between April 1 and October 31, 2007. Methods: Patients were screened for MRSA and MSSA carriage at admission to and discharge from the division. Associations between nosocomial MRSA acquisition and MSSA colonization at admission and other confounders were analyzed by univariable and multivariable analysis. Results: Of 898 patients included, 183 (20%) were treated with antibiotics. Nosocomial MRSA acquisition occurred in 70 (8%) of the patients (case patients); 828 (92%) of the patients (control subjects) were free of MRSA colonization at discharge. MSSA carriage at admission was 20% and 21% for case patients and control subjects, respectively. After adjustment by multivariate logistic regression, no association was observed between MSSA colonization at admission and nosocomial MRSA acquisition (adjusted odds ratio [aOR], 1.2 [95% confidence interval (CI), 0.6-2.3]). By contrast, 4 independent predictors of nosocomial MRSA acquisition were identified: older age (aOR per 1-year increment, 1.05 [95% CI, 1.02-1.08]); increased length of stay (aOR per 1-day increment, 1.05 [95% CI, 1.02-1.09]); increased nursing workload index (aOR per 1-point increment, 1.02 [95% CI, 1.01-1.04]); and previous treatment with macrolides (aOR, 5.6 [95% CI, 1.8-17.7]). Conclusions: Endogenous MSSA colonization does not appear to protect against nosocomial MRSA acquisition in a population of medical patients without frequent antibiotic exposure. abstract_id: PUBMED:34780365 Methicillin-resistant Staphylococcus aureus colonization among healthcare workers in Oman. Introduction: Methicillin-Resistant Staphylococcus aureus (MRSA) is a S. aureus strain characterized by resistance to cloxacillin. Healthcare workers (HCWs), are recognized for their heightened risk for MRSA acquisition and possibly for MRSA nosocomial transmission. This cross-sectional study aimed to determine the prevalence and the associated risk factors of MRSA colonization among healthcare workers at Sultan Qaboos University Hospital (SQUH) in Oman. Methodology: A total of 200 nasal swab samples were collected from the healthcare workers at SQUH during the period October 2nd 2018 to January 7th 2019. All nasal swab samples were examined microbiologically for the presence of MRSA using the standard method and the results were confirmed by detection of the mecA product (PBP2a). Data on associated risk factors for MRSA colonization was collected and analyzed. Results: Forty-one of the 200 screened healthcare workers (20.5%) were found to have nasal carriage of Staphylococcus aureus of which 63.4% were Methicillin Sensitive and 36.6% were Methicillin-Resistant (MRSA). 
Methicillin-Resistant Staphylococcus aureus (MRSA) was isolated from fifteen of the 200 screened healthcare workers, giving a prevalence rate of nasal colonization with MRSA of 7.5%. We found no statistical association between healthcare worker MRSA nasal colonization and age, gender, HCWs specialty, hand hygiene practices, skin condition, previous MRSA infection, and previous exposure to antibiotics. Conclusions: Identification of the prevalence and the associated risk factors of MRSA colonization in healthcare workers mandates continuous surveillance and the implementation of all possible preventive measures to reduce recurrence. abstract_id: PUBMED:10782237 Colonization by methicillin-resistant Staphylococcus aureus is a predictive factor for the resistance phenotype of an infectious strain of S. aureus Objective: To assess the predictive value of a previous colonization with methicillin-resistant Staphylococcus aureus for the resistance pattern of a bacteriological specimen significantly positive for S. aureus. Study Design: Retrospective study of patients' files. Patients: Patients admitted for at least 48 hours in a surgical intensive care unit from April 1, 1996 to December 31, 1997. Methods: Collection of patients' characteristics and chronology of positive microbiological specimens with methicillin-susceptible (MSSA) or -resistant (MRSA) S. aureus from medical and laboratory records. During the study period, screening for nasal or perineal colonization with MRSA was systematically performed on admission and weekly thereafter. Results: The files of 540 patients were reviewed. MSSA and MRSA infections occurred in 7% (39/540) and 4% (20/540) of the patients respectively. In contrast to MSSA infections, MRSA infections occurred more frequently in patients previously colonized with MRSA (13 infections in 63 colonized patients [21%] versus 7 infections in 477 non-colonized patients [2%], odds ratio = 18, confidence interval: 6-51, P < 0.0001). The median delay between colonization and infection was 5 days. The positive and negative predictive values for previous colonization with MRSA to predict infection with MRSA in the presence of a bacteriological specimen significantly positive for S. aureus were 81 and 84%, respectively. Conclusion: The probabilistic use of a glycopeptide in the presence of a bacteriological specimen significantly positive for S. aureus should be limited to patients already colonized with MRSA, in order to decrease excessive administration of these antibiotics. abstract_id: PUBMED:25368805 Microbial Susceptibility and Plasmid Profiles of Methicillin-Resistant Staphylococcus aureus and Methicillin-Susceptible S. aureus. Background: Today, the significant increase in the prevalence and emergence of methicillin-resistant Staphylococcus aureus (MRSA) is a serious public health concern and is likely to have a dramatic negative impact on many current medical practices. Therefore, identification of MRSA strains is important for both clinical and epidemiological implications. Objectives: The present study was carried out to determine the frequency of methicillin resistance, antibiotic susceptibility, and plasmid profiles of S. aureus recovered from different types of clinical samples of patients in Zabol, Iran. Material And Methods: Clinical samples from 500 outpatient and hospitalized patients were tested for S. aureus. The susceptibility of 106 S. aureus isolates to 11 antibiotics was evaluated by the disk diffusion method and Etest oxacillin strips.
The presence of the mecA gene was investigated by polymerase chain reaction (PCR). The plasmid profile patterns of all isolates were determined by a modified alkaline lysis method. Results: A total of 67 (63.20%) strains were found to be MRSA isolates. Most MRSA isolates showed a high level of resistance to ampicillin, erythromycin, nalidixic acid, penicillin, and tetracycline. Twenty-six percent of MRSA isolates showed a high level of resistance to oxacillin (minimum inhibitory concentration [MIC] ≥ 256 μg/mL). The mecA gene was detected in 62 MRSA isolates. In total, 75 isolates of both strains harbored plasmids. Conclusions: Resistance to oxacillin and other antibiotics was high, and most of the isolates were found to be multi-drug resistant (MDR). Plasmid analysis of representative S. aureus isolates also demonstrated the presence of a wide range of plasmid sizes, with no consistent relationship between plasmid profiles and resistance phenotypes. Regular surveillance of hospital infections and monitoring of their antibiotic sensitivity patterns are required to reduce MRSA prevalence. High prevalence and multi-drug resistance of MRSA isolates in southeastern Iran could be considered as an urgent warning for public health. abstract_id: PUBMED:35320234 Methicillin resistant Staphylococcus aureus: A brief review of virulence and resistance. Staphylococcus aureus is a common gram-positive human pathogen involved in both community-acquired and nosocomial infections ranging from localised superficial lesions to food poisoning and fatal systemic infections owing to its impressive array of virulence factors responsible for attaching, colonising, invading, and avoiding the host immune system. The discovery of antibiotics effectively checked the once deadly infections. However, resistance started soon after their discovery and the first methicillin-resistant strain of Staphylococcus aureus was reported in the early 1960s. The most important attribute of methicillin-resistant Staphylococcus aureus is its acquisition of the mecA gene coding for penicillin-binding protein-2a that blocks inhibitory action on peptidoglycan cross-linking. Methicillin-resistant Staphylococcus aureus presents a serious global healthcare concern, being responsible for prolonged hospital stays and increased mortality. Precise information on the virulence factors and resistance traits of methicillin-resistant Staphylococcus aureus and their interplay in a community is key to minimize the intermixing of resistant and susceptible pathogens in the community. abstract_id: PUBMED:15188065 Treatment and prevention of Staphylococcus aureus The number of Staphylococcus aureus (S. aureus) community-acquired and nosocomial infections has increased in the last twenty years. Between 1990 and 1992, this pathogen became the most frequent source of nosocomial infections 1,2. On the other hand, as a result of selection pressure from antibiotic therapy, a dramatic increase in methicillin-resistant S. aureus (MRSA) infections has been observed 3,4. With the recent emergence of multiresistant strains, including glycopeptide resistance, the treatment of these infections has become even more difficult 2,4-6. It is important to prevent the emergence of resistant strains. To prevent further emergence of resistant strains, the use of antibiotic therapy must be optimised, laboratory methods for the detection of resistant pathogens must be enhanced, and strict precautions should be taken following infection or colonization of patients 6-10.
In this paper, we review the history and evolution of anti-staphylococcal therapy and point out the present recommendations for the treatment and prevention of Staphylococcus aureus infections and colonization. abstract_id: PUBMED:20662715 Screening and decolonization: does methicillin-susceptible Staphylococcus aureus hold lessons for methicillin-resistant S. aureus? Methicillin-susceptible Staphylococcus aureus (MSSA) and methicillin-resistant S. aureus (MRSA) have few structural differences, but their epidemiologies differ profoundly in terms of colonization, infection, and transmission. We compare strategies for controlling hospital infection due to MSSA and MRSA. Despite the straightforward epidemiology of MSSA, the effectiveness of screening and decolonization was established only recently. The optimal strategy for controlling MRSA spread and infection remains debated. Many data need to be acquired, given the complexity of MRSA epidemiology, the entanglement between collective and individual objectives, and the challenges faced when adjusting for confounders. However, studies have consistently demonstrated that screening is useful in high-risk units to identify the reservoir and to initiate contact precautions. In an endemic setting, the contribution of MRSA decolonization to cross-transmission limitation is probably small in comparison to the impact of precautions. Screening and decolonization may be effective in decreasing the MRSA infection risk in carriers. abstract_id: PUBMED:29649598 Methicillin resistance in Staphylococcus aureus infections among patients colonized with methicillin-susceptible Staphylococcus aureus. Objectives: We have noticed that patients colonized with methicillin-susceptible Staphylococcus aureus (MSSA) rarely get methicillin-resistant S. aureus (MRSA) infections. The purpose of this study was to compare the odds of a Staphylococcus aureus (SA) infection being an MRSA infection in MSSA carriers, MRSA carriers and non-carriers of SA. Methods: Hospitalizations of adult patients at the Cleveland Clinic Health System from 2008 to 2015 were screened to identify those where the patient was tested for SA colonization. The first such hospitalization was identified. Among these 90 891 patients, those who had an SA infection during the hospitalization were included. SA carrier status (MRSA, MSSA, or non-carrier) was defined based on the first nasal SA test result. The association of carrier status and MRSA infection was examined. Results: The mean (±standard deviation (SD)) age of the 1999 included patients was 61 (17) years, and 1160 (58%) were male. Thirty percent, 26%, and 44% were MRSA carriers, MSSA carriers and non-carriers, respectively. Of the 601 SA infections in MRSA carriers (reference group), 552 (92%) were MRSA infections compared with 42 (8%) of 516 in MSSA carriers (odds ratio (OR) 0.008, 95% confidence interval (CI) 0.005-0.012, p < 0.0001) and 430 (49%) of 882 in non-carriers (OR 0.072, 95% CI 0.051-0.100, p < 0.0001), after controlling for age, sex, hospital length of stay and calendar year. Conclusion: Among patients with SA infection, the odds of the infection being an MRSA infection are 125-times lower in an MSSA carrier than in an MRSA carrier. abstract_id: PUBMED:36732413 Increase in methicillin-susceptible Staphylococcus aureus bloodstream infections in Switzerland: a nationwide surveillance study (2008-2021).
Purpose: An increasing burden of Staphylococcus aureus bloodstream infections (BSI), despite a decrease in the percentage of methicillin-resistant S. aureus (MRSA), was described recently in other European countries. The main aim of this study was to analyse recent temporal trends of S. aureus, methicillin-susceptible S. aureus (MSSA) and MRSA BSI for Switzerland as well as the different linguistic regions within Switzerland. An additional aim was to estimate potential differences among patient-based and epidemiological risk factors. Methods: A retrospective observational study was conducted in Switzerland over a period of 14 years (2008-2021). Trends in S. aureus, MSSA and MRSA BSI were analysed by applying linear regression models. Results: Staphylococcus aureus BSI increased by +30% from 19.7 to 25.6 cases per 100,000 inhabitants between 2008 and 2021 (P < 0.01) in Switzerland. Thereof, MSSA increased by +37% from 17.8 to 24.4 cases per 100,000 inhabitants (P < 0.01). MRSA decreased from 1.9 to 1.2 cases per 100,000 inhabitants (P < 0.01), which was driven by decreasing incidence in the French-speaking region. MSSA BSI increased significantly (P < 0.01) in both linguistic regions. A further stratification revealed that incidence increased the most in male patients of the age group ≥ 80 years of the German-speaking region. Conclusion: The increasing health burden of MSSA BSI in Switzerland indicates that not only proportions of resistant microorganisms but also total BSI incidences should be monitored. In addition, data stratification revealed that the increase was mainly driven by an increasing incidence in elderly males of the German-speaking region. abstract_id: PUBMED:21317194 Colonization and transmission of meticillin-susceptible and meticillin-resistant Staphylococcus aureus in a murine nasal colonization model. Nasal colonization by Staphylococcus aureus is an important risk factor for the development of a nosocomial infection. Acquisition of nasal colonization by S. aureus increases mortality in hospitalized patients, but little is known about the transmission dynamics of S. aureus. To study S. aureus transmission, colonization and colonization persistence, we developed a murine transmission model. In 20 cages, 2 out of 10 mice were nasally inoculated (at 5×10^8 c.f.u. per mouse) with either meticillin-susceptible S. aureus (MSSA) (10 cages) or meticillin-resistant S. aureus (MRSA) (10 cages). On days 5, 15, 25 and 40, all mice in a cage were swabbed or sacrificed and nasal colonization and c.f.u. were determined in all 10 mice by nasal dissection or by nasal swab. Spread and subsequent stable colonization by both MSSA and MRSA from colonized to uncolonized mice within a cage was seen. At day 5, an increased number of colonized mice were observed in the MSSA group compared to the MRSA group (P = 0.003). On day 40, the mean number of c.f.u. per mouse was higher for MRSA than for MSSA (P = 0.06). Faecal-oral transmission was shown to be a possibly important transmission route in this model. These results suggest a more rapid spread of MSSA compared to MRSA. However, MRSA shows a more stable nasal colonization after a longer period of time. Answer: The hypothesis that colonization with methicillin-susceptible Staphylococcus aureus (MSSA) may protect against nosocomial acquisition of methicillin-resistant S. aureus (MRSA) has been investigated, but the evidence does not strongly support this protective effect.
A study conducted in a Swiss university hospital found no association between MSSA colonization at admission and nosocomial MRSA acquisition after adjusting for confounders in a multivariate logistic regression analysis (PUBMED:24709721). This suggests that endogenous MSSA colonization does not appear to protect against nosocomial MRSA acquisition in a population of medical patients without frequent antibiotic exposure. Additionally, another study examining the colonization of healthcare workers with MRSA in Oman found no statistical association between healthcare worker MRSA nasal colonization and various factors, including previous MRSA infection and previous exposure to antibiotics (PUBMED:34780365). This indicates that MSSA colonization may not have a significant protective role against MRSA colonization or infection in this context. However, it is important to note that a study conducted at the Cleveland Clinic Health System found that among patients with Staphylococcus aureus infection, the odds of the infection being an MRSA infection were significantly lower in MSSA carriers than in MRSA carriers (PUBMED:29649598). This suggests that while MSSA colonization may not prevent MRSA acquisition, it may be associated with a lower likelihood of MRSA infection among those who are colonized with MSSA. Overall, the evidence does not conclusively support the idea that MSSA colonization provides protection against MRSA acquisition or infection, and further research may be needed to fully understand the dynamics between MSSA and MRSA colonization and infection.
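A quick arithmetic check makes the headline numbers in these abstracts easier to interpret. The odds ratio of 18 reported in PUBMED:10782237 comes directly from the 2x2 table (13 of 63 colonized vs 7 of 477 non-colonized patients developed MRSA infection), and the "125-times lower" statement in PUBMED:29649598 is simply the reciprocal of the reported OR of 0.008. The snippet below is a back-of-envelope recalculation, not the authors' exact method; small differences in the confidence interval reflect rounding and the interval method used.

from math import exp, log, sqrt

a, b = 13, 63 - 13    # MRSA-colonized: infected / not infected
c, d = 7, 477 - 7     # non-colonized: infected / not infected

or_hat = (a * d) / (b * c)                     # (13*470)/(50*7) ~= 17.5, reported as 18
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)        # Woolf standard error on the log scale
ci_low = exp(log(or_hat) - 1.96 * se_log_or)   # ~6.7 (reported 6)
ci_high = exp(log(or_hat) + 1.96 * se_log_or)  # ~45.8 (reported 51)
print(round(or_hat, 1), round(ci_low, 1), round(ci_high, 1))

print(round(1 / 0.008))                        # 125: odds 125 times lower for MSSA carriers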
Instruction: Early detection and screening for ovarian cancer: does physician specialty matter? Abstracts: abstract_id: PUBMED:37604591 Early Detection of Ovarian Cancer. The risk of death from ovarian cancer is highly associated with the clinical stage at diagnosis. Efforts to implement screening for ovarian cancer have been largely unsuccessful, due to the low prevalence of the disease in the general population and the heterogeneity of the various cancer types that fall under the ovarian cancer designation. A practical test for early detection will require both high sensitivity and high specificity to balance reducing the number of cancer deaths with minimizing surgical interventions for false positive screens. The technology must be cost-effective to deliver at scale, widely accessible, and relatively noninvasive. Most importantly, a successful early detection test must be effective not only at diagnosing ovarian cancer but also in reducing ovarian cancer deaths. Stepwise or multimodal approaches among the various areas under investigation will likely be required to make early detection a reality. abstract_id: PUBMED:27418634 EPIC Early Detection of Ovarian Cancer. CA125 dominated performance for ovarian cancer early detection among four serum biomarkers evaluated in EPIC study prediagnostic serum, rising on average 3 years prior to detection. Adding HE4 provided only marginal improvement. This natural history supports annual testing for early detection and highlights the importance of biomarker discovery complementing CA125. Clin Cancer Res; 22(18); 4542-4. ©2016 AACR. See related article by Terry et al., p. 4664. abstract_id: PUBMED:36980526 Mucins as Potential Biomarkers for Early Detection of Cancer. Early detection significantly correlates with improved survival in cancer patients. So far, a limited number of biomarkers have been validated to diagnose cancers at an early stage. Considering the leading cancer types that contribute to more than 50% of deaths in the USA, we discuss the ongoing endeavors toward early detection of lung, breast, ovarian, colon, prostate, liver, and pancreatic cancers to highlight the significance of mucin glycoproteins in cancer diagnosis. As mucin deregulation is one of the earliest events in most epithelial malignancies following oncogenic transformation, these high-molecular-weight glycoproteins are considered potential candidates for biomarker development. The diagnostic potential of mucins is mainly attributed to their deregulated expression, altered glycosylation, splicing, and ability to induce autoantibodies. Secretory and shed mucins are commonly detected in patients' sera, body fluids, and tumor biopsies. For instance, CA125, also called MUC16, is one of the biomarkers implemented for the diagnosis of ovarian cancer and is currently being investigated for other malignancies. Similarly, MUC5AC, a secretory mucin, is a potential biomarker for pancreatic cancer. Moreover, anti-mucin autoantibodies and mucin-packaged exosomes have opened new avenues of biomarker development for early cancer diagnosis. In this review, we discuss the diagnostic potential of mucins in epithelial cancers and provide evidence and a rationale for developing a mucin-based biomarker panel for early cancer detection. abstract_id: PUBMED:32130708 Ovarian Cancer Biomarkers: Moving Forward in Early Detection. Ovarian cancer is a silent cancer whose survival rate mainly relies on early-stage detection.
The discovery of reliable ovarian cancer biomarkers plays a crucial role in disease management and strongly impacts patients' prognosis and survival. Although it has many limitations, CA125 is the classical ovarian cancer biomarker, and current research using proteomic or metabolomic methodologies struggles to find alternative biomarkers, using non-invasive or relatively non-invasive sources such as urine, serum, plasma, tissue, ascites or exosomes. Metabolism and metabolites are key players in cancer biology and their importance in biomarker discovery cannot be neglected. In this chapter we overview the state of the art and the challenges facing the use and discovery of biomarkers and focus on early detection of ovarian cancer. abstract_id: PUBMED:30390764 Early Detection of Ovarian Cancer. Early detection of ovarian cancer could reduce mortality by 10% to 30%. Effective screening requires high sensitivity (>75%) and extremely high specificity (99.7%). Clinical trials suggest the best specificity is achieved with 2-stage strategies in which increasing serum CA125 level triggers transvaginal sonography to detect a malignant pelvic mass, although evidence for such approaches improving overall survival has been limited. Screening may be improved by combining CA125 with novel biomarkers, such as autoantibodies, circulating tumor DNA, or microRNAs. In order to detect premetastatic ovarian cancers originating in the distal fallopian tube, more sensitive approaches to diagnostic imaging are required. abstract_id: PUBMED:26668153 Research Progress of MicroRNA in Early Detection of Ovarian Cancer. Objective: This review aimed to update the progress of microRNA (miRNA) in early detection of ovarian cancer. We discussed the current clinical diagnosis methods and biomarkers of ovarian cancer, especially the methods of miRNA in early detection of ovarian cancer. Data Sources: We collected all relevant studies about miRNA and ovarian cancer in PubMed and CNKI from 1995 to 2015. Study Selection: We included all relevant studies concerning miRNA in early detection of ovarian cancer, and excluded the duplicated articles. Results: miRNAs play a key role in various biological processes of ovarian cancer, such as development, proliferation, differentiation, apoptosis and metastasis, and these phenomena appear at an early stage. Therefore, miRNA can be used as a new biomarker for early diagnosis of ovarian cancer, and intervention on the miRNA expression of known and potential target genes may achieve the effect of early prevention. With the development of nanoscience and technology, analysis methods for miRNA are also developing quickly, which may provide better characterization for early detection of ovarian cancer. Conclusions: In the near future, miRNA therapy could be a powerful tool for ovarian cancer prevention and treatment, and combined with new analysis technologies and new nanomaterials, point-of-care tests for miRNA with high throughput, high sensitivity, and strong specificity may be developed to achieve the application of diagnostic kits in screening for early ovarian cancer. abstract_id: PUBMED:31568690 Detection of early stage ovarian cancer in a large community cohort. Background: Although detecting ovarian cancer at an early stage is a highly meaningful clinical goal, no studies have evaluated early stage disease presentation in a large community-based population and how it differs from that of late stage disease.
Methods: Electronic medical records were evaluated for women diagnosed with ovarian or fallopian tube cancer in 2016 and 2017 to identify the first imaging study to detect disease. Women being followed prior to diagnosis for known genetic risk from BRCA or other mutation were excluded. The visit in which the imaging test was ordered and related encounters were reviewed to determine the indication for imaging. Patient characteristics, presenting symptoms and duration, and modality of first abnormal imaging were compared for early vs late stage ovarian cancer and by provider specialty. Results: Of 540 women with ovarian cancer, 190 (35%) were diagnosed with early stage disease, of whom 141 (74%) were symptomatic, with 45% of women presenting to internists, 33% to gynecologists, and 20% to emergency medicine physicians. Pelvic ultrasonography detected only 23% of late stage cases whereas pelvic ultrasonography and abdominal pelvic computed tomography (CT) each detected 47% of early stage cases. While abdominal pain and bloating were common to both women with early and late stage cancer, women with early stage disease were younger (58 vs 64 years, P < .0001), more likely to present to gynecologists (33% vs 15%, P < .001) and complained more often of a palpable mass (17% vs 6%, P < .0001) or postmenopausal bleeding (11% vs 5%, P < .001). Conclusions: Excluding women with genetic predisposition to ovarian cancer known prior to diagnosis, approximately three out of four cases of early stage ovarian cancer are detected as the result of evaluation of symptoms and one in four cases are detected incidentally. Abdominal pelvic CT and pelvic ultrasonography each detect an equal proportion of early stage cases. In contrast to late stage presentation, women diagnosed with early stage disease present more often with complaints of a palpable mass or postmenopausal bleeding, particularly to gynecologists. abstract_id: PUBMED:29278600 Ovarian Cancer Prevention, Screening, and Early Detection: Report From the 11th Biennial Ovarian Cancer Research Symposium. Objective: The aim of this study is to provide a summary report on recent research advances in ovarian cancer prevention, screening, and early detection that were presented at the 11th Biennial Ovarian Cancer Research Symposium in Seattle, Wash. Methods: At the symposium, researchers from around the world participated in the poster, invited oral, and keynote presentations, and discussed the latest advances in the areas of cancer prevention, screening, and early detection. Results: In the Scientific Session for Prevention, Screening, and Early Detection, Usha Menon, PhD (University College London), presented exploratory studies from the ongoing UK Collaborative Trial of Ovarian Cancer Screening trial. Karen Lu, MD, presented her studies on BRCA testing and salpingectomies as prevention strategies. Eight speakers were selected from the abstracts for short oral presentations, and the topics ranged from the Ovarian Cancer Early Detection Program by Saul Rivkin, MD, to the ultra-deep sequencing of somatic mutations in TP53 in normal and cancer patients by Rosana Risques, PhD. Fourteen additional poster presentations, ranging from the potential role of cancer stem cells in recurrence to retrotransposons in ovarian cancer development, rounded out the session.
Conclusions: Although progress is being made in the areas of prevention, screening, and early detection, these advances have not yet translated into tangible clinical benefits for patients with ovarian cancer. A wide array of research topics presented in the session provides a glimmer of hope that better understanding of genetic risk factors, refining screening strategies, and developing new methods for early detection will eventually lead to improved outcome for patients with ovarian cancer. abstract_id: PUBMED:32939802 Statistical approaches using longitudinal biomarkers for disease early detection: A comparison of methodologies. Early detection of clinical outcomes such as cancer may be predicted using longitudinal biomarker measurements. Tracking longitudinal biomarkers as a way to identify early disease onset may help to reduce mortality from diseases like ovarian cancer that are more treatable if detected early. Two disease risk prediction frameworks, the shared random effects model (SREM) and the pattern mixture model (PMM) could be used to assess longitudinal biomarkers on disease early detection. In this article, we studied the discrimination and calibration performances of SREM and PMM on disease early detection through an application to ovarian cancer, where early detection using the risk of ovarian cancer algorithm (ROCA) has been evaluated. Comparisons of the above three approaches were performed via analyses of the ovarian cancer data from the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial. Discrimination was evaluated by the time-dependent receiver operating characteristic curve and its area, while calibration was assessed using calibration plot and the ratio of observed to expected number of diseased subjects. The out-of-sample performances were calculated via using leave-one-out cross-validation, aiming to minimize potential model overfitting. A careful analysis of using the biomarker cancer antigen 125 for ovarian cancer early detection showed significantly improved discrimination performance of PMM as compared with SREM and ROCA, nevertheless all approaches were generally well calibrated. Robustness of all approaches was further investigated in extensive simulation studies. The improved performance of PMM relative to ROCA is in part due to the fact that the biomarker measurements were taken at a yearly interval, which is not frequent enough to reliably estimate the changepoint or the slope after changepoint in cases under ROCA. abstract_id: PUBMED:38125920 Primary Prevention and Early Detection of Hereditary Breast Cancer. Background: Primary prevention and early detection of hereditary breast cancer has been one of the main topics of breast cancer research in recent decades. The knowledge of risk factors for breast cancer has been increasing continuously just like the recommendations for risk management. Pathogenic germline variants (mutations, class 4/5) of risk genes are significant susceptibility factors in healthy individuals. At the same time, germline mutations serve as biomarkers for targeted therapy in breast cancer treatment. Therefore, management of healthy mutation carriers to enable primary prevention is in the focus as much as the consideration of pathogenic germline variants for therapeutic decisions. Since 1996, the German Consortium has provided quality-assured care for counselees and patients with familial burden of breast and ovarian cancer. 
Summary: Currently, there are 23 university centers with over 100 cooperating DKG-certified breast and gynecological cancer centers. These centers provide standardized, evidence-based, and knowledge-generating care, which includes aspects of primary as well as secondary and tertiary prevention. An important aspect of quality assurance and development was the inclusion of the HBOC centers in the certification system of the German Cancer Society (GCS). Since 2020, the centers have been regularly audited and their quality standards continuously reviewed according to quality indicators adapted to the current state of research. The standard of care at GC-HBOC' centers involves the evaluation as well as evolution of various aspects of care like inclusion criteria, identification of new risk genes, management of variants of unknown significance (class 3), evaluation of risk-reducing options, intensified surveillance, and communication of risks. Among these, the possibility of intensified surveillance in the GC-HBOC for early detection of breast cancer is an important component of individual risk management for many counselees. As has been shown in recent years, in carriers of pathogenic variants in high-risk genes, this approach enables the detection of breast cancer at very early, more favorable stages although no reduction of mortality has been demonstrated yet. The key component of the intensified surveillance is annual contrast-enhanced breast MRI, supplemented by up to biannual breast ultrasound and mammography usually starting at age 40. Key Messages: Apart from early detection, the central goal of care is the prevention of cancer. By utilizing individualized risk calculation, the optimal timeframe for risk-reducing surgery can be estimated, and counselees can be supported in reaching preference-sensitive decisions. Answer: Yes, physician specialty appears to matter in the early detection and screening for ovarian cancer. According to a study that evaluated electronic medical records for women diagnosed with ovarian or fallopian tube cancer, women with early-stage disease were more likely to present to gynecologists (33% vs 15%, P < .001) compared to those with late-stage disease. These women were also more likely to complain of a palpable mass or postmenopausal bleeding, which are symptoms that gynecologists might be more attuned to recognize and investigate further (PUBMED:31568690). This suggests that gynecologists may play a crucial role in the early detection of ovarian cancer due to their specialized knowledge and experience with gynecological symptoms and conditions. Furthermore, the study indicated that women diagnosed with early-stage disease presented more often with complaints of a palpable mass or postmenopausal bleeding, particularly to gynecologists, which could imply that gynecologists are more likely to detect ovarian cancer at an earlier stage due to their familiarity with these specific symptoms (PUBMED:31568690). Overall, the specialty of the physician may influence the likelihood of early detection of ovarian cancer, with gynecologists potentially being more adept at recognizing and acting upon early symptoms of the disease.
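The sensitivity and specificity targets quoted above (sensitivity >75%, specificity 99.7% in PUBMED:30390764) are easier to appreciate through the positive predictive value (PPV). The calculation below assumes, purely for illustration, a prevalence of about 1 undiagnosed case per 2,500 women screened per year; that figure is not taken from the abstracts.

def ppv(sensitivity, specificity, prevalence):
    # Bayes' rule: P(disease | positive test).
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 1 / 2500
print(round(ppv(0.75, 0.997, prevalence), 3))  # ~0.091: roughly 1 cancer per 11 positive screens
print(round(ppv(0.75, 0.950, prevalence), 3))  # ~0.006: with "only" 95% specificity, ~1 per 170

This is why a test with merely good specificity generates far too many false-positive surgical work-ups to be usable for population screening, as the abstracts above emphasize.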
Instruction: Can single cIMT measurement during echocardiography affect further investigation decision for coronary artery disease? Abstracts: abstract_id: PUBMED:24143102 Detection of subclinical atherosclerosis and diastolic dysfunction in patients with schizophrenia. Background: Patients with schizophrenia have a higher risk for cardiovascular diseases, which is associated with early mortality compared with the nonschizophrenic population. Early diagnosis of cardiovascular diseases in asymptomatic periods in patients with schizophrenia would enhance their quality of life and reduce mortality. Echocardiography, carotid ultrasonography, and ankle brachial index (ABI) measurement are known to be beneficial methods of detecting subclinical cardiovascular diseases and of risk stratification. The present study investigated carotid intima media thickness (CIMT) and ABI and echocardiographic parameters measured via conventional and tissue Doppler echocardiography in patients with schizophrenia in comparison with a control group. Methods: The present case-control study included 116 patients with schizophrenia and 88 healthy patients. Participants with any current comorbid psychiatric disorder, current or lifetime neurological and medical problems, current coronary artery disease, diabetes, hypertension, hypothyroidism, or hyperthyroidism or who were using antihypertensives, antidiabetic agents, or antiobesity drugs were excluded. High-resolution B-mode ultrasound images were used to measure CIMT. Conventional and tissue Doppler measurements were performed according to the recommendations of the American Society of Echocardiography. Results: Low ABI, mitral ratio of the early (E) to late (A) ventricular filling velocities, septal E', septal S', lateral E', lateral S', septal E'/septal A', lateral E'/lateral A', and high septal A', mitral E/septal E', mitral E/lateral E', and CIMT values were observed in the schizophrenia group compared with the control group. Conclusion: Doppler parameters supported the hypothesis that patients with schizophrenia are at high risk for cardiovascular diseases. abstract_id: PUBMED:19892054 Screening for subclinical coronary artery disease measuring carotid intima media thickness. Traditional coronary risk assessment is based on major cardiovascular risk factors using the Framingham risk score. Carotid intima-media thickness (CIMT) measured by ultrasonography is a noninvasive test used to assess for the presence of coronary atherosclerosis. CIMT has been shown to be an independent predictor of future cardiovascular events and is used in research trials as a surrogate for the presence as well as regression of coronary artery disease. The objectives of this report are to review the published reports on CIMT and to help establish the role of CIMT as a screening tool for coronary artery disease in selected patients. CIMT measurement can modify cardiovascular risk prediction in patients initially classified with the Framingham risk score, with reclassification into higher or lower risk categories. It is most useful for refining risk assessment in patients at intermediate risk. The Screening for Heart Attack Prevention and Education (SHAPE) Task Force recommends screening all asymptomatic middle-aged and older men and women using noninvasive imaging. The American Society of Echocardiography established a consensus on the methodologic aspects of CIMT measurement. 
Sequential scanning of CIMT to assess atherosclerosis is currently not recommended, because of interscan variability and small expected changes over time. In conclusion, in the primary prevention of coronary artery disease, CIMT measurement reclassifies patients into higher or lower risk categories, allowing early appropriate management. abstract_id: PUBMED:24372996 Epicardial adipose tissue thickness in hemodialysis patients. Aim: Hemodialysis (HD) patients have higher cardiovascular mortality, and it is related to atherosclerosis. Epicardial adipose tissue (EAT) thickness is a marker of atherosclerosis and an independent predictor of coronary artery disease. The aim of our study was to evaluate the relationship between EAT and carotid intima-media thickness (CIMT), predictors of early atherosclerosis, in HD patients. Methods: The study included 62 HD patients and 40 healthy controls. EAT thickness and CIMT were measured by echocardiography in all subjects. Results: Epicardial adipose tissue thickness and CIMT were higher (6.98 ± 1.67 vs. 3.84 ± 0.73 mm, P < 0.001, 0.94 ± 0.17 vs. 0.63 ± 0.11 mm, P < 0.001, respectively) in HD patients than in control subjects. EAT thickness was correlated with CIMT, HD duration, age, and calcium. In addition, HD duration, CIMT, and age were independent predictors of EAT thickness in HD patients in regression analysis. Conclusion: Epicardial adipose tissue thickness may be a useful indicator of early atherosclerosis in HD patients. abstract_id: PUBMED:24085336 Epicardial fat thickness correlates with carotid intima-media thickness, arterial stiffness, and cardiac geometry in children and adolescents. To determine the association between epicardial fat thickness and carotid arterial stiffness, carotid intima-media thickness (CIMT), left atrial (LA) volume, and left-ventricular (LV) geometry parameters in obese children and adolescents compared with controls. A case-control study was performed in 96 children and adolescents (obese n = 66, controls n = 30) age 9-16 years old (38 female and 58 male, mean age 11.7 ± 2.8 years) undergoing transthoracic echocardiography and carotid artery ultrasound. Clinical, anthropometric, and biochemical determinations were also recorded. Epicardial fat thickness (2.76 ± 1.2 vs. 1.36 ± 0.7 mm, p < 0.001), LA volume (35.7 ± 13.2 vs. 28.9 ± 9.8 mL, p = 0.008), LV mass (118.3 ± 38.6 vs. 96.4 ± 35.4 mL, p = 0.008), CIMT (0.48 ± 0.07 vs. 0.44 ± 0.05 mm, p = 0.019), and local pulse wave velocity (LPWV; 3.7 ± 0.5 vs. 3.2 ± 0.4 m/sec, p = 0.007) were significantly increased in obese children and adolescents compared with controls. Epicardial fat showed a significant and positive correlation with LA volume, LV mass, and LPWV as well as a significant and independent association with increased CIMT (odds ratio (OR) = 3.19 [1.88-7.99], p = 0.005) in the study population. Epicardial fat thickness is linked to obesity, carotid subclinical atherosclerosis, and cardiac geometry parameters and might be a useful tool for cardiovascular risk stratification in children and adolescents. abstract_id: PUBMED:27564776 An increase in epicardial adipose tissue is strongly associated with carotid-intima media thickness and atherosclerotic plaque, but LDL only with the plaque. Objective: Carotid intima-media thickness (CIMT) is a reliable marker of subclinical atherosclerosis and cardiovascular events.
To date, no study has investigated whether epicardial adipose tissue (EAT), a surrogate for lipid storage in a special visceral tissue, or circulating lipids is more important for CIMT and atherosclerotic plaque. Methods: Our study, which had a cross-sectional and prospective observational design, included 252 patients who were admitted to our outpatient clinic. EAT, identified as an echo-free space under the pericardial layer on 2-dimensional echocardiography, was measured perpendicularly in front of the right ventricular free wall at end-systole. Results: EAT significantly correlated with CIMT (r=0.623, p<0.001). CIMT was significantly increased with rising EAT thickness (0.72±0.15 mm, 0.85±0.16 mm, and 0.95±0.12 mm in patients with EAT <5 mm, 5-7, and >7 mm, p<0.001, respectively). Multiple linear regression analysis identified age (Beta: 0.406, p<0.001), male gender (Beta: 0.244, p<0.001), and EAT (Beta: 0.450, p<0.001) as independent correlates of CIMT. By contrast, in logistic regression analysis, only EAT (OR, 1.386; 95% CI, 1.203-1.597, p<0.001) and LDL cholesterol (OR, 1.013; 95% CI, 1.002-1.013, p=0.02) were independent predictors for the presence of carotid plaque. Conclusion: Our study showed that EAT has a relationship with both CIMT and the presence of carotid plaque, but LDL is independently related to the plaque. This finding suggests that EAT thickness may be a risk factor and biomarker, playing an important role beginning from early stages of atherosclerosis, unlike LDL cholesterol, which appears to have a role in later stages of atherosclerosis. abstract_id: PUBMED:24666397 Prevalence and correlates of erectile dysfunction in type 2 diabetes mellitus: a cross-sectional single-center study among Turkish patients. Objective: The aim of this study was to evaluate the prevalence of erectile dysfunction (ED) in patients with type 2 diabetes mellitus (T2DM) in relation to vascular and neurogenic correlates. Methods: A total of 116 males including T2DM patients [n=68; mean age, 56.7 (5.8) years] and age-matched healthy controls [n=48; mean age, 57.0 (6.6) years] were included in this cross-sectional single-center study. Data on anthropometrics, blood biochemistry, concomitant hypertension, hyperlipidemia, and coronary artery disease (CAD) were recorded in each subject along with measurement of carotid artery intima media thickness (CIMT) and evaluation of erectile dysfunction (ED) via the International Index of Erectile Function (IIEF-5) Questionnaire. A univariate analysis was performed to determine the relationship of cardiovascular risk factors and diabetes-related complications to ED. Results: Patient and control groups were similar in terms of the percentage of patients with hyperlipidemia (51.5% and 39.6%, respectively) and CAD (33.8% and 22.9%, respectively), whereas concomitant hypertension was more common (P=0.05) and CIMT values were significantly higher (P=0.020) in patients with T2DM compared with controls. Polyneuropathy was noted in 46.2% of patients, nephropathy in 30.8%, and retinopathy in 33.8%. ED scores were significantly lower in patients than controls [14.3 (7.3) vs. 18.2 (6.3), P=0.004], with a significantly higher percentage of patients than controls in the category of severe dysfunction (29.4 vs. 10.4%, P=0.014).
Univariate analysis revealed that diabetic polyneuropathy was the only factor to be associated with higher likelihood (93.3% in the presence and 60.0% in the absence of neuropathy) and severity (43.3% in the presence and 14.3% in the absence of neuropathy) of ED (P=0.004). Conclusion: Findings from the present cross-sectional single-center study revealed the prevalence of ED to be considerably higher in patients with T2DM than in age-matched healthy controls, with identification of diabetic polyneuropathy as the only risk factor associated with higher likelihood and more severe forms of ED. abstract_id: PUBMED:37481586 Pragmatic clinic-based investigation of echocardiogram parameters in asymptomatic patients with type 2 diabetes in routine clinical practice and its association with suggestive coronary artery disease: a pilot study. Background: Patients with diabetes mellitus (DM) have cardiovascular diseases (CVD) as a major cause of mortality and morbidity. The primary purpose of this study was to assess the echocardiographic parameters that showed alterations in patients with type 2 diabetes mellitus (T2DM) with suggestive coronary artery disease (CAD) determined by electrocardiography, and the secondary purpose was to assess the relationship of these alterations with established cardiovascular risk factors. Methods: This cross-sectional, observational pilot study included 152 consecutive patients with T2DM who attended a tertiary DM outpatient care center. All patients underwent clinical examination and history, anthropometric measurements, demographic survey, determination of the Framingham global risk score, laboratory evaluation, basal electrocardiogram, echocardiogram, and measurement of carotid intima-media thickness (CIMT). Results: From the overall sample, 134 (88.1%) patients underwent an electrocardiogram. They were divided into two groups: patients with electrocardiograms suggestive of CAD (n = 11 [8.2%]) and those with normal or non-ischemic alterations on electrocardiogram (n = 123 [91.79%]). In the hierarchical multivariable logistic model examining all selected independent factors that entered into the model, sex, high triglyceride levels, and presence of diabetic retinopathy were associated with CAD in the final model. No echocardiographic parameters were significant in multivariate analysis. The level of serum triglycerides (threshold) related to an increased risk of CAD was ≥ 184.5 mg/dl (AUC = 0.70, 95% CI [0.51-0.890]; p = 0.026). Conclusion: Our pilot study demonstrated that no echocardiogram parameters could predict or determine CAD. The combination of CIMT and Framingham risk score is ideal to determine risk factors in asymptomatic patients with T2DM. Patients with diabetic retinopathy and hypertriglyceridemia need further investigation for CAD. Further prospective studies with larger sample sizes are needed to confirm our results. abstract_id: PUBMED:15301899 Atherosclerotic disease regression with statins: studies using vascular markers. Vascular imaging techniques enable identification of atherosclerosis in the sub-clinical phase and allow assessment of interventions to modify disease progression. Angiography has limited utility for tracking the progression of atherosclerosis because of its invasive nature, limited sensitivity for detecting early lesions and relatively low interscan reproducibility.
Intravascular ultrasound (IVUS) and electron beam computed tomography (EBCT) are more sensitive and reproducible; however, available data on their ability for tracking disease progression are few. Measurement of carotid intima media thickness (CIMT) by B-mode ultrasound is a well-validated procedure for this application. In comparison with angiography, CIMT demonstrates greater sensitivity for detecting early atherosclerosis and lipid-rich plaques that are vulnerable to rupture. Continued validation and development of imaging techniques, such as magnetic resonance imaging (MRI), will facilitate the assessment of atherosclerosis progression in intervention studies. Statins are effective low-density lipoprotein cholesterol (LDL-C) lowering agents, and imaging studies have demonstrated their ability to slow progression and promote regression of atherosclerosis. The benefits of therapy on atherosclerosis regression appear to extend to soft atherosclerotic plaques that are still developing, and treatment effects are independent of baseline LDL-C level. Hence, imaging studies support early intervention with statins in coronary heart disease patients, irrespective of lipid level. abstract_id: PUBMED:11923037 Evaluation of carotid artery and aortic intima-media thickness measurements for exclusion of significant coronary atherosclerosis in patients scheduled for heart valve surgery. Objectives: We assessed the value of carotid intima-media thickness (CIMT) and thoracic aorta intima-media thickness (AoIMT) in ruling out significant coronary artery disease (CAD) in patients scheduled for heart valve surgery. Background: Evaluation of CAD is needed in most patients undergoing heart valve surgery because of the high surgical morbidity in patients with significant CAD, raising the need for sensitive tests to exclude CAD. Coronary angiography is the reference standard, but this invasive procedure is not cost-effective, because more than two-thirds of these patients do not have significant CAD. Methods: In a pilot study, CIMT and AoIMT cutoff values separating low- from high-risk groups were determined in 96 patients by using receiver-operating characteristic curves. Then, a prospective study was conducted in 152 patients to determine the statistical power of these cutoff values used alone or in combination. In both studies, carotid artery ultrasonography and transesophageal echocardiography were performed before coronary angiography and valve surgery. Results: In the pilot study, CIMT < 0.55 mm and AoIMT < 3 mm were excellent predictors of the absence of CAD. In the prospective study, CIMT and AoIMT criteria were independent predictors of significant CAD in these patients, as assessed by logistic regression analysis. The carotid IMT criterion had 100% sensitivity and 100% negative predictive value. For the AoIMT criterion, sensitivity was 98%, and negative predictive value 99%. Combining the two criteria did not change sensitivity and negative predictive value but increased specificity to 78%. Conclusions: Measurements of CIMT and AoIMT may be useful in selecting patients who do not require coronary angiography before heart valve surgery. abstract_id: PUBMED:27522671 Impaired retrobulbar blood flow and increased carotid IMT in patients with Crohn's disease. Crohn's Disease [CD] is one of the Inflammatory Bowel Diseases, which are chronic relapsing inflammatory diseases. Although the major affected organ in CD is the intestine, extra-intestinal organs and tissues, including the cardiovascular system, are also affected.
Several studies have demonstrated that CD patients may have a higher risk of advancing atherosclerosis. Microvascular endothelial dysfunction plays an essential role in the development of coronary atherosclerosis. Microvascular structural abnormalities in the retinal circulation may predict macrovascular events such as stroke and coronary heart disease. In order to assess the microvascular circulation of the retina, retrobulbar blood flow velocities and resistive indices [RI] of the retrobulbar arteries are measured. Carotid intima-media thickness [CIMT] correlates strongly with future CV risk. We aimed to investigate whether calculation of the RI of the retrobulbar arteries can be used as a novel, easy and reproducible method to define atherosclerotic risk in CD patients along with CIMT. Thirty CD patients in remission and thirty healthy volunteers were enrolled in the study. Measurements of carotid intima-media thickness and retrobulbar blood flow velocities were obtained with an ultrasound scanner and colour Doppler ultrasonography. The RI of the OA [0.77 ± 0.06 vs. 0.65 ± 0.06, p < 0.001] and CIMT [0.52 (0.50-0.60) vs. 0.40 (0.40-0.50)] were significantly higher in patients with CD than in the control group. Increased OARI and CIMT values may reflect an increased risk of atherosclerosis, and OARI measurement may be used as a screening test for microvascular circulation evaluation in patients with CD. Answer: Yes, a single carotid intima-media thickness (CIMT) measurement during echocardiography can affect the decision for further investigation of coronary artery disease (CAD). CIMT is a noninvasive test used to assess the presence of coronary atherosclerosis and has been shown to be an independent predictor of future cardiovascular events (PUBMED:19892054). It can modify cardiovascular risk prediction in patients initially classified with the Framingham risk score, particularly in those at intermediate risk, by reclassifying them into higher or lower risk categories (PUBMED:19892054). This reclassification can lead to early appropriate management, which is crucial in the primary prevention of CAD. Moreover, studies have shown that increased CIMT is associated with a higher risk of cardiovascular diseases in various populations, including patients with schizophrenia (PUBMED:24143102), hemodialysis patients (PUBMED:24372996), and children and adolescents with obesity (PUBMED:24085336). Additionally, an increase in epicardial adipose tissue (EAT) has been strongly associated with CIMT and atherosclerotic plaque, suggesting that EAT thickness may be a risk factor and biomarker for early stages of atherosclerosis (PUBMED:27564776). In patients scheduled for heart valve surgery, CIMT measurements have been used to rule out significant CAD, demonstrating high sensitivity and negative predictive value, suggesting that patients with low CIMT may not require coronary angiography before surgery (PUBMED:11923037). Furthermore, impaired retrobulbar blood flow and increased CIMT in patients with Crohn's disease may reflect an increased risk of atherosclerosis, indicating that CIMT measurement can be a useful screening test for evaluating atherosclerotic risk (PUBMED:27522671). In summary, CIMT measurement during echocardiography can provide valuable information regarding the risk of CAD and can influence the decision-making process for further investigation and management of the disease.
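On the statistics behind the carotid-plaque result quoted above: the per-unit odds ratios in PUBMED:27564776 (OR 1.386 per mm of EAT, OR 1.013 per mg/dL of LDL) are the exponentiated coefficients of a logistic regression. The sketch below shows that relationship on synthetic data; the simulated effect sizes are assumptions chosen to land near the published values, and this is not a re-analysis of the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 252                                     # same sample size as the study, data synthetic
eat = rng.normal(6.0, 1.7, n)               # epicardial adipose tissue thickness, mm
ldl = rng.normal(120, 30, n)                # LDL cholesterol, mg/dL
linpred = -6.0 + 0.33 * eat + 0.013 * ldl   # assumed "true" log-odds for the simulation
plaque = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([eat, ldl]))
fit = sm.Logit(plaque, X).fit(disp=0)
print(np.exp(fit.params[1:]))               # per-unit ORs, close to exp(0.33)~1.39 and exp(0.013)~1.013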
Instruction: Is positron emission tomography using 18F-fluorodeoxyglucose and 11C-acetate valuable in diagnosing indeterminate pancreatic masses? Abstracts: abstract_id: PUBMED:15544073 Is positron emission tomography using 18F-fluorodeoxyglucose and 11C-acetate valuable in diagnosing indeterminate pancreatic masses? Background: It can be impossible to differentiate mass-forming chronic pancreatitis from adenocarcinoma of the pancreas using standard anatomical imaging. Positron emission tomography using 2-[18F]fluoro-2-deoxy-D-glucose (18FDG-PET) and 1-[11C]-acetate (11C-acetate-PET) are methods taking advantage of the metabolic differences between benign and malignant tissues. Aims: To determine the diagnostic accuracy of 18FDG-PET and 11C-acetate-PET in indeterminate pancreatic masses. Methods: Twenty patients with an indeterminate mass of the head of the pancreas were prospectively studied. All patients underwent 18FDG-PET and eighteen of them 11C-acetate-PET. Scans were evaluated qualitatively and quantitatively, the latter by using the regional standardised uptake value (SUV). Final diagnosis was established using histopathologic evaluation of the resected specimen or biopsy. Results: Adenocarcinoma was diagnosed in twelve patients and chronic pancreatitis in eight. Qualitative evaluation of 18FDG-PET imaging revealed three false-negative and one false-positive result. The sensitivity, specificity, and diagnostic accuracy were 75%, 88%, and 80%, respectively. The cut-off SUV to differentiate malignant from benign disease was 3.5, demonstrating a sensitivity of 91.7% and a specificity of 75%. Conclusion: 18FDG-PET imaging could not confirm or exclude malignancy in indeterminate masses of the head of the pancreas with high sensitivity and diagnostic accuracy. 11C-acetate-PET provided no additional diagnostic benefits. abstract_id: PUBMED:29805526 [18F]Fluorodeoxyglucose positron emission tomography/computed tomography for diagnosing polymyositis/dermatomyositis. [18F]fluorodeoxyglucose positron emission tomography/computed tomography ([18F]FDG-PET/CT) is useful for diagnosing cancers and inflammatory diseases. A polymyositis/dermatomyositis (PM/DM) lesion is an inflammatory heterogeneous disease of the striated muscle. In the present study, the maximum standardized uptake value (SUVmax) was compared between 22 cases with definite or probable PM/DM (PM/DM group) that underwent [18F]FDG-PET/CT examination and the same number of patients with no myopathy. The average proximal muscle FDG uptake value (SUVaverage) for each patient was represented by calculating the average of the SUVmax for these muscles bilaterally. The correlations between creatine kinase (CK), serum creatine kinase isoenzyme, myodynamia of the proximal limb girdle muscle and the SUVmax of each muscle group were analyzed. The results indicated that the SUVmax was markedly different between the PM/DM group and the non-myopathy group. It was demonstrated that [18F]FDG-PET/CT has a diagnostic value for PM/DM. The serum CK levels and the SUVaverage were negatively correlated with myodynamia. [18F]FDG-PET/CT may be used for examination to assess the severity of myositis. Furthermore, it may provide detection sites for muscle biopsy in patients with myositis. abstract_id: PUBMED:31293317 Elastofibroma Dorsi: Findings on 18F-Fluorodeoxyglucose Positron Emission Tomography-Computed Tomography. Elastofibroma dorsi (EFD) is a relatively rare soft-tissue pseudotumor that arises from mesenchymal tissue.
We present the case of a 48-year-old woman who underwent 18F fluorodeoxyglucose (FDG) positron emission tomography-computed tomography for initial staging of suspected carcinoma of the left breast. Soft-tissue masses showing moderate FDG uptake were incidentally detected in the bilateral infrascapular location characteristic of EFD. abstract_id: PUBMED:27107028 Comparability of [18F]THK5317 and [11C]PIB blood flow proxy images with [18F]FDG positron emission tomography in Alzheimer's disease. For amyloid positron emission tomography tracers, the simplified reference tissue model-derived ratio of influx rate in target relative to reference region (R1) has been shown to serve as a marker of brain perfusion, and, due to the strong coupling between perfusion and metabolism, as a proxy for glucose metabolism. In the present study, 11 prodromal Alzheimer's disease and nine Alzheimer's disease dementia patients underwent [18F]THK5317, carbon-11 Pittsburgh Compound-B ([11C]PIB), and 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) positron emission tomography to assess the possible use of early-phase [18F]THK5317 and R1 as proxies for brain perfusion, and thus, for glucose metabolism. Discriminative performance (prodromal vs Alzheimer's disease dementia) of [18F]THK5317 (early-phase SUVr and R1) was compared with that of [11C]PIB (early-phase SUVr and R1) and [18F]FDG. Strong positive correlations were found between [18F]THK5317 (early-phase, R1) and [18F]FDG, particularly in frontal and temporoparietal regions. Differences in correlations between early-phase and R1 ([18F]THK5317 and [11C]PIB) and [18F]FDG were not statistically significant, nor were differences in area under the curve values in the discriminative analysis. Our findings suggest that early-phase [18F]THK5317 and R1 provide information on brain perfusion, closely related to glucose metabolism. As such, a single positron emission tomography study with [18F]THK5317 may provide information about both tau pathology and brain perfusion in Alzheimer's disease, with potential clinical applications.
Therefore, the TLG of [18F]FDG in vivo PET images can map early treatment response following irradiation and serve as a promising prognostic indicator in a longitudinal preclinical study. The underlying metabolic alterations were not reflected by the [11C]ACT PET. abstract_id: PUBMED:34811734 Fluorine-18 fluorodeoxyglucose positron emission tomography-computed tomography for staging of canine insulinoma: 3 cases (2019-2020). Objectives: Canine insulinomas are uncommon malignant functional pancreatic neuroendocrine tumours with a high metastatic rate. Diagnostic imaging aids with staging and surgical planning of these tumours; however, identification is unpredictable across modalities. High-grade human pancreatic neuroendocrine tumours display increased avidity on 18F-fluorodeoxyglucose positron emission tomography-CT. Materials And Methods: Dogs with clinicopathologic findings consistent with pancreatic insulinoma were prospectively enrolled. Patients underwent 18F-fluorodeoxyglucose positron emission tomography-CT and CT angiography, followed by exploratory laparotomy. Results: Three patients met the inclusion criteria and had histologically confirmed insulinomas. Both metastatic lesions in patient 1 were mildly avid (SUVmax 2.79 and 3.01). In patient 2, the primary pancreatic insulinoma was minimally avid (SUVmax 2.16). The primary pancreatic lesion in patient 3 had similar avidity to normal pancreatic parenchyma (SUVmax 1.54) and was undetected on 18F-fluorodeoxyglucose positron emission tomography-CT. Insulinomas demonstrated variable attenuation and contrast enhancement patterns on CT angiography, and certain lesions were more conspicuous than on 18F-fluorodeoxyglucose positron emission tomography-CT. Two metastatic lesions not visible on either imaging modality were discovered in patient 2 at surgery. Clinical Significance: Canine insulinomas were inconsistently avid on 18F-fluorodeoxyglucose positron emission tomography-CT. This finding is likely attributable to the confounding clinicopathological features and multifaceted transformation of these tumours, in addition to the influence of variable tumour size, composition and vascularity. Unpredictable tumoural avidity limits the value of 18F-fluorodeoxyglucose positron emission tomography-CT for staging canine insulinomas.
PET imaging with [18F]fluoromisonidazole ([18F]FMISO) identifies hypoxic tissues. Unlike [18F]fluorodeoxyglucose ([18F]FDG)-PET, fasting is not necessary for [18F]FMISO-PET, but the waiting time from injection to image acquisition needs to be relatively long (e.g., 2-4 h). [18F]FMISO-PET images can be displayed on an ordinary commercial viewer on a personal computer (PC). While visual assessment is fundamental, various quantitative indices such as the tumor-to-muscle ratio have also been proposed. Several novel hypoxia tracers have been invented to compensate for the limitations of [18F]FMISO. abstract_id: PUBMED:12629355 Value of [11C]choline-positron emission tomography for re-staging prostate cancer: a comparison with [18F]fluorodeoxyglucose-positron emission tomography. Purpose: We compared [11C]choline-positron emission tomography (PET) with [18F]fluorodeoxyglucose-PET for re-staging prostate cancer in a group of 100 patients. Materials And Methods: A total of 100 consecutive patients referred for whole body [18F]fluorodeoxyglucose-PET for clinical prostate re-staging after radical treatment for prostate cancer were retrospectively included in the study. Mean prostate specific antigen (PSA) was 6.57 ng/ml. In all cases [11C]choline-PET was also performed. PET studies were done with a multiring device 5 minutes after intravenous injection of approximately 370 MBq of [11C]choline and 60 minutes after injection of approximately 370 MBq of [18F]fluorodeoxyglucose. PET findings were compared with those obtained with different conventional imaging and with PSA assessed at the time of PET and 1 year later. Results: Areas of abnormal focal increases were noted in 47% of patients on [11C]choline-PET and in 27% on [18F]fluorodeoxyglucose-PET. Of the 100 patients, 49 had positive conventional imaging findings. All except 14 [11C]choline-PET findings were concordant with conventional imaging, including 6 negative and 8 positive conventional imaging results. All except one [11C]choline-PET-negative case also had negative conventional imaging after 1 year. PSA at 1 year remained stable or decreased in 80% and 62% of [11C]choline-PET-negative and -positive cases, respectively. Conclusions: [11C]choline-PET seems to be useful for re-staging prostatectomy cases with increasing serum PSA levels. It is superior to [18F]fluorodeoxyglucose-PET and complementary to conventional imaging but with the advantage of staging disease at a single step.
When using a cut-off standardized uptake value (SUV) to differentiate malignant from benign disease, the sensitivity improved to 91.7%, but specificity dropped to 75%. The study concluded that 18FDG-PET imaging could not confirm or exclude malignancy in indeterminate pancreatic masses with high sensitivity and diagnostic accuracy. Additionally, 11C-acetate-PET provided no additional diagnostic benefits in this context (PUBMED:15544073). Therefore, based on the available evidence from this study, PET using 18F-fluorodeoxyglucose and 11C-acetate may not be the most valuable tools for diagnosing indeterminate pancreatic masses due to their limited sensitivity and specificity. Other diagnostic methods or additional imaging modalities may be necessary to accurately diagnose these types of pancreatic masses.
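To make the figures in the answer above easier to follow, here is a minimal worked example that recomputes the qualitative 18FDG-PET performance from the counts reported in PUBMED:15544073 (12 adenocarcinomas, 8 cases of chronic pancreatitis, 3 false negatives, 1 false positive). The arithmetic simply reproduces the reported 75% sensitivity, 88% specificity and 80% accuracy; the variable names are ours.

    # Counts taken from the abstract (PUBMED:15544073)
    n_malignant, n_benign = 12, 8
    false_negatives, false_positives = 3, 1

    true_positives = n_malignant - false_negatives   # 9
    true_negatives = n_benign - false_positives      # 7

    sensitivity = true_positives / n_malignant                               # 0.75
    specificity = true_negatives / n_benign                                  # 0.875, reported as 88%
    accuracy = (true_positives + true_negatives) / (n_malignant + n_benign)  # 0.80

    print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}, accuracy={accuracy:.0%}")

The separately quoted 91.7% sensitivity and 75% specificity refer to the quantitative reading at the SUV cut-off of 3.5 and are not derived from these counts.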
Instruction: Do patient surveys work? Abstracts: abstract_id: PUBMED:35673046 Proposed Patient-Inclusive Methodology for Developing and Validating Patient Experience Surveys. Patient experience while using telehealth services impacts patient care, and health system incentives and reimbursement. Therefore, the patient experience should be continuously improved, which requires evaluation. Surveying patients is integral to this process. Ideally, patient experience surveys used to evaluate telehealth services should cover patient experience sub-themes, be validated, and minimize bias. This paper evaluates how current validated patient experience surveys cover patient experience sub-themes, and how many are telehealth-specific. We collected 11 validated patient experience surveys. We then extracted five themes and 114 sub-themes of the patient experience. We evaluated survey questions against patient experience sub-themes. We found that current validated patient experience surveys cover, at most, 20.2% of patient experience sub-themes, with the most common sub-themes being "Psychosocial Needs" (81.8%) and "Information: Treatment" (72.7%). We cross-referenced validated patient experience surveys against validated telehealth-specific surveys. Only one validated patient experience survey (PEQ) was also telehealth-specific. abstract_id: PUBMED:10639936 Patient surveys. Clinicians continuously gather patient-related information. Histories and physicals, vital signs, lab results, and behavioral descriptions are all part of standard care. In recent years it has also proven advantageous to gather information regarding patient knowledge and satisfaction with the care they receive at our ambulatory and inpatient settings. This article reviews the process of conducting patient surveys throughout the continuum of care. Both the timing of the survey and the format are described in Table 1. abstract_id: PUBMED:25345554 The Net Promoter Score--an asset to patient experience surveys? Background: In the search for more straightforward ways of summarizing patient experiences and satisfaction, there is growing interest in the Net Promoter Score (NPS): How likely is it that you would recommend our company to a friend or colleague? Objective: To assess what the NPS adds to patient experience surveys. The NPS was tested against three other constructs already used in current surveys to summarize patient experiences and satisfaction: global ratings, recommendation questions and overall scores calculated from patient experiences. To establish whether the NPS is a valid measure for summarizing patient experiences, its association with these experiences should be assessed. Methods: Associations between the NPS and the three other constructs were assessed and their distributions were compared. Also, the association between the NPS and patient experiences was assessed. Data were used from patient surveys of inpatient hospital care (N = 6018) and outpatient hospital care (N = 10 902) in six Dutch hospitals. Results: Analyses showed that the NPS was moderately to strongly correlated with the other three constructs. However, their distributions proved distinctly different. Furthermore, the patient experiences from the surveys showed weaker associations with the NPS than with the global rating and the overall score. 
Conclusions: Because of the limited extent to which the NPS reflects the survey results, it seems less valid as a summary of patient experiences than a global rating, the existing recommendation question or an overall score calculated from patient experiences. In short, it is still unclear what the NPS specifically adds to patient experience surveys. abstract_id: PUBMED:10293191 Preventing pitfalls in patient surveys. Properly conceived, customer satisfaction surveys can yield the quantitative data needed to gauge patient satisfaction. But, as the author notes, these surveys can be "a veritable mine field of surprises for the uninitiated." This article, the last in a three-part series on measuring patient satisfaction, describes potential pitfalls and discusses the merits of in-person, mail and telephone surveys. abstract_id: PUBMED:10163133 Patient surveys and quality improvement. Patient satisfaction may be the ultimate measure of quality but it is only part of the quality measurement and improvement equation. Customers may be good at telling you what needs fixing but they cannot be expected to tell you how to fix the problem. It is important to let patient satisfaction measures guide the focus of your process improvement efforts, but these efforts must include the appropriate process measurement tools. A customer-focused survey will be valid only if it emphasizes key quality characteristics (KQCs), as defined by patients. You'll need to develop a sampling plan that will produce useable data. The design of the questionnaire and the reliability of the survey tool are critical to using patient surveys for quality improvement (QI). Surveys that don't meet standard criteria have a greater potential for error. Another challenge is the report format and analysis that are used to understand the patient responses. Having an external benchmark is critical to accurate measurement in patient surveys. Comparative data can also help prioritize QI projects. In summary, a well designed patient survey system can identify key quality characteristics, determine which KQCs have the most impact, help prioritize QI efforts, and measure patients' satisfaction in a reliable manner. abstract_id: PUBMED:24656761 Patient satisfaction surveys and quality of care: an information paper. With passage of the Patient Protection and Affordable Care Act of 2010, payment incentives were created to improve the "value" of health care delivery. Because physicians and physician practices aim to deliver care that is both clinically effective and patient centered, it is important to understand the association between the patient experience and quality health outcomes. Surveys have become a tool with which to quantify the consumer experience. In addition, results of these surveys are playing an increasingly important role in determining hospital payment. Given that the patient experience is being used as a surrogate marker for quality and value of health care delivery, we will review the patient experience-related pay-for-performance programs and effect on emergency medicine, discuss the literature describing the association between quality and the patient-reported experience, and discuss future opportunities for emergency medicine. abstract_id: PUBMED:34459443 Effect of an Intensive Mindful Practice Workshop on Patient-Centered Compassionate Care, Clinician Well-Being, Work Engagement, and Teamwork. 
Introduction: Mindfulness-based interventions for health professionals have been linked to improvements in burnout, well-being, empathy, communication, patient-centered care, and patient safety, but the optimal formats and intensity of training have been difficult to determine because of the paucity of studies and the heterogeneity of programs. A 4-day residential "Mindful Practice" workshop for physicians and medical educators featuring contemplative practices, personal narratives, and appreciative dialogs about challenging experiences may hold promise in improving participants' well-being while also improving compassionate care, job satisfaction, work engagement, and teamwork. Methods: We collected baseline and 2-month follow-up data during four workshops conducted in 2018 to 2019 at conference centers in the United States and Europe. Primary outcomes were burnout, work-related distress, job satisfaction, work engagement, patient-centered compassionate care, and teamwork. Results: Eighty-five of 120 participants (71%) completed both surveys (mean age 49.3 years; 68.2% female). There were improvements (P < .01) in two of three burnout components (emotional exhaustion and depersonalization), work-related distress, job satisfaction, patient-centered compassionate care, work engagement and meaning, teamwork, well-being, positive emotion, mindfulness, somatic symptoms, and spirituality. Effect sizes (standardized mean difference of change) ranged from 0.25 to 0.61. With Bonferroni adjustments (P < .0031), teamwork, general well-being, and mindfulness became nonsignificant. Discussion: An intensive, multiday, mindfulness-based workshop for physicians had clinically significant positive effects on clinician well-being, quality of interpersonal care and work satisfaction, and meaning and engagement, all important indicators of improved health and sustainability of the health care workforce. Future iterations of the program should increase the focus on teamwork. abstract_id: PUBMED:34560844 Linking doctor-patient relationship to medical residents' work engagement: The influences of role overload and conflict avoidance. Background: Chinese residents' practical work experiences are different from those described in Western studies. The aim was to explore potential mechanisms underlying the effects of doctor-patient relationships on medical residents' work engagement, verifying a posited mediating effect of role overload and a moderating effect of conflict avoidance in the Chinese context. Methods: Based on the conservation of resources theory, a composite model was constructed. This study's data were collected from four different Chinese tertiary hospitals; 195 residents undergoing regularization training took this survey. Hierarchical moderated and mediated regression analyses were utilized. Results: The doctor-patient relationship was found to be positively related to residents' work engagement (β=0.31, p≤0.001). Role overload partially mediated the effect of this relationship on work engagement, and the moderating effect of conflict avoidance on this relationship was negative. Conclusion: Maintaining good doctor-patient relationships can prompt residents to increase their engagement in work in order to meet their patients' needs. Furthermore, role overload has a particular influence in early career stages.
Not only is it necessary for residents to gain a sense of recognition and support while they carry out their job responsibilities, especially while dealing with complex doctor-patient relationship, but it is also important to create work environments that can help residents shape their professional competency. abstract_id: PUBMED:19064659 Do patient surveys work? The influence of a national survey programme on local quality-improvement initiatives. Objectives: To assess current attitudes towards the national patient survey programme in England, establish the extent to which survey results are used and identify barriers and incentives for using them. Design: Qualitative interviews with hospital staff responsible for implementing the patient surveys (survey leads). Setting: National Health Service (NHS) hospital organisations (trusts) in England. Participants: Twenty-four patient survey leads for NHS trusts. Results: Perceptions of the patient surveys were mainly positive and were reported to be improving. Interviewees welcomed the surveys' regular repetition and thought the questionnaires, survey methods and reporting of results, particularly inter-organisational benchmark charts, were of a good standard. The survey results were widely used in action planning and were thought to support organisational patient-centredness. There was variation in the extent to which trusts disseminated survey findings to patients, the public, staff and their board members. The most common barrier to using results was difficulty engaging clinicians because survey findings were not sufficiently specific to specialties, departments or wards. Limited statistical expertise and concerns that the surveys only covered a short time frame also contributed to some scepticism. Other perceived barriers included a lack of knowledge of effective interventions, and limited time and resources. Actual and potential incentives for using survey findings included giving the results higher weightings in the performance management system, financial targets, Payment by Results (PbR), Patient Choice, a patient-centred culture, leadership by senior members of the organisation, and boosting staff morale by disseminating positive survey findings. Conclusion: The national patient surveys are viewed positively, their repetition being an important factor in their success. The results could be used more effectively if they were more specific to smaller units. abstract_id: PUBMED:20654286 An evaluation of patient satisfaction surveys by heads of hospitals and departments Introduction: The point of departure is that patient satisfaction surveys should be designed and organized in a manner that furthers ownership and responsibility ensuring follow-up by those who are to employ the results. This study therefore evaluates the perceived usefulness of patient satisfaction surveys among heads of departments and heads of hospitals. Material And Methods: During the period from 1999 to 2006, the County of Aarhus performed four patient survey series. After each series, heads of department and heads of hospital were asked to fill out questionnaires to evaluate the entire system. A total of 200 questionnaires were sent and 173 were returned (86.5%) with 640 comments to the open-ended questions. Results: 82.5% of the leaders evaluated the concept as "Outstanding" or "Good". Leaders from teaching hospitals and heads of hospital were more satisfied. 
The highest scores were given for the option of having the results reported at ward level and/or diagnostic groups, and the option of collecting comments from patients. 82.4% of the leaders reported that the surveys had resulted in concrete follow-up activities. The main criticism of the concept was its lack of detail. Conclusion: Generic patient satisfaction surveys can gain acceptance from the involved leaders. In part, the acceptance of the users is based on their possibilities of making individual choices within the standardized questionnaires. It is recommended to involve users of patient satisfaction surveys in the design and evaluation of concepts in the future, including frontline staff. Answer: Patient surveys are a widely used tool to gauge patient satisfaction and experiences within healthcare settings. The literature suggests that patient surveys can be effective, but their success depends on various factors, including how they are conducted, the level of detail they provide, and how the results are utilized. Patient surveys are integral to evaluating patient care and improving health system incentives and reimbursement. They can cover a range of patient experience sub-themes, but current validated surveys may only cover a limited percentage of these sub-themes, with common ones being "Psychosocial Needs" and "Information: Treatment" (PUBMED:35673046). The Net Promoter Score (NPS), a simple metric used in patient surveys, has been found to be less valid as a summary of patient experiences compared to other constructs such as global ratings or overall scores calculated from patient experiences (PUBMED:25345554). The effectiveness of patient surveys can be influenced by the survey's design, the timing, the format, and how the results are reported and analyzed. Surveys that do not meet standard criteria are prone to error, and without external benchmarks, it can be challenging to measure patient satisfaction accurately (PUBMED:10163133). Moreover, the way survey findings are disseminated and the engagement of clinicians in using the results are crucial. Barriers to effective use of survey results include the lack of specificity to specialties or departments, limited statistical expertise, and concerns about the surveys covering only a short timeframe (PUBMED:19064659). Patient surveys have been positively viewed, and their repetition is seen as an important factor in their success. They are reported to support organizational patient-centeredness and can lead to action planning and quality improvement initiatives if used effectively (PUBMED:19064659). Furthermore, patient surveys can gain acceptance from healthcare leaders, especially when they allow for individual choices within standardized questionnaires and involve users in the design and evaluation process (PUBMED:20654286). In summary, patient surveys can work and provide valuable insights into patient experiences and satisfaction, but their effectiveness is contingent upon thoughtful design, careful implementation, and a commitment to using the results for tangible improvements in patient care.
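Since the Net Promoter Score comes up repeatedly in the abstracts and the answer above without its scoring rule ever being spelled out, the sketch below shows the standard consumer-research convention (0-10 recommendation ratings; promoters 9-10, detractors 0-6; NPS = % promoters minus % detractors). That convention is general background knowledge rather than something stated in the cited studies, and the code is only meant to make the construct concrete.

    def net_promoter_score(ratings):
        # Compute an NPS (range -100 to +100) from 0-10 "would you recommend" ratings,
        # using the conventional promoter/detractor bands (an assumption here, not
        # something defined in the cited abstracts).
        if not ratings:
            raise ValueError("no ratings supplied")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Example with ten hypothetical patient ratings
    print(net_promoter_score([10, 9, 9, 8, 7, 7, 6, 5, 9, 10]))  # 30.0

As PUBMED:25345554 notes, such a single summary number correlates only moderately with the underlying patient-experience items, which is why the authors question what it adds over a global rating or an overall experience score.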
Instruction: Donors yesterday and today: have the characteristics of liver donors changed over the last 15 years? Abstracts: abstract_id: PUBMED:18358179 Donors yesterday and today: have the characteristics of liver donors changed over the last 15 years? Introduction: The social, medical and demographic changes of our population and the increase in the number of patients on waiting lists have led to some changes in the selection criteria of organ donors. Material And Method: A retrospective, descriptive and comparative study of the features of cadaveric liver donors accepted in the Liver Transplant Unit of La Fe University Hospital of Valencia (Spain) in 2 different periods. We distributed the cases into 2 groups, including in group A the first 200 donors (from January 1991 to June 1995) and in group B the last 200 donors (from February 2004 to December 2005). Results: The number of donors increased from 18 during 1991 to 106 in 2006. In group A the mean age was 32.4 years, compared to 52.3 years in group B. In group A, the main cause of death was cranioencephalic trauma and in group B cerebrovascular accidents. The mean time in the intensive care unit was longer in the second group, at 67.2 hours. In group B, considerable atherosclerosis was reported in 17% of cadaveric donors and macroscopic liver steatosis in 29.5%, compared to 5% and 12%, respectively, in group A. Conclusions: Nowadays, cadaveric liver donors are older, suffer more chronic diseases, die due to cerebrovascular diseases, remain longer in intensive care units and their livers are macroscopically worse compared to donors accepted 15 years ago. abstract_id: PUBMED:28624109 The use of old donors in liver transplantation. The process of ageing has an impact on the entire human body, including the organ systems. In transplantation, professionals are daily faced with risk assessment of suitable donor offers, whether to accept a liver graft for a specific recipient. In this context, livers from elderly donors are more frequently accepted for transplantation, to increase the donor pool and compensate for the high waiting-list mortality. In current practice it is not unusual to accept 60-year-old donor livers for transplantation, as the donor demographics have significantly changed over the years. However, controversy exists regarding the use of livers from donors above 70 or 80 years, particularly in combination with other risk factors, e.g. liver steatosis, warm ischaemia or long cold storage. This review focuses first on the impact of ageing on liver morphology and function. Second, we will highlight outcome after transplantation from elderly donors. Finally, we describe further risk factors and donor-recipient selection under the scope of old donor organs and include our institutional experience and policy. abstract_id: PUBMED:15338118 Liver transplantation using organs from donors older than 60 years. At present, it is frequently accepted to expand the organ pool for liver transplantation (LTx) by including livers from critical donors. From 1990 to June 2002 a total of 1,208 LTx were performed. Of those, 67 livers from donors older than 60 years were transplanted to 66 patients, including re-LTx in eight patients. Fourteen patients had malignant diseases (21%). Ten patients had a high urgency status (15%). Median donor age was 65 years (range 61-80 years). Primary graft function was observed in 84%. Patient survival rate at 1 and 5 years was 79% and 62%, and graft survival was 68% and 53%, respectively.
No difference was observed compared with LTx using livers from donors younger than 60 years. Fifteen graft losses occurred during the study. Surgical complications were observed in 23 patients (34%). The outcome of LTx with livers from donors older than 60 years is satisfactory and is comparable to results of LTx with livers from donors younger than 60 years. The frequency of vascular complications and cholestasis syndrome is not increased. abstract_id: PUBMED:29118932 Results of liver transplantation with donors older than 80 years: a case control study. Aim: The inclusion of elderly donors can increase the pool of organs available for transplant. Background: To compare clinical outcomes and survival rates in patients who received livers from donors aged ≥ 80 years vs. younger donors. Methods: We considered all liver transplantations performed in our unit between January 2006 and January 2015. Twelve patients received a liver from a cadaveric donor aged ≥ 80 years (study group) and their outcomes were compared with those of patients who received a liver from a younger donor (control group). This study was carried out to analyze the characteristics of donors and recipients, as well as the clinical course and survival of recipients. Results: Statistically significant differences were observed in donors' age (55.6 ± 14.4 vs. 82.7 ± 2.7 years, p < 0.001), donors' ICU stay (p = 0.008), donors' ALT levels (p = 0.009) and donors' AST levels (p = 0.01). Statistically significant differences were found in ischemia time (p < 0.05). In total, 8.3% of recipients of a liver from a donor aged < 80 years required retransplantation vs. 25% of recipients from donors aged ≥ 80 years. Patient survival at one, three and five years was 89%, 78.6% and 74.5%, respectively, vs. 83.4%, 79.4% and 59.6% for the study group. Conclusion: Livers from older donors can be safely used for transplantation with acceptable patient survival rates. However, graft survival rates are lower for recipients of livers from older donors as compared to younger donors, and survival only increased with retransplantation. abstract_id: PUBMED:21486572 Liver transplantation with donors older than 75 years. Introduction: Orthotopic liver transplantation has shown successful results over recent years. For this reason there are increased numbers of patients on waiting lists. To expand the pool of liver donors, elderly donors have been used as a strategy. Objective: We report our experience comparing donors aged ≥ 75 years with younger donors in terms of their characteristics, clinical outcomes, and survival. Methods: From January 2001 to December 2009, we performed 174 consecutive liver transplantations from cadaveric donors in 166 patients. During this period, we used 24 liver grafts from donors ≥ 75 years. We analyzed their outcomes retrospectively, describing donor and recipient characteristics and their clinical evolution. Results: The mean follow-up time among the entire study population was 42 ± 39 months. We observed an overall survival of 68.3%, with similar incidences in both groups: 83% in the younger versus 78% in the older group at 1 year, and 69% versus 63% at 5 years, respectively. Both groups showed similar lengths of intensive care unit stay, cold and warm ischemia times, and intraoperative transfusion requirements. The older group had a longer total operative time and fewer hypotensive episodes than the younger group. There were no significant differences in the rates of rejection and retransplantation between the groups.
The use of older donor livers was associated with a significantly higher rate of poor initial graft function (P = .027) and an increased number of reinterventions (P = .013) in the older donor group, as well as more frequent vascular and biliary complications, without reaching significance. Conclusion: Our data suggested that donor age alone did not engender a survival disadvantage for graft or recipient. However, careful donor selection is needed to avoid additional risk factors that can increase the morbidity or mortality of the procedure. abstract_id: PUBMED:28520727 Predictors of decision ambivalence and the differences between actual living liver donors and potential living liver donors. Background: The decision to become a living liver donor is a stressful event. Ambivalence in decision making may result in psychological distress. Thus, the purpose of this study was to provide a description of the ambivalence of potential living liver donors, to examine the predictors of ambivalence, and to compare the ambivalence of potential living liver donors with that of actual living liver donors. Methods: This descriptive and correlational study was conducted in a medical center from August 2013 to December 2015. Self-reported questionnaires were used to collect data. A total of 263 potential living liver donors who were assessed for donation to their parents were included in this study. Results: The mean age of the total sample was 30.7 years (SD = 6.39, range = 20-47), and males comprised 53.6% of the sample. The majority of the potential donors had a college education (70.8%) and were single (63.5%). Of the total sample, the mean score for ambivalence was 4.27 (SD = 1.87, range = 0-7). Multivariate analysis revealed that the Mental Component Summary (MCS) of quality of life (β = -0.24, p < 0.01), family support (β = -0.17, p = 0.007), and intimacy (β = -0.13, p = 0.04) were significant protective predictors of ambivalence. Actual living liver donors had significantly lower ambivalence (3.82 versus 4.60), higher intimacy with recipients (3.55 versus 3.34), higher MCS (45.26 versus 42.80), and higher family support (34.39 versus 29.79) than did the remaining potential living liver donors. Conclusion: Ambivalence is common in potential living liver donors. The MCS of quality of life, family support, and intimacy were protective predictors in terms of ambivalence. Future research should explore other factors and design interventions targeted toward reducing ambivalence, promoting family support, and enhancing the mental dimensions of quality of life in potential living liver donors. abstract_id: PUBMED:20524455 Evaluation of donors for living donor liver transplantation (LDLT). Liver transplantation has been the last resort of definitive treatment for decompensated cirrhosis, early-stage hepatocellular carcinoma, and acute liver failure. Organ shortage is the major obstacle of deceased-donor liver transplantation. Since the first case of living-donor liver transplantation (LDLT), many centers around the world have started LDLT programs. Living donors should be informed about the possible risk of morbidity and mortality, and later give consent for liver donation without coercion. Donor selection and evaluation have become one of the important steps prior to LDLT, aiming to exclude donors who may have high risks from LDLT and to assure that LDLT recipients will receive perfect liver grafts.
In Thailand, living donors must have been blood relatives or be legally married to recipients for at least three years. Donor evaluation can be divided into three step-by-step phases. Psychological evaluation of living donors is also included in the pre-transplant assessment. abstract_id: PUBMED:16386560 Donors older than 70 years in liver transplantation. Introduction: Expansion of donor criteria has become necessary with the increasing number of liver transplantation candidates, such as the acceptance of aged donors, who have been considered to yield marginal organs. Methods: Our database of 477 liver transplants (OLT) included 55 cases performed from donors at least 70 years old vs 422 with younger donors. We analyzed pretransplantation donor and recipient characteristics as well as the evolution of the recipients. Results: The old donor group showed significantly lower ALT (23 ± 17 vs 48.9 ± 67; P = .0001) and LDH (444 ± 285 vs 570 ± 329; P = .01). There was a trend toward fewer hypotensive events in the aged donor group (27.2% vs 40.5%; P = .07). No steatosis (>10%) was accepted in the old donor group. Cold ischemia time was statistically shorter for the aged donors (297 ± 90 minutes vs 346 ± 139 minutes; P = .03). With these selected donors, the results were not different for primary nonfunction, arterial and biliary complications, hospitalization, acute reoperation or acute retransplantation, and hospital mortality when donors ≥70 years old were compared to younger donors. Functional cholestasis, neither related to rejection nor to biliary complications, was seen more frequently in old donor recipients (40% vs 22%; P = .03). No differences in 1- and 3-year survivals were observed between recipients of donors over 70 years old and those of younger organs: 93.8% and 90.6% vs 90.7% and 82.8%, respectively. Conclusion: When using selected donors ≥70 years old, the outcomes were comparable to those obtained with younger donors. Strict selection is necessary to achieve good long-term survival. abstract_id: PUBMED:35011956 An Update on Usage of High-Risk Donors in Liver Transplantation. The ideal management for end-stage liver disease, acute liver failure, and hepatocellular carcinoma (HCC), within specific criteria, is liver transplantation (LT). Over the years, there has been a steady increase in the candidates listed for LT, without a corresponding increase in the donor pool. Therefore, due to organ shortage, it has been substantially difficult to reduce waitlist mortality among patients awaiting LT. Thus, marginal donors such as elderly donors, steatotic donors, split livers, and donors after cardiac death (DCD), which were once not commonly used, are now considered. Furthermore, it is encouraging to see the passing of Acts, such as the HIV Organ Policy Equity (HOPE) Act, enabling further research and development in utilizing HIV grafts. Subsequently, the newer antivirals have aided in a successful post-transplant period, especially for hepatitis C-positive grafts. However, currently, there is no standardization, and protocols are center-specific in the usage of marginal donors. Therefore, studies with longer follow-up are required to standardize their use. abstract_id: PUBMED:30758494 Assessment of Trends in Transplantation of Liver Grafts From Older Donors and Outcomes in Recipients of Liver Grafts From Older Donors, 2003-2016.
Importance: In light of the growing population of older adults in the United States, older donors (aged ≥70 years) represent an expansion of the donor pool; however, their organs are underused. Liver grafts from older donors were historically associated with poor outcomes and higher discard rates, but clinical protocols, organ allocation, and the donor pool have changed in the past 15 years. Objective: To evaluate trends in demographics, discard rates, and outcomes among older liver donors and transplant recipients of livers from older donors in a large national cohort. Design, Setting, And Participants: Prospective cohort study of 4127 liver grafts from older donors and 3350 liver-only recipients of older donor grafts and 78 990 liver grafts from younger donors (aged 18-69 years) and 64 907 liver-only recipients of younger donor grafts between January 1, 2003, and December 31, 2016, in the United States. The Scientific Registry of Transplant Recipients, which includes data on all transplant recipients in the United States that are submitted by members of the Organ Procurement and Transplantation Network, was used. Exposures: Year of liver transplant and age of liver donor. Main Outcomes And Measures: Odds of graft discard and posttransplant outcomes of all-cause graft loss and mortality. Results: In this study, 4127 liver grafts from older donors were recovered for liver transplant across the study period (2003-2016); 747 liver grafts from older donors were discarded, and 3350 liver grafts from older donors were used for liver-only recipients. After adjusting for donor characteristics other than age and accounting for Organ Procurement Organization-level variation, liver grafts from older donors were more likely to be discarded compared with liver grafts from younger donors in 2003-2006 (adjusted odds ratio [aOR], 1.97; 95% CI, 1.68-2.31), 2007-2009 (aOR, 2.55; 95% CI, 2.17-3.01), 2010-2013 (aOR, 2.04; 95% CI, 1.68-2.46), and 2013-2016 (aOR, 2.37; 95% CI, 1.96-2.86) (P < .001 for all). Transplants of liver grafts from older donors represented a progressively lower proportion of all adult liver transplants, from 6.0% (n = 258 recipients) in 2003 to 3.2% (n = 211 recipients) in 2016 (P = .001). However, outcomes in recipients of grafts from older donors improved over time, with 40% lower graft loss risk (adjusted hazard ratio, 0.60; 95% CI, 0.53-0.68; P < .001) and 41% lower mortality risk (adjusted hazard ratio, 0.59; 95% CI, 0.52-0.68; P < .001) in 2010 through 2016 vs 2003 through 2009; these results were beyond the general temporal improvements in graft loss (interaction P = .03) and mortality risk (interaction P = .04) among recipients of liver grafts from younger donors. Conclusions And Relevance: These findings show that from 2003 to 2016, liver graft loss and mortality among recipients of liver grafts from older donors improved; however, liver graft discard from older donors remained increased and the number of transplants performed with liver grafts from older donors decreased. Expansion of the donor pool through broader use of liver grafts from older donors might be reasonable. Answer: Yes, the characteristics of liver donors have changed over the last 15 years. A study comparing cadaveric liver donors from two different periods at La Fe University Hospital of Valencia (Spain) found that more recent donors (group B: February 2004 to December 2005) were older, with a mean age of 52.3 years compared to 32.4 years in the earlier group (group A: January 1991 to June 1995).
The main cause of death shifted from cranioencephalic trauma in group A to cerebrovascular accidents in group B. The more recent group had longer stays in intensive care units and a higher incidence of atherosclerosis and macroscopic liver steatosis (17% and 29.5%, respectively) compared to the earlier group (5% and 12%, respectively) (PUBMED:18358179). The use of livers from elderly donors has become more common in transplantation to address the high waiting-list mortality and the shortage of available organs. It is not unusual to accept liver grafts from donors who are 60 years old, and there is ongoing debate about the use of livers from donors above 70 or 80 years old, especially when combined with other risk factors (PUBMED:28624109). Studies have shown that liver transplantation using organs from donors older than 60 years can yield satisfactory outcomes comparable to those with younger donors, without an increased frequency of vascular complications and cholestasis syndrome (PUBMED:15338118). Similarly, liver transplantation with donors older than 80 years has been reported to have acceptable patient survival rates, although graft survival rates are lower compared to younger donors (PUBMED:29118932). Moreover, liver transplantation with donors older than 75 years has been associated with a higher rate of poor initial graft function and an increased number of reinterventions, but donor age alone did not result in a survival disadvantage for the graft or recipient (PUBMED:21486572). Overall, the trend towards using older liver donors reflects the need to expand the donor pool and the changing demographics of the population, despite the associated challenges and the need for careful donor selection to minimize risks (PUBMED:35011956, PUBMED:30758494).
Instruction: Cigarettes and social differentiation in France: is tobacco use increasingly concentrated among the poor? Abstracts: abstract_id: PUBMED:19681803 Cigarettes and social differentiation in France: is tobacco use increasingly concentrated among the poor? Aims: This paper aimed to assess whether an increase in the social differentiation of smoking is observed in France. Design And Setting: Five cross-sectional telephone surveys conducted in France between 2000 and 2007. Participants: The surveys were conducted among national representative samples of French subjects aged 18-75 years (n=12 256, n=2906, n=27 499, n=2887, n=6007 in 2000, 2003, 2005, 2006 and 2007, respectively). We focused on three groups: executives, manual workers and the unemployed. Measurements: Time trends of smoking prevalence were assessed, and socio-economic factors (especially occupation and job status) associated with smoking were identified and compared in 2000 and 2005. We also computed respondents' equivalized household income (EHI) and their cigarette budget to assess the financial burden of smoking. Findings: Between 2000 and 2007, smoking prevalence decreased by 22% among executive managers and professionals and by 11% among manual workers, and did not decrease among the unemployed. Indicators of an underprivileged social situation were associated more markedly with smoking in 2005 than in 2000. In addition, the falling-off of smoking initiation occurred later and was less marked among manual workers than it was among executive managers and professionals. Finally, in 2005, 15% of French smokers devoted at least 20% of their EHI to the purchase of cigarettes, versus only 5% in 2000, and smoking weighed increasingly heavily on the poorest smokers' budgets. Conclusions: While these results point out an increased social differentiation in tobacco use, they underline the need to design and implement other forms of action to encourage people to quit, in particular targeting individuals belonging to underprivileged groups. abstract_id: PUBMED:33601094 E-cigarettes use prior to smoking combustible cigarettes among dual users: The roles of social anxiety and E-cigarette outcome expectancies. E-cigarette use is prevalent, and rates of use continue to increase. Although e-cigarettes are often used to help combustible users quit or reduce smoking, some use e-cigarettes in the absence of combustible cigarettes, increasing risk for smoking combustible cigarettes. Yet, little research has examined individual vulnerability factors implicated in transitioning from exclusive e-cigarette use to dual use of combustible cigarettes. Social anxiety may be one such factor, given that it is related to a variety of negative smoking-related outcomes. Thus, the current study tested whether social anxiety was related to using e-cigarettes before smoking combustibles among 226 current undergraduate dual users (use both e-cigarettes and combustible cigarettes). Most dual users reported initiating with e-cigarettes (67%). Those who initiated with e-cigarettes reported statistically significantly greater negative reinforcement (d = 0.59) and weight control expectancies (d = 0.37) and greater social anxiety (d = 0.37) than those who initiated with combustibles. Social anxiety was indirectly related to e-cigarette initiation via negative and weight control expectancies.
Findings add to a growing literature that dual users initiated with e-cigarettes and extend understanding of this phenomenon by identifying that socially anxious persons may be especially vulnerable to doing so, at least partially due to expectations regarding e-cigarette's ability to manage negative affect and/or weight. abstract_id: PUBMED:28393082 Electronic Cigarette Use among Current Smokers: A Pilot Qualitative Study. Objective: This pilot study explored psychosocial influences of e-cigarette use among dual users. Methods: Two focus groups among adult current smokers who had ever used e-cigarettes were conducted in Georgia. Discussions were audio-recorded. Principles of grounded theory and thematic analysis were employed. Results: Reasons for initial use included curiosity and social influence. Themes related to regular use included enjoyment of sensory experiences and perception of reduced harm. Nicotine craving, social image, and convenience were reasons for initial and regular dual use. Two patterns of use emerged - (1) using e-cigarettes to supplement combustible cigarettes; and (2) to replace combustible cigarettes. Conclusions: Reasons for dual use were related to nicotine dependence, social influence, product appeal, and perception of reduced harm. Understanding contextual nuances of dual use can inform policy and communication. abstract_id: PUBMED:25158012 Use and perception of electronic cigarettes among college students. Objective: This study provides insight into how electronic cigarettes (e-cigarettes) may affect the social normative environment for tobacco use among college students. Participants: Participants were 244 freshman and sophomore students. Methods: Students completed an online self-report survey in April 2011. Results: There is a higher acceptance rate of e-cigarette smoking in public than traditional tobacco. For intention to use an e-cigarette, the strongest predictor is current tobacco use, followed by a positive orientation toward public use of e-cigarettes. Positive orientation toward public use of e-cigarettes is significantly predicted by the use of alternate tobacco, intention to use or try e-cigarettes, positive orientation toward public use of tobacco, positive attitude toward e-cigarettes, positive perception of social norms for use of e-cigarettes, and favorable orientation toward e-cigarettes as an innovation. Conclusions: These models suggest attitudinal, social normative, innovation, and behavioral factors may combine to bring the e-cigarette into wider use among college students. abstract_id: PUBMED:29291507 E-cigarette use, dual use of e-cigarettes and tobacco cigarettes, and frequency of cannabis use among high school students. Background: The proliferation of electronic vaping products raises many concerns, including whether these products will lead to risky behaviors among adolescents. Evidence suggests that e-cigarettes may be used to vaporize cannabis (marijuana). The current study examined associations between e-cigarette use, dual use of e-cigarettes and tobacco cigarettes, and frequency of cannabis use. Methods: Data on high school students (grades 9-12) were from the 2014-2015 Canadian Student Tobacco, Alcohol and Drugs Survey (n=23,429 respondents). Multinomial logistic regression analyses were used to examine frequency of cannabis use among those who reported using only e-cigarettes, only tobacco cigarettes, both products, and nonusers. 
Results: Approximately 15% of respondents reported using cannabis at least once in the past month. Likelihood of using cannabis was significantly higher for e-cigarette only and cigarette- only users. Users of both products had increased odds for a higher frequency of cannabis use. These results indicated a sequential risk gradient in the use of cannabis for dual use, cigarette only, e-cigarette only, and non-use groups, respectively. Conclusions: Youth who reported use of e-cigarettes, tobacco cigarettes, and both products showed a heightened risk of using cannabis more frequently. abstract_id: PUBMED:35055504 Single, Dual, and Triple Use of Cigarettes, e-Cigarettes, and Snus among Adolescents in the Nordic Countries. New tobacco and nicotine products have emerged on the market in recent years. Most research has concerned only one product at a time, usually e-cigarettes, while little is known about the multiple use of tobacco and nicotine products among adolescents. We examined single, dual, and triple use of cigarettes, e-cigarettes, and snus among Nordic adolescents, using data of 15-16-year-olds (n = 16,125) from the European School Survey Project on Alcohol and other Drugs (ESPAD) collected in 2015 and 2019 from Denmark, Finland, Iceland, Norway, Sweden, and the Faroe Islands. Country-specific lifetime use of any of these products ranged between 40% and 50%, and current use between 17% and 31%. Cigarettes were the most common product in all countries except for Iceland, where e-cigarettes were remarkably more common. The proportion of dual and triple users was unexpectedly high among both experimental (24%-49%) and current users (31-42%). Triple use was less common than dual use. The users' patterns varied somewhat between the countries, and Iceland differed substantially from the other countries, with a high proportion of single e-cigarette users. More knowledge on the patterns of multiple use of tobacco and nicotine products and on the potential risk and protective factors is needed for targeted intervention and prevention efforts. abstract_id: PUBMED:36721858 Use of tobacco and e-cigarettes among youth in Great Britain in 2022: Analysis of a cross-sectional survey. Introduction: Although e-cigarettes can be an effective form of nicotine substitution for adults attempting to quit smoking, their use among children and young people is a concern. Accurate data about this are needed to inform debates over policy and regulation in the UK and elsewhere. Methods: Using data from an online survey of 2613 youth aged 11-18 years, conducted by the market research company YouGov in March 2022, we present prevalence estimates of e-cigarette and tobacco use. We use logistic regression models to assess differences in e-cigarette use, tobacco use and use of disposable e-cigarettes across a range of covariates including age, sex, tobacco smoking status, social class, and country. Results: Among the 18.0% of those surveyed who reported ever having smoked a cigarette, 83.9% were not regular (at least once per week) smokers and 16.1% were (15.1% and 2.9% of the total sample, respectively). Among the 19.2% of those surveyed who had ever used an e-cigarette, 79.2% were not regular users, while 20.8% were (15.2% and 4.0% of the total sample, respectively). Regular e-cigarette use was more common than regular tobacco smoking (4.0% vs 2.9%). 
E-cigarette use was more common among those who also smoked tobacco, with 9.0% of never e-cigarette users ever smoking tobacco, compared with 89.4% of regular e-cigarette users. Both smoking and e-cigarette use were associated with increasing age and use by others within the home, but not with social class. Use of disposable e-cigarettes was reported by 53.8% of those who have ever used an e-cigarette, and more common among females than males. Conclusions: Regular e-cigarette use is now more common than smoking in children and youth, though the majority of this is among those who have also smoked tobacco. Measures to reduce the appeal of both e-cigarettes and tobacco to children and young people are warranted. abstract_id: PUBMED:36467719 Dual Use of Combustible Cigarettes and E-cigarettes: a Narrative Review of Current Evidence. Purpose Of Review: E-cigarette use (vaping) among adults has increased, and various patterns of concurrent smoking and vaping (i.e., "dual use") have emerged. Understanding dual use is important for mitigating tobacco-related harm. This narrative review summarizes recent research on dual use, including prevalence and types of dual use, sociodemographic and psychiatric characteristics, nicotine dependence, reasons for dual use, harm perceptions, toxicant exposure, trajectories of dual use, and emerging treatments for dual use. Recent Findings: Nearly half of e-cigarette users concurrently use cigarettes, and many smoke more frequently than they use e-cigarettes. This is concerning because dual users are exposed to both cigarette and potential e-cigarette toxicants and the data are mixed regarding the ability of e-cigarettes to promote smoking cessation. Summary: Further work is needed to identify ways to increase complete smoking abstinence and optimize harm reduction among dual users, including strategies to encourage e-cigarette cessation after stopping smoking. abstract_id: PUBMED:36407975 Social environment exposure to electronic cigarettes and its association with e-cigarette use among adolescents in Shanghai, China. Objective: This study investigated adolescents' social-environmental exposure to e-cigarettes in association with e-cigarette use in Shanghai, China. We also explored these differences by gender and school type. Methods: Sixteen thousand one hundred twenty-three students were included by a stratified random cluster sampling, and the number was weighted according to selection probability. Association between social environment exposure and e-cigarette use was examined by multivariate logistic regressions. Results: There were 35.07, 63.49, 75.19, 9.44, and 18.99% students exposed to secondhand e-cigarette aerosol (SHA), e-cigarette sales, e-cigarette information, parents' and friends' e-cigarette use. Students exposed to SHA (aOR = 1.73, 95% CI 1.40-2.14), e-cigarette sales from ≥2 sources (aOR = 1.55, 95% CI 1.18-2.03), e-cigarette information exposure from ≥2 sources (aOR = 1.39, 95% CI 1.05-1.83), and having a social e-smoking environment (friends' e-cigarette use: aOR = 2.56, 95% CI 2.07-3.16; parents' e-cigarette use: aOR = 1.54, 95% CI 1.17-2.02) were significantly associated with their intention to use e-cigarettes. More girls were exposed to e-cigarette sales in the malls, e-cigarette information at points of sale and on social media (P < 0.01), and exposure to sales from ≥2 sources were associated with girls' intention to use e-cigarettes (aOR = 1.84, 95% CI 1.22-2.78).
However, boys were more likely to be exposed to friends' e-cigarette use (P < 0.001), and having friends using e-cigarettes was associated with greater intention to use them in boys (aOR = 2.64, 95% CI 1.97-3.55). Less vocational high school students were exposed to parents' e-cigarette use (P < 0.001), but they were more likely to use e-cigarettes in the future after being exposed (aOR = 2.27, 95% CI 1.50-3.43). A similar phenomenon was observed between junior high students and their exposure to SHA. Conclusions: This study reported adolescents' high exposure rates to the social environment of e-cigarettes. Exposure to SHA, e-cigarette sales from ≥2 sources, e-cigarette information from ≥2 sources and having a social e-smoking environment were related to adolescents' intention to use e-cigarettes. Differences in gender and school type were observed. More attention should be paid to girls, and different interventions should be designed for different types of school students. Additionally, comprehensive tobacco control policies are needed. abstract_id: PUBMED:30703668 Reasons to use e-cigarettes among adults and youth in the Population Assessment of Tobacco and Health (PATH) study. Background: While e-cigarette use is increasing, reasons to use e-cigarettes are poorly summarized in the literature. The objective of this study was to organize reasons to use e-cigarette items into factors and determine associations between these factors and e-cigarette user characteristics. Methods: Data were drawn from youth (12-17) and adults (18+) in Wave 1 (2013-2014) of the Population Assessment of Tobacco and Health (PATH) study. Exploratory factor analysis was conducted on 13 reasons to use survey items from experimental and established adult e-cigarette users and past 30 day youth e-cigarette users to determine two factors - "alternative to cigarettes" and "larger social environment". Weighted linear regression models tested the associations between e-cigarette user group and sociodemographics and reasons to use factors among youth and adults. Results: Adult current established e-cigarette users were associated with both alternative to cigarettes (β = 0.128, p < .001) and larger social environment (β = 0.063, p < .001) factors, while former established e-cigarette users were associated with alternative to cigarettes (β = 0.064, p < .001). Several adult sociodemographic characteristics were associated with one but not the other factor, or both but in opposite directions. Youth that used e-cigarettes earlier today were also associated with both reasons to use factors (β = 0.127-0.130, p < .01, each); however, youth using any other day in the past 30 days was not associated with either factor. Conclusions: Reasons to use are associated with patterns of e-cigarette use among youth and adults. These factors could support a comprehensive approach to addressing rising e-cigarette use among youth and adults and target certain user populations. Answer: The available abstract (PUBMED:19681803) indicates that in France, there is an increasing social differentiation in tobacco use. Between 2000 and 2007, smoking prevalence decreased by 22% among executive managers and professionals and by 11% among manual workers, but did not decrease among the unemployed. Indicators of an underprivileged social situation were more strongly associated with smoking in 2005 than in 2000.
Additionally, the decrease in smoking initiation occurred later and was less marked among manual workers compared to executive managers and professionals. Furthermore, in 2005, 15% of French smokers spent at least 20% of their equivalized household consumption (EHI) on cigarettes, a significant increase from 5% in 2000, indicating that smoking is weighing more heavily on the budgets of the poorest smokers. These findings suggest that tobacco use is indeed becoming increasingly concentrated among the poor in France, and there is a need for targeted actions to encourage quitting, especially among underprivileged groups.
Instruction: Does preoperative narcotic use adversely affect outcomes and complications after spinal deformity surgery? Abstracts: abstract_id: PUBMED:24704676 Does preoperative narcotic use adversely affect outcomes and complications after spinal deformity surgery? A comparison of nonnarcotic- with narcotic-using groups. Background Context: The role of preoperative (preop) narcotic use and its influence on outcomes after spinal deformity surgery are unknown. It is important to determine which patient factors and comorbidities can affect the success of spinal deformity surgery, a challenging surgery with high rates of complications at baseline. Purpose: To evaluate if preop narcotic use persists after spinal deformity surgery and whether the outcomes are adversely affected by preop narcotic use. Study Design/setting: Retrospective evaluation of prospectively collected data. Patient Sample: Two hundred fifty-three adult patients (230 females/23 males) undergoing primary spinal deformity surgery were enrolled from 2000 to 2009. Outcome Measures: Preoperative and postoperative (postop) narcotic use and changes in Oswestry Disability Index (ODI), Scoliosis Research Society (SRS) pain, and SRS total scores. Methods: Preoperative, 2-year postop, and latest follow-up pain medication use were collected along with ODI, SRS pain, and SRS scores. Preoperative insurance status, surgical and hospitalization demographics, and complications were collected. All patients had a minimum 2-year follow-up (average 47.4 months). Results: One hundred sixty-eight nonnarcotic (NoNarc) patients were taking no pain meds or only nonsteroidal anti-inflammatories preoperatively. Eighty-five patients were taking mild/moderate/heavy narcotics before surgery. The average age was 48.2 years for the NoNarc group versus 53.6 years for the Narc group (p<.005). There were significantly more patients with degenerative than adult scoliosis in the Narc group (47 vs. 28, p<.001; mild 19 vs. 24, p<.02; moderate 6 vs. 14, p<.0003; heavy 3 vs. 10, p<.0002). Insurance status (private/Medicare/Medicaid) was similar between the groups (p=.39). At latest follow-up, 137/156 (88%) prior NoNarc patients were still not taking narcotics whereas 48/79 (61%) prior narcotic patients were now off narcotics (p<.001). Significant postop improvements were seen in Narc versus NoNarc groups with regard to ODI (26-15 vs. 44-30.3, p<.001), SRS pain (3.36-3.9 vs. 2.3-3.38, p<.001), and overall SRS outcome (3.36-4 vs. 2.78-3.68, p<.001) scores. A comparison of change in outcome scores between the two groups showed a higher improvement in SRS pain scores for the Narc versus NoNarc group (p<.001). Conclusions: In adults with degenerative scoliosis taking narcotics a significant decrease in pain medication use was noted after surgery. All outcome scores significantly improved postop in both groups. However, the Narc group had significantly greater improvements in SRS pain scores versus the NoNarc group. abstract_id: PUBMED:38055123 Preoperative nutritional optimization for adult spinal deformity: Review. Purpose: The main objective of this review article is to examine the role that nutrition has on adult spinal deformity. The information presented in this review aims to provide spine surgeons with a broad overview of screening, assessment, and interventional strategies that may be used for presurgical nutritional optimization.
Methods: A comprehensive literature review utilizing three biomedical databases was performed to generate articles of interest. Published articles related to nutrition, adult spinal deformity, spine surgery and orthopaedics were reviewed for the composition of this article. Nutrition may play a role in optimizing postoperative outcomes following adult spinal deformity surgeries, such as limiting delirium, length of stay, blood transfusion, and other medical complications. The use of screening tools, such as the PNI and CONUT score can assess preoperative nutritional status and may provide some utility in evaluating nutrition status in patients undergoing deformity surgery. Balancing both macronutrients and micronutrients, notably, carbohydrates, protein, albumin, and vitamin D can play a role in preoperative optimization. Conclusion: Adult spinal deformity patients are at an increased risk for malnutrition. These patients should be assessed for nutrition status to prime them for surgery, minimize complications, and maximize their outcomes. However, further studies are needed to determine how nutrition ultimately affects adult spinal deformity patients in the postoperative period and to establish specific nutritional recommendations for this unique population. abstract_id: PUBMED:35386240 Complications of adult spinal deformity surgery: A literature review. Purpose: Adult spinal deformity incidence increases accordingly as the population ages. Even though surgery is the best option for the treatment, the complications due to surgery are pretty challenging. This study aims to review the complication rates of adult spinal deformity surgery. Methods: A literature review of the last decade was performed searching for the query "Adult spine deformity and complication." This search yielded 2781 results, where 79 articles were chosen to investigate the complications of adult spinal deformity surgery. In addition, the demographic data, surgical interventions, and complications were extracted from the publications. Results: A total of 26,207 patients were analyzed, and 9138 complications were found (34.5%). Implant failure, including screw loosening, breakage, distal and proximal junctional kyphosis, were the most common complications. The neurologic complications were about 10.8%, and the infection rate was 3.6%. Cardiac and pulmonary complications were about 4.8%. Discussion: Age, body mass index, smoking, osteoporosis, and other comorbidities are the significant risk factors affecting adult spinal deformity surgery. Presurgical planning and preoperative risk factor assessment must be done to avoid complications. Furthermore, intra and postoperative complications affect the patients' quality of life and length of stay, and hospital readmissions. Revision surgery also increases the risk of complications. Conclusion: Good patient evaluation before surgery and careful planning of the surgery are essential in avoiding complications of adult spinal deformity. abstract_id: PUBMED:28811987 Early Complications and Outcomes in Adult Spinal Deformity Surgery: An NSQIP Study Based on 5803 Patients. Study Design: Retrospective analysis. Objective: The purpose of this study is to determine the incidence, impact, and risk factors for short-term postoperative complications following elective adult spinal deformity (ASD) surgery. 
Methods: Current Procedural Terminology codes were used to query the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) for adults who underwent spinal deformity surgery from 2010 to 2014. Patients were separated into groups of those with and without complications. Univariate analysis and multivariate logistic regression were used to assess the impact of patient characteristics and operative features on postoperative outcomes. Results: In total, 5803 patients were identified as having undergone ASD surgery in the NSQIP database. The average patient age was 59.5 (±13.5) years, 59.0% were female, and 81.1% were of Caucasian race. The mean body mass index was 29.5 (±6.6), with 41.9% of patients having a body mass index of 30 or higher. The most common comorbidities were hypertension requiring medication (54.5%), chronic obstructive pulmonary disease (4.9%), and bleeding disorders (1.2%). Nearly a half of the ASD patients had an operative time >4 hours. The posterior fusion approach was more common (56.9%) than an anterior one (39.6%). The mean total relative value unit was 73.4 (±28.8). Based on multivariate analyses, several patient and operative characteristics were found to be predictive of morbidity. Conclusion: Surgical correction of ASD is associated with substantial risk of intraoperative and postoperative complications. Preoperative and intraoperative variables were associated with increased morbidity and mortality. This data may assist in developing future quality improvement activities and saving costs through measurable improvement in patient safety. abstract_id: PUBMED:27278531 Impact of spine surgery complications on costs associated with management of adult spinal deformity. A better understanding of the consequences of spine surgery complications is warranted to optimize patient-reported outcomes and contain the rising health care costs associated with the management of adult spinal deformity (ASD). We systematically searched PubMed and Scopus databases using keywords "adult spinal deformity surgery," "complications," and "cost" for published studies on costs of complications associated with spinal surgery, with a particular emphasis on ASD and scoliosis. In the 17 articles reviewed, we identified 355,354 patients with 11,148 reported complications. Infection was the most commonly reported complication, with an average treatment cost ranging from $15,817 to $38,701. Hospital costs for patients with deep venous thrombosis, pulmonary thromboembolism, and surgical site infection were 2.3 to 3.1 times greater than for patients without those complications. An effort to collect and characterize data on cost of complications is encouraged, which may help health care providers to identify potential resources to limit complications and overall costs. abstract_id: PUBMED:32730730 Classifying Complications: Assessing Adult Spinal Deformity 2-Year Surgical Outcomes. Study Design: Retrospective review of prospective database. Objective: Complication rates for adult spinal deformity (ASD) surgery vary widely because there is no accepted system for categorization. Our objective was to identify the impact of complication occurrence, minor-major complication, and Clavien-Dindo complication classification (Cc) on clinical variables and patient-reported outcomes. Methods: Complications in surgical ASD patients with complete baseline and 2-year data were considered intraoperatively, perioperatively (<6 weeks), and postoperatively (>6 weeks).
Primary outcome measures were complication timing and severity according to 3 scales: complication presence (yes/no), minor-major, and Cc score. Secondary outcomes were surgical outcomes (estimated blood loss [EBL], length of stay [LOS], reoperation) and health-related quality of life (HRQL) scores. Univariate analyses determined complication presence, type, and Cc grade impact on operative variables and on HRQL scores. Results: Of 167 patients, 30.5% (n = 51) had intraoperative, 48.5% (n = 81) had perioperative, and 58.7% (n = 98) had postoperative complications. Major intraoperative complications were associated with increased EBL (P < .001) and LOS (P = .0092). Postoperative complication presence and major postoperative complication were associated with reoperation (P < .001). At 2 years, major perioperative complications were associated with worse ODI, SF-36, and SRS activity and appearance scores (P < .02). Increasing perioperative Cc score and postoperative complication presence were the best predictors of worse HRQL outcomes (P < .05). Conclusion: The Cc Scale was most useful in predicting changes in patient outcomes; at 2 years, patients with raised perioperative Cc scores and postoperative complications saw reduced HRQL improvement. Intraoperative and perioperative complications were associated with worse short-term surgical and inpatient outcomes. abstract_id: PUBMED:30067126 Perioperative Narcotic Use and Carpal Tunnel Release: Trends, Risk Factors, and Complications. Background: The goals of the study were to: (1) evaluate trends in preoperative and prolonged postoperative narcotic use in carpal tunnel release (CTR); (2) characterize risks for prolonged narcotic use; and (3) evaluate narcotic use as an independent risk factor for complications following CTR. Methods: A query of a large insurance database from 2007-2016 was conducted. Patients undergoing open or endoscopic CTR were included. Revision surgeries or patients undergoing median nerve repair at the forearm, upper extremity fasciotomies, or with distal radius fractures were excluded. Preoperative use was defined as narcotic use between 1 to 4 months prior to CTR. A narcotic prescription between 1 and 4 months after surgery was considered prolonged postoperative use. Demographics, comorbidities, and other risk factors for prolonged postoperative use were assessed using a regression analysis. Subgroup analysis was performed according to the number of preoperative narcotic prescriptions. Narcotic use as a risk factor for complications, including chronic regional pain syndrome (CRPS) and revision CTR, was assessed. Results: In total, 66 077 patients were included. A decrease in prescribing of perioperative narcotics was noted. Risk factors for prolonged narcotic use included preoperative narcotic use, drug and substance use, lumbago, and depression. Preoperative narcotics were associated with increased emergency room visits, readmissions, CRPS, and infection. Prolonged postoperative narcotic use was linked to CRPS and revision surgery. Conclusions: Preoperative narcotic use is strongly associated with prolonged postoperative use. Both preoperative and prolonged postoperative prescriptions narcotic use correlated with increased risk of complications. Preoperative narcotic use is associated with a higher risk of postoperative CRPS. abstract_id: PUBMED:27250041 Neurological complications in adult spinal deformity surgery.
The number of surgeries performed for adult spinal deformity (ASD) has been increasing due to an aging population, longer life expectancy, and studies supporting an improvement in health-related quality of life scores after operative intervention. However, medical and surgical complication rates remain high, and neurological complications such as spinal cord injury and motor deficits can be especially debilitating to patients. Several independent factors potentially influence the likelihood of neurological complications including surgical approach (anterior, lateral, or posterior), use of osteotomies, thoracic hyperkyphosis, spinal region, patient characteristics, and revision surgery status. The majority of ASD surgeries are performed by a posterior approach to the thoracic and/or lumbar spine, but anterior and lateral approaches are commonly performed and are associated with unique neural complications such as femoral nerve palsy and lumbar plexus injuries. Spinal morphology, such as that of hyperkyphosis, has been reported to be a risk factor for complications in addition to three-column osteotomies, which are often utilized to correct large deformities. Additionally, revision surgeries are common in ASD and these patients are at an increased risk of procedure-related complications and nervous system injury. Patient selection, surgical technique, and use of intraoperative neuromonitoring may reduce the incidence of complications and optimize outcomes. abstract_id: PUBMED:35750987 Preoperative duration of pain is associated with chronic opioid use after adult spinal deformity surgery. Purpose: Few studies have explored the association between preoperative patient-reported measures and chronic opioid use following adult spinal deformity (ASD) surgery. We sought to explore the association between preoperative duration of pain, as well as other patient-reported factors, and chronic opioid use after ASD surgery. Methods: We retrospectively reviewed our U.S. academic tertiary care hospital's database of ASD patients. We included patients 18 years or older who underwent arthrodesis of four or more spinal levels from January 2008 to February 2018, with 2-year follow-up. The primary outcome variable was chronic opioid use, defined as opioid use at both 1 and 2 years postoperatively. We analyzed patient characteristics; duration of preoperative pain (<4 years or ≥4 years); radiculopathy; preoperative Scoliosis Research Society-22r (SRS-22r) score; Oswestry Disability Index (ODI) value; and surgical characteristics. Results: Of 119 patients who met the inclusion criteria, 93 (78%) were women, and mean ± standard deviation age was 59 ± 13. Sixty patients (50%) reported preoperative opioid use, and 35 (29%) reported chronic opioid use. Preoperative opioid use was associated with higher odds of chronic use (adjusted odds ratio, 5.9; 95% confidence interval 1.6-21), as was preoperative pain duration of ≥4 years (adjusted odds ratio, 3.3; 95% confidence interval 1.1-9.8). Patient characteristics, surgical variables, ODI value, and SRS-22r score were not significantly associated with chronic postoperative opioid use. Conclusion: Preoperative opioid use and duration of pain of ≥4 years were associated with higher odds of chronic opioid use after ASD surgery. Level Of Evidence: III. abstract_id: PUBMED:27927396 Weight Change and Clinical Outcomes Following Adult Spinal Deformity Surgery in Overweight and Obese Patients. Study Design: Retrospective comparative study.
Summary Of Background Data: The effect of spine surgery on postoperative weight loss or weight gain in overweight and obese spinal deformity patients is unknown. Objective: To evaluate the postoperative outcomes and weight changes in adult patients undergoing spinal deformity surgery. Materials And Methods: A total of 104 adult patients undergoing primary spinal deformity surgery were enrolled. All patients had a minimum 2-year follow-up (average, 50.1 months). Preoperative and latest follow-up, body mass index (BMI), Oswestry Disability Index, Scoliosis Research Society (SRS)-22 self-image and SRS outcomes scores were collected. Instrumented levels, estimated blood loss, major and minor complications, length of hospitalization, and hospital discharge status were also reviewed and compared. Results: A total of 66 patients were overweight (BMI > 25-29.9; average, 26.9) (Group OW), whereas 38 patients were obese (BMI > 30; average, 33.5) (Group OB). The average age was 54.5 in Group OW and 48.6 in Group OB (p < .01). Postoperatively, significant changes were not found in the BMI for Group OW, 27.2 (26.9-27.2; p < .39), and for Group OB, 35 (33.5-35; p < .06). Postoperatively, significant improvements were seen in both groups for Oswestry Disability Index (36.1-21.8, Group OW; 44.1-24.4, Group OB; p < .001), SRS self-image (2.9-3.7, Group OW; 2.6-3.8, Group OB; p < .001) and SRS score (3.1-3.8, Group OW; 2.9-3.8, Group OB; p < .001). There were no significant differences in complications between groups. Conclusions: As a group, overweight and obese primary spinal deformity patients did not demonstrate significant weight gain or weight loss from preoperative to latest follow-up. However, both overweight and obese patients had significant improvements in outcome scores at latest follow-up and equivalent rates of complications. Answer: Preoperative narcotic use does not necessarily adversely affect outcomes after spinal deformity surgery. A study comparing nonnarcotic-using groups with narcotic-using groups found that although patients in the narcotic-using group were older and had more degenerative scoliosis, all outcome scores significantly improved postoperatively in both groups. Interestingly, the narcotic-using group had significantly greater improvements in Scoliosis Research Society (SRS) pain scores compared to the nonnarcotic-using group. Additionally, a significant decrease in pain medication use was noted after surgery in adults with degenerative scoliosis taking narcotics (PUBMED:24704676). However, it is important to note that preoperative narcotic use is associated with higher odds of chronic opioid use after adult spinal deformity surgery. Specifically, preoperative opioid use and a preoperative pain duration of 4 years or more were associated with higher odds of chronic opioid use postoperatively (PUBMED:35750987). This suggests that while preoperative narcotic use does not necessarily worsen surgical outcomes, it may be linked to continued reliance on narcotics after surgery. Moreover, the broader context of patient health and surgical risk factors should be considered. For example, complications of adult spinal deformity surgery are influenced by factors such as age, body mass index, smoking, osteoporosis, and other comorbidities (PUBMED:35386240). Preoperative nutritional optimization can also play a role in optimizing postoperative outcomes and minimizing complications (PUBMED:38055123).
Additionally, perioperative narcotic use has been associated with increased risk of complications such as chronic regional pain syndrome (CRPS) and revision surgery (PUBMED:30067126). In summary, while preoperative narcotic use does not necessarily lead to worse surgical outcomes and may even be associated with greater improvements in pain scores, it is linked to a higher likelihood of chronic opioid use post-surgery. It is also one of several factors that should be considered in the context of a patient's overall health and risk profile when planning for spinal deformity surgery.
Instruction: Support for tobacco control interventions: do country of origin and socioeconomic status make a difference? Abstracts: abstract_id: PUBMED:22714136 Support for tobacco control interventions: do country of origin and socioeconomic status make a difference? Objective: To examine the attitudes to various tobacco control regulations among smokers from four different countries and explore differences by country and socioeconomic status. Methods: Questions relating to tobacco regulation were asked of adult smokers from the 2007-2008 International Tobacco Control Four Country Survey (ITC4). Measures included attitudes to tobacco industry and product regulation, and measures of socioeconomic status and economic disadvantage. Results: Overall smokers supported greater regulation of the tobacco industry with least supportive US smokers and most supportive Australian smokers. Reporting smoking-related deprivation and a lower income was independently associated with increased support for regulation of the tobacco industry (both p≤0.01). Conclusions: Policy-makers interested in doing more to control tobacco should be reassured that, for the most part, they have the support of smokers, with greatest support in countries with the strongest regulations. Smokers economically disadvantaged by smoking were more supportive of government policies to regulate the tobacco industry suggesting that reactance against regulation is not likely to differentially contribute to lower cessation rates in this group. abstract_id: PUBMED:22092035 Socioeconomic status and smoking: a review. Smoking prevalence is higher among disadvantaged groups, and disadvantaged smokers may face higher exposure to tobacco's harms. Uptake may also be higher among those with low socioeconomic status (SES), and quit attempts are less likely to be successful. Studies have suggested that this may be the result of reduced social support for quitting, low motivation to quit, stronger addiction to tobacco, increased likelihood of not completing courses of pharmacotherapy or behavioral support sessions, psychological differences such as lack of self-efficacy, and tobacco industry marketing. Evidence of interventions that work among lower socioeconomic groups is sparse. Raising the price of tobacco products appears to be the tobacco control intervention with the most potential to reduce health inequalities from tobacco. Targeted cessation programs and mass media interventions can also contribute to reducing inequalities. To tackle the high prevalence of smoking among disadvantaged groups, a combination of tobacco control measures is required, and these should be delivered in conjunction with wider attempts to address inequalities in health. abstract_id: PUBMED:29190143 Perceptions of Tobacco Control Media Campaigns Among Smokers With Lower Socioeconomic Status. People with low socioeconomic status (SES) in the United States have disparately high rates of smoking and experience disproportionately high burdens of smoking-related disease. Tobacco control media campaigns are a critical strategy for reducing tobacco use prevalence, but evidence is mixed about the optimal use of mass media to reach and promote tobacco use cessation among people with low SES. Improved understanding of the factors influencing how low-SES tobacco users evaluate tobacco control media campaigns may inform development of more effective messages and strategies. 
Focus groups with primarily low-SES smokers in Connecticut were conducted, finding that participants had seen many tobacco control television ads that used graphic imagery and testimonials, but participants voiced two main themes that limited ad effectiveness: (1) skepticism about the content of ads, the role of the tobacco industry and the government in sponsoring the ads, and the safety and efficacy of cessation supports; and (2) barriers to quitting such as stress, social contexts, and addiction that participants perceived as being underappreciated in the context of the ads. Tobacco control media campaigns targeting low-SES tobacco users may need additional messages, tools, or refinements to more optimally motivate this group to make quit attempts. abstract_id: PUBMED:31058009 Socioeconomic Correlates and Key Aspects of Tobacco Surveillance Using Global Adult Tobacco Survey Among College Students of Mangaluru, South India. Introduction Tobacco use in recent times has been identified to be the single biggest cause of morbidity and mortality. The epidemic of tobacco use has increased among young adults, which has changed the equation of the prevalence. The contribution of tobacco use to socioeconomic inequalities in health is increasing in India. Adolescent's tobacco use may play an important role in increasing social inequalities related to smoking and smokeless tobacco use. The objective of this research was to study the association between socioeconomic status and tobacco use among college students of Mangaluru, South India Methods To analyze the association between the socioeconomic status and tobacco use, the study was conducted among 18 to 24-year-old college students (n = 802) in different colleges of Mangaluru, South India. A subset of key questions from the Global Adult Tobacco Survey (GATS) was used. The socioeconomic status of the participants was recorded using Kuppuswamy socioeconomic scale (for India) to categorize them into upper class and lower class. Descriptive statistics were applied to assess the factors related to tobacco use and socioeconomic status using SPSS ver. 24.0. Results Approximately 29.7% males and 70.3% of females completed the interview. Among 802 subjects, 69.9% belonged to the upper class and 30.04% belonged to the lower class. The current smokers who smoked daily 1.7% were from the upper class and 1.7% were from the lower class; no statistically significant difference was observed as well (p = 0.97). Approximately 3.4% from the upper class smoked less than daily and 3.1% from lower class smoked less than daily (NS). Among the upper class, 1.8% used daily and 2% subjects from the lower class used smokeless tobacco. A statistically significant difference was observed with subjects between the upper and lower class in noticing cigarette promotions in various forms during the last 30 days of interview. Conclusion Socioeconomic disparities on tobacco use need to be explored to ensure the initiation of new tobacco control activities and monitor the existing tobacco control policies. The current study finding demonstrates a significant but varied role of socioeconomic status on current and past tobacco use. abstract_id: PUBMED:24046211 Impact of tobacco control interventions on socioeconomic inequalities in smoking: review of the evidence. Objective: We updated and expanded a previous systematic literature review examining the impact of tobacco control interventions on socioeconomic inequalities in smoking. 
Methods: We searched the academic literature for reviews and primary research articles published between January 2006 and November 2010 that examined the socioeconomic impact of six tobacco control interventions in adults: that is, price increases, smoke-free policies, advertising bans, mass media campaigns, warning labels, smoking cessation support and community-based programmes combining several interventions. We included English-language articles from countries at an advanced stage of the tobacco epidemic that examined the differential impact of tobacco control interventions by socioeconomic status or the effectiveness of interventions among disadvantaged socioeconomic groups. All articles were appraised by two authors and details recorded using a standardised approach. Data from 77 primary studies and seven reviews were synthesised via narrative review. Results: We found strong evidence that increases in tobacco price have a pro-equity effect on socioeconomic disparities in smoking. Evidence on the equity impact of other interventions is inconclusive, with the exception of non-targeted smoking cessation programmes which have a negative equity impact due to higher quit rates among more advantaged smokers. Conclusions: Increased tobacco price via tax is the intervention with the greatest potential to reduce socioeconomic inequalities in smoking. Other measures studied appear unlikely to reduce inequalities in smoking without specific efforts to reach disadvantaged smokers. There is a need for more research evaluating the equity impact of tobacco control measures, and development of more effective approaches for reducing tobacco use in disadvantaged groups and communities. abstract_id: PUBMED:32994297 Impact of population tobacco control interventions on socioeconomic inequalities in smoking: a systematic review and appraisal of future research directions. Background: While price increases and targeted cessation support have been found to reduce inequalities in smoking by socioeconomic status (SES), evidence on other measures is mixed. We aimed to update the most recent (2014) previous review by identifying and appraising evidence published since 2013 on the equity impact of population tobacco control measures. Methods: Systematic searching of 10 electronic databases and hand-searching of four key journals identified 68 primary research articles published since 2013 that sought to examine the equity impact of population tobacco control measures in high-income countries with a negative socioeconomic gradient in smoking. Reported equity impacts were categorised as positive (greater impact among lower SES), neutral (no difference by SES), negative (greater impact among higher SES) or mixed/unclear. Results: There was substantial growth in research seeking to evaluate the equity impact of tobacco control interventions, but the majority of new studies showed mixed/unclear results. Findings for price increases and targeted cessation support continue to suggest an equity-positive impact, but limitations in the available evidence make further assessment difficult. Substantial differences in the context, scale and implementation of tobacco control policies make straightforward comparison of findings from the previous 2014 and current reviews problematic. 
Conclusion: Researchers need to adopt more sophisticated, multidisciplinary approaches in evaluating the equity impact of tobacco control measures-developing robust measures of equity effect and using frameworks that take account of context, existing systems/processes and the likely mechanisms of action. Socioeconomic differences in intervention impact within low-income and middle-income countries require evaluation. abstract_id: PUBMED:31124377 Black, white, or green? The effects of racial composition and socioeconomic status on neighborhood-level tobacco outlet density. Objective: To compare predominantly-Black and predominantly-White Maryland areas with similar socioeconomic status to examine the role of both race and socioeconomic status on tobacco outlet availability and tobacco outlet access.Design: Maryland tobacco outlet addresses were geocoded with 2011-2015 American Community Survey sociodemographic data. Two-sample t-tests were conducted comparing the mean values of sociodemographic variables and tobacco outlet density per Census Tract, and spatial lag based regression models were conducted to analyze the direct association between covariables and tobacco outlet density while accounting for spatial dependence between and within jurisdictions.Results: Predominantly-White jurisdictions had lower tobacco outlet availability and access than predominantly-Black jurisdictions, despite similar socioeconomic status. Spatial lag model results showed that median household income and vacant houses had consistent associations with tobacco outlet density across most of the jurisdictions analyzed, and place-based spatial lag models showed direct associations between predominantly-Black jurisdictions and tobacco outlet availability and access.Conclusion: Predominantly-White areas have lower levels of tobacco outlet density than predominantly-Black areas, despite both areas having similar socioeconomic statuses. abstract_id: PUBMED:35060332 Prevalence of tobacco smoking and its socioeconomic determinants: Tobacco smoking and its determinants. Introduction: The epidemic tobacco use is a public health concern worldwide. This study aimed to evaluate the prevalence of tobacco use and its socioeconomic determinants in the city of Shiraz, Iran. Methods: In total, 5873 adults aged 20 and older were included in this study from the city of Shiraz, Iran, from June to October 2015. The sampling was conducted using the stratified random sampling method. Active cigarette, hookah, and second-hand smokers were labeled as tobacco users in this study. Past smokers and non-smokers were labeled as non-tobacco users. The participants' socioeconomic status (SES) was determined based on their self-reported level of education, occupation, income, and residence. Results: In this study, 35.4% of the participants were tobacco users. The prevalence of active cigarette, active hookah, dual-users, and secondhand smokers was 13.3%, 8.3%, 0.4%, and 13.4%, respectively. The prevalence of tobacco use was highest among individuals with primary education level (40.9%), manual jobs (46.4%), lowest income level (38.1%), and those living in the suburban areas (36.4%). In multivariate analysis, the most socioeconomic factors related to tobacco usage were lack of academic education, manual job, and low-income level. Conclusions: Tobacco control efforts should be more focused on vulnerable groups of cigarette and hookah users in the southwest of Iran. 
Moreover, SES and reduction of health-related disparities and inequality should be considered a crucial concern in this regard. abstract_id: PUBMED:29151476 Tobacco Control Measures to Reduce Socioeconomic Inequality in Smoking: The Necessity, Time-Course Perspective, and Future Implications. Previous systematic reviews of population-level tobacco control interventions and their effects on smoking inequality by socioeconomic factors concluded that tobacco taxation reduce smoking inequality by income (although this is not consistent for other socioeconomic factors, such as education). Inconsistent results have been reported for socioeconomic differences, especially for other tobacco control measures, such as smoke-free policies and anti-tobacco media campaigns. To understand smoking inequality itself and to develop strategies to reduce smoking inequality, knowledge of the underlying principles or mechanisms of the inequality over a long time-course may be important. For example, the inverse equity hypothesis recognizes that inequality may evolve in stages. New population-based interventions are initially primarily accessed by the affluent and well-educated, so there is an initial increase in socioeconomic inequality (early stage). These inequalities narrow when the deprived population can access the intervention after the affluent have gained maximum benefit (late stage). Following this hypothesis, all tobacco control measures may have the potential to reduce smoking inequality, if they continue for a long term, covering and reaching all socioeconomic subgroups. Re-evaluation of the impact of the interventions on smoking inequality using a long time-course perspective may lead to a favorable next step in equity effectiveness. Tackling socioeconomic inequality in smoking may be a key public health target for the reduction of inequality in health. abstract_id: PUBMED:24449249 Tobacco industry marketing to low socioeconomic status women in the U.S.A. Objectives: Describe tobacco companies' marketing strategies targeting low socioeconomic status (SES) females in the U.S.A. Methods: Analysis of previously secret tobacco industry documents. Results: Tobacco companies focused marketing on low SES women starting in the late 1970s, including military wives, low-income inner-city minority women, 'discount-susceptible' older female smokers and less-educated young white women. Strategies included distributing discount coupons with food stamps to reach the very poor, discount offers at point-of-sale and via direct mail to keep cigarette prices low, developing new brands for low SES females and promoting luxury images to low SES African-American women. More recently, companies integrated promotional strategies targeting low-income women into marketing plans for established brands. Conclusions: Tobacco companies used numerous marketing strategies to reach low SES females in the U.S.A. for at least four decades. Strategies to counteract marketing to low SES women could include (1) counteracting price discounts and direct mail coupons that reduce the price of tobacco products, (2) instituting restrictions on point-of-sale advertising and retail display and (3) creating counteradvertising that builds resistance to psychosocial targeting of low SES women. To achieve health equity, tobacco control efforts are needed to counteract the influence of tobacco industry marketing to low-income women. 
Answer: Support for tobacco control interventions does appear to be influenced by both country of origin and socioeconomic status. Smokers from different countries have varying attitudes towards tobacco control regulations, with Australian smokers being the most supportive and US smokers the least supportive of greater regulation of the tobacco industry. Reporting smoking-related deprivation and lower income was independently associated with increased support for regulation of the tobacco industry (PUBMED:22714136). Socioeconomic status (SES) plays a significant role in smoking prevalence, with higher smoking rates observed among disadvantaged groups. These groups may also face higher exposure to tobacco's harms and have less success in quitting due to factors such as reduced social support, stronger addiction, and psychological differences like lack of self-efficacy. Raising the price of tobacco products is seen as the most effective intervention to reduce health inequalities from tobacco, while targeted cessation programs and mass media interventions can also contribute to reducing inequalities (PUBMED:22092035). However, there is mixed evidence regarding the optimal use of mass media to promote tobacco use cessation among people with low SES. Skepticism about the content of ads and barriers to quitting such as stress and addiction are perceived as limitations in the effectiveness of these campaigns (PUBMED:29190143). Research from Mangaluru, South India, found no statistically significant difference in smoking prevalence between upper and lower socioeconomic classes, suggesting that socioeconomic disparities in tobacco use need further exploration to effectively initiate and monitor tobacco control policies (PUBMED:31058009). A review of the evidence on the impact of tobacco control interventions on socioeconomic inequalities in smoking found that price increases have a pro-equity effect, while the equity impact of other interventions is inconclusive. Non-targeted smoking cessation programs may have a negative equity impact due to higher quit rates among more advantaged smokers (PUBMED:24046211). Another systematic review highlighted the need for more sophisticated approaches to evaluate the equity impact of tobacco control measures, as most new studies showed mixed or unclear results. Price increases and targeted cessation support continue to suggest an equity-positive impact, but the evidence for other measures is limited (PUBMED:32994297). In the context of racial composition and socioeconomic status, predominantly-White areas with similar socioeconomic status to predominantly-Black areas have lower levels of tobacco outlet density, indicating that both race and socioeconomic status play a role in tobacco outlet availability and access (PUBMED:31124377).
A study on the necessity and time-course perspective of tobacco control measures to reduce socioeconomic inequality in smoking concluded that while tobacco taxation has been found to reduce smoking inequality by income, the results for other socioeconomic factors and tobacco control measures are inconsistent. The study suggests that all tobacco control measures may have the potential to reduce smoking inequality if they are sustained over the long term and reach all socioeconomic subgroups (PUBMED:29151476). Finally, tobacco industry marketing strategies in the U.S.A. have targeted low socioeconomic status women for decades, using tactics such as discount coupons, point-of-sale offers, and luxury image promotions to reach this demographic. Counterstrategies to address this targeting could include counteracting price discounts, restricting point-of-sale advertising, and creating counteradvertising that builds resistance to psychosocial targeting of low SES women (PUBMED:24449249). In summary, both country of origin and socioeconomic status influence support for and the impact of tobacco control interventions. While there is strong support for regulation among smokers, especially those who are economically disadvantaged, the effectiveness of interventions can vary based on socioeconomic factors. Price increases are consistently found to have a pro-equity effect, but the impact of other interventions is less clear and may require targeted efforts to reach disadvantaged smokers. Additionally, the tobacco industry's targeted marketing to low SES populations necessitates tailored countermeasures to mitigate its influence.
Instruction: Fulminant gangrenous ischemic colitis: is it the solely severe type of ischemic colitis? Abstracts: abstract_id: PUBMED:27838817 Fulminant gangrenous ischemic colitis: is it the solely severe type of ischemic colitis? Purpose: The purposes of the study are to evaluate the characteristics of gangrenous colitis and to identify clinicobiological factors. Methods: We performed a retrospective study of 75 patients in whom the diagnosis was made endoscopically and confirmed pathologically, between March 2004 and March 2010 at a tertiary teaching hospital. We classified ischemic colitis into the reversible and irreversible types (gangrenous colitis). The influence of factors, such as medical history, symptoms, physical examination, laboratory abnormalities, endoscopic findings, abnormalities on computed tomography, perioperative issues, complications, and several scoring systems, on gangrenous ischemic changes and mortality was evaluated by univariate and multivariate analyses. Results: Ischemic colitis was classified as gangrenous ischemic type in 19 patients. The sigmoid colon was the most common site of involvement. However, the disease distribution was significantly different between the two groups. Pancolitis was most commonly observed for fulminant gangrenous colitis. A difference between the two groups was detected for several factors: age, mortality, physiologic score, APACHE II, mean arterial pressure at the time of admission, heart rate, albumin level, and sodium bicarbonate concentration. Multivariate analysis indicated four factors predictive of gangrenous colitis: absence of hematochezia, abdominal tenderness, absence of diarrhea, and albumin level. Another multivariate analysis excluding gangrenous change factors for mortality indicated four factors: arterial pH, serum sodium bicarbonate (metabolic acidosis), albumin (<3.0), and arterial oxygenation. Conclusion: Absence of diarrhea and hematochezia, presence of abdominal tenderness, and hypoalbuminemia could be the predictors for development of gangrenous changes of ischemic colitis. abstract_id: PUBMED:32954769 Severe colon ischemia in patients with severe coronavirus-19 (COVID-19). COVID-19 is associated with severe coagulopathy. We present three cases of colonic ischemia that can be attributed to the hypercoagulable state related with SARS-CoV2 and disseminated intravascular coagulation. Three males aged 76, 68 and 56 with respiratory distress presented episodes of rectal bleeding, abdominal distension and signs of peritoneal irritation. Endoscopy (case 1) and computed tomography angiography revealed colonic ischemia. One patient (case 2) in which a computed tomography (CT) scan showed perforation of the gangrenous cecum underwent surgery. D-dimer levels were markedly increased (2,170, 2,100 and 7,360 ng/ml) in all three patients. All three patients died shortly after diagnosis. abstract_id: PUBMED:24859995 Surgical Management of Severe Colitis in the Intensive Care Unit. Severe colitis, an umbrella encompassing several entities, is one of the most common acute gastrointestinal disorders resulting in critical illness. Clostridium difficile infection is responsible for the majority of nosocomial diarrhea with fulminant C difficile colitis (CDC) carrying a high mortality. Optimal outcomes can be achieved by early identification and treatment of fulminant CDC, with appropriate surgical intervention when indicated.
Ischemic colitis, on the other hand, is uncommon, with a range of etiological factors including abdominal aortic surgery, inotropic drugs, rheumatoid diseases, or often no obvious triggering factor. Most cases resolve with nonsurgical management; however, prompt recognition of full-thickness necrosis and gangrene is crucial for good patient outcomes. Fulminant colitis is a severe disease secondary to progressive ulcerative colitis with systemic deterioration. Surgical intervention is indicated for hemorrhage, perforation, or peritonitis and failure of medical therapy to control the disease. Although failure of medical management is the most common indication, it can be difficult to define objectively and requires a collaborative multidisciplinary approach. This article proposes some simple management algorithms for these clinical entities, with a focus on critically ill patients. abstract_id: PUBMED:21503659 Fulminant gangrenous ischemic colitis: is it the solely severe type of ischemic colitis? N/A abstract_id: PUBMED:28929100 Advents in the Diagnosis and Management of Ischemic Colitis. Background: Ischemic colitis (IC) is a common type of ischemic insult, resulting from decreased arterial blood flow to the colon. This disease can be caused by either atherosclerotic occlusive vascular disease or non-occlusive disease. The aim of this study is to present the diagnostic methodology and management of this severe disease based on current literature. Methods: A literature search was performed, including articles on the modern diagnosis and management of IC. Results: IC is usually a transient disease, but it can also cause gangrene of the colon, requiring emergency surgical exploration. Diagnosis is troublesome and is based on imaging examinations, mainly computerized tomography, which in association with colonoscopy can delineate the distribution pattern and severity of disease. Conclusion: The majority of patients with mild disease usually have complete clinical recovery within a short period. The severe forms of the disease carry high morbidity and mortality rates, and prompt surgical intervention is the only way to improve the associated severe prognosis. abstract_id: PUBMED:35509729 Lactulose-Induced Ischemic Colitis: A Rare Presentation and an Overview of Possible Etiologies of the Disease. Ischemic colitis is one of the most common ischemic pathologies of the gastrointestinal system and can be divided into non-gangrenous and gangrenous forms. The pathophysiology involves restricted blood supply to the colonic mucosa. Several risk factors have been implicated in the development of ischemic colitis. Lactulose, one of the mainstay therapies for the treatment of hepatic encephalopathy in patients with cirrhosis, has rarely been reported as a cause of ischemic colitis. To the best of our knowledge, there has been only one case report associating lactulose use with the development of ischemic colitis. The exact pathophysiology is unknown but might be associated with the fermentation of lactulose by intestinal bacteria, causing gaseous distention and increasing the intraluminal pressure. We present the case of a 77-year-old African American male, a known case of non-alcoholic liver cirrhosis with portal hypertension and esophageal varices, brought in by his family to the emergency department for altered mental status, non-bilious vomiting, abdominal distension, and pain for one day.
On physical examination, the patient had upper extremity asterixis and was alert but disoriented to place and person. Diagnostic paracentesis was performed, which revealed leukocytosis, predominantly neutrophils. The patient was admitted for spontaneous bacterial peritonitis and hepatic encephalopathy with decompensated liver cirrhosis. The patient was started on lactulose with a goal of three to four bowel movements per day. Despite adequate treatment, the patient continued to develop worsening mental function and abdominal distension. This was later followed by a bloody bowel movement. Laboratory assessment showed an elevated white blood cell count, worsening kidney function, and high anion gap metabolic acidosis. CT scan revealed dilated loops of bowel with air and fluid along with submucosal wall edema, findings suggestive of ischemic colitis. Given the poor prognosis and the patient's condition, colonoscopy was deferred. Lactulose was discontinued, as it was thought to be a contributing cause of the patient's ischemic colitis. His condition continued to deteriorate, and he passed away on Day 18 of admission. abstract_id: PUBMED:19069699 Ischaemic colitis--review. Colonic ischaemia, commonly referred to as ischaemic colitis, is the most common type of intestinal ischaemia. The term "ischaemic colitis" was used by Marston (1966) with three typical patterns of injury described: transient reversible ischaemia, ischaemic ulcers with stricturing, and gangrenous ischaemic colitis. Dominant presenting symptoms were colicky abdominal pain, vomiting, bloody diarrhea, and hematochezia. Patients often have minimal signs on clinical examination. Most patients were diagnosed at colonoscopy. Two regions that are believed to be anatomically vulnerable to ischemic disease are "Griffith's point" at the splenic flexure and "Sudeck's critical point" of the Drummond marginal artery. Clinically, ischaemic colitis is classified as non-gangrenous or gangrenous. Non-gangrenous ischaemic colitis involves the mucosa and submucosa and accounts for 80-85 percent of all cases of ischaemic colitis. Non-gangrenous ischaemic colitis is further subclassified into transient, reversible ischaemic colitis with a less severe form of injury and chronic, non-reversible ischaemic colitis, which includes chronic colitis and stricture and has a more severe form of injury. Gangrenous ischaemic colitis accounts for the remaining 15-20 percent of cases and manifests as the most severe form of injury. It includes acute fulminant ischaemia with transmural infarction that may progress to necrosis and death. Specific indications for operation include peritonitis, perforation, recurrent fever or sepsis, and clinical deterioration in patients refractory to medical management. Relative indications include fulminant colitis, massive hemorrhage, chronic protein-losing colopathy, and symptomatic ischemic stricture. abstract_id: PUBMED:6745023 Localized ischemic colitis in a young woman with diabetes. Ischemic colitis involving a limited segment of the proximal transverse colon in a young diabetic woman was identified and treated. Important clinical, radiographic, and histopathologic findings are detailed. abstract_id: PUBMED:32969365 Acute Intestinal Ischemia in a Patient with COVID-19 Infection. The World Health Organization has declared novel coronavirus disease 2019 (COVID-19) a global public health emergency. Although respiratory symptoms predominate in COVID-19, thrombosis can occur in patients with COVID-19.
This paper reports a case of an 82-year-old female with a prior medical history of hypertension and diabetes who presented with fever and cough and was diagnosed with COVID-19. The patient subsequently developed progressively worsening abdominal distention and tenderness and underwent emergent laparotomy. She was found to have a gangrenous colon. This case adds to the limited literature regarding the extrapulmonary complications of COVID-19. abstract_id: PUBMED:15616917 A case of severe gangrenous ischemic colitis following nonsteroidal anti-inflammatory drug intake. A 48-year-old woman presented with bloody diarrhea, exsiccosis, abdominal pain and fever. A CT scan showed air inclusions and thickening of the colonic wall. Laparotomy revealed partial necrosis of the colonic wall from the cecum to the rectosigmoid junction. After colectomy with ileorectostomy, recovery was uneventful. The histological examination showed erosions, ulcerations, necrosis and some abscesses along the colonic wall which were not typical of ulcerative colitis or Crohn's disease. The patient reported medication with diclofenac for rheumatoid arthritis. The most probable diagnosis was NSAID-induced colitis. That NSAIDs may cause colitis has often been described in the literature, but there have been only a few reports of near-total colon necrosis. Answer: Fulminant gangrenous ischemic colitis is not the sole severe type of ischemic colitis, but it is one of the most severe forms. Ischemic colitis can be classified into non-gangrenous and gangrenous forms, with gangrenous ischemic colitis manifesting as the most severe form of injury, including acute fulminant ischemia with transmural infarction that may progress to necrosis and death (PUBMED:19069699). Gangrenous ischemic colitis is characterized by irreversible changes and carries a high mortality rate (PUBMED:27838817). However, severe colitis can also be caused by other conditions such as Clostridium difficile infection, which can lead to fulminant colitis with a high mortality rate if not promptly identified and treated (PUBMED:24859995). Additionally, severe colon ischemia has been observed in patients with COVID-19, attributed to the hypercoagulable state related to the virus and disseminated intravascular coagulation, and it carries a high mortality (PUBMED:32954769; PUBMED:32969365). Therefore, while fulminant gangrenous ischemic colitis is a severe type of ischemic colitis, it is not the only severe form, as other etiologies and conditions can also lead to severe colitis with significant morbidity and mortality.
Instruction: Does intensive multimodal treatment for maternal ADHD improve the efficacy of parent training for children with ADHD? Abstracts: abstract_id: PUBMED:26123832 Does intensive multimodal treatment for maternal ADHD improve the efficacy of parent training for children with ADHD? A randomized controlled multicenter trial. Background: This is the first randomized controlled multicenter trial to evaluate the effect of two treatments of maternal attention-deficit hyperactivity disorder (ADHD) on response to parent-child training targeting children's external psychopathology. Methods: Mother-child dyads (n = 144; ADHD according to DSM-IV; children: 73.5% males, mean age 9.4 years) from five specialized university outpatient units in Germany were centrally randomized to multimodal maternal ADHD treatment [group psychotherapy plus open methylphenidate medication; treatment group (TG): n = 77] or to clinical management [supportive counseling without psychotherapy or psychopharmacotherapy; control group (CG): n = 67]. After 12 weeks, the maternal ADHD treatment was supplemented by individual parent-child training for all dyads. The primary outcome was a change in the children's externalizing symptom scores (investigator blinded to the treatment assignment) from baseline to the end of the parent-child training 6 months later. Maintenance therapy continued for another 6 months. An intention-to-treat analysis was performed within a linear regression model, controlling for baseline and center after multiple imputations of missing values. Results: Exactly 206 dyads were assessed for eligibility, 144 were randomized, and 143 were analyzed (TG: n = 77; CG: n = 66). After 6 months, no significant between-group differences were found in change scores for children's externalizing symptoms (adjusted mean TG - mean CG = 1.1, 95% confidence interval -0.5 to 2.7; p = .1854), although maternal psychopathology improved more in the TG. Children's externalizing symptom scores improved from a mean of 14.8 at baseline to 11.4 (TG) and 10.3 (CG) after 6 months and to 10.8 (TG) and 10.1 (CG) after 1 year. No severe harms related to study treatments were found, but adverse events were more frequent in TG mothers than in CG mothers. Conclusions: The response in children's externalizing psychopathology did not differ between maternal treatment groups. However, multimodal treatment was associated with more improvement in maternal ADHD. Child and maternal treatment gains were stable (CCT-ISRCTN73911400). abstract_id: PUBMED:34130566 Does a Behavioral Parent Training Program for Parents of ADHD Children Improve Outcomes? A Pilot Project. Attention Deficit Hyperactivity Disorder (ADHD) is the most common chronic neurobehavioral disorder of childhood. Research suggests increased parent-child conflict exists in families with an ADHD child. The evidence indicates links between child behavior problems and parenting practices. Behavioral Parent Training (BPT) is an evidence-based intervention recommended for the treatment of ADHD. BPT is recommended as first-line treatment in ADHD children under age six and as a combination treatment approach for children older than age six. BPT programs have demonstrated significant improvement in the frequency of the problem behaviors of inattention, hyperactivity, and impulsivity associated with ADHD. Pre- and Post-BPT Parenting Scales and Vanderbilt ADHD Diagnostic Rating Scales for Parents and Teachers were used to evaluate the efficacy of the BPT program.
Percent changes for each participant pre- and post-BPT were calculated. The Parenting Scale overall score and overreactivity factor score showed significant improvement post-BPT (p = .05). Participation in a BPT program can affect parenting practices and improve outcomes for ADHD children. BPT programs are effective in reducing negative parenting practices and improving outcomes for this population. abstract_id: PUBMED:29362929 Does the efficacy of parent-child training depend on maternal symptom improvement? Results from a randomized controlled trial on children and mothers both affected by attention-deficit/hyperactivity disorder (ADHD). Multimodal treatment of children with ADHD often includes parent-child training (PCT). However, due to the high heritability, parents of children with ADHD are frequently also affected by the disorder, which is likely to constitute a significant barrier to successful treatment of the child. This secondary analysis of our randomized controlled multicentre AIMAC trial (ADHD in mothers and children) investigates whether children's outcomes following parent-child training in combination with maternal ADHD treatment depend on maternal symptom improvement. In a first step focusing on treatment of maternal ADHD, 144 mothers of mother-child dyads were randomized to multimodal ADHD treatment (group psychotherapy plus methylphenidate) or clinical management (mainly supportive counselling). After 12 weeks (T2), a 12-week PCT program (T2-T3) for all mother-child dyads was added to treat children's ADHD. Maternal symptomatology (CAARS-O:L; SCL-90-R) and children's externalizing symptoms (ADHD-ODD Scale, SDQ) were repeatedly assessed (T1 = baseline, T2, T3). Effects of changes in maternal symptomatology (T1-T2) on the change in children's symptom scores (T1-T3) were analysed using a general linear model, controlling for baseline scores, study centre, and maternal treatment group. 125 mother-child dyads were analysed. Mothers showed significant improvements in ADHD symptoms and overall psychopathology [CAARS-O:L ADHD index: mean -3.54, SE 0.74, p < 0.0001; SCL-90-R Global Severity (GS): mean -11.03, SE 3.90, p = 0.0056]. Although children's externalizing symptoms improved significantly (ADHD-ODD Scale: mean -4.46, SE 0.58, p < 0.0001), maternal improvement had no effect on children's outcomes after Bonferroni-Holm correction for multiple testing. The findings do not support our hypothesis that children's outcomes following PCT for ADHD depend on maternal symptom improvements. Trial register CCT-ISRCTN73911400. abstract_id: PUBMED:37575579 Needs assessment for behavioral parent training for ADHD in Brazil. Introduction: Attention-Deficit/Hyperactivity Disorder (ADHD) is a debilitating condition affecting children and their families worldwide. Behavioral parent training is a recommended form of empirically supported non-pharmacological intervention for young children with mild to moderate ADHD. However, access to such treatment is limited in many countries. Here we identify the treatment needs of Brazilian families with children demonstrating symptoms of ADHD, and the barriers families face in accessing behavioral treatment. Methods: A qualitative needs assessment was undertaken with parents (n = 23), educators (n = 15), and healthcare providers (n = 16). Semi-structured telephone interviews were conducted, and common themes were identified through inductive coding of participants' responses.
Results: Participants reported a lack of accessible behavioral treatment, and delays in accessing treatment when available. The majority of parents had not received behavioral parent training, despite it being a recommended form of treatment. Parents, educators and healthcare providers strongly endorsed a need for practical tools to manage the behavior of children with ADHD. Conclusion: Existing services might not meet the needs of children with ADHD and their families in Brazil. Easily accessed behavioral parent training programs are recommended to address the identified treatment gap for Brazilian children with ADHD and their families. abstract_id: PUBMED:31807943 Does helping mothers in multigenerational ADHD also help children in the long run? 2-year follow-up from baseline of the AIMAC randomized controlled multicentre trial. ADHD often affects multiple generations in a family. Previous studies suggested that children with ADHD benefit less from therapy if parents are also affected, since ADHD symptoms interfere with treatment implementation. This two-group randomised controlled trial examined whether targeting maternal ADHD boosts the efficacy of parent-child training (PCT) for the child's ADHD. Here, we report follow-up results 2 years from baseline. Mothers of 144 mother-child dyads (ADHD according to DSM-IV) were examined for eligibility (T1) and randomised to 12 weeks of intensive multimodal treatment comprising pharmacotherapy and DBT-based cognitive behavioural group psychotherapy (TG, n = 77) or clinical management comprising non-specific counselling (CG, n = 67) for Step 1 (concluded by T2). Subsequently, all dyads participated in 12 weekly PCT sessions for Step 2 (concluded by T3). In Step 3, participants received maintenance treatments for 6 months (concluded by T4). At 24 months after baseline (T5), we performed follow-up assessments. The primary endpoint was the child ADHD/ODD score (observer-blind rating). Outcomes at T5 were evaluated using ANCOVA. Assessments from 101 children and 95 mothers were available at T5. Adjusted means (m) of ADHD/ODD symptoms (range 0-26) in children did not differ between TG and CG (mean difference = 1.0; 95% CI -1.2 to 3.1). The maternal advantage of TG over CG on the CAARS-O:L ADHD index (range 0-36) disappeared at T5 (mean difference = 0.2; 95% CI -2.3 to 2.6). Sensitivity analyses controlling for medication and significant predictors of follow-up participation showed unchanged outcomes. Within-group outcomes remained improved from baseline. At the 24-month follow-up, TG and CG converged. The superiority of intensive treatment regarding maternal symptoms disappeared. In general, cross-generational treatment seems to be effective in the long term. (BMBF grant 01GV0605; registration ISRCTN73911400). abstract_id: PUBMED:34975569 Multimodal Interventions Are More Effective in Improving Core Symptoms in Children With ADHD. Objective: To investigate the effect of sensory integration training combined with EEG biofeedback on core symptoms in children with ADHD. Methods: Fifty-two children with attention-deficit, hyperactive-impulsive and combined ADHD were selected. They were randomly divided into a control group, a sensory integration training group, an EEG biofeedback group, and a sensory integration training + EEG biofeedback group, and after 4 months of intervention, concentration time and impulsive-hyperactivity and hyperactivity index scores on the PSQ scale were assessed.
Results: Compared with before the intervention, the attention time was significantly increased (P < 0.01), and the impulsive-hyperactivity and hyperactivity index scores were significantly decreased (P < 0.05, P < 0.01). After the intervention, the attention time was significantly higher than that of the control group (P < 0.05, P < 0.01), the attention time of the multimodal intervention group was significantly higher than that of the single intervention group (P < 0.01), and the impulsive-hyperactivity and hyperactivity index scores were significantly lower than those of the single intervention group (P < 0.05). Conclusion: Multimodal intervention can significantly improve the concentration level of children with ADHD, and significantly improve the behavioral symptoms of impulsive-hyperactivity and hyperactivity. Multimodal interventions were more effective than single interventions in improving core symptoms in children with ADHD. The results of this study provide a reference for related research and practical application. abstract_id: PUBMED:25892844 Children With Conduct Problems and Co-occurring ADHD: Behavioral Improvements Following Parent Management Training. To scale up evidence-based treatment of conduct problems, parent management training, Oregon model (PMTO) has been disseminated throughout Norway. This study examined whether Attention Deficit Hyperactivity Disorder (ADHD) predicted the outcomes of PMTO. Of 253 children and families, 97 were reported to have an ADHD diagnosis. Although different at intake, the groups with and without ADHD had close to an equal change in behavioral status following treatment. Maternal depression and family income predicted the combined group's behavior following PMTO. The study indicates that reductions in conduct problems following PMTO are of the same magnitude in children with or without ADHD. However, some characteristics may differentially predict outcomes for children with combined problems. abstract_id: PUBMED:25632280 Barkley's Parent Training Program, Working Memory Training and their Combination for Children with ADHD: Attention Deficit Hyperactivity Disorder. Objective: The aim of the current study was to examine the effectiveness of Barkley's parent training program, working memory training and the combination of these two interventions for children with Attention deficit hyperactivity disorder (ADHD). Methods: In this study, 36 participants with ADHD (aged 6 to 12 years) were selected by convenience sampling. Revision of the Swanson, Nolan and Pelham (SNAP) questionnaire (SNAP-IV), Child Behavior Checklist (CBCL) and clinical interviews were employed to diagnose ADHD. The Wechsler Intelligence Scale for Children-Fourth Edition was also implemented. The participants were randomly assigned to the three intervention groups of Barkley's parent training program, working memory training and the combined group. SNAP-IV and CBCL were used as pre-tests and post-tests across all three groups. Data were analyzed using MANCOVA (SPSS version 18). Results: There was a significant difference (p < 0.05) in the decline of attention deficit and hyperactivity/impulsivity symptoms between the combined treatment group and the working memory training group and also between the combined treatment group and the parent training group in SNAP. In terms of attention problems (experience-based subscales) of CBCL, there was a significant difference (p < 0.001) between the combined treatment group and the working memory training group.
Furthermore, compared to the working memory training and parent training groups, the combined group demonstrated a significant decline (p < 0.01) in clinical symptoms of ADHD (based on DSM). Conclusion: Combined treatment suppressed the clinical symptoms of ADHD more effectively than either of the other two methods. abstract_id: PUBMED:28277151 Sequenced neurocognitive and behavioral parent training for the treatment of ADHD in school-age children. The present study examines the potential of sequencing a neurocognitive intervention with behavioral parent training (BPT) to improve executive functions (EFs), psychiatric symptoms, and multiple indices of functional impairment in school-age children aged 7 to 11 years who have been diagnosed with attention-deficit/hyperactivity disorder (ADHD). Specifically, in a randomized controlled trial design, 85 children were assigned to either Cogmed Working Memory Training (CWMT) followed by an empirically supported, manualized BPT intervention, or to a placebo version of CWMT followed by the same BPT intervention. Working memory maintenance (i.e., attention control/short-term memory), working memory processing and manipulation, ADHD and oppositional defiant disorder (ODD) symptoms, impairment in parent-child dynamics, familial impairment, and overall functional compromise were evaluated as outcomes. The results suggest specific effects of the combined CWMT and BPT program on verbal and nonverbal working memory storage and nonverbal working memory processing and manipulation but no incremental benefits in regard to ADHD symptoms, ODD symptoms, and functional outcomes. The present findings do not support the hypothesis regarding the complementary and augmentative benefits of sequenced neurocognitive and BPT interventions for the treatment of ADHD. These results, the study's limitations, and future directions for research are further discussed. abstract_id: PUBMED:24621459 Preliminary Efficacy of a Behavioral Parent Training Program for Children With ADHD in Pakistan. Objective: In an effort to address the lack of evidence-based interventions for ADHD in developing South Asian countries, we examined the preliminary efficacy of a behavioral parent training program in Pakistan. Method: A quasi-experimental design was utilized. Eighty-five 4- to 12-year-old children with clinically significant ADHD symptoms participated: 55 were recruited from hospital clinics (active treatment group) and 30 were recruited from schools (waitlist control group). Parent and teacher ratings of ADHD, oppositional defiant disorder (ODD), and conduct disorder (CD) symptoms and impairment were collected. Results: Using intent-to-treat analyses, the treatment group showed significant pre-post improvement on parent-reported ODD symptoms and ADHD-related impairment. Teacher ratings showed no improvement. Conclusion: This study provides preliminary evidence for the feasibility and effectiveness of behavioral parent training for children with ADHD in Pakistan and represents a critical first step in identifying evidence-based treatments for Pakistani children with ADHD. Answer: The available research suggests that intensive multimodal treatment for maternal ADHD does not significantly improve the efficacy of parent training for children with ADHD in terms of reducing children's externalizing symptoms.
A randomized controlled multicenter trial found that after 6 months, there were no significant differences in change scores for children's externalizing symptoms between the treatment group (mothers receiving multimodal ADHD treatment) and the control group (mothers receiving supportive counseling without psychotherapy or psychopharmacotherapy), although maternal psychopathology improved more in the treatment group (PUBMED:26123832). Additionally, a secondary analysis of the same trial indicated that while maternal ADHD symptoms and overall psychopathology improved significantly, these improvements had no effect on children's outcomes after parent-child training for ADHD (PUBMED:29362929). Furthermore, a 2-year follow-up from baseline of the AIMAC trial also showed that the superiority of intensive treatment regarding maternal symptoms disappeared over time, and there were no significant differences in child ADHD/ODD symptoms between the treatment and control groups (PUBMED:31807943). However, it is important to note that behavioral parent training (BPT) programs have been found to be effective in reducing negative parenting practices and improving outcomes for children with ADHD (PUBMED:34130566). Other studies have shown that multimodal interventions, including sensory integration training combined with EEG biofeedback, can be more effective in improving core symptoms in children with ADHD compared to single interventions (PUBMED:34975569). Additionally, parent management training has been effective in reducing conduct problems in children regardless of whether they have co-occurring ADHD (PUBMED:25892844). In summary, while multimodal treatment for maternal ADHD leads to improvements in maternal psychopathology, it does not appear to significantly enhance the efficacy of parent training for children with ADHD in terms of reducing externalizing symptoms. However, BPT and other multimodal interventions remain important components of treatment for children with ADHD.
Instruction: Can anesthesia trigger delayed-onset posttraumatic stress disorder in veterans? Abstracts: abstract_id: PUBMED:22724403 Can anesthesia trigger delayed-onset posttraumatic stress disorder in veterans? Purpose: The clinical presentation of a 60-year-old Vietnam veteran who developed delayed-onset posttraumatic stress disorder (PTSD) after surgery is presented along with mechanisms for storing traumatic memory and a discussion of the potential contributions of anesthesia and aging to his clinical presentation. Conclusion: Although additional research is needed to clarify the contributions of memory processes, anesthesia, and aging in delayed-onset PTSD, prudent clinicians will be cognizant of a potential correlation and screen patients appropriately. Practice Implications: A thorough surgical and anesthetic exposure history should be explored in presentations of delayed-onset posttraumatic stress disorder symptomatology. Additionally, clinicians working with trauma patients who anticipate surgery are uniquely positioned to serve as a liaison between the patient, surgeon, and anesthetist, thereby decreasing the potential for delayed-onset PTSD. abstract_id: PUBMED:19899846 Comparison of immediate-onset and delayed-onset posttraumatic stress disorder in military veterans. Differences in symptoms, trauma exposure, dissociative and emotional reactions to trauma, and subsequent life stress in war veterans reporting immediate-onset or delayed-onset posttraumatic stress disorder (PTSD) or no PTSD were investigated. The role of life stress in delayed-onset PTSD was also studied. Retrospective interviews were conducted with 142 United Kingdom veterans receiving a war pension for PTSD or physical disability. Immediate-onset and delayed-onset PTSD were similar in the number and type of symptoms reported at onset, but the delayed-onset group differed in showing a gradual accumulation of symptoms that began earlier and continued throughout their military career. They were more likely to report major depressive disorder and alcohol abuse prior to PTSD onset. Both groups described similar amounts of trauma exposure, but those in the delayed-onset group reported significantly less peritraumatic dissociation, anger, and shame. Veterans with delayed onsets were more likely than veterans with no PTSD to report the presence of a severe life stressor in the year before onset. In conclusion, the results suggest that delayed onsets involve a more general stress sensitivity and a progressive failure to adapt to continued stress exposure. abstract_id: PUBMED:7987788 Delayed onset post-traumatic stress disorder in World War II veterans. Delayed onset post-traumatic stress disorder is an underreported condition in World War II veterans. There is little agreement about the etiology of this syndrome. Two cases of post-traumatic stress disorder in World War II veterans whose individual onset was delayed for over 30 years are described. These veterans developed the symptoms associated with the stresses of late life, including bereavement, social isolation and chronic medical illness. Theories regarding the etiology of this syndrome are reviewed and a hypothesis suggesting a heterogeneous condition is presented. abstract_id: PUBMED:31074663 Adjustment profiles of Yom Kippur war veterans seeking delayed help from the IDF. The most common stress reactions following combat are post-traumatic stress reactions.
These responses have a great impact on quality-of-life and can damage victims' personal, familial, and social functioning. However, of the people who develop PTSD, only a few request psychological help. The first aim of the research was, therefore, to examine the long-term adjustment patterns of Yom Kippur War veterans who sought help only after a period of more than 30 years. The second aim was to classify the participants according to a common symptom profile. The participant sample consisted of 195 Yom Kippur War veterans who went through a process of assessment and evaluation via intake interviews conducted between 2006 and 2012, when they came to the Combat Stress Reaction (CSR) unit. Data were collected from the participants' files. The findings indicated that, by the time they arrived at the CSR Unit, 90.8% of the participants were experiencing PTSD. The findings support the argument that delayed onset of PTSD without a background history of symptoms is rare, and indicate that most people who seek help report that their low-level symptoms intensified over the years until these symptoms developed into clinically meaningful PTSD. abstract_id: PUBMED:15879593 Delayed onset of posttraumatic stress disorder among male combat veterans: a case series. Objective: Authors investigated the nature of delayed-onset posttraumatic stress disorder (PTSD) among combat veterans. Methods: PTSD, along with cognitive and emotional functioning, was assessed in a case series of elderly Australian war veterans. Results: Fifteen elderly male subjects consecutively referred to an outpatient psychiatric clinic were identified as having PTSD with significantly delayed onset. In most cases, the onset of PTSD symptoms was associated with unrelated medical complaints, psychosocial stress, and/or mild cognitive impairment. Conclusion: Environmental stressors, coupled with age-related neurodegeneration, may potentially contribute to the late-life recrudescence or emergence of PTSD symptoms in veterans exposed to combat-related trauma. abstract_id: PUBMED:23294854 The clinical picture of late-onset PTSD: a 20-year longitudinal study of Israeli war veterans. Delayed-onset posttraumatic stress disorder (PTSD) has been under medico-legal debate for years. Previous studies examining the prevalence and clinical characteristics of delayed-onset PTSD have yielded inconclusive findings. This study prospectively examines the prevalence and clinical picture of late-onset PTSD among Israeli war veterans. It also evaluates whether or not late-onset PTSD erupts after a completely non-symptomatic period. 675 Israeli veterans from the 1982 Lebanon War, with and without antecedent combat stress reaction (CSR), have been assessed 1, 2 and 20 years post-war. They were divided into 4 groups, according to the duration of delay in PTSD onset. Participants completed self-report questionnaires tapping psychopathology, combat exposure and socio-demographics. 16.5% of the veterans suffered from late-onset PTSD. A longer delay in PTSD onset was associated with less severe psychopathology. Also, CSR was associated with a shorter delay in PTSD onset. Finally, the vast majority of veterans already suffered from PTSD symptoms prior to late PTSD onset. Our results offer further validation for the existence of delayed-onset PTSD. Delayed-onset PTSD appears to be a unique sub-type of PTSD, with an attenuated clinical picture. 
In addition, delayed-onset PTSD may be the result of an incubation process, wherein symptoms already exist prior to PTSD onset. abstract_id: PUBMED:19478290 Delayed-onset post-traumatic stress disorder among war veterans in primary care clinics. Background: Only limited empirical data support the existence of delayed-onset post-traumatic stress disorder (PTSD). Aims: To expand our understanding of delayed-onset PTSD prevalence and phenomenology. Method: A cross-sectional, epidemiological design (n = 747) incorporating structured interviews to obtain relevant information for analyses in a multisite study of military veterans. Results: A small percentage of veterans with identified current PTSD (8.3%, 7/84), current subthreshold PTSD (6.9%, 2/29), and lifetime PTSD only (5.4%, 2/37) met criteria for delayed onset, with PTSD symptoms beginning more than 6 months after the index trauma. Altogether only 0.4% (3/747) of the entire sample had current PTSD with delayed-onset symptoms developing more than 1 year after trauma exposure, and no PTSD symptom onset was reported more than 6 years post-trauma. Conclusions: Retrospective reports of veterans reveal that delayed-onset PTSD (current, subthreshold or lifetime) is extremely rare 1 year post-trauma, and there was no evidence of PTSD symptom onset 6 or more years after trauma exposure. abstract_id: PUBMED:20582726 Delayed-onset PTSD among war veterans: the role of life events throughout the life cycle. Background: The underlying mechanisms of delayed-onset PTSD are yet to be understood. This study examines the role of stressful life events throughout the life cycle in delayed-onset PTSD following combat. Methods: 675 Israeli veterans from the 1982 Lebanon War, 369 with antecedent combat stress reaction (CSR) and 306 without CSR, were assessed prospectively 1, 2 and 20 years after the war. Veterans were divided into four groups according to the time of first PTSD onset (first onset in 1983, 1984, or 2002, and no PTSD onset). They were assessed for post-, peri- and pre-traumatic life events, as well as military and socio-demographic characteristics. Results: Our findings indicate that shorter delays in PTSD onset were associated with a higher risk for CSR, a higher number of pre- and post-war life events, more severe subjective battle exposure, greater perceived danger during combat and a more stressful military position. CSR was found to be the most powerful predictor of PTSD onset. A recency effect was also found, with more recent life events proving to be stronger predictors of PTSD onset. Conclusions: First, our findings validate the existence of delayed-onset PTSD, as it was found among a substantial number of participants (16.5%). Second, post-, peri- and pre-traumatic life events are associated with the time of PTSD onset. Thus, practitioners and researchers are encouraged to examine not only the original trauma, but also the stressful experiences throughout the survivors' life cycle. In particular, identification of antecedent CSR may help mental health professionals in targeting high-risk populations. abstract_id: PUBMED:25150054 Vietnam veteran perceptions of delayed onset and awareness of posttraumatic stress disorder. Although 40 years have passed since the Vietnam War, demand for treatment of posttraumatic stress disorder (PTSD) among veterans from this conflict has increased steadily. This study investigates the extent to which two factors, delayed onset or awareness of PTSD symptoms, may influence this demand.
Using data from two studies of Vietnam veterans in outpatient (n = 353) and inpatient (n = 721) PTSD treatment, this analysis examines retrospective perceptions of the time of symptom onset and awareness of the connection between symptoms and war-zone stress. The association of these two constructs with pre-war, wartime, and post-war clinical variables is analyzed. Delay in onset of symptoms was reported by 50% of outpatients and 35% of inpatients. Delay in awareness was reported by 60% of outpatients and 65% of inpatients. Onset of symptoms occurred within six years and onset of awareness within 20 years in 90% of individuals. Reported delays in onset and awareness were associated with more numerous negative life events after military service and before the onset of symptoms. Findings suggest that providers, administrators, and policy makers should be aware of the potential for protracted treatment demand among veterans from current conflicts, due in part to delay in onset and awareness of symptoms. abstract_id: PUBMED:34778444 Delayed-Onset PTSD and Coping Strategies of Chinese College Students During the COVID-19 Pandemic. Coronavirus disease 2019 (COVID-19) has become a pandemic, and many Chinese college students both in China and abroad were house-quarantined. This study aimed to investigate the prevalence and symptoms of delayed-onset post-traumatic stress disorder (PTSD) and coping strategies among Chinese overseas and domestic college students during this pandemic. A questionnaire was opportunistically distributed to Chinese college students studying both domestically and abroad six months after the COVID-19 outbreak. The questionnaire consisted of the IES-R, SCSQ, and SSRS. The average score of delayed-onset PTSD in our population was 21.411 (maximum score, 88 points), reflecting an overall high level of delayed-onset PTSD symptoms. Significant differences in the hyperarousal dimension were found between students who had returned to their universities during the pandemic and those who had not (p = 0.016). Three coping strategies were recognized to influence respondents' delayed-onset PTSD symptoms, and there was a significant correlation between social support and the coping strategies students chose. A moderate to high level of delayed-onset PTSD was observed among both Chinese overseas and domestic college students 6 months after the COVID-19 outbreak. Effective coping strategies and strong social support are important to help students stay mentally healthy and alleviate delayed-onset PTSD during the COVID-19 pandemic. Answer: The clinical presentation of a 60-year-old Vietnam veteran who developed delayed-onset posttraumatic stress disorder (PTSD) after surgery suggests that there may be a potential correlation between anesthesia and the triggering of delayed-onset PTSD in veterans. Although additional research is needed to clarify the contributions of memory processes, anesthesia, and aging in delayed-onset PTSD, clinicians are advised to be aware of this potential correlation and to screen patients appropriately. A thorough surgical and anesthetic exposure history should be explored in presentations of delayed-onset posttraumatic stress disorder symptomatology (PUBMED:22724403).
Instruction: Peripheral T cells from patients with early systemic sclerosis kill autologous fibroblasts in co-culture: is T-cell response aimed to play a protective role? Abstracts: abstract_id: PUBMED:20418226 Peripheral T cells from patients with early systemic sclerosis kill autologous fibroblasts in co-culture: is T-cell response aimed to play a protective role? Objectives: Oligoclonal T-cell infiltrates have been detected in the skin of patients with early dcSSc. Peripheral T cells from patients with early dcSSc co-cultured with autologous fibroblasts have been found to expand the same T-cell clonotypes found in the affected skin. Here, we characterize oligoclonally expanded T lymphocytes and investigate functional changes occurring in early-dcSSc co-cultured T lymphocytes and fibroblasts. Methods: Peripheral T lymphocytes from five patients with early (<3-year duration) dcSSc were co-cultured with the autologous fibroblasts obtained by punch biopsy of involved skin. Results: T-cell clonotypes expanded in co-cultures were found to be alphabeta(+) and HLA-DR(+), and to promote the apoptosis of autologous fibroblasts. Fibroblasts up-regulated Fas and underwent apoptosis that paired with the expression of Fas ligand (Fasl) on CD4(+) T cells. Finally, the addition of a blocking anti-Fas antibody to the co-cultures resulted in a marked reduction of fibroblast apoptosis, suggesting a critical role of Fas/Fasl engagement in mediating apoptosis in co-cultured fibroblasts. In the co-culture supernatants, we found TGF-beta, IL-1beta, IL-6 and IL-8, cytokines that are known to promote fibrosis in SSc. The same results were registered in each co-culture. Conclusions: Taken together, these data suggest that the T-cell response in SSc may also represent an attempt of the immune system to kill fibroblasts, cells likely expressing (auto)antigens, although the overall outcome of the T-cell response contributes to sustaining inflammatory loops leading to fibrosis. abstract_id: PUBMED:16542379 Peripheral T lymphocytes from patients with early systemic sclerosis co-cultured with autologous fibroblasts undergo an oligoclonal expansion similar to that occurring in the skin. In recent years several reports have suggested that T cells may have a role in systemic sclerosis (SSc). The aim of our study was to investigate the dynamics of the T cell repertoire in early SSc disease, analysing a target organ, the skin, and the peripheral blood. To date, indeed, it is not clear if T cell expansions found in SSc reflect a general activation or result from specific antigen stimulation in the target organs. This is an important point to assess in order to characterize the role of T cells in the development of SSc. To address these questions we studied the T cell repertoire by CDR3 length analysis in skin biopsies and peripheral blood obtained from patients affected by SSc and we found that a skewed T cell repertoire was present only in the biopsies. In order to characterize more effectively the meaning of these data, we performed co-cultures using fibroblasts and peripheral blood mononuclear cells (PBMCs) obtained from SSc patients. These experiments showed that the same T cell expansions were detectable in the skin of SSc patients and in the cultures of PBMCs and autologous fibroblasts of the patients, but not in their peripheral blood. Taken together, these data suggest that fibroblasts trigger specific T cell expansions in the early phase of SSc.
abstract_id: PUBMED:32174912 T-Cell Proapoptotic and Antifibrotic Activity Against Autologous Skin Fibroblasts in vitro Is Associated With IL-17A Axis Upregulation in Systemic Sclerosis. Background: Systemic sclerosis (SSc) T cells can induce apoptosis of autologous skin fibroblasts in vitro. Th17 cells have been reported to increase in SSc patients, and interleukin-17A (IL-17A) has a profibrotic function. We used a system based on T-cell-autologous fibroblast co-cultures to further investigate a possible role of IL-17A in SSc. Methods: T cells from diffuse SSc patients were co-cultured with autologous skin fibroblasts. IL17A mRNA was assessed by real-time PCR in co-cultured and control T cells, while IL17RA, CXCL1, CCL2, CCL3, COL1A1, COL3A1, CTGF, TGFBR2, and SMAD3 mRNAs were assessed in co-cultured and control fibroblasts. In subset experiments, co-cultures and control cells were treated with either IL-17A or IL-17A plus anti-IL17 receptor monoclonal antibody (α-IL-17RA mAb). Chemokine and procollagen type I (PCI) production was further investigated at the protein level in cell culture supernatants by multiple suspension immunoassay and sandwich ELISA, respectively. Co-cultured and control fibroblasts were also stained with Annexin V and analyzed by flow cytometry. Results: T cell-fibroblast co-cultures overexpressed IL17A and IL17RA. Furthermore, co-cultured fibroblasts upregulated the IL-17A targets CXCL1, CCL2, and CCL3, while COL1A1, COL3A1, CTGF, and two key effectors of TGF-β signaling, TGFBR2 and SMAD3, were found downregulated. Consistently, chemokine concentrations were increased in co-culture supernatants, while PCI levels were reduced, especially after stimulation with ectopic IL-17A. Finally, simultaneous α-IL-17RA mAb treatment restored PCI levels and reduced fibroblast apoptosis in IL-17A-stimulated co-cultures. Conclusion: These data suggest that IL-17A upregulation might play a role in modulating T cell-mediated antifibrotic and proapoptotic effects in co-cultured autologous skin fibroblasts. abstract_id: PUBMED:19589616 Vgamma9+ gammadelta T cells in systemic sclerosis patients are numerically and functionally preserved and induce fibroblast apoptosis. Vdelta1-expressing gammadelta T cells are oligoclonally expanded in systemic sclerosis (SSc) (scleroderma) and thought to play an immunopathogenic role, whereas that of Vgamma9+ gammadelta T cells is unclear. In studies of 16 patients and 16 healthy controls (HCs) we found that, whereas the percent of Vdelta1+ gammadelta T cells was significantly elevated among the peripheral blood T cells in patients without radiographic evidence of interstitial lung disease (n=7), Vgamma9+ T cells were equally and persistently represented irrespective of pulmonary disease or cyclophosphamide treatment, at levels similar to healthy controls. Furthermore, ex vivo triggering of patient Vgamma9+ T cells with isopentenyl pyrophosphate plus interleukin-2 induced dose-dependent expansion, secretion of tumor necrosis factor alpha, and contact-dependent apoptosis of co-cultured fibroblasts, similarly to Vgamma9+ T cells of controls.
abstract_id: PUBMED:1969335 Functional and phenotypic analysis of T lymphocytes cloned from the skin of patients with systemic sclerosis. Activated T lymphocytes often accumulate in the lower dermis of patients with systemic sclerosis (scleroderma) and may play a role in the development of dermal fibrosis. We propagated and cloned these cells directly from skin biopsies in four of eight cases of early, untreated systemic sclerosis with diffuse scleroderma. The cloning frequency estimates were f = 0.20 and f = 0.48 for T cells derived from the skin of two patients versus f = 0.68 and f = 0.96 for autologous blood T lymphocytes. All but one of 24 skin-derived scleroderma clones were CD4+. Clonal analyses performed with CD4+ clones from patients and normal controls showed that all but one of the skin-derived clones synthesized either interferon-gamma (60%), glycosaminoglycan-stimulatory factor (26%) or both (9%) when induced in vitro by a mitogen, concanavalin A, but not by autologous dermal fibroblasts. In contrast, blood-derived clones had a different functional phenotype. All skin-derived clones produced tumour necrosis factor-alpha. Our results demonstrate that T lymphocytes obtained from the skin of patients with systemic sclerosis synthesized cytokines which could modulate functions of human dermal fibroblasts. abstract_id: PUBMED:28987475 Current perspectives on the role of CD8+ T cells in systemic sclerosis. Despite long-standing recognition of the importance of T cells in systemic sclerosis (SSc; scleroderma), the role of CD8+ T cells in disease pathogenesis has not been well studied. Our work has shown that over-production of the pro-fibrotic cytokine IL-13 by peripheral blood effector/memory CD8+ T cells is critical for predisposing patients to more severe forms of cutaneous fibrosis. Moreover, IL-13-producing CD8+ T cells induce a pro-fibrotic phenotype in normal and SSc dermal fibroblasts, and exhibit a strong cytotoxic activity ex vivo. We also found that CD8+ T cells are predominantly abundant in the skin lesions of patients in the early stages of diffuse cutaneous (dc)SSc compared to late-stage disease patients. Isolation of CD8+ T cells from the lesional skin of early active dcSSc patients established that they are skin-resident, express cytolytic molecules and co-express extremely high levels of IL-13 and IFNγ. Other recent studies corroborate these findings and together strongly suggest that CD8+ T cells contribute to SSc pathogenesis through the production of high levels of cytokines with pro-inflammatory and pro-fibrotic function as well as by exhibiting a cytotoxic activity. abstract_id: PUBMED:6218963 Immune response to connective tissue components of the basement membrane. The immune response to connective tissue components of basement membrane (type IV collagen and laminin) and to interstitial collagen (type I) has been examined in human and murine systems. We also examined the role that immunologic sensitization to autologous connective tissue components might play in inducing an inflammatory response resulting in pathologic sequelae. Mice receiving a single subcutaneous injection of 5 micrograms type IV or type I murine collagens, or murine laminin, in complete Freund's adjuvant mount a delayed-type hypersensitivity response characterized by a mononuclear cell infiltrate when challenged in the footpad with the sensitizing antigen. Cell-mediated immunity to these connective tissue antigens can be transferred to normal syngeneic mice with sensitized T-lymphocytes.
In addition, repeated immunizations with these homologous connective tissue components elicit antibody responses in mice. Our data demonstrate the immunogenic nature of types IV and I collagen, and of laminin, in a syngeneic murine model. We have demonstrated autoantibodies to the basement membrane and interstitial collagens in the sera of patients with scleroderma (systemic sclerosis); ELISA ratios correlate directly with the extent of pulmonary fibrosis in these patients. Anti-type IV collagen autoantibodies were found to be primarily IgM and anti-type I collagen antibodies, primarily IgG. An antibody response to autologous connective tissue antigens could lead to complement activation, immune complex formation, and deposition of the complexes along vascular endothelium with recruitment of blood monocytes in situ, mirroring the early scleroderma lesion (perivascular mononuclear cell infiltrates). In vitro, we examined the role of human peripheral blood mononuclear cells in the activation of fibroblasts. Adherent human blood monocytes release mediators which stimulate fibroblast proliferation and collagen deposition. A model is presented for the induction of immunity to autologous connective tissue components, leading to mononuclear cell inflammation, fibroblast activation and fibrosis. Selective immunity to basement membrane collagens may influence the clinical expression of diffuse connective tissue syndromes such as scleroderma (systemic sclerosis). abstract_id: PUBMED:29985736 Immunotherapy of systemic sclerosis. Systemic sclerosis (SSc) is a chronic systemic disease characterized by microvasculopathy, immune activation, and extensive collagen deposition. Microvasculopathy and immune activation occur very early in the disease process. Evidence from animal models and in vitro studies indicates that T-cells and B-cells activate fibroblasts to produce collagen. Traditional immunosuppressants, cyclophosphamide (CyP), methotrexate (MTX), and more recently mycophenolate mofetil (MMF), may prove more effective if used very early in the disease course. These drugs showed some benefit in skin (MTX, CyP, MMF) and lung function (CyP, MMF). Biologicals, such as intravenous immunoglobulin (IVIg), belimumab (Beli), tocilizumab (TCZ), abatacept (Aba), rituximab (RTX) and fresolimumab (Fresu), appear promising as they exhibited some benefit in skin (IVIg, Beli, TCZ, Aba, RTX, Fresu), hand function (IVIg), and joints (IVIg, TCZ, Aba). Autologous stem cell transplantation showed the best therapeutic efficacy on skin and internal organs, and looks very promising, as modification of transplantation immunosuppression is decreasing the early high mortality. abstract_id: PUBMED:28103916 TIM-1 defines a human regulatory B cell population that is altered in frequency and function in systemic sclerosis patients. Background: Systemic sclerosis (SSc) is a systemic autoimmune disease characterized by excessive production of extracellular matrix by fibroblasts in the skin and internal organs. Although Th2 cells have been implicated in fibroblast stimulation, hyperactivated B cells may also play an important role. Regulatory B cells (Bregs) are cells capable of inhibiting inflammatory responses and controlling autoimmune diseases. Although many Breg populations have in common the ability to produce high amounts of IL-10, a unique surface marker defining most human Bregs is lacking.
It has been described in mice that T cell Ig and mucin domain protein 1 (TIM-1) is an inclusive marker for Bregs, and that TIM-1+ B cells are able to prevent the development of autoimmunity. The aim of this work was to evaluate TIM-1 as a marker for human IL-10+ Bregs, and to determine whether TIM-1+ B cells are defective in SSc patients. Methods: SSc patients (n = 39) and 53 healthy subjects were recruited. TIM-1 and IL-10 expression was assessed in resting or activated peripheral blood CD19+ B cells by flow cytometry. The regulatory function of TIM-1+ or activated B cells from SSc patients and healthy subjects was assessed in autologous and allogenic co-cultures with CD4+ T cells, where T cell proliferation and IFN-γ, IL-17, TNF-α and IL-4 production by T cells was measured by flow cytometry. Results: TIM-1 and IL-10 were preferentially expressed in transitional B cells, but were upregulated in naïve and memory B cells upon stimulation. The frequency of transitional TIM-1+ IL-10+ B cells was significantly decreased in SSc patients compared to healthy controls. In addition, activated B cells from SSc patients induced stronger allogenic Th1 and Th2 responses than activated B cells from healthy controls. Finally, TIM-1+ B cells, including transitional and non-transitional cells, exhibited a higher CD4+ T cell suppressive ability than TIM-1- B cells in healthy controls, but not in SSc patients. Conclusions: TIM-1 is a unique marker for the identification of a human IL-10+ Breg subpopulation which is partially superimposed with transitional B cells. Alterations in TIM-1+ B cells could contribute to the development of autoimmune diseases such as SSc. abstract_id: PUBMED:11083268 Increased interleukin-17 production in patients with systemic sclerosis. Objective: To determine the role of a novel T cell-derived cytokine, interleukin-17 (IL-17), which activates fibroblasts and endothelial cells, in the pathogenesis of systemic sclerosis (SSc). Methods: We examined IL-17 production by lymphocytes from the peripheral blood (PBL) and from fibrotic lesions of the skin and lungs of SSc patients by reverse transcriptase-polymerase chain reaction and enzyme-linked immunosorbent assay. We also studied the effect of IL-17 on the proliferation of fibroblasts and on the production of cytokines and the expression of adhesion molecules on endothelial cells in vitro. Results: IL-17 messenger RNA was expressed in unstimulated PBL and lymphocytes from the skin and lungs of SSc patients, but not in similar samples from patients with systemic lupus erythematosus (SLE) or polymyositis/dermatomyositis or from healthy donors. IL-17 levels were also increased in the serum of SSc patients, but not in that of SLE patients or healthy donors. IL-17 overproduction was significantly related to the early stage of SSc, but not to other clinical features of SSc. Moreover, IL-17 enhanced the proliferation of fibroblasts and induced the expression of adhesion molecules and IL-1 production in endothelial cells in vitro. Conclusion: IL-17 is overproduced by T cells from the peripheral blood and fibrotic lesions of the skin and lungs in SSc patients. These results suggest that IL-17 overproduction plays an important role in the pathogenesis of SSc, especially in the early stages of the disease, by inducing the proliferation of fibroblasts and the production of IL-1 and the expression of adhesion molecules on endothelial cells. 
Answer: The studies provided suggest that peripheral T cells from patients with early systemic sclerosis (SSc) indeed have the capacity to kill autologous fibroblasts in co-culture. This T-cell response may play a protective role, as it appears to be an attempt by the immune system to eliminate fibroblasts that are potentially expressing (auto)antigens. In the co-culture experiments, T-cell clonotypes that expanded were found to promote the apoptosis of autologous fibroblasts, with fibroblasts up-regulating Fas and undergoing apoptosis coupled with the expression of Fas ligand (FasL) on CD4(+) T cells. The addition of a blocking anti-Fas antibody to the co-cultures resulted in a marked reduction of fibroblast apoptosis, suggesting a critical role of Fas/FasL engagement in mediating this process (PUBMED:20418226). However, the overall outcome of the T-cell response may contribute to sustaining inflammatory loops leading to fibrosis, despite the initial protective intent. The presence of cytokines known to promote fibrosis in SSc, such as TGF-beta, IL-1beta, IL-6, and IL-8, in the co-culture supernatants indicates that the T-cell response could also be contributing to the fibrotic process (PUBMED:20418226). Furthermore, other studies have shown that T cells from SSc patients can induce apoptosis of autologous skin fibroblasts in vitro, and that this proapoptotic and antifibrotic activity is associated with the upregulation of the IL-17A axis (PUBMED:32174912). This suggests that while T cells may have a role in controlling fibroblast proliferation and fibrosis, the complex interplay of cytokines and immune responses can also contribute to disease progression. In summary, the T cell response in early systemic sclerosis may have a dual role: it may be protective by attempting to eliminate fibroblasts expressing (auto)antigens, but it may also inadvertently contribute to the pathogenesis of fibrosis through the production of pro-fibrotic cytokines.
Instruction: Association between physician compensation methods and delivery of guideline-concordant STD care: is there a link? Abstracts: abstract_id: PUBMED:16044979 Association between physician compensation methods and delivery of guideline-concordant STD care: is there a link? Objective: To examine the association between primary care physician (PCP) reimbursement and delivery of sexually transmitted disease (STD) services. Study Design: Cross-sectional sample of PCPs contracted with Medicaid managed care organizations in 2002 in 8 California counties with the highest rates of Medicaid enrollment and chlamydia cases. Methods: The association between physician reimbursement methods and physician practices in delivery of STD services was examined in multiple logistic regression models, controlling for a number of potential confounders. Results: Evidence of an association between reimbursement based on management of utilization and the PCP practice of providing chlamydia drugs for the partner's treatment was most apparent. In adjusted analyses, physicians reimbursed with capitation and a financial incentive for management of utilization (odds ratio [OR] = 1.63) or salary and a financial incentive for management of utilization (OR = 2.63) were more likely than those reimbursed under other methods to prescribe chlamydia drugs for the partner. However, PCPs least often reported they annually screened females aged 15-19 years for chlamydia (OR = 0.63) if reimbursed under salary and a financial incentive for productivity, or screened females aged 20-25 years (OR = 0.43) if reimbursed under salary and a financial incentive for financial performance. Conclusion: Some physician reimbursement methods may influence care delivery, but reimbursement is not consistently associated with how physicians deliver STD care. Interventions to encourage physicians to consistently provide guideline-concordant care despite conflicting financial incentives can maintain quality of care. In addition, incentives that may improve guideline-concordant care should be strengthened. abstract_id: PUBMED:31179800 The Association Between Primary Care Physician Compensation and Patterns of Care Delivery, 2012-2015. As health systems seek to incentivize physicians to deliver high-value care, the relationship between physician compensation and health care delivery is an important knowledge gap. To examine physician compensation nationally and its relationship with care delivery, we examined 2012-2015 cross-sectional data on ambulatory primary care physician visits from the National Ambulatory Medical Care Survey. Among 175 762 office visits with 3826 primary care physicians, 15.4% of primary care physicians reported salary-based, 4.5% productivity-based, and 12.9% "mixed" compensation, while 61.4% were practice owners. After adjustment, delivery of out-of-visit/office care was more common for practice owners and "mixed" compensation primary care physicians, while there was little association between compensation type and rates of high- or low-value care delivery. Despite early health reform efforts, the overall landscape of physician compensation has remained strongly tethered to fee-for-service. The lack of consistent association between compensation and care delivery raises questions about the potential impact of payment reform on individual physicians' behavior. 
abstract_id: PUBMED:33880062 Disparities in Guideline-Concordant Initial Systemic Treatment in Women with HER2-Negative Metastatic Breast Cancer: A SEER-Medicare Analysis. Background: Data on guideline-concordant initial systemic treatment among women with HER2-negative metastatic breast cancer (MBC) are limited. We determined the proportion of women with HER2-negative MBC who received guideline-concordant treatment and the extent to which independent variables explained differences in guideline-concordant treatment by hormone receptor (HR) status. Methods: We conducted a retrospective cohort study using the SEER-Medicare database. We included women age >65 years diagnosed with HER2-negative MBC during 2010-2013. We used the National Comprehensive Cancer Network treatment guidelines to determine guideline-concordant initial treatment within the first 6 months of a cancer diagnosis. We conducted a multivariable logistic regression to identify the significant predictors of guideline-concordant treatment and a non-linear decomposition method to examine disparities by HR status. Results: Among 1089 eligible women, 72.3% received guideline-concordant treatment. Compared to women who did not receive guideline-concordant treatment, women who received guideline-concordant treatment were more likely to be comparatively older (p<0.05), married (p=0.0171), resided in areas with higher proportion of people age ≥25 years with at least four years of college education, and had positive HR status (p<0.0001). Approximately 8% of the disparity in guideline-concordant treatment by HR status was explained by their observed characteristics. Need-related factors explained the highest proportion (66.9%) of the disparity. Conclusion: Our findings indicate improvement of care for older women, who are single/divorced, have negative HR status, and who live in areas with lower education levels. Unexplained disparities in guideline-concordant treatment by HR status can be attributed to patient preferences for treatment, physician-level factors, and perceptions. abstract_id: PUBMED:38340647 Survival rates in Hispanic/Latinx subpopulations with cervical cancer associated with disparities in guideline-concordant care. Background: Failure to deliver guideline-concordant treatment may contribute to disparities among Hispanic/Latinx cervical cancer patients. This study investigated the association between survival rates in Hispanic/Latinx subpopulations and the provision of guideline-concordant care. Methods: We analyzed patients with primary cervical cancer from 2004 to 2019 (National Cancer Database). We developed nine quality metrics based on FIGO staging (2009). Clinical and demographic covariates were analyzed using Chi-squared tests. Adjusted associations between receipt of guideline-concordant care and races and ethnicities were analyzed using multivariable marginal Poisson regression models. Adjusted Cox proportional hazard models were utilized to evaluate survival probability. Results: A total of 95,589 patients were included. Hispanic/Latinx and Non-Hispanic Black (NHB) populations were less likely to receive guideline-concordant care in four and five out of nine quality metrics, respectively. Nonetheless, the Hispanic/Latinx group exhibited better survival outcomes in seven of nine quality metrics. Compared to Mexican patients, Cuban patients were 1.17 times as likely to receive timely initiation of treatment in early-stage disease (RR 1.17, 95% CI 1.04-1.37, p < 0.001).
Puerto Rican and Dominican patients were, respectively, 1.16 (RR 1.16, 95% CI 1.07-1.27, p < 0.001) and 1.19 (RR 1.19, 95% CI 1.04-1.37, p > 0.01) times as likely to undergo timely initiation of treatment in early-stage disease. Patients of South or Central American (RR 1.18, 95% CI 1.10-1.27, p < 0.001) origin were more likely to undergo timely initiation of treatment in locally advanced disease. Conclusion: Significant differences in survival were identified among our cohort despite the receipt of guideline concordant care, with notably higher survival among Hispanic/Latinx populations. abstract_id: PUBMED:34099411 Association of guideline-concordant initial systemic treatment with clinical and economic outcomes among older women with metastatic breast cancer in the United States. Purpose: We examined guideline-concordant initial systemic treatment among women with metastatic breast cancer, its predictors, and if guideline-concordant treatment was associated with mortality, healthcare utilization and Medicare expenditures. Methods: This retrospective observational cohort study was conducted using the Surveillance, Epidemiology, End Results-Medicare linked database. Women aged 66-90 years diagnosed with metastatic breast cancer during 2010-2013 (N = 1282) were included. The National Comprehensive Cancer Network treatment guidelines were used to determine the guideline-concordant initial systemic treatment following cancer diagnosis. A logistic regression analysis was conducted to examine significant predictors of guideline-concordant treatment. Generalized linear regressions were used to examine the association between guideline-concordant treatment and healthcare utilization and average monthly Medicare expenditures. Results: About 74% of the study cohort received guideline-concordant initial systemic treatment. Women who received guideline-concordant treatment were significantly more likely to be comparatively younger (p < 0.05), were married/partnered (p = 0.0038), had HER2 positive tumors, and had good performance status. Adjusted hazards ratios for all-cause (2.364, p < 0.0001) and breast-cancer specific mortality (2.179, p < 0.0001) were higher for women who did not receive guideline-concordant treatment. Rates of healthcare utilization were also higher for women not receiving guideline-concordant treatment. Average monthly Medicare expenditures were 100.4% higher (95% confidence interval: 77.3%-126.5%) for women who did not receive guideline-concordant treatment compared to those who received guideline-concordant treatment (p < 0.0001). Conclusion: One fourth of the study cohort did not receive guideline-concordant initial systemic treatment. Guideline-concordant initial treatment was associated with reduced mortality, and lower healthcare utilization and Medicare expenditures in women with metastatic breast cancer.
Methods: The effect of treatment according to German S3 guidelines and the former S2k guideline on overall survival (OS) and recurrence-free survival (RFS) was evaluated in a cohort of 293 high-grade endometrial cancer patients. Results: Treatment concordant with the S3 guideline significantly improved OS (HR 0.623, CI 0.420-0.923, p = 0.018) and RFS (HR 0.578, CI 0.387-0.863, p = 0.007). Treatment concordant with the S2k guideline did not result in a significantly higher OS (HR 0.783, CI 0.465-1.316, p = 0.335) or RFS (HR 0.741, CI 0.347-1.740, p = 0.242). Conclusion: Therapy according to the German S3 guideline improved OS and RFS in univariate as well as multivariate analysis in this cohort of high-grade endometrial cancer patients. abstract_id: PUBMED:38205942 Association of area-level mortgage denial and guideline-concordant non-small-cell lung cancer care and outcomes in the United States. Background: Racial and socioeconomic disparities in receipt of care for non-small-cell lung cancer (NSCLC) are well described. However, no previous studies have evaluated the association between mortgage denial rates and receipt of timely and guideline-concordant care for NSCLC and patient outcomes. Methods: We identified individuals ≥18 years diagnosed with NSCLC between 2014 and 2019 from the National Cancer Database. Using the Home Mortgage Disclosure Act database, we calculated the proportion of denied home loans to total loans at the zip-code level and categorized them into quintiles. Our outcomes included receipt of guideline-concordant care based on clinical and pathologic stage at diagnosis and the National Comprehensive Cancer Network guidelines, time from surgery to chemotherapy initiation, and overall survival. Results: Of the 629,288 individuals diagnosed with NSCLC (median age 69; IQR 61-76 years, 49.1% female), 47.8% did not receive guideline-concordant care. Residing in areas with higher mortgage denial rates and lower income was associated with worse guideline-concordant care overall (aRR = 1.28; 95% CI = 1.25-1.32) and for each cancer treatment modality, worse receipt of timely chemotherapy (aHR = 1.14; 95% CI = 1.11-1.17) and worse overall survival (aHR = 1.21; 95% CI = 1.19-1.22), compared with residing in areas with the lowest mortgage denial rate and highest income. Conclusions: Area-level mortgage denial rate was associated with worse receipt of timely and guideline-concordant NSCLC care and survival. This highlights the critical need to understand and address systemic practices, such as mortgage denial, that limit access to resources and are associated with worse access to quality cancer care and outcomes. abstract_id: PUBMED:35122751 The Relative Survival Impact of Guideline-Concordant Clinical Staging and Stage-Appropriate Treatment of Potentially Curable Non-Small Cell Lung Cancer. Background: Lung cancer management guidelines strive to improve outcomes. Theoretically, thorough staging promotes optimal treatment selection. We examined the association between guideline-concordant invasive mediastinal nodal staging, guideline-concordant treatment, and non-small cell lung cancer survival. Research Question: What is the current practice of invasive mediastinal nodal staging for patients with lung cancer in a structured multidisciplinary care environment? Is guideline-concordant staging associated with guideline-concordant treatment? How do they relate to survival? 
Study Design And Methods: We evaluated patients with nonmetastatic non-small cell lung cancer diagnosed from 2014 through 2019 in the Multidisciplinary Thoracic Oncology Program of the Baptist Cancer Center, Memphis, Tennessee. We examined patterns of mediastinal nodal staging and stage-stratified treatment, grouping patients into cohorts with guideline-concordant staging alone, guideline-concordant treatment alone, both, or neither. We evaluated overall survival with Kaplan-Meier curves and Cox proportional hazards models. Results: Of 882 patients, 456 (52%) received any invasive mediastinal staging. Seventy-four percent received guideline-concordant staging; guideline-discordant staging decreased from 34% in 2014 to 18% in 2019 (P < .0001). Recipients of guideline-concordant staging were more likely to receive guideline-concordant treatment (83% vs 66%; P < .0001). Sixty-one percent received both guideline-concordant invasive mediastinal staging and guideline-concordant treatment; 13% received guideline-concordant staging alone; 17% received guideline-concordant treatment alone; and 9% received neither. Survival was greatest in patients who received both (adjusted hazard ratio [aHR], 0.41; 95% CI, 0.26-0.63), followed by those who received guideline-concordant treatment alone (aHR, 0.60; 95% CI, 0.36-0.99), and those who received guideline-concordant staging alone (aHR, 0.64; 95% CI, 0.37-1.09) compared with neither (P < .0001, log-rank test). Interpretation: Levels of guideline-concordant staging were high, were rising, and were associated with guideline-concordant treatment selection in this multidisciplinary care cohort. Guideline-concordant staging and guideline-concordant treatment were complementary in their association with improved survival, supporting the connection between these two processes and lung cancer outcomes. abstract_id: PUBMED:21123412 Association of general medical and psychiatric comorbidities with receipt of guideline-concordant care for depression. Objective: This study described the association of general medical and psychiatric comorbidities with receipt of guideline-concordant depression care. Methods: Pharmacy, medical, and behavioral claims and enrollment data from OptumHealth in 2003-2006 were linked for 1,835 adults with a new depression diagnosis or a new antidepressant fill. Multiple logistic regression was used to estimate the association of comorbidities with receipt of guideline-concordant pharmacotherapy, guideline-concordant psychotherapy, and any guideline-concordant therapy. Results: Eleven percent of patients received guideline-concordant psychotherapy; 23%, guideline-concordant pharmacotherapy; and 33%, any guideline-concordant therapy. Having a psychiatric but no medical comorbidity was associated with higher rates of guideline-concordant psychotherapy and overall guideline concordance; conversely, having a general medical but no psychiatric comorbidity was associated with lower rates of guideline-concordant psychotherapy. Comorbidities were associated with the probability of receiving any guideline-concordant therapy, but they were not associated with improved guideline concordance among patients already receiving therapy. Conclusions: Patients with general medical comorbidities may not receive psychotherapy referrals, perhaps because of well-established relationships with their primary care providers. abstract_id: PUBMED:37141704 Insurance-Based Disparities in Guideline-Concordant Thyroid Cancer Care in the Era of De-escalation.
Introduction: Prior studies have demonstrated insurance-based disparities in the treatment of well-differentiated thyroid cancer. However, it remains unclear whether these disparities have persisted in the era of the 2015 American Thyroid Association (ATA) management guidelines. The goal of this study was to assess whether insurance type is associated with the receipt of guideline-concordant and timely thyroid cancer treatment in a modern cohort. Methods: Patients diagnosed with well-differentiated thyroid cancer between 2016 and 2019 were identified from the National Cancer Database. Appropriateness of surgical and radioactive iodine treatment (RAI) was determined based on the 2015 ATA guidelines. Multivariable logistic regression and Cox proportional hazard regression analyses, stratified at age 65, were used to evaluate the associations between insurance type and appropriateness and timeliness of the treatment. Results: 125,827 patients were included (private = 71%, Medicare = 19%, Medicaid = 10%). Compared to privately insured patients, patients with Medicaid more frequently presented with tumors >4 cm in size (11% versus 8%, P < 0.001) and regional metastases (29% versus 27%, P < 0.001). However, patients with Medicaid were also less likely to undergo appropriate surgical treatment (odds ratio 0.69, P < 0.001), less likely to undergo surgery within 90 d of diagnosis (hazard ratio 0.80, P < 0.001), and more likely to be undertreated with RAI (odds ratio 1.29, P < 0.001). There were no differences in the likelihood of guideline-concordant surgical or medical treatment by insurance type in patients ≥65 y old. Conclusions: In the era of the 2015 ATA guidelines, patients with Medicaid remain less likely to receive guideline-concordant, timely surgery and more likely to be undertreated with RAI compared to privately insured patients. Answer: The association between physician compensation methods and the delivery of guideline-concordant STD care has been examined, and evidence suggests that there may be a link. A study found that primary care physicians (PCPs) reimbursed with capitation and a financial incentive for management of utilization, or salary and a financial incentive for management of utilization, were more likely than those reimbursed under other methods to prescribe chlamydia drugs for the partner (PUBMED:16044979). However, the same study also reported that PCPs were less likely to annually screen females aged 15-19 years for chlamydia if reimbursed under salary and a financial incentive for productivity, or screen females aged 20-25 years if reimbursed under salary and a financial incentive for financial performance (PUBMED:16044979). This indicates that while some physician reimbursement methods may influence care delivery, the association is not consistent across all aspects of STD care. Furthermore, another study examining the relationship between primary care physician compensation and health care delivery found little association between compensation type and rates of high- or low-value care delivery (PUBMED:31179800). This suggests that the potential impact of payment reform on individual physicians' behavior may not be straightforward and that the overall landscape of physician compensation has remained strongly tethered to fee-for-service.
In conclusion, while there is some evidence to suggest that physician compensation methods can influence the delivery of guideline-concordant STD care, the relationship is complex and not consistently observed across different aspects of care. Additional interventions and incentives may be necessary to encourage physicians to consistently provide guideline-concordant care despite the presence of conflicting financial incentives (PUBMED:16044979).
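The compensation and guideline-concordance studies above all rely on the same basic analytic device: a multivariable logistic regression that yields adjusted odds ratios (ORs) with 95% confidence intervals. The Python sketch below (using statsmodels) is purely illustrative and is not the authors' code; the compensation categories, confounders, and synthetic data are hypothetical stand-ins for the kind of physician-level dataset described in PUBMED:16044979.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic, hypothetical data: one row per physician.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "compensation": rng.choice(
        ["fee_for_service", "capitation_incentive", "salary_incentive"], size=n),
    "practice_size": rng.integers(1, 20, size=n),   # hypothetical confounder
    "urban": rng.integers(0, 2, size=n),            # hypothetical confounder
})
# Hypothetical outcome: 1 = physician reports guideline-concordant chlamydia care.
lin = -0.5 + 0.4 * (df["compensation"] == "capitation_incentive") + 0.2 * df["urban"]
df["concordant"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Multivariable logistic regression with fee-for-service as the reference category.
model = smf.logit(
    "concordant ~ C(compensation, Treatment('fee_for_service')) + practice_size + urban",
    data=df).fit(disp=False)

ci = np.exp(model.conf_int())                 # 95% CIs on the odds-ratio scale
summary = pd.DataFrame({"OR": np.exp(model.params),
                        "CI_low": ci[0], "CI_high": ci[1]})
print(summary.round(2))                       # adjusted ORs per covariate

In such output, an adjusted OR above 1 for a compensation category whose confidence interval excludes 1 would be read the same way as the OR = 1.63 and OR = 2.63 figures quoted from the study above.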
Instruction: Does cesarean section have an impact on the successful initiation of breastfeeding in Saudi Arabia? Abstracts: abstract_id: PUBMED:28469519 Factors Influencing Exclusive Breastfeeding in Tabuk, Saudi Arabia. Background: Breast milk contains all the nutrients infants need for their first 6 months of life. However, only a minority of Saudi Arabian mothers exclusively breastfeed, so the influencing factors must be examined to encourage more to do so. The study aimed to determine the prevalence of exclusive breastfeeding at 6 months and its associated factors in Tabuk, North West Saudi Arabia. Methods: A total of 589 mothers of healthy infants aged between 6 and 24 months were interviewed while attending Well-Baby Clinics within 5 primary health care centers. Interviews deployed a structured questionnaire to collect sociodemographic information and detailed data concerning breastfeeding practices. A logistic regression analysis was then performed on the data to identify the factors independently associated with exclusive breastfeeding practice for infants at 6 months. Results: Exclusive breastfeeding was practiced by 31.4% of mothers for the first 6 months of their infant's life. The logistic regressions indicated that exclusive breastfeeding at 6 months was less likely to be practiced by working mothers, Saudi nationals, and for babies born via cesarean delivery or at low birth weights. Conversely, the mother's awareness of the recommended exclusive breastfeeding duration was positively associated with exclusive breastfeeding. Conclusions: Programs promoting 6 months of exclusive breastfeeding should target high-risk groups. Two factors identified by this study are modifiable: working mothers and mothers' awareness of the exclusive breastfeeding duration recommendation. Strategies to improve exclusive breastfeeding rates should therefore focus on workplace facilities and increasing awareness of the exclusive breastfeeding recommendation. abstract_id: PUBMED:27446231 Factors associated with the initiation of breastfeeding within the first 48 hours of life in Tabuk, Saudi Arabia. Background: The identification of the factors most closely associated with the initiation of breastfeeding is a vital first step in designing strategies to promote breastfeeding. The study therefore aimed to identify the factors that may be associated with the initiation of breastfeeding in the first 48 h after giving birth among mothers in Tabuk, Saudi Arabia. Methods: This cross-sectional study was based on a sample of 671 mothers of infants aged up to 24 months at five primary healthcare centers between May and September 2015. A structured questionnaire was used to gather general sociodemographic data along with more detailed information on breastfeeding. A logistic regression analysis was then performed to establish the factors which were independently associated with the mothers' initiation of breastfeeding. Results: Breastfeeding was initiated by 92.7 % of mothers within the first 48 h after childbirth. Breastfeeding initiation within the first 48 h of childbirth was lower in women who gave birth by caesarean section (Adjusted Odds Ratio [AdjOR] 0.31, 95 % Confidence Interval [CI] 0.17, 0.57), and had preterm (AdjOR 0.29, 95 % CI 0.12, 0.70) or low birth weight infants (AdjOR 0.35, 95 % CI 0.17, 0.75). Conclusions: Each variable presents an important barrier to breastfeeding initiation. 
Suitable hospital policies and staff training are needed to support mothers in quickly initiating breastfeeding, and to discourage the use of infant formula in hospital. To encourage higher rates of exclusive breastfeeding in Saudi Arabia, additional support is required for mothers at a higher risk of failing to initiate breastfeeding in a timely manner. abstract_id: PUBMED:29301353 Factors Associated with Not Breastfeeding and Delaying the Early Initiation of Breastfeeding in Mecca Region, Saudi Arabia. The objective of the study was to find the determinants related to not breastfeeding (BF) and others related to the delay in the early initiation of BF in the Mecca region, Saudi Arabia. A cross-sectional study in the Maternity and Children Hospital and primary healthcare centers was performed. A questionnaire was administered by dietitians to 814 asymptomatic Saudi mothers. Determinants related to not BF and the delay in the early initiation of BF were determined by binary logistic regression, and the odds ratio (OR) and 95% confidence interval (CI) were determined. Significant factors associated with not BF were not rooming-in infants in the mother's room (OR: 2.37; 95% CI: 1.66-3.41) and using a pacifier (OR: 1.62; 95% CI: 1.13-2.33). The most significant determinant of the early initiation of BF was the initiation of bottle feeding (OR: 18.16; 95% CI: 10.51-31.4), followed by not rooming-in infants in the mother's room (OR: 2.2; 95% CI: 1.52-3.18), initiation of partial feeding (OR: 1.89; 95% CI: 1.3-2.74), uninformed mothers regarding the importance of BF (OR: 1.56; 95% CI: 1.04-2.35), and cesarean sections (OR: 1.42; 95% CI: 1.02-1.98). Risk factors affecting BF and the early initiation of BF in Mecca City should be highlighted in national campaigns to increase mothers' awareness and promote BF practice. abstract_id: PUBMED:25399221 Does cesarean section have an impact on the successful initiation of breastfeeding in Saudi Arabia? Objectives: To investigate whether the type of birth influenced breastfeeding outcomes. Methods: This study used a quantitative descriptive correlational design in a sample of 60 primigravida mothers. Participants were recruited over a 2-month period from June to July 2011 in the postnatal ward at King AbdulAziz University Hospital (KAUH) in Jeddah city, Kingdom of Saudi Arabia (KSA). Results: The results of the study indicated that women who gave birth vaginally were more likely to breastfeed within the first hour, and at 24 hours after birth than those who had a cesarean section. The mothers who had cesarean section stated that pain interfered with their ability to hold, breastfeed, and care for their baby. Healthy term babies at KAUH are routinely separated from their mothers, and given infant formula supplementation. Conclusion: The findings in this study reinforce the importance of appropriate pain management, keeping well babies and their mothers together 24 hours a day, and avoidance of non-medically indicated formula supplementation.
An Arabic questionnaire was generated and completed by women. Results: Of the 848 participating women, 61.5% fell within the age groups between 21-30 years. Most women (49.8%) have a college or higher education. Only 55.8% of women have previously received breastfeeding education, and approximately 54.2% of women received breastfeeding education when they were in the hospital for delivery. Approximately 48.5% preferred mixed feeding, followed by exclusive breastfeeding (36.8%). The most important reasons for discontinuing breastfeeding were: baby was ill (38.2%), a health professional asked mother not to breastfeed for medical reasons (61.1%), mother was sick or needed to take medicine (71.8%), and finally the use of contraception that interferes with milk supply (32.8%). Conclusion: We found that breastfeeding during the first few months of life was not optimal. The percentage of exclusive breastfeeding was low despite high level of their education. Mixed feeding was the predominant mode of feeding, which raised the need for targeted breastfeeding education. Health care providers should be encouraged to continuously educate women on the benefits of breastfeeding. abstract_id: PUBMED:34765349 Clinical Characteristics of Newborn Infants Delivered to Pregnant Women With Laboratory-Confirmed COVID-19: A Single-Center Experience From Saudi Arabia. Introduction In Saudi Arabia and countries around the world, clinical health practice has been transformed by the coronavirus 2019 (COVID-19) pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). During the early days of the pandemic, it was a major challenge to care for pregnant women with laboratory-confirmed COVID-19 and their newborn infants. In this article, we share our experience in the management of newborn infants delivered to mothers with laboratory-confirmed COVID-19. Methods A prospective single-center observational study was conducted at King Abdulaziz Medical City in Jeddah, National Guard Health Affairs, Saudi Arabia. Data collection started in March 2020 and was completed in October 2020. The inclusion criteria included mothers with laboratory-confirmed COVID-19 and their newborn infants. Results A total of 45 pregnant women with polymerase chain reaction (PCR)-confirmed COVID-19 were included in the study. Their mean age was 30.23±5.92 years. The mode of delivery was spontaneous vaginal delivery in 27 women (60%), cesarean section in 15 women (33.3%), and assisted vaginal delivery in three women (6.7%). Three mothers (6.7%) required intensive care unit admission. A total of 45 babies were born and 25 were females (55.6%), 20 males (44.4%). None of the babies had specific symptoms related to COVID-19. All babies were tested negative on the two COVID-19 nasopharyngeal swabs. Babies were initially admitted to the NICU and one baby required prolonged NICU stay due to extreme prematurity (23 weeks), one baby died due to hypoxic-ischemic encephalopathy and respiratory distress syndrome, and the remaining babies were discharged home in a stable condition. Conclusion Our experience suggests that maternal outcomes are generally favorable and no difference between vaginal and cesarean delivery in the risk of virus transmission. With strict implementation of infection prevention measures, mother-to-infant transmission is very unlikely. Early bathing of the newborn infant is preferred to reduce the risk of transmission of infection to newborn infants and the hospital staff. 
Breastfeeding is safe if performed under strict infection prevention measures. abstract_id: PUBMED:26711837 Predictors of impaired breastfeeding initiation and maintenance in a diverse sample: what is important? Purpose: This study aimed to investigate socio-demographic, medical and psychological factors that have an impact on breastfeeding. Methods: Questionnaires were administered to 330 women prenatally (TI third trimester) and postpartum (TII 3-4 days, TIII 4 months). Medical data were collected from the hospital records. Self-reported data on initiation and maintenance of breastfeeding was collected simultaneously. Primary endpoint was breastfeeding initiation and maintenance. Data analyses were performed using Spearman's ρ correlations between breastfeeding and other study variables and generalized multiple ordinal logistic regression analysis. Results: Neonatal admission to the NICU, high BMI, cesarean section, difficulties with breastfeeding initiation and high maternal state anxiety were the strongest predictors of impaired breastfeeding initiation, explaining together 50 % of variance. After 4 months, the strongest predictors of impaired maintenance of breastfeeding were maternal smoking, a high BMI and a history of postpartum anxiety disorder, explaining 30 % of variance. Conclusions: Successful initiation and maintenance of breast feeding is a multifactorial process. Our results underline the need of interdisciplinary approaches to optimise breastfeeding outcomes by demonstrating the equality of medical and psychological variables. Whereas practices on maternity wards are crucial for optimal initiation, continuous lifestyle modifying and supporting approaches are essential for breastfeeding maintenance. Healthcare providers can also significantly influence breastfeeding initiation and maintenance by counselling on the importance of maternal BMI. abstract_id: PUBMED:37893873 Multiple Cesarean Section Outcomes and Complications: A Retrospective Study in Jazan, Saudi Arabia. Background: Given the increase in the rate of cesarean sections (CSs) globally and in Saudi Arabia, this study was conducted to assess the maternal and perinatal complications after repeat cesarean sections in the studied population. Methods: This retrospective study was conducted by reviewing the records of all women who underwent CSs between January and July 2023 in three hospitals in the Jazan region of Saudi Arabia. Results: Of the 268 women studied, 195 (72.7%) had a CS for the first or second time and 73 (27.3%) had two, three, or four previous CSs (repeat CS). The most common maternal intra-operative complications reported by the repeat CS group were intra-peritoneal adhesions (7.5%) and fused abdominal layers (7.1%) while the most common postoperative complications were the need for blood transfusion (22%) and UTIs (3%). The most common neonatal complications were a low Apgar score (19%), needing neonatal resuscitation (2.6%), and intensive care admission. In addition, 3.7% of mothers failed to initiate breastfeeding in the first 24 h. Conclusions: The frequent complications were intra-peritoneal adhesions, fused abdominal wall layers, blood transfusion, and postoperative infections which were overcome by the optimal hospital care. However, the frequent neonatal complications were a low Apgar score, needing neonatal resuscitation, and intensive care admission. abstract_id: PUBMED:26413139 Determinants of early initiation of breastfeeding in rural Tanzania. 
Background: Breastfeeding is widely known for its imperative contribution in improving maternal and newborn health outcomes. However, evidence regarding timing of initiation of breastfeeding is limited in Tanzania. This study examines the extent of and factors associated with early initiation of breastfeeding in three rural districts of Tanzania. Methods: Data were collected in 2011 in a cross-sectional survey of random households in Rufiji, Kilombero and Ulanga districts of Tanzania. From the survey, 889 women who had given birth within 2 years preceding the survey were analyzed. Both descriptive and inferential statistical analyses were conducted. Associations between the outcome variable and each of the independent variables were tested using chi-square. Logistic regression was used for multivariate analysis. Results: Early initiation of breastfeeding (i.e. breastfeeding initiation within 1 h of birth) stood at 51 %. The odds of early initiation of breastfeeding were significantly 78 % lower following childbirth by caesarean section than vaginal birth (adjusted odds ratio (OR) = 0.22; 95 % confidence interval (CI) 0.14, 0.36). However, this was almost twice as high for women who gave birth in health facilities as for those who gave birth at home (OR = 1.75; 95 % CI 1.25, 2.45). Furthermore, maternal knowledge of newborn danger signs was negatively associated with early initiation of breastfeeding (moderate vs. high: OR = 1.73; 95 % CI 1.23, 2.42; low vs. high: OR = 2.06; 95 % CI 1.43, 2.96). The study also found that early initiation of breastfeeding was less likely in Rufiji compared to Kilombero (OR = 0.52; 95 % CI 0.31, 0.89), as well as among ever married than currently married women (OR = 0.46; 95 % CI 0.25, 0.87). Conclusions: To enhance early initiation of breastfeeding, using health facilities for childbirth must be emphasized and facilitated among women in rural Tanzania. Further, interventions to promote and enforce early initiation of breastfeeding should be devised especially for caesarean births. Women residing in rural locations and women who are not currently married should be specifically targeted with interventions aimed at enhancing early initiation of breastfeeding to ensure healthy outcomes for newborns. abstract_id: PUBMED:31583265 Impact of Lactation Support Program on Initiation of Breastfeeding in Term Infants. Purpose: Early initiation of breastfeeding (EIBF) significantly decreases neonatal mortality and improves exclusive breastfeeding. The objective of the present study was to assess the effect of lactation support program (LSP) on early initiation of breastfeeding (BF) among term well infants. Methods: A "before-and-after" design was used to study the effect of the LSP on EIBF at a tertiary care institute in India over a period of two and half years. EIBF was defined as BF initiated <1 hour in vaginal delivery (VD) and <2 hours in cesarean section (CS). Impact of LSP was assessed by comparing baseline data (control group) with data after initiation of LSP (study group). Even after 1 year of initiation of LSP, EIBF in CS remained low, hence a hospital policy was implemented to alter a modifiable factor to promote EIBF in CS. Data of the study group was analyzed over two time periods, as study group A (prior to implementation of hospital policy) and study group B (following the commencement of hospital policy).
Results: A total of 2,769 postnatal mothers were included for the study with 537 in the control group, 1,157 in study group A, and 1,075 in study group B. In VD, EIBF rate increased significantly from 92.6% at baseline to 99.8% and 99.6%, in study group A and study group B, respectively (p value < 0.001). In CS, EIBF rate increased from 0.4% at baseline to 1.9% and 92.7% in study group A and study group B, respectively (p < 0.001). The time of initiation of BF reduced from 1.3 (0.9) to 0.7 (0.3) hours in VD and from 4.2 (0.71) to 1.8 (0.66) hours in CS with both having a p value of < 0.001. Conclusion: Lactation support program is a simple but effective way of implementing appropriate steps towards promotion of exclusive BF. Answer: Yes, cesarean section does have an impact on the successful initiation of breastfeeding in Saudi Arabia. Several studies have identified cesarean delivery as a factor associated with lower rates of early initiation of breastfeeding and exclusive breastfeeding practices. For instance, women who gave birth by cesarean section were less likely to initiate breastfeeding within the first 48 hours after childbirth (PUBMED:27446231) and were also more likely to experience difficulties with breastfeeding initiation (PUBMED:25399221). Additionally, the study conducted at King AbdulAziz University Hospital in Jeddah found that women who had a cesarean section were less likely to breastfeed within the first hour and at 24 hours after birth compared to those who had a vaginal delivery (PUBMED:25399221). Furthermore, cesarean section was identified as one of the strongest predictors of impaired breastfeeding initiation (PUBMED:26711837), and a study in rural Tanzania also found that the odds of early initiation of breastfeeding were significantly lower following childbirth by cesarean section (PUBMED:26413139). Moreover, a retrospective study in Jazan, Saudi Arabia, reported that 3.7% of mothers failed to initiate breastfeeding in the first 24 hours following repeat cesarean sections (PUBMED:37893873). These findings suggest that cesarean section is a significant barrier to the successful initiation of breastfeeding in Saudi Arabia, and strategies to support mothers who undergo cesarean deliveries are needed to improve breastfeeding rates.
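As a purely illustrative aside (not taken from any of the cited studies), the short Python calculation below shows how a crude odds ratio and its 95% confidence interval are computed from a 2x2 table of delivery mode versus timely breastfeeding initiation; the counts are hypothetical. Adjusted odds ratios such as the AdjOR of 0.31 (95% CI 0.17-0.57) reported in PUBMED:27446231 follow the same logic but come from a logistic regression that additionally controls for confounders.

import math

# Hypothetical 2x2 table (not study data):
#                    initiated <48 h   not initiated
a, b = 130, 25   # cesarean section
c, d = 492, 24   # vaginal delivery

or_crude = (a * d) / (b * c)                   # cross-product (odds) ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of ln(OR)
low = math.exp(math.log(or_crude) - 1.96 * se_log_or)
high = math.exp(math.log(or_crude) + 1.96 * se_log_or)
print(f"crude OR = {or_crude:.2f}, 95% CI {low:.2f}-{high:.2f}")
# An OR below 1 with a CI excluding 1 indicates lower odds of timely initiation
# after cesarean birth, the same direction as the adjusted estimates above.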
Instruction: Prognostic implications of preoperative atrial fibrillation in patients undergoing aortic valve replacement: is there an argument for concomitant arrhythmia surgery? Abstracts: abstract_id: PUBMED:16996940 Prognostic implications of preoperative atrial fibrillation in patients undergoing aortic valve replacement: is there an argument for concomitant arrhythmia surgery? Background: The prognostic significance of preoperative atrial fibrillation (AF) at the time of aortic valve replacement is unknown, as is the potential role for concomitant arrhythmia surgery. Methods: We performed a cohort comparison of patients with preoperative AF (n = 129) and preoperative sinus rhythm (SR, n = 252) undergoing aortic valve surgery between 1993 and 2002; patients were matched for age, gender, and left ventricular ejection fraction. Follow-up (mean interval, 4.5 years) was 98% complete. Primary endpoints were late cardiac and all-cause mortality, as well as major adverse cardiac or cerebrovascular event. Results: Patients with preoperative AF presented with more severe congestive heart failure (p = 0.03) and more often had significant tricuspid regurgitation (p = 0.01) preoperatively. They also had worse late survival (risk ratio [RR] for death = 1.5, p = 0.03) with 1-, 5-, and 7-year survival rates substantially reduced at 94%, 87%, and 50%, respectively, for those in AF versus 98%, 90%, and 61% for patients in sinus rhythm preoperatively. Individuals in AF had a greater probability of subsequent rhythm-related intervention (RR = 4.7, p = 0.0002), and more frequently developed congestive heart failure (25% vs 10%, p = 0.005) and stroke (16% vs 5%, p = 0.005). By multivariable analysis, preoperative AF was an independent predictor of late adverse cardiac and cerebrovascular events, but not late death. Conclusions: Performance of concomitant arrhythmia surgery in patients undergoing aortic valve surgery may reduce late morbidity; however, its potential impact on late mortality in this high-risk subset of patients remains unclear. abstract_id: PUBMED:34309884 Preoperative Atrial Fibrillation is associated with long-term morTality in patients undergoing suRgical AortiC valvE Replacement. Introduction: Atrial fibrillation (AF) is frequent after any cardiac surgery, but evidence suggests it may have no significant impact on survival if sinus rhythm (SR) is effectively restored early after the onset of the arrhythmia. In contrast, management of preoperative AF is often overlooked during or after cardiac surgery despite several proposed protocols. This study sought to evaluate the impact of preoperative AF on mortality in patients undergoing isolated surgical aortic valve replacement (AVR). Methods: We performed a retrospective, single-center study involving 2628 consecutive patients undergoing elective, primary isolated surgical AVR from 2008 to 2018. A total of 268/2628 patients (10.1%) exhibited AF before surgery. The effect of preoperative AF on mortality was evaluated with univariate and multivariate analyses. Results: Short-term mortality was 0.8% and was not different between preoperative AF and SR cohorts. Preoperative AF was highly predictive of long-term mortality (median follow-up of 4 years [Q1-Q3 2-7]; hazard ratio [HR]: 2.24, 95% confidence interval [CI]: 1.79-2.79, p < .001), and remained strongly and independently predictive after adjustment for other risk factors (HR: 1.54, 95% CI: 1.21-1.96, p < .001) compared with preoperative SR.
In propensity score-matched analysis, the adjusted mortality risk was higher in the AF cohort (OR: 1.47, 95% CI: 1.04-1.99, p = .03) compared with the SR cohort. Conclusions: Preoperative AF was independently predictive of long-term mortality in patients undergoing isolated surgical AVR. It remains to be seen whether concomitant surgery or other preoperative measures to correct AF may impact long-term survival. abstract_id: PUBMED:28987158 Atrial Function as an Independent Predictor of Postoperative Atrial Fibrillation in Patients Undergoing Aortic Valve Surgery for Severe Aortic Stenosis. Background: Postoperative atrial fibrillation (POAF) is a common, clinically relevant, but hardly predictable complication after surgical aortic valve replacement. The aim of this study was to test the role of preoperative left atrial longitudinal strain as a predictor of POAF in clinical practice. Methods: Sixty patients scheduled for aortic valve replacement for severe isolated aortic stenosis, in stable sinus rhythm, were prospectively enrolled and underwent full clinical, biochemical, and transthoracic echocardiographic assessment on the day before surgery. Left atrial strain-derived peak atrial longitudinal strain (PALS) and peak atrial contraction strain (PACS) were obtained. The occurrence of POAF was evaluated during the hospital stay after the intervention. Results: POAF was present in 26 of 60 patients (43.3%). Among all clinical variables examined, age showed a significant correlation with POAF (P = .04), while no significant differences were noted regarding preoperative symptoms, cardiovascular risk factors, medications, and biochemical data. As for the echocardiographic parameters, only PALS and PACS showed strong, significant correlations with the occurrence of arrhythmia (P < .0001 on univariate analysis), with areas under the curve of 0.87 ± 0.04 (95% CI, 0.76-0.94) for PALS and 0.85 ± 0.05 (95% CI, 0.73-0.93) for PACS. In two comprehensive multivariate models, PALS and PACS remained significant predictors of POAF (odds ratio, 0.73 [95% CI, 0.61-0.88; P = .0008] and 0.72 [95% CI, 0.59-0.87; P = .0007]). No significant interaction was detected between PALS or PACS and other clinical and echocardiographic variables, including age, E/E' ratio, and left atrial enlargement. Conclusions: PALS and PACS indexes are routinely feasible and useful to predict POAF in patients with severe isolated aortic stenosis undergoing surgical aortic valve replacement. abstract_id: PUBMED:22808837 Atrial fibrillation ablation in patients undergoing aortic valve replacement. Background And Aim Of The Study: Current guidelines suggest that the use of a mechanical prosthesis is favored when patients are already receiving long-term anticoagulation for conditions such as atrial fibrillation (AF). Surgical AF ablation can restore normal sinus rhythm (NSR) and obviate the need for anticoagulation. The study aim was to determine the impact of concomitant AF ablation in patients with AF undergoing aortic valve replacement (AVR) on the restoration of NSR and subsequent requirement for anticoagulation. Methods: Between April 2004 and December 2009, a total of 124 patients (mean age 74 +/- 12 years) with pre-existing AF underwent AVR with or without coronary artery bypass grafting. The documented preoperative rhythm was long-standing persistent AF in 39 patients (32%), persistent AF in five (4%), and paroxysmal AF in 80 (65%). Eighty patients (65%) had concomitant surgical AF ablation.
In the ablation group, bilateral pulmonary vein isolation was performed in 55 cases (69%), left atrial-maze in 15 (19%), and Cox-maze in 10 (13%). A left atrial appendage closure was performed in 70 patients (88%). Sinus rhythm, in addition to anti-arrhythmic and warfarin use, were assessed between three and 15 months after surgery. Postoperatively, 13 patients died and 18 were lost to follow up during the three- to 15-month window; consequently, 71 patients were available for analysis in the ablation group, and 22 in the non-ablation group. Results: In-hospital mortality was 4% (the Ambler score predicted a median (IQR) of 6 (4-9)%). Freedom from AF when not receiving anti-arrhythmic drugs (AADs) occurred in 58 patients (82%) in the ablation group, compared to eight (36%) in the non-ablation group (p < 0.001). Fifty patients (70%) were free from warfarin in the ablation group, compared to six (27%) in the non-ablation group (p < 0.001). No differences were identified in freedom from AF between the surgical AF lesion sets. AF ablation, younger age, and paroxysmal AF were independently associated with freedom from AF when not receiving AADs. Conclusion: Surgical AF ablation is associated with an improved restoration of NSR in patients with AF requiring AVR. The need for anticoagulation is reduced in the majority of patients. A bioprosthetic valve may be an acceptable option for a patient with AF who requires AVR. abstract_id: PUBMED:35429649 Permanent pacemaker implantation after valve and arrhythmia surgery in patients with preoperative atrial fibrillation. Background: Among patients referred for cardiac surgery, atrial fibrillation (AF) is a common comorbidity and a risk factor for postoperative arrhythmias (eg, sinus node dysfunction, atrioventricular heart block), including those requiring permanent pacemaker (PPM) implantation. Objective: The purpose of this study was to evaluate the prevalence and long-term survival of postoperative PPM implantation in patients with preoperative AF who underwent valve surgery with or without concomitant procedures. Methods: Presented analysis pertains to the HEIST (HEart surgery In atrial fibrillation and Supraventricular Tachycardia) registry. During the study period, 11,949 patients underwent valvular (aortic, mitral, or tricuspid valve replacement or repair) surgery and/or surgical ablation (SA) and were stratified according to postoperative PPM status. Results: PPM implantation after surgery was necessary in 2.5% of patients, with significant variation depending on the type of surgery (from 1.1% in mitral valve repair to 3.3% in combined mitral and tricuspid valve surgery). In a multivariate logistic regression model, tricuspid intervention (P <.001), cardiopulmonary bypass time (P = .024), and endocarditis (P = .014) were shown to be risk factors for PPM. Over long-term follow-up, PPM was not associated with increased mortality compared to no PPM (hazard ratio 0.96; 95% confidence interval 0.77-1.19; P = .679). SA was not associated with PPM implantation. However, SA improved survival regardless of PPM status (log rank P <.001). Conclusion: In patients with preoperative AF, the need for PPM implantation after valve surgery or SA is not an infrequent outcome, with SA not affecting its prevalence but actually improving long-term survival.
Objective: Atrial fibrillation is associated with a high incidence of heart valve disease. There is little prospective clinical research comparing aortic valve replacement with and without surgical ablation for safety and effectiveness. The purpose of this study was to compare the results of aortic valve replacement with and without the Cox-maze IV procedure in patients with calcific aortic valvular disease and atrial fibrillation. Methods: We analyzed one hundred and eight patients with calcific aortic valve disease and atrial fibrillation who underwent aortic valve replacement. Patients were divided into concomitant Cox maze surgery (Cox-maze group) and no concomitant Cox-maze operation (no Cox-maze group). After surgery, freedom from atrial fibrillation recurrence and all-cause mortality were evaluated. Results: Freedom from all-cause mortality after aortic valve replacement at 1 year was 100% in the Cox-maze group and 89% in the no Cox-maze group. The no Cox-maze group had a lower rate of freedom from atrial fibrillation recurrence and arrhythmia control than those in the Cox-maze group (P = 0.003 and P = 0.012, respectively). Pre-operatively higher systolic blood pressure (hazard ratio, 1.096; 95% CI, 1.004-1.196; P = 0.04) and post-operatively increased right atrium diameters (hazard ratio, 1.755; 95% CI, 1.182-2.604; P = 0.005) were associated with atrial fibrillation recurrence. Conclusion: The Cox-maze IV surgery combined with aortic valve replacement increased mid-term survival and decreased mid-term atrial fibrillation recurrence in patients with calcific aortic valve disease and atrial fibrillation. Pre-operatively higher systolic blood pressure and post-operatively increased right atrium diameters are associated with the prediction of recurrence of atrial fibrillation. abstract_id: PUBMED:32642162 The impact of beta-blocker use on postoperative atrial fibrillation after aortic valve replacement. Background: Current guidelines recommend perioperative use of beta-blocker (BB) in patients undergoing cardiac surgery to prevent postoperative atrial fibrillation (POAF). However, this recommendation is mainly based on studies that exclusively enrolled patients undergoing coronary artery bypass grafting. This study was conducted to evaluate the impact of perioperative BB use on the occurrence of POAF after aortic valve replacement (AVR). Methods: From January 2015 to June 2018, 296 patients (male: female = 163:133) who underwent AVR at our institution were retrospectively reviewed. Patients who underwent concomitant valve surgery other than AVR or patients with preoperative arrhythmia were excluded. Mean age at the operation was 67±12 years. All patients were continuously tele-monitored for the occurrence of AF until discharge. Occurrence of any short runs of AF during the hospital stay was treated as POAF. Early outcomes were evaluated and perioperative factors associated with POAF were analyzed using a multivariable logistic regression model. Results: Early mortality rate was 3.7% (11 of 296 patients). The POAF occurred in 154 patients (52.0%). Univariate analyses demonstrated that postoperative use of BB as well as age, type of prosthesis, history of stroke, body surface area, and chronic kidney disease were associated with the occurrence of POAF. The multivariable model showed that postoperative use of BB within 24 hours after AVR was a preventive factor of POAF (odds ratio, 0.354; 95% CI, 0.163 to 0.770; P=0.009).
Conclusions: Postoperative use of BB within 24 hours after AVR rather than preoperative use might be effective in prevention of POAF. abstract_id: PUBMED:8996058 Thirty years experience with heart valve surgery: isolated aortic valve replacement. Background: Thirty years have elapsed since the commencement of open-heart surgery in South Australia. A retrospective study was performed to evaluate mortality and complication rates and to identify factors associated with poor outcomes in all patients who underwent prosthetic aortic valve replacement during this period. Methods: Questionnaires and personal contact have been used to generate a combined database of pre-operative and post-operative information and long-term follow-up on 1322 patients who underwent isolated prosthetic aortic valve replacement at the Cardio-Thoracic Surgical Unit of the Royal Adelaide Hospital between 1963 and 1992. Results: Complete survival follow-up data were obtained for 94% (1241) of the patients. The Bjork-Shiley valve was used in 66% (875) of the patients, a Starr-Edwards prosthesis in 31% (412), a St Jude prosthesis in 2% (26), and only 0.7% (9) bioprosthetic valves were inserted. The hospital mortality rate for the 30-year period was 2.9%. Progressively older and less fit patients have undergone surgery in recent years. The long-term survival of patients with aortic stenosis and aortic incompetence was not significantly different. Long-term survival was significantly shorter for patients with higher New York Heart Association (NYHA) functional classifications, and for patients in pre-operative atrial fibrillation. Pre-operative dyspnoea was significantly improved following aortic valve replacement. The rates of postoperative haemorrhagic and embolic complications were low by comparison with other published series. Conclusions: Aortic valve replacement can be performed with low hospital mortality and complication rates, and significant symptomatic improvement can be expected. Aortic valve recipients have a favourable prognostic outcome compared with an age- and sex-matched population, and risk factors that determine long-term survival can be identified pre-operatively. abstract_id: PUBMED:37126228 Multicenter Study of Surgical Ablation for Atrial Fibrillation in Aortic Valve Replacement. There is controversy regarding appropriate surgical ablation procedures concomitant with nonmitral valve surgery. We retrospectively investigated the impact of surgical ablation for atrial fibrillation during aortic valve replacement between 2010 and 2015 in 16 institutions registered through the Japanese Society for Arrhythmia Surgery. Clinical data of 171 patients with paroxysmal and nonparoxysmal atrial fibrillation undergoing aortic valve replacement were collected and classified into full maze operation (n = 79), pulmonary vein isolation (PVI) (n = 56), and no surgical ablation (n = 36) groups. All patients were followed up and electrocardiograms were recorded in 68% at 2 years. The myocardial ischemia time was significantly longer in the maze group than the others during isolated aortic valve replacement (p ≤ 0.01), but there were no significant differences in 30-day or 2-year mortality rates between groups. The ratios of sinus rhythm at 2 years in paroxysmal and nonparoxysmal atrial fibrillation in the maze group versus PVI group were 87% versus 97%, respectively (p = 0.24) and 53% versus 42%, respectively (p = 0.47). 
No patients with nonparoxysmal atrial fibrillation in the no surgical ablation group maintained sinus rhythm at 2 years. In conclusion, both maze and PVI during aortic valve replacement are valuable strategies to restore sinus rhythm at 2 years and result in favorable early and midterm survival rates. abstract_id: PUBMED:12701782 Prediction of paroxysmal atrial fibrillation after aortic valve replacement in patients with aortic stenosis: identification of potential risk factors. Background And Aim Of The Study: Paroxysmal atrial fibrillation (AF) is a frequent complication after cardiac surgery. The study aim was to identify preoperative predictors of risk for this condition in patients with aortic stenosis after aortic valve replacement. Methods: The influence of clinical, echocardiographic and 24 h electrocardiogram (ECG) parameters on episodes of paroxysmal AF after aortic valve replacement (AVR) in 423 patients (156 women, 267 men; mean age 58 ± 10 years) with aortic stenosis was analyzed. Episodes of postoperative paroxysmal AF were noted in 120 patients (28%). Results: Univariate analyses identified the following variables as risk factors for arrhythmia: age, NYHA functional class, history of preoperative paroxysmal AF, left ventricular mass index, >300 supraventricular beats on 24 h ECG before surgery, presence of supraventricular tachycardia (SVT), SVT of >5 beats or with a rate >120 beats/min. Concomitant coronary artery bypass (CABG) grafting and presence of enlarged left atrium had no impact. By multivariate analysis, four variables were identified as independent predictors: age (odds ratio 1.7; 95% CI 1.2-2.1); history of paroxysmal AF (OR 3.2; CI 1.4-7.3); presence of >300 supraventricular beats/24 h (OR 1.9; CI 1.1-3.4); and presence of SVT (OR 2.1; CI 1.3-3.4). Discriminatory analysis revealed that a model comprising these four parameters enabled risk prediction in 68% of patients. Conclusion: In patients with isolated aortic stenosis, age, past history of paroxysmal AF episodes, >300 supraventricular beats/24 h and presence of SVT during 24 h before AVR were predictors of postoperative paroxysmal AF episodes. Left atrial diameter and simultaneous CABG during AVR did not influence the likelihood of postoperative paroxysmal AF.
Surgical AF ablation has been shown to restore normal sinus rhythm (NSR) and reduce the need for anticoagulation in the majority of patients, which can be beneficial for patients with AF requiring AVR (PUBMED:22808837). Additionally, the Cox-maze IV procedure combined with AVR has been reported to increase mid-term survival and decrease mid-term AF recurrence in patients with calcific aortic valve disease and AF (PUBMED:37077739). Moreover, both the full maze operation and pulmonary vein isolation (PVI) performed during AVR have been shown to be valuable strategies to restore sinus rhythm at 2 years and result in favorable early and midterm survival rates (PUBMED:37126228). In conclusion, the evidence suggests that preoperative AF is associated with poorer outcomes in patients undergoing AVR, and there is a strong argument for considering concomitant arrhythmia surgery to improve long-term survival and reduce the recurrence of AF.
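For readers who want to see how adjusted effect estimates like those quoted above (for example, the odds ratio of 0.354 for postoperative beta-blocker use in PUBMED:32642162) are typically obtained, the following is a minimal illustrative sketch in Python using numpy, pandas, and statsmodels. The variable names, sample size, and simulated data are hypothetical and are not drawn from any of the cited studies; only the mechanics of exponentiating the fitted coefficients and their confidence bounds reflect standard practice.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # hypothetical cohort size
df = pd.DataFrame({
    "age": rng.normal(67, 12, n),
    "postop_bb": rng.integers(0, 2, n),              # 1 = beta-blocker within 24 h of surgery (hypothetical coding)
    "chronic_kidney_disease": rng.integers(0, 2, n),
})
# Simulate an outcome in which the postoperative exposure is protective.
linpred = -0.3 + 0.02 * (df["age"] - 67) - 1.0 * df["postop_bb"] + 0.5 * df["chronic_kidney_disease"]
df["poaf"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

X = sm.add_constant(df[["age", "postop_bb", "chronic_kidney_disease"]])
fit = sm.Logit(df["poaf"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)   # adjusted odds ratios
conf_int = np.exp(fit.conf_int())  # 95% confidence intervals on the odds-ratio scale
print(odds_ratios)
print(conf_int)

An analogous Cox proportional hazards model, rather than a logistic model, would be used to obtain the hazard ratios reported for atrial fibrillation recurrence in the Cox-maze abstract.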
Instruction: Elderly surgical patients: are there gaps in residency education? Abstracts: abstract_id: PUBMED:26576684 Where do we start? The first survey of surgical residency education in Japan. Background: Currently, Japan does not have a national standardized program for surgical residency. Therefore, surgical education information and strategies are not shared among teaching hospitals. This was the first study aiming to clarify the current situation of surgical residency in Japan. Methods: A questionnaire survey investigating the present situation of surgical residency was sent to the 76 teaching hospitals in Hokkaido Prefecture, Japan. Results: The response rate was 64.5%. Data from the 36 hospitals with active residency programs were analyzed. Most of the program directors (79.4%) were in charge of educational work for less than 5 hours per week. Although half of the hospitals had skills laboratories or simulation centers, only 2 used them routinely for their residency program. Half of the hospitals evaluated the residents' competency and the quality of their educational programs. Conclusions: Structured programs and evaluation systems have not been integrated well into surgical residency in Japan. abstract_id: PUBMED:37663682 Establishment of hospital-based surgical residency programs as a health policy for surgical needs provision: an Indonesian perspective. Objective: One of the health interventions that can be implemented cost-effectively in developing countries is the provision of basic surgical procedures. One way to increase access to surgical services is to increase the production of surgeons, in this case by expanding surgical residency programs. There are similarities between university-based surgical residency programs and hospital-based surgical residency programs. However, in Indonesia, the differences between them are stark; the establishment of hospital-based surgical residency programs is challenging and requires collaboration between the government, university, college of surgery, and private sector. Design: The author reviewed the literature on post-graduate medical education for surgical programs and the contrast between university-based and hospital-based surgical residency programs. Setting: Information collected from the Indonesian Health Education Independent Accreditation Institution and varied literature was included. Participants: Not applicable. Result: In Indonesia, the implementation of an academic surgery environment in the hospital-based surgical residency program is challenging, and the implementation of the academic health system can increase student intake and produce more surgeons from both the university-based and the hospital-based surgical residency programs. abstract_id: PUBMED:33221251 Incorporating Surgical Ergonomics Education into Surgical Residency Curriculum. Background: Education on surgical ergonomics during residency training is lacking. This study aimed to determine the feasibility and impact of incorporating surgical ergonomics lectures into residency curriculum. Methods: A survey was distributed to 42 residents with questions regarding demographics, surgical factors, prevalence of musculoskeletal (MSK) symptoms, and awareness of ergonomic recommendations. The residents then received 2 lectures on ergonomics in surgery. A follow up survey was distributed to evaluate the impact of the lectures. Results: Twenty-two residents completed the presession survey. Ninety-one percent reported MSK symptoms attributed to their training.
Seventeen residents completed the follow up survey. All reported increased awareness of their own habits while operating and improved understanding of methods to prevent and/or treat work-related injuries. All residents recommended incorporating the lectures as an adjunct to their regular curriculum. Conclusions: The rates of MSK symptoms and/or injury are high among surgeon trainees. Residency is an opportune time to educate on principles of ergonomics and may prevent future injuries. abstract_id: PUBMED:24831443 Elderly surgical patients: are there gaps in residency education? Background: Geriatric education is essential to ensure the competency of residents caring for the aging population. This study assesses and correlates resident and faculty perceptions of resident geriatric-related competencies to clinical care. Methods: A survey was sent to 40 general surgery residents and 57 faculty members. Five clinical care markers were identified for chart audit. A retrospective chart audit was performed of 22 injured elderly patients. Results: Among the respondents, 30 of 40 (75%) residents and 22 of 57 (39%) faculty completed the survey. Residents rated their competency higher than faculty on all competency-related questions (p = 0.0002). The following 4 questions had a mean faculty rating below acceptable: screening guidelines, delirium management, contraindicated medications, and medication adjustments. On chart review: code status was documented in 7 of 22 (32%) patients and goals of care in 1 of 22 (5%) patients. Pain control included rib block or epidural in 14 of 22 (64%) patients. Contraindicated medications were prescribed in 13 of 22 (59%) patients. Conclusion: A competency-based needs assessment of geriatric training in a general surgery residency has identified educational "gaps." This needs assessment supports implementation of geriatric education initiatives in our general surgery program. abstract_id: PUBMED:32540290 Global Neurosurgery Education in United States Residency Programs. Background: Global neurosurgery encompasses the social and surgical practices that effect the neurologic health of vulnerable and underserved populations in domestic and international resource-limited settings. Formal academic engagement in global neurosurgery is limited in residency programs. Here we explore the current status of global neurosurgery education in residency programs across the United States. Methods: We contacted 115 Accreditation Council for Graduate Medical Education-accredited residency programs in the United States to complete an 8-question electronic survey on global neurosurgery education. Questions were framed with binary "yes" or "no" answers to indicate presence of a global health activity. Respondents provided additional information if desired. Global education activity was categorized based on the number of resources attributed to global health activities: low (0-2), moderate (3-5), or high (6-8). Results: Thirty-four residency programs completed the survey (29.6%). The majority of respondents reported offering funding for research and educational opportunities in global neurosurgery (n = 22). Programs tended to support global neurosurgery conferences (n = 20), periodic dedicated lectures (n = 15), and rotations in resource-constrained or marginalized communities domestically or abroad (n = 15). 
Some programs offer continuity clinics in marginalized settings (n = 10), supplementary reading material (n = 8), core curricula (n = 6), or a designated residency track in global neurosurgery (n = 3). The majority of residency programs had low-level engagement in global neurosurgery (n = 18), with only 3 programs having high levels of engagement. Conclusions: Formal global neurosurgery training is limited within US residency programs. With rising trends in the neurosurgical disease burden globally, it may benefit residency programs to develop training paths to equip the next generation of neurosurgeons to address such needs. abstract_id: PUBMED:8315590 Residency education. Residency education has an interesting history in the US that speaks clearly about some of the cherished beliefs still perpetuated in today's residency training programs. The history also provides a foundation for considering how the current knowledge of adult education theory and practice can be incorporated into residency education programs. The author presents a brief overview of the history of residency education in the US followed by a recommendation for applying a conception of adult learning to residency education. abstract_id: PUBMED:35157671 Status of the palliative care education in surgical postgraduate curriculum - implications for Pakistan. Palliative care has gained considerable attention during the last few decades. Increasing demands by society, owing to the increased number of elderly people and people with chronic illnesses, require an increased number of health care professionals with competence in providing palliative care. The current review provides a summary of the published literature pertaining to palliative care education in surgical residency programmes internationally and locally. Most surgical residency programmes provide limited, if any, formal education in knowledge and skills related to palliative and end of life care. The situation is alarming in Pakistan, where the discipline is in its infancy and resources are limited while the demand for palliative care is increasing. Surgery can learn lessons from other disciplines. Development and integration of palliative medicine into most of the existing postgraduate curricula continues to be a challenge for the accreditation bodies and medical educationists. Collaborative measures need to be taken urgently at all levels. abstract_id: PUBMED:25976858 The Changing Landscape of Surgical Education: What are Residency Education Teams and do we Need Them? Objectives: This study aims to understand how general surgery training programs constitute their residency education team (RET), how they define the roles of RET members, and how they measure success of the team. It fundamentally asks the question, "What is a RET and do we need one?" Design And Participants: Program directors, associate program directors, educators, program coordinators, and chief residents from Accreditation Council for Graduate Medical Education (ACGME) general surgery training programs were asked to anonymously complete a survey categorized into 3 sections: (1) roles and responsibilities, (2) views of his/her RET and team members, and (3) general views about RETs. All respondents provided their opinions on the importance of a RET for administering and leading a surgical residency, whom the ideal members would be, and the main outcomes of a high-functioning RET.
Results: Respondents (n = 167) included 59 (35.3%) program directors, 16 (9.6%) associate program directors, 8 (4.8%) educators, 67 (40.1%) program coordinators, and 6 (3.6%) chief residents. Overall, 84.4% of respondents were a part of a RET, defined as 2 or more individuals who are responsible and accountable for oversight and conduct of the residency training program. RET respondents expressed statistically significantly higher importance for a RET (p < 0.0001) than their non-RET counterparts. Conclusions: This study provides a snapshot of how some of those associated with general surgery residencies view and value RETs. The results of this survey are preliminary and suggest a need for educators within surgery programs and ambiguity about the role of associate program director. It also suggests that a closer look at role responsibilities may be of value, especially in view of the changing landscape of surgical education. Overall, most respondents felt that a RET was important to the main outcomes of a successful residency program. abstract_id: PUBMED:12722671 Medical education and surgical residency program for general thoracic surgery. We introduced medical education for general thoracic surgery to medical students and a new surgical residency program to instruct residents who aspire to become general thoracic surgeons. It is important to understand the etiology of thoracic diseases, methods of diagnosis for tumorous lesions, anatomy of the interior thorax, surgical therapies and indications for thoracic diseases, and management of perioperative periods in the medical education of general thoracic surgery. According to the new residency program, a surgical resident must complete a super rotating system that consists of the subjects of internal medicine, general surgery, emergency and critical care medicine, anesthesiology, pediatrics, gynecology, and local healthcare for several months each in the first two years. On the other hand, a surgical residency program includes core surgical residencies to train clinical patient-care oriented surgeons and advanced residencies in such areas as thoracic surgery, cardiovascular surgery, pediatric surgery, and digestive organ surgery. Residents who want to become general thoracic surgeons must complete 350 cases of general surgery, including at least 120 cases as the operator, in the first 4 or more years in core programs. After passing the board for general surgery, they can advance to a subspecialty program for thoracic surgery in the final 3 or more years to pass the board for general thoracic surgery. It is essential to train general thoracic surgeons who empathize with patients with thoracic diseases and can contribute to overcoming thoracic diseases by practicing evidence- or science-based medicine.
The relevance of narrative medicine to the ACGME core competencies is explored. We conclude with specific suggestions for implementation of narrative medicine within surgical residency programs. Answer: Yes, there are gaps in residency education regarding the care of elderly surgical patients. A study assessing resident and faculty perceptions of resident geriatric-related competencies and correlating them with clinical care found that residents rated their competency higher than faculty on all competency-related questions. However, faculty ratings for certain competencies, such as screening guidelines, delirium management, contraindicated medications, and medication adjustments, were below acceptable levels. A chart review of injured elderly patients revealed that code status was documented in only 32% of patients and goals of care in just 5%. Additionally, contraindicated medications were prescribed in 59% of patients. These findings indicate a need for improved geriatric training in general surgery residency programs to address these educational gaps (PUBMED:24831443).
Instruction: Do prenatal steroids improve the survival of late preterm infants with complex congenital heart defects? Abstracts: abstract_id: PUBMED:25104120 Do prenatal steroids improve the survival of late preterm infants with complex congenital heart defects? Aim: We evaluated the use of prenatal steroids (PNS) and the effect of that practice on hospital mortality of late preterm infants with complex congenital heart defects (CHD). Methods: Retrospective review of records of late preterm infants with complex CHD who were cared for in a single tertiary perinatal center between 2002 and 2009. Multivariate logistic regression analysis was performed to determine which of the risk factors commonly associated with death prior to discharge from the hospital predict the outcome (hospital death). Results: Of the 106 late preterm infants with complex CHD, 31 (29%) died and 15 (14%) received PNS. Endotracheal intubation in the delivery room (42% vs 15%), necrotizing enterocolitis (10% vs 0%) and hypoplastic left heart syndrome (52% vs 25%) were statistically more frequent in non-surviving infants. Non-surviving infants were more frequently treated with PNS (23% vs 11%) but this difference was not statistically significant (p = 0.131). Using logistic regression analysis, delivery room intubation (OR 4.91; 95% CI 1.78 - 13.51) and the hypoplastic left heart syndrome (OR 3.29; 95% CI 1.28 - 8.48), but not prenatal steroids, were independently associated with increased risk of hospital death. Conclusions: In a selected population of late preterm infants with complex CHD, prenatal steroid treatment did not independently influence survival. abstract_id: PUBMED:24472597 A population-based study of the association of prenatal diagnosis with survival rate for infants with congenital heart defects. Prenatal diagnosis has been shown to improve preoperative morbidity in newborns with congenital heart defects (CHDs), but there are conflicting data as to the association with mortality. We performed a population-based, retrospective, cohort study of infants with prenatally versus postnatally diagnosed CHDs from 1994 to 2005 as ascertained by the Metropolitan Atlanta Congenital Defects Program. Among infants with isolated CHDs, we estimated 1-year Kaplan-Meier survival probabilities for prenatal versus postnatal diagnosis and estimated Cox proportional hazard ratios adjusted for critical CHD status, gestational age, and maternal race/ethnicity. Of 539,519 live births, 4,348 infants had CHDs (411 prenatally diagnosed). Compared with those with noncritical defects, those with critical defects were more likely to be prenatally diagnosed (58% vs 20%, respectively, p <0.001). Of the 3,146 infants with isolated CHDs, 1-year survival rate was 77% for those prenatally diagnosed (n = 207) versus 96% for those postnatally diagnosed (n = 2,939, p <0.001). Comparing 1-year survival rate among those with noncritical CHDs alone (n = 2,455) showed no difference between prenatal and postnatal diagnoses (96% vs 98%, respectively, p = 0.26), whereas among those with critical CHDs (n = 691), prenatally diagnosed infants had significantly lower survival rate (71% vs 86%, respectively, p <0.001). Among infants with critical CHDs, the adjusted hazard ratio for 1-year mortality rate for those prenatally versus postnatally (reference) diagnosed was 2.51 (95% confidence interval 1.72 to 3.66).
In conclusion, prenatal diagnosis is associated with lower 1-year survival rate for infants with isolated critical CHDs but shows no change for those with isolated noncritical CHDs. More severe disease among the critical CHD subtypes diagnosed prenatally might explain these findings. abstract_id: PUBMED:17062216 Heterotaxy syndrome with functional single ventricle: does prenatal diagnosis improve survival? Background: Despite improved outcome for many single ventricle lesions, staged reconstruction for heterotaxy syndrome with a functional single ventricle continues to have a high mortality. Prenatal identification of heterotaxy syndrome may improve long-term survival. Methods: Our database was reviewed from January 1996 to December 2004 for patients with heterotaxy syndrome. Assessment was made for prenatal diagnosis and echocardiographic characteristics of heterotaxy syndrome. We sought to assess the accuracy of fetal echocardiography in the diagnosis of heterotaxy syndrome and determine whether prenatal diagnosis and other risk factors have an impact on survival in patients with heterotaxy syndrome. Results: Of 81 patients that met criteria, 43 (53%) had prenatal diagnosis. Prenatal diagnosis had high specificity and positive predictive value for all findings but had low sensitivity for anomalous pulmonary veins. Among the 70 patients born alive, survival was 60% with median follow-up of 51.4 months (range, 6.5 to 109.7 months). Prenatal diagnosis did not improve survival (p = 0.09). None of the 11 patients with complete heart block (CHB) survived past 3 months of age. Two patients underwent heart transplantation as their first intervention and have survived. CHB and anomalous pulmonary venous connection were associated with shorter duration of survival. Conclusions: Prenatal diagnosis of heterotaxy syndrome does not improve survival in patients who undergo single ventricle reconstruction. The most potent risk factors for poor outcome (CHB, anomalous pulmonary veins) are likely not impacted by identification in utero. In light of the poor outcome, cardiac transplantation as an initial therapy may be a viable option for some patients. abstract_id: PUBMED:33530157 Comparison of the 1-year survival rate in infants with congenital heart disease diagnosed by prenatal and postnatal ultrasound: A retrospective study. Abstract: The impact of prenatal diagnosis on the survival outcome of infants with congenital heart disease (CHD) is still unclear. This study aimed to compare the 1-year survival rate between the prenatally and postnatally diagnosed infants with CHDs. A single-center population-based retrospective cohort study was performed on data from all infants diagnosed with CHD born between January 1998 and December 2017. Among infants with isolated CHDs, the 1-year Kaplan-Meier survival probabilities for prenatal and postnatal diagnosis were estimated. Cox proportional hazard ratios were adjusted for critical CHD (CCHD) status and gestational age. A total of 424 (40 prenatally and 384 postnatally) diagnosed infants with CHDs were analyzed. Compared with non-CCHDs, infants with CCHDs were more likely to be prenatally diagnosed (55.0% vs 18.0%; P < .001). Among the 312 infants with isolated CHDs, the 1-year survival rate for the prenatally diagnosed was significantly lower than postnatally diagnosed (77.1% vs 96.1%; P < .001). For isolated CCHDs, the 1-year survival rate for the prenatally diagnosed was significantly lower than postnatally diagnosed (73.4% vs 90.0%; P < .001).
The 1-year survival rate increased with age at diagnosis. Among infants with isolated CHDs and CCHDs, the adjusted hazard ratios for 1-year mortality rates for the prenatally versus postnatally diagnosed were 2.554 (95% confidence interval [CI], 1.790, 3.654; P < .001) and 2.538 (95% CI: 1.796, 3.699; P < .001), respectively. Prenatal diagnosis is associated with lower 1-year survival rate for infants with isolated CCHDs. This could probably be due to variation in the disease severity among the CCHD subtypes. abstract_id: PUBMED:26544084 Trends in hospitalization of preterm infants with intraventricular hemorrhage and hydrocephalus in the United States, 2000-2010. OBJECT Even with improved prenatal and neonatal care, intraventricular hemorrhage (IVH) occurs in approximately 25%-30% of preterm infants, with a subset of these patients developing hydrocephalus. This study was undertaken to describe current trends in hospitalization of preterm infants with posthemorrhagic hydrocephalus (PHH) using the Nationwide Inpatient Sample (NIS) and the Kids' Inpatient Database (KID). METHODS The KID and NIS were combined to generate data for the years 2000-2010. All neonatal discharges with ICD-9-CM codes for preterm birth with IVH alone or with IVH and hydrocephalus were included. RESULTS There were 147,823 preterm neonates with IVH, and 9% of this group developed hydrocephalus during the same admission. Of patients with Grade 3 and 4 IVH, 25% and 28%, respectively, developed hydrocephalus in comparison with 1% and 4% of patients with Grade 1 and 2 IVH, respectively. Thirty-eight percent of patients with PHH had permanent ventricular shunts inserted. Mortality rates were 4%, 10%, 18%, and 40%, respectively, for Grade 1, 2, 3, and 4 IVH during initial hospitalization. Length of stay has been trending upward for both groups of IVH (49 days in 2000, 56 days in 2010) and PHH (59 days in 2000, 70 days in 2010). The average hospital cost per patient (adjusted for inflation) has also increased, from $201,578 to $353,554 (for IVH) and $260,077 to $495,697 (for PHH) over 11 years. CONCLUSIONS The number of neonates admitted with IVH has increased despite a decrease in the number of preterm births. Rates of hydrocephalus and mortality correlated closely with IVH grade. The incidence of hydrocephalus in preterm infants with IVH remained stable between 8% and 10%. Over an 11-year period, there was a progressive increase in hospital cost and length of stay for preterm neonates with IVH and PHH that may be explained by a concurrent increase in the proportion of patients with congenital cardiac anomalies.
Conclusion: Late preterm birth of infants with major CHDs was independently associated with increased risk of hospital death compared with delivery at more mature gestational ages. abstract_id: PUBMED:24440326 Relation of prenatal diagnosis with one-year survival rate for infants with congenital heart disease. Prenatal diagnosis of congenital heart defects (CHDs) is increasingly common, but it is still unclear whether it translates to improved postoperative outcomes. We performed a retrospective cohort study of all infants (aged <1 year) who underwent surgery for CHDs from 2006 to 2011 at a single institution. Primary outcomes were in-hospital and 1-year mortality rates. Secondary outcomes were readmission within 30 days of discharge, postoperative length of intensive care unit and hospital stay, unplanned reoperation, and extracorporeal membrane oxygenation use. We used chi-square analyses, Wilcoxon rank-sum tests, Kaplan-Meier survival curves, and adjusted Cox proportional hazards models to compare outcomes. Of the 1,642 patients with CHDs, 539 (33%) were diagnosed prenatally. Patients with prenatal diagnoses were of a younger age and less weight at the time of surgery, had greater Risk Adjustment for Congenital Heart Surgery scores, and were more likely to be white, to have an identified syndrome, or to be born at term. Compared with those diagnosed postnatally, those diagnosed prenatally had a significantly higher unadjusted 1-year mortality rate (11% vs 5.5%, respectively, p = 0.03). Controlling for weight, surgical severity, race, age at surgery, prematurity, and the presence or absence of genetic syndrome, patients with prenatal diagnoses had significantly greater mortality at 1 year (adjusted hazard ratio 1.5, p = 0.03), as well as significantly longer intensive care unit and hospital stays. Infants with CHDs diagnosed prenatally had worse outcomes compared with those diagnosed postnatally. Prenatal diagnosis likely captures patients with more severe phenotypes within given surgical risk categories and even within diagnoses and thus may be an important prognostic factor when counseling families. abstract_id: PUBMED:24135071 Prenatal diagnosis, birth location, surgical center, and neonatal mortality in infants with hypoplastic left heart syndrome. Background: Most studies have not demonstrated improved survival after prenatal diagnosis of critical congenital heart disease, including hypoplastic left heart syndrome (HLHS). However, the effect of delivery near a cardiac surgical center (CSC), the recommended action after prenatal diagnosis, on HLHS mortality has been poorly investigated. Methods And Results: Using Texas Birth Defects Registry data, 1999 through 2007, which monitored >3.4 million births, we investigated the association between distance (calculated driving time) from birth center to CSC and neonatal mortality in 463 infants with HLHS. Infants with extracardiac birth defects or genetic disorders were excluded. The associations between prenatal diagnosis, CSC HLHS volume, and mortality were also examined. Neonatal mortality in infants born <10 minutes from a CSC was 21.0%, 10 to 90 minutes 25.2%, and >90 minutes 39.6% (P for trend <0.001). Prenatal diagnosis alone was not associated with improved survival (P=0.14). In multivariable analysis, birth >90 minutes from a CSC remained associated with increased mortality (odds ratio, 2.03; 95% confidence interval, 1.19-3.45), compared with <10 minutes.
In subanalysis, birth >90 minutes from a CSC was associated with higher pretransport mortality (odds ratio, 6.69; 95% confidence interval, 2.52-17.74) and birth 10 to 90 minutes with higher presurgical mortality (odds ratio, 4.45; 95% confidence interval, 1.17-17.00). Higher surgical mortality was associated with lower CSC HLHS volume (odds ratio per 10 patients, 0.88; 95% confidence interval, 0.84-0.91). Conclusions: Infants with HLHS born far from a CSC have increased neonatal mortality, and most of this mortality is presurgical. Efforts to improve prenatal diagnosis of HLHS and subsequent delivery near a large volume CSC may significantly improve neonatal HLHS survival. abstract_id: PUBMED:30171818 Impact of prenatal diagnosis of complex congenital heart disease on neonatal and infant morbidity and mortality. Objectives: The objective of this study was to analyze the benefits associated with prenatal diagnosis of complex congenital heart disease (CHD) on preoperative morbidity, 30-day and 1-year mortality in this population. Method: This was a retrospective review of patients with complex CHD born at our tertiary care center over a 10-year period. Data analysis used the Student t test and chi-square test. Results: The overall rate of prenatal detection of complex CHD was 68.1%. A steady increase in the number of complex CHD diagnosed prenatally was noted during the study period. The prenatal diagnosis of complex CHD was associated with significant reduction in the incidence of the following preoperative parameters: antibiotic use, mechanical ventilation, inotropic support, hepatic and renal dysfunction, and acidosis. These beneficial effects were more significant in ductal-dependent cardiac anomalies. However, there were no neonatal and infant survival benefits in association with prenatal diagnosis. Conclusion: Prenatal diagnosis of complex CHD leads to improved preoperative morbidity, especially in patients with ductal-dependent cardiac anomalies. No survival benefits were noted with prenatal diagnosis of complex CHD. abstract_id: PUBMED:21362036 Outcomes of congenital heart disease in late preterm infants: double jeopardy? Aim: To evaluate outcomes of late preterm (34-36 6/7 weeks gestation) infants with congenital heart defects requiring surgical intervention during initial admission, in comparison with term (37-42 weeks) controls. Methods: This was a retrospective chart review of consecutive late preterm and term infants with congenital heart lesions, requiring surgical intervention during initial admission. Results: Our cohort (n=88) comprised 26 late preterm infants (group 1) and 62 term controls (group 2). The two groups differed significantly in mean (SD) gestation [34.5 (1.5) vs. 39.1 (1) weeks], birth weight [2335 (402) vs. 3173 (401) g] and weight [2602 (739) vs. 3273 (507) g] and age [33 (51) vs. 11 (14) days] at surgery. Cardiac diagnosis frequencies were similar in both groups. The mean (SD) duration of PGE1 [31.9 (56.8) vs. 11.3 (24.9) days] and need for preoperative pressors (25% vs. 8%) were significantly higher in group 1. Death (23% vs. 8%, p=0.05) tended to be higher in group 1. Rates of necrotizing enterocolitis (23% vs. 1.7%), seizures (19% vs. 0%), oxygen need (12% vs. 0%) and gavage feeds (12% vs. 1.6%) at discharge were all significantly higher in the late preterm cohort. Conclusions: These data highlight the extreme vulnerability of infants with the 'double hits' of prematurity and heart defects.
Answer: Based on the information provided in the abstracts, prenatal steroids (PNS) do not independently influence survival in a selected population of late preterm infants with complex congenital heart defects (CHD). Specifically, a retrospective review of records of late preterm infants with complex CHD cared for in a single tertiary perinatal center between 2002 and 2009 showed that non-surviving infants were more frequently treated with PNS (23% vs 11%), but this difference was not statistically significant (p = 0.131). Logistic regression analysis indicated that delivery room intubation and the presence of hypoplastic left heart syndrome, but not prenatal steroids, were independently associated with an increased risk of hospital death (PUBMED:25104120). Therefore, the evidence from this study suggests that prenatal steroids do not improve the survival of late preterm infants with complex CHD. It is important to note that this conclusion is based on the specific population and setting of the study, and further research may be needed to generalize these findings to other populations or to explore potential benefits of prenatal steroids in different contexts or with different treatment protocols.
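Several of the abstracts above report 1-year Kaplan-Meier survival probabilities and Cox proportional hazard ratios adjusted for critical CHD status and gestational age. As a rough, hypothetical sketch of that kind of workflow (not a reproduction of any cited analysis), the following Python snippet uses the lifelines package on simulated data; the column names, sample size, and effect sizes are invented for illustration only.

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 400  # hypothetical cohort size
df = pd.DataFrame({
    "prenatal_dx": rng.integers(0, 2, n),      # 1 = prenatally diagnosed (hypothetical coding)
    "critical_chd": rng.integers(0, 2, n),
    "gestational_age": rng.normal(38, 2, n),
})
# Simulate time to death in days, with a higher hazard in the prenatal and critical-CHD groups.
latent_time = rng.exponential(2000, n) / np.exp(0.6 * df["prenatal_dx"] + 0.8 * df["critical_chd"])
df["time"] = np.minimum(latent_time, 365)      # administrative censoring at 1 year
df["death"] = (latent_time <= 365).astype(int)

# Kaplan-Meier estimate of 1-year survival by diagnosis timing.
km = KaplanMeierFitter()
for group, subset in df.groupby("prenatal_dx"):
    km.fit(subset["time"], subset["death"], label="prenatal_dx=%d" % group)
    print(group, float(km.survival_function_at_times(365).iloc[0]))

# Cox proportional hazards model adjusted for critical CHD status and gestational age.
cph = CoxPHFitter()
cph.fit(df[["time", "death", "prenatal_dx", "critical_chd", "gestational_age"]],
        duration_col="time", event_col="death")
print(cph.hazard_ratios_)

The printed hazard ratios play the same role as the adjusted ratios quoted in the abstracts (for example, 2.51 for prenatal versus postnatal diagnosis among critical CHDs), but here they simply recover the effects built into the simulation.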
Instruction: The effects of combat-related mild traumatic brain injury (mTBI): Does blast mTBI history matter? Abstracts: abstract_id: PUBMED:27000305 Combat exposure is associated with cortical thickness in Veterans with a history of chronic pain. Chronic Pain (CP) has been associated with changes in gray matter integrity in the cingulate and insular cortex. However, these changes have not been studied in Veterans, despite high prevalence rates of CP and interactions with combat-derived disorders. In the current study, 54 Veterans with a history of CP and 103 Veterans without CP were recruited from the Translational Research Center for Traumatic Brain Injury and Stress Disorders (TRACTS). Cortical thickness from structural MRI scans was determined using the FreeSurfer software package. Results showed that Veterans with CP showed a negative association between cortical thickness and levels of combat exposure in the left inferior frontal gyrus and superior parietal cortex, as well as the right rostral middle frontal gyrus, precentral and postcentral gyri and the superior temporal cortex. These findings suggest that CP may alter the relationship between cortical thickness and exposure to the stress of combat. abstract_id: PUBMED:24907686 Connecting combat-related mild traumatic brain injury with posttraumatic stress disorder symptoms through brain imaging. Mild traumatic brain injury (mTBI) and posttraumatic stress disorder (PTSD) may share common symptom and neuropsychological profiles in military service members (SMs) following deployment; while a connection between the two conditions is plausible, the relationship between them has been difficult to discern. The intent of this report is to enhance our understanding of the relationship between findings on structural and functional brain imaging and symptoms of PTSD. Within a cohort of SMs who did not meet criteria for PTSD but were willing to complete a comprehensive assessment within 2 months of their return from combat deployment, we conducted a nested case-control analysis comparing those with combat-related mTBI to age/gender-matched controls with diffusion tensor imaging, resting state functional magnetic resonance imaging and a range of psychological measures. We report degraded white matter integrity in those with a history of combat mTBI, and a positive correlation between the white matter microstructure and default mode network (DMN) connectivity. Higher clinician-administered and self-reported subthreshold PTSD symptoms were reported in those with combat mTBI. Our findings offer a potential mechanism through which mTBI may alter brain function, and in turn, contribute to PTSD symptoms. abstract_id: PUBMED:26131789 The effects of combat-related mild traumatic brain injury (mTBI): Does blast mTBI history matter? Background: The effects of mild traumatic brain injury (mTBI) have received significant attention since the beginning of the conflicts in Afghanistan and Iraq. Surprisingly, little is known about the temporal nature of neurocognitive impairment, mTBI, and posttraumatic stress (PTS) symptoms following combat-related mTBI. It is also unclear as to the role that blast exposure history has on mTBI and PTS impairments and symptoms. The purposes of this study were to examine prospectively the effects of mTBI on neurocognitive performance as well as mTBI and PTS symptoms among US Army Special Operations Command personnel and to study the influence of history of blast mTBI on these effects. 
Methods: Eighty US Army Special Operations Command personnel with (n = 19) and without (n = 61) a history of blast-related mTBI completed the military version of the Immediate Post-concussion Assessment Cognitive Test (ImPACT), Post Concussion Symptom Scale (PCSS), and the PTSD Checklist (PCL) at baseline as well as 1 day to 7 days and 8 days to 20 days following a combat-related mTBI. Results: Results indicated that verbal memory (p = 0.002) and processing speed (p = 0.003) scores were significantly lower and mTBI symptoms (p = 0.001) were significantly higher at 1 day to 7 days after injury compared with both baseline and 8 days to 20 days after injury. PTS remained stable across the three periods. Participants with a history of blast mTBI demonstrated lower verbal memory at 1 day to 7 days after mTBI compared with participants without a history of blast mTBI (p = 0.02). Conclusion: Decreases in neurocognitive performance and increased mTBI symptoms are evident in the first 1 day to 7 days following combat-related mTBI, and a history of blast-related mTBI may influence these effects. Level Of Evidence: Epidemiologic/prognostic study, level II. abstract_id: PUBMED:23437998 Treatment adherence in cognitive processing therapy for combat-related PTSD with history of mild TBI. Objective: This retrospective study examined treatment adherence in Cognitive Processing Therapy (CPT) for combat-related posttraumatic stress disorder (PTSD) in Veterans of Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) with and without history of mild traumatic brain injury (mTBI). Method: Medical record review of consecutive referrals to an outpatient PTSD clinic identified veterans diagnosed with combat-related PTSD who began treatment with CPT. The sample (N = 136) was grouped according to positive (n = 44) and negative (n = 92) mTBI history. Groups were compared in terms of presenting symptoms and treatment adherence. Results: The groups were not different on a pretreatment measure of depression, but self-reported and clinician-rated PTSD symptoms were higher in veterans with history of mTBI. The treatment completion rate was greater than 61% in both groups. The number of sessions attended averaged 9.6 for the PTSD group and 7.9 for the mTBI/PTSD group (p = .05). Implications: Given the lack of marked group differences in treatment adherence, these initial findings suggest that standard CPT for PTSD may be a tolerable treatment for OEF/OIF veterans with a history of PTSD and mTBI as well as veterans with PTSD alone. abstract_id: PUBMED:28673413 Neurologic Health in Combat Sports. Neurologic injuries of both an acute and chronic nature have been reported in the literature for various combat sport styles; however, reports of the incidence and prevalence of these injury types vary greatly. Combat sports clinicians must continue to strive for the development, implementation, and enforcement of uniform minimum requirements for brain safety. These health care providers must also seize on the honor to provide this oft-underserved population with the health care advocacy they very much deserve, but often do not receive. abstract_id: PUBMED:34563989 White matter integrity, suicidal ideation, and cognitive dysfunction in combat-exposed Iraq and Afghanistan Veterans. White matter alterations in frontolimbic circuits and poorer cognitive functioning have been observed in individuals endorsing suicidality across numerous psychiatric conditions. 
However, relationships between white matter integrity, cognition, and suicidality in Veterans are poorly understood, particularly for those at increased risk for suicide due to mental health conditions (e.g., posttraumatic stress disorder, depression) and traumatic brain injury history. We (1) examined white matter alterations in combat-exposed Iraq/Afghanistan Veterans with and without suicidal ideation (SI) and (2) investigated relationships between white matter integrity and neuropsychological functioning in regions that differed between SI groups. No group differences were found regarding psychiatric diagnoses. Participants with SI had lower fractional anisotropy (FA) in the posterior corona radiata, superior corona radiata, and superior longitudinal fasciculus relative to those without SI. Worse processing speed/attention performance was associated with lower FA in the superior longitudinal fasciculus, while worse executive functioning performance was associated with lower FA in the superior corona radiata and superior longitudinal fasciculus. Memory performance was not associated with FA. These findings suggest that white matter integrity may be involved in cognitive dysfunction and increased risk for SI. Interventions that target cognitive dysfunction may ameliorate SI, and in turn, reduce risk for suicide among Veterans. abstract_id: PUBMED:29932812 Interactive effects of cumulative lifetime traumatic brain injuries and combat exposure on posttraumatic stress among deployed military personnel. Growing research links Traumatic Brain Injury (TBI) with greater posttraumatic stress disorder (PTSD) symptoms. Much of this research has focused on the influence of the presence or severity of a single TBI while neglecting the potential cumulative effects of multiple TBIs incurred across an individual's lifetime on combat-related PTSD. The present study addressed this gap using a sample of 157 military service members and 4 civilian contractors who underwent structured TBI interviews at a military hospital in Iraq and completed the Combat Experiences Scale (CES) and Posttraumatic Checklist - Military (PCL-M). Results indicated that a greater number of lifetime TBIs were associated with greater PTSD symptoms when accounting for the presence and severity of a recent, deployment-related TBI. Additionally, a significant interaction of number of lifetime TBIs and combat exposure emerged, indicating that exposure to combat yielded greater PTSD symptoms among those with multiple lifetime TBIs compared to those with one or zero lifetime TBIs. These data suggest that incurring multiple TBIs may amplify the link between combat exposure and PTSD and underscore the need to screen for lifetime TBI history. abstract_id: PUBMED:9706538 Findings of mild traumatic brain injury in combat veterans with PTSD and a history of blast concussion. Veterans with chronic posttraumatic stress disorder were evaluated for a history of blast concussion, controlling for confounding conditions. Electroencephalograms were analyzed by discriminant function for traumatic brain injury. A difference was found in discriminant scores between veterans with and without blast concussion. More members of the blast group had attentional symptoms and attentional dysfunction. Combat veterans with a remote history of blast injury have persistent electroencephalographic features of traumatic brain injury as well as attentional problems. 
The authors hypothesize that these constitute a type of chronic postconcussive syndrome that has cognitive and mood symptoms overlapping those of posttraumatic stress disorder. abstract_id: PUBMED:27690479 Sleep quality affects cognitive functioning in returning combat veterans beyond combat exposure, PTSD, and mild TBI history. Objective: The purpose of this study was to determine how sleep quality affects cognitive functioning in returning combat veterans after accounting for effects of combat exposure, posttraumatic stress disorder (PTSD), and mild traumatic brain injury (mTBI) history. Method: This was a cross-sectional assessment study evaluating combat exposure, PTSD, mTBI history, sleep quality, and neuropsychological functioning. One hundred and nine eligible male Iraq/Afghanistan combat veterans completed an assessment consisting of a structured clinical interview, neuropsychological battery, and self-report measures. Results: Using partial least squares structural equation modeling, combat experiences and mTBI history were not directly associated with sleep quality. PTSD was directly associated with sleep quality, which contributed to deficits in neuropsychological functioning independently of and in addition to combat experiences, PTSD, and mTBI history. Combat experiences and PTSD were differentially associated with motor speed. Conclusions: Sleep affected cognitive function independently of combat experiences, PTSD, and mTBI history. Sleep quality also contributed to cognitive deficits beyond effects of PTSD. An evaluation of sleep quality may be a useful point of clinical intervention in combat veterans with cognitive complaints. Improving sleep quality could alleviate cognitive complaints, improving veterans' ability to engage in treatment. abstract_id: PUBMED:23031200 Residual effects of combat-related mild traumatic brain injury. Mild traumatic brain injury (mTBI) has gained considerable notoriety during the past decade of conflict in Afghanistan and Iraq. However, the relationship between combat-related mTBI and residual mTBI symptoms, post-traumatic stress disorder (PTSD) symptoms, and neurocognitive deficits remains unclear. The purpose of the study was to compare residual mTBI and PTSD symptoms, and neurocognitive deficits among U.S. Army Special Operations Command (USASOC) personnel with diagnosed blunt, blast, and blast-blunt combination mTBIs. This study involved a retrospective medical records review of 27,169 USASOC personnel who completed a military version of the Immediate Post-Concussion Assessment Cognitive Test (ImPACT), Post-Concussion Symptom Scale (PCSS), and PTSD Checklist (PCL) between November 2009 and December 2011. Of the 22,203 personnel who met criteria for the study, 2,813 (12.7%) had a diagnosis of at least one mTBI. A total of 28% (n=410) of USASOC personnel with a history of diagnosed mTBI reported clinical levels of PTSD symptoms. Personnel with a history of diagnosed blunt (OR=3.58), blast (OR=4.23) or combination (OR=5.73) mTBI were at significantly (p=0.001) greater risk of reporting clinical levels of PTSD symptoms than those with no history of mTBI. A dose-response gradient for exposure to blast/combination mTBI on clinical levels of PTSD symptoms was also significant (p=0.001). Individuals with blast/combination mTBIs scored higher in residual mTBI (p=0.001) and PTSD symptoms (p=0.001), and performed worse on tests of visual memory (p=0.001), and reaction time (p=0.001) than those with blunt or no mTBI history.
Individuals with combination mTBIs scored lower in verbal memory (p=0.02) than those with blunt mTBIs. Residual PTSD and mTBI symptoms appear to be more prevalent in personnel with blast mTBI. A dose-response gradient for blast mTBI and symptoms suggests that repeated exposures to these injuries may have lingering effects. Answer: The effects of combat-related mild traumatic brain injury (mTBI) are indeed influenced by a history of blast mTBI. Research has shown that individuals with a history of blast-related mTBI demonstrate lower verbal memory shortly after injury compared to those without such a history (PUBMED:26131789). Additionally, a history of blast mTBI may result in more severe posttraumatic stress disorder (PTSD) symptoms and neurocognitive deficits, including issues with visual memory and reaction time (PUBMED:23031200). Furthermore, there is evidence of a dose-response relationship: exposure to blast or combination mTBI was associated with progressively higher levels of PTSD symptoms (PUBMED:23031200), and a greater number of lifetime TBIs amplified the link between combat exposure and PTSD symptoms (PUBMED:29932812). These findings suggest that the history of blast mTBI is an important factor to consider when assessing the long-term impact of combat-related injuries on cognitive and psychological functioning. Clinicians and researchers should take into account the specific nature of the mTBI, including whether it was caused by a blast, as this can have significant implications for the severity and type of symptoms experienced by military personnel.
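The dose-response findings cited in this answer are typically tested by entering an ordered exposure category (for example, none, blunt, blast, blast-blunt combination) as a single trend term in a logistic regression model. The sketch below illustrates that idea on simulated data in Python with statsmodels; the exposure coding, sample size, and effect sizes are hypothetical and are not taken from the cited studies.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000  # hypothetical sample size
# 0 = no mTBI, 1 = blunt, 2 = blast, 3 = blast-blunt combination (hypothetical ordinal coding)
exposure = rng.integers(0, 4, n)
# Simulate clinical-level PTSD symptoms with risk rising across exposure categories.
p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * exposure)))
ptsd_clinical = rng.binomial(1, p)

df = pd.DataFrame({"mtbi_exposure": exposure, "ptsd_clinical": ptsd_clinical})
X = sm.add_constant(df[["mtbi_exposure"]])
trend = sm.Logit(df["ptsd_clinical"], X).fit(disp=0)
print("odds ratio per exposure step:", float(np.exp(trend.params["mtbi_exposure"])))
print("p-value for linear trend:", float(trend.pvalues["mtbi_exposure"]))

A statistically significant trend coefficient corresponds to the kind of dose-response gradient reported in the USASOC records review, although the published analyses may have used different model specifications.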
Instruction: Do quality improvement organizations improve the quality of hospital care for Medicare beneficiaries? Abstracts: abstract_id: PUBMED:32227133 Association of the Comprehensive End-Stage Renal Disease Care Model With Medicare Payments and Quality of Care for Beneficiaries With End-Stage Renal Disease. Importance: Medicare beneficiaries with end-stage renal disease (ESRD) are a medically complex group accounting for less than 1% of the Medicare population but more than 7% of Medicare fee-for-service payments. Objective: To evaluate the association of the Comprehensive End-Stage Renal Disease Care (CEC) model with Medicare payments, health care use, and quality of care. Design, Setting, And Participants: In this economic evaluation, a difference-in-differences design estimated the change in outcomes for 73 094 Medicare fee-for-service beneficiaries aligned to CEC dialysis facilities between the baseline (from January 2014 to March 2015) and intervention periods (from October 2015 to December 2017) relative to 60 464 beneficiaries at matched dialysis facilities. In the CEC model, dialysis facilities, nephrologists, and other providers partner to form ESRD Seamless Care Organizations (ESCOs), specialty-oriented accountable care organizations that coordinate care for beneficiaries with ESRD. ESCOs with expenditures below a benchmark set by the Centers for Medicare & Medicaid Services are eligible to share in savings if they meet quality thresholds. A total of 685 dialysis facilities affiliated with 37 ESCOs participated in the CEC model as of January 2017. Thirteen ESCOs joined the CEC model on October 1, 2015 (wave 1), and 24 ESCOs joined on January 1, 2017 (wave 2). Patients with ESRD who were aligned with CEC dialysis facilities were compared with patients at matched dialysis facilities. Main Outcomes And Measures: Medicare total and service-specific payments per beneficiary per month; hospitalizations, readmissions, and emergency department visits; and select quality measures. Results: Relative to the comparison group (n = 60 464; 55% men; mean [SD] age, 63.5 [14.4] years), total Medicare payments for CEC beneficiaries (n = 73 094; 56% men; mean [SD] age, 63.0 [14.4] years) decreased by $114 in payments per beneficiary per month (95% CI, -$202 to -$26; P = .01), associated primarily with decreases in payments for hospitalizations and readmissions. Payment reductions were offset by shared savings payments to ESCOs, resulting in net losses of $78 in payments per beneficiary per month (95% CI, -$8 to $164; P = .07). Relative to the comparison group, CEC beneficiaries had 5.01 fewer hospitalizations per 1000 beneficiaries per month (95% CI, -8.45 to -1.56; P = .004), as well as fewer catheter placements (CEC beneficiaries with catheter as vascular access for periods longer than 90 days decreased by 0.78 percentage points [95% CI, -1.36 to -0.19; P = .01]) and fewer hospitalizations for ESRD complications (CEC beneficiaries were 0.11 percentage points less likely [95% CI, -0.20 to -0.02; P = .01] to be hospitalized in a given month). Total dialysis sessions and payments increased, suggesting improved adherence to dialysis treatments. Conclusions And Relevance: Early findings from the CEC model demonstrate that a specialty accountable care organization model focused on a particular population was associated with reduced payments and improved quality of care.
Future research can assess the longer-term outcomes of the CEC model and its applicability to populations with other complex chronic conditions. abstract_id: PUBMED:12525231 Change in the quality of care delivered to Medicare beneficiaries, 1998-1999 to 2000-2001. Context: Despite widespread concern regarding the quality and safety of health care, and a Medicare Quality Improvement Organization (QIO) program intended to improve that care in the United States, there is only limited information on whether quality is improving. Objective: To track national and state-level changes in performance on 22 quality indicators for care of Medicare beneficiaries. Design, Patients, And Setting: National observational cross-sectional studies of national and state-level fee-for-service data for Medicare beneficiaries during 1998-1999 (baseline) and 2000-2001 (follow-up). Main Outcome Measures: Twenty-two QIO quality indicators abstracted from state-wide random samples of medical records for inpatient fee-for-service care and from Medicare beneficiary surveys or Medicare claims for outpatient care. Absolute improvement is defined as the change in performance from baseline to follow-up (measured in percentage points for all indicators except those measured in minutes); relative improvement is defined as the absolute improvement divided by the difference between the baseline performance and perfect performance (100%). Results: The median state's performance improved from baseline to follow-up on 20 of the 22 indicators. In the median state, the percentage of patients receiving appropriate care on the median indicator increased from 69.5% to 73.4%, a 12.8% relative improvement. The average relative improvement was 19.9% for outpatient indicators combined and 11.9% for inpatient indicators combined (P<.001). For all but one indicator, absolute improvement was greater in states in which performance was low at baseline than those in which it was high at baseline (median r = -0.43; range: 0.12 to -0.93). When states were ranked on each indicator, the state's average rank was highly stable over time (r = 0.93 for 1998-1999 vs 2000-2001). Conclusions: Care for Medicare fee-for-service plan beneficiaries improved substantially between 1998-1999 and 2000-2001, but a much larger opportunity remains for further improvement. Relative rankings among states changed little. The improved care is consistent with QIO activities over this period, but these cross-sectional data do not provide conclusive information about the degree to which the improvement can be attributed to the QIOs' quality improvement efforts. abstract_id: PUBMED:30933596 Medicare's Bundled Payments For Care Improvement Initiative Maintained Quality Of Care For Vulnerable Patients. The Bundled Payments for Care Improvement (BPCI) initiative established four models to test whether linking payments for an episode of care could reduce Medicare payments while maintaining or improving quality. Evaluations concluded that model 2, the largest, generally lowered payments without reducing quality for the average beneficiary, but these global results could mask adverse findings among vulnerable subpopulations. 
We analyzed changes in emergency department visits, unplanned hospital readmissions, and all-cause mortality within ninety days of hospital discharge among beneficiaries with one or more of three vulnerable characteristics (dementia, dual eligibility for Medicare and Medicaid, and recent institutional care) in 105,458 beneficiary episodes in the period October 2013-December 2016. The results for twelve types of medical and surgical BPCI episodes were evaluated relative to results in matched comparison groups. Our findings suggest that BPCI model 2 did not adversely affect care quality for beneficiaries with vulnerabilities. While this conclusion does not discourage the further development of bundled payment models, policy makers should support ongoing research to ensure that vulnerable populations are not adversely affected by these approaches. abstract_id: PUBMED:12545996 Medicare and Medicaid programs; hospital conditions of participation: quality assessment and performance improvement. Final rule. This final rule requires hospitals to develop and maintain a quality assessment and performance improvement (QAPI) program. In the December 19, 1997 Federal Register, we published a proposed rule to revise the hospital conditions of participation (CoPs). The QAPI CoP was one of the conditions included in the proposed rule. We separated the QAPI CoP from the larger set of hospital CoPs so that it could be published in advance of the remaining CoPs to implement the Administration's initiatives regarding medical errors. QAPI focuses provider efforts on the actual care delivered to patients, the performance of the hospital as an organization, and the impact of treatment furnished by the hospital on the health status of its patients. Specifically, it is important to note that a QAPI is not designed to measure a hospital's quality, but rather a minimum requirement that the hospital systematically examine its quality and implement specific improvement projects on an ongoing basis. State agencies (SAs), during their surveys, review all aspects of a hospital's operations and this review provides a framework in which the SA can assess a hospital's QAPI program. In addition, the QAPI entails all activities required for measuring quality of care and maintaining it at acceptable levels. This typically includes: 1) Identifying and verifying quality-related problems and their underlying cause; 2) Designing and implementing corrective action activities to address deficiencies; 3) Following up to determine the degree of success of an intervention and to detect new problems and opportunities for improvement. Performance improvement activities aim to improve overall performance assuming that there is no permanent threshold for good performance. Under the performance improvement framework, hospitals will continuously study and improve the processes of healthcare and delivery of service. abstract_id: PUBMED:25128008 Emergency care and the national quality strategy: highlights from the Centers for Medicare & Medicaid Services. The Centers for Medicare & Medicaid Services (CMS) of the US Department of Health and Human Services seeks to optimize health outcomes by leading clinical quality improvement and health system transformation through a variety of activities, including quality measure alignment, prioritization, and implementation. 
CMS manages more than 20 federal quality measurement and public reporting programs that cover the gamut of health care providers and facilities, including both hospital-based emergency departments (EDs) and individual emergency physicians. With more than 130 million annual visits, and as the primary portal of hospital admission, US hospital-based EDs deliver a substantial portion of acute care to Medicare beneficiaries. Given the position of emergency care across clinical conditions and between multiple settings of care, the ED plays a critical role in fulfilling all 6 priorities of the National Quality Strategy. We outline current CMS initiatives and future opportunities for emergency physicians and EDs to effect each of these priorities and help CMS achieve the triple aim of better health, better health care, and lower costs. abstract_id: PUBMED:24101680 Health care quality improvement publication trends. To analyze the extent of academic interest in quality improvement (QI) initiatives in medical practice, annual publication trends for the most well-known QI methodologies being used in health care settings were analyzed. A total of 10 key medical- and business-oriented library databases were examined: PubMed, Ovid MEDLINE, EMBASE, CINAHL, PsycINFO, ISI Web of Science, Scopus, the Cochrane Central Register of Controlled Trials, ABI/INFORM, and Business Source Complete. A total of 13 057 articles were identified that discuss at least 1 of 10 well-known QI concepts used in health care contexts, 8645 (66.2%) of which were classified as original research. "Total quality management" was the only methodology to demonstrate a significant decline in publication over time. "Continuous quality improvement" was the most common topic of study across all publication years, whereas articles discussing Lean methodology demonstrated the largest growth in publication volume over the past 2 decades. Health care QI publication volume increased substantially beginning in 1991. abstract_id: PUBMED:35315378 Measuring Inconsistency in Quality Across Patient Groups to Target Quality Improvement. Background: Quality improvement (QI) may be aimed at improving care for all patients, or it may be targeted at only certain patient groups. Health care providers have little guidance when determining when targeted QI may be preferred. Objectives: The aim was to develop a method for quantifying performance inconsistency and guidelines for when inconsistency indicates targeted QI, which we apply to the performance of health plans for different patient groups. Research Design And Measures: Retrospective analysis of 7 Health Care Effectiveness Data and Information Set (HEDIS) measures of clinical care quality. Subjects: All Medicare Advantage (MA) beneficiaries eligible for any of 7 HEDIS measures 2015-2018. Results: MA plans with higher overall performance tended to be less inconsistent in their performance (r=-0.2) across groups defined by race-and-ethnicity and low-income status (ie, dual eligibility for Medicaid or receipt of Low-Income Subsidy). Plan characteristics were usually associated with only small differences in inconsistency. The characteristics associated with differences in consistency [eg, size, Health Maintenance Organization (HMO) status] were also associated with differences in overall performance. We identified 9 (of 363) plans that had large inconsistency in performance across groups (>0.8 SD) and investigated the reasons for inconsistency for 2 example plans. 
Conclusions: This newly developed inconsistency metric may help those designing and evaluating QI efforts to appropriately determine when targeted QI is preferred. It can be used in settings where performance varies across groups, which can be defined by patient characteristics, geographic areas, hospital wards, etc. Effectively targeting QI efforts is essential in today's resource-constrained health care environment. abstract_id: PUBMED:30816813 Use of Medicare Administrative Claims to Identify a Population at High Risk for Adverse Drug Events and Hospital Use for Quality Improvement. Background: A system using administrative claims to monitor medication use patterns and associated adverse events is not currently available. Establishment of a standardized method to identify Medicare beneficiaries at high risk for adverse events, by assessing Medicare Part D medication claim patterns and associated outcomes, including outpatient adverse drug events (ADEs) and hospital use, enhances prevention efforts and monitoring for quality improvement efforts. Objectives: To (a) demonstrate that Medicare claims data can be used to identify a population of beneficiaries at high risk for adverse events for quality improvement and (b) define trends associated with adverse health outcomes in identified high-risk beneficiaries for quality improvement opportunities. Methods: We used Medicare fee-for-service Part D claims data to identify a population at high risk for adverse events by evaluating medication use patterns. This population was taking at least 3 medications, 1 of which was an anticoagulant, an opioid, or an antidiabetic agent. Next, we used associated Part A claims to calculate rates of outpatient ADEs, looking for specific ICD-9-CM or ICD-10-CM codes in the principal diagnosis code position. Rates of hospital use (inpatient hospitalization, observation stays, emergency department visits, and 30-day rehospitalizations) were also evaluated for the identified high-risk population. The data were then shared for targeted quality improvement. Results: We identified 8,178,753 beneficiaries at high risk for adverse events, or 20.7% of the total eligible fee-for-service population (time frame of October 2016-September 2017). The overall rate of outpatient ADEs for beneficiaries at high risk was 46.28 per 1,000, with anticoagulant users demonstrating the highest rate of ADEs (68.52/1,000), followed by opioid users (42.11/1,000) and diabetic medication users (20.72/1,000). As expected, the primary setting for beneficiaries at high risk to seek care for outpatient ADEs was the emergency department, followed by inpatient hospitalizations and observation stays. Conclusions: Medicare claims are an accessible source of data, which can be used to establish for quality improvement a population at high risk for ADEs and increased hospital use. Using medication use patterns to attribute risk and associated outcomes, such as outpatient ADEs and hospital use, is a simple process that can be readily implemented. The described method has the potential to be further validated and used as a foundation to monitor population-based quality improvement efforts for medication safety. Disclosures: This work was performed under contract HHSM-500-2014-QINNCC, Modification No. 000004, funded by Centers for Medicare & Medicaid Services (CMS), an agency of the U.S. Department of Health and Human Services. CMS did not have a role in the analysis. 
At the time of this analysis, Digmann, Peppercorn, Zhang, Irby, and Brock were employees of Telligen, which was awarded the National Coordinating Center-Quality Improvement Organization contract from CMS, which supported the work. Ryan was an employee at Qsource, which was awarded the Quality Innovation Network-Quality Improvement Organization contract from CMS, which supported the work. Thomas was employed by CMS. The content is solely the responsibility of the authors and does not necessarily represent the official views or policies of the CMS. This work is posted on the QIOprogram.org website, as recommended in the Common Rule ( https://www.hhs.gov/ohrp/regulations-and-policy/regulations/common-rule/index.html ). abstract_id: PUBMED:32757219 Mobility Action Group: Using Quality Improvement Methods to Create a Culture of Hospital Mobility. Background/objectives: To describe the Mobility Action Group (MACT), an innovative process to enhance implementation of hospital mobility programs and create a culture of mobility in acute care. Design: Continuous quality improvement intervention with episodic data review. Setting: Inpatient units including medical, surgical, and intensive care settings. Participants: A total of 42 hospitals of varying sizes across the United States. Interventions: The MACT and Change Package were developed to provide a conceptual framework, road map, and step-by-step guide to enable mobility teams to implement mobility programs successfully and meet their mobilization goals. Participants were encouraged to select two to three change tactics to pursue during the first action cycle and select and implement additional tactics in subsequent cycles. Nine learning sessions were held via webinar from April 27, 2017, to October 5, 2017, during which faculty provided brief presentations, facilitation, and group discussion. Measurements: Implementation of programs, walks per day, use of bed and chair alarms, and participant satisfaction. Results: Successful implementation of mobility programs was achieved at most (76%) sites. The proportion of patients who received at least three walks per day increased from 9% to 19%. The proportion of patients who were placed on a bed or chair alarm decreased from 36% to 20%. On average, 69% of participants reported they were "strongly satisfied" with the learning sessions. Most participants found the Change Package (58%) and Toolkit (63%) "very helpful." Since the conclusion of the active initiative, the Change Package has been downloaded 1,200 times. Of those who downloaded it, 48% utilized it to establish a mobility program, and 58% used it at their organization at least once a month. Conclusion: The MACT and Change Package provides an innovative approach emphasizing systemwide change that can help catalyze a culture of mobility in hospitals across the nation, improving the quality of care for hospitalized older adults. J Am Geriatr Soc 68:2373-2381, 2020. abstract_id: PUBMED:30646347 Association of a Care Coordination Model With Health Care Costs and Utilization: The Johns Hopkins Community Health Partnership (J-CHiP). Importance: The Johns Hopkins Community Health Partnership was created to improve care coordination across the continuum in East Baltimore, Maryland. Objective: To determine whether the Johns Hopkins Community Health Partnership (J-CHiP) was associated with improved outcomes and lower spending. 
Design, Setting, And Participants: Nonrandomized acute care intervention (ACI) and community intervention (CI) Medicare and Medicaid participants were analyzed in a quality improvement study using difference-in-differences designs with propensity score-weighted and matched comparison groups. The study spanned 2012 to 2016 and took place in acute care hospitals, primary care clinics, skilled nursing facilities, and community-based organizations. The ACI analysis compared outcomes of participants in Medicare and Medicaid during their 90-day postacute episode with those of a propensity score-weighted preintervention group at Johns Hopkins Community Health Partnership hospitals and a concurrent comparison group drawn from similar Maryland hospitals. The CI analysis compared changes in outcomes of Medicare and Medicaid participants with those of a propensity score-matched comparison group of local residents. Interventions: The ACI bundle aimed to improve transition planning following discharge. The CI included enhanced care coordination and integrated behavioral support from local primary care sites in collaboration with community-based organizations. Main Outcomes And Measures: Utilization measures of hospital admissions, 30-day readmissions, and emergency department visits; quality of care measures of potentially avoidable hospitalizations, practitioner follow-up visits; and total cost of care (TCOC) for Medicare and Medicaid participants. Results: The CI group had 2154 Medicare beneficiaries (1320 [61.3%] female; mean age, 69.3 years) and 2532 Medicaid beneficiaries (1483 [67.3%] female; mean age, 55.1 years). For the CI group's Medicaid participants, aggregate TCOC reduction was $24.4 million, and reductions of hospitalizations, emergency department visits, 30-day readmissions, and avoidable hospitalizations were 33, 51, 36, and 7 per 1000 beneficiaries, respectively. The ACI group had 26 144 beneficiary-episodes for Medicare (13 726 [52.5%] female patients; mean patient age, 68.4 years) and 13 921 beneficiary-episodes for Medicaid (7392 [53.1%] female patients; mean patient age, 52.2 years). For the ACI group's Medicare participants, there was a significant reduction in aggregate TCOC of $29.2 million with increases in 90-day hospitalizations and 30-day readmissions of 11 and 14 per 1000 beneficiary-episodes, respectively, and reduction in practitioner follow-up visits of 41 and 29 per 1000 beneficiary-episodes for 7-day and 30-day visits, respectively. For the ACI group's Medicaid participants, there was a significant reduction in aggregate TCOC of $59.8 million and the 90-day emergency department visit rate decreased by 133 per 1000 episodes, but hospitalizations increased by 49 per 1000 episodes and practitioner follow-up visits decreased by 70 and 182 per 1000 episodes for 7-day and 30-day visits, respectively. In total, the CI and ACI were associated with $113.3 million in cost savings. Conclusions And Relevance: A care coordination model consisting of complementary bundled interventions in an urban academic environment was associated with lower spending and improved health outcomes. Answer: Yes, quality improvement organizations (QIOs) have been associated with improvements in the quality of hospital care for Medicare beneficiaries. The abstracts provided offer evidence of various initiatives and models that have been implemented to enhance care quality and efficiency within the Medicare system. 
For instance, the Comprehensive End-Stage Renal Disease Care (CEC) model, which is a specialty accountable care organization model, was associated with reduced Medicare payments and improved quality of care for beneficiaries with end-stage renal disease (PUBMED:32227133). This model demonstrated a decrease in hospitalizations and readmissions, suggesting improved adherence to dialysis treatments and overall care. Similarly, the study on the change in the quality of care delivered to Medicare beneficiaries between 1998-1999 and 2000-2001 showed substantial improvements in care across 20 of the 22 quality indicators, indicating that care for Medicare fee-for-service plan beneficiaries improved significantly during that period (PUBMED:12525231). The Bundled Payments for Care Improvement (BPCI) initiative, specifically model 2, was found to generally lower payments without reducing quality for the average beneficiary, and importantly, it did not adversely affect care quality for beneficiaries with vulnerabilities (PUBMED:30933596). The final rule on hospital conditions of participation in quality assessment and performance improvement (QAPI) programs also underscores the importance of continuous quality improvement in hospitals, which is expected to enhance the care delivered to patients (PUBMED:12545996). Moreover, the Centers for Medicare & Medicaid Services (CMS) has been actively involved in optimizing health outcomes through clinical quality improvement and health system transformation, which includes emergency care as a critical component in fulfilling the priorities of the National Quality Strategy (PUBMED:25128008). The trend in health care quality improvement (QI) publication also indicates a growing academic interest in QI initiatives in medical practice, reflecting an increased focus on improving the quality of care in health care settings (PUBMED:24101680). Lastly, the Johns Hopkins Community Health Partnership (J-CHiP) is another example where a care coordination model was associated with improved outcomes and lower spending, contributing to better quality care for Medicare and Medicaid participants (PUBMED:30646347). In summary, the evidence from these abstracts suggests that QIOs and related quality improvement initiatives have had a positive impact on the quality of hospital care for Medicare beneficiaries.
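The 1998-2001 QIO abstract above (PUBMED:12525231) defines absolute improvement as the percentage-point change from baseline to follow-up, and relative improvement as that change divided by the distance between baseline performance and perfect performance (100%). The short sketch below simply works that arithmetic through in Python using the median-state figures quoted in the abstract; the function names are ours for illustration and are not part of the original study.

```python
def absolute_improvement(baseline_pct: float, followup_pct: float) -> float:
    """Change in performance from baseline to follow-up, in percentage points."""
    return followup_pct - baseline_pct

def relative_improvement(baseline_pct: float, followup_pct: float) -> float:
    """Absolute improvement divided by the room left below perfect performance (100%)."""
    return (followup_pct - baseline_pct) / (100.0 - baseline_pct)

# Median state's median indicator in the QIO study: 69.5% at baseline, 73.4% at follow-up.
abs_gain = absolute_improvement(69.5, 73.4)   # 3.9 percentage points
rel_gain = relative_improvement(69.5, 73.4)   # ~0.128, i.e. the 12.8% quoted in the abstract
print(f"absolute: {abs_gain:.1f} points, relative: {rel_gain:.1%}")
```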
Instruction: Can the Internet be used effectively to provide sex education to young people in China? Abstracts: abstract_id: PUBMED:17046509 Can the Internet be used effectively to provide sex education to young people in China? Purpose: To assess the feasibility and effectiveness of sex education conducted through the Internet. Methods: Two high schools and four colleges of a university in Shanghai were selected as the research sites. Half of these were assigned to the intervention group and the other half to the control group. The interventions consisted of offering sexual and reproductive health knowledge, service information, counseling and discussion to all grade one students in the intervention group. The intervention phase lasted for 10 months and was implemented through a special website, with web pages, online videos, Bulletin Board System (BBS) and expert mailbox. In total, 624 students from the intervention, and 713 from the control schools and colleges participated in the baseline survey, and about 97% of them were followed up in postintervention survey to assess changes that can be attributed to the sex education interventions provided through the Internet. Results: The median scores of the overall knowledge and of each specific aspect of reproductive health such as reproduction, contraception, condom, sexually transmitted infections (STIs) and human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS) were significantly higher in the intervention group as compared with those in the control group at postintervention (p < .0001), although no significant differences were found between these two groups in the baseline survey (p > .05). Group by time interaction effects in ordinal logistic regression analysis were found on knowledge score (p < .0001) and in attitude of high school students toward sex-related issues (p < .05), suggesting that interventions increased subjects' knowledge significantly and changed high school students' attitudes to being less liberal toward sex. The intervention also had positive influence on students' attitudes toward providing contraceptive service for unmarried people. Conclusions: Providing sex education to students in Shanghai through the Internet was found feasible and effective. The Internet-based sex education program increased students' reproductive health knowledge effectively and changed their attitudes toward sex-related issues in terms of being less liberal toward sex and more favorable to providing services to unmarried young people. The Internet thus offers an important and hitherto untapped potential for providing sex education to students and young people in China. abstract_id: PUBMED:12626177 Can the Internet be used to improve sexual health awareness in web-wise young people? Objectives: To assess Internet use amongst young people to determine whether it would be a practical way to provide sex education and information. Methods: Year 10 students (aged 14-15 years) from North Nottinghamshire schools were asked to participate in focus groups to discuss the Internet. A series of predefined questions were directed to the whole group to generate debate. Areas explored included: Internet access and site; frequency and purpose of Internet use; websites visited; ideas for a genitourinary medicine (GUM) website. Responses were recorded by a hand count or as individual verbal responses. Results: Thirteen focus groups were held involving 287 students of approximately equal sex distribution. 
All had access to Internet facilities at school and 224 (78.0%) had access elsewhere. Access was at least once a week by 178 (62.0%) mostly for e-mail, games, chatlines and homework. No one accessed for health information. One hundred and seventy-nine (62.4%) participants said they would use a GUM website. A 'question line' where they could e-mail questions to a health care professional was of interest to 202 (70.4%) participants. Conclusions: The Internet would be a practical and accessible way of delivering sexual health education to young people, particularly if it is incorporated into activities and websites they enjoy. abstract_id: PUBMED:11999327 AIDS and sex education for young people in China. Although China has had a rich sexual culture for thousands of years, Chinese people are usually unwilling to openly discuss issues of sex. Some parents are quite ignorant of the change in their children's sexual attitude and behaviour. In China today, adolescents are becoming much more sexually liberated. Premarital sex and unplanned pregnancies among teenagers are increasing. Sexually transmitted diseases (STD) including HIV/AIDS are also spreading rapidly. However, young people lack basic information on AIDS/STD and do not know how to protect themselves from these diseases or how to avoid unintended pregnancies. Several major youth peer education programmes in China are mentioned in this paper. Among them, a four-year programme entitled the Australian-Chinese AIDS/STD/Safer Sex Peer Education Programme for Youth, is discussed in some detail. The programme has so far reached over 40000 university and school students. Evaluation results show that the programme is effective in both significantly increasing students' knowledge about AIDS/STDs and changing their attitude towards AIDS patients. In addition, the programme is highly praised by the students. abstract_id: PUBMED:33612032 Sex-seeking on the internet and mobile-based applications among young people in China: An online survey. The aim of this study was to investigate young people's risk behaviors in use of social network applications for sexual purposes. Snowball sampling technique was used to recruit participants online. Logistic regressions were performed to examine interrelationships among risk behaviors and sex-seeking platforms (A, B, C, D, and others). The prevalence of online sex-seeking was 22.2% (1156/5199) among people with sexual experience, and the most debut online sex-seeking happened in 15-24 years old in both men and women (79.8%, 590/739 vs 86.1%, 359/417). The risk behaviors varied in different platforms among 730 young people age 15-24 years. Among men, participants seeking sex via B were more likely to engage in concurrent sexual partnership (aOR: 1.64; 95% CI: 1.01-2.66). Participants seeking sex via C were more likely to engage in drug use (aOR: 1.74; 95% CI: 1.01-3.02) and condomless sex (aOR: 1.75; 95% CI: 1.04-2.95). Participants seeking sex via A, C, or D were all less likely to have homosexual behaviors. Among women, participants seeking sex via B were more likely to have condomless sex (aOR: 2.06; 95% CI: 1.10-3.86). The study indicated that emerging of the HIV epidemic in young people might be driven by risk behaviors during online sex-seeking. Effective intervention programs need to target on different social network platforms. abstract_id: PUBMED:17474831 Internet use among Chinese college students: implications for sex education and HIV prevention. 
The Internet has gained great popularity among Chinese college students, but studies on Internet use behaviors and their relationship with sexual risk perception and behaviors have been limited. A survey was conducted among 1,845 college students in an eastern province of China, in order to assess the pattern of Internet use among Chinese college students, to examine the relationship of Internet use and HIV knowledge and susceptibility, and to explore the relationship of online risk behaviors with sexual status, sexual intention, and sexual perceptions. Data suggest that many students used the Internet frequently and often spent long hours online. Differences between genders and across grades were observed in the patterns of students' Internet use. Male students reported much higher rates of visiting pornographic websites or engaging in other online risk behaviors. Students' HIV knowledge or susceptibility did not differ by the extent of Internet use, but online risks were positively associated with their sexual status, sexual intention, and sexual perceptions. Students who were sexually active, planned to have sex, or possessed permissive attitudes toward pre-marital sex were more likely to report online risk behaviors such as visiting pornographic sites, cyberbullying, or distributing erotic materials via the Internet. The data underscore the importance to provide students with appropriate guidance on their Internet use, especially for reducing online risks and utilizing Internet as a source for sex education and HIV prevention. abstract_id: PUBMED:37372786 Type of Sex Education in Childhood and Adolescence: Influence on Young People's Sexual Experimentation, Risk and Satisfaction: The Necessity of Establishing School Nursing as a Pillar. The aim is to identify the source of sexuality education used during childhood and adolescence and to analyse whether this education influences their sexual attitudes, their ability to cope with unwanted situations they experience and their satisfaction with their sexual lives. This is a non-experimental, quantitative, ex post facto, cross-sectional study. The sample is formed of 675 young people, with 50% of the ages included being between 20 (Quartile 1) and 22 (Quartile 3) years. The data collection was done by means of an online questionnaire, including sociodemographic and Likert questions about their sex life. Fisher "Independence Contrasts" and correlations were used to see and quantify the relationship among the variables. The main source of education was pornography (29.3%) and the internet (12.4%). The source of education has a significant relationship with whether the use of contraceptives is accepted or not (p < 0.001), refusal to use contraceptives (p < 0.001), engaging in risky sexual practices (p < 0.001), facing unwanted sexual situations (p < 0.001) and dissatisfaction with their sex life (p < 0.001). It is necessary that children and adolescents have sex education in safe environments, such as in a home or school setting, and the school nurse plays a vital part in this education. This would reduce the need for adolescents and young people to use the internet and pornography as a source of education. School nurses should be the educational axis to offer children and adolescents a reliable point of information about sex education. 
Joint work with teachers, nurses, students and parents would contribute positively to reducing the number of risky situations young people are facing, and it would promote and improve healthy attitudes towards sex and interpersonal relationships. abstract_id: PUBMED:27625058 What do young people think about their school-based sex and relationship education? A qualitative synthesis of young people's views and experiences. Objectives: Although sex and relationship education (SRE) represents a key strand in policies to safeguard young people and improve their sexual health, it currently lacks statutory status, government guidance is outdated and a third of UK schools has poor-quality SRE. We aimed to investigate whether current provision meets young people's needs. Design: Synthesis of qualitative studies of young people's views of their school-based SRE. Setting: Eligible studies originated from the UK, Ireland, the USA, Australia, New Zealand, Canada, Japan, Iran, Brazil and Sweden. Participants: Studies of students aged 4-19 in full-time education, young adults ≤19 (not necessarily in full-time education) or adults ≤25 if recalling their experiences of school-based SRE. Results: 69 publications were identified, with 55 remaining after quality appraisal (representing 48 studies). The synthesis found that although sex is a potent and potentially embarrassing topic, schools appear reluctant to acknowledge this and attempt to teach SRE in the same way as other subjects. Young people report feeling vulnerable in SRE, with young men anxious to conceal sexual ignorance and young women risking sexual harassment if they participate. Schools appear to have difficulty accepting that some young people are sexually active, leading to SRE that is out of touch with many young people's lives. Young people report that SRE can be negative, gendered and heterosexist. They expressed dislike of their own teachers delivering SRE due to blurred boundaries, lack of anonymity, embarrassment and poor training. Conclusions: SRE should be 'sex-positive' and delivered by experts who maintain clear boundaries with students. Schools should acknowledge that sex is a special subject with unique challenges, as well as the fact and range of young people's sexual activity, otherwise young people will continue to disengage from SRE and opportunities for safeguarding and improving their sexual health will be reduced. abstract_id: PUBMED:33000677 'Safe sex': evaluation of sex education and sexual risk by young adults in Sydney. In this qualitative study, 28 young adults (18-29-year-olds) living in Sydney define 'safe sex', report on their experiences of sex education, and reflect on the relationship between the sex education received in school and the reality of sexual activity. Participants had a broadly neoliberal understanding of health and risk as something individuals should manage through an ideal of rational decision-making. Yet regardless of how comprehensive or limited their sex education experiences, most noted a wide gulf between safe sex as taught in the classroom and the reality of actual sexual encounters, arguing that sexual decision-making was rarely rational, and shaped by age and experience, drug use, sexual desire, and complicated interpersonal dynamics. Sex education that aims to empower by presenting individuals as sole decision-makers when it comes to prophylactic use fails to acknowledge the social contexts of sex described by participants. 
Significantly, young people defined 'safe sex' not only in biomedical terms such as disease transmission and pregnancy, but also in terms of the social and psychological consequences of sexual encounters, with an emphasis on consent. abstract_id: PUBMED:24467710 Internet-based health education in China: a content analysis of websites. Background: The Internet is increasingly being applied in health education worldwide; however, there is little knowledge of its use in Chinese higher education institutions. The present study provides the first review and highlights the deficiencies and required future advances in Chinese Internet-based health education. Methods: Two authors independently conducted a duplicate Internet search in order to identify information regarding Internet-based health education in China. Results: The findings showed that Internet-based education began in China in September 1998. Currently, only 16 of 150 (10.7%) health education institutions in China offer fee-based online undergraduate degree courses, awarding associates and/or bachelors degrees. Fifteen of the 16 institutions were located in the middle or on the eastern coast of China, which were more developed than other regions. Nursing was the most popular discipline in Internet-based health education, while some other disciplines, such as preventive medicine, were only offered at one university. Besides degree education, Chinese institutions also offered non-degree online training and free resources. The content was mainly presented in the form of PowerPoint slides or videos for self-learning. Very little online interactive mentoring was offered with any of the courses. Conclusions: There is considerable potential for the further development of Internet-based health education in China. These developments should include a focus on strengthening cooperation among higher education institutions in order to develop balanced online health curricula, and on enhancing distance education in low- and middle-income regions to meet extensive learning demands. abstract_id: PUBMED:33633497 What and how: doing good research with young people, digital intimacies, and relationships and sex education. As part of a project funded by the Wellcome Trust, we held a one-day symposium, bringing together researchers, practitioners, and policymakers, to discuss priorities for research on relationships and sex education (RSE) in a world where young people increasingly live, experience, and augment their relationships (whether sexual or not) within digital spaces. The introduction of statutory RSE in schools in England highlights the need to focus on improving understandings of young people and digital intimacies for its own sake, and to inform the development of learning resources. We call for more research that puts young people at its centre; foregrounds inclusivity; and allows a nuanced discussion of pleasures, harms, risks, and rewards, which can be used by those working with young people and those developing policy. Generating such research is likely to be facilitated by participation, collaboration, and communication with beneficiaries, between disciplines and across sectors. Taking such an approach, academic researchers, practitioners, and policymakers agree that we need a better understanding of RSE's place in lifelong learning, which seeks to understand the needs of particular groups, is concerned with non-sexual relationships, and does not see digital intimacies as disconnected from offline everyday 'reality'. 
Answer: Yes, the Internet can be used effectively to provide sex education to young people in China. A study conducted in Shanghai involving interventions through a special website, online videos, and other online tools showed that students in the intervention group had significantly higher median scores in reproductive health knowledge compared to the control group. The Internet-based sex education program was found to be effective in increasing students' knowledge and changing their attitudes toward sex-related issues, suggesting that the Internet offers an important potential for providing sex education to young people in China (PUBMED:17046509). Moreover, the Internet's widespread use among young people and its potential for delivering sexual health education have been recognized. Young people are reported to be open to using a genitourinary medicine (GUM) website and are interested in e-mailing questions to health care professionals, indicating that the Internet is a practical and accessible way of delivering sexual health education (PUBMED:12626177). However, it is important to note that while the Internet is a valuable tool for sex education, it also presents risks. For instance, young people's risk behaviors in using social network applications for sexual purposes have been observed, and effective intervention programs need to target different social network platforms (PUBMED:33612032). Additionally, the type of sex education received during childhood and adolescence, including that from the Internet and pornography, has been shown to influence young people's sexual experimentation, risk, and satisfaction, highlighting the need for reliable sources of sex education (PUBMED:37372786). In summary, the Internet can be an effective medium for sex education in China, but it should be used with caution and in conjunction with other educational strategies to ensure that young people receive accurate information and learn to navigate the associated risks responsibly.
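Several of the abstracts above (for example PUBMED:17046509 and PUBMED:33612032) report their findings as odds ratios from logistic regression, including a group-by-time interaction to capture the intervention effect. The sketch below shows, on simulated data, what such a model looks like in Python with statsmodels; the variable names, effect sizes, and the statsmodels dependency are our assumptions for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1200
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),  # 1 = received the Internet-based programme
    "post": rng.integers(0, 2, n),          # 1 = post-intervention survey wave
})
# Simulate a binary knowledge outcome that improves mainly in the intervention group post-test.
log_odds = -0.5 + 0.1 * df.intervention + 0.2 * df.post + 1.0 * df.intervention * df.post
df["adequate_knowledge"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

model = smf.logit("adequate_knowledge ~ intervention * post", data=df).fit(disp=False)
print(np.exp(model.params))  # odds ratios; the interaction term carries the intervention effect
```

The interaction coefficient, exponentiated, is the extra improvement in the odds of adequate knowledge seen in the intervention group after the programme, over and above any general change between the two survey waves.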
Instruction: Does preoperative atrial fibrillation reduce survival after coronary artery bypass grafting? Abstracts: abstract_id: PUBMED:15111135 Does preoperative atrial fibrillation reduce survival after coronary artery bypass grafting? Background: Preoperative atrial fibrillation has been identified as a risk factor for reduced long-term survival after coronary artery bypass grafting. This study sought to determine whether atrial fibrillation is merely a marker for high-risk patients or an independent risk factor for time-related mortality. Methods: From 1972 to 2000, 46,984 patients underwent primary isolated coronary artery bypass grafting; 451 (0.96% prevalence) had electrocardiogram-documented preoperative atrial fibrillation (n = 411) or flutter (n = 40). Characteristics of patients with and without atrial fibrillation were contrasted by multivariable logistic regression to form a propensity score. With this, comparable groups with and without atrial fibrillation were formed by pairwise propensity-matching to assess survival. Results: Patients with preoperative atrial fibrillation were older (67 +/- 9.0 versus 59 +/- 9.8 years, p < 0.0001), had more left ventricular dysfunction (66% versus 52%, p < 0.0001) and hypertension (73% versus 59%, p < 0.0001), but less severe angina (39% moderate or severe versus 49%, p < 0.0001). Many of these factors are themselves predictors of increased time-related mortality. In propensity-matched patients, survival at 30 days and at 5 and 10 years for patients with versus without atrial fibrillation was 97% versus 99%, 68% versus 85%, and 42% versus 66%, respectively, a survival difference at 10 years of 24%. Median survival in patients with atrial fibrillation was 8.7 years versus 14 years for those without it. Conclusions: Atrial fibrillation in patients undergoing coronary artery bypass grafting is a marker for high-risk patients; in addition, atrial fibrillation itself substantially reduces long-term survival. Thus, if patients in atrial fibrillation require surgical revascularization, it is appropriate to consider performing a concomitant surgical ablation procedure. abstract_id: PUBMED:23988129 Preoperative atrial fibrillation and long-term survival after open heart surgery in a rural tertiary heart institute. Background: Preoperative atrial fibrillation (AF) is associated with increased morbidity and mortality after open heart surgery. However, the impact of preoperative AF on long-term survival after open heart surgery has not been widely examined in rural populations. Patients from rural regions are less likely to receive treatment for cardiac conditions and to have adequate medical insurance coverage. Objective: To examine the influence of preoperative AF on long-term survival following open heart surgery in rural eastern North Carolina. Methods: Long-term survival was compared in patients with and without preoperative AF after coronary artery bypass grafting (CABG) and CABG plus valve (CABG + V) surgery between 2002 and 2011. Hazard ratios (HR) and 95% confidence intervals (CI) were computed using a Cox regression model. Results: The study population consisted of 5438 patients. A total of 263 (5%) patients had preoperative AF. Preoperative AF was an independent predictor of long-term survival (open heart surgery: adjusted HR = 1.6, 95% CI = 1.3-2.0; CABG: adjusted HR = 1.6, 95% CI = 1.3-2.1; CABG + V: adjusted HR = 1.6, 95% CI = 1.1-2.3). 
Conclusion: Preoperative AF is an important predictor of long-term survival after open heart surgery in this rural population. abstract_id: PUBMED:19329335 Preoperative atrial fibrillation decreases event-free survival following cardiac surgery. Objective: To investigate the relationship between preoperative atrial fibrillation and early and late clinical outcomes following cardiac surgery. Methods: A retrospective cohort including all consecutive coronary artery bypass graft and/or valve surgery patients between 1995 and 2005 was identified (n = 9796). No patient had a concomitant surgical AF ablation. The association between preoperative atrial fibrillation and in-hospital outcomes was examined. We also determined late death and cardiovascular-related re-hospitalization by linking to administrative health databases. Median follow-up was 2.9 years (maximum 11 years). Results: The prevalence of preoperative atrial fibrillation was 11.3% (n = 1105), ranging from 7.2% in isolated CABG to 30% in valve surgery. In-hospital mortality, stroke, and renal failure were more common in atrial fibrillation patients (all p < 0.0001), although the association between atrial fibrillation and mortality was not statistically significant in multivariate logistic regression. Longitudinal analyses showed that preoperative atrial fibrillation was associated with decreased event-free survival (adjusted hazard ratio 1.55, 95% confidence interval 1.42-1.70, p < 0.0001). Conclusions: Preoperative atrial fibrillation is associated with increased late mortality and recurrent cardiovascular events post-cardiac surgery. Effective management strategies for atrial fibrillation need to be explored and may provide an opportunity to improve the long-term outcomes of cardiac surgical patients. abstract_id: PUBMED:24731743 Preoperative brain natriuretic peptide predicts late mortality and functional class but not hospital readmission after cardiac surgery. Objectives: N-terminal brain natriuretic peptide (NT-proBNP) is an established biomarker of heart failure and has been found to predict mortality and morbidity after cardiac surgery. The aim of this study was to investigate whether preoperative NT-proBNP can predict postoperative New York Heart Association (NYHA) functional class and hospital readmission in addition to morbidity and mortality. Design: Retrospective. Setting: University hospital. Participants: All patients undergoing aortic valve replacement for aortic stenosis and coronary artery bypass grafting from January to December 2008 (n = 390). Measurements And Main Results: Preoperative NT-proBNP was recorded prospectively. Five-year mortality was obtained through national registries. Postoperative functional class, morbidity, and hospital readmission were obtained through telephone interviews. Patients were divided into quartiles based on preoperative NT-proBNP; the medians of each quartile were 103 ng/L, 291 ng/L, 825 ng/L and 2,375 ng/L. Increased preoperative NT-proBNP was associated with reduced postoperative functional class. In the first quartile, 7% (7/97) were in NYHA functional class III-IV compared to 26% (25/97) in the fourth quartile (p < 0.01). Increased preoperative NT-proBNP was also associated with reduced long-term survival (p < 0.01). The covariate adjusted hazard ratio for mortality in the fourth quartile was 2.9 (1.61-5.08; p < 0.01) compared to the other quartiles. No association was found between preoperative NT-proBNP and postoperative hospital readmission. 
Conclusions: Increased preoperative NT-proBNP is associated with reduced long-term survival and functional class but not hospital readmission post-cardiac surgery. Thus, NT-proBNP might have additive value to established risk factors in the preoperative assessment of patients undergoing cardiac surgery. abstract_id: PUBMED:21340210 Influence of preoperative serum N-terminal pro-brain type natriuretic peptide on the postoperative outcome and survival rates of coronary artery bypass patients. Background: The N-terminal fragment of pro-brain type natriuretic peptide (NT-proBNP) is an established biomarker for cardiac failure. Objective: To determine the influence of preoperative serum NT-proBNP on postoperative outcome and mid-term survival in patients undergoing coronary artery bypass grafting (CABG). Methods: In 819 patients undergoing isolated CABG surgery preoperative serum NT-proBNP levels were measured. NT-proBNP was correlated with various postoperative outcome parameters and survival rate after a median follow-up time of 18 (0.5-44) months. Risk factors of mortality were identified using χ², Mann-Whitney test, and Cox regression. Results: NT-proBNP levels > 430 ng/ml and > 502 ng/ml predicted hospital and overall mortality (p < 0.05), with an incidence of 1.6% and 4%, respectively. Kaplan-Meier analysis revealed decreased survival rates in patients with NT-proBNP > 502 ng/ml (p = 0.001). Age, preoperative serum creatinine, diabetes, chronic obstructive pulmonary disease, low left ventricular ejection fraction and BNP levels >502 ng/ml were isolated as risk factors for overall mortality. Multivariate Cox regression analysis, including the known factors influencing NT-proBNP levels, identified NT-proBNP as an independent risk factor for mortality (OR = 3.079 (CI = 1.149-8.247), p = 0.025). Preoperative NT-proBNP levels >502 ng/ml were associated with increased ventilation time (p = 0.005), longer intensive care unit stay (p = 0.001), higher incidence of postoperative hemofiltration (p = 0.001), use of intra-aortic balloon pump (p < 0.001), and postoperative atrial fibrillation (p = 0.031). Conclusion: Preoperative NT-proBNP levels > 502 ng/ml predict mid-term mortality after isolated CABG and are associated with significantly higher hospital mortality and perioperative complications. abstract_id: PUBMED:17888963 Observational study on the beneficial effect of preoperative statins in reducing atrial fibrillation after coronary surgery. Background: Recent evidence supports the important role of inflammation in atrial fibrillation (AF) after coronary artery bypass grafting (CABG) and there is growing evidence that statin has cardiac antiarrhythmic effects. The aim of this study was to assess the efficacy of preoperative statins in preventing AF after CABG in a longitudinal observational study. Methods: Over a two-year period, 405 consecutive patients underwent isolated CABG procedures. Univariate analysis was performed exploring the relationship regarding statin use and AF development. A propensity score for treatment with statins was obtained from core patient characteristics. The role of statin therapy on postoperative AF was assessed by means of a conditional logistic model, while stratifying on the quintiles of the propensity score. All analysis was performed retrospectively. Results: Postoperative AF occurred in 29.5% of the patients with preoperative statin therapy compared with 40.9% of those patients without it (p = 0.021). 
No statistical differences among development of AF and type, dose, or duration of preoperative statin therapy were observed. Preoperative statins were associated with a 42% reduction in risk of AF development after CABG surgery (odds ratio [OR] 0.58, 95% confidence interval [CI] 0.37 to 0.91, p = 0.017, while stratifying on the propensity score). No different effect of statins on AF was observed with respect to age groups (≤70 and >70 years) (interaction p = 0.711). Conclusions: Preoperative statins may reduce postoperative AF after CABG. Patients undergoing elective revascularization may benefit from a preventive statin approach. abstract_id: PUBMED:16682213 Preoperative C-reactive protein is predictive of long-term outcome after coronary artery bypass surgery. Background: Increased levels of C-reactive protein (CRP) are associated with the presence and severity of atherosclerosis, and with increased risk of coronary events as well as of cardiac events after coronary percutaneous intervention. Methods: We have investigated whether preoperative CRP had an impact on the long-term outcome of 843 patients who underwent on-pump coronary artery bypass surgery (CABG). Results: Among operative survivors, patients with preoperative CRP < 1.0 mg/dL had significantly better 12-year overall survival rate (74.1% vs 63.0%, p = 0.004) and survival freedom from fatal cardiac event (86.7% vs 78.1%). Multivariate analysis including patients' age, extracardiac arteriopathy, urgent/emergent operation, recent myocardial infarction, congestive heart failure, left ventricular ejection fraction, atrial fibrillation, transient ischemic attack/stroke, number of distal anastomoses, diabetes, and preoperative CRP ≥1.0 mg/dL or <1.0 mg/dL, showed that the latter was an independent predictor of late all-cause mortality (p = 0.017, RR 1.60, 95% CI 1.09-2.35). Its impact on overall survival was particularly evident in patients with left ventricular ejection fraction <50% (CRP < 1.0 mg/dL: 58.7% vs CRP ≥1.0 mg/dL: 43.7%, p < 0.00001). Conclusions: Increased preoperative levels of CRP are associated with significantly decreased overall survival after primary on-pump CABG. abstract_id: PUBMED:33200617 Preoperative atrial fibrillation in association with reduced haemoglobin predicts increased 30-d mortality after cardiac surgery. Objectives: Atrial fibrillation is the most common arrhythmia occurring after cardiac surgery. Less attention has been focused on preoperative atrial fibrillation and anaemia as risk factors for mortality after cardiac surgery. The aim of this study was to determine preoperative risk factors for 30-d mortality after open-heart surgery. Design. The study population consisted of 2015 patients (73.4% men; mean age 68 years) undergoing coronary artery bypass grafting (CABG) (52.0%), aortic valve replacement (AVR) (18.6%), AVR and CABG (10.0%), mitral valve plasty or replacement (14.0%), and AVR and aortic root reconstruction (ARR) (5.5%) in Kuopio University Hospital from January 2013 to December 2016. Univariate and multivariate Cox proportional hazards models were used for statistical analyses. Kaplan-Meier survival curves were generated. Results. Total 30-d mortality was 1.8%. 
By Cox regression analysis, predictors of 30-d mortality (hazard ratio [HR] [95% confidence interval [CI]]) included female gender (1.95 [1.00-3.77]), preoperative atrial fibrillation (2.38 [1.12-5.03]), reduced haemoglobin level (3.40 [1.47-7.90]), and pulmonary congestion (3.16 [1.52-6.55]). The combination of preoperative reduced haemoglobin and preoperative atrial fibrillation was a strong predictor (12.37 [4.40-34.77], p < .001). Estimated glomerular filtration rate (eGFR) predicted 30-d mortality in univariate models but was not an independent predictor in multivariate models. Conclusions. According to the main findings of our study, the combination of preoperative atrial fibrillation and reduced haemoglobin level substantially increases the risk of 30-d mortality after cardiac surgery. Identification of high-risk patients pre-operatively could help to make optimal clinical decisions for timing of operation and perioperative treatment. abstract_id: PUBMED:19673591 Statins and coronary artery bypass graft surgery: preoperative and postoperative efficacy and safety. Background: In patients with native coronary artery disease, strong evidence supports the use of statins to reduce the risk of recurrent cardiovascular events and improve survival. However, for patients undergoing coronary artery bypass graft surgery (CABG), statins appear to be underutilized, and concerns have been raised regarding their perioperative safety. Objective: The goal of this systematic review is to evaluate the safety and efficacy of statin therapy before and after coronary surgical revascularization. Methods: A systematic review was performed to retrieve relevant articles from the Medline database published between 1987 and January 2009. Results: Administered before CABG, statins have been demonstrated to reduce perioperative mortality, stroke and atrial fibrillation. Preoperative statin therapy also reduces the systemic inflammatory response associated with cardiopulmonary bypass. Following CABG, statins inhibit saphenous vein graft disease and the progression of atherosclerosis in native coronary arteries. In addition, postoperative statins reduce the recurrence of cardiovascular events and improve all-cause mortality. High-intensity lipid reduction to achieve low-density lipoprotein levels to 70 mg/dl may benefit post-CABG patients, but this has yet to be evaluated prospectively. Adverse effects related to perioperative statin therapy seem to be extremely rare, and little data are available to support the practice of withholding statin therapy before or after surgery. Conclusion: Numerous studies have demonstrated that statins improve the outcomes of patients undergoing CABG. The benefits seem to outweigh the risks associated with their use, both in the preoperative and postoperative period. In the absence of contraindications, essentially all CABG patients are candidates for life-long statin therapy that ideally should be started before surgery. The optimal postoperative statin regimen remains unknown and should be the subject of future study. abstract_id: PUBMED:28826895 Effect of Preoperative Beta-Blocker Use on Outcomes Following Cardiac Surgery. Recent studies suggest that the use of preoperative β blockers in cardiac surgery may not provide improved mortality rates and may even contribute to negative clinical outcomes. 
We therefore assessed the role of β blockers on several outcomes after cardiac surgery (delirium, acute kidney injury [AKI], stroke, atrial fibrillation (AF), mortality, and hospital length of stay) in 4,076 patients who underwent elective coronary artery bypass grafting, coronary artery bypass grafting + valve, or valve cardiac surgery from November 1, 2009, to September 30, 2015, at Vanderbilt Medical Center. Clinical data from 2 prospectively collected datasets at our institution were reviewed: the Cardiac Surgery Perioperative Outcomes Database and the Society of Thoracic Surgeons Database. Preoperative β-blocker use was defined by Society of Thoracic Surgeons guidelines as patients receiving a β blocker within 24 hours preceding surgery. Of the included patients, 2,648 (65.0%) were administered a β blocker within 24 hours before surgery. Adjusting for possible confounders, preoperative β-blocker use was associated with increased odds of AKI stage 2 (odds ratio 1.96, 95% confidence interval 1.19 to 3.24, p < 0.01). There was no evidence that β-blocker use had an independent association with postoperative delirium, AKI stages 1 and 3, stroke, AF, mortality, or prolonged length of stay. A secondary propensity score analysis did not show a marginal association between β-blocker use and any outcome. In conclusion, we did not find significant evidence that preoperative β-blocker use was associated with postoperative delirium, AF, AKI, stroke, or mortality. Answer: Yes, preoperative atrial fibrillation has been identified as a factor that reduces survival after coronary artery bypass grafting (CABG). Studies have shown that patients with preoperative atrial fibrillation are at higher risk and have reduced long-term survival compared to those without atrial fibrillation. Specifically, one study found that survival at 5 and 10 years for patients with preoperative atrial fibrillation was significantly lower than for those without it, with a median survival of 8.7 years versus 14 years, respectively (PUBMED:15111135). Another study confirmed that preoperative atrial fibrillation was an independent predictor of long-term survival after open heart surgery, including CABG (PUBMED:23988129). Additionally, preoperative atrial fibrillation was associated with decreased event-free survival following cardiac surgery, indicating an increased risk of late mortality and recurrent cardiovascular events (PUBMED:19329335). These findings suggest that preoperative atrial fibrillation is not merely a marker for high-risk patients but also an independent risk factor for reduced survival after CABG. Therefore, it may be appropriate to consider concomitant surgical ablation procedures for patients in atrial fibrillation requiring surgical revascularization (PUBMED:15111135).
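The survival analyses summarized above (for example, the Cox proportional hazards models for 30-day mortality in PUBMED:33200617) can be sketched in outline with standard statistical software. The example below is illustrative only: the data are simulated, the variable names and effect sizes are hypothetical, and the lifelines package is used here as one possible tool rather than one named by the study authors.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
preop_af = rng.binomial(1, 0.3, n)   # preoperative atrial fibrillation (assumed prevalence)
low_hb = rng.binomial(1, 0.25, n)    # reduced haemoglobin
female = rng.binomial(1, 0.27, n)

# Simulate event times so that the AF-plus-low-haemoglobin combination carries the highest hazard.
rate = 0.001 * np.exp(0.8 * preop_af + 1.0 * low_hb + 0.5 * female)
event_time = rng.exponential(1.0 / rate)
time = np.minimum(event_time, 30.0)          # administrative censoring at 30 days
died = (event_time <= 30.0).astype(int)

df = pd.DataFrame({"time": time, "died": died, "preop_af": preop_af,
                   "low_hb": low_hb, "female": female})
df["af_and_low_hb"] = df["preop_af"] * df["low_hb"]   # combination term, analogous to the one reported above

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="died")
print(cph.summary[["coef", "exp(coef)", "p"]])        # exp(coef) is the hazard ratio

With real patient-level follow-up data, the same fit call would return hazard ratios directly comparable to those quoted above (for example, 2.38 for preoperative atrial fibrillation).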
Instruction: Does the ratio of serum aldosterone to plasma renin activity predict the efficacy of diuretics in hypertension? Abstracts: abstract_id: PUBMED:20009770 Does the ratio of serum aldosterone to plasma renin activity predict the efficacy of diuretics in hypertension? Results of RENALDO. Objectives: We hypothesized that the aldosterone: renin ratio (ARR) predicts the antihypertensive response to the mineralocorticoid receptor antagonist spironolactone (SPIRO) when compared with bendroflumethiazide (BFZ). Methods: We conducted a randomized, crossover trial in hypertensive patients with either high ARR (HARR, defined as >750 and plasma aldosterone >250 pmol/l) or low ARR (LARR, defined as <300 and plasma renin activity <10 ng/ml per h). Each group took SPIRO 50 mg once daily for 12 weeks and BFZ 2.5 mg once daily for 12 weeks in random order separated by 2-week washout. Patients with mean 24-h systolic ambulatory blood pressure (SABP) of at least 140 mmHg were included. The primary endpoint was the difference in SABP between SPIRO and BFZ in patients with HARR compared with those with LARR. Results: One hundred and eleven patients (60 HARR and 51 LARR) completed the study. SABP at 12 weeks in the HARR group was 129.4 mmHg on SPIRO and 134.4 mmHg on BFZ [difference -5.01; 95% confidence interval (CI) -7.51, -2.52; P < 0.0002]. In the LARR group, SABP was 129.7 mmHg on SPIRO and 133.1 mmHg on BFZ [difference -3.43 (95% CI -6.18, -0.68) P < 0.01]. Difference between groups (HARR vs. LARR) was -1.58 mmHg (95% CI -5.25, 2.08; not significant, P = 0.394). In a secondary analysis of the overall study population of 111 patients, SABP reduction with SPIRO 50 mg was superior to BFZ 2.5 mg [SPIRO -14.8 mmHg, BFZ -10.5 mmHg, difference -4.29 mmHg (95% CI -6.12, -2.46)]. Results were similar for secondary endpoints. Plasma renin activity or aldosterone did not predict blood pressure response to SPIRO. Results were independent of concomitant angiotensin-converting enzyme inhibitor or angiotensin receptor blocker use. Conclusion: The ARR did not predict the blood pressure response to SPIRO. SPIRO 50 mg was significantly more effective than BFZ 2.5 mg in lowering SABP irrespective of baseline ARR, plasma renin activity or aldosterone. abstract_id: PUBMED:30404585 The relationship between plasma renin activity and serum lipid profiles in patients with primary arterial hypertension. Introduction: The aim of the study was to evaluate clinical and biochemical differences between patients with low-renin and high-renin primary arterial hypertension (AH), mainly in reference to serum lipids, and to identify factors determining lipid concentrations. Materials And Methods: In untreated patients with AH stage 1 we measured plasma renin activity (PRA) and subdivided the group into low-renin (PRA < 0.65 ng/mL/h) and high-renin (PRA ⩾ 0.65 ng/mL/h) AH. We compared office and 24-h ambulatory blood pressure, serum aldosterone, lipids and selected biochemical parameters between subgroups. Factors determining lipid concentration in both subgroups were assessed in regression analysis. Results: Patients with high-renin hypertension (N = 58) were characterized by higher heart rate (p = 0.04), lower serum sodium (p < 0.01) and aldosterone-to-renin ratio (p < 0.01), and significantly higher serum aldosterone (p = 0.03), albumin (p < 0.01), total protein (p < 0.01), total cholesterol (p = 0.01) and low-density lipoprotein cholesterol (LDL-C) (p = 0.04) than low-renin subjects (N = 39).
In univariate linear regression, only PRA in the low-renin group was positively related to LDL-C (R2 = 0.15, β = 1.53, p = 0.013); this association remained significant after adjustment for age, sex, and serum albumin and aldosterone concentrations. Conclusions: Higher serum levels of total cholesterol and LDL-C characterized high-renin subjects, but the association between LDL-C level and PRA existed only in low-renin primary AH. abstract_id: PUBMED:3537932 Diuretics Microaspiration techniques and clearance studies have shown that reabsorption of filtered sodium approximates 65% in the proximal tubule, 25 to 30% in the ascending limb of the loop of Henle and 5 to 10% in the dilution segment. Reabsorption in the Henle loop is of special significance as it governs the process of dilution-concentration of urine. Moreover, inhibition of sodium reabsorption in the loop of Henle necessarily produces a substantial loss of sodium since only a fairly small fraction of urinary sodium is reabsorbed beyond the Henle loop (dilution segment, distal tubule). Excretion of water and electrolytes is regulated by humoral factors, such as the renin-angiotensin-aldosterone system, some prostaglandins and certain kinins. Factors that promote excretion of sodium, produced in particular by the myocardium, have recently been demonstrated. The correlation between blood pressure and salt has been substantiated by many findings. Diuretics are commonly used to treat high blood pressure as well as edema. Recent evidence indicates that sodium transport is altered in idiopathic hypertension, at least in red blood cells. Clinical trials of diuretics are designed to localize the drug's action and quantify its saluretic activity (evaluation of potency and effectiveness--single doses, sustained treatment). Furthermore, the minimal efficient antihypertensive dosage should be determined. Diuretics can be divided into two groups according to whether they produce an increase or decrease in serum potassium. Diuretics that are capable of producing hypokalemia belong to two main families. One consists of the Henle loop diuretics that interfere with the mechanisms of dilution-concentration of urine. Action of these drugs is potent and short-lived. For instance, following a single dose of furosemide, excretion of sodium can reach 25-30% of filtered sodium; renal blood flow increases; CH2O and TCH2O decrease. With furosemide, induction of diuresis is rapid (within a few minutes after IV injection and 20 min after oral ingestion); elimination half-life is 50 min; absolute bioavailability is 50-70%; 95% of the drug is bound to plasma proteins; elimination is mainly through the kidneys. Other Henle loop diuretics include ethacrynic acid, whose elimination half-life is less than one hour; bumetanide, which is 40 times more potent than furosemide; muzolimine, whose action is more lasting despite the fact that only 65% of the drug is bound to plasma proteins; and ozolinone, which has a saluretic action comparable to that of furosemide and in addition exerts a direct vasodilating effect. (ABSTRACT TRUNCATED AT 400 WORDS)
Herein we evaluated the value of the urinary NaK ratio in predicting the surgical outcomes of patients with unilateral primary aldosteronism (uPA). Methods: This non-concurrent prospective cohort study was conducted from 2011 to 2017 and included 241 uPA patients who had undergone adrenalectomy. Predictors of successful clinical outcomes were analyzed using logistic regression. Results: Among the 241 uPA patients, 197 (81.7%) achieved clinical complete or partial success. A urinary sodium potassium ratio <3 (odds ratio (OR): 2.5; 95% confidence interval (CI): 1.2-5.4; p = 0.015), body mass index <25 kg/m2 (OR: 2.82; 95% CI: 1.31-6.06; p = 0.008), renin <1 ng/mL/h (OR: 2.51; 95% CI: 1.01-6.21; p = 0.047) and mean preoperative blood pressure >115 mmHg (OR: 5.02; 95% CI: 2.10-11.97; p < 0.001) could predict clinical success after adrenalectomy. Furthermore, higher pre-treatment plasma aldosterone (OR: 1.014; 95% CI 1.005-1.024; p = 0.002) or lower serum potassium (OR: 0.523; 95% CI: 0.328-0.836; p = 0.007) were correlated with a lower urinary NaK ratio (<3), and log urinary NaK ratio was positively correlated with serum C-reactive protein (β value 2.326; 95% CI 0.029-4.623; p = 0.047). Conclusions: uPA patients with a lower urinary NaK ratio, due to high plasma aldosterone and low serum potassium concentrations, were more likely to have clinical success after adrenalectomy. uPA patients with a higher urinary NaK ratio had a more severe inflammatory status, and possibly more resistant hypertension post-operatively. abstract_id: PUBMED:35689092 Plasma and serum prorenin concentrations in diabetes, hypertension, and renal disease. Although the renin-angiotensin-aldosterone system plays a crucial role in fluid homeostasis and cardiovascular disease pathophysiology, measurements of plasma prorenin levels are still unavailable in clinical practice. We previously found that prorenin molecules in human blood underwent significant posttranslational modifications and were undetectable using immunological assays that utilized antibodies specifically recognizing unmodified recombinant prorenin. Using a sandwich enzyme-linked immunosorbent assay that captures posttranslationally modified prorenins with their prosegment antibodies, we measured plasma and serum prorenin concentrations in 219 patients with diabetes mellitus, hypertension and/or renal disease and compared them with those of 40 healthy controls. The measured values were not significantly different from those of the healthy controls and were 1,000- to 100,000-fold higher than previously reported levels determined using conventional assay kits. Multiple regression analyses showed that body weight, serum albumin levels, and serum creatinine levels negatively correlated with plasma prorenin levels, while the use of loop diuretics was associated with elevated plasma prorenin levels. Blood pressure, HbA1c, and plasma renin activity were not independent variables affecting plasma prorenin levels. In contrast, serum prorenin levels were unaffected by any of the above clinical parameters. The association of the plasma prorenin concentration with indices reflecting body fluid status suggests the need to scrutinize its role as a biomarker, while serum prorenins are less likely to have immediate diagnostic value. abstract_id: PUBMED:9198849 The clinical spectrum of potassium-sparing diuretics The role of potassium-sparing diuretics in the treatment of hypertension is discussed.
The results of two newer case-control studies, both showing an increased risk of sudden cardiac death in patients receiving non-potassium-sparing diuretics compared with those receiving potassium-sparing diuretics, are presented. As a consequence of these studies, thiazides should be given only at a low dose or in combination with a potassium-sparing agent. Data from several large intervention trials in elderly patients with hypertension show that treatment of high blood pressure with potassium-sparing diuretics is clearly beneficial since cerebral and cardiac events are both significantly reduced. Finally, potassium-sparing diuretics, especially spironolactone, play an important role in controlling high blood pressure in patients with primary aldosteronism. The presentation of a case report demonstrates that development of resistance toward medical treatment may be due to elevated aldosterone levels and suppressed plasma renin activity and that the addition of spironolactone efficiently reduces elevated blood pressure values in this condition. abstract_id: PUBMED:6137325 Effects of indapamide and various diuretics alone or combined with beta-blockers on serum lipoproteins. A prospective evaluation was started in 1976 to study the influence of diuretics alone or combined with beta-blockers on serum lipoproteins in normal or hypertensive subjects. Compared to placebo conditions, 4 or 6 weeks' monotherapy with various diuretics significantly (p less than 0.05) increased the beta-lipoprotein fraction (furosemide, 80 mg/day or chlorthalidone, 100 mg/day; n = 16) or low-density lipoprotein-cholesterol (LDL-C) (chlorthalidone, 100 mg/day, n = 27 men; tienilic acid, 250 mg/day, n = 16 men, clopamide, 5 mg/day, n = 17 men; or muzolimine, 20 to 40 mg/day, n = 13 men or post-menopausal women). No increase in LDL-C was noted in 43 men (32 normal, 11 with mild hypertension) treated with indapamide, 2.5 mg/day. Serum high-density lipoprotein-cholesterol and apoproteins A1, A2 and B were not consistently changed by any of these agents. In women, chlorthalidone (100 mg/day) significantly increased LDL-C in the post-menopausal (n = 18) but not in the pre-menopausal (n = 22) state. Increases in LDL-C caused by chlorthalidone monotherapy were prevented or reversed by the addition of a beta-blocker, usually propranolol or atenolol (n = 18); increases in LDL-C during clopamide monotherapy were reversed after the addition of the beta-blocker pindolol (10 mg/day, n = 17). In all studies, variations in beta-lipoprotein or LDL-C levels could not be explained by changes in blood volume, serum glucose or insulin and did not correlate with alterations in blood pressure, plasma potassium, renin, aldosterone, adrenaline or noradrenaline. These observations indicate that various diuretics may increase serum LDL-C in men or post-menopausal women. Pre-menopausal women may often be protected from this side-effect. Long-term studies are now needed to clarify the pathogenic and prognostic relevance of lipoprotein changes induced by diuretics. In the meantime, it is of clinical interest that indapamide had no significant effect on serum lipoproteins and that certain beta-blockers appear to prevent or reverse increases in LDL-C during diuretic treatment in men and post-menopausal women.
abstract_id: PUBMED:824860 Plasma renin activity in essential hypertension: different short- and long-term effects of diuretics (author's transl) In an acute clinical trial 12 patients with essential hypertension on a standardized sodium and potassium dietary intake were given either amiloride (10 mg daily, orally) or potassium canrenoate (200 mg daily, i.v.) for two days. Either treatment caused a significant decrease in blood pressure and an increase in plasma renin activity (PRA). The aldosterone excretion rate was elevated only in the patients receiving amiloride. Furthermore, potassium retention and sodium loss were more pronounced in the amiloride group. Long-term treatment (up to 14 weeks) with amiloride (10 mg daily), spironolactone (200 mg daily) or chlortalidone (50 mg daily) significantly lowered the blood pressure of patients with essential hypertension. Plasma potassium and PRA rose significantly in patients receiving either amiloride or spironolactone. However, after three weeks of therapy the mean PRA returned to the pretreatment level in patients on amiloride while it remained persistently elevated in the spironolactone group. On the other hand, chlortalidone caused potassium loss and persistent elevation of PRA. A possible relationship between the changes in plasma potassium levels and PRA in response to diuretics is discussed. abstract_id: PUBMED:28129247 Effectiveness of patiromer in the treatment of hyperkalemia in chronic kidney disease patients with hypertension on diuretics. Objective: Recurrent hyperkalemia frequently limits use of renin-angiotensin-aldosterone system inhibitors (RAASi) in chronic kidney disease (CKD) patients with hypertension, diabetes, and/or heart failure. Patiromer is a sodium-free, nonabsorbed potassium (K)-binding polymer approved by the US Food and Drug Administration for the treatment of hyperkalemia. This post-hoc analysis of OPAL-HK examined the effectiveness and safety of patiromer in reducing serum K in hyperkalemic CKD patients on RAASi, with hypertension, receiving diuretic therapy versus those not on diuretics. Methods: Depending on the degree of hyperkalemia at baseline, CKD patients with serum K from 5.1 to less than 6.5 mmol/l on RAASi (n = 243) were assigned to patiromer at a total dose of 8.4 or 16.8 g, divided twice daily. Changes in serum K, and tolerability and safety were assessed over 4 weeks in patients on and not on diuretics. Results: At baseline, 132 patients used diuretics and 111 were not on diuretics, mean age was 64.3 and 64.0 years, respectively, and 63 and 51% were men. Similar reductions in serum K were seen over 4 weeks in both subgroups. At week 4, serum K fell by -0.95 ± 0.04 mmol/l with any diuretic and -1.04 ± 0.05 mmol/l with no diuretic. Patiromer was well tolerated, with mild-to-moderate constipation reported as the most common adverse event (7.6 and 14.4% of patients on any diuretic or no diuretic, respectively). Hypokalemia (s-K <3.5 mEq/l) was reported in 2.3% of patients on any diuretic and in 3.7% not on diuretics. Conclusion: The serum K-lowering efficacy and safety profile of patiromer in hyperkalemic patients with CKD was not compromised by diuretic therapy.
Thiazide diuretics are often prescribed as second-line antihypertensives, on top of renin-angiotensin-aldosterone system (RAAS) blockade. There is a concern, however, that diuretics may increase vasopressin concentration and RAAS activity, thereby worsening disease progression in ADPKD. We aimed to investigate the validity of these suggestions. Methods: We analysed an observational cohort of 533 ADPKD patients. Plasma copeptin (surrogate for vasopressin), aldosterone and renin were measured by enzyme-linked immunosorbent assay and radioimmunoassay, respectively. Linear mixed models were used to assess the association of thiazide use with estimated glomerular filtration rate (eGFR) decline and Cox proportional hazards models for the association with the composite kidney endpoint of incident end-stage kidney disease, 40% eGFR decline or death. Results: A total of 23% of participants (n = 125) used thiazide diuretics at baseline. Compared with non-users, thiazide users were older, a larger proportion was male, they had lower eGFRs and similar blood pressure under more antihypertensives. Plasma copeptin was higher, but this difference disappeared after adjustment for age and sex. Both renin and aldosterone were higher in thiazide users. There was no difference between thiazide users and non-users in the rate of eGFR decline {difference -0.35 mL/min/1.73 m2 per year [95% confidence interval (CI) -0.83 to 0.14], P = 0.2} during 3.9 years of follow-up (interquartile range 2.5-4.9). This did not change after adjustment for potential confounders [difference final model: 0.08 mL/min/1.73 m2 per year (95% CI -0.46 to 0.62), P = 0.8]. In the crude model, thiazide use was associated with a higher incidence of the composite kidney endpoint [hazard ratio (HR) 1.53 (95% CI 1.05-2.23), P = 0.03]. However, this association lost significance after adjustment for age and sex and remained unassociated after adjustment for additional confounders [final model: HR 0.80 (95% CI 0.50-1.29), P = 0.4]. Conclusions: These data do not show that thiazide diuretics have a detrimental effect on the rate of disease progression in ADPKD and suggest that these drugs can be prescribed as second-line antihypertensives. Answer: The ratio of serum aldosterone to plasma renin activity (ARR) does not predict the antihypertensive response to the mineralocorticoid receptor antagonist spironolactone (SPIRO) when compared with bendroflumethiazide (BFZ) in hypertensive patients. This was demonstrated in a randomized, crossover trial (RENALDO) that included hypertensive patients with either high ARR (HARR) or low ARR (LARR). The study found that spironolactone was significantly more effective than bendroflumethiazide in lowering systolic ambulatory blood pressure (SABP) irrespective of baseline ARR, plasma renin activity, or aldosterone levels. The primary endpoint, which was the difference in SABP between SPIRO and BFZ in patients with HARR compared with those with LARR, did not show a significant difference between the groups, indicating that ARR did not predict the blood pressure response to SPIRO (PUBMED:20009770). Therefore, based on the results of the RENALDO trial, the ARR does not appear to be a useful predictor for the efficacy of diuretics in the treatment of hypertension.
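The RENALDO primary endpoint discussed in this answer is a within-patient treatment contrast (SPIRO minus BFZ ambulatory systolic pressure) compared between the high-ARR and low-ARR groups. A minimal sketch of that comparison is shown below; the numbers are simulated and only loosely based on the reported group means, the period and carry-over structure of the real crossover design is ignored, and all variable names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_harr, n_larr = 60, 51
# Hypothetical within-patient differences in 24-h systolic ABP (spironolactone minus
# bendroflumethiazide), in mmHg; negative values favour spironolactone.
diff_harr = rng.normal(-5.0, 7.0, n_harr)
diff_larr = rng.normal(-3.4, 7.0, n_larr)

df = pd.DataFrame({
    "sabp_diff": np.concatenate([diff_harr, diff_larr]),
    "group": ["HARR"] * n_harr + ["LARR"] * n_larr,
})

# Intercept: mean SPIRO-minus-BFZ difference in the HARR (reference) group.
# The C(group)[T.LARR] coefficient estimates how much that treatment contrast differs
# in the LARR group, which is the heterogeneity the trial's primary endpoint addressed.
fit = smf.ols("sabp_diff ~ C(group)", data=df).fit()
print(fit.summary().tables[1])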
Instruction: Is microalbuminuria a risk factor for diabetic retinopathy? Abstracts: abstract_id: PUBMED:12769161 Microalbuminuria: a common, independent cardiovascular risk factor, especially but not exclusively in type 2 diabetes. Microalbuminuria (defined as an albumin-creatinine ratio of 10-25 mg/mmol on the first-morning urine sample, or an albumin excretion rate of 20-200 microg/min on a timed collection) is present in 20-30% of all patients with type 2 diabetes, and is especially common in those with hypertension, endothelial dysfunction and other features of insulin resistance. Although microalbuminuria is predictive of worsening microvascular disease in the kidney (5-10% per year progress to overt diabetic nephropathy), an increased albumin excretion rate (AER) reflects a generalized abnormality of vascular function and is associated with 2-4-fold increases in cardiovascular and all-cause mortality. The extent to which microalbuminuria is a risk factor independent of other variables in type 2 diabetes, e.g. blood pressure and smoking, has been highlighted by recent cohort studies, e.g. the Heart Outcome Prevention Evaluation study and the Wisconsin Epidemiological Study of Diabetic Retinopathy. In the former study, for example, microalbuminuria at baseline increased the adjusted relative risks (RR) of a major cardiovascular event (RR 1.83), all-cause death (RR 2.09) and hospitalization for heart failure (RR 3.23) in both diabetic and non-diabetic subjects. These studies also highlighted that AER is a continuous risk factor, and that levels of AER below the arbitrary threshold for defining microalbuminuria are associated with relatively increased cardiovascular risk. Similarly, microalbuminuria affects 10-15% of middle-aged non-diabetics and is associated with coronary, peripheral and cerebral vascular complications. Detection of microalbuminuria, especially in type 2 diabetes, signifies the need to intensify blood pressure control as part of a multiple risk factor intervention strategy in a high-risk group. As hypertensive patients with type 2 diabetes are frequently treated by more than one antihypertensive agent, ACE inhibitors and low-dose diuretics are preferably recommended in order to provide sufficient blood pressure control and target organ protection. abstract_id: PUBMED:11572892 Elevated levels of plasma von Willebrand factor and the risk of macro- and microvascular disease in type 2 diabetic patients with microalbuminuria. Background: The purpose of this study was to examine the concept suggesting that microalbuminuria in combination with high levels of plasma von Willebrand factor is a stronger predictor for cardiovascular disease and microvascular complications than microalbuminuria alone in type 2 diabetic patients. Methods: One hundred and sixty patients with type 2 diabetes mellitus and persistent microalbuminuria were followed for an average of 3.8 (SD 0.3) years. 70% of the patients were treated with angiotensin converting enzyme (ACE)-inhibitors. Patients in this subanalysis were divided into two groups according to baseline plasma von Willebrand factor levels below or above the median. The main outcome was cardiovascular disease (cardiovascular mortality, non-fatal stroke, non-fatal myocardial infarction, coronary artery bypass graft and revascularization or amputation of legs), progression to diabetic nephropathy or progression in diabetic retinopathy. 
Results: At baseline the two groups were comparable for HbA(1c), fasting levels of s-total-cholesterol, s-HDL-cholesterol and s-triglycerides, systolic and diastolic blood pressure, gender, known diabetes duration, smoking habits, previous cardiovascular disease and antihypertensive therapy as well as retinopathy. Odds ratio for cardiovascular disease was 1.11 (95% CI 0.45-2.73, P=0.82) (multiple logistic regression), odds ratio for progression to nephropathy was 1.08 (0.41-2.85, P=0.87) and odds ratio for progression in retinopathy was 0.96 (0.46-2.00, P=0.92), all with plasma von Willebrand factor levels above the median. Conclusions: Our results do not support the suggestion that the combination of high plasma levels of von Willebrand factor and microalbuminuria is a stronger predictor for cardiovascular disease, progression to diabetic nephropathy or progression in diabetic retinopathy than microalbuminuria alone in patients with type 2 diabetes and persistent microalbuminuria. abstract_id: PUBMED:11395874 Microalbuminuria--a risk factor for diabetic nephropathy and cardiovascular disease In developed countries diabetic patients are the most numerous group receiving renal replacement therapy (USA 34%). The main and diagnostically irreplaceable criterion of incipient diabetic nephropathy is microalbuminuria, which is usually associated with hypertension and poor glycaemic compensation. As microalbuminuria advances, diabetic retinopathy and neuropathy progress. The increased transcapillary albumin escape rate and changes of some haemocoagulation factors in diabetic patients with microalbuminuria indicate that endothelial dysfunction is involved. In type 1 diabetes microalbuminuria is an indicator of increased mortality, to which cardiovascular diseases in particular, and to a lesser extent renal failure, contribute. In type 2 diabetes microalbuminuria is an independent risk factor for generalized vascular disease. In non-diabetic subjects with hypertension, microalbuminuria is also associated with abnormalities such as impaired glucose tolerance and insulin resistance, an unfavourable lipidogram and altered diurnal blood pressure rhythm. The results of a coronarographic investigation revealed that the risk of severe coronary artery disease is more than double in subjects with microalbuminuria. Hypertension and hypercholesterolaemia are causal risk factors of cardiovascular diseases and concurrent microalbuminuria implies a higher expression of already existing microvascular damage in hormonal and metabolic disorders with an atherogenic potential. abstract_id: PUBMED:17513698 Multiple variants in vascular endothelial growth factor (VEGFA) are risk factors for time to severe retinopathy in type 1 diabetes: the DCCT/EDIC genetics study. Objective: We sought to determine if any common variants in the gene for vascular endothelial growth factor (VEGFA) are associated with long-term renal and retinal complications in type 1 diabetes. Research Design And Methods: A total of 1,369 Caucasian subjects with type 1 diabetes from the Diabetes Control and Complications Trial (DCCT)/Epidemiology of Diabetes Interventions and Complications (EDIC) Study had an average of 17 retinal photographs and 10 renal measures over 15 years.
In the DCCT/EDIC, we studied 18 single nucleotide polymorphisms (SNPs) in VEGFA that represent all linkage disequilibrium bins (pairwise r² ≥ 0.64) and tested them for association with time to development of severe retinopathy, three or more step progression of retinopathy, clinically significant macular edema, persistent microalbuminuria, and severe nephropathy. Results: In a global multi-SNP test, there was a highly significant association of VEGFA SNPs with severe retinopathy (P = 6.8 × 10⁻⁵); the four other outcomes were all nonsignificant. In survival analyses controlling for covariate risk factors, eight SNPs showed significant association with severe retinopathy (P < 0.05). The most significant single SNP association was rs3025021 (hazard ratio 1.37 [95% CI 1.13-1.66], P = 0.0017). Family-based analyses of severe retinopathy provide evidence of excess transmission of C at rs699947 (P = 0.029), T at rs3025021 (P = 0.013), and the C-T haplotype from both SNPs (P = 0.035). Multi-SNP regression analysis including 15 SNPs, and allowing for pairwise interactions, independently selected 6 significant SNPs (P < 0.05). Conclusions: These data demonstrate that multiple VEGFA variants are associated with the development of severe retinopathy in type 1 diabetes. abstract_id: PUBMED:13130255 Is microalbuminuria a risk factor for diabetic retinopathy? Purpose: To determine the relationship between microalbuminuria and diabetic retinopathy. Methods: A prospective 10-year study of 104 younger-onset diabetic patients. The diabetic retinopathy diagnosis was made by fundus retinography, and determination of microalbuminuria was made from urine samples. Results: The incidence of diabetic retinopathy in this group of patients was 39 (37.5%). The epidemiological factors implicated were diabetes duration, higher levels of HbA(1c), male sex, and diastolic arterial hypertension. The incidence of microalbuminuria was 21 patients (20.2%), with high levels of HbA(1c) being the epidemiological factor implicated. The association between microalbuminuria and diabetic retinopathy grouped the patients as follows: 56 patients without microalbuminuria or retinopathy, 16 patients who developed microalbuminuria and diabetic retinopathy, 23 patients who developed retinopathy but not microalbuminuria, and nine patients who developed only microalbuminuria. The discriminant analysis showed that high levels of HbA(1c) were associated with microalbuminuria, and that diabetes duration and high levels of HbA(1c) were associated with diabetic retinopathy. Conclusions: In the population studied, microalbuminuria was not a good marker for diabetic retinopathy. abstract_id: PUBMED:11239027 Prognostic implications of retinopathy and a high plasma von Willebrand factor concentration in type 2 diabetic subjects with microalbuminuria. Background: Microalbuminuria in subjects with type 2 diabetes may be heterogeneous with respect to clinical features, renal histology, and prognosis. There may be at least two types of microalbuminuria in diabetes, namely with and without generalized endothelial dysfunction. We investigated whether, among microalbuminuric subjects with type 2 diabetes, the presence of generalized endothelial dysfunction, as indicated by the presence of retinopathy or a high plasma von Willebrand factor (vWf) level, has prognostic implications.
Methods: In 173 type 2 diabetic subjects of a population-based cohort, we assessed the urinary albumin-to-creatinine ratio, the plasma vWf level, and the presence of retinopathy. The main outcome was cardiovascular mortality. Results: The absolute difference in 7 years' cardiovascular mortality between microalbuminuric (albumin-to-creatinine ratio 2.0-30.0 mg/mmol) and normoalbuminuric subjects was higher in the presence as compared to the absence of retinopathy (55.6 vs 11.1%). The age- and sex-adjusted relative risk (95% confidence interval) of cardiovascular mortality, as compared to normoalbuminuric subjects without retinopathy, was 1.1 (0.1-9.2) for normoalbuminuric subjects with retinopathy, 1.8 (0.5-6.7) for microalbuminuric subjects without retinopathy, and 9.8 (3.1-30.9) for microalbuminuric subjects with retinopathy. The absolute difference in risk of 7 years' cardiovascular mortality between microalbuminuric and normoalbuminuric subjects was higher in the presence as compared to the absence of a high (>1.89 IU/ml) vWf level (49.8 vs 16.4%). The age- and sex-adjusted relative risk of cardiovascular mortality, as compared to normoalbuminuric subjects without a high vWf level, was 1.5 (0.4-5.5) for normoalbuminuric subjects with a high vWf level, 2.6 (0.7-9.6) for microalbuminuric subjects without a high vWf level, and 12.0 (2.9-49.5) for microalbuminuric subjects with a high vWf level. These differences in risk of cardiovascular mortality did not change materially after further adjustment for known duration of diabetes, hypertension, creatinine clearance, level of glycated haemoglobin and high-density lipoprotein cholesterol, and presence of cardiovascular disease. Analysis of all-cause instead of cardiovascular mortality showed a similar difference in risk of mortality between microalbuminuric subjects with or without retinopathy or a high vWf level. Conclusions: Among type 2 diabetic subjects with microalbuminuria, the presence of retinopathy or a high plasma vWf level affects the risk of cardiovascular death. Although larger studies are necessary, these findings support the concept that microalbuminuria in type 2 diabetes can occur in the absence or the presence of generalized endothelial dysfunction, and that the latter is a much more 'malignant' condition than the former. abstract_id: PUBMED:32384512 Early-onset type 2 diabetes: A high-risk factor for proliferative diabetic retinopathy (PDR) in patients with microalbuminuria. We aim to explore the relationship between early-onset diabetes and proliferative diabetic retinopathy (PDR) in type 2 diabetes mellitus (T2DM) patients with microalbuminuria. A total of 461 T2DM patients with microalbuminuria were enrolled. Subjects were defined as early-onset or late-onset based on the age at which they were diagnosed with diabetes (<40 and ≥40 years, respectively). Medical history, anthropometry, and laboratory indicators were documented. PDR was defined as the presence of any of the following changes on fundus photography: neovascularization, vitreous hemorrhage, or preretinal hemorrhage. The prevalence of PDR was 6-fold higher in patients with early-onset than late-onset T2DM [(6.1% vs 1.0%), P = .004]. Univariate correlation analysis showed that early-onset diabetes, use of oral hypoglycemic drugs, and insulin therapy were risk factors for PDR.
In multivariate logistic analysis, patients with early-onset diabetes exhibited a 7.00-fold [(95% confidence interval 1.40-38.26), P = .019] higher risk of PDR than subjects with late-onset diabetes after adjusting for sex; T2DM duration; systolic blood pressure; total triglyceride; glycated hemoglobin; insulin therapy; and the use of oral hypoglycemic drugs, antihypertensive drugs, and lipid-lowering drugs. In T2DM patients with microalbuminuria, early-onset diabetes is an independent risk factor for the development of PDR. abstract_id: PUBMED:22994366 Tromsø eye study: prevalence and risk factors of diabetic retinopathy. Purpose: To determine the prevalence of visual impairment, retinopathy and macular oedema, and assess risk factors for retinopathy in persons with diabetes. Methods: The present study included 514 participants with diabetes aged 46-87 years from the Tromsø Eye Study, a sub-study of the population-based Tromsø Study in Norway. Visual acuity was measured using an auto-refractor. Retinal images from both eyes were graded for retinopathy and macular oedema. We collected data on risk factor exposure from self-report questionnaires, clinical examinations, laboratory measurements and case note reviews. Regression models assessed the cross-sectional relationship between potential risk factors and diabetic retinopathy. Results: The prevalence of visual impairment (corrected Snellen visual acuity <20/60 in the better-seeing eye) was 1.6%. The prevalence of diabetic retinopathy was 26.8% and macular oedema 3.9%. In a multivariable logistic regression model, retinopathy was associated with longer diabetes duration (odds ratio, OR 1.07, 95% CI 1.03-1.11), insulin use (OR 2.14, 95% CI 1.19-3.85), nonfasting glucose (OR 1.07, 95% CI 1.00-1.15) and microalbuminuria (OR 1.89, 95% CI 1.28-2.81). Sub-group analyses showed an association between retinopathy and even low levels of microalbuminuria (1.16 mg/mmol). Conclusion: The findings suggest that low levels of microalbuminuria may be a useful risk predictor for identifying individuals with diabetes at high risk of retinopathy. The study confirms previous findings that insulin use, longer diabetes duration and higher levels of blood glucose are associated with retinopathy in persons with diabetes. The prevalence of diabetic retinopathy was similar to that reported in other studies. abstract_id: PUBMED:20164848 Association between vascular endothelial growth factor and hypertension in children and adolescents type I diabetes mellitus. The aim of the study was to analyse the relationship between the serum level of vascular endothelial growth factor (VEGF) and the incidence of hypertension (HT) in children and adolescents with type I diabetes mellitus (T1DM). One hundred and five patients with T1DM were enrolled in the study. The control group consisted of 30 healthy controls. All the T1DM patients were subjected to biochemical analyses, ophthalmologic examination and 24-h blood pressure monitoring. Besides, all the patients and healthy controls had serum VEGF levels measured with the use of the ELISA methodology. The main finding of our research is that patients with T1DM and HT and with microalbuminuria (MA) and diabetic retinopathy (DR) (MA/DR) are characterized by a significantly higher level of VEGF (340.23±93.22 pg/ml) in blood serum in comparison with the group of T1DM patients without HT and MA/DR (183.6±96.6 pg/ml) and with healthy controls (145.32±75.58 pg/ml).
In addition, the VEGF level was significantly higher in T1DM patients who presented with all three complications (HT, retinopathy and MA) than in T1DM patients without HT but with MA/DR (P=0.036). On the other hand, no statistically significant differences (P=0.19) were noted in the level of VEGF in serum between T1DM patients without HT and MA/DR and the healthy control group. At a further stage of analysis, using the method of multiple regression, it was shown that systolic pressure, HbA1c and duration of disease are independent factors influencing the concentration of VEGF. In summary, the measurement of VEGF serum levels allows for the identification of groups of patients who have the highest risk of HT and, subsequently, progression of vascular complications. abstract_id: PUBMED:31462492 HbA1c level as a risk factor for retinopathy and nephropathy in children and adults with type 1 diabetes: Swedish population based cohort study. Objective: To evaluate if the lowest target level for glycated haemoglobin (HbA1c) of <6.5% is associated with lower risk for retinopathy and nephropathy than less tight control in children and adults with type 1 diabetes. Design: Population based cohort study. Setting: Swedish National Diabetes Registry, 1 January 1998 to 31 December 2017. Participants: 10 398 children and adults with type 1 diabetes followed from diagnosis, or close thereafter, until end of 2017. Main Outcome Measures: Relative risk (odds ratios) for retinopathy and nephropathy for different mean levels of HbA1c. Results: Mean age of participants was 14.7 years (43.4% female), mean duration of diabetes was 1.3 years, and mean HbA1c level was 8.0% (63.4 mmol/mol). After adjustment for age, sex, duration of diabetes, blood pressure, blood lipid levels, body mass index, and smoking, the odds ratio for mean HbA1c <6.5% (<48 mmol/mol) compared with 6.5-6.9% (48-52 mmol/mol) for any retinopathy (simplex or worse) was 0.77 (95% confidence interval 0.56 to 1.05, P=0.10), for preproliferative diabetic retinopathy or worse was 3.29 (0.99 to 10.96, P=0.05), for proliferative diabetic retinopathy was 2.48 (0.71 to 8.62, P=0.15), for microalbuminuria or worse was 0.98 (0.60 to 1.61, P=0.95), and for macroalbuminuria was 2.47 (0.69 to 8.87, P=0.17). Compared with HbA1c levels 6.5-6.9%, HbA1c levels 7.0-7.4% (53-57 mmol/mol) were associated with an increased risk of any retinopathy (1.31, 1.05 to 1.64, P=0.02) and microalbuminuria (1.55, 1.03 to 2.32, P=0.03). The risk for proliferative retinopathy (5.98, 2.10 to 17.06, P<0.001) and macroalbuminuria (3.43, 1.14 to 10.26, P=0.03) increased at HbA1c levels >8.6% (>70 mmol/mol). The risk for severe hypoglycaemia was increased at mean HbA1c <6.5% compared with 6.5-6.9% (relative risk 1.34, 95% confidence interval 1.09 to 1.64, P=0.005). Conclusions: Risk of retinopathy and nephropathy did not differ at HbA1c levels <6.5% but increased for severe hypoglycaemia compared with HbA1c levels 6.5-6.9%. The risk for severe complications mainly occurred at HbA1c levels >8.6%, but for milder complications was increased at HbA1c levels >7.0%. Answer: Microalbuminuria is a condition characterized by a slight increase in the levels of albumin in the urine, which is often a marker for kidney disease and cardiovascular risk, particularly in patients with diabetes. The relationship between microalbuminuria and diabetic retinopathy, a complication of diabetes that affects the eyes, has been explored in various studies.
Some studies suggest that microalbuminuria may be associated with diabetic retinopathy. For instance, the Tromsø Eye Study found that low levels of microalbuminuria may be a useful risk predictor for identifying individuals with diabetes at high risk of retinopathy, confirming that insulin use, longer diabetes duration, and higher levels of blood glucose are associated with retinopathy in persons with diabetes (PUBMED:22994366). Additionally, in type 2 diabetes mellitus (T2DM) patients with microalbuminuria, early-onset diabetes is an independent risk factor for the development of proliferative diabetic retinopathy (PDR) (PUBMED:32384512). However, other studies have reported different findings. A 10-year prospective study of younger-onset diabetic patients did not find microalbuminuria to be a good marker for diabetic retinopathy (PUBMED:13130255). Similarly, a study examining the combination of high plasma levels of von Willebrand factor and microalbuminuria did not support the suggestion that this combination is a stronger predictor for cardiovascular disease, progression to diabetic nephropathy, or progression in diabetic retinopathy than microalbuminuria alone in patients with type 2 diabetes and persistent microalbuminuria (PUBMED:11572892). In summary, while there is evidence to suggest that microalbuminuria may be associated with an increased risk of diabetic retinopathy in some diabetic patients, the relationship is not universally accepted as a definitive risk factor, and further research may be needed to fully understand the connection between these two conditions.
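Several of the abstracts above report multivariable logistic regression of retinopathy on microalbuminuria and other risk factors (for example, the Tromsø Eye Study, PUBMED:22994366). A minimal sketch of that kind of model follows; the data are simulated, the effect sizes are assumptions made only for illustration, and the covariate names are hypothetical rather than those used by the study authors.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 514
duration = rng.gamma(2.0, 5.0, n)        # diabetes duration, years
insulin = rng.binomial(1, 0.35, n)       # insulin use
glucose = rng.normal(9.0, 2.5, n)        # non-fasting glucose, mmol/L
acr = rng.lognormal(0.0, 1.0, n)         # albumin-creatinine ratio, mg/mmol

# Simulate retinopathy status under assumed (not published) coefficients.
lin = -4.0 + 0.07 * duration + 0.8 * insulin + 0.07 * glucose + 0.6 * np.log(acr + 1)
retinopathy = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

df = pd.DataFrame({"retinopathy": retinopathy, "duration": duration,
                   "insulin": insulin, "glucose": glucose, "acr": acr})
fit = smf.logit("retinopathy ~ duration + insulin + glucose + np.log(acr + 1)", data=df).fit()
print(np.exp(fit.params))                # odds ratios
print(np.exp(fit.conf_int()))            # 95% confidence intervals on the odds-ratio scale

Fitted to real cohort data, the exponentiated coefficients correspond to the odds ratios reported in such studies (for example, OR 1.89 for microalbuminuria in the Tromsø analysis).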
Instruction: Paradox of vaccination: is vaccination really effective against avian flu epidemics? Abstracts: abstract_id: PUBMED:19295921 Paradox of vaccination: is vaccination really effective against avian flu epidemics? Background: Although vaccination can be a useful tool for control of avian influenza epidemics, it might engender the emergence of a vaccine-resistant strain. Field and experimental studies show that some avian influenza strains acquire the ability to resist vaccination. We investigated, in the context of the emergence of a vaccine-resistant strain, whether a vaccination program can prevent the spread of infectious disease. We also investigated how the loss of vaccine-conferred protection imposed by the resistant strain affects the spread of the disease. Methods And Findings: We designed and analyzed a deterministic compartment model illustrating transmission of vaccine-sensitive and vaccine-resistant strains during a vaccination program. We investigated how the loss of protection effectiveness impacts the program. Results show that a vaccination program intended to prevent the spread of disease can instead spread the disease when the resistant strain is less virulent than the sensitive strain. If the loss is high, the program does not prevent the spread of the resistant strain despite high vaccination coverage. The epidemic's final size can be larger than that before the vaccination program. We propose how to use poor vaccines, which have a large loss, to maximize program effects and describe various program risks, which can be estimated using available epidemiological data. Conclusions: We presented clear and simple concepts to elucidate vaccination program guidelines to avoid negative program effects. Using our theory, by monitoring the virulence of the resistant strain and investigating the loss it causes, better development of vaccination strategies is possible. abstract_id: PUBMED:36721499 Marburg virus disease: the paradox of Nigeria's preparedness and priority effects in co-epidemics. Background: The recent outbreaks of Marburg virus disease (MVD) in Guinea and Ghana have become a major public health concern, not only for the West African sub-region but also as a threat to global health. Main Body Of The Abstract: Given the poorly elucidated ecological and epidemiological dynamics of the Marburg virus, it would be imprudent to preclude the possibility of another pandemic if urgent efforts are not put in place. However, the prior emergence and impact of COVID-19 and other co-occurring epidemics may add 'noise' to the epidemiological dynamics and public health interventions that may be required in the event of an MVD outbreak in Nigeria. Short Conclusion: Paying attention to the lessons learned from previous (and current) multiple epidemics including Avian Influenza, Yellow fever, Ebola virus disease, Monkeypox, Lassa fever, and COVID-19 could help avoid a potentially devastating public health catastrophe in Nigeria. abstract_id: PUBMED:35043257 Epidemics, pandemics and income inequality. The novel coronavirus is part of a series of infectious disease outbreaks that include Ebola, Avian influenza, Middle East respiratory syndrome coronavirus, and Influenza A. This paper addresses the question of how these epidemics and pandemics affect income inequality in countries around the world during the first two decades of this century.
To achieve its objective, the paper develops a model that indicates a positive association between these health crises and income inequality. To empirically test our theoretical predictions, the paper explores the effect on the Gini coefficient of a dummy variable that indicates the occurrence of an epidemic or a pandemic in a country in a given year and the number of deaths per 100,000. To properly address potential endogeneity, we implement a Three-Stage Least Squares technique. The estimation shows that the number of deaths per 100,000 population variable has a statistically significant positive effect on the Gini coefficient, especially when we incorporate COVID-19 data. This suggests that not only the occurrence, but also the health consequences of COVID-19 have a significant and economically important effect on income inequality. Background: The purpose of the study is to examine the effect of epidemics and pandemics on income inequality. This has important implications, as the outcome of this study can guide policymakers in implementing policies that can mitigate the economic consequences of these health crises. Methods: The study is a cross-country analysis using fixed effects estimation. To address potential endogeneity and determine causality, the paper uses Three-Stage Least Squares estimation. Results: The paper finds that the number of epidemic deaths per 100,000 population variable has a statistically significant positive effect on the Gini coefficient, especially when we incorporate COVID-19 data. Conclusions: The paper finds that it is not only the occurrence of an epidemic, captured by the epidemics dummy variable, but also the health consequences, captured by the number of deaths per 100,000 population, that have a significant effect on income inequality. This is especially the case when we incorporate COVID-19 in our analysis. Trial Registration: Not Applicable. abstract_id: PUBMED:32288765 Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model. A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. This EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China. abstract_id: PUBMED:32288529 Emergent viruses and/or repeated threats Emergent viruses have attracted attention during the years 2003 and 2004. Ancient fears have reappeared regarding pathogenic agents capable of causing epidemics with high mortality rates. Such events constitute genuinely recurrent threats. The analysis of the mechanisms permitting the emergence of these viruses has shown that they are not random phenomena, but result from accumulated factors leading to transmission from animals to humans.
Several modes of transmission exist: either direct transmission or transmission by intermediate vectors (mosquitoes, ticks and other animals, including mammals). The convergence of ecologic, economic and epidemiologic factors confers on these epidemics the potential to spread widely. With the development of surveillance networks and improvement in diagnostic technologies, these "new" viruses are better identified. The recent occurrences of SARS and of avian influenza are the best examples of such experiences. abstract_id: PUBMED:29294594 The effects of closure to live poultry markets on Avian influenza A (H7N9) epidemics in China Since March 2013, China has experienced five seasonal epidemics related to Avian influenza A (H7N9). An unprecedented H7N9 epidemic started in September 2016, with 730 cases reported in mainland China by June 30th, 2017, causing profound effects on both social development and the health of the people. Because H7N9 is an emerging infectious disease, information on its pathogenic characteristics, transmission patterns and other epidemiological features remained unclear. Data from previous studies suggested that live poultry markets (LPMs) served as the main places where the H7N9 virus originated, mutated, spread and thus infected humans. Hence, closure of LPMs was suggested as a major measure to control and prevent H7N9 epidemics in China. However, the effectiveness of different ways of closing LPMs on H7N9 epidemics has been controversial. This study systematically summarized the effects of different ways of closing LPMs on H7N9 epidemics from previous studies, aiming to provide references for developing a better program of H7N9 control and prevention in the country. abstract_id: PUBMED:36197436 Two major epidemics of highly pathogenic avian influenza virus H5N8 and H5N1 in domestic poultry in France, 2020-2022. The spread of highly pathogenic avian influenza (HPAI) viruses worldwide has serious consequences for animal health and a major economic impact on the poultry production sector. Since 2014, Europe has been severely hit by several HPAI epidemics, with France being the most affected country. Most recently, France was again affected by two devastating HPAI epidemics in 2020-21 and 2021-22. We conducted a descriptive analysis of the 2020-21 and 2021-22 epidemics, as a first step towards identifying the poultry sector's remaining vulnerabilities regarding HPAI viruses in France. We examined the spatio-temporal distribution of outbreaks that occurred in France in 2020-21 and 2021-22, and we assessed the outbreaks' spatial distribution in relation to the 2016-17 epidemic and to the two 'high-risk zones' recently incorporated into French legislation to strengthen HPAI prevention and control. There were 468 reported outbreaks during the 2020-21 epidemic and 1375 outbreaks during the 2021-22 epidemic. In both epidemics, the outbreaks' distribution matched extremely well that of 2016-17, and most outbreaks (80.6% and 68.4%) were located in the two high-risk zones. The southwestern high-risk zone was affected in both epidemics, while the western high-risk zone was affected for the first time in 2021-22, explaining the extremely high number of outbreaks reported. As soon as the virus reached the high-risk zones, it started to spread between farms at very high rates, with each infected farm infecting between two and three other farms at the peaks of transmission.
We showed that the spatial distribution model used to create the two high-risk zones was able to predict the location of outbreaks for the 2020-21 and 2021-22 epidemics. These zones were characterized by high poultry farm densities; future efforts should, therefore, focus on reducing the density of susceptible poultry in highly dense areas. abstract_id: PUBMED:22372320 Classical swine fever and avian influenza epidemics: lessons learned. Experience with recent large-scale epidemics of Classical Swine Fever and Avian Influenza--among others in the Netherlands--has taught us several lessons that should prepare us better for future outbreaks. Among others, improving early detection of outbreaks--by using syndrome surveillance systems--is a key factor, in which farmers and veterinary practitioners have a prominent role. A major step in this respect is facilitation of the use of exclusion diagnostics, without closing down the farm, in clinical situations with non-specific clinical signs observed in sick animals. The hesitance of farmers and veterinary practitioners to report a suspect clinical situation on a livestock farm, and how to facilitate that process, is another major issue. Furthermore, the importance of communication between the field and the laboratory with respect to post mortem examination will be highlighted, and the need for outbreak simulation exercises with neighbouring countries in order to be better prepared will be discussed. abstract_id: PUBMED:32842978 Spatial and temporal clusters of avian influenza a (H7N9) virus in humans across five epidemics in mainland China: an epidemiological study of laboratory-confirmed cases. Background: Avian influenza A (H7N9) virus was first reported in mainland China in 2013 and became alarming in 2016-17 due to the surge across a wide geographic area. Our study aimed to identify and explore the spatial and temporal variation across five epidemics to reinforce epidemic prevention and control. Methods: We collected spatial and temporal information about all laboratory-confirmed human cases of A (H7N9) virus infection reported in mainland China during 2013-17 from open sources. Autocorrelation analysis and case intensity were used to analyse spatial clustering, while the circular distribution method was used to analyse temporal clustering. Results: Across the five epidemics, a total of 1553 laboratory-confirmed human cases of A (H7N9) virus infection were reported in mainland China. The global Moran's I index values for the five epidemics were 0.610, 0.132, 0.308, 0.306 and 0.336, respectively, and the differences among them were statistically significant. The highest intensity was present in the Yangtze River Delta region and the Pearl River Delta region, and the range enlarged from the east of China to inner provinces and even the west of China across the five epidemics. The temporal clusters of the five epidemics were statistically significant, and the peak period was from the end of January to April, with the first and the fifth epidemics peaking later than the mean peak period. Conclusions: Spatial and temporal clusters of avian influenza A (H7N9) virus in humans are obvious; moreover, the regions with existing clusters may enlarge across the five epidemics. The Yangtze River Delta and Pearl River Delta regions show spatial clustering, and the peak period is from January to April.
The government should facilitate the tangible improvement for the epidemic preparedness according to the characteristics of spatial and temporal clusters of patients with avian influenza A (H7N9) virus. abstract_id: PUBMED:26003682 Evolutionary trajectories of two distinct avian influenza epidemics: Parallelisms and divergences. Influenza A virus can quickly acquire genetic mutations that may be associated with increased virulence, host switching or antigenic changes. To provide new insights into the evolutionary dynamics and the adaptive strategies of distinct avian influenza lineages in response to environmental and host factors, we compared two distinct avian influenza epidemics caused by the H7N1 and H7N3 subtypes that circulated under similar epidemiological conditions, including the same domestic species reared in the same densely populated poultry area for similar periods of time. The two strains appear to have experienced largely divergent evolution: the H7N1 viruses evolved into a highly pathogenic form, while the H7N3 did not. However, a more detailed molecular and evolutionary analysis revealed several common features: (i) the independent acquisition of 32 identical mutations throughout the entire genome; (ii) the evolution and persistence of two sole genetic groups with similar genetic characteristics; (iii) a comparable pattern of amino acid variability of the HA proteins during the low pathogenic epidemics; and (iv) similar rates of nucleotide substitutions. These findings suggest that the evolutionary trajectories of viruses with the same virulence level circulating in analogous epidemiological conditions may be similar. In addition, our deep sequencing analysis of 15 samples revealed that 17 of the 32 parallel mutations were already present at the beginning of the two epidemics, suggesting that fixation of these mutations may occur with different mechanisms, which may depend on the fitness gain provided by each mutation. This highlighted the difficulties in predicting the acquisition of mutations that can be correlated to viral adaptation to specific epidemiological conditions or to changes in virus virulence. Answer: The effectiveness of vaccination against avian flu epidemics is not straightforward and can be considered paradoxical. Vaccination can be a useful tool for controlling avian influenza epidemics, but it may also lead to the emergence of vaccine-resistant strains. Studies have shown that some avian influenza strains can develop resistance to vaccines, which complicates the control of the disease (PUBMED:19295921). A deterministic compartment model used in research suggests that vaccination programs aimed at preventing the spread of disease could inadvertently facilitate the spread when the resistant strain is less virulent than the sensitive strain. If the loss of protection effectiveness is high, the vaccination program may not prevent the spread of the resistant strain, and the final size of the epidemic could be larger than before the vaccination program was implemented (PUBMED:19295921). Therefore, while vaccination can be part of the strategy to control avian flu epidemics, it is crucial to monitor the virulence of the resistant strain and investigate the loss caused by the resistant strain to better develop vaccination strategies. 
The guidelines for vaccination programs should be clear and simple to avoid negative effects, and there should be a focus on how to use vaccines effectively, even those with a large loss of protection effectiveness, to maximize program effects (PUBMED:19295921). In summary, vaccination can be effective against avian flu epidemics, but its success depends on several factors, including the virulence of vaccine-resistant strains and the overall loss of protection effectiveness. Careful planning and monitoring are essential to ensure that vaccination programs do not inadvertently contribute to the spread of the disease.
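To make the trade-off described above concrete, the deterministic compartment model can be sketched as a toy two-strain SIR system in which vaccination protects well against the vaccine-sensitive strain but only partially against the resistant strain. This is an illustrative sketch only, not the model from PUBMED:19295921; all parameter values (transmission rates, recovery rate, vaccine coverage, and the residual susceptibility of vaccinated hosts to the resistant strain) are assumed for demonstration.

    # Toy two-strain SIR-with-vaccination sketch (assumed parameters, not the published model).
    # S: unvaccinated susceptibles, V: vaccinated, Is/Ir: infected with the
    # vaccine-sensitive / vaccine-resistant strain, R: recovered.
    import numpy as np
    from scipy.integrate import odeint

    def two_strain(y, t, beta_s, beta_r, gamma, sigma_r):
        # sigma_r: residual susceptibility of vaccinated hosts to the resistant strain (0..1)
        S, V, Is, Ir, R = y
        new_s = beta_s * S * Is                    # sensitive strain spreads only in unvaccinated hosts
        new_r = beta_r * (S + sigma_r * V) * Ir    # resistant strain also spreads in vaccinated hosts
        dS = -new_s - beta_r * S * Ir
        dV = -sigma_r * beta_r * V * Ir
        dIs = new_s - gamma * Is
        dIr = new_r - gamma * Ir
        dR = gamma * (Is + Ir)
        return [dS, dV, dIs, dIr, dR]

    coverage = 0.7                                  # assumed pre-outbreak vaccination coverage
    y0 = [1 - coverage - 2e-4, coverage, 1e-4, 1e-4, 0.0]
    t = np.linspace(0, 400, 2000)
    sol = odeint(two_strain, y0, t, args=(0.5, 0.4, 0.2, 0.9))
    print(f"final epidemic size with {coverage:.0%} coverage: {sol[-1, 4]:.2%}")

Re-running the sketch across a range of coverage values shows how, under these assumptions, a poorly protective vaccine can leave the final epidemic size dominated by the resistant strain, which is the scenario the answer above warns about.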
Instruction: Does sperm origin affect embryo morphokinetic parameters? Abstracts: abstract_id: PUBMED:26109332 Does sperm origin affect embryo morphokinetic parameters? Purpose: The purpose of our study was to use time-lapse in order to evaluate the impact of sperm origin (fresh ejaculate or surgically retrieved) on embryo morphokinetic parameters and clinical outcome in intracytoplasmic sperm injection (ICSI) cycles. Methods: This retrospective monocentric study was conducted in 485 unselected couples undergoing 604 ICSI cycles with embryo culture in the Embryoscope®. Among them, 445 couples underwent ICSI cycle with fresh ejaculated sperm and 40 with surgically retrieved sperm (26 with testicular sperm and 14 with epididymal sperm). Embryo morphokinetic parameters and clinical cycle outcome were compared between fresh ejaculated sperm and surgically retrieved sperm. A subgroup analysis was also conducted between testicular and epididymal sperm ICSI cycles. Results: Clinical outcome was comparable between groups according to sperm origin. Although most early morphokinetic parameters were comparable between ejaculated and surgical sperm groups, a few parameters were significantly different between both groups, but with a considerable overlap in their distribution. Late cellular events occurred significantly later in the surgical sperm group than in the ejaculated sperm group. Conclusions: Morphokinetic analysis did not allow us to identify clinically relevant differences between fresh ejaculate and surgically retrieved sperm groups. Further studies are needed, especially concerning the relationship between sperm origin and late morphokinetic parameters, such as blastocyst development. abstract_id: PUBMED:33251758 Does sperm origin-Ejaculated or testicular-Affect embryo morphokinetic parameters? Background: It is unclear whether sperm origin, either ejaculated or testicular, in couples diagnosed with male factor infertility, affects the timing of the embryo's developmental events evaluated by time-lapse monitoring and implantation rates. Objective: To examine the effect of sperm origin on embryo morphokinetics in couples diagnosed with male factor infertility. Materials And Methods: This study included a retrospective analysis of morphokinetic parameters performed by time-lapse monitoring between 2013 and 2017. The developmental processes and morphokinetic parameters of 419 embryos obtained from couples with male factor infertility attributed to oligo-astheno-teratozoospermia, 158 embryos derived from surgically extracted testicular spermatozoa from couples diagnosed with non-obstructive azoospermia, and 190 embryos from couples with normal ejaculated spermatozoa and female mechanical factor-related infertility, were evaluated. A comparison of morphokinetic parameters, implantation, and clinical pregnancy rates was performed between the groups with additional analysis in accordance with implantation status. Results: Embryos from the normal ejaculated spermatozoa and oligo-astheno-teratozoospermia patients reached the later morphokinetic milestones-synchronous division (S3) and time to morula (tM)-faster than embryos obtained from testicular spermatozoa. Implantation rate was similar in the normal ejaculated spermatozoa and oligo-astheno-teratozoospermia groups (41.9% vs. 45.8%, NS), with higher implantation rate in the oligo-astheno-teratozoospermia group compared to the testicular spermatozoa group (45.8% vs. 33.6%, p = 0.02). 
Comparison of Known Implantation Data (KID) positive (KIDp) and KID negative (KIDn) embryos in each group revealed more rapid development in KIDp embryos in the normal ejaculated spermatozoa and the oligo-astheno-teratozoospermia groups, while in the testicular spermatozoa group implanted embryos reached the late morphokinetic milestones (time to 8-cell stage (t8), ECC3, S3, and tM) significantly faster than embryos that failed to implant. In a multivariate logistic regression analysis of the male factor infertility population, oligo-astheno-teratozoospermia (OR = 2.54, p = 0.003) and t8 (OR = 0.95, p = 0.027) were predictive of successful implantation. Male factor infertility embryos that reached the t8 milestone within 48-56 h had favorable implantation rates (p < 0.001). Discussion: The study results may highlight another pathophysiology by means of which sperm origin affects embryo developmental kinetics. Selecting embryos demonstrating a faster developmental rate at t8, and specifically the 48- to 56-h interval following time of pronuclei fading (tPNf), may improve implantation rates in cases of male factor infertility. Conclusion: This study showed that ejaculated spermatozoa are associated with faster late cell divisions, more rapid compaction, and higher implantation rates compared to testicular spermatozoa. Additionally, t8 is an important predictor of implantation in the male factor infertility population. abstract_id: PUBMED:33123909 Influence of Different Quality Sperm on Early Embryo Morphokinetic Parameters and Cleavage Patterns: A Retrospective Time-lapse Study. To investigate whether sperm with low concentration and motility can impact preimplantation embryos and to analyze how the effects present under a time-lapse incubation system, 2905 oocytes were collected from 219 couples between January 2014 and December 2015. Patients were divided into three groups according to sperm quality. Morphokinetic parameters and six cleavage patterns in the initial three cleavages were evaluated using the Primo Vision system. Embryo quality and clinical outcomes such as implantation rate, pregnancy rate and live birth rate were measured. The results showed that the concentration and motility of sperm correlated strongly with the rate of 2PN embryos, good-quality embryos on D3, blastocysts on D5/6 and good-quality embryos on D5/6. The time-lapse system recordings showed that compromised sperm quality could result in a significant delay in cc1 and a decrease in cc2, and impact embryo developmental potential mainly through large fragments and/or blastomere fragmentation in the initial three cleavages. In conclusion, sperm with low concentration and motility can have paternal effects on preimplantation embryos. These paternal effects present both as changes in morphokinetic parameters and cleavage patterns, which occur as early as fertilization and may cause severe damage to the preimplantation embryos. abstract_id: PUBMED:34034847 Sperm DNA fragmentation measured by sperm chromatin dispersion impacts morphokinetic parameters, fertilization rate and blastocyst quality in ICSI treatments. To determine the effects of sperm DNA fragmentation (SDF) on embryo morphokinetic parameters, cleavage patterns and embryo quality, this retrospective study analyzed 151 intracytoplasmic sperm injection (ICSI) cycles (1152 embryos collected) between November 2016 and June 2019. SDF was assessed using sperm chromatin dispersion.
The cycles were divided into two groups based on the SDF rate: SDF < 15% (n = 114) and SDF ≥ 15% (n = 37). The embryo morphokinetic parameters, cleavage patterns, and embryo quality were compared between the two groups. The morphokinetic parameters tPNf, t2, t3, t4, t5, t6, and t8 were achieved significantly earlier in the SDF < 15% group compared with the SDF ≥ 15% group. The fertilization and 2PN rates seemed to be significantly higher in the SDF < 15% group compared with the SDF ≥ 15% group, while the abnormal cleavage rates were similar. However, a significantly higher rate of chaotic cleavage (CC) was observed in the SDF ≥ 15% group. The D3 high-quality embryo and available embryo rates were similar between the two groups. The blastocyst formation, high-quality blastocyst, and available blastocyst rates in the SDF < 15% group were significantly higher than those in the SDF ≥ 15% group. With an increase in SDF level, the chemical pregnancy, clinical pregnancy and implantation rates tended to decrease, while the miscarriage rate increased. This study demonstrated that SDF ≥ 15% reduces the fertilization rate of ICSI cycles and affects certain morphokinetic parameters. A higher SDF level can also induce a higher rate of CC, with subsequent decreases in the blastocyst formation rate and blastocyst quality. abstract_id: PUBMED:31220913 Correlations between embryo morphokinetic development and maternal age: Results from an intracytoplasmic sperm injection program. Objective: It is widely accepted that aging decreases women's fertility capacity. The aim of this study was to assess correlations between maternal age and the morphokinetic parameters and cleavage pattern of embryos. Methods: The morphokinetics of embryos derived from women <30, 30-35, 36-40, and >40 years of age were compared retrospectively in terms of time of second polar body extrusion, time of pronuclei appearance, time of pronuclei fading, and time of two to eight discrete cells (t2-t8). Furthermore, abnormal cleavage patterns such as uneven blastomeres at the two-cell stage, cell fusion (Fu), and trichotomous mitoses (TM) were assessed. Results: Only t5 occurred later in women aged 36-40 and >40 years when compared with those aged <30 and 30-35 years (p<0.001). Other morphokinetic timing parameters, as well as the presence of uneven blastomeres, were comparable between the groups (p>0.05). However, Fu and TM were more common in women aged >40 years than in younger women (p<0.001). Conclusion: Maternal age was correlated with the cleavage pattern of embryos. Therefore, evaluating embryo morphokinetics may contribute to optimal embryo selection, thereby increasing fertility in patients with advanced maternal age. abstract_id: PUBMED:35963755 Effect of the co-administration of HCG and GnRH agonist (dual trigger) versus standard HCG trigger on morphokinetic embryo parameters. Research Question: Does dual trigger (the co-administration of triptorelin 0.2 mg and recombinant human chorionic gonadotrophin (HCG) [Decapeptyl 0.2 mg + Ovitrelle 250 µg]) versus standard recombinant HCG (Ovitrelle 250 µg) affect embryo quality and morphokinetic parameters? Design: Morphokinetic parameters and embryo quality of embryos derived from the first gonadotrophin-releasing hormone (GnRH) antagonist IVF/intracytoplasmic sperm injection (ICSI) cycles triggered by dual trigger or standard HCG trigger in women ≤42 years.
Outcome measures included time to pronucleus fading (tPNf), cleavage timings (t2-t8), synchrony of the second cycle (s2), duration of the second cycle (cc2) and known implantation data (KID) scoring for embryo quality. Multivariate linear and logistic regression analyses were performed for confounding factors. Results: A total of 4859 embryos were analysed: 1803 embryos from 267 cycles in the dual trigger group and 3056 embryos from 463 cycles in the HCG trigger group. The groups were similar in patient and treatment characteristics apart from a higher maternal body mass index and lower maturation rate in the dual trigger group. Time to second polar body extrusion was shorter in the dual trigger group. Cleavage timings from zygote to an 8-cell embryo did not differ between the two groups. There was a higher percentage of embryos with an optimal cc2 duration in the HCG group. In multivariate logistic regression models, the trigger type was not a significant factor for cell cycle division parameters. Conclusions: Overall, there was no significant difference in the morphokinetic parameters or quality of embryos evaluated using a time-lapse monitoring system between embryos derived following dual trigger compared with HCG. abstract_id: PUBMED:28384614 How much information about embryo implantation potential is included in morphokinetic data? A prediction model based on artificial neural networks and principal component analysis. Purpose: The aim of this study was to answer the question of how much information about embryo implantation potential can be obtained from morphokinetic parameters through the creation a predictive model based on morphokinetic information and using advanced data-mining and artificial intelligence methods. Materials And Methods: Time-lapse recordings of 610 embryos were included in the analysis. For each embryo, absolute (t2, t3, t4, t5) and relative (cc2 and s2) morphokinetic parameters were collected. Additionally, fragmentation levels assessed at t2, t3, t4 and t5 time-points, presence of multinucleation, evenness of blastomeres after the first and second division and women's age were included in the analysis. Results: The created predictive model of implantation was constructed on the basis of two advanced data-mining methods: principal component analysis (PCA) and artificial neural networks (ANN). The receiver operating characteristic (ROC) curve constructed for the created model demonstrated its good quality. The area under the ROC curve was AUC=0.75 with a 95% confidence interval (0.70, 0.80). Finally, the model was verified on a validation set and the results were also good, although slightly weaker: AUC=0.71, with a 95% confidence interval (0.59, 0.84). Conclusions: The combination of two data-mining algorithms: PCA and ANN may be considered a method which can extract virtually all the available information from data. This methodology is indeed efficient, but models presented in the literature are also effective and close to the limit of the maximal information which can be extracted from morphokinetic data. The limit can be determined at the level of AUC value marginally above 0.7. abstract_id: PUBMED:32119189 Impact of sperm characteristics on time-lapse embryo morphokinetic parameters and clinical outcome of conventional in vitro fertilization. Background: Sperm abnormalities may negatively affect embryo development. 
Objectives: To determine the influence of sperm abnormalities (morphology, motility, DNA fragmentation) on embryo morphokinetic variables and clinical outcome of conventional IVF. Materials And Methods: Participants were 86 couples undergoing in vitro fertilization (IVF). Sperm morphology was evaluated according to the strict criteria proposed by Kruger/Tygerberg. CASA system was applied for sperm motility assessment. Sperm DNA fragmentation was assessed by the chromatin structure assay (SCSA). Morphokinetic parameters were determined in 223 embryos obtained from conventional IVF only and cultured in a single-step medium using time-lapse imaging technology. Results: Time-lapse variables from the initial embryo development, such as time of pronuclei fading (tPNf) and time for two cells (t2), were those more strongly related with abnormalities of sperm motility, morphology, and DNA fragmentation. Sperm morphological abnormalities rather than sperm motility were more closely associated with embryo morphokinetics. Sperm head defects were mainly correlated with the last stages of embryonic development (t9 to tHB), sperm midpiece defects with intermediate cleaving embryos (t5-t9), and sperm tail defects with the initial stages of embryonic development (tPNa-t4). Excess residual cytoplasm was positively correlated with all embryo morphokinetic parameters except t2 and tM. Absence of acrosomes, pinheads, coiled tails, and multiple sperm morphological defects correlated negatively with time-lapse embryo morphokinetic variables. Discussion: A large number of sperm-related variables, including frequency of specific morphological defects, morphological indexes, DNA fragmentation and motility, and time-lapse embryo variables, such as time intervals based mainly of 15 time points were recorded. Conclusion: There were strong associations between specific sperm defects of the head, midpiece, and tail with certain stages of embryonic development from observation of pronuclei to the hatched blastocyst. Coiled tail, cumulative head defects, and multiple abnormalities index (MAI) were associated both with embryo morphokinetics and the implantation success. abstract_id: PUBMED:23585186 The use of morphokinetic parameters to select all embryos with full capacity to implant. Purpose: Embryo kinetics analysis is an emerging tool for selecting embryo(s) for transfer. The aim of the present study was to determine morphokinetic parameters easily usable in the laboratory and predictive of embryo development and, most importantly, of embryo competence in producing a clinical pregnancy after day 5 transfer. Methods: A retrospective time-lapse monitoring analysis of morphokinetic parameters for 72 fully implanted embryos (group A) were compared to 106 non-implanted embryos (group B), and to 66 embryos with arrested development from the same pool of group A. All the embryos were from 78 patients undergoing ICSI treatment and day 5 embryo transfers. Results: A day 3 embryo will develop into a viable blastocyst if the following ranges of morphokinetic parameters are met: t1 (between 18.4 h and 30.9 h post-ICSI), t2 (21.4-34.8 h), t4 (33.1-57.2 h), t7 (46.1-82.5 h), t8 (46.4-97.8 h), tC-tF (7.7-22.9 h) and s3 (0.7-30.8 h). On day 5 embryos with the highest probability to implant are those with a cc3 between 9.7 h and 21 h. Conclusions: Morphokinetic parameters are helpful to make appropriate decisions for the disposition of each embryo. 
It is recommended that each laboratory should determine its own ranges of in vitro development (IVD-MKP) and implantation-associated (IMP-MKP) morphokinetic parameters. abstract_id: PUBMED:26843394 Do morphokinetic data sets inform pregnancy potential? Purpose: The aim of this study was to create a model to predict the implantation of transferred embryos based on information contained in the morphokinetic parameters of time-lapse monitoring. Methods: An analysis of time-lapse recordings of 410 embryos transferred in 343 cycles of in vitro fertilization (IVF) treatment was performed. The study was conducted between June 2012 and November 2014. For each embryo, the following data were collected: the duration of time from the intracytoplasmic sperm injection (ICSI) procedure to further division for two, three, four, and five blastomeres, time intervals between successive divisions, and the level of fragmentation assessed in successive time-points. Principal component analysis (PCA) and logistic regression were used to create a predictive model. Results: Based on the results of principal component analysis and logistic regression analysis, a predictive equation was constructed. Statistically significant differences (p < 0.001) in the size of the created parameter between the implanted group (the median value: Me = -5.18 and quartiles: Q1 = -5.61; Q3 = -4.79) and the non-implanted group (Me = -5.69, Q1 = -6.34; Q3 = -5.16) were found. A receiver operating characteristic (ROC) curve constructed for the considered model showed the good quality of this predictive equation. The area under the ROC curve was AUC = 0.70 with a 95% confidence interval (0.64, 0.75). The presented model has been validated on an independent data set, illustrating that the model is reliable and repeatable. Conclusions: Morphokinetic parameters contain information useful in the process of creating pregnancy prediction models. However, embryo quality is not the only factor responsible for implantation, and, thus, the power of prediction of the considered model is not as high as in models for blastocyst formation. Nevertheless, as illustrated by the results of this study, the application of advanced data-mining methods in reproductive medicine allows one to create more accurate and useful models. Answer: Yes, sperm origin does affect embryo morphokinetic parameters. Studies have shown that embryos derived from ejaculated sperm tend to reach later morphokinetic milestones, such as synchronous division (S3) and time to morula (tM), faster than embryos obtained from testicular spermatozoa (PUBMED:33251758). Additionally, embryos from normal ejaculated spermatozoa and oligo-astheno-teratozoospermia patients reached these milestones faster than those from testicular spermatozoa, with the latter group also showing a lower implantation rate (PUBMED:33251758). Another study found that while clinical outcomes were comparable between groups according to sperm origin, late cellular events occurred significantly later in the surgically retrieved sperm group than in the ejaculated sperm group (PUBMED:26109332). This suggests that sperm origin can influence the timing of certain developmental events in embryos. Furthermore, sperm with low concentration and motility have been shown to result in a significant delay in certain morphokinetic parameters and impact embryo developmental potential, which can manifest as early as fertilization and may cause severe damage to preimplantation embryos (PUBMED:33123909).
Sperm DNA fragmentation has also been found to affect morphokinetic parameters, with higher levels of fragmentation leading to delayed achievement of certain morphokinetic parameters and reduced blastocyst quality (PUBMED:34034847). Overall, these findings indicate that sperm origin, along with sperm quality characteristics such as motility, morphology, and DNA integrity, can have a significant impact on embryo morphokinetics and potentially on the clinical outcomes of assisted reproductive technologies like ICSI.
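As an aside for readers reproducing this kind of analysis, the multivariate logistic regression cited above (PUBMED:33251758), in which sperm origin and t8 predicted implantation, can be sketched in a few lines. The file name and column names below are hypothetical; odds ratios are obtained by exponentiating the fitted coefficients.

    # Minimal sketch of a multivariate logistic regression for implantation,
    # mirroring the analysis described in PUBMED:33251758 (hypothetical data).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("embryo_morphokinetics.csv")   # hypothetical file
    # oat_group: 1 = ejaculated sperm (OAT), 0 = testicular sperm
    # t8: hours from pronuclei fading to the 8-cell stage; implanted: 0/1
    X = sm.add_constant(df[["oat_group", "t8"]])
    fit = sm.Logit(df["implanted"], X).fit(disp=0)
    summary = pd.concat([np.exp(fit.params).rename("OR"),
                         np.exp(fit.conf_int()).rename(columns={0: "CI low", 1: "CI high"})],
                        axis=1)
    print(summary)

With data of this shape, an odds ratio below 1 for t8 (as reported) means that each additional hour taken to reach the 8-cell stage lowers the odds of implantation.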
Instruction: Executive dysfunction, obsessive-compulsive symptoms, and attention deficit and hyperactivity disorder in Systemic Lupus Erythematosus: Evidence for basal ganglia dysfunction? Abstracts: abstract_id: PUBMED:26723981 Executive dysfunction, obsessive-compulsive symptoms, and attention deficit and hyperactivity disorder in Systemic Lupus Erythematosus: Evidence for basal ganglia dysfunction? Introduction: Chorea is well described in a group of patients with Systemic Lupus Erythematosus (SLE). There is less information, however, on other movement disorders as well as non-motor neuropsychiatric features such as obsessive-compulsive symptoms (OCS), executive dysfunction and attention deficit and hyperactivity disorder (ADHD) in subjects with SLE. Methods: Fifty-four subjects with SLE underwent a battery of neuropsychiatric tests that included the Mini Mental State Examination, the Montreal Cognitive Assessment, the Frontal Assessment Battery (FAB), the FAS verbal and the categorical (animals) semantic fluency tests, the Obsessive and Compulsive Inventory - Revised, the Yale-Brown Obsessive and Compulsive Scale and Beck's Anxiety and Depression Scales. ADHD was diagnosed according to DSM-IV criteria. SLE disease activity and cumulative damage were evaluated according to the modified SLE Disease Activity Index 2000 (mSLEDAI-2K) and the SLICC/ACR, respectively. Results: Six (11.1%) and 33 (61.1%) patients had cognitive impairment according to the MMSE and MoCA, respectively. Eleven (20.4%) had abnormal FAB scores, and 5 (9.3%) had lower semantic fluency scores than expected. The overall frequency of cognitive dysfunction was 72.2% (39 patients) and of neuropsychiatric SLE was 77.8% (42 patients). Two patients (3.7%) had movement disorders. Fifteen (27.8%) had OCS and 17 (31.5%) met diagnostic criteria for ADHD. ADHD and OCS correlated with higher disease activity, p=0.003 and 0.006, respectively. Higher cumulative damage correlated with lower FAB scores (p 0.026). Conclusions: Executive dysfunction, ADHD, OCS, and movement disorders are common in SLE. Our finding suggests that there is frequent basal ganglia dysfunction in SLE. abstract_id: PUBMED:35758635 Associations of a family history of lupus with the risks of lupus and major psychiatric disorders in first-degree relatives. Background: Genetic factors link psychiatric disorders, particularly major depressive disorder (MDD), bipolar disorder, and obsessive-compulsive disorder (OCD), with systemic lupus erythematosus (SLE). Additionally, maternal SLE is a risk factor for long-term developmental problems, particularly learning disabilities, attention disorders, autism spectrum disorder (ASD) and speech disorders, in children. Aim: We aimed to determine whether first-degree relatives (FDRs) of patients with SLE have increased risks of SLE and major psychiatric disorders. Design And Methods: Using the Taiwan National Health Insurance Research Database, we recruited 40 462 FDRs of patients with SLE as well as 161 848 matched controls. The risks of major psychiatric disorders, including schizophrenia, bipolar disorder, OCD, MDD, ASD and attention-deficit/hyperactivity disorder (ADHD), were assessed. Results: The FDRs of patients with SLE had higher risks of SLE (reported as the adjusted relative risk and 95% confidence interval: 14.54; 12.19-17.34), MDD (1.23; 1.12-1.34), ADHD (1.60; 1.55-1.65), OCD (1.41; 1.14-1.74) and bipolar disorder (1.18; 1.01-1.38) compared with controls. 
Specifically, male FDRs of patients with SLE had higher risks of SLE and bipolar disorder, whereas female FDRs of patients with SLE had higher risks of MDD and OCD. Differences in the familial relationship (i.e. parents, children, siblings and twins) were consistently associated with higher risks of these disorders compared with controls. Conclusions: The FDRs of patients with SLE had higher risks of SLE, MDD, ADHD, OCD and bipolar disorder than the controls. abstract_id: PUBMED:26116578 Symptoms of attention deficit hyperactivity disorder in patients with systemic lupus erythematosus. Objectives: Cognitive function and mood disturbance are common in patients with systemic lupus erythematosus (SLE). This study aims to examine whether SLE patients have more features of adult attention deficit hyperactivity disorder (ADHD) and their relation to anxiety and depressive symptoms. Methods: Symptoms and clinically significant items of the inattention and hyperactivity/impulsivity domains of ADHD were examined in Part A and Part B by the screening instrument of the ADHD Self-Reported Scale (ASRS), respectively. Anxiety and depressive symptoms were measured by HADS-A and HADS-D, respectively. Results: There were no differences in symptom scores of inattention and hyperactivity/impulsivity between inactive SLE patients (n = 117) and age- and sex-matched controls (n = 64). However, SLE patients had more clinically significant items in the inattention domain compared with controls (p = 0.006), particularly among those who had previous cerebral involvement (p = 0.004). Patients who had psychiatric diseases had more clinically significant items in the hyperactivity/impulsivity domain (p = 0.006). Possible ADHD was found in 7.7% of SLE and 6.3% of healthy individuals (p = 1.00) by the screening tool. Patients with higher inattention symptom scores were more likely to be unemployed but not for duration of education and smoking habit. Anxiety and depressive symptoms correlated with ADHD symptoms. HADS-A was an independent predictive factor for clinically significant symptoms of inattention (p &lt; 0.001) and hyperactivity/impulsivity (p = 0.04) by logistic regression. Conclusion: Inactive SLE patients, particularly those who had previous cerebral lupus, had more clinically significant symptoms of inattention but not hyperactivity/impulsivity reflecting underlying cognitive impairment. Anxiety and depressive symptoms were common confounders for ADHD-like symptoms. abstract_id: PUBMED:22687742 Psychiatric symptoms in systemic lupus erythematosus: a systematic review. Objective: Systemic lupus erythematosus (SLE) presents with psychiatric symptoms in most patients that often remain undiagnosed and untreated. This study evaluates the prevalence of psychiatric symptoms in SLE on the basis of clinical trials that fulfilled diagnostic criteria specified by the American College of Rheumatology (ACR). Current hypotheses explaining the pathogenesis of psychiatric symptoms of lupus are reviewed to gain new insights into the neuroimmune pathogenesis of other psychiatric disorders. Data Source: A MEDLINE search of the literature (English language only) from April 1999 to August 2011 was performed using the search terms lupus and psychiatric to identify studies of neuropsychiatric SLE. 
Study Selection: Of 163 publications, 18 clinical studies were selected that focused on psychiatric symptoms, had a sample size of at least 20, and included patients of any age or gender as long as they fulfilled ACR criteria for neuropsychiatric SLE. Data Extraction: The following data were extracted: author name, year of publication, psychiatric diagnostic method, total number of patients with SLE, and percentage of patients with individual psychiatric diagnoses. The point prevalence of psychiatric symptoms was calculated for neuropsychiatric SLE diagnoses in every study included. Results: Psychiatric symptoms are present in the majority of patients with SLE. Depression (in up to 39% of patients) and cognitive dysfunction (up to 80%) are the most common psychiatric manifestations. Genetic and environmental factors (eg, ultraviolet light, retroviruses, and medications) may play a role in the pathogenesis. In addition, the patient's reaction to the illness may result in anxiety (up to 24%) and depression. Currently known biomarkers are nonspecific for neuropsychiatric SLE and indicate inflammation, microglial activation, ischemia, oxidative stress, mitochondrial dysfunction, and blood-brain barrier dysfunction. Conclusions: Identification of lupus-specific biomarkers of psychiatric symptoms is a high priority. Our current diagnostic assessment methods need improvement. Development of evidence-based guidelines is needed to improve diagnosis, prevention, and treatment of disabling psychiatric complications in lupus. abstract_id: PUBMED:30325587 Genetic correlations among psychiatric and immune-related phenotypes based on genome-wide association data. Individuals with psychiatric disorders have elevated rates of autoimmune comorbidity and altered immune signaling. It is unclear whether these altered immunological states have a shared genetic basis with those psychiatric disorders. The present study sought to use existing summary-level data from previous genome-wide association studies to determine if commonly varying single nucleotide polymorphisms are shared between psychiatric and immune-related phenotypes. We estimated heritability and examined pair-wise genetic correlations using the linkage disequilibrium score regression (LDSC) and heritability estimation from summary statistics methods. Using LDSC, we observed significant genetic correlations between immune-related disorders and several psychiatric disorders, including anorexia nervosa, attention deficit-hyperactivity disorder, bipolar disorder, major depression, obsessive compulsive disorder, schizophrenia, smoking behavior, and Tourette syndrome. Loci significantly mediating genetic correlations were identified for schizophrenia when analytically paired with Crohn's disease, primary biliary cirrhosis, systemic lupus erythematosus, and ulcerative colitis. We report significantly correlated loci and highlight those containing genome-wide associations and candidate genes for respective disorders. We also used the LDSC method to characterize genetic correlations among the immune-related phenotypes. We discuss our findings in the context of relevant genetic and epidemiological literature, as well as the limitations and caveats of the study. abstract_id: PUBMED:21586916 Methylphenidate treatment in pediatric patients with attention-deficit/hyperactivity disorder and comorbid trichotillomania: a preliminary report. 
Objectives: Trichotillomania (TTM) is a heterogeneous mental disorder with a high rate of comorbidity and stressful life events (SLEs). Serotonergic and dopaminergic dysfunction are implicated in the pathophysiology of TTM. As in other impulse control disorders, an increased prevalence of attention-deficit/hyperactivity disorder (ADHD) is reported in patients with TTM as well. This study aimed to assess the efficacy and tolerability of methylphenidate (MPH) treatment in children and adolescents who met the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria for both ADHD and TTM. Methods: Nine children and adolescents, aged 6 to 18 years, diagnosed with ADHD and TTM, were treated with MPH for a 12-week period. The severity of ADHD was assessed using the ADHD Rating Scale, and the hair pulling was rated using the Massachusetts General Hospital Hair-Pulling Scale. Additional scales were used for assessing depression and anxiety levels, and history of SLEs was recorded. Results: Significant improvement was detected in ADHD after MPH treatment (P < 0.003), but no significant change was observed in hair pulling, as measured by the Massachusetts General Hospital Hair-Pulling Scale (P = 0.096), or in depression and anxiety levels. Lack of response of TTM to MPH (improvement, <50%) was associated with a higher rate of positive SLE history (P = 0.047). Conclusions: Some efficacy of MPH treatment was shown in TTM patients with a low rate of SLEs. A large-scale study is warranted to evaluate the efficacy of MPH for TTM in ADHD/TTM patients. abstract_id: PUBMED:17503139 Neuropsychiatric disease in collagen vascular diseases and vasculitis. The prevalence of neuropsychiatric symptoms in systemic lupus erythematosus varies between 37 and 95%; cognitive dysfunction, mood disorder, and anxiety syndromes are especially frequent. In Sjoegren's syndrome, cognitive dysfunction is combined with frontal executive disorder and attention deficit. Memory impairment, frontal executive dysfunction and personality changes have been reported in Behçet's disease. Classic polyarteritis nodosa, the Churg-Strauss syndrome and Wegener's granulomatosis may be associated with cognitive changes due to inflammatory encephalopathy. Cranial arteritis belongs to the treatable causes of dementia. In primary angiitis of the CNS, small-vessel disease presents more frequently with encephalopathy. abstract_id: PUBMED:15247538 Chorea: non-genetic causes. Purpose Of Review: The aim of this article is to review the literature on the non-genetic causes of chorea. The differential diagnosis of the large number of causes of sporadic chorea is often a challenging task. Interest has also been growing in the possibility that the mechanism responsible for Sydenham's chorea plays a role in the pathogenesis of other neuropsychiatric disorders. Recent Findings: Stroke is the main cause of sporadic chorea. Sydenham's chorea shares clinical features with tic disorders, such as obsessive-compulsive disorder and attention deficit hyperactivity disorder. However, there are unequivocal differences between Sydenham's chorea and Tourette's syndrome. There is initial evidence suggesting the beneficial effect of immunosuppression in Sydenham's chorea. Other autoimmune causes of chorea include systemic lupus erythematosus as well as paraneoplastic syndromes. The growing list of drugs associated with chorea includes lamotrigine, methadone and lithium. Among infectious agents, HIV is the leading reported cause of chorea.
Summary: Patients with sporadic chorea require a thorough work-up because numerous causes can lead to this condition. It remains unconfirmed whether the pathogenic mechanisms of Sydenham's chorea are responsible for other conditions such as isolated obsessive-compulsive disorder or Tourette's syndrome. Drugs and infectious agents, especially HIV, are often implicated in the causes of chorea. abstract_id: PUBMED:33319511 Mapping relationships between ADHD genetic liability, stressful life events, and ADHD symptoms in healthy adults. Attention-deficit/hyperactivity disorder (ADHD) symptoms are continuously distributed in the general population, where both genetic and environmental factors play roles. Stressful life events (SLEs) have been associated with ADHD diagnosis, but the relationship between ADHD genetic liability, SLEs, and ADHD symptoms in healthy individuals is less clear. Using a sample of 1,531 healthy adults (average age 26.9 years; 55.8% female), we investigated relationships between ADHD polygenic risk scores (ADHD-PRSs), SLEs, and ADHD symptoms in a general population sample. Confirming earlier findings in an overlapping sample, all SLE measures assessed (lifetime SLEs, recent SLEs, and childhood trauma (CT)) were significantly correlated with total ADHD, inattention (IA), and hyperactivity-impulsivity (HI) scores (r² range = .08-.15; all p < .005). ADHD-PRSs were associated with HI (R² best-fit = .37%), lifetime SLEs (R² best-fit = .56%), and CT (R² best-fit = .40%). Mediation analyses showed that lifetime SLEs partially mediated the association between ADHD-PRSs and HI (indirect effect: β = 68.6, bias-corrected and accelerated 95% confidence interval (BCa95%CI) [11.9, 131.0], p = .016, proportion mediated (PM) = 19.5%), with the strongest effects contributed by CT (β = 34.4, BCa95%CI [0.4, 76.5], p = .040, PM = 9.8%). On the other hand, HI partially mediated the association between the ADHD-PRSs and lifetime SLEs (β = 42.9, BCa95%CI [7.3, 83.9], p = .014, PM = 18.8%). Our study observed a complex relationship of genetic and environmental risk factors contributing to ADHD symptoms in the healthy adult population. abstract_id: PUBMED:25791080 Sex-specific and time-dependent effects of prenatal stress on the early behavioral symptoms of ADHD: a longitudinal study in China. There is increasing evidence that prenatal stressful life events (SLEs) may be a potential risk factor for attention-deficit hyperactivity disorder (ADHD), but the sex-specific and time-dependent effects of prenatal stress on ADHD are less clear. In this prospective longitudinal study, data on prenatal SLEs during different stages of gestation and indicators of buffers against stress, including maternal social support and avoidance coping, were obtained from 1765 pregnant women at 32 weeks of gestation. The behavioral symptoms of ADHD in children aged 48-54 months were evaluated by reports from the parents. There were 226 children (12.8%) above the clinically significant cutoff for ADHD. After adjusting for potential confounders, boys whose mothers experienced severe SLEs in the second trimester had a significantly increased risk (OR = 2.41, 95% CI: 1.03-5.66) of developing ADHD symptoms compared with boys whose mothers did not experience severe SLEs at this time. However, no significantly increased risk of ADHD symptoms was observed in girls born to mothers who experienced prenatal severe SLEs.
Additionally, significant interaction effects of prenatal SLEs, social support and coping style on ADHD symptoms were found in males. Boys whose mothers experienced severe SLEs during the second trimester accompanied by a higher score for avoidance coping (OR = 3.31, 95% CI: 1.13-9.70) or a lower score for social support (OR = 4.39, 95% CI: 1.05-18.31) were likely to be at a higher risk for ADHD symptoms. The epidemiological evidence in this prospective follow-up study suggests that the effect of prenatal SLEs on ADHD symptoms in offspring may depend on the timing of prenatal stress and may vary according to the sex of the offspring. Answer: The evidence suggests that executive dysfunction, obsessive-compulsive symptoms (OCS), and attention deficit and hyperactivity disorder (ADHD) are common in patients with Systemic Lupus Erythematosus (SLE), which may indicate basal ganglia dysfunction. A study involving 54 subjects with SLE found that a significant proportion of patients exhibited cognitive impairment, with 11.1% and 61.1% having cognitive impairment according to the Mini Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA), respectively. Additionally, 20.4% had abnormal Frontal Assessment Battery (FAB) scores, and 9.3% had lower semantic fluency scores than expected. The overall frequency of cognitive dysfunction was 72.2%, and neuropsychiatric SLE was 77.8%. Notably, 27.8% had OCS, and 31.5% met diagnostic criteria for ADHD. Both ADHD and OCS correlated with higher disease activity (PUBMED:26723981). Furthermore, a study using the Taiwan National Health Insurance Research Database found that first-degree relatives (FDRs) of patients with SLE had higher risks of SLE, major depressive disorder (MDD), ADHD, obsessive-compulsive disorder (OCD), and bipolar disorder compared with controls. This suggests a genetic link between SLE and these psychiatric disorders (PUBMED:35758635). Another study examining symptoms of ADHD in SLE patients found that inactive SLE patients, particularly those with previous cerebral involvement, had more clinically significant symptoms of inattention but not hyperactivity/impulsivity, reflecting underlying cognitive impairment. Anxiety and depressive symptoms were common confounders for ADHD-like symptoms (PUBMED:26116578). The prevalence of psychiatric symptoms in SLE patients is high, with depression and cognitive dysfunction being the most common psychiatric manifestations. The pathogenesis of psychiatric symptoms in SLE may involve genetic and environmental factors, as well as the patient's reaction to the illness, resulting in anxiety and depression (PUBMED:22687742). Genetic correlations between immune-related disorders and several psychiatric disorders, including ADHD, have been observed, suggesting a shared genetic basis between these conditions (PUBMED:30325587).
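For readers who want to see the shape of the mediation analysis cited in the abstracts above (PUBMED:33319511), the product-of-coefficients approach with a bootstrap interval can be sketched as follows. This is a conceptual illustration only; the data file and the column names (prs, sle, hi) are hypothetical, and the published study used bias-corrected accelerated intervals rather than the simple percentile interval shown here.

    # Conceptual mediation sketch: ADHD polygenic risk (prs) -> lifetime stressful
    # life events (sle) -> hyperactivity-impulsivity score (hi). Hypothetical data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("prs_sle_adhd.csv")  # hypothetical columns: prs, sle, hi

    def indirect_effect(d):
        a = smf.ols("sle ~ prs", data=d).fit().params["prs"]       # path a: PRS -> mediator
        b = smf.ols("hi ~ sle + prs", data=d).fit().params["sle"]  # path b: mediator -> outcome
        return a * b

    rng = np.random.default_rng(0)
    boot = [indirect_effect(df.sample(len(df), replace=True, random_state=int(s)))
            for s in rng.integers(0, 1_000_000, size=1000)]
    low, high = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {indirect_effect(df):.3f}, 95% bootstrap CI [{low:.3f}, {high:.3f}]")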
Instruction: Optimal results immediately after MitraClip therapy or surgical edge-to-edge repair for functional mitral regurgitation: are they really stable at 4 years? Abstracts: abstract_id: PUBMED:27009105 Optimal results immediately after MitraClip therapy or surgical edge-to-edge repair for functional mitral regurgitation: are they really stable at 4 years? Objectives: Recurrent mitral regurgitation (MR) is common after surgical and percutaneous (MitraClip) treatment of functional MR (FMR). However, the Everest II trial suggested that, in patients with secondary MR and initially successful MitraClip therapy, the results were sustained at 4 years and were comparable with surgery in terms of late efficacy. The aim of this study was to assess whether both those findings were confirmed by our own experience. Methods: We reviewed 143 patients who had an initial optimal result (residual MR ≤ 1+ at discharge) after MitraClip therapy (85 patients) or surgical edge-to-edge (EE) repair (58 patients) for severe secondary MR (mean ejection fraction 28 ± 8.5%). Patients with MR ≥ 2+ at hospital discharge were excluded. The two groups were comparable. Only age and logistic EuroSCORE were higher in the MitraClip group. Results: Follow-up was 100% complete (median 3.2 years; interquartile range 1.8-6.1). Freedom from cardiac death at 4 years (81 ± 5.2 vs 84 ± 4.6%, P = 0.5) was similar in the surgical and MitraClip group. The initial optimal MitraClip results did not remain stable. At 1 year, 32.5% of the patients had developed MR ≥ 2+ (P = 0.0001 compared with discharge). Afterwards, patients with an echocardiographic follow-up at 2 years (60 patients), 3 years (40 patients) and 4 years (21 patients) showed a significant increase in the severity of MR compared with the corresponding 1 year grade (all P < 0.01). Freedom from MR ≥ 3+ at 4 years was 75 ± 7.6% in the MitraClip group and 94 ± 3.3% in the surgical one (P = 0.04). Freedom from MR ≥ 2+ at 4 years was 37 ± 7.2 vs 82 ± 5.2%, respectively (P = 0.0001). Cox regression analysis identified the use of MitraClip as a predictor of recurrence of MR ≥ 2+ [hazard ratio (HR) 5.2, 95% confidence interval (CI) 2.5-10.8, P = 0.0001] as well as of MR ≥ 3 (HR 3.5, 95% CI 0.9-13.1, P = 0.05). Conclusions: In patients with FMR and optimal mitral competence after MitraClip implantation, the recurrence of significant MR at 4 years is not uncommon. This study does not confirm previous observations reported in the Everest II randomized controlled trial indicating that, if the MitraClip therapy was initially successful, the results were sustained at 4 years. When compared with the surgical EE combined with annuloplasty, MitraClip therapy provides lower efficacy at 4 years. abstract_id: PUBMED:32910371 Update on the Current Status and Indications for Transcatheter Edge-to-Edge Mitral Valve Repair. Purpose Of Review: To review the current status and indications of transcatheter edge-to-edge mitral valve repair. Recent Findings: Mitral regurgitation remains a common valvular disease and can be classified as degenerative (primary) or functional (secondary). Randomized controlled trials have shown that transcatheter edge-to-edge mitral valve repair with MitraClip is successful, safe, and effective in reducing mitral regurgitation. The US Food and Drug Administration approved MitraClip in 2013 for treatment of patients with primary mitral regurgitation at prohibitive surgical risk and in 2019 for secondary mitral regurgitation.
Several MitraClip generations exist (NT/R, XT/R, NTW, and XTW) with unique features and considerations. Additional edge-to-edge repair, non-edge-to-edge repair, and transcatheter valve replacement systems are under investigation as stand-alone or adjunctive therapy for patients with mitral regurgitation. Mitral regurgitation remains a significant health burden and many patients are not suitable for surgical repair or replacement. Transcatheter mitral valve therapies can be considered in selected patients and are safe and effective. More research is needed to understand how to best select devices and patients and optimize outcomes. abstract_id: PUBMED:33872701 State of the Art: Transcatheter Edge-to-Edge Repair for Complex Mitral Regurgitation. Transcatheter edge-to-edge mitral valve repair has revolutionized the treatment of primary and secondary mitral regurgitation. The landmark EVEREST (Endovascular Valve Edge-to-Edge Repair Study) and COAPT (Clinical Outcomes Assessment of the MitraClip Percutaneous Therapy for High Surgical Risk Patients) trials included only clinically stable patients with favorable mitral valve anatomy for edge-to-edge repair. However, since its initial commercial approval in the United States, growing operator experience, device iterations, and improvements in intraprocedural imaging have led to an expansion in the use of transcatheter edge-to-edge repair to more complex mitral valve pathologies and clinical scenarios, many of which were previously considered contraindications for the procedure. Because patients with prohibitive surgical risk are often older and present with complex mitral valve disease, knowledge of the potential effectiveness, versatility, and technical approach to a broad range of anatomy is clinically relevant. In this review the authors examine the current experience with mitral valve transcatheter edge-to-edge repair in various pathologies and scenarios that go well beyond the EVEREST II trial inclusion criteria. abstract_id: PUBMED:35170230 One-year results following PASCAL-based or MitraClip-based mitral valve transcatheter edge-to-edge repair. Aims: Mitral valve transcatheter edge-to-edge repair (TEER) has been established as a suitable alternative to mitral valve surgery in patients with severe mitral regurgitation (MR) and high surgical risk. The PASCAL system represents a novel device, potentially augmenting the toolkit for TEER. The aim of this study was to assess and compare short and 1 year safety and efficacy of the PASCAL and MitraClip systems for TEER. Methods And Results: Procedural, short, and 1 year outcomes of a 1:2 propensity-matched cohort including 41 PASCAL and 82 MitraClip cases were investigated. Matching was based on clinical, laboratory, echocardiographic, and functional characteristics. The primary endpoints assessed were procedural success [as defined by the Mitral Valve Academy Research Consortium (MVARC)], residual MR, functional class, and a composite endpoint comprising death, heart failure hospitalization, and mitral valve re-intervention. We found for the PASCAL and the matched MitraClip cohort no significant differences in MVARC defined technical (90.2% vs. 95.1%, P = 0.44), device (90.2% vs. 89.0%, P = 1.0), or procedural (87.8% vs. 80.5%, P = 0.45) success rates. Accordingly, the overall MR reduction and improvement in New York Heart Association (NYHA) class were comparable (1 year follow-up: MR ≤ 2 95% vs. 93.6%, P = 1.0; NYHA ≤ 2 57.1% vs. 66.7%, P = 0.59). 
The composite outcome revealed no statistically significant difference between both devices (1 year follow-up: 31.7% vs. 37.8%, P = 0.55). Interestingly, we found at both short and 1 year follow-up a significantly higher rate of patients with none or trace MR in the PASCAL-treated cohort (short follow-up: 17.9% vs. 0%, P = 0.0081; 1 year follow-up: 25% vs. 0%, P = 0.0016). Conversely, the rate of aborted device implantations due to an elevated transmitral gradient was higher in PASCAL interventions (9.8% vs. 1.2%, P = 0.04). Conclusions: Transcatheter edge-to-edge repair using the PASCAL or MitraClip device results in favourable and comparable outcomes regarding safety, efficacy, and clinical improvement after 1 year. abstract_id: PUBMED:37165804 Complications following transcatheter edge-to-edge mitral valve repair: Personal experience and review of the literature. Mitral valve dysfunction affects around 2% of the population and its incidence is still increasing, making it the second most common valvular heart disease, after aortic stenosis. Depending on the etiology of the disease, it can be classified into primary or secondary mitral regurgitation. The first line of treatment is optimal medical therapy. If ineffective, mitral valve intervention can be considered. For patients disqualified from surgical treatment, transcatheter edge-to-edge repair with the use of MitraClip may be considered. Over 100,000 MitraClip procedures have been performed, which makes this the most established transcatheter technique for the treatment of severe mitral regurgitation. The aim of this review is to discuss the technical details of the MitraClip procedure, clinical evidence regarding the efficacy of MitraClip, and complications related to clip implantation, along with acute complications, based on the currently available evidence and clinical experience. abstract_id: PUBMED:32996198 Reinterventions after transcatheter edge to edge mitral valve repair: Is early clipping warranted? Background: Reintervention after transcatheter edge to edge repair using MitraClip is still challenging. We aimed to report our experience in reinterventions after MitraClip procedures and describe the outcomes. Methods: From 2012 to 2020, 167 patients had a transcatheter edge to edge repair; 10 of them needed reinterventions. At the time of the first MitraClip, the median EuroSCORE was 4.29 (2.62-7.52), and the ejection fraction was 30 (20-40)%. Results: Emergency mitral valve replacement (MVR) was performed in two patients, elective MVR in three, cardiac transplantation in two, and repeat clipping in three patients. The median time from MitraClip to the reintervention was 4.5 (2-13) months. One patient required extracorporeal membrane oxygenation support after elective MVR. Repeat clipping failed to control mitral regurgitation grade in all patients. Clip detachment was reported in five patients (50%). The median follow-up after the reintervention was 19.5 (9-75) months, and mortality occurred in two patients who had repeat clipping (20%). Conclusions: MVR after MitraClip is feasible with low morbidity and mortality. Repeat mitral valve clipping had a high failure rate. Mitral repair was not feasible in all patients in our series, and the use of MitraClip to delay surgical interventions may not be feasible if mitral repair is an option. abstract_id: PUBMED:37283912 Rapidly progressive infective endocarditis after MitraClip therapy: A rare complication of transcatheter edge-to-edge mitral valve repair.
Infective endocarditis (IE) is a rare, life-threatening complication of MitraClip (Abbott, Abbott Park, IL, USA) therapy. We report a case of an 84-year-old male who underwent transcatheter edge-to-edge mitral valve (MV) repair using MitraClip (Abbott, Abbott Park, IL, USA) 4 weeks prior for ventricular functional mitral regurgitation (MR) and returned with unstable hemodynamics and high-grade fever. Transthoracic echocardiography (TTE) on emergency admission showed thickening of the anterior mitral leaflet (AML) without apparent MR deterioration. TTE and transesophageal echocardiography (TEE) performed the next day showed severe MR due to rapidly progressing AML degeneration with aneurysmal formation. During the TEE examination, exacerbated heart failure due to severe MR caused cardiogenic shock and subsequent ventricular fibrillation, necessitating emergency extracorporeal cardiopulmonary resuscitation. Considering the positive findings of methicillin-resistant Staphylococcus aureus (MRSA) in blood cultures and degenerative MV findings, MitraClip-related IE was diagnosed; finally, MV replacement was performed. Retrospective consideration suggested that the potential causes of this MitraClip-related IE were valve injuries caused by multiple full-close procedures and insufficient prophylaxis for preoperatively detected MRSA. MitraClip-related IE has destructive characteristics that necessitate surgical intervention despite high risks; therefore, we should prevent procedure-related MV injuries and implement preoperative infection precautions to prevent catastrophic complications, particularly in patients with preoperative nasal MRSA-positive findings. Learning Objectives: MitraClip-related infective endocarditis (IE) is a rare but fatal condition. IE caused by methicillin-resistant Staphylococcus aureus (MRSA), in particular, has an inferior prognosis with high mortality rates due to its destructive nature. Therefore, interventionalists should consider preventive strategies to avoid procedure-related valve injuries and adequately prepare for prophylaxis of patients who are carriers of MRSA to prevent MitraClip-related IE caused by MRSA. abstract_id: PUBMED:37992330 Mitral valve transcatheter edge-to-edge repair with PASCAL vs MitraClip: A systematic review and meta-analysis. Background: Transcatheter edge-to-edge repair (TEER) using the MitraClip (Abbott Vascular) system has emerged as a standard treatment for patients with symptomatic severe secondary or inoperable primary mitral regurgitation (MR). The relatively recent approval of the PASCAL Transcatheter Valve Repair System (Edwards Lifesciences) has expanded the options of TEER devices. However, evidence comparing PASCAL with MitraClip systems is still limited. Methods: We conducted a systematic literature search and meta-analysis in the PubMed, Medline, and EMBASE databases for studies comparing the PASCAL and MitraClip systems. Results: Four observational studies and 1 randomized controlled trial, involving 1315 patients total, were eligible for inclusion. All patients exhibited symptomatic (NYHA II-IV) MR grades 3+ or 4+. Baseline characteristics were comparable across all included studies. The clinical outcomes were assessed according to the Mitral Valve Academic Research Consortium consensus. The procedural success rates for the 2 devices were comparable in terms of achieving post-procedural MR grades of less than or equal to 2+ and less than or equal to 1+.
Furthermore, most patients improved their clinical status, with no significant differences between patients treated with PASCAL and those treated with MitraClip. In terms of safety, both procedures exhibited low overall mortality rates and occurrence of major adverse events (MAE), without significant difference between the 2 devices. These findings remained consistent in both short- and long-term follow-up assessments. Conclusions: Our study revealed similar effectiveness and safety profiles between the PASCAL and MitraClip devices in patients experiencing significant symptomatic MR. abstract_id: PUBMED:30698735 Surgical revision of failed percutaneous edge-to-edge mitral valve repair: lessons learned. Objectives: Although percutaneous edge-to-edge mitral valve repair with the MitraClip system is becoming widely adopted in clinical practice, surgical experience on how to correct failed MitraClip therapy is limited. We aimed to analyse the surgical and pathological outcomes after surgical revision of the failed MitraClip therapy. Methods: Between January 2011 and January 2018, 25 patients (age 73 ± 9 years; men 48%; New York Heart Association class 3.4 ± 0.49) were admitted for severe mitral regurgitation at a median of 54 days (range 1-1496 days) after MitraClip edge-to-edge repair. Perioperative variables were analysed for their association with surgical outcomes. Results: All patients underwent explantation of the MitraClip system and subsequent mitral valve replacement. Perioperative mortality was as high as 28%, mainly due to pre-existing cardiogenic or septic shock. The Kaplan-Meier analysis revealed a 53% overall 1-year survival. Among preoperative variables, the logistic European System for Cardiac Operative Risk Evaluation score, left ventricular ejection fraction and liver dysfunction had a significant influence on in-hospital survival. Intraoperatively, the predominant pathology included mitral valve leaflet damage due to tear, degeneration or infection. Although leaflet tears or MitraClip detachment mainly occurred within the first 6 months after MitraClip therapy, leaflet infections and degeneration mainly occurred later during follow-up. Conclusions: The surgical revision of failed MitraClip therapy is feasible but has high perioperative mortality, especially among patients with cardiogenic shock, septic shock or liver failure. Mitral regurgitation after the MitraClip therapy is mainly caused by mitral valve leaflet damage due to tear, degeneration or infection, all related to the MitraClip itself. abstract_id: PUBMED:31923376 Recent advances in patient selection and devices for transcatheter edge-to-edge mitral valve repair in heart failure. Introduction: Secondary mitral regurgitation (MR) is a consequence of chronic heart failure (HF) with reduced left ventricular ejection fraction (HFrEF) and left ventricular (LV) dilatation. Severe MR worsens prognosis and accelerates LV dilatation as well as decline in LV ejection fraction.Areas covered: In this review we summarize the available data of patients with chronic HF undergoing transcatheter edge-to-edge mitral valve repair (TMVR) with the MitraClip system for severe secondary MR, considering also results of recent MITRA-FR (Multicenter Study of Percutaneous Mitral Valve Repair MitraClip Device in Patients With Severe Secondary Mitral Regurgitation) and COAPT (Cardiovascular Outcomes Assessment of the MitraClip Percutaneous Therapy for Heart Failure Patients With Functional Mitral Regurgitation) trials. 
Furthermore, we discuss recent advances in devices for TMVR, focussing on the new MitraClip XTR as well as the Edwards PASCAL system. Expert opinion: Optimal patient selection for TMVR is still a matter of great debate due to contradictory results of MITRA-FR and COAPT. New edge-to-edge devices (MitraClip XTR and Edwards PASCAL) come with longer clip arms and the Edwards PASCAL system is the first device with the capability of independent grasping of anterior and posterior leaflet, providing interventionalists with more options to treat patients with secondary MR. Answer: The stability of optimal results immediately after MitraClip therapy or surgical edge-to-edge repair for functional mitral regurgitation (FMR) at 4 years is questionable. A study reviewed 143 patients who had an initial optimal result (residual MR ≤ 1+ at discharge) after MitraClip therapy or surgical edge-to-edge (EE) repair for severe secondary MR. The follow-up, which was 100% complete with a median of 3.2 years, revealed that the initial optimal MitraClip results did not remain stable. At 1 year, 32.5% of the patients had developed MR ≥ 2+ and there was a significant increase in the severity of MR at subsequent yearly follow-ups. Freedom from MR ≥ 3+ at 4 years was 75% in the MitraClip group and 94% in the surgical group, and freedom from MR ≥ 2+ was 37% versus 82%, respectively. The use of MitraClip was identified as a predictor of recurrence of MR ≥ 2+ as well as of MR ≥ 3. This study concluded that recurrence of significant MR at 4 years is not uncommon in patients with FMR and optimal mitral competence after MitraClip implantation, and the results were not as sustained as previously reported in the Everest II randomized controlled trial. Compared with surgical EE combined with annuloplasty, MitraClip therapy provides lower efficacy at 4 years (PUBMED:27009105). In contrast, other studies have shown that transcatheter edge-to-edge mitral valve repair with MitraClip is successful, safe, and effective in reducing mitral regurgitation, with the FDA approving MitraClip for treatment of patients with primary MR at prohibitive surgical risk in 2013 and for secondary MR in 2019 (PUBMED:32910371). However, the long-term stability of these results, particularly in comparison to surgical repair, appears to be less favorable based on the study mentioned above. Therefore, while MitraClip therapy is a valuable option for patients with FMR, especially those at high surgical risk, the long-term stability of optimal results may not be as reliable as initially thought, and surgical edge-to-edge repair may provide better efficacy at 4 years.
Instruction: Are newborns receiving premedication before elective intubation? Abstracts: abstract_id: PUBMED:16598295 Are newborns receiving premedication before elective intubation? Aims: To determine the extent and type of premedication used for elective endotracheal intubation in neonatal intensive care units (NICUs). Methods: A pretested questionnaire was distributed via e-mail to the program directors of the neonatology divisions with accredited fellowship programs in Neonatal-Perinatal Medicine in the United States. Results: Of the 100 individuals contacted, 78 (78%) participated in the survey. Only 34 of the 78 respondents (43.6%) always use any premedication for elective intubation. Nineteen respondents (24.4%) reported to have a written policy regarding premedication. Morphine or fentanyl was used most commonly (57.1%), with a combination of opioids and midazolam or other benzodiazepines used less frequently. Fourteen respondents (25%) also use muscle relaxants with sedation for premedication, but only nine respondents combined paralysis with atropine and sedation. Conclusion: Most neonatology fellowship program directors do not report always using premedication for newborns before elective endotracheal intubation despite strong evidence of physiologic and practical benefits. Only a minority of the NICUs has written guidelines for sedation, which may preclude effective auditing of this practice. Educational interventions may be necessary to ensure changes in clinical practice. abstract_id: PUBMED:26042264 Newborns should be receiving premedication before elective intubation. Background: Intubation is a common neonatal procedure. Premedication is accepted as a standard of care, but its use is not universal and wide variations exist in practice. Objective: To evaluate current practices for premedication use prior to elective neonatal intubation in South Africa (SA). Method: We invited 481 clinicians to participate in a cross-sectional web-based survey. Results: We received responses from 28.3% of the clinicians surveyed; 54.1% were from the private sector and 45.9% from the state sector. Most respondents worked in medium-sized neonatal units with six to ten beds. Most paediatricians (76.0%) worked in the private sector, and 78.6% of neonatologists in the state sector. Premedication was practised by 71.9% of the respondents, but only 38.5% of neonatal units had a written policy. Sedatives were used for premedication by 63.2% of the respondents. Midazolam (41.5%), morphine (34.0%) and ketamine (20.8%) were most commonly used. Muscle relaxants and atropine were not routinely administered. Suxamethonium was the muscle relaxant of choice. Varied combinations of agents or single agents were used. Midazolam used alone was the preferred option. Conclusion: This first survey of premedication for neonatal intubation in SA revealed variations in practice, with a minority of clinicians following a written policy. The findings can be used to benchmark practice and inform the design of local collaborative trials aimed at determining optimal premedication prior to neonatal intubation. The survey demonstrates clinicians' reluctance to participate in surveys, suggesting a need for a national collaborative network to obtain representative data. abstract_id: PUBMED:17905186 Premedication for tracheal intubation in neonates: confusion or controversy? Tracheal intubation is performed frequently in the NICU and delivery room. 
This procedure is extremely distressing, painful, and has the potential for airway injury. Premedication with sedatives, analgesics, and muscle relaxants is standard practice for pediatric and adult intubation, yet the use of these drugs is not common for intubation in neonates. The risks and benefits of using premedications for intubating unstable newborns are hotly debated, although recent evidence shows that premedication for non-urgent or semi-urgent intubations is safer and more effective than awake intubations. This article reviews clinical practices reported in surveys on premedication for neonatal intubation, the physiological effects of laryngoscopy and intubation on awake neonates, as well as the clinical and physiological effects of different drug combinations used for intubation. A wide variety of drugs, either alone or in combination, have been used as premedication for elective intubation in neonates. Schematically, these studies have been of three main types: (a) studies comparing awake intubation versus those with sedation or analgesia, (b) studies comparing different premedication regimens comprising sedatives, analgesics, and anesthetics, and (c) case series of neonates in which some authors have reported their experience with a specific premedication regimen. The clinical benefits described in these studies and the need for pain control in neonates make the case for using appropriate premedication routinely for elective or semi-urgent intubations. Tracheal intubation without the use of analgesia or sedation should be performed only for urgent resuscitations in the delivery room or other life-threatening situations when intravenous access is unavailable. abstract_id: PUBMED:25607227 Premedication Use Before Nonemergent Intubation in the Newborn Infant. Objective: In 2010, an American Academy of Pediatrics (AAP) clinical report recommended that except for emergent situations, premedication should be used for all endotracheal intubations in newborns. The purpose of this study is to ascertain the current practice of premedication before elective intubation. Study Design: An online, survey-based questionnaire on the practice of premedication before nonemergent intubations was distributed via e-mail to neonatologists who are members of the Perinatal Section of the AAP. Results: Although 72% of respondents believed premedication should be used in nonemergent intubations, only 34% of the respondents report frequently premedicating before intubation with significant variation among the neonatal units (46% among level 4 units and 27% in level 3 and 2 units) p = 0.000. About 44% of respondents report having a written protocol or guideline on premedication which significantly correlated with the use of premedication (62% in level 4, 33% in level 3, and 16% in level 2 units), p = 0.000. Conclusion: Despite a recent AAP clinical report recommending the use of premedication before nonemergent endotracheal intubation, only one-third of neonatologists report frequent use of premedication and less than half of the institutions have a written protocol on premedication. abstract_id: PUBMED:23493980 Premedication for neonatal intubation: Current practice in Saudi Arabia. Background: Despite strong evidence of the benefits of rapid sequence intubation in neonates, it is still infrequently utilized in neonatal intensive care units (NICU), contributing to avoidable pain and secondary procedure-related physiological disturbances. 
Objectives: The primary objective of this cross-sectional survey was to assess the practice of premedication and regimens commonly used before elective endotracheal intubation in NICUs in Saudi Arabia. The secondary aim was to explore neonatal physicians' attitudes regarding this intervention in institutions across Saudi Arabia. Methods: A web-based, structured questionnaire was distributed by the Department of Pediatrics, Umm Al Qura University, Mecca, to neonatal physicians and consultants of 10 NICUs across the country by E-mail. Responses were tabulated and descriptive statistics were conducted on the variables extracted. Results: 85% responded to the survey. Although 70% believed it was essential to routinely use premedication for all elective intubations, only 41% implemented this strategy. 60% cited fear of potential side effects for avoiding premedication and 40% indicated that the procedure could be executed more rapidly without drug therapy. Treatment regimens varied widely among respondents. Conclusion: Rates of premedication use prior to non-emergent neonatal intubation are suboptimal. Flawed information and lack of unified unit policies hampered effective implementation. Evidence-based guidelines may influence country-wide adoption of this practice. abstract_id: PUBMED:24726672 Implementation of a specific premedication protocol for tracheal intubation in the delivery room. Practice in two level-III hospitals Unlabelled: Tracheal intubation in neonates is a painful procedure performed daily in the delivery room despite the widespread development of noninvasive ventilation. Specific analgesia is not commonly performed. The objective of this observational study was to compare practices between two level-III centers: one with a specific protocol for premedication before tracheal intubation of newborns in the delivery room, the other without. Results: One hundred and fifteen neonates were intubated in the delivery room and included over a 4-month period: 25% of them received specific premedication before intubation, exclusively in the center with the protocol. None of the extreme premature neonates (age≤28 gestational weeks) received analgosedation before the procedure. Nalbuphine, midazolam, and sufentanil were mainly used, via the intravenous or intrarectal route. Infants receiving a premedication were significantly heavier and had a greater gestational age than the others (1500 g [range, 1180-2260 g] vs. 1170 [range, 860-1680 g] P=0.003, and 31 GW [range, 29-34 GW] vs. 29 [range, 27-32 GW] P=0.014, respectively). Most pediatricians (85-100%) favored a specific protocol for sedation before tracheal intubation. Implementation of a specific protocol allows specific analgesia to be implemented for newborns undergoing tracheal intubation. Further studies should be conducted to determine the best strategies for pain management during tracheal intubation of neonates, especially in the delivery room. abstract_id: PUBMED:20176672 Premedication for nonemergency endotracheal intubation in the neonate. Endotracheal intubation is a common procedure in newborn care. The purpose of this clinical report is to review currently available evidence on use of premedication for intubation, identify gaps in knowledge, and provide guidance for making decisions about the use of premedication. abstract_id: PUBMED:34729065 Premedication with intravenous midazolam for neonatal endotracheal intubation: A double blind randomized controlled trial. 
Background: Pain during the neonatal period has been associated with immediate and long-term adverse effects. One of the most frequent painful procedures that neonates face in the neonatal intensive care unit is endotracheal intubation. Midazolam has been a candidate for premedication before neonatal intubation. Our aim was to evaluate the effects of midazolam as the premedication for endotracheal intubation of premature infants during surfactant administration. Materials And Methods: In a double-blind clinical trial, 80 preterm infants underwent tracheal intubation following the use of atropine combined with either midazolam or placebo. Patients' vital signs and general conditions were constantly monitored, and pain was assessed using the premature infant pain profile (PIPP) score. Results: The mean ± standard deviation for postnatal age was 95.38 ± 50.04 and 111.63 ± 49.4 min in control and midazolam groups, respectively. The patients in the midazolam group had significantly better outcomes across several intubation outcome measures such as duration of endotracheal intubation (23.5 ± 6.7 vs. 18.8 ± 4.8 s, P = 0.001), oxygen saturation level (88.05% ±13.7 vs. 95.1 ± 1.8%, P = 0.002), intubation failure (34.2% vs. 2.5%, P = 0.0001), awake and resistance during intubation (95% vs. 20%, P = 0.0001), and excellent patient condition during intubation (0% vs. 82.5%, P = 0.0001). In addition, PIPP score was significantly lower in the midazolam group (5.2 ± 2.06 vs. 12.9 ± 2.9, P = 0.0001). Conclusion: Premedication with midazolam in newborns before intubation can hold promising effects that manifest as better overall outcomes, fewer complications, better vital signs, a more comfortable situation, and less pain for these patients. abstract_id: PUBMED:22379381 Premedication for endotracheal intubation in the newborn infant. Endotracheal intubation, a common procedure in newborn care, is associated with pain and cardiorespiratory instability. The use of premedication reduces the adverse physiological responses of bradycardia, systemic hypertension, intracranial hypertension and hypoxia. Perhaps more importantly, premedication decreases the pain and discomfort associated with the procedure. All newborn infants, therefore, should receive analgesic premedication for endotracheal intubation except in emergency situations. Based on current evidence, an optimal protocol for premedication is to administer a vagolytic (intravenous [IV] atropine 20 μg/kg), a rapid-acting analgesic (IV fentanyl 3 μg/kg to 5 μg/kg; slow infusion) and a short-duration muscle relaxant (IV succinylcholine 2 mg/kg). Intubations should be performed or supervised by trained staff, with close monitoring of the infant throughout. abstract_id: PUBMED:33479006 Premedication for Nonemergent Intubation in the NICU: A Call for Standardized Practice. This paper discusses neonatal endotracheal intubation and the need for standardization in practice regarding the use of premedication. Intubation is common in the NICU because of resuscitation, surfactant administration, congenital anomalies, apnea, and sedation for procedures or surgery. Intubation is both painful and stressful. Unmedicated intubation is associated with several adverse outcomes including repeat and prolonged attempts, airway trauma, bradycardia, severe desaturation, and need for resuscitation. Most providers believe intubation is painful and that premedication should be provided; however, there is still resistance to provide premedication and inconsistency in doing so.
Reasons for not providing premedication include concerns about medication side effects such as chest wall rigidity or prolonged respiratory depression inhibiting immediate extubation after surfactant administration. Premedication should include an opioid analgesic for pain, a benzodiazepine for an adjuvant sedation, a vagolytic to decrease bradycardia, and the optional use of a muscle relaxant for paralysis. Answer: Based on the provided abstracts, it appears that the practice of administering premedication to newborns before elective intubation varies and is not universally adopted. Some studies and surveys have shown that premedication is not always used, and there is significant variation in practice both within and between countries. For instance, a survey of neonatology fellowship program directors in the United States found that only 43.6% always use premedication for elective intubation, and only 24.4% reported having a written policy regarding premedication (PUBMED:16598295). Similarly, a study on current practices in South Africa revealed that while 71.9% of respondents practiced premedication, only 38.5% of neonatal units had a written policy on it (PUBMED:26042264). Another survey in Saudi Arabia indicated that although 70% of respondents believed it was essential to use premedication for all elective intubations, only 41% actually implemented this strategy (PUBMED:23493980). In the United States, despite recommendations from the American Academy of Pediatrics, only one-third of neonatologists reported frequent use of premedication, and less than half of the institutions had a written protocol on premedication (PUBMED:25607227). The variation in practice is also evident in the types of medications used and whether combinations of agents are administered. For example, midazolam, morphine, and ketamine are among the sedatives commonly used, with varied combinations or single agents being employed (PUBMED:26042264). Some studies advocate for the use of a combination of a vagolytic, a rapid-acting analgesic, and a short-duration muscle relaxant as an optimal protocol for premedication (PUBMED:22379381). Overall, while there is evidence supporting the physiological and practical benefits of premedication for neonatal intubation, and some clinicians and institutions have adopted this practice, it is not universally practiced, and there is a lack of standardization in the approach to premedication for elective intubation in newborns (PUBMED:17905186, PUBMED:33479006).
Instruction: Vaginal breech delivery: is it still an option? Abstracts: abstract_id: PUBMED:19570620 Is vaginal delivery in twin pregnancy still an option? An analysis of the literature data The aim of this review was to analyze the nature of the data and practices reported in the literature in order to determine if vaginal delivery of twin gestations with a first twin in cephalic presentation after 34 weeks still remains an option. Compared to cesarean, large retrospective population-based studies demonstrated increased neonatal morbidity and mortality of the second twin associated with vaginal delivery. Some then suggested systematic planned cesarean could protect second twins from increased neonatal mortality and morbidity. These results have not been confirmed in hospital retrospective studies in which candidates for vaginal delivery were carefully selected, and second twin delivery actively managed, with internal version when the fetal head is above a 0 station in case of cephalic presentation and with systematic immediate total breech extraction in case of non cephalic presentation. Taking into account the poor external validity of the population-based studies, and the reassuring results of the hospital retrospective studies, attempted vaginal delivery after 34 weeks, when the first twin is in cephalic presentation, still appears as a safe option in low risk populations. Progress in the assessment of the risks associated with the mode of delivery could result from a large nationwide observational prospective study, a randomized trial being, even more than in the breech delivery issue, an inappropriate method for evaluating these risks. abstract_id: PUBMED:14597238 Vaginal breech delivery: is it still an option? Objective: To provide local data on term breech delivery for future guidance. Study Design: Retrospective, 'intended-mode-of-delivery' analysis of 711 antenatally uncomplicated singleton breech deliveries at ≥37 weeks: 445 (63%) planned for vaginal delivery (VD) (no fetal growth restriction, fetal weight 2000-4000 g, adequate pelvic size, maternal consent) and 266 for cesarean section (CS). Outcome Measures: Apgar score, cord blood acid-base status, childhood deaths and disability. Results: Perinatal mortality was nil in both groups. Low 1-min Apgar scores and low arterial cord blood pH were significantly more frequent in planned VD, but not low Apgar scores at 5 or 10 min or low venous pH. Metabolic acidosis, neonatal intensive care, neonatal seizures, birth trauma, childhood death (CS=1), and neurodevelopmental handicaps (CS=2, VD=1) were equally common. Conclusion: Our results do not disqualify selective vaginal breech delivery at term and beyond as an option. abstract_id: PUBMED:33254265 Is Vaginal Breech Delivery Still a Safe Option? Objective: To determine whether there was any difference in neonatal and maternal outcomes between breech vaginal delivery and cephalic vaginal delivery. Methods: A retrospective, case-control study was conducted between January 2015 and December 2017 in a Portuguese hospital. A total of 26 cases of breech vaginal delivery were considered eligible and 52 pregnant women formed the control group. Results: Induced labor was more frequent in the breech vaginal delivery group (46% versus 21%, p = 0.022). Episiotomy was more common in the breech vaginal delivery group (80% versus 52%, p = 0.014), and one woman had a 3rd degree perineal laceration. Newborns in the study group had a lower birthweight (2,805 g versus 3,177 g, p < 0.001).
There was no significant difference in the neonatal outcomes. Conclusion: The present study showed that breech vaginal delivery at term compared with cephalic presentation was not associated with significant differences in neonatal and maternal morbidity. It also suggests that breech vaginal delivery remains a safe option under strict selection criteria and in the presence of an experienced obstetrician. abstract_id: PUBMED:30723903 Is induction of labor a reasonable option for breech presentation? Introduction: Whereas spontaneous onset of labor and vaginal delivery for breech presentation is considered to be a safe and effective option in selected cases, the safety of induction of labor is not established yet. The objectives of this study were to describe the outcomes of pregnancy in women induced with a fetus in breech presentation and compare the outcomes with those undergoing planned cesarean delivery. Material And Methods: We performed a secondary analysis of the observational prospective multicenter PREMODA study, including all singleton breech deliveries after 37 weeks in 174 centers in France and Belgium. We excluded women with spontaneous onset of labor, scarred uterus or intrauterine fetal death. Our study population consisted of women with either induction of labor or planned cesarean delivery. The primary outcome was the composite criteria of neonatal mortality and serious morbidity used in the Term Breech Trial and in the PREMODA prospective cohort. Results: Our study population consisted of 4138 women, 218 with induction of labor and 3920 with planned cesarean. Two-thirds (67.4%) of the women in the induction of labor group delivered vaginally. There was no significant difference between the groups for the primary outcome (48 [1.2%] in the planned cesarean group vs 3 [1.4%] in the induction of labor group, P = 0.75). Moreover, none of the criteria of the composite primary outcome was significantly more frequent in the induction of labor group. Conclusions: Induction of labor for breech presentation does not seem to increase neonatal mortality or severe neonatal morbidity compared with planned cesarean delivery. abstract_id: PUBMED:22708506 Selective vaginal breech delivery at term - still an option. Objective: To compare the neonatal outcome between planned vaginal or planned cesarean section (CS) breech delivery and planned vaginal vertex delivery at term with singleton fetuses. Design: A cohort study. Setting: Delivery Unit, Tampere University Hospital, Finland, with 5200 annual deliveries. Population: The term breech deliveries over a period of five years (January 2004 to January 2009), a total of 751 breech deliveries, and 257 vertex controls. Methods: The data were collected from the mother's medical records, including a summary of the newborn. In the case of neonatal health problems, the pediatric records were also examined. Main Outcome Measures: Maternal and neonatal mortality and morbidity as defined in the Term Breech Trial. Low Apgar scores or umbilical cord pH as secondary end-points. Results: There was no neonatal mortality. Severe morbidity was rare in all groups, with no differences between groups. The Apgar scores at one minute were lower in the planned vaginal delivery group compared with the other groups, but there was no difference at the age of five minutes. Significantly more infants in the vaginal delivery group had a cord pH < 7.05. There was one maternal death due to a complicated CS in the planned CS group and none in the other groups.
Mothers in the planned CS group suffered significantly more often from massive bleeding and needed transfusions. Conclusions: Vaginal delivery remains an acceptable option for breech delivery in selected cases. abstract_id: PUBMED:16147731 Planned vaginal delivery of term breech remains an option--result of eight years experience at a single centre. In order to compare the short term outcome of vaginal and abdominal delivery of babies with breech presentation at term at a single centre, a retrospective study of 299 women presenting with singleton normal breech fetuses at term between 1st January 1996 and 31st December 2003, at a tertiary referral centre, was conducted. The rates of perinatal mortality, neonatal mortality, serious neonatal morbidity and low Apgar scores of neonates delivered after either planned vaginal or planned abdominal delivery were compared. Among 299 women with singleton normal breech at term, 32.1% delivered vaginally and 67.9% had C/S. Successful planned vaginal delivery rate was 97.9%. There was no perinatal or neonatal death in either group and no significant difference in the rates of serious perinatal morbidity between the two groups. We concluded that planned vaginal delivery is associated with no significant adverse perinatal outcome and remains an option for selected term breech presentation. abstract_id: PUBMED:18302865 Caesarean section as preferred mode of delivery in term breech presentations is not a realistic option in rural Zimbabwe. We undertook this retrospective cross-sectional study in order to establish the outcome of 510 singleton breech presentations at term in seven district hospitals in rural Matabeleland-North Province, Zimbabwe. We also studied the mortality for 1093 caesarean sections (CS), in order to decide if CS should be the preferred option for breech births. The perinatal mortality rate in singleton breech presentation (BrPNMR) was very high: 166/1000. Between the hospitals the BrPNMR ranged from 66 to 225/1000 (P = 0.04). Although the outcome was better for CS than for vaginal delivery (odds ratio = 5.4, P = 0.0005), there was no correlation between the BrPNMR and the CS rate. Results indicate that closer monitoring of these high-risk pregnancies by the most senior staff would be more effective than increasing the number of CS performed. An external cephalic version at term could also reduce breech-related mortality. Symphysiotomy is a life-saving emergency intervention when the head is trapped. Increasing the number of CS should be strongly discouraged since the case fatality rate in this rural setting was found to be very high: 18/1093 women died after CS (for any indication) within 42 days after the operation (1.6%); 15 within 24 h. Haemorrhage was the major cause of death. abstract_id: PUBMED:16580289 Is planned vaginal delivery for breech presentation at term still an option? Results of an observational prospective survey in France and Belgium. Objective: A large trial published in 2000 concluded that planned vaginal delivery of term breech births is associated with high neonatal risks. Because the obstetric practices in that study differed from those in countries where planned vaginal delivery is still common, we conducted an observational prospective study to describe neonatal outcome according to the planned mode of delivery for term breech births in 2 such countries. Study Design: Observational prospective study with an intent-to-treat analysis to compare the groups for which cesarean and vaginal deliveries were planned.
Associations between the outcome and planned mode of delivery were controlled for confounding by multivariate analysis. The main outcome measure was a variable that combined fetal and neonatal mortality and severe neonatal morbidity. The study population consisted of 8105 pregnant women delivering singleton fetuses in breech presentation at term in 138 French and 36 Belgian maternity units. Results: Cesarean delivery was planned for 5579 women (68.8%) and vaginal delivery for 2526 (31.2%). Of the women with planned vaginal deliveries, 1796 delivered vaginally (71.0%). The rate of the combined neonatal outcome measure was low in the overall population (1.59%; 95% CI [1.33-1.89]) and in the planned vaginal delivery group (1.60%; 95% CI [1.14-2.17]). It did not differ significantly between the planned vaginal and cesarean delivery groups (unadjusted odds ratio = 1.10, 95% CI [0.75-1.61]), even after controlling for confounding variables (adjusted odds ratio = 1.40, 95% CI [0.89-2.23]). Conclusion: In places where planned vaginal delivery is a common practice and when strict criteria are met before and during labor, planned vaginal delivery of singleton fetuses in breech presentation at term remains a safe option that can be offered to women. abstract_id: PUBMED:22472329 Why vaginal breech delivery is an acceptable option. N/A abstract_id: PUBMED:25226736 Vaginal breech delivery: still a safe option. Background: Elective caesarean section has replaced vaginal delivery for term breech foetuses due to fear of complications of vaginal breech delivery. This increasing rate of caesarean section worldwide is alarming. It has not only led to increase in adverse consequences in subsequent pregnancies and future fertility but also loss of skills for vaginal breech delivery. This study was conducted to determine the safety of vaginal breech birth in terms of maternal and neonatal complications. Methods: This cross sectional study was conducted at department of Obstetrics/Gynaecology, Ayub Medical College, Abbottabad from January 2004 to December 2011. One hundred and seventy-eight women having successful vaginal breech delivery of singleton term foetuses from 2004-2008 were selected. They were studied for neonatal complications like low Apgar score (AS) < 7 at 5 min, birth trauma, admission to neonatal intensive care units and perinatal mortality. Maternal complications including any genital tract trauma and post-partum haemorrhage (PPH) were also noted. Results: There were 11243 deliveries during this period, including 674 breech presentations at term (incidence of breech 6%). Out of 178 successful vaginal breech deliveries, 8 (4.49%) neonates had AS < 7 at 5 min, and 6 (3.37%) neonates needed NICU admission. There were no cases of birth trauma or perinatal morbidity. Maternal complications occurred in only 5 (2.8%) patients, 2 (1.1%) having perineal tears, 2 (1.12%) retained placenta and one (0.56%) case of post partum haemorrhage. Conclusion: Vaginal breech delivery can be safely undertaken without compromising maternal and neonatal outcome if strict criteria are met before and during labour. Answer: Vaginal breech delivery remains a viable option under certain conditions and with careful selection and management of cases. Several studies have shown that with strict criteria and experienced obstetricians, vaginal breech delivery can be safe for both the mother and the neonate.
A retrospective analysis of term breech deliveries indicated that perinatal mortality was nil for both planned vaginal delivery and cesarean section groups, and severe morbidity was rare, suggesting that selective vaginal breech delivery at term is still an option (PUBMED:14597238). Another study echoed this sentiment, finding no significant differences in neonatal and maternal morbidity between breech vaginal delivery and cephalic vaginal delivery, reinforcing the safety of vaginal breech delivery when strict selection criteria are met (PUBMED:33254265). Furthermore, an observational prospective survey in France and Belgium found that the rate of combined neonatal outcome measure was low in the overall population and did not differ significantly between planned vaginal and cesarean delivery groups, supporting the safety of planned vaginal delivery for singleton fetuses in breech presentation at term (PUBMED:16580289). Similarly, a cohort study in Finland found no neonatal mortality and rare severe morbidity in all groups, with planned cesarean section associated with more maternal complications such as massive bleeding and transfusions, suggesting that vaginal delivery is an acceptable option for breech delivery in selected cases (PUBMED:22708506). A retrospective study at a single center also concluded that planned vaginal delivery is associated with no significant adverse perinatal outcome and remains an option for selected term breech presentation (PUBMED:16147731). Additionally, a cross-sectional study found that vaginal breech delivery can be safely undertaken without compromising maternal and neonatal outcomes if strict criteria are met before and during labor (PUBMED:25226736). However, it is important to note that the safety of induction of labor for breech presentation has not been fully established, although one study found no significant increase in neonatal mortality or severe neonatal morbidity compared with planned cesarean delivery (PUBMED:30723903). In rural settings, where resources may be limited, cesarean section as the preferred mode of delivery for term breech presentations may not be a realistic option, and closer monitoring of high-risk pregnancies by senior staff could be more effective (PUBMED:18302865). In summary, vaginal breech delivery can still be considered a safe option when there is careful selection of candidates, active management of delivery, and the presence of experienced obstetricians.
Instruction: Is delayed gastric emptying so terrible after pylorus-preserving pancreaticoduodenectomy? Abstracts: abstract_id: PUBMED:26011210 Our contrivances to diminish complications after pylorus-preserving pancreaticoduodenectomy. The objective of this study is to diminish postoperative complications after pylorus-preserving pancreaticoduodenectomy. Pylorus-preserving pancreaticoduodenectomy is still associated with major complications, especially leakage at pancreatojejunostomy and delayed gastric emptying. Traditional pylorus-preserving pancreaticoduodenectomy was performed in group A, while the novel procedure, an antecolic vertical duodenojejunostomy and internal pancreatic drainage with omental wrapping, was performed in group B (n = 40 each). We compared the following characteristics between the 2 groups: operation time, blood loss, time required before removal of nasogastric tube and resumption of food intake, length of hospital stay, and postoperative complications. The novel procedure required less time and was associated with less blood loss (both P < 0.0001). In the comparison of the 2 groups, group B showed less time for removal of nasogastric tubes and resumption of food intake, shorter hospital stays, and fewer postoperative complications (all P < 0.0001). The novel procedure appears to be a safe and effective alternative to traditional pancreaticoduodenectomy techniques. abstract_id: PUBMED:21861144 Pancreaticoduodenectomy versus pylorus-preserving pancreaticoduodenectomy: the clinical impact of a new surgical procedure; pylorus-resecting pancreaticoduodenectomy. Pylorus-preserving pancreaticoduodenectomy (PpPD) has been performed increasingly for periampullary tumors as a modification of conventional pancreaticoduodenectomy (PD) with antrectomy. Five randomized controlled trials (RCTs) and two meta-analyses have been performed to compare PD with PpPD. The results of these trials have shown that the two procedures were equally effective concerning morbidity, mortality, quality of life (QOL), and survival, although the length of surgery and blood loss were significantly lower for PpPD than for PD in one RCT and in the two meta-analyses. Delayed gastric emptying (DGE) is the major postoperative complication after PpPD. One of the pathogeneses of DGE after PpPD is thought to be denervation or devascularization around the pyloric ring. Therefore, one RCT was performed to compare PpPD with pylorus-resecting pancreaticoduodenectomy (PrPD; a new PD surgical procedure that resects only the pyloric ring and preserves nearly all of the stomach), concerning the incidence of DGE. The results clarified that the incidence of DGE was 4.5% after PrPD and 17.2% after PpPD, which was a significant difference. Several RCTs of surgical or postoperative management techniques have been performed to reduce the incidence of DGE. One RCT for surgical techniques clarified that the antecolic route for duodenojejunostomy significantly reduced the incidence of DGE compared with the retrocolic route. Two RCTs examining postoperative management showed that the administration of erythromycin after PpPD reduced the incidence of DGE. abstract_id: PUBMED:29725396 Pylorus-preserving pancreaticoduodenectomy versus standard pancreaticoduodenectomy in the treatment of duodenal papilla carcinoma. It is not known whether pylorus-preserving pancreaticoduodenectomy (PPPD) is as effective as the standard pancreaticoduodenectomy (SPD) in the treatment of duodenal papilla carcinoma (DPC).
A retrospective cohort trial was undertaken to compare the results of these two procedures. Clinical data, histological findings, short-term results, survival and quality of life of all patients who had undergone surgery for primary DPC between January 2003 and February 2010 were analyzed. According to the inclusion criteria and the surgical methods, 116 patients were divided into the PPPD group (n=43) and the SPD group (n=73). There were no significant differences in various indices, including surgery duration, extent of intraoperative hemorrhage and postoperative pathological indexes. The incidence of postoperative complications, including pancreatic fistula and delayed gastric emptying, was also similar between the two groups (20.9 vs. 21.9%; P=0.900 and 11.6 vs. 5.4%; P=0.402). Long-term survival and quality of life were identical following a median follow-up of 45.6 months (range, 4-144 months). Within 6 months, there was a decreased loss of appetite following the pylorus-preserving procedure (26.9 vs. 49.3; P=0.003). The procedures were equally effective for the treatment of DPC. PPPD offers minor advantages in the early postoperative period but not in the long term. abstract_id: PUBMED:26792273 Delayed gastric emptying after pylorus preserving pancreaticoduodenectomy--does gastrointestinal reconstruction technique matter? Background: The best gastrointestinal reconstruction route after pylorus preserving pancreaticoduodenectomy remains debatable. We aimed to evaluate the incidence of delayed gastric emptying (DGE) after antecolic (AC) and retrocolic (RC) duodenojejunostomy in these patients. Data Sources: Studies comparing AC to RC reconstruction after pylorus preserving pancreaticoduodenectomy were identified from literature databases (PubMed, MEDLINE, EMBASE, SCOPUS, and Cochrane). The meta-analysis included 10 studies with a total of 1,067 patients, where 504 patients underwent AC and 563 patients underwent RC reconstruction. The incidence of DGE was significantly lower with AC reconstruction in both randomized controlled trials (risk ratio = .44, confidence interval = .24 to .77, P = .005) and retrospective studies (risk ratio .21, confidence interval .14 to .30, P < .001) with less output and days of nasogastric tube use. AC reconstruction was associated with a decreased length of stay. There was no difference in operative time, blood loss, pancreatic fistula, and abdominal abscess/collections. Conclusions: AC reconstruction seems to be associated with less DGE, with no association with pancreatic fistula or abscess formation. abstract_id: PUBMED:37697996 From Classic Whipple to Pylorus Preserving Pancreaticoduodenectomy and Ultimately to Pylorus Resecting - Stomach Preserving Pancreaticoduodenectomy: A Review. Pancreaticoduodenectomy is the procedure of choice for benign or malignant tumors of the periampullary region. The preservation of the pylorus has been established as the most widely utilized approach during pancreaticoduodenectomy among the majority of centers worldwide specialized in the surgical treatment of pancreatic cancer. The factors that influenced this predilection are the shorter operation times, the less intraoperative blood loss, the decreased technical difficulty, and the quite similar short- and long-term outcomes compared to the classic Whipple. However, there is a notable trend in the literature highlighting the increased incidence of delayed gastric emptying following pylorus preserving pancreaticoduodenectomy.
Among other factors, pylorus dysfunction attributable to the surgical maneuvers has been implicated in the etiology of this complication. In an attempt to overcome this limitation of the pylorus preserving pancreaticoduodenectomy, pylorus resecting pancreaticoduodenectomy with the preservation of the stomach was proposed. In theory, pylorus resecting pancreaticoduodenectomy could maintain the advantages of organ sparing surgery, but at the same time guarantee a more seamless gastric emptying. Only three RCTs, to date, aimed to evaluate the approach with only one reporting results in favor of the pylorus resecting pancreaticoduodenectomy in regard to the incidence of delayed gastric emptying. Further well-designed prospective randomized studies are needed for an accurate assessment of the true role of each of these surgical alternatives on the treatment of pancreatic cancer. abstract_id: PUBMED:17153459 Pylorotomy in pylorus-preserving pancreaticoduodenectomy. Background/aims: The incidence of delayed gastric emptying after pylorus-preserving pancreaticoduodenectomy has been reported to be 30% to 70%. Methodology: Between January 1996 and December 2002, 43 patients underwent pylorus-preserving pancreaticoduodenectomy, involving pylorotomy, in the First Department of Surgery, Kinki University School of Medicine. The first step in pylorotomy is to cut the duodenal stump obliquely. The next is incision of the pyloric sphincter along its inferior aspect. The incidences of postoperative complications and changes in body weight were collated retrospectively. Results: Delayed gastric emptying was observed in 4 patients (9.3%). However, this complication did not last more than 1 month in any patients. Two patients (4.7%) developed reflux esophagitis 1 month after surgery, but this complication had resolved by 6 months. Weight gain was noted beginning 3 months after surgery. Conclusions: Pylorus-preserving pancreaticoduodenectomy involving pylorotomy may reduce the incidence of delayed gastric emptying and preserve the long-term quality of life more than similar procedures. abstract_id: PUBMED:19088935 Antecolic gastrointestinal reconstruction with pylorus dilatation. Does it improve delayed gastric emptying after pylorus-preserving pancreaticoduodenectomy? Objective: The aim of our study focuses upon prevention of delayed gastric emptying (DGE) after pancreaticoduodenectomy using an alternative reconstruction procedure. Method: Forty consecutive patients underwent a typical pylorus-preserving pancreaticoduodenectomy (PPPD) with antecolic reconstruction in a two-year period (January 2002 until January 2004), while a similar group of 40 consecutive patients underwent PPPD with application of pyloric dilatation between January 2004 and January 2006. Early and late complications were compared between the two groups. Results: DGE occurred significantly more often in the group of patients treated by the classical PPPD technique (nine patients -22%) compared with those operated on with the addition of pyloric dilatation technique (two patients -5%) (p<0.05). The incidence of other complications did not differ significantly between the two groups. Conclusions: The application of dilatation may decrease the incidence of DGE after PPPD and facilitates earlier hospital discharge. abstract_id: PUBMED:15490027 Delayed gastric emptying after pylorus-preserving pancreaticoduodenectomy.
Background And Purpose: Pylorus-preserving pancreaticoduodenectomy is associated with a high incidence of delayed gastric emptying. The aim of this study was to determine the incidence and risk factors for delayed gastric emptying after this procedure. Methods: This retrospective study included 63 consecutive patients who received pylorus-preserving pancreaticoduodenectomy from July 1993 to December 2002. The patients were divided into 2 groups based on the presence of delayed gastric emptying. Preoperative indices, postoperative morbidity, nasogastric intubation, and hospital stay were compared. The risk factors for delayed gastric emptying were analyzed. Results: The postoperative incidence of delayed gastric emptying was 44% (28/63 patients). Multivariate analysis revealed that no preoperative biliary drainage, no cholestatic change in the liver and blood loss > 400 mL were significant risk factors for delayed gastric emptying. Conclusions: Pylorus-preserving pancreaticoduodenectomy is a safe procedure with a high incidence of delayed gastric emptying. Delicate surgical dissection to decrease blood loss and the extent of perigastric inflammation may be the key factor to prevent delayed gastric emptying. abstract_id: PUBMED:33057756 Intraoperative endoluminal pyloromyotomy as a novel approach to reduce delayed gastric emptying after pylorus-preserving pancreaticoduodenectomy-a retrospective study. Background: Delayed gastric emptying (DGE) is one of the most common complications after pylorus-preserving partial pancreaticoduodenectomy (ppPD). The aim of this retrospective study was to assess whether an intraoperative pyloromyotomy during ppPD prior to the creation of duodenojejunostomy reduces DGE. Methods: Patients who underwent pylorus-preserving pancreaticoduodenectomy between January 2015 and December 2017 were divided into two groups on the basis of whether an intraoperative pyloromyotomy was performed (pyloromyotomy (PM) group) or not (no pyloromyotomy (NP) group). The primary endpoint was DGE according to the ISGPS definition. The confirmatory analysis of the primary endpoint was performed with multivariate analysis. Results: One hundred and ten patients were included in the statistical analysis. Pyloromyotomy was performed in 44 of 110 (40%) cases. DGE of any grade was present in 62 patients (56.4%). The DGE rate was lower in the PM group (40.9%) compared with the NP group (66.7%), and pyloromyotomy was associated with a reduced risk for DGE in univariate (OR 0.35, 95% CI 0.16-0.76; P = 0.008) and multivariate analyses (OR 0.32, 95% CI 0.13-0.77; P = 0.011). The presence of an intra-abdominal complication was an independent risk factor for DGE in the multivariate analysis (OR 5.54, 95% CI 2.00-15.36; P = 0.001). Conclusion: Intraoperative endoluminal pyloromyotomy during ppPD was associated with a reduced risk for DGE in this retrospective study. Pyloromyotomy should be considered a simple technique that can potentially reduce DGE rates after ppPD. abstract_id: PUBMED:26034372 Meta-analysis of subtotal stomach-preserving pancreaticoduodenectomy vs pylorus preserving pancreaticoduodenectomy. Aim: To investigate the differences in outcome following pylorus preserving pancreaticoduodenectomy (PPPD) and subtotal stomach-preserving pancreaticoduodenectomy (SSPPD).
Methods: Major databases including PubMed (Medline), EMBASE and Science Citation Index Expanded and the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library were searched for comparative studies between patients with PPPD and SSPPD published between January 1978 and July 2014. Studies were selected based on specific inclusion and exclusion criteria. The primary outcome was delayed gastric emptying (DGE). Secondary outcomes included operation time, intraoperative blood loss, pancreatic fistula, postoperative hemorrhage, intraabdominal abscess, wound infection, time to starting liquid diet, time to starting solid diet, period of nasogastric intubation, reinsertion of nasogastric tube, mortality and hospital stay. The pooled odds ratios (OR) or weighted mean difference (WMD) with 95% confidence intervals (95%CI) were calculated using either a fixed-effects or random-effects model. Results: Eight comparative studies recruiting 650 patients were analyzed, which include two RCTs, one non-randomized prospective and 5 retrospective trial designs. Patients undergoing SSPPD experienced significantly lower rates of DGE (OR = 2.75; 95%CI: 1.75-4.30, P < 0.00001) and a shorter period of nasogastric intubation (OR = 2.68; 95%CI: 0.77-4.58, P < 0.00001), with a tendency towards shorter time to liquid (WMD = 2.97, 95%CI: -0.46-7.83; P = 0.09) and solid diets (WMD = 3.69, 95%CI: -0.46-7.83; P = 0.08) as well as shorter inpatient stay (WMD = 3.92, 95%CI: -0.37-8.22; P = 0.07), although these latter three did not reach statistical significance. PPPD, however, was associated with less intraoperative blood loss than SSPPD [WMD = -217.70, 95%CI: -429.77-(-5.63); P = 0.04]. There were no differences in other parameters between the two approaches, including operative time (WMD = -5.30, 95%CI: -43.44-32.84; P = 0.79), pancreatic fistula (OR = 0.91; 95%CI: 0.56-1.49; P = 0.70), postoperative hemorrhage (OR = 0.51; 95%CI: 0.15-1.74; P = 0.29), intraabdominal abscess (OR = 1.05; 95%CI: 0.54-2.05; P = 0.89), wound infection (OR = 0.88; 95%CI: 0.39-1.97; P = 0.75), reinsertion of nasogastric tube (OR = 1.90; 95%CI: 0.91-3.97; P = 0.09) and mortality (OR = 0.31; 95%CI: 0.05-2.01; P = 0.22). Conclusion: SSPPD may improve intraoperative and short-term postoperative outcomes compared to PPPD, especially DGE. However, these findings need to be further ascertained by well-designed randomized controlled trials. Answer: Delayed gastric emptying (DGE) is a significant postoperative complication associated with pylorus-preserving pancreaticoduodenectomy (PPPD). The incidence of DGE after PPPD has been reported to be quite variable, ranging from 4.5% to as high as 70% in different studies (PUBMED:21861144, PUBMED:17153459). DGE can impact the patient's recovery, prolong hospital stays, and delay the resumption of normal diet and activities (PUBMED:26792273, PUBMED:15490027). Several studies have explored methods to reduce the incidence of DGE. For instance, a novel procedure involving an antecolic vertical duodenojejunostomy and internal pancreatic drainage with omental wrapping has shown promising results in reducing the time for removal of nasogastric tubes, resumption of food intake, and overall postoperative complications, including DGE (PUBMED:26011210). Similarly, antecolic (AC) reconstruction after PPPD has been associated with a significantly lower incidence of DGE compared to retrocolic (RC) reconstruction (PUBMED:26792273).
Other surgical modifications, such as pylorus-resecting pancreaticoduodenectomy (PrPD), which resects only the pyloric ring and preserves most of the stomach, have also been shown to significantly reduce the incidence of DGE compared to traditional PPPD (PUBMED:21861144, PUBMED:37697996). Additionally, intraoperative techniques like pyloromyotomy and pyloric dilatation have been reported to decrease the incidence of DGE and facilitate earlier hospital discharge (PUBMED:17153459, PUBMED:19088935, PUBMED:33057756). Despite these complications, PPPD is still considered an effective treatment for certain conditions like duodenal papilla carcinoma, with similar long-term survival and quality of life outcomes compared to standard pancreaticoduodenectomy (SPD) (PUBMED:29725396). Moreover, a meta-analysis has indicated that subtotal stomach-preserving pancreaticoduodenectomy (SSPPD) may improve intraoperative and short-term postoperative outcomes, particularly DGE, compared to PPPD (PUBMED:26034372). In conclusion, while DGE is a notable complication of PPPD, it is not necessarily "terrible" as various surgical techniques and postoperative management strategies can significantly reduce its incidence and impact on patients' recovery. However, the severity of DGE and its implications on patient outcomes warrant careful consideration and management in the postoperative care of patients undergoing PPPD.
Instruction: Is cervical length associated with maternal characteristics? Abstracts: abstract_id: PUBMED:25770842 Is cervical length associated with maternal characteristics? Objective: Women with a mid-trimester short cervical length (CL) are at increased risk for preterm delivery. Consequently, CL measurement is a potential screening tool to identify women at risk for preterm birth. Our objective was to assess possible associations between CL and maternal characteristics. Study Design: A nationwide screening study was performed in which CL was measured during the standard anomaly scan among low risk women with a singleton pregnancy. Data on maternal height, pre-pregnancy weight, ethnicity, parity and gestational age at the time of the CL measurement were collected from January 2010 to December 2012. Univariable and multivariable linear regression analyses were performed to assess the relationship between CL and maternal characteristics. Results: We included 5092 women. The mean CL was 44.3mm. No association was found between CL and maternal height or gestational age of the measurement. Maternal weight was associated with CL (p=0.007, adjusted R² 0.03). Separate analysis for BMI did not change these results. Ethnicity, known in 2702 out of 5092 women, was associated with CL (mean CL in Caucasian women 45.0mm, Asian 43.9mm, Mediterranean 43.1mm, and African 41.8mm, p=0.003), as well as parity (mean CL multiparous 45.3mm, nulliparous 43.5mm, p<0.0001). Conclusion: Shorter mid-trimester cervical length is associated with higher maternal weight, younger maternal age, nulliparity and non-Caucasian ethnicity, but not with maternal height. abstract_id: PUBMED:28145653 Maternal Characteristics, Short Mid-Trimester Cervical Length, and Preterm Delivery. We aimed to determine the maternal characteristics (demographics, an obstetric history, and prior cervical excisional procedure) associated with a short mid-trimester cervical length (CL, defined as a CL of ≤ 25 mm) and whether having a short cervix explains the association between these maternal characteristics and spontaneous preterm delivery (SPTD, defined as a delivery before 34 weeks). This is a single-center retrospective cohort study of 3,296 consecutive women with a singleton pregnancy who underwent routine CL measurement between 20 and 24 weeks. Data were collected on maternal age, weight, height, parity, obstetric history (nulliparity; a history of at least 1 SPTD; and at least 1 term birth and no preterm birth [low-risk history group]), and prior cervical excisional procedure. In the multivariate regression analysis, an obstetric history, prior cervical excisional procedure, and gestational age at measurement were the variables significantly associated with short CL. In contrast, maternal weight, height, age, and parity were not significantly associated with short CL. By using the likelihood of SPTD as an outcome variable, logistic regression indicated that short CL and obstetric history, but not prior cervical excisional procedure, were significantly associated with SPTD after adjustment for potential confounders. A history of SPTD and prior cervical excisional procedure were associated with an increased risk of a short mid-trimester CL. A history of SPTD, but not prior cervical excisional procedure, is associated with an increased risk of SPTD, independent of a short CL. abstract_id: PUBMED:37465429 Could Maternal Vitamin D Concentration Affect Cervical Length in Mid-Pregnancy? A Report From an Iranian Tertiary Center.
Objective: This descriptive-analytic cross-sectional study aimed to determine whether maternal serum vitamin D concentration correlates with cervical length measurement in mid-gestation. Materials And Methods: During Jun-Jan 2021, 213 pregnant women at 18-22 weeks of gestation were investigated. First, demographic features were obtained; then, maternal serum vitamin D concentration was measured by enzyme-linked immunoassay and cervical length was measured via transvaginal sonography according to the fetal maternal foundation guideline, and an appropriate statistical test was used to analyze the correlation between maternal vitamin D level and cervical length in mid-pregnancy. Results: It was shown that 29.6% and 25.4% of participants had vitamin D deficiency and insufficiency, respectively. Spearman's test found no significant correlation between maternal vitamin D level and cervical length in mid-pregnancy. Moreover, cervical length and maternal vitamin D level had no association with maternal BMI. Conclusion: Although maternal vitamin D level and its sufficiency status were not associated with cervical length in mid-pregnancy, a trend toward decreased maternal serum vitamin D level with advancing gestational age was observed. It may be concluded that it is prolonged vitamin D deficiency during gestation that may lead to cervical length shortening and subsequent preterm delivery later in gestation. abstract_id: PUBMED:33090968 The influence of race on cervical length in pregnant women in Brazil. Objectives: Short cervical length is a predictor of preterm birth. We evaluated if there were racial differences in variables associated with cervical length in pregnant Brazilian women. Methods: Cervical length was determined by vaginal ultrasound in 414 women at 21 weeks gestation. All women were seen at the same clinic and analyzed by the same investigators. Women found to have a short cervix (≤25 mm) received vaginal progesterone throughout gestation. Composition of the vaginal microbiome was determined by analysis of the V1-V3 region of the gene coding for bacterial 16S ribosomal RNA. Demographic, clinical and outcome variables were determined by chart review. Subjects were 53.4% White, 37.2% mixed race and 9.4% Black. Results: Pregnancy, medical history and education level were similar in all groups. Mean cervical length was shorter in Black women (28.4 mm) than in White (32.4 mm) or mixed race (32.8 mm) women (p≤0.016) as was the percentage of women with a short cervix (23.1, 12.2, 7.8% in Black, White, mixed race respectively) (p≤0.026). Mean cervical length increased with maternal age in White (p=0.001) and mixed race (p=0.045) women but not Black women. There were no differences in bacterial dominance in the vaginal microbiota between groups. Most women with a short cervix delivered at term. Conclusions: We conclude that Black women in Brazil have a shorter cervical length than White or mixed race women independent of maternal age, pregnancy and demographic history or composition of the vaginal microbiome. abstract_id: PUBMED:20387189 Is short stature associated with short cervical length? We sought to estimate if there is a correlation between maternal height and cervical length in women at high risk for preterm birth. We studied a retrospective cohort of women with singleton gestation and risk factors for preterm birth. Maternal height was categorized as short (<157.5 cm) or not short stature (≥157.5 cm). Cervical length at 14 to 24 weeks was evaluated.
Primary outcomes were incidence of initial cervical length <30 mm and incidence of shortest cervical length <25 mm. Four hundred sixteen women met the inclusion criteria. Twenty-two (22.6%) of the short women and 79 (24.5%) of the nonshort women had an initial cervical length <30 mm (P = 0.81). Twenty-two (23.7%) of the short women and 104 (32.2%) of the nonshort women had a cervical length <25 mm for shortest cervical length measurement (P = 0.15). In women with singleton gestation and risk factors for preterm birth, no statistically significant relationship exists between maternal height and initial or shortest cervical length. abstract_id: PUBMED:27704492 Association among Maternal Obesity, Cervical Length, and Preterm Birth. Objective The objective of this study was to determine if mid-trimester cervical length is associated with the inverse relationship between maternal body mass index (BMI) at delivery and spontaneous preterm birth (SPTB). Materials and Methods This was a retrospective cohort of women with a singleton pregnancy without prior SPTB who underwent routine transvaginal cervical length assessment between 18 and 24 weeks. Women were categorized into four BMI groups: (1) 18.5 to 24.9, (2) 25 to 29.9, (3) 30 to 34.9, and (4) ≥ 35 kg/m2. Univariable and multivariable analyses were conducted to determine whether BMI group was associated with SPTB at < 37, 34, or 32 weeks independent of the cervical length. Results Of the 18,100 women in this analysis, 43.5% had a BMI ≥ 30. In univariable analysis, increasing BMI group was associated with longer cervical length but not with cervical length < 10th percentile. SPTB at < 37, 35, and 32 weeks was less common among women with higher BMI. In multivariable regression, a higher BMI group was associated with a lower frequency of SPTB at 37 weeks (adjusted odds ratios [aORs] of 0.64, 0.68, and 0.51), at 34 weeks (aORs of 0.53, 0.54, and 0.31) and at 32 weeks (aORs of 0.47, 0.60, and 0.27) for BMI groups 2 to 4, respectively. This association persisted even when cervical length was entered into the model as a covariate. Conclusion Women with a higher BMI group had longer mid-trimester cervical length, and correspondingly reduced SPTB. However, the decreased risk of SPTB was not associated with cervical length. The reason for the potential protective effect from prematurity is unknown and its mechanisms require further investigation. abstract_id: PUBMED:33966736 The degree of cervical length shortening as a predictor of successful or failed labor induction. Objective: To evaluate whether the degree of cervical length change was associated with successful cervical dilatation during labor induction. Materials And Methods: We conducted a secondary analysis of a prospective observational study of term singleton pregnant women who underwent labor inductions. Cases of Cesarean section due to fetal distress or maternal request during the first stage of labor were excluded. The enrolled women were categorized into two groups according to achievement of full cervical dilatation. The cervical length near induction and cervical length shortening over the last four weeks of pregnancy were compared between the two groups. A receiver operating characteristics (ROC) analysis was performed to evaluate the screening performance for failed cervical dilatation during labor induction.
Results: A total of 165 women were enrolled for the final analysis; of these, 145 (87.9%) women reached the second stage of labor and 20 (12.1%) women failed to achieve full cervical dilatation. Women who failed to achieve full cervical dilatation had a significantly longer cervical length near induction and less cervical length change over the previous four weeks compared with women who achieved full cervical dilatation (P = 0.018 and 0.005, respectively). Multivariate analysis showed that cervical length >29 mm (odds ratios [OR], 4.15; 95% confidence interval [CI], 1.290-13.374, P = 0.017) and cervical length shortening ≦ 6 mm (OR, 5.87; 95% CI, 1.552-22.271, P = 0.009) were significantly associated with failed cervical dilatation after adjusting for birthweight and previous history of vaginal delivery. Cervical length shortening alone provided a better prediction of failed cervical dilatation than the combination of cervical length and shortening (sensitivity, 76.9%; specificity, 63.8%). Conclusion: The probability of failed cervical dilatation during labor induction was significantly increased in cases when the cervical length was greater than 29 mm near induction or when the cervical length shortening was less than 6 mm over the last four weeks. abstract_id: PUBMED:31102541 Cervical modifications after pessary placement in singleton pregnancies with maternal short cervical length: 2D and 3D ultrasound evaluation. Introduction: The use of a pessary proved to prevent preterm birth in asymptomatic women with mid-trimester short cervical length (CL); however, the precise mechanisms by which the pessary confers its benefit remain unclear. The aim of this study was to evaluate multiple cervical characteristics assessed by 2-dimensional and 3-dimensional ultrasound before and after placement of a cervical pessary to ascertain its mechanism of action. Material And Methods: In this prospective cohort study, we assessed the cervical characteristics in singleton pregnancies with maternal short CL and compared them with matched reference women with normal CL. The variables evaluated were: CL, uterocervical angles, cervical consistency indices (cervical consistency index and CL consistency index), cervical volume and vascular indices. All variables were re-assessed immediately after pessary placement and 4-6 weeks later in all participants. Mann-Whitney U test was used to assess differences between groups and paired samples t test for comparisons in two different examinations in the same women. The aim of this study was to evaluate multiple cervical ultrasound variables before and after the placement of a cervical pessary and compare the evolution of these variables with a reference group with normal CL to better understand the device's mechanism of action. Results: Thirty-three women with short CL and 24 reference women with normal CL were enrolled. At the time of enrollment, gestational age and maternal baseline characteristics did not differ between groups. Immediately after pessary placement, CL increased, uterocervical angles were narrower and cervical consistency increased significantly. When the magnitude of change in cervical variables was compared over time between the reference group and the study group, median CL had increased in the study group (1.47 mm) but it had shortened in the reference group (-2.56 mm). These inverse trends were statistically significant (P = 0.006).
Conclusions: Cervical pessary reduces both uterocervical angles and corrects cervical angulation by pushing the cervix up toward the uterus. Maintaining the cervix aligned to the uterine axis leads to reduced cervical tissue stretch, so avoiding further cervical shortening. All these changes were present after pessary placement; however, the clinical implications of these findings remain unknown. abstract_id: PUBMED:33011508 Magnetic resonance imaging evaluation of cervical length by the women's age: a retrospective cohort study. Objective: The anthropometric characteristics of the uterus evolve with pubertal development in girls. It is therefore reasonable to ask until what age these anthropometric characteristics change, in order to know whether the cervical length criterion defined for threatened preterm delivery is applicable to all ages. The main objective of our study was to analyze the evolution of cervical length with the women's age outside pregnancy to overcome factors related to pregnancy that can affect cervical length. Material And Methods: This was a retrospective descriptive study over a period of 1 year, from March 2017 to March 2018. The cervical length measurements were performed by Magnetic Resonance Imaging. The cervical length was defined by sagittal T2-weighted magnetic resonance imaging (MRI) as the distance on a straight line between the external cervical os (at the point of divergence of the anterior and posterior lips) and the internal cervical os identified by an intersection between the line of the hypersignal of the glandular epithelium and a line passing through the isthmus. Results: A total of 209 patients were included. The cervical length ranged from 25.2 mm on average in children under 16 years (23.6-27.1 mm) to 39.7 mm between 36 and 40 years (27.9-58.9 mm). There was a linear association between age and cervical length, irrespective of maternal anthropometric data (Pearson's coefficient ρ = 0.43, 95% CI 0.32-0.54; p < 0.01). In multivariate analysis, the only factors associated with cervical length were women's age (p < 0.01) and prior delivery (p < 0.01). Conclusion: Cervical length is correlated with women's age and prior delivery. abstract_id: PUBMED:21511388 Can maternal height predict shorter cervical length in asymptomatic low-risk pregnant women? Objective: To evaluate the association between maternal height and cervical length in the first and second trimesters in low-risk asymptomatic pregnant women. Study Design: Maternal height and cervical length of 146 asymptomatic women with singleton pregnancies at low risk for preterm birth were measured during the first and second trimesters. Preterm birth was defined as birth before 37 gestational weeks. Correlations between maternal height and cervical length measurements were determined using Pearson correlation analysis. The women were also divided into three groups based on height percentiles: <25% (Group I), 25-75% (Group II) and >75% (Group III). Cervical lengths were compared among groups. Correlations between cervical length and maternal height and statistically significant differences in cervical length among height percentile groups were the main outcomes. Results: Maternal height was positively but weakly correlated with first and second trimester cervical lengths (p=0.047, r=0.167 and p=0.039, r=0.197 respectively). The mean first trimester cervical lengths were significantly different between the groups (p=0.04).
There were no significant differences, however, in the mean second trimester cervical lengths among the three groups although the difference was close to significance (p=0.06). Conclusion: Although our results indicate a relationship between maternal height and cervical length in our population, maternal height seems to have only limited value in identifying women to be screened for shorter cervical length in a low risk asymptomatic population. Answer: Yes, cervical length (CL) is associated with various maternal characteristics. Studies have found that shorter mid-trimester cervical length is associated with higher maternal weight, younger maternal age, nulliparity (having no previous births), and non-Caucasian ethnicity (PUBMED:25770842). However, maternal height does not appear to be significantly associated with cervical length (PUBMED:20387189; PUBMED:21511388). Other factors that have been found to be associated with short cervical length include obstetric history, such as a history of spontaneous preterm delivery (SPTD), and prior cervical excisional procedures (PUBMED:28145653). Additionally, Black women in Brazil were found to have a shorter cervical length than White or mixed-race women, independent of maternal age, pregnancy, and demographic history or composition of the vaginal microbiome (PUBMED:33090968). Maternal vitamin D concentration, however, was not found to be associated with cervical length in mid-pregnancy (PUBMED:37465429). Similarly, maternal obesity was associated with longer cervical length, and correspondingly reduced SPTD, but the decreased risk of SPTD was not associated with cervical length (PUBMED:27704492). Furthermore, cervical length was found to increase with maternal age in White and mixed-race women but not in Black women (PUBMED:33090968), and cervical length is correlated with women's age and prior delivery (PUBMED:33011508). In summary, cervical length is associated with several maternal characteristics, including weight, age, parity, ethnicity, obstetric history, and prior cervical procedures, but not consistently with maternal height or vitamin D concentration.
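Editor's note: PUBMED:25770842 above assesses cervical length against maternal characteristics with univariable and multivariable linear regression. Below is a minimal sketch of that kind of model; the CSV file and column names are assumptions for illustration and do not come from the study.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cervical_length.csv")  # hypothetical dataset with one row per woman
model = smf.ols(
    "cl_mm ~ weight_kg + height_cm + C(ethnicity) + C(parity) + ga_weeks",
    data=df,
).fit()
print(model.summary())  # coefficients, p-values and adjusted R-squared, the quantities such abstracts report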
Instruction: Does variant hepatic artery anatomy in a liver transplant recipient increase the risk of hepatic artery complications after transplantation? Abstracts: abstract_id: PUBMED:15547194 Does variant hepatic artery anatomy in a liver transplant recipient increase the risk of hepatic artery complications after transplantation? Objective: Our aim was to determine whether variant hepatic artery anatomy in a liver transplant recipient increases the risk of hepatic artery complications after liver transplantation. Materials And Methods: The study group consisted of 84 patients who underwent gadolinium-enhanced 3D MR angiography before orthotopic liver transplantation in which a branch patch arterial anastomosis at the gastroduodenal takeoff was used. MR angiography studies were retrospectively reviewed and assessed for the presence and type of variant hepatic artery anatomy. The diameter of the distal common hepatic artery was measured. The incidence of posttransplantation hepatic artery stenosis or thrombosis was assessed. Results: Seven (8.3%) of the 84 patients developed hepatic artery complications after transplantation. Of the 24 patients with variant hepatic artery anatomy, five (20.8%) had posttransplantation hepatic artery complications. In contrast, only two (3.3%) of the 60 patients with classic hepatic artery anatomy had complications. The higher complication rate in patients with variant hepatic artery anatomy was statistically significant (p < 0.05). The odds ratio was 7.6 (95% confidence interval, 1.4-42.6). The diameter of the distal common hepatic artery was smaller in patients with variant hepatic artery anatomy compared with those with classic hepatic artery anatomy (range, 4.3-7.1 mm [mean, 5.8 mm] vs 4.0-8.9 mm [mean 6.3 mm], p < 0.05), and it was also smaller in patients who had posttransplantation hepatic artery complications compared with those who had no complications (range, 4.2-6.3 mm [mean, 5.2 mm] vs 4.0-8.9 mm [mean, 6.2 mm], p < 0.01). Conclusion: Variant hepatic artery anatomy in a liver transplant recipient increased the risk of hepatic artery complications after transplantation. The smaller caliber of the native common hepatic artery may contribute to the higher risk. abstract_id: PUBMED:33503116 RISK FACTORS ASSOCIATED WITH HEPATIC ARTERY THROMBOSIS: ANALYSIS OF 1050 LIVER TRANSPLANTS. Background: Hepatic artery thrombosis is an important cause of graft loss and ischemic biliary complications. The risk factors have been related to technical aspects of arterial anastomosis and non-surgical ones. Aim: To evaluate the risk factors for the development of hepatic artery thrombosis. Methods: The sample consisted of 1050 cases of liver transplant. A retrospective and cross-sectional study was carried out, and the variables were studied in both donor and recipient. Results: Univariate analysis indicated that the variables related to hepatic artery thrombosis are: MELD (p=0.04) and warm ischemia time (p=0.005). In the multivariate analysis MELD=14.5 and warm ischemia time =35 min were independent risk factors for hepatic artery thrombosis. In the prevalence ratio test for analysis of the anastomosis as a variable, it was observed that patients with continuous suture had an increase in thrombosis when compared to interrupted suture. Conclusions: Prolonged warm ischemia time, calculated MELD and recipient age were independent risk factors for hepatic artery thrombosis after liver transplantation in adults.
Transplanted patients with continuous suture had an increase in thrombosis when compared to interrupted suture. Re-transplantation due to hepatic artery thrombosis was associated with higher recipient mortality. abstract_id: PUBMED:15498316 The diagnosis and treatment of hepatic artery complications in 107 cases of orthotopic liver transplantation Objective: To investigate the diagnosis and management of hepatic artery complications in orthotopic liver transplantation. Methods: The clinical data of 107 consecutive orthotopic liver transplantation patients were reviewed retrospectively to assess the risk factors and the diagnosis and treatment of the vascular complications. Results: The incidence of the artery related complications in orthotopic liver transplantation was intimately associated with the quality of the donor organ artery and the method of donor-recipient artery reconstruction. The main hepatic artery related complications were hepatic artery thrombosis and stenosis. The incidence of the vascular complications was 6.54%, and the mortality rate was 85.7%. Conclusions: The main factors influencing vascular complications were the quality of the donor organ artery and the method of donor-recipient artery reconstruction. The key steps in salvaging the graft and saving the patient's life were early diagnosis and treatment of these complications. abstract_id: PUBMED:11478014 The hepatic artery: a reminder of surgical anatomy. This study was carried out to document the anatomy of the hepatic artery with the purpose of reminding surgeons of the need for this essential knowledge in order to practice safe hepatobiliary surgery. Repeated surgical mistakes on patients referred to our unit prompted the study. One hundred and eighty consecutive livers procured for transplantation were studied, and the anatomy drawn immediately after dissection. The left hepatic artery arose from the left gastric artery in 15%, and either the splenic, gastroduodenal artery or the aorta in 4% of cases. The right hepatic artery arose from the superior mesenteric artery in 15%, the gastroduodenal, right gastric artery or aorta in 10% of cases. There was a major variation of the coeliac axis in 9% of cases studied. Overall, there was an abnormality in 43% of dissections: 48% were multiple and 27% had more than two vascular variations. A constant pattern of abnormalities occurred in the anatomy of the hepatic artery. Realisation of this vascular pattern should make identification of the anatomy easier. When there is one vascular variation, there is a high chance of there being multiple variations. abstract_id: PUBMED:38140931 Impact and Consequences of Recipient Gastroduodenal Artery Ligation Before Hepatic Artery Anastomosis During Orthotopic Liver Transplantation. Objectives: The recipient's gastroduodenal artery is often ligated before the hepatic artery anastomosis during orthotopic liver transplant, to gain either mobility or length of recipient's hepatic artery, potentially protecting the anastomosis by preventing "steal syndrome." In this study, our aim was to evaluate the consequences of gastroduodenal artery ligation and its effect on prevention of hepatic artery thrombosis. Materials And Methods: We retrospectively analyzed deceased-donor orthotopic liver transplant procedures (n = 210) performed at a single center between January 2016 and July 2021 to compare outcomes between recipients with (group 1) and recipients without (group 2) gastroduodenal artery ligation.
Group 1 included 78 patients (37%), in which the recipient's common hepatic artery was used for arterial anastomosis; group 2 included 132 patients (63%), in which the right hepatic artery or the proper hepatic artery was used for arterial anastomosis. Occurrences of hepatic artery thrombosis, postoperative hyperamylasemia, nausea and vomiting, and delayed feeding were compared between the groups. Results: There was no incidence of hepatic artery thrombosis reported in either group. In group 1, 31 patients (39.7%) were reported to have postoperative hyperamylasemia, ranging from 200 to 4700 U/L, accompanied by delayed feeding, whereas, in group 2, only 16 of 132 patients (12%) had postoperative hyperamylasemia, ranging from 200 to 1400 U/L (P < .01). Conclusions: Ligation of the recipient's gastroduodenal artery is not associated with decreased risk of hepatic artery thrombosis compared with nonligation. However, the procedure does have consequences in the form of possible postoperative hyperamylasemia, leading to delayed feeding probably due to decreased oral tolerance. abstract_id: PUBMED:35872182 Management of difficult hepatic artery reconstructions to reduce complications through continual technical refinements in living donor liver transplantations. Background: Hepatic artery reconstruction (HAR) for liver transplantation is crucial for successful outcomes. We evaluated transplantation outcome improvement through continual technical refinements. Materials And Methods: HAR was performed in 1448 living donor liver transplants by a single plastic surgeon from 2008 to 2020. Difficult HARs were defined as graft or recipient hepatic artery ≤2 mm, size discrepancy (≥2 to 1), multiple hepatic arteries, suboptimal quality, intimal dissection of graft or recipient hepatic artery (HA), and immediate redo during transplantation. Technique refinements include early vessel injury recognition, precise HA dissection, the use of clips to ligate branches, an oblique cut for all HARs, a modified funneling method for size discrepancy, liberal use of an alternative artery to replace a pathologic HA, and reconstruction of a second HA for grafts with dual hepatic arteries in the graft. Results: Difficult HARs were small HA (21.35%), size discrepancy (12.57%), multiple hepatic arteries (11.28%), suboptimal quality (31.1%), intimal dissection (20.5%), and immediate redo (5.18%). The overall hepatic artery thrombosis (HAT) rate was 3.04% in this series. The average HAT rate during the last 4 years (2017-2020) was 1.46% (6/408), which was significantly lower than the average HAT rate from 2008 to 2016 (39/1040, 3.8%) (p = 0.025). Treatment for posttransplant HAT included anastomosis after trim back (9), reconstruction using alternatives (19), and nonsurgical treatment with urokinase (9). Conclusion: Careful examination of the HA under surgical microscope and selection of the appropriate recipient HA are key to successful reconstruction. Through continual technical refinements, we can reduce HA complications to the lowest degree.
Aim: To report an infrequent cause of HAT after OLT and by itself a controversial clinical entity, the median arcuate ligament celiac artery compression. Case Report: A 59-year-old female with hepatitis C virus-induced cirrhosis, Child B, MELD 15, underwent cadaveric-donor OLT with complete vena cava exclusion. Type 1 hepatic artery anatomy was found both in the donor and the recipient; the gastroduodenal artery was ligated. During the first eight postoperative days, clinical and analytical evolution was satisfactory and Doppler ultrasound showed no abnormalities. On the ninth postoperative day, the patient developed hypovolemic shock due to bleeding at the hepatic artery anastomosis, and surgical reconstruction was performed. Postoperative color Doppler showed absent hepatic artery flow and an angiography suggested celiac artery compression. The patient was explored again the same day, liberating the celiac artery from the median arcuate ligament and performing thrombectomy and reconstruction of the hepatic artery anastomosis. The patient made a satisfactory recovery and color Doppler showed adequate flow in the hepatic artery. She is alive, free of biliary complications and enjoying a good quality of life 12 months after transplantation. Conclusion: Median arcuate ligament celiac artery compression is an infrequent anatomical variant that should be intentionally evaluated in the recipient at the time of arterial reconstruction in OLT and specifically be considered in early HAT to allow recognition and effective correction. abstract_id: PUBMED:19328978 Hepatic artery calcification in a liver transplant recipient. The abdominal aorta and the renal, mesenteric, and splenic arteries are frequently affected with arterial wall calcification upon increasing age; the hepatic artery is far less often found to be calcified. We report the case of a liver transplant recipient who presented with a calcified hepatic artery in the liver graft 13 years after transplantation for primary sclerosing cholangitis. Although the etiology of hepatic artery calcification was unknown, underlying causes for calcification may include chronic hemodialysis for renal insufficiency and subsequent secondary hyperparathyroidism, as well as calcified aneurysms. However, it remained unclear whether hepatic artery calcification had to be considered an epiphenomenon or an original pathology of the liver. It thus seems unlikely that hepatic artery calcification as a single finding is to be considered a serious pathologic entity, even for a liver graft. abstract_id: PUBMED:28239947 Analysis of early hepatic artery thrombosis after liver transplantation. Background: Hepatic artery thrombosis (HAT) is one of the deadliest complications after orthotopic liver transplantation (OLT). This study aimed to identify risk factors and summarize the experience of HAT management after OLT. Methods: Clinical data of 407 adult patients who received OLT in our centres from July 2011 to December 2015 were retrospectively investigated. Results: The incidence rate of early HAT was 2.0% (8/407). Recipient/donor weight ratio ≥1.15 (P = 0.02), presence of hepatic arterial reconstruction (P < 0.001) and post-operative blood transfusion (P = 0.001) were significantly associated with early HAT in the univariate analysis and the only independent risk factor (odds ratio = 28.49) in binary logistic regression was the presence of hepatic arterial reconstruction.
Among patients with early HAT, five received interventional revascularization while two received surgical revascularization and the remaining one received re-transplantation. Their liver grafts were functioning well with patent hepatic artery until their last follow-up, whereas one died from tumor recurrence at 31st month after transplantation. Conclusion: The presence of hepatic arterial reconstruction, recipient/donor weight ratio ≥1.15 and post-operative blood transfusion were the main risk factors associated with early HAT. Prompt recognition of these factors, strict surveillance protocols and selective anticoagulation for patients at risk need to be evaluated. For patients with early HAT occurring within a week after transplantation, surgical re-arterialization is preferential while interventional revascularization is recommended when it occurs beyond 7 days after transplantation. abstract_id: PUBMED:12826187 Hepatic artery thrombosis in pediatric liver transplantation. Purpose: Children have been reported to be at greater risk for hepatic artery thrombosis when compared to adults due to small arterial size, nonuse of intraoperative microscope, and postoperative hypercoagulable state. Methods: We evaluated arterial anastomosis type, intraoperative field magnification, and hepatic artery complications and how they were managed. All patients underwent ultrasound, anticoagulation consisted of 41 mg aspirin once a day, and 35 patients received alprostadil (PGE) for the first 7 days after transplantation. No patients were administered intravenous heparin following liver transplantation. Results: Of the 74 livers transplanted, 36 grafts (48.6%) were whole organ transplants and 38 grafts (51.4%) were partial livers. We observed HAT in 1 of 74 (1.35%) transplants in our pediatric liver transplant population. The only patient with HAT was a young girl with a history of biliary atresia. The occurrence of a hepatic artery thrombosis on day 7 was caused by the migration of an intimal plaque dissection within the artery graft. She was emergently taken back into the operating room for graft revision. This individual currently has a survival time of 426 days following her last transplant. Conclusions: Hepatic artery thrombosis may be minimized in pediatric liver transplantation without the use of microsurgery. Anticoagulation utilizing ASA and alprostadil is sufficient to avoid HAT. Accurate use of ultrasound is crucial to avoid this complication. Graft and patient salvage is possible with expedient surgical treatment; microsurgery, anticoagulant therapy, site of arterial inflow, and recipient size and weight. Answer: Yes, variant hepatic artery anatomy in a liver transplant recipient does increase the risk of hepatic artery complications after transplantation. A study that included 84 patients who underwent gadolinium-enhanced 3D MR angiography before orthotopic liver transplantation found that of the 24 patients with variant hepatic artery anatomy, 20.8% had posttransplantation hepatic artery complications, compared to only 3.3% of the 60 patients with classic hepatic artery anatomy. This difference was statistically significant, with an odds ratio of 7.6, indicating a higher complication rate in patients with variant hepatic artery anatomy (PUBMED:15547194). Additionally, the diameter of the distal common hepatic artery was smaller in patients with variant hepatic artery anatomy, which may contribute to the higher risk of complications (PUBMED:15547194).
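Editor's note: the 7.6 odds ratio in PUBMED:15547194 can be reproduced directly from the counts given in the abstract (5 of 24 variant-anatomy vs 2 of 60 classic-anatomy patients with complications). The short sketch below does that arithmetic; the confidence interval uses the standard Woolf (log) method, which is an assumption about the original calculation.

import math

a, b = 5, 24 - 5   # complications / no complications, variant anatomy
c, d = 2, 60 - 2   # complications / no complications, classic anatomy
or_ = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
low = math.exp(math.log(or_) - 1.96 * se)
high = math.exp(math.log(or_) + 1.96 * se)
print(round(or_, 1), round(low, 1), round(high, 1))  # ~7.6 (1.4-42.6), matching the abstract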
Instruction: Is polyhydramnios in an ultrasonographically normal fetus an indication for genetic evaluation? Abstracts: abstract_id: PUBMED:7503195 Is polyhydramnios in an ultrasonographically normal fetus an indication for genetic evaluation? Objective: Our purpose was to determine the frequency of fetal chromosomal anomalies in pregnancies complicated by polyhydramnios. Study Design: Between Jan. 1, 1992, and July 31, 1993, an amniotic fluid index was measured prospectively in 2730 third-trimester pregnant women. Polyhydramnios was defined as an amniotic fluid index ≥ 24 cm. A computer search identified all infants born with structural or chromosomal anomalies. Results: Polyhydramnios was detected in 49 of 2730 women (1.7%). The incidence of chromosomal anomalies was two in 49 (4.1%) compared with three in 2681 (0.12%) among women with normal fluid (p < 0.05). Six of the 49 newborns had structural anomalies (12.2%), whereas 48 of 2681 (1.8%) structural anomalies occurred in the control group (p < 0.05). Among study patients both fetuses with chromosomal anomalies were growth retarded; four of the six structural anomalies were associated with an amniotic fluid index > 30 cm. Conclusions: (1) Polyhydramnios is associated with an increased incidence of congenital fetal anomalies. (2) Growth-retarded fetuses with polyhydramnios warrant genetic evaluation. (3) A genetic study is not absolutely indicated for patients with polyhydramnios and a sonographically normal fetus. abstract_id: PUBMED:14981412 Percutaneous ultrasonographically guided ablation of an acardiac twin. Acardiac twinning is caused by twin reversed arterial perfusion (TRAP). Normal "pump" twins may face early delivery and cardiac decompensation and have a high perinatal mortality. A primigravid patient had serial evaluation of a TRAP pregnancy beginning early in the second trimester. Rapid growth of the acardiac sibling, high diastolic velocity Doppler waveforms in the perfusing vessel, and early hydramnios prompted ablation of blood flow by a percutaneous ultrasonographically guided infusion of absolute alcohol. A term birth of a normal pump twin was the outcome. Ablation of blood flow into an acardiac sibling of a TRAP pregnancy may be indicated in cases with a poor prognosis by use of an effective percutaneous ultrasonographically guided technique. abstract_id: PUBMED:37212013 Clinical and genetic analysis of a fetus with 17q12 microdeletion syndrome Objective: To explore the clinical phenotype and genetic characteristics of a fetus with 17q12 microdeletion syndrome. Methods: A fetus with 17q12 microdeletion syndrome who was diagnosed at Huzhou Maternal & Child Health Care Hospital in June 2020 was selected as the study subject. Clinical data of the fetus was collected. The fetus was subjected to chromosomal karyotyping and chromosomal microarray analysis (CMA). To determine the origin of fetal chromosomal abnormality, its parents were also subjected to CMA assay. The postnatal phenotype of the fetus was also investigated. Results: Prenatal ultrasound revealed polyhydramnios and fetal renal dysplasia. The fetus was found to have a normal chromosomal karyotype. CMA detected a 1.9 Mb deletion in the 17q12 region, which encompassed five OMIM genes including HNF1B, ACACA, ZNHIT3, CCL3L1 and PIGW. Based on the guidelines from the American College of Medical Genetics and Genomics (ACMG), the 17q12 microdeletion was predicted as pathogenic copy number variation (CNV).
CMA analysis detected no pathogenic CNV in either parent. After birth, the child was found to have renal cysts and abnormal brain structure. Combined with the prenatal findings, the child was diagnosed with 17q12 microdeletion syndrome. Conclusion: The fetus has 17q12 microdeletion syndrome presenting as abnormalities of the kidney and central nervous system, which are strongly correlated with functional defects of the deletion region involving the HNF1B and other pathogenic genes. abstract_id: PUBMED:34365627 Genetic analysis of a rare fetus with mandibulofacial dysostosis Guion-Almeida type Objective: To delineate the clinical and genetic features of a fetus with micrognathia, low-set ears, microtia, polyhydramnios and anechoic stomach by ultrasonography. Methods: Whole exome sequencing (WES) was carried out to detect genetic variant in the fetus, for which routine chromosomal karyotyping and chromosomal microarray analysis (CMA) yielded no positive finding. Candidate variants were verified by Sanger sequencing and bioinformatic analysis. Results: WES revealed that the fetus carried a de novo nonsense c.2302C>T (p.Q768X) variant in exon 23 of the EFTUD2 gene, which was detected in neither parent. The variant was unreported previously and may lead to premature termination of the translation of EFTUD2 protein at the 768th amino acid. Bioinformatic analysis predicted the amino acid to be highly conserved and may alter the structure and function of the EFTUD2 protein. Conclusion: The c.2302C>T variant of the EFTUD2 gene probably underlay the mandibulofacial dysostosis Guion-Almeida type in the fetus. Discovery of the novel variant has enriched the variant spectrum of the EFTUD2 gene and provided a basis for genetic counseling and prenatal diagnosis for the family. abstract_id: PUBMED:20067885 Prenatal evaluation of a fetus with trisomy 18 and additional balanced de novo Rob(13;14). The main aim of this work is to present an unusual case with full trisomy 18 and additional robertsonian translocation Rob (13;14) detected through abnormalities found in prenatal ultrasound examination. A 26-year-old pregnant woman with no family history of any reproductive failure underwent level II ultrasound screening at 19 weeks of gestation. Polyhydramnios, intrauterine growth retardation, hydrocephalus, enlarged lateral ventricles, club foot and cardiac defect were found. Amniocentesis was indicated considering the high likelihood of a chromosomal aberration. An abnormal karyotype was detected: 46, XY, der(13;14)(q10;q10), +18. Karyotypes of the parents were normal, which confirmed the de novo origin of this aberration. Pregnancy was terminated. On postnatal examination the fetus demonstrated intrauterine growth retardation and numerous dysmorphic features characteristic of trisomy 18: microcephaly, prominent occiput, very low set and posteriorly rotated ears, hypertelorism, small mouth, small recessed mandible, a high narrow palate, broad nasal bridge, low-set ears, preauricular skin appendage, clenched fingers, clinodactyly of the fifth fingers and club foot. In conclusion, the described fetus demonstrated ultrasonographic features rather typical of trisomy 18. Balanced Rob (13;14) gives no phenotypic expression. Possible interchromosomal effect in complex chromosomal aberration formation such as Rob (13;14) with trisomy 18 was discussed.
abstract_id: PUBMED:27984613 Detection of a fetus with paternally derived 2q37.3 microdeletion and 20p13p12.2 microduplication using whole genome microarray technology Objective: To perform prenatal diagnosis for a fetus with multiple malformations. Methods: The fetus was subjected to routine karyotyping and whole genome microarray analysis. The parents were subjected to high-resolution chromosome analysis. Results: Fetal ultrasound at 28+4 weeks indicated intrauterine growth restriction, left kidney agenesis, right kidney dysplasia, ventricular septal defect, and polyhydramnios. Chromosomal analysis showed that the fetus has a karyotype of 46,XY,der(2),der(20), t(2;20)(q37.3;p12.2), t(5;15)(q12.2;q25) pat. SNP array analysis confirmed that the fetus has a 5.283 Mb deletion at 2q37.3 and an 11.641 Mb duplication at 20p13p12.2. High-resolution chromosome analysis suggested that the father has a karyotype of 46,XY,t(2;20)(q37.3;p12.2),t(5;15)(q12.2;q25), while the mother has a normal karyotype. Conclusion: The abnormal phenotype of the fetus may be attributed to a 2q37.3 microdeletion and a 20p13p12.2 microduplication. The father carried a complex translocation involving four chromosomes. To increase the chance for successful pregnancy, genetic diagnosis and/or assisted reproductive technology are warranted. abstract_id: PUBMED:33751529 Prenatal diagnosis and genetic analysis of two fetuses with paternally derived 17q12 microdeletions Objective: To report on two fetuses diagnosed with 17q12 microdeletion syndrome. Methods: The two fetuses were respectively found to have renal abnormalities and polyhydramnios upon second and third trimester ultrasonography. Umbilical cord blood of the first fetus and amniotic fluid of the second fetus were subjected to single nucleotide polymorphism array (SNP-array) analysis. After 17q12 microdeletion was found in the first fetus, SNP-array was carried out on peripheral blood samples of the parents to determine its origin. With the medical history of the parents taken into consideration, the father underwent high-throughput sequencing for 565 urinary system-related genes to exclude pathogenic or likely pathogenic variants associated with congenital malformations of the urinary and reproductive systems. Results: In both fetuses, SNP-array revealed a 1.42 Mb deletion at 17q12, or arr[hg19]17q12 (34 822 465-36 243 365) × 1. In both cases the microdeletion was inherited from the father, in whom no urinary disease-related pathogenic or likely pathogenic variants were identified. Conclusion: Paternally derived 17q12 microdeletions probably underlay the genetic etiology of the two fetuses with renal ultrasound abnormalities and polyhydramnios. SNP-array can enable the diagnosis and facilitate genetic counseling and prenatal diagnosis for the families.
Results: Prenatal ultrasonography (33 weeks of pregnancy) revealed multiple anomalies in the fetus, which included a slightly widened cavity of the septum pellucidum, blurred corpus callosum, slightly reduced frontal lobe volume, thin cortex, fusion of lateral ventricles, polyhydramnios, small stomach bubble, and digestive tract atresia. Whole exome sequencing revealed a heterozygous c.2076delA (p.Lys692Asnfs*27) frameshift variant in the SMC1A gene, which was found in neither parent and was rated as pathogenic based on the guidelines of the American College of Medical Genetics and Genomics (ACMG). Conclusion: The CdLS2 in this fetus may be attributed to the c.2076delA variant of the SMC1A gene. The above finding provides a basis for genetic counseling and assessment of reproductive risk for this family. abstract_id: PUBMED:29359448 Targeted gene panel sequencing prenatally detects two novel mutations of DYNC2H1 in a fetus with increased biparietal diameter and polyhydramnios. Background: Genetic skeletal disorders (GSDs) are clinically and genetically heterogeneous with more than 350 genes accounting for the diversity of disease phenotypes. Prenatal diagnosis of these disorders has been challenging because of the limited but variable prenatal phenotypes, highlighting the need for a novel genetic approach. Short-rib polydactyly syndrome (SRPS) Type III is an autosomal recessive GSD characterized by extreme narrowness of the thorax, severely shortened tubular bones, polydactyly and multiple malformations. Methods: Cytogenetic and molecular analyses using GTG-banding, single nucleotide polymorphism array and a novel GSDs targeted gene panel sequencing were performed in a 24-week fetus with increased biparietal diameter (BPD), short limbs, narrow thorax and polyhydramnios. Results: No chromosomal abnormalities or pathogenic copy number variations (CNVs) were detected in the fetus. Two novel compound heterozygous mutations c.2992C>T and c.12836G>C in the DYNC2H1 gene were identified by targeted gene panel sequencing. A literature review was performed to delineate the prenatal phenotype of SRPS Type III. Conclusion: This is the first report of prenatal diagnosis of DYNC2H1 mutations causing SRPS Type III in a fetus with increased BPD associated with polyhydramnios in China. Our findings expand the mutation spectrum of DYNC2H1 in this rare disease and demonstrate that targeted gene panel capture followed by next-generation sequencing (NGS) is an efficient and cost-effective method to perform a molecular prenatal diagnosis of a rare genetic skeletal disorder.
We demonstrate the first CDG patient at the fetal stage with a homozygous COG5 c.95T>G variant. Answer: Polyhydramnios, which is an excessive accumulation of amniotic fluid, can be associated with an increased incidence of congenital fetal anomalies, including chromosomal and structural anomalies. However, the decision to pursue genetic evaluation in the presence of polyhydramnios may depend on additional findings and circumstances. According to the study by Magann et al. (PUBMED:7503195), polyhydramnios is indeed associated with an increased incidence of congenital fetal anomalies. In their study, the incidence of chromosomal anomalies was significantly higher in women with polyhydramnios compared to those with normal amniotic fluid levels. They also found that growth-retarded fetuses with polyhydramnios warrant genetic evaluation. However, they concluded that a genetic study is not absolutely indicated for patients with polyhydramnios and a sonographically normal fetus, suggesting that the presence of polyhydramnios alone, without other abnormal findings, may not necessarily be an indication for genetic evaluation. Other studies have reported cases where polyhydramnios was associated with genetic abnormalities. For instance, a fetus with 17q12 microdeletion syndrome presented with polyhydramnios and renal dysplasia (PUBMED:37212013), and another fetus with mandibulofacial dysostosis Guion-Almeida type had polyhydramnios among other abnormalities (PUBMED:34365627). Additionally, cases of trisomy 18 (PUBMED:20067885), 2q37.3 microdeletion and 20p13p12.2 microduplication (PUBMED:27984613), paternally derived 17q12 microdeletions (PUBMED:33751529), Cornelia de Lange syndrome type II (PUBMED:37102291), and mutations in the DYNC2H1 gene associated with short-rib polydactyly syndrome (PUBMED:29359448) were all detected in fetuses with polyhydramnios. In conclusion, while polyhydramnios can be associated with genetic abnormalities, the decision to proceed with genetic evaluation should be based on a comprehensive assessment of the fetus, including growth parameters and additional ultrasound findings. If the fetus is otherwise normal on ultrasonography, genetic evaluation may not be absolutely indicated, but it could be considered in the presence of additional risk factors or anomalies (PUBMED:7503195).
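Editor's note: PUBMED:7503195 compares sparse counts (2/49 chromosomal anomalies with polyhydramnios vs 3/2681 with normal fluid) and reports only p < 0.05. A contrast this sparse is typically tested with Fisher's exact test; the abstract does not name its test, so the choice below is an assumption for illustration only.

from scipy.stats import fisher_exact

table = [[2, 49 - 2], [3, 2681 - 3]]      # anomalies / no anomalies per group
odds_ratio, p_value = fisher_exact(table)  # two-sided by default
print(odds_ratio, p_value)                 # p falls well below 0.05, consistent with the abstract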
Instruction: The course of diabetes in children, adolescents and young adults: does the autoimmunity status matter? Abstracts: abstract_id: PUBMED:27842589 The course of diabetes in children, adolescents and young adults: does the autoimmunity status matter? Background: Initial classification of diabetes of the young may require revision to improve diagnostic accuracy of different forms of diabetes. The aim of our study was to examine markers of beta-cell autoimmunity in a cohort of young (0-25 years) patients with type 1 diabetes and compare the presentation and course of the disease according to the presence of pancreatic antibodies. Methods: Cross-sectional population-based study was performed covering 100% of pediatric (n = 860) and 70% of 18-25 years old adult patients (n = 349) with type 1 diabetes in Lithuania. Results: No antibodies (GAD65, IA-2, IAA and ICA) were found in 87 (7.5%) cases. Familial history of diabetes was more frequent in those with antibodies-negative diabetes (24.1 vs. 9.4%, p < 0.001). Gestational age, birth weight and age at diagnosis were similar in both groups. Ketosis at presentation was more frequent in patients with autoimmune diabetes (88.1 vs. 73.5%, p < 0.05). HbA1c at the moment of investigation was 8.6 (3) vs. 8.7 (2.2)% in antibodies-negative and antibodies-positive diabetes groups, respectively, p > 0.05. In the whole cohort, neuropathy was found in 8.8% and nephropathy - in 8.1% of cases, not depending on autoimmunity status. Adjusted for age at onset, disease duration and HbA1c, retinopathy was more frequent in antibodies-negative subjects (13.8 vs. 7.8%, p < 0.05). Conclusion: Antibodies-negative pediatric and young adult patients with type 1 diabetes in this study had higher incidence of family history of diabetes, higher frequency of retinopathy, less frequent ketosis at presentation, but similar age at onset, HbA1c, incidence of nephropathy and neuropathy compared to antibodies-positive patients. abstract_id: PUBMED:33853289 Thyroid diseases - ally or enemy of type 1 diabetes in children and adolescents? Introduction: Autoimmune thyroid diseases (AIT) are one of the most common disorders associated with type 1 diabetes (T1D) and they are capable of influencing its course. For Hashimoto's lymphocytic thyroiditis, the incidence is 14-28%, while for Graves-Basedow hyperthyroidism it is 0.5-7%. Aim of the study: Assessment of type 1 diabetes in the pediatric population with coexisting autoimmune thyroid diseases: Hashimoto's lymphocytic thyroiditis and Graves-Basedow's disease. Material And Methods: Analyzing publications from the PubMed scientific database from 1990 to May 2020. Results: Among pediatric patients with T1D and coexisting thyroid autoimmunity insufficient glycemic control is usually observed. Reported average increase in glycated hemoglobin concentration ranges from 7.9 to 9.2%. In children with T1D and subclinical hypothyroidism, an increased number of episodes of hypoglycemia was noted - 5 vs. 2 episodes per year among children with euthyroidism. In hyperthyroidism patients the number of episodes of hypoglycemia was 34.4 vs. 17.2 per 100 incidents in euthyroidism patients. An increased occurrence of diabetic ketoacidosis events may also be observed - 18.1 vs. 7.7 per 100 patients with euthyroidism per year. The risk of developing chronic complications in the form of cardiovascular diseases is also higher. However, based on the available literature, this subject is still debatable.
Conclusions: Autoimmune thyroid diseases often accompany and interfere with type 1 diabetes in children and adolescents. Paying special attention to the different course of diabetes in the presence of thyroid disorders is an important and essential element of diabetes care. abstract_id: PUBMED:33990377 Frequency of Autoantibody-Negative Type 1 Diabetes in Children, Adolescents, and Young Adults During the First Wave of the COVID-19 Pandemic in Germany. Objective: The aim of this study was to investigate the frequency of newly diagnosed type 1 diabetes without evidence of autoimmunity and the respective frequencies of ketoacidosis in children, adolescents, and young adults during the coronavirus disease 2019 (COVID-19) pandemic in Germany compared with the previous decade. Research Design And Methods: Based on data from the German Diabetes Prospective Follow-up Registry (DPV), we compared data from 715 children, adolescents, and young adults, newly diagnosed with type 1 diabetes during the COVID-19 pandemic in Germany between 1 March and 30 June 2020, with data from 5,428 children, adolescents, and young adults of the same periods from 2011 to 2019. Adjusted differences and relative risks (RRs) of negative β-cell autoantibody test results and diabetic ketoacidosis were estimated using multivariable log-binomial regression analysis. An upper noninferiority test (margin 1%) was applied to evaluate whether the autoantibody-negativity rate in 2020 was not higher than that in 2011 to 2019. Results: The estimated frequencies of autoantibody negativity in 2020 and 2011-2019 were 6.6% (95% CI 5.1-8.4) and 7.2% (95% CI 6.5-8.0), respectively, with an absolute difference of -0.68% (90% CI -2.07 to 0.71; P(upper noninferiority) = 0.023). The increase of the estimated frequency of diabetic ketoacidosis during the COVID-19 pandemic was similar between autoantibody-negative and -positive type 1 diabetes (adjusted RRs 1.28 [95% CI 0.80-2.05] and 1.57 [1.41-1.75], respectively). Conclusions: This study found no evidence that the COVID-19 pandemic leads to a significantly increased number of new cases with autoantibody-negative type 1 diabetes in children, adolescents, and young adults. In addition, autoantibody-negative type 1 diabetes showed no particular susceptibility to ketoacidosis, neither before nor during the pandemic. abstract_id: PUBMED:24790372 Pathogenic characteristics at diagnosis in young children with type 1 diabetes presenting prior to 5 years of age. We examined pathogenic characteristics in Japanese children with type 1 diabetes presenting before 5 years of age. The subjects were 23 Japanese children, 9 males and 14 females, 1.1-4.8 yr of age at diagnosis. The majority had severe metabolic decompensation accompanied by complete absence of β-cell function at diagnosis. We found a high frequency of preceding viral illness (41.7%) among them. The prevalences of antibodies to GAD and IA-2 at diagnosis in young children were significantly lower than those in older cases diagnosed after 5 yr of age (31.6% vs. 86.3%, 47.1% vs. 82.5%, respectively). These findings suggest that non-autoimmune mechanisms or age-related differences in autoimmunity could be involved in the pathogenesis of diabetes in young children. In regard to diabetes-related HLA-DRB1 and DQB1 alleles, all subjects had high-risk genotypes in both alleles. On the other hand, none of the patients had any of the protective genotypes in either allele.
In regard to haplotypes, the frequencies of DRB1*0405-DQB1*0401 and DRB1*0901/DQB1*0303 were 60.9% and 52.2%, respectively, and both these haplotypes are associated with strong susceptibility to type 1 diabetes. Patients with early-childhood onset may have diabetes-related autoimmunity and genetic backgrounds different from those of patients diagnosed at a later age. abstract_id: PUBMED:25767657 Thyroid autoimmunity and function among Ugandan children and adolescents with type-1 diabetes mellitus. Introduction: Up to 30% of type-1 diabetes mellitus (T1DM) patients have co-existent thyroid autoimmunity with up to 50% of them having associated thyroid dysfunction. Routine screening for thyroid autoimmunity and dysfunction is recommended in all T1DM patients. However, this was not currently practiced in Ugandan paediatric diabetes clinics. There was also paucity of data regarding thyroid autoimmunity and dysfunction in African children and adolescents with diabetes mellitus. The objective of this study was to quantify the magnitude of thyroid autoimmunity and dysfunction in Ugandan children with T1DM. Methods: This was a cross sectional descriptive study to determine the prevalence of thyroid autoantibodies and describe thyroid function among children and adolescents aged 1-19 years with diabetes mellitus attending the paediatric diabetes clinic at Mulago National Referral Hospital, Kampala, Uganda. Following enrollment, we obtained details of clinical history and performed physical examination. Blood (plasma) was assayed to determine levels of antibodies to thyroid peroxidase (antiTPO), free thyroxine (FT4) and thyrotropin (TSH). Results: The prevalence of thyroid autoimmunity was 7.3% (5/69). All antiTPO positive subjects were post pubertal, aged between 13-17 years with females comprising 3/5 of the antiTPO positive subjects. All study subjects were clinically euthyroid; however, 7.3% (5/69) of the study subjects had subclinical hypothyroidism. Conclusion: These data strengthen the argument for routine screening of all diabetic children and adolescents for thyroid autoimmunity (particularly anti-TPO) as recommended by international guidelines. We also recommend evaluation of thyroid function in diabetic children and adolescents to minimize the risk of undiagnosed thyroid dysfunction. abstract_id: PUBMED:31669629 Clinical features, biochemistry and HLA-DRB1 status in children and adolescents with diabetes in Dhaka, Bangladesh. Aims: Little information is published on diabetes in young people in Bangladesh. We aimed to investigate the demographic, clinical, and biochemical features, and HLA-DRB1 alleles in new cases of diabetes affecting Bangladeshi children and adolescents <22 years of age. Methods: The study was conducted at Bangladesh Institute of Research and Rehabilitation of Diabetes, Endocrine and Metabolic Disorders (BIRDEM) in Dhaka. One hundred subjects aged <22 years at diagnosis were enrolled. Demographic characteristics, clinical information, and biochemical parameters (blood glucose, HbA1c, C-peptide, and autoantibodies against glutamic acid decarboxylase 65 (GADA) and islet antigen-2 (IA-2A)) were measured. High-resolution DNA genotyping was performed for HLA-DRB1. Results: Eighty-four subjects were clinically diagnosed as type 1 diabetes (T1D), seven as type 2 diabetes (T2D), and nine as fibrocalculous pancreatic disease (FCPD).
Of the 84 with T1D, 37 (44%) were males and 47 (56%) females, with median age at diagnosis 13 years (y) (range 1.6-21.7) and peak age at onset 12-15 years. 85% of subjects were assessed within one month of diagnosis and all within eleven months. For subjects diagnosed with T1D, mean C-peptide was 0.46 ± 0.22 nmol/L (1.40 ± 0.59 ng/mL), with 9 (10.7%) IA-2A positive, 22 (26%) GADA positive, and 5 (6%) positive for both autoantibodies. Analysis of HLA-DRB1 genotypes revealed locus-level T1D association (p = 6.0E-05); DRB1*04:01 appeared predisposing (p < 3.0E-06), and DRB1*14:01 appeared protective (p = 1.7E-02). Conclusions: Atypical forms of T1D appear to be more common in young people in Bangladesh than in European populations. This will be helpful in guiding more specific assessment at onset and potentially expanding treatment options. abstract_id: PUBMED:37423102 Dual direction associations between common autoimmune diseases and leukemia among children and young adults: A systematic review. Background: Childhood leukemia and many autoimmune (AI) diseases are severe pediatric conditions with lifelong consequences. AI diseases form a heterogeneous disease group affecting about 5% of children worldwide, while leukemia is the most common malignancy among children aged 0-14 years. The timing and similarities in suggested inflammatory and infectious triggers of AI disease and leukemia have raised a question whether the diseases share common etiological origins. We conducted a systematic review to evaluate the evidence linking childhood leukemia and AI diseases. Data Sources: In the systematic literature search CINAHL (from 1970), Cochrane Library (from 1981), PubMed (from 1926) and Scopus (from 1948) were queried in June 2023. Review Methods: We included studies covering the association between any AI disease and acute leukemia, limiting it to children and adolescents under 25 years old. The studies were reviewed independently by two researchers and the risk of bias was assessed. Results: A total of 2119 articles were screened and 253 studies were selected for detailed evaluation. Nine studies met the inclusion criteria, of which eight were cohort studies and one was a systematic review. The diseases covered were type 1 diabetes mellitus, inflammatory bowel diseases and juvenile arthritis alongside acute leukemia. Five cohort studies were suitable for more detailed analysis: a rate ratio for leukemia diagnosis after any AI disease was 2.46 (95% CI 1.17-5.18; heterogeneity I² 15%) with a random-effects model. Conclusions: The results of this systematic review indicate that AI diseases in childhood are associated with a moderately increased risk of leukemia. The association for individual AI diseases needs further investigation. abstract_id: PUBMED:38252314 Autoimmune comorbidity in type 1 diabetes and its association with metabolic control and mortality risk in young people: a population-based study. Aims/hypothesis: This register-based study aimed to describe autoimmune comorbidity in children and young adults from type 1 diabetes onset, and to investigate whether such comorbidity was associated with a difference in HbA1c or mortality risk compared with children/young adults with type 1 diabetes without autoimmune comorbidity. Methods: A total of 15,188 individuals from the Swedish National Diabetes Register, registered with type 1 diabetes before 18 years of age between 2000 and 2019, were included.
Five randomly selected control individuals from the Swedish population (Statistics Sweden) were matched to each individual with type 1 diabetes (n=74,210 [346 individuals with type 1 diabetes were not found in the Statistics Sweden register at the date of type 1 diabetes diagnosis, so could not be matched to control individuals]). The National Patient Register was used to attain ICD-10 codes on autoimmune diseases and the Cause of Death Register was used to identify deceased individuals. Results: In the total type 1 diabetes cohort, mean±SD age at onset of type 1 diabetes was 9.5±4.4 years and mean disease duration at end of follow-up was 8.8±5.7 years. Of the individuals with type 1 diabetes, 19.2% were diagnosed with at least one autoimmune disease vs 4.0% of the control group. The HRs for comorbidities within 19 years from onset of type 1 diabetes were 11.6 (95% CI 10.6, 12.6) for coeliac disease, 10.6 (95% CI 9.6, 11.8) for thyroid disease, 1.3 (95% CI 1.1, 1.6) for psoriasis, 4.1 (95% CI 3.2, 5.3) for vitiligo, 1.7 (95% CI 1.4, 2.2) for rheumatic joint disease, 1.0 (95% CI 0.8, 1.3) for inflammatory bowel disease, 1.0 (95% CI 0.7, 1.2) for systemic connective tissue disorder, 1.4 (95% CI 1.1, 1.9) for uveitis, 18.3 (95% CI 8.4, 40.0) for Addison's disease, 1.8 (95% CI 0.9, 3.6) for multiple sclerosis, 3.7 (95% CI 1.6, 8.7) for inflammatory liver disease and 19.6 (95% CI 4.2, 92.3) for atrophic gastritis. Autoimmune disease in addition to type 1 diabetes had no statistically significant effect on HbA1c or mortality risk. Conclusions/interpretation: To our knowledge, this is the first comprehensive study where young individuals with type 1 diabetes were followed regarding development of a wide spectrum of autoimmune diseases, from onset of type 1 diabetes. In this nationwide and population-based study, there was already a high prevalence of autoimmune diseases in childhood, especially coeliac and thyroid disease. The presence of autoimmune comorbidity did not have a statistically significant effect on metabolic control or mortality risk. abstract_id: PUBMED:27646921 Type 1 Diabetes in the Young: Organization of Two National Centers in Israel and Slovenia. Type 1 diabetes is a chronic autoimmune disease that affects mainly young people. In the last 50 years, a steady increase of the T1D incidence in the young is reported worldwide, with an average 4 % increase annually. In addition, the mean age at the diagnosis is decreasing. Studies show that good metabolic control is important not only for delaying the chronic complications of diabetes but also for improving the quality of life of patients and their families. Continuous education, together with modern technology, is crucial in achieving these goals. Longitudinal data on glycated hemoglobin (HbA1c), along with the data on severe hypoglycemia and severe diabetic ketoacidosis, can describe the quality of care in a defined population. Two national reference diabetes centres taking care of children, adolescents and young adults with diabetes in Israel and Slovenia are described. abstract_id: PUBMED:36681087 Islet autoantibody screening in at-risk adolescents to predict type 1 diabetes until young adulthood: a prospective cohort study. Background: Screening for islet autoantibodies in children and adolescents identifies individuals who will later develop type 1 diabetes, allowing patient and family education to prevent diabetic ketoacidosis at onset and to enable consideration of preventive therapies. 
We aimed to assess whether islet autoantibody screening is effective for predicting type 1 diabetes in adolescents aged 10-18 years with an increased risk of developing type 1 diabetes. Methods: Data were harmonised from prospective studies from Finland (the Diabetes Prediction and Prevention study), Germany (the BABYDIAB study), and the USA (Diabetes Autoimmunity Study in the Young and the Diabetes Evaluation in Washington study). Autoantibodies against insulin, glutamic acid decarboxylase, and insulinoma-associated protein 2 were measured at each follow-up visit. Children who were lost to follow-up or diagnosed with type 1 diabetes before 10 years of age were excluded. Inverse probability censoring weighting was used to include data from remaining participants. Sensitivity and the positive predictive value of these autoantibodies, tested at one or two ages, to predict type 1 diabetes by the age of 18 years were the main outcomes. Findings: Of 20 303 children with an increased type 1 diabetes risk, 8682 were included for the analysis with inverse probability censoring weighting. 1890 were followed up to 18 years of age or developed type 1 diabetes between the ages of 10 years and 18 years, and their median follow-up was 18·3 years (IQR 14·5-20·3). 442 (23·4%) of 1890 adolescents were positive for at least one islet autoantibody, and 262 (13·9%) developed type 1 diabetes. Time from seroconversion to diabetes diagnosis increased by 0·64 years (95% CI 0·34-0·95) for each 1-year increment of diagnosis age (Pearson's correlation coefficient 0·88, 95% CI 0·50-0·97, p=0·0020). The median interval between the last prediagnostic sample and diagnosis was 0·3 years (IQR 0·1-1·3) in the 227 participants who were autoantibody positive and 6·8 years (1·6-9·9) for the 35 who were autoantibody negative. Single screening at the age of 10 years was 90% (95% CI 86-95) sensitive, with a positive predictive value of 66% (60-72) for clinical diabetes. Screening at two ages (10 years and 14 years) increased sensitivity to 93% (95% CI 89-97) but lowered the positive predictive value to 55% (49-60). Interpretation: Screening of adolescents at risk for type 1 diabetes only once at 10 years of age for islet autoantibodies was highly effective to detect type 1 diabetes by the age of 18 years, which in turn could enable prevention of diabetic ketoacidosis and participation in secondary prevention trials. Funding: JDRF International. Answer: The course of diabetes in children, adolescents, and young adults can indeed be influenced by autoimmunity status. A study examining markers of beta-cell autoimmunity in young patients with type 1 diabetes found that those without antibodies had a higher incidence of family history of diabetes, higher frequency of retinopathy, less frequent ketosis at presentation, but similar age at onset, HbA1c, incidence of nephropathy and neuropathy compared to antibodies-positive patients (PUBMED:27842589). This suggests that the presence or absence of pancreatic antibodies can affect the presentation and certain complications of the disease. Autoimmune thyroid diseases (AIT), which are common in type 1 diabetes (T1D), can also influence the course of diabetes. Pediatric patients with T1D and coexisting thyroid autoimmunity often have insufficient glycemic control, increased episodes of hypoglycemia, and an increased occurrence of diabetic ketoacidosis. Additionally, there is a higher risk of developing chronic complications such as cardiovascular diseases (PUBMED:33853289). 
During the COVID-19 pandemic, a study found no evidence that the pandemic led to a significantly increased number of new cases with autoantibody-negative type 1 diabetes in children, adolescents, and young adults. Moreover, autoantibody-negative type 1 diabetes showed no particular susceptibility to ketoacidosis, neither before nor during the pandemic (PUBMED:33990377). In young children with type 1 diabetes presenting before 5 years of age, there was a high frequency of preceding viral illness and a significantly lower prevalence of antibodies to GAD and IA-2 at diagnosis compared to older cases. This indicates that non-autoimmune mechanisms or age-related differences in autoimmunity could be involved in the pathogenesis of diabetes in young children (PUBMED:24790372). In summary, autoimmunity status does matter in the course of diabetes in young individuals, as it can affect the presentation, control, and risk of complications associated with the disease.
Instruction: Are one-step adhesives easier to use and better performing? Abstracts: abstract_id: PUBMED:19826601 Bond Strength of One-Step Adhesives under Different Substrate Moisture Conditions. Objectives: The aim of this study was to evaluate the bond strength of one-step adhesive systems to dry or moist dental substrate. Methods: Thirty human third molars were sectioned into two halves, in the mesio-distal direction, parallel to the long axis of the tooth. Each half was embedded in a polystyrene resin cylinder so that the buccal/lingual surface remained exposed. This exposed surface was abraded to obtain both flat exposed enamel and dentin. The samples were randomly allocated according to the adhesive system (Xeno III, Adper Prompt and iBond) and moisture condition (dry and moist). The substrates were air-dried for 30 s for dry condition, while the moist substrates were re-wet with 2.5 mul of distilled water after drying. After the adhesive procedures, two resin composite cylinders were build-up on dentin and enamel substrates, totaling four per sample. A shear load was applied to the samples at a crosshead speed of 0.5 mm/min until failure. Data were statistically analyzed by three-way ANOVA and the Tukey test (alpha=0.05). Results: The evaluated one-step adhesives showed higher bond strength to dentin than enamel. The iBond presented better bond performance to moist substrate and Xeno III to dry substrate. The moisture condition did not interfere in the performance of Adper Prompt. The Xeno III and iBond presented higher bond strength than the other adhesives to both dry and moist substrates. Conclusions: The moisture condition of substrate interfered in the performance of one-step self-etching adhesives and the best moisture condition was material dependent. abstract_id: PUBMED:19603581 Are one-step adhesives easier to use and better performing? Multifactorial assessment of contemporary one-step self-etching adhesives. Purpose: The objective of this study was to examine whether one-step self-etching adhesives (1-SEAs) really have an advantage over multistep systems. Materials And Methods: Nine one-step self-etching adhesives (Absolute, Adper Prompt L-Pop, Clearfil S3 Bond, G-Bond, Hybrid Bond, iBond, One-up Bond F Plus, Optibond All-in-one and Xeno III) were included in this study. One two-step self-etching adhesive (Clearfil SE Bond) and one three-step etch-and-rinse adhesive (Optibond FL) served as controls. Their microtensile bond strength to bur-cut enamel and dentin was determined using a standardized protocol and the respective adhesive/dentin interface of these adhesives was characterized by transmission electron microscopy. Statistical analysis was performed with the Kruskal-Wallis nonparametric test. Results: Regarding bond strength, the control adhesives tended to perform superior to the one-step adhesives. However, a significant difference between the control adhesives and some one-step adhesives could not always be demonstrated, partly due to the statistical setup of this study. Interface analysis by electron microscopy showed wide variation among the one-step adhesives, depending on their composition and their acidity. 1-SEAs also exhibited two different kinds of droplets, depending on their hydrophilicity. Hydrophobic HEMA-free 1-SEAs such as G-Bond were prone to phase separation, while especially HEMA-containing hydrophilic 1-SEAs, such as Clearfil S3 Bond and Xeno III were predisposed to forming osmosis-induced droplets. 
Hybrid Bond, Absolute, and iBond featured both phase separation as well as osmosis. Optibond All-in-one exhibited a clustering reaction of the filler particles upon solvent evaporation. All adhesives including the control adhesives showed signs of nanoleakage, indicating that all adhesives are to some extent permeable to water. A definitive conclusion with regard to quantitative assessment of nanoleakage was much hindered by inconsistencies in the silver deposition. The application procedure of some 1-SEAs sometimes proved as elaborate and time consuming as that of the two-step adhesive Clearfil SE Bond. Conclusion: Considering bond strength and application procedure, 1-SEAs are not always a better alternative to multistep adhesives. abstract_id: PUBMED:25671215 Effect of additional etching and ethanol-wet bonding on the dentin bond strength of one-step self-etch adhesives. Objectives: This study examined the effects of additional acid etching on the dentin bond strength of one-step self-etch adhesives with different compositions and pH. The effect of ethanol wetting on etched dentin bond strength of self-etch adhesives was also evaluated. Materials And Methods: Forty-two human permanent molars were classified into 21 groups according to the adhesive types (Clearfil SE Bond [SE, control]; G-aenial Bond [GB]; Xeno V [XV]; Beauti Bond [BB]; Adper Easy Bond [AE]; Single Bond Universal [SU]; All Bond Universal [AU]), and the dentin conditioning methods. Composite resins were placed on the dentin surfaces, and the teeth were sectioned. The microtensile bond strength was measured, and the failure mode of the fractured specimens was examined. The data were analyzed statistically using two-way ANOVA and Duncan's post hoc test. Results: In GB, XV and SE (pH ≤ 2), the bond strength was decreased significantly when the dentin was etched (p < 0.05). In BB, AE and SU (pH 2.4-2.7), additional etching did not affect the bond strength (p > 0.05). In AU (pH = 3.2), additional etching increased the bond strength significantly (p < 0.05). When adhesives were applied to the acid etched dentin with ethanol-wet bonding, the bond strength was significantly higher than that of the no ethanol-wet bonding groups, and the incidence of cohesive failure was increased. Conclusions: The effect of additional acid etching on the dentin bond strength was influenced by the pH of one-step self-etch adhesives. Ethanol wetting on etched dentin could create a stronger bonding performance of one-step self-etch adhesives for acid etched dentin. abstract_id: PUBMED:26005459 Shear bond strength of one-step self-etch adhesives: pH influence. Background: The aim of this study was to compare the shear bond strength of four one-step self-etch adhesives with different pH values to enamel and dentin. Materials And Methods: In this in vitro study, 200 bovine permanent mandibular incisors were used. Four one-step self-etch adhesives with different pH values were tested both on enamel and on dentin: Adper™ Easy Bond Self-Etch Adhesive (pH = 0.8-1), Futurabond NR (pH=2), G-aenial Bond (pH = 1.5), Clearfil S(3) Bond (pH = 2.7). After adhesive systems application, a nanohybrid composite resin was inserted into the bonded surface. The specimens were placed in a universal testing machine. The shear bond strength was performed at a cross-head speed of 1 mm/min until the sample rupture.
The shear bond strength values (MPa) of the different groups were compared with analysis of variance after Kolmogorov-Smirnov tests were applied to assess normality of distributions. P < 0.05 was considered significant. Results: In enamel shear bond strength, the highest shear bond strength values were reported with Futurabond NR (P < 0.01); however, no significant differences were found with Clearfil S(3) Bond. The other adhesive systems showed lower shear bond strength values with significant differences between them (P < 0.05). When comparing the dentin shear bond strength, the lowest shear bond strength values were reported with Clearfil S(3) Bond (P < 0.05), while there were no significant differences among the other three products (P > 0.05). Conclusion: The pH values of adhesive systems did not influence significantly their shear bond strength to enamel or dentin. abstract_id: PUBMED:29813001 Optical Evaluation of Enamel Microleakage with One-Step Self-Etch Adhesives. Objective: In this in vitro study, cross-polarization optical coherence tomography (CP-OCT) was used to evaluate microleakage in dental composite restorations bonded to enamel with two types of one-step self-etching (SE) adhesives. Background Data: One-step SE adhesives were proposed to simplify bonding of composite restorations. However, bonding of these simplified adhesives to dental enamel is still questionable. OCT is a promising diagnostic tool that allows micron-scale imaging of biological tissues and structures. Methods: Class-V cavities (4-mm diameter × 1-mm depth) were prepared on the labial surfaces of extracted human anterior teeth with margins and floor located in enamel. The cavities were then divided into two groups (n = 5) and restored with either Tetric N-Bond Universal (Ivoclar Vivadent, Liechtenstein) or Palfique Bond (Tokuyama Dental Corporation, Japan), followed by Estelite Flow Quick flowable composite (Tokuyama Dental Corporation). After storage in 100% humidity for 24 h, specimens were immersed in ammoniacal silver nitrate solution for another 24 h. Later, they were rinsed and immersed in photo-developing solution under fluorescent light for 8 h. CP-OCT at 1310 nm center wavelength was used to scan seven two-dimensional cross-sectional images from each specimen. Each cross-sectional image was analyzed using ImageJ software to quantify interfacial microleakage at the cavity floor. Results: Student's t-test showed a significant difference between the tested adhesives (p < 0.05). Palfique Bond adhesive presented inferior interfacial adaptation as it obtained higher values of floor microleakage compared to the Tetric N-Bond Universal adhesive group. Conclusions: CP-OCT enabled nondestructive quantitative assessment of microleakage in enamel dental restorations.
The modulus of elasticity (E) was measured in dry and wet conditions before and after 24 h water storage, and the percentage of variation of E (ΔE) was calculated. Results were analyzed using the t-test with Bonferroni corrections (α = 0.05). When adhesives were manipulated on the HA plate, there was a significant increase in the adhesives' pH. WS and SL of all 1-SEAs decreased when the HA was used. Only SBU showed higher flexural strength when manipulated on the HA compared to the manipulation on TD under dry and wet conditions. For each 1-SEA, the use of HA resulted in significantly higher E in dry and wet conditions. ΔE of all adhesives was smaller with the manipulation on HA than on TD. It was concluded that the manipulation of 1-SEA on a hydroxyapatite plate considerably affected the adhesives' properties. abstract_id: PUBMED:22904656 Bond strength of one-step self-etch adhesives and their predecessors to ground versus unground enamel. Objective: The aim of this study was to compare the shear bond strength of several self-etch adhesives to their two-step predecessors to ground and unground enamel. Methods: Seventy-five extracted, non-carious human third molar teeth were selected for this study. The buccal surfaces of each tooth were mechanically ground to obtain flat enamel surfaces (ground enamel), while the lingual surfaces were left intact (unground enamel). The teeth were randomly divided into five groups according to the adhesive systems (n=15): one-step self-etch adhesive - Clearfil S3 Bond, its two-step predecessor - Clearfil SE Bond, one-step self-etch adhesive - AdheSE One, and its two-step predecessor - AdheSE, and a two-step etch-and-rinse adhesive - Adper Single Bond 2 (control). After application of the adhesives to the buccal and lingual enamel surfaces of each tooth, a cylindrical capsule filled with a hybrid composite resin (TPH) was seated against the surfaces. The specimens were stored in distilled water at 37°C for 24 hours, followed by thermocycling (5°C-55°C/500 cycles). They were subjected to shear bond strength test in a universal testing machine at a crosshead speed of 1.0 mm/minute. The data were compared using a two-way ANOVA, followed by Bonferroni test at P<.05. Results: All adhesives exhibited statistically similar bond strengths to ground and unground enamel except for the etch-and-rinse adhesive that showed significantly higher bond strengths than the self-etch adhesives (P<.05). No significant differences in bond strength values were observed between ground and unground enamel for any of the adhesives tested (P=.17). Conclusion: Similar bond strengths to ground and unground enamel were achieved with one-step self-etch adhesives and their predecessors. Enamel preparation did not influence the bonding performance of the adhesives tested. abstract_id: PUBMED:32538318 The preparation of multifunction chitosan adhesive hydrogel by "one- step" method. Chitosan adhesive hydrogels have attracted interest because of their unique properties. However, it is still a challenge to use simple and unmodified chitosan to synthesize multifunctional hydrogels with adhesive, self-healing and strain sensing properties. Here, a multifunctional hydrogel of Ca2+-chitosan/polyacrylamide(Ca2+- CS/PAAm)with adhesive self-healing and strain sensing properties was carried out by one-step preparation method. The hydrogel has the ability to adhere to a variety of materials, wherein the adhesion strength to rubber surface is 48.4 kPa.
The hydrogel has excellent strain sensing capability on finger bending or cyclic stretching; the sensitivity factor is 0.41 ± 0.015. The cut hydrogel healed completely at room temperature within 4 h, and the healed hydrogel can be stretched to 5 times its original length. This gives the Ca2+-CS/PAAm hydrogel great potential for application in strain sensors, wound dressings and skin adhesives. abstract_id: PUBMED:30511051 Qualitative and Quantitative Evaluation of Cytotoxicity of Five Different One-Step Self-Etching Adhesives. Purpose: To qualitatively and quantitatively compare the cytotoxic potentials of five different one-step self-etching adhesives: Prime&Bond One-Select (PB-OS), Optibond All-in-One (OB-AIO), G-Bond (GB), Clearfil Universal Bond (CUB), Single Bond Universal (SBU). Materials And Methods: During the first stage of the study, the cytotoxic activities of the test materials were evaluated qualitatively using the direct contact method. In this method, the test materials were placed directly into a monkey kidney epithelial cell culture medium. Reaction zones which occurred in the culture medium were evaluated, in addition to the density and changes in the morphology of the cells. During the second stage, the cytotoxic potential of four different dilutions (1%, 0.1%, 0.01%, 0.001%) of the test materials on L929 rat fibroblast cells was quantitatively evaluated at three different time periods (24 h, 48 h, 72 h) with the MTT tetrazolium-based assay. Results: In the first stage, a zone exceeding 1 cm was observed around or below SBU, CUB, GB and OB-AIO. In PB-OS, the zone borders were approximately 1 cm. In the second stage after the MTT assay, CUB was the most cytotoxic after 24 h, GB and SBU after 48 h, and OB-AIO after 72 h. Conclusion: All adhesives tested showed different degrees of cytotoxicity, which statistically significantly increased with dose. Changes were seen related to time. abstract_id: PUBMED:26186405 Cytotoxic effects of one-step self-etching adhesives on an odontoblast cell line. The aim of this study was to evaluate the cytotoxic effects of one-step self-etching adhesives. Cells from an immortalized mouse odontoblast cell line (MDPC-23) were cultured with six different dental adhesive systems (diluted to concentrations of 0.5% for 4 h): Adper Easy Bond (EB), Xeno V (XV), iBond (IB), AdheSE One (AO), Clearfil SE primer (CS), and Adper Single Bond 2 (SB). MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) and flow cytometric apoptosis assays were used to evaluate cell viability and the rate of apoptosis. The odontoblasts were also examined under a scanning electron microscope. While all of the cultures with adhesives showed reduced viability, the viabilities in the IB and SB groups were not significantly different from the control group. Although increased apoptosis rates were observed in all of the cultures with adhesives, the rate in the SB group was not significantly different from the rate in the control. The control group showed the lowest apoptosis rate followed by the SB, AO, IB, EB, XV, and CS groups. When examined under a scanning electron microscope, control odontoblasts and the SB group exhibited relatively large cytoplasmic extensions. In contrast, in the EB and CS groups, fewer fibroblasts remained adhered to the plate surface. Cytoplasmic membrane shrinkage and cell-free areas with residual membrane fragments from dead cells were observed.
In conclusion, all cultures with one-step self-etching adhesives showed increased apoptotic activity. SB, an etch-and-rinse adhesive, was comparable to the control group, and CS and EB showed the lowest odontoblast viabilities according to the MTT assay. Answer: The question of whether one-step adhesives are easier to use and better performing is complex and depends on various factors. According to the abstracts provided, one-step self-etching adhesives (1-SEAs) were designed to simplify the bonding process in dental restorations, but their performance compared to multistep systems is not always superior. One study found that the bond strength of one-step adhesives to dry or moist dental substrates varied depending on the product and the moisture condition of the substrate. For instance, iBond showed better bond performance to moist substrate, while Xeno III performed better on dry substrate. The moisture condition did not significantly affect the performance of Adper Prompt (PUBMED:19826601). Another study indicated that control adhesives (a two-step self-etching adhesive and a three-step etch-and-rinse adhesive) tended to perform better than one-step adhesives in terms of bond strength. The application procedure of some one-step adhesives was sometimes as elaborate and time-consuming as that of the two-step adhesive Clearfil SE Bond, suggesting that one-step adhesives are not always easier to use or better performing (PUBMED:19603581). Additional research showed that the effect of additional acid etching on the dentin bond strength of one-step self-etch adhesives was influenced by the pH of the adhesives. Ethanol wetting on etched dentin could enhance the bonding performance of one-step self-etch adhesives (PUBMED:25671215). A study comparing the shear bond strength of one-step self-etch adhesives with different pH values found that pH did not significantly influence their shear bond strength to enamel or dentin (PUBMED:26005459). In terms of cytotoxicity, one-step self-etching adhesives showed different degrees of cytotoxicity, which increased with dose and varied over time (PUBMED:30511051). Another study found that all one-step self-etching adhesives increased apoptotic activity in an odontoblast cell line, with some adhesives showing lower cell viability than others (PUBMED:26186405). In summary, while one-step adhesives may offer a simplified application process, their performance in terms of bond strength, ease of use, and cytotoxicity can vary widely depending on the specific product and conditions. They are not universally easier to use or better performing compared to multistep adhesive systems.
Instruction: Can we spare removing the adrenal gland at radical nephrectomy in children with Wilms tumor? Abstracts: abstract_id: PUBMED:20728101 Can we spare removing the adrenal gland at radical nephrectomy in children with Wilms tumor? Purpose: In patients with Wilms tumor indications for adrenalectomy are not well-defined. Following the rationale for preserving the adrenal gland in cases of other renal malignancies we determined predictors of adrenal involvement and the impact of adrenalectomy on retroperitoneal recurrence. Materials And Methods: We retrospectively reviewed the record of patients who underwent surgical resection as primary treatment for Wilms tumor between 1990 and 2008 at 2 Canadian pediatric centers. Patient and tumor characteristics were reviewed to determine potential links to adrenal involvement. Recurrence was evaluated as a time dependent variable based on followup duration. Results: Of 180 patients diagnosed with Wilms tumor 95 underwent initial radical nephrectomy. Mean ± SD age at diagnosis was 46 ± 38 months and mean survival followup was 189 ± 8.3 months. Disease was stage 1 to 4 in 28, 34, 23 and 4 patients, respectively. Adrenalectomy was done in 58 patients (61%). Only 1 adrenal gland was reportedly positive for tumor invasion while peri-adrenal fat involvement was noted in 3 patients. No studied patient or tumor characteristics predicted involvement. No statistically significant difference in retroperitoneal recurrence was found between the groups in which the adrenal gland was removed vs preserved. Conclusions: Adrenal involvement in patients with Wilms tumor is rare and difficult to predict. Preserving the adrenal gland was not associated with an increased risk of local recurrence. Thus, it seems prudent to avoid adrenalectomy at radical nephrectomy when technically feasible, instead attempting to otherwise remove all peri-adrenal fat with the specimen. abstract_id: PUBMED:37130763 Robot-assisted radical nephrectomy for Wilms' tumor in children. Introduction: Surgical removal of the tumor is a key step in the management of nephroblastoma. Less invasive surgical approaches such as robot-assisted radical nephrectomy (RARN) have gained momentum over the past few years. This video presents a comprehensive step-by-step video for two cases: one uncomplicated left RARN and one more challenging right RARN. Materials & Methods: Following the UMBRELLA/SIOP protocol, both patients received neoadjuvant chemotherapy. Under general anesthesia, in a lateral decubitus position, four robotic and one assistant port are placed. After mobilization of the colon, the ureter and gonadal vessels are subsequently identified. The renal hilum is dissected, and the renal artery and vein are divided. The kidney is dissected with sparing of the adrenal gland. The ureter and gonadal vessels are divided, and the specimen is removed through a Pfannenstiel incision. Lymph node sampling is performed. Results: Patients were 4 and 5 years old. The total surgical time was 95 and 200 min, with an estimated blood loss of 5 and 10 cc. The hospital stay was limited to 3 and 4 days. Both pathological reports confirmed the diagnosis of nephroblastoma, with tumour-free resection margins. No complications were observed 2 months postoperatively. Conclusion: RARN is feasible in children. abstract_id: PUBMED:32195125 Bilateral nephrectomy robotic-assisted laparoscopic in children with bilateral Wilms' tumor.
Our main objective is to report the feasibility of performing simultaneous robotic-assisted laparoscopic (RAL) right partial nephrectomy with contralateral radical nephrectomy in children with bilateral Wilms' tumor. The RAL right partial nephrectomy and contralateral radical nephrectomy were performed via a trans-peritoneal, four-port approach (five ports in one patient), and the patients were repositioned and draped when moving to the other side. The operative time was 90 min, and estimated blood loss was 50 cc. Postoperative length of stay was 2 days. No significant intraoperative or postoperative complications occurred. abstract_id: PUBMED:34321792 Superior Mesenteric Artery Injury during Radical Nephrectomy in an Infant: Delayed Diagnosis and Successful Management. Superior mesenteric artery (SMA) injury during a left radical nephrectomy is an uncommon complication in children with a potentially devastating outcome. Successful management depends on early diagnosis and re-establishing SMA perfusion. We report the successful management of an iatrogenic SMA injury during radical nephrectomy in a 10-month-old boy with left upper polar Wilms' tumor. abstract_id: PUBMED:35618582 Practice status and influencing factors of adrenalectomy in patients with Wilms tumor. To investigate the clinical practice status and factors that influence adrenalectomy along with the impact on prognosis in patients with Wilms Tumor (WT). We retrospectively reviewed the demographic, clinical, and follow-up data of patients with WT, including age, tumor side, tumor volume, tumor location within the kidney, stage, pathological type, tumor rupture, levels of adrenocorticotropin (ACTH), renin, aldosterone, and adrenal management, as well as outcomes. The primary outcomes are adrenal practice status and 5-year relapse-free survival (RFS). A total of 162 patients were enrolled in this study. Of these, 131 patients underwent radical nephrectomy with adrenalectomy, and adrenal invasion was only noted in three patients (2.3%). Adrenalectomy was associated with tumor volume and clinical stage (P < 0.05). Multivariable logistic regression analysis (OR = 3.982, P = 0.005) and ROC curve analysis (AUC = 0.708, P = 0.0003) revealed that a larger tumor volume independently increased the risk of adrenalectomy. Adrenalectomy was not significantly associated with tumor location, tumor rupture, or local recurrence (P > 0.05). In addition, the study median follow-up was 50.95 months. The 5-year RFS rates of patients with removed adrenal gland and preserved adrenal gland were 90.3% and 75.8%, respectively (P = 0.078). We followed up children more than 3 years after removal of the adrenal glands, and no children with reduced ACTH, aldosterone, or renin were found. Multivariate Cox regression analysis showed no significant difference in prognosis (P = 0.203), even after adjusting for clinical stage and pathological type. Finally, no evidence of adrenal insufficiency was reported during the follow-up examinations. Our data indicated that invasion of the ipsilateral adrenal gland is rare in WT. Preserving the ipsilateral adrenal gland was not associated with prognosis. Adequate preoperative assessment of tumor volume and intraoperative detection of adrenal invasion were necessary to determine whether or not to perform an adrenal resection. abstract_id: PUBMED:23895979 Is adrenalectomy necessary during unilateral nephrectomy for Wilms Tumor? A report from the Children's Oncology Group.
Purpose: To determine whether performing adrenalectomy at the time of nephrectomy for unilateral Wilms tumor impacts clinical outcome. Methods: We reviewed information on all patients enrolled on National Wilms Tumor Study-4 and -5. Data were abstracted on patient demographics, tumor characteristics, surgical and pathologic status of the adrenal gland, and patient outcomes. The primary endpoints were intraoperative spill and five-year event-free survival (EFS) in patients who did or did not undergo adrenalectomy. Results: Of 3825 patients with complete evaluable data, the adrenal was left in situ in 2264 (57.9%) patients, and was removed completely in 1367 patients (36.7%) or partially in 194 patients (5.2%). Of the adrenal glands removed, 68 (4.4%) contained tumor. Adrenal involvement was more common in patients with stage 3 (9.8%) than stage 2 disease (1.9%; p < 0.0001). After controlling for stage and histopathology, five-year EFS was similar whether or not the adrenal gland was removed (p = 0.48), or involved with tumor (p = 0.81); however, intraoperative spill rates were higher in patients undergoing adrenalectomy (26.1% vs 15.5%, p < 0.0001), likely due to larger tumor size or technical factors. No patient had clinical evidence of adrenal insufficiency or tumor recurrence in the adrenal gland during follow-up (median 9.9 years). Conclusions: Sparing the adrenal gland during nephrectomy for unilateral Wilms tumor was not associated with a higher incidence of intraoperative spill and was associated with a similar oncologic outcome, on a per-stage basis, with cases where the adrenal was removed. Thus, adrenalectomy should not be considered mandatory during radical nephrectomy for Wilms tumor. abstract_id: PUBMED:31178166 Partial and radical nephrectomy in children, adolescents, and young adults: Equivalent readmissions and postoperative complications. Objectives: To compare and contrast the use of partial nephrectomy (PN) and radical nephrectomy (RN) in pediatric malignant renal tumors using a nationally representative database. Methods: The 2010-2014 Nationwide Readmissions Database (NRD) was used to obtain PN and RN select postoperative data. ICD-9-CM codes were used to identify children (<10 years), adolescents (10-19 years) and young adults (20-30 years) diagnosed with malignant renal tumors who were treated with a PN or RN. The presence of a 30-day readmission, occurrence of postoperative complications, cost, and length of stay (LOS) were studied and weighted logistic regression models were fit to test for associations. Results: There were 4330 weighted encounters (1289 PNs, 3041 RNs) that met inclusion criteria: 50.8% were children, 7.2% were adolescents, and 42% were young adults. Young adults had the highest rates of PN, whereas children had the highest rates of RN (p < 0.0001). Overall, no evidence was found to suggest a difference in odds between surgical modality and the presence of a 30-day readmission or postoperative complication. While PN was on average $9000 cheaper compared to RN overall, its cost was similar to that of RN for children. Similarly, PN patients had a shorter overall LOS compared to RN patients, but their LOS was similar to that of children who underwent RN. Conclusion: There was no evidence of a difference in odds between RN and PN in terms of postoperative readmissions or in-hospital complication rates. Additionally, we observed descriptive differences in both cost and LOS between the surgical modalities across age groups.
Type Of Study: Retrospective comparative study (administrative database analysis). Level Of Evidence: Level III. abstract_id: PUBMED:9498379 The risk of nephrectomy during local control in abdominal neuroblastoma. Methods: Eight hundred sixty-eight children presenting from 1981 to 1991 were treated on five multiagent chemotherapy protocols by members of the Pediatric Oncology Group for advanced-stage neuroblastoma with large primary tumors crossing the midline or distant metastasis. Of these children, 696 had abdominal (adrenal or paravertebral) primary tumors. One hundred sixteen children underwent greater than 50% surgical resection of these abdominal primary tumors before chemotherapy, and 233 underwent similar surgery after induction chemotherapy. Results: Among the 349 who underwent surgical resection, 52 children (14.9%) had nephrectomy or renal infarction during surgery for local control. There was a 25% incidence among those with initial resection (29 patients) and a 9.9% incidence in the postchemotherapy resections (23 patients). Reasons for nephrectomy given by the surgeons included direct involvement of the kidney by adjacent tumor (17 children), clinical impression that the tumor was a Wilms' tumor (11 children), renal vessels could not be separated from the tumor (10 children), extensive tumor surrounding the kidney (8 children), postoperative renal infarction (4 children), marked decrease in unilateral renal function after chemotherapy (1 child), and position of the tumor posterior to the kidney and vena cava making resection without nephrectomy impossible (1 child). Of the patients undergoing nephrectomy, four children had an upper pole nephrectomy in conjunction with their adrenalectomy and resection of the tumor. Pathological review of the resected tumor available in 47 cases demonstrated direct involvement of the renal parenchyma in 18 cases (38% of the nephrectomies) and in 5.2% of those undergoing resection. In children undergoing initial resection, the risk for nephrectomy (as calculated by the methods described by Gart) was more than twice compared with those undergoing resection after chemotherapy (P = .012; odds ratio, 2.32; 95% confidence interval of 1.23 to 4.42). Conclusions: This review confirms that renal parenchymal involvement does occur in a significant number of children with abdominal neuroblastoma. It also suggests that preoperative chemotherapy may decrease the number of nephrectomies required to achieve a total or subtotal resection. abstract_id: PUBMED:7858623 Laparoscopic radical nephrectomy for renal cell carcinoma: early experience. We developed a laparoscopic radical nephrectomy for malignant disease of the kidney and successfully performed the operation in six patients between July 1992 and October 1993. This procedure is basically derived from that of Clayman and associates. A CO2 pneumoperitoneum is induced, and five trocars are introduced into the abdominal cavity through the lateral abdominal wall. The kidney is removed en bloc together with the adrenal gland, perirenal fatty tissue, and Gerota's fascia. The free kidney is put into a laparoscopy sack, and the sack is pulled out through a 5- to 6-cm incision in the abdominal wall without morcellation to allow complete pathologic examination. By this procedure, right nephrectomy was performed in four patients and left nephrectomy in two patients. The mean operative time was 6 hours and 13 minutes, and the mean blood loss was 430 mL. There were no severe complications. 
The average hospital stay was 11 days, and the average time to full convalescence was 21 days. Pathologic examination revealed renal cell carcinoma in five patients and nephroblastoma in one. Of the patients with renal cell carcinoma, the stage was pT1pN0pM0 in three patients and pT2pN0pM0 in two. There has been no recurrent disease in these six patients with a mean follow-up period of 9.5 months. abstract_id: PUBMED:33014769 Effectiveness of Nephron Sparing Surgery and Radical Nephrectomy in the Management of Unilateral Wilms Tumor: A Meta-Analysis. Background: Unilateral Wilms tumor is the most common renal malignancy in the pediatric population. Although the onset of surgical intervention like radical nephrectomy has substantially reduced the mortality rate, recent evidence has raised concerns regarding several postoperative complications associated with this procedure. Nephron sparing surgery has been reported to avoid such postoperative complications and have high technical success rate. However, no attempt to date has been made to synthesize the evidence comparing the efficacy of radical nephrectomy and nephron sparing surgery for managing unilateral Wilms tumor. Methods and Results: To metastatistically compare the efficiency of radical nephrectomy with nephron sparing surgery for managing unilateral Wilms tumor, a systematic identification of the literature was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines on four academic databases: MEDLINE, Scopus, EMBASE, and CENTRAL. A meta-analysis comparing renal function (estimated glomerular filtration rate), survival rate, and rate of relapse was performed to compare the efficacy of radical nephrectomy and nephron sparing surgery. Out of 1,283 records, 20 articles including 5,246 children (mean age, 4.3 ± 3.0 years) were included in this review. Radical nephrectomy was performed on 11 of the included studies, whereas nephron sparing surgery was performed on five studies. Two studies compared the efficacy of both interventions. The meta-analysis reveals the beneficial effects of nephron sparing surgery (Hedge's g, 0.76) as compared to radical nephrectomy (-0.16) for the estimated glomerular filtration rate for children with unilateral Wilms tumor. Moreover, higher survivability (0.59) and lesser occurrence of relapse were (-1.0) also reported for cases operated with nephron sparing surgery. Conclusion: The current meta-analysis recommends the use of nephron sparing surgery for unilateral Wilms tumor. The procedure accounts for higher survivability and postoperative renal function and lesser incidence of relapse as compared to radical nephrectomy. Answer: Yes, it appears that we can spare removing the adrenal gland at radical nephrectomy in children with Wilms tumor. Several studies have indicated that adrenal involvement in patients with Wilms tumor is rare and difficult to predict. One study found that only 1 adrenal gland was positive for tumor invasion, and preserving the adrenal gland was not associated with an increased risk of local recurrence (PUBMED:20728101). Another study reported that adrenalectomy was not significantly associated with tumor location, tumor rupture, or local recurrence, and preserving the ipsilateral adrenal gland was not associated with prognosis (PUBMED:35618582). 
Additionally, a report from the Children's Oncology Group found that sparing the adrenal gland during nephrectomy for unilateral Wilms tumor was not associated with a higher incidence of intraoperative spill and was associated with a similar oncologic outcome, suggesting that adrenalectomy should not be considered mandatory during radical nephrectomy for Wilms tumor (PUBMED:23895979). Furthermore, the practice of sparing the adrenal gland has been incorporated into less invasive surgical approaches such as robot-assisted radical nephrectomy (RARN), which has been shown to be feasible in children (PUBMED:37130763). The use of nephron sparing surgery (NSS) has also been recommended over radical nephrectomy for unilateral Wilms tumor, as it is associated with higher survivability, better postoperative renal function, and a lesser incidence of relapse (PUBMED:33014769). In conclusion, the evidence suggests that the adrenal gland can be spared during radical nephrectomy for Wilms tumor in children without compromising the oncological outcome, and this approach may prevent unnecessary adrenalectomy and its associated risks.
Instruction: Does conventional posterior vault remodeling alter endocranial morphology in patients with true lambdoid synostosis? Abstracts: abstract_id: PUBMED:23348266 Does conventional posterior vault remodeling alter endocranial morphology in patients with true lambdoid synostosis? Background: True lambdoid synostosis (TLS) produces a consistent morphology that includes occipital flattening, an ipsilateral occipitomastoid prominence, and a mild contralateral hemifacial deficiency that minimally improves with surgery. Prior studies have demonstrated that dysmorphic middle and posterior cranial fossae contribute to the craniofacial scoliosis characteristic of TLS. We hypothesize that these endocranial features remain after surgery, causing the persistent hemifacial deficiency seen in these patients. Methods: Three-dimensional anthropometric measurements were made on preoperative and postoperative CT scans of patients with TLS (n = 5). Quantitative analysis was performed on the middle cranial fossa area (MCF), anterior cranial fossa area (ACF), posterior fossa deflection angle (PFA), petrous ridge angle (PRA), temperomandibular joint (TMJ) angle, and external auditory meatus angle. The results were analyzed using a 2-tailed t test. Results: Preoperative CT scans were obtained at a mean age of 1.05 years. Patients underwent posterior vault remodeling at a mean age of 1.33 years. Postoperative CT scans were obtained a mean age of 3.14 years. Following surgery, patients with TLS demonstrated an unchanged PFA (P = 0.76) with deviation toward the affected suture. The ACF remained symmetrical (P = 0.212), and the contralateral MCF remained enlarged relative to the ipsilateral side (P = 0.003). The contralateral middle fossa became more retrodisplaced (P = 0.021). The ipsilateral PRA remained unchanged (P = 0.95). Ear position also remained asymmetrical (P = 0.037). The position of the TMJ was not statistically different between sides before (P = 0.24) or after surgery (P = 0.07). Conclusions: Traditional cranioplasty effectively restores calvarial shape, but does not significantly alter the dysmorphic features seen in the endocranium of patients with TLS. abstract_id: PUBMED:34051898 Craniosynostosis: Posterior Cranial Vault Remodeling. Posterior cranial vault distraction osteogenesis is a powerful, reliable, low-morbidity method to achieve intracranial expansion. It is particularly useful in treating turribrachycephaly seen in syndromic craniosynostosis, allowing for gradual expansion of the bone while stretching the soft tissues over several weeks allowing greater volumetric expansion than conventional techniques. Posterior cranial vault distraction osteogenesis constitutes a more gradual remodeling modality, with infrequent complications. As a first step in intracranial expansion, it preserves the frontal cranium for future frontofacial procedures. A drawback is the need for a second surgery to remove the device, and this must be taken into account during counseling. abstract_id: PUBMED:33743044 Posterior vault distraction osteogenesis: indications and expectations. Cranial vault remodeling (CVR) in patients with craniosynostosis serves to correct abnormal skull morphology and increase intracranial volume to prevent or treat pathologic increases in intracranial pressure (Taylor and Bartlett, Plast Reconstr Surg 140: 82e-93e, 2017). 
Distraction osteogenesis is a well-established technique for bony repositioning and growth stimulation in the facial and long bones, in which the gradual separation of bony segments at an osteotomy site results in generation of new bone and subsequent bone lengthening (Greene, 2018). While initially described in the orthopedic literature, the relevance and applicability of distraction osteogenesis to craniofacial surgery have been well studied and are now well established (Steinbacher et al., Plast Reconstr Surg 127: 792-801, 2011). Posterior cranial vault distraction osteogenesis (PVDO) was introduced as a treatment option for cranial vault expansion in patients with craniosynostosis in 2009 by White et al., based upon the premise that posterior vault distraction could provide greater intracranial volume expansion than fronto-orbital advancement and remodeling (FOAR), but that acute posterior cranial vault expansions were limited by the soft tissue envelope of the infant scalp and prone to relapse related to the supine positioning typical of infants (White et al., Childs Nerv Syst 25: 231-236, 2009). Since this introduction, significant evidence has accrued regarding the safety of, and outcomes after, PVDO. PVDO is now known to provide larger increases in intracranial volume in comparison to anterior cranial vault remodeling procedures (Derderian et al., Plast Reconstr Surg 135:1665-1672, 2015) and to provide morphologic improvements in both the posterior and anterior cranial vaults (Goldstein et al., Plast Reconstr Surg 131:1367-1375, 2013). Perioperative major morbidity is comparable to conventional vault remodeling (Taylor et al., Plast Reconstr Surg 129:674e-680e, 2012) and the procedure has been safely applied to patients of various ages with syndromic and non-syndromic craniosynostosis (Zhang et al., J Craniofac Surg 29:566-571, 2018; Li et al., J Craniofac Surg 27:1165-1169, 2016). Many high-volume craniofacial centers now consider PVDO the preferred first operation in infants with syndromic craniosynostosis, and indications for this procedure continue to expand as evidence accrues regarding its utility and safety (Steinbacher et al., Plast Reconstr Surg 127: 792-801, 2011; Swanson et al., Plast Reconstr Surg 137:829e-841e, 2016). abstract_id: PUBMED:30505619 Results and limits of posterior cranial vault expansion by osteotomy and internal distractors. Background: Expanding the posterior cranial vault has become a common procedure in the treatment of complex craniosynostosis. Several techniques are available to remodel the posterior vault. The aim of this study was to analyze posterior vault distraction osteogenesis. Methods: Between 2011 and 2014, 21 children (12 boys) were operated on for a posterior distraction of the cranial vault. The mean age was 8.6 months (minimum, 3 months; maximum, 15 years). Thirteen patients presented with a craniofacial syndrome. Five had already been operated on (two anterior cranial surgery, two suboccipital decompression, and one craniotomy for sagittal synostosis). Raised intracranial pressure (ICP) was present in 6 patients. Seven patients had symptomatic cerebellar tonsillar herniation (TH). Results: In 17 children, 2 lateral distractors were placed; in 3, a 3rd medial distractor was placed; and in 1 child, 4 distractors were implanted. Volumetric analysis based on computed tomography showed a mean increase in volume of 13.9%, measured 117 days later.
After the distraction, symptoms related to raised ICP or TH were improved in all patients; however, radiologically TH was improved at the last follow-up in 54% of the cases. Conclusion: Posterior cranial vault distraction is an efficacious technique to enlarge the posterior skull vault and treat increased ICP. Moreover, it appears to be efficacious in treating TH-related symptoms. abstract_id: PUBMED:34169848 Bilateral Lambdoid and Sagittal Craniosynostosis with Hydrocephalus: ETV, Bifrontal Craniotomy, Anterior Cranial Vault Remodeling, and Posterior Cranial Vault Expansion. Background: Bilateral lambdoid and sagittal craniosynostosis (BLSS), also known as Mercedes Benz pattern craniosynostosis, is a rare form of craniosynostosis. Whole cranial vault remodeling gives the best result. Objective: To describe the role of endoscopic third ventriculostomy (ETV) and whole cranial vault remodeling for treatment of BLSS. Material And Methods: Surgical technique/procedure: A single-sitting three-step surgery was performed for a 7-month-old child who had BLSS and hydrocephalus. First, ETV and anterior cranial vault remodeling were done in the supine position. The posterior cranial vault expansion was done in the prone position. Results And Conclusions: The ventricular volume was reduced at follow-up, and the frontal bossing was also reduced. Whole cranial vault remodeling gives the best result for BLSS. abstract_id: PUBMED:34170377 Fixed posterior cranial vault expansion technique. Background: Different techniques to reshape the posterior skull vault have been developed in the last decades, all sharing the same goals of increasing the skull volume, decreasing the intracranial pressure (ICP), correcting the cranial dysmorphy, and contributing to a better growth of the skull. Though over the last years most refinements in these techniques have focused on the use of hardware such as distractors or springs, the fixed posterior vault expansion remains a valuable procedure for cranial remodeling. Methods: We describe in detail the technique used for fixed posterior vault expansion in children that is applied at the French Referral Center for Craniosynostosis of Lyon, France. Discussion: The fixed posterior vault expansion increases the risk of perioperative complications due to the elevation of the posterior bone flap from the dura but allows an immediate decompression and correction of the shape, simplifying the postoperative course. abstract_id: PUBMED:34537855 Introduction to the focus session on posterior vault surgery in craniosynostosis. Posterior cranial vault remodeling is a surgical procedure that is currently utilized in all craniofacial centers. It has a dual aim, cosmetic and functional: to correct the posterior flattening of the head and to create space for the underlying brain to develop. With the increasing use of the procedure in recent years, numerous variants of the technique have been developed, new hardware has been introduced (distractors and springs), and new potential functional advantages are discussed, namely, those related to the cerebrovascular circulation and the cerebrospinal fluid movements. This focus session has therefore been proposed and organized to provide the state of the art concerning the indications of posterior cranial vault remodeling operations and the analysis of the technical variants that characterize the use of the procedure in various pediatric neurosurgical centers throughout the world.
abstract_id: PUBMED:24106565 CASE REPORT Pan-Suture Synostosis After Posterior Vault Distraction. Objective: Posterior vault remodeling by distraction osteogenesis is a relatively new technique used for initial correction of turribrachycephaly in children with bicoronal craniosynostosis. We present a new potential complication from this procedure: a case of pan-suture synostosis subsequent to posterior vault distraction. Methods: We report an infant girl who presented with bicoronal synostosis in the setting of Saethre-Chotzen syndrome. She underwent posterior vault distraction and was distracted a total of 34 millimeters, with successful osteogenesis at the site. Results: One year postoperatively, the patient was found to have incidental, asymptomatic pan-suture synostosis on computed tomography. Conclusions: To our knowledge, this is the first reported case of delayed craniosynostosis after posterior vault distraction in the literature. The possible pathogenesis and significance of this case are discussed with a review of the current literature. abstract_id: PUBMED:31696291 Ventricular shunt complications in patients undergoing posterior vault distraction osteogenesis. Purpose: The primary objective of this study is to investigate post-operative morbidity and shunt revision rates of patients with shunt-dependent hydrocephalus (SDH) undergoing posterior vault distraction osteogenesis (PVDO) compared to patients undergoing conventional posterior vault reconstruction (PVR). Methods: A retrospective case-controlled cohort analysis of all patients with SDH undergoing PVDO and PVR for syndromic or complex craniosynostosis was performed. Demographic information, perioperative variables, distraction protocols, and shunt-related complications (infection, surgical revision of shunt, increased length of stay (LOS), and readmission within 90 days of surgery) were compared using the appropriate statistical tests. Results: Fourteen patients with ventricular shunts who underwent PVDO and eight patients with shunts who underwent PVR were identified. Shunt-related complication rates were significantly higher with PVDO (n = 5) compared to PVR (n = 0), p = 0.0093. Among the five patients who suffered complications, the most common were shunt infection (n = 4), shunt malfunction (n = 4), and wound infections (n = 3). All patients with complications required additional operations for shunt revision and/or replacement; four patients required multiple takebacks for such procedures, with an average of three additional procedures per patient. Conclusions: In complex or syndromic craniosynostosis patients who have previously undergone ventricular shunting, PVDO is associated with higher shunt-related complications and need for additional procedures when compared to traditional PVR. While the benefits of PVDO in the treatment of syndromic craniosynostosis are well documented, the risks of PVDO in the face of a VP shunt must be considered. Further investigation into patient-specific risk factors and risk reduction strategies is warranted. abstract_id: PUBMED:36186267 Posterior cranial vault distraction osteogenesis: A systematic review. Purpose: Posterior cranial vault distraction osteogenesis (PCVDO) has become the treatment of choice for specific indications within 12 years of its introduction in 2009. It is used to treat or prevent intracranial hypertension secondary to craniosynostosis.
However, there is still a lack of evidence describing this procedure's safety concerning the osteotomy line being close to the torcula. This systematic review aims to generate evidence regarding the safety of this new technique. Materials And Method: A systematic search of the PubMed and Embase databases was done without any time limit up to the last search date of July 31, 2021. Articles focused on posterior cranial vault distraction osteogenesis were shortlisted using defined inclusion criteria. In addition, complications, volume gain, and distraction protocols in patients with craniosynostosis were assessed. Results: The review included 11 articles from 11 different centres. Of the total 241 patients, 51 were non-syndromic patients, and 190 were syndromic patients. There were 242 procedures done on 241 patients. The proportion of patients with complications was 30%. Wound infection was the most commonly reported complication. PCVDO resulted in 20-25% volume gain and successfully prevented increased intracranial pressure (ICP). Conclusion: The systematic review identified a 30.6% incidence of complications in PCVDO. No mortality was linked directly to PCVDO. The authors identified the evolution of many unique vectors and osteotomy designs customized for patients and practices for safe surgery. The effects of PCVDO on the anterior cranial fossa remain a topic of interest and require observational studies for better understanding. Answer: Conventional posterior vault remodeling does not significantly alter the dysmorphic features seen in the endocranium of patients with true lambdoid synostosis (TLS). A study that analyzed preoperative and postoperative CT scans of patients with TLS found that following surgery, the posterior fossa deflection angle (PFA) remained unchanged with deviation toward the affected suture, the anterior cranial fossa (ACF) remained symmetrical, and the contralateral middle cranial fossa (MCF) remained enlarged relative to the ipsilateral side. Additionally, the contralateral middle fossa became more retrodisplaced, the ipsilateral petrous ridge angle (PRA) remained unchanged, and ear position also remained asymmetrical. The position of the temporomandibular joint (TMJ) was not statistically different between sides before or after surgery. These findings suggest that traditional cranioplasty effectively restores calvarial shape but does not significantly alter the endocranial dysmorphology characteristic of TLS (PUBMED:23348266).
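The comparisons summarized in the answer above rest on two-tailed t tests of the same measurement taken before and after surgery in a small cohort. As a minimal, purely illustrative sketch of that calculation (the angle values and variable names below are invented, not the study's data), a paired test in Python could look like this:

from scipy import stats

# Hypothetical pre- and postoperative posterior fossa deflection angles (degrees)
# for five patients; the numbers are placeholders for illustration only.
pre_op = [12.1, 10.4, 13.0, 11.2, 12.8]
post_op = [11.8, 10.9, 12.6, 11.5, 12.4]

t_stat, p_value = stats.ttest_rel(pre_op, post_op)  # paired, two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# A large p-value (for example, P = 0.76 for the PFA in the study) is read as
# "no detectable change" in that measurement after surgery.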
Instruction: Are outcome data regarding the survivors of neonatal care available from routine sources? Abstracts: abstract_id: PUBMED:9462191 Are outcome data regarding the survivors of neonatal care available from routine sources? Aim: To determine whether existing information and surveillance systems can be used to provide follow up data on groups of infants at increased risk of disability--for example, the survivors of neonatal intensive care. Methods: A survey was made of maternity, neonatal, and community child health information systems and surveillance programmes in the Trent Regional Health Authority. Children known to have received neonatal intensive care in Trent between 1 August 1992 and 31 July 1993, and a random sample of normal children in two health districts (data quality check) were included. A data linkage study was made to determine whether follow up information about a random sample of infants, known to be at increased risk of poor outcome, could be identified on community child health databases. Two widely accepted datasets (birth and 2 years) were used as standards for this exercise. The quality of data was audited. Results: All clinical items of the birth minimum dataset were routinely recorded by at least one agency in each health district in Trent. Of the descriptive items, only the mother's age on leaving full time education was not collected. At 2 years, all clinical items were collected as part of the routine surveillance programme, but data were recorded using a system which severely limited interpretation. Data quality, in terms of the number of errors introduced at data entry, was very good with only 1.1% of the check items (4/368) incorrectly recorded. Only two districts had organised electronic transfer of data between maternity, neonatal, and community child health systems. The mother's NHS number, although available, was not routinely recorded by any system. The NHS number of the infant was routinely collected by six out of 12 community paediatric services. Data linkage was attempted in six districts with appropriate community child health databases. Just over 70% of the intensive care sample was successfully linked with follow up information on child health systems. Conclusions: The existing programmes for routine child surveillance could provide outcome data for high risk groups of infants, such as the survivors of neonatal intensive care. However, the present coding system used for data entry is inadequate. Furthermore, rates of identification, without the use of a unique identifier (NHS number) for each subject, are currently insufficient for monitoring health status in later life. abstract_id: PUBMED:28946862 Long-term outcome in survivors of neonatal tetanus following specialist intensive care in Vietnam. Background: Neonatal tetanus continues to occur in many resource-limited settings but there are few data regarding long-term neurological outcome from the disease, especially in settings with critical care facilities. Methods: We assessed long-term outcome following neonatal tetanus in infants treated in a pediatric intensive care unit in southern Vietnam. Neurological and neurodevelopmental testing was performed in 17 survivors of neonatal tetanus and 18 control children from the same communities using tools previously validated in Vietnamese children. Results: The median age of children assessed was 36 months. 
Eight neonatal tetanus survivors and 9 community control cases aged < 42 months were tested using the Bayley III Scales of Infant and Toddler Development (Bayley III-VN) and 8 neonatal tetanus survivors and 9 community controls aged ≥42 months were tested using the Movement Assessment Battery for Children. No significant reductions in growth indices or neurodevelopmental scores were shown in survivors of neonatal tetanus compared to controls, although there was a trend towards lower scores in neonatal tetanus survivors. Neurological examination was normal in all children except for two neonatal tetanus survivors with perceptive deafness and one child with mild gross motor abnormality. Neonatal tetanus survivors who had experienced severe disease (Ablett grade ≥ 3) had lower total Bayley III-VN scores than those with mild disease (15 (IQR 14-18) vs 24 (IQR 19-27), p = 0.05) with a significantly lower cognitive domain score (3 (IQR 2-6) severe disease vs 7 (IQR 7-8) mild disease, p = 0.02). Conclusions: Neonatal tetanus is associated with long-term sequelae in those with severe disease. In view of these findings, prevention of neonatal tetanus should remain a priority. abstract_id: PUBMED:25441858 The state of routine and emergency obstetric and neonatal care in Southern Province, Zambia. Objective: To evaluate the capacity of health facilities in Southern Province, Zambia, to perform routine obstetric care and emergency obstetric and neonatal care (EmONC). Methods: Surveys were completed at 90 health centers and 10 hospitals between September 1, 2011, and February 28, 2012. An expanded set of signal functions for routine care and EmONC was used to assess the facilities' capacity to provide obstetric and neonatal care. Results: Interviews were completed with 172 health workers. Comprehensive EmONC was available in only six of 10 hospitals; the remaining four hospitals did not perform all basic EmONC signal functions. None of the 90 health centers performed the basic set of EmONC signal functions. Performance of routine obstetric care functions, health worker EmONC training, and facility infrastructure and staffing varied. Conclusion: Assessment of the indicators for routine care revealed that several low-cost interventions are currently underused in Southern Province. There is substantial room for improvement in emergency and routine obstetric and neonatal care at the surveyed facilities. Efforts should focus on improving infrastructure and supplies, EmONC training, and adherence to the UN guidelines for routine and emergency obstetric care. abstract_id: PUBMED:14757610 "Benign" extra-axial fluid in survivors of neonatal intensive care. Objectives: To identify the prevalence of "benign" extra-axial fluid (BEAF), the risk factors associated with this condition, and the natural history in "graduates" of neonatal intensive care. Design: Cross-sectional study. Setting: Neonatal follow-up clinic at a tertiary care center. Patients: Seventy-seven infants with a head circumference greater than the 95th percentile by growth percentiles from either the National Center for Health Statistics or the Infant Health and Development Program growth percentile graphs who attended the Neonatal Follow-up Program at The Children's Hospital of Philadelphia between January 1, 1998, and December 31, 2001. Main Outcome Measures: Bronchopulmonary dysplasia, extracorporeal membrane oxygenation; development at 18 to 24 months.
Results: There were 26 infants (34%) in the BEAF group, 43 (56%) in the control group without extra-axial fluid, and 8 (10%) in the hydrocephalus group. Compared with the control group, infants with BEAF were more likely to have bronchopulmonary dysplasia or to require use of extracorporeal membrane oxygenation in the immediate neonatal period (risk ratio, 6.1; 95% confidence interval, 1.5-29.8). Measurements of head circumference in the BEAF group showed rapid growth between 3 and 12 months, followed by growth greater than and parallel to the 95th percentile. Head circumference measurements in the control group showed continued growth along the 95th percentile for age. Infants with BEAF were more likely than controls to develop cerebral palsy (risk ratio, 9.9; 95% confidence interval, 1.3-77.9) and to have evidence of developmental delay at adjusted ages 12 and 18 to 24 months. Conclusion: The presence of extra-axial fluid in macrocephalic survivors of neonatal intensive care is associated with an increased risk of developmental delay and cerebral palsy compared with control macrocephalic survivors. abstract_id: PUBMED:7472846 Prediction of outcome at school entry in neonatal intensive care unit survivors, with use of clinical and electrophysiologic techniques. Objective: To determine the predictive value of multimodality evoked potentials as well as the neonatal neurobehavioral assessment in neonatal intensive care unit survivors at school entry. Study Design: In this prospective study, healthy (n = 24) and high-risk newborn infants (n = 78) were assessed in the newborn period with auditory brain-stem responses (ABRs) and somatosensory evoked potentials (SEPs), as well as the Einstein Neonatal Neurobehavioral Assessment Scale (ENNAS). Healthy and high-risk newborn infants were assessed in a blind fashion at 1, 3, and 5 years of age by a psychologist and a pediatric neurologist. Of those healthy (all 24) and high-risk newborn infants (72/78) with a neonatal ABR, SEP, or both, 62.5% were assessed at 5 years of age with the Wechsler Preschool and Primary Scale of intelligence, the Beery-Buktenica Test of Visual-Motor Integration, the Griffiths Locomotor Subscale, and neurologic examination. Chi-square analyses were carried out on neonatal and outcome measures, and sensitivity, specificity, and predictive values for each of the neonatal tests were ascertained. Results: The ENNAS and the ABR had good negative predictive value for cognitive (85.7% to 93.3%), locomotor (83.3%), and visual-motor (91.4% to 100%) performance. The ABR had good specificity, whereas the ENNAS was more sensitive. The SEP was an excellent prognostic tool, with high sensitivity (100%) and specificity (80% to 81.3%) for motor as well as cognitive domains. All infants with normal SEPs had favorable outcomes, whereas those with absent potentials did poorly. The SEP abnormalities were associated with findings on neurologic examination in all cases. Conclusions: Normal neonatal evoked potentials and ENNAS are associated with favorable outcomes. The SEP most accurately predicts neurodevelopmental status at school entry. abstract_id: PUBMED:23337218 Core domains for a person-focused outcome measurement system in cancer (PROMS-Cancer Core) for routine care: a scoping review and Canadian Delphi Consensus. 
Objectives: The objectives of this scoping review study were 1) to identify core domains and dimensions for inclusion in a person-focused and self-reported outcome measurement system for cancer and 2) to reach consensus among key stakeholders including cancer survivors on the relevance, acceptability, and feasibility of a core outcome set for collection in routine clinical care. Methods: Following a scoping review of the literature, a Rand Delphi consensus method was used to engage key interdisciplinary decision makers, clinicians, and cancer survivors in reaching consensus on a core patient-reported outcome domain taxonomy and outcome measures. Results: Of the 21,900 citations identified in the scoping review, 1,503 citations were included in the full article review (380 conceptual articles, 461 psychometric evaluation articles, and 662 intervention studies) and subjected to data abstraction and mapping. Final consensus was reached on 20 domains, related subdimensions, and 45 self-report measures considered relevant and feasible for routine collection in cancer by the Delphi panel (PROMS-Cancer Core). Conclusions: Standardization of patient-reported outcome data collection is key to assessing the impact of cancer and treatment on the person for population comparison and monitoring the quality of clinical care. The PROMS-Cancer Core taxonomy of domains and outcome measures can be used to guide the development of a patient-reported outcome information system for cancer. abstract_id: PUBMED:7794616 Another outcome of neonatal intensive care: first year mortality and hospital morbidity. Objective: To determine first year mortality and hospital morbidity after neonatal intensive care. Methodology: Cohort study of 6077 surviving infants inborn in one regional hospital in 1988. Nine hundred and eighty-eight received neonatal intensive care and 103 were very low birthweight (VLBW). Results: For infants who required care in the neonatal intensive care unit (NICU), the relative risk of dying before their first birthday was 3.6 (95% confidence intervals [CI] 1.5-8.8). This increased risk was associated with low birthweight (LBW) rather than requirement for NICU care. Of all inborn survivors, 10.4% were readmitted to hospital in the first year and 2.4% more than once. The readmission rate was 20% for NICU survivors and 30% for VLBW infants. The risk of hospitalization was independently associated both with NICU admission (odds ratio 2.3, CI 1.9-2.9) and with VLBW (OR 1.8, CI 1.1-3.0). The NICU survivors also had multiple admissions and prolonged hospital stays. Conclusions: Both low birthweight and neonatal illness requiring intensive care are important indicators of continuing medical vulnerability over the first year of life. abstract_id: PUBMED:36717946 Underestimating attacks: comparing two sources of publicly-available data about attacks on health care in 2017. Background: Attacks on health care represent an area of growing international concern. Publicly available data are important in documenting attacks, and are often the only easily accessible data source. Data collection processes about attacks on health and their implications have received little attention, despite the fact that datasets and their collection processes may result in differing numbers. Comparing two separate datasets compiled using publicly-available data revealed minimal overlap. This article aims to explain the reasons for the lack of overlap, to better understand the gaps and their implications. 
Methods: We compared the data collection processes for datasets comprised of publicly-reported attacks on health care from the World Health Organization (WHO) and Insecurity Insight's Security in Numbers Database (SiND). We compared each individual event to compile a comparable dataset and identify unique and matched events in order to determine the overlap between them. We report descriptive statistics for this comparison. Results: We identified a common dataset of 287 events from 2017, of which only 33 appeared in both datasets, resulting in a mere 12.9% (n = 254) overlap. Events affecting personnel and facilities appeared most often in both, and 22 of 31 countries lacked any overlap between datasets. Conclusions: We conclude that the minimal overlap suggests significant underreporting of attacks on health care, and furthermore, that dataset definitions and parameters affect data collection. Source variation appears to best explain the discrepancies and closer comparison of the collection processes reveal weaknesses of both automated and manual data collection that rely on hidden curation processes. To generate more accurate datasets compiled from public sources requires systematic work to translate definitions into effective online search mechanisms to better capture the full range of events, and to increase the diversity of languages and local sources to better capture events across geographies. abstract_id: PUBMED:33127738 Optimal outcome measures for a trial of not routinely measuring gastric residual volume in neonatal care: a mixed methods consensus process. Background: Routine measurement of gastric residual volume to guide feeding is widespread in neonatal units but not supported by high-quality evidence. Outcome selection is critical to trial design. Objective: To determine optimal outcome measures for a trial of not routinely measuring gastric residual volume in neonatal care. Design: A focused literature review, parent interviews, modified two-round Delphi survey and stakeholder consensus meeting. Participants: Sixty-one neonatal healthcare professionals participated in an eDelphi survey; 17 parents were interviewed. 19 parents and neonatal healthcare professionals took part in the consensus meeting. Results: Literature review generated 14 outcomes, and parent interviews contributed eight additional outcomes; these 22 outcomes were then ranked by 74 healthcare professionals in the first Delphi round where four further outcomes were proposed; 26 outcomes were ranked in the second round by 61 healthcare professionals. Five outcomes were categorised as 'consensus in', and no outcomes were voted 'consensus out'. 'No consensus' outcomes were discussed and voted on in a face-to-face meeting by 19 participants, where four were voted 'consensus in'. The final nine consensus outcomes were: mortality, necrotising enterocolitis, time to full enteral feeds, duration of parenteral nutrition, time feeds stopped per 24 hours, healthcare-associated infection; catheter-associated bloodstream infection, change in weight between birth and neonatal discharge and pneumonia due to milk aspiration. Conclusions And Relevance: We have identified outcomes for a trial of no routine measurement of gastric residual volume to guide feeding in neonatal care. This outcome set will ensure outcomes are important to healthcare professionals and parents. abstract_id: PUBMED:8736401 Evoked potentials as predictors of outcome in neonatal intensive care unit survivors: review of the literature. 
Neonatal intensive care unit survivors are at substantial risk for a range of neurodevelopmental sequelae, and therefore a variety of clinical diagnostic techniques have been evaluated as predictors of outcome. We summarize the prognostic value of evoked potentials in newborns at risk. A review of the literature reveals that brainstem conduction abnormalities in auditory brainstem evoked potentials are associated with neuromotor impairment; however, there are many false negative studies. Visual evoked potentials are highly accurate in predicting neurologic deficits in early childhood in asphyxiated term neonates. Sensitivity and specificity are consistently high for somatosensory evoked potentials in term newborns; however, correlation with outcome in premature infants is controversial. Several studies have compared neonatal findings on neuroimaging studies and evoked potentials, and concordant results between these two tests are highly predictive. However, neurologic sequelae often can most accurately be predicted by visual or somatosensory evoked potentials. Evoked potentials may therefore be a useful adjunct to the clinical investigation and prognostication of outcome in the high-risk newborn. Answer: Yes, outcome data regarding the survivors of neonatal care are available from routine sources. A survey of maternity, neonatal, and community child health information systems and surveillance programs in the Trent Regional Health Authority indicated that all clinical items of the birth minimum dataset were routinely recorded by at least one agency in each health district in Trent. At 2 years, all clinical items were collected as part of the routine surveillance program. However, the present coding system used for data entry was found to be inadequate, and rates of identification without the use of a unique identifier (NHS number) for each subject were insufficient for monitoring health status in later life. Despite these limitations, over 70% of the intensive care sample was successfully linked with follow-up information on child health systems (PUBMED:9462191). This suggests that while routine sources can provide outcome data for high-risk groups of infants, such as survivors of neonatal intensive care, improvements in data collection and linkage methods are necessary to enhance the monitoring and evaluation of health outcomes for these individuals.
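Several of the abstracts above report sensitivity, specificity, and predictive values for neonatal tests such as evoked potentials. As a worked illustration of how those quantities are derived from a 2x2 table of test result versus outcome, the following Python sketch uses invented counts; the function name and numbers are hypothetical and are not taken from the cited studies.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Basic accuracy metrics for a binary test against a binary outcome."""
    return {
        "sensitivity": tp / (tp + fn),  # abnormal test among infants with a poor outcome
        "specificity": tn / (tn + fp),  # normal test among infants with a good outcome
        "ppv": tp / (tp + fp),          # probability of a poor outcome given an abnormal test
        "npv": tn / (tn + fn),          # probability of a good outcome given a normal test
    }

# Invented example: 8 true positives, 3 false positives, 2 false negatives, 27 true negatives.
print(diagnostic_metrics(tp=8, fp=3, fn=2, tn=27))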
Instruction: Prolonged perioperative surgical prophylaxis within European hospitals: an exercise in uncertainty avoidance? Abstracts: abstract_id: PUBMED:24225527 Prolonged perioperative surgical prophylaxis within European hospitals: an exercise in uncertainty avoidance? Objectives: Socio-cultural factors have been hypothesized to be important drivers for inappropriate antibiotic prescribing in ambulatory care. This study sought to assess any potential role in perioperative surgical prophylaxis (PAP) administered for >24 h (PAP > 24). Within hospitals, PAP continues to be administered for longer than 24 h, despite unequivocal evidence of ineffectiveness beyond this period. A recently published European Centre for Disease Prevention and Control (ECDC) point prevalence survey (PPS) has reported that in 70% of participating countries, PAP > 24 was administered in more than half of the surgical operations surveyed. Methods: Correlation and simple linear regression modelling was performed using the PAP > 24 proportions for the countries in the ECDC PPS report and the respective scores for the cultural construct of uncertainty avoidance (UA), as detailed by Hofstede. Results: Pearson correlation analysis produced a moderately strong coefficient (r) of 0.50 (95% CI 0.16-0.74; P = 0.007). Simple regression yielded a model of PAP > 24 = 29.87 + 0.40 UA (R² = 0.25; P = 0.007). Conclusions: Cultural factors, namely UA, appear to be an important driver for PAP > 24. Any strategy aimed at improving prolonged PAP should be informed by clear knowledge of local socio-cultural barriers, so as to achieve the most successful intervention possible. abstract_id: PUBMED:16770607 Perioperative antibiotic prophylaxis. Use in orthopaedics Perioperative antibiotic prophylaxis in orthopaedics is only indicated if a foreign body such as a prosthetic joint remains in the operation site. For this indication, first- or second-generation cephalosporins should be preferred. It is essential to administer the antibiotic within 30 min before incision. In the case of an extended intervention (more than 3 h) a repeated dose should be considered. It is evidence-based that prolonged perioperative antibiotic prophylaxis (more than 24 h) is not beneficial. To avoid the emergence of resistance or hygienic insufficiencies, surveillance of surgical site infections should be enforced. Optimal conditions are necessary to achieve a rate of surgical site infection in orthopaedics below 1% in patients with a normal infection risk profile. abstract_id: PUBMED:22020573 Perioperative antimicrobial prophylaxis Perioperative antimicrobial prophylaxis (AMP) is an important but not the only procedure to prevent surgical site infections. The effectiveness of AMP to prevent surgical site infections has been proven in numerous studies during the last decades and is part of national and international guidelines. The choice of the antibiotic as well as the duration, time point and mode of application strongly influence the effectiveness of the prophylaxis. This article provides an overview of recommended antibiotics for AMP and their activity against the expected bacterial pathogens in elective surgery. Furthermore, the current spectrum of microorganisms most frequently isolated from surgical site infections and alternative antibiotic strategies are discussed.
abstract_id: PUBMED:23650676 Perioperative antibiotic prophylaxis in cancer surgery The effectiveness of perioperative antibiotic prophylaxis in reducing surgical site infections has been demonstrated. Its utility is recognized for clean-contaminated procedures and some clean surgeries. Prophylactic antibiotics are intended to cover the most common germs at the surgical site; first- and second-generation cephalosporins are the most commonly used. For optimal prophylaxis, an antibiotic with a targeted spectrum should be administered at sufficiently high concentrations in serum, tissue, and the surgical wound during the time that the incision is open and at risk of bacterial contamination. The infusion of the first dose of antimicrobial should begin within 60 min before surgical incision, and prophylaxis should be discontinued within 24 h after the end of surgery. The prolonged use of antibiotic prophylaxis leads to emergence of bacterial resistance and high costs. The principles of antimicrobial prophylaxis in cancer surgery are the same as those described for general surgery; it is recommended to follow and comply with the standard criteria. In mastectomies and clean head and neck surgery there are specific recommendations that differ from non-cancer surgery. In the case of very extensive surgeries, such as pelvic surgery or bone surgery with reconstruction, extension of antibiotics for 48-72 h should be considered. abstract_id: PUBMED:19504054 Perioperative antibiotic prophylaxis in dermatologic surgery. Update 2009 As specific data and studies concerning perioperative antibiotic prophylaxis in dermatologic surgery are scarce, recommendations to date mainly draw on findings from other medical disciplines and use them analogously. Three main factors are essential in the decision process: 1st How contaminated is the wound (or will it be)? 2nd Where is the lesion located and what kind of procedure is intended? 3rd Is the patient among the highest risk group for endocarditis? An algorithm is presented incorporating these key features. The preferred antibiotic for perioperative prophylaxis is a first- or second-generation oral cephalosporin, a beta-lactamase-resistant penicillin (isoxazolyl penicillin) or amoxicillin/clavulanic acid. The prophylaxis is administered in the highest possible single dose within 2 hours before and 3 hours after surgery. Penicillin or cephalosporin can be substituted with clindamycin or erythromycin in case of known drug allergies. There are insufficient data on the pros and cons of preoperative hair removal. Should hair removal be necessary, clipping is preferred to shaving. abstract_id: PUBMED:31560081 Perioperative antibiotic prophylaxis in dermatosurgery-2019 status quo Perioperative antibiotic prophylaxis (PAP) is defined as a short-term, generally single-shot administration of an antibiotic prior to a surgical intervention. The main goal of PAP is to reduce postoperative surgical site infections. In addition, PAP is administered to avoid development of bacterial endocarditis in patients who are at high risk. Given the increasing prescription of antibiotics by dermatologists and the rising emergence of resistant bacterial strains, rational use of PAP in dermatosurgery is essential. Thus, identification of risk factors determined by the patient, the type of surgery or the location will help to identify those patients requiring PAP.
Moreover, utilizing established criteria will avoid the unnecessary administration of PAP, which only creates a false sense of safety. abstract_id: PUBMED:31068545 Questionnaire Survey Regarding Prevention of Surgical Site Infection after Neurosurgery in Japan: Focus on Perioperative Management and Administration of Surgical Antibiotic Prophylaxis. Various guidelines regarding surgical site infection (SSI) have recently been established. However, perioperative management of the wound and use of antibiotics have never been standardized completely in departments of neurosurgery in Japan. This survey investigated current perioperative management and administration of surgical antibiotic prophylaxis (SAP) and compared these with guidelines intended to reduce SSI associated with neurosurgery in Japan. Questionnaires were distributed to members of the conference on Neurosurgical Techniques and Tools and the Japan Society of Aesthetic Neurosurgery via the internet. The questionnaires asked about methods of perioperative management. A total of 255 members returned answers to the questionnaires. The questionnaires revealed that partial or no removal of the hair and hair shampooing on the day before surgery were performed in 96.1% and 88.1% of institutes, respectively, following the World Health Organization (WHO) guidelines. Rates of SAP use just before, during, and after surgery were 65.0%, 86.2%, and 63.0%, respectively. The postoperative period of intravenous SAP use was prolonged beyond 24 h in 80.0%, against the recommendation of the WHO. Perioperative management of wounds and use of SAP vary among institutes in Japan, and some procedures were far different from the WHO guidelines. Japanese neurosurgeons should take note of the prolonged SAP use and comply with the WHO guidelines. abstract_id: PUBMED:30607991 Is avoidance of illness uncertainty associated with distress during oncology treatment? A daily diary study. Objective: The aim of this study was to explore the daily relationship between illness uncertainty, avoidance of uncertainty, well-being and treatment-related distress among patients with cancer receiving treatment with curative intent. It was hypothesised that daily illness uncertainty, daily avoidance of uncertainty and daily treatment-related distress would be negatively associated with daily well-being. It was also hypothesised that daily illness uncertainty and daily avoidance of uncertainty would be positively associated with daily treatment-related distress. Design: Thirty-one patients receiving oncology treatment with curative intent completed a daily diary for seven consecutive days. Data were analysed using multilevel modelling. Main Outcome Measures: Daily illness uncertainty, avoidance of illness uncertainty, treatment-related distress and well-being. Results: As hypothesised, on days when patients with cancer reported heightened treatment-related distress they experienced diminished well-being. On days when patients reported more experiential avoidance of illness uncertainty, they also experienced heightened levels of treatment-related distress. No other daily associations were significant. Conclusion: These findings indicate that patients with cancer experience day-to-day fluctuations in distress and well-being throughout oncology treatment. Avoidance of illness uncertainty-related thoughts and/or emotions is associated with daily distress, but not daily well-being.
abstract_id: PUBMED:25230809 Perioperative antibiotic prophylaxis for major urological interventions Background: With the emergence of multidrug-resistant bacteria reaching alarming levels and the year-long developmental void of new antimicrobial drugs, the rational and appropriate use of antibiotics is of paramount importance. The number of surgical interventions is still increasing so that surgical site infections represent the most frequent form of nosocomial infection. Method: Fundamental hygiene measures as well as aseptic and tissue-preserving surgical techniques are supported by perioperative antibiotic prophylaxis to prevent surgical site infections. This is accomplished by a single short-term antibiotic administration at the beginning of or at the latest during the operative intervention. Because it contributes to the total consumption of antibiotics and, when misused, to the development of multidrug resistance, appropriate and controlled perioperative antibiotic prophylaxis is mandatory. Indication: The indications for perioperative antibiotic prophylaxis depend on the type of surgical procedure, the classification of operative wounds as well as individual patient and operation-related risk factors. abstract_id: PUBMED:34908856 Surgical Antibiotic Prophylaxis in an Era of Antibiotic Resistance: Common Resistant Bacteria and Wider Considerations for Practice. The increasing incidence of antimicrobial resistance (AMR) presents a global crisis for healthcare, with longstanding antimicrobial agents becoming less effective at treating and preventing infection. In the surgical setting, antibiotic prophylaxis has long been established as routine standard of care to prevent surgical site infection (SSI), which remains one of the most common hospital-acquired infections. The growing incidence of AMR increases the risk of SSI complicated with resistant bacteria, resulting in poorer surgical outcomes (prolonged hospitalisation, extended durations of antibiotic therapy, higher rates of surgical revision and mortality). Despite these increasing challenges, more data are required on approaches at the institutional and patient level to optimise surgical antibiotic prophylaxis in the era of antibiotic resistance (AR). This review provides an overview of the common resistant bacteria encountered in the surgical setting and covers wider considerations for practice to optimise surgical antibiotic prophylaxis in the perioperative setting. Answer: The study in question (PUBMED:24225527) suggests that cultural factors, specifically uncertainty avoidance (UA), may be a significant driver for the continuation of perioperative surgical prophylaxis (PAP) for more than 24 hours within European hospitals. Despite clear evidence that PAP beyond 24 hours is ineffective, the practice persists: in 70% of the countries participating in a European Centre for Disease Prevention and Control (ECDC) point prevalence survey, PAP was reportedly administered for more than 24 hours in over half of the surgical operations surveyed. The study found a moderately strong correlation between the proportions of PAP > 24 hours and the respective scores for UA, indicating that socio-cultural barriers may need to be addressed to improve the situation. This finding aligns with the evidence-based consensus that prolonged perioperative antibiotic prophylaxis (more than 24 hours) is not beneficial and may contribute to the emergence of bacterial resistance (PUBMED:16770607; PUBMED:22020573; PUBMED:23650676; PUBMED:19504054; PUBMED:31560081).
The guidelines recommend administering the first dose of prophylactic antibiotics within 60 minutes before surgical incision and discontinuing within 24 hours after surgery to prevent surgical site infections effectively without promoting bacterial resistance (PUBMED:23650676). However, despite these guidelines, a survey in Japan revealed that the postoperative period of use of intravenous surgical antibiotic prophylaxis often extended beyond 24 hours, contrary to WHO recommendations, indicating a need for compliance with established guidelines (PUBMED:31068545). The issue of prolonged perioperative antibiotic prophylaxis is not unique to European hospitals, as it is also observed in other parts of the world, suggesting a broader challenge in changing established practices in the face of uncertainty and the potential for cultural factors to influence medical decision-making (PUBMED:24225527; PUBMED:31068545). In conclusion, prolonged perioperative surgical prophylaxis within European hospitals appears to be an exercise in uncertainty avoidance, with cultural factors playing a role in the continuation of this practice despite evidence-based guidelines suggesting otherwise.
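As a concrete illustration of the kind of analysis described in PUBMED:24225527, the sketch below computes a Pearson correlation and a simple linear regression between country-level uncertainty-avoidance scores and the proportion of operations with prophylaxis given for more than 24 hours. The data points are invented placeholders rather than the ECDC survey values, and the variable names are assumptions for illustration only.

import numpy as np
from scipy import stats

ua_score = np.array([35, 48, 60, 70, 86, 95])     # hypothetical Hofstede UA scores
pap_over_24 = np.array([40, 52, 55, 58, 65, 72])  # hypothetical % of operations with PAP > 24 h

r, p = stats.pearsonr(ua_score, pap_over_24)
slope, intercept, r_value, p_value, stderr = stats.linregress(ua_score, pap_over_24)

print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"Fitted model: PAP>24 = {intercept:.1f} + {slope:.2f} * UA (R^2 = {r_value ** 2:.2f})")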
Instruction: N-acetyltransferase 2 (NAT2) genotype and colorectal carcinoma: risk variability according to tumour site? Abstracts: abstract_id: PUBMED:11099063 N-acetyltransferase 2 (NAT2) genotype and colorectal carcinoma: risk variability according to tumour site? Background: Dietary heterocyclic aromatic amines (HAAs) are members of a family of chemicals that comprise highly mutagenic compounds related to colon cancer. The polymorphic N-acetyltransferase 2 enzyme (NAT2, E.C. 2.3.1.5) plays a key role in the transformation of HAAs to ultimate carcinogens. NAT2 enzyme activity is expressed in a genotype-dependent manner in colon epithelium. Therefore, local activation of HAAs in the colon, and hence an increased risk of developing colon cancer, is likely to be related to high NAT2 enzyme activity. This study is aimed at analysing the association between genotypes leading to high NAT2 activity and colorectal cancer risk. Methods: Genomic DNA from 120 colorectal cancer patients and 258 healthy individuals was analysed for enzyme-inactivating mutations at the coding region of the NAT2 gene by means of a mutation-specific polymerase chain reaction. Results: Among patients with sigmoid colon cancer, a significant excess of individuals with genotypes leading to high NAT2 activity was observed as compared both to controls and to the rest of patients with colorectal cancer (P < 0.05). Conclusions: Our findings, which require independent confirmation, suggest that the NAT2 genotype constitutes a secondary risk factor for developing sigmoid colon cancer. abstract_id: PUBMED:10720751 N-Acetyltransferase genetics and their role in predisposition to aromatic and heterocyclic amine-induced carcinogenesis. N-Acetyltransferases (EC 2.3.1.5) are important in both the activation and deactivation of aromatic and heterocyclic amine carcinogens. Two N-acetyltransferase isozymes (NAT1 and NAT2) encoded by NAT1 and NAT2, respectively, have been identified. Both NAT1 and NAT2 exhibit genetic polymorphisms, and recent investigations have increased our understanding of the relationship between genotype and phenotype. Several studies have shown a role for NAT1 and NAT2 acetylation polymorphisms in cancer risk in human populations, but the findings have been inconsistent. These findings may relate to variability in carcinogen exposures and to differences in acetylator genotype/phenotype determinations. abstract_id: PUBMED:8781741 Predictors of N-acetyltransferase activity: should caffeine phenotyping and NAT2 genotyping be used interchangeably in epidemiological studies? To determine whether NAT2 genotyping could be used interchangeably with caffeine phenotyping in assessing N-acetyltransferase activity in epidemiological studies, sources of interindividual variability in N-acetyltransferase activity were assessed among 90 subjects of various ethnic backgrounds in Hawaii. Forty-three subjects were patients with in situ colorectal cancer treated by polypectomy, and 47 were healthy population controls. Subjects were administered a lifestyle questionnaire and were evaluated for N-acetyltransferase activity by caffeine phenotyping. NAT2 genotype was also assessed by PCR amplification of peripheral leukocyte DNA for the M1, M2, and M3 variant alleles. Fifty-four percent of the overall variation in acetylation activity was explained by the three genotype categories (homozygous variant, heterozygous, and homozygous wild-type).
This proportion was reduced to 42% when genotype was modeled using only two categories ("slow" being homozygous variant; "rapid" being all others). Use of gout medications (probenecid or allopurinol), consumption of heavily browned fish, and P450IA2 activity (also measured by caffeine phenotyping), together explained another 11% of the variance. No association was found between acetylation activity and sex; race; age; education; smoking; physical activity; weight; consumption of coffee, alcohol, red meat, processed meat, and cruciferous vegetables; or use of menopausal estrogens, after taking genotype into account. Results were similar for colorectal cancer patients and controls. Considerable variation in acetylation activity was observed within the homozygous wild-type group. This study suggests that the use of genotyping, instead of phenotyping, to assess the association of acetylation with cancer risk is unlikely to introduce major misclassification or bias, especially when the three genotype categories are modeled and the sample size is large. However, when the rapid acetylation phenotype is the at-risk group (e.g., when studying colon cancer), phenotyping appears judicious given the variability in acetylation activity within this group. abstract_id: PUBMED:9301503 N-acetyltransferase 2 genotype in colorectal cancer and selective gene retention in cancers with chromosome 8p deletions. Background: Genetic polymorphisms in N-acetyltransferase (NAT2) can change the normally fast acetylation of substrates to slow acetylation, and have been associated with the development of some cancers. The NAT2 locus may also suffer dysregulation during cancer progression, as the gene resides on chromosome 8p22, a region which is frequently deleted in colorectal cancer. Subjects And Methods: A polymerase chain reaction based method was used to determine NAT2 genotype in 275 patients with colon cancer and 343 normal control DNAs. Within the cancer group, 65 cases known to contain deletions in chromosome 8p were examined for loss of heterozygosity at the NAT2 locus. Results: Overall, there was no statistical difference in frequency or distribution of NAT2 alleles and genotype between colon cancer and control groups. There was a significant association between the slow acetylation genotype and early age of onset. NAT2 genotype did not vary with other clinical features of colon cancer, which included Dukes's stage, site of tumour, and sex. Of 48 informative cases, only three (6%) showed loss of heterozygosity, indicating that the NAT2 locus is not commonly deleted in colorectal cancer. This suggests that NAT2 is retained during the process of allele loss, possibly because of its proximity to a gene necessary for cell viability. Conclusions: NAT2 does not play a major role in colorectal cancer risk, but may influence risk in some age groups. The nature of the loss of heterozygosity at the chromosome 8p site is complex and is worthy of further study. abstract_id: PUBMED:10207630 Polymorphisms in GSTP1, GSTM1, and GSTT1 and susceptibility to colorectal cancer. Polymorphisms in glutathione S-transferases (GSTs) may predispose to colorectal cancer through deficient detoxification of environmental carcinogens, although previous results are conflicting. A study with 178 matched case-control pairs was conducted to determine the effect of the GSTT1 and GSTM1 null genotypes and polymorphisms in GSTP1 on colorectal cancer susceptibility.
In a secondary analysis, we examined interactions between genotypes and with the N-acetyltransferase 2 (NAT2) genotype. Heterogeneity by age, sex, site, and stage of cancer was also examined. No effect of any genotype for GSTM1, GSTT1, or GSTP1 on colorectal cancer susceptibility was detected. Secondary end points showed that individuals with both the GSTT1 null and NAT2 slow genotypes combined appeared to be at increased risk of colorectal cancer (odds ratio = 2.33; 95% confidence interval, 1.1-5.0). We conclude that GST polymorphisms alone do not predispose to colorectal cancer in northeast England. We also observed possible effects of the GSTT1 null genotype on the age and stage at presentation, and these, together with the findings of an apparent interaction with NAT2 genotypes, need to be confirmed in further studies. abstract_id: PUBMED:1471166 Acetyltransferases and susceptibility to chemicals. Arylamine chemicals inflict a number of toxicities including cancer. Metabolic activation (i.e., oxidation) is required in order to elicit the toxic actions. Acetylation is an important step in the metabolic activation and deactivation of arylamines. N-acetylation forms the amide derivative which is often nontoxic. However, O-acetylation of the N-hydroxyarylamine (following oxidation) yields an acetoxy arylamine derivative which breaks down spontaneously to a highly reactive arylnitrenium ion, the ultimate metabolite responsible for mutagenic and carcinogenic lesions. Human capacity to acetylate arylamine chemicals is subject to a genetic polymorphism. Individuals segregate into rapid, intermediate, or slow acetylator phenotypes by Mendelian inheritance regulated by a single gene encoding for a polymorphic acetyltransferase isozyme (NAT2). Individuals homozygous for mutant alleles are deficient in the polymorphic acetyltransferase and are slow acetylators. A second acetyltransferase isozyme (NAT1) is monomorphic and is not regulated by the acetylator genotype. Several human epidemiological studies suggest an association between slow acetylator phenotype and urinary bladder cancer. In contrast, a few studies suggest a relationship between rapid acetylator phenotype and colorectal cancer. The basis for this paradox may relate to the relative importance of N- versus O-acetylation in the etiology of these cancers. Conclusions drawn from human epidemiological data are often compromised by uncontrolled environmental and other genetic factors. Our laboratory recently completed construction of homozygous rapid, heterozygous intermediate, and homozygous slow acetylator congenic Syrian hamsters to be homologous in greater than 99.975% of their genomes. The availability of these acetylator congenic lines should eliminate genetic variability in virtually all aspects of arylamine carcinogenesis except at the acetylator gene locus. Ongoing studies in these congenic hamster lines should provide unequivocal information regarding the role of genetic acetylator phenotype in susceptibility to arylamine-related cancers. abstract_id: PUBMED:10026993 Polymorphisms of N-acetyltransferases, glutathione S-transferases, microsomal epoxide hydrolase and sulfotransferases: influence on cancer susceptibility. It has become clear that several polymorphisms of human drug-metabolizing enzymes influence an individual's susceptibility for chemical carcinogenesis. This review gives an overview on relevant polymorphisms of four families of drug-metabolizing enzymes. 
Rapid acetylators (with respect to N-acetyltransferase NAT2) were shown to have an increased risk of colon cancer, but a decreased risk of bladder cancer. In addition, an association between a NAT1 variant allele (NAT*10, due to mutations in the polyadenylation site causing approximately two fold higher activity) and colorectal cancer among NAT2 rapid acetylators was observed, suggesting a possible interaction between NAT1 and NAT2. Glutathione S-transferases M1 and T1 (GSTM1 and GSTT1) are polymorphic due to large deletions in the structural gene. Meta-analysis of 12 case-control studies demonstrated a significant association between the homozygous deletion of GSTM1 (GSTM1-0) and lung cancer (odds ratio: 1.41; 95% CI: 1.23-1.61). Combination of GSTM1-0 with two allelic variants of cytochrome P4501A1 (CYP1A1), CYP1A1 m2/m2 and CYP1A1 Val/Val further increases the risk for lung cancer. Indirect mechanisms by which deletion of GSTM1 increases risk for lung cancer may include GSTM1-0 associated decreased expression of GST M3 and increased activity of CYP1A1 and 1A2. Combination of GST M1-0 and NAT2 slow acetylation was associated with markedly increased risk for lung cancer (odds ratio: 7.8; 95% CI: 1.4-78.7). In addition, GSTM1-0 is clearly associated with bladder cancer and possibly also with colorectal, hepatocellular, gastric, esophageal (interaction with CYP1A1), head and neck as well as cutaneous cancer. In individuals with the GSTT1-0 genotype more chromosomal aberrations and sister chromatid exchanges (SCEs) were observed after exposure to 1,3-butadiene or various haloalkanes or haloalkenes. Evidence for an association between GSTT1-0 and myelodysplastic syndrome and acute lymphoblastic leukemia has been presented. A polymorphic site of GSTP1 (valine to isoleucine at codon 104) decreases activity to several carcinogenic diol epoxides and was associated with testicular, bladder and lung cancer. Microsomal epoxide hydrolase (mEH) is polymorphic due to amino acid variation at residues 113 and 139. Polymorphic variants of mEH were associated with hepatocellular cancer (His-113 allele), ovarian cancer (Tyr-113 allele) and chronic obstructive pulmonary disease (His-113 allele). Three human sulfotransferases (STs) are regulated by genetic polymorphisms (hDHEAST, hM-PST, TS PST). Since a large number of environmental mutagens are activated by STs, an association with human cancer risk might be expected. abstract_id: PUBMED:1306111 Determination of CYP1A2 and NAT2 phenotypes in human populations by analysis of caffeine urinary metabolites. The wide variations in urinary bladder and colo-rectal cancer incidence in humans have been attributed in part to metabolic factors associated with exposure to carcinogenic aromatic and heterocyclic amines. Cytochrome P-4501A2 (CYP1A2), which catalyses N-oxidation, and acetyltransferase (NAT2) which catalyses N- and O-acetylation, both appear to be polymorphically distributed in human populations; and slow and rapid NAT2 phenotypes have been implicated as risk factors for these cancers. Caffeine has also been shown to undergo 3-demethylation by CYP1A2, and it is further acetylated to 5-acetylamino-6-formylamino-3-methyluracil (AFMU) by the polymorphic NAT2. In this report, we describe a metabolic phenotyping procedure that can be used to determine concomitantly the hepatic CYP1A2 and NAT2 phenotypes. For the NAT2 phenotype, we confirm the valid use of the urinary molar ratio of AFMU/1-methylxanthine, even in alkaline urines.
For the CYP1A2 phenotype, the urinary molar ratio of [1,7-dimethylxanthine + 1,7-dimethyluric acid]/caffeine, taken at 4-5 h after caffeine ingestion, was identified from pharmacokinetic analyses of 12 subjects as being better correlated (r = 0.73; p = 0.007) with the rate constant for caffeine 3-demethylation than other previously suggested ratios. This procedure was then used to determine the CYP1A2 phenotype in subjects from Arkansas (n = 101), Italy (n = 95), and China (n = 78). Statistical and probit analyses of nonsmokers indicated that the CYP1A2 activity was not normally distributed and appeared trimodal. This trimodality allowed arbitrary designation of slow, intermediate, and rapid phenotypes, which ranged from 12-13% slow, 51-67% intermediate, and 20-37% rapid, in the different populations. A reproducibility study of 13 subjects over a 5 day or 5 week period showed that, with one exception, intraindividual variability did not alter this CYP1A2 phenotypic classification. Induction of CYP1A2 by cigarette smoking was also confirmed by the increased caffeine metabolite ratios observed in the Arkansas and Italian smokers (blonde tobacco). However, Italian smokers of black tobacco and Chinese smokers did not appear to be induced. Furthermore, probit analyses of Arkansas and Italian blonde tobacco smokers could not discriminate between phenotypes, apparently as a consequence of enzyme induction. abstract_id: PUBMED:9918135 Relationship between polymorphism of N-acetyltransferase gene and susceptibility to colorectal carcinoma in a Chinese population. Human hepatic N-acetyltransferase (NAT2) is subject to a genetic polymorphism. Because NAT2 is an important enzyme for the detoxification and/or bioactivation of several carcinogenic arylamines, it has been postulated that the polymorphism of NAT2 gene is associated with the occurrence of colorectal and bladder carcinomas. Several mutations have been described in the human NAT2 gene that have been associated with reduced NAT2 activity. However, the majority are single base substitutions at positions 481 (NAT2*5A), 590 (NAT2*6A) and 857 (NAT2*7A) of the NAT2 gene. This study was performed to evaluate the relative distribution of NAT2 alleles and genotypes in 216 colorectal carcinoma patients and 187 normal individuals. The frequencies of NAT2 alleles and genotypes in the sampled Chinese population were characterized by allele-specific polymerase chain reaction. No differences were observed in the distribution of the genotypes coding rapid acetylation (homozygous wild-type and heterozygous wild-type with any of the mutations) when comparing colorectal carcinoma patients with control individuals (P > 0.05). However, the rapid acetylation genotype was associated with cancer occurring on the right side of the colon. The frequencies of the NAT2*4, NAT2*5A, NAT2*6A and NAT2*7A alleles of the NAT2 gene (0.51, 0.07, 0.32 and 0.10, respectively) in control individuals were significantly different from those in patients (0.49, 0.06, 0.26 and 0.19, respectively, P < 0.01). There was a significant increase in the frequency of patients who were compound heterozygotes of NAT2*7A and a variant non-NAT2*7A allele. The NAT2*7A allele was also seen more frequently in distal cancer. abstract_id: PUBMED:24939416 CYP1A1 Ile462Val polymorphism and colorectal cancer risk in Polish patients. Colorectal cancer (CRC) is an epidemiological problem of great importance in Poland; each year approximately 14,600 new cases of the disease are diagnosed.
Mortality associated with CRC reaches approximately 10,400 cases per year (according to the National Cancer Registry). The 5-year survival rate is approximately 25 %, which is one of the lowest rates in Europe. The etiology of sporadic colorectal cancer (CRC) is multifactorial and has been attributed to an interplay between both environmental and genetic risk factors. In addition, there is a general consensus that genetic factors may modulate the influence of environmental insults. Following these assumptions, we performed a study on widely described polymorphisms in xenobiotic-metabolizing enzymes and DNA repair genes which may influence individual susceptibility to cancer. We selected five candidate polymorphisms in following genes: ERCC1 Asp118Asn (rs11615), XPC i11C/A (rs2279017), XRCC3 Met241Thr (rs861539) CYP1A1 Ile462Val (rs1048943) and NAT2 A803G (rs1208) and assessed the importance of chosen SNPs on groups consisting of 478 CRC patients and 404 controls. Only CYP1A1 Ile462Val was statistically significant in CRC patients over 50 years old: OR 2.05 (1.29-3.28); p = 1.25E-02 and this association was more pronounced in the female group of CRC patients after the age of 50: OR 2.72 (1.43-5.14); p = 1.14E-02. Answer: The N-acetyltransferase 2 (NAT2) genotype has been studied for its association with colorectal carcinoma, with particular attention to the risk variability according to the tumor site. A study found a significant excess of individuals with genotypes leading to high NAT2 activity among patients with sigmoid colon cancer compared to controls and other colorectal cancer patients, suggesting that the NAT2 genotype may be a secondary risk factor for developing sigmoid colon cancer (PUBMED:11099063). However, overall, there was no statistical difference in the frequency or distribution of NAT2 alleles and genotype between colon cancer and control groups, although a significant association was found between the slow acetylation genotype and early age of onset (PUBMED:9301503). In a Chinese population, the rapid acetylation genotype was associated with cancer occurring on the right side of the colon, and the NAT2*7A allele was seen more frequently in distal cancer (PUBMED:9918135). These findings suggest that the NAT2 genotype may influence the risk of colorectal cancer in a site-specific manner. It is important to note that the relationship between NAT2 genotypes and cancer risk is complex and may be influenced by interactions with other genetic factors and environmental exposures. For example, individuals with both the GSTT1 null and NAT2 slow genotypes combined appeared to be at increased risk of colorectal cancer (PUBMED:10207630). Additionally, rapid acetylators were shown to have an increased risk of colon cancer but a decreased risk of bladder cancer, indicating that the role of NAT2 in cancer susceptibility may vary depending on the type of cancer and other factors (PUBMED:10026993). In conclusion, the NAT2 genotype may contribute to the variability in colorectal cancer risk according to tumor site, but further research is needed to fully understand the mechanisms and interactions that underlie this association.
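For readers tracing where figures like "OR 2.05 (1.29-3.28)" or "odds ratio = 2.33; 95% confidence interval, 1.1-5.0" in the abstracts above come from, the short Python sketch below shows the usual arithmetic: a crude odds ratio from a 2x2 genotype-by-disease table, with a Wald confidence interval computed on the log scale. The counts and the function name are hypothetical illustrations, not data from any of the cited studies, and the published estimates would typically come from logistic regression adjusted for covariates rather than from a crude table like this one.

```python
import math

def odds_ratio_ci(case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 case-control table."""
    a, b, c, d = case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed
    or_ = (a * d) / (b * c)                        # cross-product ratio
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: carriers of a risk genotype among cases and controls
or_, lower, upper = odds_ratio_ci(case_exposed=120, case_unexposed=358,
                                  ctrl_exposed=70, ctrl_unexposed=334)
print(f"OR = {or_:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```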
Instruction: Can interprofessional education DVD simulations provide an alternative method for clinical placements in nursing? Abstracts: abstract_id: PUBMED:19303674 Can interprofessional education DVD simulations provide an alternative method for clinical placements in nursing? Purpose: The study aimed to evaluate the usability of DVD simulations, the impact on student learning, clinical placement orientation, and the potential for using DVD simulations to reduce the clinical placement burden on the health care system with nursing students. Method: A total of 11 DVD simulations were developed by Monash University academics. Second year students (N=191) from the Bachelor of Nursing course at Monash University viewed a range of DVDs. Students' perceptions and attitudes about the clinical relevance of the simulations were assessed by having them complete a 7-point Likert self-report scale. Qualitative data was also collected from two focus groups (N=7). Results: Overall, nursing students perceived the DVD simulations positively in relation to learning attention (M=4.93, SD=1.02, CI 4.25-4.54), learning potential (M=4.45, SD=1.30, CI 5.13-5.50), clinical relevance to practice (M=5.32, SD=0.65, CI 4.36-4.55), and information processing quality (M=5.62, SD=1.02, CI 5.47-5.76). The following themes emerged from the focus groups: provided familiarisation for clinical placements, learning wastage occurs in varying amounts, simulations could replace some clinical placement rotations, supportive of multidisciplinary approach and integration, and simulations should have pedagogical integration into weekly clinical cases. Conclusion: Nursing students reported that the simulations were educationally, professionally, and clinically relevant. The cost benefit of using DVD simulations as an alternative and potential replacement to elements of nursing clinical placements should be investigated further. abstract_id: PUBMED:24997553 Elements and enablers for interprofessional education clinical placements in diabetes teams. Objective: The purpose of this review is to describe the elements and enablers for interprofessional education (IPE) clinical placements in diabetes teams. Methods: We describe the development of an IPE clinical placement for health professional students in a diabetes team and share the lessons learned over 6 years, from 2008 to 2013. The 6 collaborative practice competencies of the Canadian Interprofessional Health Collaborative and the requirements for Accreditation of Interprofessional Health Professional Education opportunities guided the development of an IPE clinical placement in a diabetes team. Results: A formal IPE clinical placement in diabetes teams requires attention to the site and diabetes team-specific elements and enablers for IPE. That includes students and preceptors from 2 or more health professions, a formal curriculum on collaborative care, adequate IPE resources and strong institutional support for a culture of collaborative care and integration of students in diabetes teams. Conclusions: Diabetes teams can provide a valuable IPE opportunity for health professional students, recognizing that there are challenges that must be addressed in organizational structure of clinical placements in diabetes teams. Studies of the effectiveness of IPE in diabetes teams on collaboration competencies in future diabetes healthcare professionals and long-term patient outcomes are needed.
abstract_id: PUBMED:19386881 Can DVD simulations provide an effective alternative for paramedic clinical placement education? Objective: This study aimed to evaluate the usability of DVD simulations, the impact on student learning satisfaction and the potential for using DVD simulations to reduce the clinical placement burden on the current healthcare system. The clinical DVD simulations were underpinned by interprofessional educational principles that supported clinical placements for paramedic students. Method: Eleven DVD simulations were developed by academic staff members from Monash University with input and feedback from a team of healthcare professionals. Students (N = 97) from the Bachelor of Emergency Health at Monash University viewed the DVD simulations. Students' perceptions, attitudes and thoughts about the clinical relevance of the simulations were assessed by completing a standardised self-report 7-point Likert scale questionnaire (7 indicating the highest satisfaction score). Qualitative data assessing if and how the DVD simulations had influenced paramedic students' clinical placement learning experiences were also collected via two focus groups (n = 6). Results: Overall, paramedic students positively perceived the DVD simulations with relation to learning satisfaction (mean (SD) 5.14 (1.14), 95% CI 4.91 to 5.37) and information processing quality (mean (SD) 5.50 (0.83), 95% CI 5.33 to 5.67). The simulations maintained students' attention and concentration (mean (SD) 4.35 (0.95), 95% CI 4.15 to 4.54) and provided clinical authenticity and relevance to practice (mean (SD) 4.27 (0.65), 95% CI 4.14 to 4.40). A number of themes emerged from the focus group data including the impact on employment, greater appreciation of healthcare teamwork and notion of interdisciplinary teamwork, the fact that DVD simulations have the capacity to replace some clinical placement rotations and should be integrated into standard curriculum, and that varying amounts of learning wastage occur during clinical placements. Conclusions: DVD simulations with an interprofessional education focus were developed. Paramedic students reported the simulations as being educationally, professionally and clinically relevant. The students also identified some aspects of current clinical placements that may be replaced by using DVD simulations. The cost benefit of using interprofessional DVD simulations to supplement and replace certain clinical placement rotations should be investigated further. abstract_id: PUBMED:29752859 Shifts in nursing and medical students' attitudes, beliefs and behaviours about interprofessional work: An interprofessional placement in ambulatory care. Aim And Objectives: To examine students' beliefs, behaviours and attitudes in relation to interprofessional socialisation, and their expectations and experience, before and after a 2-week clinical placement in ambulatory care. Background: Interprofessional clinical placements for students are important for developing an understanding of interprofessional collaboration and identity, for the benefit of patient care. Ambulatory care environment involves collaborative management of complex chronic problems. This educator supported placement that enabled final-year nursing and medical students to work together. Design: A descriptive matched before-after study was conducted. Methods: Students' completed an online questionnaire before and after their clinical placement. 
The questionnaire comprised three sections: demographic information, the Interprofessional Socialisation and Valuing Scale and open-ended questions. Descriptive analysis and paired t-tests were conducted for the three subscales, and thematic analysis of qualitative responses was conducted. Results: Sixty-two of the 151 students between 2011 and 2014 completed both surveys. There was a significant increase after placement in the overall Interprofessional Socialisation and Valuing Scale scores. The change was greater for nursing students compared with medical students, although for both groups the change was small. The majority had a good-to-very good experience learning each other's and their own professions and identified the nurse educator and teaching registrar as key to success. Conclusion: A clinical placement in an ambulatory setting for nursing and medical students resulted in an increase in self-perceived ability to work with others and in valuing working with others. Relevance To Clinical Practice: Interprofessional clinical placements are essential for students to understand interprofessional practice for better patient outcomes and developing their own perspective of future work within an interprofessional team. Ambulatory care is an ideal environment for nursing and other health professional students to engage in interprofessional clinical placements. abstract_id: PUBMED:32397862 Community health placements for junior medical and nursing students for interprofessional learning. Health care is increasingly delivered in primary and community settings, yet undergraduate clinical education remains focused in hospitals. Interprofessional student placements in community health offer an alternative to traditional clinical education and extend the capacity for clinical education beyond hospital placements. This study sought to investigate the value of interprofessional clinical placements at a community health center for dyads of second year medical and nursing students. A mixed methods evaluation was used. Quantitative and qualitative feedback was collected using the Interprofessional Clinical Placement Learning Environment Inventory (ICPLEI). Students (n = 58) completed the ICPLEI after observation of community health half day sessions, at one of three community health sites. Two focus group interviews were undertaken with the health care practitioners (n = 8) to explore their perception of the new interprofessional placements. The placements were rated highly by all participants with three themes identified: It takes a team, Bouncing ideas and Realities of Community Health. Innovative approaches to clinical education and learning are needed to prepare a healthcare workforce capable of working in a collaborative, interprofessional manner. Community health offers a promising location for interprofessional learning for junior medical and nursing student teams. abstract_id: PUBMED:29724107 Mapping the interprofessional education landscape for students on rural clinical placements: an integrative literature review. Introduction: Interprofessional collaboration and effective teamwork are core to optimising rural health outcomes; however, little is known about the opportunities available for interprofessional education (IPE) in rural clinical learning environments.
This integrative literature review addresses this deficit by identifying, analysing and synthesising the research available about the nature of and potential for IPE provided to undergraduate students undertaking rural placements, the settings and disciplines involved and the outcomes achieved. Methods: An integrative review method was adopted to capture the breadth of evidence available about IPE in the rural context. This integrative review is based on a search of nine electronic databases: CINAHL, Cochrane Library, EMBASE, MEDLINE, ProQuest, PubMed, SCOPUS, Web of Science and Google Scholar. Search terms were adapted to suit those used by different disciplines and each database and included key words related to IPE, rurality, undergraduate students and clinical placement. The inclusion criteria included primary research and reports of IPE in rural settings, peer reviewed, and published in English between 2000 and mid-2016. Results: This review integrates the results of 27 primary research studies undertaken in seven countries: Australia, Canada, USA, New Zealand, the Philippines, South Africa and Tanzania. Despite geographical, cultural and health system differences, all of the studies reviewed were concerned with developing collaborative, interprofessional practice-ready graduates and adopted a similar mix of research methods. Overall, the 27 studies involved more than 3800 students (range 3-1360) from 36 disciplinary areas, including some not commonly associated with interprofessional education, such as theology. Interprofessional education was provided in a combination of university and rural placement settings including hospitals, community health services and other rural venues. The education activities most frequently utilised were seminars, tutorial discussion groups (n=21, 84%), case presentations (n=11, 44%) and community projects (n=11, 44%) augmented by preliminary orientation and ongoing interaction with clinicians during placement. The studies reviewed demonstrate that rural clinical learning environments provide rich and varied IPE opportunities for students that increase their interprofessional understanding, professional respect for other roles, and awareness of the collaborative and interprofessional nature of rural practice. Conclusion: This review addresses the lack of attention given to understanding IPE in the rural context, provides Australian and international evidence that initiatives are being offered to diverse student groups undertaking placements in rural settings and proposes a research agenda to develop a relevant framework to support rural IPE. Rural clinical learning environments afford a rich resource whereby health professionals can conceptualise IPE creatively and holistically to construct transformative learning experiences for students. This review develops a case for supporting the development, trialling, evaluation and translation of IPE initiatives that harness the opportunities afforded by rural placements. Further research is required to examine the ways to optimise IPE opportunities in the rural clinical context, including the potential for simulation-based activities, the challenges to achieving sustainable programs, and to evaluate the impact of interprofessional education on collaboration and health outcomes. abstract_id: PUBMED:27797631 Interprofessional Education and Practice Guide No. 8: Team-based interprofessional practice placements. 
Whilst interest in interprofessional learning (IPL) in practice contexts has grown in recent years, the complexities involved have led many universities to rely on IPL in the classroom, online, and/or simulated contexts. Curtin University's Faculty of Health Sciences has successfully implemented a multi-award winning, large-scale Interprofessional Practice Programme. This programme, which began with five small pilots in 2009, provides team-based interprofessional practice placements for over 550 students from nine professions per annum. Drawing on both the literature and Curtin University's experience, this Interprofessional Education and Practice Guide aims to assist university and practice-based educators to "weigh the case" for introducing team-based interprofessional placements. The key lessons learned at Curtin University are identified to offer guidance to others towards establishing a similar programme for students during their prequalifying courses in health, social care, and related fields. abstract_id: PUBMED:20217001 Can interdisciplinary clinical DVD simulations transform clinical fieldwork education for paramedic, occupational therapy, physiotherapy, and nursing students? Unlabelled: The aim of this study was to examine the usefulness of DVD simulations in a clinical teaching context, their impact on student learning, and their potential as a supplemental learning option for clinical placements/fieldwork education rotations that might assist in reducing the burden on the health care system. Methods: Eleven clinical DVD simulations were developed by Monash University academic staff from four academic departments: nursing, occupational therapy, paramedics, and physiotherapy. Undergraduate students (n = 394) from these health science groups viewed the DVD simulations. Student perceptions and attitudes about the clinical relevance of the DVD simulations were assessed on a 7-point Likert-type scale standardized questionnaire (7 indicating the highest satisfaction score). Qualitative data were also collected from three focus groups that involved 24 students, assessing if and how the DVD simulations influenced the clinical fieldwork education placement learning experiences of the students. Results: Overall, students' responses to the questionnaire indicated that they were satisfied with the DVD simulations with relation to attention (mean 4.25, SD 0.95), learning potential (mean 5.25, SD 1.16), clinical relevance to practice (mean 4.36, SD 0.60), and information-processing quality (mean 5.45, SD 0.23). Qualitative data supported the notions of interdisciplinary teamwork, clinical placements, clinical placement education, and DVD quality evaluation and feedback. Conclusion: Students viewed the simulations as being educationally, professionally, and clinically relevant. The cost benefit of using DVD simulations to replace or supplement components of clinical fieldwork education should be investigated further. abstract_id: PUBMED:27295486 Every team needs a coach: Training for interprofessional clinical placements. Despite growing awareness of the benefits of interprofessional education and interprofessional collaboration (IPC), understanding how teams successfully transition to IPC is limited. Student exposure to interprofessional teams fosters the learners' integration and application of classroom-based interprofessional theory to practice. A further benefit might be reinforcing the value of IPC to members of the mentoring team and strengthening their IPC. 
The research question for this study was: Does training in IPC and clinical team facilitation and mentorship of pre-licensure learners during interprofessional clinical placements improve the mentoring teams' collaborative working relationships compared to control teams? Statistical analyses included a repeated-measures multivariate analysis of variance (MANOVA). Teams on four clinical units participated in the project. Impact on intervention teams pre- versus post-interprofessional clinical placement was modest with only the Cost of Team score of the Attitudes Towards Healthcare Team Scale improving relative to controls (p = 0.059), although reflective evaluations by intervention team members noted many perceived benefits of interprofessional clinical placements. The significantly higher group scores for control teams (geriatric and palliative care) on three of four subscales of the Assessment of Interprofessional Team Collaboration Scale underscore our need to better understand the unique features within geriatric and palliative care settings that foster superior IPC and to recognise that the transition to IPC likely requires a more diverse intervention than the interprofessional clinical placement experience implemented in this study. More recently, it is encouraging to see the development of innovative tools that use an evidence-based, multi-dimensional approach to support teams in their transition to IPC. abstract_id: PUBMED:32776900 Unpacking the hidden curriculum in nursing education: clinical placements abroad. Many Canadian nursing programs endorse clinical placements abroad. We critically examined the hidden curriculum embedded in clinical placements seeking to provide a 'global' experience. Using purposeful sampling, we interviewed a total of 18 participants, including eight faculty, eight students and two placement coordinators. Data were analyzed using initial and focused coding, supported by NVivo. After generating themes from the coded data, we adapted Hafferty's (1998) framework to further analyze the theme of the hidden curriculum. The findings illuminate how policies sustained international experience as a privileged endeavour, with restricted access based on grades. Placements incorporated little evaluation of benefits for local communities. Few resources were allocated to students for such placements, as many students paid for their placement. The institutional terms 'international and global experience' denoted interactions with cultural "Others". We recommend that nursing programs attend to hidden practices that sustain clinical placements abroad as prestigious, commodified experiences.
The study concluded that the simulations were educationally, professionally, and clinically relevant, suggesting that the cost-benefit of using DVD simulations as an alternative to elements of nursing clinical placements should be further investigated (PUBMED:19303674). Additionally, another study with paramedic students at Monash University found that DVD simulations with an interprofessional education focus were positively received by students. They reported that the simulations were educationally, professionally, and clinically relevant, and identified aspects of current clinical placements that could potentially be replaced by using DVD simulations (PUBMED:19386881). These findings indicate that DVD simulations could serve as a supplemental learning option for clinical placements, potentially reducing the burden on the healthcare system while still providing relevant and effective educational experiences for nursing students. However, further research is needed to explore the cost-benefit of such an approach and to determine the extent to which DVD simulations can replace traditional clinical placements (PUBMED:20217001).
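The Likert results quoted in these abstracts (for example, mean (SD) 5.14 (1.14) with 95% CI 4.91 to 5.37 for N = 97 paramedic students) follow the standard normal-approximation interval for a mean, M plus or minus 1.96 x SD/sqrt(N); plugging in 5.14 and 1.14 with N = 97 reproduces 4.91-5.37. The Python sketch below shows that arithmetic on invented ratings; the data and the helper name are ours for illustration and are not taken from any of the studies.

```python
import statistics as stats

def likert_summary(values, z=1.96):
    """Mean, sample SD and normal-approximation 95% CI for one Likert item."""
    m = stats.mean(values)
    sd = stats.stdev(values)          # sample standard deviation
    se = sd / len(values) ** 0.5      # standard error of the mean
    return m, sd, (m - z * se, m + z * se)

# Hypothetical 7-point Likert ratings for a single questionnaire item
ratings = [5, 6, 4, 7, 5, 5, 6, 4, 5, 6, 7, 5, 4, 6, 5]
m, sd, (lower, upper) = likert_summary(ratings)
print(f"M = {m:.2f}, SD = {sd:.2f}, 95% CI {lower:.2f}-{upper:.2f}")

# From summary statistics alone: 5.14 +/- 1.96 * 1.14 / 97 ** 0.5 gives roughly 4.91 to 5.37,
# matching the interval reported in the paramedic abstract.
```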
Instruction: Using morphologic parameters of extraocular muscles for diagnosis and follow-up of Graves' ophthalmopathy: diameters, areas, or volumes? Abstracts: abstract_id: PUBMED:12239055 Using morphologic parameters of extraocular muscles for diagnosis and follow-up of Graves' ophthalmopathy: diameters, areas, or volumes? Objective: The objective of this study was to find the most appropriate, accurate, and convenient muscle parameter that can be used as a substitute for volume in monitoring the effectiveness of therapy of patients with Graves' ophthalmopathy. Subjects And Methods: The four rectus muscles in 110 orbits (35 patients and 20 control subjects) were evaluated with MR imaging. The diameter at the largest extent of the muscle belly, as well as the long and the short diameters and the cross-sectional areas in a preselected coronal scan, were measured for each muscle and were compared with the corresponding muscle volume measured on contiguous T1-weighted transverse slices. Results: The measured coronal area correlated well with the volume of the superior (r = 0.694, p < 0.0001) and inferior (r = 0.783, p < 0.0001) recti, and the largest transverse diameter showed strong correlation with the volume of the lateral (r = 0.868, p < 0.0001) and medial (r = 0.869, p < 0.0001) recti. For the latter muscle, the coronal area also exhibited a good correlation with the volume (r = 0.838, p < 0.0001). Coronal cross-sectional areas can be well estimated by measuring both the short and long coronal muscle diameters (r values were between 0.914 and 0.966; p < 0.0001). Conclusion: In Graves' ophthalmopathy, the volume of three of the rectus muscles can be well estimated by simple measurements on a single coronal slice, and the largest transverse diameter of the lateral rectus is suitable for the same purpose. abstract_id: PUBMED:38279160 A case of bilateral amyloidosis localized to extraocular muscles mimicking thyroid eye disease. Background: Amyloidosis is a rare condition characterized by the abnormal deposition of amyloid proteins in various tissues and organs. While systemic amyloidosis has been well-documented, amyloid deposition in extraocular muscles is an exceptionally rare occurrence, with only 35 reported cases. This case report sheds light on the importance of considering amyloidosis in the differential diagnosis of patients presenting with proptosis and diplopia, which are often associated with thyroid eye disease. Case Presentation: A woman in her twenties sought medical attention due to a complaint of diplopia. Her ocular examination revealed almost normal findings except for exotropia and proptosis. Orbital magnetic resonance imaging displayed fusiform enlargement of nearly all eye muscles, a presentation typically observed in thyroid eye disease. However, despite corticosteroid therapy, her symptoms showed no improvement. Given the unusual lack of response to conventional treatment and the inhomogeneous enhancement of the muscle, an extraocular muscle biopsy was conducted. This biopsy yielded a unique finding: amyloid deposition within the muscle tissue. This discovery was particularly intriguing due to the extreme rarity of amyloidosis affecting extraocular muscles, with fewer than three dozen documented cases worldwide. Conclusion: This unique case underscores the critical need for a comprehensive approach to diagnosing patients with proptosis and diplopia.
While these symptoms are commonly attributed to thyroid eye disease, it is essential to consider alternative diagnoses such as amyloidosis, especially when standard treatments fail to yield results. The discovery of amyloid deposition in the extraocular muscles, although exceedingly rare, emphasizes the significance of a thorough differential diagnosis. In conclusion, this case report highlights the importance of vigilance in clinical practice, encouraging ophthalmologists to explore less common diagnostic possibilities when faced with challenging cases. Further research and clinical investigation are warranted to better understand the mechanisms and potential treatments for amyloidosis affecting the extraocular muscles. abstract_id: PUBMED:37466707 Quantitative assessment of extraocular muscles in Graves' ophthalmopathy using T1 mapping. Objective: To evaluate the performance of T1 mapping in the characterization of extraocular muscles (EOMs) of Graves' ophthalmopathy (GO) patients and investigate its feasibility in assessing the response to glucocorticoid therapy in active GO patients. Methods: A total of 133 participants (78 active GO, 23 inactive GO, 18 Graves' disease (GD) patients, and 14 healthy volunteers) were consecutively enrolled from July 2018 to December 2020. Native T1 (nT1) and postcontrast T1 (cT1) values of EOMs were measured and compared. The variations in T1 mapping metrics of EOMs were compared pre/post glucocorticoid treatment in 23 follow-up active GO patients. Logistic regression analysis and receiver operating characteristic (ROC) curve analysis were performed. Results: The nT1 of EOMs in GO patients was higher than that in GD patients and healthy volunteers. The nT1 of superior rectus (SR) in active GO was higher than that in inactive GO patients, and it could be used as a potential marker of GO activity (OR: 1.003; 95% CI: 1.001, 1.004), with a diagnostic sensitivity of 86.3% and specificity of 43.7%. Meanwhile, the cT1 of SR, inferior rectus (IR), and medial rectus (MR) in inactive GO patients were higher than those in active GO patients. The nT1 of EOMs achieved sufficient diagnostic performance in evaluating the response to glucocorticoid therapy for follow-up active GO patients (AUC, 0.797; sensitivity, 71.9%; specificity, 85.7%). Conclusions: T1 mapping could quantitatively assess the activity of GO and the response to glucocorticoid therapy in active GO patients and may even potentially reflect the fibrosis of EOMs. Clinical Relevance Statement: T1 values can reflect the pathological status of the extraocular muscle. T1 mapping could help to quantitatively assess the clinical activity of GO and the response to glucocorticoid therapy in active GO patients. Key Points: • Graves' ophthalmopathy patients had greater nT1 of extraocular muscles than Graves' disease patients and healthy volunteers, and nT1 of the superior rectus could be a potential marker of Graves' ophthalmopathy activity. • The cT1 of extraocular muscles in inactive Graves' ophthalmopathy patients was higher than that in active Graves' ophthalmopathy patients, and it might be associated with muscle fibrosis. • nT1 of extraocular muscles could offer sufficient diagnostic performance in evaluating the response to glucocorticoid therapy for follow-up active Graves' ophthalmopathy patients. abstract_id: PUBMED:32878536 Radiographic analysis of fatty infiltration of the extraocular muscles in thyroid eye disease. 
Purpose: Fatty infiltration of the extraocular muscles has been described radiographically in patients with thyroid eye disease (TED), yet it has not been studied on a large scale nor quantified. Our purpose was to define and characterize this entity in patients with TED. Methods: An IRB-approved cross-sectional retrospective review of medical records identified patients with a clinical diagnosis of TED and at least one CT of the orbits. A 2:1 age and sex-matched control population consisted of patients without a history nor radiographic evidence of orbital disease or systemic thyroid abnormality. The presence of fatty infiltration in each extraocular rectus muscle was defined using Hounsfield units (HU). Laterality, muscles involved, and pattern of fatty infiltration were also evaluated. Student's t-tests, Chi-square, and Fisher's exact tests were used to compare TED and control groups. Results: The study population consisted of 252 patients with TED and 504 age and sex-matched controls. Fatty infiltration was significantly more prevalent in TED patients (36/252, 14.3%) compared to controls (11/504, 2.2%) (p < .001). The mean density of fat infiltration was significantly lower in TED patients (-40.4 HU) than controls (-34.8 HU) (p = .048). In TED patients, the frequency of muscle involvement was inferior rectus (61.8%), lateral rectus (19.7%), superior rectus (11.8%) and medial rectus (6.6%), which was not significantly different than controls (p > .05). Most muscles (88.2%) in the TED group exhibited a heterogeneous pattern of infiltration, which did not differ from controls (p = .34). Conclusions: This study characterizes fatty infiltration of the extraocular muscles in patients with TED. abstract_id: PUBMED:26321629 Changes of orbital tissue volumes and proptosis in patients with thyroid extraocular muscle swelling after methylprednisolone pulse therapy. Purpose: To evaluate the changes of orbital tissue volumes and proptosis after methylprednisolone pulse therapy in thyroid-associated ophthalmopathy (TAO). Methods: The cross-sectional areas of orbital tissues and proptosis were measured with magnetic resonance imaging in 40 orbits of 20 patients with TAO before and after methylprednisolone pulse therapy. The volumes of the whole orbit, orbital fatty tissue, and extraocular muscles were calculated. The volumes and proptosis were compared before and after treatment using a paired t test. Results: Before treatment, the mean volumes were 33.0 ± 4.8 cm(3) in the whole orbit, 19.9 ± 4.1 cm(3) in the orbital fatty tissue, and 4.6 ± 1.2 cm(3) in the total extraocular muscles. After treatment, the mean volumes were 32.5 ± 4.4 cm(3) in the whole orbit, 19.9 ± 3.7 cm(3) in the orbital fatty tissue, and 4.0 ± 1.0 cm(3) in the total extraocular muscles. The mean volumes of the whole orbit (P = 0.17) and orbital fatty tissue (P = 0.82) were not significantly decreased after treatment, while the mean volume of total extraocular muscles was significantly decreased (P < 0.001). The mean proptosis value was 18.9 ± 2.8 mm before treatment and 18.6 ± 3.4 mm after treatment. The mean proptosis value was not significantly decreased after treatment (P = 0.30). Conclusions: The volume of orbital fatty tissue seemed to be unchanged after methylprednisolone pulse therapy while that of total extraocular muscles was decreased. The proptosis value seemed to be unchanged after treatment.
abstract_id: PUBMED:34018373 Clinical Analysis of 2 170 Cases of Thyroid-Associated Ophthalmopathy Involving Extraocular Muscles Objective: To explore the clinical features of thyroid-associated ophthalmopathy (TAO) with extraocular muscle involvement. Methods: The data of 2170 TAO patients who were seen at the Orbital Disease Clinic, West China Hospital, Sichuan University from September 2009 to January 2020 were collected retrospectively. The extraocular muscle involvement of these patients was confirmed by CT or MRI. Their general condition, medical history, clinical manifestations and imaging features were analyzed retrospectively. Results: Among the 2170 TAO patients, 932 were male and 1238 were female. The mean (± SD) age of all the patients was (46.95±13.06) years, ranging between 6 and 85. 1684 patients (77.60%) suffered from hyperthyroidism, 13 patients (0.59%) had thyroid cancer, 80 patients (3.69%) had hypothyroidism, and 393 patients (18.11%) had normal thyroid function. Proptosis (55.25%) and diplopia (33.09%) were the main reasons for their visits to the clinic, and restricted eye movements (83.46%) were the most common sign. 122 patients with a mean age of (53.24±13.07) years did not show any eyelid sign and had only extraocular muscle involvement. The 2170 TAO patients had a total of 3799 eyes of extraocular muscle involvement, with 541 patients experiencing monocular involvement and 1629 patients, binocular involvement; 1204 eyes (31.69%) had a single extraocular muscle involved and 2595 eyes (68.31%) had multiple extraocular muscles involved. Inferior rectus was the most commonly involved muscle, followed by superior rectus, medial rectus, and lateral rectus in descending order of involvement frequency. Of the 1014 patients who underwent enhanced MRI, 71.99% were shown to be in the active phase. 69.03% of the 775 patients identified as being in inactive phase according to their clinical activity score (CAS) were shown to be in the active phase according to their MRI results. Conclusion: TAO patients with extraocular muscle involvement have their own specific clinical manifestations. CT and MRI can both be used to assist in the diagnosis of extraocular muscle involvement. MRI can be used to assess the pathological stage of extraocular muscles and is more sensitive than CAS. abstract_id: PUBMED:24783034 Differential involvement of orbital fat and extraocular muscles in Graves' ophthalmopathy. Graves' ophthalmopathy (GO) is characterized by swelling of orbital fat and extraocular muscles, but little attention has been given to differential involvement of fat and muscles. Advancements in imaging allow rather accurate measurements of orbital bony cavity volume (OV), fat volume (FV) and muscle volume (MV), and are the topics of this review. Ratios of FV/OV and MV/OV neutralize gender differences. In adult Caucasian controls, mean values ± SD of FV/OV are 0.56 ± 0.11 and of MV/OV are 0.15 ± 0.02. FV increases substantially and MV decreases slightly with advancing age, requiring age-specific reference ranges. In 95 consecutive untreated Caucasian GO patients, both FV and MV were within normal limits in 25%, increased FV but normal MV was present in 5%, normal FV but increased MV was detected in 61%, and both increased FV and MV were evident in 9%. Increased FV was associated with more proptosis and longer GO duration. Increased MV was associated with older age, more severe GO (more proptosis and diplopia, worse eye muscle ductions), higher TBII and current smoking.
At the cellular and molecular level, differential involvement of fat and muscles might be related to differences between fibroblast phenotypes and cytokine profiles in each compartment, to different orbital T cell subsets during the course of the disease and to peroxisome proliferator-activated receptor-γ polymorphisms and modulation of 11β-hydroxysteroid dehydrogenase-1. Enlarged muscles are apparently a rather early phenomenon in GO, whereas increases in fat mass occur relatively late. Why a minor subset of GO patients presents with an increase of only fat remains poorly understood. abstract_id: PUBMED:23112262 Bilateral metastatic melanoma to the extraocular muscles simulating thyroid eye disease. We report a case of bilateral metastatic melanoma to the extraocular muscles that simulated thyroid eye disease. A 71-year-old man presented with bilateral painless axial proptosis, lid retraction and left gaze restriction. Orbital CT scan demonstrated enlargement of the extraocular muscles with tendon sparing, consistent with thyroid eye disease. However, thyroid function tests and antithyroid antibodies were normal. Systemic review including orbital MRI scan determined the correct diagnosis of metastatic melanoma to the orbit. Metastatic melanoma to the orbit can simulate thyroid eye disease. abstract_id: PUBMED:35713708 Extraocular muscle enlargement. Extraocular muscle enlargement can occur secondary to a range of orbital and systemic diseases. Although the most common cause of extraocular muscle enlargement is thyroid eye disease, a range of other inflammatory, infective, neoplastic, and vascular conditions can alter the size and shape of the extraocular muscles. Imaging with computed tomography and magnetic resonance imaging plays an essential role in the workup of these conditions. This article provides an image-rich review of the wide range of pathology that can cause enlargement of the extraocular muscles. abstract_id: PUBMED:27143850 Relationship between magnetic resonance imaging signal intensity and volume of extraocular muscles in thyroid-associated ophthalmopathy with methylprednisolone pulse therapy. Purpose: To characterize the relationship between inflammation and swelling of extraocular muscles in thyroid-associated ophthalmopathy before and after methylprednisolone pulse therapy. Methods: The signal intensities and volumes of the superior rectus (SR), inferior rectus (IR), lateral rectus (LR), medial rectus (MR), and superior oblique (SO) muscles were measured with magnetic resonance imaging in 25 eyes of 25 patients with thyroid-associated ophthalmopathy in the acute inflammatory phase before and after methylprednisolone pulse therapy. The signal intensity ratios (SIRs) of muscles and brain white matter were calculated. Result: The mean SIRs before treatment were 2.28±0.74 in SR, 2.66±0.57 in IR, 2.03±0.42 in LR, 2.45±0.49 in MR, and 1.98±0.48 in SO muscles. The mean SIRs after treatment were 1.82±0.62, 1.84±0.52, 1.70±0.35, 1.95±0.46, and 1.60±0.36, respectively. The mean volumes (cm(3)) before treatment were 1.35±0.67 in SR, 1.21±0.39 in IR, 0.66±0.13 in LR, 0.94±0.31 in MR, and 0.58±0.14 in SO muscles. Those after treatment were 1.12±0.45, 0.91±0.31, 0.61±0.12, 0.85±0.28, and 0.49±0.11, respectively.
The SIRs showed significantly positive correlations with volumes both before and after treatment in SR muscles (before, r=0.77; after, r=0.69), IR muscles (before, r=0.65; after, r=0.60), MR muscles (before, r=0.69; after, r=0.73), and SO muscles (before, r=0.52; after, r=0.50) (P<0.01 for all correlations). Conclusion: Swelling reflected the inflammation in extraocular muscles of thyroid-associated ophthalmopathy both before and after treatment. Answer: The most appropriate morphologic parameters for the diagnosis and follow-up of Graves' ophthalmopathy (GO) using extraocular muscles (EOMs) appear to be a combination of diameters, areas, and volumes, depending on the specific muscle being evaluated. A study found that the volume of three of the rectus muscles (superior, inferior, and medial) can be well estimated by simple measurements on a single coronal slice, while the largest transverse diameter of the lateral rectus is suitable for estimating its volume (PUBMED:12239055). This suggests that both diameters and areas can be used as surrogates for volume in monitoring the effectiveness of therapy in patients with GO. Additionally, T1 mapping has been shown to quantitatively assess the activity of GO and the response to glucocorticoid therapy in active GO patients, potentially reflecting the fibrosis of EOMs (PUBMED:37466707). This indicates that T1 mapping could be a valuable tool in the assessment of EOMs in GO. Furthermore, the study of fatty infiltration of the EOMs in TED patients using Hounsfield units (HU) provides another dimension to the radiographic analysis of these muscles (PUBMED:32878536). This suggests that the density of fatty infiltration could be an additional parameter to consider in the evaluation of EOMs in GO. In conclusion, the use of diameters, areas, volumes, and possibly T1 mapping values and fatty infiltration density can all contribute to the diagnosis and follow-up of GO. The choice of parameter may depend on the specific muscle being assessed and the available imaging techniques. It is important to consider the individual characteristics of each muscle and the stage of the disease when selecting the most appropriate morphologic parameter for evaluation (PUBMED:12239055; PUBMED:37466707; PUBMED:32878536).
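As a closing illustration of the surrogate-measurement idea behind PUBMED:12239055 (using a single coronal cross-sectional area or the largest transverse diameter in place of a full volume measurement), the Python sketch below computes a Pearson correlation and a least-squares calibration line from paired measurements. The numbers are hypothetical and are not the study's data; in practice the calibration would be fitted per muscle against volumes measured on contiguous slices, as described in the abstract.

```python
import statistics as stats  # correlation() and linear_regression() need Python 3.10+

# Hypothetical paired measurements for one rectus muscle:
# coronal cross-sectional area (cm^2) and reference volume from contiguous slices (cm^3)
area   = [0.42, 0.55, 0.61, 0.70, 0.88, 0.95, 1.10, 1.25, 1.40, 1.62]
volume = [0.45, 0.58, 0.60, 0.74, 0.90, 1.02, 1.15, 1.33, 1.50, 1.71]

r = stats.correlation(area, volume)                        # Pearson r
slope, intercept = stats.linear_regression(area, volume)   # least-squares fit

print(f"r = {r:.3f}")
print(f"estimated volume = {slope:.2f} * area + {intercept:.2f}")
print(f"area 1.00 cm^2 -> predicted volume {slope * 1.00 + intercept:.2f} cm^3")
```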