Instruction: Weather: driving force behind the transmission of severe acute respiratory syndrome in China? Abstracts: abstract_id: PUBMED:17445010 Weather: driving force behind the transmission of severe acute respiratory syndrome in China? Background: The association between weather and severe acute respiratory syndrome (SARS) transmission in Beijing and Hong Kong in the 2003 epidemic was studied to examine the effect of weather on SARS transmission. Methods: Pearson's correlation analyses and negative binomial regression analyses were used to quantify the correlations between the daily newly reported number of SARS cases and weather variables, using daily disease notification data and meteorological data from the two locations. Results: The results indicate an inverse association between the number of daily cases and maximum and/or minimum temperatures, whereas air pressure was found to be positively associated with SARS transmission. Conclusion: The study suggests that weather might be a contributory factor in the 2003 SARS epidemic, in particular in transmission among community members. abstract_id: PUBMED:32127517 Transmission routes of 2019-nCoV and controls in dental practice. A novel β-coronavirus (2019-nCoV), which emerged in a seafood market of Wuhan city, Hubei province, China, caused severe and even fatal pneumonia and rapidly spread to other provinces of China and other countries. The 2019-nCoV was different from SARS-CoV, but shared the same host receptor, the human angiotensin-converting enzyme 2 (ACE2). The natural host of 2019-nCoV may be the bat Rhinolophus affinis, as 2019-nCoV showed 96.2% whole-genome identity to BatCoV RaTG13. The person-to-person transmission routes of 2019-nCoV included direct transmission, such as cough, sneeze, and droplet inhalation transmission, and contact transmission, such as contact with oral, nasal, and eye mucous membranes. 2019-nCoV can also be transmitted through saliva, and the fecal-oral route may also be a potential person-to-person transmission route. Participants in dental practice are exposed to tremendous risk of 2019-nCoV infection due to face-to-face communication, exposure to saliva, blood, and other body fluids, and the handling of sharp instruments. Dental professionals play an important role in preventing the transmission of 2019-nCoV. Here we recommend infection control measures for dental practice to block the person-to-person transmission routes in dental clinics and hospitals. abstract_id: PUBMED:22359565 Trends in notifiable infectious diseases in China: implications for surveillance and population health policy. This study aimed to analyse trends in notifiable infectious diseases in China, in their historical context. Both English and Chinese literature was searched and diseases were categorised according to the type of disease or transmission route. Temporal trends of morbidity and mortality rates were calculated for eight major infectious disease types. Strong government commitment to public health responses and improvements in quality of life have led to the eradication or containment of a wide range of infectious diseases in China. The overall infectious disease burden experienced a dramatic drop during 1975-1995, but since then it has reverted and maintained a gradual upward trend to date. Most notifiable diseases are contained at a low endemic level; however, local small-scale outbreaks remain common. 
Tuberculosis, as a bacterial infection, has re-emerged since the 1990s and has become prevalent in the country. Sexually transmitted infections are in a rapid, exponential growth phase, spreading from core groups to the general population. Together with human immunodeficiency virus (HIV), they accounted for 39% of all deaths due to infectious diseases in China in 2008. Zoonotic infections, such as severe acute respiratory syndrome (SARS), rabies and influenza, pose constant threats to Chinese residents and remain the deadliest disease type among infected individuals. Therefore, second-generation surveillance of behavioural risks or vectors associated with pathogen transmission should be scaled up. It is necessary to implement public health interventions that target HIV and relevant coinfections, address transmission associated with highly mobile populations, and reduce the risk of cross-species transmission of zoonotic pathogens. abstract_id: PUBMED:32214741 Exploring the epidemic transmission network of SARS in-out flow in mainland China. The changing spatiotemporal patterns of the individual susceptible-infected-symptomatic-treated-recovered epidemic process and the interactions of information/material flows between regions, along with the 2002-2003 Severe Acute Respiratory Syndrome (SARS) epidemiological investigation data in mainland China, including three typical locations of individuals (working unit/home address, onset location and reporting unit), are used to define the in-out flow of the SARS epidemic spread. Moreover, the input/output transmission networks of the SARS epidemic are built according to the definition of in-out flow. The spatiotemporal distribution of the SARS in-out flow, the spatial distribution and temporal change of node characteristic parameters, and the structural characteristics of the SARS transmission networks are comprehensively and systematically explored. The results show that (1) Beijing and Guangdong had the highest risk of self-spread and output cases, and prevention/control measures directed toward self-spread cases in Beijing should have focused on the later period of the SARS epidemic; (2) the SARS transmission networks in mainland China had significant clustering characteristics, with two clustering areas of output cases centered in Beijing and Guangdong; (3) Guangdong was the original source of the SARS epidemic, and while the infected cases of most other provinces occurred mainly during the early period, there was no significant spread to the surrounding provinces; in contrast, although the input/output interactions between Beijing and the other provinces countrywide began during the mid-late epidemic period, SARS in Beijing showed a significant capacity for spatial spreading; (4) Guangdong had a significant range of spatial spreading throughout the entire epidemic period, while Beijing and its surrounding provinces formed a separate, significant range of high-risk spreading during the mid-late period; especially in the late period, the influence range of Beijing's neighboring provinces, such as Hebei, was even slightly larger than that of Beijing; and (5) the input network had a low-intensity spread capacity and a middle-level influence range, while the output network had an extensive high-intensity spread capacity and an influence range that covered almost the entire country, and this spread and influence indicated that significant clustering characteristics increased gradually. 
This analysis of the epidemic in-out flow and its corresponding transmission network helps reveal the potential spatiotemporal characteristics and evolvement mechanism of the SARS epidemic and provides more effective theoretical support for prevention and control measures. abstract_id: PUBMED:32402910 Possible environmental effects on the spread of COVID-19 in China. At the end of 2019, a novel coronavirus, designated as SARS-CoV-2, emerged in Wuhan, China and was identified as the causal pathogen of COVID-19. The epidemic scale of COVID-19 has increased dramatically, with confirmed cases increasing across China and globally. Understanding the potential affecting factors involved in COVID-19 transmission will be of great significance in containing the spread of the epidemic. Environmental and meteorological factors might impact the occurrence of COVID-19, as these have been linked to various diseases, including severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), whose causative pathogens belong to the same virus family as SARS-CoV-2. We collected daily data of COVID-19 confirmed cases, air quality and meteorological variables of 33 locations in China for the outbreak period of 29 January 2020 to 15 February 2020. The association between air quality index (AQI) and confirmed cases was estimated through a Poisson regression model, and the effects of temperature and humidity on the AQI-confirmed cases association were analyzed. The results show that the effect of AQI on confirmed cases associated with an increase in each unit of AQI was statistically significant in several cities. The lag effect of AQI on the confirmed cases was statistically significant on lag day 1 (relative risk (RR) = 1.0009, 95% confidence interval (CI): 1.0004, 1.0013), day 2 (RR = 1.0007, 95% CI: 1.0003, 1.0012) and day 3 (RR = 1.0008, 95% CI: 1.0003, 1.0012). The AQI effect on the confirmed cases might be stronger in the temperature range of 10 °C ≤ T < 20 °C than in other temperature ranges, while the RR of COVID-19 transmission associated with AQI was higher in the relative humidity (RH) range of 10% ≤ RH < 20%. Results may suggest an enhanced impact of AQI on the COVID-19 spread under low RH. abstract_id: PUBMED:32843626 Origin and cross-species transmission of bat coronaviruses in China. Bats are presumed reservoirs of diverse coronaviruses (CoVs) including progenitors of Severe Acute Respiratory Syndrome (SARS)-CoV and SARS-CoV-2, the causative agent of COVID-19. However, the evolution and diversification of these coronaviruses remains poorly understood. Here we use a Bayesian statistical framework and a large sequence data set from bat-CoVs (including 630 novel CoV sequences) in China to study their macroevolution, cross-species transmission and dispersal. We find that host-switching occurs more frequently and across more distantly related host taxa in alpha- than beta-CoVs, and is more highly constrained by phylogenetic distance for beta-CoVs. We show that inter-family and -genus switching is most common in Rhinolophidae and the genus Rhinolophus. Our analyses identify the host taxa and geographic regions that define hotspots of CoV evolutionary diversity in China that could help target bat-CoV discovery for proactive zoonotic disease surveillance. Finally, we present a phylogenetic analysis suggesting a likely origin for SARS-CoV-2 in Rhinolophus spp. bats. 
abstract_id: PUBMED:33181330 The time-varying transmission dynamics of COVID-19 and synchronous public health interventions in China. Objectives: We aimed to estimate the time-varying transmission dynamics of COVID-19 in China, Wuhan City, and Guangdong province, and compare to that of severe acute respiratory syndrome (SARS). Methods: Data on COVID-19 cases in China up to 20 March 2020 was collected from epidemiological investigations or official websites. Data on SARS cases in Guangdong Province, Beijing, and Hong Kong during 2002-3 was also obtained. We estimated the doubling time, basic reproduction number (R0), and time-varying reproduction number (Rt) of COVID-19 and SARS. Results: As of 20 March 2020, 80,739 locally acquired COVID-19 cases were identified in mainland China, with most cases reported between 20 January and 29 February 2020. The R0 value of COVID-19 in China and Wuhan was 5.0 and 4.8, respectively, which was greater than the R0 value of SARS in Guangdong (R0 = 2.3), Hong Kong (R0 = 2.3), and Beijing (R0 = 2.6). At the start of the COVID-19 epidemic, the Rt value in China peaked at 8.4 and then declined quickly to below 1.0 in one month. With SARS, the Rt curve saw fluctuations with more than one peak, the highest peak was lower than that for COVID-19. Conclusions: COVID-19 has much higher transmissibility than SARS, however, a series of prevention and control interventions to suppress the outbreak were effective. Sustained efforts are needed to prevent the rebound of the epidemic in the context of the global pandemic. abstract_id: PUBMED:17493952 Estimating variability in the transmission of severe acute respiratory syndrome to household contacts in Hong Kong, China. The extensive data collection and contact tracing that occurred during the 2003 outbreak of severe acute respiratory syndrome (SARS) in Hong Kong, China, allowed the authors to examine how the probability of transmission varied from the date of symptom onset to the date of hospitalization for household contacts of SARS patients. Using a discrete-time likelihood model, the authors estimated the transmission probability per contact for each day following the onset of symptoms. The results suggested that there may be two peaks in the probability of SARS transmission, the first occurring around day 2 after symptom onset and the second occurring approximately 10 days after symptom onset. Index patients who were aged 60 years or older or whose lactate dehydrogenase level was elevated upon admission to the hospital (indicating higher viral loads) were more likely to transmit SARS to their contacts. There was little variation in the daily transmission probabilities before versus after the introduction of public health interventions on or around March 26, 2003. This study suggests that the probability of transmission of SARS is dependent upon characteristics of the index patients and does not simply reflect temporal variability in the viral load of SARS cases. abstract_id: PUBMED:33381929 A Comprehensive Review of Coronavirus Disease 2019: Epidemiology, Transmission, Risk Factors, and International Responses. Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has caused a worldwide pandemic. The first reports of patients with COVID-19 were provided to World Health Organization on December 21, 2019 and were presumably associated with seafood markets in Wuhan, China. 
As of October 25, 2020, more than 42 million cases have been confirmed worldwide, with more than 1.1 million deaths. Asymptomatic transmission contributes significantly to transmission, and clinical features are non-specific to the disease. Thus, the diagnosis of COVID-19 requires specific viral RNA testing. The disease demonstrates extensive human-to-human transmissibility and has infected healthcare workers at high rates. Clinical awareness of the epidemiology and the risk factors for nosocomial transmission of COVID-19 is essential to preventing infection. Moreover, effective control measures should be further identified by comprehensive evaluation of hospital and community responses. In this review, we provide a comprehensive update on the epidemiology, presentation, transmission, risk factors, and public health measures associated with COVID-19. We also review past insights from previous coronavirus epidemics [i.e., severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS)] to suggest measures to reduce transmission. abstract_id: PUBMED:33575237 Geographical and Epidemiological Characteristics of 3,487 Confirmed Cases With COVID-19 Among Healthcare Workers in China. As the first area to report the outbreak, China used to be the front line of the battle against the novel coronavirus SARS-CoV-2. The present descriptive analysis of 3,487 COVID-19-confirmed cases with health workers reported through April 30, 2020 offers important new information to the international community on the epidemic in China. These data showed that Chinese measures including the high-grade protective gear used, mask wearing, and social distancing, are effective in reducing transmission in hospitals. Answer: The study identified in abstract PUBMED:17445010 investigated the association between weather and the transmission of severe acute respiratory syndrome (SARS) during the 2003 epidemic in Beijing and Hong Kong. The results indicated that there was an inverse association between the number of daily SARS cases and maximum and/or minimum temperatures, while air pressure was found to be positively associated with SARS transmission. This suggests that weather might have been a contributory factor in the SARS epidemic, particularly in the transmission among community members. Therefore, it can be inferred that weather conditions played a role in the dynamics of SARS transmission in China during the 2003 outbreak.
Instruction: Is lipid lowering therapy an independent risk factor for venous thromboembolism? Abstracts: abstract_id: PUBMED:25891841 Is lipid lowering therapy an independent risk factor for venous thromboembolism? A population-based case-control study. Introduction: The independent effect of lipid lowering therapy (LLT) on venous thromboembolism (VTE) risk is uncertain. Objective: To test statin and non-statin LLT as potential VTE risk factors. Methods: Using Rochester Epidemiology Project resources, we identified all Olmsted County, MN residents with objectively diagnosed incident VTE (cases) over the 13-year period, 1988-2000 (n=1340), and one to two matched controls (n=1538). We reviewed their complete medical records for baseline characteristics previously identified as independent VTE risk factors, and for statin and non-statin LLT. Using conditional logistic regression, we tested the overall effect of LLT on VTE risk and also separately explored the role of statin versus that of non-statin LLT, adjusting for other baseline characteristics. Results: Among cases and controls, 74 and 111 received statin LLT, and 32 and 50 received non-statin LLT, respectively. Univariately, and after individually controlling for other potential VTE risk factors (i.e., BMI, trauma/fracture, leg paresis, hospitalization for surgery or medical illness, nursing home residence, active cancer, central venous catheter, varicose veins, prior superficial vein thrombosis, diabetes, congestive heart failure, angina/myocardial infarction, stroke, peripheral vascular disease, smoking, anticoagulation), LLT was associated with decreased odds of VTE (unadjusted OR=0.73; p=0.03). When considered separately, statin and non-statin LLT were each associated with moderate, non-significant lower odds of VTE. After adjusting for angina/myocardial infarction, each was significantly associated with decreased odds of VTE (OR=0.63, p<0.01 and OR=0.61, p=0.04, respectively). Conclusions: LLT is associated with decreased VTE risk after adjusting for known risk factors. abstract_id: PUBMED:22939687 Lipid lowering drugs and the risk of recurrent venous thromboembolism. Introduction: Several studies have suggested that statins may lower the risk of venous thromboembolism (VTE), whereas fibrates may increase this risk. However, no studies have evaluated whether lipid-lowering drugs (LLD) use was associated with the risk of VTE recurrence. Materials And Methods: In a prospective cohort study, we followed-up all patients who had been treated for a first unprovoked VTE event in our centre. The association between LLD exposure and risk of recurrence of VTE after discontinuation of anticoagulation was analyzed with Cox proportional hazards model with adjustment for age, sex, body mass index, site of thrombosis, antiplatelets use, and duration of anticoagulation before inclusion in the study. Results: 432 patients (median age 65.5 years interquartile range 45.0-75.0, 174 men) were followed up for a median of 29.5 months after discontinuation of anticoagulation. Sixty patients (13.9%) had recurrent VTE. During follow-up, 48 patients (11.1%) received statins, 36 patients (8.3%) received fibrates. In multivariate analysis, the risk of recurrent VTE associated with statin exposure was 1.02 (95% confidence interval 0.36-2.91) and 2.15 (95% confidence interval 1.01-4.61) for fibrate exposure. 
Conclusion: Our results suggest an association between fibrate intake and an increased risk of recurrent VTE, whereas statin intake was not associated with recurrent VTE. Larger studies are needed to validate these results. abstract_id: PUBMED:14606135 Factor V Leiden and venous thromboembolism: risk associated with hormone replacement therapy. Purpose: To determine the risk-benefit ratio of hormone replacement therapy (HRT) and the cost-effectiveness of screening in perimenopausal and postmenopausal women who are carriers of factor V Leiden, as well as to provide evidence-based clinical recommendations for the primary care provider. Data Sources: Databases searched included EMBASE, BIOSIS, MEDLINE, CINAHL, PubMed, SciSearch, and the Cochrane Database. Two reviewers extracted, reviewed, and concurred upon relevant evidence identified in the data-bases. Results: Results confirmed that all women have a higher risk for the development of venous thrombosis while on HRT. The presence of a genetic mutation, such as factor V Leiden, in combination with HRT dramatically increased an individual's chance for developing venous thrombi. Conclusion/implications: Based on the findings of the studies reviewed, it is recommended that women wishing to initiate HRT be thoroughly screened for known risk factors of thrombosis. If risk factors are identified, genetic testing for factor V Leiden may be warranted. abstract_id: PUBMED:22035572 Lipid parameters, lipid lowering drugs and the risk of venous thromboembolism. Background: Besides their effects on atherogenesis, lipids and lipoproteins could contribute to the development of venous thromboembolism (VTE). This association has been investigated in a few studies with conflicting results. Methods: Plasma levels of total cholesterol, triglycerides, HDL-cholesterol, LDL-cholesterol, apolipoprotein A-I and apolipoprotein B were measured in 467 patients with a first unprovoked VTE event diagnosed between May 2000 and December 2004 and in 467 age and sex matched controls. The association between these parameters and VTE was determined in non-users of lipid lowering drugs (LLD), in statin users and in fibrate users in a quartile-based analysis. We repeated this stratified analysis within each stratum of men and women. Results: The median age of patients was 73 years [interquartile range 58-80], 41.5% were men. Among the 934 patients of the study, 100 were treated with statin, 91 with fibrate and 743 were not receiving LLD. Among non users of LLD, high levels of apolipoprotein B were associated with VTE (OR 1.82, 95% CI 1.19-2.79) after adjustment for age and body mass index. Elevated LDL-cholesterol levels were associated with VTE only in men (OR 2.32, 95% CI 1.07-5.01). High levels of LDL/HDL-cholesterol and apolipoprotein B/apolipoprotein A-I ratios were associated with VTE (OR 2.76, 95% CI 1.69-4.50 and OR 1.86, 95% CI 1.16-2.97 respectively) but this effect was mainly observed in men. There was no association between lipid parameters and VTE in statin users and in fibrate users. Conclusion: Our results are in line with the new concept of a global cardiovascular disease combining atherosclerosis and VTE. abstract_id: PUBMED:16828675 Selective prescribing led to overestimation of the benefits of lipid-lowering drugs. Objective: Observational studies have found beneficial effects of lipid-lowering drugs on diverse outcomes, including venous thromboembolism, hip fracture, dementia, and all-cause mortality. 
Selective use of these drugs in frail people may confound these relationships. Study Design And Setting: We measured 1-year mortality in two cohorts of New Jersey residents, aged 65-99 years, enrolled in state-sponsored drug benefits programs: 112,463 persons hospitalized during the years 1991-1994 and 106,838 nonhospitalized enrollees. Use of lipid-lowering drugs and other medications, as well as diagnoses, were evaluated before follow-up. Results: In age- and sex-adjusted analyses, users of lipid-lowering drugs had a 43% reduced death rate relative to nonusers among hospitalized enrollees and a 56% reduction in the nonhospitalized sample. Available markers of frailty and comorbidity predicted decreased use of these drugs. Control for the propensity to use lipid-lowering drugs attenuated but did not eliminate these effects. After such adjustment, users had a 30% reduction in death rate (95% confidence interval [CI]: 25%-35%) among hospitalized enrollees and a 41% reduction (95% CI: 35%-47%) in the nonhospitalized sample. Unmeasured frailty associated with a 26%-33% reduced odds of receiving lipid-lowering therapy could explain this effect. Conclusion: Frailty and comorbidity that influence use of preventive therapies can substantially confound apparent benefits of lipid-lowering drugs on outcomes. abstract_id: PUBMED:25976012 Factor XI and contact activation as targets for antithrombotic therapy. The most commonly used anticoagulants produce therapeutic antithrombotic effects either by inhibiting thrombin or factor Xa (FXa) or by lowering the plasma levels of the precursors of these key enzymes, prothrombin and FX. These drugs do not distinguish between thrombin generation contributing to thrombosis from thrombin generation required for hemostasis. Thus, anticoagulants increase bleeding risk, and many patients who would benefit from therapy go untreated because of comorbidities that place them at unacceptable risk for hemorrhage. Studies in animals demonstrate that components of the plasma contact activation system contribute to experimentally induced thrombosis, despite playing little or no role in hemostasis. Attention has focused on FXII, the zymogen of a protease (FXIIa) that initiates contact activation when blood is exposed to foreign surfaces, and FXI, the zymogen of the protease FXIa, which links contact activation to the thrombin generation mechanism. In the case of FXI, epidemiologic data indicate this protein contributes to stroke and venous thromboembolism, and perhaps myocardial infarction, in humans. A phase 2 trial showing that reduction of FXI may be more effective than low molecular weight heparin at preventing venous thrombosis during knee replacement surgery provides proof of concept for the premise that an antithrombotic effect can be uncoupled from an anticoagulant effect in humans by targeting components of contact activation. Here, we review data on the role of FXI and FXII in thrombosis and results of preclinical and human trials for therapies targeting these proteins. abstract_id: PUBMED:16097213 Age, an independent risk factor for thrombosis. Epidemiologic data The incidence of thrombosis--arterial and venous--increases with age. This is the case for atheromatous diseases, atrial fibrillation and even venous thromboembolic disease. Ischemic heart disease is the most common cause of death in the elderly. Atrial fibrillation, an independent risk factor for cerebral vascular accidents, affects around 10% of persons older than 80 years. 
The incidence of venous thromboembolic disease increases with age, reaching 12.5 per 1000 people older than 75 years, compared with 5 per 1000 aged 60-75 and 2.5 per 1000 aged 40-59. Elderly persons often have two or more cardiovascular or venous thromboembolic risk factors and thus a still higher risk of thrombotic events. Their risk of thrombosis justifies the systematic search for acquired risk factors to assess the level of risk and take appropriate prevention measures. abstract_id: PUBMED:21455861 Factor V Leiden in women: a thrombotic risk factor or an evolutionary advantage? Factor V Leiden is a common gain-of-function gene mutation resulting in a genetic predisposition to thromboembolic complications. Growing evidence in the literature indicates an interaction between factor V Leiden thrombophilia and acquired prothrombotic conditions such as contraceptive use or hormone replacement therapy, resulting in an increased risk of venous thromboembolism (VTE). Similarly, when combined with the prothrombotic influence of pregnancy, women who are carriers of factor V Leiden are faced with an increased risk of adverse pregnancy outcomes, including VTE, pre-eclampsia, fetal loss, placental abruption, and fetal growth restriction. The results of the most important meta-analyses on the relationship between inherited (factor V Leiden) and acquired thrombophilia in women are analyzed in this review, along with the possible evolutionary role of this mutation. abstract_id: PUBMED:15735796 The ABO blood group genotype and factor VIII levels as independent risk factors for venous thromboembolism. Factor VIII (FVIII), von Willebrand factor (vWF) and the ABO blood groups have been associated with thrombosis. The ABO locus has functional effects on vWF and FVIII levels and is genetically correlated with FVIII, vWF and thrombosis. We carried out a case-control study to assess the role of FVIII, vWF and ABO types on thrombotic risk. We analyzed 250 patients with venous thrombosis and 250 unrelated controls. FVIII, vWF and other factors related to thrombophilia were measured, ABO groups were analyzed by genotyping. FVIII and vWF were higher in non-O individuals. Group O was more frequent in the controls (44.3% v 23.3%; difference 21.1%; 95% CI: 13.0-29.3%) and Group A in patients (59.2% v. 41.5%; difference 17.7%, 95% CI: 9.1-26.4%). Individuals carrying the A1 allele had a higher risk of thrombosis (OR 2.6; 95% CI, 1.8-3.8). The risk attributed to vWF disappeared after adjusting for the ABO group. Patients with FVIII above the 90th percentile had a high thrombotic risk (adjusted OR 3.7; 95% CI, 2.1-6.5), and a high risk of recurrence (OR 2.3; 95% CI: 1.3-4.1). In conclusion, high FVIII levels and non-O blood groups, likely those with the A1 allele, are independent risk factors for venous thromboembolism and should be considered in evaluating of thrombophilia. abstract_id: PUBMED:10215560 Activated protein C resistance and factor V Leiden mutation are independent risk factors for venous thromboembolism. Background: Resistance to activated protein C due to the factor V R506Q (Leiden) mutation is the most common clotting abnormality in patients with venous thromboembolism. Objective: To evaluate the risk for venous thromboembolism associated with the factor V Leiden mutation or with resistance to activated protein C in the general population. Design: Cross-sectional survey. Setting: General community of Vicenza, Italy. 
Patients: A population-based sample of 15,109 white persons 18 to 65 years of age who were randomly selected from the census list. Measurements: Sequential validated approach based on participants' reports and Doppler ultrasonography. Resistance to activated protein C was investigated in all participants; 2134 participants with resistance to activated protein C were screened for the factor V Leiden mutation. Results: Carriers of the factor V Leiden mutation had a relative risk of 3.3 (95% CI, 1.7 to 6.1) for venous thromboembolism before 65 years of age. The fraction of cases attributable to the factor V Leiden mutation was 6.6%. By 65 years of age, 5.7% of carriers of the mutation had had venous thromboembolism, mostly after surgery. Participants with a reduced response to activated protein C were at higher risk even if they did not carry the mutation (odds ratio, 1.7 [CI, 1.0 to 2.7]); the attributable risk for venous thromboembolism was 5.1%. Conclusions: The factor V Leiden mutation and resistance to activated protein C are important, independent risk factors for venous thromboembolism. Screening strategies for the factor V Leiden mutation in patients undergoing surgery or experiencing major trauma cannot be recommended, but phenotypic evaluation of resistance to activated protein C should be encouraged in patients with venous thromboembolism. Answer: Lipid lowering therapy (LLT) does not appear to be an independent risk factor for venous thromboembolism (VTE); in fact, it may be associated with a decreased risk of VTE. A population-based case-control study using the Rochester Epidemiology Project resources found that LLT was associated with decreased odds of VTE after adjusting for known risk factors (PUBMED:25891841). This study specifically noted that both statin and non-statin LLT were associated with moderate, non-significant lower odds of VTE, which became significant after adjusting for angina/myocardial infarction. Another study, which was a prospective cohort study, suggested that while statin intake was not associated with recurrent VTE, fibrate intake showed an association with an increased risk of recurrent VTE (PUBMED:22939687). However, this study also indicated that larger studies are needed to validate these results. Furthermore, a study that measured plasma levels of various lipids and lipoproteins in patients with a first unprovoked VTE event and matched controls found no association between lipid parameters and VTE in statin users and in fibrate users (PUBMED:22035572). This suggests that the use of LLT, particularly statins, may not increase the risk of VTE. Lastly, it is important to note that observational studies have sometimes overestimated the benefits of lipid-lowering drugs due to selective prescribing, which can confound relationships between drug use and diverse outcomes, including VTE (PUBMED:16828675). This indicates that while LLT may not be a risk factor for VTE, careful consideration of other factors is necessary when interpreting the effects of these drugs on VTE risk. In conclusion, the evidence does not support LLT as an independent risk factor for VTE. Instead, it may be associated with a decreased risk, although the evidence is not entirely consistent, and further research may be needed to fully understand the relationship between LLT and VTE.
Instruction: Is the tumor infiltrating natural killer cell (NK-TILs) count in infiltrating ductal carcinoma of breast prognostically significant? Abstracts: abstract_id: PUBMED:24870789 Is the tumor infiltrating natural killer cell (NK-TILs) count in infiltrating ductal carcinoma of breast prognostically significant? Purpose: The aim of this study was to investigate the prognostic significance of the CD56+NK-TIL count in infiltrating ductal carcinoma (IDC) of breast. Material And Methods: Immunohistochemistry (IHC) was performed using antibodies specific for CD56 on formalin-fixed and paraffin-embedded tissue sections of 175 infiltrating ductal carcinomas (IDC) of breast. Distribution of intratumoral and stromal CD56+NK-TILs was assessed semi-quantitatively. Results: A low intratumoral CD56+count showed significant and inverse associations with tumor grade, stage, and lymph node status, whereas it had significant and direct association with response to treatment indicating good prognosis. These patients had better survival (χ2=4.80, p<0.05) and 0.52 fold lower death rate (HR=0.52, 95% CI=0.28-0.93) as compared to patients with high CD56+ intratumoral count. The association of survival was insignificant with low CD56 stromal count as compared to high CD56 stromal count (χ2=1.60, p>0.05). Conclusion: To conclude, although NK-TIL count appeared as a significant predictor of prognosis, it alone may not be sufficient for predicting the outcome considering the fact that there exists a crosstalk between NK-TILs and the other immune infiltrating TILs. abstract_id: PUBMED:16246429 Phenotyping of lymphocytes expressing regulatory and effector markers in infiltrating ductal carcinoma of the breast. Dysfunction of the host immune system in cancer patients can be due to a number of reasons including suppression of tumour associated antigen reactive lymphocytes by regulatory T (Treg) cells. In this study, we used flow cytometry to determine the phenotype and relative abundance of the tumour infiltrating lymphocytes (TILs) from 47 enzymatically dissociated tumour specimens from patients with infiltrating ductal carcinoma (IDC) of the breast. The expression of both effector and regulatory markers on the TILs were determined by using a panel of monoclonal antibodies. Analysis revealed CD8(+) T cells (23.4+/-2.1%) were predominant in TILs, followed by CD4(+) T cells (12.6+/-1.7%) and CD56(+) natural killer cells (6.4+/-0.7%). The CD4(+)/CD8(+) ratio was 0.8+/-0.9%. Of the CD8(+) cells, there was a higher number (68.4+/-3.5%) that expressed the effector phenotype, namely, CD8(+)CD28(+) and about 46% of this subset expressed the activation marker, CD25. Thus, a lower number of infiltrating CD8(+) T cells (31.6+/-2.8%) expressed the marker for the suppressor phenotype, CD8(+)CD28(-). Of the CD4(+) T cells, 59.6+/-3.9% expressed the marker for the regulatory phenotype, CD4(+)CD25(+). About 43.6+/-3.8% CD4(+)CD25(+) subset co-expressed both the CD152 and FOXP3, the Treg-associated molecules. A positive correlation was found between the presence of CD4(+)CD25(+) subset and age (> or =50 years old) (r=0.51; p=0.045). However, no significant correlation between tumour stage and CD4(+)CD25(+) T cells was found. In addition, we also found that the CD4(+)CD25(-) subset correlated with the expression of the nuclear oestrogen receptor (ER)-alpha in the tumour cells (r=0.45; p=0.040). 
In conclusion, we detected the presence of cells expressing the markers for Tregs (CD4(+)CD25(+)) and suppressor (CD8(+)CD28(-)) in the tumour microenvironment. This is the first report of the relative abundance of Treg co-expressing CD152 and FOXP3 in breast carcinoma. abstract_id: PUBMED:25297609 A case report: Blastic plasmacytoid dendritic cell neoplasm is misdiagnosed as breast infiltrating ductal carcinoma. Blastic plasmacytoid dendritic cell neoplasm (BPDCN) is a rare and aggressive hematologic tumor that typically occurs in older adults. Patients with BPDCN usually present with solitary or multiple skin lesions. Localized or disseminated lymphadenopathy at presentation is common. A case report illustrating histopathologically proven BPDCN initially misdiagnosed as breast infiltrating ductal carcinoma in a 39-year-old woman is presented. In this case, the patient presented with a breast mass without an obvious skin lesion initially. The morphology of the tumor cells mimicked high grade breast carcinoma cells. Without complete immunohistochemical study, this case was initially misdiagnosed as infiltrating ductal carcinoma. Reviewing the previous literature about BPDCN, no case with a breast mass and an absence of characteristic skin lesions initially has been reported. The purpose for which we are discussing this case is to reduce misdiagnosis when the initial symptom is unusual. abstract_id: PUBMED:15660281 Immunophenotype of lymphocytic infiltration in medullary carcinoma of the breast. Medullary carcinoma (MC) of the breast is characterized by large anaplastic cells and infiltration by benign lymphocytes. Patients with this pattern of breast carcinoma are considered to have a better prognosis than those with other histological subtypes. We reviewed cases of primary breast carcinoma that were surgically resected between 1990 and 2004. Of these, 13 cases of medullary carcinoma of the breast with lymphocyte infiltration were reported. Tests for CD3, CD4, CD8, CD20, CD56, TIA-1, and granzyme B were performed on paraffin sections. We found that the MC contained very few NK cells, as assessed by their reactivity with the CD56 antibodies. However, MC had a significantly greater percentage of CD3, CD8, TIA-1, and granzyme B lymphocytes infiltrating the stroma of the tumor. Furthermore, more CD8-positive than CD4-positive T-cell lymphocytes were present within the tumor cell nests in MC, as opposed to the proportion in usual ductal carcinoma. The infiltrating cytotoxic/suppressor T cells in MC represent host resistance against cancer, and the high grading of the T-cell infiltration could explain, in part, a key mechanism controlling the good prognosis for this type of tumor and solve the pathological paradox of MC. abstract_id: PUBMED:3032398 Antigenic phenotype of the lymphocytic component of medullary carcinoma of the breast. Medullary carcinoma of the breast, which is usually associated with a dense lymphocytic infiltrate, carries a better prognosis than do most other histologic subtypes of breast carcinoma. We studied cryostat-cut fresh frozen sections from 12 patients with medullary carcinoma and, as controls, nine patients with infiltrating ductal carcinoma in order to determine and compare the antigenic phenotype of the lymphocytic components of these tumors. 
We used a large panel of monoclonal antibodies and polyclonal antisera for T-cells (Leu-1, Leu-2a, Leu-3a, Leu-9, T-3, T-6, T-10, T-11, and TQ-1), pre-B and B-cells (BA-1, B-1, B-2, B-4, and J5), NK cells (Leu-7 and Leu-11b), and cell activation associated antigens (T-9, HLA-Dr, and Tac). The most commonly encountered antigens on the lymphocytic components of both medullary carcinoma and infiltrating ductal carcinoma were: T-3, T-11, Leu-1, Leu-2a, Leu-3a, and Leu-9. There was little staining for NK-, pre-B-, or B-cell associated antigens in either type of carcinoma. However, the lymphocytes in the control cases tended to express HLA-Dr and T-10 more often than did the lymphocytes in the cases of medullary breast carcinoma. Our data indicate that: the antigenic phenotypes of the lymphocytic infiltrates of medullary carcinoma and those of infiltrating ductal carcinoma of the breast are essentially similar; and the lymphocytes in these carcinomas are composed predominantly of peripheral T-lymphocytes. We therefore conclude that the favorable biologic behavior of medullary carcinoma of the breast cannot readily be explained by the immunophenotype of its lymphocytic component. abstract_id: PUBMED:2824022 Characterization and frequency distribution of lymphoreticular infiltrates in axillary lymph node metastases of invasive ductal carcinoma of the breast. One hundred and seventy-five axillary lymph nodes containing metastatic deposits from 46 invasive ductal carcinomas of the breast were evaluated histologically and immunohistologically. The study yielded the following results: (1) tumor-infiltrating lymphoreticular cells preferentially accumulated in the stromal bands; the tumor foci generally showed a considerably lower degree of infiltration; (2) in most cases, monocytes/macrophages (Mono 1+) represented the overwhelming majority of tumor-infiltrating cells; (3) next in frequency were T-lymphocytes (Leu-1+), especially CD4+ lymphocytes (Leu-3a+), while CD8+ lymphocytes (Leu-2a+) mostly occurred only in moderate numbers; (4) B-lymphocytes (To15+), plasma cells, natural killer cells (Leu-7+), tissue mast cells, and T-accessory reticulum cells (OKT 6+) were observed mostly in low or very low numbers, while eosinophils were nearly absent and B-accessory reticulum cells (Ki-M4+) were totally absent from the lymphoreticular infiltrates. Definite conclusions regarding the functional properties of the tumor-infiltrating cells cannot be drawn from an immunohistologic analysis in situ alone, but the preferred localization of most tumor-infiltrating cells in the stroma does not support an intensive interaction between the host defenses and the metastatic tumor. abstract_id: PUBMED:1553817 High endothelial venule and immunocompetent cells in typical medullary carcinoma of the breast. The characteristics of immunocompetent cells and their role in killing tumour cells in typical medullary carcinoma of the breast (TMC) have been investigated morphologically. Formation of high endothelial venule (HEV)-like vessels in tumour cell nests, the distribution of macrophages, T-zone histiocytes, T- and B-lymphocytes, the ratios of CD4+/CD8+, and natural killer (NK) or NK-like T-cells were examined in five cases of TMC. These results were compared with controls which consisted of three cases of ductal carcinoma with intense lymphocytic infiltration (control I) and four cases of ductal carcinoma with scanty lymphocytic infiltration (control II). 
An increased incidence of HEV-like vessels with migration of lymphocytes and a higher number of CD8+ lymphocytes with interleukin-2-receptor expression, as well as numerous CD57 cells, were noted in the tumour nests of TMC as compared with those of control groups. Furthermore, large granular lymphocytes, large lymphocytes invaginating tumour cells and necrotic tumour cells were observed electron microscopically. These findings indicate that infiltrating lymphocytes in TMC are activated and become effector cells that can kill the tumour cells by mechanisms similar to those of NK cells. The activities of immunocompetent cells in TMC appear to contribute to a favourable prognosis in TMC of the breast. abstract_id: PUBMED:37180657 High expression of RTN4IP1 predicts adverse prognosis for patients with breast cancer. Background: RTN4IP1 interacts with a membranous protein of endoplasmic reticulum (RTN4), this study was to explore the role RTN4IP1 involved in breast cancer (BC). Methods: After RNAseq data of The Cancer Genome Atlas Breast Invasive Carcinoma (TCGA-BRCA) project were downloaded, correlations between RTN4IP1 expression and clinicopathologic variables, as well as expression levels between cancerous samples and non-cancerous ones were tested. Differentially expressed genes (DEGs) and functional enrichment, gene set enrichment analysis (GSEA) and immune infiltration analysis were conduct for bioinformatics analysis. After logistic regression, Kaplan-Meier curve of disease-specific survival (DSS), univariate and multivariate COX analysis, a nomogram was established for prognosis. Results: RTN4IP1 expression was up-regulated in BC tissue, significantly associated with estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2) status (P<0.001). The 771 DEGs linked RTN4IP1 to glutamine metabolism and mitoribosome-associated quality control. Functional enrichment pointed to DNA metabolic process, mitochondrial matrix and inner membrane, ATPase activity, cell cycle and cellular senescence; whereas GSEA indicated regulation of cellular cycle, G1_S DNA damage checkpoints, drug resistance and metastasis. Eosinophil cells, natural killer (NK) cells and Th 2 cells were found to be correlated with RTN4IP1 expression (R=-0.290, -0.277 and 0.266, respectively, P<0.001). RTN4IP1high BC had worse DSS than RTN4IP1low ones [hazard ratio (HR) =2.37, 95% confidential interval (CI): (1.48-3.78), P<0.001], which has independent prognostic value (P<0.05). Conclusions: Overexpressed in BC tissue, RTN4IP1 predicts adverse prognosis for patients with BC, especially in infiltrating ductal carcinoma, infiltrating lobular carcinoma, Stage II, Stages III&IV and luminal A subtype. abstract_id: PUBMED:2839293 Frequency distribution of lymphoreticular infiltrates in invasive carcinoma of the female breast. Fifty-two invasive ductal carcinomas of the breast were evaluated immunohistologically with a panel of monoclonal antibodies to study the frequency distribution and the localisation of the tumor-infiltrating lymphoreticular cells. The analysis yielded the following results: 1) The lymphoreticular cells mostly accumulated in the intervening and surrounding stroma while the tumor foci regularly exhibited a considerably lower degree of infiltration. 2) The main components of the cellular stromal reaction were monocytes/macrophages, occurring in high numbers in more than 80%, and T4 cells, which were observed in high numbers in 60% of all analyzed tumors. 
3) While 2/3 of all cases showed moderate numbers of T8 lymphocytes, the B lymphocytes and natural killer cells generally were encountered in very low numbers or were nearly absent from the lymphoreticular infiltrates. Conclusions on the functional significance of the tumor-infiltrating cells cannot be drawn from an in situ histological study alone, but the preferential intrastromal accumulation of most cells seems not to be indicative of an intensive host defense against clinically detectable human breast cancers. abstract_id: PUBMED:8302815 Medullary carcinoma of the breast. Identification of lymphocyte subpopulations and their significance. Fifty-two infiltrating breast carcinomas with medullary features (BCMF) were studied immunohistochemically to determine the immunophenotype of the mononuclear tumor inflammatory cells (MTIC) in formalin-fixed, paraffin-embedded material. The neoplasms were also examined for Epstein-Barr virus (EBV) DNA by the polymerase chain reaction (PCR). BCMF were independently classified as medullary carcinoma (MC) or infiltrating ductal carcinoma (IDC) by six observers according to the criteria of Pedersen et al. DNA from 35 BCMF was successfully amplified using PCR, but all were negative for EBV DNA. These included, by 4/6 consensus diagnosis, 16 MC, 18 IDC, and one BCMF which failed to achieve consensus diagnosis. MTIC were present to a mild degree in 19 BCMF (37%) and to moderate to severe degrees in 33 (63%). MTIC were predominantly (> or = 75%) lymphocytic in 31 BCMF (13 MC, 16 IDC, two without consensus diagnostic agreement), and plasmacytic in 10 (six MC, four IDC); equal proportions of lymphocytes and plasma cells occurred in 11 (six MC, five IDC). Lymphocytic MTIC were mostly CD45RO+/CD3+ T-cells in nearly all cases, and showed a predominant CD3+/CD4+ and CD3+/CD4- immunophenotype in 36% and 64% of cases, respectively. Natural killer cells (CD57+) and histiocytes (MAC 387+) were virtually absent. The number, cell type, and T-cell subsets of the MTIC were unrelated to consensus diagnosis, axillary lymph node status, or overall survival. EBV is unassociated with MC, despite the histologic similarities of MC to EBV-associated lymphoepithelial lesions of other organs.(ABSTRACT TRUNCATED AT 250 WORDS) Answer: Yes, the tumor infiltrating natural killer cell (NK-TILs) count in infiltrating ductal carcinoma (IDC) of the breast is prognostically significant. A study investigating the prognostic significance of CD56+NK-TIL count in IDC of the breast found that a low intratumoral CD56+ count was significantly and inversely associated with tumor grade, stage, and lymph node status. Additionally, it had a significant and direct association with response to treatment, indicating a good prognosis. Patients with a low intratumoral CD56+ count had better survival and a 0.52 fold lower death rate compared to patients with a high CD56+ intratumoral count. However, the association of survival with low CD56 stromal count as compared to high CD56 stromal count was insignificant (PUBMED:24870789). This finding suggests that while NK-TIL count is a significant predictor of prognosis, it may not be sufficient alone for predicting the outcome, considering the crosstalk between NK-TILs and other immune infiltrating TILs. The study emphasizes the complexity of the tumor microenvironment and the interactions between different immune cells that may influence the prognosis of IDC patients.
Instruction: Emergent operation for isolated severe traumatic brain injury: Does time matter? Abstracts: abstract_id: PUBMED:28728461 Rehabilitation of emergent awareness of errors post traumatic brain injury: A pilot intervention. Impaired awareness of errors is common following traumatic brain injury (TBI) and can be a barrier to successful rehabilitation. The objective of this study was to develop and evaluate a computer-based intervention programme aimed at improving error awareness in individuals with TBI. A further aim was to explore its effects on metacognitive awareness and variability of performance. Participants were 11 individuals with TBI and impaired error awareness who performed a sustained attention task twice-weekly for four weeks. The intervention consisted of audio-visual feedback-on-errors during the sustained attention task. Six participants received audio-visual feedback-on-error, five did not receive feedback. Emergent and metacognitive awareness were measured pre- and post-intervention. Between-groups comparisons of emergent awareness from pre- to post-intervention showed that audio-visual feedback-on-error improved emergent awareness compared to no feedback-on-error. Some changes in metacognitive awareness of executive behaviours as a result of feedback were observed. Audio-visual feedback-on-error improved emergent awareness in individuals with TBI following a four-week/eight-session intervention. This improvement was not observed in the no-feedback group. This pilot intervention is not a stand-alone treatment but it has potential to be usefully incorporated into cognitive or clinical rehabilitation programmes to improve emergent awareness. abstract_id: PUBMED:26317818 Emergent operation for isolated severe traumatic brain injury: Does time matter? Background: It remains unclear whether the timing of neurosurgical intervention impacts the outcome of patients with isolated severe traumatic brain injury (TBI). We hypothesized that a shorter time between emergency department (ED) admission to neurosurgical intervention would be associated with a significantly higher rate of patient survival. Methods: Our institutional trauma registry was queried for patients (2003-2013) who required an emergent neurosurgical intervention (craniotomy, craniectomy) for TBI within 300 minutes after the ED admission. We included patients with altered mental status upon presentation in the ED (Glasgow Coma Scale [GCS] score < 9). Patients with associated severe injuries (Abbreviated Injury Scale [AIS] score ≥ 2) in other body regions were excluded. In-hospital mortality of patients who underwent surgery in less than 200 minutes (early group) was compared with those who underwent surgery in 200 minutes or longer (late group) using univariate and multivariate analyses. Results: A total of 161 patients were identified during the study time frame. Head computed tomographic scan demonstrated subdural hematoma in 85.8%, subarachnoid hemorrhage in 55.5%, and equal numbers of epidural hematoma and intraparenchymal hemorrhage in 22.6%. Median time between ED admission and neurosurgical intervention was 133 minutes. In univariate analysis, a significantly lower in-hospital mortality rate was identified in the early group (34.5% vs. 59.1%, p = 0.03). After adjusting for clinically important covariates in a logistic regression model, early neurosurgical intervention was significantly associated with a higher odds of patient survival (odds ratio, 7.41; 95% confidence interval, 1.66-32.98; p = 0.009). 
Conclusion: Our data suggest that the survival rate of isolated severe TBI patients who required an emergent neurosurgical intervention could be time dependent. These patients might benefit from expedited process (computed tomographic scan, neurosurgical consultation, etc.) to shorten the time to surgical intervention. Level Of Evidence: Prognostic study, level IV. abstract_id: PUBMED:27704406 White matter abnormalities are associated with overall cognitive status in blast-related mTBI. Blast-related mild traumatic brain injury (mTBI) is a common injury of the Iraq and Afghanistan Wars. Research has suggested that blast-related mTBI is associated with chronic white matter abnormalities, which in turn are associated with impairment in neurocognitive function. However, findings are inconsistent as to which domains of cognition are affected by TBI-related white matter disruption. Recent evidence that white matter abnormalities associated with blast-related mTBI are spatially variable raises the possibility that the associated cognitive impairment is also heterogeneous. Thus, the goals of this study were to examine (1) whether mTBI-related white matter abnormalities are associated with overall cognitive status and (2) whether white matter abnormalities provide a mechanism by which mTBI influences cognition. Ninety-six Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OEF) veterans were assigned to one of three groups: no-TBI, mTBI without loss of consciousness (LOC) (mTBI-LOC), and mTBI with LOC (mTBI + LOC). Participants were given a battery of neuropsychological tests that were selected for their sensitivity to mTBI. Results showed that number of white matter abnormalities was associated with the odds of having clinically significant cognitive impairment. A mediation analysis revealed that mTBI + LOC was indirectly associated with cognitive impairment through its effect on white matter integrity. These results suggest that cognitive difficulties in blast-related mTBI can be linked to injury-induced neural changes when taking into account the variability of injury as well as the heterogeneity in cognitive deficits across individuals. abstract_id: PUBMED:35535874 Neuroprotection and neuroregeneration: roles for the white matter. Efficient strategies for neuroprotection and repair are still an unmet medical need for neurodegenerative diseases and lesions of the central nervous system. Over the last few decades, a great deal of attention has been focused on white matter as a potential therapeutic target, mainly due to the discovery of the oligodendrocyte precursor cells in the adult central nervous system, a cell type able to fully repair myelin damage, and to the development of advanced imaging techniques to visualize and measure white matter lesions. The combination of these two events has greatly increased the body of research into white matter alterations in central nervous system lesions and neurodegenerative diseases and has identified the oligodendrocyte precursor cell as a putative target for white matter lesion repair, thus indirectly contributing to neuroprotection. This review aims to discuss the potential of white matter as a therapeutic target for neuroprotection in lesions and diseases of the central nervous system. 
Pivotal conditions are discussed, specifically multiple sclerosis, as a white matter disease; spinal cord injury, the acute lesion of a central nervous system component where white matter prevails over the gray matter; and Alzheimer's disease, where the white matter was considered an ancillary component until recently. We first describe oligodendrocyte precursor cell biology and developmental myelination, and its regulation by thyroid hormones, then briefly describe white matter imaging techniques, which are providing information on white matter involvement in central nervous system lesions and degenerative diseases. Finally, we discuss pathological mechanisms which interfere with myelin repair in adulthood. abstract_id: PUBMED:29213379 What matters in white matter dementia? Dementia studies have primarily focused on disorders of the cerebral cortex and subcortical gray matter, which originated the concepts of cortical and subcortical dementias, respectively. Dementia related mainly to cerebral white matter has received less attention. We present five different cases, each one illustrative of a dementia subtype that could be assigned under the category of 'white matter dementia': CADASIL, progressive subcortical gliosis, progressive multifocal leucoencephalopathy, normal-pressure hydrocephalus and brain injury. In addition, recent clinical and scientific literature on white matter dementia was reviewed. The combination of exuberant psychiatric symptoms and personality changes (mainly apathy, but also disinhibition) with neurological signs (pyramidal alone or associated with extrapyramidal signs, ataxia and urinary incontinence) and with specific cognitive impairment (mentioned above) should strongly raise the possibility of a white-matter dementia, instead of a cortical or subcortical form of dementia. abstract_id: PUBMED:35768852 White matter pathology in Alzheimer's transgenic mice with chronic exposure to low-level ambient fine particulate matter. Background: Air pollution, especially fine particulate matter (PM), can cause brain damage, cognitive decline, and an increased risk of neurodegenerative disease, especially Alzheimer's disease (AD). Typical pathological findings of amyloid and tau protein accumulation have been detected in the brain after exposure in animal studies. However, these observations were based on high levels of PM exposure, which were far from the WHO guidelines and those present in our environment. In addition, white matter involvement by air pollution has been less reported. Thus, this experiment was designed to simulate the true human world and to discuss the possible white matter pathology caused by air pollution. Results: 6-month-old female 3xTg-AD mice were divided into exposure and control groups and housed in the Taipei Air Pollutant Exposure System (TAPES) for 5 months. The mice were subjected to the Morris water maze test after exposure and were then sacrificed with brain dissection for further analyses. The mean mass concentration of PM2.5 during the exposure period was 13.85 μg/m3. After exposure, there was no difference in spatial learning function between the two groups, but there was significant decay of memory in the exposure group. Significantly decreased total brain volume and more neuronal death in the cerebral and entorhinal cortex and demyelination of the corpus callosum were noted by histopathological staining after exposure. However, there was no difference in the accumulation of amyloid or tau on immunohistochemistry staining.
For the protein analysis, amyloid was detected at significantly higher levels in the cerebral cortex, with lower expression of myelin basic protein in the white matter. A diffusion tensor imaging study also revealed insults in multiple white matter tracts, including the optic tract. Conclusions: This pilot study showed that even chronic exposure to low PM2.5 concentrations still caused brain damage, such as gross brain atrophy, cortical neuron damage, and multiple white matter tract damage. Typical amyloid cascade pathology did not appear prominently in the vulnerable brain region after exposure. These findings imply that multiple pathogenic pathways are involved in air pollution-induced brain injury, and the optic nerve may be another direct invasion route in addition to the olfactory nerve. abstract_id: PUBMED:31324309 Preterm brain injury: White matter injury. Despite the advances in neonatal intensive care, the preterm brain remains vulnerable to white matter injury (WMI) and disruption of normal brain development (i.e., dysmaturation). Compared to severe cystic WMI encountered in the past decades, contemporary cohorts of preterm neonates experience milder WMIs. More than destructive lesions, disruption of the normal developmental trajectory of cellular elements of the white and the gray matter occurs. In the acute phase, in response to hypoxia-ischemia and/or infection and inflammation, multifocal areas of necrosis within the periventricular white matter involve all cellular elements. Later, chronic WMI is characterized by diffuse WMI with aberrant regeneration of oligodendrocytes, which fail to mature to myelinating oligodendrocytes, leading to myelination disturbances. Complete neuronal degeneration classically accompanies necrotic white matter lesions, while altered neurogenesis, represented by a reduction of the dendritic arbor and synapse formation, is observed in response to diffuse WMI. Neuroimaging studies now provide more insight into assessing both injury and dysmaturation of both gray and white matter. Preterm brain injury remains an important cause of neurodevelopmental disabilities, which are still observed in up to 50% of the preterm survivors and take the form of a complex combination of motor, cognitive, and behavioral concerns. abstract_id: PUBMED:30379639 The optimal choices of animal models of white matter injury. White matter injury, the most common neurological injury in preterm infants, is a major cause of chronic neurological morbidity, including cerebral palsy. Although there has been great progress in the study of the mechanism of white matter injury in newborn infants, its pathogenesis is not entirely clear, and further treatment approaches are required. Animal models are the basis of study in pathogenesis, treatment, and prognosis of white matter injury in preterm infants. Various species have been used to establish white matter injury models, including rodents, rabbits, sheep, and non-human primates. Small animal models allow cost-effective investigation of molecular and cellular mechanisms, while large animal models are particularly attractive for pathophysiological and clinical-translational studies. This review focuses on the features of commonly used white matter injury animal models, including their modelling methods, advantages, and limitations, and addresses some clinically relevant animal models that allow reproduction of the insults associated with clinical conditions that contribute to white matter injury in human infants.
abstract_id: PUBMED:30040722 White Matter and Cognition in Traumatic Brain Injury. Traumatic brain injury (TBI) is a leading cause of disability and produces a wide range of cognitive, emotional, and physical consequences. The impact of TBI on cognition is among the most important questions in this field but remains incompletely understood. The immediate cognitive effects of concussion, while usually short-lived, may be profound and lasting in some individuals, and long-term sequelae of TBI may include dementia of several varieties including post-traumatic leukoencephalopathy, chronic traumatic encephalopathy, and Alzheimer's disease. Whereas the etiopathogenesis of cognitive dysfunction after TBI remains uncertain, a reasonable point to begin is a focus on the white matter of the brain, where the neuropathological lesion known as diffuse axonal injury (DAI) is routinely identified. White matter is not typically accorded the significance granted to cortical gray matter in discussions of cognitive dysfunction and dementia, but increasing evidence is accumulating to suggest that cognitive decline after TBI is a direct result of white matter injury, and that lesions in this brain component are crucial in the sequence of events leading ultimately to dementia of several types. In this review, we consider the topic of white matter and cognition in TBI, beginning with DAI and proceeding to the role of inflammation in the pathogenesis of cognitive dysfunction and dementia that can follow. A brief review of possible therapeutic options will also be offered, including the use of anti-inflammatory agents and the exploitation of white matter plasticity, to treat acute and post-acute injuries, and lower the incidence of dementia resulting from TBI. abstract_id: PUBMED:36357662 The Ties That Bind: Glial Transplantation in White Matter Ischemia and Vascular Dementia. White matter injury is a progressive vascular disease that leads to neurological deficits and vascular dementia. It comprises up to 30% of all diagnosed strokes, though up to ten times as many events go undiagnosed in early stages. There are several pathologies that can lead to white matter injury. While some studies suggest that white matter injury starts as small infarcts in deep penetrating blood vessels in the brain, others point to the breakdown of endothelial function or the blood-brain barrier as the primary cause of the disease. Whether due to local endothelial or BBB dysfunction, or to local small infarcts (or a combination), white matter injury progresses, accumulates, and expands from preexisting lesions into adjacent white matter to produce motor and cognitive deficits that present as vascular dementia in the elderly. Vascular dementia is the second leading cause of dementia, and white matter injury-attributed vascular dementia represents 40% of all diagnosed dementias and aggravates Alzheimer's pathology. Despite the advances in the last 15 years, there are few animal models of progressive subcortical white matter injury or vascular dementia. This review will discuss recent progress in animal modeling of white matter injury and the emerging principles to enhance glial function as a means of promoting repair and recovery. Answer: Yes, time does matter in the context of emergent operations for isolated severe traumatic brain injury (TBI). A study hypothesized that a shorter time between emergency department (ED) admission to neurosurgical intervention would be associated with a significantly higher rate of patient survival. 
The results of the study supported this hypothesis, showing that patients who underwent surgery in less than 200 minutes after ED admission (early group) had a significantly lower in-hospital mortality rate compared to those who underwent surgery in 200 minutes or longer (late group). After adjusting for clinically important covariates in a logistic regression model, early neurosurgical intervention was significantly associated with a higher odds of patient survival (odds ratio, 7.41; 95% confidence interval, 1.66-32.98; p = 0.009). These findings suggest that the survival rate of isolated severe TBI patients who required an emergent neurosurgical intervention could be time-dependent, and expedited processes to shorten the time to surgical intervention might benefit these patients (PUBMED:26317818).
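As an illustrative aside, not drawn from the abstract itself, the direction of this result can be checked from the reported mortality percentages alone. The short Python sketch below (the helper name odds is ours) turns the in-hospital mortality of the early group (34.5%) and the late group (59.1%) into crude odds of survival and takes their ratio; the crude value of roughly 2.7 is smaller than the adjusted odds ratio of 7.41 in PUBMED:26317818, which comes from a logistic regression controlling for covariates, but points the same way.

    # Illustrative back-of-the-envelope check, not a reanalysis of patient-level data.
    # Mortality percentages are taken from PUBMED:26317818; the published OR of 7.41
    # is adjusted for covariates, so the crude value below is expected to differ.

    def odds(p):
        """Convert a probability into odds."""
        return p / (1.0 - p)

    mortality_early = 0.345  # surgery < 200 minutes after ED admission
    mortality_late = 0.591   # surgery >= 200 minutes after ED admission

    crude_survival_or = odds(1 - mortality_early) / odds(1 - mortality_late)
    print(f"Crude (unadjusted) odds ratio for survival, early vs late: {crude_survival_or:.2f}")
    # Prints roughly 2.74, in the same direction as the adjusted OR of 7.41.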
Instruction: Is the risk of adult coeliac disease causally related to cigarette exposure? Abstracts: abstract_id: PUBMED:12923372 Is the risk of adult coeliac disease causally related to cigarette exposure? Objective: Previous studies have shown an association between cigarette smoking and coeliac disease, but it has yet to be established whether this relationship is causal. The aim of this study was to assess causality using the Bradford Hill criteria. Methods: A matched case-control study using a questionnaire to establish a detailed smoking history for 138 incident cases of adult coeliac disease and 276 age-matched and sex-matched controls. Subjects were categorized according to their active cigarette exposure prior to diagnosis of the matched case, and odds ratios and tests for linear trends were calculated. Results: At the time of diagnosis, 10% of cases and 30% of controls were current smokers (odds ratio, 0.21 and 95% confidence interval, 0.11-0.40 for coeliac disease in current smokers versus never smokers). A biological gradient was demonstrated for total, recent and current cigarette exposure. The greatest risk reduction related to current exposure (odds ratio, 0.15, and 95% confidence interval, 0.06-0.37 for coeliac disease in current heavy smokers versus never smokers). Conclusions: This study strengthens the case for a causal relationship between smoking and coeliac disease by demonstrating a strong, temporally appropriate and dose-dependent effect, thus meeting the Bradford Hill criteria. This suggests that cigarette smoking truly protects against the development of adult coeliac disease. abstract_id: PUBMED:8881810 Adult coeliac disease and cigarette smoking. Background: Genetic predisposition and gliadin exposure are known to be crucial factors in the development of coeliac disease. Circumstantial evidence suggests that other unidentified environmental factors may also be of pathogenetic importance. Aim: To define the relation between cigarette smoking and the risk of development of symptomatic adult onset coeliac disease. Subjects: Eighty six recently diagnosed adult coeliac disease patients and 172 controls matched for age and sex. Method: Matched case control study, using a simple questionnaire to determine smoking history, and in particular smoking status at the time of diagnosis of coeliac disease. Results: At the time of diagnosis, the proportion of current smokers was 7% in the coeliac group, and 32.6% in the control group, giving a matched odds ratio of 0.15 (95% confidence intervals 0.06, 0.38). The difference could not be accounted for by social class, nor by coeliac patients giving up smoking after the onset of symptoms as most non-smokers in the coeliac group had never smoked. Conclusion: Cigarette smoking, or a factor closely linked to it, seems to exert a major protective effect against the development of symptomatic adult onset coeliac disease. The implication is that gliadin exposure is not the only important environmental factor involved in the pathogenesis of this condition. abstract_id: PUBMED:11559646 Duration of gluten exposure in adult coeliac disease does not correlate with the risk for autoimmune disorders. Background And Aims: Duration of gluten exposure seems to predispose adolescents with coeliac disease to autoimmune diseases. In a retrospective cohort study, we assessed the relationship between autoimmune disorders and actual gluten exposure in patients in whom coeliac disease was diagnosed in adult life (> or = 16 years). 
Methods: We screened for the presence of autoimmunity in 605 controls (16-84 years) and 422 patients (16-84 years), all of whom had been on gluten withdrawal for at least one year (median follow up 9.5 years). A logistic regression analysis, setting the prevalence of autoimmunity as the dependent variable, was employed to control for independent covariates as predictors of the risk of autoimmunity. Results: The prevalence of autoimmunity was threefold higher (p < 0.00001) in patients than in controls. Mean duration of gluten exposure was 31.2 and 32.6 years for patients with or without autoimmunity. Logistic regression showed that increased age at diagnosis of coeliac disease was related to the prevalence of autoimmune disease while "actual gluten exposure" which takes into account diet compliance, follow up, and age at diagnosis of autoimmune disorders were not predictive for the risk of developing autoimmune diseases (odds ratio 0.82 per year). Conclusion: The prevalence of autoimmune diseases in patients with a late coeliac disease diagnosis does not correlate with duration of gluten intake. Early exposure to gluten may modify the immunological response. Gluten withdrawal does not protect patients with a late diagnosis from autoimmune diseases. abstract_id: PUBMED:12229976 Cigarette smoking and adult coeliac disease. Background: While coeliac disease is clearly induced by dietary gluten ingestion in genetically susceptible individuals, other environmental factors may influence the onset of disease. Two studies have suggested that cigarette smoking has a protective role, but a third has not. Methods: We examined the relationship between cigarette smoking and coeliac disease in individuals with coeliac disease diagnosed in adulthood from two large population-based disease registers and age and sex-matched controls from local general practitioner lists. Participants were mailed a three-page lifestyle and general health questionnaire. Smoking habits of coeliacs were compared with controls and with habits reported in the Health Survey for England 1995. Results: An inverse association between current smoking and adult coeliac disease was identified (odds ratio: 0.77 (95% CI 0.56-1.06)) and remained when comparing ever smoked versus never smoked (odds ratio: 0.83 (0.68-1.00)). When the smoking habits of the coeliacs were compared with the national figures, the number of coeliacs who currently smoked was 40% lower than expected (smoking ratio 0.60, 0.46-0.78). This inverse association was accounted for by the behaviour of the 35-54-year age group (odds ratio for ever smoked 0.67 (0.51-0.89)). There was no association with having ever smoked in the younger age group (odds ratio: 1.44 (0.75-2.78)) or the older group (odds ratio: 0.92 (0.67-1.26)). Conclusions: There was an inverse association between adult coeliac disease and cigarette smoking which was accounted for by middle-aged coeliacs having never smoked. These results are consistent with an age-dependent interaction between cigarette smoking and the other environmental factors implicated in coeliac disease, including gluten. abstract_id: PUBMED:26287738 Isotretinoin Exposure and Risk of Celiac Disease. Background: Isotretinoin (13-cis retinoic acid) is a metabolite of vitamin A and has anti-inflammatory and immunoregulatory effects; however, a recent publication by DePaolo et al. demonstrated that in the presence of IL-15, retinoic acid can act as an adjuvant and promote inflammation against dietary proteins. 
Objective: To evaluate the risk of overt and latent celiac disease (CD) among users of isotretinoin. Material And Methods: Medical records of patients from 1995 to 2011 who had a mention of isotretinoin in their records (N = 8393) were searched for CD diagnosis using ICD-9-CM codes. Isotretinoin exposure was compared across overt CD patients and their age- and gender-matched controls from the same pool. To evaluate the risk of latent CD with isotretinoin exposure, patients were overlapped with a community-based list of patients with waste serum samples that were tested for CD serology, excluding those with overt CD (2006-2011). Isotretinoin exposure was defined as the use of isotretinoin prior to CD diagnosis or serology. Results: Of 8393 patients, 25 had a confirmed CD diagnosis. Compared to matched controls (N = 75), isotretinoin exposure was not significantly different between overt CD patients and controls (36% versus 39%, respectively; P = 0.712). Likewise, latent CD defined as positive serology was not statistically different between isotretinoin-exposed (N = 506) and non-exposed (N = 571) groups (1.8% versus 1.4%, respectively; P = 0.474). Conclusions: There was no association between isotretinoin use and risk of either overt or latent CD. abstract_id: PUBMED:28471835 Early Life Exposure, Lifestyle, and Comorbidity as Risk Factors for Microscopic Colitis: A Case-Control Study. Background: The pathophysiology of microscopic colitis (MC) is not fully understood. A dysregulation of the adaptive immune response has been hypothesized, of which the maturation and function is imprinted in early life. Various other factors (e.g., hormonal factors) have also been found to be associated, sometimes with minimal or conflicting evidence. The aims of this study were to evaluate whether an exposure to (microbial) agents in early life might be protective for MC development and to assess the role of several less well-established risk factors in one study. Methods: A case-control study was conducted including MC cases diagnosed in the Southern part of the Netherlands between 2000 and 2012. Cases were matched to non-MC controls from the same area, based on gender and year of birth, and assigned the same index date. All subjects filled out the same study questionnaire on various risk factors. Results: In total, 171 MC cases and 361 controls were included. In the multivariable logistic regression analysis, current smoking (odds ratio 6.23, 95% confidence interval, 3.10-12.49), arthrosis, and a cardiac disorder were associated with MC. No association was observed for factors such as early life exposure to microbial antigens, passive smoking, rheumatoid arthritis, celiac disease, or hormonal factors. Conclusions: Early life exposure to microbial antigens and increased hormonal exposure were not found to be protective for MC. Current smoking seems to be an incontestable risk factor for MC. Therefore, exposure to environmental risk factors later in life may be of relevance in MC pathogenesis and warrants further investigation.
Participants: Sixty consecutive incident cases. Main Outcome Measures: (i) Proportion of cases who were EMA-negative; (ii) comparison of clinical and laboratory variables at diagnosis for EMA-positive and EMA-negative subjects. Results: Fifteen subjects (25%, 95% CI 15-38%) were EMA negative, of whom only two were IgA deficient. There was clinical evidence in all 15 patients and histological evidence in 13 patients of a response to gluten withdrawal. No significant differences were found between EMA-positive and EMA-negative subjects with respect to histological features, age, gender, clinical manifestations, concurrent autoimmune disorders, family history of coeliac disease, or haemoglobin and albumin concentrations at diagnosis. However, EMA-negative status at diagnosis was associated strongly with current or recent cigarette smoking (OR 7.0, 95% CI 1.7-31.5, P= 0.003). Conclusions: A substantial minority of patients with otherwise typical coeliac disease are EMA negative, and most of these are IgA replete. The value of EMA as a screening tool is therefore limited. EMA status in untreated coeliac disease correlates strongly with cigarette smoking history: this may be of pathogenic significance, given the previously demonstrated association between smoking and the risk of coeliac disease. abstract_id: PUBMED:11111775 Role of lifestyle factors in the pathogenesis of osteopenia in adult coeliac disease: a multivariate analysis. Objectives: Coeliac disease is frequently complicated by alterations of bone mass and mineral metabolism. In this condition the degree of malabsorption is a major determinant of bone loss. However, the role of lifestyle factors such as exposure to sunlight, physical activity and cigarette smoking, which have been demonstrated to influence bone mass and mineral metabolism in other conditions, has never been investigated in coeliac disease. Design: We evaluated the impact of potential co-factors on bone homeostasis in coeliac disease by means of a multivariate analysis model. Methods: Thirty-nine adult patients with untreated coeliac disease (18 symptomatic, 21 subclinical/silent) were studied. Bone mineral density was measured by dual-energy X-ray absorptiometry at lumbar spine and femoral neck levels. Age at diagnosis, gender, duration of symptoms and severity of symptoms were recorded. Nutritional status, cigarette smoking habit, exposure to sunlight, and physical activity were evaluated. The impact of each independent variable on lumbar and femoral bone mineral density was evaluated by means of a multivariate analysis model. Results: The severity of symptoms and nutritional status were significant sources of variability of both lumbar and femoral bone mineral density. Physical activity was a significant source of variability at femoral level, while gender was at lumbar level. Cigarette smoking habit and exposure to sunlight showed no significant effect on bone mineral density. Conclusions: Gender, malnutrition, global severity of the disease and physical activity are important co-factors in the pathogenesis of bone loss in coeliac disease. abstract_id: PUBMED:22584218 The incidence and risk of celiac disease in a healthy US adult population. Objectives: Celiac disease (CD) is an increasingly common disease that may affect as many as 1% of the North American population. Recent population-based data suggest a substantial increase in the prevalence of CD over the last several decades. 
Several factors are hypothesized as possible disease triggers including intercurrent illnesses, such as gastroenteritis, surgeries, and trauma. We used the active duty US military, a unique healthy worker population with essentially complete medical diagnostic coding, as an opportunity to describe trends in CD and deployment-related risk factors. Methods: Using electronic medical encounter data (1999-2008) on active duty US military (over 13.7 million person-years), a matched, nested case-control study describing the epidemiology and risk determinants of CD (based on ≥2 ICD-9 medical encounters) was conducted. Incidence and duration of CD-related medical care were estimated, and conditional logistic regression was utilized to evaluate CD risk following infectious gastroenteritis (IGE) occurring within 3 years before CD diagnosis while controlling for other risk factors. Results: A total of 455 incident cases of CD were identified and age, gender, and time matched to 1,820 controls. The incidence of CD increased five-fold from 1.3 per 100,000 in 1999 to 6.5 per 100,000 in 2008, with the highest rates of increase among those over 34 years of age (average annual increase of 0.8 cases per 100,000). A total of 172 IGE episodes, predominately of "viral etiology" (60.5%), were documented. In multivariate models, a significant association between IGE and CD was found (Odds ratio (OR): 2.06, 95% confidence interval (CI) 1.43, 2.97). Risk generally increased with temporal proximity to, and non-viral etiology of, exposure. Other notable risk factors for CD in multivariate models were Caucasian race (OR: 3.1, P<0.001), non-Army service (OR: 1.5, P=0.001), and greater than a high-school education (OR: 1.3, P=0.05). Conclusions: Incidence of CD diagnosis in the US military is increasing, particularly among those in the fourth and fifth decades of life and appears higher than other population-based estimates. An association between antecedent IGE and risk of CD was noted, but the potential for exposure misclassification cannot be ruled out and further study is needed to link pathogen-specific exposure to incident CD anti-gluten antibody development or symptom onset. abstract_id: PUBMED:36754128 Environmental pesticide exposure and the risk of irritable bowel syndrome: A case-control study. The agricultural model in southern Spain is highly productive, mainly due to the intensive cultivation under plastic. Despite strict pesticide regulation, human exposure to pesticides in the environment has been connected to an increase in diseases such as celiac disease. Certain pesticides have also been associated to the disruption of the intestinal microbiota, which has been tied to the development of irritable bowel syndrome (IBS). A case-control study was conducted in Andalusia, south Spain, to assess the prevalence and risk of IBS related to pesticide exposure. This research found a high prevalence of IBS in Andalusia between 2000 and 2021 in areas with high pesticide exposure using agronomic criteria. Furthermore, the odds ratio for IBS was significantly higher in the population with high pesticide exposure. This study suggests that pesticides may be involved in IBS, whereas more research is needed to determine the role of pesticides in IBS symptomatology. 
Answer: The relationship between cigarette smoking and the risk of adult coeliac disease has been investigated in several studies, and there is evidence to suggest that cigarette smoking may have a protective effect against the development of coeliac disease in adults. One study found that at the time of diagnosis, a smaller percentage of coeliac disease cases were current smokers compared to controls, and the odds ratio indicated a strong protective effect of smoking against coeliac disease (PUBMED:12923372). This study also demonstrated a biological gradient for total, recent, and current cigarette exposure, with the greatest risk reduction related to current exposure, thus meeting the Bradford Hill criteria for causality. Another study also reported a lower proportion of current smokers in the coeliac group compared to controls, suggesting that cigarette smoking, or a factor closely linked to it, seems to exert a major protective effect against the development of symptomatic adult onset coeliac disease (PUBMED:8881810). Further supporting this association, a study found an inverse relationship between current smoking and adult coeliac disease, particularly in the middle-aged coeliac group who had never smoked (PUBMED:12229976). Additionally, a retrospective analysis of adults with newly diagnosed coeliac disease indicated that EMA-negative status at diagnosis was strongly associated with current or recent cigarette smoking (PUBMED:11434592). However, it is important to note that while these studies suggest a protective association between smoking and coeliac disease, they do not establish a direct causal relationship. The exact mechanism by which smoking may protect against coeliac disease is not fully understood, and other factors may also be involved in the pathogenesis of the condition. In conclusion, the evidence from these studies suggests that there is a significant inverse association between cigarette smoking and the risk of adult coeliac disease, which may indicate a protective effect. However, causality cannot be definitively established without further research to understand the underlying mechanisms and to rule out other potential confounding factors.
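As a hedged illustration rather than part of the abstracts, the protective direction of these findings follows directly from the exposure proportions reported in PUBMED:12923372: 10% of cases versus 30% of controls were current smokers at diagnosis. The Python sketch below (variable names are ours) computes the crude odds ratio from those two proportions; it lands near 0.26, close to, but not identical with, the published matched odds ratio of 0.21, which also accounts for the matched case-control design.

    # Crude odds ratio from exposure proportions only; an OR below 1 means the
    # exposure (current smoking) is less common among cases than among controls.
    # Proportions are from PUBMED:12923372; the published matched OR is 0.21.

    def odds(p):
        return p / (1.0 - p)

    p_cases_exposed = 0.10     # current smokers among coeliac cases at diagnosis
    p_controls_exposed = 0.30  # current smokers among matched controls

    crude_or = odds(p_cases_exposed) / odds(p_controls_exposed)
    print(f"Crude odds ratio for current smoking, cases vs controls: {crude_or:.2f}")
    # Prints about 0.26: the odds of being a current smoker are roughly a quarter
    # as high in cases as in controls, consistent with a protective association.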
Instruction: Does helicobacter pylori infection have influence on outcome of laparoscopic sleeve gastrectomy for morbid obesity? Abstracts: abstract_id: PUBMED:24862673 Does helicobacter pylori infection have influence on outcome of laparoscopic sleeve gastrectomy for morbid obesity? Introduction: Among the surgical procedures for treatment of morbid obesity, laparoscopic sleeve gastrectomy has known widespread diffusion in the last years, although it is not free from significant morbidity rates. Aim of this work is to evaluate the incidence of Helicobacter pylori (HP) infection on the postoperative outcome of patients undergoing laparoscopic sleeve gastrectomy. Methods: Between January 2008 and December 2013, 184 patients (65 males, 119 females), mean age 35.8 ± 5.7 years, affected with morbid obesity, mean BMI 46.6 ± 6.7, underwent laparoscopic sleeve gastrectomy. All the specimens at the end of the operation were analysed by the same pathologist. Histological grading was based on the Sidney classification. Results: Seventy-two of the patients (39.1%) were HP positive, while 112 (60.9%) were negative. No significant differences were observed between the HP+ and HP- group in terms of age, sex, weight, BMI, incidence of comorbidities and duration of follow-up. All the operations were completed via laparoscopic approach. No mortality was observed. Postoperative complications occurred in 5 patients (2.7%): three leaks (1.6%), all in the HP- group and two bleedings (1.1%), one in the HP+ and one in the HP- group. In two cases a reintervention was necessary. No significant differences were observed in the morbidity rates between the two groups. Overall mean excess weight loss at 6 months, 12 months and 24 months was respectively 47.4 ± 11.3%, 61.1 ± 12.4% and 68.4 ± 13.5%, with no significant differences between the HP+ and HP- groups. Conclusions: HP infection seems not to influence postoperative outcome of patients operated of laparoscopic sleeve gastrectomy. abstract_id: PUBMED:37798512 The Impact of Helicobacter pylori on Laparoscopic Sleeve Gastrectomy Postoperative Complications: a Systematic Review and Meta-analysis. We aimed to assess the impact of Helicobacter pylori infection on postoperative outcomes following laparoscopic sleeve gastrectomy (LSG). We searched Cochrane, Scopus, and PubMed databases, reviewed 1026 studies, and thoroughly analyzed 42 of them. Our final analysis included 13 studies comprising 6199 patients. We found that H. pylori infection was correlated with higher rates of risk of overall postoperative complications (OR 1.56; 95% CI 1.13, 2.16; P = 0.007) and staple line leak (OR 1.89; 95% CI 1.05, 3.41; P = 0.03). There were no significant differences in hospital length of stay or postoperative bleeding rates. Despite observed correlations between H. pylori positivity in gastric specimen and postoperative complications in LSG, definitive causation remains elusive, emphasizing the need for prospective randomized studies evaluating the effect of preoperative H. pylori screening and eradication. abstract_id: PUBMED:32030613 Preoperative Helicobacter pylori Screening and Treatment in Patients Undergoing Laparoscopic Sleeve Gastrectomy. Background: The role of preoperative screening and treatment of Helicobacter pylori (HP) in asymptomatic patients undergoing laparoscopic sleeve gastrectomy (LSG) remains unclear. This study aims to define the preoperative prevalence and management of HP and their effect on postoperative outcomes at our institution. 
Materials And Methods: We reviewed the medical records and surgical specimens of all LSG performed at an academic centre in Toronto, ON between 2010 and 2017. Results: Review of our institutional database identified 222 patients who underwent LSG, of whom 200 had preoperative HP screening: 18% tested positive and 15% were treated. Seven surgical specimens were HP-positive (3.2%). No association was found between preoperative HP status, treatment or HP-positive specimen and postoperative complications at 1 year. Conclusion: Although preoperative screening and treatment likely reduce the prevalence of HP in LSG specimens, our findings suggest that they may be of limited clinical value in LSG as they have little influence on surgical morbidity. abstract_id: PUBMED:32644013 Routine histopathologic examination of the resected specimen after laparoscopic sleeve gastrectomy - what can be expected? Background: Laparoscopic Sleeve Gastrectomy (LSG) is nowadays an established bariatric procedure. Although preoperative gastroscopy is recommended to rule out severe pathologies, there is little evidence about the role of routine histopathologic examination of resected specimens. We sought to identify the prevalence of histopathologically relevant findings in patients undergoing LSG and to evaluate their impact in clinical practice. Methods: A retrospective analysis on a prospectively collected dataset on patients undergoing LSG between August 2009 and May 2018 in two bariatric centers was performed. Demographic and clinical data and histopathological results were analyzed. Results: Six hundred thirteen patients were identified; mean age was 43.1 years (14-75), and average body mass index was 44.8 kg/m2 (34.4-73.9). Histopathology revealed abnormal findings in 47.97% of the patients; the most common pathology was chronic non-active or minimally to moderately active gastritis (n = 202; 32.95%). Among others, Helicobacter-associated gastritis (n = 33; 5.38%), intestinal metaplasia (n = 13; 2.12%), micronodular enterochromaffin-like cell hyperplasia (n = 2; 0.33%) and gastrointestinal stromal tumors (n = 6; 0.98%) were present. No malignancies were found. Histopathological results required a change in the postoperative management in 48 patients (7.83%). The costs of histopathological assessment ranged between 0.77% and 2.55% of the per-case payment. Conclusion: A wide range of histopathological findings occur in specimens after LSG, requiring additional therapies or surveillance in a relevant number of patients. Therefore, routine histopathological examination after LSG is recommendable.
Helicobacter pylori infection occurred in 8.6%, and the most common histopathologic abnormality was chronic gastritis in 38.9%. There was no association between H. pylori infection or inflammation and staple-line leak and/or haemorrhage. Conclusion: We conclude that inflammatory gastric conditions are unlikely to predispose patients to staple-line leaks or haemorrhages following sleeve gastrectomy and that selective pre-operative gastroscopy may be an appropriate standard of care. abstract_id: PUBMED:26001881 Outcomes in Patients with Helicobacter pylori Undergoing Laparoscopic Sleeve Gastrectomy. Background: In vertical sleeve gastrectomy (VSG), the majority of the stomach is resected and much of the tissue colonized with Helicobacter pylori and the bulk of acid producing cells are removed. In addition, the effect of H. pylori colonization of the stomach of patients undergoing stapling procedures is unclear. As a result, the need for detection and treatment of H. pylori in patients undergoing VSG is unknown. Methods: Four hundred and eighty patients undergoing VSG are the subject of this study. Three surgeons at a single institution performed the procedures. The remnant stomach was sent to pathology and tested for the presence of H. pylori using immunohistochemistry. All patients were discharged on proton pump inhibitors. Results: Of the 480 patients who underwent VSG, 52 were found to be H. pylori positive based on pathology. There was no statistically significant difference in age (p = 0.77), sex (p = 0.48), or BMI (p = 0.39) between the groups. There were 17 readmissions post-op. Five of these were in the H. pylori positive cohort. Six of these complications were classified as severe (anastomotic leak, intra-abdominal collection, or abscess), with two in the H. pylori positive cohort (Table 1). There was no statistically significant difference in the severe complication rates between the two groups (p = 0.67). There were no readmissions for gastric or duodenal ulceration or perforation. Conclusions: Our data suggests that there is no increase in early complications in patients with H. pylori undergoing VSG. If these findings are confirmed in a long-term follow-up, it would mean that preoperative H. pylori screening in patients scheduled for VSG is not necessary. abstract_id: PUBMED:25868840 Vertical sleeve gastrectomy specimens have a high prevalence of unexpected histopathologic findings requiring additional clinical management. Background: Laparoscopic vertical sleeve gastrectomy is used with increasing frequency as a therapeutic option for morbid obesity. Before the procedure, patients undergo a rigorous preoperative evaluation including double contrast upper gastrointestinal radiographic series at our institution. Patients undergoing sleeve gastrectomy are presumed to have no significant gastric pathology. Objectives: To investigate the prevalence of histopathologic findings requiring clinical follow-up in sleeve gastrectomy specimens. Setting: University Hospital, United States. Methods: Retrospective review was conducted of all primary vertical sleeve gastrectomy specimens performed for morbid obesity at our institution from July 2008 until August 2012 (N = 248). Results: Unanticipated findings warranting clinical follow-up were identified in 8.4% of cases and included cases of H. pylori gastritis, autoimmune gastritis with microcarcinoid formation, necrotizing vasculitis, and intestinal metaplasia. H. pylori was identified in 5.2% of all cases and in 33.3% of cases of gastritis. 
Neoplasms were identified at laparoscopy in 2 additional cases (0.8%). Conclusions: Surgeons and pathologists should be aware of the high prevalence of diagnoses requiring clinical follow-up in vertical sleeve gastrectomy specimens. abstract_id: PUBMED:31290105 Helicobacter Pylori Infection Prevalence and Histopathologic Findings in Laparoscopic Sleeve Gastrectomy. Introduction: Helicobacter pylori (H. pylori) is a type of bacteria that affects more than half of the world's population and has been associated with gastritis. The relationship between H. pylori and obesity is controversial. Laparoscopic sleeve gastrectomy (LSG) is the most commonly used surgery for morbidly obese patients. The aim of this study was to investigate the rate of H. pylori in patients undergoing LSG. Methods: Biopsy specimens of 32,743 patients who underwent esophagogastroduodenoscopy (EGD) and resection materials from 1257 patients who underwent LSG were examined histopathologically. The relationships between body mass index (BMI), age, gender, H. pylori infection, and intestinal metaplasia (IM) were investigated in patients with gastritis. Results: In patients undergoing EGD, the association of H. pylori infection was found to be increased in males and the elderly (p < 0.001). The presence of gastritis and IM was significantly higher with H. pylori infection (p < 0.001 and p = 0.001, respectively). H. pylori infection was significantly higher in patients over the age of 41 years (p < 0.001). There was no significant difference between the results of H. pylori before and after LSG surgery (p = 0.923). The presence of H. pylori together with gastritis and IM was found to be significant (p < 0.001). Conclusions: H. pylori infection increases with age. No significant difference was found in the examination for H. pylori before and after LSG surgery. In addition, no relationship was found between H. pylori and excess weight. However, due to the low average age of patients who underwent LSG, further studies are needed in this area. abstract_id: PUBMED:29305814 Histopathology Findings in Patients Undergoing Laparoscopic Sleeve Gastrectomy. Background: Laparoscopic sleeve gastrectomy (LSG) has gained popularity in the last 10 years for its good results in weight loss and comorbidity control. However, guidelines on the pathological examination of the specimen are lacking. The aim of this retrospective study was to determine the usefulness of the routine specimen examination when presurgery endoscopy (upper gastrointestinal endoscopy, UGIE) and multiple gastric biopsies are part of the preoperative work-up. Methods: A retrospective review of records of the patients submitted to LSG between January 2012 and August 2017 was carried out. Sex, age, histopathology findings in the presurgery endoscopy biopsies and surgical specimen, and the prevalence of Helicobacter pylori infection were analyzed. Results: A total of 925 patients entered the study group (mean age = 44.1 years, Females = 80.3%, BMI = 44.58 kg/m2). The most common histopathology pattern in the endoscopy biopsies and in the surgical specimens was inactive chronic gastritis (64.4 and 55.6%, respectively). Helicobacter pylori infection was 24.6 and 2.48%, respectively. Ninety-nine percent (n 796) of patients with non-significant endoscopy biopsy findings showed the same patterns in specimen analysis. 
Only three patients (0.3%) who had presurgery intestinal metaplasia were positive in the specimen analysis, and two cases of gastric stromal neoplasms (gastrointestinal stromal tumor and gastric leiomyoma) were found intraoperatively. Conclusion: Most of the findings are non-significant and can be predicted if UGIE plus multiple biopsies is routinely included in the bariatric work-up, with significant cost reduction. In those patients who had a significant finding prior to the surgery or intraoperatively, the pathological examination of the specimen is recommended. abstract_id: PUBMED:36372693 Histopathologic findings in laparoscopic sleeve gastrectomy: is routine full pathologic evaluation indicated? Background: Laparoscopic sleeve gastrectomy (SG) is the most commonly performed bariatric surgery. The resected gastric segment is routinely sent for pathology evaluation. No formal national recommendation exists that mandates pathology review. We proposed to study the largest histopathologic series in SG patients yet reported. Objective: The primary objective of our study was to determine whether a subgroup of patients who underwent bariatric surgery in the northeastern United States is more susceptible to having clinically significant pathologic findings that may benefit from routine histopathologic evaluation of the gastric sleeve specimen. Setting: University hospital. Methods: A retrospective electronic chart review of patients who underwent SG at a single large academic institution was performed. Patient demographics, body mass index, and histopathologic reports of the gastric specimens obtained during SG were analyzed. Results: The records of 3543 patients were reviewed. A total of 1076 patients had abnormal pathologies, including gastritis (938), follicular gastritis (98), intestinal metaplasia (25), gastrointestinal stromal tumor (12), leiomyoma (1), lymphoma (1), and other malignancy (1). Black and Hispanic patients had a higher incidence of developing gastrointestinal stromal tumor and intestinal metaplasia. A higher incidence of Helicobacter pylori infection among specimens with abnormal pathologies was noted. Conclusions: The findings of this study call into question the routine use of pathology workup in gastric specimens after SG. Our data suggest that such analysis may be warranted in certain subtypes of patients such as older Black and Hispanic patients in the northeastern United States. Answer: The influence of Helicobacter pylori (H. pylori) infection on the outcome of laparoscopic sleeve gastrectomy (LSG) for morbid obesity is a subject of debate. Some studies suggest that H. pylori infection does not significantly influence postoperative outcomes. For instance, one study found no significant differences in postoperative complications, excess weight loss, or morbidity rates between H. pylori positive and negative groups after LSG (PUBMED:24862673). Similarly, another study concluded that preoperative H. pylori screening and treatment may have limited clinical value in LSG, as they did not find an association between preoperative H. pylori status, treatment, or H. pylori-positive specimen and postoperative complications at 1 year (PUBMED:32030613). Additionally, a study found no association between H. pylori infection or inflammation and staple-line leak and/or hemorrhage following sleeve gastrectomy (PUBMED:26224375), and another suggested that there is no increase in early complications in patients with H.
pylori undergoing vertical sleeve gastrectomy (VSG) (PUBMED:26001881). However, other studies have identified a correlation between H. pylori infection and a higher risk of overall postoperative complications and staple line leak after LSG (PUBMED:37798512). Furthermore, one study found a high prevalence of unexpected histopathologic findings, including H. pylori gastritis, in sleeve gastrectomy specimens that required additional clinical management (PUBMED:25868840). Another study reported that a significant number of patients with histopathological findings after LSG required additional therapies or surveillance, suggesting the value of routine histopathological examination (PUBMED:32644013). In summary, while some studies indicate that H. pylori infection does not significantly impact the outcome of LSG for morbid obesity, others have found a correlation with increased postoperative complications. This discrepancy highlights the need for further research, including prospective randomized studies, to clarify the influence of H. pylori on LSG outcomes (PUBMED:37798512).
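As a small consistency check, offered as an illustration rather than part of the meta-analysis, pooled odds ratios such as those in PUBMED:37798512 are conventionally reported with confidence intervals that are symmetric on the log scale. The sketch below verifies that the reported intervals (OR 1.56, 95% CI 1.13-2.16 for overall complications; OR 1.89, 95% CI 1.05-3.41 for staple line leak) are indeed centred on the point estimates after log transformation.

    import math

    # Reported pooled estimates from PUBMED:37798512: (point estimate, lower CI, upper CI).
    pooled = {
        "overall postoperative complications": (1.56, 1.13, 2.16),
        "staple line leak": (1.89, 1.05, 3.41),
    }

    for outcome, (point, lo, hi) in pooled.items():
        # Under the usual log-normal approximation, the point estimate should sit
        # at the geometric mean of the interval bounds.
        log_scale_centre = math.exp((math.log(lo) + math.log(hi)) / 2)
        print(f"{outcome}: reported OR {point:.2f}, log-scale centre of CI {log_scale_centre:.2f}")
    # Both centres come out at about 1.56 and 1.89, matching the reported estimates.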
Instruction: Can preoperative duplex marking of the saphenopopliteal junction be avoided? Abstracts: abstract_id: PUBMED:18265549 Can preoperative duplex marking of the saphenopopliteal junction be avoided? Objectives: Patients undergoing saphenopopliteal junction (SPJ) surgery are currently subjected to two duplex scans. The first is to confirm the reflux, and the second is done preoperatively to accurately mark the SPJ for surgery. The aim of this study was to assess whether the use of hand-held Doppler (HHD) can substitute for the second duplex scan. Methods: Sixty limbs with suspected SPJ reflux were studied. Patients underwent an initial duplex scan. The report detailed the position of the SPJ in relation to the popliteal crease. Guided by this, an HHD was then used to mark the SPJ. Deviation of the HHD mark from the duplex one of ≤10 mm was considered acceptable for surgical accuracy. Results: HHD accurately localized all 27 patients with SPJ reflux (100% accuracy). The distances between the HHD and duplex points in this group ranged between 0 and 5 mm (median = 0). Twenty-five patients had SPJ with no reflux, and 22 of them were accurately localized (88%). The distances between the two points in the latter group ranged between 0 and 16 mm (median = 3). Conclusion: HHD, guided by the routine duplex scan, can accurately mark SPJ with reflux. A second duplex is not required for marking prior to surgery. This will reduce the workload of the vascular laboratory. abstract_id: PUBMED:22211036 Standardisation of preoperative marking of incompetent perforators and saphenopopliteal junction on Doppler with evaluation of "T" technique. To standardise the preoperative marking of incompetent perforators and the saphenopopliteal junction on Doppler, with evaluation of the "T" technique. A prospective study included 54 consecutive patients (61 lower limbs) who underwent surgery for varicose veins in 2003 and 2004 for preoperative marking. The "T" technique is a technique of Doppler marking of an incompetent perforator, the long limb of the T representing the course of the superficial vein and the junction of the T representing the site of the perforator entering the deep fascia. Surgical correlation was done. The overall surgical detection rate of incompetent perforators was 199/220 (90.5%); detection of the saphenopopliteal junction was 100%. The "T" technique of Doppler marking was found to be easy to perform and aided intraoperative detection. abstract_id: PUBMED:1959680 Preoperative localisation of the saphenopopliteal junction with duplex scanning. The anatomy of the saphenopopliteal junction shows considerable variation, and clinical localisation of this junction is inaccurate. Duplex scanning in preoperative mapping of the saphenous vein system in bypass surgery has been shown to be highly effective. In 62 patients with clinical evidence of insufficiency of the saphenopopliteal junction, preoperative localisation with duplex scanning was performed in 66 extremities. In 62 extremities duplex localisation matched the operative findings, and in four extremities a difference of 2 cm or more was found. There was 1 false negative surgical exploration. In 93% of the cases exact localisation of the junction enabled us to perform flush ligation of a small saphenous vein through minimal exposure. Preoperative duplex scanning of the saphenopopliteal junction is highly accurate. abstract_id: PUBMED:9568200 A study of the competency of the saphenopopliteal junction by duplex ultrasonography.
Background: Objective determination of saphenopopliteal junction incompetency has eluded surgeons for many years. With the advent of duplex ultrasonography, incompetency of the saphenopopliteal junction can be determined with an acceptable degree of clinical certitude. Objective: This study was undertaken in order to determine the accuracy of duplex ultrasonography in studying the competency of the saphenopopliteal junction. Methods: A total of 50 patients were included in the study, and the saphenopopliteal junction was studied bilaterally in each patient. The Biosound Phase II Duplex Ultrasound System with 7.5-MHz B-mode imaging was used. Results: The degree of accuracy of ascertaining the competency of the saphenopopliteal junction was approximately 96%. Conclusion: This test is reliable, and provides helpful information to the clinician. abstract_id: PUBMED:20870873 Duplex scanning is no substitute for surgical expertise in identifying the saphenopopliteal junction: results following short saphenous vein surgery. Objectives: Short saphenous vein (SSV) surgery carries a high risk of failure to identify the saphenopopliteal junction (SPJ). We assessed the impact of surgical expertise on anatomical outcome from SSV surgery and the role of preoperative duplex SPJ marking in improving outcome for vascular and non-vascular specialists. Methods: A retrospective analysis identified patients (30 limbs) who had undergone SSV surgery. These were recalled for duplex scanning of the SPJ. In a prospective study, 187 limbs had preoperative duplex marking of the SPJ and postoperative duplex to assess outcome. Grade of operating surgeon was recorded in both the retrospective and prospective analyses. Results: In both the retrospective and prospective analyses, vascular specialists were significantly more likely than non-vascular specialists to correctly identify the SPJ (P < 0.0001). Preoperative SPJ marking did not improve outcome for the vascular specialist or the non-vascular specialist. Conclusion: Preoperative SPJ marking is no substitute for surgical expertise. Competence in SSV surgery should be assessed prior to surgeons proceeding to independent practice. abstract_id: PUBMED:36924269 Variations of the Saphenopopliteal Junction: An Ultrasonography Study in a Young Population, A Systematic Review and A Meta-Analysis. Saphenopopliteal junction classification has been developing, but still the precise knowledge of junction type is crucial for proper surgical treatment. We examined the saphenopopliteal junction by duplex venous scanning in 244 extremities in healthy volunteers (median age: 23.0 years; 83 females, 39 males) and performed a meta-analysis of 13 studies focusing on structural types of the junction. According to Schweighoffer's classification we distinguished 5 types of the junction, and we subdivided type A into two according to Cavezzi's classification of gastrocnemial vein termination. We added type F (small saphenous vein, SSV, terminates into the popliteal vein, PV), described especially in cadaveric studies. In our study, the most frequent type was A1 (96 cases), followed by C (70), B (48), A2 (20), E (6), D (3) and F (0). The pooled prevalence estimate for types A + B + D + E was 54.7% (95% CI 40.9-69.6%) and for type C 24.4% (95% CI 19.3-29.5%), whereas in 17.1% (95% CI 6.3-27.9%) of cases, the SSV terminated in the PV with no cranial extension present.
The knowledge of the saphenopopliteal junction and its variations prevalence can help clinicians to quickly identify the real type of the junction during routine examination. In mid-European population, the main type is A1 and worldwide type A. abstract_id: PUBMED:20656954 Outcome following saphenopopliteal surgery: a prospective observational study. Objectives: High recurrence rates following small saphenous varicose vein surgery have been reported. The aim of this study was to ascertain initial success rates following saphenopopliteal junction (SPJ) surgery using pre- and postoperative duplex scanning. Methods: A prospective study was performed on patients with ultrasound-proven SPJ reflux. Patients underwent preoperative duplex skin marking and a postoperative quality assurance scan. Results: Ninety procedures were performed in 88 patients. The SPJ was successfully ligated in 87 (96.7%) cases. Reflux was completely abolished in 51 (56.7%) cases, but persisted solely in the small saphenous vein (SSV) in 32.2%. Subsequently, 10 consecutive patients underwent 11 SPJ ligations with stripping of the SSV. Follow-up ultrasound scan demonstrated successful ligation of the SPJ and elimination of superficial venous reflux. Conclusion: This study demonstrates that preoperative duplex SPJ marking results in a high percentage of successful ligation. Given that residual persistent reflux was avoided in patients who underwent stripping of the SSV, we propose that patients who require SPJ surgery undergo duplex marking along with specific consideration with regard to treatment of the residual SSV. abstract_id: PUBMED:37442273 Reflux origin of the insufficient small saphenous vein by duplex ultrasound determination and consequences for therapy considering the saphenopopliteal junction type. Objective: The reflux pathophysiology of the saphenofemoral junction (SFJ) of the insufficient great saphenous vein (GSV) has already been investigated and stratified. These results are still lacking for the small saphenous vein (SSV). The aim of the study was to analyze the pathophysiology of the saphenopopliteal junction (SPJ) in case of refluxing SSV. Methods: The study included 1142 legs investigated between April 1, 2019, and February 15, 2023, with chronic venous insufficiency scheduled for endoluminal thermal ablation of the insufficient SSV. Preoperatively, a standardized duplex ultrasound assessment of the SPJ including the cranial extension of the SSV and the Giacomini vein, respectively, was performed to determine the origin of reflux. Having in mind, that the draining type according to Cavezzi is relevant to the treatment planning, after having scanned 152 legs, the protocol was extended to this feature: Cavezzi type A1 or A2 was recorded on 990 legs. Results: In 984 cases (86%), saphenopopliteal reflux from the popliteal vein into the insufficient SSV was detected, and in 181 cases of these (16%), simultaneous refluxing blood from the cranial extension or Giacomini vein was found. In 119 cases (10%), reflux resulted only from the cranial extension or Giacomini vein with a competent SPJ, and in 39 cases (3%), the reflux source was diffusely from side branches and/or perforating veins. Cavezzi's junction types A1 (independent junction of SSV and muscle veins) and A2 (muscle veins join into SSV, draining together into the popliteal vein through the SPJ) were found in 65% and 35% of cases, respectively. 
Conclusions: The insufficient SSV shows a high frequency of axial reflux from the deep into the saphenous vein with an indication for high ligation or thermal ablation at the level of the SPJ or immediately distal to the inflow of muscular veins depending on the junction type. In 14%, based on this study, we observed a competent junction of the SSV without indication for ligation or thermal destruction of the SPJ. abstract_id: PUBMED:8076123 Preoperative colour-coded duplex examination of the saphenopopliteal junction in recurrent varicosis of the short saphenous vein. Undetected anatomical variations of the short saphenous vein are a major contributing factor to recurrent varicose veins in the saphenous vein area. Perioperative venography and varicography of the short saphenous vein have been proposed as a means of exact localization of its termination. The present study evaluated whether colour-coded duplex examination constituted a reliable alternative for localization of the saphenopopliteal junction. Between 1989 and 1990, 12 patients were reoperated on after previous classical ligation of the short saphenous vein. Preoperative colour-coded duplex examination revealed an abnormally high saphenopopliteal junction with persistent reflux in 11 patients and a Giacomini vein in one. When comparing these results with venographic and operative findings an accuracy of 100% was reached in these specific cases. It is concluded that for patients with recurrent varicosis of the short saphenous vein, preoperative colour-coded duplex examination provides a reliable non-invasive alternative to venography in the exact localization of the saphenopopliteal junction. abstract_id: PUBMED:8076122 Preoperative colour-coded duplex examination of the saphenopopliteal junction in recurrent varicosity of the short saphenous vein. Undetected anatomical variations of the short saphenous vein (SSV) are a major contributing factor to recurrent varicose veins in the SSV area. Preoperative venography and varicography of the SSV have been proposed as a means of exact localization of its termination. The present study evaluated whether colour-coded duplex (CCD) examination constituted a reliable alternative for localization of the saphenopopliteal (SP) junction. Between 1989 and 1990, 12 patients were reoperated on after previous classical ligation of the SSV. Preoperative CCD examination revealed an abnormally high SP junction with persistent reflux in 11 patients and a Giacomini vein in one patient. When comparing these results with venographic and operative findings an accuracy of 100% was reached in these specific cases. It is concluded that for patients with recurrent varicosis of the SSV preoperative CCD examination provides a reliable non-invasive alternative for venography in the exact localization of the SP junction. Answer: Preoperative duplex marking of the saphenopopliteal junction (SPJ) may not be necessary in all cases. A study found that using hand-held Doppler (HHD), guided by the initial duplex scan, could accurately mark the SPJ with reflux, suggesting that a second duplex scan for marking prior to surgery is not required (PUBMED:18265549). This could reduce the workload of the vascular laboratory without compromising surgical accuracy. Additionally, the "T" technique of Doppler marking was found to be easy to perform and aided intraoperative detection, with a 100% detection rate of the SPJ (PUBMED:22211036). 
Duplex scanning has been shown to be highly accurate in preoperative localization of the SPJ, matching operative findings in the majority of cases (PUBMED:1959680; PUBMED:9568200). However, surgical expertise plays a crucial role in the successful identification of the SPJ during surgery, and preoperative SPJ marking does not substitute for surgical expertise (PUBMED:20870873). Moreover, a study that included preoperative duplex skin marking and postoperative quality assurance scans demonstrated a high percentage of successful ligation of the SPJ (PUBMED:20656954). This suggests that preoperative duplex marking, combined with surgical expertise, can result in successful outcomes. In conclusion, while preoperative duplex marking of the SPJ can be helpful and improve surgical outcomes, it may not be strictly necessary if other reliable methods, such as HHD guided by an initial duplex scan, are used and if the surgeon has sufficient expertise. The decision to use preoperative duplex marking should consider the available resources, the surgeon's experience, and the specific clinical context.
Instruction: Can peripheral venous pressure be interchangeable with central venous pressure in patients undergoing cardiac surgery? Abstracts: abstract_id: PUBMED:14600810 Can peripheral venous pressure be interchangeable with central venous pressure in patients undergoing cardiac surgery? Objective: Pressure measurements at the level of the right atrium are commonly used in clinical anesthesia and the intensive care unit (ICU). There is growing interest in the use of peripheral venous sites for estimating central venous pressure (CVP). This study compared bias, precision, and covariance in simultaneous measurements of CVP and of peripheral venous pressure (PVP) in patients with various hemodynamic conditions. Design And Setting: Operating room and ICU of a tertiary care university-affiliated hospital. Patients: Nineteen elective cardiac surgery patients requiring cardiopulmonary bypass were studied. Interventions: A PVP catheter was placed in the antecubital vein and connected to the transducer of the pulmonary artery catheter with a T connector. Data were acquired at different times during cardiac surgery and in the ICU. Measurements And Results: A total of 188 measurements in 19 patients were obtained under various hemodynamic conditions which included before and after the introduction of mechanical ventilation, following the induction of anesthesia, fluid infusion, application of positive end expiratory pressure and administration of nitroglycerin. PVP and CVP values were correlated and were interchangeable, with a bias of the PVP between -0.72 and 0 mmHg compared to the CVP. Conclusions: PVP monitoring can accurately estimate CVP under various conditions encountered in the operating room and in the ICU. abstract_id: PUBMED:16797425 Peripheral venous pressure as a predictor of central venous pressure during orthotopic liver transplantation. Study Objective: To assess the reliability of peripheral venous pressure (PVP) as a predictor of central venous pressure (CVP) in the setting of rapidly fluctuating hemodynamics during orthotopic liver transplant surgery. Design: Prospective clinical trial. Setting: UCLA Medical Center, main operating room-liver transplant surgery. Patients: Nine adult patients with liver failure undergoing orthotopic liver transplant surgery. Interventions: A pulmonary artery catheter and a 20-g antecubital peripheral intravenous catheter dedicated to measuring PVP were placed in all patients after standard general endotracheal anesthesia induction and institution of mechanical ventilation. Measurements: Peripheral venous pressure and CVP were recorded every 5 minutes and/or during predetermined, well-defined surgical events (skin incision, venovenous bypass initiation, portal vein anastamosis, 5 minute post graft reperfusion, abdominal closure). Pulmonary artery pressure and cardiac output (via thermodilution) were recorded every 15 and 30 minutes, respectively. Main Results: Peripheral venous pressure (mean +/- SD) was 11.0 +/- 4.5 mmHg vs a CVP of 9.5 +/- 5.0; the two measurements differed by an average of 1.5 +/- 1.6 mmHg. Peripheral venous pressure correlated highly with CVP in every patient, and the overall correlation among all nine patients calculated using a random-effects regression model was r = 0.95 (P < 0.0001). A Bland-Altman analysis used to determine the accuracy of PVP in comparison to CVP yielded a bias of -1.5 mmHg and a precision of +/-3.1 mm Hg. 
Conclusion: Our study confirms that PVP correlates with CVP even under adverse hemodynamic conditions in patients undergoing liver transplantation. abstract_id: PUBMED:21258073 Peripheral venous pressure as an alternative to central venous pressure in patients undergoing laparoscopic colorectal surgery. Background: Peripheral venous pressure (PVP) is strongly correlated with central venous pressure (CVP) during various surgeries. Laparoscopic surgery in the Trendelenburg position with pneumoperitoneum typically increases CVP. To determine whether PVP convincingly reflects changes in CVP, we evaluated the correlation between PVP and CVP in patients undergoing laparoscopic colorectal surgery. Methods: Both CVP and PVP were measured simultaneously at predetermined time intervals during elective laparoscopic colorectal surgery in 42 patients without cardiac disease. The pairs of venous pressure measurements were analysed for correlation, and the Bland-Altman plots of repeated measures were used to evaluate the agreement between CVP and PVP. Results: A total of 420 data pairs were obtained. The overall mean CVP was 11.3 (sd 4.5) mm Hg, which was significantly lower than the measured PVP of mean 12.1 (4.5) mm Hg (P=0.005). There was a strong positive correlation between overall CVP and PVP (correlation coefficient=0.96, P<0.0001). The mean bias (PVP-CVP) corrected for repeated measurements using random-effects modelling was 0.9 mm Hg [95% confidence interval (CI) 0.54-1.19 mm Hg] with 95% limits of agreement of -1.2 mm Hg (95% CI -1.75 to -0.62 mm Hg) to 2.9 mm Hg (95% CI 2.35-3.48 mm Hg). Conclusions: PVP displays a strong correlation and agreement with CVP under the increased intrathoracic pressure of pneumoperitoneum in the Trendelenburg position and may be used as an alternative to CVP in patients without cardiac disease undergoing laparoscopic colorectal surgery. abstract_id: PUBMED:15352955 Peripheral venous pressure is an alternative to central venous pressure in paediatric surgery patients. Background: Peripheral venous pressure (PVP) is easily and safely measured. In adults, PVP correlates closely with central venous pressure (CVP) during major non-cardiac surgery. The objective of this study was to evaluate the agreement between CVP and PVP in children during major surgery and during recovery. Methods: Fifty patients aged 3-9 years, scheduled for major elective surgery, each underwent simultaneous measurements of CVP and PVP at random points during controlled ventilation intraoperatively (six readings) and during spontaneous ventilation in the post-anaesthesia care unit (three readings). In a subset of four patients, measurements were taken during periods of hypotension and subsequent fluid resuscitation (15 readings from each patient). Results: Peripheral venous pressure was closely correlated to CVP intraoperatively, during controlled ventilation (r=0.93), with a bias of 1.92 (0.47) mmHg (95% confidence interval = 2.16-1.68). In the post-anaesthesia care unit, during spontaneous ventilation, PVP correlated strongly with CVP (r = 0.89), with a bias of 2.45 (0.57) mmHg (95% confidence interval = 2.73-2.17). During periods of intraoperative hypotension and fluid resuscitation, within-patient changes in PVP mirrored changes in CVP (r = 0.92). Conclusion: In children undergoing major surgery, PVP showed good agreement with CVP in the perioperative period. 
As changes in PVP parallel, in direction, changes in CVP, PVP monitoring may offer an alternative to direct CVP measurement for perioperative estimation of volume status and guiding fluid therapy. abstract_id: PUBMED:16549142 Correlation of peripheral venous pressure and central venous pressure in kidney recipients. Background And Objective: Previous studies in adults have demonstrated a clinically useful correlation between central venous pressure (CVP) and peripheral venous pressure (PVP). The current study prospectively compared CVP measurements from a central versus a peripheral catheter in kidney recipients during renal transplantation. Methods: With ethics committee approval and informed consent, 30 consecutive kidney recipients were included in the study. We excluded patients who had significant valvular disease or clinically apparent left ventricular failure. For each of 30 patients, CVP and PVP were measured on five different occasions. The pressure tubing of the transducer system was connected to the distal lumen of the central or to the peripheral venous catheter for measurements following induction of anesthesia, after induction, 1 hour after induction, reperfusion of the kidney, and the end of the operation, yielding 150 hemodynamic data points. Each hemodynamic measurement included heart rate, mean arterial pressure, mean CVP, and mean PVP determined at end-expiration. Results: The mean PVP was 13.5 +/- 1.8 mm Hg and the mean CVP was 11.0 +/- 1.5 mm Hg during surgery. The mean difference was 2.5 +/- 0.5 (P < .01). Repeated-measures analysis of variance indicated a highly significant relationship between PVP and CVP (P < .01) with a Pearson correlation coefficient of 0.97. Conclusion: Under the conditions of this study, PVP showed a consistently high agreement with CVP in the perioperative period among patients without significant cardiac dysfunction. abstract_id: PUBMED:11133622 Peripheral venous pressure as a hemodynamic variable in neurosurgical patients. Unlabelled: Neurosurgical patients undergoing either craniotomy or complex spine surgery are subject to wide variations in blood volume and vascular tone. The ratio of these variables yields a pressure that is traditionally measured at the superior vena cava and referred to as "central venous pressure" (CVP). We have investigated an alternative to CVP by measuring peripheral venous pressure (PVP), which, in parallel animal studies, correlates highly with changes in absolute blood volume (r = 0.997). We tested the hypothesis that PVP trends parallel CVP trends and that their relationship is independent of patient position. We also tested and confirmed the hypothesis, during planned circulatory arrest, that PVP approximates mean systemic pressure (circulatory arrest pressure), which reflects volume status independent of cardiac function. PVP was compared with CVP across 1026 paired measurements in 15 patients undergoing either craniotomy (supine, n = 8) or complex spine surgery (prone, n = 7). Repeated-measures analysis of variance indicated a highly significant relationship between PVP and CVP (P < 0.001), with a Pearson correlation coefficient of 0.82. The correlation was best in cases with significant blood loss (estimated blood loss >1000 mL; r = 0.885) or hemodynamic instability (standard deviation of CVP > 2; r = 0.923). 
Implications: In patients undergoing either elective craniotomy or complex spine surgery, peripheral venous pressure (PVP) trends correlated with central venous pressure (CVP) trends with a mean offset of 3 mm Hg (PVP > CVP). PVP trends provided equivalent physiological information to CVP trends in this subset of patients, especially during periods of hemodynamic instability. In addition, measurements made during a planned circulatory arrest support the hypothesis that PVP approximates mean systemic pressure (systemic arrest pressure), which is a direct index of patient volume status independent of cardiac or respiratory activity. abstract_id: PUBMED:11254838 Correlation of peripheral venous pressure and central venous pressure in surgical patients. Objective: To determine the degree of agreement between central venous pressure (CVP) and peripheral venous pressure (PVP) in surgical patients. Design: Prospective study. Setting: University hospital. Participants: Patients without cardiac dysfunction undergoing major elective noncardiac surgery (n = 150). Measurements And Main Results: Simultaneous CVP and PVP measurements were obtained at random points in mechanically ventilated patients during surgery (n = 100) and in spontaneously ventilating patients in the postanesthesia care unit (n = 50). In a subset of 10 intraoperative patients, measurements were made before and after a 2-L fluid challenge. During surgery, PVP correlated highly to CVP (r = 0.86), and the bias (mean difference between CVP and PVP) was -1.6 +/- 1.7 mmHg (mean +/- SD). In the postanesthesia care unit, PVP also correlated highly to CVP (r = 0.88), and the bias was -2.2 +/- 1.9 (mean +/- SD). When adjusted by the average bias of -2, PVP predicted the observed CVP to within +/-3 mmHg in both populations of patients with 95% probability. In patients receiving a fluid challenge, PVP and CVP increased similarly from 6 +/- 2 to 11 +/- 2 mmHg and 4 +/- 2 to 9 +/- 2 mmHg. Conclusion: Under the conditions of this study, PVP showed a consistent and high degree of agreement with CVP in the perioperative period in patients without significant cardiac dysfunction. PVP -2 was useful in predicting CVP over common clinical ranges of CVP. PVP is a rapid noninvasive tool to estimate volume status in surgical patients. abstract_id: PUBMED:15365925 Relationship between peripheral and central venous pressures in different patient positions, catheter sizes, and insertion sites. Objective: To investigate the relationship between peripheral and central venous pressures in different patient positions (supine, prone, lithotomy, Trendelenburg, and Fowler), different catheter diameters (18 G and 20 G), and catheterization sites (dorsal hand and forearm) during surgical procedures. Design: Prospective clinical study. Settings: University hospital. Participants: Five hundred adult patients. Interventions: Peripheral over-the-needle intravenous catheters were placed in the dorsal hand or forearm. Central venous catheters were inserted via the internal jugular or subclavian vein after induction of anesthesia. Measurements And Main Results: Simultaneous measurements of central and peripheral venous pressures were made during stable conditions at random time points in surgery; 1953 paired measurements were performed. Mean central venous pressure was 11 +/- 3.7 mmHg and peripheral venous pressure was 13 +/- 4 mmHg (p = 0.0001). 
The overall correlation between central venous and peripheral venous pressures was found to be statistically significant (r = 0.89, r² = 0.8, p = 0.0001). The mean difference between peripheral and central venous pressure was 2 +/- 1.8 mmHg. Ninety-five percent limits of agreement were -1.6 to 5.6 mmHg. Conclusion: It has been assumed that replacing central venous pressure by peripheral venous pressure would cause problems in clinical interpretation. If the validity of these data is confirmed by further studies, the authors suggest that central venous pressure could be estimated by using regression equations comparing the two methods. abstract_id: PUBMED:34763624 Prospective Cohort Study Assessing the Use of Peripheral Saphenous Venous Pressure Monitoring as a Marker of the Transcaval Venous Pressure Gradient in Liver Transplant Surgery. Objectives: Assessment of the transcaval venous pressure gradient, the gradient from central venous to inferior vena caval pressure, assists anesthetists and surgeons in the management of liver transplant recipients. Traditionally, this entails insertion of a femoral central line with increased patient risk and health care cost. Here, we assessed the ability of a saphenous vein cannula to act as a surrogate for the femoral central line as a means to assess the transcaval pressure gradient in a safer and less invasive manner. Materials And Methods: A prospective cohort of 22 patients undergoing liver transplant underwent saphenous vein cannulation in addition to insertion of a femoral and internal jugular central venous catheter. Data were collected throughout each phase of surgery to assess the central, femoral, and saphenous vein pressures; a range of relevant physiological and ventilatory data were also collected. Results: The primary outcome, the correlation between saphenous and femoral venous pressure throughout surgery, was acceptable (r² = 0.491, P < .001). During the anhepatic phase of surgery, this correlation improved (r² = 0.912, P < .001). The correlation between the femoral to central venous pressure and saphenous to central venous pressure gradients was also reasonable throughout surgery (r² = 0.386, P < .001), and this correlation was significantly stronger during the anhepatic phase (r² = 0.935, P < .001). Conclusions: Saphenous venous pressure, obtained by peripheral cannulation, provided a reliable, less invasive, and safer alternative to femoral central line insertion for determination of the transcaval pressure gradient during the anhepatic phase of liver transplant. abstract_id: PUBMED:19204622 Measurement of central venous pressure from a peripheral intravenous catheter in the prone position during spinal surgery. Introduction: A central venous pressure (CVP) measurement is used to assess intravascular status. Although this is usually accomplished by the placement of a central venous catheter (CVC), there are circumstances when placement may be technically difficult or impossible. The current study evaluates the feasibility of measuring CVP from a peripheral intravenous (IV) cannula in the prone position. Methods: CVP was simultaneously measured from a central venous catheter (CVC-P) and from a peripheral IV cannula (PVP). The continuity of the peripheral IV cannula with the central venous system was demonstrated by a change in the PVP during a sustained inspiratory effort. Results: The study cohort included 18 patients. All patients had 2 peripheral IV cannulae, and 10 PVP measurements were taken from each site.
In 4 of the 36 cannulae (11%), there was no increase in the PVP in response to a sustained inspiratory effort. For these 4 cannulae, the PVP to CVC-P difference was 13 +/- 4 mm Hg. In the 32 cannulae in which the PVP increased in response to a sustained inspiratory effort, the PVP to CVC-P difference was 2 +/- 1 mm Hg (P < 0.0001), and the difference between the PVP and the CVC-P was <or=5 mm Hg in all of the sample sets. The accuracy of the technique did not vary based on the location of the peripheral cannula, its size, or the actual CVP reading. Conclusion: Provided that the PVP increases to a sustained inspiratory breath, there is a clinically useful correlation between the PVP and the actual CVP in the prone position. Answer: Yes, peripheral venous pressure (PVP) can be interchangeable with central venous pressure (CVP) in patients undergoing cardiac surgery. A study comparing simultaneous measurements of CVP and PVP in patients with various hemodynamic conditions, including those undergoing cardiac surgery, found that PVP monitoring can accurately estimate CVP under various conditions encountered in the operating room and in the ICU. The study reported a bias of the PVP between -0.72 and 0 mmHg compared to the CVP, indicating that PVP values were correlated and were interchangeable with CVP values (PUBMED:14600810).
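The answer above, and several of the abstracts it draws on, summarize PVP-CVP agreement as a Bland-Altman bias with 95% limits of agreement. As a minimal illustration of how those summary figures are derived (mean of the paired differences ± 1.96 × their standard deviation), the Python sketch below uses hypothetical paired readings; the function name and all values are illustrative and are not taken from the cited studies.

```python
# Illustrative Bland-Altman agreement check for paired CVP/PVP readings.
# All values below are made up and are not data from the cited studies.
import numpy as np

def bland_altman(reference, test):
    """Return bias (mean difference), SD of differences, and 95% limits of agreement."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    diffs = test - reference              # here: PVP - CVP, in mmHg
    bias = diffs.mean()                   # systematic offset between the two methods
    sd = diffs.std(ddof=1)                # spread of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, sd, loa

# Hypothetical simultaneous measurements (mmHg)
cvp = [8, 10, 12, 9, 11, 14, 7, 10]
pvp = [9, 11, 13, 11, 12, 15, 9, 12]

bias, sd, (lower, upper) = bland_altman(cvp, pvp)
print(f"bias = {bias:.1f} mmHg, SD = {sd:.1f}, 95% LoA = {lower:.1f} to {upper:.1f} mmHg")
```

Biases on the order of 1-2 mmHg with limits of agreement within roughly ±3 mmHg are what the cited studies interpret as clinically acceptable agreement between the two measurement sites.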
Instruction: Cytological follow-up after hysterectomy: is vaginal vault cytology sampling a clinical governance problem? Abstracts: abstract_id: PUBMED:25123422 Cytological follow-up after hysterectomy: is vaginal vault cytology sampling a clinical governance problem? The University Hospital of North Staffordshire approach. Objectives: Vaginal vault cytology sampling following hysterectomy is recommended for specific indications in national guidelines. However, clinical governance issues surround compliance with guidance. Our first study objective was to quantify how many patients undergoing hysterectomy at the University Hospital of North Staffordshire (UHNS) had vault cytology advice in their histology report and, if indicated, whether it was arranged. The second was to devise a vault cytology protocol based on local experience and national guidance. Methods: The local cancer registry was searched. Clinical, clerical and histological data for all patients undergoing hysterectomy were collected. Results: In total, 271 patients were identified from both the gynae-oncology and benign gynaecology teams. Of these, 24% (65/271) were gynae-oncology patients with a mean age of 69 years. The benign gynaecology team had 76% (206/271) of patients with a mean age of 55 years. Subsequently, 94% (256/271) had cytology follow-up advice in their histopathology report. Ultimately, from both cohorts, 39% (18/46) had follow-up cytology performed when indicated. Conclusion: A high proportion of cases complied with national guidance. However, a disappointingly high number did not have vault cytology sampling when this was indicated. This is probably a result of the complex guidance that is misunderstood in both primary and secondary care. Vault follow-up of patients after hysterectomy rests with the team performing the surgery. Vault cytology, if indicated, should be performed in secondary care and follow-up should be planned. The protocol set out in this article should be followed to avoid unnecessary clinical governance failings. abstract_id: PUBMED:21656703 Prolapsed fallopian tube: cytological findings in a ThinPrep liquid based cytology vaginal vault sample. Fallopian tube prolapse through the vaginal vault after hysterectomy is a rare complication. The clinical diagnosis is difficult and the patient may undergo unnecessary treatment. A cytological diagnosis of tubal prolapse is rare. There are very few descriptions of the cytological appearances of prolapsed fallopian tube and to our knowledge, they have not been described in liquid based cytology preparations. The presence of classic columnar cells with cilia and sheets of cells with small granular uniform nuclei in an orderly arrangement are the diagnostic appearances of cells originating from the fallopian tube. We describe a case in which the cells had undergone squamous metaplasia with nuclear enlargement and increased nuclear to cytoplasmic ratios corresponding to reactive atypia but with fine and evenly distributed chromatin and smooth nuclear contours, which indicated their benign nature. In addition, in this case intracytoplasmic polymorphs and associated extracellular infiltrates of inflammatory cells are noted. The description of this case may help others to consider a cytological diagnosis of prolapsed fallopian tube, thus preventing repeated cauterisations of vault granulation tissue on one hand, and possibly excessive surgical treatment of a mistaken malignant lesion on the other. 
abstract_id: PUBMED:23077454 What sampling device is the most appropriate for vaginal vault cytology in gynaecological cancer follow up? Background: In women with cancer-related hysterectomy, vaginal vault cytology has low efficacy - when performed by conventional methods - for the early detection of vaginal recurrence. The number of exfoliated cells collected is generally low because of atrophy, and the vaginal vault corners can be so narrow that the commonly used Ayre's spatula often cannot penetrate deeply into them. This prospective study aimed to identify the advantages of specimen collection with the cytobrush compared with the Ayre's spatula. Patients and Methods: 141 gynaecologic cancer patients were studied to compare samplings collected with the Ayre's spatula or with the cytobrush. In a pilot setting of 15 patients, vaginal cytology samples obtained by both Ayre's spatula and cytobrush were placed at opposite sides of a single slide for quali-quantitative evaluation. Thereafter, the remaining 126 consecutive women were assigned to either group A (spatula) or B (cytobrush) according to the order of entry. The same gynaecologist performed all the procedures. Results: In all 15 pilot cases, the cytobrush seemed to collect a higher quantity of material. The comparative analysis of the two complete groups indicated that the cytobrush technique was more effective than the spatula technique. The odds ratio (OR) for an optimal cytology using the cytobrush was 2.8 (95% confidence interval [CI] 1.3-6.2; chi-square test, p = 0.008). Conclusions: Vaginal vault cytology with the cytobrush performed better than the traditional Ayre's spatula in obtaining an adequate sample in gynaecological cancer patients. abstract_id: PUBMED:23288466 Role of vault cytology in follow-up of hysterectomized women: results and inferences from a low resource setting. The study was undertaken to assess the utility of cervico-vaginal/vault cytology in the follow-up of women treated for cervical cancer and benign gynecological conditions. Records of 3,523 cervico-vaginal smears from 2,658 women who underwent hysterectomy and/or radiotherapy or chemotherapy over a 10-year period were retrieved. Data were collected on type of treatment received, indication for hysterectomy, age of patient, presenting symptoms, stage of tumor, interval since treatment, cytology and biopsy results. The results of cytology versus other parameters were analyzed separately for women treated for cervical cancer and those hysterectomized for benign indications. Malignant cells were detected in 141/1949 (7.2%) follow-up smears from treated cervical cancer cases (140 recurrences and 1 VAIN). Around 92% of recurrences of cervical cancer were detected within 2 years of follow-up and 75% of these women were symptomatic. Cytology first alerted the clinicians to a recurrence in a quarter of cases. On the other hand, VAIN was detected in 5/1079 (0.46%) vault smears from 997 women hysterectomized for benign gynecologic disease. All these women were asymptomatic and the majority (80%) were detected in follow-up smears performed between 3 and 10 years. Vault cytology is an accurate tool to detect local recurrences/VAIN in women treated for cervical cancer or benign gynecological conditions. It may even first alert the clinicians to a possibility of recurrence. However, due to the extremely low prevalence of VAIN/vaginal cancer, it seems unwarranted in women hysterectomized for benign indications, especially in resource-constrained settings.
abstract_id: PUBMED:36865962 Vaginal Vault Prolapse in an Elderly Woman. Vaginal vault prolapse is a painful condition in which the vaginal cuff descends. This report presents the case of a 65-year-old obese, diabetic woman suffering from third-degree vault prolapse. Conventionally used non-surgical treatments, such as pelvic floor exercises, are not as effective as surgical approaches for the treatment of third-degree vault prolapse. Post-hysterectomy vaginal vault prolapse can be treated safely and effectively with abdominal sacral colpopexy using a permanent mesh. Because of several risk factors, such as grand parity, advancing age, and a poor lifestyle with little exercise to strengthen the pelvic floor musculature, the vaginal route of surgery was employed; it was found to be effective, and the treatment was successful. In conclusion, an individualized approach to such rare cases can produce efficacious results. abstract_id: PUBMED:29169578 Post-hysterectomy vaginal vault prolapse. Post-hysterectomy vaginal vault prolapse (PHVP) is a recognised although rare complication following both abdominal and vaginal hysterectomy, and the risk is increased in women following vaginal surgery for urogenital prolapse. The management of PHVP remains challenging and whilst many women will initially benefit from conservative measures, the majority will ultimately require surgery. The purpose of this paper is to review the prevalence and risk factors associated with PHVP as well as to give an overview of the clinical management of this often complicated problem. The role of prophylactic primary prevention procedures at the time of hysterectomy will be discussed as well as initial conservative management. Surgery, however, remains integral in managing these complex patients and the vaginal and abdominal approaches to managing PHVP will be reviewed in detail, in addition to both laparoscopic and robotic approaches. abstract_id: PUBMED:34313405 Cytomorphologic clues for the diagnosis of fallopian tube prolapse in liquid-based vault samples with review of the literature. Introduction: Fallopian tube prolapse (FTP) is a rare complication of hysterectomy. The cytological features of the prolapsed fallopian tube in vault smears have been occasionally described in the literature. Materials And Methods: This was a retrospective study conducted to identify and describe the characteristic cytologic features of histopathologically confirmed cases of FTP in SurePath™ liquid-based preparations. Additionally, the literature documenting cytologic features of the prolapsed fallopian tube in vault smears was also reviewed. Results: A total of four corresponding vault cytology samples of FTP cases, reported on histopathology, were available. On cytologic examination, these cases demonstrated strips and papillaroid clusters of columnar-shaped cells with mild nuclear enlargement, round to elongated nuclei, fine chromatin, inconspicuous nucleoli, and a moderate amount of wispy cytoplasm. Admixed inflammatory cells were also noted. Some of these cells demonstrated the presence of cilia and terminal bars toward the apical surface, indicative of tubal epithelium. The presence of three-dimensional papillaroid clusters lined by columnar cells as well as strips of similar cells with cilia and terminal bars at the apical surface and fine nuclear chromatin were the most consistent cytologic features in these cases.
Conclusion: We conclude that a high index of clinical suspicion in post-hysterectomy cases, together with knowledge of the characteristic cytologic features, can help in suggesting a diagnosis of tubal prolapse in vault samples. abstract_id: PUBMED:34526746 Role of Preoperative and Postoperative Pelvic Floor Distress Inventory-20 in Evaluation of Posthysterectomy Vault Prolapse. Background: Posthysterectomy vault prolapse is a common problem after vaginal or abdominal hysterectomy. The objective was to assess the role of the Pelvic Floor Distress Inventory 20 (PFDI-20) in the evaluation of vault prolapse. Materials and Methods: Prospective study in 20 women with posthysterectomy vault prolapse of Stage 2 and above. The outcome measure was the PFDI-20 score, calculated in all cases before surgical intervention and recalculated 6 months after the different surgical procedures for vault prolapse, with statistical comparison of the scores across the types of surgery performed over a 4-year period at a tertiary referral hospital. Prolapse was classified using the Pelvic Organ Prolapse Quantification system and intraoperative findings. All women were operated on for vault prolapse, as per hospital protocol and stage of prolapse, by either vaginal sacrospinous fixation or abdominal sacrocolpopexy. Results: Mean age, parity, and body mass index were 54.8 years, 3.5, and 22.71 kg/m², respectively. The preceding surgery was vaginal hysterectomy in 75% of women and abdominal hysterectomy in 25% of women. Complaints were a bulge or mass sensation at the perineum (100%), pressure in the lower abdomen and perineum (55%), and constipation (60%). The type of prolapse was vault prolapse (100%), cystocele (100%), rectocele (100%), and enterocele (45%). The PFDI-20 ranged from 88 to 152 (mean 123.50 ± 22.71) before surgery and decreased significantly to 80-126 (mean 106.40 ± 16.45) after surgery (P < 0.01). The mean postoperative PFDI-20 score was 107.40 in the vaginal sacrospinous fixation group and 105.30 in the abdominal sacrocolpopexy group; the difference was not statistically significant (P = 0.18). Conclusion: The PFDI-20 score can be used to assess the adverse impact of vault prolapse on the pelvic floor and the beneficial effect of different types of surgery on the score. abstract_id: PUBMED:26217604 A rare case of post-hysterectomy vault site iatrogenic endometriosis. A 45-year-old woman with a prior history of hysterectomy for adenomyosis and leiomyomas presented at our outpatient gynecology clinic 13 months later with sudden lower pelvic discomfort and vaginal bleeding. The patient underwent vaginal vault biopsy; however, the diagnosis remained uncertain. Additional evaluation was required because of massive rebleeding episodes. After an emergent exploratory laparoscopic operation with total excision of the vault, a diagnosis of vaginal vault endometriosis was made. Our theory is that endometrial cells may have been transplanted during morcellation of the adenomyotic uterus and then progressed to iatrogenic endometriosis of the vaginal vault. Therefore, vault endometriosis must be considered in cases of delayed massive bleeding in post-hysterectomy patients when other diagnoses have been excluded. abstract_id: PUBMED:35288459 Vaginal vault smear cytology in detection of recurrence after hysterectomy for early cervical cancer.
Objective: To determine the role of vaginal vault cytology as a surveillance tool for the detection of recurrence in patients with early stage cervical cancer treated with hysterectomy without adjuvant therapy. Methods: A retrospective cohort study was conducted of all women with cervical cancer treated with a hysterectomy from January 2000 to July 2016 at the Royal Brisbane & Women's Hospital, Australia. Women included were diagnosed with the equivalent of International Federation of Gynecology and Obstetrics (FIGO) 2018 stage 1A1 to 1B3 squamous cell carcinoma, adenocarcinoma, or adenosquamous carcinoma, received either simple or radical hysterectomy with or without pelvic lymph node dissection, and did not receive adjuvant therapy. Age, stage, histology, surgical procedure, and details of individual surveillance regimens including examination findings and indications and results for all vault cytology tests performed in the first 5 years following surgical management were collected. Results: A total of 155 women met the inclusion criteria. Most cases were FIGO 2018 stage 1B1 (61.9%) and squamous cell carcinoma (64.5%). Included women underwent a median of 80 months of surveillance (range 25-200, IQR 64-108). In the first 5 years of surveillance, there were a total of 1001 vault cytology smears performed, with a median of 6 smears (IQR 5-9) per woman. A total of 19 smears were abnormal (1.9%). Of the cohort of 155 women, 19 (12.3%) had an abnormality detected; 1 (0.65%) had a high-grade intraepithelial abnormality and 2 (1.3%) had recurrences detected on cytology; however, a lesion was also seen and biopsied in all three women. A total of 16 of 1001 smears (1.6%) had low-grade abnormalities detected, all of which resolved with clinical observation only. All were alive and well at last review. There were in total 6 (3.9%) recurrences, 2 (33%) of which had abnormal cytology as above, and all of which had a lesion to biopsy and/or abnormal medical imaging. Conclusions: The routine use of vaginal vault cytology in surveillance following hysterectomy for early stage cervical cancer did not appear to alter the detection of recurrent malignancy. Answer: The collection of vaginal vault cytology samples following hysterectomy is recommended for specific indications according to national guidelines. However, there are clinical governance issues regarding adherence to these guidelines. A study conducted at the University Hospital of North Staffordshire (UHNS) aimed to determine the compliance with guidance on vault cytology advice in histology reports and whether follow-up cytology was arranged when indicated. The study found that while a high proportion of cases complied with national guidance, a significant number did not have vault cytology sampling when indicated, which may be due to complex guidance that is misunderstood in both primary and secondary care. The study concluded that vault cytology, if indicated, should be performed in secondary care and follow-up should be planned according to a protocol to avoid clinical governance failings (PUBMED:25123422). Another study compared the effectiveness of different sampling devices for vaginal vault cytology in gynecological cancer follow-up. It found that the cytobrush was more effective than the traditional Ayre's spatula in obtaining adequate samples (PUBMED:23077454). 
Additionally, a study from a low resource setting assessed the utility of cervico-vaginal/vault cytology in the follow-up of women treated for cervical cancer and benign gynecological conditions. It concluded that vault cytology is an accurate tool to detect local recurrences/VAIN in treated women, but due to the low prevalence of VAIN/vaginal cancer, routine cytology seems unwarranted in women hysterectomized for benign indications in resource-constrained settings (PUBMED:23288466). In summary, while vaginal vault cytology sampling is recommended for certain cases following hysterectomy, there are issues with compliance and understanding of the guidelines, which can lead to clinical governance problems. The choice of sampling device and the setting in which cytology is performed can impact the effectiveness of follow-up. Additionally, the utility of routine cytology in certain populations, especially in low resource settings, is debatable due to the low incidence of certain complications (PUBMED:25123422, PUBMED:23077454, PUBMED:23288466).
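The cytobrush-versus-spatula comparison cited above reports its main result as an odds ratio of 2.8 (95% CI 1.3-6.2) for obtaining optimal cytology. As a minimal sketch of how an odds ratio and a Wald-type confidence interval are computed from a 2×2 table, the example below uses hypothetical counts chosen only for illustration; they are not the counts from PUBMED:23077454.

```python
# Odds ratio and 95% Wald CI from a 2x2 table.
# The counts are hypothetical and not taken from the cited study.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = optimal/suboptimal cytology with the cytobrush; c, d = with the spatula."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error on the log-odds scale
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

or_, (lo, hi) = odds_ratio_ci(a=45, b=18, c=30, d=33)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

With counts like these the point estimate and interval happen to land close to the published figures; the sketch is only meant to show the arithmetic, not to reconstruct the study data.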
Instruction: Does racial concordance between HIV-positive patients and their physicians affect the time to receipt of protease inhibitors? Abstracts: abstract_id: PUBMED:15566445 Does racial concordance between HIV-positive patients and their physicians affect the time to receipt of protease inhibitors? Background: Compared to whites, African Americans have been found to have greater morbidity and mortality from HIV, partly due to their lower use of effective antiretroviral therapy. Why racial disparities in antiretroviral use exist is not completely understood. We examined whether racial concordance (patients and providers having the same race) affects the time of receipt of protease inhibitors. Methods: We analyzed data from a prospective, cohort study of a national probability sample of 1,241 adults receiving HIV care with linked data from 287 providers. We examined the association between patient-provider racial concordance and time from when the Food and Drug Administration approved the first protease inhibitor to the time when patients first received a protease inhibitor. Results: In our unadjusted model, white patients received protease inhibitors much earlier than African-American patients (median 277 days compared to 439 days; P < .0001). Adjusting for patient characteristics only, African-American patients with white providers received protease inhibitors significantly later than African-American patients with African-American providers (median 461 days vs. 342 days respectively; P < .001) and white patients with white providers (median 461 vs. 353 days respectively; P= .002). In this model, no difference was found between African-American patients with African-American providers and white patients with white providers (342 vs. 353 days respectively; P > .20). Adjusting for patients' trust in providers, as well as other patient and provider characteristics in subsequent models, did not account for these differences. Conclusion: Patient-provider racial concordance was associated with time to receipt of protease inhibitor therapy for persons with HIV. Racial concordance should be addressed in programs, policies, and future racial and ethnic health disparity research. abstract_id: PUBMED:14748856 Adherence counseling practices of generalist and specialist physicians caring for people living with HIV/AIDS in North Carolina. Context: National guidelines recommend that practitioners assess and reinforce patient adherence when prescribing antiretroviral (ART) medications, but the extent to which physicians do this routinely is unknown. Objective: To assess the adherence counseling practices of physicians caring for patients with HIV/AIDS in North Carolina and to determine characteristics associated with providing routine adherence counseling. Design: A statewide self-administered survey. Setting And Participants: All physicians in North Carolina who prescribed a protease inhibitor (PI) during 1999. Among the 589 surveys sent, 369 were returned for a response rate of 63%. The 190 respondents who reported prescribing a PI in the last year comprised the study sample. Main Outcome Measures: Physicians reported how often they carried out each of 16 adherence counseling behaviors as well as demographics, practice characteristics, and attitudes. Results: On average, physicians reported spending 13 minutes counseling patients when starting a new 3-drug ART regimen. 
The vast majority performed basic but not more extensive adherence counseling; half reported carrying out 7 or fewer of 16 adherence counseling behaviors "most" or "all of the time." Physicians who reported conducting more adherence counseling were more likely to be infectious disease specialists, care for more HIV-positive patients, have more time allocated for an HIV visit, and to perceive that they had enough time, reimbursement, skill, and office space to counsel. After also controlling for the amount of reimbursement and availability of space for counseling, physicians who were significantly more likely to perform a greater number of adherence counseling practices were those who (1) cared for a greater number of HIV/AIDS patients; (2) had more time allocated for an HIV physical; (3) felt more adequately skilled; and (4) had more positive attitudes toward ART. Conclusions: This first investigation of adherence counseling practices in HIV/AIDS suggests that physicians caring for patients with HIV/AIDS need more training and time allocated to provide antiretroviral adherence counseling services. abstract_id: PUBMED:10634076 Experience in the management of the HIV patient among the physicians of the Secretaría de Salud. Objective: To determine the experience of National Health Ministry physicians in the management of HIV-infected patients and in the use of antiretrovirals. Material And Method: A descriptive, observational, cross-sectional study was performed, with support from the National AIDS Council, from March to May 1998. Self-administered questionnaires were completed by National Health Ministry physicians with experience in HIV patient clinical care at the beginning of 5 different meetings on HIV/AIDS in several cities of the country. Statistical analysis included the chi-square test. Results: One hundred and eighty-one questionnaires were completed. The median number of HIV patients attended per physician was 4 (range 1-97); 36.5% of the physicians had used antiretrovirals (35.4% had prescribed nucleoside analogs and 9.9% protease inhibitors). The most frequently used drugs were AZT and/or ddI (40.3%); 17.7% had ordered CD4+ lymphocyte counts and 8.8% viral load testing. Conclusions: The proportion of National Health Ministry physicians with experience in HIV patient care was low, as was the use of antiretrovirals. Efforts should focus on improving the care of HIV patients through physician training. abstract_id: PUBMED:33226999 Time to treatment disruption in children with HIV-1 randomized to initial antiretroviral therapy with protease inhibitors versus non-nucleoside reverse transcriptase inhibitors. Background: Choice of initial antiretroviral therapy regimen may help children with HIV maintain optimal, continuous therapy. We assessed treatment-naïve children for differences in time to treatment disruption across randomly-assigned protease inhibitor versus non-nucleoside reverse transcriptase inhibitor-based initial antiretroviral therapy. Methods: We performed a secondary analysis of a multicenter phase 2/3, randomized, open-label trial in Europe, North and South America from 2002 to 2009. Children aged 31 days to <18 years, who were living with HIV-1 and treatment-naive, were randomized to antiretroviral therapy with two nucleoside reverse transcriptase inhibitors plus a protease inhibitor or non-nucleoside reverse transcriptase inhibitor.
Time to first documented treatment disruption to any component of antiretroviral therapy, derived from treatment records and adherence questionnaires, was analyzed using Kaplan-Meier estimators and Cox proportional hazards models. Results: The modified intention-to-treat analysis included 263 participants. Seventy-two percent (n = 190) of participants experienced at least one treatment disruption during study. At 4 years, treatment disruption probabilities were 70% (protease inhibitor) vs. 63% (non-nucleoside reverse transcriptase inhibitor). The unadjusted hazard ratio (HR) for treatment disruptions comparing protease inhibitor vs. non-nucleoside reverse transcriptase inhibitor-based regimens was 1.19, 95% confidence interval [CI] 0.88-1.61 (adjusted HR 1.24, 95% CI 0.91-1.68). By study end, treatment disruption probabilities converged (protease inhibitor 81%, non-nucleoside reverse transcriptase inhibitor 84%) with unadjusted HR 1.11, 95% CI 0.84-1.48 (adjusted HR 1.13, 95% CI 0.84-1.50). Reported reasons for treatment disruptions suggested that participants on protease inhibitors experienced greater tolerability problems. Conclusions: Children had similar time to treatment disruption for initial protease inhibitor and non-nucleoside reverse transcriptase inhibitor-based antiretroviral therapy, despite greater reported tolerability problems with protease inhibitor regimens. Initial pediatric antiretroviral therapy with either a protease inhibitor or non-nucleoside reverse transcriptase inhibitor may be acceptable for maintaining optimal, continuous therapy. abstract_id: PUBMED:33592343 Transmitted drug resistance among HIV-1 drug-naïve patients in Greece. Objectives: Despite the success of antiretroviral treatment (ART), the persisting transmitted drug resistance (TDR) and HIV genetic heterogeneity affect the efficacy of treatment. This study explored the prevalence of TDR among ART-naïve HIV patients in Greece during the period 2016-2019. Methods: Genotypic resistance testing was available for 438 ART-naïve HIV patients. Multivariable Poisson regression models were fitted. Results: The majority of patients were male, and there was a slight predominance of Hellenic (26.5%) over non-Hellenic (21.9%) nationality. The prevalence of TDR was 7.8%. There was a predominance of mutations for non-nucleoside reverse-transcriptase inhibitors (5.7%) over nucleoside reverse-transcriptase inhibitors (0.2%). No mutations to protease inhibitors were detected. The prevalence of resistance was 22.1% based on all mutations identified through the HIVdb interpretation system. The most frequent resistance sites were E138A (9.6%), K103N (6.4%), and K101E (2.1%). The majority of detected mutations were confined to subtype A (52.6%), followed by B (19.6%). Non-Hellenic nationality was significantly associated with an increased risk of TDR (relative risk 1.32, 95% confidence interval 1.04-1.69). Conclusions: Non-B HIV infections predominate in Greece, with an increasing trend in recent years. The prevalence of TDR remains stable. Ongoing surveillance of resistance testing is needed to secure the long-term success of ART. abstract_id: PUBMED:11996964 Assessment of atherosclerosis using carotid ultrasonography in a cohort of HIV-positive patients treated with protease inhibitors. Objective: Lipid disorders associated with the use of protease inhibitors (PI) may be a risk factor for premature atherosclerosis development. 
The aim of this study is to evaluate the extent of carotid intima media thickness (IMT) among HIV-positive patients treated with PI-containing regimens compared with PI-naïve and HIV-negative subjects. Methods: We analysed plasma lipid levels and carotid IMT in 28 HIV-positive patients treated with protease inhibitors (PIs) for a mean of 28.7 months (range 18-43) and in two control groups consisting, respectively, of 15 HIV-positive treatment-naïve patients and 16 HIV-negative subjects, who were matched for age, risk factors for HIV infection, cigarette smoking and CD4+ cell count. Results: PI-treated patients had higher triglyceride, HDL and apo B levels than controls. Carotid IMT was significantly increased in PI-treated patients compared with naïve or HIV-negative subjects. A correlation between HDL cholesterol, triglyceride and apo B levels and IMT was observed across the entire cohort. Conclusions: Plasma lipid alterations were associated with an increased IMT, and intima media thickening was more pronounced in PI-treated patients than in the two control groups. Periodic evaluation of the blood lipid profile and, if required, the use of lipid-lowering agents is advisable. Moreover, physicians should address concurrent modifiable risk factors for atherosclerosis, including smoking, hypertension, obesity and a sedentary lifestyle. abstract_id: PUBMED:11775417 Acute myocardial infarct in HIV-positive patients treated with protease inhibitors. We report the case of a 40-year-old HIV-positive man undergoing three-drug antiretroviral therapy for 2 years that included a protease inhibitor (ritonavir). The patient was admitted to our Coronary Care Unit with an acute anterior myocardial infarction. He smoked 20 cigarettes/day and had a family history of hypertension. At the time of hospitalization, triglyceride levels were found to be high (290 mg/dl). Metabolic alterations associated with the prolonged use of protease inhibitors, such as insulin resistance, dyslipidemia and lipodystrophy, have recently been described. These side effects may lead to premature coronary artery disease. Therefore it is mandatory to be aware that treatment with protease inhibitors in HIV-positive patients, despite prolonging survival and lowering AIDS complications, may accelerate atherosclerosis and precipitate acute coronary events, especially in patients with pre-existing cardiovascular risk factors. abstract_id: PUBMED:16137695 Premature atherosclerosis in HIV positive patients and cumulated time of exposure to antiretroviral therapy (SHIVA study). Background: With the advent of antiretroviral therapy regimens in HIV positive patients, it is crucial to consider their long-term benefit-to-risk ratios. The contribution of treatment to premature atherosclerosis is not clear. Thus, the aim of this study is to evaluate the impact of exposure to reverse transcriptase inhibitors (nucleoside and non-nucleoside) and to protease inhibitors on the cardiovascular status of an entire hospital-based cohort of patients. Methods: 154 patients were included. Using a linear analysis, we sought an association between the cumulative time of exposure to these three classes of antiretroviral drugs and the carotid intima-media thickness measured by ultrasonography and a cardiovascular composite score. Results: The study confirms premature atherosclerosis, which correlates not only with the usual risk factors, such as triglyceride level, but also with protease inhibitor exposure, especially that of lopinavir.
Nevertheless as regards current drug exposure, the clinical impact was low: five clinical complications of atherosclerosis and only one out of 35 scintigraphic and ECG exercise tests warranted a coronary angiography which was negative. Conclusion: These data should not lead to the rejection of protease inhibitors but should strengthen the prevention of cardiovascular diseases as an integral part of the management of HIV patients. abstract_id: PUBMED:15546830 Antiretrovirals, Part II: focus on non-protease inhibitor antiretrovirals (NRTIs, NNRTIs, and fusion inhibitors). The second in a series reviewing the HIV/AIDS antiretroviral drugs. This review summarizes the non-protease inhibitor antiretrovirals: nucleoside and nucleotide analogue reverse transcriptase inhibitors (NRTIs), the nonnucleoside reverse transcriptase inhibitors (NNRTIs), and cell membrane fusion inhibitors. In an overview format for primary care physicians and psychiatrists, this review presents the mechanism of action, side effects, toxicities, and drug interactions of these agents. abstract_id: PUBMED:11309224 Severe bleeding complications in HIV-positive haemophiliac patients treated with protease inhibitors. The availability of more potent drugs for the treatment of human immunodeficiency virus (HIV) infection has led to the development of aggressive drug regimens, including the widespread use of HIV protease inhibitors. Several reports have indicated increased bleeding complications in haemophiliac patients after starting treatment with protease inhibitors. We present two cases of exceptionally severe hemorrhagic events in HIV-positive patients with haemophilia A after starting HIV protease inhibitors, resulting in significant morbidity and mortality. One patient developed a progressive paranephric pseudotumor becoming symptomatic only one month after the start of ritonavir. The second patient presented with an intracranial bleed, resulting in his death within forty-eight hours, nineteen weeks after he was started on nelfinavir. Both patients showed an excellent antiviral response to the HIV-protease inhibitors with significant decrease in their HIV-RNA titers. Potentially serious hemorrhagic complications that require emergent intervention may occur in HIV-positive haemophiliac patients undergoing therapy with protease inhibitors. Clinicians should be alert to these complications. Answer: Yes, racial concordance between HIV-positive patients and their physicians does affect the time to receipt of protease inhibitors. A study found that in an unadjusted model, white patients received protease inhibitors much earlier than African-American patients. After adjusting for patient characteristics, African-American patients with white providers received protease inhibitors significantly later than African-American patients with African-American providers and white patients with white providers. However, no difference was found between African-American patients with African-American providers and white patients with white providers when adjusted for patient characteristics only. Adjusting for additional factors such as patients' trust in providers, as well as other patient and provider characteristics, did not account for these differences. The study concluded that patient-provider racial concordance was associated with the time to receipt of protease inhibitor therapy for persons with HIV (PUBMED:15566445).
Instruction: Late and very late onset group B Streptococcus sepsis: one and the same? Abstracts: abstract_id: PUBMED:37284404 Late-Onset Group B Streptococcal Sepsis in Preterm Twins. Group B streptococcal (GBS) infection is one of the leading causes of neonatal sepsis worldwide. Despite a significant decline in early-onset (EOS) sepsis due to intrapartum antibiotic prophylaxis, the incidence of late-onset (LOS) infection has remained unchanged. However, LOS GBS sepsis affecting twins is very rare. We report on preterm twins born at 29 weeks of gestation: Twin B was 31 days old when he developed LOS GBS sepsis and meningitis, and Twin A was 35 days old when he developed LOS GBS sepsis. Tests for maternal GBS colonization in breast milk were negative. Both babies were treated with antibiotics and eventually discharged without complications. abstract_id: PUBMED:36569815 Transmission of Group B Streptococcus in late-onset neonatal disease: a narrative review of current evidence. Group B streptococcus (GBS) late-onset disease (LOD, occurring from 7 through 89 days of life) is an important cause of sepsis and meningitis in infants. The pathogenesis and modes of transmission of LOD to neonates are yet to be elucidated. Established risk factors for the incidence of LOD include maternal GBS colonisation, young maternal age, preterm birth, HIV exposure and African ethnicity. Mucosal colonisation by GBS may be acquired perinatally or in the postpartum period from maternal or other sources. Growing evidence has demonstrated the predominant role of maternal sources in the transmission of LOD. Intrapartum antibiotic prophylaxis (IAP) to prevent early-onset disease reduces neonatal GBS colonisation during delivery; however, a significant proportion of IAP-exposed neonates born to GBS-carrier mothers acquire the pathogen at mucosal sites in the first weeks of life. GBS-infected breast milk, with or without the presence of mastitis, is considered a potential vehicle for transmitting GBS. Furthermore, horizontal transmission is possible from nosocomial and other community sources. Although infrequently reported, nosocomial transmission of GBS in the neonatal intensive care unit is probably less rare than is usually believed. GBS disease can sometimes recur and is usually caused by the same GBS serotype that caused the primary infection. This review aims to discuss the dynamics of GBS transmission in neonatal LOD. abstract_id: PUBMED:34603843 Newborn Septic Arthritis-A Rare Presentation of Late-Onset Group B Streptococcal Disease: Case Report and Short Review of the Literature. Group B Streptococcus (GBS) disease is a leading cause of invasive bacterial infections among neonates. We present the case of an 11-day-old neonate with septic arthritis as a rare presentation of late-onset disease (LOD) with a favorable short-term outcome. GBS is a leading cause of neonatal infection. Early-onset disease (EOD) is defined as infection from birth to 6 days of age, while LOD occurs from 7 days to approximately 3 months of age. EOD is acquired through vertical transmission and can be reduced through the application of intrapartum antibiotic prophylaxis (IAP). LOD can be acquired from the mother or from environmental sources and is unlikely to be prevented by IAP. The most common presentations of EOD are bacteremia (83%), pneumonia (9%), and meningitis (7%).
While the clinical picture in EOD and LOD is frequently similar, in LOD haematogenous spread may predispose neonates to present with uncommon organ manifestations other than the classic systemic signs of sepsis, for example septic arthritis. Herein, we report on the management and outcome of a term neonate with late-onset GBS bacteremia and subtle clinical symptoms of septic monoarthritis. abstract_id: PUBMED:27920985 The first case of recurrent ultra late onset group B streptococcal sepsis in a 3-year-old child. Group B streptococcus (GBS) is a commonly recognized cause of sepsis and meningitis in neonates and young infants. Invasive GBS infection is classified into early onset GBS disease (EOD, day 0-6), late onset GBS disease (LOD, day 7-89) and ultra late onset GBS disease (ULOD, after 3 months of age). ULOD is uncommon and recurrence is especially rare. We present the first recurrent case of ULOD GBS sepsis in a 3-year-old girl with a past medical history of hydrops fetalis and thoracic congenital lymphatic dysplasia. The first episode presented as sepsis at 2 years 8 months of age. The second episode occurred as sepsis with encephalopathy at 3 years 1 month of age. During each episode, the patient was treated using intravenous antimicrobials and her condition improved. Serotype examination was not performed in the first episode, but GBS type V was serotyped in the second episode. ULOD over 1 year of age is quite rare and may recur. abstract_id: PUBMED:24709927 Synchronous recurrence of group B streptococcal late-onset sepsis in twins. Group B Streptococcus (GBS) remains the leading cause of neonatal sepsis and meningitis in industrialized countries. Whereas the use of intrapartum antibiotic prophylaxis has led to a significant decline in early-onset sepsis, the incidence of late-onset sepsis has remained unchanged. Whether late-onset sepsis usually originates from established mucocutaneous GBS colonization of the infant or whether it results from an acute exogenous GBS infection remains controversial. Here we report on twins who both twice developed GBS sepsis in a strikingly parallel fashion, with both instances originating from a single hypervirulent GBS clone. Taken together, the presentation as cervical soft tissue infection in both cases, the synchronicity of the episodes, and the detection of GBS DNA in breast milk all strongly suggest an enteral mode of transmission with a short incubation period. abstract_id: PUBMED:36110126 Late-Onset Sepsis in a Premature Infant Mediated by Breast Milk: Mother-to-Infant Transmission of Group B Streptococcus Detected by Whole-Genome Sequencing. Background: Late-onset group B Streptococcus (LOGBS) sepsis is a cause of infection and death in infants. Infected breast milk has been considered a source of neonatal GBS infection and invasive infection. However, mother-to-infant transmission of GBS detected by high-resolution diagnostic methods is rarely reported. Methods: This study describes a low-weight premature infant who developed late-onset GBS septicemia 21 days after birth. GBS strains isolated from the mother's cervical secretion, the mother's milk, and the baby's blood were cultured to identify the source of GBS infection. We further confirmed the GBS isolates through matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF-MS). Finally, we performed whole-genome sequencing (WGS) and phylogenetic analyses on the GBS strains recovered.
Results: GBS isolates were cultured from the bloodstream of the premature infant and from the mother's milk. Subsequently, WGS and phylogenetic analyses of three GBS isolates demonstrated that the GBS strain from the infant's bloodstream was 100% homologous to that from the mother's breast milk, which had some gene fragments that differed from the GBS strain from the mother's cervical secretion. This provided evidence that the infant's late-onset GBS septicemia originated from his mother's breast milk rather than from vertical mother-to-infant transmission. Conclusion: Through WGS and phylogenetic analysis of the GBS strains, we proved in this study that the late-onset GBS sepsis in a premature infant was derived from his mother's breast milk. This indicates that WGS is an effective tool for infection tracing. Furthermore, this report provides direction for preventing late-onset GBS infection. abstract_id: PUBMED:22654550 Horizontal transmission of group B streptococcus in a neonatal intensive care unit. The incidence of early-onset group B streptococcal (GBS) sepsis in the neonatal population has decreased substantially since the introduction of maternal intrapartum antibiotic prophylaxis and routine prenatal screening. However, these strategies have not reduced the incidence of late-onset GBS infections. Additional research pertaining to the transmission of late-onset GBS infections is required to develop effective preventive methods. The present report describes probable horizontal transmission of late-onset GBS infection among three infants in a neonatal intensive care unit. GBS strain confirmation was based on the microbiological picture, antibiogram and pulsed-field gel electrophoresis. These cases highlight the morbidity associated with late-onset GBS disease and the importance of considering horizontal transmission as an etiological factor in GBS infection in the newborn period. Further studies assessing horizontal transmission in late-onset GBS disease may improve prevention and early intervention. abstract_id: PUBMED:30457734 Adenitis-cellulitis syndrome, an infrequent form of presentation of late-onset neonatal septicemia: Report of two cases. Septicemia is the main cause of neonatal mortality. Early-onset neonatal sepsis is usually related to maternal risk factors, including recto-vaginal colonization. In late-onset neonatal septicemia it is more difficult to establish the etiology because the majority of cases are nosocomial or community related. Streptococcus agalactiae (beta-hemolytic Streptococcus) is the organism most frequently associated with neonatal sepsis in developed countries. The late-onset form usually presents with septic symptoms and meningitis and, in a few cases, with osteoarticular, skin and soft tissue infection. Adenitis-cellulitis syndrome is rarely seen, and its main cause is Staphylococcus aureus, followed by Streptococcus agalactiae. We report two cases of group B Streptococcus late-onset neonatal septicemia, both of them with adenitis-cellulitis syndrome. Patients recovered uneventfully after adequate antibiotic therapy. abstract_id: PUBMED:34149682 Invasive Group B Streptococcus Disease With Recurrence and in Multiples: Towards a Better Understanding of GBS Late-Onset Sepsis. Group B Streptococcus (GBS) is a common intestinal colonizer during the neonatal period, but may also cause late-onset sepsis or meningitis in up to 0.5% of otherwise healthy colonized infants after day 3 of life.
Transmission routes and risk factors of this late-onset form of invasive GBS disease (iGBS) are not fully understood. Cases of iGBS with recurrence (n=25) and those occurring in parallel in twins/triplets (n=32) from the UK and Ireland (national surveillance study 2014/15) and from Germany and Switzerland (retrospective case collection) were analyzed to unravel shared (in affected multiples) or fixed (in recurrent disease) risk factors for GBS disease. The risk of iGBS among infants from multiple births was high (17%) if one infant had already developed GBS disease. The interval of onset of iGBS between siblings was 4.5 days, and in recurrent cases 12.5 days. Disturbances of the individual microbiome, including persistence of infectious foci, are suggested, for example, by high usage of perinatal antibiotics in mothers of affected multiples and by the association of an increased risk of recurrence with a short course of antibiotics [aOR 4.2 (1.3-14.2), P=0.02]. Identical GBS serotypes in both recurrent infections and concurrently infected multiples might indicate a failed microbiome integration of GBS strains that are generally regarded as commensals in healthy infants. The dynamics of recurrent GBS infections or concurrent infections in multiples suggest individual patterns of exposure and fluctuations in host immunity, causing failure of natural niche occupation. abstract_id: PUBMED:14600853 Severe late-onset group B streptococcal infection. A case report. Background: Group B Streptococcus (GBS) is a well-known cause of neonatal pneumonia, sepsis and meningitis. Peripartal antibiotic prophylaxis for early-onset GBS infection has been in routine use since the beginning of the last decade, but strategies for effective prevention of late-onset GBS infections are still lacking. Case Report: A few hours after discharge from a non-local maternity ward, a 3-week-old boy was admitted to our hospital because of GBS meningitis with necrotizing encephalomalacia. Maternal mastitis, not a disease of the baby, had led to the first admission. Discussion: Case history and negative maternal swabs and cultures for GBS led to the hypothesis of nosocomial infection. Screening- and risk-based peripartal antibiotic prophylaxis, better monitoring and improved therapeutic modalities have reduced the incidence and mortality of early-onset GBS infections, but peripartal prophylaxis has failed to influence late-onset GBS infections. Up to 40% of infants with late-onset meningitis develop neurological sequelae. Maternal vaccination with multivalent conjugate vaccines against GBS is a new strategy which may lead to passive protection of the infant. Further studies to examine the efficacy of vaccines are in progress. Answer: Late-onset group B Streptococcus (GBS) sepsis and very late-onset GBS sepsis are not one and the same; they are classified based on the age at which the infection occurs. Late-onset disease (LOD) typically occurs from 7 through 89 days of life and is a significant cause of sepsis and meningitis in infants (PUBMED:36569815). Very late-onset GBS disease (also termed ultra late-onset disease, ULOD), on the other hand, occurs after 3 months of age and is less common, with recurrence being especially rare (PUBMED:27920985). LOD can be acquired from the mother or from environmental sources and is unlikely to be prevented by intrapartum antibiotic prophylaxis (IAP), which is more effective against early-onset disease (EOD) (PUBMED:34603843).
The clinical presentation of LOD can vary, with some cases presenting with uncommon manifestations such as septic arthritis (PUBMED:34603843). In contrast, ULOD is quite rare and may recur, as seen in the case of a 3-year-old child with a history of hydrops fetalis and thoracic congenital lymphatic dysplasia (PUBMED:27920985). The transmission of GBS in LOD is predominantly from maternal sources, and infected breast milk is considered a potential vehicle for transmitting GBS (PUBMED:36569815). Whole-genome sequencing has been used to confirm mother-to-infant transmission of GBS through breast milk in cases of LOD (PUBMED:36110126). Horizontal transmission from nosocomial and other community sources is also possible, and cases of probable horizontal transmission of late-onset GBS infection among infants in a neonatal intensive care unit have been reported (PUBMED:22654550). In summary, while LOD and ULOD are both forms of late-onset GBS sepsis, they are distinguished by the timing of onset and may have different sources and modes of transmission. LOD is more common and occurs within the first three months of life, while ULOD is less common, occurs after three months, and can sometimes recur.
Instruction: Are amended surgical pathology reports getting to the correct responsible care provider? Abstracts: abstract_id: PUBMED:24926086 Are amended surgical pathology reports getting to the correct responsible care provider? Objectives: Amended reports (AmRs) need to follow patients to treating physicians, to avoid erroneous management based on the original diagnosis. This study was undertaken to determine if AmRs followed the patient appropriately. Methods: AmRs with diagnostic changes and discrepancies between ordering and treating physicians were tracked. Chart reviews, electronic medical record (EMR) reviews, and interviews were conducted to establish receipt of the AmR by the correct physician. Results: Seven of 60 AmRs had discrepancies between the ordering and treating physicians, all with malignant diagnoses. The AmR was present in the treating physician's chart in only one case. Ordering physicians indicated that AmRs were not forwarded to treating physicians when corrected results arrived after patient referral, under the assumption that the new physician was automatically forwarded pathology updates. No harm was documented in any of our cases. In one case with a significant amendment, the correct information was entered in the patient chart based on a tumor board discussion. A review of two electronic health record systems uncovered significant shortcomings in each delivery system. Conclusions: AmRs fail to follow the patient's chain of referrals to the correct care provider, and EMR systems lack the functionality to address this failure and alert clinical teams of amendments. abstract_id: PUBMED:21841408 Study of amended reports to evaluate and improve surgical pathology processes. Background: Amended surgical pathology reports record defects in the process of transforming tissue specimens into diagnostic information. Objective: Systematic study of amended reports tests 2 hypotheses: (a) that tracking amendment frequencies and the distribution of amendment types reveals relevant aspects of quality in surgical pathology's daily transformation of specimens into diagnoses and (b) that such tracking measures the effect, or lack of effect, of efforts to improve surgical pathology processes. Materials And Methods: We applied a binary definition of altered reports as either amendments or addenda and a taxonomy of defects that caused amendments as misidentifications, specimen defects, misinterpretations, and report defects. During the introduction of a LEAN process improvement approach, the Henry Ford Production System (HFPS), we followed trends in amendment rates and defect fractions to (a) evaluate specific interventions, (b) sort case-by-case root causes of misidentifications, specimen defects, and misinterpretations, and (c) audit the ongoing accuracy of the classification of changed reports. LEAN is the management and production system of the Toyota Motor Corporation that promotes continuous improvement; it considers resources expended for purposes other than creating value for end customers to be waste and targets such expenditures for elimination. Results: Introduction of real-time editing of amendments saw annual amendment rates increase from 4.8/1000 to 10.1/1000 and then decrease in an incremental manner to 5.6/1000 as HFPS-specific interventions were introduced.
Before the introduction of HFPS interventions, about a fifth of the amendments were due to misidentifications, a tenth were due to specimen defects, a quarter were due to misinterpretations, and almost half were due to report defects. During the period of the initial application of HFPS, the fraction of amendments due to misidentifications decreased as those due to report defects increased, in a statistically linked manner. As HFPS interventions took hold, misidentifications fell from 16% to 9%, specimen defect rates remained variable, ranging between 2% and 11%, and misinterpretations fell from 18% to 3%. Reciprocally, report defects rose from 64% to 83% of all amendment-causing defects. A case-by-case study of misidentifications, specimen defects, and misinterpretations found that (a) intervention at the specimen collection level had disappointingly little effect on patient misidentifications; (b) standardization of specimen accession and gross examination reduced only specimen defects surrounding ancillary testing; but (c) a double review of breast and prostate cases was associated with drastically reduced misinterpretation defects. Finally, audit of both amendments and addenda demonstrated that 10% of the so-called addenda actually qualified as amendments. Discussion: Monitored by the consistent taxonomy, rates of amended reports first rose, then fell. Examining specific defect categories provided information for evaluating specific LEAN interventions. Tracking the downward trend of amendment rates seemed to document the overall success of surgical pathology quality improvement efforts. Process improvements modestly decreased fractions of misidentifications and markedly decreased misinterpretation fractions. Classification integrity requires real-time, independent editing of both amendments (changed reports) and addenda (additions to reports). abstract_id: PUBMED:27959581 Quality Assurance in Breast Pathology: Lessons Learned From a Review of Amended Reports. Context: A review of amended pathology reports provides valuable information regarding defects in the surgical pathology process. Objective: To review amended breast pathology reports with emphasis placed on interpretative errors and their mechanisms of detection. Design: All amended pathology reports for breast surgical specimens for a 5-year period at a large academic medical center were retrospectively identified and classified based on an established taxonomy. Results: Of 12,228 breast pathology reports, 122 amended reports were identified. Most (88 cases; 72%) amendments were due to noninterpretative errors, including 58 report defects, 12 misidentifications, and 3 specimen defects. A few (34 cases; 27.9%) were classified as misinterpretations, including 14 major diagnostic changes (11.5% of all amendments). Among major changes, there were cases of missed microinvasion or small foci of invasion, missed micrometastasis, atypical ductal hyperplasia overcalled as ductal carcinoma in situ, ductal carcinoma in situ involving sclerosing adenosis mistaken for invasive carcinoma, lymphoma mistaken for invasive carcinoma, and amyloidosis misdiagnosed as fat necrosis. Nine major changes were detected at interpretation of receptor studies and were not associated with clinical consequences. Three cases were associated with clinical consequences, and of note, the same pathologist interpreted the corresponding receptor studies.
Conclusions: Review of amended reports was a useful method for identifying error frequencies, types, and methods of detection. Any time that a case is revisited for ancillary studies or other reasons, it is an opportunity for the surgical pathologist to reconsider their own or another pathologist's diagnosis. abstract_id: PUBMED:9648896 Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: a College of American Pathologists Q-probes study of 1,667,547 accessioned cases in 359 laboratories. Objectives: To evaluate amended report rates relative to surveillance methods and to identify surveillance methods or other practice parameters that lower amended report rates. Design: Participants in the 1996 Q-Probes quality improvement program of the College of American Pathologists were asked to prospectively document amended surgical pathology reports for a period of 5 months or until 50 amended reports were recorded. The methods of error detection were also recorded and laboratory and institutional policies surveyed. Four types of amended reports were investigated: those issued to correct patient identification errors, to revise originally issued final diagnoses, to revise preliminary written diagnoses, and to revise other reported diagnostic information that was significant with respect to patient management or prognosis. Participants: Three hundred fifty-nine laboratories, 96% from the United States. Results: A total of 3,147 amended reports in all four categories from a survey of 1,667,547 surgical pathology specimens accessioned during the study period were issued by the participants. The aggregate mean rate of amended reports was 1.9 per 1000 cases (median, 1.5 per 1000 cases). Of these, 19.2% were issued to correct patient identification errors, 38.7% to change the originally issued final diagnosis, 15.6% to change a preliminary written diagnosis, and 26.5% to change clinically significant information other than the diagnosis. Most frequently, a request from a clinician to review a case (20.5%) precipitated the error detection. Although not statistically significant, a higher amended report rate (1.6 per 1000) for all error types was associated with routine diagnostic slide review that was performed after completion of the surgical pathology report. This is compared to rates for institutions that had routine diagnostic slide review of cases prior to finalization of pathology reports (1.2 per 1000) and institutions that had no routine diagnostic slide review (1.4 per 1000). Slide review of cases prior to completion of reports lowered the rate of amended reports issued for two types of amended reports: those in which the originally issued final diagnosis was changed and those in which information other than the diagnosis was changed for patient management or prognostic significance. Other laboratory practice variables examined were not found to be associated with the amended report rate. Conclusions: There is an association between lower amended report rates and diagnostic slide review of cases prior to completion of the pathology report. The level of case review and type of case mix that is necessary for optimal quality assurance needs further investigation. abstract_id: PUBMED:34570930 Management of amended variant classification laboratory reports by genetic counselors in the United States and Canada: An exploratory study.
For the past two decades, the guidelines put forth by the American College of Medical Genetics and Genomics (ACMG) detailing providers' clinical responsibility to recontact patients have remained mostly unchanged, despite evolving variant interpretation practices, which have yielded substantial rates of reclassification and amended reports. In fact, there is little information regarding genetic counselors' roles in informing patients of reclassified variants, or the process by which these amended reports are currently being handled. In this study, we developed a survey to measure current experiences with amended variant reports and preferences for ideal management, which was completed by 96 genetic counselors from the United States and Canada. All respondents indicated they were the individuals responsible for disclosing initial positive genetic testing results and any clinically actionable reclassified variant reports, and over half (56%) received at least a few amended variant reports each year. Nearly a quarter (20/87) of respondents reported having a standard operating procedure (SOP) for managing amended reports, and all were very satisfied (12/20) or satisfied (8/20) with the SOP. Of those without a protocol, 76% (51/67) would prefer to have an SOP implemented. Respondents reported a preference for (1) laboratories to send amended variant reports directly to the genetic counselor or ordering physician through email or an online portal, and (2) notification to patients ideally occurring through a phone call. In the event that the original genetic counselor is inaccessible, respondents reported a preference for reports to be sent directly to another genetic counselor on the team (36%) or to the clinic in general (27%). Information from this study provides insight into the current practices of genetic counselors as applied to amended reports and what improvements may increase the efficiency of the reporting process. Moreover, these results suggest a need for an updated statement addressing the duty to recontact, specifically as it applies to amended variant reports. abstract_id: PUBMED:29582645 Quality of Breast Cancer Surgical Pathology Reports. Background: Surgical pathology reporting of breast cancer is needed for appropriate staging and treatment decisions. We here assessed the quality of surgical pathology reports of breast cancer from different laboratories of Karachi, Pakistan. Methods: One hundred surgical pathology reports from ten different laboratories of Karachi were assessed for documentation of elements against a checklist adopted from the CAP guideline over a period of six months from January 2017 to June 2017 in the Oncology Department, Jinnah Postgraduate Medical Centre, Karachi. Results: Out of 100 reports, clinical information was documented in 68%, and type of procedure and lymph node sampling in 84% and 34%, respectively. Specimen laterality was mentioned in 90%, tumor site in 44%, tumor size in 92%, focality in 40%, histological type in 96%, grade in 87%, LCIS in 19%, DCIS in 83%, size of DCIS in 19%, architectural pattern in 26%, nuclear grade in 17%, necrosis in 14%, excision margin status in 91%, invasive component in 83%, DCIS in 16%, lymph node status in 91% with positive nodes in 56%, size of macrometastasis in 54%, extranodal involvement in 48%, lymphovascular invasion in 86%, treatment effects in 31%, and pathology reporting with TNM in 57%. Conclusion: This study shows that the quality of surgical pathology reports for breast cancer in Karachi is not satisfactory.
Therefore, there is a great need to create awareness among histopathologists regarding the importance of accurate breast cancer surgical pathology reporting and to introduce a standardized checklist according to international guidelines for better treatment planning. abstract_id: PUBMED:35468256 Implementation of the Nurse Practitioner as Most Responsible Provider model of care in a Specialised Mental Health setting in Canada. Globally, mental health systems have failed to adequately respond to the growing demand for mental health services, resulting in a disparity between the need for and the provision of treatment. A paucity of mental health care providers contributes to this disparity. This can be addressed by engaging Nurse Practitioners (NPs) in an integrated model within healthcare teams. This paper describes the implementation of the NP as Most Responsible Provider (MRP) model of care in a specialised mental health hospital in Ontario, Canada. Guided by the participatory, evidence-based, patient-focused process for advanced practice nursing (APN) role development, implementation, and evaluation (PEPPA) framework, the authors developed a model of care and implemented the first seven steps of the PEPPA framework: (a) define the population and describe the current model of care, (b) identify stakeholders, (c) determine the need for a new model of care, (d) identify priority areas and goals of improvement, (e) define the new model of care, and (f) plan and implement the NP as MRP model of care. Within these steps, different strategies were implemented: (a) revising policies and procedures, (b) harmonising reporting structures, (c) developing and implementing a collaborative practice structure for NPs, (d) standardised and transparent compensation, (e) performance standards and monitoring, and (f) Self-Assessment Competency frameworks, education, and development opportunities. This paper contributes to the state of knowledge by implementing the NP as MRP model of care in a specialised mental health care setting in Ontario, Canada, and advocates the need to incorporate mental health programmes within the Ontario nursing curriculum. abstract_id: PUBMED:23838150 Do public reports of provider performance make their data and methods available and accessible? Public reports of provider performance are widespread and the methods used to generate the provider ratings differ across the sponsoring entities. We examined 115 hospital and 27 physician public reports to determine whether report sponsors made the methods used to score providers available and accessible. While nearly all websites made transparent some of the methods used to assess provider performance, we found substantial variation in the extent to which they fully adhered to recommended methods elements identified in the Consumer-Purchaser Disclosure Project's Patient Charter for performance reporting. Most public reports provided descriptions of the data sources, whether measures were endorsed, and the attribution approach. Least often made transparent were methods descriptions related to advanced provider review and reconsideration of results, reliability assessment, and case-mix adjustment. Future research should do more to identify the core elements that would lead consumer end users to have confidence in public reports. abstract_id: PUBMED:36823637 Does increased provider effort improve quality of care? Evidence from a standardised patient study on correct and unnecessary treatment.
Background: Poor quality of care, including overprovision (unnecessary care), is a global health concern. Greater provider effort has been shown to increase the likelihood of correct treatment, but its relationship with overprovision is less clear. Providers who make more effort may give more treatment overall, both correct and unnecessary, or may have lower rates of overprovision; we test which is true in the Tanzanian private health sector. Methods: Standardised patients (SPs) visited 227 private-for-profit and faith-based facilities in Tanzania, presenting with symptoms of asthma and TB. They recorded history questions asked and physical examinations carried out by the provider, as well as laboratory tests ordered, treatments prescribed, and fees paid. A measure of provider effort was constructed on the basis of a checklist of recommended history-taking questions and physical exams. Results: 15% of SPs received the correct care for their condition and 74% received unnecessary care. Increased provider effort was associated with increased likelihood of correct care, and decreased likelihood of giving unnecessary care. Providers who made more effort charged higher fees, through the mechanism of higher consultation fees, rather than increased fees for lab tests and drugs. Conclusion: Providers who made more effort were more likely to treat patients correctly. A novel finding of this study is that they were also less likely to provide unnecessary care, suggesting it is not simply a case of some providers doing "more of everything", but that those who do more in the consultation give more targeted care. abstract_id: PUBMED:27221868 Errors in Surgical Pathology Reports: a Study from a Major Center in Pakistan. Background: Errors in surgical pathology diagnosis can have serious consequences for the patient. Since the final product of a surgical pathology lab is the report, errors can be picked up by reviewing reports of cases. Aim: To determine the frequency and types of error in surgical pathology reports of cases signed out in 2014 in a laboratory in Karachi, Pakistan. Materials And Methods: All surgical pathology reports in which changes were made in the original report after sign out and an amended report was issued were included. Errors included: (1) misinterpretations; (2) missing critical information; (3) erroneous critical information; (4) misidentification; and (5) typographic errors. Results: Errors were identified in 210 cases (0.37%). These comprised 199 formalin-fixed specimens and 11 frozen sections. The latter represented 3.8% of a total of 2,170 frozen sections. Of the 11 frozen section errors, 10 were misinterpretations. Of the 199 permanent specimens, 99 (49.7%) were misinterpretations, 65 (32.7%) belonged to the missing critical information category, 8 (4%) belonged to the erroneous critical information category, 8 (4%) were misidentifications, and 16 (8%) were typographic errors, while 3 cases (1.5%) were other errors. Most misinterpretations occurred in the gastrointestinal, liver and pancreatobiliary tract (23.2%) and breast (13.1%). Another 87 cases were reviewed at the clinicians' request. However, the diagnosis after review remained the same as the original diagnosis. In 49 of these (56.3%), additional workup was performed at the time of the review. Conclusions: Our findings were similar to other published studies. We need to develop documented procedures for timely review of cases to detect errors.
Answer: Based on the information provided in the abstracts, it appears that there are significant issues with amended surgical pathology reports (AmRs) reaching the correct responsible care provider. A study found that AmRs with diagnostic changes often failed to follow the patient's chain of referrals to the correct care provider, and electronic medical record (EMR) systems lacked the functionality to address this failure and alert clinical teams of amendments (PUBMED:24926086). This indicates a systemic issue in ensuring that AmRs are communicated effectively to treating physicians, which could potentially lead to erroneous management based on the original diagnosis. Furthermore, a study of amended reports undertaken to evaluate and improve surgical pathology processes revealed that tracking amendment frequencies and the distribution of amendment types is crucial for understanding quality in surgical pathology (PUBMED:21841408). That study did not directly address whether AmRs reach the correct care provider, but it does highlight the importance of managing and tracking amendments effectively. Another study, reporting lessons learned from a review of amended breast pathology reports, emphasized that any time a case is revisited for ancillary studies or other reasons, it is an opportunity for the surgical pathologist to reconsider the diagnosis (PUBMED:27959581). This suggests that there are mechanisms in place for detecting errors, but it does not provide specific information on the communication of AmRs to the responsible care provider. Overall, the abstracts suggest that there are challenges in ensuring that AmRs are communicated to the correct responsible care provider, and improvements in EMR systems and processes are needed to address this critical issue.
Instruction: Do physicians know and use mandatory quality reports? Abstracts: abstract_id: PUBMED:22864842 Do physicians know and use mandatory quality reports? Objective: Physicians should be principal recipients of quality reports because they play a major role in referral decisions. The purpose of this study was to determine physicians' awareness and use of Germany's mandatory hospital quality reports. Method: A retrospective observational study was carried out through structured telephone interviews of a stratified random sample of 300 physicians working in ambulatory care in Germany. We analysed absolute and relative frequencies of physicians' awareness and use of quality reports. Additionally, we analysed physicians' awareness and use of quality reports in relation to age, sex, specialty, practice type and region of practice using binary regression analysis. Results: Less than half of the physicians were aware of the quality reports. Younger physicians were significantly more aware of the reports but did not use them more often than their older colleagues. Overall, 10% of the physicians already used them for counselling patients. Taking physicians' use of online comparative hospital guides into account, the combined total use was 14%. Conclusions: Germany's mandatory hospital quality reports play only a minor role in physicians' counselling of patients who need hospital care because too few physicians know and use the reports. abstract_id: PUBMED:30270032 Public reporting of hospital quality data: What do referring physicians want to know? Objective: To identify ambulatory care physicians' priorities for hospital quality criteria to support them in counselling patients on what hospital to choose. Methods: Three hundred non-hospital-based, stratified randomly sampled physicians, representing the five main referring specialties in Germany, participated in a cross-sectional survey. Physicians rated the importance of 80 hospital quality criteria to be used in their counselling of patients in need of hospital care. Criteria selection was based on a literature analysis and the content of Germany's mandatory hospital quality reports. We calculated the most important criteria and performed an ordinal regression analysis to examine whether the physicians' characteristics 'age', 'sex', 'specialty', 'practice type' and 'region' affected physicians' importance ratings. Results: To counsel patients in need of a hospital referral, physicians preferred hospital quality criteria that reflect their own and their patients' experiences with a hospital. Additionally, hospitals' expertise and results of treatment were rated as highly important. In contrast, hospitals' structural characteristics and compliance with external requirements were rated as less important. Physicians' characteristics affected importance ratings only negligibly. Conclusions: To support referring physicians' counselling of patients regarding what hospital to choose, and ultimately to achieve optimal patient outcomes, hospital report cards must be enriched by information on physicians' and their patients' experiences with hospitals. Hospitals' structural characteristics play a minor role in counselling of patients needing hospital care. abstract_id: PUBMED:18596176 Physicians' views on public reporting of hospital quality data. This article describes physicians' responses to patient questions and physicians' views about public reports on hospital quality.
Interviews with 56 office-based physicians in seven states/regions used hypothetical scenarios of patients questioning referrals based on public reports of hospital quality. Responses were analyzed using an iterative coding process to develop categories and themes from the data. Four themes describe physicians' responses to patients: (a) rely on existing physician-patient relationships, (b) acknowledge and consider patient perspectives, (c) take actions to follow up on patient concerns, and (d) provide patients' perspectives on quality reports. Three themes summarize responses to hospital quality reports: perceived lack of methodological rigor, content considerations in reports, and attitudes/experience regarding reports. Findings suggest that physicians take seriously patients' questions about hospital-quality reports and consider changing referral recommendations based on their concerns and/or preferences. Results underscore the importance of efforts by report developers and physician outreach/education to address physicians' methodological concerns. abstract_id: PUBMED:34579272 Uptake of Non-Mandatory Vaccinations in Future Physicians in Italy. In 2017 in Italy, a number of vaccinations became mandatory or started to be recommended and offered free of charge. In this study, we aimed to assess the coverage rates for those vaccinations in the pre-mandatory era among students at the School of Medicine of Padua University studying the degree course in medicine and surgery (future physicians), on the basis of the vaccination certificates presented during health surveillance. The vaccinations considered were those against pertussis, rubella, mumps, measles, varicella, Haemophilus influenzae type b (which became mandatory in 2017), pneumococcus, meningococcus C and meningococcus B (only suggested and offered for free since 2017). The study enrolled 4706 students of medicine and surgery. High vaccine uptake was observed for the vaccines against pertussis, rubella, mumps and measles, especially in younger students (born after 1990). Good completion for Haemophilus influenzae type b and meningococcus C was also observed. Very low coverage rates (all under 10%) for vaccination against varicella, pneumococcus and meningococcus B were observed. In conclusion, uptake for some non-mandatory vaccines was below the recommended threshold, although younger generations showed a higher uptake, possibly as a result of policies implemented at the national level. Our findings support the idea of considering health surveillance visits as an additional opportunity to overcome confidence and convenience barriers and offer vaccine administration. abstract_id: PUBMED:31029402 Does spontaneous adverse drug reactions' reporting differ between different reporters? A study in Toulouse Pharmacovigilance Centre. Introduction: In France, since 2011 reporting of adverse drug reactions (ADRs) has been extended to patients (and patients' associations), who can report ADRs directly to their regional pharmacovigilance centre. In pharmacovigilance, the informativeness of ADR reports is important to improve signal detection. The present study was performed to compare the quality of patients', physicians' and community pharmacists' reports. Methods: We performed a retrospective study investigating the quality of patients', physicians' and community pharmacists' ADRs reported to Toulouse University PharmacoVigilance Centre (TUPVC) from January 2014 to June 2017.
We used mandatory and non-mandatory criteria, as defined by the European Medicines Agency. Report quality was defined as "satisfactory" when more than 90% of items were completed. We also compared report quality according to ADR seriousness and the reporting tool used (email or the mobile app VigiBip®). Results: The number of reports to TUPVC increased between 2014 and 2016 (+51%) for patients and remained stable for pharmacists and physicians. According to the mandatory criteria, the quality of the investigated reports was "satisfactory" (more than 90% of the items completed) regardless of the reporter and without significant differences between reporters. For the non-mandatory criteria, only the clinical description of ADRs and the ADR outcome were completed in more than 90% of reports. Significant differences were observed between the different reporters: community pharmacists documented the clinical description, ADR outcome and concomitant drugs better than both patients and physicians. Physicians better documented medical history and biological data, whereas patients documented medical history and other aetiologies better than pharmacists, and the clinical description of ADRs better than physicians. Conclusion: The present study failed to show differences between pharmacists', physicians' and patients' ADR reports for the mandatory criteria. However, significant differences were found for the non-mandatory criteria, with drug-related data documented more completely by pharmacists and medical data by physicians and patients. abstract_id: PUBMED:9114897 Physicians' attitudes toward using patient reports to assess quality of care. Purpose: Patients' reports about their care, including reports about specific physician behaviors, are increasingly being used to assess quality of care. The authors surveyed physicians in an academic environment about their attitudes concerning possible uses of these reports. Method: A survey was conducted of the 540 hospital- and community-based internists and housestaff at Beth Israel Hospital in Boston, Massachusetts, in 1993-94. The survey instrument included seven items designed to assess the physicians' views about potential uses of patient reports about their care. The physicians were asked to rate the items on a five-point scale (ranging from "strongly agree" to "strongly disagree"). Results: A total of 343 (64%) of the physicians responded. Eighty-six percent agreed that patient judgments are important in assessing quality of care. There was widespread agreement with four potential uses of patient judgments: for changing a specific physician behavior (94% agreed), for receiving feedback from patients (90%), for use in physician education programs (81%), and for evaluating students and housestaff (72%). However, far fewer of the physicians agreed with two uses over which physicians would have less control: publishing judgments to help patients select physicians (28% agreed) and the use of judgments to influence physician compensation (16%). While the housestaff were less likely to agree with the use of patient reports in housestaff evaluations, the housestaff and faculty had similar opinions about all the other potential uses. Conclusion: The physicians believed that patients' reports about experiences with their physicians are valid indicators of quality. They responded that they would accept using these reports to improve care when the uses are nonthreatening and within the control of physicians. In contrast, there was far less support when the uses are external to physician control and potentially threatening.
abstract_id: PUBMED:27692186 A survey of Physicians' Perspectives on the New York State Mandatory Prescription Monitoring Program (ISTOP). Background: Prescription drug monitoring programs (PDMPs) have emerged as one tool to combat prescription drug misuse and diversion. New York State mandates that prescribers use its PDMP (called ISTOP) before prescribing controlled substances. We surveyed physicians to assess their experiences with mandatory PDMP use. Methods: Electronic survey of attending physicians, from multiple clinical specialties, at one large urban academic medical center. Results: Of 207 responding physicians, 89.4% had heard of ISTOP, and of those, 91.1% were registered users. 45.7% of respondents used the system once per week or more. There was significant negative feedback, with 40.4% of respondents describing ISTOP as "rarely" or "never helpful," and 39.4% describing it as "difficult" or "very difficult" to use. Physicians expressed frustration with the login process, the complexity of querying patients, and the lack of integration with electronic medical records. Only 83.1% knew that ISTOP use is mandated in almost all situations. A minority agreed with this mandate (44.2%); surgeons, males, and those who prescribe controlled substances at least once per week had significantly lower rates of agreement (22.6%, 36.2%, and 33.0%, respectively). The most common reasons for disagreement were: time burden, concerns about helpfulness, potential for under-treatment, and erosion of physician autonomy. Emergency physicians, who are largely exempt from the mandate, were the most likely to believe that ISTOP was helpful, yet the least likely to be registered users. 48.4% of non-emergency physicians reported perfect compliance with the mandate; surgeons and males reported significantly lower rates of perfect compliance (18.2% and 36.8%, respectively). Conclusions: This study offers a unique window into how one academic medical faculty has experienced New York's mandatory PDMP. Many respondents believe that ISTOP is cumbersome and generally unhelpful. Furthermore, many disagree with, and don't comply with, its mandatory use. abstract_id: PUBMED:29360025 How Should Physicians Make Decisions about Mandatory Reporting When a Patient Might Become Violent? Mandatory reporting of persons believed to be at imminent risk for committing violence or attempting suicide can pose an ethical dilemma for physicians, who might find themselves struggling to balance various conflicting interests. Legal statutes dictate general scenarios that require mandatory reporting to supersede confidentiality requirements, but physicians must use clinical judgment to determine whether and when a particular case meets the requirement. In situations in which it is not clear whether reporting is legally required, the situation should be analyzed for its benefit to the patient and to public safety. Access to firearms can complicate these situations, as firearms are a well-established risk factor for violence and suicide yet also a sensitive topic about which physicians and patients might have strong personal beliefs. abstract_id: PUBMED:10191807 Mandatory reporting of intimate partner violence to police: views of physicians in California. Objectives: This study examined physicians' perspectives on mandatory reporting of intimate partner violence to police. Methods: We surveyed a stratified random sample of California physicians practicing emergency, family, and internal medicine and obstetrics/gynecology. 
Results: An estimated 59% of California primary care and emergency physicians (n = 508, 71% response rate) reported that they might not comply with the reporting law if a patient objects. Primary care physicians reported lower compliance. Most physicians agreed that the legislation has potential risks, raises ethical concerns, and may provide benefits. Conclusions: Physicians' stated noncompliance and perceived negative consequences raise the possibility that California's mandatory reporting law is problematic and ineffective. abstract_id: PUBMED:25296061 Mandatory reports of concerns about the health, performance and conduct of health practitioners. Objective: To describe the frequency and characteristics of mandatory reports about the health, competence and conduct of registered health practitioners in Australia. Design And Setting: Retrospective review and multivariate analysis of allegations of "notifiable conduct" involving health practitioners received by the Australian Health Practitioner Regulation Agency (AHPRA) between 1 November 2011 and 31 December 2012. Main Outcome Measures: Statutory grounds for reports, types of behaviour reported, and incidence of notifications by profession, sex, age, jurisdiction and geographic area. Results: Of 819 mandatory notifications made during the study period, 501 (62%) related to perceived departures from accepted professional standards, mostly standards of clinical care. Nurses and doctors dominated notifications: 89% (727/819) involved a doctor or nurse in the role of notifier and/or respondent. Health professionals other than the respondents' treating practitioners made 46% of notifications (335/731), and the profession of the notifier and respondent was the same in 80% of cases (557/697). Employers made 46% of notifications (333/731). Psychologists had the highest rate of notifications, followed by medical practitioners, and then nurses and midwives (47, 41 and 40 reports per 10 000 practitioners per year, respectively). Incidence of notifications against men was more than two-and-a-half times that for women (46 v 17 reports per 10 000 practitioners per year; P < 0.001) and there was fivefold variation in incidence across states and territories. Conclusions: Although Australia's mandatory reporting regime is in its infancy, our data suggest that some of the adverse effects and manifest benefits forecast by critics and supporters, respectively, have not materialised. Further research should explore the variation in notification rates observed, evaluate the outcomes of reports, and test the effects of the mandatory reporting law on whistleblowing and help-seeking behaviour. Answer: Physicians' awareness and use of mandatory quality reports vary, and the extent to which they are informed and utilize these reports can be limited. A study in Germany found that less than half of the physicians were aware of the country's mandatory hospital quality reports, and only 10% used them for counseling patients. When considering the use of online comparative hospital guides, the combined total use was 14% (PUBMED:22864842). This suggests that mandatory hospital quality reports play a minor role in physicians' counseling of patients who need hospital care, primarily because too few physicians are knowledgeable about and use the reports. 
Moreover, physicians' preferences for hospital quality criteria to counsel patients in need of a hospital referral were found to reflect their own and their patients' experiences with a hospital, as well as hospitals' expertise and treatment results. Structural characteristics and compliance with external requirements were deemed less important (PUBMED:30270032). Physicians' views on public reporting of hospital quality data also indicate that they take patients' questions about hospital-quality reports seriously and consider changing referral recommendations based on patient concerns and preferences. However, there is a need for report developers and physician outreach/education to address physicians' methodological concerns about these reports (PUBMED:18596176). In summary, while physicians recognize the importance of quality reports and public reporting of hospital quality data, their actual knowledge and use of these mandatory reports are not as widespread as might be expected. Factors such as the perceived relevance of the information, ease of use, integration with existing workflows, and the impact on physician-patient relationships can influence whether and how these reports are utilized in clinical practice.
Instruction: Is availability of endoscopy changing initial management of vesicoureteral reflux? Abstracts: abstract_id: PUBMED:19625050 Is availability of endoscopy changing initial management of vesicoureteral reflux? Purpose: The optimal management of vesicoureteral reflux continues to be controversial. Since dextranomer/hyaluronic acid copolymer implants were approved in 2001 for endoscopic antireflux surgery, the perception that endoscopy is less morbid than open surgery, combined with concerns over potential adverse effects of prophylactic antibiotics, has led some to advocate endoscopy as initial therapy for reflux. We examined whether the availability of endoscopy has changed the management of reflux. Materials And Methods: The i3 Innovus database (Ingenix, Eden Prairie, Minnesota) contains longitudinal claims data on more than 39 million patients spanning a 5-year period. We analyzed children diagnosed with vesicoureteral reflux (ICD-9 code 593.7, plus claim for radiographic or nuclear cystogram within 90 days) and at least 1 year of followup. We assessed patient characteristics, and diagnostic and therapeutic interventions. We evaluated surgical trends, including the changing use of endoscopic vs open antireflux surgery. Results: Among 9,496 children meeting inclusion criteria 1,998 (21%) underwent antireflux surgery during the study period (2002 to 2006). Median followup for surgical cases was 894 days. Of patients undergoing antireflux surgery 1,046 (52.4%) underwent an open procedure and 952 (47.6%) underwent endoscopy. Females were more likely to undergo endoscopy (52% vs 33% of males, p <0.0001), as were children older than 5 years (53% vs 45% of those younger, p = 0.0002). Of patients undergoing surgery 1,234 (62%) were treated early (within 12 months of diagnosis). During the study period the rate of newly diagnosed reflux cases managed by early surgery increased from 12.0% to 17.3% (Mantel-Haenszel chi-square test p <0.0001). This increase was primarily due to a more than doubling of patients undergoing early endoscopy (4.2% in 2002 vs 9.7% in 2006, p <0.0001). The rate of newly diagnosed cases managed by early open surgery did not change significantly (p = 0.3446). Conclusions: During a 5-year period after dextranomer/hyaluronic acid was introduced for endoscopic therapy the number of children newly diagnosed with vesicoureteral reflux treated with early antireflux surgery increased primarily due to increased use of endoscopy. This finding suggests that despite the lack of evidence of benefit, endoscopy is increasingly viewed as first line therapy for reflux. abstract_id: PUBMED:16045552 Recent trends of genitourinary endoscopy in children. Downsizing and refinement of the pediatric endoscope in video-monitoring systems have facilitated genitourinary endoscopy even in small children without any traumatic instrumentation. Indications for endoscopy in children with hematuria or tractable urinary tract infection have been tailored for the rareness of genitourinary malignancy or secondary vesicoureteral reflux (VUR) as a result of infravesical obstruction. Most mechanical outlet obstructions can be relieved endoscopically irrespective of sex and age. Endoscopic decompression by puncture or incision of both intravesical and ectopic ureteroceles can be an initial treatment similar to open surgery for an affected upper moiety. Endoscopy is necessary following urodynamic study to exclude minor infravesical obstruction only in children with unexplained dysfunctional voiding. 
Genitourinary endoscopy is helpful for structural abnormalities before and at the time of repairing congenital urogenital anomalies. Endoscopic injection therapy of VUR has been established as a less invasive surgical treatment. Pediatric endoscopy will play a greater role in the armamentarium for most pediatric urological diseases through the analysis of visual data and discussion on the indications for endoscopy throughout the world. abstract_id: PUBMED:22263446 The changing paradigm for the management of pediatric vesicoureteral reflux. Pediatric vesicoureteral reflux is recognized in children whose ureterovesical valve is incompetent, resulting in flow of urine from the kidney to the bladder and back to the kidney. It is a disease process with a rapidly changing management paradigm; more research is being done to determine the long-term outcomes for those affected in both childhood and adulthood. This article will provide a brief overview of current management and discuss the role of the advanced practice registered nurse with families. abstract_id: PUBMED:10492225 Management of ectopic ureterocele associated with renal duplication: a comparison of partial nephrectomy and endoscopic decompression. Purpose: We compared the efficacy of primary endoscopic decompression versus partial nephrectomy for treating ectopic duplex ureteroceles. Materials And Methods: We retrospectively reviewed the records of patients with renal duplication and upper pole ectopic ureterocele. Patients were classified according to the initial radiological evaluation. The operation performed was arbitrarily chosen by the surgeon. Results: A total of 54 patients had unilateral upper or bilateral upper pole ureterocele with no associated vesicoureteral reflux. Partial nephrectomy was performed in 26 patients, of whom 4 (15%) required additional surgery for new onset ipsilateral lower pole reflux. Endoscopic decompression was performed in 28 patients, of whom 18 (64%) required additional treatment due to reflux into the ipsilateral lower pole ureter and ureterocele in 9, reflux into the ureterocele only in 4, ipsilateral lower pole reflux only in 3 and persistent ureterocele obstruction in 2 (p<0.01). An ectopic ureterocele with vesicoureteral reflux into 1 or more moieties was identified in 111 patients, including 56 of 67 (84%) treated with partial nephrectomy and 37 of 44 (84%) treated with endoscopy who have persistent reflux or required further surgery for reflux resolution. Conclusions: In patients with an ectopic ureterocele and no vesicoureteral reflux partial nephrectomy should be considered the treatment of choice. However, when the initial cystogram reveals vesicoureteral reflux, partial nephrectomy and endoscopic ureterocele decompression have identical definitive cure rates of only 16%. The majority of the latter patients require continued observation and/or additional surgery for managing persistent reflux. abstract_id: PUBMED:21519275 Recent advances in the management of ureteroceles in infants and children: why less may be more. Purpose Of Review: Ureteroceles are an infrequently seen and challenging pediatric urological condition that in addition to causing obstruction, may also be associated with vesicoureteral reflux and/or obstruction of the bladder outlet. Past experience with the morbidity associated with ureteroceles presenting with urinary tract infection may have stimulated a particularly aggressive approach as evidenced by more historical reports describing total reconstruction. 
This article aims to review the recent literature in support of less aggressive management of ureteroceles in children. Recent Findings: The widespread availability and reported high success rates with endoscopic puncture of ureteroceles, along with the recognition that vesicoureteral reflux associated with ureteroceles can be effectively managed nonoperatively, has shifted the paradigm towards an individualized approach with greater emphasis placed on nonoperative management or less aggressive surgical techniques. Cystic renal dysplasia associated with ureterocele, much like that seen in isolation, is likely to involute, thus providing spontaneous 'decompression' of the ureterocele and avoiding the need for surgery when it is present. Summary: Although total reconstruction of renal moieties associated with ureteroceles might be appealing as it can achieve a normal-appearing urinary tract with a single procedure performed in infancy, a more individualized approach that relies on less aggressive surgical treatments and nonoperative management over time can achieve the same functional results. abstract_id: PUBMED:21944107 Parental preferences in the management of vesicoureteral reflux. Purpose: Considering that there are few absolute indications for the timing and type of surgical correction of vesicoureteral reflux, we objectively measured parental choice in how the child's vesicoureteral reflux should be managed. Materials And Methods: We prospectively identified patients 0 to 18 years old with any grade of newly diagnosed vesicoureteral reflux. All races and genders were included, and non-English speakers were excluded from analysis. Parents were shown a video presented by a professional actor that objectively described vesicoureteral reflux and the 3 treatment modalities of antibiotic prophylaxis, open ureteral reimplantation and endoscopic treatment. Then they completed a questionnaire regarding their preference for initial management, and at hypothetical followup points of 18, 36 and 54 months. Consultation followed with the pediatric urologist who was blinded to the questionnaire results. Results: A total of 86 girls and 15 boys (150 refluxing units) were enrolled in the study. Mean patient age was 2.6 years old. Preferences for initial treatment were antibiotic prophylaxis in 36, endoscopic surgery in 26, open surgery in 11, unsure in 26 and no response in 2. Among those initially selecting antibiotic prophylaxis, after 18 months the preference was for endoscopic treatment, but after 36 and 54 months preferences trended toward open surgery. After consultation with the pediatric urologist, 68 parents chose antibiotic prophylaxis. Conclusions: Our data show that antibiotic prophylaxis is preferred as the initial therapy for vesicoureteral reflux by 35.6% of parents. However, given persistent vesicoureteral reflux, preferences shifted toward surgery. With time, the preference for open surgery increased and the preference for endoscopic surgery decreased.
Patients were divided according to ureterocele position, year of treatment and type of initial intervention. Evaluation was completed by ultrasound, voiding cystourethrogram and nuclear renal scans. Results: Mean age at initial surgery was 18 months. Group 1 comprised 68 patients operated before 2002, and Group 2 66 patients operated after 2002. Group 1 patients showed a higher rate of preoperative vesicoureteral reflux. Mean follow-up was 43 and 25 months for group 1 and 2, respectively. Ureteroceles treated endoscopically underwent secondary procedures in 61% (group 1) and 42% (group 2) for ectopic and in 42% (group 1) and 10% (group 2) for orthotopic ureteroceles. Overall, there was more de novo upper moiety VUR in group 1 (48% vs 12%). Conclusion: Primary endoscopic ureterocele treatment seems to be an appropriate option for children with a clinically significant ureterocele. The rate of secondary procedures was higher for ectopic ureteroceles but acceptable compared to the upper tract approach. abstract_id: PUBMED:29127964 Minimally Invasive Management for Vesicoureteral Reflux in Infants and Young Children. Minimally invasive ureteral reimplantation is an attractive and useful tool in the armamentarium for the management of complicated vesicoureteral reflux (VUR). Subureteric dextranomer/hyaluronic acid injection, laparoscopic extravesical ureteric reimplantation and pneumovesicoscopic intravesical ureteral reimplantation with or without robotic assistance are established minimally invasive approaches to management of VUR. The high cost and the limited availability of robotics have restricted accessibility to these approaches. Laparoscopic and/or robotic ureteral reimplantation continues to evolve and will have a significant bearing on the management of complicated VUR in infants and young children. abstract_id: PUBMED:11435878 Parental preferences in the management of vesicoureteral reflux. Purpose: We determined parental preferences for the treatment of vesicoureteral reflux in their child. Materials And Methods: Parents of children with vesicoureteral reflux were prospectively recruited to evaluate choices in reflux management. In each case a standard questionnaire that described the treatment options for reflux was administered. Parents were asked to choose between long-term antibacterial prophylaxis with annual radiography studies and open or endoscopic treatment at each of 1 to 5 years of followup. They were also given the choice between open or endoscopic treatment. Annual resolution and/or correction rates provided for medical, surgical and endoscopic management were 20%, 95% to 100% and 80% after 1 or 2 injections, respectively. Results: We queried 91 families of female (81%) and male (19%) patients. Average duration of reflux followup was 2 years and mean patient age was 49.8 months. At diagnosis reflux was grades I to II in 65% of cases, grade III in 26% and grades IV to V in 9%. The majority of parents chose daily antibiotics over surgery if the child was predicted to have vesicoureteral reflux for 1 to 4 years. However, the majority chose ureteral reimplantation over daily antibiotics and yearly x-ray if a 5-year course was predicted. In contrast, parents chose daily antibiotics rather than endoscopic treatment if the anticipated interval was 1 to 3 years. After 3 years the majority preferred the endoscopic approach. 
Also, 60% of parents stated that they would choose endoscopic treatment over reimplantation, although the child may require repeat endoscopic treatment and there was a 20% chance of persistent vesicoureteral reflux. Conclusions: Parents of children with vesicoureteral reflux prefer antibiotic prophylaxis as initial treatment. However, when daily antibiotics and yearly cystography may be required beyond 3 to 4 years, most parents would choose definitive correction. While endoscopic treatment is less effective than surgery, parents prefer endoscopic treatment, most likely because it is less invasive. Also, when compared directly against each other, the majority of parents stated that they would choose endoscopic treatment over surgery, although it has a lower success rate. abstract_id: PUBMED:21873694 Technical report—Diagnosis and management of an initial UTI in febrile infants and young children. Objectives: The diagnosis and management of urinary tract infections (UTIs) in young children are clinically challenging. This report was developed to inform the revised, evidence-based, clinical guideline regarding the diagnosis and management of initial UTIs in febrile infants and young children, 2 to 24 months of age, from the American Academy of Pediatrics Subcommittee on Urinary Tract Infection. Methods: The conceptual model presented in the 1999 technical report was updated after a comprehensive review of published literature. Studies with potentially new information or with evidence that reinforced the 1999 technical report were retained. Meta-analyses on the effectiveness of antimicrobial prophylaxis to prevent recurrent UTI were performed. Results: Review of recent literature revealed new evidence in the following areas. Certain clinical findings and new urinalysis methods can help clinicians identify febrile children at very low risk of UTI. Oral antimicrobial therapy is as effective as parenteral therapy in treating UTI. Data from published, randomized controlled trials do not support antimicrobial prophylaxis to prevent febrile UTI when vesicoureteral reflux is found through voiding cystourethrography. Ultrasonography of the urinary tract after the first UTI has poor sensitivity. Early antimicrobial treatment may decrease the risk of renal damage from UTI. Conclusions: Recent literature agrees with most of the evidence presented in the 1999 technical report, but meta-analyses of data from recent, randomized controlled trials do not support antimicrobial prophylaxis to prevent febrile UTI. This finding argues against voiding cystourethrography after the first UTI. Answer: Yes, the availability of endoscopy is changing the initial management of vesicoureteral reflux (VUR). Since the approval of dextranomer/hyaluronic acid copolymer implants for endoscopic antireflux surgery in 2001, there has been a notable increase in the use of endoscopy as an initial therapy for VUR. This shift is attributed to the perception that endoscopy is less morbid than open surgery and concerns over potential adverse effects of prophylactic antibiotics. A study analyzing children diagnosed with VUR found that the number of children newly diagnosed with VUR treated with early antireflux surgery increased, primarily due to increased use of endoscopy. Specifically, the rate of patients undergoing early endoscopy more than doubled from 2002 to 2006 (PUBMED:19625050). Additionally, the minimally invasive nature of endoscopic treatment for VUR and ureteroceles is increasingly being recognized. 
Endoscopic injection therapy of VUR has been established as a less invasive surgical treatment, and the minimally invasive management of VUR in infants and young children is gaining traction (PUBMED:16045552, PUBMED:29127964). Furthermore, parental preferences also play a role in the management of VUR, with some parents preferring endoscopic treatment over open surgery due to its less invasive nature, despite a lower success rate (PUBMED:11435878). The paradigm for managing pediatric VUR is rapidly changing, with more research being conducted to determine long-term outcomes and a shift towards individualized approaches that place greater emphasis on nonoperative management or less aggressive surgical techniques (PUBMED:22263446, PUBMED:21519275). Overall, the trend suggests that endoscopy is increasingly viewed as a first-line therapy for the management of VUR.
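A brief methodological aside on the trend statistic cited above: the claims-database abstract (PUBMED:19625050) tests the 2002-2006 rise in early surgical management (12.0% to 17.3%) with a Mantel-Haenszel chi-square test. The sketch below illustrates the same idea with a Cochran-Armitage test for trend in Python; the per-year counts are invented for illustration, and only the overall direction mirrors the rates reported in the abstract.

import numpy as np
from scipy.stats import norm

# Hypothetical yearly data: children newly diagnosed with reflux (n) and
# those managed with early endoscopy (r). Figures are illustrative only;
# the source abstract reports overall rates of 4.2% (2002) and 9.7% (2006).
years = np.array([2002, 2003, 2004, 2005, 2006])
n = np.array([1900, 1920, 1880, 1910, 1886])   # diagnosed per year (assumed)
r = np.array([80, 105, 130, 160, 183])         # early endoscopy per year (assumed)

# Cochran-Armitage test for a linear trend in proportions across ordered years.
t = years - years.mean()                       # centered year scores
N, R = n.sum(), r.sum()
p_bar = R / N
T = np.sum(t * (r - n * p_bar))
var_T = p_bar * (1 - p_bar) * (np.sum(n * t**2) - np.sum(n * t)**2 / N)
z = T / np.sqrt(var_T)
p_value = 2 * norm.sf(abs(z))                  # two-sided p-value
print(f"proportions by year: {np.round(r / n, 3)}")
print(f"z = {z:.2f}, p = {p_value:.2g}")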
Instruction: Does mental illness stigma contribute to adolescent standardized patients' discomfort with simulations of mental illness and adverse psychosocial experiences? Abstracts: abstract_id: PUBMED:18349328 Does mental illness stigma contribute to adolescent standardized patients' discomfort with simulations of mental illness and adverse psychosocial experiences? Objective: Adolescent mental illness stigma-related factors may contribute to adolescent standardized patients' (ASP) discomfort with simulations of psychiatric conditions/adverse psychosocial experiences. Paradoxically, however, ASP involvement may provide a stigma-reduction strategy. This article reports an investigation of this hypothetical association between simulation discomfort and mental illness stigma. Methods: ASPs were randomly assigned to one of two simulation conditions: one was associated with mental illness stigma and one was not. ASP training methods included carefully written case simulations, educational materials, and active teaching methods. After training, ASPs completed the adapted Project Role Questionnaire to rate anticipated role discomfort with hypothetical adolescent psychiatric conditions/adverse psychosocial experiences and to respond to open-ended questions regarding this discomfort. A mixed design ANOVA was used to compare comfort levels across simulation conditions. Narrative responses to an open-ended question were reviewed for relevant themes. Results: Twenty-four ASPs participated. A significant effect of simulation was observed, indicating that ASPs participating in the simulation associated with mental illness stigma anticipated greater comfort with portraying subsequent stigma-associated roles than did ASPs in the simulation not associated with stigma. ASPs' narrative responses regarding their reasons for anticipating discomfort focused upon the role of knowledge-related factors. Conclusion: ASPs' work with a psychiatric case simulation was associated with greater anticipated comfort with hypothetical simulations of psychiatric/adverse psychosocial conditions in comparison to ASPs lacking a similar work experience. The ASPs provided explanations for this anticipated discomfort that were suggestive of stigma-related knowledge factors. This preliminary research suggests an association between ASP anticipated role discomfort and mental illness stigma, and that ASP work may contribute to stigma reduction. abstract_id: PUBMED:23355778 Effects, experiences, and impact of stigma on patients with bipolar disorder. Background: Many people with mental illness experience stigma that has impacted their lives. In this study, we validated the Inventory of Stigmatizing Experiences (ISE) as a tool to help quantify the stigma experienced by patients with bipolar disorder and its impact on their lives. The ISE has two components, ie, the Stigma Experiences Scale (SES) and the Stigma Impact Scale (SIS), which were administered to a population of Argentinean patients with bipolar disorder. We characterized the differences between these two populations using the SES and SIS. Finally, we compared SES and SIS scores with those in a population of Canadian patients with bipolar disorder. Methods: The SES and SIS scales were administered to tertiary care patients with bipolar I and II disorder in Argentina (n = 178) and Canada (n = 214). 
Results: In this study, we validated both SES (Kuder-Richardson coefficient of reliability, 0.78) and SIS (Cronbach's alpha, 0.91) scales in a population of Argentinean patients with bipolar disorder. There were no significant differences in stigma between patients with bipolar I or II disorder on SES or SIS. However, over 50% of all respondents believed that the average person is afraid of those with mental illnesses, that stigma associated with mental illness has affected their quality of life, and that their self-esteem has suffered due to stigma. In comparison with the Canadian population, Argentinean participants scored lower on both the SES and SIS, which may be due to cultural differences or to differences in population characteristics. Conclusion: Stigma associated with mental illness is serious and pervasive. If we are to find successful strategies to mitigate stigma, it is first important to understand how patients perceive such stigma. The ISE is a valuable tool which allows us to do this with high reliability among cultures. abstract_id: PUBMED:37505791 Self-Stigma's Effect on Psychosocial Functioning Among People With Mental Illness. Abstract: Consequences of self-stigma exhibit a four-step regressive model from being aware of public stigma, to agreeing with it, to applying it to oneself, to resulting harm on the self. We hypothesize the relationship between self-stigma and psychosocial functioning is mediated by three constructs: the why try effect, stigma stress coping resources, and personal recovery. Two hundred eight people with depressive and bipolar disorders participated in the study. Data partially supported the regressive model of self-stigma. Awareness was not found to be associated with other regressive stages. The model representing the path between self-stigma-harm and psychosocial functioning was significant and robust. The path was mediated by the why try effect and personal recovery. Findings echo the growing body of research attempting to describe outcomes of self-stigma, in this case, psychosocial functioning. Programs meant to erase self-stigma and its effect on functioning should incorporate the why try effect and personal recovery as strategic ingredients. abstract_id: PUBMED:32112548 Severity of panic disorder, adverse events in childhood, dissociation, self-stigma and comorbid personality disorders Part 1: Relationships between clinical, psychosocial and demographic factors in pharmacoresistant panic disorder patients. Objectives: Little is known about the relation between severity of panic disorder, adverse events in childhood, dissociation, self-stigma and comorbid personality disorders. The aim of this study is to look for the intercorrelations between these factors. Method: The study explores the relation between clinical, demographic and social factors in panic disorder using a cross-sectional design. Inpatients with pharmacoresistant panic disorder with and without agoraphobia were included in the study. Participants were also assessed for comorbidity with other anxiety or personality disorder. The Clinical Global Impression (CGI), Beck Anxiety Inventory (BAI), Beck Depression Inventory (BDI-II), Dissociative Experiences Scale (DES), Internalized Stigma of Mental Illness (ISMI), Childhood Trauma Questionnaire-Short Form (CTQ-SF), Panic Disorder Severity Scale (PDSS) and demographic data were used as measurement tools.
Results: A total of 142 pharmacoresistant patients with panic disorder with or without agoraphobia were admitted for a 6-week cognitive behavioral therapy inpatient program in a psychotherapeutic department between November 2015 and July 2019. One hundred and five inpatients (33 males and 72 females) with a mean age of 37.8 ± 12.1 years were included in the study. Sixty-nine patients suffered from an additional comorbid anxiety disorder and 43 had a comorbid personality disorder. abstract_id: PUBMED:25554354 Internalized stigma and its psychosocial correlates in Korean patients with serious mental illness. We aimed to examine internalized stigma of patients with mental illness in Korea and identify the contributing factors to internalized stigma among socio-demographic, clinical, and psychosocial variables using a cross-sectional study design. A total of 160 patients were recruited from a university mental hospital. We collected socio-demographic data, clinical variables and administered self-report scales to measure internalized stigma and levels of self-esteem, hopelessness, social support, and social conflict. Internalized stigma was identified in 8.1% of patients in our sample. High internalized stigma was independently predicted by low self-esteem, high hopelessness, and high social conflict among the psychosocial variables. Our finding suggests that simple psychoeducation only for insight gaining cannot improve internalized stigma. To manage internalized stigma in mentally ill patients, hope and self-esteem need to be promoted. We also suggest that a relevant psychosocial intervention, such as developing coping skills for social conflict with family, can help patients overcome their internalized stigma. abstract_id: PUBMED:37659106 Association of childhood adversities with psychosocial difficulties among Chinese children and adolescents. Background: Adverse childhood experiences (ACEs) have been well recognized as risk factors for various adverse outcomes. However, the impacts of ACEs on psychological wellbeing among Chinese children and adolescents are unknown. Methods: In total, 27 414 participants (6592 Grade 4-6 and 20 822 Grade 7-12 students) were included and information on ACEs and various psychosocial outcomes was collected. We identified subgroups with distinct psychosocial statuses using cluster analysis, and logistic regression was applied to measure the associations of ACEs [individual, cumulative numbers by categories or co-occurring patterns identified by using multiple correspondence analysis (MCA)] with item- and cluster-specific psychosocial difficulties. Results: Three and four cluster-based psychosocial statuses were identified for Grade 4-6 and Grade 7-12 students, respectively, indicating that psychosocial difficulties among younger students were mainly presented as changes in relationships/behaviours, whereas older students were more likely to be characterized by deviations in multiple domains including psychiatric symptoms and suicidality. Strongest associations were found for threat-related ACEs (e.g. bullying experiences) with item- or cluster-based psychosocial difficulties (e.g. for cluster-based difficulties, the highest odds ratios = 1.72-2.08 for verbal bullying in Grade 4-6 students and 6.30-12.81 for cyberbullying in Grade 7-12 students). Analyses on cumulative numbers of ACEs and MCA-based ACE patterns revealed similar risk patterns.
Additionally, exposure patterns dominated by a poor external environment showed significant associations with psychosocial difficulties among Grade 7-12 students but not Grade 4-6 students. Conclusions: Chinese adolescents faced different psychosocial difficulties that varied by age, all of which were associated with ACEs, particularly threat-related ACEs. Such findings prompt the development of early interventions for those key ACEs to prevent psychosocial adversities among children and adolescents. abstract_id: PUBMED:29914409 Adverse childhood experiences among patients with substance use disorders at a referral psychiatric hospital in Kenya. Background: Substance use disorders are a major cause of health and social problems worldwide. Research evidence shows a strong graded relationship between adverse childhood experiences and substance use in adulthood. This study aimed at determining the prevalence of adverse childhood experiences and their association with substance use among patients with substance use disorders. Method: The study used a descriptive cross-sectional design. A total of 134 patients aged 18 years and above receiving inpatient treatment for substance use disorders were recruited into the study. A mental state exam was done to rule out active psychopathology. Data on socio-demographic variables, adverse childhood experiences (ACEs) and substance use were collected using the Adverse Childhood Experiences International Questionnaire and the Alcohol, Smoking and Substance Involvement Screening Test, respectively. Data were analysed using the Statistical Package for the Social Sciences (SPSS) version 20 for Windows. Results: Males accounted for the majority of the study participants (n = 118, 88.1%). Only 43.3% (n = 58) of the participants had a family history of substance use disorder. The most frequently used substance was alcohol, which was reported by 82.1% of the participants. Nearly 93% of the respondents had experienced at least one ACE, and the most prevalent ACE was having one or no parent, which was reported by half of the respondents. The adverse childhood experiences significantly associated with current problematic substance use were: emotional abuse, having someone with mental illness in the household, physical abuse and physical neglect. Emotional abuse significantly predicted tobacco (A.O.R = 5.3 (1.2-23.9)) and sedative (A.O.R = 4.1 (1.2-14.2)) use. Childhood exposure to physical abuse was associated with cannabis use [A.O.R = 2.9 (1.0-7.9)]. Experiencing five or more ACEs was associated with increased risk of using sedatives. Conclusion: There is a high prevalence of adverse childhood experiences among patients with substance use disorders. Experiencing emotional abuse, having someone with mental illness in the household, physical abuse and physical neglect in childhood are risk factors for substance use disorders. ACE screening and management should be incorporated into substance abuse prevention programs and policies. abstract_id: PUBMED:31617486 The Prevalence and Consequences of Adverse Childhood Experiences in the German Population. Background: Multiple studies have shown a link between cumulative adverse experiences in childhood and a wide variety of psychosocial problems in later life. There have not been any pertinent representative studies of the German population until now.
The goal of this study is to determine the frequency of adverse childhood experiences (ACE), the extent to which they manifest themselves in patterns of co-occurrence, and their possible connection to psychosocial abnormalities in the German population. Methods: 2531 persons (55.4% female) aged 14 years and up (mean [M] = 48.6 years, standard deviation [SD] = 18) were retrospectively studied for ACE and psychosocial abnormalities by means of the Patient Health Questionnaire-4 (PHQ-4) and further questions on aggressiveness and life satisfaction. The frequency of ACE and their cumulative occurrence were analyzed in descriptive terms. Patterns of simultaneously occurring types of ACE were studied with latent class analysis. Associations between ACE and psychosocial abnormalities were tested with logistic regression analyses. Results: 43.7% of the respondents reported at least one ACE; 8.9% reported four or more. The most commonly reported ones were parental separation and divorce (19.4%), alcohol consumption and drug abuse in the family (16.7%), emotional neglect (13.4%), and emotional abuse (12.5%). Four ACE patterns were identified by latent class analysis: no/minimal ACE, household dysfunction, child maltreatment, and multiple ACE. In the cumulative model, the high-risk group with four or more ACE displayed a significantly elevated risk for depressiveness (odds ratio [OR] = 7.8), anxiety (OR = 7.1), physical aggressiveness (OR = 10.5), and impaired life satisfaction (OR = 5.1). Conclusion: Adverse childhood experiences are common, and their cumulation is associated with markedly increased negative sequelae for the affected persons. Preventive approaches are needed that extend beyond the area of child maltreatment alone and address other problems in the parental home, such as mental illness in the parents. Data acquisition by self-reporting is a limitation of this study. abstract_id: PUBMED:31176081 A comparative study of childhood/adolescent and adult onset schizophrenia: does the neurocognitive and psychosocial outcome differ? Aims & Objectives: The present study aimed to evaluate the neurocognitive functioning and psychosocial outcome (in terms of social functioning, disability and internalized stigma) in patients with schizophrenia with childhood/adolescent onset (age of onset ≤18 years) and adult onset (>18 years) schizophrenia and to evaluate the effect of neurocognitive impairment on the outcome variables in patients with youth and adult onset schizophrenia. Methodology: 34 patients with youth onset schizophrenia (Group-I) and 56 patients with adult onset schizophrenia (Group-II), who were currently in clinical remission were assessed on a comprehensive neurocognitive battery, Positive and Negative Syndrome Scale (PANSS), Global Assessment of Functioning Scale (GAF), Indian Disability Evaluation and Assessment Scale (IDEAS), Social and Occupational Functioning Assessment Scale (SOFS) and Internalised Stigma of Mental Illness Scale (ISMIS). Results: On neurocognitive domains (after adjusting for co-variates) significant differences were noted between the two groups in terms of processing speed (TMT-A; I > II; p = 0.009), verbal fluency (COWA; I < II; p = 0.001) and cognitive flexibility (TMT-B; I > II; p = 0.031).
Compared to patients with adult onset schizophrenia, patients with childhood & adolescent onset schizophrenia had significantly higher PANSS negative score, higher disability in all domains of IDEAS, poorer socio-occupational functioning, low global functioning and reported more stigma in the domains of alienation and discrimination. In patients with childhood & adolescent onset schizophrenia, higher deficits in the processing speed and verbal fluency were associated with significantly lower socio-occupational functioning and higher disability; higher executive dysfunction was associated with higher internalized stigma. Among patients with adult onset schizophrenia, higher disability was related to executive dysfunction only and higher stigma was associated with poor cognitive processing, selective attention and poor executive functioning. Conclusions: The present study suggests that compared to adult onset schizophrenia, patients with childhood & adolescent onset schizophrenia have more deficits in neurocognition, have higher level of disability, poorer socio-occupational functioning and have higher level of self-stigma. abstract_id: PUBMED:26718424 Adverse childhood experiences, family functioning and adolescent health and emotional well-being. Objectives: Adverse childhood experiences (ACEs) have been consistently linked in a strong and graded fashion to a host of health problems in later adulthood but few studies have examined the more proximate effect of ACEs on health and emotional well-being in adolescence. Study Design: Nationally representative cross-sectional study. Methods: Using logistic regression on the 2011/12 National Survey of Children's Health, we examined the cumulative effect of total ACE score on the health and emotional well-being of US adolescents aged 12 to 17 years. We investigated the moderating effect of family functioning on the impact of ACE on adolescent health and emotional well-being. Results: Adolescents with higher ACE scores had worse reported physical and emotional well-being than adolescents with fewer ACEs net of key demographic and socio-economic characteristics. Family functioning moderated the negative impact of cumulative ACE on adolescent health and emotional well-being. Conclusions: Adolescent well-being has enduring consequences; identifying children with ACE exposure who also have lower-functioning family could also help identify those families at particular risk. Answer: Yes, mental illness stigma does contribute to adolescent standardized patients' (ASP) discomfort with simulations of mental illness and adverse psychosocial experiences. The study reported in abstract PUBMED:18349328 found that ASPs who participated in a simulation associated with mental illness stigma anticipated greater comfort with portraying subsequent stigma-associated roles than ASPs in the simulation not associated with stigma. This suggests that exposure to psychiatric case simulations was associated with greater anticipated comfort with hypothetical simulations of psychiatric/adverse psychosocial conditions, indicating an association between ASP anticipated role discomfort and mental illness stigma. Additionally, the involvement of ASPs in such simulations may contribute to stigma reduction.
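As a side note on the reliability figures quoted above (a Kuder-Richardson coefficient of 0.78 for the Stigma Experiences Scale and a Cronbach's alpha of 0.91 for the Stigma Impact Scale, PUBMED:23355778), the following minimal sketch shows how Cronbach's alpha is computed from a respondents-by-items score matrix. The response data are invented; KR-20 is the same formula applied to dichotomous (0/1) items.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 Likert-type stigma items.
scores = np.array([
    [3, 4, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [1, 1, 2, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # KR-20 is the same formula on 0/1 items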
Instruction: Does availability of physical activity and food outlets differ by race and income? Abstracts: abstract_id: PUBMED:22954386 Does availability of physical activity and food outlets differ by race and income? Findings from an enumeration study in a health disparate region. Background: Low-income, ethnic/racial minorities and rural populations are at increased risk for obesity and related chronic health conditions when compared to white, urban and higher-socio-economic status (SES) peers. Recent systematic reviews highlight the influence of the built environment on obesity, yet very few of these studies consider rural areas or populations. Utilizing a CBPR process, this study advances community-driven causal models to address obesity by exploring the difference in resources for physical activity and food outlets by block group race and income in a small regional city that anchors a rural health disparate region. To guide this inquiry we hypothesized that lower income and racially diverse block groups would have fewer food outlets, including fewer grocery stores and fewer physical activity outlets. We further hypothesized that walkability, as defined by a computed walkability index, would be lower in the lower income block groups. Methods: Using census data and GIS, base maps of the region were created and block groups categorized by income and race. All food outlets and physical activity resources were enumerated and geocoded and a walkability index computed. Analyses included one-way MANOVA and spatial autocorrelation. Results: In total, 49 stores, 160 restaurants and 79 physical activity outlets were enumerated. There were no differences in the number of outlets by block group income or race. Further, spatial analyses suggest that the distribution of outlets is dispersed across all block groups. Conclusions: Under the larger CPBR process, this enumeration study advances the causal models set forth by the community members to address obesity by providing an overview of the food and physical activity environment in this region. This data reflects the food and physical activity resources available to residents in the region and will aid many of the community-academic partners as they pursue intervention strategies targeting obesity. abstract_id: PUBMED:26643585 Neighborhood characteristics contribute to urban alcohol availability: Accounting for race/ethnicity and social disorganization. This study examined the role that race/ethnicity and social disorganization play in alcohol availability in Milwaukee, Wisconsin, census block groups. This study estimated negative binomial regression models to examine separately the relationship between neighborhood racial/ethnic composition and social disorganization levels for (1) total, (2) on-premise, and (3) off-premise alcohol outlets. Results of this study suggest that proportion Hispanic was positively associated with total and with off-premise alcohol outlets. Second, proportion African American was negatively associated with on-premise alcohol outlets and positively associated with off-premise alcohol outlets. Proportion Asian was not associated with total, on-premise, or off-premise alcohol outlets. However, the effects of race/ethnicity on alcohol availability were either unrelated or negatively related to alcohol outlet availability once neighborhood social disorganization levels were taken into account, and social disorganization was positively and significantly associated with all alcohol outlet types. 
Neighborhood characteristics contribute to alcohol availability and must be considered in any efforts aimed toward prevention of alcohol-related negative health and social outcomes. abstract_id: PUBMED:26427621 Geographic measures of retail food outlets and perceived availability of healthy foods in neighbourhoods. Objective: To examine associations between geographic measures of retail food outlets and perceived availability of healthy foods. Design: Cross-sectional. Setting: A predominantly rural, eight-county region of South Carolina, USA. Subjects: Data from 705 household shoppers were analysed using ordinary least-squares regression to examine relationships between geographic measures (presence and distance) of food outlets obtained via a geographic information system and perceived availability of healthy foods (fresh fruits and vegetables and low-fat foods). Results: The presence of a supermarket within an 8·05 km (5-mile) buffer area was significantly associated with perceived availability of healthy foods (β=1·09, P=0·025) when controlling for all other food outlet types. However, no other derived geographic presence measures were significant predictors of perceived availability of healthy foods. Distances to the nearest supermarket (β=-0·16, P=0·003), dollar and variety store (β=-0·15, P=0·005) and fast-food restaurant (β=0·11, P=0·015) were all significantly associated with perceptions of healthy food availability. Conclusions: Our results suggest that distance to food outlets is a significant predictor of healthy food perceptions, although presence is sensitive to boundary size. Our study contributes to the understanding and improvement of techniques that characterize individuals' food options in their community. abstract_id: PUBMED:23726897 A closer examination of the relationship between children's weight status and the food and physical activity environment. Objectives: Conflicting findings on associations between food and physical activity (PA) environments and children's weight status demand attention in order to inform effective interventions. We assess relationships between the food and PA environments in inner-city neighborhoods and children's weight status and address sources of conflicting results of prior research. Methods: Weight status of children ages 3-18 was assessed using parent-measured heights and weights. Data were collected from 702 children living in four low-income cities in New Jersey between 2009 and 2010. Proximity of a child's residence to a variety of food and PA outlets was measured in multiple ways using geo-coded data. Multivariate analyses assessed the association between measures of proximity and weight status. Results: Significant associations were observed between children's weight status and proximity to convenience stores in the 1/4 mile radius (OR = 1.9) and with presence of a large park in the 1/2 mile radius (OR = 0.41). No associations were observed for other types of food and PA outlets. Conclusions: Specific aspects of the food and PA environments are predictors of overweight and obese status among children, but the relationships and their detection are dependent upon aspects of the geospatial landscape of each community. abstract_id: PUBMED:37293617 Trends in food and beverage purchases in informal, mixed, and formal food outlets in Mexico: ENIGH 1994-2020. Background: The retail food environment in Mexico is characterized by the co-existence of both, formal and informal food outlets. 
Yet, the contribution of these outlets to food purchases over time has not been documented. Understanding the longitudinal trends in where Mexican households purchase their foods is critical for the development of future food retail policies. Methods: We used data from Mexico's National Income and Expenditure Survey from 1994 to 2020. We categorized food outlets as formal (supermarkets, chain convenience stores, restaurants), informal (street markets, street vendors, acquaintances), and mixed (fiscally regulated or not, i.e., small neighborhood stores, specialty stores, public markets). We calculated the proportion of food and beverage purchases by food outlet for each survey for the overall sample and stratified by education level and urbanicity. Results: In 1994, the highest proportion of food purchases was from mixed outlets, represented by specialty and small neighborhood stores (53.7%), and public markets (15.9%), followed by informal outlets (street vendors and street markets) with 12.3%, and formal outlets from which supermarkets accounted for 9.6%. Over time, specialty and small neighborhood stores increased 4.7 percentage points (p.p.), while public markets decreased 7.5 p.p. Street vendors and street markets decreased 1.6 p.p., while supermarkets increased 0.5 p.p. Convenience stores contributed 0.5% at baseline and increased to 1.3% by 2020. Purchases at specialty stores mostly increased in higher socioeconomic levels (13.2 p.p.) and metropolitan cities (8.7 p.p.), while public markets decreased the most in rural households and lower socioeconomic levels (6.0 p.p. & 5.3 p.p.). Supermarkets and chain convenience stores increased the most in rural localities and small cities. Conclusion: We observed an increase in food purchases from the formal sector; nonetheless, the mixed sector remains the predominant food source in Mexico, especially small-neighborhood stores. This is concerning, since these outlets are mostly supplied by food industries. Further, the decrease in purchases from public markets could imply a reduction in the consumption of fresh produce. In order to develop retail food environment policies in Mexico, the historical and predominant role of the mixed sector in food purchases needs to be acknowledged. abstract_id: PUBMED:24935611 Race/ethnicity and income in relation to the home food environment in US youth aged 6 to 19 years. Background: The home food environment is complex and has the potential to influence dietary habit development in young people. Several factors may influence the home food environment, including income and race/ethnicity. Objective: To examine the relationship of income and race/ethnicity with three home food environment factors (ie, food availability frequency, family meal patterns [frequency of family and home cooked meals], and family food expenditures). Design: A cross-sectional analysis of the National Health and Nutrition Examination Survey (NHANES). Participants: A total of 5,096 youth aged 6 to 19 years from a nationally representative sample of US individuals participating in NHANES 2007-10. Statistical Analyses Performed: Prevalence of food availability frequency was assessed for the entire sample, race/ethnicity, poverty income ratio (PIR), and race/ethnicity stratified by PIR. Mean values of family meal patterns and food expenditures were calculated based on race/ethnicity, PIR, and race/ethnicity stratified by PIR using analysis of variance and least squares means.
Tests of main effects were used to assess differences in food availability prevalence and mean values of family meal patterns and food expenditures. Results: Non-Hispanic whites had the highest prevalence of salty snacks (51.1%±1.5%) and fat-free/low-fat milk (39.2%±1.7%) always available. High-income homes had the highest prevalence of fruits (75.4%±2.4%) and fat-free/low-fat milk (38.4%±2.1%) always available. Differences were found for prevalence of food availability when race/ethnicity was stratified by PIR. Non-Hispanic blacks had the lowest prevalence of fat-free/low-fat milk always available across PIR groups. Differences in mean levels of family meal patterns and food expenditures were found for race/ethnicity, PIR, and race/ethnicity stratified by PIR. Conclusions: Race/ethnicity and PIR appear to influence food availability, family meal patterns, and family food expenditures in homes of youth. Knowledge of factors that influence the home food environment could assist in developing effective strategies to improve food environments for young people. abstract_id: PUBMED:35842649 Systematic literature review of instruments that measure the healthfulness of food and beverages sold in informal food outlets. Background: Informal food outlets, defined as vendors who rarely have access to water and toilets, much less shelter and electricity, are a common component of the food environment, particularly in many non-Western countries. The purpose of this study was to review available instruments that measure the quality and particularly the healthfulness of food and beverages sold within informal food outlets. Methods: PubMed, LILACS, Web of Science, and Scopus databases were used. Articles were included if they reported instruments that measured the availability or type of healthy and unhealthy foods and beverages by informal food outlets, were written in English or Spanish, and published between January 1, 2010, and July 31, 2020. Two trained researchers reviewed the title, abstract and full text of selected articles; discrepancies were solved by two independent researchers. In addition, the list of references for selected articles was reviewed for any additional articles of relevance. The quality of published articles and documents was evaluated using JBI Critical appraisal checklist for analytical cross-sectional studies. Results: We identified 1078 articles of which 14 were included after applying the selection criteria. Three additional articles were considered after reviewing the references from the selected articles. From the final 17 articles, 13 measurement tools were identified. Most of the instruments were used in low- and middle-income countries (LMIC). Products were classified as healthy/unhealthy or produce/non-produce or processed/unprocessed based on availability and type. Six studies reported psychometric tests, whereas one was tested within the informal food sector. Conclusions: Few instruments can measure the healthfulness of food and beverages sold in informal food outlets, of which the most valid and reliable have been used to measure formal food outlets as well. Therefore, it is necessary to develop an instrument that manages to measure, specifically, the elements available within an informal one. These actions are extremely important to better understand the food environment that is a central contributor to poor diets that are increasingly associated with the obesity and Non-communicable disease (NCD) pandemic. 
abstract_id: PUBMED:35928596 Characterizing food environments near schools in California: A latent class approach simultaneously using multiple food outlet types and two spatial scales. It is challenging to evaluate associations of the food environment near schools with either the prevalence of childhood obesity or the socioeconomic characteristics of schools. This is because the food environment has many dimensions, including its spatial distribution. We used latent class analysis to classify public schools in urban, suburban, and rural areas in California into food environment classes based on the availability and spatial distribution of multiple types of unhealthy food outlets nearby. All urban schools had at least one unhealthy food outlet nearby, compared with seventy-two percent of schools in rural areas. Food environment classes varied in the quantity of available food outlets, the relative mix of food outlet types, and the outlets' spatial distribution near schools. Regardless of urbanicity, schools in low-income neighborhoods had greater exposure to unhealthy food outlets. The direction of associations between food environment classes and school size, type, and race/ethnic composition depends on the level of urbanicity of the school locations. Urban schools attended primarily by African American and Asian children are more likely to have greater exposures to unhealthy food outlets. In urban and rural but not suburban areas, schools attended primarily by Latino students had more outlets offering unhealthy foods or beverages nearby. In suburban areas, differences in the spatial distribution of food outlets indicate that food outlets are more likely to cluster near K-12 schools and high schools compared to elementary schools. Intervention design and future research need to consider that the associations between food environment exposures and school characteristics differ by urbanicity. abstract_id: PUBMED:36438248 Availability of healthy and unhealthy foods in modern retail outlets located in selected districts of Greater Accra Region, Ghana. Background: Intake of unhealthy foods is linked to the onset of obesity and diet-related non-communicable diseases (NCDs). Availability of unhealthy (nutritionally poor) foods can influence preference, purchasing and consumption of such foods. This study determined the healthiness of foods sold at modern retail outlets (supermarkets and mini-marts) in the Greater Accra Region of Ghana. Methods: All modern retail outlets located in six districts of Greater Accra were eligible. Those < 200 m2 of floor area and with permanent structures were categorized as mini-marts, and those ≥200 m2 as supermarkets. Shelf length of all available foods was measured. Healthiness of food was determined using two criteria - the NOVA classification and energy density of foods. Thus, ultra-processed foods or food items with >225 kcal/100 g were classified as unhealthy. The ratio of the area occupied by unhealthy to healthy foods was used to determine the healthiness of modern retail outlets. Results: Of 67 retail outlets assessed, 86.6% were mini-marts, and 85.0% of the total shelf area was occupied by foods categorized as unhealthy (ranging from 9,262 m2 in Ashiaman Municipality to 41,892 m2 in Accra Metropolis). Refined grains/grain products were the most available, occupying 30.0% of the total food shelf space, followed by sugar-sweetened beverages (20.1% of total shelf space).
The least available food group, unprocessed staples, was found in only one high-income district, and occupied 0.1% of the total food shelf space. Retail outlets in two districts did not sell fresh fruits or fresh/unsalted canned vegetables. About two-thirds of food products available (n = 3,952) were ultra-processed. Overall, the ratio of ultra-processed-to-unprocessed foods ranged from 3 to 7 with an average (SD) of 5(2). Thus, for every healthy food, there were five ultra-processed ones in the studied retail outlets. Conclusion: This study reveals widespread availability of ultra-processed foods in modern retail outlets within the selected districts. Toward a healthier food retail environment, public health and food regulators, in partnership with other stakeholders, need to institute measures that improve availability of healthy foods within supermarkets and mini-marts. abstract_id: PUBMED:36822405 Dietary contributions of food outlets by urbanization level in the US population aged 2 years and older - NHANES 2013-2018. Background: Differences in food access, availability, affordability, and dietary intake are influenced by the food environment, which includes outlets where foods are obtained. These differences between food outlets within rural and urban food environments in the United States are not well understood. Objectives: The aim of this analysis is to describe the contribution of foods and beverages from 6 outlets-grocery stores, convenience stores, full-service restaurants, quick-service restaurants, schools, and other outlets-to the total energy intake and Healthy Eating Index (HEI)-2015 scores in the United States population, by urbanization level (nonmetropolitan statistical areas [MSAs], small-to-medium MSAs, and large MSAs). Methods: Data from the National Health and Nutrition Examination Survey 2013-2018 were used. Dietary intake from one 24-h dietary recall was analyzed by the outlet where a food or beverage was obtained and by urbanization. Linear regression, adjusted for sex, age, race and Hispanic origin, and family income, was used to predict the contribution of each food outlet to the total energy intake and HEI-2015 total and component scores by urbanization level. Results: During 2013-2018, foods and beverages from grocery stores and quick-service and full-service restaurants contributed 62.1%, 15.1%, and 8.5% of the energy intake, respectively. The percentage of energy intake from full- and quick-service restaurants increased with increasing urbanization level. HEI-2015 total scores increased with increasing urbanization level overall (48.1 non-MSAs, 49.2 small-to-medium MSAs, and 51.3 large MSAs), for grocery stores (46.7 non-MSAs, 48.0 small-to-medium MSAs, and 50.6 large MSAs), and for quick-service restaurants (35.8 non-MSAs, 36.3 small-to-medium MSAs, and 37.5 large MSAs). Conclusions: Grocery stores and restaurants were the largest contributors of energy intake in urban and rural areas. Diet quality improved with increasing urbanization overall and for grocery stores and quick-service restaurants. Answer: The availability of physical activity and food outlets does not consistently differ by race and income across different studies and regions. In a study conducted in a small regional city that anchors a rural health disparate region, there were no differences in the number of food outlets or physical activity resources by block group income or race.
The distribution of outlets was dispersed across all block groups, suggesting equitable access in that particular region (PUBMED:22954386). However, other studies have found associations between neighborhood characteristics and the availability of certain types of food outlets. For instance, in Milwaukee, Wisconsin, the proportion of Hispanic residents was positively associated with total and off-premise alcohol outlets, while the proportion of African American residents was negatively associated with on-premise alcohol outlets and positively associated with off-premise outlets. These effects were modified when accounting for neighborhood social disorganization levels, which were positively and significantly associated with all alcohol outlet types (PUBMED:26643585). In South Carolina, USA, the presence of a supermarket within an 8.05 km buffer area was significantly associated with perceived availability of healthy foods, and distances to various food outlets were significant predictors of healthy food perceptions (PUBMED:26427621). In low-income cities in New Jersey, proximity to convenience stores and the presence of a large park were associated with children's weight status, indicating that specific aspects of the food and physical activity environments can predict overweight and obesity among children (PUBMED:23726897). In Mexico, trends in food purchases from formal, informal, and mixed outlets over time have been documented, with an increase in purchases from the formal sector but a predominant role of the mixed sector, especially small neighborhood stores (PUBMED:37293617). In the US, race/ethnicity and poverty income ratio appear to influence food availability, family meal patterns, and family food expenditures in homes of youth (PUBMED:24935611). A systematic literature review highlighted the lack of instruments specifically designed to measure the healthfulness of food and beverages sold in informal food outlets, which are a significant part of the food environment in many non-Western countries (PUBMED:35842649). In California, food environment classes near schools varied by the quantity and mix of food outlet types, as well as their spatial distribution, with schools in low-income neighborhoods having greater exposure to unhealthy food outlets (PUBMED:35928596).
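A short illustration of the count model used in the Milwaukee study (PUBMED:26643585), which relates alcohol outlet counts per census block group to racial/ethnic composition and social disorganization with negative binomial regression. This is only a sketch: the data frame and variable names (outlets, prop_hispanic, disorganization) are hypothetical stand-ins, and the GLM shown fixes the dispersion parameter at its default rather than estimating it.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical block-group data: outlet counts, racial/ethnic composition,
# and a social-disorganization index (all values invented).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "prop_hispanic": rng.uniform(0, 0.8, 200),
    "disorganization": rng.normal(0, 1, 200),
})
rate = np.exp(0.3 + 0.8 * df["prop_hispanic"] + 0.5 * df["disorganization"])
df["outlets"] = rng.poisson(rate)  # stand-in for observed outlet counts

# Negative binomial GLM; exponentiated coefficients read as rate ratios.
model = smf.glm(
    "outlets ~ prop_hispanic + disorganization",
    data=df,
    family=sm.families.NegativeBinomial(),  # dispersion alpha fixed at 1 by default
).fit()
print(np.exp(model.params))  # incidence-rate ratios for each predictor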
Instruction: Robot-assisted vs pure laparoscopic radical prostatectomy: are there any differences? Abstracts: abstract_id: PUBMED:26212891 Transperitoneal versus extraperitoneal robot-assisted laparoscopic radical prostatectomy: A prospective single surgeon randomized comparative study. Objectives: To compare operative, pathological, and functional results of transperitoneal and extraperitoneal robot-assisted laparoscopic radical prostatectomy carried out by a single surgeon. Methods: After having experience with 32 transperitoneal laparoscopic radical prostatectomies, 317 extraperitoneal laparoscopic radical prostatectomies, 30 transperitoneal robot-assisted laparoscopic radical prostatectomies and 10 extraperitoneal robot-assisted laparoscopic radical prostatectomies, 120 patients with prostate cancer were enrolled in this prospective randomized study and underwent either transperitoneal or extraperitoneal robot-assisted laparoscopic radical prostatectomy. The main outcome parameters between the two study groups were compared. Results: No significant difference was found for age, body mass index, preoperative prostate-specific antigen, clinical and pathological stage, Gleason score on biopsy and prostatectomy specimen, tumor volume, positive surgical margin, and lymph node status. Transperitoneal robot-assisted laparoscopic radical prostatectomy had shorter trocar insertion time (16.0 vs 25.9 min for transperitoneal robot-assisted laparoscopic radical prostatectomy and extraperitoneal robot-assisted laparoscopic radical prostatectomy, P < 0.001), whereas extraperitoneal robot-assisted laparoscopic radical prostatectomy had shorter console time (101.5 vs 118.3 min, respectively, P < 0.001). Total operation time and total anesthesia time were found to be shorter in extraperitoneal robot-assisted laparoscopic radical prostatectomy, without statistical significance (200.9 vs 193.2 min; 221.8 vs 213.3 min, respectively). Estimated blood loss was found to be lower for extraperitoneal robot-assisted laparoscopic radical prostatectomy (P = 0.001). Catheterization and hospitalization times were observed to be shorter in extraperitoneal robot-assisted laparoscopic radical prostatectomy (7.3 vs 5.8 days and 3.1 vs 2.3 days for transperitoneal robot-assisted laparoscopic radical prostatectomy and extraperitoneal robot-assisted laparoscopic radical prostatectomy, respectively, P < 0.05). The time to oral diet was significantly shorter in extraperitoneal robot-assisted laparoscopic radical prostatectomy (32.3 vs 20.1 h, P = 0.031). Functional outcomes (continence and erection) and complication rates were similar in both groups. Conclusions: Extraperitoneal robot-assisted laparoscopic radical prostatectomy seems to be a good alternative to transperitoneal robot-assisted laparoscopic radical prostatectomy with similar operative, pathological and functional results. As the surgical field remains away from the bowel, postoperative return to normal diet and early discharge can be favored. abstract_id: PUBMED:24912809 Pitfalls of robot-assisted radical prostatectomy: a comparison of positive surgical margins between robotic and laparoscopic surgery. Objectives: To compare the surgical outcomes of laparoscopic radical prostatectomy and robot-assisted radical prostatectomy, including the frequency and location of positive surgical margins. 
Methods: The study cohort comprised 708 consecutive male patients with clinically localized prostate cancer who underwent laparoscopic radical prostatectomy (n = 551) or robot-assisted radical prostatectomy (n = 157) between January 1999 and September 2012. Operative time, estimated blood loss, complications, and positive surgical margin frequency were compared between laparoscopic radical prostatectomy and robot-assisted radical prostatectomy. Results: There were no significant differences in age or body mass index between the laparoscopic radical prostatectomy and robot-assisted radical prostatectomy patients. Prostate-specific antigen levels, Gleason sum and clinical stage of the robot-assisted radical prostatectomy patients were significantly higher than those of the laparoscopic radical prostatectomy patients. Robot-assisted radical prostatectomy patients suffered significantly less bleeding (P < 0.05). The overall frequency of positive surgical margins was 30.6% (n = 167; 225 sites) in the laparoscopic radical prostatectomy group and 27.5% (n = 42; 58 sites) in the robot-assisted radical prostatectomy group. In the laparoscopic radical prostatectomy group, positive surgical margins were detected in the apex (52.0%), anterior (5.3%), posterior (5.3%) and lateral regions (22.7%) of the prostate, as well as in the bladder neck (14.7%). In the robot-assisted radical prostatectomy patients, they were observed in the apex, anterior, posterior, and lateral regions of the prostate in 43.0%, 6.9%, 25.9% and 15.5% of patients, respectively, as well as in the bladder neck in 8.6% of patients. Conclusions: Positive surgical margin distributions after robot-assisted radical prostatectomy and laparoscopic radical prostatectomy are significantly different. The only disadvantage of robot-assisted radical prostatectomy is the lack of tactile feedback. Thus, the robotic surgeon needs to take this into account to minimize the risk of positive surgical margins. abstract_id: PUBMED:35858807 ROBOT-ASSISTED RADICAL CYSTECTOMY AT HIROSHIMA CITY ASA HOSPITAL - COMPARISON WITH LAPAROSCOPIC RADICAL CYSTECTOMY (Objective) We compared the perioperative parameters of robot-assisted radical cystectomy (RARC) and laparoscopic radical cystectomy (LRC) to evaluate the utility of RARC. (Patients and methods) At Hiroshima City Asa Hospital, 25 patients underwent RARC from July 2018 to May 2020 (R group) and 79 patients underwent LRC from July 2012 to June 2018 (L group). We retrospectively compared the patient characteristics, perioperative outcomes, and pathological outcomes between the R group and the L group. (Results) Regarding the patient characteristics, the R group had significantly more neo-adjuvant chemotherapy than the L group (64.0% vs. 32.9%, P=0.009), but the other characteristics did not differ. Between the R group and the L group, there were no significant differences in the total operating time (R group = 400 minutes vs. L group = 421 minutes), estimated blood loss (R group = 228 ml vs. L group = 318 ml), or pathological outcomes. However, there were significantly fewer postoperative complications in the R group than in the L group (24.0% vs. 52.6%, P=0.020). (Conclusion) This study showed that there might be benefits to introducing RARC into medical centers that perform LRC. abstract_id: PUBMED:34145964 The clinical impact of robot-assisted laparoscopic rectal cancer surgery associated with robot-assisted radical prostatectomy.
Introduction: Robot-assisted laparoscopic surgery has been performed in various fields, especially in the pelvic cavity. However, little is known about the utility of robot-assisted laparoscopic rectal cancer surgery associated with robot-assisted radical prostatectomy (RARP). We herein report the clinical impact of robot-assisted laparoscopic rectal cancer surgery associated with RARP. Methods: We experienced five cases of robot-assisted laparoscopic rectal cancer surgery associated with RARP. One involved robot-assisted laparoscopic abdominoperineal resection with en bloc prostatectomy for T4b rectal cancer, and one involved robot-assisted laparoscopic intersphincteric resection combined with RARP for synchronous rectal and prostate cancer. The remaining three involved robot-assisted laparoscopic low anterior resection (RaLAR) after RARP. For robot-assisted laparoscopic rectal cancer surgery, the da Vinci Xi surgical system was used. Results: We could perform planned robotic rectal cancer surgery in all cases. The median operation time was 529 min (373-793 min), and the median blood loss was 307 ml (32-1191 ml). No patients required any transfusion in the intra-operative or immediate peri-operative period. The circumferential resection margin was negative in all cases. There were no complications of grade ≥III according to the Clavien-Dindo classification and no conversions to conventional laparoscopic or open surgery. Conclusion: Robot-assisted laparoscopic surgery associated with RARP is feasible in patients with rectal cancer. The long-term surgical outcomes remain to be further evaluated. abstract_id: PUBMED:33457670 Surgical Drain-Related Intestinal Obstruction After Robot-Assisted Laparoscopic Radical Prostatectomy in Two Cases. Background: Drainage tubes are almost always routinely used after a laparoscopic or robot-assisted radical prostatectomy and pelvic lymphadenectomy to prevent urinoma formation and lymphoceles. They are seldom of any consequence. We present our unique experience of bowel obstruction resulting from the use of pelvic drains. Case Presentation: We are reporting on two prostate cancer cases with rare postoperative complications. Each of them received robot-assisted laparoscopic radical prostatectomy and bilateral pelvic lymph node dissection and subsequently developed ileus and bowel obstruction. Serial follow-up images suggested the bowel obstruction was related to their drainage tube. No evidence of urine leakage or intestine perforation was found based on drainage fluid analysis. We performed exploratory laparotomy in the first patient and found the drainage tube kinked around the terminal ileum and an adhesion band. The drainage tube was removed, and the patient recovered over the following days. In the second case, the patient experienced bowel obstruction for 4 days after surgery. Based on our experience in the first case, and a drainage fluid survey showing no evidence of urine leakage, we removed the drainage tube on the morning of the 4th day, giving the patient a dramatic recovery with flatus and stool passage occurring in the afternoon. Both of the patients recovered well in hospital and during regular follow-up. Conclusion: To the best of our knowledge, despite there being certain case reports regarding drainage tube ileus in colorectal and bowel surgery, we have reported here on the first two cases of small bowel obstruction as a complication arising from the abdominal drainage tube used in robot-assisted urology surgery.
abstract_id: PUBMED:38337466 Inpatient Outcomes of Patients Undergoing Robot-Assisted versus Laparoscopic Radical Cystectomy for Bladder Cancer: A National Inpatient Sample Database Study. Background: Bladder cancer is a common urinary tract malignancy. Minimally invasive radical cystectomy has shown oncological outcomes comparable to conventional open surgery, with additional advantages over the open procedure. However, outcomes of the two main minimally invasive procedures, robot-assisted and pure laparoscopic, have yet to be compared. This study aimed to compare in-hospital outcomes between these two techniques performed for patients with bladder cancer. Methods: This population-based, retrospective study included hospitalized patients aged ≥ 50 years with a primary diagnosis of bladder cancer who underwent robot-assisted or pure laparoscopic radical cystectomy. All patient data were extracted from the US National Inpatient Sample (NIS) database 2008-2018 and were analyzed retrospectively. Primary outcomes were in-hospital mortality, prolonged length of stay (LOS), and postoperative complications. Results: The data of 3284 inpatients (representing 16,288 US inpatients) were analyzed. After adjusting for confounders, multivariable analysis revealed that patients who underwent robot-assisted radical cystectomy had a significantly lower risk of in-hospital mortality (adjusted OR [aOR], 0.50, 95% CI: 0.28-0.90) and prolonged LOS (aOR, 0.63, 95% CI: 0.49-0.80) than those undergoing pure laparoscopic cystectomy. Patients who underwent robot-assisted radical cystectomy had a lower risk of postoperative complications (aOR, 0.69, 95% CI: 0.54-0.88), including bleeding (aOR, 0.73, 95% CI: 0.54-0.99), pneumonia (aOR, 0.49, 95% CI: 0.28-0.86), infection (aOR, 0.55, 95% CI: 0.36-0.85), wound complications (aOR, 0.33, 95% CI: 0.20-0.54), and sepsis (aOR, 0.49, 95% CI: 0.34-0.69) compared to those receiving pure laparoscopic radical cystectomy. Conclusions: In patients with bladder cancer, robot-assisted radical cystectomy is associated with a reduced risk of unfavorable short-term outcomes, including in-hospital mortality, prolonged LOS, and postoperative complications compared to pure laparoscopic radical cystectomy. abstract_id: PUBMED:36860680 Robot-assisted versus conventional laparoscopic radical hysterectomy in cervical cancer stage IB1. Objective: The aim of this study was to compare survival outcomes of robot-assisted laparoscopic radical hysterectomy (RRH) and conventional laparoscopic radical hysterectomy (LRH) in cervical cancer stage IB1. Method: This is a retrospective study of patients with cervical cancer stage IB1 who were surgically treated by either RRH or LRH. Oncologic outcomes of the patients were compared according to surgical approach. Results: In total, 66 and 29 patients were assigned to the LRH and RRH groups. All patients had stage IB1 disease (FIGO 2018). Intermediate risk factors (tumor size, LVSI, and deep stromal invasion), proportion of patients receiving adjuvant therapy (30.3% vs. 13.8%, p = 0.09), and median follow-up time (LRH, 61 months; RRH, 50 months; p=0.085) did not differ significantly between the two groups. The recurrence rate was higher in the LRH group; however, there was no significant difference between the two groups (p=0.250). DFS (55.4 vs 48.2 months, p = 0.250) and OS (61.2 vs 50.0 months, p = 0.287) were similar between the LRH and RRH groups.
Conclusion: In patients with a tumor size < 2 cm, the recurrence rate was lower in the RRH group; however, there was no significant difference. Further large-scale RCTs and clinical studies are required to provide relevant data. abstract_id: PUBMED:28799065 Laparoscopic inguinal hernioplasty after robot-assisted laparoscopic radical prostatectomy. Purpose: To evaluate the efficacy and safety of laparoscopic transabdominal preperitoneal (TAPP) inguinal hernia repair in patients who have undergone robot-assisted laparoscopic radical prostatectomy (RALP). Methods: From July 2014 to December 2016, TAPP inguinal hernia repair was conducted in 40 consecutive patients who had previously undergone RALP. Their data were retrospectively analyzed as an uncontrolled case series. Results: The mean operation time in patients who had previously undergone RALP was 99.5 ± 38.0 min. The intraoperative blood loss volume was small, and the duration of hospitalization was 2.0 ± 0.5 days. No intraoperative complications or major postoperative complications occurred. During the average 11.2-month follow-up period, no patients who had previously undergone prostatectomy developed recurrence. Conclusions: Laparoscopic TAPP inguinal hernia repair after RALP was safe and effective. TAPP inguinal hernia repair may be a valuable alternative to open hernioplasty. abstract_id: PUBMED:32475905 Effect of Robot-assisted Surgery on Anesthetic and Perioperative Management for Minimally Invasive Radical Prostatectomy under Combined General and Epidural Anesthesia. Background: Robot-assisted surgery and pure laparoscopic surgery are available for minimally invasive radical prostatectomy (MIRP). The differences in anesthetic management between these two MIRPs under combined general and epidural anesthesia (CGEA) remain unknown. This study therefore aimed to determine the effects of robot-assisted surgery on anesthetic and perioperative management for MIRP under CGEA. Methods: This retrospective observational study analyzed data from patients' electronic medical records. Data on demographics, intraoperative variables, postoperative complications, and hospital stays after MIRPs were compared between patients who underwent robot-assisted laparoscopic radical prostatectomy (RALP) and those treated by pure laparoscopic radical prostatectomy (LRP). Results: There were no differences in background data between the 102 patients who underwent RALP and the 112 who underwent LRP. Anesthesia and surgical times were shorter in the RALP group than in the LRP group. Doses of anesthetics, including intravenous opioids and epidural ropivacaine, were lower in the RALP group. Although estimated blood loss and volume of colloid infusion were lower in the RALP group, the volume of crystalloid infusion was larger. Intraoperative allogeneic transfusion was not required in either group. There was no difference between groups in the incidences of postoperative cardiopulmonary complications or postoperative nausea and vomiting. Hospital stays after the procedure were shorter in the RALP group. Conclusions: Robot-assisted surgery altered anesthetic consumption and infusion management during MIRP under CGEA. It also shortened postoperative hospital stays, without increasing rates of postoperative complications. abstract_id: PUBMED:30789615 Methods for training of robot-assisted radical prostatectomy Robotic surgery is a future method of minimally invasive surgery.
Robot-assisted radical prostatectomy (RARP) is a common method of surgical treatment of prostate cancer. Because the surgical technique of RARP differs significantly from open or laparoscopic radical prostatectomy (LRP), new methods of training are needed. At present there are many opinions on how best to train physicians, and which model is the most effective remains controversial. Objective: To analyze currently available data on RARP training methods, determine the most effective training model and evaluate its advantages and disadvantages, and establish a standardized plan and criteria for proper training and certification of the entire surgical team. Material And Methods: Literature review based on the PubMed, Web of Science and Scopus databases using the keywords: robot-assisted radical prostatectomy, training of robot-assisted prostatectomy, training in robot-assisted operations, learning curve of robot-assisted prostatectomy, virtual reality simulators (VR-simulators) in surgery. Results: According to the literature, on average 18 to 45 procedures are required for a surgeon to reach the plateau of the RARP learning curve. Parallel training, pre-operative warm-up and the use of virtual reality simulators (VR-simulators) can significantly accelerate progress along the learning curve. There are many described models of RARP training. Conclusions: The absence of accepted criteria for evaluating the learning curve means that this parameter cannot be used as a guide to a surgeon's experience. Proper training of robotic surgeons is necessary and requires new methods of training. There are different types of training programs. In our opinion, the most effective training program is one in which the surgeon first observes tasks or operative steps performed on the VR-simulator, then performs them and analyzes mistakes from video recordings. The surgeon then observes real operations, performs the steps already learnt on the simulator under the supervision of a mentor, and again analyzes mistakes from video recordings. Thus, by mastering the simple stages first under a mentor's supervision, the surgeon effectively absorbs the mentor's surgical experience. It is necessary to train not only the surgeons but also the entire surgical team. Answer: The comparison between robot-assisted and pure laparoscopic radical prostatectomy has been the subject of various studies, and several differences have been noted in terms of surgical outcomes, operative times, and complications. One study comparing transperitoneal and extraperitoneal robot-assisted laparoscopic radical prostatectomy found that while there were no significant differences in operative, pathological, and functional results, the extraperitoneal approach had shorter catheterization and hospitalization times, as well as a quicker return to an oral diet (PUBMED:26212891). This suggests that the extraperitoneal approach may offer some advantages in terms of postoperative recovery. Another study comparing laparoscopic radical prostatectomy and robot-assisted radical prostatectomy found that robot-assisted procedures resulted in significantly less bleeding. However, the overall frequency of positive surgical margins was similar between the two groups, with a slight difference in distribution. The study also noted that the lack of tactile feedback in robot-assisted surgery is a disadvantage that needs to be considered to minimize the risk of positive surgical margins (PUBMED:24912809).
In terms of perioperative management, robot-assisted surgery was associated with lower anesthetic doses and different infusion management during minimally invasive radical prostatectomy under combined general and epidural anesthesia. It also shortened postoperative hospital stays without increasing rates of postoperative complications (PUBMED:32475905). Training for robot-assisted radical prostatectomy is also different from traditional methods due to the significant differences in surgical technique compared to open or laparoscopic radical prostatectomy. New methods of training are needed, and the most effective training program is considered to be one that includes observation, performance, and analysis of tasks on virtual reality simulators, followed by real operations under the supervision of a mentor (PUBMED:30789615). In conclusion, robot-assisted radical prostatectomy offers some advantages over pure laparoscopic radical prostatectomy, such as less bleeding and potentially shorter hospital stays, but it also presents challenges such as the lack of tactile feedback and the need for specialized training. The choice between the two methods may depend on the surgeon's experience, the patient's condition, and the available resources.
Instruction: Evaluation of a hospital-wide resuscitation team: does it increase survival for in-hospital cardiopulmonary arrest? Abstracts: abstract_id: PUBMED:11426472 Evaluation of a hospital-wide resuscitation team: does it increase survival for in-hospital cardiopulmonary arrest? Objective: To assess the impact (defined not only with regard to patient outcome but also to record keeping for evaluation of care) of a formal, structured resuscitation team for in-hospital cardiopulmonary resuscitation over the year following its creation. Methods: This is a "before and after" study in which charts of all patients needing resuscitation during the two-year period were reviewed and data arranged in the Utstein Style of in-hospital reporting of cardiac arrests. The review was limited to adults (> or = 18 years of age) in non-ICU settings. Results: A total of 220 events were identified. Demographics and presenting rhythms for the two periods under review were similar. For the period of August 1996-August 1997 (group 1), there were 70 resuscitation events recorded with a return of spontaneous circulation (ROSC) rate of 21/70 (30%). For the period of August 1997-August 1998 (group 2), 150 events were recorded and the ROSC rate was significantly higher (87/150, 58%) (P=0.0002). ROSC after ventricular fibrillation and ventricular tachycardia was similar in both groups (50 vs 57%) (P = 1.00) but an improvement in survival was seen in group 2 from events of bradycardia perfusing rhythm (25% vs 84%) (P = 0.0003). Survival from PEA/Asystole was also improved during period 2 (18 vs 48%) (P = 0.013). Survival to discharge was seen in 3/50 (6%) of patients in period 1 and 18/102 (18%) of patients in period 2 (P = 0.09). Conclusions: The formation of a structured, formalized hospital resuscitation team was associated with an increase in the number of recorded events, in the number of patients experiencing ROSC and in the percentage of patients who were discharged from the hospital. Facilities with no formal resuscitation team or with no skilled, practiced resuscitator on their current team should consider implementation of a similar strategy. abstract_id: PUBMED:21674423 Successful implementation of an "in-hospital resuscitation team" in a university hospital Background And Objective: Resuscitation is the most important emergency action in a life-threatening cardiopulmonary arrest. The organizational, personnel and equipment requirements for an optimal treatment of emergency patients in a university hospital are described, as well as the short- and mid-term results. Patients And Methods: Retrospective analysis of 132 cases of cardiopulmonary resuscitation based on a two-page reporting form whose completion by the involved physician and intensive care nurse is mandatory after each event. Results: About 65 % of all events were triggered by cardiac and respiratory causes. In 50 % of all cases there was an acute life-threatening situation, requiring an intubation in 46 % and mechanical ventilation in 42 % of all cases. One third of all patients who were successfully reanimated were discharged alive from hospital after the intensive care treatment. Conclusion: A well organized and adequately equipped resuscitation team is the basis for achieving optimal chances of survival in life-threatening emergencies. This is especially so in large university hospitals, which often care for patients with multiple morbidities.
abstract_id: PUBMED:1417377 Low survival rate after cardiopulmonary resuscitation in a county hospital. Background: The standard of practice in hospitals in the United States is to perform cardiopulmonary resuscitation on all patients who suffer a cardiac arrest unless a specific order has been written to the contrary. In recent decades, however, data showing a low rate of survival to discharge under certain conditions have accumulated, leading some to question this policy. The objective of this study was to examine variables predictive of patient survival following cardiopulmonary resuscitation using standardized methods of measuring severity of illness. Methods: All patients were identified who underwent cardiopulmonary resuscitation on the medicine service at Los Angeles County (California) Hospital from August 15, 1990, to February 15, 1991. Severity of illness was evaluated by examining diagnosis, Acute Physiology and Chronic Health Evaluation II score, and organ system failure. Cases were followed up prospectively until death or hospital discharge, and data concerning post-arrest mental status, utilization of resources, and disposition were gathered. Results: Of the 131 patients identified, 22 patients (16.8%) survived for 24 hours but died before discharge; only four patients (3.1%) survived to discharge. Conclusions: This study suggests that in some settings (eg, institutions that are for sick patients under conditions where monitoring is limited because of scarcity of resources), survival after full cardiopulmonary arrest may be even lower than previously documented. abstract_id: PUBMED:29033721 An audit of in-hospital cardiopulmonary resuscitation in a teaching hospital in Saudi Arabia: A retrospective study. Objectives: Data reflecting cardiopulmonary resuscitation (CPR) efforts in Saudi Arabia are limited. In this study, we analyzed the characteristics, and estimated the outcome, of in-hospital CPR in a teaching hospital in Saudi Arabia over 4 years. Methods: A retrospective, observational study was conducted between January 2009 and December 2012 and included 4361 patients with sudden cardiopulmonary arrest. Resuscitation forms were reviewed. Demographic data, resuscitation characteristics, and survival outcomes were recorded. Results: The mean ± standard deviation age of arrested patient was 40 ± 31 years. The immediate survival rate was 64%, 43% at 24 h, and 30% at discharge. The death rate was 70%. Respiratory type of arrest, time and place of arrest, short duration of arrest, witnessed arrest, the use of epinephrine and atropine boluses, and shockable arrhythmias were associated with higher 24-h survival rates. A low survival rate was found among patients with cardiac types of arrest, and those with a longer duration of arrest, pulseless electrical activity, and asystole. Comorbidities were present in 3786 patients with cardiac arrest and contributed to a poor survival rate (P < 0.001). Conclusions: The study confirms the findings of previously published studies in highly developed countries and provides some reflection on the practice of resuscitation in Saudi Arabia. abstract_id: PUBMED:7623652 In-hospital cardiopulmonary resuscitation. Survival in 1 hospital and literature review. Cardiopulmonary resuscitation (CPR) has been used extensively in the hospital setting since its introduction over 3 decades ago. 
We reviewed the CPR records at 1 hospital during a 2-year period and the results from 113 published reports of inpatient CPR with a total patient population of 26,095. We compared the survival rates of patients following CPR and the pre-arrest and intra-arrest factors related to survival. At the hospital where CPR records were reviewed, 44% of patients initially survived following CPR, and the 1-year survival rate was 5%. Patients with shorter durations of CPR and those administered fewer procedures and medications during CPR survived longer than patients with prolonged CPR. Patients with witnessed cardiac arrests were more likely to survive than those with unwitnessed arrests. Also, patients with respiratory arrests had much better survival than patients with cardiopulmonary arrests. Worldwide, 113 studies showed a survival to discharge rate of 15.2% (United States = 15%, Canada = 16%, United Kingdom = 17%, other European countries = 14%). Patients were more likely to survive to discharge if they were treated in a community hospital (versus a teaching or Veterans Affairs hospital) or were younger. Patients with ventricular tachycardia or fibrillation were more likely to survive than those with asystole or electromechanical dissociation. Patient's location was related to outcome, with emergency room and coronary care unit patients more likely to survive than intensive care unit and general ward patients. Other factors related to better survival rates were respiratory arrest, witnessed arrest, absence of comorbidity, and short duration of CPR. Knowledge of the likelihood of survival following CPR for subgroups of the hospital population based on pre-arrest and intra-arrest factors can help patients, their families, and their physicians decide, with compassion and conviction, in what situations CPR should be administered. abstract_id: PUBMED:31913310 Two-dimensional echocardiography after return of spontaneous circulation and its association with in-hospital survival after in-hospital cardiopulmonary resuscitation. This retrospective cohort study investigated the association between in-hospital survival and two-dimensional (2D) echocardiography within 24 hours after the return of spontaneous circulation (ROSC) in patients who underwent in-hospital cardiopulmonary resuscitation (ICPR) after in-hospital cardiopulmonary arrest (IHCA). The 2D-echo and non-2D-echo groups comprised eligible patients who underwent transthoracic 2D echocardiography performed by the cardiology team within 24 hours after ROSC and those who did not, respectively. After propensity score (PS) matching, 142 and 284 patients in the 2D-echo and non-2D-echo groups, respectively, were included. A logistic regression analysis showed that the likelihood of in-hospital survival was 2.35-fold higher in the 2D-echo group than in the non-2D-echo group (P < 0.001). Regarding IHCA aetiology, in-hospital survival after cardiac arrest of a cardiac cause was 2.51-fold more likely in the 2D-echo group than in the non-2D-echo group (P < 0.001), with no significant inter-group difference in survival after cardiac arrest of a non-cardiac cause (P = 0.120). In this study, 2D echocardiography performed within 24 hours after ROSC was associated with better in-hospital survival outcomes for patients who underwent ICPR for IHCA with a cardiac aetiology. Thus, 2D echocardiography may be performed within 24 hours after ROSC in patients experiencing IHCA to enable better treatment. 
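The 2D-echo study above (PUBMED:31913310) follows a common observational-analysis pattern: estimate a propensity score for the exposure, match exposed to unexposed patients, and then fit a logistic model for in-hospital survival. The sketch below is a generic, hedged illustration of that pattern in Python, not the authors' actual code or dataset; the column names (echo_24h, survived, and the baseline covariates) and the 1:2 matching ratio are hypothetical placeholders.

```python
# Hedged sketch of the general pattern in PUBMED:31913310: propensity-score
# matching followed by a logistic outcome model. All column names are
# hypothetical placeholders; this is not the authors' implementation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def ps_match_and_estimate(df: pd.DataFrame, covariates: list[str],
                          treatment: str = "echo_24h",
                          outcome: str = "survived",
                          ratio: int = 2) -> float:
    """Return the odds ratio for `outcome` after 1:`ratio` propensity matching."""
    # 1. Propensity score: probability of early echo given baseline covariates.
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treatment] == 1]
    control = df[df[treatment] == 0]

    # 2. Nearest-neighbour matching on the propensity score (with replacement).
    nn = NearestNeighbors(n_neighbors=ratio).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]])

    # 3. Outcome model on the matched sample; exp(coefficient) is the odds ratio.
    X = sm.add_constant(matched[[treatment]].astype(float))
    fit = sm.Logit(matched[outcome], X).fit(disp=False)
    return float(np.exp(fit.params[treatment]))
```

Under these assumptions, an odds ratio around 2.35 from such a model would correspond to the in-hospital survival advantage the abstract reports for early echocardiography.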
abstract_id: PUBMED:10416318 Survival after cardiopulmonary resuscitation in an urban Indian hospital. Background: Survival after cardiopulmonary resuscitation depends upon the quality of pre-hospital support, availability of resuscitation equipment and the competence of the resuscitator. There are few data on the prognosis of patients undergoing such resuscitation in India. Methods: In a retrospective analysis of 215 resuscitations done in a 125-bed community hospital between January 1995 and November 1997, return of spontaneous circulation and survival to discharge were evaluated. Multivariate methods were used to identify the predictors of successful outcome. Results: Of all the patients, 14.4% were alive at discharge. Survival after a cardiorespiratory arrest in the hospital was 18.4%, which was significantly better than survival after pre-hospital events (5.9%; p = 0.027). Multivariate predictors of survival at discharge were resuscitation duration of less than 20 minutes [odds ratio (95% confidence limit): 32.6 (6.5-164.3)], presentation with ventricular tachycardia or fibrillation [odds ratio: 18.5 (4.4-77.9)], in-hospital cardiorespiratory arrest [odds ratio: 5.2 (1.2-21.6)] and female sex [odds ratio: 3.2 (1.1-9.6)]. Bystander resuscitation, though rarely provided, increased survival at discharge (p = 0.026). Conclusions: With 5.5 resuscitation attempts needed for one live discharge after in-hospital cardiorespiratory arrest and 17 attempts to save a life after pre-hospital events, our outcomes are comparable to those reported from developed nations. A return of pulse after shorter durations of cardiopulmonary resuscitation, ventricular fibrillation or tachycardia as the abnormal presenting rhythm, in-hospital location of cardiorespiratory (CR) arrest and female sex were independent predictors of live discharge. Age and aetiology of CR arrest did not influence the outcome. abstract_id: PUBMED:10838235 Utstein style reporting of in-hospital paediatric cardiopulmonary resuscitation. Study Objective: To report paediatric in-hospital cardiac arrest data according to Utstein style and to determine the effectiveness of cardiopulmonary resuscitation (CPR) in hospitalized children. Design: Retrospective 5-year case series. Setting: Urban, tertiary-care children's hospital. Participants: All patients who sustained cardiopulmonary arrest. Results: Altogether 227 patients experienced a cardiopulmonary arrest during the study period, 109 (48.0%) were declared dead without attempted resuscitation, and CPR was initiated in 118 (52.0%). The incidence of cardiac arrest was 0. 7% of all hospital admissions and 5.5% of PICU admissions; the incidence of CPR attempts was 0.4 and 2.5%, respectively. Most of the CPR attempts (64.4%) took place in the PICU and the most frequent aetiology was cardiovascular (71.2%). The 1-year survival rate was 17.8%. Short duration of external CPR was the best prognostic factor associated with survival. With few exceptions, the Paediatric Utstein Style was found to be applicable for reporting retrospective data from in-hospital cardiac arrests in children. Conclusions: In-hospital cardiopulmonary resuscitation was shown to be an uncommon event in children; the survival rate was similar to earlier studies. abstract_id: PUBMED:26535496 Cardiopulmonary Resuscitation in Children With In-Hospital and Out-of-Hospital Cardiopulmonary Arrest: Multicenter Study From Turkey. 
Objectives: The objectives of this study were to determine the causes, location of cardiopulmonary arrest (CPA) in children, and demographics of cardiopulmonary resuscitation (CPR) in Turkish pediatric emergency departments and pediatric intensive care units (PICUs) and to determine survival rates and morbidities for both in-hospital and out-of-hospital CPA. Methods: This multicenter descriptive study was conducted prospectively between January 15 and July 15, 2011, at 18 centers (15 PICUs, 3 pediatric emergency departments) in Turkey. Results: During the study period, 239 children had received CPR. Patients' average age was 42.4 (SD, 58.1) months. The most common cause of CPA was respiratory failure (119 patients [49.8%]). The location of CPA was the PICU in 168 (68.6%), hospital wards in 43 (18%), out-of-hospital in 24 (10%), and pediatric emergency department in 8 patients (3.3%). The CPR duration was 30.7 (SD, 23.6) minutes (range, 1-175 minutes) and return of spontaneous circulation was achieved in 107 patients (44.8%) after the first CPR. Finally, 58 patients (24.2%) were discharged from hospital; survival rates were 26% and 8% for in-hospital and out-of-hospital CPA, respectively (P = 0.001). Surviving patients' average length of hospital stay was 27.4 (SD, 39.2) days. In surviving patients, 19 (32.1%) had neurologic disability. Conclusion: Pediatric CPA in both the in-hospital and out-of-hospital setting has a poor outcome. abstract_id: PUBMED:15312869 A program encouraging early defibrillation results in improved in-hospital resuscitation efficacy. Objectives: The purpose of this study was to determine whether survival to discharge after in-hospital cardiopulmonary arrest could be improved by a program encouraging early defibrillation that included switching from monophasic to biphasic devices. Background: In-hospital resuscitation continues to have a low success rate. Biphasic waveform devices have demonstrated characteristics that might improve survival, and outside the hospital, automated external defibrillators (AEDs) have shown promise in improving survival of patients suffering cardiopulmonary arrest. Methods: A program including education and replacement of all manual monophasic defibrillators with a combination of manual biphasic defibrillators used in AED mode and AEDs in all outpatient clinics and chronic care units was implemented. Results: With program implementation, the percentage survival of all patients with resuscitation events improved 2.6-fold, from 4.9% to 12.8%. Factors independently predicting survival included event location outside an intensive care unit, younger age, an initial rhythm of pulseless ventricular tachycardia (VT) or ventricular fibrillation (VF), pre-arrest beta-blocker, and program initiation. The outcome was independent of gender, race, work shift, number of previous resuscitation attempts, body mass index, comorbidity index, presence of diabetes, presence of hypertension, or use of angiotensin-converting enzyme inhibitors. The improvement in mortality was attributable solely to an effect on patients presenting with VT/VF. Patients with these initial rhythms were 14-fold (odds ratio = 0.07 of death, confidence interval = 0.02 to 0.3) more likely to survive to discharge after program initiation. Automated external defibrillators performed similarly to biphasic manual defibrillators in AED mode. 
Conclusions: A program including education and use of biphasic manual defibrillators in AED mode and selective use of AEDs improved survival to discharge in hospitalized patients suffering from cardiopulmonary arrest. Answer: The evaluation of a hospital-wide resuscitation team indicates that it does increase survival for in-hospital cardiopulmonary arrest. A study assessing the impact of a formal, structured resuscitation team over a year following its creation found that the formation of such a team was associated with an increase in the number of recorded events, in the number of patients experiencing return of spontaneous circulation (ROSC), and in the percentage of patients who were discharged from the hospital. Specifically, the ROSC rate increased significantly from 30% to 58% after the implementation of the resuscitation team, and survival from bradycardia perfusing rhythm improved from 25% to 84%. Additionally, survival from PEA/Asystole also improved, and survival to discharge increased from 6% to 18% between the two periods under review (PUBMED:11426472). Another study in a university hospital setting also supports the notion that a well-organized and adequately equipped resuscitation team is the basis for achieving optimal chances of survival in life-threatening emergencies, especially in large university hospitals with often care for patients with multiple morbidities (PUBMED:21674423). These findings are consistent with the broader literature, which suggests that factors such as the location of the event, the presence of a witnessed arrest, the initial rhythm, and the duration of CPR are associated with survival outcomes. For instance, patients with witnessed cardiac arrests and those with initial rhythms of ventricular tachycardia or fibrillation are more likely to survive than those with asystole or pulseless electrical activity (PUBMED:7623652, PUBMED:10416318). Additionally, the implementation of programs encouraging early defibrillation, including the use of biphasic waveform devices, has been shown to improve survival to discharge after in-hospital cardiopulmonary arrest (PUBMED:15312869). In conclusion, the formation and implementation of a structured, formalized hospital resuscitation team, along with appropriate training, equipment, and protocols, have been associated with improved survival rates for in-hospital cardiopulmonary arrest.
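The headline comparison in the resuscitation-team study cited in this answer (PUBMED:11426472) is a simple two-by-two proportion contrast: 21/70 ROSC before the team was formed versus 87/150 after. As a minimal illustration of how such a contrast can be checked, the sketch below (Python, assuming SciPy is available) tabulates those counts and applies a chi-square test and Fisher's exact test; the counts come from the abstract, but the choice of test is an assumption, since the report does not state which procedure produced P = 0.0002.

```python
# Hedged sketch: reproducing a two-proportion contrast like the ROSC rates in
# PUBMED:11426472 (21/70 before vs 87/150 after). The specific test used in the
# paper is not stated; chi-square and Fisher's exact are shown as conventional choices.
from scipy.stats import chi2_contingency, fisher_exact

rosc_before, total_before = 21, 70
rosc_after, total_after = 87, 150

# 2x2 table: rows = period, columns = (ROSC, no ROSC)
table = [
    [rosc_before, total_before - rosc_before],
    [rosc_after, total_after - rosc_after],
]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"ROSC rate before: {rosc_before / total_before:.0%}")  # ~30%
print(f"ROSC rate after:  {rosc_after / total_after:.0%}")    # ~58%
print(f"Chi-square p-value: {p_chi2:.4f}")
print(f"Fisher exact p-value: {p_fisher:.4f}")
```

Run as-is, this reproduces the roughly 30% and 58% rates and gives a P value in the vicinity of the reported 0.0002; small differences are expected depending on the test and any continuity correction applied.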
Instruction: Does obesity modify the effect of blood pressure on the risk of cardiovascular disease? Abstracts: abstract_id: PUBMED:1889862 Blood pressure and high blood pressure. Aspects of risk. This report deals with three aspects of risk related to blood pressure and high blood pressure. The first aspect of risk concerns distributions of systolic blood pressure (SBP) and diastolic blood pressure (DBP) in the adult population and their relation to long-term risk of morbidity and mortality. By middle age, only a minority (about 20%) of Americans have optimal SBP and DBP levels, less than 120 mm Hg and less than 80 mm Hg, respectively. For the majority with higher levels, risks of major clinical events, including death from cardiovascular diseases and from all causes, are markedly increased. The relations of SBP and DBP with risk are strong, continuous, and graded. Risk is sizable not only for persons with high blood pressure by usual clinical criteria (SBP greater than or equal to 140 mm Hg or DBP greater than or equal to 90 mm Hg), but also for those with "high-normal" blood pressure (e.g., SBP 130-139 mm Hg or DBP 80-89 mm Hg). Thus, the blood pressure problem is a population-wide one and requires for its control a combined population-wide and high-risk strategy. A major component of this strategy must be nutritional-hygienic measures for the primary prevention of the rise in blood pressure during adulthood and of high blood pressure (i.e., primary prevention not only of the complications of high blood pressure but also of high blood pressure itself) through improved lifestyles having the potential to shift downward the blood pressure distribution of the whole population. The second aspect of risk concerns the known risk factors (i.e., aspects of modern lifestyle) leading to the mass occurrence of blood pressure rise during adulthood and of high blood pressure. These risk factors are high salt intake, high dietary sodium/potassium ratio, calorie imbalance and resultant obesity, and high alcohol intake. The extensive data base establishing the role of these common traits in the etiology of the blood pressure/high blood pressure problem is the scientific foundation for efforts to achieve the primary prevention of high blood pressure. The third aspect of risk relates to the combined impact of other risk factors along with blood pressure-high blood pressure in markedly increasing the probabilities of morbidity and mortality (e.g., "rich" diet, diet-dependent serum cholesterol and uric acid, smoking, diabetes, and target-organ damage). Prevention and control of lifestyle-related traits are essential components of the strategy for dealing with the blood pressure-high blood pressure problem. abstract_id: PUBMED:12555371 Blood pressure parameters and cardiovascular risk in the elderly The treatment decision must take into account the benefit and risks related to the intervention: the benefit demonstrated and quantified in many therapeutic trials in hypertension in the elderly, but also the patient's initial risk. It is now recognized that, in elderly hypertensive patients, systolic blood pressure is a better predictor of morbid and lethal events related to hypertension than diastolic blood pressure. Recent data in the medical literature attribute a predictive role to pulse pressure which is even greater than that of systolic blood pressure. From a pathophysiological point of view, the level of pulse pressure reflects the degree of rigidity of large arterial trunks. 
The arterial rigidity parameter could integrate the harmful effect of "cardiovascular risk factors" (hypertension, but also atherogenic dyslipidaemia, diabetes, smoking, homocysteine, genetic factors, etc.) over the years or decades of exposure, and pulse pressure would therefore appear to be a better marker of cardiovascular risk than other blood pressure parameters. Pulse pressure should therefore be integrated into the benefit/risk ratio of antihypertensive treatment in the elderly. abstract_id: PUBMED:33674917 Alcohol Exerts a Shifted U-Shaped Effect on Central Blood Pressure in Young Adults. Background: Consumption of 1-2 alcoholic beverages daily has been associated with a lower risk of cardiovascular disease and all-cause mortality in middle-aged and older adults. Central blood pressure has emerged as a better predictor of cardiovascular risk than peripheral blood pressure. However, the effects of habitual alcohol consumption on central blood pressure, particularly in young adults, who are among the largest consumers of alcohol in North America, have yet to be investigated. Objective: We aimed to study the effect of alcohol consumption on central and peripheral blood pressure, and arterial stiffness in young adults. Design: Cross-sectional observational study. Main Measures: Using a standardized questionnaire, alcohol consumption (drinks/week) was queried; participants were classified as non- (< 2), light (2-6), moderate (women 7-9, men 7-14), and heavy drinkers (women > 9, men > 14). Central blood pressure and arterial stiffness were measured using applanation tonometry. Key Results: We recruited 153 healthy, non-smoking, non-obese individuals. We found a U-shaped effect of alcohol consumption on blood pressure. Light drinkers had significantly lower central systolic and mean arterial blood pressure, but not peripheral blood pressure when compared to non- and moderate/heavy drinkers (P < 0.05). No significant associations with arterial stiffness parameters were noted. Conclusions: A U-shaped relationship was found between alcohol consumption and central and mean arterial blood pressure in young individuals, which, importantly, was shifted towards lower levels of alcohol consumption than currently suggested. This is the first study, to our knowledge, that examines the effect of alcohol consumption on central blood pressure and arterial stiffness exclusively in young individuals. Prospective studies are needed to confirm the relationships observed herein. abstract_id: PUBMED:25964207 Effects of parental smoking on exercise systolic blood pressure in adolescents. Background: In adults, exercise blood pressure seems to be more closely related to cardiovascular risk than resting blood pressure; however, few data are available on the effects of familial risk factors, including smoking habits, on exercise blood pressure in adolescents. Methods And Results: Blood pressure at rest and during exercise, parental smoking, and other familial risk factors were investigated in 532 adolescents aged 12 to 17 years (14.6±1.5 years) in the Kiel EX.PRESS. (EXercise PRESSure) Study. Exercise blood pressure was determined at 1.5 W/kg body weight using a standardized submaximal cycle ergometer test. Mean resting blood pressure was 113.1±12.8/57.2±7.1 mm Hg, and exercise blood pressure was 149.9±19.8/54.2±8.6 mm Hg.
Parental smoking increased exercise systolic blood pressure (+4.0 mm Hg, 3.1 to 4.9; P=0.03) but not resting blood pressure of the subjects (adjusted for age, sex, height, body mass index percentile, fitness). Parental overweight and familial hypertension were related to both higher resting and exercise systolic blood pressure values, whereas associations with an inactive lifestyle and a low educational level of the parents were found only with adolescents' blood pressure during exercise. The cumulative effect of familial risk factors on exercise systolic blood pressure was more pronounced than on blood pressure at rest. Conclusions: Parental smoking might be a novel risk factor for higher blood pressure, especially during exercise. In addition, systolic blood pressure during a submaximal exercise test was more closely associated with familial risk factors than was resting blood pressure, even in adolescents. abstract_id: PUBMED:34314347 Nocturnal blood pressure and nocturnal blood pressure fluctuations: the effect of short-term CPAP therapy and their association with the severity of obstructive sleep apnea. Study Objectives: We determined the relationship of cardiovascular risk factors, cardiovascular diseases, nocturnal blood pressure (NBP), and NBP fluctuations (NBPFs) with the severity of obstructive sleep apnea (OSA). We also investigated the effect of short-term continuous positive airway pressure therapy on NBP parameters. Methods: This retrospective study included 548 patients from our cardiac clinic with suspected OSA. Patients underwent polysomnography and continuous NBP measurement using the pulse transit time. According to their apnea-hypopnea index (AHI), patients were subclassified as controls (AHI < 5 events/h), mild (AHI 5 to < 15 events/h), moderate (AHI 15 to < 30 events/h), and severe OSA (AHI ≥ 30 events/h); 294 patients received continuous positive airway pressure therapy. Results: Analysis of covariance showed that NBP and the frequency of NBPFs were the highest in severe followed by moderate and mild OSA (all P < .001). Multivariable regression analysis revealed a significant association of NBPFs with AHI, body mass index, systolic NBP, and lowest SpO2. The severity of OSA is also associated with the frequency of obesity, hypertension, diabetes mellitus, atrial fibrillation, heart failure (all P < .001), and coronary artery disease (P = .035). Short-term continuous positive airway pressure decreased the frequency of NBPFs in all OSA groups and the systolic NBP in severe and moderate but not in mild OSA. Conclusions: The severity of OSA is associated with an increase in NBP and NBPFs. Continuous positive airway pressure reduces NBP parameters already after the first night. In addition to BP, the diagnosis and therapy of NBPFs should be considered in patients with OSA. Clinical Trial Registration: Registry: German Clinical Trials Register; Name: Nocturnal blood pressure and nocturnal blood pressure fluctuations associated with the severity of obstructive sleep apnea; URL: https://www.drks.de/drks_web/navigate.do?navigationId=trial.HTML&TRIAL_ID=DRKS00024087; Identifier: DRKS00024087. Citation: Picard F, Panagiotidou P, Tammen A-B, et al. Nocturnal blood pressure and nocturnal blood pressure fluctuations: the effect of short-term CPAP therapy and their association with the severity of obstructive sleep apnea. J Clin Sleep Med. 2022;18(2):361-371. abstract_id: PUBMED:18824645 Does obesity modify the effect of blood pressure on the risk of cardiovascular disease? 
A population-based cohort study of more than one million Swedish men. Background: Some studies have suggested that increased blood pressure has a stronger effect on the risk of cardiovascular disease (CVD) in lean persons than in obese persons, although this is not a universal finding. Given the inconsistency of this result, we tested it using a large population-based cohort data set. Methods And Results: Systolic and diastolic blood pressures (BPs) and body mass index were measured in 1 145 758 Swedish men born between 1951 and 1976 who were in young adulthood (median age 18.2 years). During the register-based follow-up, which lasted until the end of 2006, 65 611 new CVD events took place, including 6799 myocardial infarctions and 8827 strokes. Hazard ratios (HRs) per 1-SD increase in systolic and diastolic BP were computed within established body mass index categories (underweight, normal, overweight, or obese) with Cox proportional hazards models. The strongest associations of diastolic BP with CVD (HR 1.18), myocardial infarction (HR 1.22), and stroke (HR 1.13) were observed in the obese category. For systolic BP, the strongest associations were observed in the obese category with CVD (HR 1.16) and stroke (HR 1.29) but in the overweight category with myocardial infarction (HR 1.19). We observed statistically significant interactions (P<0.0001) with body mass index for diastolic BP in relation to CVD and for systolic BP in relation to CVD and stroke. Conclusions: In contrast to the findings of previous studies, we observed a general increase in the magnitude of the association between blood pressure and subsequent CVD with increasing body mass index. Hypertension should not be regarded as a less serious risk factor in obese than in lean or normal-weight persons. abstract_id: PUBMED:2529066 Exercise, cardiovascular disease and blood pressure. The "chronic" effect of exercise on blood pressure has been controversial and the debate has been confused by a large number of studies with inadequate methodology. Recent consistent findings in epidemiological, experimental and longitudinal intervention studies have suggested that a true antihypertensive effect which is independent of confounding effects of sodium intake, weight, etc. is more likely than not. Unlike some other measures of lowering blood pressure such as sodium restriction, alcohol moderation and some drugs, regular exercise is associated with beneficial effects on several risk factors and probably has an independent effect on cardiovascular mortality. The magnitude of the effect in previously sedentary subjects is greater than that of dietary measures which lower blood pressure except for weight reduction in the obese. Long-term effects on blood pressure are supported by evidence of a favourable influence on left ventricular hypertrophy. The mechanisms involved in the antihypertensive effect of exercise are unclear, but sympathetic withdrawal is one factor involved. Present evidence appears sufficient to include regular exercise amongst the useful therapies for hypertension. abstract_id: PUBMED:29860809 Study on relationship between prevalence or co-prevalence of risk factors for cardiovascular disease and blood pressure level in adults in China Objective: To study the relationship between blood pressure level and major risk factors for cardiovascular diseases in adults in China. 
Methods: A total of 179 347 adults aged ≥18 years were recruited from 298 surveillance points in 31 provinces in China in 2013 through complex multistage stratified sampling. The survey included face-to-face interviews and physical examination to collect information about risk factors, such as smoking, drinking, diet pattern, physical activity, overweight or obesity, and the prevalence of hypertension. The blood pressure was classified into 6 levels (ideal blood pressure, normal blood pressure, normal high blood pressure and hypertension phase Ⅰ, Ⅱ and Ⅲ). The relationship between the prevalence or co-prevalence of risk factors for cardiovascular disease and blood pressure was analyzed. Results: The adults with ideal blood pressure, normal blood pressure, normal high pressure, hypertension phase Ⅰ, Ⅱ and Ⅲ accounted for 36.14%, 22.77%, 16.22%, 16.43%, 5.97% and 2.48%, respectively. Among them, the blood pressure was higher in men, people in the Han ethnic group and those married, and the blood pressure was higher in those with older age, lower income level and lower education level; the differences were all significant (P<0.05). Whether or not they were taking antihypertensive drugs, co-prevalence of risk factors influenced the blood pressure levels of both sexes (P<0.05), and the blood pressure levels of those taking no antihypertensive drugs were influenced more by the co-prevalence of risk factors. Finally, multiple logistic analysis showed that the risks for high blood pressure in adults with 1, 2 and ≥3 risk factors were 1.36, 1.79 and 2.38 times higher, respectively, than that of adults with no risk factors. Conclusion: The more risk factors for cardiovascular disease adults had, the higher their blood pressure was. It is necessary to conduct comprehensive behavior intervention targeting ≥ 2 risk factors for the better control of blood pressure in the general population. abstract_id: PUBMED:9022557 Blood pressure response to sodium in children and adolescents. The predisposition to primary hypertension is composed of genetic factors, and aberrant mechanisms leading to the clinical expression of hypertension may be operational in children and adolescents. Dietary composition may play a role in the expression of hypertension. The effects of diet on blood pressure in the young may be indirect, reflecting the relation of diet with growth and body composition. Alternatively, there may be a direct effect of a specific dietary factor, such as sodium, on mechanisms regulating blood pressure. The average daily sodium intake by children and adolescents exceeds recommended amounts. Despite the high sodium intake among children, there are few data showing that decreasing sodium intake lowers blood pressure. Children who do express blood pressure sensitivity to sodium intake also have related risk factors for cardiovascular disease such as a positive family history or obesity. Prospective data are needed in children with characteristic risk factors to determine whether sodium intake contributes to the pathogenesis of hypertension and whether this course can be modified by alterations in diet. abstract_id: PUBMED:10467214 Historic perspectives on the relative contributions of diastolic and systolic blood pressure elevation to cardiovascular risk profile.
The aim of this review was to examine the relative contributions of systolic and diastolic blood pressures to the risk of cardiovascular disease on the basis of epidemiologic evidence from the Framingham Heart Study and the change in attitudes toward systolic blood pressure that occurred during the course of the study. Historic texts were evaluated in comparison with data from the Framingham Heart Study, a prospective longitudinal analysis of the relation between blood pressure and occurrence of subsequent cardiovascular morbidity and mortality rates in a fixed cohort. Historically, systolic hypertension has been considered an innocent accompaniment to arterial stiffening, occurring as a compensatory phenomenon in the elderly. Epidemiologic data show that the development of hypertension is neither inevitable nor beneficial. The data also provide evidence that systolic pressure is more important than diastolic pressure as a determinant of cardiovascular sequelae. Mild or moderate elevations of systolic blood pressure, even when unaccompanied by diastolic pressure elevations, are associated with an increased risk of cardiovascular disease. Risk is increased further by the added presence of related metabolic disturbances such as dyslipidemia, glucose intolerance, insulin resistance, cardiac hypertrophy, and obesity. Over-reliance on diastolic blood pressure in assessing the risk of hypertension can be misleading. Systolic pressure constitutes a powerful predictor of cardiovascular disease and a valuable tool when incorporated within multivariate risk formulas for estimating the conditional probability of coronary and stroke events. Answer: Yes, obesity does modify the effect of blood pressure on the risk of cardiovascular disease. A population-based cohort study of more than one million Swedish men found that the association between blood pressure and subsequent cardiovascular disease (CVD) generally increased with increasing body mass index (BMI). Specifically, the strongest associations of diastolic blood pressure with CVD, myocardial infarction, and stroke were observed in the obese category. For systolic blood pressure, the strongest associations were observed in the obese category with CVD and stroke, but in the overweight category with myocardial infarction. The study observed statistically significant interactions with BMI for diastolic blood pressure in relation to CVD and for systolic blood pressure in relation to CVD and stroke, indicating that hypertension should not be considered a less serious risk factor in obese than in lean or normal-weight persons (PUBMED:18824645).
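The per-1-SD hazard ratios reported for the Swedish cohort (PUBMED:18824645) come from Cox proportional hazards models fitted separately within BMI categories. The snippet below is a minimal sketch of that kind of analysis, not the study's actual code: the data frame, its column names (followup_years, cvd_event, sbp, bmi_category), the within-stratum standardisation, and the use of the lifelines package are all illustrative assumptions.

```python
# Hedged sketch: hazard ratio per 1-SD increase in systolic blood pressure,
# fitted separately within BMI categories with a Cox proportional hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def hr_per_sd_by_bmi(df: pd.DataFrame) -> pd.Series:
    """Return the hazard ratio per 1-SD increase in SBP for each BMI category."""
    results = {}
    for category, grp in df.groupby("bmi_category"):
        grp = grp.copy()
        # Standardise SBP so the Cox coefficient is expressed per 1 SD
        # (the SD is taken within the stratum here; the abstract does not say which).
        grp["sbp_z"] = (grp["sbp"] - grp["sbp"].mean()) / grp["sbp"].std()
        cph = CoxPHFitter()
        cph.fit(grp[["followup_years", "cvd_event", "sbp_z"]],
                duration_col="followup_years", event_col="cvd_event")
        results[category] = float(np.exp(cph.params_["sbp_z"]))  # exp(beta) = HR
    return pd.Series(results, name="HR_per_SD_SBP")
```

Comparing the returned hazard ratios across the underweight, normal, overweight, and obese strata is what allows the kind of interaction statement made in the answer above.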
Instruction: Sponsored-research funding by newly recruited assistant professors: can it be modeled as a sequential series of uncertain events? Abstracts: abstract_id: PUBMED:15234913 Sponsored-research funding by newly recruited assistant professors: can it be modeled as a sequential series of uncertain events? Purpose: Recruitment of junior faculty with an investigative focus is essential to regenerate and expand the research mission of academic health centers. Predicting funding profiles for junior faculty is limited by variability in the timing, magnitude, and duration of projected research grant funding. The author demonstrated the validity of Monte Carlo simulation to predict sponsored-research revenues by newly recruited faculty. Method: Demographic characteristics and funding profiles were determined for assistant professors recruited to Yale University School of Medicine in four separate fiscal years (1992-93, 1993-94, 1996-97, 1997-98). These data were applied to develop and assess the simulation model. Results: Only when assistant professors were subcategorized by type of research was it possible to accurately predict recovery of both direct research costs and facilities and administrative costs. Simulations illustrated both the high degree of variability among individual faculty and also the advantage of a prediction tool that displays the range and probability of all possible outcomes. Conclusion: Sponsored-research funding by newly recruited assistant professors can be modeled as a sequential series of uncertain events and used to predict consequences of imminent changes in federal funding for biomedical research. abstract_id: PUBMED:36627628 Training load of newly recruited nurses in Grade-A Tertiary Hospitals in Shanghai, China: a qualitative study. Background: This study aimed to provide insight into the training load of newly recruited nurses in grade-A tertiary hospitals in Shanghai, China. The lack of nurses in hospitals across China has resulted in newly recruited nurses in grade-A tertiary hospitals in Shanghai having to integrate into the work environment and meet the needs of the job quickly; thus, they undergo several training programs. However, an increase in the number of training programs increases the training load of these nurses, impacting the effectiveness of training. The extent of the training load that newly recruited nurses have to bear in grade-A tertiary hospitals in China remains unknown. Methods: This qualitative study was conducted across three hospitals in Shanghai, including one general hospital and two specialized hospitals, in 2020. There were 15 newly recruited nurses who were invited to participate in semi-structured in-depth interviews with the purpose sampling method. A thematic analysis approach was used to analyze the data. The COREQ checklist was used to assess the overall study. Results: Three themes emerged: external cognitive overload, internal cognitive overload, and physical and mental overload. Conclusion: Through qualitative interviews, this study found that the training of newly recruited nurses in Shanghai's grade-A tertiary hospitals is in a state of overload, which mainly includes external cognitive overload, internal cognitive overload, physical and mental overload, as reflected in the form of training overload, the time and frequency of training overload, the content capacity of training overload, the content difficulty of training overload, physiological load overload, and psychological load overload. 
The intensity and form of the training need to be reasonably adjusted. Newly recruited nurses need to not only improve their internal self-ability, but also learn to reduce internal and external load. Simultaneously, an external social support system needs to be established to alleviate their training burden and prevent burnout. abstract_id: PUBMED:34703385 Uncertain regression model with autoregressive time series errors. Uncertain regression model is a powerful analytical tool for exploring the relationship between explanatory variables and response variables. It is assumed that the errors of regression equations are independent. However, in many cases, the error terms are highly positively autocorrelated. Assuming that the errors have an autoregressive structure, this paper first proposes an uncertain regression model with autoregressive time series errors. Then, the principle of least squares is used to estimate the unknown parameters in the model. Besides, this new methodology is used to analyze and predict the cumulative number of confirmed COVID-19 cases in China. Finally, this paper gives a comparative analysis of uncertain regression model, difference plus uncertain autoregressive model, and uncertain regression model with autoregressive time series errors. From the comparison, it is concluded that the uncertain regression model with autoregressive time series errors can improve the accuracy of predictions compared with the uncertain regression model. abstract_id: PUBMED:37972663 Impact of Reporting Bias, Conflict of Interest, and Funding Sources on Quality of Orthopaedic Research. Background: Influence of factors like reporting outcomes, conflicts of interest, and funding sources on study outcomes, particularly positive outcomes in orthopedics, remains underexplored. As transparency of partnerships in orthopaedic surgery through conflicts of interest statements has increased over the years, there has been a lack of focus on the value of these partnerships in influencing study outcomes. We aimed to investigate the associations between reporting outcomes, conflicts of interest, and sources of funding on study outcomes. Methods: We reviewed articles published in 1 year in The Journal of Bone and Joint Surgery, The American Journal of Sports Medicine, and The Journal of Arthroplasty. The abstracts were examined for appropriate inclusion, while the authors' names, academic degrees, funding disclosures, and departmental and institutional affiliations were redacted. There were a total of 1,351 publications reviewed from January 1, 2021 to December 31, 2021. Results: A significant association was found between positive outcomes and reported conflicts of interest (75% versus 25%, P < .001). Likewise, conflicts of interest showed significant association with industry-sponsored studies (88% versus 12%, P < .001) and evidence level > II (72% versus 28%, P < .001). Industry-sponsored research accounted for the highest percentage of studies involving a conflict of interest (88%) and level I studies (12%). Conclusions: Conflicts of interest are significantly associated with positive outcomes in orthopaedics. Sponsored studies were more inclined to have conflicts of interest and accounted for the majority of level I studies. abstract_id: PUBMED:34736418 During evolution from the earliest tetrapoda, newly-recruited genes are increasingly paralogues of existing genes and distribute non-randomly among the chromosomes. 
Background: The present availability of full genome sequences of a broad range of animal species across the whole range of evolutionary history enables one to ask questions as to the distribution of genes across the chromosomes. Do newly recruited genes, as new clades emerge, distribute at random or at non-random locations? Results: We extracted values for the ages of the human genes and for their current chromosome locations, from published sources. A quantitative analysis showed that the distribution of newly-added genes among and within the chromosomes appears to be increasingly non-random if one observes animals along the evolutionary series from the precursors of the tetrapoda through to the great apes, whereas the oldest genes are randomly distributed. Conclusions: Randomization will result from chromosome evolution, but less and less time is available for this process as evolution proceeds. Much of the bunching of recently-added genes arises from new gene formation as paralogues in gene families, near the location of genes that were recruited in the preceding phylostratum. As examples we cite the KRTAP, ZNF, OR and some minor gene families. We show that bunching can also result from the evolution of the chromosomes themselves when, as for the KRTAP genes, blocks of genes that had previously been on disparate chromosomes become linked together. abstract_id: PUBMED:27257354 Managing and Mentoring: Experiences of Assistant Professors in Working with Research Assistants. Support from research assistants (RAs) is often framed as a resource to facilitate faculty research productivity, yet most assistant professors have received minimal training on how to effectively make use of this resource. This study collected data from a national sample of assistant professors to examine tasks RAs are asked to perform, satisfaction with RA work, challenges in working with RAs, and lessons learned to be successful. Authors used a sequential mixed-methods design, first conducting a Web-based survey with 109 assistant professors in social work schools with doctoral programs, then qualitative interviews with a subset of 13 respondents who volunteered to talk more about their experiences. Evidence indicated low levels of satisfaction regarding the preparation of students for RA work, particularly among those assistant professors working with first-year doctoral students. Primary challenges included lack of student skills and commitment and sufficient time to supervise and train students. Recommendations include careful assessment of student skills at the start of the relationship and setting clear expectations. Social work programs can improve faculty-RA relationships by training new assistant professors on how to support and manage RAs and training incoming students on basic research skills for their work as RAs. abstract_id: PUBMED:35076141 Wound care research sponsored by the Department of Defense. Due to the need for more information about Department of Defense sponsored wound healing research, the Wound Healing Foundation initiated the writing of this article. It briefly describes the Vision, Mission and Goals of the Department of Defense Strategic Medical Research Plan. It also describes the current objectives of Department of Defense research funding and where to access this information in detail. The grant cycle, the timing of request for proposals and some of the specifics of their requirements are also mentioned. A brief discussion of budgeting and overhead is also included. 
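The chromosome-distribution question posed in PUBMED:34736418 above (do newly recruited genes land on chromosomes at random?) can be framed as a goodness-of-fit test against the expectation that new genes distribute in proportion to each chromosome's total gene count. The sketch below illustrates that framing only; the input dictionaries and example counts are invented, and the paper's own quantitative analysis may differ.

```python
# Illustrative sketch: test whether genes recruited in a given phylostratum are
# spread across chromosomes in proportion to each chromosome's total gene count.
# The dictionaries below are placeholder inputs, not data from the paper.
from scipy.stats import chisquare

def non_randomness_test(new_gene_counts: dict, total_gene_counts: dict):
    """Chi-square goodness-of-fit of observed new-gene counts per chromosome
    against expected counts proportional to total genes per chromosome."""
    chromosomes = sorted(total_gene_counts)
    observed = [new_gene_counts.get(c, 0) for c in chromosomes]
    n_new = sum(observed)
    n_total = sum(total_gene_counts[c] for c in chromosomes)
    expected = [n_new * total_gene_counts[c] / n_total for c in chromosomes]
    return chisquare(f_obs=observed, f_exp=expected)

# Example with made-up numbers: a large statistic / small p-value would indicate
# "bunching" of recently added genes on particular chromosomes.
total = {"chr1": 2000, "chr2": 1300, "chr17": 1200, "chr19": 1400}
recent = {"chr1": 40, "chr2": 15, "chr17": 80, "chr19": 120}
stat, p_value = non_randomness_test(recent, total)
print(f"chi2 = {stat:.1f}, p = {p_value:.3g}")
```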
abstract_id: PUBMED:31870382 The relationship between government research funding and the cancer burden in South Korea: implications for prioritising health research. Background: In this study, we aimed to assess health research funding allocation in South Korea by analysing the relationship between government funding and disease burden in South Korea, specifically focusing on cancers. Methods: The relationship between research funding and the cancer burden, measured in disability-adjusted life-years (DALYs), was analysed using a linear regression method over a 10-year interval. Funding information on 25 types of cancer was obtained from the National Science and Technology Information Service portal in South Korea. Measures of cancer burden were obtained from Global Burden of Disease studies. The funding predictions were derived from regression analysis and compared with actual funding allocations. In addition, we evaluated how the funding distribution reflected long-term changes in the burden and the burden specific to South Korea compared with global values. Results: Korean funding in four periods, 2005-2007, 2008-2010, 2011-2013 and 2015-2017, was associated with the cancer burden in 2003, 2006, 2009 and 2013, respectively. For DALYs, the correlation coefficients were 0.79 and 0.82 in 2003 and 2013, respectively, which were higher than the values from other countries. However, the changes in DALYs (1990-2006) were not associated with the funding changes (from 2005-2007 to 2015-2017). In addition, the value differences between Korean and global DALYs were not associated with Korean government research funding. Conclusions: Although research funding was associated with the cancer burden in South Korea during the last decade, the distribution of research funds did not appropriately reflect either the changes in burden or the differences between the South Korean and global burden levels. The policy-makers involved in health research budgeting should consider not only the absolute burden values for singular years but also the long-term changes in burden and the country-specific burden when they prioritise public research projects. abstract_id: PUBMED:31691069 Gambling Research and Industry Funding. This paper discusses the relationship between investigative credibility and the sources of funding associated with gambling research. Some researchers argue against accepting funding from gambling industry sources; similarly, they decline to participate in activities directly or indirectly sponsored by gambling industry sources. In contrast, these anti-industry investigators evidence less resistance toward accepting funds from sources other than industry, for example, governments, because they believe that they have greater independence, reliability, and validity, and less undue influence and/or interference.
We organize this article around six primary issues: (1) researchers making a priori judgments that restrict positions towards industry-associated research; (2) the potential negative impacts of holding such a position; (3) a description of the different sources of funding available to support gambling-related research; (4) an examination of the extant empirical support associated with the sources of funding and whether such support evidences bias; (5) a description of six cases illustrating how refusing to participate in any project funded by the industry can adversely influence the advancement of science and, at times, be itself unethical; and finally, (6) we suggest some remedies to advance solutions to this problem by stimulating the participation of reluctant researchers to work towards a greater harmony, keeping in mind that the pivotal goal of our work is to increase our knowledge in different areas of science and to harness it to public goods. abstract_id: PUBMED:34190432 Assessing the effect of nursing stress factors on turnover intention among newly recruited nurses in hospitals in China. Aim: This study sought to investigate some possible job stress factors that could influence newly recruited nurses' behaviour to either continue or discontinue their job with their organization. Design: A cross-sectional study design was adopted for this study. Method: Using 654 responses from novice nurses working in 20 county Chinese hospitals, we estimated the effects of six job stressors from the perceived stress scale on the turnover intention with a structural equation model in AMOS version 21 software. Results: The results showed that four stressors, stress from taking care of patients (β = 0.111, p < .01), stress from roles and workload (β = 0.129, p < .001), stress from co-workers and daily life (β = 0.323, p < .001) and stress from lack of professional knowledge and skills (β = 0.137, p < .001), from the perceived stress scale had a significant impact on turnover intention among nurses.
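The modelling approach described in PUBMED:15234913 above treats each recruit's grant funding as a sequence of uncertain events (whether an award is obtained, when it starts, how large it is, how long it lasts) and runs many Monte Carlo trials to obtain the full distribution of possible revenue outcomes. The sketch below is a minimal illustration of that idea; every probability, dollar amount, and duration is an assumption invented for the example, not a figure from the Yale data.

```python
# Illustrative Monte Carlo sketch: sponsored-research funding for a newly recruited
# assistant professor modeled as a sequence of uncertain grant events.
# All probabilities, amounts, and durations below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_faculty_funding(years: int = 5,
                             p_award_per_year: float = 0.35,
                             mean_annual_directs: float = 150_000.0,
                             fa_rate: float = 0.6,
                             max_duration_years: int = 4) -> np.ndarray:
    """Return simulated total recovered costs (direct + F&A) per year for one recruit."""
    totals = np.zeros(years)
    for start in range(years):
        if rng.random() < p_award_per_year:                  # uncertain event: award won?
            duration = rng.integers(1, max_duration_years + 1)  # uncertain duration
            annual = rng.lognormal(np.log(mean_annual_directs), 0.4)  # uncertain size
            for y in range(start, min(start + duration, years)):
                totals[y] += annual * (1.0 + fa_rate)         # directs plus F&A recovery
    return totals

# Repeat over many trials to see the range and probability of all possible outcomes.
trials = np.array([simulate_faculty_funding().sum() for _ in range(10_000)])
print(f"median 5-year recovery: ${np.median(trials):,.0f}; "
      f"5th-95th pct: ${np.percentile(trials, 5):,.0f}-${np.percentile(trials, 95):,.0f}")
```

Subcategorising recruits by type of research, as the abstract recommends, would simply mean running this simulation with different parameter sets per subgroup.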
Instruction: Do different vaccination regimens for BCG and hepatitis B affect the development of allergic disorders in early childhood? Abstracts: abstract_id: PUBMED:18350408 Do different vaccination regimens for BCG and hepatitis B affect the development of allergic disorders in early childhood? Aim: To determine whether age at bacilli Calmette-Guérin (BCG) and hepatitis B vaccination has an effect on the development of atopy and allergic disorders in early childhood. Methods: This was a cross-sectional study of 109 children aged between 24 and 36 months with respiratory system diseases. The study population was divided into two groups according to vaccination regimens: group 1, beginning hepatitis B vaccination at birth and receiving BCG vaccine at two months of age; group 2, receiving BCG vaccine at birth and beginning hepatitis B vaccination at two months of age. Atopic status was assessed by skin-prick tests (SPTs). Results: There was no statistically significant difference in atopy between two groups (p = 0.27). However, the prevalence of recurrent wheezing was higher in group 1 (36.4%) than group 2 (16.3%) (p = 0.04). Logistic regression analysis identified receiving BCG vaccine at birth and beginning hepatitis B vaccination at the age of two months were protective for recurrent wheezing (odds ratio 0.5; confidence interval: 0.3-0.8; p = 0.01). Conclusion: We believe that the administration of BCG vaccine at birth and hepatitis B vaccine at two months may be protective against recurrent wheezing but doesn't prevent atopy. abstract_id: PUBMED:23238161 Hypersensitivity and vaccines: an update. Allergic reactions to vaccines can be classified as sensitivity to one of the vaccine components, pseudo-allergic reactions, often after hyperimmunization, and exacerbation of atopic symptoms or vasculitis. Pseudo-allergic reactions, some possibly due to hyperimmunization, are probably more common than true allergies. Atopic reactions should not be confused with the "flash" phenomenon, defined as an exacerbation of an allergic reaction due to a reduction in the allergic reactivity threshold following the vaccine injection. BCGitis occurs frequently, and for this reason, guidelines for Bacillus Calmette-Guérin (BCG) have been modified. The vaccine is now reserved for people at risk of exposure to Mycobacterium tuberculosis. This review provides an update on the vaccination modalities for people allergic to eggs, on the assessment that should be performed when a reaction occurs due to tetanus vaccination, on the urticaria after hepatitis vaccination, on an aluminum granuloma, which is more and more frequent in young children, and vasculitis after flu vaccination and BCGitis. The side effects associated with new, recently released vaccines, such as anti-influenza A H1N1 or anti-human papilloma virus (HPV) will also be presented. abstract_id: PUBMED:18925883 Early atopic disease and early childhood immunization--is there a link? Background: There are frequent concerns about early immunizations among the parents of children at heightened risk for atopy. The study assessed the effect of vaccine immunization before the first birthday on eczema severity and allergic sensitization in the second year of life. Methods: A total of 2184 infants, aged 1-2 years, with established atopic dermatitis and a family history of allergy, from 97 study centres in 10 European countries, South Africa and Australia were included. 
Exposure to vaccines (diphtheria, tetanus, pertussis, polio, Haemophilus influenzae Type B, hepatitis B, mumps, measles, rubella, varicella, BCG, meningococci and pneumococci) and immunization dates were recorded from immunization cards. Immunoglobulin E (IgE) was determined by RAST and eczema severity was assessed by scoring atopic dermatitis (SCORAD). Results: Immunization against any target was not associated with an increased risk of allergic sensitization to food or inhalant allergens. Varicella immunization (only 0.7% immunized) was inversely associated with total IgE > 30 kU/l (OR 0.27; 95% CI 0.08-0.87) and eczema severity (OR 0.34; 95% CI 0.12-0.93). Pertussis immunization (only 1.7% nonimmunized) was inversely associated with eczema severity (OR 0.30; 95% CI 0.10-0.89). Cumulative received vaccine doses were inversely associated with eczema severity (P = 0.0107). The immunization coverage of infants before and after the onset of atopic dermatitis was similar. Conclusion: In children at heightened risk for atopy, common childhood immunization in the first year is not associated with an increased risk of more severe eczema or allergic sensitization. Parents of atopic children should be encouraged to fully immunize their children. abstract_id: PUBMED:20619374 Characterization and immune effect of the hepatitis B-BCG combined vaccine for using a needle inoculation. Objective: To prepare the hepatitis B-Mycobacterium bovis Bacillus Calmette-Guérin combined vaccine (HB-BCG combined vaccine) and to resolve the problem of needing separate needles to inoculate the two vaccines, hepatitis B vaccine (HB vaccine) and M. bovis Bacillus Calmette-Guérin (BCG). Methods: The hepatitis B surface antigen (HBsAg) was prepared by genetic engineering, BCG was produced using routine biological techniques, and the finished products of the HB-BCG combined vaccine were then processed on this foundation. The content of HBsAg was measured by enzyme-linked immunosorbent assay (ELISA), and the immune effect of BCG was detected by the purified protein derivative (PPD) test. Cellular immune response, safety, local toxicity and allergy were tested. The stability of the HB-BCG combined vaccine was assessed by ELISA and the viable count method. Results: The two antigens (HBsAg and BCG) had good compatibility. The comparison of immune effects between the HB-BCG combined vaccine and BCG showed no significant difference. The comparison of immune effects between the HB-BCG combined vaccine group (first dose HB-BCG combined vaccine, second and third doses HB vaccine) and the HB vaccine group (all three doses HB vaccine) demonstrated that anti-HBs levels in the HB-BCG combined vaccine group were higher than those in the HB vaccine group. No statistically significant difference was observed between the combined vaccine group and the HB vaccine group after the three-dose immunization schedules. The safety results in the HB-BCG combined vaccine group accorded with those in the BCG group, and no pathological changes of tuberculosis were found. The character and course of the pathological changes in the HB-BCG combined vaccine group and the BCG group were similar in the local toxicity test. HBsAg did not intensify the inflammatory reaction caused by BCG. Systemic allergy was not found. The HB-BCG combined vaccine was stable for 2 years. Conclusion: The immune effects of the HB-BCG combined vaccine were not lower than those of the two single vaccines, and it had good safety. abstract_id: PUBMED:15032089 Childhood vaccinations anno 2004. II.
The real and presumed side effects of vaccination. Vaccinations protect to a high degree against infectious diseases, but may cause side effects. In the Netherlands, adverse events following immunization have been registered and analysed since 1962 by the National Institute of Health and Environment (RIVM). Since 1983, a permanent committee of the Dutch Health Council has reviewed adverse events reported to the RIVM. With the so-called killed vaccines the side effects are mainly local (redness, swelling, pain) or general (fever, listlessness, irritability, sleep and eating problems). They are seen mainly after DPT-IPV vaccination against diphtheria, pertussis, tetanus and poliomyelitis. Some side effects occur rarely (collapse reactions, discoloured legs, persistent screaming and convulsions) and, very rarely, serious neurological events are reported. After MMR vaccination against measles, mumps and rubella, cases of arthritis, thrombocytopenia and ataxia are reported sporadically. They usually resolve spontaneously. In recent years a range of diseases and symptoms has been associated with vaccination (presumed side effects). Careful and extensive investigations have shown that such hypotheses could not be supported. Examples are allergic diseases such as asthma, diabetes mellitus, multiple sclerosis (after hepatitis B vaccination), autism and inflammatory bowel disease (after MMR vaccination) and sudden infant death syndrome. The number of cases in which at least a possible relation between side effects and vaccination is observed (apart from local reactions and moderate general symptoms) is very small (about 0.25 per 1000 vaccinations) and does not outweigh the benefits of vaccination. There appears to be increasing doubt about the use and safety of vaccinations. More research is needed into people's motives for choosing for or against vaccination. Education about vaccination for parents and for professionals involved with vaccination has to be improved. The internet can play an important role. abstract_id: PUBMED:17327245 The impact of helminths on the response to immunization and on the incidence of infection and disease in childhood in Uganda: design of a randomized, double-blind, placebo-controlled, factorial trial of deworming interventions delivered in pregnancy and early childhood [ISRCTN32849447] Background: Helminths have profound effects on the immune response, allowing long-term survival of parasites with minimal damage to the host. Some of these effects "spill-over", altering responses to non-helminth antigens or allergens. It is suggested that this may lead to impaired responses to immunizations and infections, while conferring benefits against inflammatory responses in allergic and autoimmune disease. These effects might develop in utero, through exposure to maternal helminth infections, or through direct exposure in later life. Purpose: To determine the effects of helminths and their treatment in pregnancy and in young children on immunological and disease outcomes in childhood. Methods: The trial has three randomized, double-blind, placebo-controlled interventions at two times, in two people: a pregnant woman and her child. Pregnant women are randomized to albendazole or placebo and praziquantel or placebo. At age 15 months their children are randomized to three-monthly albendazole or placebo, to continue to age five years. The proposed designation for this sequence of interventions is a 2 × 2 (× 2) factorial design.
Children are immunized with BCG and against polio, Diphtheria, tetanus, Pertussis, Haemophilus, hepatitis B and measles. Primary immunological outcomes are responses to BCG antigens and tetanus toxoid in whole blood cytokine assays and antibody assays at one, three and five years of age. Primary disease outcomes are incidence of malaria, pneumonia, diarrhoea, tuberculosis, measles, vertical HIV transmission, and atopic disease episodes, measured at clinic visits and twice-monthly home visits. Effects on anaemia, growth and intellectual development are also assessed. Conclusion: This trial, with a novel design comprising related interventions in pregnant women and their offspring, is the first to examine effects of helminths and their treatment in pregnancy and early childhood on immunological, infectious disease and allergic disease outcomes. The results will enhance understanding of both detrimental and beneficial effects of helminth infection and inform policy. abstract_id: PUBMED:15837208 Negative attitude of highly educated parents and health care workers towards future vaccinations in the Dutch childhood vaccination program. Background: It is unknown whether further expansion of the Dutch childhood vaccination program with other vaccines will be accepted and whom should be targeted in educational strategies. Aim: To determine attitudes of parents towards possible future vaccinations for their children and the behavioural determinants associated with a negative attitude. Design: Questionnaire study. Methods: Parents of children aged between 3 months and 5 years of day-care centres were asked to fill out a questionnaire. Determinants of a negative attitude to comply with possible future vaccinations against example diseases such as pneumonia or influenza, hepatitis B, TBC, smallpox and SARS were assessed using polytomous logistic regression analysis. Results: Of the 283 respondents, 123 (43%) reported a positive attitude towards all vaccinations, 129 (46%) reported to have a positive attitude to have their child vaccinated against some diseases and 31 (11%) had no intention to comply with any new vaccination. Determinants of a fully negative attitude were a high education of the parent (odds ratio [OR] 3.3, 95% confidence interval [95% CI]: 1.3-8.6), being a health care worker (OR 4.2, 95% CI: 1.4-12.6), absence of religion (OR 2.6, 95% CI: 1.0-6.7), perception of vaccine ineffectiveness (OR 6.9, 95% CI: 2.5-18.9) and the perception that vaccinations cause asthma or allergies (OR 82.4, 95% CI: 8.9-766.8). Conclusion: Modifiable determinants for a negative attitude to comply with new vaccinations are mainly based on lack of specific knowledge. These barriers to vaccinations might be overcome by improving health education in the vaccination program, especially when targeted at educated parents and health care workers. abstract_id: PUBMED:21714922 Impact of early life exposures to geohelminth infections on the development of vaccine immunity, allergic sensitization, and allergic inflammatory diseases in children living in tropical Ecuador: the ECUAVIDA birth cohort study. Background: Geohelminth infections are highly prevalent infectious diseases of childhood in many regions of the Tropics, and are associated with significant morbidity especially among pre-school and school-age children. 
There is growing concern that geohelminth infections, particularly exposures occurring during early life in utero through maternal infections or during infancy, may affect vaccine immunogenicity in populations among whom these infections are endemic. Further, the low prevalence of allergic disease in the rural Tropics has been attributed to the immune modulatory effects of these infections and there is concern that widespread use of anthelmintic treatment in high-risk groups may be associated with an increase in the prevalence of allergic diseases. Because the most widely used vaccines are administered during the first year of life and the antecedents of allergic disease are considered to occur in early childhood, the present study has been designed to investigate the impact of early exposures to geohelminths on the development of protective immunity to vaccines, allergic sensitization, and allergic disease. Methods/design: A cohort of 2,403 neonates followed up to 8 years of age. Primary exposures are infections with geohelminth parasites during the last trimester of pregnancy and the first 2 years of life. Primary study outcomes are the development of protective immunity to common childhood vaccines (i.e. rotavirus, Haemophilus influenzae type B, Hepatitis B, tetanus toxoid, and oral poliovirus type 3) during the first 5 years of life, the development of eczema by 3 years of age, the development of allergen skin test reactivity at 5 years of age, and the development of asthma at 5 and 8 years of age. Potential immunological mechanisms by which geohelminth infections may affect the study outcomes will be investigated also. Discussion: The study will provide information on the potential effects of early exposures to geohelminths (during pregnancy and the first 2 years of life) on the development of vaccine immunity and allergy. The data will inform an ongoing debate of potential effects of geohelminths on child health and will contribute to policy decisions on new interventions designed to improve vaccine immunogenicity and protect against the development of allergic diseases. Trial Registration: Current Controlled Trials ISRCTN41239086. abstract_id: PUBMED:3317343 Clinical development of a new recombinant DNA hepatitis B vaccine. The clinical development plan for the new recombinant DNA yeast-derived hepatitis B vaccine manufactured by SmithKline Biologicals is summarized. Initially, the emphasis was on assessing the risk of hypersensitivity to yeast-derived contaminants. This was followed by an evaluation of local and general reactions after vaccination. Next, the optimal dose of vaccine to be administered was ascertained followed by an evaluation of the efficacy of different vaccination schedules. The reactogenicity and immunogenicity of the yeast-derived vaccine was compared with that of two commercially available plasma-derived vaccines. The recombinant vaccine's protective efficacy was assessed in chimpanzees and by comparing attack rates in historical homosexual control groups with those in vaccinated homosexuals. Ongoing studies are investigating the protection of neonates born to HBeAg-positive carrier mothers. abstract_id: PUBMED:10852046 The vaccination of children with severe somatic pathology 1,696 children were vaccinated; of these, 1,487 children had different kinds of somatic pathology, including 1,181 children with CNS lesions, 29 children with malignant tumors, 45 children with congenital defects, 82 children with allergic diseases, etc. 
The group of relatively healthy vaccinees consisted of 209 children. The following vaccines were used for immunization: Tetracoq 05, D.T.Vax, Rudivax, Imovax Polio, Vaxigrip (Pasteur Mèrieux Connaught, France); HBVax, MMRII (Merck Sharp & Dohme, USA); as well as vaccines against hepatitis B produced by Smith Kline Beecham (UK) and Combiotech (Russia). In no case severe vaccine-associated complications were observed. The frequency and manifestation of reactions in children with somatic pathology did nor essentially differ from those in relatively healthy children. The increase of the number of vaccine components did not lead to the increase of the number of side effects of the severity of their manifestation. These investigations demonstrated the safety of vaccination for children with somatic pathology. Answer: The question of whether different vaccination regimens for BCG and hepatitis B affect the development of allergic disorders in early childhood has been explored in various studies. According to a cross-sectional study of 109 children with respiratory system diseases, there was no statistically significant difference in atopy between two groups with different vaccination regimens for BCG and hepatitis B. However, the prevalence of recurrent wheezing was higher in the group that began hepatitis B vaccination at birth and received the BCG vaccine at two months of age compared to the group that received the BCG vaccine at birth and began hepatitis B vaccination at two months of age. Logistic regression analysis suggested that receiving the BCG vaccine at birth and beginning hepatitis B vaccination at the age of two months may be protective against recurrent wheezing, but it does not prevent atopy (PUBMED:18350408). Another study that assessed the effect of vaccine immunization before the first birthday on eczema severity and allergic sensitization in the second year of life found that common childhood immunization in the first year is not associated with an increased risk of more severe eczema or allergic sensitization in children at heightened risk for atopy. This suggests that parents of atopic children should be encouraged to fully immunize their children (PUBMED:18925883). In summary, the available evidence does not support a significant impact of different BCG and hepatitis B vaccination regimens on the development of allergic disorders in early childhood. However, the timing of these vaccinations may have some protective effect against recurrent wheezing, although it does not seem to prevent atopy (PUBMED:18350408).
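The protective odds ratio for recurrent wheezing cited in the answer above (PUBMED:18350408; OR 0.5, 95% CI 0.3-0.8) comes from a logistic regression of outcome on vaccination regimen. A hedged sketch of that type of analysis follows; the data frame, its column names, and the use of statsmodels are illustrative assumptions rather than details reported by the authors.

```python
# Illustrative sketch: odds ratio for recurrent wheezing by vaccination regimen
# (BCG at birth / hepatitis B at 2 months vs. the reverse), via logistic regression.
# Column names and data are assumptions for the example, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def wheezing_odds_ratio(df: pd.DataFrame):
    """df needs a binary outcome 'recurrent_wheezing' and a binary exposure
    'bcg_at_birth' (1 = BCG at birth with hepatitis B started at 2 months)."""
    model = smf.logit("recurrent_wheezing ~ bcg_at_birth", data=df).fit(disp=False)
    or_point = np.exp(model.params["bcg_at_birth"])          # odds ratio
    ci_low, ci_high = np.exp(model.conf_int().loc["bcg_at_birth"])  # 95% CI
    return or_point, (ci_low, ci_high)

# Adjusting for confounders (e.g. family atopy, daycare attendance) would simply
# mean adding those terms to the model formula.
```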
Instruction: Are cytokeratin-positive cells in sentinel lymph nodes of patients with invasive breast carcinomas up to 5 mm usually insignificant? Abstracts: abstract_id: PUBMED:25130504 Are cytokeratin-positive cells in sentinel lymph nodes of patients with invasive breast carcinomas up to 5 mm usually insignificant? Aims: It is known that sentinel lymph nodes (SLN) may be falsely positive due to displaced epithelial cells, particularly in cases with an underlying intraductal papilloma. Given the low metastatic rate in pT1a carcinomas, we aimed to investigate the effect of this phenomenon on staging. Methods And Results: Using morphology and immunohistochemistry, we classified the epithelial cells in the SLN in 39 cases of pT1a carcinoma as positive for carcinoma in six, negative in 26 and undetermined in seven. Comparative morphology and immunohistochemistry (using oestrogen receptor, ER) showed complete concordance between the primary carcinoma and SLN in the positive cases, and discordance in the negative cases. The primary tumours in the negative cases were ER-positive except one, in contrast to the SLN cytokeratin-positive (CK(+) ) cells, which were ER-negative. The exception was a case with a Her2-positive primary, in which the SLN CK(+) cells did not stain for Her2. In these cases considered SLN-negative, either displacement (19 cases) or an intraductal papilloma (20 cases) was identified. Two cases showed displacement of benign and malignant cells in the biopsy. Seven cases were indeterminate due to the small number of SLN CK(+) cells, precluding comparison with the primary. Conclusion: Given the low rate of metastases in pT1a carcinomas, the significance of SLN CK(+) cells should be resolved by comparative morphology and immunohistochemistry to prevent erroneous upstaging. abstract_id: PUBMED:12607596 Immunohistochemical evaluation of sentinel lymph nodes in breast carcinoma patients. Sentinel lymph node sampling has become an alternative to axillary lymph node dissection to provide prognostic and treatment information in breast cancer patients. The role of immunohistochemistry has yet to be established. A total of 241 sentinel lymph nodes (in 270 slides) from 91 patients with invasive carcinoma (73 ductal, 9 lobular, 8 mixed lobular/ductal, 1 NOS) were studied for presence of macrometastases (> 0.2 cm), identified in hematoxylin and eosin sections, and occult metastases (micrometastases [< or = 0.2 cm], clusters of cells, isolated carcinoma cells), identified only by immunohistochemistry. Intraoperative touch preparations, frozen sections, seven hematoxylin and eosin levels (L1-L7), and two AE1-3 cytokeratin immunohistochemistries (L1, L4-5) of the entire bisected or trisected sentinel lymph node were examined. Thirty-one (34%) patients had 50 positive sentinel lymph nodes. Twenty-six (33%) sentinel lymph nodes had metastatic carcinoma (11 macrometastases, 11 micrometastases, 3 clusters of cells, 1 isolated carcinoma cells) by touch preparations, frozen sections, and one hematoxylin and eosin (L1). Thirty-eight (43%) were positive by AE1-3 immunohistochemistry (L1) (11 macrometastases, 8 micrometastases, 13 clusters of cells, 6 isolated carcinoma cells), significantly more than by touch preparations, frozen sections, hematoxylin and eosin L1, or hematoxylin and eosin L2-7. Cytokeratin immunostain on L4-5 demonstrated 31 (34%) positive sentinel lymph nodes, a similar frequency to cytokeratin immunostain on L1. 
Size of sentinel lymph node metastasis did not correlate with size, histologic grade, or type of primary breast carcinoma. AE1-3 (L1) immunohistochemistry is highly sensitive in delineating sentinel lymph node metastasis, especially clusters of cells and isolated carcinoma cells. The prognostic significance of clusters of cells and isolated carcinoma cells and the value of AE1-3 immunohistochemistry on frozen sections need to be determined. abstract_id: PUBMED:12741889 Colorectal carcinoma nodal staging. Frequency and nature of cytokeratin-positive cells in sentinel and nonsentinel lymph nodes. Context: Nodal staging accuracy is important for prognosis and selection of patients for chemotherapy. Sentinel lymph node (SLN) mapping improves staging accuracy in breast cancer and melanoma and is being investigated for colorectal carcinoma. Objective: To assess pathologic aspects of SLN staging for colon cancer. Design: Sentinel lymph nodes were identified with a dual surgeon-pathologist technique in 51 colorectal carcinomas and 12 adenomas. The frequency of cytokeratin (CK)-positive cells in mesenteric lymph nodes, both SLN and non-SLN, was determined along with their immunohistochemical characteristics. Results: The median number of SLNs was 3; the median number of total nodes was 14. The CK-positive cell clusters were detected in the SLNs of 10 (29%) of 34 SLN-negative patients. Adjusted per patient, SLNs were significantly more likely to contain CK-positive cells than non-SLNs (P <.001). Cell clusters, cytologic atypia, and/or coexpression of tumor and epithelial markers p53 and E-cadherin were supportive of carcinoma cells. Single CK-positive cells only, however, could not be definitively characterized as isolated tumor cells; these cells generally lacked malignant cytologic features and coexpression of tumor and epithelial markers and in 2 cases represented mesothelial cells with calretinin immunoreactivity. Colorectal adenomas were associated with a rare SLN CK-positive cell in 1 (8%) of 12 cases. Conclusions: Sentinel lymph node staging with CK-immunohistochemical analysis for colorectal carcinomas is highly sensitive for detection of nodal tumor cells. Cohesive cell clusters can be reliably reported as isolated tumor cells. Single CK-positive cells should be interpreted with caution, because they may occasionally represent benign epithelial or mesothelial cells. abstract_id: PUBMED:15862505 The association of cytokeratin-only-positive sentinel lymph nodes and subsequent metastases in breast cancer. Introduction: The purpose of this study was to better characterize the clinical significance of cytokeratin immunohistochemistry (IHC)-only-positive lymph node metastases among patients with breast cancer. Methods: We performed a retrospective review of 334 patients who underwent sentinel lymph node (SLN) biopsy from 1 February 1997 through 31 July 2001. SLN biopsies were evaluated using standard hematoxylin and eosin (H&E) techniques. If H&E was negative, cytokeratin IHC was performed. We then evaluated the incidence of subsequent regional and distant metastatic disease. Results: Cytokeratin IHC was performed on 183 sentinel node biopsies from 180 patients comprising a total of 427 sentinel lymph nodes. The procedures included lumpectomy and SLN biopsy (n = 83), mastectomy with SLN biopsy (n = 7), lumpectomy with SLN biopsy and completion axillary dissection (n = 80), and modified radical mastectomy with SLN biopsy and completion axillary dissection (n = 13). 
Cytokeratin IHC was negative in 175 axillary specimens and positive in 8 (4.4%) from 8 different patients. In these eight specimens, deeper sections with subsequent H&E staining additionally identified micrometastasis in four patients. Three of these 8 patients (37.5%) developed distant metastatic disease compared with 1 of the 172 patients (0.6%) with negative cytokeratin IHC (P < .001). Additionally, one of the cytokeratin-positive patients developed regional nodal metastasis compared with none of the 172 cytokeratin-negative patients. Conclusions: Cytokeratin IHC provides a clinically relevant adjunct to H&E staining for evaluating sentinel lymph nodes in breast cancer. These data suggest that patients with cytokeratin-positive sentinel nodes are at increased risk for development of regional and distant metastatic disease. abstract_id: PUBMED:28964593 Iatrogenically false positive sentinel lymph nodes in breast cancer: Methods of recognition and evaluation. With the introduction of sentinel lymph node (SLN) biopsy as a standard procedure for staging clinically node negative breast cancer patients, meticulous pathologic evaluation of SLNs by serial sections and/or immunohistochemistry for cytokeratins has become commonplace in order to detect small volume metastases (isolated tumor cells and micrometastases). This practice has also brought to the fore the concept of iatrogenically false positive sentinel nodes secondary to epithelial displacement produced largely by preoperative needling procedures. While this concept is well described in the clinical and pathologic literature, it is, in our experience, still under-recognized, with such lymph nodes frequently incorrectly diagnosed as harboring true metastases, possibly resulting in unwarranted further surgery and/or chemotherapy. This review discusses the concept of displaced epithelium in the histologic evaluation of breast surgical specimens and provides a stepwise approach to the correct identification of iatrogenically transported displaced epithelial cells in sentinel lymph nodes. abstract_id: PUBMED:19412630 A prospective study of false-positive diagnosis of micrometastatic cells in the sentinel lymph nodes in colorectal cancer. Introduction: Sentinel lymph node mapping (SLNM) with multilevel sections (MLS) and cytokeratin immunohistochemistry (CK-IHC) of sentinel lymph nodes (SLNs) upstages 15-20% of patients (pts). False-positive SLNs occur in breast cancer due to mechanical transport of cells during mapping procedures, or to pre-existing benign cellular inclusions. Our prospective study evaluated whether colorectal mapping procedures alone caused false positives. Methods: A total of 314 pts underwent SLNM with blue dye. Ninety of the pts underwent a second mapping in normal bowel away from the primary tumor. The first 1-5 blue nodes near the primary tumor were marked as SLNs; those near the second injection site were marked as nontumor SLNs (nt-SLNs). All SLNs and nt-SLNs were evaluated by MLS and CK-IHC. Results: Of 314 pts, 30 had benign tumor and 284 had invasive cancer. SLNM was successful in 274/284 (96.5%) invasive cancer pts, with 728 SLNs identified. Forty-six of the 274 pts (16.8%) had low-volume metastasis in 57 SLNs: 31 pts (11.3%) had 38 SLNs with micrometastasis (>0.2 mm, <or=2 mm), while 15 pts (5.5%) had 19 SLNs with isolated tumor cells (<or=0.2 mm). 
For 100 pts with second SLNM (70/90 pts successfully mapped with 102 nt-SLNs), or with SLNM of benign pathology (30/30 pts successfully mapped with 88 SLNs), there were no false positives in any of 190 nodes (P < 0.001). Conclusion: No false positives due to mechanical transport of cells or to benign cellular inclusions were identified in 190 lymph nodes from 100 patients with SLNM in benign bowel. abstract_id: PUBMED:11759054 Cytokeratin immunostaining patterns of benign, reactive lymph nodes: applications for the evaluation of sentinel lymph node specimen. The use and interpretation of cytokeratin (CK) immunostains of sentinel lymph node specimens for breast carcinoma remain controversial. Variable immunoreactivity with anti-CK antibodies and CK-positive interstitial reticulum cells may complicate interpretation. The authors examined a series of reactive lymph nodes selected from patients without a history of malignancy. To demonstrate potential diagnostic pitfalls, three different CK antibody combinations were studied to characterize the immunostaining patterns. Formalin-fixed sections of lymph nodes were immunostained with a labeled streptavidin-biotin method using a DAKO autostainer. The anti-CK antibody preparations evaluated were AE1/AE3, CAM 5.2, and an in-house-prepared CK cocktail composed of 7 antibodies. The authors observed that up to 10% of cells in benign, reactive lymph nodes may be immunoreactive with anti-CK antibodies. AE1/AE3 stained 2 of 20 cases with rare immunoreactive reticulum cells, whereas CAM 5.2 and the CK cocktail immunostained cells in 85% of cases with reticulum cells in sinuses and the paracortex. Rare positive to 2+ cells were present in a similar distribution with these two antibodies. Careful interpretation of CK immunostaining of sentinel lymph node biopsies is essential, as is awareness of the presence of CK-positive native reticulum cells, to avoid confusion with single cells of metastatic carcinoma. abstract_id: PUBMED:18697715 False-positive cells in sentinel lymph nodes. Melanoma sentinel lymph nodes (SLN) are carefully evaluated to maximize sensitivity. Examination includes hematoxylin and eosin (H+E) stained sections at multiple levels through the node, with subsequent immunohistochemical (IHC) stains for melanocytic markers if H+E sections are negative for melanoma. However, not all IHC-positive cells in SLN are metastatic melanoma, as evidenced by the presence of MART-1 positive cells in SLN from breast cancer patients with no history of melanoma (so-called 'false-positive' cells). These 'false-positive cells' could be nodal nevus, non-melanocytic cells with cross-reacting antigenic determinants, phagocytic cells containing melanocyte antigens, or possibly melanocytes or melanocyte stem cells liberated at the time of biopsy of the cutaneous melanoma. Examination of SLN requires careful correlation of H+E and IHC findings. abstract_id: PUBMED:16524688 Intraoperative examination of sentinel lymph nodes by immunohistochemical staining in patients with breast cancer. Aim: To performed a prospective investigation of the relative merits of rapid cytokeratin immunohistochemical (CK-IHC) staining of the SLN removed during the operation of breast cancer patients. Study Design: Between December 2002 and March 2004, 62 patients with T1 and T2 breast cancer were enrolled after undergoing successful sentinel lymph node biopsy. 
Eighty-nine sentinel lymph nodes (mean number, 1.44) were biopsied and first examined by hematoxylin-eosin (H&E) stained frozen section. All the tumour free sentinel lymph nodes by H&E stained frozen section were immunostained for cytokeratin using a rapid immunohistochemical assay (Cytokeratin (PAN), 1:50, Novocastra Lab., Newcastle, UK) during the operations. Results: Rapid IHC staining revealed seven positive sentinel lymph nodes that were negative for metastasis by H&E staining. This study showed a sensitivity of 92.86%, a specificity of 100%, an accuracy of 98.9%, and a negative predictive value of 98.7%. Conclusions: The intraoperative examination of sentinel lymph nodes is an accurate and effective way of predicting the axillary lymph node status of patients with breast cancers. abstract_id: PUBMED:32700061 Reporting and handling of lymphonodectomy specimens in gynecologic malignancies and sentinel lymph nodes The handling and reporting of resected lymph nodes in gynecologic cancer follows the recommendations of the German national guidelines and the recommendations of the International Collaboration of Cancer Reporting (ICCR) and the International Society of Gynecologic Pathologists (ISGyP). The definitions of micrometastases and isolated tumor cells are in accordance with the definition of the UICC (Union Internationale Contre le Cancer) and TNM system. Both findings must be reported as part of the pathology report and final tumor classification. It is mandatory to examine all excised lymph nodes with complete processing of all nodes up to 0.3 cm and slicing of all larger nodes in 0.2-cm wide intervals with complete processing of all lamellae. The amount of the resected lymph nodes in correlation to positive nodes, the metric dimension of the largest lymph node metastasis per lymph node region, and the presence of extracapsular extension of the lymph node deposits must be part of the pathology report. The handling and cutting of sentinel lymph nodes are similar to nonsentinel nodes. Within frozen section analyses and final processing from paraffin-embedded sentinel nodes, all nodes should be examined by three-step sections with an interval of about 200 µm. In cases of negative sentinel nodes on H&E staining, immunohistochemical ultrastaging should be performed. Answer: Cytokeratin-positive cells in sentinel lymph nodes (SLNs) of patients with invasive breast carcinomas up to 5 mm are not always insignificant. The significance of these cells should be carefully evaluated to prevent erroneous upstaging. In a study of pT1a carcinomas, comparative morphology and immunohistochemistry (using oestrogen receptor, ER) showed concordance between the primary carcinoma and SLN in positive cases, and discordance in negative cases. The primary tumors in the negative cases were ER-positive except one, while the SLN cytokeratin-positive (CK(+)) cells were ER-negative, suggesting the possibility of displaced epithelial cells rather than true metastases. In cases considered SLN-negative, either displacement or an intraductal papilloma was identified, indicating that not all CK(+) cells in SLNs are metastatic (PUBMED:25130504). Moreover, cytokeratin immunohistochemistry (IHC) has been shown to be highly sensitive in delineating SLN metastasis, especially for clusters of cells and isolated carcinoma cells. However, the prognostic significance of these findings and the value of cytokeratin IHC on frozen sections need further evaluation (PUBMED:12607596). 
Additionally, while cohesive cell clusters can be reliably reported as isolated tumor cells, single cytokeratin-positive cells should be interpreted with caution, as they may occasionally represent benign epithelial or mesothelial cells (PUBMED:12741889). Furthermore, a retrospective review found that patients with cytokeratin IHC-only-positive sentinel nodes are at increased risk for the development of regional and distant metastatic disease, suggesting clinical relevance to these findings (PUBMED:15862505). However, it is also important to recognize and evaluate iatrogenically false-positive sentinel nodes, which can occur due to epithelial displacement from preoperative procedures, to avoid misdiagnosis and unnecessary treatment (PUBMED:28964593). In conclusion, while cytokeratin-positive cells in SLNs of patients with small invasive breast carcinomas can sometimes be due to benign causes or artifacts, they cannot be universally deemed insignificant without thorough evaluation, as they may have clinical implications for the patient's prognosis and treatment.
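One of the abstracts above (PUBMED:16524688) summarizes the performance of intraoperative rapid cytokeratin IHC as sensitivity, specificity, accuracy, and negative predictive value, all of which are simple functions of a 2x2 table of test results against the reference standard. The snippet below shows that arithmetic; the example counts are a reconstruction chosen to come out near the quoted figures, since the abstract does not report the underlying 2x2 breakdown.

```python
# Diagnostic performance from a 2x2 confusion matrix (counts below are hypothetical).
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),          # true positives among node-positive
        "specificity": tn / (tn + fp),          # true negatives among node-negative
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "npv":         tn / (tn + fn),          # negative predictive value
        "ppv":         tp / (tp + fp),          # positive predictive value
    }

# Example: 89 nodes with 13 true positives, 1 false negative, 75 true negatives
# gives roughly the sensitivity/specificity/accuracy/NPV values quoted above.
print(diagnostic_metrics(tp=13, fp=0, fn=1, tn=75))
```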
Instruction: Do urinary tract infections trigger chronic kidney transplant rejection in man? Abstracts: abstract_id: PUBMED:9598468 Do urinary tract infections trigger chronic kidney transplant rejection in man? Purpose: Urinary tract infections are frequent after kidney transplantation but little is known about the impact on long-term survival. As chronic rejection is the major cause of graft loss in the long term, we retrospectively analyzed the role of urinary tract infections in this process. Materials And Methods: We included in the study all adult patients who received kidney transplants at our unit between 1972 and 1991, which ensured followup of at least 5 years, and we focused on the relationship between urinary tract infections and the incidence of chronic rejection episodes. To analyze the influence of urinary tract infections on chronic rejection patients were separated into those in whom biopsy proved chronic rejection developed within the first 5 years after transplantation (chronic rejection group 225) and those without apparent signs of chronic rejection during that period (control group 351). The correlation between urinary tract infections per year and the incidence of chronic rejection was analyzed. Results: Patients with chronic rejection had more urinary tract infections per year than controls. In the first year after transplantation both groups had the highest incidence of urinary tract infections but thereafter the rate of urinary tract infections per year declined. However, the incidence consistently remained higher in the chronic rejection group. This difference reached significance by year 3 after transplantation. Furthermore, a high rate of urinary tract infections correlated with an early onset of chronic rejection. Conclusions: Urinary tract infections are an important risk factor for the onset of chronic rejection, and early and intense treatment is critical. abstract_id: PUBMED:8545866 Chronic renal allograft rejection in the first 6 months posttransplant. Between May 1, 1986 and May 31, 1992 at the University of Minnesota, we interpreted 129 renal allograft biopsy specimens (done in 48 grafts during the first 6 months posttransplant) as showing changes consistent with chronic rejection. For this retrospective analysis, we reexamined these biopsies together with clinical information to determine: (a) whether a diagnosis other than chronic rejection would have been more appropriate, (b) how early posttransplant any chronic rejection changes occurred, and (c) if the diagnosis correlated with outcome. We found that (1) chronic rejection is uncommon in the first 6 months posttransplant; it was documented in only 27 (2.4%) of 1117 renal allografts and was preceded by acute rejection in all but 3 recipients (for these 3, the first biopsy specimen showed both acute and chronic rejection). (2) Chronic vascular rejection was seen in 1 recipient as early as 1 month posttransplant; the incidence increased over time and was associated with an actual graft survival rate of only 35%. (3) The most frequent cause of arterial intimal fibrosis in the first 6 months posttransplant was arteriosclerotic nephrosclerosis (ASNS) of donor origin. Long-term graft function for recipients with ASNS was 67%. (4) Early-onset ischemia or thrombosis was seen in 14 recipients and predicted poor outcome: only 35.7% of these recipients had long-term graft function. 
(5) Cyclosporine (CsA) toxicity was implicated in only 3 recipients, who had mild diffuse interstitial fibrosis in association with elevated CsA levels. Other variables (including systemic hypertension, urinary tract infection, obstructive uropathy, neurogenic bladder, cobalt therapy, and recurrent disease) were not significantly associated with chronic renal lesions in the first 6 months posttransplant. A significant number of biopsies were originally interpreted as showing chronic rejection, but the diagnosis was changed upon reevaluation in conjunction with clinical data. We conclude that many factors coexist to produce chronic lesions in biopsies during the first 6 months posttransplant, so clinical correlation is needed before establishing a diagnosis of chronic rejection. abstract_id: PUBMED:19807981 Effect of chronic rhinosinusitis on liver transplant patients. Background: The use of immunosuppressant after liver transplantation makes transplant recipients susceptible to infections. Most infections that can alter mortality after liver transplantation are wound infections, urinary infections, and pneumonias. There is no evidence, however, that chronic rhinosinusitis can alter mortality in patients awaiting liver transplantation. We have therefore assessed the association of rhinosinusitis with mortality and prognosis after liver transplantation. Methods: The clinical records of 996 patients who received liver transplants between January 1995 and March 2005 were reviewed and the collected data were analyzed. Results: Of the 996 patients who received liver transplants between January 1995 and March 2005, 28 (2.8%) had pretransplant rhinosinusitis. Of the latter, 5 patients were treated medically, 1 patient had endoscopic sinus surgery, and 22 patients had no treatment before liver transplantation. Untreated rhinosinusitis before liver transplantation was associated with aggravated rhinosinusitis after transplantation but did not contribute to an increase in infectious mortality or overall mortality rate. Conclusion: Pretransplant chronic rhinosinusitis does not contribute to mortality in patients undergoing liver transplantation. abstract_id: PUBMED:33242860 A Case of Transplant Nephrectomy due to Chronic Graft Intolerance Syndrome. We report a case of graft intolerance syndrome in which transplant nephrectomy was performed 11 years after kidney transplantation. A 46-year-old man was admitted to our hospital in February 2018 with a mild fever, left lower abdominal pain, and gross hematuria with enlargement of the transplanted kidney. Urinary tract infection was ruled out. Because the symptoms developed after the immunosuppressants had been stopped after kidney graft loss, graft intolerance syndrome was suspected. He had lost his graft in 2016 and had stopped all immunosuppressants since January of 2017. Immunosuppressive therapy was intensified, and steroid half-pulse therapy was added for 3 days. After the steroid pulse therapy, the C-reactive protein (CRP) decreased from 6.47 mg/dL to 0.76 mg/dL, but there was little improvement in the symptoms, and the CRP then increased to 4.44 mg/dL. Transplant nephrectomy was performed in March 2018. Postoperatively, the symptoms disappeared without the administration of immunosuppressants, and the CRP decreased. Pathologically, the resected kidney graft showed persistent active allograft rejection with severe endarteritis, transplant glomerulopathy, and diffuse interstitial fibrosis. 
Massive thrombi occluded the large arteries, and there was extensive hemorrhagic cortical necrosis. Transplant nephrectomy is uncommon in patients >6 months after transplantation. However, even if more time has passed since transplantation, as in this case, transplant nephrectomy may be a valid option in some cases of severe graft intolerance syndrome. abstract_id: PUBMED:26474168 The Impact of Infection on Chronic Allograft Dysfunction and Allograft Survival After Solid Organ Transplantation. Infectious diseases after solid organ transplantation (SOT) are a significant cause of morbidity and reduced allograft and patient survival; however, the influence of infection on the development of chronic allograft dysfunction has not been completely delineated. Some viral infections appear to affect allograft function by both inducing direct tissue damage and immunologically related injury, including acute rejection. In particular, this has been observed for cytomegalovirus (CMV) infection in all SOT recipients and for BK virus infection in kidney transplant recipients, for community-acquired respiratory viruses in lung transplant recipients, and for hepatitis C virus in liver transplant recipients. The impact of bacterial and fungal infections is less clear, but bacterial urinary tract infections and respiratory tract colonization by Pseudomonas aeruginosa and Aspergillus spp appear to be correlated with higher rates of chronic allograft dysfunction in kidney and lung transplant recipients, respectively. Evidence supports the beneficial effects of the use of antiviral prophylaxis for CMV in improving allograft function and survival in SOT recipients. Nevertheless, there is still a need for prospective interventional trials assessing the potential effects of preventive and therapeutic strategies against bacterial and fungal infection for reducing or delaying the development of chronic allograft dysfunction. abstract_id: PUBMED:25380892 Chronic allograft dysfunction in kidney transplant recipients: long-term single-center study. Objective: The aim of this study was to determine the prevalence and risk factors responsible for the occurrence and progression of chronic allograft dysfunction (CAD) among patients treated in our transplant center. Material And Methods: Retrospective analysis included 637 kidney allograft recipients transplanted between 1990 and 2003 with functioning graft for at least 1 year. CAD was diagnosed based on increased creatinine concentration ≥ 2 mg/dL, occurrence of proteinuria, and worsening of arterial hypertension. In immunosuppressive treatment, 50% of patients received cyclosporine A (CsA), azathioprine, and prednisone; 25% received CsA, mycophenolate mofetil (MMF), and prednisone; whereas 20% received tacrolimus, MMF, and prednisone. The influence of immune and non-immune factors before and after transplantation on the occurrence and progression of CAD was analyzed. Results: CAD was diagnosed in 43.1% of patients within 10 years after kidney transplantation. CAD development considerably worsened the actual 10-year survival rate of patients (80% versus 92%) and the graft (42% versus 92%). 
The following factors had the greatest influence (as confirmed with multivariate regression analysis) on CAD progression: proteinuria (odds ratio [OR]: 11.3; P < .0001), serum creatinine concentration > 1.5 mg/dL at 12th month (OR: 3.5; P < .0001) and 24th month (OR: 6.69; P < .0001), cytomegalovirus (CMV) infections (OR: 3.15; P < .0001), and male gender of recipients (OR: 1.48; P < .04). In the CAD patients, acute rejection episodes, delayed graft function, urinary tract infections, and hepatitis C virus (HCV) infections were statistically significantly more often observed compared with the group of patients with stable renal function (reference group). Moreover, in the CAD group, donors were older and recipients younger. The CAD patients had higher arterial pressure and uric acid concentration. Conclusions: During the 10-year follow-up, chronic renal allograft dysfunction developed in 43.1% of patients. Proteinuria, serum creatinine level >1.5 mg/dL at 12th and 24th month, and CMV infections were identified as the most significant CAD progression factors. CAD had detrimental effects both on graft and patient survival rates. abstract_id: PUBMED:33012072 Kidney transplantation in children with CAKUT and non-CAKUT causes of chronic kidney disease: Do they have the same outcomes? Almost half the children who undergo kidney transplantation (KTx) have congenital abnormalities of the kidney and urinary tract (CAKUT). We compared patient, graft survival, and kidney function at last follow-up between CAKUT and non-CAKUT patients after KTx. We divided the analysis into two eras: 1988-2000 and 2001-2019. Of 923 patients, 52% had CAKUT and 48% non-CAKUT chronic kidney disease (CKD). Of the latter, 341 (77%) had glomerular disease, most frequently typical HUS (32%) and primary FSGS (27%); 102 had non-glomerular disease. CAKUT patients were more often boys, younger at KTx, transplanted more frequently preemptively, but with longer time on chronic dialysis. They had less delayed graft function (DGF) and better eGFR, but higher incidence of urinary tract infection (1 year post-KTx). In both eras, 1-, 5-, and 10-year patient survival was similar in the groups, but graft survival was better in CAKUT recipients vs those with primary glomerular and primary recurrent glomerular disease: Era 1, 92.3%, 80.7%, and 63.6% vs 86.9%, 70.6%, and 49.5% (P = .02), and 76.7%, 56.6%, and 34% (P = .0003); Era 2, 96.2%, 88%, and 73.5% vs 90.3%, 76.1%, and 61% (P = .0075) and 75.4%, 54%, and 25.2% (P < .0001), respectively. Main predictors of graft loss were DGF, late acute rejection (AR), and age at KTx in CAKUT group and disease relapse, DGF, early AR, and number of HLA mismatches in recipients with glomerular disease. Graft survival was better in CAKUT patients. DGF was the main predictor of graft loss in all groups. Disease recurrence and early AR predicted graft failure in patients with glomerular disease. abstract_id: PUBMED:6818739 Late mortality and morbidity in recipients of long-term renal allografts. The experience of the Peter Bent Brigham Hospital with 217 renal allografts functioning for more than 5 years is reviewed. Patient and graft survival were similar after 5 years, with patient survival being 88 and 66% at 10 and 15 years, respectively, and graft survival 85 and 75% at the same time intervals. Actuarial graft survival at 15 years was higher than patient survival because death with a functioning graft was not considered to be graft failure. 
No differences in patients or graft survival were found between living related and cadaver donor allografts. There were 33 deaths (15.2%), occurring from 5 1/2 to 20 1/2 years post-transplantation. Chronic liver failure and sepsis were the most common causes of death. Thirty-two patients (14.7%) lost their grafts after 5 years, most commonly from chronic rejection. Another 33 patients (15.2%) had evidence of graft dysfunction secondary to chronic rejection, recurrent glomerulonephritis, ureteral obstruction, or renal artery stenosis. Chronic rejection was generally not responsive to alterations in immunosuppressive medication. Complications of varying severity were common affecting 204 (94%) of the patients. The most frequent were hypertension, cataracts, avascular necrosis, malignancy, urinary tract infection, and pneumonia. These data demonstrate that transplant-related mortality and morbidity continue to occur in recipients of long-term renal allografts. These patients require careful and continuing care in medical centers experienced in transplantation. abstract_id: PUBMED:335086 The application of ileal conduits in pediatric renal transplantation. Our experience with the use of ileal conduits as receptors of renal homografts in 5 of 41 transplant recipients during the preceding 4 years is described. These 5 chronic renal failure patients were between 6 months and 17 years old, and had unsalvageable bladders that required an ileal conduit at an elective time before transplantation. The causes of renal failure in these children included dysplasia and chronic urinary tract infection. In an attempt to retard the rate of renal insufficiency all patients underwent multiple operative procedures on the lower urinary tract. Two cadaveric and 3 kidneys from living relatives were transplanted. One infant (6 months old) died of severe fluid and electrolyte imbalance 5 days post-transplantation. Of the remaining patients 3 are now 2 to 3 1/2 years post-transplantation and exhibit normal homograft function, and 1 died recently of chronic rejection. All conduits functioned well without evidence of ureteroileal obstruction, significant infection or stomal stenosis. abstract_id: PUBMED:6764861 Vesicoureteral reflux and reflux nephropathy. Vesicoureteral reflux is an anatomic abnormality, mostly affecting a pediatric population, which may be the second leading cause of end-stage renal failure. Most cases of reflux are due to abnormalities in the insertion of the ureters into the bladder, either congenital or acquired. Most commonly, VUR is discovered during routine evaluation of urinary tract infections, but may also be present in patients with severe hypertension or chronic renal failure. The diagnosis is confirmed radiologically, utilizing either voiding cinecystography or radioisotopic methods. VUR can result in renal failure through scarring secondary to 'chronic pyelonephritis' or through a glomerulopathy, possibly immune in origin. In most series, the glomerulopathy is felt to be the cause of the end-stage renal failure. Treatment of VUR includes conservative (medical) management with the hope that maturation of the ureterovesical junction will cure reflux. Surgical therapy is reserved for those patients in whom this maturation is not expected to occur or in those whose urinary infections cannot be controlled. In those patients who have developed the glomerulopathy secondary to VUR, surgery may not halt the progression of the renal disease. 
VUR in a transplanted kidney may result in a higher risk of loss of the graft due to glomerulopathy or chronic rejection. Answer: Yes, urinary tract infections (UTIs) have been identified as a risk factor for chronic kidney transplant rejection in humans. A study analyzing the impact of UTIs on long-term kidney transplant survival found that patients with chronic rejection experienced more UTIs per year compared to controls. The highest incidence of UTIs for both groups was in the first year after transplantation, but the rate remained consistently higher in the chronic rejection group, particularly by the third year post-transplantation. This study concluded that UTIs are an important risk factor for the onset of chronic rejection, emphasizing the need for early and intense treatment of UTIs in kidney transplant recipients (PUBMED:9598468). Another study that reexamined renal allograft biopsy specimens from the first 6 months post-transplant found that chronic rejection was uncommon during this period and was often preceded by acute rejection. The study did not find a significant association between UTIs and chronic renal lesions in the first 6 months post-transplant (PUBMED:8545866). Furthermore, a long-term single-center study on chronic allograft dysfunction (CAD) in kidney transplant recipients identified urinary tract infections, among other factors, as being observed more often in patients with CAD compared to those with stable renal function. This suggests that UTIs may contribute to the progression of CAD (PUBMED:25380892). In summary, evidence from these studies supports the notion that UTIs can be a contributing factor to chronic kidney transplant rejection, highlighting the importance of managing UTIs effectively in transplant patients to improve long-term graft survival.
Instruction: Short-segment pedicle instrumentation of thoracolumbar burst fractures: does transpedicular intracorporeal grafting prevent early failure? Abstracts: abstract_id: PUBMED:11154543 Short-segment pedicle instrumentation of thoracolumbar burst fractures: does transpedicular intracorporeal grafting prevent early failure? Study Design: A prospective, randomized study comparing two treatment methods for thoracolumbar burst fractures: short-segment instrumentation with transpedicular grafting and the same procedure without transpedicular grafting. Objective: To evaluate the efficacy of transpedicular grafting in preventing failure of short-segment fixation for the treatment of thoracolumbar burst fractures. Summary Of Background Data: Short-segment pedicle instrumentation for thoracolumbar burst fractures is known to fail early because of the absence of anterior support. Additional transpedicular grafting has been offered as an alternative to prevent this failure. However, there is controversy about the results of transpedicular grafting. Methods: Twenty patients with thoracolumbar burst fractures were included in the study. The inclusion criterion was the presence of fractures through the T11-L3 vertebrae without neurologic compromise. The patients were randomized by a simple method into two groups. Group 1 patients were treated using short-segment instrumentation with transpedicular grafting (TPG) (n = 10), and Group 2 patients were treated by short-segment fixation alone (NTPG) (n = 10). Clinical (Likert's questionnaire) and radiologic (sagittal index, percentage of anterior body height compression, and local kyphosis) outcomes were analyzed. Results: The two groups were similar in age, follow-up period, and severity of the deformity and fracture. The postoperative and follow-up sagittal index, percentage of anterior body height compression, and average correction loss in local kyphosis in both groups were not significantly different. The failure rate, defined as an increase of 10 degrees or more in local kyphosis and/or screw breakage, was also not significantly different (TPG = 50%, NTPG = 40%, P = 0.99). Conclusions: Short-segment transpedicular instrumentation of thoracolumbar burst fractures is associated with a high rate of failure that cannot be decreased by additional transpedicular intracorporeal grafting. abstract_id: PUBMED:19096554 Short-segment Pedicle Instrumentation of Thoracolumbar Burst-compression Fractures; Short Term Follow-up Results. Objective: The current literature implies that the use of short-segment pedicle screw fixation for spinal fractures is dangerous and inappropriate because of its high failure rate, but favorable results have been reported. The purpose of this study is to report the short term results of thoracolumbar burst and compression fractures treated with short-segment pedicle instrumentation. Methods: A retrospective review of all surgically managed thoracolumbar fractures during six years were performed. The 19 surgically managed patients were instrumented by the short-segment technique. Patients' charts, operation notes, preoperative and postoperative radiographs (sagittal index, sagittal plane kyphosis, anterior body compression, vertebral kyphosis, regional kyphosis), computed tomography scans, neurological findings (Frankel functional classification), and follow-up records up to 12-month follow-up were reviewed. Results: No patients showed an increase in neurological deficit. 
A statistically significant difference existed between the patients' preoperative, postoperative and follow-up sagittal index, sagittal plane kyphosis, anterior body compression, vertebral kyphosis and regional kyphosis. One screw pullout resulted in kyphotic angulation, one screw was misplaced and one patient suffered angulation of the proximal segment on follow-up, but these findings were not related to the radiographic findings. Significant bending of screws or hardware breakage was not encountered. Conclusion: Although long-term follow-up evaluation needs to be verified, the short-term follow-up results suggest a favorable outcome for short-segment instrumentation. When applied to patients with isolated spinal fractures who were cooperative with 3-4 months of spinal bracing, short-segment pedicle screw fixation using the posterior approach seems to provide a satisfactory result. abstract_id: PUBMED:31786380 Minimally Invasive Decompression and Intracorporeal Bone Grafting Combined with Temporary Percutaneous Short-Segment Pedicle Screw Fixation for Treatment of Thoracolumbar Burst Fracture with Neurological Deficits. Objective: We evaluated the clinical and radiographic outcomes of patients with thoracolumbar burst fractures and neurological deficits treated with minimally invasive decompression and intracorporeal bone grafting combined with percutaneous short-segment stabilization. Methods: Patients with thoracolumbar burst fractures and neurological deficits underwent minimally invasive decompression and intracorporeal bone grafting combined with percutaneous short-segment stabilization. Instrumentation was removed approximately 1 year after vertebral fracture union. The clinical and radiographic outcomes were analyzed. Results: The mean operative duration and intraoperative bleeding volume were 135 ± 63 minutes and 120 ± 200 mL, respectively. The average American Spinal Injury Association impairment scale scores had significantly improved at the final follow-up examination. The visual analog scale score had decreased from 7.8 ± 1.1 preoperatively to 2.9 ± 1.3 (P < 0.05) at 1 week postoperatively. The Oswestry disability index had decreased from 86.1 ± 8.8 preoperatively to 15.9 ± 6.4 (P < 0.05) at 1 year postoperatively. The canal stenosis index had improved from 43.4% ± 12.0% to 93.8% ± 4.8% (P < 0.05). The sagittal Cobb angle had been corrected from 17.8° ± 7.5° to 4.0° ± 1.9° (P < 0.05) and remained at 4.9° ± 2.0° (P > 0.05) at 1 year postoperatively. The sagittal index had been corrected from 16.6° ± 6.1° to 0.3° ± 4.6° (P < 0.05) and remained at 1.5° ± 4.5° (P > 0.05) at 1 year postoperatively. The anterior vertebral height had increased from 49.3% ± 11.1% to 97.6% ± 6.5% (P < 0.05) and remained at 95.7% ± 6.0% (P > 0.05) at 1 year postoperatively. After implant removal, the total kyphosis correction losses were 1.5° ± 0.8° for the Cobb angle, 2.0° ± 1.1° for the sagittal index, and 3.4% ± 2.1% for the anterior vertebral height. One pullout screw and one broken rod were found in 1 patient each. Conclusion: Minimally invasive decompression and intracorporeal bone grafting combined with percutaneous short-segment fixation yielded satisfactory results in decompression and immediate kyphosis correction. Additionally, this procedure resulted in maintenance of the vertebral height and prevented late correction loss after implant removal.
abstract_id: PUBMED:28043420 Two additional augmenting screws with posterior short-segment instrumentation without fusion for unstable thoracolumbar burst fracture - Comparisons with transpedicular grafting techniques. Background: Transpedicular grafting techniques with posterior short-segment instrumentation have been shown to prevent the high implant failure rate in unstable thoracolumbar burst fractures. We tested our hypothesis that short-segment instrumentation with two additional augmenting screws in the injured vertebra could provide stability similar to that of the transpedicular grafting technique. Methods: Twenty patients belonged to group A; they were treated with short-segment pedicle screw fixation reinforced by two augmenting screws at the fractured vertebra. Group B had thirty-one patients; the fractured vertebra was augmented with transpedicular autogenous bone graft. Group C had twenty patients; the injured vertebra was strengthened with calcium sulfate cement. Clinical outcome and radiographic parameters were compared. Results: Group A had the least blood loss (101.7 ± 72.5 vs. 600 ± 403.1 vs. 247.5 ± 164.2 ml, p < 0.001) and the least operation time (142.0 ± 57.2 vs. 227.2 ± 43.6 vs. 161.6 ± 28.5 min, p < 0.001). However, group A had the highest rate of collapse of body height at the 18-month follow-up (10.5 ± 7.0 vs. 4.6 ± 4.8 vs. 7.2 ± 8.5%, p = 0.002). For the failure rate, defined as implant failure or loss of 10° or more of correction, group B had the lowest rate (10% vs. 3.2% vs. 10%, p = 0.542). Group A had the highest rate of return to previous employment (50% vs. 38% vs. 35%, p = 0.265). Conclusions: Compared with transpedicular grafting techniques, two additional "augmenting screws" in the fractured vertebra with short-segment instrumentation are sufficient for one-level thoracolumbar burst fracture. abstract_id: PUBMED:29856666 Clinical Effects of Posterior Limited Long-Segment Pedicle Instrumentation for the Treatment of Thoracolumbar Fractures. Objective: The purpose of this study was to assess the clinical effects of treating thoracolumbar fractures with posterior limited long-segment pedicle instrumentation (LLSPI). Methods: A total of 58 thoracolumbar fracture patients were retrospectively analyzed, including 31 cases that were fixed by skipping the fractured vertebra with 6 screws using LLSPI and 27 cases that were fixed by skipping the fractured vertebra with 4 screws using short-segment pedicle instrumentation (SSPI). Surgery time, blood loss, hospital stay, Oswestry disability index (ODI), neurological function, sagittal kyphotic Cobb angle (SKA), percentage of anterior vertebral height (PAVH), instrumentation failure, and the loss of SKA and PAVH were recorded before and after surgery. Results: No significant differences were observed in either the surgery time or hospital stay (P > 0.05), while there were significant differences in blood loss between the two groups. At the final follow-up, both the ODI and the neurological status were notably improved compared to those at the preoperative state (P < 0.05), but the difference between the two groups was relatively small. Furthermore, the SKA and PAVH were notably improved at the final follow-up compared to postoperative values (P < 0.05), but no significant difference was observed between the two groups. During long-term follow-up, the loss of SKA and PAVH in the LLSPI group was significantly less than that in the SSPI group (P < 0.05).
Conclusion: Based on strict criteria for data collection and analysis, the clinical effects of LLSPI for the treatment of thoracolumbar fractures were satisfactory, especially for maintaining the height of the fractured vertebra and reducing the loss of SKA and instrumentation failure rates. abstract_id: PUBMED:28243383 Short Segment versus Long Segment Pedicle Screws Fixation in Management of Thoracolumbar Burst Fractures: Meta-Analysis. Posterior pedicle screw fixation has become a popular method for treating thoracolumbar burst fractures. However, it remains unclear whether additional fixation of more segments could improve clinical and radiological outcomes. This meta-analysis was performed to evaluate the effectiveness of fixation levels with pedicle screw fixation for thoracolumbar burst fractures. MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, Springer, and Google Scholar were searched for relevant randomized and quasirandomized controlled trials that compared the clinical and radiological efficacy of short versus long segment fixation for thoracolumbar burst fractures managed by posterior pedicle screw fixation. Risk of bias in included studies was assessed using the Cochrane Risk of Bias tool. Based on predefined inclusion criteria, nine eligible trials with a total of 365 patients were included in this meta-analysis. Results were expressed as risk difference for dichotomous outcomes and standardized mean difference for continuous outcomes with 95% confidence intervals. Baseline characteristics were similar between the short and long segment fixation groups. No significant difference was identified between the two groups regarding radiological outcome, functional outcome, neurologic improvement, and implant failure rate. The results of this meta-analysis suggested that extension of fixation was not necessary when thoracolumbar burst fracture was treated by posterior pedicle screw fixation. More randomized controlled trials with high quality are still needed in the future. abstract_id: PUBMED:21139791 Treatment of acute thoracolumbar burst fractures with kyphoplasty and short pedicle screw fixation: Transpedicular intracorporeal grafting with calcium phosphate: A prospective study. Background: In the surgical treatment of thoracolumbar fractures, the major problem after posterior correction and transpedicular instrumentation is failure to support the anterior spinal column, leading to loss of correction and instrumentation failure with associated complaints. We conducted this prospective study to evaluate the outcome of the treatment of acute thoracolumbar burst fractures by transpedicular balloon kyphoplasty, grafting with calcium phosphate cement and short pedicle screw fixation plus fusion. Materials And Methods: Twenty-three consecutive patients with thoracolumbar (T9 to L4) burst fractures, with or without neurologic deficit and with an average age of 43 years, were included in this prospective study. Twenty-one of the 23 patients had a single burst fracture while the remaining two patients had a burst fracture and additionally an adjacent A1-type fracture. On admission six (26%) out of 23 patients had neurological deficit (five incomplete, one complete). Bilateral transpedicular balloon kyphoplasty with liquid calcium phosphate to reduce segmental kyphosis and restore vertebral body height and short (three vertebrae) pedicle screw instrumentation with posterolateral fusion was performed.
Gardner kyphosis angle, anterior and posterior vertebral body height ratio and spinal canal encroachment were calculated pre- to postoperatively. Results: All 23 patients were operated on within two days after admission and were followed for at least 12 months after index surgery. Operating time and blood loss averaged 45 min and 60 cc, respectively. The five patients with incomplete neurological lesions improved by at least one ASIA grade, while no neurological deterioration was observed in any case. The VAS and SF-36 (Role physical and Bodily pain domains) were significantly improved postoperatively. Overall sagittal alignment was improved from an average preoperative 16° to one degree kyphosis at the final follow-up observation. The anterior vertebral body height ratio improved from 0.6 preoperatively to 0.9 (P<0.001) postoperatively, while posterior vertebral body height improved from 0.95 to 1 (P<0.01). Spinal canal encroachment was reduced from an average 32% preoperatively to 20% postoperatively. Cement leakage was observed in four cases (three anterior to vertebral body and one into the disc without sequelae). In the last CT evaluation, there was a continuity between calcium phosphate and cancellous vertebral body bone. Posterolateral radiological fusion was achieved within six months after index operation. There was no instrumentation failure or measurable loss of sagittal curve and vertebral height correction in any group of patients. Conclusions: Balloon kyphoplasty with calcium phosphate cement secured with posterior short fixation in the thoracolumbar spine provided excellent immediate reduction of posttraumatic segmental kyphosis and significant spinal canal clearance and restored vertebral body height at the fracture level. abstract_id: PUBMED:29798565 Treatment of thoracolumbar burst fractures with short-segment pedicle instrumentation and recombinant human bone morphogenetic protein 2 and allogeneic bone grafting in injured vertebra. Objective: To investigate whether recombinant human bone morphogenetic protein 2 (rhBMP-2) and allogeneic bone grafting in the injured vertebra, combined with short-segment pedicle instrumentation, can prevent the loss of correction and vertebral defects after thoracolumbar burst fractures. Methods: A prospective randomized controlled study was performed in 48 patients with thoracolumbar fracture who were assigned into 2 groups between June 2013 and June 2015. The control group (n=24) received short-segment pedicle screw instrumentation with allogeneic bone implanted in the injured vertebra; the intervention group (n=24) received short-segment pedicle screw instrumentation combined with rhBMP-2 and allogeneic bone grafting in the injured vertebra. There was no significant difference in gender, age, injury cause, affected segment, vertebral compression degree, the thoracolumbar injury severity score (TLICS), Frankel grading for neurological symptoms, Cobb angle, compression rate of anterior vertebral height between the 2 groups before operation (P>0.05). The Cobb angle, compression rate of anterior vertebral height, intervertebral height changes, and defects in injured vertebra at last follow-up were compared between 2 groups. Results: All the patients were followed up 21-45 months (mean, 31.3 months). Bone healing was achieved in 2 groups, and there was no significant difference in healing time of fracture between intervention group [(7.6±0.8) months] and control group [(7.5±0.8) months] (t=0.336, P=0.740).
The Frankel grading of all patients had reached grade E at last follow-up. The Cobb angle and compression rate of anterior vertebral height at 1 week after operation and last follow-up were significantly improved when compared with preoperative ones in 2 groups (P<0.05). There was no significant difference in Cobb angle and compression rate of anterior vertebral height between 2 groups at 1 week after operation (P>0.05), but the above indexes in intervention group were better than those in control group at last follow-up (P<0.05). At last follow-up, there was no significant difference in intervertebral height changes of internal fixation adjacent upper position, injured vertebra adjacent upper position, injured vertebra adjacent lower position, and internal fixation adjacent lower position between 2 groups (P>0.05). Defects in injured vertebra happened in 18 cases (75.0%) in control group and 5 cases (20.8%) in intervention group, showing significant difference (χ2=14.108, P=0.000); and in patients with defects in injured vertebra, bone defect degree was 7.50%±3.61% in control group, and was 2.70%±0.66% in intervention group, showing significant difference (t=6.026, P=0.000). Conclusion: Treating thoracolumbar fractures with short-segment pedicle screw instrumentation with rhBMP-2 and allogeneic bone grafting in injured vertebra can prevent the loss of correction and vertebral defects. abstract_id: PUBMED:27931944 Restoration of Anterior Vertebral Height by Short-Segment Pedicle Screw Fixation with Screwing of Fractured Vertebra for the Treatment of Unstable Thoracolumbar Fractures. Background: The treatment of unstable thoracolumbar fractures remains controversial. Long-segment pedicle screw constructs may be stiffer and impart greater forces on adjacent segments compared with short-segment constructs. Short-segment pedicle screw fixation alone may be associated with instrumentation failure. Reinforcement of the fractured vertebra by the placement of an additional 2 screws at the fracture level may be useful in thoracolumbar fractures for restoration of anterior vertebral height. Material And Methods: We retrospectively analyzed 35 patients (21 males, 14 females) with unstable thoracolumbar fractures. The patients were divided into 2 groups. In group I, patients were operated on via a posterior approach using long pedicle screw fixation (2 levels above and 1 or 2 levels below the fractured vertebra). In group II patients, short-segment stabilization with additional screwing at the fracture level was performed. Immediate postoperative radiologic evaluations were done by measuring the correction and maintenance of kyphotic angle at the fracture level, Cobb angle, and height of fractured vertebra. Results: Average local kyphosis angle, anterior kyphotic angle at the fracture level, and Cobb angle were not statistically significantly different in the postoperative period (P > 0.05); however, postoperative anterior height of fractured vertebra was statistically significantly different between the 2 groups (P < 0.05). Conclusions: We compared a standard long-segment construct with a short-segment construct using instrumentation of the fractured segment. Short-segment pedicle screw fixation with screwing of fractured vertebra in unstable thoracolumbar fracture levels is an effective method of restoring anterior vertebral height for the treatment of unstable thoracolumbar fractures. It also provides anterior column support.
abstract_id: PUBMED:27541493 Treatment of unstable thoracolumbar junction fractures: short-segment pedicle fixation with inclusion of the fracture level versus long-segment instrumentation. Background: The surgical management of thoracolumbar burst fractures frequently involves posterior pedicle screw fixation. However, the application of short- or long-segment instrumentation is still controversial. The aim of this study was to compare the outcome of the short-segment fixation with inclusion of the fracture level (SSFIFL) versus the traditional long-segment fixation (LSF) for the treatment of unstable thoracolumbar junction fractures. Methods: From December 2009 to February 2014, 60 patients with unstable thoracolumbar junction fractures (T11-L2) were divided into two groups according to the number of instrumented levels. Group 1 included 30 patients treated by SSFIFL (six-screw construct including the fracture level). Group 2 included 30 patients treated by LSF (eight-screw construct excluding the fracture level). Local kyphosis angle (LKA), anterior body height (ABH), posterior body height (PBH), ABH/PBH ratio of fractured vertebra, and Asia Scale Impairment Scale were evaluated. Results: The two groups were similar in regard to age, sex, trauma etiology, fracture level, fracture type, neurologic status, pre-operative LKA, ABH, PBH, and ABH/PBH ratio and follow-up (p > 0.05). Reduction of post-traumatic kyphosis (assessed with LKA) and restoration of fracture-induced wedge shape of the vertebral body (assessed with ABH, PBH, and ABH/PBH ratio) at post-operative period were not significantly different between group 1 and group 2 (p = 0.234; p = 0.754). There was no significant difference between the two groups in term of correction loss at the last follow-up too (LKA was 15.97° ± 5.62° for SSFIFL and 17.76° ± 11.22° for LSF [p = 0.427]). Neurological outcome was similar in both groups. Conclusions: Inclusion of fracture level in a short-segment fixation for a thoracolumbar junction fractures results in a kyphosis correction and in a maintenance of the sagittal alignment similar to a long-segment instrumentation. Finally, this technique allowed us to save two or more segments of vertebral motion. Answer: Based on the provided abstracts, transpedicular intracorporeal grafting does not appear to prevent early failure of short-segment pedicle instrumentation in thoracolumbar burst fractures. A prospective, randomized study comparing short-segment instrumentation with and without transpedicular grafting found no significant difference in the failure rate between the two groups (TPG = 50%, NTPG = 40%, P = 0.99) (PUBMED:11154543). This suggests that additional transpedicular intracorporeal grafting does not decrease the high rate of failure associated with short-segment transpedicular instrumentation of thoracolumbar burst fractures.
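For the key comparison in PUBMED:11154543 cited in the answer above (failure in 5 of 10 TPG patients versus 4 of 10 NTPG patients, reported as P = 0.99), the short sketch below recomputes the group comparison. The abstract does not state which statistical test the authors used, so the two-sided Fisher's exact test here is an assumption; on these counts it gives p = 1.0, consistent with the reported non-significance.
from math import comb
# Failure counts from PUBMED:11154543: 5/10 (TPG) vs. 4/10 (NTPG).
# The test used by the authors is not stated; Fisher's exact test is assumed here.
a, b = 5, 5   # TPG: failures, non-failures
c, d = 4, 6   # NTPG: failures, non-failures
n, r1, c1 = a + b + c + d, a + b, a + c   # grand total and fixed margins
def hypergeom_pmf(k):
    # Probability of k failures in the TPG row given the fixed margins.
    return comb(c1, k) * comb(n - c1, r1 - k) / comb(n, r1)
p_obs = hypergeom_pmf(a)
k_min, k_max = max(0, r1 + c1 - n), min(r1, c1)
# Two-sided p-value: sum the probabilities of all tables as likely or less
# likely than the observed one (the same convention scipy.stats.fisher_exact uses).
p_two_sided = sum(hypergeom_pmf(k) for k in range(k_min, k_max + 1)
                  if hypergeom_pmf(k) <= p_obs * (1 + 1e-9))
print(f"TPG failure rate : {a / r1:.0%}")
print(f"NTPG failure rate: {c / (c + d):.0%}")
print(f"two-sided Fisher p = {p_two_sided:.3f}")   # 1.000 on these counts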
Instruction: Penetrating stab injuries at a single urban unit: are we missing the point? Abstracts: abstract_id: PUBMED:23890398 Penetrating trauma in urban women: patterns of injury and violence. Background: Penetrating trauma is known to occur with less frequency in women than in men, and this difference has resulted in a lack of characterization of penetrating injury patterns involving women. We hypothesized that the nature of penetrating injury differs significantly by gender and that these injuries in women are associated with important psychosocial and environmental factors. Materials And Methods: A level 1 urban trauma center registry was queried for all patients with penetrating injuries from 2002-2010. Patient and injury variables (demographics and mechanism of injury) were abstracted and compared between genders; additional social and psychiatric histories and perpetrator information were collected from the records of admitted female patients. Results: Injured women were more likely to be Caucasian, suffer stab wounds instead of gunshot wounds, and present with a higher blood alcohol level than men. Compared with women with gunshot wounds, those with stab wounds were three times more likely to report a psychiatric or intimate partner violence history. Women with self-inflicted injuries had a significantly greater incidence of prior penetrating injury and psychiatric and criminal history. Male perpetrators outnumbered female perpetrators; patients frequently not only knew their perpetrator but also were their intimate partners. Intimate partner violence and random cross-fire incidents each accounted for about a quarter of injuries observed. Conclusions: Penetrating injuries in women represent a nonnegligible subset of injuries seen in urban trauma centers. Psychiatric and social risk factors for violence play important roles in these cases, particularly when self-infliction is suspected. Resources allocated for urban violence prevention should proportionately reflect the particular patterns of violence observed in injured women. abstract_id: PUBMED:24867087 Penetrating stab injuries at a single urban unit: are we missing the point? Background: Penetrating trauma--the classical presentation of disorganised crime--can pose a challenge in their management due to their complexity and unpredictability. Aim: We examined the experience of one urban unit in the management of penetrating injuries to draw conclusions pertinent to other Irish centres. Methods: A retrospective study was performed of all penetrating injuries presenting to the Emergency Department (ED) of Connolly Hospital, Dublin between January 2009 and December 2012. Information was collected from the Hospital Inpatient Enquiry database, theatre logbooks and ED records. Results: One hundred and four patients presented with penetrating injuries in the given period. Four mortalities were recorded. Abdominal injury was recorded in 22% of patients; 26% had multiple injuries not involving the abdomen; 11% had an isolated thoracic injury. Fifty-seven percent required surgery, of which 40% required emergency or early surgical intervention. Laparotomy and laparoscopy were required in 14 and 7%, respectively; 5% required thoracotomy of which two had penetrating cardiac injuries, both of whom survived. Conclusions: Although many patients with penetrating trauma can be safely managed conservatively, our study shows that over half required surgical intervention. 
These data highlight the need for a trauma team in each Irish centre receiving trauma, with a clear need for general surgeons on emergency on-call rotas to be experienced in trauma management. There is an urgent need to centralise the management of trauma to a limited number of designated trauma centres where expertise is available from surgeons with a special interest in trauma management. abstract_id: PUBMED:36895221 Stephanion to cranial base penetrating stab wound with outstanding recovery: A case report. Background: Mortality due to head trauma is common in developed countries in all age groups. Nonmissile penetrating skull base injuries (PSBIs) due to foreign bodies are quite rare, accounting for about 0.4%. PSBI carries a poor prognosis; brainstem involvement is usually fatal. We are reporting the first case of PSBI with a foreign body insertion site through the stephanion with a remarkable outcome. Case Description: The 38-year-old male patient was referred with a penetrating stab wound to the head through the stephanion caused by a knife after a conflict in the street. He had no focal neurological deficit or cerebrospinal fluid leak, and his Glasgow Coma Scale (GCS) score was 15/15 on admission. A preoperative computed tomography scan showed the path of the stab beginning at the stephanion, which is the point where the coronal suture crosses the superior temporal line, heading toward the cranial base. Postoperatively, GCS was 15/15 without any deficit apart from the left wrist drop, possibly due to a left arm stab. Conclusion: Careful investigations and diagnoses must be made to ensure adequate knowledge of the case due to the variety of injury mechanisms, foreign body characteristics, and individual patients' characteristics. Reported cases of PSBIs in adults have not reported a stephanion skull base injury. Although brain stem involvement is usually fatal, our patient had a remarkable outcome. abstract_id: PUBMED:20476680 Transorbital stab penetrating brain injury. Report of a case. Introduction: Penetrating injury of the skull and brain is relatively uncommon, representing about 0.4% of head injuries. In this paper the authors describe the case of a patient who sustained a transorbital stab wound with brain injury and made a good recovery, and review the literature on cranial stab wounds. Case Report: A 23-year-old man was involved in an altercation which resulted in wounds to the head, including a penetrating left transorbital injury affecting the eye. On arrival at the first trauma center the patient was conscious and completely responsive, with a Glasgow Coma Scale score of 15 and a grade III motor deficit. CT scan demonstrated left periventricular brain hematoma and supraorbital fracture. A four-vessel cerebral angiogram demonstrated no abnormality. The patient went on to have a good neurologic outcome. Conclusion: In conscious patients with no surgical lesion, like our patient, hospital discharge must occur only after the angiogram has excluded an intracranial vascular lesion. abstract_id: PUBMED:30683660 Multiple foreign bodies in the facial region from a penetrating stab injury. Penetrating injuries can lead to multiple retained foreign bodies. To present a case of a penetrating stab injury to the right orbital region of a 37-year-old woman which resulted in lacerations of both eyelids and loss of vision, in addition to the retention of a glass particle and woven artificial hair strands at the anterior end of the floor of the orbit.
The woven artificial hair strand, being flexible in nature, was apparently lodged in by the penetrating force of the broken glass used as the stab injury object. Under local anaesthesia, a gentle intermittent pull on one hair strand led to the dislodgement of a broken glass particle along with the other end of the hair strand. The resultant wound was repaired. Stab injuries can result in multiple retained foreign bodies. This possibility should be considered during assessment and management of facial injuries to avoid complications of retention. abstract_id: PUBMED:26379908 The predictive value of physical examination in the decision of laparotomy in penetrating anterior abdominal stab injury. A selective conservative treatment for penetrating anterior abdominal stab injuries is an increasingly recognized approach. We analyzed patients who were followed up and treated for penetrating anterior abdominal stab injuries. The anterior region was defined as the area between the arcus costa at the top and the mid-axillary lines laterally and the inguinal ligaments and symphysis pubis at the bottom. An emergency laparotomy was performed on patients who were hemodynamically unstable or had symptoms of peritonitis or organ evisceration; the remaining patients were followed up selectively and conservatively. A total of 175 patients with purely anterior abdominal injuries were included in the study. One hundred and sixty-five of the patients (94.29%) were males and 10 (5.71%) were females; the mean age of the cohort was 30.85 years (range: 14-69 years). While 16 patients (9%) underwent an emergency laparotomy due to hemodynamic instability, peritonitis or evisceration, the remaining patients were hospitalized for observation. During the selective conservative follow-up, an early laparotomy was performed in 20 patients (12%), and a late laparotomy was performed in 13 patients (7%); the remaining 126 patients (72%) were discharged after non-operative follow-up. A laparotomy was performed on 49 patients (28%); the laparotomy was therapeutic for 42 patients (86%), non-therapeutic for 4 patients (8%), and negative for 3 patients (6%). A selective conservative approach based on physical examination and clinical follow-up in penetrating anterior abdominal stab injuries is an effective treatment approach. abstract_id: PUBMED:26105131 Evaluation of diaphragm in penetrating left thoracoabdominal stab injuries: The role of multislice computed tomography. Introduction: Penetrating left thoracoabdominal stab injuries are accompanied by diaphragmatic injury in 25-30% of cases, about 30% of which later develop into diaphragmatic hernia. This study aimed to determine the role of multislice computed tomography in the evaluation of left diaphragm in patients with penetrating left thoracoabdominal stab wounds. Materials And Methods: This study reviewed penetrating left thoracoabdominal stab injuries managed in our clinic between April 2009 and September 2014. The thoracoabdominal region was defined as the region between the sternum, fourth intercostal space, and arcus costa anteriorly and the vertebra, lower tip of scapula, and the curve of the last rib posteriorly. Unstable cases and cases with signs of peritonitis underwent laparotomy; the remaining patients were closely monitored. Forty-eight hours later, a diagnostic laparoscopy was performed to evaluate the left hemidiaphragm in asymptomatic patients who did not need laparotomy.
The preoperatively obtained multislice thoracoabdominal computed tomography images were retrospectively examined for the presence of left diaphragm injury. Then, operative and tomographic findings were compared. Results: This study included a total of 43 patients, 39 (91%) males and 4 (9%) females of mean age 30 years (range 15-61 years). Thirty patients had normal tomography results, whereas 13 had left diaphragmatic injuries. An injury to the left diaphragm was detected during the operation in 9 (1 in laparotomy and 8 in diagnostic laparoscopy) of 13 patients with positive tomography for left diaphragmatic injury and 2 (in diagnostic laparoscopy) of 30 patients with negative tomography. Multislice tomography had a sensitivity of 82% (95% CI: 48-98%), a specificity of 88% (71-96%), a positive predictive value of 69% (39-91%), and a negative predictive value of 93% (78-99%) for detection of diaphragmatic injury in penetrating left thoracoabdominal stab injury. Conclusions: Although diagnostic laparoscopy is the gold standard for diaphragmatic examination in patients with penetrating left thoracoabdominal stab wounds, multislice computed tomography is also valuable for detecting diaphragmatic injury. abstract_id: PUBMED:35430645 Development of Diaphragmatic Hernia in Patients with Penetrating Left Thoracoabdominal Stab Wounds. Background: This study aimed to investigate the consequences of repairing versus not repairing diaphragmatic injury caused by penetrating left thoracoabdominal stab wounds. Methods: Diagnostic laparoscopy was performed to evaluate the left diaphragm in patients with penetrating left thoracoabdominal stab wounds who did not have an indication for emergency laparotomy. Patients who did not consent to laparoscopy were discharged without undergoing surgery. Post-discharge radiological images of patients who underwent diaphragmatic repair and radiological images of patients who could not undergo laparoscopy, both during hospitalization and after discharge, were evaluated and compared. Results: Diagnostic laparoscopy was performed on 109 patients. Diaphragmatic injuries were detected and repaired in 32 (29.36%) of these patients. Seventeen patients were lost to follow-up. After a mean follow-up of 57.67 months, none of the remaining 15 patients developed a diaphragmatic hernia. On the other hand, 43 patients refused to undergo diagnostic laparoscopy. Twenty of them were lost from follow-up. The diaphragmatic injury was detected in seven of the remaining 23 patients (30.44%) during initial computed tomography (CT) examinations. In this group, the mean follow-up time was 42.57 months, and delayed diaphragmatic hernia developed in one patient (14.30%). Patients who underwent diaphragmatic repair were compared to patients who did not undergo diagnostic laparoscopy but had diaphragmatic injuries detected on their CT. No statistical differences were detected. Conclusions: Diaphragmatic injuries caused by penetrating stab wounds can sometimes heal spontaneously. However, diagnostic laparoscopy is still relevant for revealing and repairing possible diaphragmatic injuries. abstract_id: PUBMED:34160655 Leukocytes are not Reliable in Predicting Possible Diaphragmatic Injury in Patients with Penetrating Left Thoracoabdominal Stab Wounds. Background: The diaphragm is injured in approximately one-third of penetrating left thoracoabdominal stab wounds. Diagnostic laparoscopy or thoracoscopy is performed to reveal the diaphragmatic injury. 
This study investigated whether leukocytes, leukocyte subgroups, platelets, the neutrophil-to-lymphocyte ratio (NLR), and the thrombocyte-to-lymphocyte ratio (PLR) can be used to detect diaphragm injury without the need for diagnostic laparoscopy. Methods: Patients hospitalized between January 2010 and January 2020 due to penetrating left thoracoabdominal stab wounds were examined. Laparotomy was performed in patients who had indications for laparotomy, such as hemodynamic instability and peritonitis. Diagnostic laparoscopy was performed to reveal possible diaphragmatic injury in patients who did not require laparotomy after 48h of follow-up. Leukocytes, leukocyte subgroups, platelets, NLR, and PLR were measured both at admission and during follow-up, and the results were compared between patients with and without diaphragm injury during diagnostic laparoscopy. Results: The study included 108 patients with penetrating left thoracoabdominal stab wounds that did not require laparotomy after 48h of follow-up. Of these, 102 patients were male (94.44%), and the average age was 27.68 years (range 15-66 years). Diaphragm injury was detected in 31 patients (28.70%) in diagnostic laparoscopy, and the diaphragm was intact in 77 patients (71.30%). In the comparison of patients with and without diaphragmatic injury, no statistically significant difference was found in terms of age, gender, platelets, leukocyte values, NLR, and PLR both at admission and during follow-up. Conclusions: Leukocytes, leukocyte subsets, platelets, NLR, and PLR were insufficient in the detection of asymptomatic diaphragmatic injuries caused by penetrating left thoracoabdominal stab wounds. abstract_id: PUBMED:12415500 Selective approach to the penetrating stab wounds to the abdomen. Background: To present the results of the selective treatment of penetrating stab wounds to the abdomen. Methods: From December 1997 to February 200, 175 patients had penetrating stab wound injuries to the abdomen. Of the 175 patients, 61 (34.9%) in Group I were taken to the operating room urgently, 114 (65.1%) in Group II were treated selectively. Results: It is evident that the rate of unnecessary laparotomies (X2=6.7, p=0.03), morbidity rate (X2=15.4, p<0.001), and hospital stay (F=54, p<0.0001) in the selectively treated group decreased significantly when compared with the non-selective group. Conclusion: Selective approach to the penetrating stab wounds to the abdomen is a safe method. Furthermore, this method obviates most of the unnecessary laparotomies. For this reason, morbidity rate and hospitalization decrease significantly. Key words: Conservative treatment, unnecessary laparotomy, morbidity, hospital stay. Answer: The abstract from PUBMED:24867087 discusses penetrating stab injuries at a single urban unit and raises concerns about whether the current management strategies are adequate. The study conducted a retrospective review of all penetrating injuries presenting to the Emergency Department (ED) of Connolly Hospital, Dublin, between January 2009 and December 2012. It found that over half of the patients with penetrating trauma required surgical intervention, with a significant portion requiring emergency or early surgical intervention. The study highlights the need for a trauma team in each Irish center receiving trauma and emphasizes the importance of having general surgeons on emergency on-call rotas experienced in trauma management.
It suggests an urgent need to centralize the management of trauma to a limited number of designated trauma centers where expertise in trauma management is available. The question "Penetrating stab injuries at a single urban unit: are we missing the point?" seems to reflect the concerns raised in the abstract about the potential gaps in the current management of penetrating stab injuries. The phrase "missing the point" could be interpreted as questioning whether the current approach to treating these injuries is overlooking critical aspects of care, such as the need for specialized trauma teams and centralized management at designated trauma centers to improve outcomes for patients with penetrating trauma.
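As an arithmetic cross-check on the diagnostic-accuracy figures quoted in the multislice computed tomography abstract above (43 patients; 13 positive scans, 9 of them with an injury confirmed at operation; 30 negative scans, 2 of them with an injury found at laparoscopy), the sensitivity, specificity and predictive values can be recomputed from the implied 2x2 table. The short Python sketch below is illustrative only: the counts come from the abstract, but the variable names are assumptions and no confidence intervals are recomputed, so it simply confirms the point estimates of 82%, 88%, 69% and 93%.

# Counts implied by the CT abstract above (assumed 2x2 layout).
tp = 9          # CT positive, diaphragm injury confirmed at operation
fp = 13 - 9     # CT positive, no injury found
fn = 2          # CT negative, injury found at diagnostic laparoscopy
tn = 30 - 2     # CT negative, diaphragm intact

sensitivity = tp / (tp + fn)   # 9/11  ~ 0.82
specificity = tn / (tn + fp)   # 28/32 ~ 0.88
ppv = tp / (tp + fp)           # 9/13  ~ 0.69
npv = tn / (tn + fn)           # 28/30 ~ 0.93

print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%} PPV={ppv:.0%} NPV={npv:.0%}")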
Instruction: Vein involvement during pancreaticoduodenectomy: is there a need for redefinition of "borderline resectable disease"? Abstracts: abstract_id: PUBMED:23620151 Vein involvement during pancreaticoduodenectomy: is there a need for redefinition of "borderline resectable disease"? Introduction: Current National Comprehensive Cancer Network guidelines recommend neoadjuvant therapy for borderline resectable pancreatic adenocarcinoma to increase the likelihood of achieving R0 resection. A consensus has not been reached on the degree of venous involvement that constitutes borderline resectability. This study compares the outcome of patients who underwent pancreaticoduodenectomy with or without vein resection without neoadjuvant therapy. Methods: A multi-institutional database of patients who underwent pancreaticoduodenectomy was reviewed. Patients who required vein resection due to gross vein involvement by tumor were compared to those without evidence of vein involvement. Results: Of 492 patients undergoing pancreaticoduodenectomy, 70 (14 %) had vein resection and 422 (86 %) did not. There was no difference in R0 resection (66 vs. 75 %, p = NS). On multivariate analysis, vein involvement was not predictive of disease-free or overall survival. Conclusion: This is the largest modern series examining patients with or without isolated vein involvement by pancreas cancer, none of whom received neoadjuvant therapy. Oncological outcome was not different between the two groups. These data suggest that up-front surgical resection is an appropriate option and call into question the inclusion of isolated vein involvement in the definition of "borderline resectable disease." abstract_id: PUBMED:37769516 Superior mesenteric vein/portal vein contact in preoperative imaging indicates biological malignancy in anatomically resectable pancreatic cancer. Background: Pancreatic cancer in contact with the superior mesenteric vein/portal vein is classified as resectable pancreatic cancer; however, the biological malignancy and treatment strategy have not been clarified. Methods: Data from 186 patients who underwent pancreatectomy for pancreatic cancer were evaluated using a prospectively maintained database. The patients were classified as having resectable tumors without superior mesenteric vein/portal vein contact and with superior mesenteric vein/portal vein contact of ≤180°. Disease-free survival, overall survival, and prognostic factors were analyzed. Results: In the univariate analysis, superior mesenteric vein/portal vein contact in resectable pancreatic cancer was a significant prognostic index for disease-free survival and overall survival. In the multivariate analysis for poor disease-free survival, the superior mesenteric vein/portal vein contact remained significant (hazard ratio = 2.13, 95% confidence interval: 1.29-3.51; p < 0.01). In the multivariate analysis, superior mesenteric vein/portal vein contact was a significant independent prognostic index for overall survival (hazard ratio = 2.17, 95% confidence interval: 1.27-3.70; p < 0.01), along with sex, tumor differentiation, nodal involvement, and adjuvant chemotherapy. Portal vein resection for superior mesenteric vein/portal vein contact did not improve the overall survival (p = 0.86). Conclusions: Superior mesenteric vein/portal vein contact in resectable pancreatic cancer was found to be an independent predictor of disease-free survival and overall survival after elective resection. 
Thus, pancreatic cancer in contact with the superior mesenteric vein/portal vein may be considered as borderline resectable pancreatic cancer. abstract_id: PUBMED:31409042 Contemporary Review of Borderline Resectable Pancreatic Ductal Adenocarcinoma. Borderline resectable pancreatic adenocarcinoma (PDAC) presents challenges in definition and treatment. Many different definitions exist for this disease. Some are based on anatomy alone, while others include factors such as disease biology and patient performance status. Regardless of definition, evidence suggests that borderline resectable PDAC is a systemic disease at the time of diagnosis. There is high-level evidence to support the use of neoadjuvant systemic therapy in these cases. Evidence to support the use of radiation therapy is ongoing. There are ongoing trials investigating the available neoadjuvant therapies for borderline resectable PDAC that may provide clarity in the future. abstract_id: PUBMED:36013111 Advances and Remaining Challenges in the Treatment for Borderline Resectable and Locally Advanced Pancreatic Ductal Adenocarcinoma. Pancreatic ductal adenocarcinoma (PDAC) remains one of the deadliest malignancies in the United States. Improvements in imaging have permitted the categorization of patients according to radiologic involvement of surrounding vasculature, i.e., upfront resectable, borderline resectable, and locally advanced disease, and this, in turn, has influenced the sequence of chemotherapy, surgery, and radiation therapy. Though surgical resection remains the only curative treatment option, recent studies have shown improved overall survival with neoadjuvant chemotherapy, especially among patients with borderline resectable/locally advanced disease. The role of radiologic imaging after neoadjuvant therapy and the potential benefit of adjuvant therapy for borderline resectable and locally advanced disease remain areas of ongoing investigation. The advances made in the treatment of patients with borderline resectable/locally advanced disease are promising, yet disparities in access to cancer care persist. This review highlights the significant advances that have been made in the treatment of borderline resectable and locally advanced PDAC, while also calling attention to the remaining challenges. abstract_id: PUBMED:34410997 Superior Mesenteric Vein Resection Followed by Porto-Jejunal Anastomosis During Pancreatoduodenectomy for Borderline Resectable Pancreatic Cancer - A Case Report and Literature Review. Background/aim: Pancreatic cancer represents the most lethal abdominal malignancy, the only chance for achieving an improvement in terms of survival being represented by radical surgery. Although it has been considered that venous invasion represents a contraindication for resection, recently it has been demonstrated that in regards to overall survival after radical resection, it is similar to the one reported after standard pancreatoduodenectomy. Case Report: A 53-year-old patient with no significant medical past was diagnosed with a borderline resectable pancreatic adenocarcinoma invading the superior mesenteric vein. The patient was submitted to pancreatoduodenectomy en bloc with superior mesenteric vein resection; the two jejunal veins were further anastomosed to the remnant portal vein. The postoperative outcome was favorable; the patient was discharged in the 10th postoperative day. 
Conclusion: Although technically more demanding, pancreatoduodenectomy en bloc with superior mesenteric vein resection and jejunal portal anastomosis is feasible and might offer a chance for long-term survival in borderline pancreatic head carcinoma invading the superior mesenteric vein. abstract_id: PUBMED:28090195 Complex Surgical Strategies to Improve Resectability in Borderline-Resectable Disease. Colorectal cancer is the third most common malignancy in the USA and continues to pose a significant epidemiologic problem, despite major advances in the treatment of patients with advanced disease. Up to 50 % of patients will develop metastatic disease at some point during the course of their disease, with the liver being the most common site of metastatic disease. In this review, we address the relatively poorly defined entity of borderline-resectable colorectal liver metastases. The workup and staging of borderline-resectable disease are discussed. We then discuss management strategies, including surgical techniques and medical therapies, which are currently utilized in order to improve resectability. abstract_id: PUBMED:31149418 A case of successful resection after FOLFIRINOX in a patient with borderline resectable pancreatic adenocarcinoma. Pancreatic adenocarcinoma (PAC), one of the most aggressive human neoplasms, continues to have an exceedingly poor prognosis. With the advance of diagnostic techniques, a distinct subset of pancreatic cancer labeled "borderline resectable pancreatic cancer" has emerged. Optimal treatment of this disease with a multidisciplinary approach including neoadjuvant and adjuvant therapy remains controversial. We describe a case of borderline resectable PAC treated with FOLFIRINOX (5-fluorouracil, oxaliplatin, irinotecan, and leucovorin) followed by successful pancreaticoduodenectomy. CT scan demonstrated a pancreatic head tumor attached to the superior mesenteric artery, subsequent to which the patient received FOLFIRINOX. Follow-up images showed no lymph node involvement or metastatic disease, suggesting that radical surgery would be curative. The patient underwent pancreaticoduodenectomy with negative margins and was subsequently diagnosed as Stage III (T3N0M0). Though requiring precise case selection and toxicity management, recent literature suggests that FOLFIRINOX is an effective neoadjuvant regimen in the setting of borderline resectable PAC. abstract_id: PUBMED:27865281 Definition and Management of Borderline Resectable Pancreatic Cancer. Patients with localized pancreatic ductal adenocarcinoma seek potentially curative treatment, but this group represents a spectrum of disease. Patients with borderline resectable primary tumors are a unique subset whose successful therapy requires a care team with expertise in medical care, imaging, surgery, medical oncology, and radiation oncology. This team must identify patients with borderline tumors then carefully prescribe and execute a combined treatment strategy with the highest possibility of cure. This article addresses the issues of clinical evaluation, imaging techniques, and criteria, as well as multidisciplinary treatment of patients with borderline resectable pancreatic ductal adenocarcinoma. abstract_id: PUBMED:30560841 Diagnosis and treatment of pancreatic head cancer followed by mesenteric-portal vein invasion Aim: To evaluate the outcomes of pancreaticoduodenectomy with mesenteric-portal vein resection for pancreatic head cancer. 
Material And Methods: Retrospective analysis included 124 patients with pancreatic head cancer for the period 2010-2017. Mesenteric-portal vein (MPV) invasion was diagnosed in 37 (29.8%) patients, tumor contact with superior mesenteric artery as a borderline resectable state was noted in 11 cases. All patients underwent pancreaticoduodenectomy with mesenteric-portal vein resection. Results: Vein invasion was histologically confirmed in 19 (51.3%) out of 37 patients. At the same time, arterial invasion was absent in 11 patients with a borderline resectable tumor. CT-associated overdiagnosis of venous wall invasion was 6.4%, intraoperative overdiagnosis - 87.5%. R0-resection was achieved in 88.5% after conventional pancreaticoduodenectomy and in 78.4% after pancreaticoduodenectomy followed by MPV resection. Median survival was 17 months, 2-year survival - 41%. Among 11 patients with a borderline resectable tumor median survival was 11 months. Pancreaticoduodenectomy without vein resection was followed by 2-year survival near 68.1%. Differences were significant (p=0.02). Conclusion: Pancreaticoduodenectomy followed by MPV resection as the first stage of combined treatment of pancreatic head cancer is absolutely justified if circumferential involvement of the vein and contact with superior mesenteric artery or celiac trunk do not exceed 50%. Vein resection can provide R0-surgery in these cases. abstract_id: PUBMED:27931968 Borderline resectable pancreatic cancer: More than an anatomical concept. Borderline resectable pancreatic cancer (BRPC) accounts for about 10-15% of newly diagnosed pancreatic cancer, and its management requires a skilled multidisciplinary team. The main definition of BRPC refers to resectability, but also a high risk of positive surgical margins and recurrence. This raises questions about the value of surgery and suggests an opportunity to utilize preoperative treatment in this subset of patients. Besides technical borderline resectable disease which is defined on anatomical and radiological criteria, there is also a biological borderline resectable disease which is defined on clinical and biological prognostic factors. Technical borderline resectable disease requires tumor shrinkage with aggressive therapy including modern drug combinations +/- radiotherapy to achieve radical surgery. Biological BRPC needs always an early systemic treatment in order to select the best candidates for subsequent radical surgery. It is important to distinguish between these different clinical scenarios, both in clinical practice and for clinical trials design. Answer: The question of whether vein involvement during pancreaticoduodenectomy necessitates a redefinition of "borderline resectable disease" in pancreatic cancer is a subject of ongoing debate. According to the study reported in PUBMED:23620151, the oncological outcomes for patients with isolated vein involvement who underwent pancreaticoduodenectomy without neoadjuvant therapy were not significantly different from those without vein involvement. This suggests that upfront surgical resection could be an appropriate option for these patients, and the inclusion of isolated vein involvement in the definition of "borderline resectable disease" may need to be reconsidered. However, another study (PUBMED:37769516) found that superior mesenteric vein/portal vein contact in resectable pancreatic cancer was an independent predictor of both disease-free survival and overall survival after elective resection. 
This indicates that pancreatic cancer in contact with these veins may be more biologically malignant and could be considered as borderline resectable pancreatic cancer, which would support the inclusion of vein involvement in the definition. A contemporary review (PUBMED:31409042) highlights that borderline resectable pancreatic adenocarcinoma is a systemic disease at the time of diagnosis and that there is high-level evidence supporting the use of neoadjuvant systemic therapy in these cases. This suggests that the definition of borderline resectable disease should not only be based on anatomical criteria but also consider disease biology and patient performance status. Further complicating the issue, PUBMED:36013111 discusses the advances in treatment for borderline resectable and locally advanced pancreatic ductal adenocarcinoma, emphasizing the improved overall survival with neoadjuvant chemotherapy for these patients. This underscores the importance of a multidisciplinary approach and the potential need for redefining borderline resectable disease to optimize treatment strategies. In summary, while some evidence suggests that vein involvement may not affect oncological outcomes and questions the need for its inclusion in the definition of borderline resectable disease (PUBMED:23620151), other studies indicate that vein contact is associated with biological malignancy and worse survival outcomes, supporting its inclusion in the definition (PUBMED:37769516). The definition of borderline resectable pancreatic cancer is complex and may need to incorporate both anatomical and biological factors to guide treatment decisions effectively.
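The PUBMED:37769516 abstract above reports adjusted hazard ratios of about 2.1 for superior mesenteric vein/portal vein contact from a multivariate survival analysis. As a minimal sketch of how such a hazard ratio is typically estimated, the Python fragment below fits a Cox proportional-hazards model with the lifelines package; the tiny dataframe, its column names, and the package choice are assumptions made for illustration and are not taken from the study.

# Minimal Cox proportional-hazards sketch for a binary exposure such as
# SMV/PV contact. All rows below are synthetic placeholders, not study data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followup": [12, 30, 8, 24, 40, 6, 18, 36],  # time to event or censoring
    "event":           [1, 0, 1, 1, 0, 1, 1, 0],        # 1 = recurrence/death observed
    "vein_contact":    [1, 1, 1, 0, 0, 1, 0, 0],        # 1 = SMV/PV contact on imaging
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followup", event_col="event")
print(cph.hazard_ratios_)  # exp(coef) for vein_contact; values near 2 would mirror the reported HRs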
Instruction: Mandatory insurance coverage and hospital productivity in Massachusetts: bending the curve? Abstracts: abstract_id: PUBMED:22728579 Mandatory insurance coverage and hospital productivity in Massachusetts: bending the curve? Objective: The aim of this study was to examine whether universal insurance coverage mandates lead to a more productive use of hospital resources. Data Sources: The American Hospital Association's Annual Survey and the Centers for Medicare and Medicaid Services' case mix index for fiscal years 2005 through 2008 were used. Study Design: A Malmquist approach was used to assess hospitals' productivity in the United States and Massachusetts over the sample period. Propensity score matching is used to "simulate" a randomized control group of hospitals from other markets to compare with Massachusetts. Comparisons are then made to examine if productivity differences are due to the universal health insurance coverage mandate. Principal Findings: In the early stages, Massachusetts' coverage mandates led to a significant drop in hospitals' productivity relative to comparable facilities in other states. In 2008, Massachusetts functioned 3.53% below its 2005 level, whereas facilities across the United States have seen a 4.06% increase over the same period. Conclusions: If the individual mandate is implemented nationwide, the Massachusetts experience indicates that a near-term decrease in overall hospital productivity will occur. As such, current cost estimates of the Patient Protection and Affordable Care Act's impact on overall health spending are potentially understated. abstract_id: PUBMED:19791703 The role of risk equalization in moving from voluntary private health insurance to mandatory coverage: the experience in South Africa. Objective: The South African health system has long been characterised by extreme inequalities in the allocation of financial and human resources. Voluntary private health insurance, delivered through medical schemes, accounts for some 60% of total expenditure but serves only the 14.8% of the population with higher incomes. A plan was articulated in 1994 to move to a National Health Insurance system with risk-adjusted payments to competing health funds, income cross-subsidies and mandatory membership for all those in employment, leading over time to universal coverage. This chapter describes the core institutional mechanism envisaged for a National Health Insurance system, the Risk Equalisation Fund (REF). A key issue that has emerged is the appropriate sequencing of the reforms and the impact on workers of possible trajectories is considered. Methodology: The design and functioning of the REF is described and the impact on competing health insurance funds is illustrated. Using a reference family earning at different income levels, the impact on workers of various trajectories of reform is demonstrated. Findings: Risk equalization is a critical institutional component in moving towards a system of social or national health insurance in competitive markets, but the sequence of its implementation needs to be carefully considered. The adverse impact of risk equalization on low-income workers in the absence of income cross-subsidies and mandatory membership is considerable. Implications For Policy: The South African experience of risk equalization is of interest as it attempts to introduce more solidarity into a small but highly competitive private insurance market.
The methodology for considering the impact of reforms provides policymakers and politicians with a clearer understanding of the consequences of reform. abstract_id: PUBMED:21357198 Do declining private insurance coverage rates influence pediatric hospital charging practices? Objective: To analyze trends in primary payer composition for pediatric hospitalizations and insurance coverage rates from 2000 to 2006 and possible effects on hospital charging practices. Design: We documented national trends in hospital charge-to-cost ratios and primary payer mixes for pediatric discharges from 2000 to 2006 using the Healthcare Cost and Utilization Project (HCUP) Kid's Inpatient Database (KID). We then performed regression analyses at the hospital level to analyze associations between pediatric insurance coverage rates and hospital charge-to-cost ratios. Results: We found pediatric inpatient charge-to-cost ratios increased dramatically during the study period. Charge-to-cost ratios were higher for hospitals located in states with either higher uninsurance rates or a public-private coverage mix that was skewed towards public coverage. Conclusions: This study provides evidence of both important changes in pediatric health insurance distribution in the United States and hospital charging practices. abstract_id: PUBMED:35649299 State-level changes in health insurance coverage and parental substance use-associated foster care entry. For many families whose children are placed in foster care, initial contact with the child welfare system occurs due to interactions with the healthcare system, particularly in the context of the opioid epidemic and increased attention to prenatal drug exposure. In the last decade, many previously uninsured families have gained Medicaid health coverage, which has implications for their access to healthcare services and visibility to mandatory reporters. Using administrative foster care case data from the Adoption and Foster Care Analysis and Reporting System Foster Care Files and health insurance data from the American Community Survey, this study analyzes the associations between state-level health insurance coverage and rates of foster care entry due to parental substance use between 2009 and 2019. State-level fixed effects models revealed that public, but not private, health insurance rates were positively associated with rates of foster care entry due to parental substance use. These results support the hypothesis that health insurance coverage may promote greater contact with mandatory reporters among low-income parents with substance use disorders. Furthermore, this study illustrates how healthcare policy may have unintended consequences for the child welfare system. abstract_id: PUBMED:30566809 The Evaluation of the System of Mandatory Medical Insurance by Medical Workers of the Moscow Oblast The article considers the results of a sociological survey of medical personnel of the Moscow oblast. The purpose of the study is to analyze the attitude of medical personnel toward the system of mandatory medical insurance in modern conditions. The sociological survey was carried out according to a standard methodology using an originally developed questionnaire. The public opinion of medical personnel of municipal medical organizations of the Moscow oblast was investigated in 2013 (n = 632) and 2017 (n = 798). It was established that, 25 years after the organization of the system of mandatory medical insurance, not all medical personnel are well oriented in it.
The percentage of those who consider it a false and over-bureaucratized system increased. The number of respondents who consider that medical insurance organizations protect the interests of patients decreased, while the share of those who feel no impact of mandatory medical insurance foundations on the activities of medical organizations increased to 35%. Most respondents view the functions of medical insurance organizations and mandatory medical insurance foundations as exclusively controlling ones. In both surveys, less than 30% of respondents supported the current system of mandatory medical insurance. The overall results of the sociological survey point to the need to change the policy governing the relationship between the system of mandatory medical insurance and medical personnel, including more active explanation of the tasks, functions and authorities of mandatory medical insurance foundations and medical insurance organizations, as well as consensual consideration of the opinions of all participants in the mandatory medical insurance system. abstract_id: PUBMED:17630461 Hospital consolidation and racial/income disparities in health insurance coverage. Non-Hispanic whites are significantly more likely to have health insurance coverage than most racial/ethnic minorities, and this differential grew during the 1990s. Similarly, wealthier Americans are more likely to have health insurance than the poor, and this difference also grew over the 1990s. This paper examines the role of provider competition in increasing these disparities in insurance coverage. Over the 1990s, the hospital industry consolidated; we analyze the impact of this consolidation on health insurance take-up for different racial/ethnic minorities and income groups. We found that the hospital consolidation wave increased health insurance disparities along racial and income dimensions. abstract_id: PUBMED:33407534 Willingness to pay for private health insurance among workers with mandatory social health insurance in Mongolia. Background: High out-of-pocket health expenditure is a common problem in developing countries. The employed population, rather than the general population, can be considered the main contributor to healthcare financing in many developing countries. We investigated the feasibility of a parallel private health insurance package for the working population in Ulaanbaatar as a means toward universal health coverage in Mongolia. Methods: This cross-sectional study used a purposive sampling method to collect primary data from workers in public and primary sectors in Ulaanbaatar. Willingness to pay (WTP) was evaluated using a contingent valuation method and a double-bounded dichotomous choice elicitation questionnaire. A final sample of 1657 workers was analyzed. Perceptions of current social health insurance were evaluated. To analyze WTP, we performed a 2-part model and computed the full marginal effects using both intensive and extensive margins. Disparities in WTP stratified by industry and gender were analyzed. Results: Only < 40% of the participants were satisfied with the current mandatory social health insurance in Mongolia. Low quality of service was a major source of dissatisfaction. The predicted WTP for the parallel private health insurance for men and women was Mongolian Tugrik (₮)16,369 (p < 0.001) and ₮16,661 (p < 0.001), respectively, accounting for approximately 2.4% of the median or 1.7% of the average salary in the country.
The highest predicted WTP was found for workers from the education industry (₮22,675, SE = 3346). Income and past or current medical expenditures were significantly associated with WTP. Conclusion: To reduce out-of-pocket health expenditure among the working population in Ulaanbaatar, Mongolia, supplementary parallel health insurance is feasible given the predicted WTP. However, given high variations among different industries and sectors, different incentives may be required for participation. abstract_id: PUBMED:36927575 Factors associated with the choice of supplementary hospital insurance in Switzerland - an analysis of the Swiss Health Survey. Background: Switzerland has universal coverage via mandatory health insurance that covers a generous basket of health services. In addition to the basic coverage, the insured can buy supplementary insurance for the inpatient sector. Supplementary hospital insurance in Switzerland provides additional services during inpatient stays. Little is known about which factors are associated with the choice of semi-private and private hospital insurances. However, this is of importance to policy makers and the insured population, who might be concerned about a "two-class" inpatient care system. Therefore, the aim of the paper was to explore the factors associated with supplementary hospital insurance enrolment in Switzerland. Methods: We used the five most recent waves of the representative Swiss Health Survey (1997, 2002, 2007, 2012, 2017) to explore which factors are associated with supplementary hospital insurance enrolment in adults aged 25 or older. We estimated the same probit model for all five surveys waves and computed average marginal effects. Results: Our study shows that in all cross-sections the likelihood of enrolling in supplementary hospital insurance increased with higher age, education, household income and was higher for people with a strong preference for unrestricted choice of a specialist and with a higher-than-default deductible choice. The likelihood of supplementary hospital insurance enrolment was lower for the unemployed relative to their inactive counterparts and those living in rural areas relative to comparable urban residents. Ever-smoker status was not statistically significantly associated with supplementary hospital insurance choice. However, our findings indicated differences in estimates over the years regarding demographic as well as insurance-related variables. For example, women were more likely to choose supplementary hospital insurance than comparable men in earlier years. Conclusion: Most importantly, our results indicate that factors related to socioeconomic status - such as education, labour market status, and income - consistently show significant associations with the probability of having supplementary hospital insurance for the entire study period, as opposed to demographic variables - such as nationality and sex. abstract_id: PUBMED:20687096 Outreach strategies for expanding health insurance coverage in children. Background: Health insurance has the potential to improve access to health care and protect people from healthcare costs when they are ill. However, coverage is often low, particularly in people most in need of protection. Objectives: To assess the effectiveness of outreach strategies for expanding insurance coverage of children who are eligible for health insurance schemes. 
Search Strategy: We searched the Cochrane Effective Practice and Organisation of Care Group (EPOC) Specialised Register (The Cochrane Library 2009, Issue 2), PubMed (January 1951 to January 2010), EMBASE (January 1966 to April 2009), PsycINFO (January 1967 to April 2009) and other relevant databases and websites. In addition, we searched the reference lists of included studies and relevant reviews, and carried out a citation search for included studies to find more potentially relevant studies. Selection Criteria: Randomised controlled trials, controlled clinical trials, controlled before-after studies and interrupted time series which evaluated the effects of outreach strategies on increasing health insurance coverage for children. We defined outreach strategies as measures to improve the implementation of existing health insurance to enrol more eligible populations. This included increasing awareness of schemes, modifying enrolment, improving management and organisation of insurance schemes, and mixed strategies. Data Collection And Analysis: Two review authors independently extracted data and assessed the risk of bias. We narratively summarised the data. Main Results: We included two studies, both from the United States. One randomised controlled trial with a low risk of bias showed that community-based case managers who provided health insurance information, application support, and negotiated with the insurer were effective in enrolling and maintaining enrolment of Latino American children into health insurance schemes (n = 257). The second quasi-randomised controlled trial, with an unclear risk of bias (n = 223), indicated that handing out insurance application materials in hospital emergency departments can increase enrolment of children into health insurance. Authors' Conclusions: The two studies included in this review provide evidence that in the US providing health insurance information and application assistance, and handing out application materials in hospital emergency departments can probably both improve insurance coverage of children. Further studies evaluating the effectiveness of different outreach strategies for expanding health insurance coverage of children in different countries are needed, with careful attention given to study design. abstract_id: PUBMED:33901374 The comparative analysis of mandatory and voluntary medical insurance in national and international practice Nowadays, the formation and development of the citizens' medical insurance system attracts the attention of experts in the medical field and the insurance market. The possibilities of using foreign practice in the formation and organization of medical insurance, including improved models of financing the health care system in foreign countries and the implementation of mechanisms of resource support for the health care industry tested in world practice, deserve the interest of researchers seeking to strengthen the financial institutions of the national system of mandatory and voluntary medical insurance. In Russia and foreign countries, the mandatory medical insurance policy allows citizens insured in the mandatory medical insurance system to undergo examination and receive treatment in medical organizations to the extent established at the state level. In Russia, the popularity of the voluntary medical insurance policy depends on the quality of medical care in the mandatory medical insurance system and the volume of services guaranteed within the framework of mandatory medical insurance.
Answer: The study examining the impact of universal insurance coverage mandates on hospital productivity in Massachusetts found that in the early stages, Massachusetts' coverage mandates led to a significant drop in hospitals' productivity relative to comparable facilities in other states. Specifically, in 2008, Massachusetts hospitals functioned 3.53% below their 2005 level, while facilities across the United States saw a 4.06% increase over the same period. This suggests that if the individual mandate were implemented nationwide, a near-term decrease in overall hospital productivity might occur, potentially understating current cost estimates of the Patient Protection and Affordable Care Act's impact on overall health spending (PUBMED:22728579).
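The Massachusetts abstract above reports its two Malmquist-based productivity results in percentage terms (Massachusetts 3.53% below its 2005 level by 2008, versus a 4.06% increase for comparison US hospitals). The short Python sketch below simply converts those two percentages into index form and computes the relative shortfall they imply; the shortfall calculation is an added illustration, not a quantity reported in the abstract.

# Convert the reported productivity changes to index form (2005 = 1.00) and
# compute the implied relative shortfall. Only the two percentages come from
# the abstract; the shortfall framing is illustrative.
ma_2008 = 1.0 - 0.0353   # Massachusetts hospitals, 3.53% below their 2005 level
us_2008 = 1.0 + 0.0406   # comparison US hospitals, 4.06% above their 2005 level

shortfall = 1.0 - ma_2008 / us_2008
print(f"MA index {ma_2008:.4f}, US index {us_2008:.4f}, implied shortfall {shortfall:.1%}")  # roughly 7.3%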
Instruction: Homocysteine levels in preterm infants: is there an association with intraventricular hemorrhage? Abstracts: abstract_id: PUBMED:23611481 Homocysteine levels and its association with intraventricular hemorrhage in preterm infants born to preeclamptic mothers. Objective: To investigate the relation between serum homocysteine levels and intraventricular hemorrhage (IVH) in preterm infants born to preeclamptic mothers. Method: This study included 84 preterm infants (42 born to preeclamptic mothers and 42 born to normotensive healthy mothers) who were admitted to Izmir Tepecik Training and Research Hospital Neonatology Clinic on the postnatal first day. The measurement of homocysteine levels in all samples was performed with an Immulite 2000 analyzer, using the chemiluminescence method. Cranial ultrasounds were performed on the fourth day and at 1 month of age. Results: The mean plasma levels of homocysteine in infants born to preeclamptic mothers and in the control group were 8.2 ± 5.9 μmol/L and 5.3 ± 2.7 μmol/L, respectively. The plasma levels of homocysteine were significantly higher in the study group (p = 0.006). There was no association between the plasma homocysteine levels and IVH or other neonatal complications including necrotizing enterocolitis, retinopathy of prematurity, bronchopulmonary dysplasia and mortality. Conclusion: Our data suggest that plasma levels of homocysteine are higher among infants born to preeclamptic mothers, but these high levels are not associated with IVH and other neonatal complications in preterm infants. abstract_id: PUBMED:17556845 Red cell folate and plasma homocysteine in preterm infants. Background: The supplementation of preterm infants with folic acid is routine practice in many neonatal units. However, the advent of preterm formula milks and breast milk fortifiers has increased folic acid intake. We measured red cell folate in preterm infants who received preterm formula milks and breast milk fortifiers to determine whether additional folic acid supplementation was still required. A potential benefit of folic acid supplementation is reduction of plasma total homocysteine (tHcy). tHcy appears to have a linear association with the risk of atherothrombotic vascular events in adults but its role in intraventricular haemorrhage and associated white matter damage in preterm infants is not known. As there is little information regarding tHcy in preterm infants, we also measured tHcy in this study. Methods: Red cell folate and tHcy were measured at 1 and 4 weeks of age and before discharge in 28 consecutive infants <34 weeks' gestation. Factors which may have affected folate and homocysteine status were recorded. Results: Red cell folate ranged between 266 and 1,513 ng/ml and deficiency (<140 ng/ml) was not observed in any sample. Red cell folate concentration tended to increase with increasing age. tHcy ranged from 0.8 to 12.2 micromol/l and fell within the 'normal' range for fasting adults. Conclusions: Preterm formula milks and breast milk fortifiers provide sufficient folic acid to prevent folate deficiency in preterm infants. Although tHcy fell within the 'normal' range for fasting adults, more research is needed to determine the optimal concentration of tHcy for preterm infants. abstract_id: PUBMED:18045460 Homocysteine levels in preterm infants: is there an association with intraventricular hemorrhage? A prospective cohort study.
Background: The purpose of this study was to characterize total homocysteine (tHcy) levels at birth in preterm and term infants and identify associations with intraventricular hemorrhage (IVH) and other neonatal outcomes such as mortality, sepsis, necrotizing enterocolitis, bronchopulmonary dysplasia, and thrombocytopenia. Methods: 123 infants < 32 weeks gestation admitted to our Level III nursery were enrolled. A group of 25 term infants were enrolled for comparison. Two blood spots collected on filter paper with admission blood drawing were analyzed by a high performance liquid chromatography (HPLC) method. Statistical analysis included ANOVA, Spearman's Rank Order Correlation and Mann-Whitney U test. Results: The median tHcy was 2.75 micromol/L with an interquartile range of 1.34 - 4.96 micromol/L. There was no difference between preterm and term tHcy (median 2.76, IQR 1.25 - 4.8 micromol/L vs median 2.54, IQR 1.55 - 7.85 micromol/L, p = 0.07). There was no statistically significant difference in tHcy in 31 preterm infants with IVH compared to infants without IVH (median 1.96, IQR 1.09 - 4.35 micromol/L vs median 2.96, IQR 1.51 - 4.84 micromol/L, p = 0.43). There was also no statistically significant difference in tHcy in 7 infants with periventricular leukomalacia (PVL) compared to infants without PVL (median 1.55, IQR 0.25 - 3.45 micromol/L vs median 2.85, IQR 1.34 - 4.82 micromol/L, p = 0.07). Male infants had lower tHcy compared to female; prenatal steroids were associated with a higher tHcy. Conclusion: In our population of preterm infants, there is no association between IVH and tHcy. Male gender, prenatal steroids and preeclampsia were associated with differences in tHcy levels. abstract_id: PUBMED:36434409 Peculiarities of melatonin levels in preterm infants. Background: Melatonin plays an important role in organism functioning, child growth, and development. Of particular importance is melatonin for preterm infants. The aim of our research was to study the peculiarities of melatonin levels depending on various factors in preterm infants with gestational age (GA) of less than 34 weeks. Methods: The study involved 104 preterm infants with GA less than 34 weeks who were treated in the neonatal intensive care unit (NICU). The level of melatonin in urine samples was determined by an enzyme-linked immunosorbent assay. Results: Melatonin concentration was significantly lower in extremely and very preterm infants compared to moderate preterm (3.57 [2.10; 5.06] ng/ml vs. 4.96 [3.20; 8.42] ng/ml, p = 0.007) and was positively correlated with GA (Spearman r = 0.32; p < 0.001). Positive correlations were revealed between melatonin levels and Apgar scores at the 1st (Spearman r = 0.31; p = 0.001) and 5th minutes after birth (Spearman r = 0.35; p < 0.001). Melatonin levels were lower in newborns with respiratory distress syndrome (p = 0.011). No significant correlations were found between melatonin concentration and birth weight (Spearman r = 0.15; p = 0.130). There were no associations of melatonin concentrations and mode of delivery (p = 0.914), the incidence of early-onset sepsis (p = 0.370) and intraventricular hemorrhages (p = 0.501), and mechanical ventilation (p = 0.090). The results of multiple regression showed that gestational age at birth was the most significant predictor of melatonin level in preterm infants (B = 0.507; p = 0.001). Conclusion: Gestational age and the Apgar score were associated with decreased melatonin levels in preterm infants. 
The level of melatonin in extremely and very preterm infants was lower compared to moderate preterm infants. abstract_id: PUBMED:33833733 Iatrogenic vs. Spontaneous Preterm Birth: A Retrospective Study of Neonatal Outcome Among Very Preterm Infants. Objective: Preterm birth is a leading contributor to childhood morbidity and mortality, and the incidence tends to increase and is higher in developing countries. The aim of this study was to analyze the potential impact of preterm birth in different etiology groups on neonatal complications and outcomes and to gain insight into preventive strategies. Methods: We performed a retrospective cohort study of preterm infants less than 32 weeks' gestation in the Third Affiliated Hospital of Zhengzhou University from 2014 to 2019. Preterm births were categorized as spontaneous or iatrogenic, and these groups were compared for maternal and neonatal characteristics, neonatal complications, and outcomes. All infants surviving at discharge were followed up at 12 months of corrected age to compare the neurodevelopmental outcomes. Results: A total of 1,415 mothers and 1,689 neonates were included, and the preterm population consisted of 1,038 spontaneous preterm infants and 651 iatrogenic preterm infants. There was a significant difference in the incidence of small for gestational age between the two groups. Infants born following spontaneous labor presented with a higher risk of intraventricular hemorrhage, whereas iatrogenic preterm birth was associated with higher risk of necrotizing enterocolitis and coagulopathy and higher risk of pathoglycemia. There was no difference in mortality between the two groups. Follow-up data were available for 1,114 infants, and no differences in neurologic outcomes were observed between the two preterm birth subtypes. Conclusions: Preterm births with different etiologies were associated with some neonatal complications, but not with neurodevelopmental outcomes at 12 months of corrected age. abstract_id: PUBMED:36780888 Associations between Early Thyroid-Stimulating Hormone Levels and Morbidities in Extremely Preterm Neonates. Introduction: High-end cutoffs of thyroid-stimulating hormone (TSH) have been emphasized for hypothyroidism therapy in extremely preterm infants, but the significance of low TSH levels remains unknown. This study hypothesized that the spectrum of TSH levels by newborn screening after birth signifies specific morbidities in extremely preterm neonates. Methods: The multicenter population cohort analyzed 434 extremely preterm neonates receiving TSH screening at 24-96 h of age in 2008-2019. Neonates were categorized by blood TSH levels into group 1: TSH <0.5 µU/mL, group 2: 0.5 ≤ TSH <2 µU/mL, group 3: 2 ≤ TSH <4 µU/mL, and group 4: TSH ≥4 µU/mL. Neonatal morbidities were categorized using the modified Neonatal Therapeutic Intervention Scoring System. Results: The four groups differed in gestational age, birth weight, and the postnatal age at blood sampling so did the proportions of mechanical ventilation usage (p = 0.01), hypoxic respiratory failure (p = 0.005), high-grade intraventricular hemorrhage (p = 0.007), and periventricular leukomalacia (p = 0.048). 
Group 1 had higher severity scores for respiratory distress syndrome (RDS; effect size 0.39 [95% confidence interval [CI]: 0.18-0.59]) and brain injury (0.36 [0.15-0.57]) than group 2, which remained significant after adjusting for gestational age, birth weight, dopamine usage, and the postnatal age at TSH screening (RDS: mean + 0.45 points [95% CI: 0.11-0.79]; brain injury: +0.32 [0.11-0.54]). Conclusions: Low TSH levels in extremely preterm neonates are associated with severe RDS and brain injuries. Studies recruiting more neonates with complete thyroid function data are necessary to understand central-peripheral interactions of the hypothalamic-pituitary-thyroid axis. abstract_id: PUBMED:34515185 Neonatal Hemoglobin Levels in Preterm Infants Are Associated with Early Neurological Functioning. Background: Neonatal anemia may compromise oxygen transport to the brain. The effects of anemia and cerebral oxygenation on neurological functioning in the early neonatal period are largely unknown. Objective: This study aimed to determine the association between initial hemoglobin levels (Hb) and early neurological functioning in preterm infants by assessing their general movements (GMs). Methods: A retrospective analysis of prospectively collected data on preterm infants born before 32 weeks of gestation was conducted. We excluded infants with intraventricular hemorrhage > grade II. On day 8, we assessed infants' GMs, both generally as normal/abnormal and in detail using the general movement optimality score (GMOS). We measured cerebral tissue oxygen saturation (rcSO2) on day 1 using near-infrared spectroscopy. Results: We included 65 infants (median gestational age 29.9 weeks [IQR 28.2-31.0]; median birth weight 1,180 g [IQR 930-1,400]). Median Hb on day 1 was 10.3 mmol/L (range 4.2-13.7). Lower Hb on day 1 was associated with a higher risk of abnormal GMs (OR = 2.3, 95% CI: 1.3-4.1) and poorer GMOSs (B = 0.9, 95% CI: 0.2-1.7). Hemoglobin strongly correlated with rcSO2 (rho = 0.62, p < 0.01). Infants with lower rcSO2 values tended to have a higher risk of abnormal GMs (p = 0.06). After adjusting for confounders, Hb on day 1 explained 44% of the variance of normal/abnormal GMs and rcSO2 explained 17%. Regarding the explained variance of the GMOS, this was 25% and 16%, respectively. Conclusions: In preterm infants, low Hb on day 1 is associated with impaired neurological functioning on day 8, which is partly explained by low cerebral oxygenation. abstract_id: PUBMED:28803456 Evaluation of prenatal corticosteroid use in spontaneous preterm labor in the Brazilian Multicenter Study on Preterm Birth (EMIP). Objective: To evaluate prenatal corticosteroid use in women experiencing spontaneous preterm labor and preterm delivery. Methods: The present cross-sectional multicenter study analyzed interview data from patients attending 20 hospitals in Brazil owing to preterm delivery between April 1, 2011 and July 30, 2012. Patients were stratified based on preterm delivery occurring before 34 weeks or at 34-36+6 weeks of pregnancy, and the frequency of prenatal corticosteroid use at admission was compared. Prenatal corticosteroid use, sociodemographic data, obstetric characteristics, and neonatal outcomes were examined. Results: There were 1455 preterm deliveries included in the present study; 527 (36.2%) occurred before 34 weeks of pregnancy and prenatal corticosteroids were used in 285 (54.1%) of these pregnancies. 
Among neonates delivered at 32-33+6 weeks, prenatal corticosteroid use was associated with lower pneumonia (P=0.026) and mortality (P=0.029) rates. Among neonates delivered at 34-36+6 weeks, prenatal corticosteroid use was associated with longer neonatal hospital admission (P<0.001), and an increased incidence of 5-minute Apgar scores below 7 (P=0.010), endotracheal intubation (P=0.042), surfactant use (P=0.006), neonatal morbidities (P=0.048), respiratory distress (P=0.048), and intraventricular hemorrhage (P=0.023). Conclusion: Preterm labor and late preterm delivery were associated with worse neonatal outcomes following prenatal corticosteroids. This could reflect a sub-optimal interval between administration and delivery. abstract_id: PUBMED:36349194 Relationship of plasma MBP and 8-oxo-dG with brain damage in preterm. Preterm infants face a significant risk of brain injury in the perinatal period, as well as potential long-term neurodevelopmental disabilities. However, preterm children with brain injury lack specific clinical manifestations in the early days. Therefore, timely and accurate diagnosis of brain injury is of vital importance. This study was to explore the diagnostic efficiency of myelin basic protein (MBP) and 8-oxo-deoxyguanosine (8-oxo-dG) serum levels in brain injury of premature infants. A total of 75 preterm infants with gestational age between 28 and 32 weeks and birth weight higher than 1,000 g were prospectively included. MBP serum levels were significantly higher in premature infants with white matter injury (WMI). 8-oxo-dG serum levels were significantly increased in both WMI and periventricular-intraventricular hemorrhages (PIVH). MBP and 8-oxo-dG were significantly correlated. The area under the curve was 0.811 [95% confidence interval (CI) 0.667-0.955; p = 0.002] in MBP and 0.729 (95% CI 0.562-0.897; p = 0.020) in 8-oxo-dG. Therefore, the results showed that high MBP levels indicated a possibility of WMI in the premature brain during the early postnatal period, while high 8-oxo-dG levels were closely related to both WMI and PIVH, thus suggesting that MBP and 8-oxo-dG could be used as potential neuro-markers of preterm brain injury. abstract_id: PUBMED:25766199 Factors influencing independent oral feeding in preterm infants. Objective: Determine the mean post-menstrual age when preterm infants attain independent oral feeding skills and whether gestational age, common neonatal morbidities, gender, race, delivery route, or birth year affects this reflex. Methods: A retrospective chart review of 2700 preterm infants, born before 37 weeks gestational age admitted to a level III NICU between January 1978 and July 2013, to determine the post-menstrual age when independent oral feedings occur. Results: Mean post-menstrual age at achievement of independent oral feeding was 36 + 4/7 weeks ± 14 days. Gestational age under 29 weeks correlated with delayed post-menstrual age at achievement of independent oral feeding at 37 + 3/7 weeks versus 36 + 1/7 weeks for gestational age 29-33 weeks and 36 + 3/7 weeks for late preterm infants (p < 0.0001). Preterm infants with certain morbidities experienced a delay in independent oral feeding: necrotizing enterocolitis at 38 + 6/7 weeks (p < 0.0001), bronchopulmonary dysplasia at 38 + 1/7 weeks (p < 0.0001), severe intraventricular hemorrhage at 37 + 6/7 weeks (p < 0.001). Preterm infants born before the year 2000 achieved independent oral feeding two days later than preterm infants born since the year 2000 (p < 0.0001). 
Preterm infants delivered vaginally achieved independent oral feeding three days sooner than infants delivered via c-section (p < 0.0001). Female infants orally fed one day sooner than male preterm infants (p = 0.0008). Conclusions: Preterm infants achieve independent oral feeding at 36 + 4/7 weeks. Factors negatively influencing when the preterm infant will achieve independent oral feeding include gestational age under 29 weeks and major morbidities, whereas vaginal delivery and ongoing advances in neonatal care may accelerate the transition to independent oral feeding. Answer: The relationship between homocysteine levels in preterm infants and intraventricular hemorrhage (IVH) has been investigated in several studies, but the findings suggest that there is no clear association between the two. One study that specifically examined preterm infants born to preeclamptic mothers found that while plasma levels of homocysteine were higher among infants born to these mothers, these elevated levels were not associated with IVH or other neonatal complications such as necrotizing enterocolitis, retinopathy of prematurity, bronchopulmonary dysplasia, and mortality (PUBMED:23611481). A second study, a prospective cohort that measured total homocysteine (tHcy) levels at birth in both preterm and term infants, found no statistically significant difference in tHcy levels between preterm infants with IVH and those without IVH, and concluded that there was no association between IVH and tHcy in its population of preterm infants (PUBMED:18045460). In summary, the available evidence from these two studies suggests that there is no significant association between homocysteine levels and the occurrence of intraventricular hemorrhage in preterm infants (PUBMED:23611481, PUBMED:18045460).
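The prospective cohort abstract above (PUBMED:18045460) compares median tHcy with interquartile ranges between infants with and without IVH using a Mann-Whitney U test. The Python sketch below reproduces only the shape of that analysis; the tHcy arrays are synthetic placeholders, not the study's measurements, and only the choice of summary statistics and test mirrors the abstract.

# Shape of the non-parametric comparison described in PUBMED:18045460:
# median/IQR summaries plus a two-sided Mann-Whitney U test.
# The values below are synthetic placeholders, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

thcy_ivh = np.array([1.1, 1.9, 2.0, 2.8, 3.4, 4.4])          # infants with IVH (micromol/L)
thcy_no_ivh = np.array([1.5, 2.2, 2.9, 3.0, 3.8, 4.8, 5.1])  # infants without IVH

for label, values in (("IVH", thcy_ivh), ("no IVH", thcy_no_ivh)):
    q1, median, q3 = np.percentile(values, [25, 50, 75])
    print(f"{label}: median {median:.2f} (IQR {q1:.2f}-{q3:.2f})")

stat, p = mannwhitneyu(thcy_ivh, thcy_no_ivh, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")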
Instruction: Is there any connection between angiotensin converting enzyme activity and liver cirrhosis of alcoholic genesis? Abstracts: abstract_id: PUBMED:18044470 Is there any connection between angiotensin converting enzyme activity and liver cirrhosis of alcoholic genesis? Unlabelled: The objective of this study was to assess the serum angiotensin converting enzyme (ACE) activity in patients with liver cirrhosis caused by chronic alcohol consumption, in order to get better insight into the function of the renin-angiotensin system. Patients And Methods: Serum level of ACE activity was measured by Neels spectrophotometry in 35 alcoholic liver cirrhosis patients classified according to Child-Pugh-Turcotte criteria and 35 dyspeptic patients with any liver disease excluded (control group). Results: Serum values of ACE were statistically significantly higher (p < 0.00001) in the group of liver cirrhosis patients (x = 250.16 ± 85.5 nmol) than in the control group (x = 115.88 ± 58.19 nmol). The highest levels of ACE were measured in class B group of liver cirrhosis patients vs. class A and class B groups (p < 0.013). Conclusion: It is concluded that liver cirrhosis patients have elevated ACE levels, which could be useful in the diagnosis and follow up of these patients. abstract_id: PUBMED:8938630 Serum angiotensin converting enzyme and C4 protein of complement as a combined diagnostic index in alcoholic liver disease. Diagnosis of liver cirrhosis relies on hepatic biopsy. So far, attempts have failed to achieve a serologic test that differentiates cirrhosis from other hepatic conditions. The aim of this work was to assess the diagnostic value of the ratio of serum angiotensin converting enzyme activity (SACE) and the levels of protein C4 of serum complement (SACE/C4) in differentiating cirrhotic from noncirrhotic alcoholic liver diseases. In this study, 68 active alcoholic patients (17 with fatty liver or minimal changes, 11 with acute alcoholic hepatitis and 40 with cirrhosis) were included. Twenty healthy subjects were studied as a control group. Liver biopsy was performed in all patients. SACE levels were significantly higher in the group with cirrhosis when compared with the group of patients without cirrhosis and the control. On the other hand, serum C4 level decreased as liver damage progressed. SACE values above 25 IU/l had a sensitivity of 92.5 percent (95 percent confidence interval, 87.5 to 97.5) and a specificity of 79 percent (95 percent confidence interval, 70.5 to 87.5), in detecting those patients with liver cirrhosis. The sensitivity further increased to 95 percent (95 percent confidence interval, 90.5 to 99.5) and the specificity to 100 percent when the SACE/C4 ratio was used and a cutoff point of 145 was chosen. To conclude, in alcoholics SACE is specifically elevated in patients with cirrhosis, and the SACE/C4 ratio is an excellent biochemical index for the diagnosis of cirrhosis in alcoholic patients. abstract_id: PUBMED:38284663 FIB-4 liver fibrosis index correlates with aortic valve sclerosis in non-alcoholic population. Aim: Hepatic fibrosis, a progressive scarring of liver tissue, is commonly caused by non-alcoholic fatty liver disease (NAFLD), which increases the risk of cardiovascular disease. The Fibrosis-4 (FIB-4) index is a non-invasive tool used to assess liver fibrosis in patients with NAFLD.
Aortic valve sclerosis (AVS), a degenerative disorder characterized by thickening and calcification of valve leaflets, is prevalent in the elderly and associated with increased cardiovascular morbidity and mortality. Recent studies have suggested that AVS may also be linked to other systemic diseases such as liver fibrosis. This study aimed to investigate the relationship between the FIB-4 index and AVS in a non-alcoholic population, with the hypothesis that the FIB-4 index could serve as a potential marker for AVS. Method: A total of 92 patients were included in this study. AVS was detected using transthoracic echocardiography, and patients were divided into groups according to the presence of AVS. The FIB-4 index was calculated for all patients and compared between the groups. Results: A total of 17 (18.4%) patients were diagnosed with AVS. Patients with AVS had higher rates of diabetes mellitus, older age, hypertension, angiotensin-converting enzyme inhibitor use, higher systolic blood pressure (BP) and diastolic BP in the office, coronary artery disease prevalence, left atrial volume index (LAVI), left ventricular mass index (LVMI), and late diastolic peak flow velocity (A) compared to those without AVS. Moreover, AVS patients had significantly higher creatinine levels and lower estimated glomerular filtration rate. Remarkably, the FIB-4 index was significantly higher in patients with AVS. In univariate and multivariate analyses, higher systolic BP in the office (OR, 1.044; 95% CI 1.002-1.080, p = .024) and higher FIB-4 index (1.46 ± .6 vs. .91 ± .46, p < .001) were independently associated with AVS. Conclusion: Our findings suggest that the FIB-4 index is associated with AVS in non-alcoholic individuals. Our results highlight the potential utility of the FIB-4 index as a non-invasive tool for identifying individuals at an increased risk of developing AVS. abstract_id: PUBMED:23560443 Inhibition of Glutaminyl Cyclases alleviates CCL2-mediated inflammation of non-alcoholic fatty liver disease in mice. Inflammation is an integral part of non-alcoholic fatty liver disease (NAFLD), the most prevalent form of hepatic pathology found in the general population. In this context, recently we have examined the potential role of Glutaminyl Cyclases (QC and isoQC), and their inhibitors, in the maturation of chemokines, for example, monocyte chemoattractant protein 1 (MCP-1, CCL2), to generate their bioactive conformation. Catalysis by isoQC leads to the formation of an N-terminal pyroglutamate residue protecting CCL2 against degradation by aminopeptidases. This is of importance because truncated forms possess a reduced potential to attract immune cells. Since liver inflammation is characterized by the up-regulation of different chemokine pathways, and within this CCL2 is known to be a prominent example, we hypothesised that application of QC/isoQC inhibitors may alleviate liver inflammation by destabilizing CCL2. Therefore, we investigated the role of QC/isoQC inhibition, in comparison with the angiotensin receptor blocker Telmisartan, during development of pathology in a mouse model of non-alcoholic fatty liver disease. Application of a QC/isoQC inhibitor led to a significant reduction in circulating alanine aminotransferase and NAFLD activity score accompanied by an inhibitory effect on hepatocyte ballooning.
Further analysis revealed a specific reduction of inflammation by decreasing the number of F4/80-positive macrophages, which is in agreement with the proposed CCL2-related mechanism of action of QC/isoQC inhibitors. Finally, QC/isoQC inhibitor application attenuated liver fibrosis as characterized by reduced collagen deposition in the liver parenchyma. Thus, in conclusion, QC/isoQC inhibitors are a promising novel class of anti-non-alcoholic steatohepatitis drugs which have a comparable disease-modifying effect to that of Telmisartan, which is probably mediated via specific interference with a comparable monocyte/macrophage infiltration that occurs under inflammatory conditions. abstract_id: PUBMED:28419124 A randomised controlled trial of losartan as an anti-fibrotic agent in non-alcoholic steatohepatitis. Introduction: Non-alcoholic fatty liver disease (NAFLD) is a common liver disease worldwide. Experimental and small clinical trials have demonstrated that angiotensin II blockers (ARB) may be anti-fibrotic in the liver. The aim of this randomised controlled trial was to assess whether treatment with Losartan for 96 weeks slowed, halted or reversed the progression of fibrosis in patients with non-alcoholic steatohepatitis (NASH). Methods: Double-blind randomised-controlled trial of Losartan 50 mg once a day versus placebo for 96 weeks in patients with histological evidence of NASH. The primary outcome for the study was change in histological fibrosis stage from pre-treatment to end-of-treatment. Results: The study planned to recruit 214 patients. However, recruitment was slower than expected, and after 45 patients were randomised (median age 55; 56% male; 60% diabetic; median fibrosis stage 2), enrolment was suspended. Thirty-two patients (15 losartan and 17 placebo) completed the follow-up period: one patient (6.7%) treated with losartan and 4 patients (23.5%) in the placebo group were "responders" (lower fibrosis stage at follow up compared with baseline). The major reason for slow recruitment was that 39% of potentially eligible patients were already taking an ARB or angiotensin converting enzyme inhibitor (ACEI), and 15% were taking other prohibited medications. Conclusions: Due to the widespread use of ACEI and ARB in patients with NASH this trial failed to recruit sufficient patients to determine whether losartan has anti-fibrotic effects in the liver. Trial Registration: ISRCTN 57849521. abstract_id: PUBMED:24905085 Renin-angiotensin system and fibrosis in non-alcoholic fatty liver disease. Background & Aims: Therapeutic options are limited for patients with non-alcoholic fatty liver disease (NAFLD). One promising approach is the attenuation of necroinflammation and fibrosis by inhibition of the renin-angiotensin system (RAS). We explored whether the risk of fibrosis was associated with the use of commonly used medications in NAFLD patients with hypertension. Specifically, we sought to determine the association between RAS blocking agents and severity of hepatic fibrosis in NAFLD patients with hypertension. Methods: Cross-sectional study where clinical information including demographics, anthropometry, medical history, concomitant medication use, biochemical and histological features were ascertained in 290 hypertensive patients with biopsy proven NAFLD followed at two hepatology outpatient clinics. Stage of hepatic fibrosis was compared in patients with and without RAS blocker use.
Other risk factors for fibrosis were evaluated from the electronic medical records and patient follow-up. Results: Baseline characteristics of hypertensive patients treated with and without RAS blockers were similar except for less ballooning (1.02 vs. 1.31, P = 0.001) and lower fibrosis stage (1.63 vs. 2.16, P = 0.002) in patients on RAS blockers. On multivariate analysis, advancing age (OR: 1.04; 95%CI: 1.01-1.06, P = 0.012) and presence of diabetes (OR: 2.55; 95%CI: 1.28-5.09, P = 0.008) had an independent positive association, while use of RAS blockers (OR: 0.37; 95%CI: 0.21-0.65, P = 0.001) and statins (OR: 0.52; 95%CI: 0.29-0.93, P = 0.029) had a negative association with advanced fibrosis. Conclusion: Hypertensive patients with NAFLD on baseline RAS blockers had less advanced hepatic fibrosis, suggesting a beneficial effect of RAS blockers in NAFLD. abstract_id: PUBMED:37149841 Non-alcoholic fatty liver disease and liver fibrosis score have an independent relationship with the presence of mitral annular calcification. Non-alcoholic fatty liver disease (NAFLD) and liver fibrosis score (FIB 4) are associated with increased mortality from cardiovascular causes. NAFLD and cardiac diseases are different manifestations of systemic metabolic syndrome. In this study, we aimed to reveal the relationship between NAFLD and FIB 4 liver fibrosis scores and mitral annular calcification (MAC). One hundred patients were included in the study. Blood samples and echocardiography measurements were obtained from each subject. The two groups were compared in terms of demographic and echocardiographic characteristics. Thirty-one men and 69 women with a mean age of 48.6 ± 13.1 years were included in the analysis. The patients were divided into two groups as those with MAC (n = 26) and those without (n = 74). The baseline demographic and laboratory data for the two groups were compared. In the group with MAC (+), age, serum creatinine levels, FIB4 and NAFLD Scores; HL, DM rates, angiotensin converting enzyme (ACE) inhibitor and statin usage rates were higher, with statistical significance. NAFLD and FIB 4 liver fibrosis scores have an independent relationship with MAC. abstract_id: PUBMED:16225469 Review article: the metabolic syndrome and non-alcoholic fatty liver disease. Metabolic syndrome represents a common risk factor for premature cardiovascular disease and cancer whose core cluster includes diabetes, hypertension, dyslipidaemia and obesity. The liver is a target organ in metabolic syndrome patients in which it manifests itself with non-alcoholic fatty liver disease spanning steatosis through hepatocellular carcinoma via steatohepatitis and cirrhosis. Given that metabolic syndrome and non-alcoholic fatty liver disease affect the same insulin-resistant patients, not unexpectedly, there are amazing similarities between metabolic syndrome and non-alcoholic fatty liver disease in terms of prevalence, pathogenesis, clinical features and outcome. The available drug weaponry for metabolic syndrome includes aspirin, metformin, peroxisome proliferator-activated receptor agonists, statins, ACE (angiotensin I-converting enzyme) inhibitors and sartans, which are potentially or clinically useful also to the non-alcoholic fatty liver disease patient. Studies are needed to highlight the grey areas in this topic.
Issues to be addressed include: diagnostic criteria for metabolic syndrome; nomenclature of non-alcoholic fatty liver disease; enlargement of the clinical spectrum and characterization of the prognosis of insulin resistance-related diseases; evaluation of the most specific clinical predictors of metabolic syndrome/non-alcoholic fatty liver disease and assessment of their variability over the time; characterization of the importance of new risk factors for metabolic syndrome with regard to the development and progression of non-alcoholic fatty liver disease. abstract_id: PUBMED:25887687 Impact of hepatic immunoreactivity of angiotensin-converting enzyme 2 on liver fibrosis due to non-alcoholic steatohepatitis. Background: We aimed to evaluate the hepatic immunoreactivity of angiotensin-converting enzyme 2 (ACE2) in non-alcoholic steatohepatitis (NASH) patients, elucidate its association with the clinicopathological characteristics and also determine its role in fibrosis progression. Methods: The consecutive biopsy proven NASH patients were subdivided into two groups according to their fibrosis score. Fibrotic stages < 3 defined the mild fibrosis group and fibrotic stages ≥ 3 the advanced fibrosis group, depending on the presence of bridging fibrosis. Liver biopsy specimens were immunohistochemically stained for ACE2 immunoreactivity. Demographics and clinical properties were compared between the groups. Univariate and multivariate analyses were also performed to evaluate the independent predicting factors for the presence of advanced liver fibrosis caused by NASH. Results: One hundred and eight patients were enrolled in the study. Out of this, ninety-four patients representing 87% were classified in the mild fibrosis group, whilst fourteen representing 13% were in the advanced fibrosis group. We compared high hepatic immunoreactivity of ACE2 between mild and advanced fibrosis groups and found a statistically significant difference (65.9% vs 28.5%, respectively; P=0.008). Hepatic ACE2 immunoreactivity was inversely correlated with the fibrosis score (r: -0.337; P<0.001). The significant variables in the univariate analysis were then evaluated in multivariate logistic regression analysis and hepatic ACE2 immunoreactivity was an independent predicting factor of liver fibrosis [odds ratio (OR): 0.194; 95% confidence interval (CI): 0.082-0.897, P=0.036]. Conclusion: Hepatic immunoreactivity of ACE2 was inversely correlated with the liver fibrosis among biopsy proven NASH patients and it was also an independent predicting factor of advanced fibrosis. abstract_id: PUBMED:27010899 Effects of renin-angiotensin system inhibitors on fibrosis in patients with alcoholic chronic pancreatitis. Chronic pancreatitis (CP) results in impairment of exocrine as well as endocrine functions and progressive fibrosis. Previous studies have demonstrated the presence of renin-angiotensin system receptors within different pancreatic cells. The aim of the present study was to assess the effects of renin-angiotensin system (RAS) inhibitors on serum levels of fibrosis biomarkers (matrix metalloproteinase 2 and 9 (MMP-2, MMP-9), tissue inhibitor of MMP (TIMP-1, TIMP-2), hyaluronic acid (HA)) and fasting glucose levels in patients with alcoholic CP. Seventy-seven outpatients (mean age 43 years, 62 males) with diagnosed alcoholic CP were randomly enrolled into 5 study groups depending on the RAS inhibitors administered and their doses (2.5 or 5 mg and 12.5 or 25 mg for ramipril or losartan, respectively).
Venous blood was sampled monthly for a period of one year to monitor serum drug levels. MMP-2, -9, TIMP-1, TIMP-2 and HA were measured with the ELISA method at the onset and at the end of the study. Only forty-five patients regularly participated in follow-up visits and completed the study. The fluctuations in serum HA levels observed among patients from the remaining groups also did not reach statistical significance. Serum MMP-2 levels (P = 0.06) and MMP-2/TIMP-1 ratio (P = 0.06) showed an increasing tendency in the losartan 25 mg group. High doses of ramipril and losartan statistically significantly reduced fasting glucose levels. High doses of losartan can increase the MMP-2 activity in serum of alcoholic CP patients, which could potentially affect turnover of extracellular matrix proteins within the pancreas. Moreover, high doses of both RAS inhibitors decrease the fasting glucose level. Answer: Yes, there is a connection between angiotensin converting enzyme (ACE) activity and liver cirrhosis of alcoholic genesis. Studies have found that serum levels of ACE are significantly higher in patients with liver cirrhosis caused by chronic alcohol consumption compared to control groups without liver disease (PUBMED:18044470; PUBMED:8938630). The highest levels of ACE were measured in the class B group of liver cirrhosis patients versus class A and class C groups, suggesting that ACE levels could be useful in the diagnosis and follow-up of these patients (PUBMED:18044470). Additionally, the ratio of serum ACE activity to the levels of protein C4 of serum complement (SACE/C4) has been shown to be an excellent biochemical index for the diagnosis of cirrhosis in alcoholic patients, with a sensitivity of 95 percent and a specificity of 100 percent when a cutoff point of 145 is used (PUBMED:8938630). These findings indicate that there is indeed a connection between ACE activity and liver cirrhosis of alcoholic genesis.
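As a rough numeric illustration of the SACE/C4 decision rule reported in PUBMED:8938630 (SACE above 25 IU/l; SACE/C4 ratio with a cutoff of 145), the minimal Python sketch below applies the two cutoffs separately, exactly as the abstract reports them. The C4 units and the example input values are illustrative assumptions only and are not taken from the source; this is not a diagnostic tool.

SACE_CUTOFF = 25.0    # IU/l; above this: sensitivity 92.5%, specificity 79% (PUBMED:8938630)
RATIO_CUTOFF = 145.0  # SACE/C4 at or above this: sensitivity 95%, specificity 100%

def evaluate_sace_c4(sace, c4):
    """Return each criterion separately, as the abstract reports them separately."""
    ratio = sace / c4
    return {
        "sace_above_cutoff": sace > SACE_CUTOFF,
        "sace_c4_ratio": ratio,
        "ratio_at_or_above_cutoff": ratio >= RATIO_CUTOFF,
    }

print(evaluate_sace_c4(sace=48.0, c4=0.30))  # hypothetical values, for demonstration only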
Instruction: Isolated choroid plexus cyst or echogenic cardiac focus on prenatal ultrasound: is genetic amniocentesis indicated? Abstracts: abstract_id: PUBMED:17547911 Isolated choroid plexus cyst or echogenic cardiac focus on prenatal ultrasound: is genetic amniocentesis indicated? Objective: The purpose of this study was to determine whether or not genetic amniocentesis is warranted when isolated choroid plexus cysts (CPC) or echogenic cardiac foci (EF) are noted on prenatal ultrasound. Study Design: We performed a retrospective analysis on patients from our perinatal database. All obstetric patients with CPC or EF noted on second-trimester perinatology ultrasound from April, 1998 to November, 2004 were included. Information regarding ultrasound findings and neonatal outcome was analyzed. Results: During the study period, 515 patients with CPC or EF were evaluated. Of these, 429 (83.3%) had isolated CPC or EF and 86 (16.7%) had additional risk factors. The incidence of abnormal karyotype was 0% versus 2.3%, respectively (P = .03). The additional risk factors considered were: advanced maternal age, abnormal serum triple marker screening, and/or other abnormal ultrasound findings. Furthermore, during the study period there were 20,122 live births and 27 (0.1%) cases of aneuploidy diagnosed postnatally. Of these, none had isolated CPC or EF on prenatal ultrasound. Conclusion: CPC or EF noted on prenatal ultrasound warrants referral for careful consultative ultrasound evaluation. In the absence of other risk factors, however, genetic amniocentesis for isolated CPC or EF does not appear to be necessary. abstract_id: PUBMED:34760567 Isolated 'soft signs' of fetal choroid plexus cysts or echogenic intracardiac focus - consequences of their continued reporting. Background: Choroid plexus cysts (CPC) and echogenic intracardiac focus (EIF) are obsolete soft markers found on morphology ultrasound and not a valid reason for adjusting fetal risk of aneuploidy. Method: We conducted a retrospective audit of women referred to genetic counsellor and fetal medicine services at St George Hospital (SGH) and the Royal Hospital for Women (RHW) for CPC and EIF from 1 January 2006 to 31 December 2016 inclusive. Results: In total, 208 CPC and/or EIF referrals were identified, 118 (57%) of which were for isolated CPC and/or EIF and 102 (49%) occurred in women at low risk for aneuploidy prior to morphology ultrasound. Significantly more women had undergone combined first-trimester screening in the 2014 to 2016 epoch vs. previous years at both SGH (P = 0.03) and RHW (P = 0.004). However, the number of women referred for CPC and EIF remained relatively constant. No fetus was born with a major structural or chromosomal abnormality in the group of low-risk women with isolated signs. However, 18% of these women were referred to both genetic counselling and fetal medicine services, 7% had NIPT after morphology, 14% had amniocentesis, and 33% had additional ultrasound(s). Conclusion: Despite advances in screening technology, low-risk women are still referred to specialist services for these 2 soft signs and undergo unnecessary follow-up, NIPT and amniocentesis.
abstract_id: PUBMED:34171388 Society for Maternal-Fetal Medicine Consult Series #57: Evaluation and management of isolated soft ultrasound markers for aneuploidy in the second trimester: (Replaces Consults #10, Single umbilical artery, October 2010; #16, Isolated echogenic bowel diagnosed on second-trimester ultrasound, August 2011; #17, Evaluation and management of isolated renal pelviectasis on second-trimester ultrasound, December 2011; #25, Isolated fetal choroid plexus cysts, April 2013; #27, Isolated echogenic intracardiac focus, August 2013). Soft markers were originally introduced to prenatal ultrasonography to improve the detection of trisomy 21 over that achievable with age-based and serum screening strategies. As prenatal genetic screening strategies have greatly evolved in the last 2 decades, the relative importance of soft markers has shifted. The purpose of this document is to discuss the recommended evaluation and management of isolated soft markers in the context of current maternal serum screening and cell-free DNA screening options. In this document, "isolated" is used to describe a soft marker that has been identified in the absence of any fetal structural anomaly, growth restriction, or additional soft marker following a detailed obstetrical ultrasound examination. In this document, "serum screening methods" refers to all maternal screening strategies, including first-trimester screen, integrated screen, sequential screen, contingent screen, or quad screen. The Society for Maternal-Fetal Medicine recommends the following approach to the evaluation and management of isolated soft markers: (1) we do not recommend diagnostic testing for aneuploidy solely for the evaluation of an isolated soft marker following a negative serum or cell-free DNA screening result (GRADE 1B); (2) for pregnant people with no previous aneuploidy screening and isolated echogenic intracardiac focus, echogenic bowel, urinary tract dilation, or shortened humerus, femur, or both, we recommend counseling to estimate the probability of trisomy 21 and a discussion of options for noninvasive aneuploidy screening with cell-free DNA or quad screen if cell-free DNA is unavailable or cost-prohibitive (GRADE 1B); (3) for pregnant people with no previous aneuploidy screening and isolated thickened nuchal fold or isolated absent or hypoplastic nasal bone, we recommend counseling to estimate the probability of trisomy 21 and a discussion of options for noninvasive aneuploidy screening through cell-free DNA or quad screen if cell-free DNA is unavailable or cost-prohibitive or diagnostic testing via amniocentesis, depending on clinical circumstances and patient preference (GRADE 1B); (4) for pregnant people with no previous aneuploidy screening and isolated choroid plexus cysts, we recommend counseling to estimate the probability of trisomy 18 and a discussion of options for noninvasive aneuploidy screening with cell-free DNA or quad screen if cell-free DNA is unavailable or cost-prohibitive (GRADE 1C); (5) for pregnant people with negative serum or cell-free DNA screening results and an isolated echogenic intracardiac focus, we recommend no further evaluation as this finding is a normal variant of no clinical importance with no indication for fetal echocardiography, follow-up ultrasound imaging, or postnatal evaluation (GRADE 1B); (6) for pregnant people with negative serum or cell-free DNA screening results and isolated fetal echogenic bowel, urinary tract dilation, or shortened humerus, femur, or both, we 
recommend no further aneuploidy evaluation (GRADE 1B); (7) for pregnant people with negative serum screening results and isolated thickened nuchal fold or absent or hypoplastic nasal bone, we recommend counseling to estimate the probability of trisomy 21 and discussion of options for no further aneuploidy evaluation, noninvasive aneuploidy screening through cell-free DNA, or diagnostic testing via amniocentesis, depending on clinical circumstances and patient preference (GRADE 1B); (8) for pregnant people with negative cell-free DNA screening results and isolated thickened nuchal fold or absent or hypoplastic nasal bone, we recommend no further aneuploidy evaluation (GRADE 1B); (9) for pregnant people with negative serum or cell-free DNA screening results and isolated choroid plexus cysts, we recommend no further aneuploidy evaluation, as this finding is a normal variant of no clinical importance with no indication for follow-up ultrasound imaging or postnatal evaluation (GRADE 1C); (10) for fetuses with isolated echogenic bowel, we recommend an evaluation for cystic fibrosis and fetal cytomegalovirus infection and a third-trimester ultrasound examination for reassessment and evaluation of growth (GRADE 1C); (11) for fetuses with an isolated single umbilical artery, we recommend no additional evaluation for aneuploidy, regardless of whether results of previous aneuploidy screening were low risk or testing was declined. We recommend a third-trimester ultrasound examination to evaluate growth and consideration of weekly antenatal fetal surveillance beginning at 36 0/7 weeks of gestation (GRADE 1C); (12) for fetuses with isolated urinary tract dilation A1, we recommend an ultrasound examination at ≥32 weeks of gestation to determine if postnatal pediatric urology or nephrology follow-up is needed. For fetuses with urinary tract dilation A2-3, we recommend an individualized follow-up ultrasound assessment with planned postnatal follow-up (GRADE 1C); (13) for fetuses with isolated shortened humerus, femur, or both, we recommend a third-trimester ultrasound examination for reassessment and evaluation of growth (GRADE 1C). abstract_id: PUBMED:1410370 Isolated choroid plexus cysts in the second-trimester fetus: is amniocentesis really indicated? Choroid plexus (CP) cysts have been associated with trisomy 18, although most fetuses with CP cysts are normal. Since many fetuses with trisomy 18 have other sonographic abnormalities, the necessity of obtaining a karyotype for all fetuses with isolated CP cysts remains controversial. The authors prospectively studied 234 second-trimester fetuses with sonographically discovered CP cysts. Two hundred twenty of them had no other sonographic findings. None of these 220 normal fetuses had evidence of aneuploidy at amniocentesis or an anomaly at birth. Fourteen fetuses had major anomalies detected in utero: 11 had trisomy 18, one had triploidy, and two had normal karyotypes but were structurally abnormal. While size and bilaterality of the CP cysts were not helpful in predicting aneuploidy, the meticulous anatomic survey of fetuses with CP cysts allowed successful identification of all aneuploid fetuses. These data show that the yield of abnormal karyotypes in fetuses with isolated CP cysts is low and may not justify the risk of amniocentesis. abstract_id: PUBMED:7943072 Isolated choroid plexus cyst(s): an indication for amniocentesis. 
Objective: Our purpose was to prospectively evaluate the risk of chromosomal abnormalities associated with isolated choroid plexus cyst(s) in gravid women undergoing second-trimester ultrasonographic examination. Study Design: During a 24-month period 9100 pregnant women underwent midtrimester ultrasonographic evaluation. Women with a fetal diagnosis of choroid plexus cyst(s) were offered amniocentesis and a repeat examination in 4 to 6 weeks. Results: A diagnosis of choroid plexus cyst(s) was made in 102 fetuses (1.1%). In four of these fetuses multiple congenital anomalies were noted. Three of the four fetuses had a chromosomal abnormality, two trisomy 18 and one unbalanced translocation, t(3;13). In the remaining 98 fetuses the choroid plexus cysts were isolated findings, that is, there were no other ultrasonographically detected anomalies. Seventy-five of these 98 fetuses underwent amniocentesis. An abnormal karyotype was identified in four fetuses: three had Down syndrome (two trisomy 21 and one unbalanced translocation, t[14;21]), and one trisomy 18. The offspring of the 23 patients in whom amniocentesis was declined were phenotypically normal. Conclusions: In our prospective study the risk of chromosomal abnormality with isolated choroid plexus cyst(s) was 1:25, a risk that exceeds the 1:200 risk of pregnancy loss after amniocentesis and the 1:126 and 1:260 risk for aneuploidy and Down syndrome, respectively, in a 35-year-old pregnant woman during the midtrimester. These findings indicate that amniocentesis should be offered to pregnant women in the presence of isolated fetal choroid plexus cyst(s). abstract_id: PUBMED:8539507 Amniocentesis and single choroid plexus cyst. Current status. Management of a patient with a diagnosed choroid plexus cyst (CPC) is probably one of the most difficult of all prenatal diagnostic problems. Similarity between the risk of chromosomopathy due to the appearance of CPC only and the risk of fetal mortality due to amniocentesis (both being about 1/200) is such that an individual approach must be adopted in each case. The couple must be given a full explanation of all the details, which will enable them to finally decide whether a conservative attitude is appropriate or, on the contrary, if a specific diagnosis should be sought by amniocentesis. abstract_id: PUBMED:33122027 Prenatal chromosomal microarray analysis in 2466 fetuses with ultrasonographic soft markers: a prospective cohort study. Background: Soft markers are nonspecific findings detected by ultrasonography during the second trimester that are often transient and nonpathologic but may imply an increased risk of underlying fetal aneuploidy. However, large-scale prospectively stratified studies focusing on the prevalence of chromosomal aberrations, including copy number variants, in fetuses with different types of isolated soft markers have rarely been published in the literature. Objective: This study aimed to investigate clinical outcomes in fetuses with isolated soft markers by single nucleotide polymorphism array with long-term follow-up and to propose a diagnostic algorithm based on specific types of soft markers. Study Design: The prevalence of fetal isolated soft markers was 13.2% (7869 of 59,503). A total of 2466 fetuses with ultrasonographic soft markers during the second trimester, which were subjected to single nucleotide polymorphism array with long-term follow-up, were selected in this prospective study over a 5-year period. Soft markers were categorized into 12 groups.
The demographic profile and chromosomal microarray analysis detection results were analyzed and compared among different groups. Results: The overall prevalence of chromosomal aberrations in fetuses with soft markers was 4.3% (107 of 2466), which comprised 40.2% with numeric chromosomal abnormalities, 48.6% with pathogenic copy number variants, and 11.2% with likely pathogenic copy number variants. The incidence of numeric chromosomal abnormalities was significantly higher in the multiple soft markers group (5.5% vs 1.5%; P=.001) and the thickened nuchal fold group (8.3% vs 1.7%; P=.024). Meanwhile, the incidence of pathogenic copy number variants was significantly higher in the multiple soft markers group (5.5% vs 2.4%; P=.046) and the short femur length group (6.6% vs 2.2%; P<.0001). The incidences of pathogenic copy number variants in fetuses with isolated echogenic intracardiac focus, enlarged cisterna magna, choroid plexus cysts, echogenic bowel, or single umbilical artery were lower than 1.5%. The normal infant rate in fetuses without chromosomal aberrations was 91.7%; however, it was significantly lower in the mild ventriculomegaly (86.2% vs 93.0%; P<.0001) and short femur length groups (71.4% vs 93.6%; P<.0001). Conclusion: The potential chromosomal aberrations and clinical prognoses varied widely among different types of isolated soft markers. Pathogenic copy number variants are more often present in specific soft markers, especially when multiple soft markers are found. Thus, a specific soft marker type-based prenatal genetic testing algorithm was proposed. abstract_id: PUBMED:8953631 Is genetic amniocentesis warranted when isolated choroid plexus cysts are found? Our aim was to evaluate the prevalence of trisomy 18 in the setting of isolated fetal choroid plexus cysts and then to consider the risk of trisomy 18 versus the risks of genetic amniocentesis. Fetuses with choroid plexus cysts were prospectively obtained from a total mid-trimester population of 18861 fetuses with known outcomes. Fetuses with trisomy 18 formed the study group, and those with normal karyotypes formed the control group. Scans were retrospectively reviewed for the characterization of cysts according to size, laterality, and appearance (simple or complex echo patterns). Chi-square analysis of contingency tables of results was performed. 208/18861 (1.1 per cent) fetuses had choroid plexus cysts. 201/208 (96.6 per cent) were normal fetuses or newborns, while 7/208 (3.4 per cent) of the fetuses with choroid plexus cysts had trisomy 18. Overall, 16 fetuses had trisomy 18 and seven (44 per cent) of these had choroid plexus cysts. 0/16 fetuses had choroid plexus cysts as the only sonographic finding. Although laterality or complexity of the cysts did not correlate with the presence or absence of a cytogenetic abnormality, cysts ≥ 10 mm were more often associated with trisomy 18 than with a normal karyotype (P < 0.01). We conclude that the discovery of choroid plexus cysts in otherwise normal fetuses in the late second trimester does not by itself justify the risks of genetic amniocentesis. abstract_id: PUBMED:9822504 An economic evaluation of prenatal strategies for detection of trisomy 18. Objective: The objective of this study was to perform an economic evaluation of prenatal diagnostic strategies for women who are at increased risk for fetal trisomy 18 caused by either fetal choroid plexus cysts discovered in a conventional sonogram or an abnormal triple screen.
Study Design: Estimates of the prevalence of trisomy 18 in the presence of second-trimester fetal choroid plexus cysts and also in the presence of an abnormal triple screen were made on the basis of previously reported studies. A cost/benefit analysis and cost-effectiveness determination of 3 strategies were performed: (1) no prenatal diagnostic workup of at-risk patients, (2) universal genetic amniocentesis of all at-risk patients, and (3) universal second-trimester targeted genetic ultrasonography of all at-risk patients with amniocentesis (for fetal karyotyping) reserved only for those with abnormal ultrasonography results. Results: The strategy of no prenatal diagnostic workup was the least expensive approach, costing $1,650,000 annually in the United States. The most costly approach was the strategy of universal amniocentesis for detecting fetal trisomy 18 in the presence of either second-trimester choroid plexus cysts or abnormal maternal serum screening, generating an annual cost of approximately $12 million and 40 fetal losses as a result of amniocenteses. The strategy of targeted genetic ultrasonography generated an annual cost of only $5 million and 8 fetal losses as a result of amniocenteses. Conclusions: Routine second-trimester amniocentesis in patients at increased risk for fetal trisomy 18 caused by either the presence of fetal choroid plexus cysts or abnormal triple screening is not justified from the cost/benefit point of view. abstract_id: PUBMED:24179631 Sonographic markers for early diagnosis of fetal malformations. Fetal malformations are very frequent in industrialized countries. Although advanced maternal age may affect pregnancy outcome adversely, 80%-90% of fetal malformations occur in the absence of a specific risk factor for parents. The only effective approach for prenatal screening is currently represented by an ultrasound scan. However, ultrasound methods present two important limitations: the substantial absence of quantitative parameters and the dependence on the sonographer's experience. In recent years, together with the improvement in transducer technology, quantitative and objective sonographic markers highly predictive of fetal malformations have been developed. These markers can be detected at early gestation (11-14 wk) and generally are not pathological in themselves but have an increased incidence in abnormal fetuses. Thus, prenatal ultrasonography during the second trimester of gestation provides a "genetic sonogram", including, for instance, nuchal translucency, short humeral length, echogenic bowel, echogenic intracardiac focus and choroid plexus cyst, that is used to identify morphological features of fetal Down's syndrome with a potential sensitivity of more than 90%. Other specific and sensitive markers can be seen in the case of cardiac defects and skeletal anomalies. In the future, sonographic markers could further limit the use of invasive and dangerous techniques of prenatal diagnosis (amniocentesis, etc.). Answer: Based on the provided abstracts, genetic amniocentesis may not be routinely indicated when isolated choroid plexus cysts (CPC) or echogenic cardiac foci (EF) are noted on prenatal ultrasound, particularly in the absence of other risk factors. A retrospective analysis indicated that the incidence of abnormal karyotype was 0% in patients with isolated CPC or EF, suggesting that genetic amniocentesis for isolated CPC or EF does not appear to be necessary in the absence of other risk factors (PUBMED:17547911).
Similarly, another study found that no fetus was born with a major structural or chromosomal abnormality in the group of low-risk women with isolated signs, and that despite advances in screening technology, low-risk women are still referred to specialist services for these soft signs and undergo unnecessary follow-up, NIPT, and amniocentesis (PUBMED:34760567). The Society for Maternal-Fetal Medicine recommends against diagnostic testing for aneuploidy solely for the evaluation of an isolated soft marker following a negative serum or cell-free DNA screening result. For isolated CPC in patients with no previous aneuploidy screening, they recommend counseling to estimate the probability of trisomy 18 and a discussion of options for noninvasive aneuploidy screening with cell-free DNA or quad screen if cell-free DNA is unavailable or cost-prohibitive (PUBMED:34171388). Another study prospectively evaluated the risk of chromosomal abnormalities associated with isolated CPC and found that the yield of abnormal karyotypes in fetuses with isolated CPC is low and may not justify the risk of amniocentesis (PUBMED:1410370). However, one study suggested that the risk of chromosomal abnormality with isolated CPC was 1:25, which exceeds the risk of pregnancy loss after amniocentesis, indicating that amniocentesis should be offered to pregnant women in the presence of isolated fetal CPC (PUBMED:7943072). In conclusion, the current consensus based on the provided abstracts leans towards not routinely recommending genetic amniocentesis for isolated CPC or EF on prenatal ultrasound, especially when other risk factors are not present and when noninvasive screening results are negative. However, patient counseling and consideration of individual circumstances are important in the decision-making process.
Instruction: Pressurized bag pump and syringe pump arterial flushing systems: an unrecognized hazard in neonates? Abstracts: abstract_id: PUBMED:12415454 Pressurized bag pump and syringe pump arterial flushing systems: an unrecognized hazard in neonates? Objective: Hand-held flushing of radial arterial lines at 0.5 ml/s in neonates can result in retrograde embolization of flush solution into the central arterial circulation. We studied flush flow velocities during intermittent arterial line purging using a flow regulating device with an infusion bag pump and a syringe pump system. Measurements And Interventions: In this in vitro experiment we simulated flushing of a 24- and a 22-G cannula against a mean arterial pressure of 45 mmHg. Fluid flow velocities were gravimetrically measured during flushing from an infusion bag system pressurized to 100, 200, and 300 mmHg and from a syringe pump flush system after initialization of boluses of 0.5, 1.0, 1.5, 2.0, and 2.5 ml. The flow regulating device was opened for 1, 2, and 5 s. Results: Both flush systems tested allowed delivery of flush flow velocities exceeding 0.5 ml/s (e.g., 22-G cannula; bag system, pressure 300 mmHg up to 0.64 ± 0.08 ml/s; syringe pump, 2 ml bolus up to 0.74 ± 0.05 ml/s). In syringe pump systems the main determinant of flow velocity was bolus size; in bag pump systems, flushing time and bag pressure. Conclusions: Based on data about critical flow velocities through a radial arterial cannula in neonates, both tested flushing systems carry the risk of exceeding the critical value of 0.5 ml/s. They are likely to cause retrograde embolization of flushing solution into the central arterial circulation with the associated risk of clot and air embolization. In vivo studies should identify margins of safety to minimize the risk of retrograde flushing into the central arterial circulation. abstract_id: PUBMED:35509919 A low-cost push-pull syringe pump for continuous flow applications. Syringe pumps are very useful tools to ensure a constant and pulsation-free flow rate; however, usability is limited to batch processes. This article shows an open-source method for manufacturing a push-pull syringe pump, valid for continuous processes, easy to build, low-cost and programmable. The push-pull syringe pump (PPSP) is driven by an Arduino nano ATmega328P which controls a NEMA 17 in microstepping via the A4988 stepper driver. The Push-Pull Syringe Pump setup is configurable by means of a digital encoder and an OLED screen programmed using C++. A PCB was designed and built to facilitate the assembly of the device. The continuous flow is guaranteed by four non-return valves and a dampener, which has been sized and optimized for use on this device. Finally, tests were carried out to evaluate the flow rates and the linearity of the flow. The device is achievable with a cost of less than 100 €. abstract_id: PUBMED:12472710 Flush volumes delivered from pressurized bag pump flush systems in neonates and small children. Background: The aim of this study was to measure the volumes of fluid delivered with a fast flush bolus from a flow regulating device. Methods: In-vitro fast flush bolus volumes, the volumes delivered from a bag pump flush system while opening the flow regulating device for 1, 2 or 5 s, were gravimetrically measured through a 22-G and a 24-G cannula.
In-vivo 1- and 2-s fast flush bolus volumes and the volume required to purge the tubing between stopcock and arterial cannula from visible blood after blood sampling were recorded in 12 anaesthetized neonates and infants (mean age 2.17 ± 1.97 months, range 0.26-5.37 months) with a 24-G radial arterial cannula by continuously weighing the bag pump flush system at manometer pressures of 100, 200 and 300 mmHg. Results: In-vitro fast flush bolus volumes ranged from 0.23 ± 0.04 ml (1-s, 100 mmHg, 24-G cannula) to 2.95 ± 0.38 ml (5-s, 300 mmHg, 22-G cannula). Volumes were larger using a 22-G cannula than a 24-G cannula (P < 0.01) and increased with longer flushing periods (P < 0.0001) and higher manometer pressures (P < 0.0001). In-vivo 1- and 2-s fast flush bolus volumes correlated well with driving pressures (infusion pressure minus mean arterial pressure) (r² = 0.81/0.72). 1-s fast flush bolus volumes delivered (ml) were 0.0025 × driving pressure (mmHg) and 2-s fast flush bolus volumes delivered (ml) were 0.0043 × driving pressure (mmHg). The mean volume delivered to purge blood from the arterial pressure tubing was 0.94 ± 0.18 ml (range 0.61-1.34 ml). Conclusions: Fast bolus flushing from pressurized infusion bag systems, using the flow regulating device tested, can be applied during neonatal and paediatric anaesthesia without delivering uncontrolled amounts of fluid. abstract_id: PUBMED:37779821 Low-cost, low-power, clockwork syringe pump. A low-cost ($120 NZD, $75 USD), low-power (1-year battery life), portable, and programmable syringe pump design is presented, which offers an alternative to high-cost commercial devices with limited battery life. Contrary to typical motor-driven syringe pumps, the design utilizes a compression spring coupled with a clockwork escapement mechanism to advance the syringe plunger. Full control over flow-rate and discrete (bolus) deliveries is achieved through actuation of a clockwork escapement using programmable, low-power electronics. The escapement mechanism allows the syringe plunger to advance a fixed linear distance, delivering a dose size of 0.001 ml in the configuration presented. The modular pump assembly is easily reconfigured for different applications by interchanging components to alter the minimum dose size. Testing to IEC 60601-2-24 (2012), the average error of the clockwork syringe pump was 8.0%, 4.0%, and 1.9% for 0.001 ml, 0.002 ml, and 0.01 ml volumes, respectively. An overall mean error of 1.0% was recorded for a flow-rate of 0.01 ml/h. Compared to a commercial insulin pump, the clockwork pump demonstrated reduced variability but greater average error due to consistent over-delivery. Further development of the design and/or manufacture should yield a device with similar performance to a commercial pump. abstract_id: PUBMED:36648629 Effects of infusion tubing line lengths and syringe sizes on infusion system compliance: an experimental study using a syringe-type infusion pump at low flow rate. Ideally, the flow delivery of an infusion system is proportional only to the rate of mechanical actuation of the syringe pump plunger. However, in the real world, overall infusion system compliance may be affected by components such as an extension of tubing lines, or different sizes of syringes. With higher compliance, there may be greater chances of flow irregularity. In this experimental study, we investigated the effects of lengths of infusion lines and syringe sizes on the compliance of syringe pumps with low flow rate (2 ml/h).
In the first experiment, infusion system compliance was measured in various settings by occlusion release. As the infusion tubing length and size of the syringe increased, the time to reach each pressure was delayed and the infusion system compliance increased. The contributions to system compliance from syringes were significantly greater compared to those of extended infusion lines. In the occlusion alarm experiment, the occlusion alarm could be delayed by 69.76 ± 3.98 min for the 50-ml syringe with a 560 cm infusion line set-up. In conclusion, the compliance of a syringe pump system increases as the loaded syringe size or the length of the infusion tubing increases. The occlusion alarm may be much delayed and not useful in highly compliant systems with respect to the potential occlusion of the infusion system, so more attention is required when using a highly compliant infusion system. abstract_id: PUBMED:24225409 Amount of accidental flush by syringe pump due to inappropriate release of occluded intravenous line. Background: An unintended bolus is delivered by the syringe pump if intravenous line occlusion is released in an inappropriate manner. Objective: The aim of this study was to measure the amount of flushed fluid when an occlusion is inappropriately released and to assess the effect of different syringe pump settings (flow rate, alarm setting, size of syringe and syringe pump model) on the flushed amount. Methods: After the stopcock was closed, infusions were started with different model syringe pumps (Terufusion® TE312 and TE332S), different syringe sizes or at different alarm settings. After the occlusion alarm sounded, the occlusion was released and the amount of fluid emerging from the stopcock was measured. Results: The bolus was significantly lower when the alarm was set at a low-pressure setting. The bolus was significantly lower with a 10-ml than a 50-ml syringe. A significant difference was seen only when a 50-ml syringe was used (TE312: 1.99 ± 0.16 ml vs. TE332S: 0.674 ± 0.116 ml, alarm High, p < 0.001). Conclusion: To minimize the amount of accidentally injected medication, a smaller syringe size and a low alarm setting are important. Using a syringe pump capable of reducing the inadvertently administered bolus may be helpful. abstract_id: PUBMED:37668096 Effect of vertical pump position on start-up fluid delivery of syringe pumps used for microinfusion. Background: Connecting and opening a syringe infusion pump to a central venous line can lead to acute anterograde or retrograde fluid shifts depending on the level of central venous pressure. This may lead to bolus events or to prolonged lag times of intravenous drug delivery, being particularly relevant when administering vasoactive or inotropic drugs in critically ill patients using microinfusion. The aim of this study was to assess the effect of syringe pump positioning at different vertical heights on start-up fluid delivery before versus after purging and connecting the pump to the central venous catheter. Methods: This in vitro study measured ante- and retrograde infusion volumes delivered to the central venous line after starting the syringe pump at a set infusion rate of 1 mL/h. In setup one, the pump was first positioned to vertical levels of +43 cm or -43 cm and then purged and connected to a central venous catheter. In setup two, the pump was first purged and connected at zero level and secondarily positioned to a vertical level of +43 cm or -43 cm.
Central venous pressure was adjusted to 10 mmHg in both setups. Results: Positioning of the pump prior to purging and connection to the central venous catheter resulted in a better start-up performance with delivered fluid closer to programmed and expected infusion volumes when compared to the pump first purged, connected, and then positioned. Significant backflow volumes were observed with the pump purged and connected first and then positioned below zero level. No backflow was measured with the pump positioned first below zero level and then purged and connected. Conclusions: Syringe infusion pump assemblies should be positioned prior to purging and connection to a central venous catheter line when starting a new drug, particularly when administering highly concentrated vasoactive or inotropic drugs delivered at low flow rates. abstract_id: PUBMED:32173100 Ten-year outcomes after off-pump versus on-pump coronary artery bypass grafting: Insights from the Arterial Revascularization Trial. Objective: We performed a post hoc analysis of the Arterial Revascularization Trial to compare 10-year outcomes after off-pump versus on-pump surgery. Methods: Among 3102 patients enrolled, 1252 (40% of total) and 1699 patients received off-pump and on-pump surgery (151 patients were excluded because of other reasons); 2792 patients (95%) completed 10-year follow-up. Propensity matching and mixed-effect Cox model were used to compare long-term outcomes. Interaction term analysis was used to determine whether bilateral internal thoracic artery grafting was a significant effect modifier. Results: One thousand seventy-eight matched pairs were selected for comparison. A total of 27 patients (2.5%) in the off-pump group required conversion to on-pump surgery. The off-pump and on-pump groups received a similar number of grafts (3.2 ± 0.89 vs 3.1 ± 0.8; P = .88). At 10 years, when compared with on-pump, there was no significant difference in death (adjusted hazard ratio for off-pump, 1.1; 95% confidence interval, 0.84-1.4; P = .54) or the composite of death, myocardial infarction, stroke, and repeat revascularization (adjusted hazard ratio, 0.92; 95% confidence interval, 0.72-1.2; P = .47). However, off-pump surgery performed by low volume off-pump surgeons was associated with a significantly lower number of grafts, increased conversion rates, and increased cardiovascular death (hazard ratio, 2.39; 95% confidence interval, 1.28-4.47; P = .006) when compared with on-pump surgery performed by on-pump-only surgeons. Conclusions: The findings showed that in the Arterial Revascularization Trial, off-pump and on-pump techniques achieved comparable long-term outcomes. However, when off-pump surgery was performed by low-volume surgeons, it was associated with a lower number of grafts, increased conversion, and a higher risk of cardiovascular death. abstract_id: PUBMED:12670817 Arterial fast bolus flush systems used routinely in neonates and infants cause retrograde embolization of flush solution into the central arterial and cerebral circulation. Purpose: To evaluate the risk of retrograde embolization of flush solution in neonates and infants with routinely used electronic syringe pumps and infusion bag pump flush systems. Methods: With hospital Ethical Committee approval we studied intubated neonates and infants with a 24-GA radial arterial cannula. 
Fast flush boluses were delivered from the infusion bag pump flush system by opening the flow regulating device for two seconds at bag pump manometer pressures of 100, 200 and 300 mmHg. In the syringe pump flush system, fast flush bolus volumes of 0.5, 1.0, 1.5 and 2.0 mL were programmed on the electronic syringe pump and released by opening the flow regulating device for two seconds. A 12-MHz ultrasonic probe placed in the jugular fossa was used to detect white bubbles indicating retrograde embolization of flush solution into the ipsilateral subclavian and common carotid artery. Results: Sixteen patients, aged 1-105 days (median 22 days), were studied. In all patients retrograde embolization into the subclavian artery was detected at syringe pump bolus volumes of 0.5-1.5 mL and at 100-200 mmHg bag pump pressure. In nine of the 16 patients a positive signal was detected in the common carotid artery with 1.5-2.0 mL syringe pump bolus volumes and at 200-300 mmHg bag pump pressure. Conclusions: In neonates and infants, the standard practice of arterial fast bolus flushing using syringe pump and bag pump flush systems causes retrograde embolization of flush solution into the central arterial and even into the cerebral circulation. The mandatory limitation of fast flush bolus volumes and manometer pressures is urgent in order to reduce retrograde embolization of flush solution and the associated risks in these small patients. abstract_id: PUBMED:30673627 Challenges in Implementing an E-Learning Education Program for Syringe Pump Use. To prevent the incidence of risks imputable to human error during the process of preparing the infusion pump, clarity in teaching and learning is required. Because traditional classroom training is difficult and time-consuming, the aim of the present study was to challenge the implementation of an e-learning education program for syringe pump use. The impact of the e-learning program was evaluated with 100 nurses between March and June 2016. The e-learning program significantly increased general baseline knowledge in syringe pump use; however, aspects of the program confused participants. The feedback from 98% of nurses on the e-learning program was that it was effective in helping them with this skill. However, only 54% opted for the e-learning program versus traditional training. The present study showed nurses preferred a blended learning format. Based on our hospital incident and error reports, this study shows local training requires a specific approach strategy for syringe pump education. Answer: The use of pressurized bag pump and syringe pump arterial flushing systems in neonates may indeed pose an unrecognized hazard. Studies have shown that hand-held flushing of radial arterial lines at a flow velocity of 0.5 ml/s can result in retrograde embolization of flush solution into the central arterial circulation in neonates (PUBMED:12415454). Both pressurized bag pump systems and syringe pump systems have been found to deliver flush flow velocities that exceed this critical value, which could potentially cause retrograde embolization of flushing solution, with the associated risk of clot and air embolization (PUBMED:12415454). In an in vitro experiment, it was demonstrated that both flush systems tested allowed delivery of flush flow velocities exceeding 0.5 ml/s, with the main determinant of flow velocity being bolus size in syringe pump systems and flushing time and bag pressure in bag pump systems (PUBMED:12415454).
The study concluded that both flushing systems carry the risk of exceeding the critical flow velocity through a radial arterial cannula in neonates, suggesting that there is a potential risk of causing retrograde embolization into the central arterial circulation (PUBMED:12415454). Another study found that arterial fast bolus flush systems used routinely in neonates and infants can cause retrograde embolization of flush solution into the central arterial and cerebral circulation (PUBMED:12670817). This study detected retrograde embolization into the subclavian artery at syringe pump bolus volumes of 0.5-1.5 mL and at 100-200 mmHg bag pump pressure. Furthermore, a positive signal was detected in the common carotid artery with larger syringe pump bolus volumes and higher bag pump pressures (PUBMED:12670817). The findings indicate that standard practice of arterial fast bolus flushing using these systems can lead to retrograde embolization, emphasizing the need for mandatory limitations on fast flush bolus volumes and manometer pressures to reduce the associated risks in small patients (PUBMED:12670817). In conclusion, the evidence suggests that pressurized bag pump and syringe pump arterial flushing systems can indeed be an unrecognized hazard in neonates, and safety measures should be implemented to minimize the risk of retrograde flushing into the central arterial circulation.
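As an informal numeric illustration of the relationships reported above, the Python sketch below combines the in-vivo regression from PUBMED:12472710 (1-s fast flush bolus volume in ml ≈ 0.0025 × driving pressure in mmHg; 2-s bolus ≈ 0.0043 × driving pressure, where driving pressure is bag pressure minus mean arterial pressure) with the 0.5 ml/s critical flow velocity discussed in PUBMED:12415454. Averaging the predicted volume over the flush duration is a simplifying assumption made here for illustration only and is not part of either study.

# Regression coefficients from the in-vivo bag pump data in PUBMED:12472710 (24-G cannula).
ML_PER_MMHG = {1: 0.0025, 2: 0.0043}   # flush duration (s) -> ml per mmHg of driving pressure
CRITICAL_VELOCITY_ML_PER_S = 0.5       # threshold cited in PUBMED:12415454

def predicted_flush(bag_pressure_mmhg, mean_arterial_pressure_mmhg, duration_s):
    """Predict bolus volume (ml) and a crude average flow velocity (ml/s) for a fast flush."""
    driving_pressure = bag_pressure_mmhg - mean_arterial_pressure_mmhg
    volume_ml = ML_PER_MMHG[duration_s] * driving_pressure
    mean_velocity = volume_ml / duration_s   # simplifying assumption: flow averaged over the flush
    return volume_ml, mean_velocity, mean_velocity > CRITICAL_VELOCITY_ML_PER_S

# Example: 300 mmHg bag pressure, 45 mmHg mean arterial pressure, 1-s flush.
# Yields roughly 0.64 ml over 1 s, i.e. above the 0.5 ml/s threshold.
print(predicted_flush(300, 45, 1))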
Instruction: Gómez-López-Hernández syndrome in a child born to consanguineous parents: new evidence for an autosomal-recessive pattern of inheritance? Abstracts: abstract_id: PUBMED:24690526 Gómez-López-Hernández syndrome in a child born to consanguineous parents: new evidence for an autosomal-recessive pattern of inheritance? Background: Gómez-López-Hernández syndrome is a rare genetic disease characterized by scalp alopecia with trigeminal anesthesia, brachycephaly or turribrachycephaly, midface retrusion, and rhombencephalosynapsis. We report the second case with this condition who presented with consanguineous parents. Patient: This boy was evaluated shortly after birth because of suspected craniosynostosis. He was the only son of healthy, consanguineous parents (his maternal grandmother and his paternal great-grandfather were siblings). His examination was notable for turribrachycephaly, prominent forehead, bilateral parietotemporal alopecia, midfacial retrusion, anteverted nostrils, micrognathia, low-set and posteriorly rotated ears, and short neck with redundant skin. Radiographs and tridimensional computed tomography scan of skull revealed lambdoid craniosynostosis. Brain magnetic resonance imaging revealed complete rhombencephalosynapsis, aqueductal stenosis, fused colliculi, abnormal superior cerebellar peduncle, mild ventriculomegaly, and dysgenesis of the corpus callosum. Conclusions: Since its first description, 34 patients with this condition have been reported. The etiology of Gómez-López-Hernández syndrome is unknown. However, it is noteworthy that the patient in this report presented with a family history of consanguinity because this finding reinforces the possibility of an autosomal-recessive inheritance for this condition. abstract_id: PUBMED:30181940 Co-occurrence of Gomez-Lopez-Hernandez syndrome and Autism Spectrum Disorder: Case report with review of literature. We report on Gomez-Lopez-Hernandez syndrome (GLHS) in a Caucasian patient, Georgian, 36 months, male, only child born to non-consanguineous parents. There were no similar cases in the family and among close relatives. MRI study confirmed rhombencephalosynapsis (fusion of cerebellar hemispheres in combination with the agenesis of cerebellar vermis) and mild dilation of the lateral ventricles. Other main findings are bilateral parieto-temporal alopecia and brachiturricephaly (broad skull shape and tower-like elongation of the cranium in the vertical axis), low-set posteriorly retracted ears, strabismus (in the right eye), hypotonia (Beighton scale score 6) and ataxia (trouble maintaining balance). The patient has no signs of trigeminal anesthesia; no recurrent, painless eye infections, corneal opacities, or ulcerated wounds on the facial skin and buccal mucosa were observed. Based on the scientific literature, we suggest that a finding of brachiturricephaly in addition to rhombencephalosynapsis and bilateral alopecia is sufficient to make a diagnosis of GLHS. The patient did not speak, disregarded guardians and the clinician addressing him, did not make eye contact, was restless and occasionally displayed aggression and self-injurious behavior. These symptoms confirm the earlier diagnosis of Autism Spectrum Disorder (ASD). Therefore, the current study describes a case of co-occurrence of GLHS and ASD. abstract_id: PUBMED:22427147 Gómez-López-Hernández syndrome: another consideration in corneal neurotrophic ulcers. Purpose: To report a case of a Gómez-López-Hernández syndrome (GLHS) variant with corneal neurotrophic ulcer.
Methods: Case report and review of the literature. Results: A 6-year-old child presented with watering in the right eye for 3 days without ocular inflammation or pain. He had a peculiar facial phenotype and scalp alopecia in the right side. Slit-lamp examination showed an epithelial defect in the right eye and a corneal scar around the defect. Belmonte noncontact esthesiometry showed reduced corneal mechanosensory and thermal sensitivity. In vivo confocal microscopy revealed the absence of innervation in the right cornea. There was also an evident insensitivity in the alopecic region. Despite normal magnetic resonance imaging, the phenotypic manifestations along with ocular features suggested the diagnosis of a GLHS variant. Conclusions: Patients with GLHS remain asymptomatic even when they develop a corneal ulcer. Parents should be advised regarding the susceptibility of an affected child to the development of corneal lesions and the importance of regular follow-up and prompt treatment to prevent vision-threatening abnormalities. abstract_id: PUBMED:18342593 Gomez-Lopez-Hernandez syndrome: an easily missed diagnosis. Gomez-Lopez-Hernandez syndrome (GLHS) is a rare syndrome comprising the triad rhombencephalosynapsis (RS), parietal alopecia, and trigeminal anesthesia. Other typical findings are skull abnormalities, craniofacial dysmorphic signs, and short stature. Intellectual impairment is typical but cases with normal cognitive functions have also been reported. Only 15 cases of GLHS have been described so far, all sporadic. We report four further patients with GLHS: one neonate, two children and a middle aged man. In all cases the diagnosis was made only in retrospect; one child died as neonate due to esophageal atresia. All patients presented RS and parietal alopecia, three intermittent head stereotypies, two had obvious trigeminal anesthesia, and one normal cognition. Alopecia and also trigeminal anesthesia can be very mild and can be easily missed. However, the dysmorphic signs including bilateral alopecia are already present in the neonatal period and are highly suggestive of GLHS. RS should be looked for in this situation. It is important to mention that neuroimaging does not allow distinguishing between isolated RS and GLHS. If RS is diagnosed the clinical signs of GLHS should be sought. The diagnosis of GLHS can only be made by the combination of the typical dysmorphic signs and neuroimaging in the neonatal period, but not prenatally. abstract_id: PUBMED:25339626 Trigeminal nerve agenesis with absence of foramina rotunda in Gómez-López-Hernández syndrome. Gómez-López-Hernández syndrome (GLHS) is a clinical condition traditionally characterized by rhombencephalosynapsis (RS), parieto-occipital alopecia, and trigeminal anesthesia. It is a neurocutaneous disorder with no known etiology. The underlying cause of the trigeminal anesthesia in GLHS has not been examined or reported; it has merely been identified on clinical grounds. In this report, a 10-month-old white female born at 37 weeks gestational age with GLHS underwent a contrast-enhanced CT for the evaluation of craniofacial dysmorphic features. Thin-section bone algorithm images showed absence of bilateral foramina rotunda and trigeminal nerve fibers. The maxillary branch of the trigeminal nerve passes through the foramen rotundum and carries sensory information from the face. This case is unique because trigeminal nerve absence has not been suggested as a possible etiology for trigeminal anesthesia associated with GLHS. 
It is not known how many cases of GLHS have agenesis of the trigeminal nerve; however, a review of the literature suggests that this patient is the first. The triad of RS, alopecia, and trigeminal anesthesia is specific to GLHS; therefore, early identification of trigeminal nerve agenesis in patients with RS could expedite diagnosis of GLHS, particularly given that the clinical diagnosis of trigeminal anesthesia in neonates is a challenging one. Diagnosing alopecia in newborns is likewise challenging. Early diagnosis could allow for early intervention, especially for ophthalmic complications, which are known to have significant long-term effects. This case illustrates the benefits of CT imaging in the detection of trigeminal nerve and foramina rotunda abnormalities in neonates with suspected GLHS. abstract_id: PUBMED:34607749 The Forgotten Phacomatoses: A Neuroimaging Review of Rare Neurocutaneous Disorders. Phakomatoses, or neurocutaneous syndromes, are a heterogeneous group of rare genetic disorders that predominantly affect structures arising from the embryonic ectoderm, namely the skin, eye globe, retina, tooth enamel, and central nervous system. Other organs are also involved in some syndromes, mainly cardiovascular, pulmonary, renal, and musculoskeletal systems. Currently, more than sixty distinct entities belonging to this category have been described in the literature. Common phakomatoses include conditions like Neurofibromatosis and Tuberous sclerosis. Several review papers have focused on various aspects of these common conditions, including clinical presentation, genetic and molecular basis, and neuroimaging features. In this review, we focus on rare neurocutaneous syndromes: Melanophakomatoses (Ie, Neurocutaneous Melanosis, and Incontinentia Pigmenti), Vascular Phakomatoses (Ie, Ataxia Telangiectasia and PHACE Syndrome), and other conditions such as Cowden Syndrome, Basal Nevus Syndrome, Schwannomatosis, Progressive Facial Hemiatrophy, Gomez-Lopez-Hernandez Syndrome, Wyburn-Mason Syndrome, CHILD Syndrome, and Proteus Syndrome. We also review the neuroradiologic manifestations of these conditions as a guide for neurologists and neuroradiologists in their daily practice. abstract_id: PUBMED:17483961 Gomez-Lopez-Hernandez syndrome (cerebello-trigeminal-dermal dysplasia): description of an additional case and review of the literature. Gomez-Lopez-Hernandez syndrome is a very rare genetic disorder with a distinct phenotype (OMIM 601853). To our knowledge there have been seven cases documented to date. We report on an additional male patient now aged 15 8/12 years with synostosis of the lambdoid suture, partial scalp alopecia, corneal opacity, mental retardation and striking phenotypic features (e.g., brachyturricephaly, hypertelorism, midface hypoplasia and low-set ears) consistent with Gomez-Lopez-Hernandez syndrome. In early childhood the patient demonstrated aggressive behavior and raging periods. He also had seizures that were adequately controlled by medication. Magnetic resonance imaging (MRI) revealed rhombencephalosynapsis, i.e., a rare fusion of the cerebellar hemispheres, also consistent with Gomez-Lopez-Hernandez syndrome. In addition a lipoma of the quadrigeminal plate was observed, a feature not previously described in the seven patients reported in the literature. Cytogenetic and subtelomere analyses were inconspicuous. 
Microarray-based comparative genomic hybridization (array-CGH) testing revealed five aberrations (partial deletions of 1p21.1, 8q24.23, 10q11.2, Xq26.3 and partial duplication of 19p13.2), which, however, have been classified as normal variants. Array-CGH results have not been published for the previously reported children. The combination of certain craniofacial features, including partial alopecia, and the presence of rhombencephalosynapsis on MRI are suggestive of Gomez-Lopez-Hernandez syndrome. Children with this syndrome should undergo a defined social pediatric protocol including EEG diagnostics, ophthalmological investigation, psychological testing, management of behavioral problems and genetic counseling. abstract_id: PUBMED:32382369 Diagnosis of rhomboencephalosynapsis by MRI in a 5-year-old child. Rhombencephalosynapsis is a rare defect in the development of the cerebellum, characterized by partial or total agenesis of the vermis, with fusion of the cerebellar hemispheres in the midline. It is usually related to Gómez-López-Hernández Syndrome (cerebellar-trigeminal dermal dysplasia) and the VACTERL association (vertebral defects, vascular anomalies, anal atresia, cardiac anomalies, tracheoesophageal fistula with esophageal atresia, renal dysplasia and limb anomalies). A 5-year-old female patient with spastic cerebral palsy and hydrocephalus is presented here; neuroimaging documented the following features: absence of the cerebellar vermis, periventricular leukomalacia, descent of the cerebellar tonsils and rhombencephalosynapsis. Other important imaging findings and associations in these cases are discussed. abstract_id: PUBMED:32454266 Developmental risk for mood dysregulation in a pediatric case of Gómez-López-Hernández syndrome: Neurocognitive considerations. N/A abstract_id: PUBMED:34678507 Gómez-López-Hernández syndrome: a case report on pediatric neurotrophic corneal ulcers and review of the literature. We present a case of Gómez-López-Hernández syndrome (GLHS), a rare neurocutaneous syndrome, in a 10-month-old girl with neurotrophic keratopathy secondary to trigeminal anesthesia. Answer: The case of Gómez-López-Hernández syndrome (GLHS) in a child born to consanguineous parents provides new evidence for a possible autosomal recessive pattern of inheritance for this condition. The reported patient was the only son of healthy, consanguineous parents, with a family history where his maternal grandmother and his paternal great-grandfather were siblings (PUBMED:24690526). This finding is significant because consanguinity increases the likelihood of autosomal recessive disorders, as parents who are related are more likely to carry the same recessive gene mutations. The presence of GLHS in a child of consanguineous parents reinforces the possibility that the syndrome could be inherited in an autosomal recessive manner, although the etiology of GLHS remains unknown (PUBMED:24690526). It is important to note that this is not a definitive conclusion, as the inheritance pattern of GLHS has not been fully established. However, the consanguinity in the family of the affected child suggests that further genetic studies could be valuable in determining the inheritance pattern of GLHS.
Instruction: Injury to the suprascapular nerve during superior labrum anterior and posterior repair: is a rotator interval portal safer than an anterosuperior portal? Abstracts: abstract_id: PUBMED:25125380 Injury to the suprascapular nerve during superior labrum anterior and posterior repair: is a rotator interval portal safer than an anterosuperior portal? Purpose: The purpose of this study was to compare the risk of injury to the suprascapular nerve during suture anchor placement in the glenoid when using an anterosuperior portal versus a rotator interval portal. Methods: Ten bilateral fresh human cadaveric shoulders were randomized to anchor placement through the anterosuperior portal on one shoulder and the rotator interval portal on the contralateral shoulder. Standard 3 × 14 mm suture anchors were placed in the glenoid rim (1 o'clock, 11 o'clock, and 10 o'clock positions for the right shoulder). The suprascapular nerve was dissected. When glenoid perforation occurred, the distance from the anchor tip to the suprascapular nerve, the distance from the glenoid rim to the suprascapular nerve, and the drill-hole depth at each entry site were recorded. Results: All far-posterior anchors perforated the glenoid rim when using the anterosuperior or rotator interval portal. The distance from the far-posterior anchor tip to the suprascapular nerve averaged 8 mm (range, 3.4 to 14 mm) for the anterosuperior portal and 2.1 mm (range, 0 to 5.5 mm) for the rotator interval portal (P ≤ .001). Conclusions: Using an anterosuperior or rotator interval portal results in consistent penetration of 1 o'clock and 2 o'clock posterior anchors and might place the suprascapular nerve at risk of iatrogenic injury. Based on closer proximity of the anchor tip to the suprascapular nerve, the risk of injury is significantly greater with a rotator interval portal. Clinical Relevance: Using a rotator interval portal for suture anchor placement in the posterior aspect of the glenoid rim can lead to a higher likelihood of suprascapular nerve injury. abstract_id: PUBMED:27026026 Drilling through lateral transmuscular portal lowers the risk of suprascapular nerve injury during arthroscopic SLAP repair. Purpose: The aim of our study was to evaluate the risk of medial glenoid perforation and possible injury to suprascapular nerve during arthroscopic SLAP repair using lateral transmuscular portal. Methods: Ten cadaveric shoulder girdles were isolated and drilled at superior glenoid rim from both anterior-superior portal (1 o'clock) and lateral transmuscular portal (12 o'clock) for SLAP repairs. Drill hole depth was determined by the manufacturer's drill stop (20 mm), and any subsequent drill perforations through the medial bony surface of the glenoid were directly confirmed by dissection. The bone tunnel depth and subsequent distance to the suprascapular nerve, scapular height and width, were compared for investigated locations. Results: Four perforations out of ten (40 %) occurred through anterior-superior portal with one associated nerve injury. One perforation out of ten (10 %) occurred through lateral transmuscular portal without any nerve injury. The mean depth was calculated as 17.6 mm (SD 3) for anterior-superior portal and 26.5 mm (SD 3.6) for lateral transmuscular portal (P < 0.001). Conclusions: It is anatomically possible that suprascapular nerve could sustain iatrogenic injury during labral anchor placement during SLAP repair. 
However, the lateral transmuscular portal with a 12 o'clock drill entry location has a lower risk of suprascapular nerve injury compared with the anterior-superior portal at the 1 o'clock drill entry location. abstract_id: PUBMED:28935431 Evaluation of Risk to the Suprascapular Nerve During Arthroscopic SLAP Repair: Is a Posterior Portal Safer? Purpose: The purpose of this study was to compare the risk of glenoid perforation during SLAP repair for suture anchors placed through an anterolateral portal versus a posterolateral portal of Wilmington. Methods: Ten bilateral cadaveric shoulders were randomized to suture anchor placement through an anterolateral portal on one shoulder and a posterolateral portal on the contralateral shoulder. Anchors were placed into anterior, posterior, and far posterior positions on the glenoid rim (1 o'clock, 11 o'clock, and 10 o'clock positions for right shoulders). The shoulder was then dissected, and the distance from the suture anchor tip to the nerve was measured if perforation occurred. The maximum load and failure mechanism of each anchor was assessed with a materials testing system machine. Results: Only 2 of 20 anchors placed in the posterosuperior glenoid through the posterolateral portal perforated compared with 16 of 20 of the anchors placed through the anterolateral portal (P < .05). The mean distance from the perforated anchor tip to the suprascapular nerve was 2.5 ± 1.4 mm for the anterolateral portal and 4.4 ± 0.6 mm for the posterolateral portal (P = .18). We did not observe a significant difference in biomechanical strength (P > .05). Conclusions: There is a high rate of glenoid perforation in close proximity to the suprascapular nerve when placing anchors in the posterosuperior glenoid through an anterolateral portal. Use of the posterolateral portal results in a much lower incidence of glenoid perforation for anchors placed in the posterosuperior glenoid, but there is a higher risk of glenoid perforation for an anchor placed in the anterosuperior glenoid from the posterolateral portal. Clinical Relevance: There is a higher risk of injury to the suprascapular nerve when suture anchors are placed in the posterosuperior glenoid through an anterolateral portal compared with a posterolateral portal for SLAP repair. abstract_id: PUBMED:23961305 Posttraumatic persistent shoulder pain: Superior labrum anterior-posterior (SLAP) lesions. Patient: Male, 57 FINAL DIAGNOSIS: Type 2 Superior labrum anterior-posterior lesion Symptoms: Shoulder pain after trauma Medication: - Clinical Procedure: - Specialty: Orthopedics and Traumatology • Emergency Medicine. Objective: Rare disease. Background: Due to the anatomical and biomechanical characteristics of the shoulder, traumatic soft-tissue lesions are more common than osseous lesions. Superior labrum anterior-posterior (SLAP) lesions are an uncommon cause of shoulder pain. A SLAP lesion is an injury or separation of the superior glenoid labrum where the long head of the biceps adheres. SLAP lesions are usually not seen on plain direct radiographs. Shoulder MRI and magnetic resonance arthrography are useful for diagnosis. Case Report: A 57-year-old man was admitted to the emergency department due to a low fall on his shoulder. On physical examination, active and passive shoulder motion was normal except for painful extension. Anterior-posterior shoulder x-ray imaging was normal. The patient required orthopedics consultation in the emergency observation unit due to persistent shoulder pain.
Shoulder MRI, performed for diagnosis, detected a type II SLAP lesion. The patient was referred to a tertiary hospital because arthroscopy was not available in our hospital. Conclusions: Shoulder traumas are usually soft-tissue injuries with no findings on x-rays. SLAP lesion is an uncommon cause of traumatic shoulder pain. For this reason, we recommend orthopedic consultation in post-traumatic persistent shoulder pain. abstract_id: PUBMED:20470614 SLAP Injuries - Superior Labrum Anterior Posterior. From May to December 1996 the author treated at his Clinic five patients with injuries of the anchoring of the superior labrum and the anchoring of the long head of the biceps. Snyder describes these injuries as SLAP - superior labrum anterior posterior. Only the development of arthroscopic technique provided new findings on these lesions. Anamnestic data and the clinical picture closely resemble the impingement syndrome. Here too irritation of the rotator cuff occurs, but the cause is intraarticular. Theoretical work provides evidence that injuries of this type lead to a reduction of the torsional rigidity of the shoulder joint and reduced tension of the lower glenohumeral ligament. This, along with chronic overburdening by so-called overhead activity, creates prerequisites for the development of microinstability of the shoulder. This is why acromioplasty does not lead to improvement. As regards treatment, a conservative procedure dominates, focused on correct conditioning of the brachial plexus (in particular of the stabilizers of the scapula), administration of non-steroidal anti-inflammatory drugs (NSAIDs), intraarticular corticoid administration and physical therapy; surgery is the last method of choice. Depending on the type of lesion it is recommended to treat the superior labrum and the tendon of the biceps, and depending on the degree of instability, the operation is supplemented by stabilization. Debridement of the rotator cuff is not always necessary; acromioplasty is performed when the acromion has shape II or III. The presented results are preliminary. In the first three patients the condition improved and at present they pursue sports as formerly. Another two patients are still undergoing rehabilitation treatment. Key words: SLAP lesion, superior labrum anterior posterior, microinstability, surgery of SLAP, arthroscopy, stabilization surgery and acromioplasty in the treatment of SLAP lesions. abstract_id: PUBMED:26301179 Arthroscopic approach to the posterior compartment of the knee using a posterior transseptal portal. Arthroscopic surgery of the posterior compartment of the knee is difficult when only two anterior portals are used for access because of the inaccessibility of the back of the knee. Since its introduction, the posterior transseptal portal has been widely employed to access lesions in the posterior compartment. However, special care should be taken to avoid neurovascular injuries around the posteromedial, posterolateral, and transseptal portals. Most importantly, popliteal vessel injury should be avoided when creating and using the transseptal portal during surgery. The purpose of the present study is to describe how to avoid neurovascular injuries while establishing the three posterior portals and to introduce our safer technique for creating the transseptal portal. To date, we have performed arthroscopic surgeries via the transseptal portal in the posterior compartments of 161 knees and have not encountered nerve or vascular injury.
In our procedure, the posterior septum is perforated with a 1.5-3.0-mm Kirschner wire that is protected by a sheath inserted from the posterolateral portal and monitored from the posteromedial portal to avoid popliteal vessel injury. abstract_id: PUBMED:38073408 A Randomised Control Trial Comparing the Outcomes of Anterior with Posterior Approach for Transfer of Spinal Accessory Nerve to Suprascapular Nerve in Brachial Plexus Injuries. Background: In brachial plexus surgery, a key focus is restoring shoulder abduction through spinal accessory nerve (SAN) to suprascapular nerve (SSN) transfer using either the anterior or posterior approach. However, no published randomised control trials have directly compared their outcomes to date. Therefore, our study aims to assess motor outcomes for both approaches. Methods: This study comprises two groups of patients. Group A: anterior approach (29 patients), Group B: Posterior approach (29 patients). Patients were allocated to both groups using selective randomisation with the sealed envelope technique. Functional outcome was assessed by grading the muscle power of shoulder abductors using the British Medical Research Council (MRC) scale. Results: Five patients who were operated on by posterior approach had ossified superior transverse suprascapular ligament. In these cases, the approach was changed from posterior to anterior to avoid injury to SSN. Due to this reason, the treatment analysis was done considering the distribution as: Group A: 34, Group B: 24. The mean duration of appearance of first clinical sign of shoulder abduction was 8.16 months in Group A, whereas in Group B, it was 6.85 months, which was significantly earlier (p < 0.05). At the 18-month follow-up, both intention-to-treat analysis and as-treated analysis were performed, and there was no statistical difference in the outcome of shoulder abduction between the approaches for SAN to SSN nerve transfer. Conclusions: Our study found no significant difference in the restoration of shoulder abduction power between both approaches; therefore, either approach can be used for patients presenting early for surgery. Since the appearance of first clinical sign of recovery is earlier in posterior approach, therefore, it can be preferred for cases presenting at a later stage. Also, the choice of approach is guided on a case to case basis depending on clavicular fractures and surgeon preference to the approach. Level of Evidence: Level II (Therapeutic). abstract_id: PUBMED:20371192 Injury of the suprascapular nerve during arthroscopic repair of superior labral tears: an anatomic study. Hypothesis: The purpose of this cadaveric anatomic study was to investigate the risk of iatrogenic suprascapular nerve injury during the standard drilling techniques in arthroscopic superior labrum anterior-posterior (SLAP) repairs. Materials And Methods: Cadaveric shoulder girdles were isolated and drilled at the glenoid peripheral rim by use of standard arthroscopic equipment reproducing common drill locations and portal orientations for SLAP repairs. Drill hole depth was determined by the manufacturer's drill stop (20 mm), and any subsequent drill perforations through the medial bony surface of the glenoid were directly confirmed by dissection. The suprascapular nerve was then isolated to note the presence of any observable direct nerve injury from the drilling. The bone tunnel depth, subsequent distance to the suprascapular nerve, scapular height and width, and humeral length were also recorded. 
Results: Eighteen drill perforations through the medial glenoid wall occurred in 8 of 21 cadavers (38%). Twelve perforations occurred through anterosuperior drill holes with only one associated nerve injury. Six perforations occurred through low posterosuperior drill holes with four associated nerve injuries. Five of the six shoulders with low posterosuperior perforation also had an associated anterior perforation. No perforations occurred through high posterosuperior drill holes. Of the specimens, 5 had bilateral involvement (4 female and 1 male). Specimens with a perforation had a significantly shorter scapular height (P = .007) and humeral length (P = .01). Conclusions: The suprascapular nerve is at risk for direct injury during arthroscopic SLAP repairs from penetration of the medial glenoid with arthroscopic drill equipment in cadavers. abstract_id: PUBMED:26188985 What is the function of the anterior coracoscapular ligament? - a morphological study on the newest potential risk factor for suprascapular nerve entrapment. Background: The suprascapular notch (SSN) is the most common site of suprascapular nerve neuropathy, which may be brought on by the presence of a deep, narrow SSN and structures restricting the space for the nerve. The anterior coracoscapular ligament (ACSL) is a fibrous band extending on the anterior side of the suprascapular notch. As it may tighten the osteo-fibrous tunnel for the nerve, it has been proposed as a new anatomical risk factor in its entrapment. However, this structure occurs in up to 60% of patients, many of whom do not demonstrate any nerve injury. The aim of this work is to evaluate the association between the occurrence of the ACSL and SSN morphology. Materials And Methods: The suprascapular notch region was dissected in 100 formalin-fixed, cadaveric shoulders. The ACSL (if present) and the SSN were assigned to a classification based on their morphology and diameters. Statistical analysis was performed. Results: The ACSL was present in 52 scapulae (52%) and in all cases, the suprascapular nerve travelled superior to the ACSL. Mechanically relevant types of ACSL were found to occur significantly more often when deeper notches were present (p=0.0018). Conclusions: The mechanically efficient ACSL is more common in deep and narrow SSNs, which are associated with suprascapular nerve neuropathy. Thus, by supporting the nerve, the ACSL can protect against its injury. abstract_id: PUBMED:36199504 Is timing of superior labrum anterior to posterior (SLAP) repair important? A cohort study evaluating the effect of the duration of symptoms prior to surgery on the outcomes of patients who underwent type II SLAP repair. Background: The management of superior labrum anterior to posterior (SLAP) tears is somewhat controversial. It is unclear if the length of time between symptom onset and surgery affects SLAP repair outcomes. Methods: Sixty-one SLAP repairs were retrospectively reviewed pre-operatively and post-operatively at 1, 6, 24 weeks, and > 2 years post-surgery. Patients were allocated to an 'early repair' or 'late repair' group based on time between symptom onset and surgery. Results: Of the 61 patients, 22 patients had surgery within six months of symptom onset. Pre-operatively, 'late repair' patients played a higher level of sport than 'early repair' patients prior to injury. 
Post-operatively, both groups had similar outcomes up to six months, though at six months 'early repair' patients reported a higher level of work than 'late repair' patients (p = 0.01). At > 2 years after surgery, 'early repair' patients had reduced pain and difficulty with overhead activities (p = 0.002), less stiffness (p = 0.001) and were more satisfied than 'late repair' patients (p = 0.04). Conclusions: Up to six months post-operatively, the time between symptom onset and surgery has limited effect on functional outcomes. However, at > 2 years after surgery, earlier repairs (< 6 months) are interestingly associated with better functional outcomes. Further studies are required to determine if this is a causal relationship. Answer: No, a rotator interval portal is not safer than an anterosuperior portal with respect to suprascapular nerve injury during superior labrum anterior and posterior (SLAP) repair. In fact, the risk of injury is significantly greater with a rotator interval portal. A study comparing the two portal approaches found that using either an anterosuperior or rotator interval portal results in consistent penetration of 1 o'clock and 2 o'clock posterior anchors, which might place the suprascapular nerve at risk of iatrogenic injury. Specifically, the distance from the far-posterior anchor tip to the suprascapular nerve averaged 8 mm for the anterosuperior portal and was significantly closer at 2.1 mm for the rotator interval portal (P ≤ .001), indicating a higher likelihood of nerve injury with the rotator interval portal (PUBMED:25125380). Other studies have also evaluated the risk of suprascapular nerve injury during SLAP repair using different portals. One study found that drilling through a lateral transmuscular portal at the 12 o'clock position has a lower risk of suprascapular nerve injury compared with an anterior-superior portal at the 1 o'clock position (PUBMED:27026026). Another study concluded that there is a higher risk of injury to the suprascapular nerve when suture anchors are placed in the posterosuperior glenoid through an anterolateral portal compared with a posterolateral portal (PUBMED:28935431). These findings suggest that the choice of portal and the position of anchor placement can significantly affect the risk of suprascapular nerve injury during SLAP repair.
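The portal comparisons above rest on small cadaveric counts, for example 2 of 20 anchors perforating through the posterolateral portal versus 16 of 20 through the anterolateral portal (PUBMED:28935431). For counts of this size, a Fisher exact test is the usual way to check that such a difference is unlikely to arise by chance; the sketch below simply re-runs that comparison from the published counts and is an illustration of the statistical step, not code from the cited study.

    from scipy.stats import fisher_exact

    # Glenoid perforations in the posterosuperior glenoid reported in PUBMED:28935431:
    # posterolateral portal: 2 of 20 anchors perforated; anterolateral portal: 16 of 20.
    table = [[2, 18],   # posterolateral: perforated, not perforated
             [16, 4]]   # anterolateral:  perforated, not perforated
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.6f}")

The resulting p-value is far below .05, in line with the significance reported in the abstract.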
Instruction: Is inadequate family history a barrier to diagnosis in CADASIL? Abstracts: abstract_id: PUBMED:16218915 Is inadequate family history a barrier to diagnosis in CADASIL? Objectives: Cerebral autosomal dominant arteriopathy with subcortical infarcts and leucoencephalopathy (CADASIL) has typical clinical features that include stroke, migraine, mood disturbances and cognitive decline. However, misdiagnosis is common. We hypothesized that family history is poorly elicited in individuals presenting with features of CADASIL and that enquiry into family history of all four cardinal manifestations of CADASIL is superior to elicitation of family history of premature stroke alone in raising the diagnostic possibility of CADASIL. Materials And Methods: Retrospective review of family histories at presentation in 40 individuals with confirmed CADASIL was performed through structured interview in a Neurovascular Genetics clinic (182 first-degree and 242 second-degree relatives identified). Family history obtained from structured interview was compared to family history initially documented at presentation. Results: At initial presentation, 30% of individuals were inaccurately documented to have no family history of significant neurological illness. Thirty-five per cent of patients had an initial alternative diagnosis. Initial inaccurate documentation of negative family history was more frequent in individuals with an initial alternative diagnosis. After structured interviews, 34% of 182 first-degree and 35% of 242 second-degree relatives of CADASIL patients had history of stroke (16% of first-degree relatives had stroke before the age of 50 years). Forty-three per cent of first-degree and 28% of second-degree relatives had migraine, mood disturbance or cognitive decline. Conclusions: A false-negative family history was commonly documented in individuals presenting with features of CADASIL and was associated with initial misdiagnosis. Restriction of family history to premature stroke alone is probably inadequate to identify affected CADASIL pedigrees. abstract_id: PUBMED:16494813 CADASIL versus multiple sclerosis This case history reports on the second family in Denmark to be diagnosed with CADASIL (cerebral autosomal dominant arteriopathy with subcortical infarcts and leucoencephalopathy). Due to symptoms, signs and paraclinical findings, an initial diagnosis of multiple sclerosis was made. However, MRI findings of leucoencephalopathy in the external capsule and anterior temporal lobes together with negative CSF findings and family history raised suspicions of CADASIL. Skin biopsy with granular osmiophilic material (GOM) and genetic testing showing the NOTCH 3 mutation proved the diagnosis. There is considerable variability in the symptoms of CADASIL; however, most often a family history of migraine, cerebrovascular events and dementia in early life is found. abstract_id: PUBMED:31934998 CADASIL syndrome: differential diagnosis with multiple sclerosis Two cases of clinical and MRI manifestations of genetically verified CADASIL syndrome in female patients under 40 years of age are presented. The primary misinterpretation of clinical data and the neuroimaging results within multiple sclerosis indicates a lack of awareness of radiologists and neurologists about this disease. The article reviewed the current literature on the problems of diagnosis and treatment of CADASIL. 
The clinical and neuroimaging pattern of the syndrome, the approaches to genetic testing and the basic principles of patient management are considered in detail. abstract_id: PUBMED:19542611 On the diagnosis of CADASIL. Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), a genetic arteriopathy related to Notch3 mutations, is difficult to diagnose. The goal of this study was to determine the value of clinical, immunohistochemical, and molecular techniques for the diagnosis of CADASIL. Clinical features and the immunohistochemical and molecular findings are reported for 200 subjects with suspected CADASIL, in whom 93 biopsies and 190 molecular studies were performed. Eighteen pathogenic mutations of the Notch3 gene, six of them previously unreported, were detected in 67 patients. The clinical features did not permit differentiation between CADASIL and CADASIL-like syndromes. The sensitivity and specificity of the skin biopsies were 97.7% and 56.5%, respectively, but increased to 100% and 81.5%, respectively, in cases with proven family history. In conclusion, a clinical diagnosis of CADASIL is difficult to determine and confirmatory techniques should be used judiciously. abstract_id: PUBMED:16796821 CADASIL: a short review of the literature and a description of the first family from Greece. Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is an inherited disease clinically characterized by migraine, subcortical ischemic events, dementia and mood disorders. We present a short review of the literature on the clinical presentation of patients with CADASIL and provide recommendations for the detection and diagnosis of similar cases. We also describe the clinical, radiological and genetic findings of two Greek patients with CADASIL, members of the same family. abstract_id: PUBMED:34352628 A Chinese CADASIL family with p.R578C mutation at exon 11 of the NOTCH3 gene. Objective: To analyze one clinical case of cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), and to perform analysis of the related gene mutation for the proband and her family. Methods: Analysis of clinical data from the patient diagnosed with CADASIL, including clinical manifestations, blood test results and brain imaging results, followed by high-throughput sequencing of blood samples. Pathogenicity assessment of the gene mutation and first-generation verification were performed on some family members according to the genetic variation interpretation standards and guidelines of the American College of Medical Genetics and Genomics (ACMG). Results: Onset in the proband occurred before 50 years of age with recurrent migraine attacks and a positive family history of migraine and stroke, but without risk factors for cerebrovascular disease. The craniocerebral magnetic resonance imaging (MRI) results showed diffuse white matter lesions and thus clinically met the criteria for CADASIL diagnosis. NOTCH3 gene analysis showed a p.R578C mutation (1732 C > T) at the 11th exon on chromosome 19 in the proband and some family members. Conclusions: NOTCH3 mutation is related to CADASIL. In this study, we observed a rather rare familial NOTCH3 mutation in China. This report further supports that the mutation site is pathogenic. abstract_id: PUBMED:30311053 The role of clinical and neuroimaging features in the diagnosis of CADASIL.
Background: Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is the most common familial cerebral small vessel disease, caused by NOTCH3 gene mutations. The aim of our study was to identify clinical and neuroradiological features that would be useful in identifying which patients presenting with lacunar stroke and TIA are likely to have CADASIL. Methods: Patients with lacunar stroke or TIA were included in the present study. For each patient, demographic and clinical data were collected. MRI images were centrally analysed for the presence of lacunar infarcts, microbleeds, temporal lobe involvement, global atrophy and white matter hyperintensities. Results: 128 patients (mean age 56.3 ± 12.4 years) were included. A NOTCH3 mutation was found in 12.5% of them. A family history of stroke, the presence of dementia and external capsule lesions on MRI were the only features significantly associated with the diagnosis of CADASIL. Although thalamic and temporal pole gliosis and severe white matter hyperintensities were less specific for CADASIL diagnosis, the combination of a number of these factors together with a family history of stroke results in a higher positive predictive value and specificity. Conclusions: Careful family history collection and neuroradiological assessment can identify patients in whom NOTCH3 genetic testing has a higher yield. abstract_id: PUBMED:25672735 The new diagnostic methods of CADASIL as differential diagnosis of HDLS. Both hereditary diffuse leukoencephalopathy with axonal spheroids (HDLS) and cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) are autosomal dominant white matter diseases. First symptoms of HDLS are cognitive decline or dementia, while those of CADASIL are migraine or ischemic infarcts. Family histories of young patients with stroke are important, because most patients with CADASIL have such family histories. Temporal pole lesions are specific for CADASIL. However, some of the patients have no such lesions. CADASIL should be differentiated from non-CADASIL by evaluation of the family history or other MRI findings such as confluent external capsular lesions or multiple white matter medullary infarcts. Coronal views of MRI are useful for differentiating ischemic lesions from demyelinated lesions, even if horizontal views of MRI give little information. In addition, evaluation of immunohistochemical staining of Notch3 in frozen skin samples is useful for diagnosis. We developed methods for detecting the light microscopic findings of GOM in frozen sections. To reveal the pathogenesis of CADASIL, it is indispensable to analyze the chemical nature of GOM by histochemical staining. We are going to analyze co-existing proteins or materials in the granular degeneration of small arteries by LC/MS/MS proteomics. abstract_id: PUBMED:37099178 Diffusion prepared pseudo-continuous arterial spin labeling reveals blood-brain barrier dysfunction in patients with CADASIL. Objectives: Diffusion prepared pseudo-continuous arterial spin labeling (DP-pCASL) is a newly proposed MRI method to noninvasively measure the function of the blood-brain barrier (BBB).
We aim to investigate whether the water exchange rate across the BBB, estimated with DP-pCASL, is changed in patients with cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), and to analyze the association between the BBB water exchange rate and MRI/clinical features of these patients. Methods: Forty-one patients with CADASIL and thirty-six age- and sex-matched controls were scanned with DP-pCASL MRI to estimate the BBB water exchange rate (kw). The MRI lesion burden, the modified Rankin scale (mRS), and the neuropsychological scales were also examined. The association between kw and MRI/clinical features was analyzed. Results: Compared with that in the controls, kw in patients with CADASIL was decreased at normal-appearing white matter (NAWM) (t = - 4.742, p < 0.001), cortical gray matter (t = - 5.137, p < 0.001), and deep gray matter (t = - 3.552, p = 0.001). After adjustment for age, gender, and arterial transit time, kw at NAWM was negatively associated with the volume of white matter hyperintensities (β = - 0.754, p = 0.001), whereas decreased kw at NAWM was independently associated with an increased risk of abnormal mRS scale (OR = 1.058, 95% CI: 1.013-1.106, p = 0.011) in these patients. Conclusions: This study found that the BBB water exchange rate was decreased in patients with CADASIL. The decreased BBB water exchange rate was associated with an increased MRI lesion burden and functional dependence of the patients, suggesting the involvement of BBB dysfunction in the pathogenesis of CADASIL. Clinical Relevance Statement: DP-pCASL reveals BBB dysfunction in patients with CADASIL. The decreased BBB water exchange rate is associated with MRI lesion burden and functional dependence, indicating the potential of DP-pCASL as an evaluation method for disease severity. Key Points: • DP-pCASL reveals blood-brain barrier dysfunction in patients with CADASIL. • Decreased BBB water exchange rate, an indicator of BBB dysfunction detected by DP-pCASL, was associated with MRI/clinical features of patients with CADASIL. • DP-pCASL can be used as an evaluation method to assess the severity of disease in patients with CADASIL. abstract_id: PUBMED:24844136 Interpretation of NOTCH3 mutations in the diagnosis of CADASIL. CADASIL is an autosomal dominant inherited disease, characterized by mid-adult onset of cerebrovascular disease and dementia. CADASIL is caused by mutations in the NOTCH3 gene, which encodes the NOTCH3 protein. Pathogenic mutations in CADASIL are highly distinctive in the sense that they lead to the loss or gain of a cysteine residue in 1 of the 34 EGFr domains of the NOTCH3 protein. The majority are missense mutations, but small deletions, insertions and splice-site mutations have been reported, which typically also lead to a numerical cysteine alteration. Whether numerical cysteine-altering mutations are a rule in CADASIL remains subject of debate, as there are reports suggesting pathogenicity of other types of mutations. However, for most of these the association with CADASIL was later revoked or is questionable. Here, we discuss and provide recommendations for the interpretation of NOTCH3 mutations in the diagnosis of CADASIL. Answer: Yes, inadequate family history is a barrier to diagnosis in CADASIL. 
A retrospective review of family histories at presentation in 40 individuals with confirmed CADASIL showed that at initial presentation, 30% of individuals were inaccurately documented to have no family history of significant neurological illness. This inaccurate documentation was more frequent in individuals with an initial alternative diagnosis, and after structured interviews, a significant proportion of first-degree and second-degree relatives of CADASIL patients had a history of stroke, migraine, mood disturbance, or cognitive decline (PUBMED:16218915). The study concluded that a false-negative family history was commonly documented in individuals presenting with features of CADASIL and was associated with initial misdiagnosis, suggesting that restriction of family history to premature stroke alone is probably inadequate to identify affected CADASIL pedigrees. Furthermore, the importance of family history in raising suspicions of CADASIL is supported by a case history where an initial diagnosis of multiple sclerosis was made, but MRI findings and family history led to suspicions of CADASIL, which was later confirmed by skin biopsy and genetic testing (PUBMED:16494813). Another study also indicated that the primary misinterpretation of clinical data and neuroimaging results within multiple sclerosis points to a lack of awareness about CADASIL among radiologists and neurologists (PUBMED:31934998). In conclusion, an accurate and comprehensive family history is crucial for the diagnosis of CADASIL, and inadequate family history can be a significant barrier to proper diagnosis.
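One way to make the value of an accurate family history concrete is through pre-test probability: the skin-biopsy sensitivity and specificity quoted above (97.7% and 56.5% overall, rising to 100% and 81.5% with a proven family history; PUBMED:19542611) translate into very different positive predictive values depending on how likely CADASIL is before testing. The short sketch below applies Bayes' rule to show this; the pre-test probabilities are purely illustrative assumptions, not figures from the cited studies.

    def positive_predictive_value(sensitivity, specificity, pretest_prob):
        # P(disease | positive test) by Bayes' rule.
        true_pos = sensitivity * pretest_prob
        false_pos = (1.0 - specificity) * (1.0 - pretest_prob)
        return true_pos / (true_pos + false_pos)

    # Sensitivity/specificity from PUBMED:19542611; pre-test probabilities are illustrative only.
    scenarios = [("all suspected cases", 0.977, 0.565), ("proven family history", 1.000, 0.815)]
    for label, sens, spec in scenarios:
        for pretest in (0.10, 0.30):
            ppv = positive_predictive_value(sens, spec, pretest)
            print(f"{label}, pre-test probability {pretest:.0%}: PPV = {ppv:.0%}")

Under these assumptions, a positive biopsy in a patient with a documented family history carries a substantially higher predictive value than the same result in an unselected patient, which is one quantitative way of reading the conclusion that missed family histories contribute to misdiagnosis.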
Instruction: Virtual reality colonoscopy simulation: a compulsory practice for the future colonoscopist? Abstracts: abstract_id: PUBMED:34231475 The Effect of Virtual Reality on Pain and Anxiety During Colonoscopy: A Randomized Controlled Trial. Background: The aim of the study is to evaluate the effect of a virtual reality application during colonoscopy on the pain and anxiety experienced by patients. Methods: The study was conducted as an experimental, randomized, controlled trial. The study was carried out between October 15, 2017 and May 20, 2018 in the Endoscopy Unit of a Public Hospital in northern Turkey. The study sample consisted of 60 patients who underwent colonoscopy. The patients were divided into 2 groups by using simple randomization. The patients in the experimental group watched virtual reality applications during colonoscopy, whereas the patients in the control group underwent the standard colonoscopy protocol. Colonoscopy was performed on patients in both groups by the same gastroenterologist without the use of anesthesia. Demographic data for both groups, pain levels during and after the procedure, and anxiety levels before and after the procedure were evaluated. Results: The mean age of the patients in the experimental group was 56.33 ± 11.81 years, and the mean age of the patients in the control group was 56.20 ± 15.62 years. There was no statistically significant difference between the pre- and post-operative state anxiety score averages of the patients in the experimental and control groups. There was a statistically significant difference between the two groups in trait anxiety scores (P < .000) and pain scores (P < .03) during the procedure. Conclusion: The virtual reality application was found to reduce patients' pain during the colonoscopy procedure. The virtual reality application, an easily available, inexpensive, and non-invasive method, can be used by nurses in pain management during colonoscopy. abstract_id: PUBMED:34935997 Virtual and augmented reality in urology. Although continuous technological developments have optimized and evolved medical care throughout time, these technologies were mostly still comprehensible for users. Driven by immense financial efforts, modern innovative products and technical solutions are transforming medicine today and will do so even more in the future: virtual and augmented reality. This review critically summarizes the current literature and future uses of virtual and augmented reality in the field of urology. abstract_id: PUBMED:34177721 Looking Back From the Future: Perspective Taking in Virtual Reality Increases Future Self-Continuity. In the current study, we tested a novel perspective-taking exercise aimed at increasing the connection participants felt toward their future self, i.e., future self-continuity. Participants role-played as their successful future self and answered questions about what it feels like to become their future self and the path to get there. The exercise was also conducted in a virtual reality environment and in vivo to investigate the possible added value of the virtual environment with respect to improved focus, perspective-taking, and effectiveness for participants with less imagination. Results show that the perspective-taking exercise in virtual reality substantially increased all four domains of future self-continuity, i.e., connectedness, similarity, vividness, and liking, while the in vivo equivalent increased only liking and vividness.
Although connectedness and similarity were directionally, but not significantly, different between the virtual and in vivo environments, neither focus, perspective taking, nor individual differences in imagination could explain this difference, which suggests a small, but non-significant, placebo effect of the virtual reality environment. However, lower baseline vividness in the in vivo group may explain this difference and suggests preliminary evidence for the dependency of the connectedness and similarity domains upon baseline vividness. These findings show that the perspective-taking exercise in a VR environment can reliably increase the future self-continuity domains. abstract_id: PUBMED:37278621 The Effects of Virtual Reality Glasses on Vital Signs and Anxiety in Patients Undergoing Colonoscopy: A Randomized Controlled Trial. Colonoscopy is a painful procedure that causes anxiety and changes in vital signs. Pain and anxiety may cause patients to avoid colonoscopy, which is a preventive and curative healthcare service. The aim of this study was to examine the effects of virtual reality glasses on the vital signs (blood pressure, pulse, respiration, oxygen saturation, and pain) and anxiety in patients undergoing colonoscopy. The population of the study consisted of 82 patients who underwent colonoscopy without sedation between January 2, 2020, and September 28, 2020. Post-power analysis was performed with 44 patients who agreed to participate in the study, met the inclusion criteria, and were followed up for pre- and post-tests. The experimental group participants (n = 22) watched a 360° virtual reality video through virtual reality glasses whereas the control group participants (n = 22) underwent a standard procedure. Data were collected using a demographic characteristics questionnaire, the Visual Analog Scale-Anxiety, Visual Analog Scale-Pain, Satisfaction Evaluation Form, and monitoring of vital signs. The experimental group participants had significantly lower levels of pain, anxiety, systolic blood pressure, and respiratory rate and significantly higher peripheral oxygen saturation during colonoscopy than the control group participants. The majority of the experimental group participants were satisfied with the application. Virtual reality glasses have a positive effect on vital signs and anxiety during colonoscopy. abstract_id: PUBMED:29370860 Colonoscopy procedure simulation: virtual reality training based on a real time computational approach. Background: Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. Methods: In this paper, a realistic and real-time interactive simulator for training the colonoscopy procedure is presented, which can even include polypectomy simulation. Our approach models the colonoscope as thick flexible elastic rods with different resolutions that are dynamically adaptive to the curvature of the colon. More material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. Conclusion: We present a simulator for training the colonoscopy procedure.
In addition, we propose a set of key aspects of our simulator that give fast, high fidelity feedback to trainees. We also conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy. abstract_id: PUBMED:34683070 Development and Effect of Virtual Reality Practice Program for Improving Practical Competency of Caregivers Specializing in Dementia. The number of dementia patients in Korea is increasing with the increase in the elderly population. Accordingly, the importance of the role of the caregivers, who are the main care worker other than the family, is increasing. Therefore, in this study, a virtual reality practice program was developed to enhance the practical competency of caregivers who take care of dementia patients, and the effects were analyzed. The caregiver said that among the mental behaviors of dementia patients, aggression and delusion were the most difficult. Based on this information, a practice program was developed by realizing a case of a male dementia patient who expressed refusal to bathing help as an aggressive behavior due to delusion in virtual reality, and the effect of the virtual reality practice program was analyzed for five caregivers. As a result, 'interest in new teaching methods', 'improving concentration of practical education based on real cases', and 'increasing confidence in caring for dementia patients' were found. As this study is a pilot test, it is necessary to repeat the study with more subjects in the future, and to develop virtual reality implementation cases for various mental and behavioral symptoms. abstract_id: PUBMED:35293870 Virtual Reality in Clinical Practice and Research: Viewpoint on Novel Applications for Nursing. Virtual reality is a novel technology that provides users with an immersive experience in 3D virtual environments. The use of virtual reality is expanding in the medical and nursing settings to support treatment and promote wellness. Nursing has primarily used virtual reality for nursing education, but nurses might incorporate this technology into clinical practice to enhance treatment experience of patients and caregivers. Thus, it is important for nurses to understand what virtual reality and its features are, how this technology has been used in the health care field, and what future efforts are needed in practice and research for this technology to benefit nursing. In this article, we provide a brief orientation to virtual reality, describe the current application of this technology in multiple clinical scenarios, and present implications for future clinical practice and research in nursing. abstract_id: PUBMED:37929329 Virtual reality interventions to reduce psychological distress during colonoscopy: a rapid review. Introduction: Colonoscopy can cause psychological distress in patients, consequently discouraging patients from undergoing an unpleasant procedure or reducing compliance with follow-up examinations. This rapid review aimed to assess the feasibility and efficacy of Virtual Reality (VR) interventions during colonoscopy on patients' perceived psychological distress and procedure satisfaction. 
Areas Covered: We searched PubMed, CINAHL, ProQuest/All Databases, and Cochrane Library databases on 1 December 2022, with a date limiter of 2002-2022 for articles that investigated the effect and feasibility of any type of immersive VR-based intervention on patients' pain, anxiety, discomfort, and procedure satisfaction immediately before, during, and/or post-procedure of colonoscopy. Expert Opinion: Initially, 118 articles were identified, of which seven were eligible and included in this rapid review. Our findings demonstrate that VR interventions during colonoscopy were feasible, significantly reduced participant pain and anxiety, and significantly increased participant satisfaction with the procedure. VR interventions appear to be an effective alternative for patients who prefer to avoid analgetic medications or as an adjunct to routine sedation during colonoscopy. Directions of research design should focus on an optimized blinding process, using the high-end technology of 3-dimensional devices, considering an audiovisual distracting intervention, and designing multicenter and high-quality Randomized Controlled Trials. abstract_id: PUBMED:35252318 Virtual Reality in the Neurosciences: Current Practice and Future Directions. Virtual reality has made numerous advancements in recent years and is used with increasing frequency for education, diversion, and distraction. Beginning several years ago as a device that produced an image with only a few pixels, virtual reality is now able to generate detailed, three-dimensional, and interactive images. Furthermore, these images can be used to provide quantitative data when acting as a simulator or a rehabilitation device. In this article, we aim to draw attention to these areas, as well as highlight the current settings in which virtual reality (VR) is being actively studied and implemented within the field of neurosurgery and the neurosciences. Additionally, we discuss the current limitations of the applications of virtual reality within various settings. This article includes areas in which virtual reality has been used in applications both inside and outside of the operating room, such as pain control, patient education and counseling, and rehabilitation. Virtual reality's utility in neurosurgery and the neurosciences is widely growing, and its use is quickly becoming an integral part of patient care, surgical training, operative planning, navigation, and rehabilitation. abstract_id: PUBMED:33606946 Virtual Reality Therapy in Mental Health. Initially designed for the treatment of phobias, the use of virtual reality in phobic disorders has expanded to other mental health disorders such as posttraumatic stress disorder, substance-related disorders, eating disorders, psychosis, and autism spectrum disorder. The goal of this review is to provide an accessible understanding of why this approach is important for future practice, given its potential to provide clinically relevant information associated with the assessment and treatment of people suffering from mental illness. Most of the evidence is available for the use of virtual reality exposure therapy in anxiety disorders and posttraumatic stress disorder. There is hardly any evidence that virtual reality therapy is effective in generalized anxiety disorder and obsessive-compulsive disorder. There is increasing evidence that cue exposure therapy is effective in addiction and eating disorders. 
Studies into the use of virtual reality therapy in psychosis, autism spectrum disorder, and attention deficit hyperactivity disorder (ADHD) are promising. Answer: The use of virtual reality (VR) in medical training, including for colonoscopy procedures, has been gaining attention due to its potential to enhance the learning experience and improve clinical skills without putting patients at risk. The abstract from PUBMED:29370860 discusses a realistic and real-time interactive simulator for training colonoscopy procedures, which includes features such as modeling the colonoscopy as thick flexible elastic rods to simulate the behavior of the colonoscope. This simulator is presented as an alternative to the traditional 'see one, do one, teach one' training style, which is resource-intensive and may be less effective. The simulator aims to provide fast, high-fidelity feedback to trainees and has undergone initial validation to determine its clinical utility and efficacy. While the abstract does not explicitly state that VR colonoscopy simulation will become a compulsory practice for future colonoscopists, it does suggest that VR simulators offer a promising alternative for training. The benefits of such simulators include the ability to practice procedures in a controlled environment, receive immediate feedback, and potentially accelerate the learning curve for complex procedures like colonoscopy. Given the advancements in VR technology and its applications in medical training, as highlighted in the abstracts from PUBMED:34935997 and PUBMED:35252318, it is plausible that VR simulation could become an integral part of the training curriculum for future colonoscopists. However, the decision to make VR simulation a compulsory practice would likely depend on further research into its effectiveness, the availability of resources, and the integration of VR training into medical education standards and accreditation requirements.
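
As an aside on the simulator mechanics mentioned above: the cited work models the colonoscope as thick flexible elastic rods, for which many implementations exist. The following is only a minimal mass-spring stand-in (not the authors' method; all parameters are assumed, illustrative values) showing how a discretised flexible shaft can be stepped in time to produce interactive feedback:

# Minimal mass-spring stand-in for a flexible shaft (illustrative only; not the cited simulator).
import numpy as np

n_nodes, rest_len = 20, 0.05        # rod discretisation and segment length (m) -- assumed values
k_stretch, damping = 500.0, 2.0     # spring stiffness (N/m) and velocity damping -- assumed values
mass, dt = 0.01, 1e-3               # node mass (kg) and time step (s) -- assumed values

pos = np.zeros((n_nodes, 3))
pos[:, 0] = np.arange(n_nodes) * rest_len   # start as a straight rod along x
vel = np.zeros_like(pos)

def step(pos, vel, tip_force):
    """Advance the rod one time step with semi-implicit Euler."""
    force = np.zeros_like(pos)
    seg = pos[1:] - pos[:-1]                          # segment vectors between neighbours
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    direction = seg / np.maximum(length, 1e-9)
    f = k_stretch * (length - rest_len) * direction   # Hooke's law along each segment
    force[:-1] += f
    force[1:] -= f
    force -= damping * vel
    force[-1] += tip_force                            # operator input pushing the tip
    vel = vel + dt * force / mass
    vel[0] = 0.0                                      # proximal end held fixed
    return pos + dt * vel, vel

for _ in range(1000):                                 # one simulated second
    pos, vel = step(pos, vel, np.array([0.0, 0.02, 0.0]))
print("tip position after 1 s:", pos[-1])

A production simulator would replace this sketch with a proper elastic-rod model, collision handling against the colon wall, and haptic force output; the point here is only the general time-stepping structure behind interactive feedback.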
Instruction: Does gestational sac volume predict the outcome of missed miscarriage managed expectantly? Abstracts: abstract_id: PUBMED:12404517 Does gestational sac volume predict the outcome of missed miscarriage managed expectantly? Purpose: The aim of this study was to investigate whether gestational sac volume (GSV) can predict the outcome of missed miscarriages that are managed expectantly. Methods: This was a prospective observational study. Between February 1, 2000, and January 31, 2001, all patients with a confirmed first-trimester missed miscarriage who chose to undergo expectant management were recruited to participate. A single investigator performed all sonographic examinations and measurements. The main outcome measure was a complete spontaneous abortion within 4 weeks of the initial diagnosis. A complete miscarriage was defined as a maximum anteroposterior diameter of the endometrium of less than 15 mm on transvaginal sonography and no persistent heavy vaginal bleeding. The patients could opt to undergo surgery at any time, but those who had not expelled the products of conception within 4 weeks of the diagnosis were advised to have surgical uterine evacuation. Results: In total, 90 patients were enrolled, and 86 patients completed the study. The mean GSV, as measured by 3-dimensional sonography, was 9.7 +/- 8.9 ml, and the mean sac diameter was 24.5 +/- 8.0 mm. A significant exponential correlation was found between the mean sac diameter and the GSV (r = 0.86; p < 0.0001). Forty-six (53.5%) of the 86 patients experienced a complete miscarriage within 4 weeks of the diagnosis (ie, expectant management was successful), but expectant management was unsuccessful in the remaining 40 (46.5%) patients (5 had an incomplete miscarriage, and 35 did not expel the products of conception). The GSV did not differ significantly between the "successful" and "unsuccessful" groups (p = 0.82). A logistic regression analysis showed no significant correlation between GSV and the outcome of missed miscarriages managed expectantly (p = 0.59). Conclusions: The GSV does not predict the outcome of expectant management of missed miscarriage within 4 weeks of the diagnosis. abstract_id: PUBMED:25944606 Measurement of gestational sac volume in the first trimester of pregnancy. Objective: The aim of our study was to measure the volume of the gestational sac and amniotic sac in physiological pregnancies and missed abortion, and to create nomograms for individual weeks of gestation. Design: Retrospective cohort study. Setting: Institute for the Care of Mother and Child, Prague. Methods: The study included 413 women after spontaneous conception. The patients were divided into two groups: women with a physiological pregnancy and delivery at term (374), and women whose pregnancy was terminated by missed abortion. In both groups, the volumes of the gestational and amniotic sacs were measured in the first trimester of pregnancy. Analysis was performed using the 4D View software application, and volume calculations were performed using VOCAL (Virtual Organ Computer Aided anaLysis). Results: We created the first nomograms in the Czech Republic of gestational and amniotic sac volumes in physiological pregnancies and missed abortion, and assessed the correlation between gestational sac size and pregnancy outcome. Conclusion: In our study we found no correlation between the volume of the gestational sac and the development of the pregnancy.
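
A quick back-of-envelope check (not the VOCAL method itself) shows why mean sac diameter and three-dimensional volume track each other so closely: if the sac were a sphere, volume would scale with the cube of the diameter. Assuming a spherical sac purely for illustration:

# Sphere-volume check: convert a mean sac diameter (mm) to an expected volume (ml).
import math

def sphere_volume_ml(mean_sac_diameter_mm):
    radius_mm = mean_sac_diameter_mm / 2.0
    return 4.0 / 3.0 * math.pi * radius_mm ** 3 / 1000.0   # 1 ml = 1000 mm^3

print(round(sphere_volume_ml(24.5), 1))   # ~7.7 ml for the mean diameter reported above

At the mean sac diameter of 24.5 mm reported in PUBMED:12404517 this gives roughly 7.7 ml, the same order as the measured mean GSV of 9.7 ml; irregular sac shapes are the reason direct 3D measurement (VOCAL) is preferred over a diameter-based formula.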
abstract_id: PUBMED:20533447 Gestational sac volume in missed abortion and anembryonic pregnancy compared to normal pregnancy. Purpose: To compare gestational sac (GS) volume (GSV) between normal pregnancies and missed abortions and anembryonic pregnancies and to determine at what gestational age differences in GS volume become evident. METHODS.: GSV in missed abortion and anembryonic pregnancy were measured using three-dimensional ultrasound and the results were compared with GSV in normal pregnancies. Pregnancies between 6 and 12(+6) gestational weeks of age according to last menstrual period were included in normal pregnancies, missed abortions, and anembryonic pregnancies. Results: There were 141 normal pregnancies and 82 missed or anembryonic abortions. GSV was significantly larger in normal pregnancies than in missed or anembryonic abortion: 27.51 + or - 25.25 cm(3) and 8.04 + or - 10.54 cm(3), respectively (p < 0.001). When stratified by weeks, statistically significant differences were found beginning at 7 weeks, while GSV measurements were not significantly different between the normal and abnormal pregnancies from 6 to 6(+6) weeks. Conclusion: GSV in missed abortion and anembryonic pregnancies is significantly smaller than in normal pregnancies, starting at 7 weeks of gestational age. This finding may be helpful in the diagnosis of missed abortion or anembryonic pregnancies in selected cases. abstract_id: PUBMED:30385346 Yolk sac size & shape as predictors of first trimester pregnancy outcome: A prospective observational study. Objective: To determine the value of yolk sac size and shape for prediction of pregnancy outcome in the first trimester. Material And Methods: 500 pregnant women between 6+0 and 9+6 weeks of gestation underwent transvaginal ultrasound and yolk sac diameter (YSD), gestational sac diameter (GSD) were measured, presence/absence of yolk sac (YS) and shape of the yolk sac were noted. Follow up ultrasound was done to confirm fetal well-being between 11+0 and 12+6 weeks and was the cutoff point of success of pregnancy. Results: Out of 500 cases, 8 were lost to follow up, YS was absent in 14, of which 8 were anembryonic pregnancies. Thus, 478 out of 492 followed up cases were analyzed for YS shape and size and association with the pregnancy outcome. In our study, abnormal yolk sac shape had a sensitivity and specificity (87.06% & 86.5% respectively, positive predictive value (PPV) of 58.2%, negative predictive value (NPV) of 96.8% in predicting a poor pregnancy outcome as compared to yolk sac diameter (sensitivity and specificity 62.3% & 64.1% respectively and PPV and NPV of 27.3% and 88.7% respectively). The degree of association for both the variables was significant to the level of p<0.000. Conclusion: The presence or absence of yolk sac has a strong predictive value for poor pregnancy outcome. Yolk sac shape was a better predictor of poor pregnancy outcome in terms of higher specificity and negative predictive value as compared to yolk sac diameter. abstract_id: PUBMED:3528521 Yolk sac sign: sonographic appearance of the fetal yolk sac in missed abortion. With improving technology, the fetal yolk sac can be routinely visualized sonographically in all living gestations of six to ten weeks. The minimal growth of the yolk sac during this interval and its subsequent obscuration by the growing amniotic sac are verified in this study. 
An important new sign of missed abortion has been inferred by Bernard and Cooperberg (AJR 144:597, 1985), and is titled in this article the "yolk sac sign." A gestational sac of 25 mm or more in mean diameter and empty except for the yolk sac is highly suspicious for nonviable gestation. This one-year prospective study adds nine such cases. To enhance specificity of this sign, additional criteria specify a yolk sac measuring 4 mm, a free-floating position within the gestational sac, and evacuation of the yolk sac on follow-up scan. However, when a ring-like structure measures 3 mm or less and lies peripherally in the gestational sac, this must be presumed to be a potential fetal pole. abstract_id: PUBMED:9435738 Significance of yolk sac measurements with vaginal sonography in the first trimester in the prediction of pregnancy outcome. Background: The purpose of this prospective clinical study was to determine and evaluate the prognostic value of secondary yolk sac diameter of the embryo on pregnancy outcome. Methods: One hundred and thirty pregnant women in the first trimester were included in the study. Crown-rump length (CRL) and yolk sac diameters were measured in every patient and the outcome of the pregnancies were compared with the measurements. Intact normal pregnancy (group A), threatened abortion (group B) and missed abortion (group C) were diagnosed in 67, 43 and 20 pregnancies, respectively. Results: We detected a significant linear correlation between secondary yolk sac diameter and gestational age in group A (r = 0.5085; p < 0.0001) and a moderate correlation in group B (r = 0.4048; p = 0.007) and C patients (r = 0.3478; p = 0.1333). When the groups were evaluated irrespective of gestational age, a significant difference in secondary yolk sac diameters among the groups was noted (p = 0.037). When confidence intervals for secondary yolk sac diameters of intact normal pregnancies (group A) were calculated by linear regression, two patients in group B were below the 5% confidence interval. However, in group C patients, the yolk sac diameter of six patients were detected below the 5% confidence interval, while two of the measurements were above 95% confidence interval. Therefore, eight measurements (40%) of group C patients were outside the 5-95% confidence interval. Conclusion: In the first trimester, when discrepancy is detected between secondary yolk sac diameter and gestational age, additional sonographic investigation should be performed one or two weeks later, in order to estimate the pregnancy outcome. abstract_id: PUBMED:16953856 The quality and size of yolk sac in early pregnancy loss. Background: Accurate differentiation between normal pregnancy and pregnancy loss in early gestation remains a clinical challenge. Aims: To determine whether ultrasound findings of yolk sac size and morphology are valuable in relation to pregnancy loss at six to ten weeks gestation. Methods: Transvaginal ultrasonography was performed in 111 normal singleton pregnancies, 25 anembryonic gestations, and 18 missed abortions. Mean diameters of gestational sac and yolk sac were measured. The relationship between yolk sacs and gestational sacs in normal pregnancies was depicted. The yolk sacs ultrasound findings in cases of pregnancy loss were recorded. Results: In normal pregnancies with embryonic heartbeats, a deformed or an absent yolk sac was never detected. Sequential appearance of yolk sac, embryonic heartbeats and amniotic membrane was essential for normal pregnancy. 
The largest yolk sac in viable pregnancies was 8.1 mm. Findings in anembryonic gestations included an absent yolk sac, an irregular-shaped yolk sac and a relatively large yolk sac (> 95% upper confidence limits, in 11 cases). In cases of missed abortion with prior existing embryonic heartbeats, abnormal findings included a relatively large, a progressively regressing, a relatively small, and a deformed yolk sac (an irregular-shaped yolk sac, an echogenic spot, or a band). Conclusion: A very large yolk sac may exist in normal pregnancy. When embryonic heartbeats exist, the poor quality and early regression of a yolk sac are more specific than the large size of a yolk sac in predicting pregnancy loss. When an embryo is undetectable, a relatively large yolk sac, even of normal shape, may be an indicator of miscarriage. abstract_id: PUBMED:11753506 Volume and vascularity of the yolk sac assessed by three-dimensional and power doppler ultrasound. The yolk sac is an organ of increasingly recognized importance in the initial mechanisms of pregnancy maintenance and the early growth and welfare of the embryo. The aim of our study was to assess the vascularity of the yolk sac and vitelline duct in 150 patients between the 6th and 10th weeks of normal, and uncomplicated pregnancies who were scheduled for termination of pregnancy for psychosocial reasons and 130 complicated pregnancies. In same patients volume of the yolk sac was assessed using Combison 530 3D Voluson, Medison-Kretz Company. Overall visualization rate for yolk sac vessels was 80,38%. The highest visualization rates were obtained in the 7th and 8th weeks of gestation reaching values of 90,71 %. In the same period the visualization rates of the vitelline duct arteries were 87,71% and 91,28% respectively. A characteristic waveform profile included low velocity (5,8+/-1,7 cm/s) and absence of diastolic flow which was obtained from all examined yolk sacs. The PI showed the mean value of 3,24+/-0,94 without significant changes between subgroups (p>0,05). Vitelline vessels showed similar PSV (5,4+/-1,8 cm/s) and PI values (3,14+/0,91) (p>0,05) to that obtained from the yolk sac. Three types of abnormal vascular signals were derived from the yolk sac in patients with missed abortion (n=32): irregular blood flow (n=6), permanent diastolic flow (n=2) and venous blood flow signals (n=5). However, in the most of the patients (n=19) blood flow signals could not have been extracted from these early embryonic structures. Using three-dimensional ultrasound we found a positive correlation between gestational age and volumes of the gestational and yolk sac until 10 weeks gestation. At the end of the first trimester yolk sac volume remained constant, while gestational sac volume continued to grow. It seems that changes in both yolk sac appearance (size, shape, volume and echogenicity) and vascularization are probably a consequence of poor embryonic development or even embryonic death, rather than being a primary cause of an early pregnancy failure. abstract_id: PUBMED:3276195 The yolk sac in early pregnancy failure. An attempt was made to visualize the yolk sac in 845 patients scheduled for chorionic villi sampling. The distribution of yolk sac diameters and the interpolating growth curve up to 11 weeks of development were analyzed in 239 pregnant women who were delivered of normal infants. The highest visualizing rate of the yolk sac in normal pregnancies was 97 at 7 weeks of gestation. 
A total of 130 miscarriages occurred before chorionic villi sampling. In these cases, the diameter of the yolk sac versus crown-rump length tended to be larger than found in normal pregnancies. The visualizing rate of the yolk sac in miscarriages after the embryo had been formed was significantly higher in those women who demonstrated fetal heart activity (82.1%) than in those who did not (54.5%). On the other hand, the yolk sac was observed in 44% of miscarriages without a visible embryo. These findings suggest different types of missed abortion. An abnormal karyotype was observed in 23 of 29 chromosomal analyses performed on aborted specimens. An abnormal karyotype was observed in all eight cases with only a yolk sac-like structure within the gestational sac. abstract_id: PUBMED:22875048 Early embryonic morphology sonographically assessed and its correlation with yolk sac in missed abortion. Purpose: To provide additional information about embryo morphology sonographic assessment and its correlation with yolk sac. Methods: A systematic study in 200 consecutive cases of missed abortion <10 weeks diagnosed by transvaginal ultrasound. Results: In 104 gestations of embryos with morphological abnormalities, 88 (84.6 %) were at least 1 week smaller than expected for gestational age and 16 (15.4 %) were the expected size. From 32 normal morphologic embryos, 7 (21.9 %) were at least 1 week smaller than expected for gestational age, and 25 (78.1 %) were the expected size (p < 0.005). Normal morphologic embryos are linked more frequently with normal yolk sac (62.5 %). Findings in anembryonic gestations (GD1) included an absent yolk sac (46.9 %) and a cystic yolk sac (25 %). Likewise, findings in GD2-3 embryos included more frequently a cystic yolk sac (42.9 %) and an absent yolk sac (32.5 %). GD4 embryos are associated with an echogenic yolk sac (40 %), a relatively small-hypoplastic- (40 %) and a relatively large-cystic- (20 %). In DI embryos, yolk sac appears cystic (62.5 %) or echogenic (37.5 %). Conclusions: Our study proves the correlation between morphology of conceptuses and yolk sac appearance in cases of missed abortion. Answer: No, gestational sac volume (GSV) does not predict the outcome of missed miscarriage managed expectantly. A study investigating this found that there was no significant difference in GSV between the groups with "successful" and "unsuccessful" expectant management of missed miscarriage within 4 weeks of the diagnosis. Logistic regression analysis also showed no significant correlation between GSV and the outcome (PUBMED:12404517). Another study also found no correlation between the volume of the gestational sac and the development of the pregnancy (PUBMED:25944606). Additionally, while gestational sac volume in missed abortion and anembryonic pregnancies is significantly smaller than in normal pregnancies starting at 7 weeks of gestational age, this finding is more helpful in the diagnosis rather than predicting the outcome of the pregnancy (PUBMED:20533447).
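
For readers who want to see the shape of the analysis behind the conclusion above, this is a minimal sketch of a logistic regression of outcome on gestational sac volume, in the spirit of PUBMED:12404517. The data below are simulated (the real study data are not reproduced here), and the statsmodels package is assumed to be available:

# Simulated illustration of a logistic regression of outcome on GSV (not study data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
gsv = rng.gamma(shape=2.0, scale=5.0, size=86)            # ml; roughly matches the reported 9.7 +/- 8.9
complete_miscarriage = rng.binomial(1, 0.535, size=86)    # outcome independent of GSV by construction

X = sm.add_constant(gsv)
fit = sm.Logit(complete_miscarriage, X).fit(disp=False)
print(fit.params)     # intercept and GSV coefficient
print(fit.pvalues)    # a non-significant GSV p-value mirrors the study's finding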
Instruction: Racial disparity in influenza vaccination: does managed care narrow the gap between African Americans and whites? Abstracts: abstract_id: PUBMED:11572737 Racial disparity in influenza vaccination: does managed care narrow the gap between African Americans and whites? Context: Substantial racial disparities exist in use of some health services. Whether managed care could reduce racial disparities in the use of preventive services is not known. Objective: To determine whether the magnitude of racial disparity in influenza vaccination is smaller among managed care enrollees than among those with fee-for-service insurance. Design, Setting, And Participants: The 1996 Medicare Current Beneficiary Survey of a US cohort of 13 674 African American and white Medicare beneficiaries with managed care and fee-for-service insurance. Main Outcome Measures: Percentage of respondents (adjusted for sociodemographic characteristics, clinical comorbid conditions, and care-seeking attitudes) who received influenza vaccination and magnitude of racial disparity in influenza vaccination, compared among those with managed care and fee-for-service insurance. Results: Eight percent of the beneficiaries were African American and 11% were enrolled in managed care. Overall, 65.8% received influenza vaccination. Whites were substantially more likely to be vaccinated than African Americans (67.7% vs 46.1%; absolute disparity, 21.6%; 95% confidence interval [CI], 18.2%-25.0%). Managed care enrollees were more likely than those with fee-for-service insurance to receive influenza vaccination (71.2% vs 65.4%; difference, 5.8%; 95% CI, 3.6%-8.3%). The adjusted racial disparity in fee-for-service was 24.9% (95% CI, 19.6%-30.1%) and in managed care was 18.6% (95% CI, 9.8%-27.4%). These adjusted racial disparities were both statistically significant, but the absolute percentage point difference in racial disparity between the 2 insurance groups (6.3%; 95% CI, -4.6% to 17.2%) was not. Conclusion: Managed care is associated with higher rates of influenza vaccination for both whites and African Americans, but racial disparity in vaccination is not reduced in managed care. Our results suggest that additional efforts are needed to adequately address this disparity. abstract_id: PUBMED:29028558 Determinants of trust in the flu vaccine for African Americans and Whites. Trust is thought to be a major factor in vaccine decisions, but few studies have empirically tested the role of trust in adult immunization. Utilizing a 2015 national survey of African American and White adults (n = 1630), we explore multiple dimensions of trust related to influenza immunization, including generalized trust, trust in the flu vaccine, and trust in the vaccine production process. We find African Americans report lower trust than Whites across all trust measures. When considering demographic, racial, and ideological predictors, generalized trust shows statistically significant effects on both trust in the flu vaccine and trust in the vaccine process. When controlling for demographic, racial, and ideological variables, higher generalized trust was significantly associated with higher trust in the flu vaccine and the vaccine process. When controlling for generalized trust, in addition to the baseline covariates, psychosocial predictors (i.e. risk perception, social norms, knowledge) are significant predictors of trust in flu vaccine and trust in the vaccine process, with significant differences by race. 
These findings suggest that trust in vaccination is complex, and that significant differences in trust between White and African American adults may be contributing to disparities in influenza immunization. abstract_id: PUBMED:25035719 Influenza vaccination in patients with diabetes: disparities in prevalence between African Americans and Whites. Background: Patients with diabetes who contract influenza are at higher risk of complications, such as hospitalization and death. Patients with diabetes are three times more likely to die from influenza complications than those without diabetes. Racial disparities among patients with diabetes in preventive health services have not been extensively studied. Objective: To compare influenza vaccination rates among African Americans and Whites patients with diabetes and investigate factors that might have an impact on racial disparities in the receipt of influenza vaccinations. Methods: A secondary data analysis of 47,283 (unweighted) patients with diabetes from the 2011 Behavioral Risk Factor Surveillance System survey (BRFSS) (15,902,478 weighted) was performed. The survey respondents were asked whether they received an influenza vaccination in the last twelve months. We used logistic regression to estimate the odds of receiving the influenza vaccine based on race. Results: The results indicated a significantly lower proportion of African Americans respondents (50%) reported receiving the influenza vaccination in the last year when compared with Whites respondents (61%). Age, gender, education, health care coverage, health care cost, and employment status were found to significantly modify the effect of race on receiving the influenza vaccination. Conclusions: This study found a significant racial disparity in influenza vaccination rates in adults with diabetes with higher rates in Whites compared to African Americans individuals. The public health policies that target diabetes patients in general and specifically African Americans in the 65+ age group, women, and homemakers, may be necessary to diminish the racial disparity in influenza vaccination rates between African Americans and Whites diabetics. abstract_id: PUBMED:28126202 Exploring racial influences on flu vaccine attitudes and behavior: Results of a national survey of White and African American adults. Introduction: Racial disparities in adult flu vaccination rates persist with African Americans falling below Whites in vaccine acceptance. Although the literature has examined traditional variables including barriers, access, attitudes, among others, there has been virtually no examination of the extent to which racial factors including racial consciousness, fairness, and discrimination may affect vaccine attitudes and behaviors. Methods: We contracted with GfK to conduct an online, nationally representative survey with 819 African American and 838 White respondents. Measures included risk perception, trust, vaccine attitudes, hesitancy and confidence, novel measures on racial factors, and vaccine behavior. Results: There were significant racial differences in vaccine attitudes, risk perception, trust, hesitancy and confidence. For both groups, racial fairness had stronger direct effects on the vaccine-related variables with more positive coefficients associated with more positive vaccine attitudes. 
Racial consciousness in a health care setting emerged as a more powerful influence on attitudes and beliefs, particularly for African Americans, with higher scores on racial consciousness associated with lower trust in the vaccine and the vaccine process, higher perceived vaccine risk, less knowledge of flu vaccine, greater vaccine hesitancy, and less confidence in the flu vaccine. The effect of racial fairness on vaccine behavior was mediated by trust in the flu vaccine for African Americans only (i.e., higher racial fairness increased trust in the vaccine process and thus the probability of getting a flu vaccine). The effect of racial consciousness and discrimination for African Americans on vaccine uptake was mediated by perceived vaccine risk and flu vaccine knowledge. Conclusions: Racial factors can be a useful new tool for understanding and addressing attitudes toward the flu vaccine and actual vaccine behavior. These new concepts can facilitate more effective tailored and targeted vaccine communications. abstract_id: PUBMED:15741715 Do managed care plans reduce racial disparities in preventive care? This study was designed to determine whether managed care plans reduce racial disparities in use of influenza vaccination, mammography, and prostate-specific antigen screening. The study analyzed the use of three types of preventive care in a population-based sample of adults who were 65 years or older and were enrolled in a Medicare managed care (MMC) or fee-for-service (FFS) plan in Allegheny County, Pennsylvania. The study sample included 463 African Americans and 592 whites. Fewer African Americans than whites reported having had an influenza vaccination (64.4% versus 76.5%; p < 0.01) or a prostate-specific antigen test (64% versus 71.2%; p = 0.09) during the previous year. Slightly more African Americans than white women reported having had a mammogram (66.1% versus 63.8%). Logistic regression showed that, regardless of health plan type, African Americans were significantly less likely than whites to have an influenza vaccination (p < 0.05). A MMC plan did not narrow racial differences in preventive care. Reducing disparities may require interventions developed for specific racial/ethnic groups. abstract_id: PUBMED:19648832 Racial and ethnic disparities in pneumonia treatment and mortality. Background: The extent to which racial/ethnic disparities in pneumonia care occur within or between hospitals is unclear. Objective: Examine within and between-hospital racial/ethnic disparities in quality indicators and mortality for patients hospitalized for pneumonia. Research Design: Retrospective cohort study. Subjects: A total of 1,183,753 non-Hispanic white, African American, and Hispanic adults hospitalized for pneumonia between January 2005 and June 2006. Measures: Eight pneumonia care quality indicators and in-hospital mortality. Results: Performance rates for the 8 quality indicators ranged from 99.4% (oxygenation assessment within 24 hours) to 60.2% (influenza vaccination). Overall hospital mortality was 4.1%. African American and Hispanic patients were less likely to receive pneumococcal and influenza vaccinations, smoking cessation counseling, and first dose of antibiotic within 4 hours than white patients at the same hospital (ORs = 0.65-0.95). 
Patients at hospitals with the racial composition of those attended by average African Americans and Hispanics were less likely to receive all indicators except blood culture within 24 hours than patients at hospitals with the racial composition of those attended by average whites. Hospital mortality was higher for African Americans (OR = 1.05; 95% CI = 1.02, 1.09) and lower for Hispanics (OR = 0.85; 95% CI = 0.81, 0.89) than for whites within the same hospital. Mortality for patients at hospitals with the racial composition of those attended by average African Americans (OR = 1.21; 95% CI = 1.18, 1.25) or Hispanics (OR = 1.18; 95% CI = 1.14, 1.23) was higher than for patients at hospitals with the racial composition of those attended by average whites. Conclusions: Racial/ethnic disparities in pneumonia treatment and mortality are larger and more consistent between hospitals than within hospitals. abstract_id: PUBMED:28933619 African American adults and seasonal influenza vaccination: Changing our approach can move the needle. Consistent disparities in influenza (flu) vaccine uptake among African Americans, coupled with a disproportionate burden of chronic diseases, places too many African Americans at high risk for complications, hospitalizations and premature mortality. This disparity is the result of individual attitudes and beliefs, social norms, and health care practices. Recent research identifies critical factors affecting vaccine uptake among African American adults including perceived risk of vaccine side effects, social norms that do not support for vaccination, and lower knowledge of the flu and the vaccine. Yet in our nationally representative survey of African Americans, we also found that there is substantial trust in one's own physician about the flu vaccine coupled with valuing the provider's vaccine recommendation. Other recent research has found that African Americans are not receiving strong recommendations and specific offers of the vaccine in their health care visit. This commentary suggests particular roles and strategies for health care providers, public health agencies, and African American communities and families, which can literally move the needle to increase seasonal flu vaccination. abstract_id: PUBMED:21438863 Do vaccination strategies implemented by nursing homes narrow the racial gap in receipt of influenza vaccination in the United States? Objectives: To determine whether the racial inequity between African Americans and Caucasians in receipt of influenza vaccine is narrower in residents of nursing homes with facility-wide vaccination strategies than in residents of facilities without vaccination strategies. Design: Secondary data analysis using the National Nursing Home Survey 2004, a nationally representative survey. Setting: One thousand one hundred seventy-four participating nursing homes sampled systematically with probability proportional to bed size. Participants: Thirteen thousand five hundred seven randomly sampled residents of nursing homes between August and December 2004. Measurements: Receipt of influenza vaccine within the last year. Logistic regression was used to examine the relationship between facility-level influenza immunization strategy and racial inequity in receipt of vaccination, adjusted for characteristics at the resident, facility, state, and regional levels. 
Results: Overall in the United States, vaccination coverage was higher for Caucasian and African-American residents; the racial vaccination gaps were smaller (<6 percentage points) and nonsignificant in residents of homes with standing orders for influenza vaccinations (P=.14), verbal consent allowed for vaccinations (P=.39), and routine review of facility-wide vaccination rates (P=.61) than for residents of homes without these strategies. The racial vaccination gap in residents of homes without these strategies was two to three times as high (P=.009, P=.002, and P=.002, respectively). Conclusion: The presence of several immunization strategies in nursing homes is associated with higher vaccination coverage for Caucasian and African-American residents, narrowing the national vaccination racial gap. abstract_id: PUBMED:16306546 Race, ethnicity, socioeconomic position, and quality of care for adults with diabetes enrolled in managed care: the Translating Research Into Action for Diabetes (TRIAD) study. Objective: To examine racial/ethnic and socioeconomic variation in diabetes care in managed-care settings. Research Design And Methods: We studied 7,456 adults enrolled in health plans participating in the Translating Research Into Action for Diabetes study, a six-center cohort study of diabetes in managed care. Cross-sectional analyses using hierarchical regression models assessed processes of care (HbA(1c) [A1C], lipid, and proteinuria assessment; foot and dilated eye examinations; use or advice to use aspirin; and influenza vaccination) and intermediate health outcomes (A1C, LDL, and blood pressure control). Results: Most quality indicators and intermediate outcomes were comparable across race/ethnicity and socioeconomic position (SEP). Latinos and Asians/Pacific Islanders had similar or better processes and intermediate outcomes than whites with the exception of slightly higher A1C levels. Compared with whites, African Americans had lower rates of A1C and LDL measurement and influenza vaccination, higher rates of foot and dilated eye examinations, and the poorest blood pressure and lipid control. The main SEP difference was lower rates of dilated eye examinations among poorer and less educated individuals. In almost all instances, racial/ethnic minorities or low SEP participants with poor glycemic, blood pressure, and lipid control received similar or more appropriate intensification of therapy relative to whites or those with higher SEP. Conclusions: In these managed-care settings, minority race/ethnicity was not consistently associated with worse processes or outcomes, and not all differences favored whites. The only notable SEP disparity was in rates of dilated eye examinations. Social disparities in health may be reduced in managed-care settings. abstract_id: PUBMED:33276969 The COVID-19 and Influenza "Twindemic": Barriers to Influenza Vaccination and Potential Acceptance of SARS-CoV2 Vaccination in African Americans. Influenza is a contagious respiratory virus that causes a significant annual health burden in the United States (US). In spite of effective yearly vaccinations to protect individuals against influenza-related health complications, especially with certain chronic co-morbid illnesses, persistent racial/ethnic disparities exist in influenza immunization. African Americans continue to experience low vaccination uptake, stemming, at least in part, from years of bias in and mistrust of orthodox medicine, safety concerns, and environmental barriers to vaccine access.
The novel respiratory coronavirus, SARS-CoV2, causes COVID-19, leading to a pandemic that in the U.S. has exerted severe physical, psychological, and economic tolls on African American and other disadvantaged communities. These two respiratory-borne viruses cause disparate effects in the black community, unmasking persistent disparities in healthcare. Unfortunately, suboptimal influenza immunization acceptance exacerbates flu-related adverse health outcomes, similar to difficulties from the effects of the COVID-19 pandemic. In consideration of the impending influenza-COVID-19 "twindemic", robust educational campaigns, policy initiatives, and novel approaches to influenza immunization must be considered for the African American community to build trust in the health benefits of the influenza vaccination and, ultimately, to trust in the health benefits of potential SARS-CoV2 vaccines, when available for the general public. Answer: The racial disparity in influenza vaccination between African Americans and whites has been a subject of research, and studies have explored whether managed care can narrow this gap. The findings, however, indicate that managed care does not significantly reduce the racial disparity in influenza vaccination rates. A study using the 1996 Medicare Current Beneficiary Survey found that while managed care enrollees were more likely than those with fee-for-service insurance to receive influenza vaccination, the racial disparity in vaccination was not reduced in managed care. Both African Americans and whites had higher rates of vaccination in managed care, but the adjusted racial disparity was still significant, with whites being substantially more likely to be vaccinated than African Americans (PUBMED:11572737). Another study examining racial influences on flu vaccine attitudes and behavior found significant racial differences in vaccine attitudes, risk perception, trust, hesitancy, and confidence. Racial fairness and racial consciousness in a healthcare setting emerged as powerful influences on attitudes and beliefs, particularly for African Americans, affecting their trust in the vaccine and the vaccine process (PUBMED:28126202). Furthermore, research on influenza vaccination in patients with diabetes revealed a significant racial disparity in vaccination rates, with higher rates in whites compared to African Americans. Factors such as age, gender, education, health care coverage, health care cost, and employment status were found to significantly modify the effect of race on receiving the influenza vaccination (PUBMED:25035719). An additional study focused on nursing homes found that facility-wide vaccination strategies could narrow the racial gap in influenza vaccination between African American and Caucasian residents. Facilities with standing orders for influenza vaccinations, verbal consent allowed for vaccinations, and routine review of facility-wide vaccination rates had smaller and nonsignificant racial vaccination gaps compared to facilities without these strategies (PUBMED:21438863). In summary, while managed care is associated with higher rates of influenza vaccination for both African Americans and whites, it does not appear to significantly reduce the racial disparity in vaccination rates.
Other factors, such as trust, racial fairness, and specific facility-wide vaccination strategies, may influence vaccination rates and could potentially help narrow the racial gap in influenza vaccination (PUBMED:11572737, PUBMED:28126202, PUBMED:25035719, PUBMED:21438863).
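
The arithmetic behind an "absolute disparity" and its confidence interval, as quoted from PUBMED:11572737, can be sketched as follows. The counts are reconstructed from the reported percentages and cohort size and are therefore approximate; the published estimates also used survey weights and covariate adjustment, which this simple Wald calculation does not reproduce:

# Absolute disparity in vaccination (percentage points) with a Wald 95% CI.
import math

def absolute_disparity(vacc_a, n_a, vacc_b, n_b):
    p_a, p_b = vacc_a / n_a, vacc_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Hypothetical counts reconstructed from the reported 67.7% of ~12,580 whites and
# 46.1% of ~1,094 African Americans:
diff, ci = absolute_disparity(8516, 12580, 504, 1094)
print(round(diff * 100, 1), [round(x * 100, 1) for x in ci])   # ~21.6 percentage points, CI roughly 18.6-24.7

This lands close to the unadjusted disparity of 21.6% (95% CI 18.2%-25.0%) reported in the abstract, which is the quantity the answer above refers to.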
Instruction: Can the full range of paramedic skills improve survival from out of hospital cardiac arrests? Abstracts: abstract_id: PUBMED:9315924 Can the full range of paramedic skills improve survival from out of hospital cardiac arrests? Objective: To examine the effect of full implementation of advanced skills by ambulance personnel on the outcome from out of hospital cardiac arrest. Setting: Patients with cardiac arrest treated at the accident and emergency department of the Royal Infirmary of Edinburgh. Methods: All cardiorespiratory arrests occurring in the community were studied over a one-year period. For patients arresting before the arrival of an ambulance crew, outcome of 92 patients treated by emergency medical technicians equipped with defibrillators was compared with that of 155 treated by paramedic crews. The proportions of patients whose arrest was witnessed by lay persons and those that had bystander cardiopulmonary resuscitation (CPR) were similar in both groups. Results: There was no difference in the presenting rhythm between the two groups. Eight of the 92 patients (8.7%) treated by technicians survived to discharge compared with eight of 155 (5.2%) treated by paramedics (NS). Of those in ventricular fibrillation or pulseless ventricular tachycardia, eight of 43 (18.6%) in the technician group and seven of 80 (8.8%) in the paramedic group survived to hospital discharge (NS). For patients arresting in the presence of an ambulance crew, four of 13 patients treated by technicians compared with seven of 15 by paramedics survived to hospital discharge. Only two patients surviving to hospital discharge received drug treatment before the return of spontaneous circulation. Conclusions: No improvement in survival was demonstrated with more advanced prehospital care. abstract_id: PUBMED:9404259 Heartstart Scotland: the use of paramedic skills in out of hospital resuscitation. Objective: To assess the frequency with which paramedic skills were used in out of hospital cardiac arrest and the effect of tracheal intubation on outcome. Design: Retrospective analysis of ambulance service reports and hospital records. Setting: Scottish Ambulance Service and hospitals admitting acute patients throughout Scotland. Results: A total of 8651 out of hospital resuscitation attempts were recorded and tracheal intubation was attempted in 3427 (39.6%) arrests. One hundred and thirty-six (3.7%) of the intubated patients survived to hospital discharge, compared with 476 (9.1%) of the patients who were not intubated (p < 0.001). Among the patients who were defibrillated the proportion intubated was highest in the patients who received the greatest number of shocks (p < 0.01). Among subjects receiving similar numbers of shocks survival rates were lower for intubated patients (p < 0.01). Patients with unwitnessed arrests were most frequently intubated and survival rates were lowest in this group. Conclusions: Patients who are intubated seem to have lower survival rates. This may however reflect the difficulty of the resuscitation attempt rather than the effects of intubation. The use of basic life support skills rapidly after cardiac arrest is associated with the best survival rates. abstract_id: PUBMED:26812932 Paramedic Exposure to Out-of-Hospital Cardiac Arrest Resuscitation Is Associated With Patient Survival. Background: Although out-of-hospital cardiac arrest (OHCA) is a major public health problem, individual paramedics are rarely exposed to these cases.
In this study, we examined whether previous paramedic exposure to OHCA resuscitation is associated with patient survival. Methods And Results: For the period 2003 to 2012, we linked data from the Victorian Ambulance Cardiac Arrest Registry to Ambulance Victoria's employment data set. We defined exposure as the number of times a paramedic attended an OHCA where resuscitation was attempted in the 3 years preceding each case. Using a multivariable model adjusting for known predictors of survival, we measured the association between paramedic OHCA exposure and patient survival to hospital discharge. During the study period, there were 4151 paramedics employed and 48 291 OHCAs (44% with resuscitation attempted). The median exposure of all paramedics was 2 (interquartile range 1-3) OHCAs/year. Eleven percent of paramedics were not exposed to any OHCA cases. Increased paramedic exposure was associated with reduced odds of attempted resuscitation (P<0.001). In the 3 years preceding each OHCA where resuscitation was attempted, the median exposure of the treating paramedics was 11 (interquartile range 6-17) OHCAs. Compared with patients treated by paramedics with a median of ≤6 exposures during the previous 3 years (7% survival), the odds of survival were higher for patients treated by paramedics with >6 to 11 (12%, adjusted odds ratio 1.26, 95% confidence interval 1.04-1.54), >11 to 17 (14%, adjusted odds ratio 1.29, 95% confidence interval 1.04-1.59), and >17 exposures (17%, adjusted odds ratio 1.50, 95% confidence interval 1.22-1.86). Paramedic years of experience were not associated with survival. Conclusions: Patient survival after OHCA significantly increases with the number of OHCAs that paramedics have previously treated. abstract_id: PUBMED:19499471 The effect of paramedic experience on survival from cardiac arrest. Objective: We hypothesized that paramedics with more experience would be more successful at treating patients in ventricular fibrillation (VF) cardiac arrest than those with less experience. We conducted a study examining the relationship between the years of experience of paramedics and survival from out-of-hospital cardiac arrest. Methods: This retrospective cohort study examined all witnessed, out-of-hospital VF cardiac arrests (n = 699) that occurred between January 1, 2002, and December 31, 2006. Logistic regression was used to determine the odds of survival and the 95% confidence intervals (95% CIs) relating to the number of years of experience that each of the treating paramedics had. Results: We found that every additional year of experience of the medic in charge of implementing procedures such as intravenous line insertions, intubations, and provision of medications was associated with a 2% increase in the likelihood of survival of the patient (95% CI: 1.00-1.04). The number of years of experience of the paramedic who did not perform procedures but instead was in charge of treatment decisions was not significantly associated with survival (odds ratio [OR] 1.01, 95% CI: 0.99-1.03). When we combined both paramedics' years of experience, we saw a 1% increase in the odds of survival for every additional year of experience (95% CI: 1.00-1.03). Conclusions: This study suggests that the amount of experience of the paramedic who performed procedures on cardiac arrest patients was associated with increased rates of survival. However, we did not find an association between survival from VF and the number of years of experience of the paramedic who made treatment decisions. 
abstract_id: PUBMED:28347556 Paramedic Intubation Experience Is Associated With Successful Tube Placement but Not Cardiac Arrest Survival. Study Objective: Paramedic experience with intubation may be an important factor in skill performance and patient outcomes. Our objective is to examine the association between previous intubation experience and successful intubation. In a subcohort of out-of-hospital cardiac arrest cases, we also measure the association between patient survival and previous paramedic intubation experience. Methods: We analyzed data from Ambulance Victoria electronic patient care records and the Victorian Ambulance Cardiac Arrest Registry for January 1, 2008, to September 26, 2014. For each patient case, we defined intubation experience as the number of intubations attempted by each paramedic in the previous 3 years. Using logistic regression, we estimated the association between intubation experience and (1) successful intubation and (2) first-pass success. In the out-of-hospital cardiac arrest cohort, we determined the association between previous intubation experience and patient survival. Results: During the 6.7-year study period, 769 paramedics attempted intubation in 14,857 patients. Paramedics typically performed 3 intubations per year (interquartile range 1 to 6). Most intubations were successful (95%), including 80% on the first attempt. Previous intubation experience was associated with intubation success (odds ratio 1.04; 95% confidence interval 1.03 to 1.05) and intubation first-pass success (odds ratio 1.02; 95% confidence interval 1.01 to 1.03). In the out-of-hospital cardiac arrest subcohort (n=9,751), paramedic intubation experience was not associated with patient survival. Conclusion: Paramedics in this Australian cohort performed few intubations. Previous experience was associated with successful intubation. Among out-of-hospital cardiac arrest patients for whom intubation was attempted, previous paramedic intubation experience was not associated with patient survival. abstract_id: PUBMED:36088252 Paramedic interactions with significant others during and after resuscitation and death of a patient. Background: Out-of-hospital cardiac arrest often occurs at home, requiring paramedics to interact with family members and bystanders during resuscitation and inform them should the patient die. This study explores how paramedics navigate interactions and the changing needs of the patient and the bereaved. Methods: Phenomenological methodology inspired individual, semi-structured interviews. Data was then coded using reflexive thematic analysis. Results: Ten individual interviews with working paramedics with an average of 7.2 years of experience were analysed and resulted in four overarching themes. These themes encompassed communication goals and factors affecting their implementation. Four themes emerged: maximising patient outcome, minimising psychological trauma for significant others, paramedic engagement and communicating across cultures. Communication goals shift from maximising patient outcome to minimising psychological trauma for significant others during the resuscitation. Implementation of those goals is affected by paramedic engagement and communicating across cultures. Conclusions: Paramedics used communication techniques based on personal and professional experiences, attempting to navigate limited resources, factors affecting paramedic engagement and a perceived lack of education and support in matters of grief and death. 
abstract_id: PUBMED:34952179 Paramedic rhythm interpretation misclassification is associated with poor survival from out-of-hospital cardiac arrest. Background: Early recognition and rapid defibrillation of shockable rhythms is strongly associated with survival in out of hospital cardiac arrest (OHCA). Little is known about the accuracy of paramedic rhythm interpretation and its impact on survival. We hypothesized that inaccurate paramedic interpretation of initial rhythm would be associated with worse survival. Methods: This is a retrospective cohort analysis of prospectively collected OHCA data over a nine-year period within a single, urban, fire-based EMS system that utilizes manual defibrillators equipped with rhythm-filtering technology. We compared paramedic-documented initial rhythm with a reference standard of post-event physician interpretation to estimate sensitivity and specificity of paramedic identification of and shock delivery to shockable rhythms. We assessed the association between misclassification of initial rhythm and neurologically intact survival to hospital discharge using multivariable logistic regression. Results: A total of 863 OHCA cases were available for analysis with 1,756 shocks delivered during 542 (63%) resuscitation attempts. Eleven percent of shocks were delivered to pulseless electrical activity (PEA). Sensitivity and specificity for paramedic initial rhythm interpretation were 176/197 (0.89, 95% CI 0.84-0.93) and 463/504 (0.92, 95% CI 0.89-0.94) respectively. No patient survived to hospital discharge when paramedics misclassified the initial rhythm. Conclusions: Paramedics achieved high sensitivity for shock delivery to shockable rhythms, but with an 11% shock delivery rate to PEA. Misclassification of initial rhythm was associated with poor survival. Technologies that assist in rhythm identification during CPR, rapid shock delivery, and minimal hands-off time may improve outcomes. abstract_id: PUBMED:6102690 Out-of-hospital cardiac arrest: improved survival with paramedic services. Survival after out-of-hospital cardiac arrest was studied in a suburban community (population 304000) before and after addition of paramedic services. During period 1 emergency medical technicians provided basic emergency care (cardiopulmonary resuscitation at the scene of collapse and during the journey to hospital). In period 2 additional care was given at the scene of collapse by paramedics capable of advanced emergency care (defibrillation, endotracheal intubation, drugs). During the 3-yr study 585 patients with cardiac arrest caused by heart disease received prehospital emergency resuscitation. Paramedic services improved the rate of live admission to the coronary-care or intensive-care unit from 19% to 34% (p less than 0.001) and the rate of discharge from 7% to 17% (p less than 0.01). The mean time from collapse to delivery of advanced emergency care was 27.5 min during period 1 with technician services, and 7.7 min during period 2 with paramedic services. Ventricular fibrillation caused cardiac arrest in nearly all patients who survived; it occurred in 91 of the 160 (57%) patients during period 1 whose rhythms were determined and in 192 of the 343 (56%) patients during period 2. The decreased time from collapse to delivery of advanced emergency care accounted for the improved survival with paramedic services. abstract_id: PUBMED:9315925 Paramedics, technicians, and survival from out of hospital cardiac arrest. 
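
The sensitivity and specificity figures in the rhythm-interpretation abstract above (PUBMED:34952179) follow directly from the counts it reports; a minimal check, using a Wald approximation for the intervals (the paper itself may have used an exact method):

# Sensitivity and specificity from the reported counts, with Wald 95% intervals.
import math

def proportion_with_ci(successes, total):
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, (p - 1.96 * se, p + 1.96 * se)

print("sensitivity:", proportion_with_ci(176, 197))   # ~0.89, CI ~0.85-0.94
print("specificity:", proportion_with_ci(463, 504))   # ~0.92, CI ~0.89-0.94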
Objective: To test the hypothesis that limited paramedic advanced life support skills afford no advantage in survival from cardiac arrest when compared with non-paramedic ambulance crews equipped with defibrillators in an urban environment; and to investigate whether separate response units delayed on-scene times. Methods: A prospective, observational study was conducted over 17 consecutive months on all adult patients brought to the accident and emergency (A&E) department of Glasgow Royal Infirmary having suffered an out of hospital cardiac arrest of cardiac aetiology. The main interventions were bystander cardiopulmonary resuscitation (CPR) and limited advanced life support skills. Main Outcome Measures: Return of spontaneous circulation, survival to admission, and discharge. Results: Of 240 patients brought to the A&E department, 19 had no clear record of whether a paramedic was or was not involved and so were excluded. There was no difference in survival between the two groups, although a trend to admission favoured non-paramedics. Paramedics spent much longer at the scene (P < 0.0001). Witnessed arrests (P = 0.01), early bystander CPR (P = 0.12), shockable rhythms (P = 0.003), and defibrillation (P < 0.0001) were associated with better survival. Intubation and at scene times were not associated with better survival. Delayed second response units did not prolong at scene times. Conclusions: The interventions of greatest benefit in out of hospital cardiac arrest are basic life support and defibrillation. Additional skills are of questionable benefit and may detract from those of greatest benefit. abstract_id: PUBMED:6700072 Treatment of ventricular fibrillation. Emergency medical technician defibrillation and paramedic services. We compared the effect of rapid defibrillation by emergency medical technicians (EMTs) combined with paramedic care with that of standard EMT and paramedic care on survival from 540 witnessed episodes of out-of-hospital cardiac arrest caused by ventricular fibrillation. More than 400 EMTs were trained in the recognition of ventricular fibrillation and operation of a defibrillator. For a portion of the three-year study, emergency care for 179 cases was randomized between the two types of services. For randomized cases, when the time interval between EMT and paramedic arrival was greater than four minutes there was significantly improved survival with EMT defibrillation and paramedic care (42%) compared with basic EMT and paramedic care (19%). Similar findings occurred when all cases were considered (38% v 18%). Defibrillation by EMTs combined with paramedic services can enhance survival from ventricular fibrillation, compared with basic EMT and paramedic care. Answer: The full range of paramedic skills does not necessarily guarantee improved survival from out-of-hospital cardiac arrests (OHCA). Several studies have explored the impact of advanced paramedic interventions on OHCA outcomes with varying results. One study found no significant difference in survival to hospital discharge between patients treated by emergency medical technicians equipped with defibrillators and those treated by paramedic crews with advanced skills (PUBMED:9315924). Another study suggested that patients who were intubated by paramedics had lower survival rates, although this could reflect the complexity of the resuscitation attempt rather than the effects of intubation itself.
It was also noted that the use of basic life support skills promptly after cardiac arrest is associated with the best survival rates (PUBMED:9404259). Interestingly, paramedic exposure to OHCA resuscitation was associated with patient survival, indicating that the number of OHCAs that paramedics have previously treated significantly increases patient survival after OHCA (PUBMED:26812932). This was supported by another study that found an increase in the odds of survival for every additional year of experience of the paramedic performing procedures on cardiac arrest patients (PUBMED:19499471). However, a study focusing on paramedic intubation experience found that while previous experience was associated with successful intubation, it was not associated with patient survival in OHCA cases (PUBMED:28347556). Additionally, accurate paramedic rhythm interpretation was crucial, as misclassification of initial rhythm was associated with poor survival (PUBMED:34952179). In contrast, a study from a suburban community showed that the addition of paramedic services, which included advanced emergency care such as defibrillation and drug administration, improved survival rates from 7% to 17% (PUBMED:6102690). Another study found no advantage in survival from cardiac arrest when comparing limited paramedic advanced life support skills with non-paramedic ambulance crews equipped with defibrillators (PUBMED:9315925). Lastly, a study comparing rapid defibrillation by emergency medical technicians combined with paramedic care to standard EMT and paramedic care found significantly improved survival with the former approach when the time interval between EMT and paramedic arrival was greater than four minutes (PUBMED:6700072). In summary, while certain paramedic skills and experience can be beneficial, the full range of advanced skills does not uniformly improve survival from OHCA.
Instruction: Mitral valve surgery using right anterolateral thoracotomy: is the aortic cannulation a safety procedure? Abstracts: abstract_id: PUBMED:21103739 Mitral valve surgery using right anterolateral thoracotomy: is the aortic cannulation a safety procedure? Introduction: The right anterolateral thoracotomy is an alternative technique for the surgical approach to the mitral valve. In these cases, femoral-femoral bypass has still been used, raising the occurrence of complications related to femoral cannulation. Objective: To describe the technique and results of mitral valve treatment by right anterolateral thoracotomy using aortic cannulation for cardiopulmonary bypass (CPB). Methods: From 1983 to 2008, 100 consecutive female patients, with an average age of 35 ± 13 years, 96 (96%) underwent mitral valve surgical treatment in the Heart Institute of São Paulo. A right anterolateral thoracotomy approach associated with aortic cannulation was used for CPB. Eighty (80%) patients had rheumatic disease and 84 (84%) patients presented functional class III or IV. Results: Forty-five (45%) commissurotomies, 38 (38%) valve repairs, 7 (7%) mitral valve replacements, seven (7%) recommissurotomies and three (3%) prosthesis replacements were performed. Sparing surgery was performed in 90 (90%) patients. The average CPB and clamp times were 57 ± 27 min and 39 ± 19 min, respectively. There were no in-hospital deaths, reoperations due to bleeding, or conversions to sternotomy. Intraoperative complications were related to heart harvest (5%), especially in reoperations (3%). The most important complications in the postoperative period were related to the pulmonary system (11%), followed by atrial fibrillation (10%), but without major systemic repercussions. The mean in-hospital length of stay was 8 ± 3 days. Follow-up was 6,038 patient-months. Actuarial survival was 98.0 ± 1.9% and freedom from reoperation was 81.4 ± 7.8% in 180 months. Conclusion: The right anterolateral thoracotomy associated with aortic cannulation in mitral valve surgery is a simple, reproducible and safe technique. abstract_id: PUBMED:28934975 Right anterolateral thoracotomy: an attractive alternative to repeat sternotomy for high-risk patients undergoing reoperative mitral and tricuspid valve surgery. Background: Reoperative cardiac valve surgery via sternotomy is associated with substantial morbidity and mortality. This study evaluated the right anterolateral thoracotomy for high-risk patients undergoing mitral and tricuspid valve redo procedures. Methods: Out of a series of 173 patients undergoing redo cardiac valve surgery, 24 patients were reoperated on via the right anterolateral thoracotomy as the high-risk group on the basis of the proximity of the heart and great vessels to the sternum and the presence and location of patent bypass grafts. Results: In all cases, sternotomy was avoided. The mitral valve and tricuspid valve were replaced in 4 and 19 patients and repaired in 1 and 2 patients, respectively. Moreover, left atrial folding was performed in 5 patients. Mortality was 8.3%. All other patients had uneventful outcomes and normal valve function at follow-up. Conclusions: Reoperative cardiac valve surgery can be performed safely using the right anterolateral thoracotomy in high-risk patients. It offers enough exposure. It minimizes the need for cardiac dissection, and thus, the risk for injury. Avoiding a high-risk resternotomy increases patient comfort and the safety of redo mitral and tricuspid valve surgery.
abstract_id: PUBMED:10856864 Limited right anterolateral thoracotomy for mitral valve surgery. Objective: There has been great enthusiasm in recent years to perform mitral valve surgery through small multiple incisions with the use of the Port Access technique. The procedure is costly, involves a relatively long training curve and leaves the patient with multiple scars in the chest and groin. We used a mini-thoracotomy technique for mitral valve patients and compared our results with the conventional technique. Methods: We randomized 100 consecutive patients presenting to our practice for mitral valve surgery between two groups. The first group (test group) consisted of 50 patients in whom mitral valve surgery was performed via a mini right anterolateral thoracotomy approach. The control group (50 patients) underwent classical mitral valve surgery through median sternotomy. Standard aortic and bicaval cannulation with antegrade blood cardioplegia was adopted in both groups. Results: There was no statistical difference between the two groups preoperatively regarding their age, pathology, LV function and male/female ratio. Most of the patients had valve replacement except four in the test group and three in the control group. The incision in the test group was 12-15 cm long in the right submammary groove. Direct aortic cannulation, clamping and cardioplegia administration were achieved easily in all patients. The mean bypass time was slightly longer in the test group (64 ± 12 min) when compared with the control group (59 ± 11 min). The cross-clamp time was lower in the test group (27 ± 8 min) when compared with the control group (31 ± 9 min). There was no hospital mortality in either group, and there was one morbidity in the form of sternal infection in the control group. The mean hospital stay was similar for both groups (7 ± 2 days). Conclusion: The cosmetic appearance in the test group was excellent and the patients' wounds were scarcely apparent in the female patients. The study demonstrates the efficacy and safety of this older technique, with excellent cosmetic results and no additional cost or risk to the patients. abstract_id: PUBMED:9697057 Approach for primary mitral valve surgery: right anterolateral thoracotomy or median sternotomy. Background And Aims Of The Study: Although both right anterolateral thoracotomy and median sternotomy have been used for mitral valve surgery (repair/replacement), the latter approach is considered standard for primary mitral valve surgery. We hypothesized that primary mitral valve surgery, if performed through a right anterolateral thoracotomy, would not only be better accepted cosmetically by patients, but also make redo surgery through a median sternotomy easy and trouble free from re-entry bleeding. Methods: A right anterolateral thoracotomy was used for primary mitral valve surgeries in 52 patients (group A; 22 males, 30 females) of mean age 30.3 ± 9.14 years (range: 14 to 50 years). Equal numbers of cases operated on during the same period via median sternotomy were selected retrospectively from hospital records to serve as controls (group B). Groups were matched with respect to age, body weight, body surface area, sex, cardiac rhythm, functional status, type of mitral valve pathology and associated lesions. Results: Operative mortality was similar in both groups, but fewer postoperative complications occurred in group A.
Total hospital stay, intensive care unit stay, postoperative bleeding, inotrope requirement and postoperative ventilatory support were significantly less in group A. Conclusions: Right anterolateral thoracotomy provides excellent exposure of the mitral valve, even with a small left atrium, and offers a better cosmetic lateral scar which is less prone to keloid formation. In addition, right anterolateral thoracotomy is as safe as median sternotomy for primary mitral valve repair/replacement, and should be used as an initial approach to mitral valve surgery, while median sternotomy should be kept for repeat mitral valve or other open-heart surgery required later in life. abstract_id: PUBMED:24801447 Minimally invasive video-assisted double-valve replacement through right anterolateral minithoracotomy. Objective: This study aimed to investigate the feasibility and safety of minimally invasive video-assisted surgery for double-valve (mitral and aortic) replacement through right anterolateral minithoracotomy. Methods: Between February 2011 and April 2013, 60 patients with combined valvular disease underwent double valve replacement, 26 of them by minimally invasive video-assisted surgery through right anterolateral minithoracotomy (study group) and 34 by median sternotomy (control group). Peripheral cardiopulmonary bypass (CPB) was established through the right femoral artery and vein. The incision, approximately 5 cm in length, was made around the right breast. Pericardiotomy, bicaval occlusion, atriotomy and aortotomy, and double valve replacement were performed with a thoracoscope. Results: In the study group, times of CPB and aortic cross-clamp were 146.5 ± 40.5 min and 91.5 ± 23.4 min, respectively, which were significantly different from those in the control group, 115.4 ± 26.5 min and 75.4 ± 16.5 min (P<0.05). Thoracic drainage in the study group was significantly lower than in the control group, 587 ± 245 ml (study group) versus 756 ± 267 ml (control group) (P<0.05). Length of ICU and postoperative hospital stay were shorter in the study group, 1.9 ± 0.8 and 8.7 ± 4.5 days versus 2.8 ± 1.3 and 11.2 ± 5.6 days in the control group (P<0.05), respectively. There was no statistical difference in the postoperative results of TTE (transthoracic echocardiography) (P>0.05). All patients recovered smoothly with follow-up of six months to two years, with no severe complications. Conclusions: The minimally invasive video-assisted procedure through right anterolateral minithoracotomy is a promising new approach for double valve replacement. Our study suggested that this approach was feasible, safe and had cosmetic benefits. abstract_id: PUBMED:25053663 Minimally invasive mitral valve surgery via minithoracotomy and direct cannulation. Background: To reduce the morbidity of mitral valve operations, a right anterolateral minithoracotomy under direct vision was introduced. We report our experience with this procedure. Methods: From July 2001 to December 2013, 320 consecutive patients underwent direct minimally invasive mitral valve surgery through a right anterolateral minithoracotomy at our institution. Evidence of rheumatic disease was observed in 231 (72%) patients, and 89 (28%) repaired valves had myxomatous changes. Tricuspid valve repair was performed in 80 (25%) patients and radiofrequency ablation in 80 (25%) with chronic atrial fibrillation. All cannulas were introduced through the thoracotomy incision, eliminating femoral cannulation. No new instruments, retractors, or ports were used.
Pleural and pericardial drainage was accomplished through a single drain. Results: There was no hospital death. Conversion to sternotomy was needed in 3 patients because we were unable to obtain satisfactory arterial cannulation. Eight patients required reoperation: 7 for mitral insufficiency and one for postoperative bleeding. Mean cardiopulmonary bypass and crossclamp times were 55.3 ± 17.0 and 43.0 ± 13.4 min, respectively. Mean intensive care unit stay was 29 h, and hospital stay was 4.3 days. Conclusions: Based on our experience, this minimally invasive approach is safe, rapid, cost-effective, and more comfortable for the patients, in addition to its cosmetic benefits. It may be the preferred approach in young females. abstract_id: PUBMED:10543559 Minimally invasive mitral valve surgery through right anterolateral minithoracotomy. Background: This study evaluates the feasibility of minimally invasive mitral valve surgery. The aim of the study was to minimize surgical access to achieve better cosmetic results, less postoperative discomfort, and faster recovery. Methods: From September 1997 to October 1998, 76 patients underwent mitral valve surgery through a right anterolateral minithoracotomy at the fourth intercostal space. The mitral valve was either repaired (n = 21) or replaced (n = 55). In all cases, open femoral artery-femoral vein cannulation was used for cardiopulmonary bypass. In 27 cases, an endoluminal aortic clamp was used, but in 49 cases, the aorta was cross-clamped with a transthoracic, sliding-rod-design clamp. Results: There were no approach-related limitations to surgical intervention. Intraoperative transesophageal echocardiography revealed excellent results after valve repair and no paravalvular leak in any patient after mitral valve replacement. Mean duration of intensive care and postoperative hospital stay was 32+/-5.2 hours and 7+/-1.1 days, respectively. There were no major complications related to femoral vessel cannulation. In 1 patient, transient neurological problems developed, with subsequent complete recovery. There was one hospital mortality (85-year-old male patient died of upper GI bleeding). Conclusions: Minimally invasive port access mitral valve surgery can accelerate recovery and decrease pain, while maintaining overall surgical efficacy. It also provides better cosmetic results to our patients, and now it has become our standard approach for isolated mitral valve surgery. abstract_id: PUBMED:29501464 Analysis of the Learning Curve in Mitral Valve Replacement Through the Right Anterolateral Minithoracotomy Approach: A Surgeon's Experience with the First 100 Patients. Background: To apply the cumulative sum (CUSUM) failure analysis to assess the performance of a single surgeon during mitral valve replacement via the right anterolateral minithoracotomy (RAMT) approach and to analyse the learning curve for the procedure. Methods: A total of 100 mitral valve replacements were performed using the RAMT approach from June 2011 to April 2013 by a single surgeon with no prior experience of this technique. Patients were divided into five blocks according to the operation date. The perioperative data were collected prospectively and analysed using descriptive statistics and CUSUM failure analysis. Results: No significant differences in the background factors among the five periods were observed, except for a small increase in patient age from periods 1 to 5 (p=0.004). 
The surgeon's performance improved with time; a decrease in the cross-clamp time, operative time, and blood loss was observed (p<0.001). However, no significant difference in the number of failed cases was observed among the periods. All failure cases were evaluated by the CUSUM failure analysis and the CUSUM curve reflected a learning curve associated with this new procedure. The surgeon crossed the lower 80% boundary after about 33 operations, which indicates that better results can be obtained after this point. Conclusions: Minimally invasive mitral valve surgery using the RAMT approach can be performed by a new surgeon. Furthermore, CUSUM curve analysis is a simple statistical method to implement continuous individual performance monitoring. abstract_id: PUBMED:11502495 Correction of congenital heart defects and mitral valve operations using limited anterolateral thoracotomy. Purpose: Median sternotomy, which generally is used as a standard access for atrial septal defect (ASD) and mitral valve operations, has a significant risk of postoperative instability/osteomyelitis of the sternum. Moreover, especially in young women, the resulting large scar is a poor cosmetic result that may have adverse psychological consequences. Our presentation suggests that these difficulties may be avoided by the use of a less invasive approach consisting of a limited anterolateral thoracotomy with standard cannulation. Material And Methods: From June 1997 until December 1999, 13 women, mean age 31.9 +/- 9.2 years, with atrial septum defect (n = 8), sinus venosus defect with partial anomalous pulmonary venous connection (n = 1), left atrial myxoma (n =1) or mitral valve regurgitation (n = 3), were scheduled for less invasive operation. In all cases a double lumen tube was used for ventilation. After a submammarian skin incision of about 10 cm a limited anterolateral thoracotomy was performed in the fifth right intercostal space. For cannulation of the ascending aorta a trochar cannula was used. Both caval veins were cannulated by angled vena cava catheters. Standard cardiopulmonary bypass was established using normothermia in all patients undergoing operations with correction of congenital heart defects and mild hypothermia (32 degrees C) in the three patients undergoing mitral valve operation. Surgery was performed in cardioplegic arrest using Bretschneider's solution. All corrections of congenital heart defects were performed by Goretex patches. Mitral valve reconstruction was carried out in two patients, and one patient underwent mitral valve replacement. Results: No complications occurred in any of the 13 patients peri- or postoperatively. Total time of operation was 211.9 +/- 36.0 minutes, the perfusion time was 77.0 +/- 25.8 minutes, and the aortic cross-clamp time was 51.8 +/- 21.9 minutes. Mean stay in ICU was 1.2 +/- 0.4 days (total hospital stay: 7.8 +/- 2.2 days). Postoperative thoracic x-ray and cardiac echocardiography/dopplersonography revealed no pathological findings in any patients. Conclusion: Atrial septal defect operations, including partial anomalous pulmonary venous connection, left atrial myxoma and mitral valve operations, can be performed safely and effectively using a limited anterolateral thoracotomy and standard cannulation technique with excellent cosmetic results. abstract_id: PUBMED:33100620 Right mini-thoracotomy approach in patients undergoing redo mitral valve procedure. Aim: A minimally invasive technique is an attractive option in cardiac surgery. 
In this study, we present our experience with minimally invasive cardiac surgery (MICS) via right mini-thoracotomy in patients undergoing reoperative mitral valve procedures. Methods: From 2017 until 2019, 20 patients underwent reoperation of the mitral valve through a right-sided mini-thoracotomy. Cardiopulmonary bypass was established through cannulation of the femoral vessels. All patients requiring isolated re-operative mitral valve surgery with suitable femoral vessels for cannulation were included in the study. Patients requiring concomitant coronary artery bypass grafting (CABG) or with peripheral artery disease were excluded. Results: The mean age was 65 ± 12 years. The average log. EuroSCORE was 9 ± 5%. Ten patients with severe mitral valve regurgitation (MR) underwent re-repair of the mitral valve. Seven of them were post mitral valve repair (MVR), one was post aortic valve replacement (AVR), one had tricuspid valve repair, and one other patient had CABG before. Ten patients underwent mitral valve replacement due to mixed mitral valve disease (n = 9) or mitral valve endocarditis (n = 1). Eight patients were post MVR and 2 had AVR before. The mean time to reoperation was 7.5 ± 8 years. In-hospital mortality was 5% (n = 1). The mean cross-clamp time was 54 ± 26 min. Postoperative echocardiography revealed competent valve function in all cases, with a mean ejection fraction of 55 ± 9%. The Kaplan-Meier 1- and 2-year survival was 95%. Conclusion: The MICS approach for mitral valve reoperation in selected patients seems to be safe and feasible. It is also a surgical option for high-risk patients. Answer: The use of right anterolateral thoracotomy for mitral valve surgery with aortic cannulation appears to be a safe procedure. In a study of 100 consecutive female patients undergoing mitral valve surgical treatment, a right anterolateral thoracotomy approach associated with aortic cannulation was used for cardiopulmonary bypass (CPB). The results showed no in-hospital deaths, reoperations due to bleeding, or conversions to sternotomy. The most significant complications in the postoperative period were related to the pulmonary system and atrial fibrillation, but these did not have major systemic repercussions. The actuarial survival was 98.0 ± 1.9% and freedom from reoperation was 81.4 ± 7.8% in 180 months, indicating that the procedure is reproducible and safe (PUBMED:21103739). Another study evaluating the right anterolateral thoracotomy for high-risk patients undergoing mitral and tricuspid valve redo procedures found that the approach minimized the need for cardiac dissection, thus reducing the risk for injury. The study concluded that reoperative cardiac valve surgery can be performed safely using the right anterolateral thoracotomy in high-risk patients, avoiding a high-risk resternotomy and increasing patient comfort and safety (PUBMED:28934975). Furthermore, a series of minimally invasive mitral valve operations performed via minithoracotomy and direct cannulation reported no hospital deaths, and conversion to sternotomy was needed in only 3 patients due to unsatisfactory arterial cannulation. The approach was deemed safe, rapid, cost-effective, and more comfortable for patients, with additional cosmetic benefits (PUBMED:25053663).
In summary, the evidence from these studies suggests that mitral valve surgery using right anterolateral thoracotomy with aortic cannulation is a safe procedure with low rates of complications and mortality, and it provides a viable alternative to conventional sternotomy, especially in high-risk patients or those seeking a less invasive option with better cosmetic outcomes.
Instruction: Complete clinical response to neoadjuvant chemoradiation in rectal cancers: can surgery be avoided? Abstracts: abstract_id: PUBMED:32234159 Clinical factors of pathological complete response after neoadjuvant chemoradiotherapy in rectal cancer Objective: To explore the feasibility of using clinical factors to predict pathological complete response after neoadjuvant chemoradiotherapy in rectal cancer. Methods: A retrospective analysis was performed on clinical factors of 162 patients with rectal cancer who underwent neoadjuvant chemoradiotherapy in the General Hospital of People's Liberation Army from January 2011 to December 2018. According to the postoperative pathological results, the patients were divided into a pathological complete response (pCR) group and a non-pathological complete response (non-pCR) group to identify the predictive clinical factors for pCR. Results: Twenty-eight cases achieved pCR after neoadjuvant chemoradiation (17.3%, 28/162). Univariate analysis showed that patients with higher differentiation (P=0.024), tumor occupation of the bowel lumen ≤1/2 (P=0.006), earlier clinical T stage (P=0.013), earlier clinical N stage (P=0.009), a time interval between neoadjuvant chemoradiotherapy and surgery >49 days (P=0.006), and maximum tumor diameter ≤5 cm (P=0.019) were more likely to obtain pCR, and the differences were statistically significant. Multivariate analysis showed that tumor occupation of the bowel lumen ≤1/2 (P=0.01), maximum tumor diameter ≤5 cm (P=0.035), and the interval >49 days (P=0.009) were independent factors in predicting pCR after neoadjuvant therapy. Conclusion: Tumor occupation of the bowel lumen, maximum tumor diameter, and the time interval between neoadjuvant chemoradiotherapy and surgery can predict pCR in rectal cancer. abstract_id: PUBMED:34878582 Complete response after neoadjuvant therapy of rectal cancer: implications for surgery. For (locally advanced) rectal cancer, a multimodal therapy concept comprising neoadjuvant radiotherapy/chemoradiotherapy, radical surgical resection with partial/complete mesorectal excision and subsequent adjuvant chemotherapy represents the current international standard of care. Further developments in neoadjuvant therapy concepts, such as the principle of total neoadjuvant therapy, lead to an increasing number of patients who show a complete clinical response at restaging after neoadjuvant therapy, without clinically detectable residual tumor. In view of the risk associated with radical surgical resection in terms of perioperative morbidity and a potentially non-continence-preserving procedure, the question of the oncological justifiability of an organ-preserving procedure in the case of a complete clinical response under neoadjuvant therapy is increasingly being raised. The therapeutic principle of watch and wait, defined by refraining from immediate radical surgical resection and inclusion in a close-meshed, structured follow-up program, currently appears to be oncologically justifiable based on the current study situation; however, further optimization and standardization based on broadly designed studies appear necessary for the initial evaluation of the extent of the clinical response and for the structuring of the close-meshed follow-up program, so that this concept can be offered to a clearly defined patient collective as an oncologically equivalent therapy principle outside specialized centers as well.
abstract_id: PUBMED:34684080 Evaluation and Predictive Factors of Complete Response in Rectal Cancer after Neoadjuvant Chemoradiation Therapy. The response to neoadjuvant chemoradiation therapy is an important prognostic factor for locally advanced rectal cancer. Although the majority of patients are referred for surgery after neoadjuvant therapy, the clinical data show that a complete clinical or pathological response is found in a significant proportion of patients. The diagnostic accuracy of confirming a complete response has a crucial role in the further management of a rectal cancer patient. As clinical complete response, unfortunately, is not always consistent with pathological complete response, accurate diagnostic parameters and predictive markers of tumor response may help to guide more personalized treatment strategies and identify potential candidates for nonoperative management more safely. The management of complete response demands interdisciplinary collaboration including oncologists, radiotherapists, radiologists, pathologists, endoscopists and surgeons, because the absence of a multidisciplinary approach may compromise the oncological outcome. Prediction and improvement of rectal cancer response to neoadjuvant therapy is still an active and challenging field of further research. This literature review summarizes the main currently known clinical information about complete response that could be useful when encountering this condition in rectal cancer patients after neoadjuvant chemoradiation therapy, using as a source PubMed publications from 2010-2021 matching the search terms "rectal cancer", "neoadjuvant therapy" and "response". abstract_id: PUBMED:25931933 Organ preservation in rectal cancer patients following complete clinical response to neoadjuvant chemoradiotherapy: Long-term results in three patients. Rectal cancer patients with a complete clinical response to neoadjuvant chemoradiotherapy (CRT) can be followed up without surgery. In particular, patients who would have required abdominoperineal resection before CRT tend to choose the follow-up protocol when given the necessary information. The purpose of this study was to demonstrate the long-term follow-up results of patients managed without surgery after neoadjuvant CRT. abstract_id: PUBMED:31054548 Principle of surgical management for rectal cancer patients with complete clinical response after neoadjuvant therapy. A proportion of patients with locally advanced rectal cancer will achieve clinical complete response (cCR) or pathologic complete response (pCR) after neoadjuvant chemoradiotherapy. With the proposal of the concept of total neoadjuvant therapy (TNT), higher complete response rates will be observed. The management of patients with cCR has long been an issue of controversy and is attractive for clinical trials. A "watch and wait" strategy for patients with cCR has been put forward by some scholars. A non-operative approach can preserve organ function and avoid complications after radical surgery. The safety and feasibility of a "watch and wait" strategy have been established in several non-randomized controlled studies. There is no consensus on how to make an optimal decision for patients with cCR. For example, cCR is consistent with pCR in only some patients, and the molecular biomarkers for predicting pCR are suboptimal. Besides, cCR is inconsistently defined and surveillance recommendations vary.
Furthermore, there is insufficient high-level evidence for the "watch and wait" strategy. For patients with a good response after chemoradiotherapy, local excision is an attractive alternative to total mesorectal excision, although its indications are uncertain and its oncological safety has been questioned. For patients with cCR, we implement the therapeutic principles of goal orientation, layered treatment and whole-process management. abstract_id: PUBMED:31741760 Complete response in patients with locally advanced rectal cancer after neoadjuvant treatment with nivolumab. Introduction: PD-1 inhibitors have been approved for the treatment of dMMR patients with metastatic colorectal cancer, but the efficacy of neoadjuvant treatment with PD-1 in dMMR locally advanced rectal cancer (LARC) patients has not yet been defined. Patients and methods: Two patients with LARC received nivolumab as neoadjuvant treatment in July 2017. Whole-exome sequencing (WES) and multiplex immunofluorescence analysis were performed. Results: Of the two patients, one achieved a pathological complete response after six cycles of nivolumab followed by surgery. The other patient was confirmed to have a clinical complete response after six cycles of nivolumab. A "watch and wait" strategy was adopted for anal preservation. WES showed a high tumor mutation burden. Multiplex immunofluorescence analysis showed immune microenvironment alteration between the pretreatment specimen and the post-treatment specimen. Conclusion: Neoadjuvant nivolumab induced a complete response in both patients with LARC. Immunotherapy might be an alternative strategy for neoadjuvant treatment for dMMR/MSI rectal cancer. abstract_id: PUBMED:29063019 Advances for achieving a pathological complete response for rectal cancer after neoadjuvant therapy. Neoadjuvant therapy has become the standard of care for locally advanced mid-low rectal cancer. Pathological complete response (pCR) can be achieved in 12%-38% of patients. Patients with pCR have the most favorable long-term outcomes. Intensifying neoadjuvant therapy and extending the interval between termination of neoadjuvant treatment and surgery may increase the pCR rate. Growing evidence has raised the issue of whether local excision or observation rather than radical surgery is an alternative for patients who achieve a clinical complete response after neoadjuvant therapy. Herein, we highlight many of the advances and resultant controversies that are likely to dominate the research agenda for pCR of rectal cancer in the modern era. abstract_id: PUBMED:29184475 Management of the Complete Clinical Response. Organ preservation is considered in the management of selected patients with rectal cancer. Complete clinical response observed after neoadjuvant chemoradiation for rectal cancer is one of these cases. Patients who present a complete clinical response are candidates for the watch-and-wait approach, in which radical surgery is not performed immediately and is offered only in the event of a local relapse. These patients are included in a strict follow-up, and up to 70% of them will never be operated on during follow-up. This strategy is associated with oncological outcomes similar to those of patients who undergo surgery, with the advantage of avoiding the morbidity associated with the radical operation.
abstract_id: PUBMED:29602976 Do clinical criteria reflect pathologic complete response in rectal cancer following neoadjuvant therapy? Background: Clinical complete response (cCR) in rectal cancer is being evaluated as a tool to identify patients who would not require surgery in the curative management of rectal cancer. Our study reviews mucosal changes after neoadjuvant therapy for rectal cancer in patients treated at our center. Methods: Pathology reports were retrieved for patients treated with neoadjuvant chemoradiation therapy (CRT) or high-dose rate brachytherapy (HDRBT). The macroscopic appearance of the specimen was compared with pathologic staging. Results: This study included 282 patients: 88 patients underwent neoadjuvant CRT and 194 patients underwent HDRBT; all patients underwent total mesorectal excision (TME). There were 160 male and 122 female patients with a median age of 65 years (range 29-87). The median time between neoadjuvant therapy and surgery was 50 and 58 days. Sixty patients (21.2%) were staged as ypT0N0, 21.2% had a pathologic complete response (pCR), and only 3.2% had a cCR. Of the 67 patients with initial involvement of the circumferential radial margin (CRM), 44 converted to pathologic CRM-. Two hundred seventy-three patients (96.8%) had mucosal abnormalities. Of the 222 patients with residual tumor, 70 patients had no macroscopic tumor visualized but an ulcer in its place. Conclusion: Most patients undergoing neoadjuvant therapy for rectal cancer have residual mucosal abnormalities which preclude to a cCR as per published criteria from Brazil. Further studies are required to optimize clinical evaluation and MRI imaging in selected patients. abstract_id: PUBMED:36305516 Clinical and biochemical predictors of tumor response after neoadjuvant therapy in rectal cancer. Background: Patients who have a good clinical and/or pathologic response to neoadjuvant chemoradiotherapy (nCRT) for rectal cancer have better long-term outcomes and can potentially be spared morbid surgery. This study aimed to identify pretreatment clinical and biochemical predictors of response to neoadjuvant treatment for rectal cancer. Methods: Patients undergoing neoadjuvant therapy for rectal cancer between 2007 and 2022 were retrospectively included. Those patients who achieved a complete clinical response were offered a nonoperative management strategy and the remaining patients underwent surgical resection. The primary endpoint was tumor regression grade (TRG) based on radiological imaging (mrTRG) or pathology (pTRG). Patient response was classified as good (mrTRG 1-2 or pTRG 0-1) versus poor (mrTRG 3-4 or pTRG 2-3). Logistic regression was performed to determine predictors of TRG. Results: A total of 984 patients with rectal cancer were identified of which 274 met the inclusion criteria. Of 274 patients, 228 (83%) underwent surgical resection. A good TRG response was observed in 119 (41%) patients, and a complete response was achieved in 53 (17%) patients. On univariable and multivariable logistic regression, clinical T2 stage and body mass index of ≥25 kg/m2 were significant predictors of a good TRG. Clinical T2 stage and a personalised total neoadjuvant therapy regimen were significant predictors of complete response. Conclusion: Clinical T2 stage and a BMI≥25 kg/m2 were predictors of good response to neoadjuvant therapy for rectal cancer. Future prospective studies are required to confirm these findings and evaluate their potential use in better targeting of nCRT. 
Answer: The concept of avoiding surgery in rectal cancer patients who achieve a complete clinical response (cCR) after neoadjuvant chemoradiation is a subject of ongoing research and debate. The "watch and wait" strategy, which involves refraining from immediate radical surgical resection and instead enrolling patients in a closely monitored follow-up program, is considered oncologically justifiable based on current studies. This approach is particularly relevant for patients who would otherwise require more invasive surgeries that could compromise continence or other organ functions (PUBMED:34878582). Clinical factors such as tumor occupation of the bowel lumen ≤1/2, maximum tumor diameter ≤5 cm, and a time interval between neoadjuvant chemoradiotherapy and surgery >49 days have been identified as independent predictors of pathological complete response (pCR) (PUBMED:32234159). However, it is important to note that a clinical complete response does not always equate to a pathological complete response, and accurate diagnostic parameters and predictive markers are needed to guide more personalized treatment strategies (PUBMED:34684080). The safety and feasibility of the "watch and wait" strategy have been established in several non-randomized controlled studies, but there is no consensus on how to make an optimal decision for patients with cCR. The definitions of cCR are inconsistent, and surveillance recommendations vary. Furthermore, there is a lack of high-level evidence for the "watch and wait" strategy (PUBMED:31054548). For patients who achieve a cCR, local excision may be an alternative to total mesorectal excision, but its indications and oncological safety are still being debated (PUBMED:29063019). Organ preservation is considered in the management of selected patients with rectal cancer, and up to 70% of patients who present with a complete clinical response and follow a watch-and-wait approach may never require surgery during follow-up, with similar oncological outcomes to those who undergo surgery (PUBMED:29184475). In conclusion, while the possibility of avoiding surgery for rectal cancer patients with a complete clinical response to neoadjuvant chemoradiation is being explored, it requires careful patient selection, accurate assessment of response, and a structured follow-up program. Further research is needed to optimize this approach and ensure its oncological safety (PUBMED:34878582, PUBMED:31054548, PUBMED:29184475).
Instruction: Does Helicobacter pylori infection relate to iron deficiency anaemia in prepubescent children under 12 years of age? Abstracts: abstract_id: PUBMED:12948075 Does Helicobacter pylori infection relate to iron deficiency anaemia in prepubescent children under 12 years of age? Aim: To investigate the association between Helicobacter pylori (H. pylori) infection and iron deficiency anaemia (IDA). Methods: Haemoglobin levels, iron parameters and serum IgG antibodies to H. pylori were measured in 693 children aged 9 to 12 y. Results: No significant differences in the seroprevalence of H. pylori infection and antibody titres to H. pylori were found between the IDA group and the non-anaemic controls. Conclusion: H. pylori infection does not seem to contribute to iron deficiency in prepubescent children. abstract_id: PUBMED:31500264 Association between Active H. pylori Infection and Iron Deficiency Assessed by Serum Hepcidin Levels in School-Age Children. Hepcidin regulates iron metabolism. Its synthesis increases in infection and decreases in iron deficiency. The aim of this study was to evaluate the relationship between H. pylori infection and iron deficiency by levels of hepcidin in children. A total of 350 school-age children participated in this cross-sectional study. Determinations of serum ferritin, hemoglobin, hepcidin, C-reactive protein, and α-1-acid-glycoprotein were done. Active H. pylori infection was performed with a 13C-urea breath test. In schoolchildren without H. pylori infection, hepcidin was lower in those with iron deficiency compared to children with normal iron status (5.5 ng/mL vs. 8.2 ng/mL, p = 0.017); while in schoolchildren with H. pylori infection the levels of hepcidin tended to be higher, regardless of the iron nutritional status. Using multivariate analysis, the association between H. pylori infection and iron deficiency was different by hepcidin levels. The association between H. pylori and iron deficiency was not significant for lower values of hepcidin (Odds Ratio = 0.17; 95% Confidence Interval [CI] 0.02-1.44), while the same association was significant for higher values of hepcidin (OR = 2.84; CI 95% 1.32-6.09). This joint effect is reflected in the adjusted probabilities for iron deficiency: Individuals with H. pylori infection and higher levels of hepcidin had a probability of 0.24 (CI 95% 0.14-0.34) for iron deficiency, and this probability was 0.24 (CI 95% 0.14-0.33) in children without H. pylori infection and lower levels of hepcidin. In children with H. pylori infection and iron deficiency, the hepcidin synthesis is upregulated. The stimulus to the synthesis of hepcidin due to H. pylori infection is greater than the iron deficiency stimulus. abstract_id: PUBMED:37773488 Endoscopic findings and predictors of gastrointestinal lesions in children with iron deficiency anemia. Iron deficiency anemia (IDA) can be caused by occult gastrointestinal (GI) blood loss; however, the endoscopic findings in children with anemia are unclear. The study aimed to determine the frequency and factors related to lesions in children with IDA undergoing endoscopy. We retrospectively analyzed the clinical and endoscopic findings of children with a laboratory-based diagnosis of IDA. Of 58 patients, 36 (62.1%) had upper GI tract lesions, with erosive gastritis being the most common lesion. Further, 26 patients underwent concomitant colonoscopy, and 12 (46.2%) had lower GI tract lesions. Overall, 44 (75.9%) patients had lesions in either the upper or lower GI tract. 
Helicobacter pylori infection was detected in 13 patients (22.4%). Patients with lesions found by endoscopy had significantly lower hemoglobin level (8.9 vs. 10.0 g/dL, p = 0.047) and mean corpuscular volume (75.5 vs. 80.9 fL, p = 0.038). The proportion of patients with previous treatment for IDA was also higher in those with lesions on endoscopy. In multivariate analysis, age of ≥10 years (odds ratio [OR], 6.00; 95% confidence Interval [CI], 0.56-10.75) and positive fecal occult blood test (FOBT) findings (OR, 2.25; 95% CI, 0.14-4.52) were factors related to GI lesions. The presence of GI symptoms was not associated with GI lesions. A high proportion of GI lesions were found by endoscopy in children with IDA in this study. Endoscopy should be considered in children with IDA even without GI symptoms, especially in older children, and those with positive FOBT results. abstract_id: PUBMED:21612616 An association between Helicobacter pylori infection and cognitive function in children at early school age: a community-based study. Background: H. pylori infection has been linked to iron deficiency anemia, a risk factor of diminished cognitive development. The hypothesis on an association between H. pylori infection and cognitive function was examined in healthy children, independently of socioeconomic and nutritional factors. Methods: A community-based study was conducted among 200 children aged 6-9 years, from different socioeconomic background. H. pylori infection was examined by an ELISA kit for detection of H. pylori antigen in stool samples. Cognitive function of the children was blindly assessed using Stanford-Benit test 5th edition, yielding IQ scores. Data on socioeconomic factors and nutritional covariates were collected through maternal interviews and from medical records. Multivariate linear regression analysis was performed to obtain adjusted beta coefficients. Results: H. pylori infection was associated with lower IQ scores only in children from a relatively higher socioeconomic community; adjusted beta coefficient -6.1 (95% CI -11.4, -0.8) (P = 0.02) for full-scale IQ score, -6.0 (95% CI -11.1, -0.2) (P = 0.04) for non-verbal IQ score and -5.7 (95% CI -10.8, -0.6) (P = 0.02) for verbal IQ score, after controlling for potential confounders. Conclusions: H. pylori infection might be negatively involved in cognitive development at early school age. Further studies in other populations with larger samples are needed to confirm this novel finding. abstract_id: PUBMED:29211362 Clinical differences of Helicobacter pylori infection in children. Helicobacter pylori infection is widely spread all over the world. The prevalence of H. pylori infection in the world varies and depends on numerous factors such as age, ethnicity, geographical and socioeconomic status. Humans have been in a symbiotic relationship with this bacterium for thousands of years. However 10-20% of people infected with H. pylori are likely to develop gastroduodenal diseases such as peptic ulcer disease, iron deficiency anemia, gastric mucosal atrophy, metaplasia, dysplasia, MALT lymphoma, or gastric adenocarcinoma. Most of these diseases develop as the infection progresses and they are likely to occur later in life among the elderly. In the following years, the use of modern molecular techniques has led to the discovery of new Helicobacter strains and their genotypic differentiation. Newly discovered Helicobacter microorganisms can colonize human gastrointestinal tract and bile ducts. 
This article summarizes the distinct features of H. pylori infection in children including its prevalence, clinical manifestation, indications for treatment and recommended schemes of eradication. abstract_id: PUBMED:35028968 Resolution of iron deficiency following successful eradication of Helicobacter pylori in children. Aim: To assess correlation between successful Helicobacter pylori (HP) eradication and resolution of iron deficiency in children, without iron supplementation. Methods: Medical records of children diagnosed with HP infection based on endoscopy were retrospectively reviewed. Among those with non-anaemic iron deficiency (NAID) or iron deficiency anaemia (IDA), haemoglobin, ferritin and CRP levels were compared prior and 6-9 months' post-successful HP eradication. Predictors of resolution of iron deficiency following HP eradication were assessed. Results: Among 60 included children (median age 14.8, IQR12.3-16 years; 62% males), 35% had IDA while the remaining 65% had NAID. Following successful HP eradication, iron normalised in 60% of patients with iron deficiency (ID), without iron supplementation. There were significant improvements in haemoglobin and ferritin concentrations following HP eradication with haemoglobin increasing from 12.3 g/dL to 13.0 g/dL and ferritin increasing from 6.3 μg/L to 15.1 μg/L (p < 0.001). In multiple logistic regression, older age was the only factor associated with resolution of anaemia following HP eradication (OR 1.65, 95% CI 1.16-2.35, p = 0.005). Conclusion: Successful HP eradication could be helpful in improving iron status among children with refractory NAID or IDA. Older age may predict this outcome. Screening for HP might be considered in the workup of refractory IDA or ID. abstract_id: PUBMED:32589356 Autoimmune atrophic gastritis: The role of Helicobacter pylori infection in children. Background: Autoimmune atrophic gastritis (AIG) is very rare in children. Despite a better understanding of histopathologic changes and serological markers in this disease, underlying etiopathogenic mechanisms and the effect of Helicobacter pylori (H pylori) infection are not well known. We aimed to investigate the relation between AIG and H pylori infection in children. Materials And Methods: We evaluated the presence of AIG and H pylori infection in fifty-three patients with positive antiparietal cell antibody (APCA). Demographic data, clinical symptoms, laboratory and endoscopic findings, histopathology, and presence of H pylori were recorded. Results: The children were aged between 5 and 18 years, and 28 (52.8%) of them were male. Mean age was 14.7 ± 2.6 years (median: 15.3; min-max: 5.2-18), and 10 (18.8%) of them had AIG confirmed by histopathology. In the AIG group, the duration of vitamin B12 deficiency was longer (P = .022), hemoglobin levels were lower (P = .018), and APCA (P = .039) and gastrin (P = .002) levels were higher than those in the non-AIG group. Endoscopic findings were similar between the two groups. Intestinal metaplasia was higher (P = .018) in the AIG group. None of the patients in the AIG group had H pylori infection (P = .004). One patient in the AIG group had enterochromaffin-like cell hyperplasia. Conclusions: Our results show that, in children, H pylori infection may not play a role in AIG. AIG could be associated with vitamin B12 deficiency, iron deficiency, and APCA positivity in children. APCA and gastrin levels should be investigated for the early diagnosis of AIG and intestinal metaplasia. 
abstract_id: PUBMED:21434997 Iron deficiency and Helicobacter pylori infection in children. Aim: To examine the relationship between iron deficiency (ID) and Helicobacter pylori infection in school-aged children. Methods: Altogether 363 children from ambulatory admission were consecutively enrolled in the study. Haemoglobin (Hb), soluble transferrin receptor (sTfR), IgG against H. pylori and IgA against tissue transglutaminase were measured. The criteria for ID were sTfR > 5.7 mg/L in children aged 7-12 years and sTfR > 4.5 mg/L in older children, for anaemia Hb < 115 g/L in the younger group and Hb < 130 g/L for older boys and Hb < 120 g/L for girls. Results: Iron deficiency was found in 17% of the children, 5% had also anaemia. H. pylori colonization was detected in 27% and serum markers for coeliac disease in 0.6% of the children. The prevalence of ID and H. pylori seropositivity was higher in older children (23% and 29%, vs 9% and 22%, respectively). Children with H. pylori were significantly shorter [length SDS 1.0 (0.98-1.01) vs 0.98 (0.97-0.99)]. Older children had risk for ID (OR 1.1, 95% CI 1.0-1.3, p = 0.03). Although the prevalence of H. pylori seropositivity was higher in the ID group, it was not significantly associated with ID in multivariate analysis. Conclusion: Helicobacter pylori seropositivity was not associated with ID. The associated factor for ID was age. abstract_id: PUBMED:22803297 Helicobacter pylori and recurrent abdominal pain in children: is there any relation? Background: The role of Helicobacter pylori (HP) as a cause of recurrent abdominal pain (RAP) and gastrointestinal symptoms is controversial and there still remains a big debate whether to test and treat or not. Aim: To investigate the correlation between HP infection and RAP as well as other GI symptoms. Methods: We conducted a case control study at the Jeddah Clinic Hospital from January 2009 to December 2010. It included 244 cases (group I) aged 2-16 years with RAP after exclusion of any organic disease. Cases receiving antibiotics, bismuth, H2 antagonists or proton pump inhibitors during last 45 days were excluded. 122 age and gender matched asymptomatic children (group II) were enrolled as controls. Both groups were tested for Helicobacter pylori infection using stool antigen and/or urea breath test. Results: The mean age of cases was 7.76 +/- 3.38 years. 48% of cases were males. There was no significant statistical difference between both groups regarding age and sex distribution, nationality and body weight (BW). 42.6% cases were positive for H. pylori infection in group I and 45% in group II. Comparison between HP positive cases and HP negative cases in group I revealed a statistically significant difference in incidence of vomiting, epigastric pain, history of infected family member and iron deficiency anemia (p = 0.001, 0.000, 0.000 and 0.025 respectively). Conclusion: HP infection is documented in more than 40% of both symptomatic and asymptomatic children. There is no association between RAP and HP. abstract_id: PUBMED:25598957 Evaluation of Iron deficiency anemia and BMI in children suffering from Helicobacter pylori infection. Background: Recent studies suggest an association between H. pylori infection and disorders such as iron deficiency anemia and growth delay. Considering the high prevalence of H. pylori infection and iron deficiency anemia, this study was performed in order to evaluate their relevance in children undergoing an upper endoscopy. 
Materials And Methods: In this case-control study, children aged 2 to 16 years undergoing endoscopy from March 2012 to March 2013 at Besat Hospital of Hamedan were selected. Participants were divided into H. pylori-infected and non-infected groups. The two groups were then compared in terms of body mass index (BMI) and the incidence of iron deficiency anemia. The presence of Helicobacter pylori infection in children was confirmed by Giemsa staining of gastric biopsy specimens. Collected data were analyzed with SPSS 17.0 (SPSS Inc., Chicago, IL) using t-tests and chi-square tests. Results: In this study, 200 children (94 male and 106 female) were evaluated. The most common presenting symptom in both groups was abdominal pain. 8.2% (9 cases) of the infected patients and 10.5% (10 cases) of the non-infected patients had iron deficiency anemia, a difference that was not statistically significant (p = 0.270). Also, no statistically significant difference was noted between the two groups in terms of gender (p = 0.32), hemoglobin (p = 0.35), ferritin levels (p = 0.275) and body mass index (p = 0.273). Conclusion: The results of this study did not show an association between H. pylori infection and iron deficiency anemia or body mass index in the studied children. Answer: The relationship between Helicobacter pylori (H. pylori) infection and iron deficiency anemia (IDA) in prepubescent children under 12 years of age is not clearly established based on the provided abstracts. Some studies suggest there may be a connection, while others do not find a significant association. One study found no significant differences in the seroprevalence of H. pylori infection and antibody titres to H. pylori between the IDA group and the non-anaemic controls, suggesting that H. pylori infection does not seem to contribute to iron deficiency in prepubescent children (PUBMED:12948075). However, another study indicated that in school-age children with H. pylori infection, the levels of hepcidin, a regulator of iron metabolism, tended to be higher regardless of the iron nutritional status. This suggests that the stimulus to the synthesis of hepcidin due to H. pylori infection is greater than the iron deficiency stimulus, which could imply a relationship between active H. pylori infection and iron deficiency (PUBMED:31500264). Additionally, a study that examined endoscopic findings in children with IDA found that a significant proportion of patients had lesions in either the upper or lower GI tract, and Helicobacter pylori infection was detected in 22.4% of patients. This suggests that there might be a link between H. pylori infection and gastrointestinal lesions that could contribute to IDA (PUBMED:37773488). Another study found that successful eradication of H. pylori could be helpful in improving iron status among children with refractory non-anaemic iron deficiency (NAID) or IDA, without iron supplementation. This indicates a potential relationship between H. pylori infection and iron deficiency in children (PUBMED:35028968). In contrast, a study examining the relationship between iron deficiency and H. pylori infection in school-aged children did not find a significant association between H. pylori seropositivity and iron deficiency after controlling for age in multivariate analysis (PUBMED:21434997). Overall, the evidence from the provided abstracts is mixed, and while some studies suggest a potential relationship between H. pylori infection and iron deficiency anemia in children, others do not find a significant link.
Further research may be needed to clarify the association in prepubescent children under 12 years of age.
Instruction: Are boys and girls that different? Abstracts: abstract_id: PUBMED:33403198 Noncommunicable Disease Risk Factors Among Adolescent Boys and Girls in Bangladesh: Evidence From a National Survey. Objectives: To assess the prevalence of noncommunicable disease (NCD) risk factors and the factors associated with the coexistence of multiple risk factors (≥ 2 risk factors) among adolescent boys and girls in Bangladesh. Methods: Data on selected NCD risk factors collected from face to face interviews of 4,907 boys and 4,865 girls in the national Nutrition Surveillance round 2018-2019, was used. Descriptive analysis and multivariable logistic regression were performed. Results: The prevalence of insufficient fruit and vegetable intake, inadequate physical activity, tobacco use, and being overweight/obese was 90.72%, 29.03%, 4.57%, and 6.04%, respectively among boys; and 94.32%, 50.33%, 0.43%, and 8.03%, respectively among girls. Multiple risk factors were present among 34.87% of boys and 51.74% of girls. Younger age (p < 0.001), non-slum urban (p < 0.001) and slum residence (p < 0.001), higher paternal education (p = 0.001), and depression (p < 0.001) were associated with the coexistence of multiple risk factors in both boys and girls. Additionally, higher maternal education (p < 0.001) and richest wealth quintile (p = 0.023) were associated with the coexistence of multiple risk factors in girls. Conclusion: The government should integrate specific services into the existing health and non-health programs which are aimed at reducing the burden of NCD risk factors. abstract_id: PUBMED:33681863 Implementation and Evaluation of a Psychoactive Substance Use Intervention for Children in Afghanistan: Differences Between Girls and Boys at Treatment Entry and in Response to Treatment. Psychoactive substance use among children in Afghanistan is an issue of concern. Somewhere around 300,000 children in the country have been exposed to opioids that either parents directly provided to them or by passive exposure. Evidence-based and culturally appropriate drug prevention and treatment programs are needed for children and families. The goals of this study were to: (1) examine lifetime psychoactive substance use in girls and boys at treatment entry; and (2) examine differential changes in substance use during and following treatment between girls and boys. Children ages 10-17 years old entering residential treatment were administered the Alcohol, Smoking and Substance Involvement Screening Test for Youth (ASSIST-Y) at pre- and post-treatment, and at three-month follow-up. Residential treatment was 45 days for children and 180 days for adolescents and consisted of a comprehensive psychosocial intervention that included education, life skills, individual and group counseling and, for older adolescents, vocational skills such as embroidery and tailoring. Girls and boys were significantly different regarding lifetime use of five substances at treatment entry, with girls less likely than boys to have used tobacco, cannabis, stimulants, and alcohol, and girls more likely than boys to have used sedatives. Differences between boys and girls were found for past-three-month use of four substances at treatment entry, with girls entering treatment with higher past-three-month use of opioids and sedatives, and boys with higher past-three-month use of tobacco, cannabis, and alcohol. Change over the course of treatment showed a general decline for both girls and boys in the use of these substances. 
Girls and boys in Afghanistan enter treatment with different lifetime substance use histories and different patterns of past-three-month use. Treatment of children for substance use problems must be sensitive to possible differences between girls and boys in substance use history. abstract_id: PUBMED:34988108 Cord Blood Thyroid Hormones and Neurodevelopment in 2-Year-Old Boys and Girls. Objective: Thyroid hormones are essential for neurodevelopment in early life. However, the impact of mild alterations in neonatal thyroid hormones on infant neurodevelopment and its sex dimorphism is unclear. We aimed to assess whether mild variations in the neonatal thyroid hormones of term-born newborns of euthyroid mothers are related to neurodevelopment in 2-year-old boys and girls. Methods: This study used data from 452 singleton term-born infants of mothers with normal thyroid function in Shanghai, China, and their follow-up measurements at the age of 2 years. Cord serum concentrations of free thyroxine (FT4), free triiodothyronine (FT3), thyroid-stimulating hormone (TSH), and thyroid peroxidase antibody (TPOAb) were measured by chemiluminescent microparticle immunoassays and classified into three groups: the low (1st, Q1), middle (2nd-4th, Q2-Q4), and high (5th, Q5) quintiles. Neurodevelopment indices were assessed using the Ages and Stages Questionnaire, third edition (ASQ-3), at 24 months of age. Results: Compared to infants with thyroid hormones in the middle quintiles (Q2-Q4), boys with FT4 in the lowest quintile had 5.08 (95% CI: 1.37, 8.78) points lower scores in the communication domain, 3.25 (0.25, 6.25) points lower scores in the fine motor domain, and 3.84 (0.04, 7.64) points lower scores in the personal-social domain. Boys with FT3 in the highest quintile had a 4.46 (0.81, 8.11) point increase in the personal-social domain. These associations were not observed in girls. No associations were observed between cord blood serum TSH and ASQ-assessed neurodevelopment in the boys or the girls. Conclusions: Mild alterations in the thyroid hormones of newborns were associated adversely with neurodevelopment in boys, suggesting the importance of optimal thyroid hormone status for neurodevelopment in early life. abstract_id: PUBMED:28711896 Computer-tomographic characteristics of root length incisors and canines of the upper and lower jaws in boys and girls with different craniotypes and physiological bite. Introduction: In recent years, studies have appeared in the world literature examining the relationship of craniotype with odontometric indicators and with the size and shape of the dental arches and occlusion. However, most experts have focused on individual structural features of the dentofacial system in people with different facial types, and on aspects of sexual dimorphism and ethnicity. We did not encounter studies describing craniotype-related variability of tooth roots. The aim was to characterize, from CT scans, the root lengths of the incisors and canines of the upper and lower jaws in boys and girls of different craniotypes with physiological bite living in the Podilskiy region of Ukraine. Material And Methods: The study involved young people with orthognathic bite whose cephalometric data were taken from the database of the Scientific and Research Center of Vinnitsa National Medical University named after Pirogov. CT imaging was performed with a dental cone-beam CT scanner (Veraviewepocs 3D, Morita).
For the upper and lower incisors and canines, the vestibular-oral and mesio-distal projection lengths of the root were measured. The height (length) of the root was measured in the medial (or distal) projection, from the border between crown and root to the apex of the tooth root. The craniotype distribution was as follows: 16 mesocephalic boys, 19 brachycephalic boys, 16 mesocephalic girls, and 26 brachycephalic girls. Statistical analysis of the results was performed with the licensed statistical software package "Statistica 6.0" using non-parametric estimation methods. Results: Computed tomographic characteristics of the root lengths of the incisors and canines of the upper and lower jaws were established for boys and girls of the Podilskiy region of Ukraine with different craniotypes and physiological bite. In mesocephalic boys and girls, most values of the vestibular-oral and mesio-distal projection root lengths of the medial and lateral incisors in the upper and lower jaws were significantly higher than in brachycephalic subjects of the same sex. Most values of the vestibular-oral and mesio-distal projection root lengths of the medial and lateral incisors in the upper and lower jaws were significantly higher in boys of the total group and in brachycephalic boys than in girls of the corresponding comparison groups. In mesocephalic boys, only the vestibular-oral projection root length in the lower jaw was significantly higher than in girls of the same craniotype. Conclusions: In mesocephalic boys and girls, most values of the vestibular-oral and mesio-distal projection root lengths of the medial and lateral incisors in the upper and lower jaws were significantly higher than in brachycephalic subjects of the same sex. abstract_id: PUBMED:25611939 The impact of playworks on boys' and girls' physical activity during recess. Background: School-based programs, such as Playworks, that guide students in organized activities during recess and make improvements to the recess play yard may lead to significant increases in physical activity, especially for girls. This study builds on past research by investigating the impact of Playworks separately for girls and boys. Methods: Twenty-nine schools were randomly assigned to receive Playworks for 1 school year or serve as a control group. Postintervention physical activity data were collected via accelerometers and recess observations. Impacts were estimated separately for girls and boys using regression models. Results: Girls in Playworks schools had significantly higher accelerometer intensity counts and spent more time in vigorous physical activity than girls in control schools. No significant differences based on accelerometer data were found for boys. A significant impact was also found on the types of activities in which girls engaged during recess; girls in the treatment group were less likely than those in the control group to be sedentary and more likely to engage in jumping, tag, and playground games. Conclusions: The current findings suggest that Playworks had a significant impact on some measures of girls' physical activity, but no significant impact on measures of boys' physical activity. abstract_id: PUBMED:28624747 Who is most affected by prenatal alcohol exposure: Boys or girls? Objective: To examine outcomes among boys and girls that are associated with prenatal alcohol exposure. Methods: Boys and girls with fetal alcohol spectrum disorders (FASD) and randomly-selected controls were compared on a variety of physical and neurobehavioral traits.
Results: Sex ratios indicated that heavy maternal binge drinking may have significantly diminished viability to birth and survival of boys postpartum more than girls by age seven. Case control comparisons of a variety of physical and neurobehavioral traits at age seven indicate that both sexes were affected similarly for a majority of variables. However, alcohol-exposed girls had significantly more dysmorphology overall than boys and performed significantly worse on non-verbal IQ tests than males. A three-step sequential regression analysis, controlling for multiple covariates, further indicated that dysmorphology among girls was significantly more associated with five maternal drinking variables and three distal maternal risk factors. However, the overall model, which included five associated neurobehavioral measures at step three, was not significant (p=0.09, two-tailed test). A separate sequential logistic regression analysis of predictors of a FASD diagnosis, however, indicated significantly more negative outcomes overall for girls than boys (Nagelkerke R2=0.42 for boys and 0.54 for girls, z=-2.9, p=0.004). Conclusion: Boys and girls had mostly similar outcomes when prenatal alcohol exposure was linked to poor physical and neurocognitive development. Nevertheless, sex ratios implicate lower viability and survival of males by first grade, and girls have more dysmorphology and neurocognitive impairment than boys resulting in a higher probability of a FASD diagnosis. abstract_id: PUBMED:34129105 Clinically significant body dissatisfaction: prevalence and association with depressive symptoms in adolescent boys and girls. Body dissatisfaction is distressing and a risk factor for adverse consequences including eating disorders. However, data pertaining to the prevalence of body dissatisfaction in adolescence, a key period for its emergence, are lacking. This is a substantial barrier to tailored assessment and early intervention. This study addresses this gap and provides the prevalence of body dissatisfaction and associations with depressive symptoms and body change strategies. Adolescent boys (n = 367; Mage = 12.8, SD = 0.7) and girls (n = 368; Mage = 12.7, SD = 0.7) completed measures of body dissatisfaction and depressive symptoms with established cut-off levels. They also completed measures of dietary restraint and strategies to increase muscle size. Of boys and girls, 37.9% and 20.7%, respectively experienced moderate, and 6.8% and 19.6% experienced clinically significant body dissatisfaction, with higher rates among girls than boys and among adolescents aged 13 and 14 than aged 12. More than one-quarter of boys (26.70%) and one-third of girls (33.15%) reported subthreshold depressive symptoms or possible, probable or major depressive episodes. Girls revealed a higher prevalence of possible-, probable-, or major depressive episode than boys. Relative to those with no or low body dissatisfaction, adolescents with clinically significant body dissatisfaction were 24 times more likely to also report possible-, probable-, or major depressive episodes. Among boys and girls, clinically significant body dissatisfaction was associated with higher levels of dietary restraint and engagement in strategies to increase muscle size. Greater attention to identification and early intervention for body dissatisfaction is needed, especially for girls. abstract_id: PUBMED:27472510 Behavioural Differences Between Online Sexual Groomers Approaching Boys and Girls. 
This study focused on the behavior of convicted offenders who had approached profiles of boys and girls online for offline sexual encounters. A detailed coding scheme was designed to code and analyze offenders' grooming behaviors in transcripts of conversational interactions between convicted offenders and 52 volunteer workers purporting to be girls and 49 volunteer workers who masqueraded as boys. Behavioral differences and commonalities associated with the gender of the groomed child decoys were examined. Results showed that offenders approaching boys were significantly older and pretended to be younger than offenders approaching girls. When compared to offenders grooming boy decoys, offenders grooming girl decoys typically built more rapport, were less sexually explicit, and approached sexual topics carefully and indirectly. Offenders also used more strategies to conceal contact with girls than with boys. abstract_id: PUBMED:25901282 Relationships between body size attitudes and body image of 4-year-old boys and girls, and attitudes of their fathers and mothers. Background: Body size attitudes and body image form early in life, and understanding the factors that may be related to the development of such attitudes is important to design effective body dissatisfaction and disordered eating prevention interventions. This study explored how fathers' and mothers' body size attitudes, body dissatisfaction, and dietary restraint are associated with the body size attitudes and body image of their 4-year-old sons and daughters. Methods: Participants were 279 4-year-old children (46% boys) and their parents. Children were interviewed and parents completed questionnaires assessing their body size attitudes and related behaviours. Results: Socially prescribed stereotypical body size attitudes were evident in 4-year-old boys and girls; however, prevalence of body dissatisfaction was low in this sample. Correlation analyses revealed that boys' body size attitudes were associated with a number of paternal body image variables. In boys, attributing negative characteristics to larger figures and positive characteristics to thinner figures were associated with fathers having more negative attitudes towards obese persons. Attributing positive characteristics to larger figures by boys was associated with greater levels of paternal dietary restraint. In girls, attributing positive characteristics to thinner figures was only associated with greater maternal dietary restraint. Conclusions: Findings suggest the possibility that fathers' body size attitudes may be particularly important in establishing body size attitudes in their sons. Further research is necessary to better understand the role of fathers in the development of children's body size attitudes. abstract_id: PUBMED:32798100 Maternal Age at Menarche and Pubertal Timing in Boys and Girls: A Cohort Study From Chongqing, China. Purpose: This study explored the association of maternal age at menarche (AAM) with pubertal timing among girls and boys in Chongqing, China. Methods: Pubertal development of 1,237 children (542 girls and 695 boys) were examined half-yearly through inspection and palpation from April 2014 to June 2019. Characteristics of parents and maternal AAM were collected by a parental questionnaire at baseline. Maternal AAM was used both as a continuous and a categorical variable in Cox regression models. Results: A total of 1,198 children (528 girls and 670 boys) were included in the study. 
In the simple Cox model, earlier maternal AAM was associated with girls' earlier menarche, breast and pubic hair development, and boys' first ejaculation, testicular development, and genital development. When adjusting for children's body mass index z-scores (BMIz) and socioeconomic covariates, we found that girls whose mothers had early AAM had a higher risk of earlier onset of menarche (hazard ratio [HR]: .922, 95% confidence interval [CI]: .852-.998 for continuous maternal AAM, HR: 1.297, 95% CI: 1.041-1.616 for maternal AAM ≤13 years), and boys whose mother achieved menarche earlier experienced a higher risk of earlier onset of first ejaculation (HR: .896, 95% CI: .830-.968). Children's BMIz were related to all nine pubertal milestones. Parental education and relationship, birth weight, parity, and family type were also associated with pubertal timing. Conclusions: Earlier maternal AAM was related to earlier pubertal timing in both girls and boys in Chongqing, especially girls' age at menarche and boys' first ejaculation. Children's BMIz was the most consistent factor for pubertal timing. Children's BMIz and socioeconomic conditions had greater influence on most pubertal milestones than maternal AAM. Answer: Based on the provided abstracts, boys and girls exhibit differences in various aspects, including noncommunicable disease risk factors, responses to substance use interventions, neurodevelopmental outcomes related to thyroid hormones, dental characteristics, physical activity levels during recess, outcomes associated with prenatal alcohol exposure, body dissatisfaction and depressive symptoms, online sexual grooming behaviors, body size attitudes and body image, and pubertal timing. For instance, in Bangladesh, adolescent girls had a higher prevalence of multiple noncommunicable disease risk factors compared to boys (PUBMED:33403198). In Afghanistan, girls and boys showed different patterns of psychoactive substance use at treatment entry and responded differently to treatment (PUBMED:33681863). Boys' neurodevelopment at 2 years old was more affected by mild alterations in neonatal thyroid hormones than girls' (PUBMED:34988108). Boys and girls with different craniotypes showed variations in root length of incisors and canines (PUBMED:28711896). Playworks, a school-based program, had a significant impact on girls' physical activity during recess but not on boys' (PUBMED:25611939). Prenatal alcohol exposure led to more dysmorphology and neurocognitive impairment in girls than in boys (PUBMED:28624747). Clinically significant body dissatisfaction and depressive symptoms were more prevalent among adolescent girls than boys (PUBMED:34129105). Online sexual groomers approached boys and girls differently, with more rapport-building and indirect approaches used with girls (PUBMED:27472510). Fathers' and mothers' body size attitudes and behaviors were associated with those of their 4-year-old sons and daughters, with some evidence suggesting fathers' attitudes may be particularly influential for boys (PUBMED:25901282). Lastly, maternal age at menarche was associated with earlier pubertal timing in both girls and boys, with body mass index and socioeconomic conditions also playing a role (PUBMED:32798100). In summary, the abstracts suggest that there are indeed differences between boys and girls in various health, developmental, and behavioral domains. However, these differences are complex and influenced by a range of biological, environmental, and social factors.
Instruction: Characterization of multiple sclerosis plaques using susceptibility-weighted imaging at 1.5 T: can perivenular localization improve specificity of imaging criteria? Abstracts: abstract_id: PUBMED:25783798 Characterization of multiple sclerosis plaques using susceptibility-weighted imaging at 1.5 T: can perivenular localization improve specificity of imaging criteria? Background And Purpose: The purpose of this study was to determine if magnetic resonance (MR) susceptibility-weighted imaging (SWI) can increase the conspicuity of corticomedullary veins within the white matter lesions of multiple sclerosis (MS) and, thus, aid in distinguishing plaques from leukoaraiosis. Methods: We retrospectively reviewed MR examinations in 21 patients with a clinical diagnosis of MS and 18 patients with a clinical diagnosis of dementia. Examinations included fluid-attenuated inversion recovery (FLAIR) and SWI sequences obtained in the axial plane. Lesions greater than 5 mm in diameter on the axial FLAIR sequence were identified as periventricular or subcortical. Three neuroradiologists evaluated SWI images, compared with FLAIR, for a centrally located signal void in each lesion that was scored as present, absent, or indeterminate. Results: In patients with MS, central veins were present in both periventricular lesions (75%, P < 0.001) and subcortical lesions (52%, P < 0.005). In patients with dementia, central veins were seen much less frequently in subcortical lesions (14%, P < 0.001); their association with periventricular lesions was not significant. Conclusions: Central veins were detected in MS lesions with a significantly greater frequency than that in patients with dementia. Susceptibility-weighted imaging increases the conspicuity of corticomedullary veins and may improve the specificity of MR findings in MS. abstract_id: PUBMED:21960001 Detection of active plaques in multiple sclerosis using susceptibility-weighted imaging: comparison with gadolinium-enhanced MR imaging. Purpose: Susceptibility-weighted (SW) imaging is a magnetic resonance (MR) imaging technique reported effective in visualizing multiple sclerosis (MS) plaques, but its capacity to distinguish active plaques remains unclear. We evaluated active plaque detection by SW compared with contrast-enhanced MR imaging. Methods: We prospectively examined 11 patients using a 3-tesla scanner. Two neuroradiologists independently evaluated signal changes of plaques and accompanying low signal rims in 74 plaques on various SW images (magnitude, phase, and minimum intensity projection [minIP]), and on contrast-enhanced T(1)-weighted images (T(1)WI). We correlated signal alterations on various SW images and contrast enhancement on T(1)WI using Fisher's exact test and calculated sensitivity and specificity for predicting gadolinium enhancement. Results: Only changes in plaque signal on SW magnitude images correlated significantly with contrast enhancement of the plaques (P=0.008), and high signal intensity had 0.556 sensitivity and 0.787 specificity for predicting contrast-enhanced plaques. Furthermore, plaques with rims of low signal showed sensitivity of 0.296 and specificity of 0.957. Conclusions: Susceptibility-weighted magnitude, but not phase or minIP, images can predict MS plaques with contrast enhancement with high specificity. abstract_id: PUBMED:36575588 Automatic detection of active and inactive multiple sclerosis plaques using the Bayesian approach in susceptibility-weighted imaging. 
Background: Susceptibility-weighted imaging (SWI) is efficient in detecting multiple sclerosis (MS) plaques and evaluating the level of disease activity. Purpose: To automatically detect active and inactive MS plaques in SWI images using a Bayesian approach. Material And Methods: A 1.5-T scanner was used to evaluate 147 patients with MS. The area of the plaques along with their active or inactive status were automatically identified using a Bayesian approach. Plaques were given an orange color if they were active and a blue color if they were inactive, based on the preset signal intensity. Results: Experimental findings show that the proposed method has a high accuracy rate of 91% and a sensitivity rate of 76% for identifying the type and area of plaques. Inactive plaques were properly identified in 87% of cases, and active plaques in 76% of cases. The Kappa analysis revealed an 80% agreement between expert diagnoses based on contrast-enhanced and FLAIR images and Bayesian inferences in SWI. Conclusion: The results of our study demonstrated that the proposed method has good accuracy for identifying the MS plaque area as well as for identifying the types of active or inactive plaques in SWI. Therefore, it might be helpful to use the proposed method as a supplemental tool to accelerate the specialist's diagnosis. abstract_id: PUBMED:35126591 Comparison of susceptibility weighted imaging with conventional MRI sequences in multiple sclerosis plaque assessment: A cross-sectional study. Background: The current study was performed to compare susceptibility-weighted imaging (SWI) with magnetic resonance imaging (MRI) methods of T2-weighted (T2W) and fluid-attenuated inversion recovery (FLAIR) imaging in multiple sclerosis (MS) plaque assessment. Materials And Methods: This cross-sectional study was conducted among 50 MS patients referred to Shafa Imaging Center, Isfahan, Iran. Patients who fulfilled McDonald criteria and were diagnosed with MS by a professional neurologist at least 1 year before the study initiation were included in the study. Eligible patients underwent brain scans using SWI, T2W imaging, and FLAIR. Plaques' number and volume were detected separately for each imaging sequence. Moreover, identified lesions in SWI sequence were evaluated in terms of iron deposition and central veins. Results: Totally 50 patients (10 males and 40 females) with a mean age of 28.48 ± 5.25 years were included in the current study. Majority of patients (60%) had a disease duration of >5 years, and mean expanded disability status score was 2.56 ± 1.32. There was no significant difference between different imaging modalities in terms of plaques' number and volume (P > 0.05). It was also found that there was a high correlation between SWI and conventional imaging techniques of T2W (r = 0.97, 0.91, P < 0.001) and FLAIR (r = 0.99, 0.99, P < 0.001) in the estimation of both the number and volume of plaques (P < 0.001). Conclusion: The results of the present study indicated that SWI and conventional MRI sequences have similar efficiency for plaque assessment in MS patients. abstract_id: PUBMED:25270052 Susceptibility-weighted imaging and quantitative susceptibility mapping in the brain. Susceptibility-weighted imaging (SWI) is a magnetic resonance imaging (MRI) technique that enhances image contrast by using the susceptibility differences between tissues. It is created by combining both magnitude and phase in the gradient echo data. 
SWI is sensitive to both paramagnetic and diamagnetic substances, which generate different phase shifts in MRI data. SWI images can be displayed as a minimum intensity projection that provides high-resolution delineation of the cerebral venous architecture, a feature that is not available in other MRI techniques. As such, SWI has been widely applied to diagnose various venous abnormalities. SWI is especially sensitive to deoxygenated blood and intracranial mineral deposition and, for that reason, has been applied to image various pathologies including intracranial hemorrhage, traumatic brain injury, stroke, neoplasm, and multiple sclerosis. SWI, however, does not provide quantitative measures of magnetic susceptibility. This limitation is currently being addressed with the development of quantitative susceptibility mapping (QSM) and susceptibility tensor imaging (STI). While QSM treats susceptibility as isotropic, STI treats susceptibility as generally anisotropic, characterized by a tensor quantity. This article reviews the basic principles of SWI, its clinical and research applications, the mechanisms governing brain susceptibility properties, and its practical implementation, with a focus on brain imaging. abstract_id: PUBMED:31463267 Efficacy of diffusion-weighted imaging in symptomatic and asymptomatic multiple sclerotic plaques. Introduction: Magnetic resonance imaging (MRI) currently accompanies clinical findings in disease diagnosis, patients' follow-up, assessment of drug complications, and evaluation of treatment response. Although contrast-enhanced MRI (CE-MRI) is considered the imaging modality of choice for multiple sclerosis (MS), because of the chronicity of the disease, applying multiple doses of gadolinium-based contrast agents (GBCAs) increases the risk of nephrogenic systemic fibrosis in patients with acute (ARF) or chronic renal failure (CRF). Moreover, the effect of gadolinium on the fetus is not well known in pregnant patients. Therefore, this study evaluates the possibility of replacing postcontrast images with physiologically based MRI sequences such as diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps. Method: We prospectively evaluated 26 patients with known multiple sclerosis. The patients with MS attacks and the asymptomatic patients who were referred for follow-up were enrolled. Conventional MRI, including postcontrast T1W, DWI, and ADC, was performed for all patients. The signal intensity (SI) of all enhancing and nonenhancing plaques of more than 10 × 10 mm size was investigated in all sequences and analyzed. Results: A total of 83 plaques were detected in T2-FLAIR sequences, of which 51 plaques were enhanced (68%) after gadolinium administration. While 42 MS plaques had hypersignal intensity in DWI (56%), 32 plaques had iso- or hyposignal intensities in DWI (44%). No statistically significant values were obtained. Conclusion: Although DWI could not replace CE-MRI, using these two modalities together could increase detection of active MS plaques and alter patients' therapy and prognosis. abstract_id: PUBMED:25532777 Susceptibility-weighted imaging helps to discriminate pediatric multiple sclerosis from acute disseminated encephalomyelitis. Background: Susceptibility-weighted imaging is a relatively new magnetic resonance imaging sequence that can identify lesions of multiple sclerosis in adults.
This study was designed to determine if susceptibility-weighted imaging is a useful discriminator between children who develop multiple sclerosis and children with monophasic acute disseminated encephalomyelitis. Methods: Eighteen children who presented with acute central nervous system demyelination and had a brain magnetic resonance imaging study including susceptibility-weighted imaging within 6 months of the first clinical attack were studied. Final diagnosis was based on international consensus definitions. Brain lesions detected on the fluid-attenuated inversion recovery sequence were assessed for abnormal signal on susceptibility-weighted imaging. The burden of susceptibility abnormalities was then analyzed for differences between the multiple sclerosis and acute disseminated encephalomyelitis groups. Results: Eight patients had a final diagnosis of acute disseminated encephalomyelitis and ten had multiple sclerosis. Twenty-two percent of fluid-attenuated inversion recovery lesions were identified on susceptibility-weighted imaging. The percentage of fluid-attenuated inversion recovery lesions identified on susceptibility-weighted imaging differed between the multiple sclerosis and acute disseminated encephalomyelitis groups (P = 0.04). The median percentage (minimum-maximum) of lesions identified on susceptibility-weighted imaging in the multiple sclerosis group was 0.22 (0-0.68) and in the acute disseminated encephalomyelitis group was 0.0 (0-0.17). Conclusion: Susceptibility-weighted imaging may be a useful technique in differentiating acute disseminated encephalomyelitis from multiple sclerosis at initial presentation. abstract_id: PUBMED:26207600 Magnetic resonance susceptibility weighted imaging in neurosurgery: current applications and future perspectives. Susceptibility weighted imaging (SWI) is a relatively new imaging technique. Its high sensitivity to hemorrhagic components and ability to depict microvasculature by means of susceptibility effects within the veins allow for the accurate detection, grading, and monitoring of brain tumors. This imaging modality can also detect changes in blood flow to monitor stroke recovery and reveal specific subtypes of vascular malformations. In addition, small punctate lesions can be demonstrated with SWI, suggesting diffuse axonal injury, and the location of these lesions can help predict neurological outcome in patients. This imaging technique is also beneficial for applications in functional neurosurgery given its ability to clearly depict and differentiate deep midbrain nuclei and close submillimeter veins, both of which are necessary for presurgical planning of deep brain stimulation. By exploiting the magnetic susceptibilities of substances within the body, such as deoxyhemoglobin, calcium, and iron, SWI can clearly visualize the vasculature and hemorrhagic components even without the use of contrast agents. The high sensitivity of SWI relative to other imaging techniques in showing tumor vasculature and microhemorrhages suggests that it is an effective imaging modality that provides additional information not shown using conventional MRI. Despite SWI's clinical advantages, its implementation in MRI protocols is still far from consistent in clinical usage. To develop a deeper appreciation for SWI, the authors here review the clinical applications in 4 major fields of neurosurgery: neurooncology, vascular neurosurgery, neurotraumatology, and functional neurosurgery. 
Finally, they address the limitations of and future perspectives on SWI in neurosurgery. abstract_id: PUBMED:31681152 Characterization of Contrast-Enhancing and Non-contrast-enhancing Multiple Sclerosis Lesions Using Susceptibility-Weighted Imaging. Susceptibility-weighted magnetic resonance imaging (MRI) (SWI) offers additional information on conventional MRI contrasts. Central veins can be identified within lesions, and recently, it has been suggested that multiple sclerosis (MS) lesions with slowly expanding demyelination, so-called smoldering lesions, can be identified by a phase rim surrounding the lesion. We analyzed post-contrast SWI in regard to intrinsic lesion characteristics in a cohort of MS patients. A total of 294 MS patients were evaluated using a 3-T MRI. A comprehensive MRI protocol was used including post-contrast SWI. Lesions of at least 5 mm in size were analyzed on conventional MRI and SWI with a structured reporting scheme with a focus on SWI lesion characteristics. A total of 1,323 lesions were analyzed: 1,246/1,323 (94%) were non-enhancing and 77/1,323 (6%) were contrast-enhancing (CE) lesions. In CE lesions, the following patterns were seen: contrast enhancement was nodular in 34/77, ring-shaped enhancement was present in 33/77, and areas of peripheral enhancement were present in 10/77. In CE lesions, an association with central veins was found in 38/77 (50%). In 75/1,246 (6%) non-enhancing lesions, a central dark dot in keeping with a central vein was seen, whereas 162/1,246 (13%) showed peripheral hypointense dots/rims, 199/1,246 (16%) showed scattered hypointense dots mainly within the lesion area, and in 374/1,246 (30%), no SWI hypointensity was detected. Furthermore, 436/1,246 (35%) lesions showed isointensity to the surrounding tissue and were not visible on SWI. SWI is able to offer additional aspects of MS pathology also when used after the application of a contrast agent. Veins connected to lesions, a potentially useful marker in the differential diagnosis of MS, were seen in about 50% of enhancing lesions. Susceptibility artifacts, suggested to mark the presence of myelin-laden macrophages and smoldering inflammation, were visible in 28% of lesions as hypointense dots in and in the periphery of the lesion. Given those results, SWI may provide practical useful additional information in the evaluation of the lesion status in MS patients. abstract_id: PUBMED:33872085 Multiple sclerosis: prevalence of the 'central vein' sign in white matter lesions on gadolinium-enhanced susceptibility-weighted images. Aims: To evaluate prospectively whether an intravenous gadolinium injection could improve the detection of the central vein sign on susceptibility-weighted imaging sequences obtained with a 1.5 T magnetic resonance scanner in patients with multiple sclerosis compared to unenhanced susceptibility-weighted images. Materials And Methods: This prospective, institution review board-approved study included 19 patients affected by multiple sclerosis (six men; 13 women; mean age 40.8 years, range 20-74 years). Patients had the relapsing-remitting clinical subtype in 95% of cases, and only one (5%) patient had the primary progressive clinical subtype of multiple sclerosis. T2-weighted images, fluid-attenuated inversion recovery images, unenhanced and contrast-enhanced susceptibility-weighted images were evaluated in consensus by two neuroradiologists for the presence of the central vein sign. 
The readers were blinded to magnetic resonance imaging reports, clinical information, the presence and the localisation of focal hyperintense white matter lesions. Any discordance between readers was resolved through a joint review of the recorded images with an additional neuroradiologist. Results: A total of 317 multiple sclerosis lesions were analysed. The central vein sign had a higher prevalence detection rate on gadolinium-enhanced susceptibility-weighted images (272 of 317 lesions, 86%) compared to unenhanced susceptibility-weighted images (172 of 317 lesions, 54%). Conclusion: Gadolinium-enhanced susceptibility-weighted imaging improves the detection rate of the central vein sign in multiple sclerosis lesions. Answer: The study characterized in PUBMED:25783798 aimed to determine if magnetic resonance susceptibility-weighted imaging (SWI) at 1.5 T could increase the conspicuity of corticomedullary veins within white matter lesions of multiple sclerosis (MS) and aid in distinguishing MS plaques from leukoaraiosis. The study found that central veins were detected in MS lesions with a significantly greater frequency than in patients with dementia. In patients with MS, central veins were present in 75% of periventricular lesions and 52% of subcortical lesions. This was in contrast to patients with dementia, where central veins were seen much less frequently in subcortical lesions (14%) and their association with periventricular lesions was not significant. The results suggest that SWI increases the conspicuity of corticomedullary veins and may improve the specificity of MR findings in MS. Therefore, perivenular localization using SWI at 1.5 T can indeed improve the specificity of imaging criteria for MS plaques.
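A note on the automatic plaque classification reported above (PUBMED:36575588): the abstract describes labelling each detected plaque as active or inactive from a preset SWI signal-intensity criterion and then checking agreement with expert readings (accuracy 91%, sensitivity 76%, and roughly 80% kappa agreement with contrast-enhanced/FLAIR-based diagnoses). The authors' Bayesian implementation is not given in the abstract, so the Python sketch below is only an illustration of the general idea: a simple fixed-threshold rule stands in for their classifier, and the threshold value, the feature (mean plaque intensity), and the toy data are hypothetical, not the published method or parameters.

```python
# Illustrative sketch only: a fixed-intensity rule standing in for the Bayesian
# classifier of PUBMED:36575588. Threshold, feature, and data are hypothetical.
from collections import Counter

ACTIVE_THRESHOLD = 120.0  # hypothetical SWI signal-intensity cutoff


def classify_plaque(mean_intensity: float) -> str:
    """Label a plaque 'active' (rendered orange in the paper) or 'inactive' (blue)."""
    return "active" if mean_intensity >= ACTIVE_THRESHOLD else "inactive"


def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)


def sensitivity(pred, truth, positive="active"):
    tp = sum(p == positive and t == positive for p, t in zip(pred, truth))
    fn = sum(p != positive and t == positive for p, t in zip(pred, truth))
    return tp / (tp + fn) if (tp + fn) else float("nan")


def cohens_kappa(pred, truth):
    n = len(truth)
    observed = sum(p == t for p, t in zip(pred, truth)) / n
    pred_counts, truth_counts = Counter(pred), Counter(truth)
    expected = sum(pred_counts[c] * truth_counts[c] for c in set(pred) | set(truth)) / (n * n)
    return (observed - expected) / (1 - expected) if expected != 1 else float("nan")


if __name__ == "__main__":
    # Toy data: mean SWI intensity per plaque and the expert label
    # (the reference standard in the paper was contrast-enhanced/FLAIR reading).
    intensities = [150, 90, 130, 80, 125, 70, 160, 95]
    expert = ["active", "inactive", "active", "inactive",
              "inactive", "inactive", "active", "active"]
    predicted = [classify_plaque(x) for x in intensities]
    print("accuracy   :", round(accuracy(predicted, expert), 2))
    print("sensitivity:", round(sensitivity(predicted, expert), 2))
    print("kappa      :", round(cohens_kappa(predicted, expert), 2))
```

With real data, the per-plaque intensities would come from segmented SWI regions and the reference labels from the radiologists' contrast-enhanced and FLAIR readings; the agreement statistics are computed exactly as above.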
Instruction: Do preoperative pulmonary function indices predict morbidity after coronary artery bypass surgery? Abstracts: abstract_id: PUBMED:26139731 Do preoperative pulmonary function indices predict morbidity after coronary artery bypass surgery? Context: The reported prevalence of chronic obstructive pulmonary disease (COPD) varies among different groups of cardiac surgical patients. Moreover, the prognostic value of preoperative COPD in outcome prediction is controversial. Aims: The present study assessed morbidity at different levels of COPD severity and the role of pulmonary function indices in predicting morbidity in patients undergoing coronary artery bypass graft (CABG). Settings And Design: Patients who were candidates for isolated CABG with cardiopulmonary bypass and were recruited for the Tehran Heart Center-Coronary Outcome Measurement Study. Methods: Based on spirometry findings, COPD was diagnosed according to the Global Initiative for Chronic Obstructive Lung Disease criterion of forced expiratory volume in 1 s [FEV1]/forced vital capacity <0.7 (absolute value, not the percentage of the predicted). The Society of Thoracic Surgeons (STS) definition was used to grade COPD severity, and the patients were divided into a control group (FEV1 >75% predicted) and three severity groups: mild (FEV1 60-75% predicted), moderate (FEV1 50-59% predicted), and severe (FEV1 <50% predicted). The preoperative pulmonary function indices were assessed as predictors, and postoperative morbidity was considered the surgical outcome. Results: This study included 566 consecutive patients. Patients with and without COPD were similar regarding baseline characteristics and clinical data. Hypertension, recent myocardial infarction, and low ejection fraction were more frequent in patients with any degree of COPD than in the control group, while male gender was more frequent among control patients. Restrictive lung disease and current cigarette smoking did not have any significant impact on postoperative complications. We found a borderline difference (P = 0.057) in respiratory failure across the COPD severity groups: 14.1% of patients in the control group, 23.5% in the mild, 23.4% in the moderate, and 21.9% in the severe COPD categories developed respiratory failure after CABG surgery. Conclusion: Among post-CABG complications, patients with any level of COPD by the STS definition more frequently developed respiratory failure. This finding may imply a prognostic value of preoperative pulmonary function testing for determining COPD severity and anticipating postoperative morbidity. abstract_id: PUBMED:30243869 Pulmonary-Systemic Pressure Ratio Correlates with Morbidity in Cardiac Valve Surgery. Objectives: Pulmonary hypertension portends worse outcomes in cardiac valve surgery; however, isolated pulmonary artery pressures may not reflect patients' global cardiac function accurately. To better account for the interventricular relationship, the authors hypothesized that greater pulmonary-systemic ratios (mean pulmonary arterial pressure/mean systemic arterial pressure) would correlate with worse outcomes after valve surgery. Design: Retrospective cohort study. Setting: Single academic hospital. Participants: The study comprised 314 patients undergoing valve surgery with or without coronary artery bypass grafting (2004-2016) with Society of Thoracic Surgeons predicted risk scores and preoperative right heart catheterization. Interventions: None.
Measurements And Main Results: The pulmonary-systemic ratio was calculated as follows: mean pulmonary arterial pressure/mean systemic arterial pressure. Patients were stratified by pulmonary-systemic ratio quartile. Logistic regression was used to assess the risk-adjusted associations of the pulmonary-systemic ratio and the mean pulmonary arterial pressure with outcomes. Median pulmonary-systemic ratio was 0.33 (Q1-Q3: 0.23-0.65); median pulmonary arterial pressure was 29 (21-30) mmHg. Patients with the highest pulmonary-systemic ratio had the highest rates of morbidity and mortality (p < 0.0001). A high pulmonary-systemic ratio was associated with longer duration in the intensive care unit (p < 0.0001) and hospital (p < 0.0001). After risk adjustment, pulmonary-systemic ratio and pulmonary arterial pressure were independently associated with morbidity and mortality, but the pulmonary-systemic ratio (odds ratio 23.88, p = 0.008, Wald 7.1) was more strongly associated than the pulmonary arterial pressure (odds ratio 1.035, p = 0.011, Wald 6.5). Conclusions: The pulmonary-systemic ratio is more strongly associated with risk-adjusted morbidity and mortality in valve surgery than pulmonary arterial pressure. By integrating ventricular interactions, this metric may better characterize the risk of valve surgery. abstract_id: PUBMED:9238826 Preoperative pulmonary function tests do not predict outcome after coronary artery bypass. Purpose: To evaluate the utility of preoperative pulmonary function tests in predicting postoperative complications and lengths of stay after coronary artery bypass grafting. Methods: Medical records of 193 consecutive patients who underwent coronary artery bypass grafting from October 1993 to September 1994 were reviewed. Preoperative pulmonary function tests, comorbid conditions, smoking history, postoperative complications, and total days in the intensive care unit, hospital, and on mechanical ventilation were abstracted. Data were analyzed using linear regressions, analyses of variance, and nonpaired Student's t tests. Results: Pulmonary function tests were normal in 56 subjects (29%, group 1), mildly impaired in 72 (37%, group 2), and moderately impaired in 35 (18%, group 3). Thirty patients (16%) had no pulmonary function tests. Group 3 subjects were older (71) compared to groups 1 and 2 (63 and 65, P < 0.05). There was no major difference in comorbid conditions or smoking status among the groups. All patients had atelectasis postoperatively. The most frequent postoperative complications were pleural effusions (43%), pulmonary edema or congestive heart failure (28%), and atrial fibrillation (35%). The repeat surgery rate was 3.6%. The mean length of hospital stay was 10.1 +/- 0.6 days, with 1.5 +/- 0.1 days of mechanical ventilation and 2.8 +/- 0.2 days of intensive care unit stay. Overall, pulmonary function tests had no predictive value for postoperative pulmonary and nonpulmonary complications, nor for durations of mechanical ventilation and intensive care unit stay. There was a trend toward increased length of hospital stay in patients with impaired pulmonary function tests (group 1: 8.6 +/- 0.6, group 2: 9.6 +/- 0.8, group 3: 12.7 +/- 2.3 days, P = 0.09), but this was consistent with random variation. Conclusions: Preoperative pulmonary function tests were not useful in predicting postoperative outcomes in patients undergoing coronary artery bypass grafting. abstract_id: PUBMED:2139550 Indications for pulmonary function testing.
Study Objective: To critically assess original studies evaluating the role of preoperative pulmonary function testing in predicting postoperative outcomes. Design: MEDLINE search of English-language articles from 1966 to 1987 using the following medical subject headings: respiratory function tests, lung, lung diseases, and preoperative care. Measurements And Main Results: Relevant studies were subdivided by operative site. We included only studies for which we could determine pre- and post-test probabilities of morbidity, mortality, sensitivity, and specificity. Preoperative pulmonary function testing was found to have measurable benefit in predicting outcome in lung resection candidates. In selected patients, split perfusion lung scanning and pulmonary exercise testing appeared to be useful. Confirmation of these reports is necessary before these preoperative tests can be routinely recommended. In studies of upper abdominal surgery, spirometry and arterial blood gas analysis did not consistently have measurable benefit in identifying patients at increased risk for postoperative pneumonia, prolonged hospitalization, and death. Studies of preoperative testing for other patients, including those having coronary artery bypass grafting, lacked adequate data for meaningful analysis. Conclusions: Preoperative pulmonary function testing helps clinicians to make decisions on management of lung resection candidates. Although many studies of patients before abdominal surgery have focused on the utility of preoperative pulmonary function testing, methodologic difficulties undermine the validity of their conclusions. The impact of testing on care of other preoperative patients is even less clear because of poor study design and insufficient data. Therefore, further investigation is necessary before a consensus can be reached on the role of preoperative pulmonary function testing in evaluating patients before all surgical procedures except lung resection. abstract_id: PUBMED:23239840 Incremental value of the preoperative echocardiogram to predict mortality and major morbidity in coronary artery bypass surgery. Background: Although echocardiography is commonly performed before coronary artery bypass surgery, there has yet to be a study examining the incremental prognostic value of a complete echocardiogram. Methods And Results: Patients undergoing isolated coronary artery bypass surgery at 2 hospitals were divided into derivation and validation cohorts. A panel of quantitative echocardiographic parameters was measured. Clinical variables were extracted from the Society of Thoracic Surgeons database. The primary outcome was in-hospital mortality or major morbidity, and the secondary outcome was long-term all-cause mortality. The derivation cohort consisted of 667 patients with a mean age of 67.2±11.1 years and 22.8% females. The following echocardiographic parameters were found to be optimal predictors of mortality or major morbidity: severe diastolic dysfunction, as evidenced by restrictive filling (odds ratio, 2.96; 95% confidence interval, 1.59-5.49), right ventricular dysfunction, as evidenced by fractional area change <35% (odds ratio, 3.03; 95% confidence interval, 1.28-7.20), or myocardial performance index >0.40 (odds ratio, 1.89; 95% confidence interval, 1.13-3.15). These results were confirmed in the validation cohort of 187 patients.
When added to the Society of Thoracic Surgeons risk score, the echocardiographic parameters resulted in a net improvement in model discrimination and reclassification with a change in c-statistic from 0.68 to 0.73 and an integrated discrimination improvement of 5.9% (95% confidence interval, 2.8%-8.9%). In the Cox proportional hazards model, right ventricular dysfunction and pulmonary hypertension were independently predictive of mortality over 3.2 years of follow-up. Conclusions: Preoperative echocardiography, in particular right ventricular dysfunction and restrictive left ventricular filling, provides incremental prognostic value in identifying patients at higher risk of mortality or major morbidity after coronary artery bypass surgery. abstract_id: PUBMED:11888755 Performance of three preoperative risk indices; CABDEAL, EuroSCORE and Cleveland models in a prospective coronary bypass database. Objectives: The aim of the present study was to evaluate the performance of three different preoperative risk models in the prediction of postoperative morbidity and mortality in coronary artery bypass (CAB) surgery. Methods: Data on 1132 consecutive CAB patients were prospectively collected, including preoperative risk factors and postoperative morbidity and in-hospital mortality. The preoperative risk models CABDEAL, EuroSCORE and Cleveland model were used to predict morbidity and mortality. A C statistic (receiver operating characteristic (ROC) curve) was used to test the discrimination of these models. Results: The area under the ROC curve for morbidity was 0.772 for the CABDEAL, 0.694 for the EuroSCORE and 0.686 for the Cleveland model. Major morbidity due to postoperative complications occurred in 268 patients (23.6%). The mortality rate was 3.4% (n=38 patients). The ROC curve areas for prediction of mortality were 0.711 for the CABDEAL, 0.826 for the EuroSCORE and 0.858 for the Cleveland model. Conclusions: The CABDEAL model was initially developed for the prediction of major morbidity. Thus, it is not surprising that this model evinced the highest predictive value for increased morbidity in this database. Both the Cleveland and the EuroSCORE models were better predictive of mortality. These results have implications for the selection of risk indices for different purposes. The simple additive CABDEAL model can be used as a hand-held model for preoperative estimation of patients' risk of postoperative morbidity, while the EuroSCORE and Cleveland models are to be preferred for the prediction of mortality in a large patient sample. abstract_id: PUBMED:26680309 Preoperative Renal Function Predicts Hospital Costs and Length of Stay in Coronary Artery Bypass Grafting. Background: Renal failure remains a major source of morbidity after cardiac surgery. Whereas the relationship between poor renal function and worse cardiac surgical outcomes is well established, the ability to predict the impact of preoperative renal insufficiency on hospital costs and health care resource utilization remains unknown. Methods: Patient records from a statewide The Society for Thoracic Surgeons (STS) database linked with estimated cost data were evaluated for isolated coronary artery bypass graft (CABG) operations (2000 to 2012). Patients with documented preoperative renal failure/dialysis were excluded. Preoperative renal function was determined using calculated creatinine clearance (CrCl). 
Multivariable regression analyses utilizing restricted cubic splines evaluated the continuous relationship between CrCl and risk-adjusted outcomes. Results: A total of 46,577 isolated CABG operations were evaluated with a median STS predicted risk of mortality score of 1.2% (interquartile range, 0.7% to 2.4%), including 9% off-pump CABG. Median CrCl was 85 mL/min (range, 2 to 120 mL/min), and median total cost was $25,011. After adjustment for preoperative risk factors, worsening CrCl (declining renal function) was highly associated with greater total costs of hospitalization (coefficient = -122, p < 0.001) and postoperative length of stay (coefficient = -0.03, p < 0.001). Furthermore, predicted total costs were incrementally increased by 10%, 20%, and 30% with worsening of CrCl from 80 mL/min to 60, 40, and 20 mL/min. As expected, decreasing CrCl was also associated with an increased risk-adjusted likelihood for hemodialysis and mortality (both p < 0.001). Conclusions: Preoperative renal function is highly associated with the cost of CABG. Assessment of renal function may be used to preoperatively predict cost and resource utilization. Optimizing renal function preoperatively has the potential to improve patient quality and costs by approximately 6% ($1,250) for every 10 mL/min improvement in creatinine clearance. abstract_id: PUBMED:12379410 Preoperative prediction of early mortality and morbidity in coronary bypass surgery. Objective: To develop a scoring system to predict early mortality and morbidity in CABG that distinguishes low- and high-risk patients. Methods: 563 patients (1998) served as the development dataset and 969 patients as the validation set. Univariate and logistic regression analysis was used to identify risk factors. Results: Gender, hypertension, pulmonary disease, reoperation, age, operative status and left-ventricular function were predictive variables for early mortality. The area under the ROC curve was 0.81. We identified a low-risk group (mortality 1.8%) and a high-risk group (mortality 13.4%). Diabetes, hypertension, kidney and lung disease, reoperation, operative status and left ventricular function were predictive variables for morbidity. The area under the ROC curve was 0.73. We identified a low-risk group (morbidity 17%) and a high-risk group (morbidity 41%). Conclusion: This scoring system is a simple system that identifies low- and high-risk groups for morbidity and early mortality. abstract_id: PUBMED:8651777 Preoperative prediction of postoperative morbidity in coronary artery bypass grafting. Background: The risk factors of patients selected for coronary artery bypass grafting have increased in recent years because of the aging population. Prediction of postoperative complications is essential for optimal use of the available resources. The aim of this study was to develop a scoring method for prediction of postoperative morbidity of individual patients undergoing bypass grafting. Methods: Data from 386 consecutive patients who underwent coronary artery bypass grafting in a single center were retrospectively collected. The relationship between the preoperative risk factors and the postoperative morbidity was analyzed by the Bayesian approach. Three risk indices (15-factor and seven-factor computed and seven-factor manual models) were developed for the prediction of morbidity. The criterion for morbidity was a prolonged hospital stay postoperatively (> 12 days) because of adverse events.
Results: The best predictive preoperative factors for increased morbidity were emergency operation, diabetes, rhythm other than sinus rhythm on the electrocardiogram or recent myocardial infarction, low ejection fraction (< 0.49), age greater than 70 years, decreased renal function, chronic pulmonary disease, cerebrovascular disease, and obesity. The sensitivity of the scoring methods ranged from 51% to 72% and the specificity, from 77% to 86%. Conclusions: The results show that individual patients can be stratified according to postoperative risk for complications on the basis of preoperative information that is available for most patients. abstract_id: PUBMED:21524298 Longitudinal evaluation the pulmonary function of the pre and postoperative periods in the coronary artery bypass graft surgery of patients treated with a physiotherapy protocol. Background: The treatment of coronary artery disease (CAD) seeks to reduce or prevent its complications and decrease morbidity and mortality. For certain subgroups of patients, coronary artery bypass graft surgery (CABG) may accomplish these goals. The objective of this study was to assess the pulmonary function in the CABG postoperative period of patients treated with a physiotherapy protocol. Methods: Forty-two volunteers with an average age of 63 ± 2 years were included and separated into three groups: healthy volunteers (n = 09), patients with CAD (n = 9) and patients who underwent CABG (n = 20). Patients from the CABG group received preoperative and postoperative evaluations on days 3, 6, 15 and 30. Patients from the CAD group had evaluations on days 1 and 30 of the study, and the healthy volunteers were evaluated on day 1. Pulmonary function was evaluated by measuring forced vital capacity (FVC), maximum expiratory pressure (MEP) and Maximum inspiratory pressure (MIP). Results: After CABG, there was a significant decrease in pulmonary function (p < 0.05), which was the worst on postoperative day 3 and returned to the preoperative baseline on postoperative day 30. Conclusion: Pulmonary function decreased after CABG. Pulmonary function was the worst on postoperative day 3 and began to improve on postoperative day 15. Pulmonary function returned to the preoperative baseline on postoperative day 30. Answer: The relationship between preoperative pulmonary function indices and morbidity after coronary artery bypass surgery (CABG) is not entirely clear-cut, with studies showing varying results. One study found that patients with different levels of chronic obstructive pulmonary disease (COPD), as determined by preoperative pulmonary function tests, more frequently developed respiratory failure after CABG, suggesting a potential prognostic value of these tests for determining COPD severity and postoperative morbidities (PUBMED:26139731). However, another study concluded that preoperative pulmonary function tests were not useful in predicting postoperative outcomes in patients undergoing CABG, as they had no predictive value for postoperative pulmonary and nonpulmonary complications, nor for durations of mechanical ventilation and intensive care unit stay (PUBMED:9238826). 
A study evaluating the role of preoperative pulmonary function testing in predicting postoperative outcomes found that such testing had measurable benefit in predicting outcomes in lung resection candidates but did not consistently have measurable benefit in identifying patients at increased risk for postoperative pneumonia, prolonged hospitalization, and death in upper abdominal surgery. For coronary artery bypass grafting, the study indicated that there was insufficient data for meaningful analysis (PUBMED:2139550). Another study highlighted the incremental value of preoperative echocardiography, particularly right ventricular dysfunction and restrictive left ventricular filling, in identifying patients at higher risk of mortality or major morbidity after CABG (PUBMED:23239840). This suggests that while pulmonary function indices may provide some information, other preoperative assessments like echocardiography might offer additional prognostic value. Furthermore, a study on preoperative renal function found that it was highly associated with the cost of CABG and could be used to preoperatively predict cost and resource utilization, indicating that factors beyond pulmonary function may also be significant predictors of postoperative outcomes (PUBMED:26680309). In summary, while some evidence suggests that preoperative pulmonary function indices may have a role in predicting morbidity after CABG, the predictive value is not consistently supported across studies, and other preoperative assessments may also be important in determining patient risk.
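A technical aside on the discrimination metrics cited above: the echocardiography study (PUBMED:23239840) quantifies incremental prognostic value through the change in c-statistic and the integrated discrimination improvement (IDI) when new predictors are added to the Society of Thoracic Surgeons score. The sketch below shows, under the assumption of hypothetical outcome labels and predicted risks (not data from the study), how these two quantities are typically computed:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = death/major morbidity, 0 = no event
y = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])

# Predicted risks from a baseline model (e.g., clinical score alone) and an
# extended model (score plus additional echocardiographic parameters)
risk_base = np.array([0.10, 0.15, 0.30, 0.20, 0.35, 0.40, 0.12, 0.18, 0.28, 0.22])
risk_ext = np.array([0.08, 0.12, 0.45, 0.15, 0.50, 0.55, 0.10, 0.14, 0.42, 0.20])

# Change in c-statistic (area under the ROC curve) from adding the predictors
delta_c = roc_auc_score(y, risk_ext) - roc_auc_score(y, risk_base)

# Integrated discrimination improvement: gain in mean predicted risk among
# events minus the change in mean predicted risk among non-events
events, nonevents = y == 1, y == 0
idi = ((risk_ext[events].mean() - risk_base[events].mean())
       - (risk_ext[nonevents].mean() - risk_base[nonevents].mean()))

print(f"delta c-statistic = {delta_c:.3f}, IDI = {idi:.3f}")
```

In the study itself the c-statistic moved from 0.68 to 0.73 and the IDI was 5.9%; the confidence interval around an IDI is usually obtained by bootstrapping, which is omitted from this sketch.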
Instruction: Immunization requirements for childcare programs. Are they enough? Abstracts: abstract_id: PUBMED:15261904 Immunization requirements for childcare programs. Are they enough? Background: School immunization legislation has resulted in high vaccination coverage rates and low rates of vaccine-preventable disease among school children. Similar legislation has been directed toward children in licensed and regulated childcare programs. The purpose of this investigation was to compare immunization coverage among children in and not in childcare. Methods: For 18 months during 2001 and 2002, the National Immunization Survey (NIS), a random-digit-dialing telephone survey, collected information on children aged 19 through 35 months, including data on enrollment in childcare. Data were analyzed retrospectively to determine coverage at 24 months and at the time of the survey. Children were considered up-to-date if they had received all recommended immunizations for their age. Results: Of the eligible NIS respondents, about 41% had a child in childcare at the time of or before the survey. Retrospective analysis of children at 24 months showed no significant differences in coverage between those in and not in childcare (73.1% vs 71.9%). Likewise, analysis of coverage at the time of the survey revealed no significant differences (76.4% vs 72.6%). Conclusions: Immunization legislation and regulations have been successful in increasing coverage rates in the school population. Similar legislation for childcare facilities appears not to have been as effective. Given these findings, it seems that new strategies are needed to increase coverage in preschool children. abstract_id: PUBMED:37989270 Childcare Subsidy Employment and Copayment Requirements and Child Maltreatment. Economic support programs for low-income families may play an important role in preventing child abuse and neglect. In the United States, childcare subsidies are provided to low-income families who meet certain requirements to offset the high cost of childcare. States have flexibility in setting many policies related to the provision of childcare subsidies, which results in a great deal of variation in how the programs operate between states. One policy dimension on which states vary is the number of employment hours required to receive childcare subsidies. A small body of work has begun to investigate the ways in which these state policy variations might relate to child maltreatment. Using 11 years of administrative data from the United States, the current study sought to estimate the relationship between two sources of variation in childcare subsidy policies: employment requirements and copayment size; and child neglect, physical abuse, and emotional abuse substantiations. The study found a nuanced relationship between required employment and neglect substantiations. Specifically, requiring some level of work was not associated with neglect substantiations, but requiring 30 hours of employment was associated with higher rates. The study did not find a relationship between copayment size and maltreatment substantiations. abstract_id: PUBMED:35385115 Association Between State Hepatitis A Vaccination Requirements and Hepatitis A Vaccination Rates. Using National Immunization Survey Child and Teen (2008-2017), we associated state vaccination requirements with hepatitis A (Hep A) vaccination rates in children and adolescents. 
States with school entry or both childcare and school entry requirements were associated with 35%-40% higher Hep A vaccination rates, compared with states without such requirements. abstract_id: PUBMED:7841568 Immunization requirements for pharmacy students. Objective: To identify current immunization requirements for pharmacy students throughout the US. Design: Self-administered questionnaire. Setting: Seventy-five colleges and schools of pharmacy in the US. Main Outcome Measures: Immunization policies, immunologic requirements, timing of vaccination in relation to the beginning of clerkship experience, payment, mechanism to revise policies. Data Analysis: Descriptive statistics. Results: Overall, 57 programs (81 percent) have an immunization program in place, but 13 programs (19 percent) have no immunization program. More than 50 percent of the colleges or schools reported requiring that pharmacy students have measles, mumps, rubella, tetanus, and purified protein derivative of tuberculin (PPD) vaccinations upon entry of clerkship. Only 25 college or schools of pharmacy (44 percent) required students to have the hepatitis B vaccine and 8 (14 percent) to have a PPD evaluation upon completion of clerkship experience. Responsibility for the immunization program was shared evenly between the clerkship coordinator and the student health clinic. Approximately 65 percent of programs maintain an immunization record on file for each student. Completion of immunizations was required in 36 schools (64 percent) before entering clerkship activities, 15 (26 percent) before entrance to the professional program, and 3 (5 percent) in the first year of the program. Six schools (11 percent) had a program in place for less than one year, 27 (47 percent) between one and five years, and 24 (42 percent) for more than five years. At the majority of schools, students are responsible for the cost of immunization. Conclusions: Most schools of pharmacy do not adhere to the specific immunization recommendations described by the Centers for Disease Control and Prevention for healthcare workers. Pharmacy schools need to reexamine their immunization policies and update them to reflect the most current standards. We suggest a policy for immunization of pharmacy students. abstract_id: PUBMED:23449124 Structures, roles, and procedures of state advisory committees on immunization. Context: Advisory committees have the potential to play a critical role in decision making and implementation at the state level. Many states have advisory committees for their immunization programs to assist in decision making on topics such as implementing new vaccines in their states, school and childcare requirements and exemptions and addressing concerns about vaccine safety. Objective: This article describes how immunization advisory committees work; their roles, formation, organization, and structure; membership; the issues they address; and their benefit to state immunization programs. Design: In 2011, the Association of State and Territorial Health Officials, in collaboration with the Centers for Disease Control and Prevention, conducted an online survey of immunization program managers to determine which states have immunization advisory committees, how these committees function, and the perceived benefits of the committees to state immunization programs. Follow-up half-hour telephone interviews were conducted with 5 states to gain in-depth information on specific advisory committees. 
Results: One hundred percent of states and 3 territories responded, giving an overall response rate of 91%. Thirty-four of the 53 respondents (64%) reported having an advisory committee for immunization issues. Membership is composed of physicians, public health representatives, and nurses as well as public advocates and members of the public. States reported a variety of issues their committee has worked on; the most frequently mentioned issue was school and childcare vaccination requirements. Others included immunization information systems and vaccination of health care personnel. Conclusions: Overall, states with immunization advisory committees reported that the committees were helpful on issues faced by the program and worth the time and monetary commitment. Given the reported benefits of state immunization advisory committees and the complex program and policy decisions that states face in the dynamic immunization environment, additional states may want to consider establishing immunization advisory committees. abstract_id: PUBMED:35587182 The State of our Breastfeeding Friendly Childcare Programs: Ten Years After the 2011 Surgeon General's Call to Action to Support Breastfeeding. Background: Ten years ago, the U.S. Surgeon General's Call to Action to Support Breastfeeding made recommendations for childcare settings, including: (1) accommodating and supporting breastfeeding families; and (2) adopting national guidelines on breastfeeding support in childcare settings. Research Aims: To (1) describe the existing breastfeeding friendly childcare designation programs in the United States; and (2) describe how states are accommodating breastfeeding families in childcare settings. Method: The study design was cross-sectional, prospective thematic description of existing publicly available documents. A search of state breastfeeding coalitions was conducted to assess the number of states with breastfeeding friendly childcare designation programs. A definitive yes-or-no answer regarding whether each state had a program was obtained from all 50 states. For states with programs, designation materials were analyzed using thematic analysis and the framework method to compare designation components. Results: Fifteen states had evidence of breastfeeding friendly childcare designation programs and similarities exist across designation program components. Four standards were common to all 15 programs: written policy on breastfeeding, suitable space within the center where mothers can breastfeed or express their milk, educational materials, and resources on breastfeeding available to parents. Most states required self-assessment to achieve designation status. Conclusion: Research is needed to enable evidence-based programs and decision-making regarding components and processes. Federal funding should support these programs' mission, including funding research to assess how and in what circumstances these programs are improving breastfeeding-related outcomes and supporting breastfeeding families. abstract_id: PUBMED:14700707 Impact of vaccine shortages on immunization programs and providers. Background: During 2001 and the first half of 2002, the United States experienced severe shortages of five of the eight universally recommended vaccines for children. 
Objectives: To evaluate the impact of shortages of diphtheria-tetanus-acellular pertussis vaccine (DTaP), pneumococcal conjugate vaccine (PCV7), and tetanus and diphtheria vaccine (Td) on state and urban area immunization programs and immunization providers between September 2001 and January 2002. Methods: (1) Survey of state and urban area immunization program managers. Outcome measures included changes in vaccine distribution and suspension of daycare/Head Start and school entry immunization requirements for Td, DTaP, and PCV7. (2) Interviews with Vaccines for Children Program immunization providers scheduled to receive a routine site visit between January 21 and February 1, 2002. Outcome measures included problems experienced with vaccine orders, implementation of Advisory Committee on Immunization Practices (ACIP) interim recommendations for DTaP and PCV7, and length of time with no DTaP or PCV7 vaccines in stock. Results: Over 85% of immunization programs changed the way they distributed PCV7, DTaP, and Td vaccines to providers, including limiting the amount of vaccine ordered or distributing partial orders. Additionally, 76% of programs experienced problems purchasing or receiving varicella vaccine. Sixty-eight percent of programs suspended school entry requirements for Td. Immunization providers reported problems with orders of Td (56%), PCV7 (45%), DTaP (30%), and varicella (29%). Approximately 16% and 29% of providers implemented the interim ACIP recommendations for DTaP and PCV7, respectively. However, 21% of providers suspended administration of all doses of PCV7 because they ran out of vaccine before learning of the shortage. Conclusions: From suspension of school entry requirements to delaying administration of vaccine, the recent vaccine shortages affected immunization programs' and providers' ability to administer vaccines in a timely manner. abstract_id: PUBMED:16895152 Linking practices with community programs to improve immunization rates. Pediatricians should be knowledgeable about programs available in their community and support efforts to collaborate with other providers, public health departments and immunization coalitions in their community. Through immunization coalitions and widespread use of an immunization registry, together with participating private providers, case management and home visitation programs can improve immunization rates among the highest risk children. abstract_id: PUBMED:9737839 Are state immunization programs effective? Implications for the children's immunization initiative. Several states operate universal vaccine purchase (UVP) programs aimed at ensuring adequate immunization of children. Some of these programs have been in operation for many decades. Recently, there has been a great deal of interest in implementing a federal UVP program. It is not clear, however, that such programs can significantly increase immunization levels; many nonfinancial barriers to full immunization exist and would have to be addressed. This paper uses cross-sectional data at the state level to estimate the effect of state UVP programs on the immunization levels of preschool children. The results indicate that states with UVP programs do not have significantly higher immunization rates than do other states. Therefore, it is not likely that a federal UVP program would significantly affect immunization rates.
abstract_id: PUBMED:31007370 The effects of the transition from home-based childcare to childcare centers on children's health and development in Colombia. Colombia's national early childhood strategy launched in 2011 aimed at improving the quality of childcare services offered to socio-economically vulnerable children, and included the possibility that children and their childcare providers could transfer from non-parental family daycare units to large childcare centers in urban areas. This study seeks to understand whether the offer to transfer and the actual transfer from one program to the other had an impact on child cognitive and socioemotional development, and nutrition, using a cluster-randomized control trial with a sample of 2767 children between the ages of 6 and 60 months located in 14 cities in Colombia. The results indicate a negative effect of this initiative on cognitive development, a positive effect on nutrition, and no statistically significant effect of the intervention on socioemotional development. We also explored the extent to which these impacts might be explained by differences in the quality of both services during the transition, and report that quality indicators are low in both programs but are significantly worse in centers compared to community nurseries. Answer: The investigation into immunization requirements for childcare programs reveals mixed findings regarding their effectiveness. On one hand, school immunization legislation has been successful in achieving high vaccination coverage rates among school children, but similar legislation for childcare facilities does not appear to have been as effective. A study analyzing data from the National Immunization Survey (NIS) found no significant differences in immunization coverage between children in childcare and those not in childcare, suggesting that new strategies may be needed to increase coverage in preschool children (PUBMED:15261904). In contrast, another study found that states with vaccination requirements for both childcare and school entry were associated with 35%-40% higher hepatitis A vaccination rates compared to states without such requirements, indicating that state vaccination requirements can have a positive impact on vaccination rates (PUBMED:35385115). The role of state advisory committees on immunization is also highlighted, with many states having such committees to assist in decision-making on topics including school and childcare vaccination requirements. These committees are generally perceived as beneficial to state immunization programs (PUBMED:23449124). However, vaccine shortages have had a significant impact on the ability of immunization programs and providers to administer vaccines in a timely manner, with over 85% of immunization programs changing vaccine distribution methods and 68% suspending school entry requirements for certain vaccines during shortages (PUBMED:14700707). Overall, while immunization requirements for childcare programs have had some success, the evidence suggests that they may not be sufficient on their own to ensure high coverage rates. Additional strategies, such as improving the quality of childcare services, supporting breastfeeding families in childcare settings (PUBMED:35587182), and addressing nonfinancial barriers to full immunization (PUBMED:9737839), may be necessary to enhance the effectiveness of these programs.
Instruction: Splenic preserving distal pancreatectomy: does vessel preservation matter? Abstracts: abstract_id: PUBMED:29124843 Splenic vessel preservation versus splenic vessel resection in laparoscopic spleen-preserving distal pancreatectomy. Background: Laparoscopic spleen-preserving distal pancreatectomy for low-grade malignant pancreas tumours was recently demonstrated and can be performed with splenic vessel preservation (SVP) or splenic vessel resection (SVR). Whether one approach is superior to another is still a matter of debate. Methods: A systematic literature search (PubMed, Embase, Science Citation Index, Springer-Link and Cochrane Central Register of Controlled Trials) was performed. Pooled intra- and post-operative outcomes were evaluated. Stratified and sensitivity analyses were performed to explore heterogeneity between studies and to assess the effects of the study qualities. Results: A total of six studies were included. There was no significant difference for SVR and SVP in terms of overall post-operative complications and the pooled odds ratio (OR) was 0.87 (95% confidence interval (CI) 0.55-1.38, I2 = 25%). Meta-analysis on the pooled outcome of intraoperative operative time and blood loss favoured SVR; the mean differences were 18.64 min (95% CI 6.91-30.37 min, I2 = 21%) and 65.67 mL (95% CI 18.88-112.45 mL, I2 = 48%), respectively. Subgroup analysis showed a decrease incidences in perigastric varices (OR = 0.07, 95% CI 0.03-0.18, I2 = 29%) and splenic infarction (OR = 0.16, 95% CI 0.08-0.32, I2 = 0%) in SVP. Conclusion: For selected patients who underwent laparoscopic spleen-preserving distal pancreatectomy, an increased preference for the SVP technique should be suggested considering its short-term benefits. However, in case of large tumours that distort and compress vessel course, SVR could be applied with acceptable splenic ischaemia and perigastric varices. abstract_id: PUBMED:21463805 Splenic preserving distal pancreatectomy: does vessel preservation matter? Background: Splenic preserving distal pancreatectomy (SPDP) can be accomplished with splenic artery and vein preservation or ligation. However, no data are available on the relative merits of these techniques. The aim of this analysis was to compare the outcomes of splenic preserving distal pancreatectomy with and without splenic vessel preservation. Study Design: From 2002 through 2009, 434 patients underwent distal pancreatectomy and 86 (20%) had splenic preservation. Vessel preservation (VP) was accomplished in 45 and ligation (VL) was performed in 41. These patients were similar with respect to age, American Society of Anesthesiologists class, pathology, surgeons, and minimally invasive approach (79%). For comparison, a matched group of 86 patients undergoing distal pancreatectomy with splenectomy (DP+S) was analyzed. Results: The VP-SPDP procedure was associated with less blood loss than VL-SPDP or DP+S (224 vs 508 vs 646 mL, respectively; p < 0.05). The VP-SPDP procedure also resulted in fewer grade B or C pancreatic fistulas (2% vs 12% vs 14%; p = NS) and splenic infarctions (5% vs 39%; p < 0.01), less overall morbidity (18% vs 39% vs 38%, respectively; p < 0.05) and need for drainage procedure (2% vs 15% vs 16%; p < 0.05), and shorter post-operative length of stay (4.5 vs 6.2 vs 6.6 days; p < 0.05). Conclusions: This analysis suggests that outcomes are (1) best for VP-SPDP and (2) VL-SPDP provides no short-term advantage over distal pancreatectomy with splenectomy. 
We conclude that splenic VP is preferred when SPDP is performed. abstract_id: PUBMED:28138594 Study on laparoscopic spleen preserving distal pancreatectomy procedures comparing splenic vessel preservation and non-preservation. Background: The purpose of this study is to investigate whether two types of laparoscopic spleen-preserving distal pancreatectomy (Lap-SPDP) techniques are being implemented safely. The study compares the clinical outcomes from laparoscopic Warshaw operation (Lap-W) with those from laparoscopic splenic vessels preserving SPDP (Lap-SPDP-VP) and considers the role of those operations. Methods: In August 2013, the Warshaw technique was introduced to our institution, and 17 patients with a lesion in the distal pancreas who underwent Lap-SPDP by December 2015 were enrolled. Six patients who underwent a Lap-W and 11 patients who underwent a Lap-SPDP-VP were investigated retrospectively. Results: In the Lap-W and Lap-SPDP-VP patients, the sizes of the tumors were 46.5±31.2 and 25.7±14.9 mm (P=0.0913); the operative times were 287 min (range, 225-369 min) and 280 min (range, 200-496 min); the blood loss was 95 mL (range, 50-200 mL) and 60 mL (range, 0-650 mL); the length of the postoperative hospital stay was 12 days (range, 8-43 days) and 11 days (range, 7-28 days); median follow-up was 19 months (range, 13-28 months) and 23 months (range, 6-28 months), respectively. There was no case of symptomatic spleen infarction in either group. However, partial infarctions of the spleen without symptoms were observed by computed tomography in three out of six cases (50%) in the Lap-W. No patient required reoperation and the postoperative mortality was zero in both groups. All patients were alive and recurrence-free at the end of the follow-up period. Collateral veins around the spleen developed in 83.3% (five out of six patients) in the Lap-W and in 12.5% (one out of eight patients) in the Lap-SPDP-VP. A significant difference was observed between groups (P=0.0256). Gastric varices developed in 33.3% (two out of six patients) in the Lap-W. However, no case of rupture of varices, or other late phase complications, was observed in either group. Conclusions: Both the Lap-W and Lap-SPDP-VP were found to be safe and effective, and in cases in which the detachment of the splenic vessels from the tumor or the pancreatic parenchyma is difficult, performing Lap-W, rather than Lap-SPDP-VP, is considered appropriate. While Lap-SPDP is recommended for patients with benign or low grade malignant diseases, long-term follow-up to monitor hemodynamic changes in splenogastric circulation is considered necessary. abstract_id: PUBMED:26770307 The efficacy of spleen-preserving distal pancreatectomy with or without splenic vessel preservation: a meta-analysis. Background: Spleen-preserving distal pancreatectomy can be performed with splenic vessel preservation (SPDP-SVP) or splenic vessel resection (SPDP-SVR). This meta-analysis aimed to evaluate the clinical outcomes of patients undergoing SPDP-SVP or SPDP-SVR. Method: A systematic literature search of PubMed, Embase, and the Cochrane Library was performed. The operative time, estimated blood loss, postoperative complications, pancreatic fistula (Grade B+C) rates, splenic infarction rates, gastric/perigastric varices rates and postoperative hospital stay were evaluated. RevMan 5.3 software was used to evaluate the differences between groups.
Results: Nine studies involving 639 patients were included in this meta-analysis, of whom 402 underwent SPDP-SVP and 237 underwent SPDP-SVR. Patients who underwent SPDP-SVP had lower splenic infarction and gastric/perigastric varices rates. The operative time, estimated blood loss, postoperative complications, pancreatic fistula (Grade B+C) rates and postoperative hospital stays were comparable between these two groups. Conclusions: SPDP-SVP and SPDP-SVR are both safe, feasible procedures for the management of benign or low-grade malignant pancreatic body or tail tumors. However, SPDP-SVR is related to higher incidence rates of early splenic ischemia and gastric/perigastric varices. abstract_id: PUBMED:26721691 Spleen and splenic vessel preserving distal pancreatectomy for bifocal PNET in a young patient with MEN1. Background: MEN1 patients requiring resection of neuroendocrine tumors (pNET) are frequently young, active patients in whom a minimal access approach minimizes perioperative morbidity and splenic preservation decreases the risk of post-splenectomy sepsis. Laparoscopic spleen preserving distal pancreatectomy can be performed with removal (Warshaw's technique) or preservation of the splenic vessels, the later having a higher rate of successful splenic preservation. Patient: This is an active, 16-year-old Jehovah's Witness with trifocal nonfunctioning neuroendocrine tumor in the proximal body and tail of the pancreas as part of MEN1 syndrome. A spleen preserving distal pancreatectomy was performed with the final pathology showing three pNET with low mitotic count and three lymph nodes free of cancer (final stage pT2pN0). Technique: This video demonstrates patient and trocar positioning as well as operative tactics for a laparoscopic distal pancreatectomy with preservation of splenic vessels. Intraoperative ultrasound is crucial in assessing pNETs' relation to critical vessels, pancreatic duct, and to exclude synchronous lesions. The video focuses on safe laparoscopic creation of the retropancreatic tunnel and dissecting the pancreas off the splenic vessels using novel energy devises to control direct splenic venous branches into the pancreas. Conclusion: Improvements in laparoscopic techniques and technology have enabled surgeons to preserve the major splenic vessels to avoid splenic infarcts, abscesses and re-operations, and minimize the risk of left-sided portal hypertension. Splenic preservation is particularly important in young MEN1 patients undergoing laparoscopic pancreatectomy for pNET due to the increased risk of overwhelming post-splenectomy sepsis. abstract_id: PUBMED:35655470 Three-Port Laparoscopic Spleen-Preserving Distal Pancreatectomy with Splenic Vessel Preservation. Laparoscopic distal pancreatectomy is now accepted treatment for benign and certain malignant pancreatic body and/or tail processes and is generally performed using four to six ports. Splenic preservation avoids inherent risks associated with the post-splenectomy state, but adds surgical complexity. In this case series, we describe our single surgeon's experience with a novel technique for safe, successful three-port laparoscopic spleen-preserving distal pancreatectomy with splenic vessel preservation. Our series supports success with our technique for a variety of benign and low-grade pancreatic neoplasms. Our results demonstrate this approach is a technically feasible and safe approach. 
As previously discussed by our group, this approach is also applicable to other procedures in the left upper quadrant. abstract_id: PUBMED:29636057 A case of complete splenic infarction after laparoscopic spleen-preserving distal pancreatectomy. Background: Laparoscopic spleen-preserving distal pancreatectomy (LSPDP), a newly developed operative procedure, is indicated for benign and low-grade malignant disease of the pancreas. However, few studies have reported on postoperative splenic infarction after LSPDP. Case Presentation: We report a case of complete splenic infarction and obliteration of the splenic artery and vein after LSPDP. The patient was a 69-year-old woman with a 35-mm cystic tumor of the pancreatic body who underwent LSPDP. Although the operation was completed with preservation of the splenic artery and vein, postoperative splenic infarction was revealed with left back pain and fluid collection around the stump of the pancreas on postoperative day 9. Fortunately, clinical symptoms disappeared within days and additional splenectomy was not needed. Splenic infarction was attributed to scattered micro-embolizations within the spleen after drawing strongly on the tape encircling the splenic vessels. Conclusion: Preserving splenic vessels in LSPDP is a demanding procedure. To prevent splenic infarction in LSPDP, we should carefully isolate the pancreatic parenchyma from the splenic vessels, and must avoid drawing tightly on the vessel loop encircling splenic vessels. abstract_id: PUBMED:37966019 Laparoscopic spleen-preserving distal pancreatectomy: A novel technique with splenic artery resection and splenic vein preservation. Introduction: Laparoscopic spleen-preserving distal pancreatectomy (LSDP) is widely performed to treat benign and low-grade malignant diseases. Although preservation of splenic vessels may be desirable considering the risk of postoperative complications, it is sometimes difficult due to tumor size, inflammation, and proximity of the tumor and splenic vessels. Herein, we present the first case of LSDP with splenic artery resection and splenic vein preservation. Materials And Surgical Technique: A 40-year-old woman with a pancreatic tumor was referred to our hospital. Contrast-enhanced computed tomography (CT) revealed a tumor in the pancreatic tail that was in contact with the splenic artery and distant from the splenic vein. The splenic artery and vein were separated from the pancreas near the dissection line. The splenic artery was resected after pancreatic dissection using a linear stapler. After the pancreatic tail was separated from the splenic hilum while preserving the splenic vein, the distal side of the splenic artery was resected, and the specimen was removed. The postoperative course was uneventful and the patient was discharged on postoperative Day 9. Four months after surgery, postoperative follow-up CT findings showed neither splenic infarction nor gastric varices. Discussion: This technique is an alternative method of splenic preservation when there is no attachment of the tumor to the splenic vein or uncontrolled expected bleeding of the splenic artery using the Kimura technique. abstract_id: PUBMED:29981673 Laparoscopic Spleen Preserving Distal Pancreatectomy with Splenic Vessels Preservation. Spleen preserving Distal Pancreatectomy (SPDP) is commonly applied in patients with benign or low-grade malignant tumors in the body and tail of the pancreas. 
Our aim is to present our technique and the early results in performing laparoscopic spleen preserving distal pancreatectomy with splenic vessel preservation (LSPDP-SVP) during 2017-2018 in Ponderas Academic Hospital. abstract_id: PUBMED:38114974 Learning curve of robotic-assisted splenic vessel-preserving spleen-preserving distal pancreatectomy by one single surgeon: a retrospective cohort study. Aim: Splenic vessel-preserving spleen-preserving distal pancreatectomy (SVP-SPDP) has a lower risk of splenic infarction than the splenic vessel-sacrificing SPDP, but it is more technically demanding. The learning curve of robotic-assisted SVP-SPDP (RSVP-SPDP) remains unreported. This study sought to analyze the perioperative outcomes and learning curve of RSVP-SPDP by one single surgeon. Methods: Seventy-four patients who were intended to receive RSVP-SPDP at the First Affiliated Hospital of Sun Yat-sen University between May 2015 and January 2023 were included. The learning curve was retrospectively analyzed using cumulative sum (CUSUM) analyses. Results: Sixty-two patients underwent RSVP-SPDP (spleen preservation rate: 83.8%). According to the CUSUM curve, the operation time (median, 318 vs. 220 min; P < 0.001) and intraoperative blood loss (median, 50 vs. 50 mL; P = 0.012) improved significantly after 16 cases. Blood transfusion rate (12.5% vs. 3.4%; P = 0.202), postoperative major morbidity rate (6.3% vs. 3.4%; P = 0.524), and postoperative length-of-stay (median, 10 vs. 8 days; P = 0.120) improved after 16 cases but did not reach statistical significance. None of the patients had splenic infarction or abscess postoperatively. Conclusion: RSVP-SPDP was a safe and feasible approach for selected patients after the learning curve. The improvement in operation time and intraoperative blood loss was achieved after 16 cases. Answer: Yes, vessel preservation does matter in splenic preserving distal pancreatectomy (SPDP). The outcomes of SPDP with splenic vessel preservation (VP) are generally better compared to those without vessel preservation (VL). Studies have shown that VP-SPDP is associated with less blood loss, fewer grade B or C pancreatic fistulas, less splenic infarction, lower overall morbidity, less need for drainage procedures, and shorter post-operative length of stay compared to VL-SPDP or distal pancreatectomy with splenectomy (DP+S) (PUBMED:21463805).
Instruction: Do critically ill surgical neonates have increased energy expenditure? Abstracts: abstract_id: PUBMED:11150439 Do critically ill surgical neonates have increased energy expenditure? Background/purpose: Adult metabolic studies suggest that critically ill patients have increased energy expenditures and thus require higher caloric allotments. To assess whether this is true in surgical neonates the authors utilized a validated, gas leak-independent, nonradioactive, isotopic technique to measure the energy expenditures of a stable postoperative group and a severely stressed cohort. Methods: Eight (3.46+/-1.0 kg), hemodynamically stable, total parenteral nutrition (TPN)-fed, nonventilated, surgical neonates (5 with gastroschisis, 2 with intestinal atresia, and 1 with intestinal volvulus) were studied on postoperative day 15.5+/-11.9. These were compared with 10 (BW = 3.20+/-0.2 kg), TPN-fed, extracorporeal life support (ECLS)-dependent neonates, studied on day of life 7.0+/- 2.8. Energy expenditure was obtained using a primed, 3-hour infusion of NaH(13)CO(3'), breath (13)CO(2) enrichment determination by isotope ratio mass spectroscopy, and the application of a standard regression equation. Interleukin (IL)-6 levels and C-reactive protein (CRP) concentrations were measured to assess metabolic stress. Comparisons between groups were made using 2 sample Student's t tests. Results: The mean energy expenditure was 53+/-5.1 kcal/kg/d (range, 45.6 to 59.8 kcal/kg/d) for the stable cohort and 55+/-20 kcal/kg/d (range, 32 to 79 kcal/kg/d) for the ECLS group (not significant, P =.83). The IL-6 and CRP levels were significantly higher in the ECLS group (29 +/-11.5 v 0.7+/-0.6 pg/mL [P<.001], and 31+/-22 v 0.6+/-1.3 mg/L [P<.001], respectively). Mortality rate was 0% for the stable postoperative patients and 30% for the ECLS group. Conclusions: Severely stressed surgical neonates, compared with controls, generally do not show increased energy expenditures as assessed by isotopic dilution methods. These data suggest that the routine administration of excess calories may not be warranted in critically ill surgical neonates and support the hypothesis that neonates obligately redirect energy, normally used for growth, to fuel the stress response. This is a US government work. There are no restrictions on its use. abstract_id: PUBMED:31451246 Systematic review of factors associated with energy expenditure in the critically ill. Background And Aims: Indirect calorimetry is the reference standard for energy expenditure measurement. Predictive formulae that replace it are inaccurate. Our aim was to review the patient and clinical factors associated with energy expenditure in critically ill patients. Methods: We conducted a systematic review of the literature. Eligible studies were those reporting an evaluation of factors and energy expenditure. Energy expenditure and factor associations with p-values were extracted from each study, and each factor was classified as either significantly, indeterminantly, or not associated with energy expenditure. Regression coefficients were summarized as measures of central tendency and spread. Metanalysis was performed on correlations. Results: The search strategy yielded 8521 unique articles, 307 underwent full text review, and 103 articles were included. Most studies were in adults. There were 95 factors with 352 evaluations. 
Minute volume, weight, age, % body surface area burn, sedation, post burn day, and caloric intake were significantly associated with energy expenditure. Heart rate, fraction of inspired oxygen, respiratory rate, respiratory disease diagnosis, positive end expiratory pressure, intensive care unit days, C-reactive protein, and size were not associated with energy expenditure. Multiple factors (n = 37) were identified with an unclear relationship with energy expenditure and require further evaluation. Conclusions: An important interval step in the development of accurate formulae for energy expenditure estimation is a better understanding of relationships between patient and clinical factors and energy expenditure. The review highlights the limitations of currently available data, and identifies important factors that are not included in current prediction formulae of the critically ill. abstract_id: PUBMED:11877636 Energy expenditure in ill premature neonates. Background/purpose: The energy needs of critically ill premature neonates undergoing surgery remain to be defined. Results of studies in adults would suggest that these neonates should have markedly increased energy expenditures. To test this hypothesis, a recently validated stable isotopic technique was used to measure accurately the resting energy expenditure (REE) of critically ill premature neonates before and after patent ductus arteriosus (PDA) ligation. Methods: Six ventilated, fully total parenteral nutrition (TPN)-fed, premature neonates (24.5±0.5 weeks' gestational age) were studied at day of life 7.5±0.7, immediately before and 16±3.7 hours after standard PDA ligation. REE was measured with a primed continuous infusion of NaH(13)CO(3), and breath samples were analyzed by isotope ratio mass spectroscopy. Serum CRP and cortisol concentrations also were obtained. Statistical analyses were made by paired sample t tests and linear regression. Results: The resting energy expenditures pre- and post-PDA ligation were 37.2±9.6 and 34.8±10.1 kcal/kg/d (not significant, P =.61). Only preoperative energy expenditure significantly (P <.01) predicted postoperative energy expenditure (R(2) = 88.0%). Pre- and postoperative determinations of CRP were 2.1±1.5 and 7.1±4.2 mg/dL (not significant, P =.34), and cortisol levels were 14.1±2.3 and 14.9±2.1 microgram/dL (not significant, P =.52). Conclusions: Thus, critically ill premature neonates do not have elevated REE, and, further, there is no increase in REE evident the first day after surgery. This suggests that routine allotments of excess calories are not necessary either pre- or postoperatively in critically ill premature neonates. Given the high interindividual variability in REE, actual measurement is prudent if protracted nutritional support is required. abstract_id: PUBMED:2493238 Errors in estimating energy expenditure in critically ill surgical patients. Thirty-one critically ill surgical patients were receiving central parenteral nutrition. All were intubated, and 29 were receiving mechanical ventilatory support. Nutritional and metabolic data were recorded at the time of indirect calorimetry. Measured energy expenditure (MEE) was compared with predictions of basal energy expenditure (BEE) and calculated energy expenditure, defined as the product of BEE and a stress factor estimated by the nutrition support service to account for severity of illness and activity.
The MEE was significantly greater than the BEE and significantly less than the calculated energy expenditure. The estimated stress factor was significantly greater than the actual MEE/BEE ratio, and the correlation between these values was poor. Clinical assessment may overestimate energy expenditure in critically ill patients because of the apparent degree of illness used to determine the stress factor. Bedside indirect calorimetry may be useful to assess more accurately energy expenditure and optimize nutritional support. abstract_id: PUBMED:33436084 Energy expenditure and indirect calorimetry in critical illness and convalescence: current evidence and practical considerations. The use of indirect calorimetry is strongly recommended to guide nutrition therapy in critically ill patients, preventing the detrimental effects of under- and overfeeding. However, the course of energy expenditure is complex, and clinical studies on indirect calorimetry during critical illness and convalescence are scarce. Energy expenditure is influenced by many individual and iatrogenic factors and different metabolic phases of critical illness and convalescence. In the first days, energy production from endogenous sources appears to be increased due to a catabolic state and is likely near-sufficient to meet energy requirements. Full nutrition support in this phase may lead to overfeeding as exogenous nutrition cannot abolish this endogenous energy production, and mitochondria are unable to process the excess substrate. However, energy expenditure is reported to increase hereafter and is still shown to be elevated 3 weeks after ICU admission, when endogenous energy production is reduced, and exogenous nutrition support is indispensable. Indirect calorimetry is the gold standard for bedside calculation of energy expenditure. However, the superiority of IC-guided nutritional therapy has not yet been unequivocally proven in clinical trials and many practical aspects and pitfalls should be taken into account when measuring energy expenditure in critically ill patients. Furthermore, the contribution of endogenously produced energy cannot be measured. Nevertheless, routine use of indirect calorimetry to aid personalized nutrition has strong potential to improve nutritional status and consequently, the long-term outcome of critically ill patients. abstract_id: PUBMED:19019296 Metabolism and nutrition in the surgical neonate. Considerable improvements have been achieved in pediatric surgery during the last two decades: the mortality rate of neonates undergoing major operations has declined to less than 10%, and the morbidity of major operations has become negligible. This considerable improvement can be partly ascribed to a better understanding of the physiological changes that occur after an operation and to more appropriate management and nutrition of the critically ill and "stressed" neonates and children. The metabolic response to an operation is different in neonates from adults: there is a small increase in oxygen consumption and resting energy expenditure immediately after surgery with return to normal by 12-24 hours. The increase in resting energy expenditure is significantly greater in infants having a major operation than in those having a minor procedure. The limited increase in energy expenditure may be due to diversion of energy from growth to tissue repair. 
During parenteral nutrition, it is not advisable to administer more than 18 g/kg/day of carbohydrate because this intake will be associated with lipogenesis, increased CO(2) production, and increased free radical-mediated lipid peroxide formation. Glutamine intake is potentially beneficial during total parenteral nutrition, although a large, randomized, controlled trial in surgical neonates requiring parenteral nutrition is needed to provide evidence for its benefit. abstract_id: PUBMED:12706137 Enteral nutrition practice in a surgical intensive care unit: what proportion of energy expenditure is delivered enterally? The object of this study was to document enteral feeding practice in critically ill patients in a surgical intensive care unit. We asked what proportion of measured energy expenditure is delivered enterally. Patient, material, and therapy-related factors should be assessed and related to enteral nutrition. Sixty patients receiving enteral nutrition for a period of at least 10 days were included in the study. Mean daily energy expenditure was 27.8±8.7 kcal/kg. Mean daily enterally delivered calories reached 19.7±10.3 kcal/kg (P<0.05). Twenty-one out of 60 (35%) patients were fed isocalorically; 46% of enteral nutrition days failed to reach 80% of energy expenditure. Ten out of 30 patients (33%) fed over a gastric tube were nourished isocalorically in comparison to 8 out of 20 patients (40%) fed over a duodenal tube. Factors associated with hypocaloric enteral feeding in multiple logistic regression were abdominal, pelvic and lumbar spine trauma, gastrointestinal intolerance, problems with the feeding tube, additional surgical interventions, airway management and use of fentanyl. In the course of the study, gastrointestinal complications were the cause of more than 50% of insufficient enteral delivery cases, while therapy- and material-related reasons contributed only a minor part. Abdominal, pelvic and lumbar spine traumas are associated with a higher likelihood of problems with enteral delivery, as shown by odds ratios greater than eight. These diagnoses amounted to nearly 40% in our investigation and represent a major difference from medical patients. Therefore, recommendations for optimising enteral feeding must take the patient population concerned into account. abstract_id: PUBMED:31267545 An Exploratory Retrospective Study of Factors Affecting Energy Expenditure in Critically Ill Children. Background: Accurate measurement of energy expenditure is not widely available. Patient and clinical factors associated with energy expenditure have been poorly explored, leading to errors in estimation formulae. The objective of this study was to determine clinical factors associated with measured energy expenditure (MEE), expressed in kcal/kg/d, in critically ill children. Methods: This was a retrospective study at 2 Canadian pediatric intensive care units (ICUs). Patients were mechanically ventilated children who had 1 or more MEE using indirect calorimetry. Associations between MEE and 28 clinical factors were evaluated in univariate regression and 16 factors in a multivariate regression model accounting for repeated measurements. Results: Data from 239 patients (279 measurements) were analyzed. Median (Q1, Q3) MEE was 34.8 (26.8, 46.2) kcal/kg/d.
MEE was significantly associated with weight, heart rate, diastolic blood pressure, ICU day of indirect calorimetry (P = 0.004), minute ventilation, vasoactive inotropic score (P = 0.004), opioids, chloral hydrate, dexmedetomidine, inhaled salbutamol (P = 0.02), and propofol dose (all P < 0.0001 unless otherwise specified) in the final multivariate regression model. Conclusions: This study demonstrated associations between MEE (kcal/kg/d) and factors not previously explored in pediatric critical illness. Further evaluation of these factors to confirm associations and more precisely quantify the magnitude of effect is required to support refinement of formulae to estimate energy expenditure. abstract_id: PUBMED:7774438 Effects of overfeeding on energy expenditure and substrate oxidation rates in surgical patients. In this study, computerized indirect calorimetric measurements were made using a Medical Graphics critical care monitor (CCM) desktop analysis system to observe the metabolic state of 20 patients with external gastrointestinal fistula. When these malnourished patients were provided with a total energy intake of 1.5 x REE, the malnutrition could be reversed. With a total energy intake of 1.75 or 2.0 x REE or more, however, the general nutritional state did not improve faster; O2 consumption, CO2 production, and energy expenditure increased, while net glucose oxidation increased and net lipid oxidation decreased or net lipogenesis occurred simultaneously. We believe that superfluous energy intake is harmful to critically ill patients and may lead to cell injury and dysfunction. abstract_id: PUBMED:32866253 Methods for Estimating Energy Expenditure in Critically Ill Adults. Energy expenditure (EE) is the sum of metabolic activity within the body at a given time and comprises basal EE, diet-induced thermogenesis, and physical activity. In the intensive care unit, EE is most often assessed to determine a patient's caloric requirements. Energy expenditure also may be useful to understand disease states and the metabolic impact of interventions. Several methods for estimating EE are relevant for clinical use, including indirect calorimetry, predictive equations, exhaled carbon dioxide volume, and the Fick method. Indirect calorimetry is the preferred method for evaluating EE and is considered the gold standard for estimating EE in hospitalized patients. However, use of indirect calorimetry is not always practical or possible. Therefore, other methods of estimating EE must be considered. In this review, methods of evaluating EE in critically ill adults are examined and the benefits and limitations of each method are discussed, with practical considerations for use.
Furthermore, a systematic review of factors associated with energy expenditure in critically ill patients identified that certain factors such as minute volume, weight, age, and caloric intake were significantly associated with energy expenditure, but many other factors were not, indicating the complexity of predicting energy needs in this population (PUBMED:31451246). Errors in estimating energy expenditure in critically ill surgical patients have been noted, with clinical assessments often overestimating energy needs due to the apparent degree of illness used to determine the stress factor (PUBMED:2493238). Overall, the evidence suggests that critically ill surgical neonates do not have a universally increased energy expenditure and that individual measurements and assessments are prudent to avoid overfeeding and to tailor nutritional support appropriately (PUBMED:33436084, PUBMED:19019296).
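As a practical illustration of the measured-versus-calculated comparison that runs through these abstracts (PUBMED:2493238, PUBMED:33436084, PUBMED:32866253): indirect calorimetry converts measured gas exchange into energy expenditure, which is then set against a predicted basal value multiplied by a clinician-assigned stress factor. The sketch below uses the abbreviated Weir equation and the classic Harris-Benedict equation; the gas-exchange values, patient characteristics, and stress factor are invented for illustration and are not taken from any of the studies:

```python
def weir_ree(vo2_l_min: float, vco2_l_min: float) -> float:
    """Abbreviated Weir equation: resting energy expenditure (kcal/day) from
    oxygen consumption and CO2 production (both in L/min) measured by
    indirect calorimetry."""
    return (3.941 * vo2_l_min + 1.106 * vco2_l_min) * 1440


def harris_benedict_bee(weight_kg: float, height_cm: float, age_y: float, male: bool) -> float:
    """Classic Harris-Benedict estimate of basal energy expenditure (kcal/day)."""
    if male:
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_y
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_y


# Invented example: a ventilated adult with VO2 = 0.25 L/min and VCO2 = 0.20 L/min
mee = weir_ree(0.25, 0.20)                          # measured energy expenditure
bee = harris_benedict_bee(70, 175, 60, male=True)   # predicted basal expenditure
stress_factor = 1.3                                 # clinician-assigned severity/activity factor
calculated_ee = bee * stress_factor                 # "calculated" EE as in PUBMED:2493238

print(f"MEE = {mee:.0f} kcal/d, BEE = {bee:.0f} kcal/d, "
      f"calculated EE = {calculated_ee:.0f} kcal/d, MEE/BEE = {mee / bee:.2f}")
```

The pattern reported in PUBMED:2493238 — MEE greater than BEE but less than BEE multiplied by the estimated stress factor — corresponds to an MEE/BEE ratio above 1 but below the clinician-assigned factor.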
Instruction: Is the pediatric ureter as efficient as the adult ureter in transporting fragments following extracorporeal shock wave lithotripsy for renal calculi larger than 10 mm.? Abstracts: abstract_id: PUBMED:11586249 Is the pediatric ureter as efficient as the adult ureter in transporting fragments following extracorporeal shock wave lithotripsy for renal calculi larger than 10 mm.? Purpose: We determined whether the thin ureter of the young child transports stone fragments after extracorporeal shockwave lithotripsy (ESWL) as efficiently as the adult ureter does. This determination was done by comparing the outcome after lithotripsy of renal stones greater than 10 mm. between young children and adults. Materials And Methods: Our study group consisted of 38 children 6 months to 6 years old (median 3 years) with renal stones greater than 10 mm. in diameter. This group was further divided into 3 subgroups according to the longest stone diameter on plain abdominal film: 21 children had a renal stone diameter of 10 to 15 mm. (subgroup 1), 8 had a diameter of 16 to 20 mm. (subgroup 2) and 9 had a diameter greater than 20 mm. (subgroup 3). The control group consisted of 38 adults older than 20 years randomly selected from the local ESWL registry. Each adult was matched with a child regarding stone diameter and localization. The control group was similarly divided into subgroups 1a, 2a and 3a. ESWL was performed with the unmodified Dornier HM-3 lithotriptor (Dornier Medical Systems, Inc., Marietta, Georgia). The stone-free rate, complication rate, need for tubes (stent or nephrostomy) and need for more than 1 ESWL session were compared. Results: The stone-free rate was 95% in the study group and 78.9% in the control group (p = 0.086). Stone-free rates were 95%, 100% and 89% in subgroups 1, 2 and 3, and 95%, 65% and 56% in subgroups 1a, 2a and 3a, respectively. Ten children and 4 adults underwent more than 1 ESWL session (p = 0.14), and 10 children and 6 adults required a tube before ESWL (p = 0.04), almost all of them in subgroups 3 and 3a. Early complications were rare in both the study and control groups. Late complications included 2 cases of Steinstrasse in the control group and none in the study group. Conclusions: The stone-free rate after ESWL for large renal stones is higher in young children compared to adults with matching stone size. Renal stones greater than 20 mm. often require more than 1 ESWL session. The pediatric ureter is at least as efficient as the adult for transporting stone fragments after ESWL. abstract_id: PUBMED:14661393 Extracorporeal shock wave lithotripsy as surgical therapy for renal and ureteral stone disease Introduction: The advent of extracorporeal shock wave lithotripsy (ESWL) represented one of the major advances in medicine of all time. Materials And Methods: From January 1984 to March 1999, 7,508 patients underwent extracorporeal shock wave lithotripsy (ESWL) for renal, ureteral and bladder stones, with a total of 13,032 treatments. A total of 8,546 stones were treated: 6,329 kidney stones, 2,165 ureteral stones and 52 bladder stones.
Seven different lithotripters have been used: the radiologic-based Dornier HM3 with electrohydraulic shock wave production, the radiologic-based modified Dornier HM3 with electrohydraulic shock wave production, the ultrasonography-based Dornier MPL 9000 with electrohydraulic shock wave production, the ultrasonography-based EDAP LT 01 with piezoelectric shock wave production, the ultrasonography- and radiologic-based EDAP LT 02 with piezoelectric shock wave production, and the ultrasonography-based Piezolith 2200 and Piezolith 2300, both with piezoelectric shock wave production. In 1,451 patients an auxiliary procedure was necessary. Results: A single treatment was sufficient in 5,337 patients (77.7%), two in 1,497, three in 507, four in 248, five in 109, six in 55, seven in 34, eight in 9, nine in 12, ten in 5, eleven in one patient, twelve in one patient, thirteen in one patient, fourteen in two patients and fifteen in one patient. Patients who had a negative abdominal radiograph or ultrasound after two months were considered "stone-free". Uroliths smaller than 3 mm were considered fragments with the possibility of spontaneous expulsion, and were not regarded as significant if left behind. Overall, the results were: 5,950 patients "stone-free", 753 patients with fragments, 257 patients with non-broken calculi and 547 patients with dust only. Steinstrasse (with spontaneous or instrumental resolution), hyperpyrexia, severe colic, symptomatic renal haematomas and intolerance to the treatment (nausea and vomiting) were considered complications. Conclusions: Nowadays, except for cases requiring hospital admission and urgent treatment because of imminent colic, ESWL may be performed in a day-hospital setting. Since its first years of use, ESWL has resolved almost all cases of urolithiasis. Nevertheless, experience has shown how aggressive and inappropriate an excessive number of re-treatments can be. ESWL is best reserved for cases that can be resolved with two, or at most three, treatments. abstract_id: PUBMED:23904918 Extracorporeal Shock-wave Lithotripsy Success Rate and Complications: Initial Experience at Sultan Qaboos University Hospital. Objective: To assess the efficacy and safety of extracorporeal shock wave lithotripsy with the Siemens Modularis Vario in the management of patients with renal and ureteral stones. Methods: Between 2007 and 2009, 225 outpatients were treated with the Siemens Modularis Vario lithotripter at Sultan Qaboos University Hospital. Stone size, location, total number of shockwaves, stone-free rate, complications and adjunctive interventions were investigated. Chi-square and logistic regression analyses were used, with p<0.05 set as the level of significance. Results: Of the 225 initial consecutive patients who underwent extracorporeal shock wave lithotripsy, 192 (85%) had renal stones and 33 (15%) had ureteric stones. The mean±SD stone size was 11.3±4.5 mm, while the mean age of the patients was 39.9±12.8 years with 68.5% males. The mean renal stone size was 11.6±4.7 mm, and a mean of 1.3 sessions was required. The mean ureteric stone size was 9.9±3 mm, and a mean of 1.3 sessions was required. Treatment success (defined as complete clearance of ureteric stones, stone-free or clinically insignificant residual fragments of <4 mm for renal stones) was 74% for renal stones and 88% for ureteric stones.
Additional extracorporeal shock wave lithotripsy and ureteroscopy were the most common adjunctive procedures used for stone clearance. Complications occurred in 74 patients (38.5%) with renal stones and 13 patients (39.4%) with ureteric stones. The most common complication was loin pain (experienced by 16.7% with renal stones and 21% with ureteric stones). Severe renal colic mandating admission occurred in 2% of patients with renal stones and 6% of patients with ureteric stones. In patients with renal stones, steinstrasse occurred in 3.6% and infection post extracorporeal shock wave lithotripsy in 0.5%. Using multivariate logistic regression analysis, the factors found to have a significant effect on complete stone clearance were serum creatinine (p=0.004) and the number of shockwaves (p=0.021). Conclusion: The Siemens Modularis Vario lithotripter is a safe and effective tool for treating renal and ureteric stones. abstract_id: PUBMED:3177118 Clinical study of extracorporeal shock wave lithotripsy Extracorporeal shock wave lithotripsy was performed on 81 patients with urolithiasis (35 patients with ureteral stones, 25 patients with renal stones less than 2 cm in diameter, and 21 patients with renal stones more than 2 cm in diameter) at Kanbara Hospital from August to October, 1986. A 4 Fr catheter was placed transurethrally in the ureter up to the stone to identify the stone position and the degree of stone fragmentation. In four patients with staghorn calculi, a double-J catheter was placed in the ureter to prevent stone street formation. More than 50% of the patients with renal stones less than 2 cm in maximum diameter and those with ureteral stones had satisfactorily excreted fragments or sand of crushed stones no later than 2 weeks after the operation. However, in patients with renal stones more than 2 cm in maximum diameter, it took much more time to discharge the crushed stones compared with the foregoing two groups, and some patients needed further management to remove the remnants. Combined treatment, ureteral catheterization or endoscopic operation with ESWL, is recommended for treatment of renal stones larger than 2 cm in diameter. abstract_id: PUBMED:15977596 Utility of ureteral stent for stone street after extracorporeal shock wave lithotripsy We retrospectively reviewed the records of 530 patients with urinary stones (renal stones: 243; ureter stones: 287) who received extracorporeal shock wave lithotripsy (ESWL) (MFL5000; Dornier) from January 1995 to July 2002, and determined whether a ureteral stent affected the incidence of stone street (SS). We also assessed the effect of the ureteral stent on the subsequent management of SS. Forty patients (7.5%) developed SS. Twenty patients had a ureteral stent inserted prior to ESWL (stent group), and 20 patients underwent ESWL without a ureteral stent (in situ group). In the stent group, the most common (80.0%) location for SS was the upper third of the ureter, while in the in situ group, SS mostly developed in the distal third of the ureter (60.0%). The incidence of SS did not differ significantly between the two groups when the size of renal and ureter stones was below 30 and 20 mm, respectively. When the renal stones were larger than 30 mm, the incidence of SS in the stent group was significantly higher than that in the in situ group. SS disappeared spontaneously with stone passage in 10 of the patients in the in situ group, but in only 1 patient in the stent group.
In the stent group, 15 patients were treated for SS by removal of the ureteral stent, regardless of stone diameter. We conclude that ESWL should be performed without a ureteral stent when the stone diameter is below 20 mm. When the ureteral stent is thought to interfere with the delivery of stone fragments, the decision to remove it should be made as soon as possible. abstract_id: PUBMED:31070130 Treatment of uretero-renal stones with shock wave lithotripsy. Results and complications with the Dornier Gemini EMSE 220f-XXP. Objectives: Extracorporeal shock wave lithotripsy is a minimally invasive therapeutic option for the treatment of renal-ureteral lithiasis. The aim of this study was to analyze the results and complications of extracorporeal shock wave lithotripsy treatment with the Dornier Gemini® Generator EMSE 220f-XXP device in patients with renal and ureteral lithiasis. Material And Methods: Retrospective study including 377 patients with renal or ureteral lithiasis with an indication for treatment with extracorporeal shock wave lithotripsy. The following variables were analyzed: age, sex, body mass index, lithiasis size, lithiasis location, presence of urinary diversion, number of lithotripsy sessions, number of shock waves, fluoroscopy time, wave energy, applied focal energy coefficient, efficiency coefficient, lithiasis fragmentation, lithiasis clearance, residual lithiasis, presence of lithiasis and complications. The results were analyzed with SPSS 17.0, with statistical significance set at p≤0.05. Results: Of the 377 patients, 213 were men and 164 women, with a mean age of 51.28 ± 12.77 years. The mean maximum stone diameter was 11.77 ± 6.13 mm. Lithiasis fragmentation occurred in 81.9% of cases, with residual lithiasis after the first session in 58.7% and a total or partial expulsion rate of lithiasis fragments of 68.3%, giving a global success rate at the end of the lithotripsy sessions of 69.8%. The overall efficiency ratio was 0.42, higher in the upper calyx (0.51) and lower in the middle calyx (0.35), with significant differences (p<0.05). The only difference found was in the success of lithotripsy treatment according to lithiasis size (75% for stones ≤10 mm maximum diameter versus 64.6% for stones >10 mm, p=0.02). In patients with a DJ catheter there was a higher percentage of residual lithiasis (p=0.006). Conclusions: Treatment with extracorporeal lithotripsy of small stones in well-selected patients achieves good results with a low rate of complications, regardless of sex and body mass index. abstract_id: PUBMED:31387108 Particularities and Efficacy of Extracorporeal Shock Wave Lithotripsy in Children. Background: Extracorporeal shock wave lithotripsy (ESWL) was first introduced in the paediatric population in 1986. Given that stone recurrence is more frequent in children than in adults, urinary stone treatment in children should rely on minimally invasive methods. In this study, we aimed to evaluate the profile of the young patient with lithiasis who can benefit from ESWL, analysing the experience of 2 clinical departments. Materials And Methods: We retrospectively reviewed the medical records of 54 children who underwent ESWL for urolithiasis. ESWL success was defined as stone-free status or the presence of clinically insignificant residual fragments. Data were analysed using STATA 14.2. Results: In our study, the incidence of renal-ureteral calculi was significantly higher in girls (68.5%) than in boys (31.5%).
In total, 83.3% of patients showed a favourable outcome after treatment and the remaining 16.7% showed minimal complications. The presence of complications and remaining calculi was correlated with the children's age. The overall stone-free rate was 88.9%. For a calculus of 8.5 mm, only one ESWL session is recommended. Conclusions: The high percentage of cases with a favourable outcome indicates that ESWL treatment is effective, considering the minimal cost, minimal invasiveness, repeatability and no need for general anaesthesia. abstract_id: PUBMED:3357946 Pseudoureteroceles following extracorporeal shock wave lithotripsy. Routine follow-up urography demonstrated pseudoureteroceles caused by impacted calculus fragments in the distal portion of the ureter in five patients who had undergone extracorporeal shock wave lithotripsy of renal calculi 4-6 weeks before. Although all five patients showed some degree of obstruction at the time of follow-up urography, four were asymptomatic. Either small isolated fragments (two patients) or a column (steinstrasse) of fragments (three patients) was responsible for the abnormality. Two patients required further intervention to relieve the obstruction. abstract_id: PUBMED:36160067 Massive perinephric hematoma - A rare complication of extracorporeal shock wave lithotripsy. Extracorporeal shock wave lithotripsy is commonly used to fragment renal calculi. This treatment modality is considered safe with minimal complications. We report a case of massive retroperitoneal hematoma following extracorporeal shock wave lithotripsy in a patient with no comorbidity and normal coagulation parameters. The patient was managed conservatively and had an excellent recovery with normal renal function. abstract_id: PUBMED:3275796 The role of silicone ureteral stents in extracorporeal shock wave lithotripsy of large renal calculi. Between November 1984 and December 1985 extracorporeal shock wave lithotripsy was used to treat 1,645 kidneys at our institution. A total of 646 kidneys with stone burden greater than 14 mm. was evaluated with regard to the impact of silicone ureteral stents on post-extracorporeal shock wave lithotripsy morbidity. Our results indicate that small stones were pulverized and eliminated with minimum morbidity. Larger stones frequently were associated with post-treatment ureteral obstruction by sand and fragments. Of 283 kidneys with stone burden exceeding 25 mm., pretreatment placement of silicone ureteral stents reduced complications from 26 to 7 per cent and auxiliary procedure rates from 15 to 6 per cent. Silicone ureteral stents protect the kidney from ureteral obstruction, and allow for safe and effective extracorporeal shock wave lithotripsy of large renal calculi. Answer: Yes, the pediatric ureter is at least as efficient as the adult ureter in transporting fragments following extracorporeal shock wave lithotripsy (ESWL) for renal calculi larger than 10 mm. A study comparing the outcome after lithotripsy of renal stones greater than 10 mm between young children and adults found that the stone-free rate was higher in young children compared to adults with matching stone size. Specifically, the stone-free rates were 95%, 100%, and 89% in children's subgroups and 95%, 65%, and 56% in adults' subgroups, respectively. This suggests that the pediatric ureter is capable of efficiently transporting stone fragments after ESWL (PUBMED:11586249).
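A note on the comparison just cited: the stone-free figures from PUBMED:11586249 (95% of 38 children versus 78.9% of 38 adults, p = 0.086) correspond to a simple 2 x 2 table, and a comparable p-value can be obtained with an exact test. The sketch below assumes Fisher's exact test and counts of 36/38 versus 30/38 derived from the reported percentages; the original authors do not state which test they used, so this is an illustration rather than a reproduction of their analysis.

# Hedged sketch: comparing stone-free rates between matched groups.
# Counts are derived from reported percentages; the choice of Fisher's
# exact test is an assumption, not necessarily the authors' method.
from scipy.stats import fisher_exact

children = (36, 2)   # (stone-free, residual stones) out of 38
adults = (30, 8)     # out of 38
odds_ratio, p_value = fisher_exact([children, adults], alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")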
Instruction: Do immigrants have an increased prevalence of unhealthy behaviours and risk factors for coronary heart disease? Abstracts: abstract_id: PUBMED:16319542 Do immigrants have an increased prevalence of unhealthy behaviours and risk factors for coronary heart disease? Background: Although previous research has demonstrated a high risk of coronary disease in immigrants, the prevalence of unhealthy behaviours and risk factors is less known. The aim of this study was to investigate whether unhealthy behaviours and risk factors for coronary disease are more common in immigrants than in Swedish-born individuals. Methods: Between 1 January 1996 and 31 December 2002 a simple random sample of the population was drawn and interviewed face to face. Eight immigrant groups in Sweden and a Swedish-born reference group, aged between 27 and 60 years, were studied. A log-binomial model was used to analyse the cross-sectional association between country of birth and unhealthy behaviours as well as coronary disease risk factors. Results: Many of the immigrant groups showed higher risks of smoking, of physical inactivity and of obesity than Swedish-born individuals in age-adjusted models. On also adjusting for the level of education, occupational status and social network, the differences in risk persisted in the majority of groups. However, the excess risks of physical inactivity in Finnish and south European immigrant men and of diabetes in Finnish and Turkish immigrant women disappeared. Conclusions: The high prevalence of unhealthy behaviours and risk factors for coronary disease in many immigrant groups might be a lifestyle remnant from their country of birth or might be brought about by a stressful migration and acculturation into a new social and cultural environment. Nevertheless, it is important in primary healthcare to be aware of a possible preventable increased risk of unhealthy behaviours and risk factors for coronary disease in some immigrants. abstract_id: PUBMED:26350421 Cardiovascular diseases and risk factors among Chinese immigrants. The aim of this study is to identify the prevalence of cardiovascular disease (CVD) and major CVD risk factors, including diabetes, hypertension, dyslipidemia, obesity and smoking among Chinese immigrants by a systematic review of studies from various countries. PubMed and the China National Knowledge Infrastructure databases were searched for studies of the prevalence of major CVDs and risk factors, and of CVD mortality among Chinese immigrants. The search identified 386 papers, 16 of which met the inclusion criteria for this review. In mainland China, there is a pattern of high stroke prevalence but low coronary heart disease (CHD) prevalence. Among Chinese immigrants, there is a much lower prevalence and mortality of stroke, but a higher prevalence and mortality of CHD, even though these are lower than the rates in immigrants of other ethnicities in the host country. The prevalence of CVD risk factors is also markedly different in immigrants. Compared with mainland Chinese, Chinese immigrants have a higher prevalence of diabetes and hypertension, higher serum cholesterol, poorer dietary patterns, and a higher prevalence of obesity and smoking. Thus, the epidemiological pattern of CVD among Chinese immigrants changes compared with resident mainland Chinese. The less healthy environment after immigration may be a major trigger of the adverse CVD status of Chinese immigrants.
It is important for policy-makers to pay more attention to specific minority immigrant groups, and to implement more effective preventive measures to improve the health of immigrant populations. abstract_id: PUBMED:34444370 A Longitudinal Assessment of Risk Factors and Chronic Diseases among Immigrant and Non-Immigrant Adults in Australia. This study aimed to investigate the prevalence and trajectories of chronic diseases and risk behaviors in immigrants from high-income countries (HIC) and low- and middle-income countries (LMIC) compared with Australian-born people. Data were used from five waves of the HABITAT (2007-2016) study, covering 11,035 adults living in Brisbane, Australia. Chronic diseases included cancer, diabetes mellitus, coronary heart disease, and chronic obstructive pulmonary disease (COPD). Risk factors assessed were body mass index (BMI), insufficient physical activity, and cigarette smoking. Diabetes mellitus increased in all groups, with the highest increase of 33% in LMIC immigrants. The prevalence of cancers increased 19.6% in the Australian-born, 16.6% in HIC immigrants, and 5.1% in LMIC immigrants. The prevalence of asthma increased in HIC immigrants while it decreased in the other two groups. Poisson regression showed that LMIC immigrants had 1.12 times higher rates of insufficient physical activity, 0.75 times lower rates of smoking, and 0.77 times lower rates of being overweight than the Australian-born population. HIC immigrants had 0.96 times lower rates of insufficient physical activity and 0.93 times lower rates of overweight than the Australian-born. The findings of this study can inform better strategies to reduce health disparities by targeting high-risk cohorts. abstract_id: PUBMED:37273875 Trends in unhealthy lifestyle factors in US NHANES respondents with cardiovascular disease for the period between 1999 and 2018. Objectives: To examine national trends in unhealthy lifestyle factors among adults with cardiovascular disease (CVD) in the United States (US) between 1999 and 2018. Methods: We analyzed data from the National Health and Nutrition Examination Survey (NHANES), a nationally representative survey of participants with CVD who were aged ≥20 years, conducted between 1999-2000 and 2017-2018. CVD was defined as a self-report of congestive heart failure, coronary heart disease, angina, heart attack, or stroke. The prevalence rate of each unhealthy lifestyle factor was calculated among adults with CVD for each of the 2-year cycle surveys. Regression analyses were used to assess the impact of sociodemographic characteristics (age, sex, race/ethnicity, family income, education level, marital status, and employment status). Results: The final sample included 5610 NHANES respondents with CVD. The prevalence rate of current smoking remained stable among respondents with CVD between 1999-2000 and 2017-2018. During the same period, there was a decreasing trend in the age-adjusted prevalence rate of poor diet [primary American Heart Association (AHA) score <20; 47.5% (37.9%-57.0%) to 37.5% (25.7%-49.3%), p < 0.01]. Physical inactivity marginally increased before decreasing, with no statistical significance. The prevalence rate of sedentary behavior increased from 2007 to 2014 but subsequently returned to its original level in 2018 with no statistical significance. The age-adjusted prevalence rate of obesity increased from 32% (27.2%-36.8%) in 1999-2000 to 47.9% (39.9%-55.8%) in 2017-2018 (p < 0.001).
The age-adjusted prevalence rate of depression increased from 7% (4.2%-9.9%) in 1999-2000 to 13.9% (10.2%-17.6%) in 2017-2018 (p = 0.056). Trends in mean for each unhealthy lifestyle factor were similar after adjustment for age. We found that respondents who had low education and income levels were at a higher risk of being exposed to unhealthy lifestyle factors (i.e., smoking, poor diet, and physical inactivity) than those who had high education and income levels. Conclusions: There is a significant reduction in the prevalence rate of poor diet among US adults with CVD between 1999 and 2018, while the prevalence rate of obesity showed increasing trends over this period. The prevalence rate of current smoking status, sedentary behavior, and depression was either stable or showed an insignificant increase. These findings suggest that there is an urgent need for health policy interventions targeting unhealthy lifestyles among adults with CVD. abstract_id: PUBMED:30819763 Cardiovascular risk factors and disease among non-European immigrants living in Catalonia. Objective: To describe the prevalence and incidence of cardiovascular risk factors, established cardiovascular disease (CVD) and cardiovascular medication use, among immigrant individuals of diverse national origins living in Catalonia (Spain), a region receiving large groups of immigrants from all around the world, and with universal access to healthcare. Methods: We conducted a population-based analysis including >6 million adult individuals living in Catalonia, using the local administrative healthcare databases. Immigrants were classified in 6 World Bank geographic areas: Latin America/Caribbean, North Africa/Middle East, sub-Saharan Africa, East Asia and South Asia. Prevalence calculations were set as of 31 December 2017. Results: Immigrant groups were younger than the local population; despite this, the prevalence of CVD risk factors and of established CVD was very high in some immigrant subgroups compared with local individuals. South Asians had the highest prevalence of diabetes, and of hyperlipidemia among adults aged <55 years; hypertension was highly prevalent among sub-Saharan Africans, and obesity was most common among women of African and South Asian ancestry. In this context, South Asians had the highest prevalence of coronary heart disease across all groups, and of heart failure among women. Heart failure was also highly prevalent in African women. Conclusions: The high prevalence of risk factors and established CVD among South Asians and sub-Saharan Africans stresses the need for tailored, aggressive health promotion interventions. These are likely to be beneficial in Catalonia, and in countries receiving similar migratory fluxes, as well as in their countries of origin. abstract_id: PUBMED:22774303 Coronary angiographic findings and conventional coronary artery disease risk factors of Indo-Guyanese immigrants with stable angina pectoris and acute coronary syndromes. Background: The prevalence of coronary artery disease (CAD) among migrant Indian populations exceeds that of Caucasians. Migrant Indians also suffer from more premature, clinically aggressive and angiographically extensive, (i.e., 3-vessel disease). It is not known whether the extent of angiographic CAD or the conventional CAD risk factors of Indo-Guyanese (IG) immigrants differs from that of Caucasians. 
Methods: We reviewed the conventional CAD risk factors and angiographic findings of 198 IG and 191 Caucasians who were consecutively referred for cardiac catheterization with a diagnosis of stable angina pectoris or acute coronary syndrome. Results: Three-vessel CAD was approximately 1.5 times more common among IG than Caucasians (34.8% vs. 24.0%; P = .02). Age (P = .01), male sex (P = .03) and diabetes mellitus (P = .05) were independently associated with an increased likelihood of 3-vessel CAD and there was a trend towards IG ethnicity predicting 3-vessel disease (P = .13). The frequency of diabetes mellitus (51.5% vs. 30.9%; P <.001), hypertension (82.3% vs. 67.0%; P < .001) and dyslipidemia (75.5% vs. 60.2%; P = .001) were significantly greater among IG, however, that of smoking was not. While IG were significantly leaner than Caucasians (27.7 kg/m2 vs. 30.0 kg/m2 ; P < .001), their mean body mass index fell within the ethnic-specific range for obesity. Conclusions: We conclude that IG immigrants presenting for coronary angiography have significantly higher rates of 3-vessel CAD as well as higher rates of diabetes mellitus, hypertension and dyslipidemia than Caucasians. Aggressive screening, prevention and treatment may be warranted in this cohort. abstract_id: PUBMED:19183743 Excess coronary artery disease risk in South Asian immigrants: can dysfunctional high-density lipoprotein explain increased risk? Background: Coronary artery disease (CAD) is the leading cause of mortality and morbidity in the United States (US), and South Asian immigrants (SAIs) have a higher risk of CAD compared to Caucasians. Traditional risk factors may not completely explain high risk, and some of the unknown risk factors need to be explored. This short review is mainly focused on the possible role of dysfunctional high-density lipoprotein (HDL) in causing CAD and presents an overview of available literature on dysfunctional HDL. Discussion: The conventional risk factors, insulin resistance parameters, and metabolic syndrome, although important in predicting CAD risk, may not sufficiently predict risk in SAIs. HDL has antioxidant, antiinflammatory, and antithrombotic properties that contribute to its function as an antiatherogenic agent. Recent Caucasian studies have shown HDL is not only ineffective as an antioxidant but, paradoxically, appears to be prooxidant, and has been found to be associated with CAD. Several causes have been hypothesized for HDL to become dysfunctional, including Apo lipoprotein A-I (Apo A-I) polymorphisms. New risk factors and markers like dysfunctional HDL and genetic polymorphisms may be associated with CAD. Conclusions: More research is required in SAIs to explore associations with CAD and to enhance early detection and prevention of CAD in this high risk group. abstract_id: PUBMED:14569368 Coronary heart disease clinical manifestation and risk factors in Japanese immigrants and their descendents in the city of São Paulo. Objective: To assess whether a difference exists in coronary heart disease clinical manifestations and the prevalence of risk factors between Japanese immigrants and their descendents in the city of São Paulo. Methods: Retrospective analysis of coronary artery disease clinical manifestations and the prevalence of risk factors, comparing 128 Japanese immigrants (Japanese group) with 304 Japanese descendents (Nisei group). 
Results: The initial manifestation of the disease was earlier in the Nisei group (mean=53 years), a difference of 12 years when compared with that in the Japanese group (mean=65 years) (P<0.001). Myocardial infarction was the first manifestation in both groups (P=0.83). The following parameters were independently associated with early coronary events: smoking (OR=2.25; 95% CI=1.35-3.77; P<0.002); Nisei group (OR=10.22; 95% CI=5.64-18.5; P<0.001); and female sex (OR=5.04; 95% CI=2.66-9.52; P<0.001). Conclusion: The clinical presentation of coronary heart disease in the Japanese and their descendents in the city of São Paulo was similar, but coronary heart disease onset occurred approximately 12 years earlier in the Nisei group than in the Japanese group. abstract_id: PUBMED:29058074 Acute coronary syndrome in immigrants and non-immigrants: Results of an Austrian prospective pilot study. Background: There are indications that immigrant patients with acute coronary syndrome (ACS) differ in demographic characteristics and clinical presentation from non-immigrant patients. The aim of this prospective pilot study was to gather clinical and sociodemographic data from patients with ACS and to compare immigrants with non-immigrants. Methods: Included were consecutive patients who underwent acute coronary angiography in one cardiological department for ACS from September 2011 to September 2013. Information was gathered about age, sex, results of the coronary angiography, classical risk factors, socioeconomic characteristics as well as ethnicity. Patients who had their place of birth outside Austria were specified as immigrants. Results: A total of 100 patients (29% female) with a mean age of 60 years (range 34-91 years) were included. Of the patients, 35 (35%) were immigrants: 12 came from Serbia, 4 from Bosnia, 3 from South America, 2 from Germany, 2 from Turkey, 2 from the Czech Republic, 2 from Croatia, 2 from Macedonia, and 1 each from Bangladesh, Poland, Romania, Libya, Bulgaria and Pakistan. Immigrants tended to be younger on average (56 vs. 62 years, p = 0.04) and had two- or multivessel disease more often than the non-immigrants, but this difference was not significant (51% vs. 38%, p = 0.29). There were no differences between non-immigrants and immigrants concerning the classical risk factors for ACS (hypercholesterolemia 60% vs. 69%, nicotine abuse 51% vs. 60%, hypertension 69% vs. 79%) except diabetes mellitus (15% vs. 37%, p = 0.02). Sociodemographic data showed differences in education and socioeconomic status (SES). Non-immigrants more often had jobs with a high skill level than immigrants (30% vs. 4%, p = 0.02), although there was no difference between immigrants and non-immigrants in the level of higher education (9% each); however, immigrants more often had a low level of education (31% vs. 11%, p = 0.01) and a monthly income below 1000 € (41% vs. 14%, p = 0.03) than non-immigrants. Conclusions: Immigrants with ACS suffered more often from two- or multivessel coronary disease and diabetes mellitus and were slightly younger than non-immigrants, although they did not differ regarding classical risk factors. Results suggest that the lower SES of immigrants compared with non-immigrants might contribute to the severity of coronary heart disease. abstract_id: PUBMED:33969397 Burden of cardiovascular risk factors and disease in five Asian groups in Catalonia: a disaggregated, population-based analysis of 121 000 first-generation Asian immigrants.
Aims: To evaluate the burden of cardiovascular risk factors and disease (CVD) among five Asian groups living in Catalonia (Spain): Indian, Pakistani, Bangladeshi, Filipino, and Chinese. Methods And Results: Retrospective cohort study using the Catalan Health Surveillance System database including 42 488 Pakistanis, 40 745 Chinese, 21 705 Indians, 9544 Filipinos, and 6907 Bangladeshis; and 5.3 million native individuals ('locals'). We estimated the age-adjusted prevalence (as of 31 December 2019) and incidence (during 2019) of diabetes, hypertension, hyperlipidaemia, obesity, tobacco use, coronary heart disease (CHD), cerebrovascular disease, atrial fibrillation, and heart failure (HF). Bangladeshis had the highest prevalence of diabetes (17.4% men, 22.6% women) followed by Pakistanis. Bangladeshis also had the highest prevalence of hyperlipidaemia (23.6% men, 18.3% women), hypertension among women (24%), and incident tobacco use among men. Pakistani women had the highest prevalence of obesity (28%). For CHD, Bangladeshi men had the highest prevalence (7.3%), followed by Pakistanis (6.3%); and Pakistanis had the highest prevalence among women (3.2%). For HF, the prevalence in Pakistani and Bangladeshi women was more than twice that of locals. Indians had the lowest prevalence of diabetes across South Asians, and of CHD across South Asian men, while the prevalence of CHD among Indian women was twice that of local women (2.6% vs. 1.3%). Filipinos had the highest prevalence of hypertension among men (21.8%). Chinese men and women had the lowest prevalence of risk factors and CVD. Conclusions: In Catalonia, preventive interventions adapted to the risk profile of different Asian immigrant groups are needed, particularly for Bangladeshis and Pakistanis. Answer: Yes, immigrants do have an increased prevalence of unhealthy behaviors and risk factors for coronary heart disease (CHD). Studies have shown that many immigrant groups exhibit higher risks of smoking, physical inactivity, and obesity compared to Swedish-born individuals, even after adjusting for education, occupational status, and social network (PUBMED:16319542). Chinese immigrants, for example, have been found to have a higher prevalence of diabetes, hypertension, dyslipidemia, obesity, and smoking compared to mainland Chinese, suggesting that the epidemiological pattern of cardiovascular disease (CVD) changes after immigration, potentially due to less healthy environmental factors in the host country (PUBMED:26350421). In Australia, immigrants from low-middle-income countries (LMIC) have shown a higher increase in diabetes mellitus and a higher rate of insufficient physical activity compared to Australian-born people, although they have lower rates of smoking and being overweight (PUBMED:34444370). In the United States, there has been a significant increase in obesity rates among adults with CVD, while the prevalence rate of poor diet has decreased, and the prevalence rate of current smoking status, sedentary behavior, and depression has remained stable or shown an insignificant increase (PUBMED:37273875). South Asians and sub-Saharan Africans living in Catalonia have been reported to have a high prevalence of risk factors and established CVD, with South Asians having the highest prevalence of diabetes, hyperlipidemia, coronary heart disease, and heart failure among women (PUBMED:30819763). 
Indo-Guyanese immigrants have been found to have higher rates of three-vessel CAD as well as higher rates of diabetes mellitus, hypertension, and dyslipidemia compared to Caucasians (PUBMED:22774303). South Asian immigrants in the United States also have a higher risk of CAD, which may not be fully explained by traditional risk factors, suggesting the need to explore other factors such as dysfunctional high-density lipoprotein (PUBMED:19183743). Furthermore, Japanese immigrants and their descendants in São Paulo have shown differences in the onset of coronary heart disease, with descendants experiencing onset approximately 12 years earlier than the immigrants (PUBMED:14569368).
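Several of the studies cited in this answer report adjusted prevalence or rate ratios (a log-binomial model in PUBMED:16319542, Poisson regression in PUBMED:34444370). As a rough illustration of how such a ratio is estimated, the sketch below fits a log-link binomial model with statsmodels; the data frame, variable names and effect sizes are synthetic placeholders invented for illustration and do not come from the cited studies. In practice log-binomial models can fail to converge, in which case Poisson regression with robust standard errors is a common fallback.

# Hedged sketch: age-adjusted prevalence ratio from a log-binomial model
# (statsmodels >= 0.14). All data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "immigrant": rng.integers(0, 2, n),   # 1 = immigrant, 0 = native-born
    "age": rng.integers(27, 61, n),
})
# Simulated binary outcome (e.g., smoking), more prevalent among immigrants.
p = 0.20 + 0.10 * df["immigrant"] + 0.002 * (df["age"] - 40)
df["smoker"] = rng.binomial(1, p.clip(0.01, 0.99))

model = smf.glm("smoker ~ immigrant + age", data=df,
                family=sm.families.Binomial(link=sm.families.links.Log()))
fit = model.fit()
print("adjusted prevalence ratio:", round(float(np.exp(fit.params["immigrant"])), 2))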
Instruction: Prevalence of endoscopically identified heterotopic gastric mucosa in the proximal esophagus: endoscopist dependent? Abstracts: abstract_id: PUBMED:17450028 Prevalence of endoscopically identified heterotopic gastric mucosa in the proximal esophagus: endoscopist dependent? Goals: The aim of this study is to determine the prevalence of heterotopic gastric mucosa in the proximal esophagus (HGMPE) and whether a thorough endoscopic search may influence such prevalence. Background: Heterotopic gastric mucosa in the esophagus (sometimes known as inlet patch) refers to a discrete area of gastric mucosa, with a spherical or ellipsoid configuration, that is typically located in the proximal esophagus. The prevalence of endoscopically diagnosed HGMPE varies from 0.1% to 10%. Endoscopic detection may be difficult as HGMPE is often located at or just below the upper esophageal sphincter. It might be associated with severe complications such as bleeding, perforation, fistula, and stricture formation, in addition to the development of adenocarcinoma. Study: During a 2-year period, 455 consecutive patients with various gastrointestinal complaints underwent esophagogastroduodenoscopy by a single endoscopist (group 1). This endoscopist paid special attention to detecting HGMPE by thoroughly examining the proximal esophagus upon withdrawal of the endoscope. During the same period of time, endoscopy reports of 472 patients who underwent esophagogastroduodenoscopy in the same hospital by 3 other endoscopists were retrospectively reviewed (group 2). These endoscopists were aware of the existence of HGMPE and stated that the presence of HGMPE would be included as an endoscopic finding in their reports. Results: In the first group, HGMPE was identified in 12 out of 455 patients (2.6%), whereas in the second group only 2 out of 472 patients (0.4%) had reports identifying HGMPE (P<0.01). Conclusions: Endoscopic detection of HGMPE is influenced by the endoscopist's thorough search for this entity, and thus, more time devoted to such a search may lead to higher detection rates. abstract_id: PUBMED:22066087 Early gastric cancer arising from heterotopic gastric mucosa in the gastric submucosa. The incidence of heterotopic gastric mucosa located in the submucosa in resected stomach specimens has been reported to be 3.0 to 20.1%. Heterotopic gastric mucosa is thought to be a benign disease, which rarely becomes malignant. Heterotopic gastric mucosa exists in the gastric submucosa, and gastric cancer rarely occurs in heterotopic gastric mucosa. Since tumors are located in the normal submucosa, they appear as submucosal tumors during endoscopy, and are diagnosed through endoscopic biopsies with some difficulty. For such reasons, heterotopic gastric mucosa is mistaken for a gastric submucosal tumor. Recently, two cases of early gastric cancer arising from heterotopic gastric mucosa in the gastric submucosa were treated. Both cases were diagnosed as submucosal tumors based on upper gastrointestinal endoscopy, endoscopic ultrasound, and computed tomography findings, and in both cases, laparoscopic wedge resections were performed, the surgical findings of which also suggested submucosal tumors. However, pathologic assessment of the surgical specimens led to the diagnosis of well-differentiated intramucosal adenocarcinoma arising from heterotopic gastric mucosa in the gastric submucosa.
abstract_id: PUBMED:36306058 A case of pyloric gland adenoma with high-grade dysplasia in the duodenum arising from heterotopic gastric mucosa observed over 5 years. Pyloric gland adenoma (PGA) in the duodenum is a rare gastric phenotype duodenal neoplasm. Although heterotopic gastric mucosa in the duodenum has been recognized as a benign lesion, it is a potential precursor of PGA and gastric phenotype adenocarcinoma. Herein, we present the follow-up of a case of PGA in the duodenum, documenting endoscopic and histological changes from low-grade to high-grade dysplasia. The PGA was considered to arise from heterotopic gastric mucosa, because heterotopic gastric mucosa was observed at the initial examination. It is difficult to distinguish heterotopic gastric mucosa from PGAs, both endoscopically and histologically. An increase in size may be useful for their differentiation. Therefore, endoscopists should not underestimate growth of heterotopic gastric mucosa compared with previous examinations. abstract_id: PUBMED:17335714 Heterotopic gastric mucosa in the upper esophagus. An unknown cause of dysphagia Heterotopic gastric mucosa in the proximal third of the esophagus is an embryological lesion that has been described in 1.1% to 10% of gastroscopies. Although most of these lesions are asymptomatic, they can sometimes be accompanied by upper esophageal symptoms due to acid secretion. We present a case of heterotopic gastric mucosa in the proximal third of the esophagus with dysphagia. pH-metry demonstrated acid secretion by these lesions, which resolved with proton pump inhibitor treatment. abstract_id: PUBMED:4058228 Adenocarcinoma of heterotopic gastric mucosa in the proximal esophagus To date, 25 cases of adenocarcinoma of the proximal esophagus have been described in the literature; only two of them arose from heterotopic gastric mucosa. Two more such case reports are given here. abstract_id: PUBMED:32913146 A rare case of the pancreas with heterotopic gastric mucosa detected by EUS (with video). Heterotopic gastric mucosa of the pancreas is a rare congenital malformation that is seldom detected. In the embryonic stage, the primitive gut, comprising the foregut, midgut and hindgut, originates from the endoderm of the gastrula. The stomach and pancreas stem from the distal end of the foregut. When abnormal differentiation occurs, pancreatic tissue is usually found ectopically in the stomach, whereas heterotopic gastric mucosa in the pancreas is rare. This malformation is usually confirmed by postoperative pathology. We report a case of congenital heterotopic gastric mucosa of the pancreas detected by EUS and contrast-enhanced EUS. The EUS findings differ from those of a pancreatic cystic lesion. abstract_id: PUBMED:1916499 Incidence of heterotopic gastric mucosa in the upper oesophagus. In a prospective study of the frequency and clinical importance of heterotopic gastric mucosa in the upper oesophagus, 634 consecutive veteran patients (98% male), undergoing endoscopy for various gastrointestinal complaints, were evaluated. Sixty-four patients (10%) had heterotopic gastric mucosal patches varying in size from 0.2-0.3 cm to 3 x 4-5 cm, often immediately below the upper oesophageal sphincter. Biopsies of these patches showed fundic type gastric mucosa with chief and parietal cells. The 10% prevalence is more than twice the highest reported prevalence rate of endoscopically detected patches in the upper oesophagus.
The characteristic location of these patches at the sphincter area, their uniformly fundic type gastric mucosa, and their poor correlation with clinical and endoscopic evidence of gastro-oesophageal reflux support the hypothesis that they are congenital in nature. abstract_id: PUBMED:9427488 Heterotopic gastric mucosa in the upper esophagus: a prospective study of 33 cases and review of literature. Background And Study Aims: The prevalence of endoscopically diagnosed heterotopic gastric mucosa in the upper esophagus (HGMUE) has been reported in a few studies, and varies from 0.1 to 10%. Clinical relevance and possible association with other pathological conditions remain a matter of debate. A prospective study was carried out to determine the prevalence of HGMUE, the influence on it of age and sex, and to study the macroscopic and microscopic aspects of the lesion, its clinical relevance and possible association with other pathological conditions. Patients And Methods: A total of 674 new patients with upper digestive complaints or alteration of their state of health underwent upper gastrointestinal endoscopy, with special attention paid to the proximal esophagus when withdrawing the gastroscope. They had been carefully questioned, especially regarding possible complaints, which could have drawn attention to the upper esophagus. Results: Heterotopic columnar epithelium in the proximal esophagus was found in 4.9 % of patients. No difference was observed according to age or sex. A mild to moderate chronic inflammatory infiltration of the heterotopic patch was observed in most cases, not related to the presence in the lesion of Helicobacter pylori, which was found in only one case. Pathological conditions of the gastroesophageal junction, especially esophagitis, were slightly more frequent in patients with HGMUE. Mild complaints, possibly related to the presence of the lesion, were observed in three out of the 33 cases. Conclusions: On the basis of our prospective study we consider that heterotopic columnar epithelium in the proximal esophagus is a rather common, generally asymptomatic, benign congenital anomaly. Malignant transformation of heterotopic gastric mucosa in the upper esophagus and other severe complications are rare. The need for surveillance should be reserved for the rare cases with metaplasia or dysplasia in the heterotopic columnar mucosa. abstract_id: PUBMED:25396006 Heterotopic gastric mucosa of upper oesophagus: evaluation of 12 cases during gastroscopic examination. Introduction: Oesophageal heterotopic gastric mucosa mostly presents in the upper part of the oesophagus. It is commonly under-diagnosed because of its localisation. Aim: To expose the association between heterotopic gastric mucosa and endoscopic features of the upper gastrointestinal tract. Material And Methods: A total of 1860 upper endoscopic examinations performed between January 2012 and July 2013 were analysed retrospectively. Endoscopic features and histological examinations of 12 heterotopic gastric mucosa (HGM) of the upper oesophagus were documented and evaluated retrospectively. Results: There were 7 (58%) male and 5 (42%) female patients aged between 22 and 80 years with a mean age of 43.2 years. Heterotopic gastric mucosa was present in 12 (0.6%) of all patients. We were able to perform biopsy for histopathological observation on 8 (66%) of the 12 patients in which HGM was seen during endoscopy. Five (42%) patients with heterotopic gastric mucosa had oesophagitis. 
Los Angeles Grade A oesophagitis was found in all patients, and histologically proven Barrett's oesophagus was detected in only one patient. Conclusions: When a patient has ongoing dyspeptic complaints and reflux symptoms despite the treatment, one should be careful about possible HGM during upper gastrointestinal endoscopy. Patients who have metaplasia or dysplasia within HGM may need to be considered for surveillance. abstract_id: PUBMED:29093586 Heterotopic gastric mucosa in duodenum: endoscopic and histological features Objective: The aim of this study was to characterize heterotopic gastric mucosa in the duodenum. Materials And Methods: Slides with a diagnosis of heterotopic gastric mucosa from 2014-2015 were reviewed, and clinical, histological and endoscopic data were collected for every case. Results: 45 cases of heterotopic gastric mucosa in the duodenum were found; 91.1% were located in the duodenal bulb, presenting as polyps in 73.2% of cases. In all cases, parietal cells were identified as the main criterion for the diagnosis. Neither Helicobacter pylori nor dysplasia was identified. Conclusions: Our data confirm its non-neoplastic nature. Heterotopic gastric mucosa should be taken into account in the diagnosis of duodenal polyps. Identification of parietal cells in the duodenal mucosa is essential in the differential diagnosis with peptic duodenitis. Answer: The prevalence of endoscopically identified heterotopic gastric mucosa in the proximal esophagus (HGMPE) appears to be influenced by the endoscopist's thorough search for this entity. A study conducted over a 2-year period involving 455 consecutive patients who underwent esophagogastroduodenoscopy by a single endoscopist who paid special attention to detecting HGMPE found a prevalence rate of 2.6% (12 out of 455 patients). In contrast, a review of reports from 472 patients who underwent the same procedure by three other endoscopists, who were aware of HGMPE but did not specifically focus on it, showed a much lower prevalence rate of 0.4% (2 out of 472 patients) (PUBMED:17450028). This suggests that the detection rate of HGMPE can be significantly increased when the endoscopist is actively looking for it during the procedure.
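To make the headline comparison in this answer concrete, the sketch below recomputes the two prevalences from PUBMED:17450028 (12/455 vs. 2/472) with exact Clopper-Pearson confidence intervals and a two-proportion z-test. The abstract only reports P < 0.01 and does not name a test, so the interval method and test shown here are assumptions for illustration.

# Hedged sketch: HGMPE prevalence with exact confidence intervals.
# Counts follow PUBMED:17450028; the interval method and test are assumed.
from statsmodels.stats.proportion import proportion_confint, proportions_ztest

groups = [("dedicated search", 12, 455), ("routine reporting", 2, 472)]
for label, count, nobs in groups:
    low, high = proportion_confint(count, nobs, alpha=0.05, method="beta")
    print(f"{label}: {count / nobs:.1%} (95% CI {low:.1%} to {high:.1%})")

stat, p = proportions_ztest([12, 2], [455, 472])
print(f"two-proportion z-test: p = {p:.4f}")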
Instruction: Do postmenopausal women with NIDDM have a reduced capacity to deposit and conserve lower-body fat? Abstracts: abstract_id: PUBMED:9589249 Do postmenopausal women with NIDDM have a reduced capacity to deposit and conserve lower-body fat? Objective: To compare regional body fat distribution and sex hormone status of postmenopausal women with NIDDM with those of age- and BMI-matched normoglycemic women. Research Design And Methods: The regional body fat distribution and sex hormone status of 42 postmenopausal women with NIDDM were compared with those of 42 normoglycemic women matched for age and BMI, who served as control subjects. Body composition was measured by dual-energy X-ray absorptiometry, and sex hormone-binding globulin (SHBG) and testosterone were measured in serum. Results: Although the levels of total body fat were similar between the two groups, the women with NIDDM had significantly less lower-body fat (LBF) (P < 0.01) than the control subjects matched for age and BMI. This pattern of fat deposition in women with NIDDM was accompanied by an androgenic hormone profile, with decreased SHBG concentration and an increased free androgen index (P < 0.05 and P < 0.01, respectively). Conclusions: A reduced capacity to deposit and/or conserve LBF may be an independent factor associated with (or may be a marker of) the metabolic manifestations of the insulin resistance syndrome in women with NIDDM. The possibility that the smaller relative accumulation of LBF is a consequence of the androgenic hormonal profile should be investigated in future studies. abstract_id: PUBMED:34732526 Body Fat Distribution, Cardiometabolic Traits, and Risk of Major Lower-Extremity Arterial Disease in Postmenopausal Women. Objective: To assess the relationship between body fat distribution and incident lower-extremity arterial disease (LEAD). Research Design And Methods: We included 155,925 postmenopausal women with anthropometric measures from the Women's Health Initiative who had no known LEAD at recruitment. A subset of 10,894 participants had body composition data quantified by DXA. Incident cases of symptomatic LEAD were ascertained and adjudicated through medical record review. Results: We identified 1,152 incident cases of LEAD during a median 18.8 years follow-up. After multivariable adjustment and mutual adjustment, waist and hip circumferences were positively and inversely associated with risk of LEAD, respectively (both P-trend < 0.0001). In a subset (n = 22,561) where various cardiometabolic biomarkers were quantified, a similar positive association of waist circumference with risk of LEAD was eliminated after adjustment for diabetes and HOMA of insulin resistance (P-trend = 0.89), whereas hip circumference remained inversely associated with the risk after adjustment for major cardiometabolic traits (P-trend = 0.0031). In the DXA subset, higher trunk fat (P-trend = 0.0081) and higher leg fat (P-trend < 0.0001) were associated with higher and lower risk of LEAD, respectively. Further adjustment for diabetes, dyslipidemia, and blood pressure diminished the association for trunk fat (P-trend = 0.49), yet the inverse association for leg fat persisted (P-trend = 0.0082). Conclusions: Among U.S. postmenopausal women, a positive association of upper-body fat with risk of LEAD appeared to be attributable to traditional risk factors, especially insulin resistance. Lower-body fat was inversely associated with risk of LEAD beyond known risk factors. 
abstract_id: PUBMED:27487528 Two-compartment model of body composition and abdominal fat area in postmenopausal women - pilot study Introduction: Both the menopausal period and aging influence body composition, in particular by increasing total body fat and visceral fat. We should be aware that changes in body composition, mainly fat translocation to the abdominal region, can occur without significant changes in body weight. Therefore, quantitative abdominal fat assessment should be our aim. The Aim: Body composition analysis based on a two-compartment model and assessment of abdominal fat area in cross-section. Material And Methods: Postmenopausal subjects (41 women) were recruited for this study and divided into 2 groups: group 1 - women aged 45-56 years and group 2 - women aged 57-79 years. Body composition analysis and abdominal fat area assessment were conducted using the bioelectrical impedance method with the BioScan 920 (Maltron Int.) according to a standardized procedure. Results: Women in the early postmenopausal stage (Group 1) had a significantly lower total body fat percentage than women in the late postmenopausal period (Group 2) (41.09 ± 7.72% vs. 50.7 ± 9.88%, p=0.0021). Women in group 1 were also characterized by significantly lower visceral fat area (VAT) and subcutaneous fat area (SAT) than group 2 (respectively VAT 119.25 ± 30.09 cm2 vs. 199.36 ± 87.38 cm2, p=0.0011; SAT 175.19 ± 57.67 cm2 vs. 223.4 ± 74.29 cm2, p=0.0336). According to VAT criteria (>120 cm2), 44% of women in group 1 and 80% in group 2 had excess visceral fat. Conclusions: Both total body fat and intra-abdominal fat increased with age, independently of weight changes. abstract_id: PUBMED:26856648 Relationship between vitamin D and body fat distribution evaluated by DXA in postmenopausal women. Objective: The aim of this study was to explore the relationship between 25-hydroxyvitamin D (25[OH]D) serum concentrations and body fat distribution in a sample of postmenopausal women. Methods: We enrolled sixty-two postmenopausal women; 25(OH)D serum concentrations, serum intact parathyroid hormone, blood analyses, and anthropometric measurements were carried out. Body fat composition was evaluated by dual-energy X-ray absorptiometry. Insulin resistance was estimated by homeostatic model assessment of insulin resistance (HOMA-IR) calculation. Results: Low levels of vitamin D (<30 ng/mL) were found in 77.4% of the population studied. There was a correlation (P < 0.0001) between 25(OH)D and waist circumference (r = -0.543), android fat to gynoid fat (A/G) ratio (r = -0.554), high-density lipoprotein cholesterol (r = 0.498), and HOMA-IR (r = -0.520). A/G fat ratio (B = -34.90; 95% confidence interval [-55.30, -14.1]; P = 0.019), HOMA-IR (B = -3.17; 95% confidence interval [-5.99, -0.351]; P = 0.028), and high-density lipoprotein cholesterol (B = 0.361; 95% confidence interval [0.033, 0.698]; P = 0.032) were found to be independent predictors of lower 25(OH)D by multiple logistic regression analysis. Except for waist circumference, both these results were maintained when correlations were adjusted for age, onset of menopause, serum intact parathyroid hormone, and medications, and when body mass index was added as a covariate. Conclusions: Vitamin D deficiency and insufficiency are common conditions. A/G ratio appeared to be associated with 25(OH)D concentrations and it is well-known that the android disposition of body fat is more closely associated with the onset of metabolic syndrome.
Longitudinal studies are needed to better characterize the direction and the causal links of this association. abstract_id: PUBMED:31256194 Association between regional body fat and cardiovascular disease risk among postmenopausal women with normal body mass index. Aims: Central adiposity is associated with increased cardiovascular disease (CVD) risk, even among people with normal body mass index (BMI). We tested the hypothesis that regional body fat deposits (trunk or leg fat) are associated with altered risk of CVD among postmenopausal women with normal BMI. Methods And Results: We included 2683 postmenopausal women with normal BMI (18.5 to <25 kg/m2) who participated in the Women's Health Initiative and had no known CVD at baseline. Body composition was determined by dual energy X-ray absorptiometry. Incident CVD events including coronary heart disease and stroke were ascertained through February 2017. During a median 17.9 years of follow-up, 291 incident CVD cases occurred. After adjustment for demographic, lifestyle, and clinical risk factors, neither whole-body fat mass nor fat percentage was associated with CVD risk. Higher percent trunk fat was associated with increased risk of CVD [highest vs. lowest quartile hazard ratio (HR) = 1.91, 95% confidence interval (CI) 1.33-2.74; P-trend <0.001], whereas higher percent leg fat was associated with decreased risk of CVD (highest vs. lowest quartile HR = 0.62, 95% CI 0.43-0.89; P-trend = 0.008). The association for trunk fat was attenuated yet remained significant after further adjustment for waist circumference or waist-to-hip ratio. Higher percent trunk fat combined with lower percent leg fat was associated with particularly high risk of CVD (HR comparing extreme groups = 3.33, 95% CI 1.46-7.62). Conclusion: Among postmenopausal women with normal BMI, both elevated trunk fat and reduced leg fat are associated with increased risk of CVD. abstract_id: PUBMED:18257144 Oxidative stress, body fat composition, and endocrine status in pre- and postmenopausal women. Objective: To evaluate the role of menopause on the regional composition and distribution of fat in women and eventual correlations with the oxidative state. Design: In this observational clinical investigation, 90 women (classified for menopause status according to Stages of Reproductive Aging Workshop criteria) were evaluated for body mass composition and fat distribution by dual-energy x-ray absorptiometry and for oxidative status by determination of serum hydroperoxide levels and residual antioxidant activity. Results: Total body fat mass increases significantly in postmenopause (P < 0.05) by 22% in comparison with premenopause, with specific increases in fat deposition at the level of trunk (abdominal and visceral) (P < 0.001) and arms (P < 0.001). Concomitantly, the antioxidant status increases significantly (P < 0.001) by 17%. When data were adjusted for age by analysis of covariance, statistical significance disappeared for the increase in fat mass, but it was retained for antioxidant status (P < 0.05). Both antioxidant status and hydroperoxide level increased with trunk fat mass, as shown by linear correlation analysis (r = 0.46, P < 0.001 and r = 0.26, P < 0.05, respectively). Conclusions: The results of our investigation demonstrate that fat content increases in the upper part of the body (trunk and arms) in postmenopause and that age is the main determinant of this increase. 
During the comparison of premenopausal and postmenopausal women, we also detected a significant increase in antioxidant status. Apparently this change is mainly related to menopausal endocrine and fat changes. abstract_id: PUBMED:26732682 The influence of body fat distribution patterns and body mass index on MENQOL in women living in an urban area. Objectives: To investigate the influence of patterns of body fat distribution and body mass index (BMI) on menopause-specific quality of life in peri- and postmenopausal women living in an urban area. Methods: A total of 214 peri- and postmenopausal women, mean age 55 years, with intact uterus and no history of hormonal treatment were recruited. Anthropometric measurements were conducted as standard techniques. The Menopause-specific Quality of Life (MENQOL) questionnaire was used to evaluate menopause-specific quality of life. The Mann-Whitney U test and Kruskal-Wallis test were used to compare MENQOL between body fat patterns or BMI. Results: According to the body fat distribution patterns, 53.3% were women of the android type and 46.7% were of the gynoid type. The android body pattern was associated with worsening of vasomotor and psychosocial domains (p < 0.05). However, overweight and obese women had slightly better scores in the sexual domain of the MENQOL. Conclusions: Peri- and postmenopausal women with the android body pattern have lower quality of life in the vasomotor and psychosocial domains while women with normal BMI have the slightly lower quality of life in the sexual domain. The maintenance of premenopausal body proportion might mitigate the menopause-specific quality of life. abstract_id: PUBMED:35937205 Association between body fat percentage and H-type hypertension in postmenopausal women. Background: Previous studies have explored the relationship between body fat percentage (BFP) and hypertension or homocysteine. However, evidence on the constancy of the association remains inconclusive in postmenopausal women. The aim of this study was to investigate the association between BFP and H-type hypertension in postmenopausal women. Methods: This cross-sectional study included 1,597 eligible female patients with hypertension. Homocysteine levels ≥10 mmol/L were defined as H-type hypertension. BFP was calculated by measuring patients' physical parameters. Subjects were divided into 4 groups according to quartiles of BFP (Q1: 33.4% or lower, Q2: 33.4-36.1%, Q3: 36.1-39.1%, Q4: >39.1%). We used restricted cubic spline regression models and logistic regression analysis to assess the relationship between BFP and H-type hypertension. Additional subgroup analysis was performed for this study. Results: Among 1,597 hypertensive patients, 955 (59.8%) participants had H-type hypertension. There were significant differences between the two groups in age, BMI, educational background, marital status, exercise status, drinking history, WC, TG, LDL, Scr, BUN, and eGFR (P < 0.05). The prevalence of H-type hypertension in the Q1 to Q4 groups was 24.9, 25.1, 24.9, and 25.1%, respectively. After adjusting for relevant factors, we found that the risk of H-type hypertension in the Q4 group had a significantly higher than the Q1 group (OR = 3.2, 95% CI: 1.3-7.5). Conclusion: BFP was positively associated with the risk of H-type hypertension in postmenopausal women. Postmenopausal women should control body fat to prevent hypertension. abstract_id: PUBMED:31453501 The relationship between body composition and knee osteoarthritis in postmenopausal women. 
Objectives: The aim of this study was to examine the relationship between the body composition measures and knee osteoarthritis (OA) in postmenopausal women to determine whether the fat mass or the lean mass was closely associated with knee OA. Patients And Methods: This retrospective, cross-sectional study included a total of 212 postmenopausal women (mean age 59.9±6.2 years; range, 46 to 76 years). Descriptive characteristics were recorded and height was measured using a stadiometer. Body weight, fat mass, and lean mass were estimated using bioelectrical impedance analysis. X-rays of the knees were obtained. Knee OA was radiographically diagnosed according to the Kellgren-Lawrence criteria. Based on the radiographic knee OA diagnosis the women were divided into two groups. Logistic regression analyses were used to examine the associations between the body composition measures and knee OA. Results: Knee OA was present in 117 women. Body Mass Index, body weight, fat mass, and lean mass were all significantly associated with knee OA after adjusting for confounding factors such as age, height, and physical activity (p<0.001). When assessed together with lean mass in a multivariate regression model, fat mass was closely and significantly found to be related to knee OA, compared to lean mass (p<0.001, OR: 1.17). Furthermore the fat mass/lean mass ratio was positively associated with knee OA (p<0.001). Conclusion: In postmenopausal women body weight was significantly and positively associated with knee OA and among the components that constitute body weight, fat mass was closely and significantly associated with knee OA compared to lean mass. This result suggests that the relationship between obesity and knee OA is mainly based on the fat mass, and supports the importance of systemic metabolic effects of adiposity in this relationship. abstract_id: PUBMED:16984520 Precedence of the shift of body-fat distribution over the change in body composition after menopause. Aim: This study investigated the sequence of certain phenomena after menopause: decrease in bone mineral density (BMD), change in body composition (lean and fat components), and the shift toward upper body fat distribution. Methods: Subjects were 188 postmenopausal women aged 50-65 years old. They were divided into four subgroups based on 4-year increments in age. Regularly menstruating women (n = 51) aged 50-53 years old served as controls. Age, height, weight, and years since menopause were recorded. Body fat mass, percentage of body fat (%fat), lean body mass (LBM), lumbar spine (L2-4), total body BMD, and the trunk-leg fat mass ratio were measured by dual-energy X-ray absorptiometry. Results: In postmenopausal women (n = 42) aged 50-53 years, BMD was lower compared to age-matched controls (P < 0.05), while other variables did not differ. Trunk-leg fat mass ratio in women aged 54-57 years or more was significantly higher than that in control. LBM was significantly lower while percentage fat was significantly higher in women aged 58-61 years old or more. Conclusion: An initial event during the menopausal process is BMD loss, which is followed by body fat distribution shift, then LBM loss and reciprocal increase in body fat mass. Answer: Yes, postmenopausal women with non-insulin-dependent diabetes mellitus (NIDDM) appear to have a reduced capacity to deposit and conserve lower-body fat (LBF). 
A study comparing regional body fat distribution and sex hormone status of postmenopausal women with NIDDM to age- and BMI-matched normoglycemic women found that although total body fat levels were similar between the two groups, women with NIDDM had significantly less LBF. This pattern of fat deposition was also associated with an androgenic hormone profile, characterized by decreased sex hormone-binding globulin (SHBG) concentration and an increased free androgen index. These findings suggest that a reduced capacity to deposit or conserve LBF may be associated with the metabolic manifestations of insulin resistance syndrome in women with NIDDM (PUBMED:9589249).
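For reference, the free androgen index mentioned in the answer above is conventionally calculated from total testosterone and SHBG; the abstracts shown here do not state the formula used, so the expression below is the standard definition rather than the cited study's own:

\[ \mathrm{FAI} = \frac{\text{total testosterone (nmol/L)}}{\text{SHBG (nmol/L)}} \times 100 \]

Under this definition, a fall in SHBG at an unchanged total testosterone concentration raises the index, which is consistent with the androgenic hormone profile described for the women with NIDDM (PUBMED:9589249), assuming that study used the conventional calculation.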
Instruction: Can thyroid size still be considered as a useful tool for assessing iodine intake? Abstracts: abstract_id: PUBMED:26094528 Can thyroid size still be considered as a useful tool for assessing iodine intake? Introduction: It has always been very difficult to precisely define a goitre. For years, the borderline values have been sought which could be universally used in such evaluations. However, presented reference values were very often disappointing as they proved to be either too restrictive or too liberal. Objective: The aim of the study was to assess the two methods of goitre evaluation: 1) traditional, based on ultrasound reference ranges for the thyroid size, 2) based on the analysis of thyroid volume (V) referred to the body surface area (BSA). Materials And Method: For this purpose, the study was conducted to evaluate the incidence of goitre and ioduria among 102 school-aged children in Opoczno, Poland. The study group comprised 59 girls and 43 boys; age range: 8-12 years. Results: The incidence of goitre among the examined children varied from 1.0-11.8% in relation to the age, and from 0-14.5% in relation to the BSA, depending on the references ranges used. Conclusion: Analysis of V/BSA ratio is a better estimation of the size of the thyroid gland than the evaluation of thyroid size based on traditional ultrasound reference values. Summing up, relating the size of the thyroid gland to BSA is a good, sensitive tool for such analysis, and can be used for comparisons of different populations, as well as surveys conducted at different time points. abstract_id: PUBMED:31070733 Serum Iodine Is Correlated with Iodine Intake and Thyroid Function in School-Age Children from a Sufficient-to-Excessive Iodine Intake Area. Background: An alternative feasible and convenient method of assessing iodine intake is needed. Objective: The aim of this study was to examine the utility of serum iodine for assessing iodine intake in children. Methods: One blood sample and 2 repeated 24-h urine samples (1-mo interval) were collected from school-age children in Shandong, China. Serum free triiodothyronine (FT3), free thyroxine (FT4), thyroid-stimulating hormone (TSH), thyroglobulin (Tg), total iodine (StI), and non-protein-bound iodine (SnbI) concentrations and urine iodine (UIC) and creatinine (UCr) concentrations were measured. Iodine intake was estimated based on two 24-h urine iodine excretions (24-h UIE). Associations between serum iodine and other factors were analyzed using the Spearman rank correlation test. Receiver operating characteristic (ROC) curves were used to illustrate diagnostic ability of StI and SnbI. Results: In total, 1686 children aged 7-14 y were enrolled. The median 24-h UIC for the 2 collections was 385 and 399 μg/L, respectively. The median iodine intake was estimated to be 299 μg/d and was significantly higher in boys than in girls (316 μg/d compared with 283 μg/d; P < 0.001). StI and SnbI were both positively correlated with FT4 (ρ = 0.30, P < 0.001; and ρ = 0.21, P < 0.001), Tg (ρ = 0.21, P < 0.001; and ρ = 0.19, P < 0.001), 24-h UIC (ρ = 0.56, P < 0.001; and ρ = 0.47, P < 0.001), 24-h UIE (ρ = 0.46, P < 0.001; and ρ = 0.49, P < 0.001), urine iodine-to-creatinine ratio (ρ = 0.58, P < 0.001; and ρ = 0.62, P < 0.001), and iodine intake (ρ = 0.49, P < 0.001; and ρ = 0.53, P < 0.001). The areas under the ROC curves for StI and SnbI for the diagnosis of excessive iodine intake in children were 0.76 and 0.77, respectively. 
The optimal StI and SnbI threshold values for defining iodine excess in children were 101 and 56.2 μg/L, respectively. Conclusions: Serum iodine was positively correlated with iodine intake and the serum FT4 concentration in children. It is a potential biomarker for diagnosing excessive iodine intake in children. This trial was registered at clinicaltrials.gov as NCT02915536. abstract_id: PUBMED:30890682 Urinary iodine is increased in papillary thyroid carcinoma but is not altered by regional population iodine intake status: a meta-analysis and implications. Excessive iodine intake has been associated with increased risk of thyroid cancer (TC) in many studies, but the results have not been consistent. Because urinary iodine (UI) is considered a sensitive marker of current iodine intake, we conducted a meta-analysis to clarify the association between high UI and TC. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement, and the Cochrane Collaboration. Between-group meta-analyses were performed to compare UI between TC patients and the healthy/euthyroid subjects in local residents and benign thyroid nodules (BTN) patients. Then, between-group meta-analyses to compare the incidence rate of iodine excess were also conducted. The 22 case-control studies included in the meta-analyses represented 15,476 participants. This is the first analysis to clarify that UI was increased in PTC patients but was not altered by regional population iodine intake status. Compared with BTN patients, PTC patients exhibited both higher UIC and a higher odds ratio of iodine excess only in the adequate iodine intake subgroup; UIC, but not the odds ratio of iodine excess, was higher in patients with PTC than in those with BTN in the above-requirements iodine intake subgroup. A novel insight is offered that high UI in PTC patients was less influenced by regional population iodine intake status. This indicates that high iodine intake is not a risk factor for PTC and that high urinary iodine is simply a specific characteristic of the disease. abstract_id: PUBMED:34790773 Iodine intake level and incidence of thyroid disease in adults in Shaanxi province: a cross-sectional study. Background: Exploring the relationship between adult iodine intake level and thyroid disease in the Shaanxi area is of great significance for scientific iodine supplementation in adults and for individualized iodine supplementation strategies. At present, the relationship between iodine and the incidence of thyroid disease has not been determined. Methods: This study was based on the clinical data of 1,159 patients from the Shaanxi Province aged over 18 years and diagnosed with thyroid-related diseases who were admitted to the Xijing Hospital from 2016 to 2020, and 182 provincial healthy volunteers aged over 18 years who agreed and signed informed consent for physical examination in 2020. The chi-square test and nonparametric test were used to investigate the relationship between iodine intake level and thyroid disease. Results: (I) A total of 1,341 patients were enrolled and observed in this study. The median urinary iodine (MUI) was 233.20 μg/L. Compared with the control group, the urine iodine (UI) of participants with hyperthyroidism, Hashimoto's thyroiditis (HT), papillary thyroid cancer (PTC), and benign nodules was significantly different (P<0.05). (II) The incidence of PTC was higher in women with excessive iodine intake and people aged ≥45 years (P<0.05).
(III) There was no significant difference in urinary iodine (UI), age, gender, and other factors between benign nodules and PTC (P>0.05). Conclusions: The iodine intake level of adults in Shaanxi is high, which is related to hyperthyroidism, HT, benign nodules, thyroid cancer, and other diseases. There were 3 factors, including excessive iodine intake, age ≥45 years, and female gender, found to be associated with the development of PTC. abstract_id: PUBMED:11939756 Thyroid size and iodine intake in iodine-repleted pregnant women in Isfahan, Iran. Objective: To evaluate the goiter and iodine intake status of pregnant women in Isfahan, after 8 years of iodized salt distribution in Iran. Methods: Thyroid staging was assessed by clinical examination, thyroid volume was determined by sonography, and urinary iodine (UI) excretion was assessed by the digestion method in 90 healthy pregnant women (30 in each trimester) and 90 age-matched nonpregnant women selected by random sampling in prenatal and primary health-care clinics. The data were reported as mean +/- standard deviation; P values <0.05 were considered statistically significant. Results: The mean age of the pregnant and the nonpregnant women was 25.3 and 27.5 years, respectively-no significant difference (P = NS). The clinical goiter prevalence in the pregnant and the nonpregnant groups was 37% and 32%, respectively (P = NS). The mean thyroid volume in the pregnant and nonpregnant women was 7.8 +/- 3.2 and 7.8 +/- 2.8 mL, respectively (P = NS). Urinary iodine (UI) excretion was 20.7 +/- 6.9 mg/dL in pregnant women and 23.7 +/- 7.6 mg/dL in nonpregnant women (P = NS). The prevalence of goiter assessed by sonography was 29% in pregnant women and 21% in nonpregnant women (P = NS). The mean thyroid size in 26 of 90 pregnant women with goiter (thyroid volume >9.2 mL) was 11.8 +/- 2.73 mL and in 19 of 90 nonpregnant women with goiter was 12.36 +/- 1.6 mL (P = NS). The mean thyroid volume was 6.0 +/- 1.7, 9.9 +/- 1.7, 11.8 +/- 2.2, and 18.9 +/- 2.4 mL in the pregnant women with or without goiter at thyroid stages 0, Ia, Ib, and II, respectively. A strong correlation between goiter staging assessed by clinical examination and thyroid volume determined by sonography was found in pregnant (r = 0.77) and nonpregnant (r = 0.78) women (both P<0.000001). Mean UI excretion was 20.9 +/- 7.0, 19.9 +/- 6.8, 20.6 +/- 7.5, and 25.9 +/- 2.3 mg/dL in the pregnant women at thyroid stages 0, Ia, Ib, and II, respectively. In the pregnant and the nonpregnant women, no correlation was found between thyroid stage and UI excretion or between thyroid volume and UI excretion. Conclusion: No iodine deficiency was found in Isfahani pregnant women. Thus, as in most iodine-sufficient areas, thyroid size did not increase during pregnancy. Despite sufficient iodine intake, a moderate prevalence of goiter was noted in pregnant and nonpregnant women. This study also revealed that careful physical examination of the thyroid had diagnostic accuracy similar to sonography. abstract_id: PUBMED:26004893 Development of thyroid dysfunction among women with excessive iodine intake--A 3-year follow-up. Objectives: Thyroid dysfunction can be a result of excessive iodine intake, which may have adverse health consequences, particularly for women in fertile age. In 2010, we conducted a cross-sectional study among lactating women with excessive iodine intake in the Saharawi refugee camps in Algeria and found a high prevalence of thyroid dysfunction. 
Three years later, we conducted a follow-up study to monitor the iodine situation and explore whether thyroid dysfunction was still highly prevalent when the women were no longer post-partum. None of the women were treated for hyper- or hypothyroidism between baseline and follow-up. Methods: In 2013, we were able to recapture 78 of the 111 women from the baseline. Thyroid hormones and antibodies were measured in serum and thyroid size was assessed by palpation. Urinary iodine concentration (UIC) and drinking water iodine concentration were measured. Results: The overall prevalence of thyroid dysfunction and/or positive antibodies was 34.3% and was not significantly changed from baseline. Of the non-pregnant women we reexamined, 17 had hypo- or hyperthyroidism in 2010; among these, 12 women still had abnormal thyroid function at follow-up. In addition, we found 9 new cases with marginally abnormal thyroid function. Women with thyroid dysfunction and/or positive antibodies had significantly higher BMI and thyroglobulin than women with normal thyroid function. We also found that women with high breast milk iodine concentration (BMIC) at baseline had more thyroid dysfunction at follow-up than the women with lower BMIC at baseline. Conclusions: At follow-up, the prevalence of thyroid dysfunction was still high and had not changed during the 3 years between studies and from a postpartum period. The women still had a high iodine intake, indicated by high UIC. Breast milk iodine concentration from baseline predicted thyroid dysfunction at follow-up. abstract_id: PUBMED:1122886 The iodine requirement and influence of iodine intake on iodine metabolism and thyroid function in the adult beagle. Various aspects of iodine metabolism were studied in adult beagles maintained at iodine intake levels ranging from 480 to 20 μg/day. On the basis of changes in radioiodine metabolism, the minimum daily iodine requirement of the adult beagle was found to be 140 μg. Although striking changes were observed in radioiodine metabolism when iodine intake was reduced to 90 μg/day, serum T4 and T3 levels were unaffected. Marked reductions in serum T4 occurred in dogs restricted to an iodine intake of 50 or 20 μg/day, but even at these low levels of iodine intake there were only slight reductions in serum T3 concentration and a eumetabolic state was maintained. Prolonged iodine deficiency (8-12 months) resulted in variable patterns of thyroid histology, which were related to differences in thyroidal 127I content and in the rate of release of radioiodine from the thyroid. The heterogeneity in thyroid morphology and iodine kinetics did not, however, have a significant effect on serum T4 and T3 levels. abstract_id: PUBMED:25447589 Excessive iodine intake and thyroid dysfunction among lactating Saharawi women. Objectives: Excessive iodine intake may lead to thyroid dysfunction, which may be particularly harmful during pregnancy and lactation. The main objective was to describe iodine status and the prevalence of thyroid dysfunction among lactating women in areas with high iodine (HI) and very high iodine (VHI) concentrations in drinking water. Design And Methods: A cross-sectional survey was performed among 111 lactating women in the Saharawi refugee camps, Algeria. Breast milk iodine concentration (BMIC), urinary iodine concentration (UIC) and the iodine concentration in the most commonly consumed foods/drinks were measured. A 24-h dietary recall was used to estimate iodine intake.
Thyroid hormones and antibodies were measured in serum. Results: Median UIC, BMIC and iodine intake across both areas was 350 μg/L, 479 μg/L and 407 μg/day, respectively. In multiple regression analyses, we discovered that being from VHI area was associated with higher UIC and BMIC. BMIC was also positively associated with iodine intake. Thyroid dysfunction and/or positive thyroid antibodies were found in 33.3% of the women, of which 18.9% had hypothyroidism and 8.1% had hyperthyroidism and 6.3% had positive antibodies with normal thyroid function. Elevated thyroid antibodies were in total found in 17.1%. We found no difference in distribution of thyroid dysfunction or positive antibodies between HI and VHI areas. BMI, BMIC and elevated thyroglobulin (Tg) predicted abnormal thyroid function tests. Conclusions: The high prevalence of thyroid dysfunction may be caused by excessive iodine intake over several years. abstract_id: PUBMED:1272812 Scintiscan characteristics of normal thyroid gland in a region of known iodine intake. Morphological features of a normal thyroid gland in a geographical region where the daily iodine intake is about 1 mg are established. The mean weight of the thyroid gland is 31.3 gm with a range from 19 to 43 gm. Oblique length of the right lobe is 5.0 cm and that of the left lobe 4.8 cm. The surface area of the right and left lobes is 9.7 and 9.1 cm2, respectively. The weight of the thyroid gland calculated on the basis of the scan obtained with 99mTcO4 is quite variable and shows poor correlation (gamma = 0.40) with the weight obtained on the basis of I-131 scan. It is suggested that the criteria of normalcy be established regionally based on iodine intake, and that an isotope of iodine be used in calculating the weight of thyroid gland for dosimetry purposes. abstract_id: PUBMED:32363445 Prevalence of thyroid dysfunction in healthy adults according to the estimated iodine intake in 24-hour urine samples: The SALMEX cohort. Purpose: The aim of this study was to evaluate the prevalence of thyroid dysfunction in a cohort of healthy individuals in Mexico City, as well as to investigate the potential associations of these results with their estimated iodine intake (EII) as reflected by their 24-hour urinary iodine excretion (24-h UIE). Methods: From the SALMEX cohort, 683 adults provided an appropriate 24-h urine sample. Thyroid function tests and thyroid antibody concentrations were determined in the participants' sera. We analyzed discrepancies between the commonly used urinary parameters to determine the iodine intake status and the performance of thyroglobulin (Tg) as a biomarker of its status in the adult population. Results: The prevalence of dysthyroidism was high, being similar to other studies. Subclinical hypothyroidism was detected in 5.0% of individuals, clinical hypothyroidism in 1.8% of individuals, and sub-clinical hyperthyroidism in 2.8% of individuals. The median EII was 285 μg/d (IQR 215.0-369.0); 94% of individuals had EII >150 µg/d recommended daily allowance (RDA) in adults. The urinary iodine concentration (UIC) and the UIE had relative biases in their averages of 34.4%. The Tg median was 7.21 ng/mL. The prevalence of increased Tg was 6.15%. There was no correlation between Tg and EII (r= 0.019, p= 0.606). Conclusions: Thyroid dysfunction was highly prevalent in this population. Our cohort revealed a slight discrepancy between dysthyroidism manifestations and iodine intake markers; the latter represent a population with adequate iodine intake. 
Further studies are necessary to clearly define the prevalence of thyroid dysfunction as well as the iodine nutritional status in Mexico. Answer: Thyroid size has traditionally been used as a tool for assessing iodine intake, but its utility has been questioned in light of new research and methodologies. According to the study conducted among school-aged children in Opoczno, Poland, the analysis of thyroid volume (V) referred to body surface area (BSA) is a better estimation of the size of the thyroid gland than the evaluation of thyroid size based on traditional ultrasound reference values. The study concluded that relating the size of the thyroid gland to BSA is a sensitive tool for analysis and can be used for comparisons of different populations and surveys conducted at different time points (PUBMED:26094528). However, other studies suggest that there are additional factors and methods to consider when assessing iodine intake. For instance, serum iodine has been shown to be positively correlated with iodine intake and thyroid function in children, indicating its potential as a biomarker for diagnosing excessive iodine intake (PUBMED:31070733). Moreover, urinary iodine, a sensitive marker of current iodine intake, has been found to be increased in papillary thyroid carcinoma patients but is not altered by regional population iodine intake status, suggesting that high urinary iodine may be a characteristic of the disease rather than a result of iodine intake (PUBMED:30890682). In adults from Shaanxi province, the iodine intake level was found to be high and related to various thyroid diseases, with excessive iodine intake being one of the factors associated with the development of papillary thyroid cancer (PUBMED:34790773). Similarly, in Isfahan, Iran, despite sufficient iodine intake, a moderate prevalence of goiter was noted in pregnant and nonpregnant women, indicating that thyroid size did not increase during pregnancy in an iodine-sufficient area (PUBMED:11939756). A 3-year follow-up study among lactating Saharawi women with excessive iodine intake showed that the prevalence of thyroid dysfunction remained high and unchanged, suggesting that thyroid dysfunction may be a result of excessive iodine intake over several years (PUBMED:26004893). In conclusion, while thyroid size can still be a useful tool for assessing iodine intake, it should be considered alongside other factors and methods, such as serum iodine levels, urinary iodine concentration, and the analysis of thyroid volume relative to BSA, to provide a more comprehensive assessment of iodine intake and thyroid health.
Instruction: Is tumor volume an independent prognostic factor in clinically localized prostate cancer? Abstracts: abstract_id: PUBMED:36497304 Tumor Location and a Tumor Volume over 2.8 cc Predict the Prognosis for Japanese Localized Prostate Cancer. (1) Objective: Our study investigated the prognostic value of tumor volume and location in prostate cancer patients who received radical prostatectomy (RP). (2) Methods: The prognostic significance of tumor volume and location, together with other clinical factors, was studied using 557 patients who received RP. (3) Results: The receiver operating characteristic (ROC) curve identified the optimal cutoff value of tumor volume as 2.8 cc for predicting biochemical recurrence (BCR). Cox regression analysis revealed that a tumor in the posterior area (p = 0.031), peripheral zone (p = 0.0472), and tumor volume ≥ 2.8 cc (p < 0.0001) were predictive factors in univariate analysis. After multivariate analysis, tumor volume ≥ 2.8 cc (p = 0.0225) was an independent predictive factor for BCR. Among them, a novel risk model was established using tumor volume and location in the posterior area and peripheral zone. The progression-free survival (PFS) of patients who met the three criteria (unfavorable group) was significantly worse than other groups (p ≤ 0.001). Furthermore, multivariate analysis showed that the unfavorable risk was an independent prognostic factor for BCR. The prognostic significance of our risk model was observed in low- to intermediate-risk patients, although it was not observed in high-risk patients. (4) Conclusion: Tumor volume (≥2.8 cc) and localization (posterior/peripheral zone) may be a novel prognostic factor in patients undergoing RP. abstract_id: PUBMED:15247716 Is tumor volume an independent prognostic factor in clinically localized prostate cancer? Purpose: There continues to be debate regarding the prognostic significance of tumor volume (TV) in radical prostatectomy (RP) specimens. We assessed the prognostic significance of TV in a large series of patients followed for a long time to discover whether the effect of TV has changed with earlier detection of smaller tumors. Materials And Methods: TV was measured planimetrically in 1,302 consecutive RP specimens with clinical stage T1-3 prostate cancer from 1983 to 2000. We correlated TV with standard clinical and pathological features, and determined the prostate specific antigen nonprogression rate. Median followup was 46 months (range 1 to 202). Results: TV was weakly associated with other clinical and pathological features. Median TV decreased significantly over time (2.16 cm3 before 1995 vs 1.25 cm3 after 1995, p <0.001) and this decrease was also found within each clinical stage. In univariate analysis TV correlated strongly with the probability of progression. However, in multivariate analysis TV was not a significant independent predictor of prognosis, either in the whole cohort of patients or in those with peripheral zone cancer only. Even in univariate analysis TV had no effect on prognosis for patients in whom cancer was either confined to the prostate or was Gleason score 2 through 6. Conclusions: TV provides no independent prognostic information when considered in multivariate analysis with Gleason score and pathological stage. Measurement of TV before treatment is less likely to characterize prostate cancer accurately than assessment of tumor grade and extent. There seems to be little reason to measure TV routinely in RP specimens. 
abstract_id: PUBMED:34281850 Lactate Dehydrogenase Is a Serum Prognostic Factor in Clinically Regional Lymph Node-positive Prostate Cancer. Background/aim: Currently, there is no established prognostic serum parameter except PSA in clinically regional lymph node-positive prostate cancer. The aim of this study was to identify serum prognostic factors in clinically regional lymph node-positive prostate cancer. Patients And Methods: Patients diagnosed with regional lymph node-positive prostate cancer between 2008 and 2017 were included. The prognostic value of serum parameters for progression-free survival (PFS) and overall survival (OS) was investigated. Results: Univariate and multivariate analyses showed a statistically significant increased hazard risk for PFS and OS for men with lactate dehydrogenase (LDH) ≥230 IU/l at diagnosis. PFS at 5 years for patients with high and low LDH levels were 69.9% (95% CI=56.8-79.8%) and 18.9% (95% CI=1.23-53.2%), respectively (p=0.003). OS at 5 years for low and high LDH levels were 89.2% (95% CI=78.6-94.7%) and 46.3 (95% CI=11.2-76.2%), respectively (p=0.006). Conclusion: This study shows that LDH is an independent predictor of PFS and OS in patients with regional lymph node metastatic prostate cancer. abstract_id: PUBMED:18754868 Reg IV is an independent prognostic factor for relapse in patients with clinically localized prostate cancer. Regenerating islet-derived family, member 4 (REG4, which encodes Reg IV) is a candidate marker for cancer and inflammatory bowel disease. We investigated the potential prognostic role of Reg IV immunostaining in clinically localized prostate cancer (PCa) after radical prostatectomy. Immunohistochemical staining of Reg IV was performed in 98 clinically localized PCa tumors obtained during curative radical prostatectomy. Intestinal and neuroendocrine differentiation was investigated by MUC2 and chromogranin A immunostaining, respectively. The prognostic significance of immunohistochemical staining for these factors on prostate-specific antigen (PSA)-associated recurrence was assessed by Kaplan-Meier analysis and a Cox regression model. Phosphorylation of the epidermal growth factor receptor (EGFR) by Reg IV was analyzed by Western blot. In total, 14 (14%) of the 98 PCa cases were positive for Reg IV staining. Reg IV positivity was observed frequently in association with MUC2 (P = 0.0182) and chromogranin A positivity (P = 0.0012). Univariate analysis revealed that Reg IV staining (P = 0.0004), chromogranin A staining (P = 0.0494), Gleason score (P < 0.0001) and preoperative PSA concentration (P = 0.0167) were significant prognostic factors for relapse-free survival. Multivariate analysis indicated that Reg IV staining (P = 0.0312), Gleason score (P = 0.0014) and preoperative PSA concentration (P = 0.0357) were independent predictors of relapse-free survival. In the LNCaP cell line, EGFR phosphorylation was induced by the addition of Reg IV-conditioned medium. These results suggest that Reg IV expression is an independent prognostic indicator of relapse after radical prostatectomy. abstract_id: PUBMED:11268456 Independent prognostic importance of microvessel density in clinically localized prostate cancer. Background: Previous studies have reported a possible prognostic importance of microvessel density (MVD) in prostate cancer, although the significance after radical prostatectomy is not clear. 
The purpose of this study was to assess the prognostic value of MVD in clinically localized prostatic adenocarcinomas, focusing on moderately differentiated tumours. Materials And Methods: We examined a series of 104 patients treated for presumed organ-confined cancer in the period 1988-94. The area of highest tumour grade was selected from the prostatectomy specimens and vessels were highlighted by staining for factor-VIII-related antigen. MVD was quantitated in the "hot spot" area and related to biochemical failure and clinical recurrence. Results: In moderately differentiated tumours (WHO grade) (n = 66), MVD was associated with preoperative s-PSA and positive surgical margins. In univariate 5-year analysis, microvessel density (MVDmean > 122/mm2, median) (p = 0.0074), s-PSA, tumour dimension, capsular penetration, seminal vesicle invasion and positive surgical margins were all significant predictors of biochemical failure, while MVDmean (p = 0.0084) was the only statistically significant predictor of clinical recurrence. In multivariate Cox analysis, MVDmean (p = 0.0003), capsular penetration and tumour dimension remained as independent predictors of biochemical failure. Conclusion: Assessment of MVD in moderately differentiated prostatic adenocarcinomas may improve the prognostic stratification of patients after radical prostatectomy. abstract_id: PUBMED:19681901 The independent value of tumour volume in a contemporary cohort of men treated with radical prostatectomy for clinically localized disease. Objective: To determine if prostate tumour volume is an independent prognostic factor in a contemporary cohort of men who had a radical prostatectomy (RP) for clinically localized disease, as the effect of tumour volume on prostate cancer outcomes has not been consistently shown in the era of widespread screening with prostate-specific antigen (PSA). Patients And Methods: The study included 856 men who had RP from 1998 to 2007 for localized prostate cancer. Tumour volume based on pathology was analysed as a continuous and categorized (<0.26, 0.26-0.50, 0.51-1.00, 1.01-2.00, 2.01-4.00, >4.00 mL) variable using Cox proportional hazards regression and Kaplan-Meier analysis. A multivariable analysis was also conducted controlling for PSA level, Gleason grade, surgical margins, and pathological stage. Results: Tumour volume had a positive association with grade and stage, but did not correlate with biochemical recurrence-free survival on univariate analysis as a continuous variable (hazard ratio 1.00, P = 0.09), and was only statistically significant for volumes of >4 mL as a categorical variable. No tumour volume was an independent predictor of prostate cancer recurrence on multivariate analysis. There was no difference between tumour volume and time to cancer recurrence for organ-confined tumours using Kaplan-Meier analysis. In low-risk patients (PSA level <10 ng/mL, Gleason score ≤6, clinical stage T1c/T2a) tumour volume did not correlate with biochemical recurrence-free survival in univariate or multivariable analysis. Conclusions: There is no evidence that tumour volume is an independent predictor of prostate cancer outcome and it should not be considered as a marker of tumour risk, behaviour or prognosis. abstract_id: PUBMED:29707790 Time of metastatic disease presentation and volume of disease are prognostic for metastatic hormone sensitive prostate cancer (mHSPC).
Background: Currently, there is no universally accepted prognostic classification for patients (pts) with metastatic hormone sensitive prostate cancer (mHSPC) treated with androgen deprivation therapy (ADT). Subgroup analyses demonstrated that pts with low volume (LV) mHSPC, per the CHAARTED trial definition, and those who relapse after prior local therapy (PLT) have longer overall survival (OS) compared to high volume (HV) and de-novo (DN) disease, respectively. Using a hospital-based registry, we aimed to assess whether a classification based on time of metastatic disease presentation (PLT vs DN) and disease volume (LV vs HV) is prognostic for mHSPC pts treated with ADT. Methods: A retrospective cohort of consecutive patients with mHSPC treated with ADT between 1990 and 2013 was selected from the prospectively collected Dana-Farber Cancer Institute database and categorized as DN or PLT and HV or LV, at time of ADT start. Primary and secondary endpoints were OS and time to castration-resistant prostate cancer (CRPC), respectively, which were measured from date of ADT start using the Kaplan-Meier method. Multivariable Cox proportional hazards models using known prognostic factors were used. Results: The analytical cohort consisted of 436 patients. The median OS and time to CRPC for PLT/LV were 92.4 (95%CI: 80.4-127.2) and 25.6 (95%CI: 21-35.7) months, and 43.2 (95%CI: 37.2-56.4) and 12.2 (95%CI: 9.8-14.8) months for DN/HV, respectively, whereas intermediate values were observed for PLT/HV and DN/LV. A robust gradient for both outcomes was observed (Trend test P < 0.0001) in the four groups. In a multivariable analysis, DN presentation, HV, and cancer-related pain were independent prognostic factors. Conclusions: In our hospital-based registry, time of metastatic presentation and disease volume were prognostic for mHSPC pts treated with ADT. This simple prognostic classification system can aid patient counseling and future trial design. abstract_id: PUBMED:19117060 Tumour growth fraction measured by immunohistochemical staining of Ki67 is an independent prognostic factor in preoperative prostate biopsies with small-volume or low-grade prostate cancer. Accurate prognostic parameters in prostate biopsies are needed to better counsel individual patients with prostate cancer. We evaluated the prognostic impact of morphologic and immunohistochemical parameters in preoperative prostate cancer biopsies. A consecutive series of prostate biopsies of 279 men (72% with clinical stage T1c and 23% with T2) who subsequently underwent radical prostatectomy was prospectively analysed for Gleason score, number and percentage of positive cores (NPC, PPC), total percentage of biopsy tissue with tumour (TPT), maximum tumour percentage per core (MTP), and expression of Ki67, Bcl-2 and p53. All biopsy features were significantly associated with at least one feature of the radical prostatectomy specimen. pT stage was independently predicted by PSA, seminal vesicle invasion by Ki67 LI, positive margins by PSA and MTP, large tumour diameter by PSA and PPC, and Gleason score by biopsy Gleason score, MTP, and Ki67 LI, respectively. Biopsy Gleason score, NPC (1 vs. >1), TPT (<7 vs. ≥7%), and Ki67 LI (<10 vs. ≥10%) were significant predictors of biochemical recurrence after radical prostatectomy (p < 0.01, each). Ki67 LI was the only independent prognostic factor in cases of low TPT (<7%) or low Gleason score (<7), the hazard ratios being 6.76 and 6.44, respectively.
In summary, preoperative Gleason score, NPC, TPT and Ki67 LI significantly predict the risk of recurrence after radical prostatectomy, and Ki67 is an independent prognosticator in biopsies with low-volume or low-grade prostate cancer. Analysis of Ki67 LI in these biopsies may help to better identify patients with clinically insignificant prostate cancer. abstract_id: PUBMED:9690662 Prognostic significance of neuroendocrine differentiation in clinically localized prostatic carcinoma. Several recent studies have focused attention on neuroendocrine differentiation (NED) in prostatic carcinoma (PC). Clinical studies have shown PC with NED to behave aggressively and to be associated with poor prognosis. To evaluate NED as an independent prognostic factor, we conducted a retrospective study of 87 patients with clinically localized PC who underwent radical prostatectomy. The presence of neuroendocrine tumor cells was confirmed by positive immunostaining for serotonin, chromogranin A, and neuron-specific enolase. The correlation between NED and disease progression was assessed. Progression of cancer was demonstrated in 35 (40%) of the patients. The presence of NED was confirmed in 60 (69%) of cases, and of these patients 26 (43%) manifested evidence of disease progression. Disease progression was also manifest in nine (33%) of the 27 patients without evidence of NED. Thus, in the setting of clinically localized carcinoma of the prostate, NED does not appear to be a statistically significant independent prognostic factor. abstract_id: PUBMED:30132067 Importance of metastatic volume in prognostic models to predict survival in newly diagnosed metastatic prostate cancer. Purpose: To explore the prognostic importance of metastatic volume in a contemporary daily practice cohort of patients with newly diagnosed metastatic hormone-naive prostate cancer (mHNPC) and to develop a pragmatic prognostic model to predict survival for these patients. Methods: Since 2014, 113 patients with newly diagnosed mHNPC were prospectively registered. Statistical analysis was performed using SPSS 25.0™ with a two-sided p value < 0.05 indicating statistical significance. Univariate and multivariate Cox regression analyses were performed to identify prognostic risk factors. The Kaplan-Meier method with log-rank statistics was used to analyze differences in survival between the prognostic groups. Model performance was assessed using the Concordance-index (C-index) and cross-validated in R v3.4.1. High-volume mHNPC (HVD) was defined as the presence of visceral metastasis or ≥ 4 bone metastases with ≥ 1 appendicular lesion. Results: Multivariate analysis identified HVD (p = 0.047) and elevated alkaline phosphatase (ALP) (p = 0.018) as independent prognostic risk factors for overall survival (OS). Consequently, three prognostic groups were created: a good (no risk factors), intermediate (1 risk factor) and poor prognosis group (2 risk factors). Median OS for the good, intermediate and poor prognosis groups was not reached, 73 months, and 20 months (95% CI 9-31 months), respectively (p < 0.001; C-index 0.78). Conclusions: We developed a pragmatic and qualitative prognostic model consisting of three prognostic risk groups for OS in a daily practice cohort of patients with newly diagnosed mHNPC. Independent prognostic risk factors included in the model were HVD and abnormal ALP. Answer: The role of tumor volume as an independent prognostic factor in clinically localized prostate cancer is debated.
Some studies have found tumor volume to be a significant prognostic factor, while others have not. A study by PUBMED:36497304 found that a tumor volume over 2.8 cc was an independent predictive factor for biochemical recurrence (BCR) in Japanese patients who underwent radical prostatectomy (RP). They established a novel risk model using tumor volume and location, which significantly correlated with progression-free survival (PFS) in low- to intermediate-risk patients, but not in high-risk patients. Conversely, research by PUBMED:15247716 concluded that tumor volume did not provide independent prognostic information when considered alongside Gleason score and pathological stage in multivariate analysis. They suggested that there is little reason to measure tumor volume routinely in RP specimens. Another study, PUBMED:19681901, in a contemporary cohort of men treated with RP for localized prostate cancer, found that tumor volume did not correlate with biochemical recurrence-free survival on univariate analysis and was not an independent predictor of prostate cancer recurrence on multivariate analysis. Similarly, PUBMED:30132067 developed a prognostic model for newly diagnosed metastatic hormone-naive prostate cancer, which included high-volume disease as an independent prognostic risk factor, but this study focused on metastatic rather than localized disease. In summary, the evidence is mixed regarding the independent prognostic value of tumor volume in clinically localized prostate cancer. Some studies have identified it as a significant factor (PUBMED:36497304), while others have found it not to be independently predictive when other factors such as Gleason score and pathological stage are considered (PUBMED:15247716, PUBMED:19681901). Therefore, the prognostic significance of tumor volume may vary depending on the study population and the other prognostic factors included in the analysis.
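To illustrate how an ROC-derived threshold such as the 2.8 cc tumor-volume cutoff reported in PUBMED:36497304 is typically obtained, the following is a minimal sketch using the Youden index on simulated data; the variable names and data are hypothetical and this is not the cited study's code.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: tumor volume (cc) and biochemical recurrence status (1 = BCR) for 500 patients
rng = np.random.default_rng(0)
tumor_volume_cc = rng.lognormal(mean=0.8, sigma=0.7, size=500)
p_bcr = 1.0 / (1.0 + np.exp(-(tumor_volume_cc - 3.0)))   # assumed risk relationship, for simulation only
bcr_event = rng.binomial(1, p_bcr)

# ROC curve of volume as a predictor of BCR; the Youden index (TPR - FPR) selects the optimal cutoff
fpr, tpr, thresholds = roc_curve(bcr_event, tumor_volume_cc)
youden_j = tpr - fpr
optimal_cutoff = thresholds[youden_j.argmax()]
print(f"AUC = {roc_auc_score(bcr_event, tumor_volume_cc):.2f}; optimal cutoff = {optimal_cutoff:.2f} cc")

A cutoff found this way can then be used to dichotomize tumor volume (above vs below the threshold), which is how the ≥2.8 cc factor enters the univariate and multivariate Cox analyses described in PUBMED:36497304.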
Instruction: Patient-reported outcomes and surgical triage: A gap in patient-centered care? Abstracts: abstract_id: PUBMED:34801218 Patient-reported outcomes: Is this the missing link in patient-centered perioperative care? Patient-reported outcomes (PROs) have been increasingly recognized as valuable information for delivery of optimal perioperative care to high-risk surgical patients in recent years. However, progress from clinical research on PROs has not been widely adopted in routine patient care. This review discusses the current concepts and practice status regarding PROs and addresses the missing links from research to practice adoption to further improve patient's experiences and clinical outcomes in perioperative care. Insufficient empirical research on appropriate PROs and its methodologies, insufficient implementation research to solve the practical issues, and insufficient data collection methods and experiences on ePROs are also discussed. Future research agenda should focus on evidence-supported, PRO-based symptom monitoring systems for early diagnosis and management of impending compromised clinical outcomes. abstract_id: PUBMED:27271699 Patient-reported outcomes and surgical triage: A gap in patient-centered care? Purpose: To help address wait times for elective surgery, British Columbia has implemented a triaging system that assigns priority levels to patients based on their diagnoses. The extent to which these priority levels concords with patients' assessment of their health status is not known. The purpose of this study was to measure the association between the priority levels assigned to patients and their patient-reported outcomes data collected at the time of being enrolled on the surgical wait list. Methods: Patients waiting for elective surgery in the Vancouver Coastal Health Authority were sampled. Participants completed a set of generic and condition-specific patient-reported outcome instruments, including: the EQ-5D(3L) (general health), PEG (pain), and the PHQ-9 (depression). A multivariate ordered logistic model was used to regress patient-reported outcome values on the priority level assigned at the time of wait list registration. Results: A total of 2725 participants completed the survey package (response rate 49 %). Using the EQ-5D(3L), 63 % reported having problems with pain or discomfort, 41 % problems performing usual activities, 36 % problems with depression or anxiety, 28 % problems with mobility, and 8 % a problem with self-care. The results from the ordered logistic model indicated very little association between the patient-reported outcomes and wait list priority levels, when adjusted for patient factors. Conclusions: This study observed no relationship between patients' self-reported health status and their assigned priority level for elective surgery. A more patient-centered approach to triaging patients for surgical treatment would incorporate patients' perspective in surgical wait list prioritization systems. abstract_id: PUBMED:35989058 Patient-Reported Outcomes and the Patient-Reported Outcome Measurement Information System of Functional Medicine Care and Research. The functional medicine model of care is focused on patient-centered rather than disease-centered care. Patient-centered care incorporates the patient's voice or experience of their condition alongside conventional biological factors to provide a "more complete" account of health. 
PROMIS Global, an NIH-validated patient-reported outcome (PRO) measure that evaluates the health-related quality of life, can be incorporated within the functional medicine model of care to evaluate self-reported physical, mental and social well-being across various conditions and guide personalized management strategies. Proper incorporation of PROMIS Global into clinical care and research is warranted to expand the available evidence base. abstract_id: PUBMED:35180494 Enhancing Patient-Centered Surgical Care With Mobile Health Technology. From smartphones or wearables to portable physiologic sensors and apps, healthcare is witnessing an exponential growth in mHealth-digital health tools used to support medical and surgical care, as well as public health. In surgery, there is interest in harnessing the capabilities of mHealth to improve the quality of patient-centered care delivery. Digitally delivered surveys have enhanced patient-reported outcome measurement and patient engagement throughout care. Wearable devices and sensors have allowed for the assessment of physical fitness before surgery and during recovery. Smartphone-based digital phenotyping has introduced novel methods of integrating multiple data streams (accelerometer, global positioning system, call and text logs) to create multidimensional digital health footprints for patients following surgery. Yet, with all the technological sophistication and 'big data' mHealth provides, widespread implementation has been elusive. Do clinicians and patients find these data valuable or clinically actionable? How can mHealth become integrated into the day-to-day workflows of surgical systems? Do these data represent opportunities to address disparities of care or worsen them? In this review, we discuss experiences and future opportunities to use mHealth to enhance patient-centered surgical care. abstract_id: PUBMED:24758551 Quantitative challenges facing patient-centered outcomes research. Patient-centered outcomes research collects and analyzes data from patients and other stakeholders to improve health care delivery and outcomes and guide health care decisions. However, there are a number of challenges in conducting quantitative analyses of patient-centered data. This article provides an overview of the analytical challenges and describes approaches to consider to overcome the challenges, as well as directions for future development. abstract_id: PUBMED:28647074 Patient-Reported Outcomes in Thoracic Surgery. The existing thoracic surgical literature contains several retrospective and observational studies that include patient-reported outcomes. To deliver true patient-centered care, it will be necessary to universally gather patient-reported outcomes prospectively, including them in routine patient care, clinical registries, and clinical trials. abstract_id: PUBMED:33282398 Patient reported outcomes: integration into clinical practices. Patient-centered care is a growing focus of research and modern surgical practice. To this end, there has been an ever-increasing utilization of patient reported outcomes (PRO) and health-related quality of life metrics (HR-QOL) in thoracic surgery research. Here we describe reasons and methods for integration of PRO measurement into routine thoracic surgical practice, commonly utilized PRO measurement instruments, and several examples of successful integration. abstract_id: PUBMED:30370443 Patient-Centered Outcomes in Bladder Cancer. 
Purpose Of Review: To summarize current knowledge on patient-prioritized outcomes for their bladder cancer care. Recent Findings: Patient-centered outcomes research seeks to help patients identify the right treatment for the right patient at the right time in their care. As such, patient-centered outcomes research relies on studying a treatment's impact on patient-centered outcomes. Some outcomes, like survival, are commonly prioritized by patients and by clinical experts. Patients often place greater emphasis than experts on quality of life outcomes. Thus, many patient-centered outcomes are also patient-reported outcomes. Unique domains that are often prioritized by patients, but overlooked by experts, include the costs and financial impact of care, anxiety, and depression related to a health condition, and the impact of a condition or its treatment on a caregiver or loved one. Patient-centered outcomes are realizing greater recognition for their innate importance and potential to augment the impact of research studies. Although patient-centered outcomes are often patient-reported outcomes, this is not universal. Unique to bladder cancer, the availability of a research-oriented Patient Survey Network intended to identify research questions that are important to patients may be an opportunity to broadly solicit input on patient-centered outcomes for bladder cancer research questions. abstract_id: PUBMED:38401954 GLOSSARY FOR DENTAL PATIENT-CENTERED OUTCOMES. Dental patient-centered outcomes can improve the relevance of clinical study results to dental patients and generate evidence to optimize health outcomes for dental patients. Dental patient-reported outcomes (dPROs) are of great importance to patient-centered dental care. They can be used to evaluate the health outcomes of an individual patient about the impact of oral diseases and treatment, and to assess the quality of oral health care delivery for a health care entity. dPROs are measured with dental patient-reported outcome measures (dPROMs). dPROMs should be validated and tested before wider dissemination and application to ensure that they can accurately capture the intended dPROs. Evidence suggests inadequate dPRO usage among dental trials, as well as potential flaws in some existing dPROMs. This Glossary presents a collection of main terms in dental patient-centered outcomes to help clinicians and researchers read and understand patient-centered clinical studies in dentistry. abstract_id: PUBMED:27044954 Incorporating Patient-Reported Outcomes Into Health Care To Engage Patients And Enhance Care. The provision of patient-centered care requires a health care environment that fosters engagement between patients and their health care team. One way to encourage patient-centered care is to incorporate patient-reported outcomes into clinical settings. Collecting these outcomes in routine care ensures that important information only the patient can provide is captured. This provides insights into patients' experiences of symptoms, quality of life, and functioning; values and preferences; and goals for health care. Previously embraced in the research realm, patient-reported outcomes have started to play a role in successful shared decision making, which can enhance the safe and effective delivery of health care. We examine the opportunities for using patient-reported outcomes to enhance care delivery and outcomes as health care information needs and technology platforms change. 
We highlight emerging practices in which patient-reported outcomes provide value to patients and clinicians and improve care delivery. Finally, we examine present and future challenges to maximizing the use of patient-reported outcomes in the clinic. Answer: The concept of patient-reported outcomes (PROs) is increasingly recognized as an essential component of patient-centered care, particularly in the context of perioperative care for high-risk surgical patients (PUBMED:34801218). However, there is evidence of a gap between the use of PROs and surgical triage systems, which can impact the delivery of patient-centered care. A study conducted in British Columbia, which implemented a triaging system to address wait times for elective surgery, found that there was very little association between the PROs collected at the time of waitlist registration and the priority levels assigned to patients (PUBMED:27271699). This suggests that the current triage system may not adequately reflect patients' self-reported health status, indicating a gap in incorporating the patient's perspective into surgical waitlist prioritization. The functional medicine model of care, which emphasizes patient-centered care, suggests that incorporating PRO measures like PROMIS Global can provide a more complete account of health by evaluating self-reported physical, mental, and social well-being (PUBMED:35989058). This approach aligns with the idea that patient-centered care should include the patient's voice or experience of their condition. Despite the potential benefits of integrating PROs into clinical practice, widespread implementation has been elusive, and there are challenges in conducting quantitative analyses of patient-centered data (PUBMED:24758551). Moreover, while there is interest in using mobile health technology to enhance patient-centered surgical care, questions remain about the value and clinical actionability of the data provided by such technologies (PUBMED:35180494). In thoracic surgery, the literature suggests that to deliver true patient-centered care, it is necessary to gather PROs prospectively and include them in routine patient care, clinical registries, and clinical trials (PUBMED:28647074). Similarly, in bladder cancer care, patient-centered outcomes research emphasizes the importance of studying a treatment's impact on outcomes that are often patient-reported and prioritized by patients, such as quality of life, costs, and the impact on caregivers (PUBMED:30370443). In conclusion, while the importance of PROs in enhancing patient-centered care is recognized, there is a gap in integrating these outcomes into surgical triage and routine clinical practice. Addressing this gap requires a concerted effort to incorporate the patient's perspective into healthcare decision-making processes, including surgical waitlist prioritization and the broader healthcare delivery system.
Instruction: Outcome analysis of initial neonatal shunts: does the valve make a difference? Abstracts: abstract_id: PUBMED:15540684 Multivariate analysis of technical variables in pancreaticoduodenectomy: the effect of pylorus preservation and retromesenteric jejunal position on early outcome. Background: To evaluate the effect of technical modifications to pancreaticoduodenectomy (PD) on postoperative outcome, we established a register of all patients undergoing PD at Victoria General Hospital (Queen Elizabeth II Health Sciences Centre), a tertiary care, university-affiliated hospital. Patients And Method: Data from 78 consecutive patients who underwent PD from January 1998 through November 2000 were collected for univariate and multivariate analyses of clinical and technical factors on early outcome after PD, including duration of gastric stasis, development of complications and length of hospital stay. Results: Two patients (2.6%) died; complications were recorded in 43 (55%). Upon univariate analysis, 3 factors (a diagnosis of chronic pancreatitis, pylorus preservation, and route of the jejunal limb) significantly affected duration of gastric stasis; but on multivariate analysis, only pylorus preservation and jejunal-limb route remained significant. Retromesenteric jejunal-limb placement was associated with longer periods of gastric stasis (mean 11.9 d, standard deviation [SD] 8.1 d) than the antemesenteric (retrocolic) route (mean 7.2, SD 3.6 d; p < 0.05); likewise pyloric preservation (mean gastric stasis 10.4 d, SD 5.9 d) compared with resection of the pylorus (mean 7.0 d, SD 3.2 d; p < 0.05). Pancreatic leaks occurred in 18% of retromesenteric and 8% of antemesenteric reconstructions (p = 0.3). Fewer patients with mucomucosal pancreaticojejunostomy suffered complications than those with invaginated anastomoses, but their hospital stays were similar in length. Conclusion: Route of the jejunal efferent limb and preservation of the pylorus are independent technical variables affecting early outcome after PD. abstract_id: PUBMED:22947627 Outcome of myocardial revascularisation in Iceland Introduction: In Iceland over 3500 coronary artery bypass operations have been performed, both On-Pump, using cardiopulmonary bypass and Off-Pump, surgery on a beating heart. The aim was to study their outcome. Material And Methods: This was a retrospective study on 720 consecutive patients who underwent surgical revascularisation at Landspítali-The National University Hospital of Iceland between 2002-2006; 513 On-Pump and 207 Off-Pump patients. Complications and operative mortality (<30 days) were compared between the groups and predictors of survival identified using multivariate analysis. Results: The number of males was significantly higher in the On-Pump group, but other risk factors of coronary artery disease, including age and high body mass index, were comparable, as were the number of distal anastomoses and EuroSCORE. The Off-Pump procedure took 25 minutes longer on average and chest tube output was significantly increased, but the amount of transfusions administered was similar. The rate of minor complications was higher in the On-Pump group. Of the major complications, stroke rates were similar in both groups (2%) but the rate of reoperation for bleeding was higher in the On-Pump group. Mean length of hospital stay was one day longer for On-Pump patients but operative mortality was similar for both groups (4% vs. 3%, p=0.68) as was 5 year survival (92% in both groups). 
In multivariate analysis, both EuroSCORE and age predicted operative mortality and long-term survival, but type of surgery (On-Pump vs. Off-Pump) was not an independent predictor. Conclusions: Outcome of myocardial revascularisation in Iceland is good as regards operative mortality and long term survival. This applies to both conventional On-Pump and Off-Pump procedures. abstract_id: PUBMED:7955195 A comparison of internal mammary artery and saphenous vein grafts after coronary artery bypass surgery. No difference in 1-year occlusion rates and clinical outcome. CABADAS Research Group of the Interuniversity Cardiology Institute of The Netherlands. Background: Superior patency rates for internal mammary artery (IMA) grafts compared with vein coronary bypass grafts have been demonstrated by retrospective studies. This difference may have been affected by selection bias of patients and coronary arteries for IMA grafting. Methods And Results: To estimate the difference between IMA and vein grafts, we analyzed graft patency data of 912 patients who entered a randomized clinical drug trial. In this trial, 494 patients received both IMA and vein grafts (group 1) and 418 only vein grafts (group 2). Occlusion rates of IMA grafts and IMA plus vein grafts in group 1 were compared with those of vein grafts in group 2. Multivariate analysis was used to compare occlusion rates of IMA and vein grafts while other variables related to graft patency were controlled for. In addition, 1-year clinical outcome was assessed by the incidence of myocardial infarction, thrombosis, major bleeding, and death. Occlusion rates of distal anastomoses in group 1 versus group 2 were 5.4% (IMA grafts) versus 12.7% (vein grafts) (P < .0001) and 10.4% (IMA plus vein grafts) versus 12.7% (vein grafts) (P = .14). There was no difference in adjusted risk of occlusion between IMA grafts and vein grafts (P = .089). Type and location of distal anastomosis and lumen diameter of the grafted coronary artery were shown to be predictors of occlusion. Clinical events occurred in 17.8% (group 1) and 16.0% (group 2) of patients (P = .53). Conclusions: The observed difference in 1-year occlusion rates between IMA and vein grafts can be explained by a maldistribution of graft characteristics by selection of coronary arteries for IMA grafting rather than being ascribed to graft material. One-year clinical outcome is not improved by IMA grafting. abstract_id: PUBMED:17999816 With adequate supervision, the grade of the operating surgeon is not a determinant of outcome for patients undergoing urgent colorectal surgery. Introduction: It is essential that higher surgical trainees (HSTs) obtain adequate emergency operative experience without compromising patient outcome. The aim of this study was to compare the outcomes of patients operated by HSTs with those operated by consultants and to look at the effect of consultant supervision. Patients And Methods: A retrospective analysis of 362 patients who underwent urgent colorectal surgery was performed. The primary outcome was 30-day mortality. Secondary outcomes were intra-operative and postoperative surgery-specific and systemic complications, and delayed complications. Results: Comparison of the patients operated by a consultant (n = 190) and an HST (n = 172) as the primary surgeon revealed no significant difference between the two groups for age, gender, ASA status or indication for surgery.
There was a difference in the type of procedure performed (left-sided resections: consultants 122/190, HST 91/172; P = 0.050). There was no difference between the two groups for the primary and secondary outcomes. However, HSTs operating unsupervised performed significantly fewer primary anastomoses for left-sided resections (P = 0.019) and had more surgery specific complications (P = 0.028) than those supervised by a consultant. Conclusions: HSTs can perform emergency colorectal surgery with similar outcomes to their consultants, but adequate consultant supervision is vital to achieving these results. abstract_id: PUBMED:15548185 Hospital outcome after aorta-radial versus internal thoracic artery-radial artery grafts. Background: We researched our data to determine whether use of radial artery (RA) led to similar hospital morbidity as use of pedicled internal thoracic artery (ITA) with vein grafts. We also investigated if use of RA, different RA operative techniques, or number of inflow grafts were predictors for hospital outcome. Method: Retrospectively the hospital outcome of the first 512 patients with RAs (RA group) was compared with 108 matched patients with left ITA (LITA) and vein grafts (LITA control group). Two subgroups of RA operative techniques were further analyzed: 327 patients with RA directly from aorta (aorta-RA group), and 185 patients with RA from ITA, as a composite graft, (ITA-RA group). Results: Hospital outcome of the RA group was similar to that of the LITA control group. When all ischemic events (IE) were grouped together, univariate analysis showed that aorta-RA group resulted in less IE than the ITA-RA group (2.1% versus 5.9%, respectively, p = 0.025). Number of inflow grafts did not influence IE. Multivariate analysis, however, did not show that technique of proximal RA anastomosis or number of inflow grafts were predictors for IE. Conclusions: Hospital outcome after the use of the RA is similar to that of LITA with vein grafts. Univariate analysis shows less IE after direct aorta-RA anastomoses, but multivariate analysis did not show that technique of proximal RA anastomosis and number of inflow grafts are important predictors for hospital outcome. abstract_id: PUBMED:32668486 Robotic Excision of Choledochal Cyst with Hepaticoduodenostomy (HD): Report of HD Technique, Initial Experience, and Early Outcome. Introduction: Minimal access surgical approach to choledochal cyst (CC) is becoming a standard of care in pediatric age group. Robotic-assisted excision of CC is increasingly being practiced at centers which have access to the system. We present our experience and technique of hepaticoduodenostomy (HD). Over all initial experience, short-term outcomes and complications are also presented and discussed. Materials And Methods: Patients with CC and undergoing robotic excision were retrospectively studied. Patients with active cholangitis, liver dysfunction, and perforated CC were excluded for robotic procedures. All included patients were preoperatively evaluated as per the defined protocol. They underwent excision of CC with HD. The duodenal anastomosis was done after limited mobilization and emphasis was laid on anastomosing the distal D2 part to the common hepatic duct. This prevents bile reflux into stomach. The follow-up evaluation was done for these patients. Hepatobiliary iminodiacetic acid (HIDA) scan for duodenogastric reflux (DGR) was done only if patients reported symptoms related to it. Results: A total of 19 patients (10 females) were studied. 
The mean age was 84 months. Type 1b was present in 12 patients and the rest were type IVb. Complete cyst excision with HD was done in all patients except conversion to open in one patient. The mean surgical time was 170 ± 40 minutes with console time of 140 ± 20 minutes. Median follow-up duration was 2.5 years (range: 0.5-3.5 years). HIDA scan was done in five patients who had reported epigastric pain. Of these five, one patient had a positive DGR. He is on conservative management. Conclusion: Robot-assisted CC excision with HD is feasible as proven by the outcome of 19 patients presented in this series. HD is to be done away from the pylorus in the distal part of the down-curving D2. This particular step prevents DGR and is the most important point of technique in doing HD. The presented series is the first report of robotic excision of CC with HD. The robot is a facilitator for complex and difficult operations such as CC excision and HD. abstract_id: PUBMED:25493038 Is hand sewing comparable with stapling for anastomotic leakage after esophagectomy? A meta-analysis. Aim: To compare the outcome of hand sewing and stapling for anastomotic leakage after esophagectomy. Methods: A rigorous study protocol was established according to the recommendations of the Cochrane Collaboration. An electronic database search, hand search, and reference search were used to retrieve all randomized controlled trials that compared hand-sewn and mechanical esophagogastric anastomoses. Results: This study included 15 randomized controlled trials with a total of 2337 patients. The results revealed that there was no significant difference in the incidence of anastomotic leakage between the methods [relative risk (RR) = 0.77, 95% confidence interval (CI): 0.57-1.04; P = 0.09], but a subgroup analysis yielded a significant difference for the sutured layer and year of publication (Ps < 0.05). There was also no significant difference in the incidence of postoperative mortality (RR = 1.52, 95%CI: 0.97-2.40; P = 0.07). However, the anastomotic strictures rate was increased in the stapler group compared with the hand-sewn group (RR = 1.45, 95%CI: 1.11-1.91; P < 0.01) in the end-to-side subgroup, while the incidence of anastomotic strictures was decreased (RR = 0.34, 95%CI: 0.16-0.76; P < 0.01) in the side-to-side subgroup. Conclusion: The stapler reduces the anastomotic leakage rate compared with hand sewing. End-to-side stapling increases the risk of anastomotic strictures, but side-to-side stapling decreases the risk. abstract_id: PUBMED:34969507 Timing and outcome of right- vs left-sided colonic anastomotic leaks: Is there a difference? Background: Anastomotic leaks (AL) contribute to postoperative mortality, prolonged hospitalization, and increased health care costs. While left-sided AL (LAL) are well described in the literature, there is a paucity of studies on outcomes and management of right-sided AL (RAL). This study aimed to compare the timing of RAL versus LAL, and the variable diagnosis, management and outcomes of RAL versus LAL. We hypothesized that the timing of RAL may be later compared to LAL and may result in worse overall outcomes. Methods: Patients who underwent curative intent surgery for neoplastic disease from January 1995 to December 2015 were included. Patients that underwent an anastomosis below the peritoneal reflection, neoadjuvant treatment, fecal diversion, previous colectomy/anastomosis, multiple anastomoses, and patients with inflammatory bowel disease or hereditary colorectal cancer syndromes were excluded.
Patient demographics, neoplastic data, operative data, time to AL, methods utilized for diagnosis of AL, and management of AL were collected. The primary endpoint was timing of AL, and secondary endpoints were management and outcome based on RAL versus LAL. RAL and LAL were analyzed and compared using Chi-squared and categorical variables were expressed as number (percentage) and continuous variables expressed as median (interquartile range). Results: A total of 2223 patients underwent oncologic resection for colonic neoplasia (1457 right sided and 766 left sided anastomoses). 67% of patients were male and median age was 69 years (range, 34-91). There were 48 total AL events (2.16%): 26 RAL (1.78%) and 22 LAL (2.87%). There was no statistical difference in leak rates between RAL and LAL and no difference in time to diagnosis or management (Table 1). RAL had significantly decreased operative time (p = 0.016), decreased intraoperative blood loss (p = 0.002), and increased diagnosis by CT/plain radiograph (p = 0.04). All patients that underwent surgery for leak had some form of fecal diversion performed. Morbidity and mortality were comparable between groups (p = 0.70; p = 1.0). Conclusions: This study found overall very low AL rates with comparable timing of RAL and LAL, and no difference in management or outcome of RAL vs. LAL. These findings are informative for patient and surgeon expectations before and after surgery and when AL is suspected. abstract_id: PUBMED:11727055 Colonic atresia: surgical management and outcome. Colonic atresia (CA) is a very rare cause of intestinal obstruction, and little information has been available about the management and predictors of outcome. A retrospective clinical trial was performed to delineate the clinical characteristics of CA with special emphasis on surgical treatment and factors affecting outcome. Children with CA who were treated in our department between 1977 and 1998 were reviewed: 14 boys and 4 girls aged 1 day to 5 months. All but 2 referred patients and 1 with prenatal diagnosis presented with intestinal obstruction. Plain abdominal X-ray films showed findings of intestinal obstruction in 14 cases; a barium enema demonstrated a distal atretic segment and microcolon in 4. The types of atresia were IIIa (n=9), I (n=6), and II (n=3). Type IIIa atresias were located proximal to the splenic flexure (n=8) and in the sigmoid colon (n=1), type I atresias were encountered throughout the colon; and all type II atresias were proximal to the hepatic flexure. Associated anomalies were multiple small-intestinal atresias (MSIA) (n=4), gastroschisis (GS) (n=2), pyloric atresia (n=1), Hirschsprung's disease (n=1), and complex urologic abnormalities (n=1). The initial management was an enterostomy in 15 patients (83%), including 2 referred and 2 with GS, and primary anastomosis in the remaining 3 (17%). Secondary procedures were the Santulli operation (n=2), colostomy closure and recolostomy followed by a Swenson operation (n=1), sacroabdominoperineal pull-through (n=1), and colostomy closure (n=1). Leakage was encountered in all primarily anastomosed patients. The overall mortality was 61%. Deaths occurred in patients with associated major anomalies (GS 2, MSIA 3, pyloric atresia 1) (55%) and in 3 patients who were initially managed by primary anastomosis (27%). Two additional patients died of sudden infant death syndrome (18%). Type I atresia was more common than in previously reported series and was associated with proximal multiple atresias. 
The initial management of CA should be prompt decompression of the intestine by an ostomy procedure, preferably end- or double-barrel. The type of surgery (primary anastomosis without prior colostomy) and associated abnormalities are the major determinants of poor outcome. abstract_id: PUBMED:9556242 Handsewn vs. stapled anastomoses in colon and rectal surgery: a meta-analysis. Purpose: Trials comparing handsewn with stapled anastomoses in colon and rectal surgery have not found statistical differences. Despite this, authors have differed in their conclusions as to which technique is superior. To help determine whether differences in patient outcomes are present, a meta-analysis of all trials was performed. Method: A meta-analysis of all randomized, controlled trials assessing handsewn and stapled colon and rectal anastomoses was done using a fixed-effects model. Outcome variables were mortality, technical problems, leak rates, wound infections, strictures, and cancer recurrence. Outcomes were assessed for all anastomoses involving the colon and for the subset of colorectal anastomoses. Results: Thirteen distinct trials met the inclusion criteria. Intraoperative technical problems were more likely to occur with stapled than with handsewn anastomoses for all anastomoses (P < 0.0001) and for colorectal anastomoses (P < 0.001). Strictures were also more common following stapled anastomoses (P = 0.015 for all anastomoses; P = 0.028 for colorectal anastomoses). All other outcome measures, including mortality, clinical and radiologic leak rates, and local cancer recurrence rates showed no difference between groups. Conclusion: Although intraoperative technical problems and postoperative strictures were more common with stapled anastomoses, other outcome measures showed no difference between groups. Thus, both techniques are effective, and the choice may be based on personal preference. Answer: The provided abstracts do not contain information directly related to the outcome analysis of initial neonatal shunts or the impact of the valve on such outcomes. The abstracts focus on various surgical procedures and outcomes in different contexts, such as pancreaticoduodenectomy, myocardial revascularization, coronary artery bypass surgery, urgent colorectal surgery, aorta-radial versus internal thoracic artery-radial artery grafts, robotic excision of choledochal cyst with hepaticoduodenostomy, hand sewing versus stapling for anastomotic leakage after esophagectomy, timing and outcome of colonic anastomotic leaks, surgical management and outcome of colonic atresia, and handsewn versus stapled anastomoses in colon and rectal surgery. None of these studies address the specific question of neonatal shunt valve outcomes. Therefore, based on the provided abstracts, it is not possible to answer the question about the impact of the valve on the outcome of initial neonatal shunts.
Instruction: Are psychiatrists trained to be leaders in mental health? Abstracts: abstract_id: PUBMED:26129817 Are psychiatrists trained to be leaders in mental health? A survey of early career psychiatrists. Objective: The aim of this study was to investigate how early career psychiatrists in 2014 valued the leadership skill education in their training to become psychiatrists. Method: All psychiatrists who gained Fellowship since 2009 after training in New Zealand or Australia were invited to take part in a survey. Results: Respondents considered themselves not adequately prepared for the leadership, management and administrative tasks and roles they have as psychiatrists, with preparedness for management tasks scoring the lowest. They valued as most useful to have opportunity to practice with a leadership role, to be able to observe 'leaders at work', to have a supervisor with special interests and skills in leadership and management and to have a formal teaching program on leadership and management. They advised teaching to be given throughout the entire 5 years of the training program by experienced leaders. Conclusions: Leadership skills training in the education of psychiatrists should contain both practical experience with leadership and management roles and formal teaching sessions on leadership and management skills development. Suggestions for improvement of the leadership and management skills education in the training of psychiatrists have been formulated. abstract_id: PUBMED:31363437 Family Physicians' Approaches to Mental Health Care and Collaboration with Psychiatrists. Objective This study aimed to determine the proportion of family physicians referring patients to psychiatrists and conducting psychotherapy or mental health consultations themselves. Additionally, the factors affecting family physicians' approaches to dealing with mental health patients were investigated, including referrals to psychiatrists and physicians' views about better management plans for patients with mental health disorders. Method In this cross-sectional observational study, online surveys were distributed, using Google forms, to family physicians in primary healthcare centers and hospitals in Jeddah, Saudi Arabia. The participants were 175 family physicians. A previously developed survey under the name "collaboration between psychologists and primary health care physicians" was adapted to suit the purposes of the present study, by changing the aim of the survey from psychologists to family physicians. Results Physicians who received inter-professional training in a clinical training program were more likely to agree that their education prepared them well for collaboration with psychiatrists, compared to those who did not receive such an education (p<0.001). The younger and less experienced physicians were more likely to carry out psychotherapy and mental health consultations by themselves more often than were the more experienced physicians (33.1% versus 9.7%; p<0.001), it has also been shown that almost 90% of physicians agreed that collaboration with psychiatrists is necessary for the care of their patients, and only a third responded that psychiatrists were accessible if and when they want to consult with them. Conclusions Family and primary care physicians must collaborate with psychiatric professionals in order to provide effective services. 
Moreover, family physicians should receive more education about mental health, and effective communication should be encouraged in order to deliver better care to psychiatric patients in primary healthcare settings. abstract_id: PUBMED:36269511 Faith Leaders' Views on Collaboration with Mental Health Professionals. When faced with experiences of mental struggle, Americans often turn to faith leaders as their first recourse. Although studies have explored religious leaders' mental health literacy, few studies have investigated how religious leaders believe faith communities and mental health professionals should collaborate. The data gathered for this research is from in-depth and focus group interviews with faith leaders from Christian, Jewish, Buddhist, and Sikh communities in South Texas and the Mid-Atlantic region between 2017-2019 (n=67). This research analyzed faith leaders' response to the question "How can mental health professionals and faith communities better work together" by utilizing the flexible coding approach (Deterding and Waters 2018). Four distinct themes emerged from the faith leaders' responses: education, relationship building, external factors, and dismissal. By learning about how faith leaders believe they can better work together with mental health professionals we can help bridge the gap between religion and mental health further by fostering a much-needed dialogue between these two groups. abstract_id: PUBMED:35261319 Pastoral Leaders Perceptions of Mental Health and Relational Concerns within Faith Based Organizations. The purpose of this study was to explore the attitudes, beliefs, and perspectives of pastoral leaders regarding mental health and relational concerns within Faith-Based Organizations (FBO). As a follow-up to a previous study (Moore et al., 2016), the authors intended to gain insight regarding how pastoral leaders view their role within their organizations related to promoting sound mental health and relational health. Utilizing a qualitative description, authors disseminated a survey to 12 pastoral leaders to complete. Three themes emerged from their responses, which included: (1) Defining mental health; (2) The role of pastoral leaders in mental health; and (3) Mental health needs in pastoral leadership. In the study, investigators discuss clinical implications and provide recommendations regarding how pastoral leaders and Faith- Based Organizations may address the topic of mental health and relational health among its constituents. We believe this research is relevant to the readers of this journal as it contributes to a discussion about pastoral leaders and mental health, as well as how pastoral leaders' perception of mental health may impact how they discuss this topic within their own organizations. Furthermore, for readers who are clinicians, this study contributes to the body of knowledge about what pastoral leaders and constituents may need, as one considers opportunities for collaboration. abstract_id: PUBMED:26094114 Managing common mental health problems: contrasting views of primary care physicians and psychiatrists. Background: Recent studies have reported a lack of collaboration and consensus between primary care physicians (PCPs) and psychiatrists. Objective: To compare the views of PCPs and psychiatrists on managing common mental health problems in primary care. Methods: Four focus group interviews were conducted to explore the in-depth opinions of PCPs and psychiatrists in Hong Kong. 
Acceptance of the proposed collaborative strategies from the focus groups was investigated in a questionnaire survey with data from 516 PCPs and 83 psychiatrists working in public and private sectors. Results: In the focus groups, the PCPs explained that several follow-up sessions to build up trust and enable the patients to accept their mental health problems were often needed before making referrals. Although some PCPs felt capable of managing common mental health problems, they had limited choices of psychiatric drugs to prescribe. Some public PCPs experienced the benefits of collaborative care, but most private PCPs perceived limited support from psychiatrists. The survey showed that around 90% of PCPs and public psychiatrists supported setting up an agreed protocol of care, management of common mental health problems by PCPs, and discharging stabilized patients to primary care. However, only around 54-67% of private psychiatrists supported different components of these strategies. Besides, less than half of the psychiatrists agreed with setting up a support hotline for the PCPs to consult them. Conclusions: The majority of PCPs and psychiatrists support management of common mental health problems in primary care, but there is significantly less support from the private psychiatrists. abstract_id: PUBMED:27716339 Improving Ghana's mental healthcare through task-shifting: psychiatrists' and health policy directors' perceptions about government's commitment and the role of community mental health workers. Background: The scarcity of mental health professionals places specialist psychiatric care out of the reach of most people in low and middle income countries. There is growing interest in the effectiveness of task shifting as a strategy for targeting expanding health care demands in settings with shortages of qualified health personnel. Given this background, the aim of our study was to examine the perceptions of psychiatrists and health policy directors about the policy to expand mental health care delivery in Ghana through a system of task-shifting from psychiatrists to community mental health workers (CMHWs). Methods: A self-administered semi-structured questionnaire was developed and administered to 11 psychiatrists and 29 health policy directors. Key informant interviews were also held with five psychiatrists and four health policy directors. Quantitative data were analysed using descriptive statistics. Qualitative data were analysed thematically. Results: Almost all the psychiatrists and 23 (79.3 %) health policy directors were aware of the policy of the Government of Ghana to improve on the human resource base within mental health through a system of task-shifting. Overall, about half of the psychiatrists and 9 (31 %) health policy directors perceived there is some professional resistance to the implementation of the policy of task shifting. The majority of respondents were of the view that CMHWs should be allowed to assess, diagnose and treat most of the common mental disorders. The respondents identified that CMHWs usually perform two sets of roles, namely: officially assigned roles for which they have the requisite training and assumed roles for which they usually do not have the requisite training. The stakeholders identified multiple challenges associated with current task shifting arrangements within Ghana's mental health delivery system, including inadequate training and supervision, poor awareness of the scope of their expertise on the part of the CMHWs.
Conclusion: Psychiatrists and health policy directors support the policy to expand mental health service coverage in Ghana through a system of task-shifting, despite their awareness of resistance from some professionals. It is important that the Government of Ghana upholds its commitment to expanding mental healthcare by maintaining and prioritizing its policy on task shifting and also providing the necessary resources to ensure its success. abstract_id: PUBMED:35782425 Workplace Violence and Turnover Intention Among Psychiatrists in a National Sample in China: The Mediating Effects of Mental Health. Background: Workplace violence (WPV) in healthcare has received much attention worldwide. However, scarce data are available on its impact on turnover intention among psychiatrists, and the possible mechanisms between WPV and turnover intention have not been explored in China. Methods: A cross-sectional survey was conducted among psychiatrists in 41 tertiary psychiatric hospitals from 29 provinces and autonomous regions in China. A stress-strain-outcome (SSO) model was adopted to examine the effects of WPV on mental health and turnover intention. The association and mediation by burnout and stress were examined by multivariate logistic regression (MLR) and generalized structural equation modeling (GSEM). Results: We invited 6,986 psychiatrists to participate, and 4,520 completed the survey (64.7% response rate). The prevalence of verbal and physical violence against psychiatrists in China was 78.0% and 30.7%, respectively. MLR analysis showed that psychiatrists who experienced verbal violence (OR = 1.15, 95% CI = 1.10-1.21) and physical violence (OR = 1.15, 95% CI = 1.07-1.24) were more likely to report turnover intention. GSEM analysis showed that burnout (β = 4.00, p < 0.001) and stress (β = 1.15, p < 0.001) mediated the association between verbal violence and turnover intention; similarly, burnout (β = 4.92, p < 0.001) and stress (β = 1.80, p < 0.001) also mediated the association between physical violence and turnover intention. Conclusions: Experience of WPV is a significant contributor to turnover intention among psychiatrists. Mental health status, such as burnout and stress levels, significantly mediated the association. Policy makers and hospital administrators need to be aware of this association. Action is needed to promote mental health among the psychiatrists to improve morale and workforce sustainability. abstract_id: PUBMED:28371455 Barriers and facilitators for psychiatrists in managing mental health patients in Hong Kong-Impact of Chinese culture and health system. Introduction: This study investigated the barriers and facilitators for psychiatrists in managing mental health patients under a Chinese context and a mixed private-public health system. Methods: Two focus group interviews were conducted to explore the in-depth opinions of psychiatrists in Hong Kong. The themes identified from the focus groups were investigated in a questionnaire survey with data from 83 psychiatrists working in public and private sectors. Results: No insurance coverage of mental health problems, patients' poor compliance of medication, and stigma of seeing psychiatrists were rated as the top barriers in the survey. Some psychiatrists mentioned in focus groups that they might write down the associated physical symptoms of the patients rather than the mental disorder diagnoses on the medical certificate.
They observed some patients suspecting that psychiatric drugs were prescribed to control their behavior and make them more muddleheaded. The survey also found that consultation time constraint, long patient waiting list, and difficulty in discharging patients to primary care mostly affected public psychiatrists rather than private ones. However, they perceived similar facilitators, including public campaign to promote positive results of help-seeking, adequate explanation by other health professionals to the patients before referrals, handling severe cases by casework approach, and having a regular primary care physician. Discussion: The top barriers are related to insufficient public awareness and negative attitudes towards mental illness and its treatment. Major solutions include promoting positive results of help seeking, enhancing collaboration with primary care physicians, and follow-up of severe cases by a casework approach. abstract_id: PUBMED:25097753 Work practices and the provision of mental-health care on the verge of reform: a national survey of Israeli psychiatrists and psychologists. Background: The State of Israel is preparing to transfer legal responsibility for mental-health care from the government to the country's four competing, nonprofit health-plans. A prominent feature of this reform is the introduction of managed care into the mental-health system. This change will likely affect the service delivery patterns and care practices of professional caregivers in mental-health services. The study examines psychiatrists' and psychologists' patterns of service delivery and practice, and their attitudes toward the reform's expected effects, focusing on the following questions: To what extent do today's patterns of service delivery suit a managed-care environment? To what extent do professionals expect the reform to change their work? And do psychiatrists and psychologists differ on these questions? Methods: A survey of 1,030 psychiatrists and psychologists using a closed mail questionnaire for self-completion was conducted from December 2011 to May 2012. Results: Substantial differences were found between psychiatrists' and psychologists' personal and professional characteristics, work patterns, and treatment-provision characteristics. In addition, the study identified gaps between the treatment-provision characteristics of some of the professionals, mostly psychologists, and the demands of a managed-care environment. Moreover, a high percentage of the mental-health professionals (mostly psychologists) do not expect improvement in the quality of care or its accessibility and availability following the reform. However, those reporting practices associated with managed care (e.g. short-term treatment, compliance with monitoring procedures, and emphasis on evidence-based treatment) are less likely to expect negative changes in the provision and quality of care after the reform. Conclusions: Steps need to be taken to reduce the gaps between the treatment-provision characteristics of the professionals and the demands of a managed-care environment, and there are several possible ways to do so. In order to recruit experienced, skilled professionals, the health plans should consider enabling various work models and offering training focused on the demands of working in a managed-care environment. It is advisable to implement this kind of training also during the training and specialization process by including these topics in the professional curricula.
abstract_id: PUBMED:32843895 An evaluation of a mental health literacy course for Arabic speaking religious and community leaders in Australia: effects on posttraumatic stress disorder related knowledge, attitudes and help-seeking. Background: Australia is an ethnically diverse nation with one of the largest refugee resettlement programs worldwide, including high numbers of refugees with an Arabic speaking background. Evidence suggests that refugees can demonstrate high levels of psychological distress and are at a higher risk of developing mental illness such as posttraumatic stress disorder (PTSD) and major depressive disorder (MDD). Notwithstanding, research has also shown Arabic speaking refugees have lower levels of professional help-seeking behaviours, postulated to be related to mental health literacy levels. Methods: A culturally sensitive mental health literacy (MHL) training program was developed and delivered in Arabic to Arabic speaking religious and community leaders using a 1-day training workshop format. An uncontrolled pre-, and post study design was used to provide a preliminary evaluation of improvement in PTSD-related knowledge, attitudes and help-seeking measures. Results: A total of 54 adults were trained, with 52 completing the pre- and post-intervention questionnaire. Significant differences were found post-training in measures such as the ability to recognise mental health problems (p = 0.035) and an increased recognition of the role that medication can play in the treatment of PTSD (p = 0.00). Further, an improvement in negative attitudes such as a desire for social distance (p = 0.042) was noted and participants reported more helpful strategies in line with promoting professional help-seeking following training (p = 0.032). Conclusion: Our findings indicated the training led to an improvement of some measures of MHL. To the best of our knowledge, this is the first time that the MHL program has been tailored for Arabic speaking religious and community leaders; who assist refugees with an Arabic background. By equipping community leaders with the knowledge to better respond to mental health problems, the overall goal of improving the mental health outcomes of Arabic speaking refugee communities is closer to being realised. Answer: The survey of early career psychiatrists in Australia and New Zealand revealed that respondents did not feel adequately prepared for leadership, management, and administrative tasks and roles they have as psychiatrists. They expressed that preparedness for management tasks scored the lowest. The respondents valued practical experience with leadership roles, observing leaders at work, having a supervisor with special interests in leadership and management, and formal teaching programs on leadership and management. They recommended that leadership skills training should be integrated throughout the entire 5-year training program and be delivered by experienced leaders (PUBMED:26129817). This suggests that while psychiatrists may receive some training relevant to leadership in mental health, there is a perceived need for more comprehensive and practical leadership education within their training programs. The findings indicate that current training may not sufficiently prepare psychiatrists to take on leadership roles in mental health, and improvements in this area have been suggested by the study participants.
Instruction: Is there equity in access to health services for ethnic minorities in Sweden? Abstracts: abstract_id: PUBMED:11420800 Is there equity in access to health services for ethnic minorities in Sweden? Background: This paper addresses the extent to which equity of treatment according to need, as defined by self-reported health status, is received by members of ethnic minorities in Swedish health services. Methods: The study was based on a multivariate analysis of cross-sectional data from the Swedish Survey of Living Conditions and Immigrant Survey of Living Conditions in 1996 on use of health services, morbidity and socioeconomic indicators. The study population consisted of 1,890 Swedish residents aged 27-60 years born in Chile, Poland, Turkey and Iran and 2,452 age-matched, Swedish-born residents. Main Results: Residents born in Chile, Iran and Turkey were more likely to have consulted a physician during the 3 months prior to the interview compared to Swedish-born residents; odds ratios (ORs) 1.4 (95% CI: 1.2-1.7), 1.3 (95% CI: 1.1-1.7) and 1.5 (95% CI: 1.3-1.9) respectively. The higher consultation rate in these ethnic minorities was primarily explained by a less satisfactory, self-reported health status compared to Swedish-born residents. Thirty-eight percent of the minority study groups reported exposure to organised violence in their country of origin, which was associated with a higher level of use of consultations with a physician (OR 1.3, 95% CI: 1.1-1.6). Conclusions: This study did not indicate any gross pattern of inequity in access to care for ethnic minorities in Sweden. Systems for allocating resources to health authorities need to consider the possibility that ethnic minorities in Sweden and in particular victims of organised violence, use health services more than is suggested by socioeconomic indicators only. abstract_id: PUBMED:35491859 Issues behind the Utilization of Community Mental Health Services by Ethnic Minorities in Hong Kong. This study collected data on the utilization rates of community mental health services among ethnic minorities and explained the results from the frontline social workers' perspective. Information about users' ethnicity was collected from 11 community mental health service providers from 2015 to 2018. This was followed by two sessions of focus groups conducted with 10 frontline social workers from six community mental health centers in Hong Kong. A hybrid analysis model was employed to analyze the qualitative data. The average utilization rates of community mental health services by ethnic minorities were 0.49%, 0.58%, and 0.68% in the years 2015-16, 2016-17, and 2017-18, respectively, showing that ethnic minorities who comprised 8% of the population were significantly underrepresented. It is worth noting that supply-side and demand-side factors are interrelated, suggesting the low utilization rate may be overcome by implementing a proactive social work service strategy. abstract_id: PUBMED:37953100 "Who is Anders Tegnell?" Unanswered questions hamper COVID-19 vaccine uptake: A qualitative study among ethnic minorities in Sweden. Background: Despite high COVID-19 vaccination coverage in many European countries, vaccination uptake has been lower among ethnic minorities, including in Sweden. This is in spite of the increased risk of contracting the virus and targeted efforts to vaccinate among first and second generation migrants. 
The aim of this study was to understand this dilemma by investigating ethnic minorities' perceptions and their experience of accessing the COVID-19 vaccine. Methods: This is a qualitative study drawing on 18 semi-structured interviews with health volunteers working in ethnic minority communities and with participants from the two largest ethnic minorities in Sweden (Syria and Somalia). Deductive qualitative analysis was completed using the 3C model by WHO (Complacency, Confidence and Convenience). Results: Complacency does not appear to be a barrier to intention to vaccinate. Participants are well aware of COVID-19 risk and the benefits of the vaccine. However, confidence in vaccine poses a barrier to uptake and there are a lot of questions and concerns about vaccine side effects, efficacy and related rumors. Confidence in health providers, particularly doctors is high but there was a sense of conflicting information. Accessing individually tailored health information and health providers is not convenient and a major reason for delaying vaccination or not vaccinating at all. Trust in peers, schools and faith-leaders is high and constitute pathways for effective health information sharing. Conclusion: Ethnic minorities in Sweden are willing to get vaccinated against COVID-19. However, to increase vaccination uptake, access to individually tailored and face to face health information to answer questions about vaccine safety, efficacy, conflicting information and rumors is urgently required. abstract_id: PUBMED:19718526 Ethnic health care advisors: a good strategy to improve the access to health care and social welfare services for ethnic minorities? Empirical studies indicate that ethnic minorities have limited access to health care and welfare services compared with the host population. To improve this access, ethnic health care (HC) advisors were introduced in four districts in Amsterdam, the Netherlands. HC advisors work for all health care and welfare services and their main task is to provide information on health care and welfare to individuals and groups and refer individuals to services. Action research was carried out over a period of 2 years to find out whether and how this function can contribute to improve access to services for ethnic minorities. Information was gathered by semi-structured interviews, analysing registration forms and reports, and attending meetings. The function's implementation and characteristics differed per district. The ethnicity of the health care advisors corresponded to the main ethnic groups in the district: Moroccan and Turkish (three districts) and sub-Sahara African and Surinamese (one district). HC advisors reached many ethnic inhabitants (n = 2,224) through individual contacts. Half of them were referred to health care and welfare services. In total, 576 group classes were given. These were mostly attended by Moroccan and Turkish females. Outreach activities and office hours at popular locations appeared to be important characteristics for actually reaching ethnic minorities. Furthermore, direct contact with a well-organized back office seems to be important. HC advisors were able to reach many ethnic minorities, provide information about the health care and welfare system, and refer them to services. 
Besides adapting the function to the local situation, some general aspects for success can be indicated: the ethnic background of the HC advisor should correspond to the main ethnic minority groups in the district, HC advisors need to conduct outreach work, there must be a well-organized back office to refer clients to, and there needs to be enough commitment among professionals of local health and welfare services. abstract_id: PUBMED:34515598 Recommendations for ethnic equity in health: A Delphi study from Denmark. Aims: A key issue in public health is how to approach ethnic inequities. Despite an increased focus on the health of people from ethnic minorities in the last 15 years, significant ethnic health inequities still exist in Denmark. These arise during pregnancy and are exacerbated by higher rates of exposure to health risks during the life course. This study aimed to formulate recommendations on both structural and organisational levels to reduce ethnic health inequities. Methods: Nine decision-makers - representing municipalities, regions, the private sector and voluntary organisations in Denmark - participated in the formulation of recommendations inspired by the Delphi method. The consensus process was conducted in three rounds during spring 2020, resulting in eight overall recommendations, including suggestions for action. Results: The recommendations address both structural and organisational levels. They aim to strengthen: 1) health policies and strategies related to the needs of people from ethnic minorities, including health literacy, linguistic, cultural and social differences; 2) health-promoting local initiatives developed in co-creation with people from ethnic minorities; 3) health promotion and prevention from a life course perspective with a focus on early intervention; 4) cross-sectoral and interdisciplinary collaborations that facilitate transitions and coordination; 5) competencies of professionals in terms of cultural knowledge, awareness, reflexivity and skills; 6) access to healthcare services by increasing information and resources; 7) interpreting assistance for, and linguistic accessibility to, healthcare services; 8) documentation and intervention research. Conclusions: To reduce ethnic health inequities, it is crucial that Danish welfare institutions, including their strategies, approaches and skills of employees, are adapted to serve an increasingly heterogeneous population. abstract_id: PUBMED:35902967 Ethnic minority experiences of mental health services in the Netherlands: an exploratory study. Objective: Despite considerable spending on mental health in the Netherlands, access to mental health care remains suboptimal, particularly for migrants and ethnic minorities. Addressing the growing mental health service needs requires an understanding of the experiences of all stakeholders, specifically minority populations. In this exploratory study, we sought to understand the perspectives and experiences of mental health services among migrants and their providers. An exploratory qualitative study was conducted with 10 participants, five of whom were mental health service providers and the other five were clients who had utilized or currently utilized MHS in the Netherlands. Results: We identified three themes that explained the experiences of clients and providers of MHS in the Netherlands: (i) Perceptions of mental health service utilization; (ii) Mismatch between providers; (iii) Availability of services.
The most significant factor that influenced participants experience was a service provider of a different cultural background. Minority populations accessing mental health services have multiple needs, including an expressed need for cultural understanding. Their experiences of mental health services could be improved for minority populations by addressing the diversity of health providers. abstract_id: PUBMED:36006587 Socio-cultural Norms and Gender Equality of Ethnic Minorities in Vietnam. Background: Thirty years ago, Vietnam was a poor low-income country; since then, it has accomplished remarkable achievements in socio-economic development, not least in its high rate of poverty reduction. But cultural stereotypes remain a root cause of inequality for women and girls, forming a barrier to accessing opportunities in education, health care, and in receiving equal treatment. Methodology: This paper used a variety of methods, including a literature review of cultural stereotypes and gender equality and a survey of gender equality for ethnic minorities covering 2894 households by IFGS in 2019. It analyzed the correlation between different variables, including gender, ethnicity, family type, parents' perception about opportunities for ethnic minority girls for schooling, health care, and equality of treatment. In addition, this article uses material from qualitative data collected from 15 in-depth interviews and life stories. Results: Traditional gender stereotypes are a major obstacle to achieving gender equality for women and girls. Customs and cultural practices ground prejudices against women and girls in the perception of numerous ethnic minorities in Vietnam. Being perceived as the breadwinner for their family, ethnic women have to participate in the labor force, in addition to taking care of other family members. Furthermore, they lack opportunities to communicate outside their small community. Therefore, in some ethnic minorities, there is a high rate of girls experiencing child marriage, and early pregnancies are more likely; this significantly affects their development and results in negative consequences for nutrition and maternal and child health care. In addition, girls who have an early marriage to satisfy their parents' desire have to drop out of school and limit their social interactions. Different causes, such as limited awareness, attachment to traditional beliefs, and parents' prejudice (that schooling is prioritized for boys), have locked adolescent girls into a vicious circle of child marriage and school dropout. This is also the reason for the high illiteracy rate of ethnic minority women in Vietnam. Conclusions: Traditional stereotyped gender perceptions have become major barriers to the development of ethnic minority girls and women in Vietnam. Child marriage and teenage pregnancy stem from the notion that girls do not need to be educated, but should join the workforce as early as possible and take care of their families; the result is a wide range of negative consequences, including keeping girls away from school and social interaction. Ethnic minority women have a high rate of illiteracy and social communication constraints, leading to their poor access to health care services. The spiral of gender inequality toward ethnic minority girls and women in Vietnam is still ongoing as parents' perceptions have not changed. Ethnic minority girls and women continue to be marginalized by the barriers of gender stereotypes and traditional culture. 
abstract_id: PUBMED:33350290 Working towards health equity for ethnic minority elders: spanning the boundaries of neighbourhood governance. Purpose: This paper analyses how neighbourhood governance of social care affects the scope for frontline workers to address health inequities of older ethnic minorities. We critically discuss how an area-based, generic approach to service provision limits and enables frontline workers' efforts to reach out to ethnic minority elders, using a relational approach to place. This approach emphasises social and cultural distances to social care and understands efforts to bridge these distances as "relational work". Design/methodology/approach: The authors conducted a two-year multiple case study of the cities of Nijmegen and The Hague, the Netherlands, following the development of policies and practices relevant to ethnic minority elders. They conducted 44 semi-structured interviews with managers, policy officers and frontline workers as well as 295 h of participant observation at network events and meeting activities. Findings: Relational work was open-ended and consisted of a continuous reorientation of goals and means. In some cases, frontline workers spanned neighbourhood boundaries to connect with professional networks, key figures and places meaningful to ethnic minority elders. While neighbourhood governance is attuned to equality, relational work practice fosters possibilities for achieving equity. Research Limitations/implications: Further research on achieving equity in relational work practice and more explicit policy support of relational work is needed. Originality/value: The paper contributes empirical knowledge about how neighbourhood governance of social care affects ethnic minority elders. It translates a relational view of place into a "situational" social justice approach. abstract_id: PUBMED:27032868 Access, treatment and outcomes of care: a study of ethnic minorities in Europe. Objectives: Recent research has shown that ethnic minorities still have less access to medical care and are less satisfied with the treatment they receive and the outcomes of the health care process. This article assesses how migrants in Europe experience access, treatment and outcomes in the European health care systems. Methods: Data were obtained from the QUALICOPC study (Quality and Costs of Primary Care in Europe). Regression analyses were used to estimate the access, treatment and outcomes of care for ethnic minorities. Results: In several countries, migrants experience that the opening hours of their GP practice were too limited and indicate that the practice was too far away from their work or home (lower access). They are more likely to report negative patient-doctor communication and less continuity of care than native patients (worse treatment). In addition, they are less satisfied with the care they received and are more likely to postpone care (worse outcomes). Conclusions: In general, migrants are still disadvantaged during the health care process. However, our results also indicate that satisfaction with the health care process improves for second-generation migrants in comparison with first-generation migrants. abstract_id: PUBMED:37267026 Mental health of ethnic minorities: the role of racism. Racism and racial discrimination heavily impact on health and mental health of ethnic minorities. 
In this conceptual paper and narrative review, we aim to report on relevant evidence from the international literature describing the prevalence and the qualitative aspects of mental illness due to racism and ethnic discrimination in different settings and populations. Some variables related to racism, such as cultural, institutional and interpersonal factors, as well as the concepts of perceived and internalised racism, will be described and discussed. These are relevant characteristics in the explanatory model of the relationship between racism and mental health. Epidemiological data on the prevalence of depressive and psychotic symptoms, as well as substance abuse/misuse, among ethnic minorities in large catchment areas, such as the United States and the United Kingdom, will be presented. We conclude that anti-racism policies are essential in order to address racism and racial discrimination around the world. Pluralistic societies should be promoted in order to understand mental illnesses among ethnic and cultural minorities. Also, anti-racism programs should be delivered in educational and health-care settings and their impact evaluated. Answer: The study reported in PUBMED:11420800 suggests that there is not a gross pattern of inequity in access to care for ethnic minorities in Sweden. The study found that residents born in Chile, Iran, and Turkey were more likely to have consulted a physician compared to Swedish-born residents, which was primarily explained by a less satisfactory self-reported health status. However, the study also highlights that ethnic minorities, particularly victims of organized violence, use health services more than what is suggested by socioeconomic indicators alone, indicating that systems for allocating resources to health authorities should consider these factors. Additionally, PUBMED:37953100 indicates that while ethnic minorities in Sweden are willing to get vaccinated against COVID-19, there are barriers to uptake due to a lack of confidence in the vaccine and inconvenient access to individually tailored health information. This suggests that while there may not be overt inequity in access to health services, there are still challenges in effectively reaching ethnic minorities with health information and services that meet their specific needs. In conclusion, while there may not be explicit inequity in access to health services for ethnic minorities in Sweden, there are underlying issues that affect the utilization and effectiveness of these services for these populations.
Instruction: Does the presence of an epiretinal membrane alter the cleavage plane during internal limiting membrane peeling? Abstracts: abstract_id: PUBMED:32923936 Vitrectomy for the removal of idiopathic epiretinal membrane with or without internal limiting membrane peeling: a meta-analysis. Purpose: The aim of this study is to analyze the effect of internal limiting membrane peeling in removal of idiopathic epiretinal membranes through meta-analysis. Methods: We searched PubMed for studies published until 30 April 2018. Inclusion criteria included cases of idiopathic epiretinal membranes, treated with vitrectomy with or without internal limiting membrane peeling. Exclusion criteria consisted of coexisting retinal pathologies and use of indocyanine green to stain the internal limiting membrane. Sixteen studies were included in our meta-analysis. We compared the results of surgical removal of epiretinal membrane, with or without internal limiting membrane peeling, in terms of best-corrected visual acuity and anatomical restoration of the macula (central foveal thickness). Studies or subgroups of patients who had indocyanine green used as an internal limiting membrane stain were excluded from the study, due to evidence of its toxicity to the retina. Results: Regarding best-corrected visual acuity levels, the overall mean difference was -0.29 (95% confidence interval: -0.319 to -0.261), while for patients with internal limiting membrane peeling was -0.289 (95% confidence interval: -0.334 to -0.244) and for patients without internal limiting membrane peeling was -0.282 (95% confidence interval: -0.34 to -0.225). Regarding central foveal thickness levels, the overall mean difference was -117.22 (95% confidence interval: -136.70 to -97.74), while for patients with internal limiting membrane peeling was -121.08 (95% confidence interval: -151.12 to -91.03) and for patients without internal limiting membrane peeling was -105.34 (95% confidence interval: -119.47 to -96.21). Conclusion: Vitrectomy for the removal of epiretinal membrane combined with internal limiting membrane peeling is an effective method for the treatment of patients with idiopathic epiretinal membrane. abstract_id: PUBMED:37155426 A useful technique of starting internal limiting membrane peeling from the edge of the internal limiting membrane defect in epiretinal membrane surgery. Clinicians should be aware that internal limiting membrane (ILM) defects may occur concurrently with epiretinal membrane, and starting ILM peeling at the ILM defect margin may be useful in such cases. Abstract: We describe a useful surgical technique for the treatment of idiopathic epiretinal membrane with concurrent internal limiting membrane (ILM) defect, in which ILM peeling was started from the ILM defect margin. A dissociated optic nerve fiber layer-like appearance on fundus examination and optical coherence tomography may suggest an ILM defect. abstract_id: PUBMED:20006906 Does the presence of an epiretinal membrane alter the cleavage plane during internal limiting membrane peeling? Purpose: To determine whether the presence of a clinically and/or microscopically detectable epiretinal membrane (ERM) alters the cleavage plane during internal limiting membrane (ILM) peeling. Design: Retrospective, observational, immunohistochemical study of ILM specimens using archival formalin-fixed, paraffin-embedded tissue. Participants: Fifty-one patients who had had ILM excision. 
Methods: Fifty-one ILM specimens peeled during vitrectomy for various etiologies were examined by light microscopy. The removal of ILM was assisted using Trypan blue (n = 30), indocyanine green (n = 7), or brilliant blue G (n = 14). Monoclonal antibodies to glial fibrillary acidic protein and to neurofilament protein were used to detect glial or neuronal cells respectively on the vitreous or retinal surfaces of the ILM. Specimens were divided into 2 groups: ILM peeled for full-thickness macular hole (MH; n = 31) and ILM peeled after removal of clinically detectable ERM (n = 20). Main Outcome Measures: Primary outcome measure was the localization of immunohistochemical markers to neuronal or glial cells on the vitreous or retinal surfaces of ILM. The secondary outcome measure was the correlation of the results of the primary measure with the dyes used to facilitate ILM peeling. Results: Glial and/or neuronal cells were detected on the retinal surface of the ILM in 10 of 31 (32%) of the MH ILM specimens and in 13 of 20 (65%) of the ILM peeled after ERM excision; the difference was significant (P = 0.02). There was no association between the presence of neuronal and glial cells with the type of dye used (P = 0.2). Of the 23 ILM specimens with cells attached to the retinal surface, 21 (91%) were associated with clinical and/or histologic evidence of ERM and 2 (9%) were not. The correlation between the presence of cells on the vitreous and the retinal surfaces of ILM was high (P<0.0001). Conclusions: The findings suggest that ERM may be associated with sub-ILM changes that alter the plane of separation during ILM peeling. This study does not confirm any influence of dyes on the cleavage plane during surgery. abstract_id: PUBMED:32338531 Broad internal limiting membrane peeling with adjunctive plasma-thrombin for repair of large macular holes : A retrospective case series. Purpose: To assess the potential efficacy of broad internal limiting membrane peeling with adjunctive plasma-thrombin instillation to treat large macular holes and to make qualitative comparisons to internal limiting membrane peeling without adjunctive treatment and internal limiting membrane peeling with inverted and free internal limiting membrane flaps. Methods: A systematic literature review and a retrospective case series. Participants in the case series (N = 39) had idiopathic macular holes larger than 400 µm as measured on spectral-domain optical coherence tomography and underwent pars plana vitrectomy, internal limiting membrane peeling, placement of autologous plasma and bovine thrombin over the hole, and gas tamponade. Repeat imaging and clinical data were collected from 1, 2, 3, 6, and 12 months postoperatively. Results: Macular hole closure rate was 97%; 82% had U-type closures. At 12 months, 11% had defects in the external limiting membrane and 22% in the ellipsoid zone. This closure rate is similar to prior studies of internal limiting membrane flaps, while the U-type closure rate and retinal layer restoration compare favorably to those reported for internal limiting membrane peeling alone and internal limiting membrane flaps; 75% experienced a three-line improvement in visual acuity by 6 months, which exceeds results by either method. Conclusion: Plasma-thrombin instillation over macular holes may be a less-complicated alternative adjunct to internal limiting membrane flaps that can achieve similar outcomes when combined with internal limiting membrane peeling. 
abstract_id: PUBMED:35792974 Extensive internal limiting membrane peeling for proliferative vitreoretinopathy. Purpose: The aim of this study was to describe the anatomical outcomes of Brilliant Blue G (BBG)-assisted extensive internal limiting membrane peeling for proliferative vitreoretinopathy (PVR) under three-dimensional (3D) visualization. Methods: This study constitutes a retrospective case series, conducted in a private retina practice, of 14 consecutive patients (14 eyes) with rhegmatogenous retinal detachment complicated by PVR who underwent pars plana vitrectomy between January 2019 and January 2020. The internal limiting membrane (ILM) was selectively stained with BBG, and perspectives were enhanced with a 3D visualization system. We peeled off the ILM beyond the vascular arcades up to the periphery. The main outcome was anatomical success, defined as persistent retinal reattachment after removal of the silicone oil tamponade. Results: Anatomic success was achieved with a single surgery in 11 of 14 (78.6%) eyes, and eventual success was achieved in all eyes. The mean patient follow-up time was 12.3 months (range, 7-16 months). The mean preoperative best-corrected visual acuity (BCVA) was 2.93 ± 0.79 logMAR, which improved to 1.75 ± 0.91 at the last follow-up. Conclusion: Extensive ILM peeling allowed the creation of a cleavage plane underlying the PVR membranes that facilitated their complete removal, thereby achieving an anatomically reattached retina and reducing the risk of recurrence of retinal detachment. The long-term effects of this technique need further research. abstract_id: PUBMED:35814997 Internal limiting membrane peeling in combined hamartoma of retina and retinal pigment epithelium: Does it make a difference? This case study reports a pediatric case of a combined hamartoma of the retina and retinal pigment epithelium (CHR-RPE) successfully treated with pars plana vitrectomy (PPV), membrane peeling, and internal limiting membrane (ILM) peeling. In this rare tumor, adding ILM peeling to the surgical treatment of CHR-RPE with epiretinal membrane may result in a favorable outcome. abstract_id: PUBMED:31914954 Comparative efficacy evaluation of inverted internal limiting membrane flap technique and internal limiting membrane peeling in large macular holes: a systematic review and meta-analysis. Background: The purpose of this study was to compare the anatomical and visual outcomes of the inverted internal limiting membrane (ILM) flap technique and internal limiting membrane peeling in large macular holes (MH). Methods: Related studies were identified by searching the electronic databases PubMed, Embase, and the Cochrane Library. We searched for articles that compared the inverted ILM flap technique with ILM peeling for large MH (> 400 μm). Double-arm meta-analysis was performed for the primary end point, which was the rate of MH closure; the secondary end point was postoperative visual acuity (VA). Heterogeneity, publication bias, sensitivity analysis and subgroup analysis were conducted to guarantee the statistical power. Results: This review included eight studies involving 593 eyes: 4 randomized controlled trials and 4 retrospective studies. After sensitivity analysis to eliminate the heterogeneity of the primary outcome, the pooled data showed that the rate of MH closure in the inverted ILM flap technique group was statistically significantly higher than in the ILM peeling group (odds ratio (OR) = 3.95, 95% confidence interval (CI) = 1.89 to 8.27; P = 0.0003).
At the follow-up duration of 3 months, postoperative VA was significantly better in the inverted ILM flap group than in the ILM peeling group (mean difference (MD) = -0.16, 95% CI = -0.23 to -0.09; P < 0.00001). However, there was no difference in visual outcomes between the two surgical treatment groups at relatively long-term follow-up beyond 6 months (MD = 0.01, 95% CI = -0.12 to 0.15; P = 0.86). Conclusion: Vitrectomy with the inverted ILM flap technique had a better anatomical outcome than ILM peeling. The flap technique also produced a significant visual gain in the short term, but limitations in visual recovery were found at longer follow-up. abstract_id: PUBMED:29179705 Vitrectomy with internal limiting membrane peeling versus inverted internal limiting membrane flap technique for macular hole-induced retinal detachment: a systematic review of literature and meta-analysis. Background: To evaluate the effects of vitrectomy with internal limiting membrane (ILM) peeling versus vitrectomy with the inverted internal limiting membrane flap technique for macular hole-induced retinal detachment (MHRD). Methods: PubMed, the Cochrane Library, and Embase were systematically searched for studies that compared ILM peeling with the inverted ILM flap technique for macular hole-induced retinal detachment. The primary outcomes were the rate of retinal reattachment and the rate of macular hole closure 6 months after initial surgery; the secondary outcome was the postoperative best-corrected visual acuity (BCVA) 6 months after initial surgery. Results: Four studies that included 98 eyes were selected. All the included studies were retrospective comparative studies. The preoperative best-corrected visual acuity was equal between the ILM peeling and inverted ILM flap technique groups. The rate of retinal reattachment (odds ratio (OR) = 0.14, 95% confidence interval (CI): 0.03 to 0.69; P = 0.02) and macular hole closure (OR = 0.06, 95% CI: 0.02 to 0.19; P < 0.00001) after initial surgery was higher in the group of vitrectomy with the inverted ILM flap technique than in the group of vitrectomy with ILM peeling. However, there was no statistically significant difference in postoperative best-corrected visual acuity (mean difference (MD) 0.18 logarithm of the minimum angle of resolution; 95% CI -0.06 to 0.43; P = 0.14) between the two surgery groups. Conclusion: Compared with ILM peeling, vitrectomy with the inverted ILM flap technique resulted in significantly higher rates of retinal reattachment and macular hole closure, but did not seem to improve postoperative best-corrected visual acuity. abstract_id: PUBMED:24549686 Drusen characteristics after internal limiting membrane peeling Background: There are some reports showing isolated cases of drusen regression after pars plana vitrectomy (ppV) with peeling of the internal limiting membrane (iLM). Drusen characteristics after iLM peeling were investigated in this study. Patients And Methods: The data of 527 patients who had received iLM peeling between 2004 and 2012 were retrospectively collected, and those patients with retinal drusen were selected for the study. Fundus photographs before and after vitrectomy due to a macular hole or epiretinal gliosis were compared and drusen arrangement at the peeling site was analyzed. The aim of the study was to show whether there was drusen regression 2-5 months after surgery.
Results: Out of the 527 patients 11 showed central macular drusen, 4 with confluent large drusen (> 63 µm diameter) and 7 with small hard drusen (≤ 63 µm diameter). One patient showed drusen regression after iLM peeling without any changes in the other eye and all other patients showed no differences in the drusen findings (n = 6) or even some additional drusen (n = 4) without drusen alterations in the other eye. Conclusion: The results of this study could not confirm some reports showing drusen regression after iLM peeling in the peeling site in general and there was only one single case of central drusen regression. abstract_id: PUBMED:31686266 Fovea-sparing internal limiting membrane peeling versus complete internal limiting membrane peeling for myopic traction maculopathy. Purpose: To compare the outcomes of vitrectomy with fovea-sparing internal limiting membrane peeling (FSIP) and complete internal limiting membrane peeling (ILMP) for myopic traction maculopathy (MTM). Study Design: A retrospective, observational study. Patients And Methods: In this study, we included 22 eyes of 21 consecutive patients who underwent vitrectomy with FSIP or ILMP for MTM and were monitored for at least 6 months. Eleven eyes were treated with FSIP, and 11, with ILMP. Results: With FSIP, the postoperative best-corrected visual acuity (BCVA) significantly improved from 0.61 (20/82) to 0.34 (20/44; P = .009) logarithm of the minimum angle of resolution (logMAR) units. With ILMP, the postoperative BCVA improved from 0.65 (20/89) to 0.52 (20/66) logMAR units, but was not significant (P = .106). The postoperative final central foveal thickness (CFT) reduced significantly after FSIP (from 557.6 to 128.8 µm, P = .003) and ILMP (from 547.3 to 130.3 µm, P = .008). The postoperative incidence of a macular hole was 0% (0/11 eyes) with FSIP and 27.3% (3/11 eyes) with ILMP. All patients with a macular hole had foveal detachment in association with a thin fovea preoperatively. With ILMP, postoperative BCVA with a macular hole worsened by -3.5 letters; in contrast, postoperative BCVA without a macular hole improved by +10.5 letters. With FSIP, postoperative BCVA without a macular hole significantly improved by +13.5 letters (P = .009). Conclusions: FSIP resulted in significant improvement in MTM and prevented postoperative macular hole development. Answer: Yes, the presence of an epiretinal membrane (ERM) may alter the cleavage plane during internal limiting membrane (ILM) peeling. A study found that ERM may be associated with sub-ILM changes that affect the plane of separation during ILM peeling. Specifically, glial and/or neuronal cells were detected on the retinal surface of the ILM in a higher percentage of specimens that were peeled after ERM excision compared to those peeled for full-thickness macular hole. This suggests that the presence of ERM can influence the cleavage plane during the surgical procedure (PUBMED:20006906).
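As a side note on the pooled statistics quoted in the abstracts above (for example, the odds ratio of 3.95 with a 95% CI of 1.89 to 8.27 for macular hole closure in PUBMED:31914954), the internal consistency of such figures can be checked by back-calculating the standard error of the log odds ratio from the reported interval. The short sketch below is purely illustrative, is not taken from any of the cited studies, and simply assumes the usual normal approximation on the log-odds scale.

import math

# Reported pooled result (PUBMED:31914954): OR = 3.95, 95% CI 1.89 to 8.27.
or_point, ci_low, ci_high = 3.95, 1.89, 8.27

# Under the normal approximation the 95% CI spans +/-1.96 SE on the log scale,
# so the SE of ln(OR) can be recovered from the width of the reported interval.
se_log_or = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# z statistic and two-sided p-value for the null hypothesis OR = 1.
z = math.log(or_point) / se_log_or
p_two_sided = math.erfc(z / math.sqrt(2))

print(f"SE(log OR) ~ {se_log_or:.3f}, z ~ {z:.2f}, p ~ {p_two_sided:.4f}")

Running this reproduces the reported P = 0.0003 to within rounding, a quick plausibility check that the point estimate, interval and p-value belong together.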
Instruction: Can perceptual indices estimate physiological strain across a range of environments and metabolic workloads when wearing explosive ordnance disposal and chemical protective clothing? Abstracts: abstract_id: PUBMED:25865709 Can perceptual indices estimate physiological strain across a range of environments and metabolic workloads when wearing explosive ordnance disposal and chemical protective clothing? Objective: Explosive ordnance disposal (EOD) often requires technicians to wear multiple protective garments in challenging environmental conditions. The accumulative effect of increased metabolic cost coupled with decreased heat dissipation associated with these garments predisposes technicians to high levels of physiological strain. It has been proposed that a perceptual strain index (PeSI), using subjective ratings of thermal sensation and perceived exertion as surrogate measures of core body temperature and heart rate, may provide an accurate estimation of physiological strain. Therefore, this study aimed to determine if the PeSI could estimate the physiological strain index (PSI) across a range of metabolic workloads and environments while wearing heavy EOD and chemical protective clothing. Methods: Eleven healthy males wore an EOD and chemical protective ensemble while walking on a treadmill at 2.5, 4 and 5.5 km·h(-1) at 1% grade in environmental conditions equivalent to wet bulb globe temperature (WBGT) 21, 30 and 37°C. WBGT conditions were randomly presented and a maximum of three randomised treadmill walking trials were completed in a single testing day. Trials were ceased at a maximum of 60 min or upon the attainment of termination criteria. Pearson's correlation coefficient, a mixed linear model, absolute agreement and receiver operating characteristic (ROC) curves were used to determine the relationship between the PeSI and PSI. Results: A significant moderate relationship between the PeSI and the PSI was observed [r=0.77; p<0.001; mean difference=0.8±1.1 a.u. (modified 95% limits of agreement -1.3 to 3.0)]. The ROC curves indicated that the PeSI had good predictive power when used with two single-threshold cut-offs to differentiate between low and high levels of physiological strain (area under curve: PSI three cut-off=0.936 and seven cut-off=0.841). Conclusions: These findings support the use of the PeSI for monitoring physiological strain while wearing EOD and chemical protective clothing. However, future research is needed to confirm the validity of the PeSI for active EOD technicians operating in the field. abstract_id: PUBMED:25878167 Inside the 'Hurt Locker': The Combined Effects of Explosive Ordnance Disposal and Chemical Protective Clothing on Physiological Tolerance Time in Extreme Environments. Background: Explosive ordnance disposal (EOD) technicians are often required to wear specialized clothing combinations that protect not only against the risk of explosion but also against potential chemical contamination. This heavy (>35 kg) and encapsulating ensemble is likely to increase physiological strain by increasing metabolic heat production and impairing heat dissipation. This study investigated the physiological tolerance times of two different chemical protective undergarments, commonly worn with EOD personal protective clothing, in a range of simulated environmental extremes and work intensities. Methods: Seven males performed 18 trials wearing 2 ensembles.
The trials involved walking on a treadmill at 2.5, 4, and 5.5 km·h(-1) at each of the following environmental conditions: 21, 30, and 37°C wet bulb globe temperature. The trials were ceased if the participants' core temperature reached 39°C, if heart rate exceeded 90% of maximum, if walking time reached 60 min, or due to volitional fatigue. Results: Physiological tolerance times ranged from 8 to 60 min and the durations (mean difference: 2.78 min, P > 0.05) were similar in both ensembles. A significant effect for environment (21 > 30 > 37°C wet bulb globe temperature, P < 0.05) and work intensity (2.5 > 4 > 5.5 km·h(-1), P < 0.05) was observed in tolerance time. The majority of trials across both ensembles (101/126; 80.1%) were terminated due to participants achieving a heart rate equivalent to greater than 90% of their maximum. Conclusions: Physiological tolerance times wearing these two chemical protective undergarments, worn underneath EOD personal protective clothing, were similar and predominantly limited by cardiovascular strain. abstract_id: PUBMED:24586228 Physiological tolerance times while wearing explosive ordnance disposal protective clothing in simulated environmental extremes. Explosive ordnance disposal (EOD) technicians are required to wear protective clothing to protect themselves from the threat of overpressure, fragmentation, impact and heat. The engineering requirements to minimise these threats result in an extremely heavy and cumbersome clothing ensemble that increases the internal heat generation of the wearer, while the clothing's thermal properties reduce heat dissipation. This study aimed to evaluate the heat strain encountered wearing EOD protective clothing in simulated environmental extremes across a range of differing work intensities. Eight healthy males [age 25 ± 6 years (mean ± sd), height 180 ± 7 cm, body mass 79 ± 9 kg, VO2max 57 ± 6 ml·kg(-1)·min(-1)] undertook nine trials while wearing an EOD9 suit (weighing 33.4 kg). The trials involved walking on a treadmill at 2.5, 4 and 5.5 km·h(-1) at each of the following environmental conditions: 21, 30 and 37 °C wet bulb globe temperature (WBGT), in a randomised controlled crossover design. The trials were ceased if the participants' core temperature reached 39 °C, if heart rate exceeded 90% of maximum, if walking time reached 60 minutes, or due to fatigue/nausea. Tolerance times ranged from 10 to 60 minutes and were significantly reduced at the higher walking speeds and environmental conditions. In a total of 15 trials (21%) participants completed 60 minutes of walking; however, this was predominantly at the slower walking speeds in the 21 °C WBGT environment. Of the remaining 57 trials, 50 were ceased due to attainment of 90% maximal heart rate. These near-maximal heart rates resulted in moderate-high levels of physiological strain in all trials, despite core temperature only reaching 39 °C in one of the 72 trials. abstract_id: PUBMED:27110873 The Pandolf load carriage equation is a poor predictor of metabolic rate while wearing explosive ordnance disposal protective clothing. This investigation aimed to quantify metabolic rate when wearing an explosive ordnance disposal (EOD) ensemble (~33 kg) during standing and locomotion, and to determine whether the Pandolf load carriage equation accurately predicts metabolic rate when wearing an EOD ensemble during standing and locomotion. Ten males completed 8 trials with metabolic rate measured through indirect calorimetry.
Walking in EOD at 2.5, 4.0 and 5.5km·h-1 was significantly (p < 0.05) greater than matched trials without the EOD ensemble by 49% (127W), 65% (213W) and 78% (345W), respectively. Mean bias (95% limits of agreement) between predicted and measured metabolism during standing, 2.5, 4 and 5.5km·h-1 were 47W (19 to 75W); -111W (-172 to -49W); -122W (-189 to -54W) and -158W (-245 to -72W), respectively. The Pandolf equation significantly underestimated measured metabolic rate during locomotion. These findings have practical implications for EOD technicians during training and operation and should be considered when developing maximum workload duration models and work-rest schedules. Practitioner Summary: Using a rigorous methodological design we quantified metabolic rate of wearing EOD clothing during locomotion. For the first time we demonstrated that metabolic rate when wearing this ensemble is greater than that predicted by the Pandolf equation. These original findings have significant implications for EOD training and operation. abstract_id: PUBMED:33617350 Characterizing the Effects of Explosive Ordnance Disposal Operations on the Human Body While Wearing Heavy Personal Protective Equipment. Objective: To provide a comprehensive characterization of explosive ordnance disposal (EOD) personal protective equipment (PPE) by evaluating its effects on the human body, specifically the poses, tasks, and conditions under which EOD operations are performed. Background: EOD PPE is designed to protect technicians from a blast. The required features of protection make EOD PPE heavy, bulky, poorly ventilated, and difficult to maneuver in. It is not clear how the EOD PPE wearer physiologically adapts to maintain physical and cognitive performance during EOD operations. Method: Fourteen participants performed EOD operations including mobility and inspection tasks with and without EOD PPE. Physiological measurement and kinematic data recording were used to record human physiological responses and performance. Results: All physiological measures were significantly higher during the mobility and the inspection tasks when EOD PPE was worn. Participants spent significantly more time to complete the mobility tasks, whereas mixed results were found in the inspection tasks. Higher back muscle activations were seen in participants who performed object manipulation while wearing EOD PPE. Conclusion: EOD operations while wearing EOD PPE pose significant physical stress on the human body. The wearer's mobility is impacted by EOD PPE, resulting in decreased speed and higher muscle activations. Application: The testing and evaluation methodology in this study can be used to benchmark future EOD PPE designs. Identifying hazards posed by EOD PPE lays the groundwork for developing mitigation plans, such as exoskeletons, to reduce physical and cognitive stress caused by EOD PPE on the wearers without compromising their operational performance. abstract_id: PUBMED:31196190 Validity of a noninvasive estimation of deep body temperature when wearing personal protective equipment during exercise and recovery. Background: Deep body temperature is a critical indicator of heat strain. However, direct measures are often invasive, costly, and difficult to implement in the field. This study assessed the agreement between deep body temperature estimated from heart rate and that measured directly during repeated work bouts while wearing explosive ordnance disposal (EOD) protective clothing and during recovery. 
Methods: Eight males completed three work and recovery periods across two separate days. Work consisted of treadmill walking on a 1% incline at 2.5, 4.0, or 5.5 km/h, in a random order, wearing EOD protective clothing. Ambient temperature and relative humidity were maintained at 24 °C and 50% [wet bulb globe temperature (WBGT) (20.9 ± 1.2) °C] or 32 °C and 60% [WBGT (29.0 ± 0.2) °C] on the separate days, respectively. Heart rate and gastrointestinal temperature (TGI) were monitored continuously, and deep body temperature was also estimated from heart rate (ECTemp). Results: The overall systematic bias between TGI and ECTemp was 0.01 °C with 95% limits of agreement (LoA) of ±0.64 °C and a root mean square error of 0.32 °C. The average error statistics among participants showed no significant differences in error between the exercise and recovery periods or the environmental conditions. At TGI levels of (37.0-37.5) °C, (37.5-38.0) °C, (38.0-38.5) °C, and > 38.5 °C, the systematic bias and ± 95% LoA were (0.08 ± 0.58) °C, (- 0.02 ± 0.69) °C, (- 0.07 ± 0.63) °C, and (- 0.32 ± 0.56) °C, respectively. Conclusions: The findings demonstrate acceptable validity of the ECTemp up to 38.5 °C. Conducting work within an ECTemp limit of 38.4 °C, in conditions similar to the present study, would protect the majority of personnel from an excessive elevation in deep body temperature (> 39.0 °C). abstract_id: PUBMED:37209632 Association between physiological and perceptual heat strain while wearing stab-resistant body armor. In this study, we explored the association between physiological and perceptual heat strain while wearing stab-resistant body armor (SRBA). Human trials were performed on ten participants in warm and hot environments. Physiological responses (core temperature, skin temperature, and heart rate) and perceptual responses (thermal sensation vote, thermal comfort vote, rating of perceived exertion (RPE), wetness of skin, and wetness of clothing) were recorded throughout the trials, and subsequently the physiological strain index (PSI) and perceptual strain index (PeSI) were calculated. The results indicated that the PeSI showed a significant moderate association with the PSI and was capable of predicting the PSI for low (PSI = 3) and high (PSI = 7) levels of physiological strain, with areas under the curves of 0.80 and 0.64, respectively. Moreover, Bland-Altman analysis indicated that the majority of the PSI values ranged within the 95% confidence interval, and the mean difference between the PSI and PeSI was 0.14 ± 2.02, with the lower and upper 95% limits being -3.82 and 4.10, respectively. Therefore, the subjective responses could be used as an indicator for predicting physiological strain while wearing SRBA. This study could provide fundamental knowledge for the usage of SRBA and the development of physiological heat strain assessment. abstract_id: PUBMED:33192573 Optimizing the Use of Phase Change Material Vests Worn During Explosives Ordnance Disposal Operations in Hot Conditions. Phase change material (PCM) cooling garments' efficacy is limited by the duration of cooling provided. The purpose of this study was to evaluate the effect of replacing a PCM vest during a rest period on physiological and perceptual responses during explosive ordnance disposal (EOD)-related activity. Six non-heat-acclimated males undertook three trials (consisting of 2 × 3 × 16.5 min activity cycles interspersed with one 10 min rest period) in 40°C, 12% relative humidity whilst wearing a ≈38 kg EOD suit.
Participants either did not wear a PCM cooling vest (NoPCM), wore one PCM vest throughout (PCM1), or changed the PCM vest in the 10 min rest period (PCM2). Rectal temperature (Tre), mean skin temperature (Tskin), heart rate (HR), Physiological Strain Index (PSI), ratings of perceived exertion, temperature sensation and thermal comfort were compared at the end of each activity cycle and at the end of the trial. Data are displayed as mean [95% CI]. After the rest period, the rise in Tre was attenuated in PCM2 compared to NoPCM and PCM1 (-0.57 [-0.95, -0.20]°C and -0.46 [-0.81, -0.11]°C, respectively). The rise in HR and Tskin was also attenuated in PCM2 compared to NoPCM and PCM1 (-23 [-29, -16] beats·min-1 and -17 [-28, -6.0] beats·min-1; -0.61 [-1.21, -0.10]°C and -0.89 [-1.37, -0.42]°C, respectively). This resulted in PSI being lower in PCM2 compared to NoPCM and PCM1 (-2.2 [-3.1, -1.4] and -0.8 [-1.3, -0.4], respectively). More favorable perceptions were also observed in PCM2 vs. both NoPCM and PCM1 (p < 0.01). Thermal perceptual measures were similar between NoPCM and PCM1, and the rise in Tre after the rest period tended to be greater in PCM1 than NoPCM. These findings suggest that replacing a PCM vest better attenuates rises in both physiological and perceptual strain compared to when a PCM vest is not replaced. Furthermore, not replacing a PCM vest that has exhausted its cooling capacity can increase the level of heat strain experienced by the wearer. abstract_id: PUBMED:34513130 Heat strain in chemical protective clothing in hot-humid environment: Effects of clothing thermal properties. Heat strain experienced by individuals wearing chemical protective clothing (CPC) is severe and dangerous, especially in hot-humid environments. The development of material science and interdisciplinary studies including ergonomics, physiology and heat transfer is urgently required for the reduction of heat strain. The aim of this paper was to study the relationship among clothing thermal properties, physiological responses and environmental conditions. Three kinds of CPC were selected. Eight participants wore CPC and walked (4 km/h, two slopes of 5% and 10%) on a treadmill in an environment of (35±0.5) °C and RH of (60±5)%. Core temperature, mean skin temperature, heart rate, heat storage and tolerance time were recorded and analyzed. Physiological responses were significantly affected by the clothing thermal properties and activity intensity in the hot-humid environment. The obtained results can help the further development of heat strain models. New materials with lower evaporative resistance and less weight are necessary to relieve heat strain in hot-humid environments. abstract_id: PUBMED:36810398 FC-EODR: Immersive Humanoid Dual-Arm Dexterous Explosive Ordnance Disposal Robot. In this study, we propose a humanoid dual-arm explosive ordnance disposal (EOD) robot design. First, a seven-degree-of-freedom high-performance collaborative and flexible manipulator is developed, aimed at the transfer and dexterous manipulation of dangerous objects in EOD tasks. Furthermore, an immersively operated humanoid dual-arm dexterous explosive disposal robot (FC-EODR) is designed, which has high passability over complex terrain such as low walls, sloped roads, and stairs. It can remotely detect, manipulate, and remove explosives in dangerous environments through immersive velocity teleoperation. In addition, an autonomous tool-changing system is constructed, which enables the robot to flexibly switch between different tasks.
The effectiveness of the FC-EODR is finally verified through a series of experiments, including the platform performance test, manipulator load test, teleoperated wire trimming, and screw-screwing experiments. This letter provides the technical foundation for robots to replace humans in EOD tasks and emergency situations. Answer: Yes, perceptual indices can estimate physiological strain across a range of environments and metabolic workloads when wearing explosive ordnance disposal (EOD) and chemical protective clothing. A study aimed to determine if the perceptual strain index (PeSI), which uses subjective ratings of thermal sensation and perceived exertion, could estimate the physiological strain index (PSI) across various metabolic workloads and environments while wearing heavy EOD and chemical protective clothing. The results showed a significant moderate relationship between the PeSI and the PSI, with a Pearson's correlation coefficient of r=0.77, indicating good predictive power for differentiating between low and high levels of physiological strain (PUBMED:25865709). Additionally, the study found that the PeSI had a good predictive power when used with two single-threshold cut-offs to differentiate between low and high levels of physiological strain, with the area under the curve for PSI three cut-off being 0.936 and seven cut-off being 0.841. These findings support the use of the PeSI for monitoring physiological strain while wearing EOD and chemical protective clothing, although further research is needed to confirm the validity of the PeSI for active EOD technicians operating in the field (PUBMED:25865709). Another study exploring the association between physiological and perceptual heat strain while wearing stab-resistant body armor (SRBA) found that the PeSI showed a significant moderate association with the PSI. The study concluded that subjective responses could be used as an indicator for predicting physiological strain while wearing SRBA, providing fundamental knowledge for the usage of such protective equipment and the development of physiological heat strain assessment (PUBMED:37209632).
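For context, the PSI referred to in these studies is commonly computed from core temperature and heart rate using the formulation of Moran and colleagues, and the PeSI is built analogously from subjective ratings of thermal sensation and perceived exertion. The sketch below is a minimal illustration of that general approach; the 0-10 rating scales and the helper names are assumptions made for illustration and may differ from the scales used in the cited papers.

def psi(t_core, t_core_rest, hr, hr_rest, t_core_max=39.5, hr_max=180.0):
    """Physiological strain index (0-10) as commonly formulated by Moran et al. (1998)."""
    # Core temperature and heart rate each contribute 0-5, scaled between
    # resting values and criterion maxima (39.5 deg C, 180 beats/min).
    thermal = 5.0 * (t_core - t_core_rest) / (t_core_max - t_core_rest)
    cardio = 5.0 * (hr - hr_rest) / (hr_max - hr_rest)
    return thermal + cardio

def pesi(thermal_sensation, rpe, ts_scale_max=10.0, rpe_scale_max=10.0):
    """Perceptual strain index (0-10) formed from subjective ratings (assumed 0-10 scales)."""
    return 5.0 * (thermal_sensation / ts_scale_max) + 5.0 * (rpe / rpe_scale_max)

# Example: starting at 37.0 deg C and 70 beats/min and reaching 38.2 deg C and
# 165 beats/min gives a PSI of about 6.7; ratings of 7 and 8 give a PeSI of 7.5.
print(round(psi(38.2, 37.0, 165, 70), 1), round(pesi(7, 8), 1))

With formulations of this kind both indices run from 0 to 10, which is why single-threshold cut-offs such as PSI 3 and PSI 7 are natural operating points for the ROC analyses described above.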
Instruction: Are they valid Spanish growth reference charts? Abstracts: abstract_id: PUBMED:22566328 Are they valid Spanish growth reference charts? Objective: To modify the results of the longitudinal study on growth in Navarra (NA 09) by censoring the obesity cases from the population sample, as well as to perform a comparative analysis with the most qualified Spanish and international growth curves. Materials And Methods: All the cases with obesity according to the Cole et al. standards were censored out of the 930 participants in NA 09, the final sample being 782 participants (371 males and 411 females). The results obtained were compared with the Spanish studies by Serra-Majem et al. (enKid study), Carrascosa et al. (ESP 08), and NA 09, which do not censor the obesity cases, and with the Centers for Disease Control and Prevention (CDC, 2000) tables and the WHO tables (WHO, 2007), which apply criteria to exclude unhealthy anthropometric data (obesity and malnutrition). Results: We present the mean values for height, weight, and BMI with their percentile distribution for both genders. When comparing with the Spanish studies, we observe that the evolutionary values of the 3rd and 50th percentiles for height, weight, and BMI are virtually the same; however, the evolutionary values for the 97th percentile for weight and BMI tend to differ more and more. When comparing them to the international standards, the evolutionary values for the 3rd, 50th, and 97th percentiles for BMI lie between both references. Conclusions: For the growth curves and tables to be useful as reference patterns, all obese people should be excluded from their elaboration; otherwise, they should be considered as only descriptive studies of a population with a recognized tendency to excessive body weight, and thus their clinical applicability would be put in question. abstract_id: PUBMED:31104894 Effect of changing reference growth charts on the prevalence of short stature. Introduction: Short stature is a family concern and is a common reason for consultations in paediatrics. Growth charts are an essential diagnostic tool. The objective of this study is to evaluate the impact of changing reference charts in the diagnosis of short stature in a health area. Subjects And Methods: A population-based, cross-sectional, descriptive study was performed in which the heights of children aged 4, 6, 10 and 13 years were compared with the growth charts of the Fundación Orbegozo 2004 Longitudinal and 2011. The prevalence of short stature and the 3rd percentile of the study sample were calculated. Results: There were 12,256 valid records (89% of the population). The prevalence of short stature increased at all ages with the change in the growth charts, with differences in prevalence of 3.6% (95% CI: 2.8 to 4.5) at 4 years; 1.8% (95% CI: 1.3 to 2.3) at 6 years; 2.8% (95% CI: 2.2 to 3.4) at 10 years, and 1.4% (95% CI: 0.8 to 1.9) at 13 years. In absolute numbers, it went from 58 diagnoses of short stature with the 2004 Longitudinal charts (34 boys and 24 girls) to 352 with the 2011 charts (155 boys and 197 girls). Conclusions: The change in reference growth charts has increased the number of diagnoses of short stature six-fold. The pathology found in the cases diagnosed with the 2011 growth charts, but not with the previous charts, will allow us to evaluate the suitability of the change. abstract_id: PUBMED:19419522 New reference growth charts for Japanese girls with Turner syndrome.
Background: Currently used growth charts for Japanese girls with Turner syndrome (TS) were constructed with auxological data obtained before the secular trend in growth reached a plateau. These charts were published in 1992 and may no longer be valid for the evaluation of stature and growth in girls with TS in clinical settings. Thus, we need to establish new clinical growth charts. Methods: The samples for analysis were obtained by a retrospective cohort study. A total of 1867 Japanese girls with TS were registered between 1991 and 2004 for growth hormone (GH) treatment and their pretreatment anthropometric measurements were obtained. Reference growth charts were newly constructed using the LMS method from 1447 girls' cross-sectional data after exclusion of measurements derived from those with the presence of puberty, with previous growth-promoting treatment, or without cytogenetic evidence of TS. Results: The new clinical reference growth charts differ from the old charts. Secular trends can be detected in both height and weight. Mean adult height on the new chart is 141.2 cm, 3.0 cm taller than the old data. This result seems attributable to the secular trend observed during the same period in Japanese women. Conclusions: The newly constructed clinical reference growth charts for Japanese girls with TS seem to be better for the evaluation of growth in girls with TS born after approximately 1970, although selection bias and some other limitations in the present study should be kept in mind. abstract_id: PUBMED:37814549 Growth reference charts for children with hypochondroplasia. Hypochondroplasia (HCH) is a rare skeletal dysplasia causing mild short stature. There is a paucity of growth reference charts for this population. Anthropometric data were collected to generate height, weight, and head circumference (HC) growth reference charts for children with a diagnosis of HCH. Mixed longitudinal anthropometric data and genetic analysis results were collected from 14 European specialized skeletal dysplasia centers. Growth charts were generated using Generalized Additive Models for Location, Scale, and Shape. Measurements for height (983), weight (896), and HC (389) were collected from 188 (79 female) children with a diagnosis of HCH aged 0-18 years. Of the 84 children who underwent genetic testing, a pathogenic variant in FGFR3 was identified in 92% (77). The data were used to generate growth references for height, weight, and HC, plotted as charts with seven centiles from 2nd to 98th, for ages 0-4 and 0-16 years. HCH-specific growth charts are important in the clinical care of these children. They help to identify if other comorbidities are present that affect growth and development and serve as an important benchmark for any prospective interventional research studies and trials. abstract_id: PUBMED:26126922 Synthetic growth reference charts. Objectives: To reanalyze the between-population variance in height, weight, and body mass index (BMI), and to provide a globally applicable technique for generating synthetic growth reference charts. Methods: Using a baseline set of 196 female and 197 male growth studies published since 1831, common factors of height, weight, and BMI are extracted via Principal Components separately for height, weight, and BMI. Combining information from single growth studies and the common factors using in principle a Bayesian rationale allows for provision of completed reference charts. 
Results: The suggested approach can be used for generating synthetic growth reference charts with LMS values for height, weight, and BMI, from birth to maturity, from any limited set of height and weight measurements of a given population. Conclusion: Generating synthetic growth reference charts by incorporating information from a large set of reference growth studies seems suitable for populations with no autochthonous references at hand yet. abstract_id: PUBMED:33914307 Polish growth charts for preterm infants - comparison with reference Fenton charts. Objectives: Proper infant classification, particularly a preterm infant, as small or large for gestational age, is crucial to undertake activities to improve postnatal outcomes. This study aimed to assess the usability of the Fenton preterm growth charts to evaluate the anthropometric parameters of Polish preterm neonates. Material And Methods: In this single-center, retrospective study data extracted from the medical documentation of preterm neonates born 2002-2013 were analyzed. Body weight, body length, and head circumference were evaluated and used to develop growth charts, which were compared with the reference Fenton growth charts. Results: This study included 3,205 preterm neonates, of whom 937 were born before 30 weeks of pregnancy. Overall, 11.04%, 3.3%, and 5.2% of neonates were below the 10th percentile on the Fenton charts for birth weight, body length, and head circumference, respectively. Only 26 (6.67%) of 390 analyzed anthropological parameters differed significantly between the study and the Fenton groups. Statistically significant differences between the study and the Fenton populations were found only in body length for both sexes, and in head circumference for female neonates. Conclusions: The growth charts developed in this study for a population of Polish preterm neonates corresponded to the Fenton charts in terms of birth weight but differed in terms of body length and head circumference. Our findings suggest the need to evaluate growth charts for Polish preterm newborns. abstract_id: PUBMED:20139683 Growth charts compared. Growth assessment of children requires comparison of growth measurements with normative references, usually in the form of growth charts. Traditionally growth charts (growth references) have described the growth of children who were considered normal and were living in a defined geographic area. The new WHO growth charts, on the other hand, are growth standards that aim to represent growth as it occurs worldwide. Moreover, they represent growth as it occurs under optimal circumstances and is thought to be conducive to optimal long-term health. Most growth references are single-country references, exemplified here by charts from the UK, the Netherlands and the USA. By contrast, the Euro-Growth reference and the WHO standard are based on multinational samples. Comparison of these five charts reveals surprisingly large differences that are for the most part unexplained. Differences between the WHO charts and other charts are only partially explained by the use of a prescriptive approach and by the data truncation employed. The large differences between charts probably are of merely trivial consequence when charts are used in monitoring individual children. When charts are used in health assessment of groups of children, the impact of the differences, however, is substantial. abstract_id: PUBMED:29268248 Gender-Specific Antenatal Growth Reference Charts in Monochorionic Twins. 
Objective: To create antenatal gender-specific reference growth charts in uncomplicated monochorionic diamniotic twins. Materials And Methods: This is a prospective longitudinal study in which uncomplicated monochorionic (MC) twin pregnancies were included from 23 + 4 weeks of gestation onwards. Estimated fetal weight (EFW) and biometric parameters (biparietal diameter, head circumference, abdominal circumference, and femur length) were evaluated in both fetuses every 2 weeks using standardized methodology. Maternal and fetal complications were excluded. Charts were fitted for each biometric parameter and EFW in relation to gestational age and fetal gender using multilevel mixed models. Results: The final analysis included a total of 456 ultrasound examinations in 62 MC twins, with a mean of 7 scans per pregnancy (range 5-8). The mean as well as 5th and 95th percentiles of each biometric parameter and EFW were adjusted in relation to gender and gestational age between 24 and 37 weeks of gestation. Male fetuses have higher reference values than females, and the disparity is larger in the upper centiles of the distribution. Discussion: We provide gender-specific reference growth charts for MC twins. We suggest that these charts will improve prenatal MC twin assessment and surveillance, with a more accurate classification of normal or growth-restricted fetuses adjusted per sex. abstract_id: PUBMED:20143011 Comparison between the growth of Brazilian children and adolescents and the reference growth charts: data from a Brazilian project. Objective: To compare the growth of Brazilian children and adolescents with reference growth charts. Methods: School-based cross-sectional study involving 41,654 students (23,328 boys and 18,326 girls) aged 7 to 17 years. Physical growth variables (body weight, height) and body mass index (BMI) were measured. Height, 50th percentile, and BMI, 85th percentile, were compared with the World Health Organization (WHO) reference values, and weight, 50th percentile, was compared with the Centers for Disease Control and Prevention (CDC) growth charts. Results: Mean height and body weight of Brazilian schoolchildren were generally higher than the WHO and CDC reference values for both genders at most ages. BMI values of Brazilian schoolchildren were significantly lower than the WHO obesity values at all ages. Conclusion: The growth in body weight, height, and BMI of children and adolescents in the urban area of Brazil is increasingly similar to those reported in developed countries. Further prospective studies should be conducted in Brazil comparing their results with the international growth charts to enable more accurate inferences. abstract_id: PUBMED:25456302 Head circumference growth reference charts of children younger than 7 years in Chinese rural areas. Background: The head circumference growth reference charts for children in China are presently based on urban children. However, the references may not apply to rural children because of the differences between urban and rural areas, such as economy, culture, and dietary habits. Our objective was to provide a reliable continuous set of head circumference growth reference charts for male and female children less than 7 years of age in Chinese rural areas. Methods: Children in our study were identified by multistage stratified cluster sampling from rural areas of 10 provinces in China. Questionnaire survey and anthropometric measurements were conducted in data collection. 
Head circumference was measured with a nonelastic tape on a line passing over the glabella and posterior occipital protrusion in children. We compared the fiftieth percentile of our cross-sectional data with the data of Chinese cities, the World Health Organization, and the United States. Results: A total of 95,904 children (48,722 boys and 47,182 girls) were included in the study. We present age- and sex-appropriate head circumference growth charts for children younger than 7 years in Chinese rural areas. After 2 years of age, the head circumference percentiles of children in rural China are much smaller than those of children in Chinese urban areas and than the World Health Organization and US percentiles. Conclusions: Head circumference percentiles can be applied in growth monitoring, but current head circumference growth references promulgated in urban China may not be suitable for rural areas in China. Providing head circumference growth reference charts for rural Chinese children who are younger than 7 years old is very important. Answer: The abstract from PUBMED:22566328 discusses the modification of the longitudinal study on growth in Navarra (NA 09) by censoring obesity cases from the population sample and comparing the results with other Spanish and international growth curves. The study concludes that for growth curves and tables to be useful as reference patterns, all obese people should be excluded from their elaboration; otherwise, they should be considered as only descriptive studies of a population with a recognized tendency to excessive body weight, which questions their clinical applicability. This suggests that the Spanish growth reference charts in question may not be entirely valid if they include data from obese individuals, as this could skew the results and not accurately represent healthy growth patterns. Therefore, based on the information provided in the abstract, it can be inferred that the Spanish growth reference charts may have limitations in their validity, particularly if they do not exclude obese individuals from their datasets. The study implies that for growth reference charts to be valid, they should be constructed without the inclusion of data from obese individuals to ensure they accurately reflect the growth patterns of a healthy population (PUBMED:22566328).
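The growth-chart abstracts in this entry repeatedly rely on LMS summaries (L for skewness, M for the median, S for the coefficient of variation) and on percentile comparisons. As a rough illustration of how those quantities are used, the sketch below converts a single measurement into a z-score and centile with Cole's standard LMS formula; the L, M and S values in the example are invented for illustration and are not taken from any of the cited references.

```python
from math import log
from statistics import NormalDist

def lms_z_score(measurement: float, L: float, M: float, S: float) -> float:
    """Cole's LMS transformation: convert a raw measurement to a z-score."""
    if L == 0:
        return log(measurement / M) / S
    return ((measurement / M) ** L - 1.0) / (L * S)

def lms_centile(measurement: float, L: float, M: float, S: float) -> float:
    """Centile (0-100) corresponding to the z-score under a normal distribution."""
    return 100.0 * NormalDist().cdf(lms_z_score(measurement, L, M, S))

# Hypothetical LMS values for weight (kg) at one age point -- illustration only.
L_, M_, S_ = -0.35, 16.0, 0.11
print(round(lms_z_score(18.5, L_, M_, S_), 2))   # z-score for an 18.5 kg child
print(round(lms_centile(18.5, L_, M_, S_), 1))   # corresponding centile
```

A z-score of 0 sits on the reference median; the 10th percentile referred to in several of the abstracts corresponds to a z-score of about -1.28.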
Instruction: Can continuity of care be improved? Abstracts: abstract_id: PUBMED:17018200 What is 'continuity of care'? Continuity of care is concerned with the quality of care over time. There are two important perspectives on this. Traditionally, continuity of care is idealized in the patient's experience of a 'continuous caring relationship' with an identified health care professional. For providers in vertically integrated systems of care, the contrasting ideal is the delivery of a 'seamless service' through integration, coordination and the sharing of information between different providers. As patients' health care needs can now only rarely be met by a single professional, multidimensional models of continuity have had to be developed to accommodate the possibility of achieving both ideals simultaneously. Continuity of care may, therefore, be viewed from the perspective of either patient or provider. Continuity in the experience of care relates conceptually to patients' satisfaction with both the interpersonal aspects of care and the coordination of that care. Experienced continuity may be valued in its own right. In contrast, continuity in the delivery of care cannot be evaluated solely through patients' experiences, and is related to important aspects of services such as 'case-management' and 'multidisciplinary team working'. From a provider perspective, the focus is on new models of service delivery and improved patient outcomes. A full consideration of continuity of care should therefore cover both of these distinct perspectives, exploring how these come together to enhance the patient-centredness of care. abstract_id: PUBMED:33792437 Defragmenting the Day: The Effect of Full-Day Continuity Clinics on Continuity of Care and Perceptions of Clinic. Problem: Traditional half-day continuity clinics within primary care residency programs require residents to split time between their assigned clinical rotation and continuity clinic, which can have detrimental effects on resident experiences and patient care within continuity clinics. Most previous efforts to separate inpatient and outpatient obligations have employed block scheduling models, which entail significant rearrangements to clinical rotations, team structures, and didactic education and have yielded mixed effects on continuity of care. A full-day continuity clinic schedule within a traditional, non-block rotation framework holds potential to de-conflict resident schedules without the logistical rearrangements required to adopt block scheduling models, but no literature has described the effect of such full-day continuity clinics on continuity of care or resident experiences within continuity clinic. Intervention: A pediatric residency program implemented full-day continuity clinics within a traditional rotation framework. We examined the change in continuity for physician (PHY) measure in the six months prior to versus the six months following the switch, as well as changes in how often residents saw clinic patients in follow-up and personally followed up clinic laboratory and radiology results, which we term episodic follow-up. Resident and attending perceptions of full-day continuity clinics were measured using a survey administered 5-7 months after the switch. Context: The switch to full-day continuity clinics occurred in January 2018 within the Wright State University/Wright-Patterson Medical Center Pediatric Residency Program. 
The program has 46 residents who are assigned to one of two continuity clinic sites, each of which implemented the full-day continuity clinics simultaneously. Outcome: The PHY for residents at one clinic decreased slightly from 18.0% to 13.6% (p<.001) with full-day continuity clinics but was unchanged at another clinic [60.6% vs 59.5%, p=.86]. Measures of episodic follow-up were unchanged. Residents (32/46 = 77% responding) and attendings (6/8 = 75% responding) indicated full-day continuity clinics improved residents' balance of inpatient and outpatient obligations, preparation for clinic, continuity relationships with patients, and clinic satisfaction. Lessons Learned: Full-day continuity clinics within a traditional rotation framework had mixed effects on continuity of care but improved residents' experiences within clinic. This model offers a viable alternative to block scheduling models for primary care residency programs wishing to defragment resident schedules. Supplemental data for this article is available online at https://doi.org/10.1080/10401334.2021.1879652. abstract_id: PUBMED:15798043 Interpersonal continuity of care and care outcomes: a critical review. Purpose: We wanted to undertake a critical review of the medical literature regarding the relationships between interpersonal continuity of care and the outcomes and cost of health care. Methods: A search of the MEDLINE database from 1966 through April 2002 was conducted by the primary author to find original English language articles focusing on interpersonal continuity of patient care. The articles were then screened to select those articles focusing on the relationship between interpersonal continuity and the outcome or cost of care. These articles were systematically reviewed and analyzed by both authors for study method, measurement technique, and quality of evidence. Results: Forty-one research articles reporting the results of 40 studies were identified that addressed the relationship between interpersonal continuity and care outcome. A total of 81 separate care outcomes were reported in these articles. Fifty-one outcomes were significantly improved and only 2 were significantly worse in association with interpersonal continuity. Twenty-two articles reported the results of 20 studies of the relationship between interpersonal continuity and cost. These studies reported significantly lower cost or utilization for 35 of 41 cost variables in association with interpersonal continuity. Conclusions: Although the available literature reflects persistent methodologic problems, it is likely that a significant association exists between interpersonal continuity and improved preventive care and reduced hospitalization. Future research in this area should address more specific and measurable outcomes and more direct costs and should seek to define and measure interpersonal continuity more explicitly. abstract_id: PUBMED:24252087 Continuity of care in day surgical care - perspective of patients. Background: The realisation of continuity in day surgical care is analysed in this study. The term 'continuity of care' is used to refer to healthcare processes that take place in time (time flow) and require coordination (coordination flow), rapport (caring relationship flow) and information (information flow). 
Patients undergoing laparoscopic cholecystectomy or inguinal hernia day surgery are ideal candidates for studying the continuity of care, as the diseases are very common and the treatment protocol is mainly the same in different institutions, in addition to which the procedure is elective and most patients have a predictable clinical course. Aim: The aim of the study was to describe, from the day surgery patients' own perspective, how continuity of care was realised at different phases of the treatment, prior to the day of surgery, on the day of surgery and after it. Method: The study population consisted of 203 day surgical patients 10/2009-12/2010 (N = 350, response rate 58%). A questionnaire was developed for this study. Results: Based on the results, the continuity of care was well realised as a rule. Continuity is improved by the fact that patients know the nurse who will look after them in the hospital before the day of surgery and have a chance to meet the nurse even after the operation. Meeting the surgeon who performed the operation afterwards also improves patients' perception of continuation of care. Conclusions: Continuity of care may be improved by ensuring that the patient meets caring staff prior to the day of operation and after the procedure. An important topic for further research would be how continuation of care is realised in the case of other patient groups (e.g. in internal medicine). On the other hand, realisation of continuation of care should also be studied from the viewpoint of those taking part in patient care in order to find similarities/differences between patients' perceptions and professionals' views. Studying interventions aimed to promote continuity of care, for example in patient guidance, would also be of great importance. abstract_id: PUBMED:11523291 Continuity of care: a reconceptualization. Although continuity of care is considered an essential feature of good health care, researchers have used and measured continuity in many different ways, and no clear conceptual framework links continuity to outcomes. This article of offers a reconceptualization and definition of continuity based on agency theory. It posits that the value of continuity is to reduce agency loss by decreasing information asymmetry and increasing goal alignment. Three decades of empirical literature on continuity were examined to assess whether this model would provide greater clarity about continuity. Some authors measured improved information transfer, but more appeared to assume that continuity would lead to better information. Most authors appeared to have assumed that goal alignment was present and did not measure it. The model of continuity based on agency theory appears to provide a useful conceptual tool for health services research and policy. abstract_id: PUBMED:37228731 Continuity of care among diabetic patients in Accra, Ghana. Introduction: Diabetes mellitus is a fast-rising non-contagious disease of global importance that remains a leading cause of indisposition and death. Evidence shows that effective management of diabetes has a close link with continuity of care which is known to be the integral pillar of quality care. This study, therefore, sought to determine the extent of continuity of care between diabetic patients and their care providers as well as factors associated with relational continuity of care. Methodology: This cross-sectional, facility-based study was conducted among diabetics in Accra, Ghana. 
We sampled 401 diabetic patients from three diabetic clinics in the region using a stratified and systematic random sampling technique. Data were collected using a structured questionnaire containing information on socio-demographic characteristics, the four dimensions of continuity of care, and patients' satisfaction. A 5-point Likert scale was used to measure patient's perception of relational, flexible, and team continuity, while most frequent provider continuity was used to measure longitudinal continuity of care. Scores were added for each person and divided by the highest possible score for each domain to estimate the continuity of care index. Data were collected and exported to Stata 15 for analysis. Results: The results show that team continuity was the highest (0.9), followed by relational and flexibility continuity of care (0.8), and longitudinal continuity of care was the least (0.5). Majority of patients experienced high team (97.3%), relational (68.1%), and flexible (65.3%) continuity of care. Most patients (98.3%) were satisfied with the diabetes care they received from healthcare providers. Female subjects had higher odds of experiencing relational continuity of care as compared to male subjects. Furthermore, participants with higher educational levels were five times more likely to experience relational continuity of care than those with lower educational background. Conclusion: The study demonstrated that the majority of diabetics had team continuity of care being the highest experienced among the four domains, followed by flexible and longitudinal being the least experienced. Notably, team and flexible continuity of care had a positive association with relational continuity of care. Higher educational level and being female were associated with relational continuity of care. There is therefore the need for policy action on the adoption of multidisciplinary team-based care. abstract_id: PUBMED:26748569 Improved continuity of care in a resident clinic. Background: For residents in the out-patient clinic, continuity in patient care is an integral and vital aspect of internal medicine training, but is frequently compromised by resident in-patient schedules, the structure of the out-patient clinic and the need to comply with the increasing regulation of duty hours. Method: In this study, we examined whether the creation and implementation of a new team approach, the Firms Model, would improve the continuity of patient care in the internal medicine resident out-patient clinic. Results: Before the implementation of the Firms Model, an examination of a consecutive clinic sample indicated that patients were seen by their assigned resident providers 41.9 per cent of the time (n = 1319 clinic visits). After implementation of the Firms Model, an examination of a consecutive clinic sample indicated that patients were seen by their assigned Firm resident providers 88.9 per cent of the time (n = 1341 clinic visits). Conclusion: Implementation of the Firms Model resulted in a statistically significant increase in the percentage of patients seen by assigned resident providers in an internal medicine out-patient clinic, culminating in a substantial improvement in continuity of care within our resident out-patient clinic. We discuss the implications of these findings. Continuity in patient care is an integral and vital aspect of internal medicine training, but is frequently compromised. abstract_id: PUBMED:36951508 Continuity of care and advanced prostate cancer. 
Background: Continuity of care is an important element of advanced prostate cancer care due to the availability of multiple treatment options, and associated toxicity. However, the association between continuity of care and outcomes across different racial groups remains unclear. Objective: To assess the association of provider continuity of care with outcomes among Medicare fee-for-service beneficiaries with advanced prostate cancer and its variation by race. Design: Retrospective cohort study using Surveillance, Epidemiology, and End Results (SEER)-Medicare data. Subjects: African American and white Medicare beneficiaries aged 66 or older, and diagnosed with advanced prostate cancer between 2000 and 2011. At least 5 years of follow-up data for the cohort was used. Measures: Short-term outcomes were emergency room (ER) visits, hospitalizations, and cost during acute survivorship phase (2-year post-diagnosis), and mortality (all-cause and prostate cancer-specific) during the follow-up period. We calculated continuity of care using Continuity of Care Index (COCI) and Usual Provider Care Index (UPCI), for all visits, oncology visits, and primary care visits in acute survivorship phase. We used Poisson models for ER visits and hospitalizations, and log-link GLM for cost. Cox model and Fine-Gray competing risk models were used for survival analysis, weighted by propensity score. We performed similar analysis for continuity of care in the 2-year period following acute survivorship phase. Results: One unit increase in COCI was associated with reduction in short-term ER visits (incidence rate ratio [IRR] = 0.65, 95% confidence interval [CI] 0.64, 0.67), hospitalizations (IRR = 0.65, 95% CI 0.64, 0.67), and cost (0.64, 95% CI 0.61, 0.66) and lower hazard of long-term mortality. Magnitude of these associations differed between African American and white patients. We observed comparable results for continuity of care in the follow-up period. Conclusions: Continuity of care was associated with improved outcomes. The benefits of higher continuity of care were greater for African Americans, compared to white patients. Advanced prostate cancer survivorship care must integrate appropriate strategies to promote continuity of care. abstract_id: PUBMED:32046574 Care Continuity and Care Coordination: A Preliminary Examination of Their Effects on Hospitalization. Both care continuity and care coordination are important features of the health care system. However, little is known about the relationship between care continuity and care coordination, their effects on hospitalizations, and whether these effects vary across patients with various levels of comorbidity. This study employed a panel study design with a 3-year follow-up from 2007 to 2011 in Taiwan's universal health coverage system. Patients aged 18 years or older who were newly diagnosed with diabetes in 2007 were included in the study. We found that the correlation between care continuity and care coordination was low. Patients with higher levels of care continuity or care coordination were less likely to experience hospitalization for diabetes-related conditions. Furthermore, both care continuity and care coordination showed stronger effects for patients with higher comorbidity scores. Improving care continuity and coordination for patients with multiple chronic conditions is the right direction for policymakers. abstract_id: PUBMED:33738479 Is relational continuity of care as important to people as policy makers think? 
Preferences for continuity of care in primary care. Background: In 2005, the Portuguese government launched a Primary Care reform that aimed to reinforce continuity of care. After a promising start, the reform is still incomplete and continuity has been compromised by the lack of General Practice doctors. Objective: This study evaluates public preferences for relational continuity of care alongside other attributes of Primary Care services in Portugal. Methods: We use a discrete choice experiment (DCE) to evaluate preferences and estimate the population's willingness to pay (WTP) for Primary Care attributes. We use a sequential, mixed-methods approach to develop a D-efficient fractional factorial design for the DCE. Five attributes were included in the DCE and there were 32 DCE choice sets. The data collection was conducted in 2014 and the final sample had 517 respondents. A random parameters multinomial logit was used to analyse the data. Results: We find that respondents value relational continuity of care, but that the current focus of the Portuguese NHS on relational continuity at the expense of other attributes is too simplistic. Conclusions: Relational continuity should be part of a broader policy that emphasizes person-centred care and considers the preferences of patients for Primary Care attributes. Answer: Yes, continuity of care can be improved through various strategies and interventions. The literature suggests that continuity of care is a multifaceted concept that can be enhanced from both patient and provider perspectives. For instance, the implementation of full-day continuity clinics within a traditional rotation framework has been shown to improve residents' experiences within the clinic and may offer a viable alternative to block scheduling models for primary care residency programs, potentially leading to better continuity of care (PUBMED:33792437). Additionally, the creation and implementation of a new team approach, such as the Firms Model in an internal medicine resident outpatient clinic, resulted in a statistically significant increase in the percentage of patients seen by assigned resident providers, thus substantially improving continuity of care (PUBMED:26748569). Moreover, continuity of care can be enhanced by ensuring that patients meet caring staff prior to the day of operation and after the procedure, as this has been found to improve patients' perception of continuity (PUBMED:24252087). In the context of diabetes care, the adoption of multidisciplinary team-based care has been suggested as a policy action to improve continuity of care, with higher educational level and being female associated with relational continuity of care (PUBMED:37228731). Furthermore, continuity of care has been linked to improved outcomes and reduced costs, suggesting that efforts to improve continuity can have significant benefits for both patients and the healthcare system (PUBMED:15798043). For advanced prostate cancer care, continuity of care was associated with improved outcomes, and the benefits were greater for African Americans compared to white patients, indicating the importance of promoting continuity of care in survivorship care (PUBMED:36951508). Lastly, improving care continuity and coordination, especially for patients with multiple chronic conditions, is recommended as the right direction for policymakers (PUBMED:32046574). 
However, it is important to note that while relational continuity is valued, it should be part of a broader policy that emphasizes person-centered care and considers the preferences of patients for primary care attributes (PUBMED:33738479). In summary, continuity of care can be improved through organizational changes, policy actions, and by taking into account the preferences and experiences of patients. These improvements can lead to better patient outcomes, increased satisfaction, and potentially lower healthcare costs.
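Several studies in this entry quantify continuity from visit records: a continuity-for-physician percentage, summed domain scores divided by the maximum possible score, and the Continuity of Care Index and Usual Provider Care Index used in the prostate cancer study. The exact operational definitions differ between papers and are not spelled out in the abstracts, so the sketch below simply shows two widely used visit-based measures (the usual-provider fraction and the Bice-Boxerman concentration index) on an invented visit list; treat it as one plausible reading rather than the formula any particular study used.

```python
from collections import Counter

def usual_provider_continuity(provider_per_visit: list[str]) -> float:
    """UPC: share of visits made to the single most-seen provider."""
    n = len(provider_per_visit)
    return max(Counter(provider_per_visit).values()) / n

def bice_boxerman_coc(provider_per_visit: list[str]) -> float:
    """Bice-Boxerman COC: (sum(n_j^2) - N) / (N * (N - 1)), where n_j is the
    number of visits to provider j and N is the total number of visits."""
    n = len(provider_per_visit)
    if n < 2:
        raise ValueError("COC needs at least two visits")
    sum_sq = sum(c * c for c in Counter(provider_per_visit).values())
    return (sum_sq - n) / (n * (n - 1))

# Invented example: 8 visits spread over 3 providers.
visits = ["dr_a", "dr_a", "dr_b", "dr_a", "dr_c", "dr_a", "dr_b", "dr_a"]
print(round(usual_provider_continuity(visits), 2))  # 0.62
print(round(bice_boxerman_coc(visits), 2))          # ~0.39
```

Both indices reach 1.0 when every visit is with the same provider and fall as care is dispersed across more providers.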
Instruction: Does Post-Transplant Maintenance Therapy With Tyrosine Kinase Inhibitors Improve Outcomes of Patients With High-Risk Philadelphia Chromosome-Positive Leukemia? Abstracts: abstract_id: PUBMED:27297665 Does Post-Transplant Maintenance Therapy With Tyrosine Kinase Inhibitors Improve Outcomes of Patients With High-Risk Philadelphia Chromosome-Positive Leukemia? Introduction: The effect of post-transplant maintenance tyrosine kinase inhibitors (TKIs) on the outcomes of allogeneic hematopoietic stem cell transplantation in high-risk Philadelphia chromosome-positive (Ph(+)) leukemia remains unknown. Patients And Methods: A retrospective analysis that included allograft recipients with accelerated phase and blast phase chronic myeloid leukemia or Ph(+) acute lymphoblastic leukemia who had received post-transplant maintenance TKI therapy from 2004 to 2014. Results: A total of 26 patients, 9 with accelerated phase/blast phase CML and 17 with Ph(+) acute lymphoblastic leukemia, received maintenance post-transplant therapy with imatinib, dasatinib, nilotinib, or ponatinib. The TKI was selected according to the pretransplantation TKI response, anticipated toxicities, and ABL1 domain mutations, when present. Newer generation TKIs were initiated at a ≥ 50% dose reduction from the standard pretransplantation dosing to limit the toxicities and avoid therapy interruptions. TKIs were started a median of 100 days (range, 28-238 days) after transplantation and were administered for a median of 16 months (range, 8 days to 105 months). Eight patients discontinued therapy because of adverse events. With a median follow-up of 3.6 years (range, 4 months to 8.7 years), the 5-year relapse-free survival rate was 61%. All 3 patients who developed a relapse underwent successful salvage treatment and remained disease-free. The 5-year overall survival rate was 78%. Conclusion: Maintenance TKI therapy after transplantation is feasible and might reduce the incidence of relapses and improve outcomes after allogeneic hematopoietic stem cell transplantation for patients with high-risk Ph(+) leukemia. abstract_id: PUBMED:32800519 Effect of Prophylactic Post-transplant Ponatinib Administration on Outcomes in Patients With Philadelphia Chromosome-positive Acute Lymphoblastic Leukemia. Background: The objective of the present retrospective study was to evaluate the effect of ponatinib administration as maintenance therapy on the outcomes after allogeneic hematopoietic stem cell transplantation in patients with Philadelphia chromosome-positive acute lymphoblastic leukemia. Patients And Methods: We retrospectively analyzed the data from 34 consecutive patients treated at our institution from January 2008 to June 2019. We had administered post-transplant tyrosine kinase inhibitors preemptively before December 2017. Thereafter, we had initiated the prophylactic use of post-transplant ponatinib. The initial ponatinib dose was 15 mg/d. Ponatinib plasma trough levels were measured using the liquid chromatography-tandem mass spectrometry method 8 days after the first administration and subsequently. Results: Nine patients received ponatinib maintenance. The 2-year overall survival and leukemia-free survival in the ponatinib maintenance group tended to be better than that in the non-ponatinib group (100% vs. 70.5%, P = .10; and 100% vs. 50.8%, P = .02, respectively). In the first 7 of the 9 consecutive patients, the median plasma concentration after ponatinib administration (15 mg/d) was 15.6 ng/mL (range, 4.8-23.3 ng/mL). 
Although the treatment schedule for 1 patient was altered because of adverse effects (elevation of serum amylase and neutropenia), ponatinib administration was continued for all the patients, except for 1 patient with molecular relapse. One patient developed a transient elevation of serum lipase. No patient presented with any arterial occlusive events. Conclusion: Our results have indicated that the strategy of ponatinib maintenance after allogeneic hematopoietic stem cell transplantation is safe, efficacious, and promising. abstract_id: PUBMED:33518401 Epstein-Barr virus-associated post-transplant lymphoproliferative disease during dasatinib treatment occurred 10 years after umbilical cord blood transplantation. Post-transplant lymphoproliferative disease (PTLD) is defined as a lymphoma that occurs after solid-organ or hematopoietic stem-cell transplantation (HSCT), caused by immunosuppression and Epstein-Barr virus (EBV) reactivation. It is an important post-transplant complication that can be fatal. After HSCT, most PTLD occurs within 2 years. Recent evidence suggests that tyrosine kinase inhibitors (TKIs) are expected to be effective maintenance therapy after HSCT for Philadelphia chromosome-positive leukemia. However, it is unclear whether the use of TKIs might pose a risk of developing PTLD after HSCT. We present the first case of late-onset PTLD during dasatinib treatment, which occurred 10 years after umbilical cord blood transplantation (CBT). A 59-year-old man who received CBT for chronic myeloid leukemia blast phase needed long-term dasatinib therapy for molecular relapse. Ten years after CBT, he developed diffuse-large B-cell lymphoma (DLBCL). We observed chimerism of the DLBCL sample, which indicated complete donor type and EBV-DNA, and the patient was diagnosed with PTLD. Because of treatment resistance, he died 6 months after PTLD onset. Although he received no long-term administration of immunosuppressive agents, he received long-term dasatinib treatment, which suggests that prolonged dasatinib use after CBT caused EBV reactivation and led to PTLD. Our case suggests that the potential contribution of molecular-targeted agents after HSCT to the development of PTLD should be carefully considered. abstract_id: PUBMED:25842050 Impact of Additional Cytogenetic Abnormalities in Adults with Philadelphia Chromosome-Positive Acute Lymphoblastic Leukemia Undergoing Allogeneic Hematopoietic Cell Transplantation. The occurrence of additional cytogenetic abnormalities (ACAs) is common in Philadelphia chromosome-positive acute lymphoblastic leukemia (Ph+ ALL) but is of unknown significance in the tyrosine kinase inhibitor (TKI) era. We retrospectively analyzed data from a consecutive case series of adults with Ph+ ALL who had undergone allogeneic hematopoietic cell transplantation (alloHCT) at City of Hope between 2003 and 2014. Among 130 adults with Ph+ ALL who had TKI therapy before alloHCT, 78 patients had available data on conventional cytogenetics at diagnosis and were eligible for outcomes analysis. ACAs were observed in 41 patients (53%). There were no statistically significant differences in median age, median initial WBC count, post-HCT TKI maintenance, or disease status at the time of transplant between the Ph-only and ACA cohorts; however, the Ph-only cohort had a higher rate of minimal residual disease positivity at the time of HCT. 
Three-year leukemia-free survival (79.8% versus 39.5%, P = .01) and 3-year overall survival (83% versus 45.6%, P = .02) were superior in the Ph-only cohort compared with the ACA cohort, respectively. Monosomy 7 was the most common additional aberration observed in our ACA cohort (n = 12). Thus, when TKI therapy and alloHCT are used as part of adult Ph+ ALL therapy, the presence of ACAs appears to have a significant deleterious effect on outcomes post-HCT. abstract_id: PUBMED:30093398 Incidence, outcomes, and risk factors of pleural effusion in patients receiving dasatinib therapy for Philadelphia chromosome-positive leukemia. Dasatinib, a second-generation BCR-ABL1 tyrosine kinase inhibitor, is approved for the treatment of chronic myeloid leukemia and Philadelphia chromosome-positive acute lymphoblastic leukemia, both as first-line therapy and after imatinib intolerance or resistance. While generally well tolerated, dasatinib has been associated with a higher risk for pleural effusions. Frequency, risk factors, and outcomes associated with pleural effusion were assessed in two phase 3 trials (DASISION and 034/Dose-optimization) and a pooled population of 11 trials that evaluated patients with chronic myeloid leukemia and Philadelphia chromosome-positive acute lymphoblastic leukemia treated with dasatinib (including DASISION and 034/Dose-optimization). In this largest assessment of patients across the dasatinib clinical trial program (N=2712), pleural effusion developed in 6-9% of patients at risk annually in DASISION, and in 5-15% of patients at risk annually in 034/Dose-optimization. With a minimum follow up of 5 and 7 years, drug-related pleural effusion occurred in 28% of patients in DASISION and in 33% of patients in 034/Dose-optimization, respectively. A significant risk factor identified for developing pleural effusion by a multivariate analysis was age. We found that overall responses to dasatinib, progression-free survival, and overall survival were similar in patients who developed pleural effusion and in patients who did not. clinicaltrials.gov identifier 00481247; 00123474. abstract_id: PUBMED:25527562 Tyrosine kinase inhibitors improve long-term outcome of allogeneic hematopoietic stem cell transplantation for adult patients with Philadelphia chromosome positive acute lymphoblastic leukemia. This study aimed to determine the impact of tyrosine kinase inhibitors given pre- and post-allogeneic stem cell transplantation on long-term outcome of patients allografted for Philadelphia chromosome-positive acute lymphoblastic leukemia. This retrospective analysis from the EBMT Acute Leukemia Working Party included 473 de novo Philadelphia chromosome-positive acute lymphoblastic leukemia patients in first complete remission who underwent an allogeneic stem cell transplantation using a human leukocyte antigen-identical sibling or human leukocyte antigen-matched unrelated donor between 2000 and 2010. Three hundred and ninety patients received tyrosine kinase inhibitors before transplant, 329 at induction and 274 at consolidation. Kaplan-Meier estimates of leukemia-free survival, overall survival, cumulative incidences of relapse incidence, and non-relapse mortality at five years were 38%, 46%, 36% and 26%, respectively. In multivariate analysis, tyrosine-kinase inhibitors given before allogeneic stem cell transplantation was associated with a better overall survival (HR=0.68; P=0.04) and was associated with lower relapse incidence (HR=0.5; P=0.01). 
In the post-transplant period, multivariate analysis identified prophylactic tyrosine-kinase inhibitor administration to be a significant factor for improved leukemia-free survival (HR=0.44; P=0.002) and overall survival (HR=0.42; P=0.004), and a lower relapse incidence (HR=0.40; P=0.01). Over the past decade, administration of tyrosine kinase inhibitors before allogeneic stem cell transplantation has significantly improved the long-term allogeneic stem cell transplantation outcome of adult Philadelphia chromosome-positive acute lymphoblastic leukemia. Prospective studies will be of great interest to further confirm the potential benefit of the prophylactic use of tyrosine kinase inhibitors in the post-transplant setting. abstract_id: PUBMED:33322172 Clinical Outcome in Pediatric Patients with Philadelphia Chromosome Positive ALL Treated with Tyrosine Kinase Inhibitors Plus Chemotherapy-The Experience of a Polish Pediatric Leukemia and Lymphoma Study Group. The treatment of children with Philadelphia chromosome positive acute lymphoblastic leukemia (ALL Ph+) is currently unsuccessful. The use of tyrosine kinase inhibitors (TKIs) combined with chemotherapy has modernized ALL Ph+ therapy and appears to improve clinical outcome. We report herein the toxicity events and results of children with ALL Ph+ treated according to the EsPhALL2010 protocol (the European intergroup study of post-induction treatment of Philadelphia chromosome positive ALL) in 15 hemato-oncological centers in Poland between the years 2012 and 2019. The study group included 31 patients, aged 1-18 years, with newly diagnosed ALL Ph+. All patients received TKIs. Imatinib was used in 30 patients, and ponatinib was applied in one child due to T315I and M244V mutation. During therapy, imatinib was replaced with dasatinib in three children. The overall survival of children with ALL Ph+ treated according to the EsPhALL2010 protocol was 74.1% and event-free survival was 54.2% after five years. The cumulative death risk of the study group at five years was estimated at 25.9%, and its cumulative relapse risk was 30%. Our treatment outcomes are still disappointing compared to other reports. Improvements in supportive care and emphasis placed on the determination of minimal residual disease at successive time points, which will impact decisions on therapy, may be required. abstract_id: PUBMED:23909077 Prognostic factors and outcomes of unrelated bone marrow transplantation for Philadelphia chromosome positive acute lymphoblastic leukemia (Ph+ALL) pre-treated with tyrosine kinase inhibitors. Background: The treatment and prognosis of Acute Lymphoblastic Leukemia (ALL), including Philadelphia chromosome positive ALL (Ph+ALL), a poor prognostic factor, has changed with the introduction of tyrosine kinase inhibitors (TKIs). Nevertheless, allogeneic hematopoietic cell transplantation (allo-HCT) is still recommended as the first-line curative treatment. To date, no study has investigated the prognostic factors and outcomes of unrelated bone marrow transplantation (u-BMT) for Ph+ALL following pre-transplant treatment with a TKI-containing regimen. Methods: We retrospectively evaluated 15 transplantations of 14 patients with Ph+ALL pre-treated with a TKI-containing regimen at our institute. The 14 patients comprised 11 males and 3 females, with a median age of 50 years (range: 19-64). We performed univariate and multivariate analyses of risk factors that contributed to overall survival (OS) or leukemia-free survival (LFS). 
Results: Three-year OS of the patients with molecular complete remission (MCR) and with non-MCR at transplantation were 89% and 40% (p = 0.006), respectively, and three-year LFS rates were 79% and 0% (p = 0.001), respectively. Univariate analysis revealed that first hematological complete remission (HCR1) and MCR at transplant were significantly related to better OS and LFS. Multivariate analysis showed that MCR at transplant was significantly associated with better OS and LFS. Conclusions: In agreement with a previous study that included other stem cell sources, u-BMT was deemed feasible for the treatment of Ph+ALL. Analysis of a larger cohort is required to clarify the prognostic factors that affect transplant outcome in Ph+ALL since the introduction of TKIs. abstract_id: PUBMED:24606654 Second-generation tyrosine kinase inhibitors combined with allogeneic hematopoietic stem cell transplant for Philadelphia chromosome positive leukemia Objective: To investigate the efficacy and safety of second-generation tyrosine kinase inhibitors (TK-II) combined with allogeneic hematopoietic stem cell transplantation (allo-HSCT) in the treatment of high-risk Philadelphia chromosome positive (Ph⁺) leukemia. Methods: The clinical data of 17 cases of high-risk Ph⁺ leukemia patients underwent allo-HSCT were retrospectively analyzed, including 1 case in accelerated phase and 7 cases in blast crises of chronic myeloid leukemia, and 9 cases of Ph⁺ acute lymphoblastic leukemia. Nilotinib or Dasatinib were administered before and (or) after allo-HSCT in all patients. Results: All patients successfully engrafted. Median times to neutrophil and platelet recovery were 12 days (range 10-14) and 15 days (range 11- 23), respectively. Acute GVHD developed in 7 patients: 6 patients had grade 1 to 2 and 1 patient grade 3. Chronic GVHD developed in 6 patients, all were limited and no lethal GVHD occurred. At a median follow-up of 17(range 3-60) months, 11(64.7%) patients survived disease free, 6 patients relapsed and 5 died. Conclusion: TK-II combined with allo-HSCT effectively improved the remission rate of high-risk Ph⁺ leukemia and reduced recurrence after allo-HSCT, which represented an important improvement in the treatment of patients with high-risk Ph+ leukemia. abstract_id: PUBMED:23294316 Can anthocyanins improve maintenance therapy of Ph(+) acute lymphoblastic leukaemia? Chemotherapy and tyrosine kinase inhibitors provide high remission rates. However, prognosis of adult patients with Philadelphia chromosome-positive acute lymphoblastic leukaemia (Ph(+) ALL) still remains poor. Because most adults eventually relapse without allogeneic stem cell transplantation, which is not available for all patients, novel strategies are required for relapse prevention. As the integrity of the immune system is essential for the control of remaining leukaemia cells, we compared the efficacy of anthocyanins, imatinib and a DNA-based vaccine as non-immunosuppressant components with 6-mercaptopurine (6-MP) to control minimal residual disease in vitro and in vivo using different leukaemia cell lines and syngeneic mice. Proliferation of Ph(+) ALL was significantly better inhibited by anthocyanin-rich berry extract or imatinib compared with 6-MP. Although anthocyanins induced apoptosis in some leukaemia cell lines, the level of caspase-3, caspase-8 and caspase-9 was significantly lower compared with imatinib and 6-MP. 
When used as single components, anthocyanins and imatinib mesylate failed to eradicate pre-existing Ph(+) ALL in syngeneic mice, while 6-MP led to 10% and DNA vaccination to 56% survival. Intriguingly, only the combination of DNA vaccination with berry extract but not with the isolated anthocyanin, cyanidin-3-rutinoside or imatinib further increased leukaemia-free and overall survival, and 90% of lethally challenged mice survived. We suggest that induction and enhancement of a leukaemia-specific immunity by DNA vaccination and anthocyanin-rich berry extract can also decrease the relapse rate in patients with Ph(+) ALL. Furthermore, this approach may serve as strategy for maintenance therapy of other malignancies. Answer: Yes, post-transplant maintenance therapy with tyrosine kinase inhibitors (TKIs) does appear to improve outcomes for patients with high-risk Philadelphia chromosome-positive (Ph+) leukemia. Several studies have shown that the use of TKIs post-transplant can be beneficial: - A retrospective analysis indicated that maintenance TKI therapy after transplantation is feasible and might reduce the incidence of relapses and improve outcomes after allogeneic hematopoietic stem cell transplantation for patients with high-risk Ph+ leukemia. With a median follow-up of 3.6 years, the 5-year relapse-free survival rate was 61%, and the 5-year overall survival rate was 78% (PUBMED:27297665). - Another study found that prophylactic post-transplant ponatinib administration tended to improve the 2-year overall survival and leukemia-free survival in the ponatinib maintenance group compared to the non-ponatinib group (100% vs. 70.5% and 100% vs. 50.8%, respectively) (PUBMED:32800519). - A retrospective analysis from the EBMT Acute Leukemia Working Party included 473 de novo Philadelphia chromosome-positive acute lymphoblastic leukemia patients and found that TKIs given before and after allogeneic stem cell transplantation were associated with better overall survival and lower relapse incidence (PUBMED:25527562). - A study on the impact of additional cytogenetic abnormalities in adults with Ph+ ALL undergoing allogeneic hematopoietic cell transplantation in the TKI era showed that when TKI therapy and alloHCT are used as part of adult Ph+ ALL therapy, the presence of ACAs appears to have a significant deleterious effect on outcomes post-HCT. However, the study still supports the use of TKIs in this context (PUBMED:25842050). - A study on the incidence, outcomes, and risk factors of pleural effusion in patients receiving dasatinib therapy for Philadelphia chromosome-positive leukemia found that overall responses to dasatinib, progression-free survival, and overall survival were similar in patients who developed pleural effusion and in patients who did not, suggesting that dasatinib can be an effective part of maintenance therapy despite its association with pleural effusions (PUBMED:30093398).
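The relapse-free and overall survival percentages quoted in this answer are Kaplan-Meier (product-limit) estimates read off at fixed time points. The sketch below is a minimal, dependency-free version of that estimator applied to an invented handful of follow-up times; it is meant only to make the calculation concrete, not to reproduce any cohort described above.

```python
def kaplan_meier(times, events):
    """Product-limit estimator.
    times  : follow-up time for each patient (e.g. months)
    events : 1 if the event (relapse/death) occurred at that time, 0 if censored
    Returns a list of (time, survival probability) steps at event times."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        c = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 0)
        if d > 0:
            surv *= 1.0 - d / at_risk
            curve.append((t, round(surv, 3)))
        at_risk -= d + c
    return curve

# Invented follow-up data: months to relapse (event=1) or last contact (event=0).
follow_up = [4, 9, 13, 20, 26, 31, 44, 60, 60, 72]
relapsed  = [1, 0,  1,  0,  0,  1,  0,  0,  0,  0]
print(kaplan_meier(follow_up, relapsed))
```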
Instruction: Can ICU stay be predicted accurately enough for fast-tracking cardiac surgical patients? Abstracts: abstract_id: PUBMED:18798717 Can ICU stay be predicted accurately enough for fast-tracking cardiac surgical patients? Objectives: To improve the precision of currently available models for predicting length of stay of individual patients in the intensive care unit, to assist in directing patients into fast-track management after coronary artery bypass graft (CABG) surgery. Setting: ICU in an Australian teaching hospital. Design And Participants: Prospectively collected data from 333 patients who underwent elective CABG surgery were analysed by univariate and multivariate regression, to develop models of increasing power through the addition of factors covering the operative and early ICU phases (1, 4 and 8 hours postoperatively) to traditional preoperative risk of patient care. The model that gave the best combination of precision and availability for clinical decision-making was then validated on a series of 117 patients who underwent CABG surgery. Overall competence of this model was assessed. Results: Addition of intraoperative factors to the first (preoperative only) model (R2 = 0.07) doubled the precision of the analysis (R2 = 0.18). Addition of factors derived from the first 4 hours of ICU management increased precision fivefold (R2 = 0.38). This model was satisfactorily validated: regression of actual versus predicted ICU stay from the validation set gave a slope of 0.85 and y intercept of 2.60 hours. The 95% confidence levels of individual predictions obtained from the development set, for an estimated ICU stay of 12 hours, spanned 43 hours. Conclusions: Although the optimal model greatly increases precision, it is still inadequate for scheduling fasttrack patients, where wrong predictions for individuals can cause major problems in resource allocation. abstract_id: PUBMED:26811837 Impact of immediate versus delayed tracheal extubation on length of ICU stay of cardiac surgical patients, a randomized trial. Introduction: Ultra-fast track anaesthesia aims at immediate extubation of cardiac surgical patients at the end of the operation. This study compares the effect of ultrafast track anesthesia versus continued postoperative mechanical ventilation on the intensive care unit length of stay. Methods: Fifty-two elective adult patients were randomly allocated into ultrafast track anaesthesia and conventional groups by computer-generated random numbers. Redo operations, pre-operative intubation, uncontrolled diabetes, shock/left ventricular ejection fraction < 45%, pulmonary artery systolic pressure >55mmHg, creatinine clearance -1, haemodynamic instability, or those with concerns of postoperative bleeding were excluded. Pre- and intra-operative management was similar and Logistic EuroSCORE II was calculated for all. Intra-operatively, haemodynamic parameters, urine output, oxygen saturation, arterial blood gas analysis, 5-lead electrocardiogram, operative bypass- and cross-clamp time, and opioid consumption were collected. Postoperatively, patients were compared during their intensive care unit stay. Data were analysed by χ²/Fischer exact, unpaired student's t-test, univariate two-group repeated measures with post hoc Dunnett's test, and Mann-Whitney U tests as appropriate. p < 0.05 was considered significant. Results: Patients were comparable regarding their peri-operative characteristics and EuroSCORE. 
The intensive care unit stay was shorter in the ultrafast track anaesthesia group [57.4 (18.6) vs. 95 (33.6) h, p < 0.001], without increasing the rate of postoperative renal or respiratory complications or the reopening rate. Conclusions: In this single center study, ultrafast track anaesthesia decreased intensive care unit stay without increasing the rate of post-operative complications. abstract_id: PUBMED:32552781 Peripheral perfusion index predicting prolonged ICU stay earlier and better than lactate in surgical patients: an observational study. Background: Peripheral perfusion index (PPI) is an indicator reflecting perfusion. Patients undergoing long surgeries are more prone to hypoperfusion and increased lactate. Few studies have investigated the association between PPI and surgical patients' prognoses; we performed this study to examine it. Methods: From January 2019 to September 2019, we retrospectively reviewed all surgical patients who were transferred to the ICU of Xinyang Central Hospital, Henan province, China. Inclusion criteria: age ≥ 18 years old; surgical length ≥ 120 min. Exclusion criteria: died in ICU; discharge against medical advice; existing diseases affecting blood flow of the upper limbs, for example, vascular thrombus in the arms; severe liver dysfunction. We defined "prolonged ICU stay" as a length of ICU stay longer than 48 h. According to this definition, patients were divided into two groups: "prolonged group" (PG) and "non-prolonged group" (nPG). Baseline characteristics, surgical and therapeutic information, ICU LOS, SOFA and APACHE II were collected. In addition, we gathered data on the following parameters at 3 time points (T0: ICU admission; T1: 6 h after admission; T2: 12 h after admission): mean arterial pressure (MAP), lactate, heart rate (HR), PPI and body temperature. Data were compared between the 2 groups. Multivariable binary logistic regression and ROC (receiver operating characteristic) curves were used to find the association between perfusion indicators and ICU LOS. Results: In total, 168 patients were included, 65 in PG and 103 in nPG. Compared to nPG, patients in PG had higher blood lactate and lower PPI. PPI showed a significant difference between the two groups earlier than lactate (T0 vs T1). The value of PPI at both time points was lower in PG than in nPG (T0: 1.09 ± 0.33 vs 1.41 ± 0.45, p = 0.001; T1: 1.08 ± 0.37 vs 1.49 ± 0.41, p < 0.001). Increased lactate at T1 (OR 3.216; 95% CI 1.253-8.254, P = 0.015) and decreased PPI at T1 (OR 0.070; 95% CI 0.016-0.307, P < 0.001) were independently associated with prolonged ICU stay. The area under the ROC curve of PPI at T1 for predicting ICU stay > 48 h was 0.772, and the cutoff value for PPI at T1 was 1.35, with 83.3% sensitivity and 73.8% specificity. Conclusions: PPI and blood lactate at T1 (6 h after ICU admission) are associated with ICU LOS in surgical patients. Compared to lactate, PPI indicates hypoperfusion earlier and is more accurate in predicting prolonged ICU stay. abstract_id: PUBMED:22338411 Fast tracking in adult cardiac surgery at Pakistan Institute of Medical Sciences. Background: Early extubation after cardiac operation is an important aspect of fast-track cardiac anaesthesia. The length of stay in the ICU limits utilisation of the operation theatre in cardiac surgery. Increasing cost, limited resources, and newer surgical strategies have prompted evaluation of the effectiveness of all routines in cardiac surgery, anaesthesia, and intensive care.
The aim of this study was to determine the feasibility of fast-tracking in adult cardiac surgery and its effects on postoperative recovery in our setup. Methods: This descriptive study was conducted over 14 months, between 16th Jul 2007 and 16th Sep 2008. All open heart cases were included unless absolute contraindications were present. We applied the rapid recovery protocol adopted from Oslo Hospital, Norway, in an attempt to achieve fast-tracking in our setup. Results: Two-hundred-seventy-four consecutive cases out of 400 operated cases were included in this study. Mean age was 47.69 +/- 15.11 years, 27.7% were females, and 5.8% were emergency cases. 5.1% had COPD, 11.1% had atrial fibrillation, and 6.9% were NYHA class-III cases. CABG was done in 66.1% of cases and mean CPB-time was 75.92 +/- 16.20 min. Mean ventilation-time was 4.47 +/- 4.48 hrs, 86% of patients were fast-tracked to be extubated within 6 hours, and 85.4% of patients remained free of post-op complications. Six (2.2%) re-intubations, 2.6% arrhythmias, 6.6% pleural effusions and 2.2% consolidation were observed post-operatively. Mean ICU stay was 2.49 +/- 0.95 days and in-hospital mortality was 2.2%. Conclusion: Fast-tracking with extubation within 6 hours is a feasible approach which minimises post-operative complications significantly in adult cardiac surgical patients. abstract_id: PUBMED:27473872 Systematic review of factors influencing length of stay in ICU after adult cardiac surgery. Background: Intensive care unit (ICU) care is associated with costly and often scarce resources. In many parts of the world, ICUs are perceived as major bottlenecks limiting downstream services such as operating theatres. There are many clinical, surgical and contextual factors that influence length of stay. Knowing these factors can facilitate resource planning. However, the extent to which this knowledge is put into practice remains unclear. The aim of this systematic review was to identify factors that impact the duration of ICU stay after cardiac surgery and to explore evidence on the link between understanding these factors and patient and resource management. Methods: We conducted electronic searches of Embase, PubMed, ISI Web of Knowledge, Medline and Google Scholar, and reference lists for eligible studies. Results: Twenty-nine papers fulfilled the inclusion criteria. We recognised two types of objectives for identifying influential factors of ICU length of stay (LOS) among the reviewed studies: general descriptions of predictors, and prediction of prolonged ICU stay through statistical models. Among studies with prediction models, only two studies reported their implementation. Factors most commonly associated with increased ICU LOS included increased age, atrial fibrillation/arrhythmia, chronic obstructive pulmonary disease (COPD), low ejection fraction, renal failure/dysfunction and non-elective surgery status. Conclusion: Cardiac ICUs are major bottlenecks in many hospitals around the world. Efforts to optimise resources should be linked to patient and surgical characteristics. More research is needed to integrate patient and surgical factors into ICU resource planning. abstract_id: PUBMED:8840057 Can clinicians predict ICU length of stay following cardiac surgery? Purpose: To determine whether a group of experienced clinicians can predict intensive care unit (ICU) length of stay (LOS) following cardiac surgery. Methods: A cohort of 265 adult patients undergoing cardiac surgery at St.
Michael's Hospital, Toronto, Ontario, between January 2, 1992, and June 26, 1992, were seen preoperatively by the clinicians participating in the study and ICU length of stay was predicted based on the clinicians' preoperative assessment and/or information recorded in the patient's chart. Results: Five hundred and ten ICU length of stay predictions were obtained from a group of eight experienced clinicians (anaesthetists/intensivists, cardiologists, nurses). The clinicians predicted the exact ICU length of stay (in days) correctly 51.2% of the time and were within +/- 1 day 84.5% of the time. The clinicians correctly predicted short ICU stays (< or = 2 days) for 87.6% of the patients who had short ICU stays but only predicted long ICU stays (> 2 days) in 39.4% of the patients who had long ICU stays. Conclusions: Experienced clinicians can predict preoperatively with a considerable degree of accuracy patients who will have short ICU lengths of stay following cardiac surgery. However, many patients who had long ICU stays were not correctly identified preoperatively. Unidentified preoperative risk factors or unanticipated intraoperative/postoperative events may be causing these patients to have longer than expected ICU stays. abstract_id: PUBMED:32421036 Fast tracking after repair of congenital heart defects. Fast tracking after repair of congenital heart defects (CHD) is a process involving the reduction of perioperative period by timely admission, early extubation after surgery, short intensive care unit (ICU) stay, early mobilisation, and faster hospital discharge. It requires a coordinated multidisciplinary team involvement. In the last 2 decades, many centres have adopted the fast tracking strategy in paediatric cardiac population, safely and successfully extubating patients in the OR with reported benefits in terms of reduced morbidity and ICU/hospital stay. In this manuscript, we will review the literature available on early extubation after repair of CHD and share our experience with this approach. abstract_id: PUBMED:30398978 Survival, Quality of Life, and Functional Status Following Prolonged ICU Stay in Cardiac Surgical Patients: A Systematic Review. Objectives: Compared with noncardiac critical illness, critically ill postoperative cardiac surgical patients have different underlying pathophysiologies, are exposed to different processes of care, and thus may experience different outcome trajectories. Our objective was to systematically review the outcomes of cardiac surgical patients requiring prolonged intensive care with respect to survival, residential status, functional recovery, and quality of life in both hospital and long-term follow-up. Data Sources: MEDLINE, Embase, CINAHL, Web of Science, and Dissertations and Theses Global up to July 21, 2017. Study Selection: Studies were included if they assessed hospital or long-term survival and/or patient-centered outcomes in adult patients with prolonged ICU stays following major cardiac surgery. After screening 10,159 citations, 114 articles were reviewed in full; a final 34 articles met criteria for data extraction. Data Extraction: Two reviewers independently extracted data and assessed risk of bias using the National Institutes of Health Quality Assessment Tool for Observational Studies. 
Extracted data included the definition of prolonged ICU stay used, the number and characteristics of prolonged ICU stay patients, any comparator short stay group, length of follow-up, hospital and long-term survival, residential status, the patient-centered outcome measure used, and the relevant score. Data Synthesis: The definition of prolonged ICU stay varied from 2 days to greater than 14 days. Twenty-eight studies observed greater in-hospital mortality among all levels of prolonged ICU stay. Twenty-five studies observed greater long-term mortality among all levels of prolonged ICU stay. Multiple tools were used to assess patient-centered outcomes. Long-term health-related quality of life and function were equivalent or worse with prolonged ICU stay. Conclusions: We found consistent evidence that patients with increases in ICU length of stay beyond 48 hours have significantly increasing risk of hospital and long-term mortality. The significant heterogeneity in exposure and outcome definitions leaves us unable to precisely quantify the risk of prolonged ICU stay on mortality and patient-centered outcomes. abstract_id: PUBMED:25813298 Fast-tracking ambulatory surgery patients following anesthesia. Purpose: The purpose of this process improvement project was to introduce and evaluate the efficacy of fast-tracking ambulatory surgical patients in a community hospital. Design: An observational pre-post design was used, in which patient data from a reference period (pre-fast-tracking) were compared with patient data collected during an implementation period (post-fast-tracking). Methods: Anesthesia providers were trained to use a tool to assess patients for eligibility to bypass the postanesthesia care unit (PACU). Fifty-nine patients met the fast-track criteria during the implementation period and were transferred directly to the ambulatory care unit from the operating room. Findings: During the fast-track implementation period, a PACU-bypass rate of 79% was achieved, and a significant decrease in the total number of patients held in the operating room and in total length of stay was noted. Conclusions: Results suggest that fast-tracking is a suitable intervention to increase workflow efficiency and decrease both patient and hospital costs while promoting a more rapid discharge from the facility. abstract_id: PUBMED:33938827 Efficacy of paravertebral block in "Fast-tracking" pediatric cardiac surgery - Experiences from a tertiary care center. Introduction: Fast tracking plays a crucial role in reducing perioperative morbidity and financial burden by facilitating early extubation and discharge from hospital. Paravertebral block (PVB) is becoming more popular in paediatric surgeries as an alternative to epidural and caudal analgesia. There is a scarcity of data regarding the efficacy and safety of PVB in paediatric cardiac surgery. Methods: We performed a review of records of paediatric cardiac patients who underwent cardiac surgery under general anaesthesia with single-shot PVB and compared the analgesia and postoperative outcomes with matched historical controls who underwent cardiac surgery with the same anaesthesia protocol without PVB. Results: The data from 200 children were analysed: 100 children who received paravertebral block were compared with matched historical controls. The median time to extubation was shorter in the PVB group (0 hr, IQR 0-3 hrs) compared to the control group (16 hrs, IQR 4-20 hrs) (P value 0.017*).
Intraoperative and postoperative fentanyl requirement was much lower in the PVB group (3.49 (0.91)) compared to the control group (9.86 (1.37)) P value <0.01*. Time to first rescue dose of analgesic was longer (7 hrs vs 5 hrs, P 0.01*), while time to extubation and duration of ICU stay were significantly less in PVB group . Mean postoperative pain scores were significantly lower in the PVB group at the time of ICU admission (0.85 vs 3.12, P 0.001*) till 4 hours (2.11 vs 3.32, P 0.001*). Conclusion: PVB provides an effective and safe anaesthetic approach which can form an important component of "fast-track" care in paediatric cardiac surgery. Answer: The prediction of ICU stay for fast-tracking cardiac surgical patients has been a subject of research, but the results suggest that while there have been improvements in predictive models, they are still not sufficiently accurate for individual patient scheduling in fast-track management. A study conducted in an Australian teaching hospital found that even with the addition of intraoperative factors and early ICU management data to preoperative risk factors, the precision of predicting ICU stay increased but was still inadequate for scheduling fast-track patients due to the potential for significant errors in resource allocation (PUBMED:18798717). Other studies have focused on strategies to reduce ICU stay, such as ultra-fast track anesthesia, which has been shown to decrease ICU stay without increasing postoperative complications (PUBMED:26811837). Peripheral perfusion index (PPI) has been found to predict prolonged ICU stay earlier and better than lactate in surgical patients, suggesting that PPI could be a useful tool for early identification of patients at risk for longer ICU stays (PUBMED:32552781). Fast-tracking in adult cardiac surgery has been deemed feasible and can minimize post-operative complications significantly, with a study from Pakistan reporting that 86% of patients were extubated within 6 hours, leading to a mean ICU stay of 2.49 days (PUBMED:22338411). However, a systematic review highlighted that while certain factors such as increased age, atrial fibrillation, COPD, low ejection fraction, renal dysfunction, and non-elective surgery status are commonly associated with increased ICU LOS, more research is needed to integrate these factors into ICU resource planning (PUBMED:27473872). Clinicians have been shown to predict short ICU stays accurately but are less successful at predicting long ICU stays, indicating that unanticipated intraoperative or postoperative events may contribute to longer stays (PUBMED:8840057). Fast-tracking after repair of congenital heart defects and in ambulatory surgery has also been explored, with evidence suggesting benefits in terms of reduced morbidity and ICU/hospital stay (PUBMED:32421036, PUBMED:25813298). In conclusion, while there have been advances in predicting ICU stay and strategies to reduce it, the current level of precision is not yet adequate for the reliable scheduling of fast-track cardiac surgical patients on an individual basis. Further research and improvement in predictive models are needed to enhance the accuracy of these predictions (PUBMED:18798717).
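As a purely illustrative aside (not drawn from any of the cited studies): the prediction models discussed above typically take the form of a risk score over preoperative factors. The sketch below assumes a logistic-regression-style score using the factors named in PUBMED:27473872 (age, atrial fibrillation, COPD, low ejection fraction, renal dysfunction, non-elective surgery) and the >2-day threshold for a "long" stay used in PUBMED:8840057; every coefficient is a hypothetical placeholder, not a fitted value from any published model.

```python
import math

# Hypothetical coefficients, for illustration only -- they mirror the direction
# of the risk factors reported in PUBMED:27473872 but are NOT fitted estimates.
COEFFS = {
    "intercept": -3.0,
    "age_per_decade_over_50": 0.35,
    "atrial_fibrillation": 0.6,
    "copd": 0.5,
    "low_ejection_fraction": 0.7,
    "renal_dysfunction": 0.8,
    "non_elective_surgery": 0.9,
}

def prolonged_icu_probability(age_years, af, copd, low_ef, renal, non_elective):
    """Return an illustrative probability of an ICU stay longer than 2 days."""
    z = COEFFS["intercept"]
    z += COEFFS["age_per_decade_over_50"] * max(0.0, (age_years - 50) / 10.0)
    z += COEFFS["atrial_fibrillation"] * af
    z += COEFFS["copd"] * copd
    z += COEFFS["low_ejection_fraction"] * low_ef
    z += COEFFS["renal_dysfunction"] * renal
    z += COEFFS["non_elective_surgery"] * non_elective
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# Example: a 68-year-old with COPD undergoing non-elective surgery.
p = prolonged_icu_probability(68, af=0, copd=1, low_ef=0, renal=0, non_elective=1)
print(f"Illustrative probability of ICU stay > 2 days: {p:.2f}")
```

A score of this kind only orders patients by risk; as the answer above notes, the published models were not precise enough for scheduling individual fast-track patients.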
Instruction: Does physical activity during pregnancy reduce the risk of gestational diabetes among previously inactive women? Abstracts: abstract_id: PUBMED:18844644 Does physical activity during pregnancy reduce the risk of gestational diabetes among previously inactive women? Background: Gestational diabetes affects approximately 7 percent of all pregnancies in the United States; its prevalence may have increased among all ethnic groups since the early 1990s. Our study examined whether physical activity during pregnancy reduced the risk of gestational diabetes among women who were physically inactive before pregnancy. Methods: We used data from the 1988 National Maternal and Infant Health Survey (NMIHS), a nationally representative sample of mothers with live births. The NMIHS obtained mothers' gestational diabetes diagnoses from care providers, and mothers reported their physical activity before and during pregnancy, including the number of months with physical activity and types of physical activity. We developed a physical activity index, the product of the number of months with physical activity and the average metabolic equivalents for specific activities. The analysis included 4,813 women who reported being physically inactive before pregnancy, with singleton births and no previous diabetes diagnosis. Results: Gestational diabetes was diagnosed in 3.5 percent of the weighted sample in 1988. About 11.8 percent of these previously inactive women began physical activity during pregnancy. Women who became physically active had 57 percent lower adjusted odds of developing gestational diabetes than those who remained inactive (OR 0.43, 95% CI 0.20-0.93). Women who had done brisk walking during pregnancy had a lower adjusted risk of gestational diabetes (OR 0.44, CI 0.19-1.02) and women with a physical activity index score above the median had 62 percent lower odds of developing gestational diabetes than the inactive women (CI 0.15-0.96). Conclusions: Results suggest that physical activity during pregnancy is associated with a lower risk for gestational diabetes among previously inactive women. abstract_id: PUBMED:38229213 Assisted reproductive technology and physical activity among Chinese pregnant women at high risk for gestational diabetes mellitus in early pregnancy: A cross-sectional study. Currently, the number of pregnant women at high risk for gestational diabetes mellitus (GDM) and using assisted reproductive technology (ART) is increasing. The present study aims to explore the relationship between ART and physical activity in Chinese pregnant women at high risk for GDM in early pregnancy. A cross-sectional study was conducted in a regional teaching hospital in Guangzhou, China, between July 2022 and March 2023. Three hundred fifty-five pregnant women at high risk for GDM in early pregnancy completed the Chinese version of the Pregnancy Physical Activity Questionnaire (PPAQ), the Pregnancy Physical Activity Knowledge Scale, the Pregnancy Physical Activity Self-Efficacy Scale, the Pregnancy Physical Activity Social Support Scale, and a sociodemographic and obstetric characteristics data sheet. Compared to women who conceived naturally, women who used ART were more likely to be 35 years or older, unemployed, primigravidae, and to have intentionally planned their pregnancies. Women who used ART had significantly lower levels of physical activity and self-efficacy compared to their counterparts who conceived naturally.
Over half (55.6%) of women who used ART reported being physically inactive, and those with lower self-efficacy, as well as the unemployed, were significantly more likely to be inactive. Physical inactivity is a critical clinical issue among women who use ART, especially in the context of GDM risk. Future research should develop and test physical activity programs, including enhancing physical activity self-efficacy for women who use ART. Patient or public contribution: In this study, survey questionnaires were completed by participants among Chinese pregnant women at high risk for GDM in early pregnancy. abstract_id: PUBMED:24161237 Physical activity before and during pregnancy and risk of abnormal glucose tolerance among Hispanic women. Aim: Women diagnosed with abnormal glucose tolerance and gestational diabetes mellitus are at increased risk for subsequent type 2 diabetes, with higher risks in Hispanic women. Studies suggest that physical activity may be associated with a reduced risk of these disorders; however, studies in Hispanic women are sparse. Methods: We prospectively evaluated this association among 1241 Hispanic participants in Proyecto Buena Salud. The Pregnancy Physical Activity Questionnaire was used to assess pre, early, and mid pregnancy physical activity. Medical records were abstracted for pregnancy outcomes. Results: A total of 175 women (14.1%) were diagnosed with abnormal glucose tolerance and 57 women (4.6%) were diagnosed with gestational diabetes. Increasing age and body mass index were strongly and positively associated with risk of gestational diabetes. We did not observe statistically significant associations between total physical activity or meeting exercise guidelines and risk. However, after adjusting for age, BMI, gestational weight gain, and other important risk factors, women in the top quartile of moderate-intensity activity in early pregnancy had a decreased risk of abnormal glucose tolerance (odds ratio=0.48, 95% Confidence Interval 0.27-0.88, Ptrend=0.03) as compared to those in the lowest quartile. Similarly, women with the highest levels of occupational activity in early pregnancy had a decreased risk of abnormal glucose tolerance (odds ratio=0.48, 95% Confidence Interval 0.28-0.85, Ptrend=0.02) as compared to women who were unemployed. Conclusion: In this Hispanic population, total physical activity and meeting exercise guidelines were not associated with risk. However, high levels of moderate-intensity and occupational activity were associated with risk reduction. abstract_id: PUBMED:31523429 Physical activity pattern in early pregnancy and gestational diabetes mellitus risk among low-income women: A prospective cross-sectional study. Objective: Gestational diabetes mellitus is increasing worldwide, mainly in developing countries, and physical activity has not been studied in gestational diabetes mellitus prevention among low-income population. This prospective cross-sectional study assessed the gestational diabetes mellitus risk related to physical activity in early pregnancy among low-income women. Methods: A prospective cross-sectional study with 544 low-income pregnant women was conducted at the Instituto de Medicina Integral Prof. Fernando Figueira, Brazil. Gestational diabetes mellitus was diagnosed using the International Association of Diabetes and Pregnancy Study Groups criteria. 
Physical activity was assessed during early pregnancy using the Pregnancy Physical Activity Questionnaire and categorized as sedentary, light, moderate, or vigorous intensity. Results: Gestational diabetes mellitus occurred in 95 of 544 women (17.4%). Body mass index was higher in the gestational diabetes mellitus group. Nearly half of all pregnant women studied were physically inactive, and none of them were classified as vigorously physically active. A sedentary physical activity pattern was associated with higher odds of gestational diabetes mellitus (odds ratio = 1.8, 95% confidence interval = 1.1-2.9), which did not change after adjusting for several covariates (odds ratio = 1.9, 95% confidence interval = 1.2-3.1). Conclusion: Physical inactivity in early pregnancy is associated with a higher risk of gestational diabetes mellitus among low-income women. abstract_id: PUBMED:34322404 Level of exercise and physical activity among pregnant women in Saudi Arabia: A systematic review. The current study aimed to clarify the health benefits of physical activity for the mother and fetus in the Saudi female population. In addition, it is intended to provide recommendations, based on the literature and results of studies from Saudi Arabia, for exercise in pregnancy to improve the general health of women in Saudi Arabia. Prenatal physical exercise enhances the physical and mental health of pregnant women. It can also reduce the risk of multiple pregnancy-related complications such as lower back pain, fluid retention and gestational diabetes. All these factors can affect fetal development and later life. Multiple studies showed that prenatal exercise could reduce the risk of fetal macrosomia with no effect on other perinatal or postnatal complications. The study followed a systematic literature review approach that included multiple medical search databases, using PICOS eligibility criteria, up to January 2019. The review was based on the following keywords: (pregnancy, gestational, or prenatal) and (physical exercise, exercise, or physical activity). Only two studies dealt with physical exercise among Saudi women. The results indicated a relation between prenatal physical exercise and improved health or decreased risks for the mother and child during pregnancy. abstract_id: PUBMED:36040352 The Association Between Acculturation and Diet and Physical Activity Among Pregnant Hispanic Women with Abnormal Glucose Tolerance. Background: Hispanic women are disproportionately affected by gestational diabetes mellitus (GDM), yet few studies have assessed the impact of acculturation on health behaviors that may reduce GDM risk. Materials and Methods: We assessed relationships between acculturation and meeting American Diabetes Association guidelines for macronutrient intake and American College of Obstetricians and Gynecologists guidelines for physical activity (PA) using baseline data from Estudio Project Aiming to Reduce Type twO diabetes, a randomized trial conducted in Massachusetts (2013-2017) among 255 Hispanic pregnant women with hyperglycemia. Acculturation was assessed via the Psychological Acculturation Scale, duration of time and generation in the continental United States, and language preference; diet with 24-hour dietary recalls; and PA with the Pregnancy Physical Activity Questionnaire (PPAQ). Results: The majority of participants reported low psychological acculturation (74.9%), preferred English (78.4%), were continental U.S.
born (58.0%), and lived in the continental United States ≥5 years (91.4%). A total of 44.8%, 81.8%, 22.9%, and 4.6% of women met guidelines for carbohydrate, protein, fat, and fiber intakes, respectively; 31.9% met guidelines for PA. Women with higher acculturation were less likely to meet carbohydrate guidelines (English preference: adjusted risk ratios [aRR] 0.45, 95% confidence intervals [CI] 0.23-0.75; U.S. born: aRR 0.60, 95% CI 0.36-0.91; duration of time in United States: aRR 0.96, 95% CI 0.92-0.99). Women with higher acculturation were more likely to meet PA guidelines (U.S. born: aRR 1.95, 95% CI 1.11-3.44). Conclusions: In summary, higher acculturation was associated with lower likelihood of meeting dietary guidelines but greater likelihood of meeting PA guidelines during pregnancy. Interventions aimed at reducing GDM in Hispanics should be culturally informed and incorporate acculturation. Clinical Trial Registration: clinicaltrials.gov NCT01679210. abstract_id: PUBMED:27756289 Physical activity and the risk for gestational diabetes mellitus amongst pregnant women living in Soweto: a study protocol. Background: Over the past decade the prevalence of gestational diabetes mellitus (GDM) has increased rapidly in both developed and developing countries and has become a growing health concern worldwide. A recent systematic review highlighted the paucity of data available on the prevalence and potential burden of GDM in Africa, which was emphasised by the fact that only 11 % of African countries were represented in the review. In South Africa, the prevalence of GDM remains unknown, although one would estimate it to be high due to urbanisation and the growing obesity epidemic. In addition, the association between physical activity (PA), sedentary behaviour (SB) and GDM is not well understood in this population. The aim of the proposed research is to determine whether there is an association between physical activity, sedentary behaviour and risk for GDM in pregnant black women living in urban Soweto in South Africa. Methods/design: This prospective cohort study of 80 participants will include pregnant women from Soweto enrolled into the Soweto First 1000 Days Study (S1000) at the MRC/Wits Departmental Pathways for Health Research Unit (DPHRU) based at the Chris Hani Baragwanath Academic Hospital in Soweto, South Africa. Women will be enrolled into the S1000 Study at <14 weeks gestation, and baseline demographic and anthropometric measures will be taken at 14-18 weeks gestation (visit 1). In addition, participants will complete the Global Physical Activity Questionnaire (GPAQ) to measure self-reported physical activity and will be given an ActiGraph accelerometer to wear for seven days to measure habitual physical activity at 14-18 weeks gestation (visit 1), and at 28-33 weeks gestation (visit 3). At visit 2 (24-28 weeks gestation) an oral glucose tolerance test (OGTT) will be conducted. Discussion: Physical activity during pregnancy has been associated with minimum risk to a pregnancy and may play a role in improving glucose metabolism and therefore decreasing risk for GDM. This is particularly pertinent to assess amongst black South African women who are a potentially high risk population due to the high prevalence of obesity and type 2 diabetes (T2D). The findings of the study will assist in developing targeted interventions as well as feasible healthcare strategies. 
abstract_id: PUBMED:21918237 Feasibility and efficacy of a physical activity intervention among pregnant women: the behaviors affecting baby and you (B.A.B.Y.) study. Background: Physical activity during pregnancy is associated with reduced risk of adverse maternal and fetal outcomes. However, the majority of pregnant women are inactive and interventions designed to increase exercise during pregnancy are sparse. We evaluated the feasibility and preliminary efficacy of an exercise intervention among a diverse sample of pregnant women. Methods: The B.A.B.Y. (Behaviors Affecting Baby and You) Study is conducted at a large tertiary care facility in Western Massachusetts. We randomized 110 prenatal care patients (60% Hispanic) to an individually tailored 12-week exercise intervention arm (n = 58) or to a health and wellness control arm (n = 52) at mean = 11.9 weeks gestation. Physical activity was assessed via the Pregnancy Physical Activity Questionnaire (PPAQ). Results: After the 12-week intervention, the exercise arm experienced a smaller decrease (-1.0 MET-hrs/wk) in total activity vs. the control arm (-10.0 MET-hrs/wk; P = .03), and a higher increase in sports/exercise (0.9 MET-hrs/wk) vs. the control arm (-0.01 MET-hrs/wk; P = .02). Intervention participants (95%) reported being satisfied with the amount of information received and 86% reported finding the study materials interesting and useful. Conclusions: Findings support the feasibility and preliminary efficacy of a tailored exercise intervention in increasing exercise in a diverse sample of pregnant women. abstract_id: PUBMED:23849310 Measuring physical activity in pregnancy: a comparison of accelerometry and self-completion questionnaires in overweight and obese women. Objectives: Increased physical activity in pregnancy may reduce the risk of gestational diabetes and pre-eclampsia, which occur more commonly in overweight and obese women. There is limited assessment of physical activity questionnaires in pregnancy. This study compares self-reported physical activity using two questionnaire methods with objectively recorded physical activity using accelerometry in overweight and obese pregnant women. Study Design: 59 women with booking BMI≥25 kg/m(2) completed the Recent Physical Activity Questionnaire (RPAQ) and Australian Women's Activity Survey (AWAS) or recorded at least 3 days of accelerometry at median 12 weeks' gestation. Accelerometer thresholds of 100 counts/min and 1952 counts/min were used to define light and moderate or vigorous physical activity (MVPA) respectively. Results: 48% of women were in their first pregnancy and 41% were obese. Median daily self-reported MVPA was significantly higher for both AWAS (127 min, p<0.001) and RPAQ (81 min, p<0.001) than that recorded by accelerometer (35 min). There was low or moderate correlation between questionnaire and accelerometer estimates of total active time (AWAS ρ=0.36, p=0.008; RPAQ ρ=0.53, p<0.001) but no significant correlation between estimates of time spent in MVPA. Conclusions: These self-report questionnaires over-estimated MVPA and showed poor ability to discriminate women on the basis of MVPA. Accelerometry measurement was feasible and acceptable. Objective methods should be used where possible in studies measuring physical activity in pregnancy. Questionnaires remain valuable to define types of activity. abstract_id: PUBMED:35313870 Physical activity and health-related quality of life among high-risk women for type 2 diabetes in the early years after pregnancy. 
Background: Previous studies have shown that physical activity (PA) correlates positively with health-related quality of life (HRQoL) in the general population. Few studies have investigated associations between device-measured PA and HRQoL among premenopausal women at risk for type 2 diabetes (T2D). In addition to physical well-being, general well-being improved by PA has been suggested to strengthen PA's benefits in reducing metabolic diseases. The aim of this study was to examine the associations between PA and HRQoL (general and dimensions) among high-risk women in the early post-pregnancy years when T2D risk is highest and to estimate whether current obesity or prior gestational diabetes (GDM) modified these associations. Methods: This cross-sectional study of high-risk women [body mass index (BMI) ≥ 30 kg/m2 and/or prior GDM] 4-6 years after delivery measured sleep, sedentary time, daily steps, and light (LPA), moderate-to-vigorous (MVPA), and vigorous PA (VPA) with the SenseWear Armband™ accelerometer for seven days and HRQoL with the 15D instrument. Results: The analyses included 204 women with a median (IQR) age of 39 (6.0) years and a median BMI of 31.1 kg/m2 (10.9). 54% were currently obese (BMI ≥ 30 kg/m2), and 70% had prior gestational diabetes (GDM+). Women with obesity had lower PA levels than women with normal weight or overweight (p < 0.001), but there was no difference between the GDM+ and GDM- women. Women with both current obesity and GDM+ had the highest sedentary time and the lowest PA levels. The whole sample's median 15D score was 0.934 (IQR 0.092), lower among women with obesity compared to the others (p < 0.001), but not different between GDM+ and GDM-. There was a positive correlation between VPA (adjusted rs = 0.262, p = 0.001) and the 15D score. After grouping according to BMI (< and ≥ 30 kg/m2), the associations remained significant only in women without obesity. Among them, sleep, total steps, MVPA, and VPA were positively associated with 15D. Conclusions: Higher PA levels are associated with better HRQoL among high-risk women with normal weight and overweight, but no differences were found among women affected by obesity in the early years after pregnancy. Trial registration: Ethics committees of Helsinki University Hospital (Dnro 300/e9/06) and South Karelian Central Hospital (Dnro 06/08). Answer: Yes, physical activity during pregnancy is associated with a reduced risk of gestational diabetes among previously inactive women. A study using data from the 1988 National Maternal and Infant Health Survey found that women who were physically inactive before pregnancy and began physical activity during pregnancy had 57 percent lower adjusted odds of developing gestational diabetes than those who remained inactive. Additionally, women who engaged in brisk walking during pregnancy had a lower adjusted risk of gestational diabetes, and those with a physical activity index score above the median had 62 percent lower odds of developing the condition compared to inactive women (PUBMED:18844644). These findings suggest that initiating physical activity during pregnancy can be beneficial for reducing the risk of gestational diabetes among women who were not active before becoming pregnant.
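As a purely illustrative aside: the physical activity index described in PUBMED:18844644 is the product of the number of months with physical activity and the average metabolic equivalent (MET) of the reported activities. The sketch below uses approximate, compendium-style MET values and an invented mini-cohort solely to show how an "above the median" group could be formed; none of these numbers come from the cited survey.

```python
from statistics import median

# Illustrative MET values; approximate compendium-style figures, not the exact
# values used in PUBMED:18844644.
MET_VALUES = {"brisk_walking": 4.0, "swimming": 6.0, "gardening": 3.5}

def activity_index(months_active, activities):
    """Index = months with physical activity x mean METs of reported activities."""
    if not activities:
        return 0.0
    mean_met = sum(MET_VALUES[a] for a in activities) / len(activities)
    return months_active * mean_met

# Hypothetical mini-cohort used only to demonstrate the median split.
cohort = [
    activity_index(0, []),                             # remained inactive
    activity_index(4, ["brisk_walking"]),              # began walking in pregnancy
    activity_index(6, ["brisk_walking", "swimming"]),
    activity_index(2, ["gardening"]),
]
cutoff = median(cohort)
print("Index scores:", cohort)
print("Above-median (higher-activity) group:", [s for s in cohort if s > cutoff])
```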
Instruction: Improving care for rural veterans: are high dual users different? Abstracts: abstract_id: PUBMED:24689539 Improving care for rural veterans: are high dual users different? Background: Rural veterans face considerable barriers to access to care and are likely to seek health care services outside the Veterans Health Administration (VHA), or dual care. Objective: The objective of this study was to examine the characteristics of high users of dual care versus occasional and nonusers of dual care, and the determinants of satisfaction with care received by rural veterans. Design: The design was a cross-sectional observational study. Participants: Structured telephone interviews of a random sample of veterans residing in rural Nebraska were conducted in 2011. Main Measures: Veterans' frequency of use of dual care and satisfaction with care received were assessed using multinomial and ordinal regression models. Key Results: Veterans who have an established relationship with a VHA provider or a personal doctor or nurse at the VHA and those who were more satisfied with VHA quality of care were less likely to be high users of dual care. Veterans who were Medicare beneficiaries, or had private insurance or chronic illnesses, or were confused about where to seek care were more likely to be users of dual care. Veterans who report being confused about where to seek care, and those who perceive lack of coordination between the VHA and non-VHA systems are less satisfied with care received. Conclusions: Understanding what motivates veterans to use dual care and influences their satisfaction with care received will enable the VHA to implement policy that improves the quality of care provided to rural veterans. abstract_id: PUBMED:33562993 Examining dual roles in long-term-care homes in rural Alberta: a qualitative study. Introduction: In rural settings, many healthcare professionals experience intersections of professional and personal relationships, often known as dual roles. Dual roles are traditionally studied in terms of their potential for ethical conflicts or negative effects on care. In the existing scholarship, there is little discussion of dual roles in long-term care (LTC) settings, which present distinct conditions for care. Unlike other forms of health care, LTC work is provided daily, over longer periods, in care recipients' home environments. This article outlines results from a case study of LTC in rural Alberta, Canada and provides evidence of some of the challenges and, more notably, the considerable benefits of dual roles in these settings. Methods: The qualitative data discussed in this article come from a multi-site comparative case study of rural LTC that, among other questions, asked, 'How do personal and professional lives intersect in rural LTC settings across the province?' These data were collected through the use of rapid ethnographies at three rural LTC homes across the province of Alberta. The research team conducted semi-structured, in-depth interviews (n=90) and field observations (~200 hours). Participants were asked about care team dynamics, the organization of care work, the role of the LTC home in the community, and the intersections of public and private lives. The results were coded and critically analyzed using thematic analysis. Results: Dual roles were primarily described as beneficial for care provision. 
In many cases, dual roles provided participants with opportunities for reciprocity, enhanced person-centered care, and increased perceptions of trust and community accountability. Similar to what has been documented in the extant literature, dual roles also presented some challenges regarding personal and professional boundaries for those in leadership. However, the negative examples were outweighed by positive accounts of how dual roles can serve as a potential asset of rural LTC. Conclusion: There is a need for more nuanced conversations around the implications of dual roles. Policies and care approaches need to emphasize and support the use of good judgment and the responsible navigation of dual roles, rather than taking either a permissive or prohibitive approach. Leaders in rural LTC can promote conversations among care providers, with an emphasis on the cultural context of care provision and how dual roles play out in their specific professional practice. Blanket policies or educational approaches that frame dual roles as necessarily problematic are not only insensitive to the unique nature of rural LTC, but prohibitive of relational elements that these results suggest are highly supportive of person-centered care. abstract_id: PUBMED:27558939 Veteran Use of Health Care Systems in Rural States: Comparing VA and Non-VA Health Care Use Among Privately Insured Veterans Under Age 65. Objective: To quantify use of VA and non-VA care among working-age veterans with private insurance by linking VA data to private health insurance plan (PHIP) data. Methods: Demographics and utilization were compared between dual users of VA and non-VA systems versus single-system users for veterans < 65 living in 2 rural Midwestern states concurrently enrolled in VA health care and a PHIP for ≥ 1 complete federal fiscal year from 2000 to 2010. Chi-square and t-tests were used for univariate analyses. VA reliance was computed as the percentage of visits, admissions and prescriptions in VA. Multinomial logistic regression was used to compare characteristics by dual use versus non-VA only or VA only use. Results: Of 16,330 eligible veterans, 54% used both VA and non-VA services, 39% used non-VA only, and 5% used VA only. Compared with single-system use, dual use was associated with older age, priority levels 1-4, service-connected conditions, rural residence, greater years of study eligibility, and enrollment in the PHIP before VA. VA reliance was 33% for outpatient care, 14% for inpatient, and 40% for pharmacy. PHIP data substantially underestimated VA use compared to VA data; 26% who used VA health care had no VA claims in the PHIP dataset. Conclusions: Over half of working-age veterans enrolled in VA and private insurance used services in both systems. Care coordination efforts across systems should include veterans of all ages, particularly rural veterans more likely to be dual users, and better methods are needed to identify veterans with private insurance and their private providers. abstract_id: PUBMED:27481190 Insured Veterans' Use of VA and Non-VA Health Care in a Rural State. Purpose: To understand how working-age VA-enrolled veterans with commercial insurance use both VA and non-VA outpatient care, and how rural residence affects dual use, for common diagnoses and procedures. 
Methods: We analyzed VA and non-VA outpatient treatment records for any months during 2005-2010 that New Hampshire veterans ages <65 were simultaneously enrolled in VA health care and commercial insurance (per NH's mandatory claims database). Controlling for covariates, we used analysis of variance to compare urban and rural VA users, non-VA users, and dual users on travel burden, diagnosis counts, duration in outpatient care, and visit frequencies, and logistic regressions to assess whether rural veterans were as likely to be seen for common conditions and procedures. Findings: More than half of patients were non-VA users and another third were dual users; rural residents were slightly more likely than urban residents to be dual users. For nearly any common diagnosis or procedure, dual users were more likely to have it at some time during treatment than other patients in either VA or non-VA care, but they seldom had it listed in both care systems. Dual users also were seen most often overall, although within either care system they were seen less often than other patients, particularly if they were rural residents living far from care. Rural residence reduced chances of treatment for a wide variety of conditions, though it also was associated with more musculoskeletal and connective tissue diagnoses. It also reduced chances that patients had some diagnostic and treatment procedures but increased the odds of others that may require fewer visits. Conclusions: Dual users living in rural areas may have less continuity in their health care. Ensuring that rural dual users are identified in primary care should improve access and care coordination. abstract_id: PUBMED:26582045 Improving the Care of Dual Eligible Patients in Rural Federally Qualified Health Centers: The Impact of Care Coordinators and Clinical Pharmacists. Background: Dual eligible persons are those covered by both Medicare and Medicaid. There were 9.6 million dual eligible persons in the United States and 82 000 in West Virginia in 2010. Dual eligibles are poorer, sicker, and more burdened with serious mental health conditions than Medicare or Medicaid patients as a whole. Their health care costs are significantly higher and they are more likely to receive fragmented ineffective care. Purpose: To improve the care experience and health care outcomes of dual eligible patients by the expanded use of care coordinators and clinical pharmacists. Methods: During 2012, 3 rural federally qualified community health centers in West Virginia identified 200 dual eligible patients each. Those with hospitalizations received more frequent care coordinator contacts. Those on more than 15 chronic medications had drug utilization reviews with recommendations to primary care providers. Baseline measures included demographics, chronic diseases, total medications and Beers list medications, hospitalization, and emergency room (ER) use in the previous year. Postintervention measures included hospitalization, ER use, total medications, and Beers list medications. Results: Out of 556 identified patients, 502 were contacted and enrolled. Sixty-five percent were female. The median age was 69 years, with a range of 29 to 93 years. Nineteen percent (19%) of patients were on 15 or more medications, 56% on psychotropic medication, and 33% on chronic opiates. One site showed reductions of 34% in hospitalizations and 25% in ER visits during the intervention year. 
For all sites combined, there was a 5.5% reduction in total medications and a 14.8% reduction in Beers list medications. Conclusions: A modest investment in care coordination and clinical pharmacy review can produce significant reductions in hospitalization and harmful polypharmacy for community dwelling dual eligible patients. abstract_id: PUBMED:20350447 The experience of primary health care users: a rural-urban paradox. Introduction: We sought to assess the care experience of primary health care users, to determine whether users' assessments of their experience vary according to the geographical context in which services are obtained, and to determine whether the observed variations are consistent across all components of the care experience. Methods: We examined the experience of 3389 users of primary care in 5 administrative regions in Quebec, focusing on accessibility, continuity, responsiveness and reported use of health services. Results: We found significant variations in users' assessments of the specific components of the care experience. Access to primary health care received positive evaluations least frequently, and continuity of information received the approval of the highest percentage of users. We also found significant variations among geographical contexts. Positive assessments of the care experience were more frequently made by users in remote rural settings; they became progressively less frequent in near-urban rural and near-urban settings, and were found least often in urban settings. We observed these differences in almost all of the components of the care experience. Conclusion: Given the relatively greater supply of services in urban areas, this analysis has revealed a rural-urban paradox in the care experience of primary health care users. abstract_id: PUBMED:18709750 Improving the quality of rural nursing care. The purpose of this chapter is to review the literature on quality of care in rural areas. Keywords related to rural quality of care were used to search CINAHL and MEDLINE databases for articles published between 2005 and 2007 (limited to studies occurring in the United States). The review consisted of a total of 46 articles. Limitations include inconsistent definitions of rural, the use of only articles available to the reviewers, an unclear understanding of the context of many of the studies, and lack of a clear operational definition of quality. The studies were grouped and discussed according to quality of workforce, practice, treatment, interventions, and technology in rural areas. Each study's contribution to the understanding of quality health care in rural areas and to determining what was effective in improving staff, patient, or organizational outcomes in rural areas was considered. This chapter also offers a discussion of ethical issues and data quality in rural research. Issues for future research include a focus on patient safety, mental health issues, and the use of technology to improve quality of care in rural areas. Future research should also focus on demonstration studies of model applications. The nursing profession has a unique opportunity to conduct research that will contribute to the development of knowledge that will ultimately improve the quality of health and health care for individuals in rural communities. abstract_id: PUBMED:32041608 Lessening barriers to healthcare in rural Ghana: providers and users' perspectives on the role of mHealth technology. A qualitative exploration. 
Background: Key barriers to healthcare use in rural Ghana include economic, social, cultural and institutional factors. Amid this, though rarely recognised in Ghanaian healthcare settings, mHealth technology has emerged as a viable tool for lessening most healthcare barriers in rural areas due to the high mobile phone penetration and possession rate. This qualitative study provides an exploratory assessment of the role of mHealth in reducing healthcare barriers in rural areas from the perspective of healthcare users and providers. Method: Semi-structured interviews were conducted with 30 conveniently selected healthcare users and 15 purposively selected healthcare providers within the Birim South District in the Eastern Region of Ghana between June 2017 and April 2018. Data were thematically analysed and normative standpoints of participants were presented as quotations. Results: The main findings were that all the healthcare users had functioning mobile phones; however, their knowledge and awareness about mHealth were low. Meanwhile, rural health care users and providers were willing to use mHealth services involving phone calls in the future as they perceived the technology to play an important role in lessening healthcare barriers. Nevertheless, factors such as illiteracy, language barrier, trust, quality of care, and mobile network connectivity were perceived as barriers associated with using mHealth in rural Ghana. Conclusion: The support for mHealth services is an opportunity for the development of a synergistic relationship between health policy planners and mobile network companies in Ghana to design efficient communication and connectivity networks and accessible, localised, user-friendly and cost-effective mobile phone-based health programmes to assist in reducing healthcare barriers in rural Ghana. abstract_id: PUBMED:15603694 Comparison of rural and urban users and non-users of home care in Canada. Introduction: Geography is considered a determinant of health because people living in rural and remote areas, compared with those in urban areas, have poorer health status and more difficulty accessing health care. Purpose: To examine the characteristics associated with the use of publicly funded home care services among rural and urban Canadians 18 years of age and over. Methods: The Andersen and Newman Behavioural Model of Health Services Use guided the selection of variables, analyses and interpretation of the findings. Descriptive, correlation and multiple logistic regression analyses were completed on 2 cross-sectional cycles of Statistics Canada's National Population Health Surveys. Results And Conclusion: This research revealed that rural residents are increasingly less likely to receive personal care assistance, and rural home care users appear to have more resources (e.g., higher levels of education, sense of coherence) than their urban counterparts that likely influence their ability to access and receive home care services. Rural residents without these resources may be less likely to receive home care services. abstract_id: PUBMED:28762300 Social justice, access and quality of healthcare in an age of austerity: users' perspective from rural Iceland. Iceland is sparsely populated but social justice and equity have been emphasised within healthcare. The aim of the study is to examine healthcare services in Fjallabyggð, in rural northern Iceland, from the users' perspective and evaluate social justice, access and quality of healthcare in an age of austerity.
Mixed-method approach with transformative design was used. First, data were collected with questionnaires (response rate of 53% [N=732] in 2009 and 30% [N=415] in 2012), and analysed statistically, followed by 10 interviews with healthcare users (2009 and 2014). The results were integrated and interpreted within Bronfenbrenner's Ecological Model. There was significantly less satisfaction with accessibility and variety of healthcare services in 2012 after services downsizing. Solid primary healthcare, good local elderly care, some freedom in healthcare choice and reliable emergency services were considered fundamental for life in a rural area. Equal access to healthcare is part of a fundamental human right. In times of economic downturn, people in rural areas, who are already vulnerable, may become even more vulnerable and disadvantaged, seriously threatening social justice and equity. With severe cutbacks in vitally important healthcare services people may eventually choose to self-migrate. Answer: The study conducted on rural veterans residing in Nebraska found that high users of dual care, which refers to those who seek health care services both within and outside the Veterans Health Administration (VHA), have distinct characteristics compared to occasional and nonusers of dual care. High dual care users were more likely to be Medicare beneficiaries, have private insurance, suffer from chronic illnesses, or be confused about where to seek care. These veterans were also less likely to have an established relationship with a VHA provider or a personal doctor or nurse at the VHA. Satisfaction with the quality of care received from the VHA was associated with a lower likelihood of being a high user of dual care. Additionally, veterans who reported confusion about where to seek care and those who perceived a lack of coordination between VHA and non-VHA systems were less satisfied with the care they received (PUBMED:24689539). In the context of improving care for rural veterans, it is important to understand the motivations behind dual care usage and the factors influencing satisfaction with care. This understanding can help the VHA implement policies that enhance the quality of care provided to rural veterans, potentially reducing the reliance on dual care and improving overall satisfaction with the VHA services.
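As a purely illustrative aside: PUBMED:27558939 above computes "VA reliance" as the percentage of visits, admissions and prescriptions occurring in VA, and groups veterans as VA-only, non-VA-only or dual users. The sketch below shows that arithmetic on hypothetical encounter counts; the counts and helper names are inventions for illustration, not values from the study.

```python
def va_reliance(va_count, non_va_count):
    """Percentage of encounters occurring in VA; None if there are no encounters."""
    total = va_count + non_va_count
    return None if total == 0 else 100.0 * va_count / total

def user_group(va_count, non_va_count):
    """Classify a veteran as 'dual', 'VA only', 'non-VA only', or 'no use'."""
    if va_count and non_va_count:
        return "dual"
    if va_count:
        return "VA only"
    if non_va_count:
        return "non-VA only"
    return "no use"

# Hypothetical example: 4 outpatient visits in VA and 8 outside VA.
print(user_group(4, 8), f"- outpatient VA reliance: {va_reliance(4, 8):.0f}%")
```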
Instruction: Is prostate cancer screening worthy in southern European male populations? Abstracts: abstract_id: PUBMED:22729605 Is prostate cancer screening worthy in southern European male populations? A case study in Eleusina, Greece. Objective: To investigate the frequency of PSA-detected prostate cancer among non-symptomatic Greek males. Methods: A prospective study on prostate cancer (PC) screening was performed in a representative sample of asymptomatic Greek males aged 40-75 years in Eleusina, (Greece) between January and November 2001. Indication for prostate biopsy was a PSA value above 3.0 ng/mL on repeat examination. Patients found with PC at biopsy received the appropriate treatment. Ten years later, patients initially diagnosed with PC were surveyed between June and July 2011 by telephone interview, in order to evaluate PC screening effects. Outcomes examined included: overall survival, disease-free survival and cancer mortality. Results: 309 asymptomatic males were screened. The mean age of the study population was 62 years (median 62). The PSA median was 1.1 ng/mL with 90.2% presenting with <3.0 ng/mL. Seven out of 29 patients found with a serum PSA value above 3.0 ng/mL (9.8%) were finally diagnosed with PC at biopsy. During the survey time the two patients with prostate carcinoma of low differentiation died despite aggressive treatment. Of the remaining 5 patients diagnosed with PC, one died of causes other than PC, 2 are disease-free while 2 patients are alive with the disease. Conclusions: The low PC detection rate questions the overall usefulness of PC screening in a geographical region where histological PC is not very common. abstract_id: PUBMED:31970768 Key issues that need to be considered while revising the current annex of the European Council Recommendation (2003) on cancer screening. The 2003 European Council recommendation urging the Member States to introduce or scale up breast, cervical and colorectal cancer screening through an organized population-based approach has had a remarkable impact. We argue that the recommendation needs to be updated for at least two sets of reasons. First, some of the current clinical guidelines include new tests or protocols that were not available at the time of the Council document. Some have already been adopted by organized screening programs, such as newly defined age ranges for mammography screening, Human Papillomavirus (HPV)-based cervical cancer screening, fecal immunochemical test (FIT) and sigmoidoscopy for colorectal cancer screening. Second, the outcomes of randomized trials evaluating screening for lung and prostate cancer have been published recently and the balance between harms and benefits needs to be pragmatically assessed. In the European Union, research collaboration and networking to exchange and develop best practices should be regularly supported by the European Commission. Integration between primary and secondary preventive strategies through comprehensive approaches is necessary not only to maximize the reduction in cancer burden but also to control the rising trend of other noncommunicable diseases sharing the same risk factors. abstract_id: PUBMED:32279141 Comparing the prediction of prostate biopsy outcome using the Chinese Prostate Cancer Consortium (CPCC) Risk Calculator and the Asian adapted Rotterdam European Randomized Study of Screening for Prostate Cancer (ERSPC) Risk Calculator in Chinese and European men. 
Purpose: To externally validate the clinical utility of the Chinese Prostate Cancer Consortium Risk Calculator (CPCC-RC) and the Asian-adapted Rotterdam European Randomized Study of Screening for Prostate Cancer Risk Calculator 3 (A-ERSPC-RC3) for the prediction of prostate cancer (PCa) and high-grade prostate cancer (HGPCa, Gleason Score ≥ 3 + 4) in both Chinese and European populations. Materials And Methods: The Chinese clinical cohort, the European population-based screening cohort, and the European clinical cohort included 2,508, 3,616 and 617 prostate biopsy-naive men, respectively. The area under the receiver operating characteristic curve (AUC), calibration plot and decision curve analyses were applied in the analysis. Results: The CPCC-RC's predictive ability for any PCa (AUC 0.77, 95% CI 0.75-0.79) was lower than that of the A-ERSPC-RC3 (AUC 0.79, 95% CI 0.77-0.81) in the European screening cohort (p < 0.001), but similar for HGPCa (p = 0.24). The CPCC-RC showed lower predictive accuracy for any PCa (AUC 0.65, 95% CI 0.61-0.70), but acceptable predictive accuracy for HGPCa (AUC 0.73, 95% CI 0.69-0.77) in the European clinical cohort. The A-ERSPC-RC3 showed an AUC of 0.74 (95% CI 0.72-0.76) in predicting any PCa, and a similar AUC of 0.74 (95% CI 0.72-0.76) in predicting HGPCa in the Chinese cohort. In the Chinese population, decision curve analysis revealed a higher net benefit for the CPCC-RC than the A-ERSPC-RC3, while in the European screening and clinical cohorts, the net benefit was higher for the A-ERSPC-RC3. Conclusions: The A-ERSPC-RC3 accurately predicts prostate biopsy outcome in a contemporary Chinese multi-center clinical cohort. The CPCC-RC can predict accurately in a population-based screening cohort, but not in the European clinical cohort. abstract_id: PUBMED:9351556 The European Randomized Study of Screening for Prostate Cancer: an update. Background: A consensus meeting on screening and global strategy for prostate carcinoma, held in Antwerp in 1994, determined the willingness among European cancer prevention centers to pursue vigorously the collaborative formation of a multinational randomized screening trial. This trial was to be named the European Randomized Study of Screening for Prostate Cancer (ERSPC). Methods: During the years prior to that meeting, several feasibility trials were conducted in Antwerp and Rotterdam to evaluate the pitfalls and problems of a randomized procedure for population screening. Today, five centers in five European countries share their study work and results via the ERSPC, and others are lining up to join this massive effort. Regular meetings and specific work groups enable the research centers to compare their data, because the trial methodology differs slightly from one center to another. Results: However, a common work strategy and analysis of the data have recently been reached, and the first study results of the trial (evaluating 180,000 men over a 10-year screening period) are expected by the year 2007. Conclusions: A randomized trial of prostate carcinoma screening is currently set up in Europe with five participating centers from five countries. First overall effect results of regular screening are expected after a 10-year period of follow-up. abstract_id: PUBMED:8536772 Attitudes of European urologists to early prostatic carcinoma, II. Attitude to therapy and to screening examinations. The attitudes of 656 European urologists toward therapy of localized prostatic cancer (PC) and screening examinations of the male population for PC were surveyed.
Eighty percent of the urologists would offer curative therapy to a 60-year-old patient with localized PC, while 20% would offer watchful waiting or hormonal therapy. The choice of curative therapy was not correlated with the grade of the cancer. Radical prostatectomies were offered 2.5 times as often as external beam radiotherapy. The number of radical prostatectomies performed was considered to be increasing by 56% of the urologists surveyed, decreasing by 10%, and stable by 34%. Fifty-five percent thought that screening for prostate cancer should be undertaken in their country, but only 35% believed this would decrease mortality from prostate cancer. A majority would include digital rectal examination, prostate-specific antigen and symptom evaluation in a screening program. Agreement among urologists from different European countries regarding the handling of early prostatic cancer is poor. Large regional differences were observed, with a more active attitude to therapy and screening in southern and central Europe. Attitudes to screening and to therapy, however, were only weakly correlated. In conclusion, it seems paradoxical that many urologists who would offer curative therapies to patients with localized PC would not take steps to diagnose this disease via screening of the male population. abstract_id: PUBMED:18774469 Screening for prostate cancer (PC)--an update on recent findings of the European Randomized Study of Screening for Prostate Cancer (ERSPC). Introduction of screening for prostate cancer as a healthcare policy is desirable provided its effectiveness can be shown in terms of decreasing prostate cancer mortality at an acceptable price in terms of quality of life and costs. The European Randomized Study of Screening for Prostate Cancer (ERSPC) was initiated in 1993 and should in 2008 have the power to produce the required information. The structure and status of ERSPC. ERSPC is a randomized controlled trial running in eight European countries (Belgium, Finland, France, Italy, The Netherlands, Spain, Sweden, and Switzerland). A total of 267,994 men have been randomized to screening vs. control. An interim look at the data took place in 2006; the advice of the Data Monitoring Committee was to continue the study. This was based on a total of 23,794 deaths in both study groups and 6,033 cases of prostate cancer detected in both groups, of which about 1,200 had died. Contributions to a better understanding of the screening methodology. ERSPC has contributed with a large number of publications, either coming from individual centers or combining data of several centers. A complete listing can be found at www.erspc.org. Lead-time and overdiagnosis with the screening regimen utilized in ERSPC Rotterdam were established to amount to 10.3 years and 54%, respectively. This information is of great importance for the development of further screening strategies. During the process of ERSPC, digital rectal examination was omitted and replaced by the inclusion of PSA 3-4 as a biopsy indication. The data on which this decision has been based were published and validated. Overdiagnosis and overtreatment have an adverse influence on quality of life, which will be included in the evaluation of ERSPC. The recent development of a nomogram for the identification of indolent disease is a major step to improve on this outcome parameter. The application of this nomogram to screen-detected cases allows the advice of "active observation" to be given to about 30% of such patients.
ERSPC is set to show or exclude at least a 25% reduction in prostate cancer mortality through screening. Many pending problems still have to be resolved prior to the introduction of populations based screening as a worldwide healthcare policy. abstract_id: PUBMED:21239021 Cost-effectiveness of prostate specific antigen screening in the United States: extrapolating from the European study of screening for prostate cancer. Purpose: Preliminary results of the European Randomized Study of Screening for Prostate Cancer showed a decrease in prostate cancer specific mortality associated with prostate specific antigen screening. We evaluated the cost-effectiveness of prostate specific antigen screening using data from the European Randomized Study of Screening for Prostate Cancer protocol when extrapolated to the United States. Materials And Methods: We used previously reported Surveillance, Epidemiology and End Results-Medicare data and a nationwide sample of employer provided estimates of costs of care for patients with prostate cancer. The European data were used in accordance with the study protocol to determine the costs and cost-effectiveness of prostate specific antigen screening. Results: The lifetime cost of screening with prostate specific antigen, evaluating abnormal prostate specific antigen and treating identified prostate cancer to prevent 1 death from prostate cancer was $5,227,306 based on the European findings and extrapolated to the United States. If screening achieved a similar decrease in overall mortality as the decrease in prostate cancer specific mortality in the European study, such intervention would cost $262,758 per life-year saved. Prostate specific antigen screening reported in the European study would become cost effective when the lifelong treatment costs were below $1,868 per life-year, or when the number needed to treat was lowered to 21 or fewer men. Conclusions: The lifelong costs of screening protocols are determined by the cost of treatment with an insignificant contribution from screening costs. We established a model that predicts the minimal requirements that would make screening a cost-effective measure for population based implementation. abstract_id: PUBMED:29508084 Decline in Cancer Screening in Vulnerable Populations? Results of the EDIFICE Surveys. Background: We studied cancer screening over time and social vulnerability via surveys of representative populations. Methods: Individuals aged 50-75 years with no personal history of cancer were questioned about lifetime participation in screening tests, compliance (adherence to recommended intervals [colorectal, breast and cervical cancer]) and opportunistic screening (prostate and lung cancer). Results: The proportion of vulnerable/non-vulnerable individuals remained stable between 2011 and 2016. In 2011, social vulnerability had no impact on screening participation, nor on compliance. In 2014, however, vulnerability was correlated with less frequent uptake of colorectal screening (despite an organised programme) and prostate cancer screening (opportunistic), and also with reduced compliance with recommended intervals (breast and cervical cancer screening). In 2016, the trends observed in 2014 were substantiated and even extended to breast, colorectal and cervical cancer screening uptakes. Social vulnerability has an increasingly negative impact on cancer screening attendance. The phenomenon was identified in 2014 and had expanded by 2016. 
Conclusion: Although organised programmes have been shown to ensure equitable access to cancer screening, this remains a precarious achievement requiring regular monitoring. Further studies should focus on attitudes of vulnerable populations and on ways to improve cancer awareness campaigns. abstract_id: PUBMED:8567109 European randomized study of screening for prostate cancer--the Rotterdam pilot studies. Five randomized pilot studies of screening for prostate cancer (PC) have been conducted in the area of Rotterdam from 1991 to 1994. The purpose of these studies was to establish the feasibility of a randomized screening protocol with PC mortality as the major end point in The Netherlands and at a European level. All procedures related to recruitment of participants, to application of the screening tests and to data collection were evaluated. Men (7,200) aged 55-74 years were invited through the Rotterdam Population Registry. The recruitment rate over the 5 pilot studies averaged 38.2% (2,747 men). Recruitment procedures proved to be relevant for establishing higher participation rates (invitation and consent by mail). The screening tests were well accepted and tolerated. The general population-based character of the sample was confirmed by studying symptoms of prostatic disease in participants and in men who refused participation. Data based on one PSA serum determination, rectal examination and transrectal ultrasonography are presented; 204/1,403 men (14.5%) had a positive screening result by either test combination and underwent biopsy. Forty-nine cancers were found in 1,403 men (3.5%); 65% of prostate cancers (17/26) identified in men who eventually underwent radical prostatectomy proved to be locally confined. From the pilot studies, we conclude that a large contribution to a European Randomized Study of Screening for Prostate Cancer (ERSPC) can be made by recruiting about 40,000 men in the area of Rotterdam. The preliminary data suggest that after confirmation of the present data during the first years in the European study, DRE and TRUS can be withheld depending on the PSA result in a large proportion of the screening population. abstract_id: PUBMED:36816146 Accuracy of prostate cancer screening recommendations for high-risk populations on YouTube and TikTok. Objectives: This study aimed to evaluate content quality and racial/ethnic representation, particularly of high-risk cohorts, of prostate cancer screening videos on YouTube (YT) and TikTok (TK). Materials And Methods: The top 50 videos populated for the search term 'prostate cancer screening' on YT and TK that met inclusion criteria were retrieved in a cache-cleared browser. Three reviewers analysed all videos using validated criteria for the quality of consumer health information (DISCERN and Patient Education Materials Assessment Tool [PEMAT]). High quality was defined as follows: DISCERN ≥ 4, PEMAT understandability ≥75% and PEMAT actionability ≥75%. A 5-point Likert scale was used to demonstrate the level of misinformation compared to American Urological Association and National Comprehensive Cancer Network guidelines. Perceived race and ethnicity of people in the videos were assessed by consensus approach. Results: TK videos were shorter (median 3.7 vs. 0.5 min, p < 0.001) and had more views per month (5437.5 vs. 19.3, p = 0.03) than YT videos. Perceived Black and Hispanic representation was present in 10% and 6% of YT videos and 20% and 12% of TK videos, respectively. 
High-risk racial/ethnic groups were explicitly discussed in 46% of YT videos and 8% of TK videos. A total of 98% of YT videos and 100% of TK videos had low- to moderate-quality consumer health information, and 88% of YT videos and 100% of TK videos had moderate to high levels of misinformation based on screening guidelines. Conclusions: YT and TK videos about prostate cancer screening are widely viewed but do not provide quality consumer health information. Black and Hispanic men remain under-represented on both platforms, and high-risk racial groups were not discussed in most videos despite the importance for screening criteria. The low understandability and actionability, significant misinformation and lack of diversity in online videos support the need for higher quality videos with adequate attention to high-risk ethnic cohorts. Answer: The question of whether prostate cancer screening is worthy in southern European male populations is complex and depends on various factors, including the prevalence of prostate cancer in the region, the balance between the benefits and harms of screening, and the effectiveness of screening programs. A study conducted in Eleusina, Greece, found a low detection rate of prostate cancer among asymptomatic Greek males, which led the authors to question the overall usefulness of prostate cancer screening in a geographical region where histological prostate cancer is not very common (PUBMED:22729605). This suggests that in some southern European populations, the benefit of widespread prostate cancer screening may be limited. However, the European Council has recommended organized population-based screening for various cancers, including prostate cancer, and the balance between harms and benefits of such screening needs to be pragmatically assessed (PUBMED:31970768). The European Randomized Study of Screening for Prostate Cancer (ERSPC) has contributed to understanding the methodology and outcomes of prostate cancer screening, including issues of overdiagnosis and overtreatment (PUBMED:18774469). The ERSPC has shown that prostate cancer screening can lead to a reduction in prostate cancer mortality, but many problems still need to be resolved before population-based screening can be implemented as a worldwide healthcare policy (PUBMED:18774469). Additionally, the accuracy of prostate cancer screening recommendations for high-risk populations, including those in southern Europe, has been evaluated in various studies, including the comparison of different risk calculators in Chinese and European men (PUBMED:32279141). The study found that the Asian adapted Rotterdam European Randomized Study of Screening for Prostate Cancer Risk Calculator (A-ERSPC-RC3) accurately predicted prostate biopsy outcomes in a European screening cohort. In conclusion, while there is evidence to support the potential benefits of prostate cancer screening in reducing mortality, the decision to implement widespread screening in southern European male populations should be made after careful consideration of the specific regional prevalence of prostate cancer, the effectiveness of screening programs, and the potential for overdiagnosis and overtreatment. The low detection rate in some regions, such as Greece, suggests that the value of screening may vary across different southern European populations (PUBMED:22729605).
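To make the cost-effectiveness arithmetic quoted above (PUBMED:21239021) easier to follow, the short Python sketch below shows how a cost per life-year saved relates to the lifetime cost of preventing one death, and how a generic figure could be assembled from screening and treatment inputs. Only the two dollar figures quoted in the abstract are taken from the source; the helper function, its parameter names and all other numbers are illustrative assumptions, not values from the study.

# Illustrative back-of-the-envelope arithmetic based on two figures quoted from
# PUBMED:21239021; the generic helper and its example inputs are hypothetical.

cost_to_prevent_one_death = 5_227_306   # USD, lifetime cost per PCa death prevented (quoted)
cost_per_life_year_saved = 262_758      # USD, if overall mortality fell like PCa mortality (quoted)

# Implied life-years gained per death averted under that stated assumption:
implied_life_years = cost_to_prevent_one_death / cost_per_life_year_saved
print(f"Implied life-years per death averted: {implied_life_years:.1f}")  # roughly 20

def cost_per_ly(number_needed_to_screen: int,
                screening_cost_per_person: float,
                number_needed_to_treat: int,
                treatment_cost_per_person: float,
                life_years_gained_per_death: float) -> float:
    """Generic cost per life-year saved; all inputs here are hypothetical."""
    total_cost = (number_needed_to_screen * screening_cost_per_person
                  + number_needed_to_treat * treatment_cost_per_person)
    return total_cost / life_years_gained_per_death

# Example with made-up inputs (NOT the ERSPC values):
print(cost_per_ly(number_needed_to_screen=1_000, screening_cost_per_person=60,
                  number_needed_to_treat=21, treatment_cost_per_person=40_000,
                  life_years_gained_per_death=20))

As the abstract notes, the treatment cost term dominates this kind of calculation, which is why lowering the number needed to treat (or the lifelong treatment cost) is what moves screening toward cost-effectiveness.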
Instruction: Negative Fine-Needle Aspiration in Patients with Goiter: Should We Doubt It? Abstracts: abstract_id: PUBMED:35670966 Fine-Needle Aspiration Under Guidance of Ultrasound Examination of Thyroid Lesions. Fine-needle aspiration biopsy is the most common method for preoperative diagnosis of thyroid carcinomas including papillary carcinoma. The procedure is best performed with ultrasound by operator with professional skill and knowledge. Several guidelines recommend the indication of fine-needle aspiration concerning the pattern of ultrasound and size of nodules. Besides, fine-needle aspiration biopsy of lymph nodes should be performed if malignancies are suspected. Fine-needle aspiration biopsy of thyroid gland is mostly safe, but complications such as blood extravasation-related complications, acute thyroid enlargement, infection in thyroid gland, and pneumothorax could occur. The most frequent complications are blood extravasation-related complications, which could be fatal. Similarly, acute thyroid enlargement could also be severe. To conclude, fine-needle aspiration biopsy is useful and should be performed under the precise indication and the updated knowledge of complications including the way of handling if they occur. abstract_id: PUBMED:32037778 Diagnostic accuracy of fine needle aspiration cytology of thyroid nodules. Background Routine application of fine needle aspiration cytology (FNAC) has decreased unnecessary referral of thyroid nodules for surgical treatment and has also increased the cancer rates found in surgery materials. Success of thyroid FNAC depends on skilled aspiration, skilled cytological interpretation and rational analysis of cytological and clinical data. The aim of this study was to determine the diagnostic accuracy rates of thyroid FNAC results obtained in our institution. Methods The data from FNAC and thyroidectomy reports of patients presenting with goiter and who had been evaluated from 1st January 2014 to 1st March 2018 were used. There were 149 patients in total who had undergone thyroidectomy following FNAC. The Bethesda System for Reporting Thyroid Cytology was used in all cytological diagnoses. Results The sensitivity of thyroid FNAC for malignant cases was 57.89%, specificity was 88.10%, false-positive rate was 11.90%, false-negative rate was 42.11%, positive predictive value was 52.38%, negative predictive value was 90.24% and accuracy rate was 82.52%. "Focus number" variable was detected as the factor that affected the accurate prediction of FNAC and thyroidectomy results by the pathologist. Conclusions This study showed that there was a moderate conformity between thyroid FNAC and thyroidectomy cyto-histopathological diagnosis in malignant cases. As two or more nodules have a negative effect on the physician's diagnosis of malignant nodules, we think that a more sensitive approach is needed in the determination of these cases. Sampling defects may affect this non-matching. abstract_id: PUBMED:30815558 Diagnostic accuracy of ultrasound and fine-needle aspiration in the study of thyroid nodule and multinodular goitre. Objective: Ultrasonography and cytology obtained by fine-needle aspiration are part of the basic study of the thyroid nodule. Although they are not diagnostic in every case, they are cost-effective methods that inform surgical treatment and its extent. The purpose of this study was to evaluate the accuracy of ultrasonography associated with fine-needle aspiration to predict malignancy in nodular thyroid pathology. 
Design And Patients: We collected prospective data from patients undergoing thyroidectomy by single nodule or multinodular goitre between 2006 and 2016. A total of 417 patients were included. Ultrasounds were classified as suspected of malignancy if they had 2 or more of the following characteristics: hypoechogenicity, microcalcifications, intranodular central hypervascularization, irregular margins and poorly defined edges. Measurements: Ultrasound and fine-needle aspiration accuracy. Results: In the postoperative study, 40% presented malignant pathology. 33% of patients with nonsuspicious ultrasound and 73% of those with suspicious ultrasound had malignant disease. Among patients with single nodule and suspicious ultrasound, the malignancy rate reached 80%. As for cytology, 100% of Bethesda VI patients, 88% of V, 63% of IV, 31% of III and 12% of II were found to have carcinoma. The combination of the 2 tests showed a high predictive value, particularly in cases of Bethesda IV cytology. Conclusions: Thyroid cytology provides high predictive value of the presence of carcinoma. The predictive value of ultrasound is also high, mainly in the study of isolated nodules. The combination of the 2 tests results in increased diagnostic accuracy. abstract_id: PUBMED:29949024 The Value of Negative Diagnosis in Thyroid Fine-Needle Aspiration: a Retrospective Study with Histologic Follow-Up. The Bethesda System for reporting thyroid cytopathology (BSRTC) predicts an incidence of malignancy of less than 5% in thyroid nodules with a benign diagnosis on fine-needle aspiration (FNA). However, recent series have suggested that the true rate of malignancy might be significantly higher in this category of patients. We reviewed our experience by performing a retrospective analysis of patients with benign thyroid FNA results who underwent thyroidectomy between 2008 and 2013 at a large academic center. Information including demographics, ultrasound features, FNA diagnosis, and surgical follow-up information were recorded. Slides were reviewed on cytology-histology discrepant cases, and it was determined whether the discrepancy was due to sampling or interpretation error. A total of 802 FNA cases with a benign diagnosis and surgical follow-up were identified. FNA diagnoses included 738 cases of benign goiter and 64 cases of lymphocytic thyroiditis. On subsequent surgical resection, 144 cases were found to be neoplastic, including 117 malignant cases. False negative, defined as interpretation error and inadequate biopsy of the nodule harboring malignancy, was 6%. When cases of noninvasive follicular thyroid neoplasm with papillary-like nuclear features (NIFTP) were excluded from the analysis, false-negative rate was 5%. When microPTC cases were excluded, false-negative rate was 3% and was slightly less than 3% when both microPTC and NIFTP cases were excluded from the analysis. Retrospective review of neoplastic cases showed that 57% were due to sampling error and 43% were due to interpretation error. Interpretation error was more likely to occur in follicular patterned neoplasms (75%), while sampling error was more common in non-follicular variants of papillary thyroid carcinoma (non-FVPTC) (61%). With the exclusion of microPTC, interpretation errors were still more likely to occur in follicular neoplasms (79%) but there was no significant difference in sampling error between non-FVPTC (37%) and follicular patterned neoplasms (42%). 
Tumor size was larger in cases with interpretation error (mean = 2.3 cm) compared to cases with sampling error (mean = 1.4 cm). This study shows that the false-negative rate of thyroid FNA at our institution is not significantly above the rate suggested by the BSRTC. Interpretation errors were more likely to occur in follicular patterned neoplasms, while non-FVPTC was more frequently found in false negative cases due to inadequate sampling. abstract_id: PUBMED:28220939 Acute diffuse thyroid swelling: A rare complication of fine-needle aspiration. We present a case illustrating the rare complication of acute generalized thyroid swelling shortly after sonographic-guided fine needle aspiration of a thyroid nodule. Ultrasound revealed the presence of characteristic linear hypoechoic avascular areas interspersed throughout the gland suggestive of edema. The patient was treated conservatively, with near complete normalization of the thyroid within 24 hours. Recognition of this potential complication is important, as the rapid onset of diffuse thyroid enlargement is often alarming but typically has a transient and self-limiting course. © 2017 Wiley Periodicals, Inc. J Clin Ultrasound 45:426-429, 2017. abstract_id: PUBMED:16121774 A comparative study of fine needle aspiration cytology versus non-aspiration technique in thyroid lesions. The present study was done to explore the diagnostic yield by the non-aspiration technique as compared with fine needle aspiration cytology (FNAC) of lesions in the thyroid gland. This method of non-aspiration fine needle cytology, which utilizes no active suction or aspiration, was performed on 150 patients presenting with enlargement of the thyroid gland. Smears were then cytologically assessed as unsuitable, diagnostically adequate or diagnostically superior, without knowledge of the sampling method employed. Diagnostically superior specimens were obtained significantly more frequently by the non-aspiration technique in 136 benign lesions and 14 neoplasms. Thus, the non-aspiration technique combined with FNAC can result in obtaining good quality cellular material in thyroid lesions. abstract_id: PUBMED:22470244 Fine needle aspiration cytology as the primary diagnostic tool in thyroid enlargement. Background: In the preoperative decision-making of the thyroid swellings, fine needle aspiration cytology (FNAC) is becoming an ever more vital tool. Objectives: To compare the advantage of preoperative FNAC of thyroid swellings with postoperative histopathology to reach a consensus protocol as a simple procedure for diagnosis and optimal management of thyroid swellings. Materials And Methods: A prospective study of preoperative FNAC was carried out on 178 incidental thyroid swellings attending a tertiary care centre in Kishanganj, Bihar. Evidence-based surgical interventions were done, irrespective of FNAC findings, and diagnosis was confirmed by histopathological examination (HPE) postoperatively in all the cases. Results: In the FNAC, preponderance of the cases (75.84%) was colloid goitre followed by granulomatous thyroiditis; follicular carcinoma was noted in 7.30 percent and anaplastic carcinoma in 3.37 percent of cases. Histopathological examination showed colloid goitre predominantly (76.97%), followed by follicular carcinoma (8.99%). The overall prevalence of malignancy was 11.24 percent diagnosed by HPE and 9.55 percent by FNAC. In our FNAC series, sensitivity was 90 percent while specificity was 100 percent; accuracy was 98.88 percent.
Predictive values of a positive test and of a negative test were 100 percent and 98.75 percent, respectively. Conclusion: The study highlights that FNAC should be treated as a first-line diagnostic test for thyroid swellings to guide management, though it is not a substitute for HPE, given the need to improve primary healthcare in India. abstract_id: PUBMED:26319258 Negative Fine-Needle Aspiration in Patients with Goiter: Should We Doubt It? Background: Epidemiologic studies demonstrated a higher incidence of thyroid cancer in patients with multinodular goiters compared to the general population. The aim of this study was to evaluate the risk of finding significant thyroid cancer in patients undergoing thyroidectomy for presumed benign disease. Methods: The records of 273 patients operated for indications other than cancer or indeterminate cytology were reviewed and analyzed. Results: 202 (74%) patients had a preoperative fine-needle aspiration (FNA) performed. FNA was benign in 96% of patients and non-diagnostic in 4%. Malignancy was unexpectedly found in 50 (19%) patients. Papillary carcinoma constituted 94% of cancers and 86% of cancers were incidental microcarcinomas. Only 7 (2.6%) patients of the entire cohort had tumors greater than 1 cm, of those only 3 had a previous benign FNA (false-negative rate 1.5%). Conclusions: The rate of significant thyroid cancer found unexpectedly in resected goiters is extremely low. A negative FNA excludes significant cancer with near certainty. abstract_id: PUBMED:25745298 A focal nodular Hürthle cell hyperplasia in Hashimoto's thyroiditis: A diagnostic dilemma on fine needle aspiration. Hürthle cells are seen in a variety of nonneoplastic and neoplastic thyroid gland lesions. Number and morphology of Hürthle cell vary in thyroid aspirate. Occasionally, thyroid aspirate in focal nodular Hürthle cell hyperplasia in Hashimoto's thyroiditis exclusively comprises Hürthle cells and mimics Hürthle cell neoplasm. Fine needle aspiration (FNA) diagnosis in such cases is challenging. A 60-year-old female presented with goiter and clinical features of hyperthyroidism. FNA smears showed Hürthle cells arranged in flat sheets and lying singly with occasional lymphocytes in Hürthle cell sheets. Repeat aspiration from another site showed lymphocytes infiltrating the thyroid follicular cells. We conclude that a careful search of lymphocytes in Hürthle cell sheets in cytology smears, multiple aspirates, associated clinical findings and ancillary techniques reduce the diagnostic pitfall and avoid unnecessary surgery. abstract_id: PUBMED:22090699 Fine needle aspiration cytology of Hashimoto's thyroiditis - A diagnostic pitfall with review of literature. Hashimoto's thyroiditis is the second most common thyroid lesion next to goiter diagnosed on fine needle aspiration cytology (FNAC). It is also an important cause for hypothyroidism. FNAC plays a significant role in the diagnosis of thyroid lesions due to its simplicity and low cost. It can accurately diagnose Hashimoto's thyroiditis in most patients. However, a small percentage of cases may be missed due to the inherent limitations of this procedure and the varied cytomorphology of this lesion. Therefore, thorough cytological evaluation and an integrated approach are necessary to reach the correct diagnosis and to avoid unnecessary surgery.
We present a 56-year-old female with solitary thyroid nodule diagnosed as Hurthle cell neoplasm on FNAC, but subsequent histopathological diagnosis following resection revealed Hashimoto's thyroiditis with marked Hurthle cell change. Answer: Negative fine-needle aspiration (FNA) in patients with goiter is generally considered reliable in excluding significant cancer. A study that reviewed records of 273 patients operated for indications other than cancer or indeterminate cytology found that malignancy was unexpectedly present in 19% of patients, with 94% of these being papillary carcinoma and 86% being incidental microcarcinomas. However, only 7 (2.6%) patients in the entire cohort had tumors greater than 1 cm, and of those, only 3 had a previous benign FNA, resulting in a false-negative rate of 1.5%. This suggests that the rate of significant thyroid cancer found unexpectedly in resected goiters is extremely low, and a negative FNA can exclude significant cancer with near certainty (PUBMED:26319258). However, it is important to note that there are limitations to FNA, and false negatives can occur. A retrospective study of 802 FNA cases with a benign diagnosis and surgical follow-up identified a false-negative rate of 6% when considering interpretation error and inadequate biopsy of the nodule harboring malignancy. When certain types of neoplasms were excluded from the analysis, the false-negative rate decreased slightly (PUBMED:29949024). Additionally, the diagnostic accuracy of FNA can be affected by factors such as the number of nodules present, with two or more nodules having a negative effect on the physician's diagnosis of malignant nodules (PUBMED:32037778). In conclusion, while negative FNA results in patients with goiter are generally reliable, there is still a small risk of false negatives. Therefore, it is important to consider the limitations of FNA and to use an integrated approach, including clinical findings and possibly additional diagnostic methods, to ensure accurate diagnosis and optimal management of thyroid swellings (PUBMED:22090699, PUBMED:22470244).
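Several of the abstracts above quote sensitivity, specificity, predictive values, accuracy and false-negative rates for FNA/FNAC against the histopathological reference standard. As a reminder of how all of these figures derive from a single 2x2 table, here is a minimal Python sketch; the counts used in the example are hypothetical and are not taken from any of the cited studies.

# Diagnostic-accuracy arithmetic from a 2x2 table (test result vs. histopathology).
# The example counts below are hypothetical, for illustration only.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),             # true-positive rate
        "specificity": tn / (tn + fp),             # true-negative rate
        "ppv": tp / (tp + fp),                     # positive predictive value
        "npv": tn / (tn + fn),                     # negative predictive value
        "accuracy": (tp + tn) / total,
        "false_negative_rate": fn / (fn + tp),
    }

# Hypothetical example: 18 true positives, 0 false positives,
# 2 false negatives, 158 true negatives.
for name, value in diagnostic_metrics(tp=18, fp=0, fn=2, tn=158).items():
    print(f"{name}: {value:.3f}")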
Instruction: Can the aspiration detected by videofluoroscopic swallowing studies predict long-term survival in stroke patients with dysphagia? Abstracts: abstract_id: PUBMED:15742979 Can the aspiration detected by videofluoroscopic swallowing studies predict long-term survival in stroke patients with dysphagia? Purpose: This study aimed to evaluate whether the aspiration detected by videofluoroscopic swallowing study (VSS) could predict the long-term survival in stroke patients with dysphagia in the post-acute phase of stroke. Methods: A cohort of 182 consecutive patients with stroke-related dysphagia referred for VSS from July 1994 to April 1999 was retrospectively constructed. VSS findings and clinical features in the post-acute phase of stroke were recorded. The records thus obtained were then linked to the National Death Register to track the occurrence of patient deaths until December 31, 2000. Results: Of the 182 patients, 91 (50%) showed aspiration during VSS performed for a median duration of 8.4 weeks after stroke, and 76 (42%) had silent aspiration. In the post-acute phase of stroke (14.7 +/- 8.7 weeks after stroke, mean + standard deviation), 56 (31%) were tube-fed, and 88 (48%) were wheelchair-confined. A total of 65 patients died in a median follow-up duration of 30.8 months after VSS. Patients were classified into three groups based on the findings of VSS-detected aspiration or penetration, but no difference was noted in their survival curves. In the Cox stepwise regression analysis, only advanced age, recurrent stroke (hazard ratio 1.74, 95% CI 1.06-2.85), the need of tube-feeding (hazard ratio 2.07, 95% CI 1.19-3.59), and being wheelchair-confined (hazard ratio 2.83, 95% CI 1.54-5.19) during follow-up were independent predictors of long-term survival. Conclusions: VSS-detected aspiration during the post-acute phase of stroke was not predictive for the long-term survival in stroke patients with dysphagia. abstract_id: PUBMED:34071752 Usefulness of the Modified Videofluoroscopic Dysphagia Scale in Choosing the Feeding Method for Stroke Patients with Dysphagia. Introduction: The Videofluoroscopic Dysphagia Scale (VDS) is used to predict the long-term prognosis of dysphagia in patients with strokes. However, the inter-rater reliability of the VDS was low in a previous study. To overcome the mentioned limitations of the VDS, the modified version of the VDS (mVDS) was created and clinically applied to evaluate its usefulness in choosing the feeding method for stroke patients with dysphagia. Methods: The videofluoroscopic swallowing study (VFSS) data of 56 stroke patients with dysphagia were collected retrospectively. We investigated the presence of aspiration pneumonia and the selected feeding method. We also evaluated the correlations between the mVDS and the selected feeding method, and between the mVDS and the presence of aspiration pneumonia after stroke. Univariate logistic regression and receiver operating characteristic analyses were used in the data analysis. Results: The inter-rater reliability (Cronbach α value) of the total score of the mVDS was 0.886, which was consistent with very good inter-rater reliability. In all patients with dysphagia, the supratentorial stroke subgroup, and the infratentorial stroke subgroup, the mVDS scores were statistically correlated with the feeding method selected (p < 0.05) and the presence of aspiration pneumonia (p < 0.05). 
Conclusions: The mVDS can be a useful scale for quantifying the severity of dysphagia, and it can be a useful tool in the clinical setting and in studies for interpreting the VFSS findings in stroke patients with dysphagia. Further studies with a greater number of patients and various stroke etiologies are required for more generalized applications of the mVDS. abstract_id: PUBMED:29560323 Usefulness of Early Videofluoroscopic Swallowing Study in Acute Stroke Patients With Dysphagia. Objective: To demonstrate the usefulness of early videofluoroscopic swallowing study (VFSS) and to investigate change patterns in dietary methods in stroke patients with dysphagia. Methods: The VFSS was performed within 7 days of stroke onset in neurologically stable patients. The patients were divided into three groups according to type of brain lesion: cortical lesion (CL), subcortical lesion (SCL), and brainstem/cerebellar lesion (BCL). Based on the VFSS results, this study investigated change patterns in feeding method and discrepancies in the aspiration risk predicted by the Water Swallowing Test (WST) and the VFSS. Complications, such as aspiration pneumonia, were also evaluated. Results: A total of 163 patients met the inclusion criteria and the VFSS was performed within 7 days of stroke. Patients considered at risk for aspiration (Penetration-Aspiration Scale [PAS] scores of 6 to 8) were found in all three groups using the VFSS (47.5% of the CL group, 59.3% of the SCL group, and 47.9% of the BCL group). After early VFSS, 79.7% of the patients were assessed to require restricted feeding methods. A 19.0% discrepancy was found between the WST and VFSS results. At 3-week follow-up after the VFSS, aspiration pneumonia was observed in 12 patients (7.4%) with restricted feeding methods. Conclusion: Early VFSS during the acute period can facilitate determination of the most appropriate feeding method, and support effective dysphagia management for stroke patients. abstract_id: PUBMED:34640316 Usefulness of the Modified Videofluoroscopic Dysphagia Scale in Evaluating Swallowing Function among Patients with Amyotrophic Lateral Sclerosis and Dysphagia. Introduction: The videofluoroscopic dysphagia scale (VDS) is used to predict the long-term prognosis of dysphagia among patients with the condition. Previously, a modified version of the VDS (mVDS) was established to overcome the relatively low inter-rater reliability of VDS, and was verified in patients with dysphagia, such as stroke patients. However, the validity of mVDS in patients with amyotrophic lateral sclerosis (ALS) has never been proved. Therefore, in this study, we attempted to seek the validity of the mVDS score in patients with ALS suffering from dysphagia. Method: Data from the videofluoroscopic swallowing study (VFSS) of 34 patients with ALS and dysphagia were retrospectively collected. We investigated the presence of aspiration pneumonia and the selected feeding method based on the VFSS. We also evaluated the correlations between the mVDS and the selected feeding method, and between the mVDS and the presence of aspiration pneumonia. Multivariate logistic regression and receiver operating characteristic (ROC) analyses were performed during the data analysis. Results: In patients with ALS and dysphagia, the mVDS scores were statistically correlated with the selected feeding method (p < 0.05) and the presence of aspiration pneumonia (p < 0.05). 
In the ROC curve analysis, the area under the ROC curve values for the selected feeding method and the presence of aspiration pneumonia were 0.886 (95% confidence interval (CI), 0.730-0.969; p < 0.0001) and 0.886 (95% CI, 0.730-0.969; p < 0.0001), respectively. Conclusion: The mVDS can be a useful tool for quantifying the severity of dysphagia and interpreting the VFSS findings in patients with ALS and dysphagia. However, further studies involving a more general population of patients with ALS are needed to elucidate a more accurate cut-off value for the allowance of oral feeding and the presence of aspiration pneumonia. abstract_id: PUBMED:32585726 The effect of reclining position on swallowing function in stroke patients with dysphagia. Background: Dysphagia is a common problem in patients with a history of stroke. In Japan, a reclined position is commonly used as a compensatory technique to address this problem. Objective: To evaluate the effect of reclined position on swallowing function in patients with stroke who had dysphagia. Methods: A retrospective analysis was carried out on the videofluoroscopic examination of swallowing (VF) of 4ml honey-thick liquid swallows collected over 9 years. Penetration-aspiration scale (PAS) and residue scores were compared for the following: a body position at 90° upright (90°U) and 60° reclining (60°R) groups, as well as 60°R and 45° reclining (45°R) groups. Results: Two hundred and five records from 98 subjects were reviewed. These included patients with ischaemic stroke (62%), haemorrhagic stroke (32%) and subarachnoid haemorrhage (6%). PAS scores were lower when the body was in a more reclined position (P < .001). The amount of residue in the valleculae and pyriform sinus also reduced in the more reclined position (P < .001). The deeper bolus head at swallowing onset was positively correlated with severe PAS (P < .001). Conclusions: These findings suggest that in patients with stroke who had dysphagia, a reclined position may be useful in reducing the risk of penetration and aspiration, and in decreasing the amount of residue in the pharyngeal area. The depth of the bolus head at the onset of swallowing increases the severity of penetration and aspiration. abstract_id: PUBMED:38293925 Prospective Observational Study for the Comparison of Screening Methods Including Tongue Pressure and Repetitive Saliva Swallowing With Detailed Videofluoroscopic Swallowing Study Findings in Patients With Acute Stroke. Background: Simple, noninvasive, and repeatable screening methods are essential for assessing swallowing disorders. We focused on patients with acute stroke and aimed to assess the characteristics of swallowing screening tests, including the modified Mann Assessment of Swallowing Ability score, tongue pressure, and repetitive saliva swallowing test (RSST), compared with detailed videofluoroscopic swallowing study (VFSS) findings to contribute as a helpful resource for their comprehensive and complementary use. Methods And Results: We enrolled first-ever patients with acute stroke conducting simultaneous assessments, including VFSS, modified Mann Assessment of Swallowing Ability score, tongue pressure measurement, and RSST. VFSS assessed aspiration, laryngeal penetration, oral cavity residue, vallecular residue, pharyngeal residue, and swallowing reflex delay. Screening tests were compared with VFSS findings, and multiple logistic analysis determined variable importance. 
Cutoff values for each abnormal VFSS finding were assessed using receiver operating characteristic analyses. We evaluated 346 patients (70.5±12.6 years of age, 143 women). The modified Mann Assessment of Swallowing Ability score was significantly associated with all findings except aspiration. Tongue pressure was significantly associated with oral cavity and pharyngeal residue. The RSST was significantly associated with all findings except oral cavity residue. Receiver operating characteristic analyses revealed that the minimum cutoff value for all VFSS abnormal findings was RSST ≤2. Conclusions: The modified Mann Assessment of Swallowing Ability is useful for broadly detecting swallowing disorders but may miss mild issues and aspiration. The RSST, with a score of ≤2, is valuable for indicating abnormal VFSS findings. Tongue pressure, especially in oral and pharyngeal residues, is useful. Combining these tests might enhance accuracy of the swallowing evaluation. abstract_id: PUBMED:35000369 Decreased Maximal Tongue Protrusion Length May Predict the Presence of Dysphagia in Stroke Patients. Objective: To investigate the relationship between maximal tongue protrusion length (MTPL) and dysphagia in post-stroke patients. Methods: Free tongue length (FTL) was measured using the quick tongue-tie assessment tool and MTPL was measured using a transparent plastic ruler in 47 post-stroke patients. The MTPL-to-FTL (RMF) ratio was calculated. Swallowing function in all patients was evaluated via videofluoroscopic swallowing study (VFSS), Penetration-Aspiration Scale (PAS), Functional Oral Intake Scale (FOIS), and Videofluoroscopic Dysphagia Scale (VDS). Results: The MTPL and RMF values were significantly higher in the non-aspirator group than in the aspirator group (MTPL, p=0.0049; RMF, p<0.001). MTPL and RMF showed significant correlations with PAS, FOIS and VDS scores. The cut-off value in RMF for the prediction of aspiration was 1.56, with a sensitivity of 84% and a specificity of 86%. Conclusion: There is a relationship between MTPL and dysphagia in post-stroke patients. MTPL and RMF can be useful for detecting aspiration in post-stroke patients. abstract_id: PUBMED:20228462 Quantitative videofluoroscopic analysis of penetration-aspiration in post-stroke patients. Background: Dysphagia is a common complication of stroke and is a potential cause for aspiration and malnutrition and is also associated with poor outcome. Videofluoroscopic Swallowing Study (VFSS) is the most objective method for evaluation of swallowing disorders. Aim: To investigate the incidence and characteristics of penetration-aspiration in post-stroke patients, and to study the relationship between penetration-aspiration and kinematic parameters of swallow. Materials And Methods: We prospectively studied swallowing function in 105 consecutive post-stroke patients and 100 normal adults by videofluoroscopic swallowing studies. The severity of airway invasion, penetration-aspiration, was studied quantitatively, and kinematic parameters of swallow, i.e. oral transit time, pharyngeal transit time (PTT), pharyngeal delay time (PDT), maximal extent of vertical and anterior movement of larynx and hyoid bone for four kinds of boluses, were also studied. Logistic regression was used to analyze the association between aspiration and kinematic parameters of swallow. Results: Stroke patients had significantly higher scores on the penetration-aspiration scale than the normal subjects (P < 0.001) during four bolus swallows.
Logistic regression analysis showed that PTT, PDT, maximal extent of vertical laryngeal and hyoid movement were statistically associated with the prevalence of aspiration (P < 0.05). Conclusion: Penetration-aspiration is common in stroke patients. Several kinematic parameters of swallow are associated with the presence of aspiration on fluoroscopy. These data demonstrate that VFSS may be helpful for objective identification of dysphagia in stroke patients. abstract_id: PUBMED:22837970 Correlation between Location of Brain Lesion and Cognitive Function and Findings of Videofluoroscopic Swallowing Study. Objective: To investigate whether patterns of swallowing difficulties were associated with the location of the brain lesion, cognitive function, and severity of stroke in stroke patients. Method: Seventy-six patients with first-time acute stroke were included in the present investigation. Swallowing-related parameters, which were assessed videofluoroscopically, included impairment of lip closure, decreased tongue movement, amount of oral remnant, premature loss of food material, delay in oral transit time, laryngeal elevation, delay in pharyngeal triggering time, presence of penetration or aspiration, and the amount of vallecular and pyriform sinus remnants. The locations of brain lesions were classified into the frontal, parietotemporal, subcortical, medulla, pons, and cerebellum. The degree of cognitive impairment and the severity of stroke were assessed by the Mini Mental Status Examination (MMSE) and the National Institute of Health Stroke Scale (NIHSS), respectively. Results: An insufficient laryngeal elevation, the amount of pyriform sinus, and vallecular space remnant in addition to the incidence of aspiration were correlated with medullary infarction. Other swallowing parameters were not related to lesion topology. Lip closure dysfunction, decreased tongue movement, increased oral remnant and premature loss were associated with low MMSE scores. A delayed oral transit time was associated with NIHSS scores. Conclusion: In-coordination of the lip, the tongue, and the oropharynx were associated with the degree of cognitive impairment and the stroke severity rather than with the location of the lesion, whereas incomplete laryngeal elevation and aspiration were predominant in medullary lesions. abstract_id: PUBMED:34291064 Videofluoroscopic Swallowing Study Findings Associated With Subsequent Pneumonia in Patients With Dysphagia Due to Frailty. Dysphagia in frailty or deconditioning without specific diagnosis that may cause dysphagia such as stroke, traumatic brain injury, or laryngeal pathology, has been reported in previous studies; however, little is known about which findings of the videofluoroscopic swallowing study (VFSS) are associated with subsequent pneumonia and how many patients actually develop subsequent pneumonia in this population. In this study, we followed 190 patients with dysphagia due to frailty or deconditioning without specific diagnosis that may cause dysphagia for 3 months after VFSS and analyzed VFSS findings for the risk of developing pneumonia. During the study period, the incidence of subsequent pneumonia was 24.74%; regarding the VFSS findings, (1) airway penetration (PAS 3) and aspiration (PAS 7 and 8) were associated with increased risk of developing pneumonia, and (2) the functional dysphagia scale (FDS) scores of the patients who developed subsequent pneumonia were higher than those of the patients who did not develop subsequent pneumonia.
Our study findings might assist clinicians in making clinical decisions based on the VFSS findings in this population. Answer: The aspiration detected by videofluoroscopic swallowing studies (VSS) does not appear to predict long-term survival in stroke patients with dysphagia. A study that evaluated the long-term survival of stroke patients with dysphagia found that VSS-detected aspiration during the post-acute phase of stroke was not predictive of long-term survival. Instead, independent predictors of long-term survival included advanced age, recurrent stroke, the need for tube-feeding, and being wheelchair-confined during follow-up (PUBMED:15742979). While aspiration detected by VSS does not seem to be a predictor of long-term survival, the modified Videofluoroscopic Dysphagia Scale (mVDS) has been shown to be useful in quantifying the severity of dysphagia and in choosing the feeding method for stroke patients with dysphagia. The mVDS scores were statistically correlated with the selected feeding method and the presence of aspiration pneumonia (PUBMED:34071752). Additionally, early VSS can facilitate the determination of the most appropriate feeding method and support effective dysphagia management for stroke patients (PUBMED:29560323). Other studies have also highlighted the importance of various screening methods and measurements in relation to dysphagia in stroke patients. For instance, decreased maximal tongue protrusion length (MTPL) may predict the presence of dysphagia in stroke patients (PUBMED:35000369), and quantitative videofluoroscopic analysis can help identify kinematic parameters associated with penetration-aspiration in post-stroke patients (PUBMED:20228462). Furthermore, the correlation between the location of the brain lesion and cognitive function with findings of the videofluoroscopic swallowing study has been investigated, suggesting that certain swallowing difficulties are associated with specific lesion locations (PUBMED:22837970). In summary, while VSS-detected aspiration is not a predictor of long-term survival in stroke patients with dysphagia, videofluoroscopic swallowing studies and related scales and measurements can be valuable tools for assessing the severity of dysphagia, choosing appropriate feeding methods, and managing the risk of aspiration pneumonia.
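Several of the abstracts above derive cut-off values from receiver operating characteristic (ROC) analysis, for example the RMF cut-off of 1.56 in PUBMED:35000369 and the mVDS analyses. The Python sketch below shows one common way such a cut-off can be obtained (maximizing the Youden index) using simulated data and scikit-learn; the distributions, group sizes and random seed are assumptions made purely for illustration, not study data, and this is not necessarily the procedure the cited authors used.

# ROC analysis and Youden-index cut-off for a continuous predictor of aspiration.
# All data below are simulated; lower predictor values are assumed to indicate aspiration.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
aspirator = rng.normal(loc=1.3, scale=0.25, size=60)       # hypothetical ratio values
non_aspirator = rng.normal(loc=1.8, scale=0.25, size=90)   # hypothetical ratio values

y_true = np.r_[np.ones(60), np.zeros(90)]   # 1 = aspiration present
# Negate the scores so that higher values indicate the positive class for roc_curve.
y_score = -np.r_[aspirator, non_aspirator]

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)

youden = tpr - fpr
best = int(np.argmax(youden))
print(f"AUC = {auc:.3f}")
print(f"Suggested cut-off (ratio) = {-thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")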
Instruction: Is the relationship among outcome variables shown in randomized trials? Abstracts: abstract_id: PUBMED:25886370 Is the relationship among outcome variables shown in randomized trials? Background: Randomized controlled trials (RCTs) often have more than one primary outcome and frequently have secondary and harm outcomes. Comparison of outcomes between study arms is the primary focus of RCTs, but there are times when the relation between outcomes is important, such as determining whether an intermediate outcome and a clinical outcome have a strong association. We sought to determine how often reports of RCTs depict the relations among outcomes at the individual patient level and, for those studies that use composite outcomes, how often the relations between component elements are depicted. Methods: We selected 20 general, specialty and subspecialty medical journals with high impact factors that publish original clinical research. We identified every RCT in the 2011 and 2012 issues and randomly selected 10 articles per journal. For each article we recorded the number of outcomes, the number of composite outcomes and how often the relations between outcomes or elements of composite outcomes were portrayed. Results: All but 16 of the 200 RCTs had more than one outcome. Thus, outcomes could have been related in 92% of studies, but such relations were only reported in 2 (1%). A total of 33 (17%) investigations measured a composite outcome, 32 of which showed data for each component. None, however, showed cross-tabulation of the components. Conclusions: Readers are rarely shown the relation between outcomes. Mandatory posting of datasets or requirements for detailed appendices would allow readers to see these cross-tabulations, helping future investigators know which outcomes are redundant, which provide unique information and which are most responsive to changes in the independent variables. While not every relationship between outcomes requires depiction, at present such information is seldom portrayed. abstract_id: PUBMED:25872751 Leveraging prognostic baseline variables to gain precision in randomized trials. We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. 
We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. abstract_id: PUBMED:28038801 Execution time determines the outcome of the multicenter randomized controlled trials. Objectives: Multicenter randomized controlled trials are the core of evidence-based medicine. Our study aimed to investigate the key factor which determined the outcome of the multicenter randomized controlled trials. Methods: We searched publications in PubMed for multicenter randomized controlled trials reporting primary data on treating and preventing cardiovascular diseases circulation area. The data were extracted from the including trials and used for analysis. Results: A total of 1075 trials for treating and preventing cardiovascular diseases were included, of which 979 were involved in the heart diseases and 96 involved in stroke. The execution time significantly contributed to the outcome of trials with shorter time related to significant outcome. However, the numbers of participated centers and their locations and participants had no effect on the outcome of trials. Moreover, the number of centers showed no significant relationship with execution time. Conclusions: Execution time but not centers or participants contributed to the outcome of multicenter randomized controlled trials. abstract_id: PUBMED:11880910 Effect of continuous versus dichotomous outcome variables on study power when sample sizes of orthopaedic randomized trials are small. It is often not feasible to conduct large trials in orthopaedic surgery. Therefore, surgeons must identify strategies to optimize the statistical power of their smaller studies. The aim of this study was to compare study power in randomized trials with continuous versus dichotomous outcome variables. We performed a systematic review of the literature to identify randomized trials in orthopaedic trauma. Of these, we examined only those trials with small sample sizes (50 patients or less). The outcomes in each eligible study were categorized as continuous or dichotomous. Standard power calculations were performed for each study, and comparisons were made between continuous and dichotomous outcome variables. We identified 196 randomized trials in orthopaedic trauma. Of these, 76 trials had a sample size of 50 patients or fewer (29 trials with continuous outcomes, 47 trials with dichotomous outcomes). Studies that reported continuous outcomes had a significantly higher mean power than those that reported dichotomous variables (power 49% vs 38%, p=0.042). Twice as many trials with continuous outcome variables reached acceptable levels of study power (i.e. >80% power) when compared with trials with dichotomous variables (37% vs 18.6%, p=0.04). When orthopaedic surgeons anticipate small sample sizes for their study, they can optimize their study's statistical power by choosing a continuous outcome variable. abstract_id: PUBMED:26778385 Variation in outcome reporting in endometriosis trials: a systematic review. Objective: We reviewed the outcomes and outcome measures reported in randomized controlled trials and their relationship with methodological quality, year of publication, commercial funding, and journal impact factor. 
Data Sources: We searched the following sources: (1) Cochrane Central Register of Controlled Trials, (2) Embase, and (3) MEDLINE from inception to November 2014. Study Eligibility: We included all randomized controlled trials evaluating a surgical intervention with or without a medical adjuvant therapy for the treatment of endometriosis symptoms. Study Design: Two authors independently selected trials, assessed methodological quality (Jadad score; range, 1-5), outcome reporting quality (Management of Otitis Media with Effusion in Cleft Palate criteria; range, 1-6), year of publication, impact factor in the year of publication, and commercial funding (yes or no). Univariate and bivariate analyses were performed using Spearman Rh and Mann-Whitney U tests. We used a multivariate linear regression model to assess relationship associations between outcome reporting quality and other variables. Results: There were 54 randomized controlled trials (5427 participants), which reported 164 outcomes and 113 outcome measures. The 3 most commonly reported primary outcomes were dysmenorrhea (10 outcome measures; 23 trials), dyspareunia (11 outcome measures; 21 trials), and pregnancy (3 outcome measures; 26 trials). The median quality of outcome reporting was 3 (interquartile range 4-2) and methodological quality 3 (interquartile range 5-2). Multivariate linear regression demonstrated a relationship between outcome reporting quality with methodological quality (β = 0.325; P = .038) and year of publication (β = 0.067; P = .040). No relationship was demonstrated between outcome reporting quality with journal impact factor (Rho = 0.190; P = .212) or commercial funding (P = .370). Conclusion: Variation in outcome reporting within published endometriosis trials prohibits comparison, combination, and synthesis of data. This limits the usefulness of research to inform clinical practice, enhance patient care, and improve patient outcomes. In the absence of a core outcome set for endometriosis we recommend the use of the 3 most common pain (dysmenorrhea, dyspareunia, and pelvic pain) and subfertility (pregnancy, miscarriage, and live birth) outcomes. International consensus among stakeholders is needed to establish a core outcome set for endometriosis trials. abstract_id: PUBMED:31746483 Reporting and handling of incomplete outcome data in implant dentistry: A survey of randomized clinical trials. Aim: To assess the reporting and handling of incomplete outcome data in randomized clinical trials (RCTs) published in implant dentistry. Materials And Methods: We included RCTs on interventions related to the treatment with dental implants and presented any form of missing data. PubMed, SCOPUS and Cochrane databases were searched for studies published between May 2015 and May 2018. Reporting and handling of missing data at the study level were evaluated using a series of relevant questions. Descriptive data were reported, and univariable analyses were performed to evaluate the association of study variables with quality of reporting and data handling. Results: One-hundred and thirty-seven RCT reports were included from the 7,116 initially retrieved publications. The reporting of incomplete outcome data varied greatly among the trials and for the different questions. The range of adequately reported questions was between 3.64% (question: comparison of baseline characteristics of all randomised participants) and 100% (question: explicit reporting of missing data). 
The complete case analysis was the most used (45.3%) approach for incomplete outcome data handling. Conclusions: Randomized studies in implant dentistry have room for improvement in both the reporting and the handling of incomplete outcome data. abstract_id: PUBMED:30877204 Outcome Domains, Outcome Measures, and Characteristics of Randomized Controlled Trials Testing Nonsurgical Interventions for Osteoarthritis. Objective: Core outcome set (COS) is the minimum set of outcome domains that should be measured and reported in clinical trials. We analyzed outcome domains, prevalence of use of COS published by Outcome Measures in Rheumatology (OMERACT) initiative, outcome measures for outcome domains recommended by OMERACT COS, duration and size of randomized controlled trials (RCT) testing nonsurgical interventions for osteoarthritis (OA). Methods: We searched PubMed and analyzed RCT about nonsurgical interventions for OA published from June 2012 to June 2017. We extracted data about trial type, use of OMERACT COS, efficacy outcome domains, safety outcome domains, outcome measures used for COS assessment, duration, and sample size. Results: Among 334 analyzed trials, complete OMERACT-recommended COS was used by 14% of trials. Higher median prevalence of using OMERACT COS was found in trials explicitly described as phase III, and trials of pharmacological interventions with followup ≥ 1 year, but both with wide range of COS usage. Trialists used numerous different outcome measures for analyzing core outcome domains: 50 different outcome measures for pain, 74 for physical function, 9 for patient's global assessment, and 5 for imaging. Conclusion: Suboptimal use of recommended COS and heterogeneity of outcome measures is reducing quality and comparability of OA trials and hinders conclusions about efficacy and comparative efficacy of nonsurgical interventions. Interventions for improving study design of trials in this field would be beneficial. abstract_id: PUBMED:36584733 Distributions of baseline categorical variables were different from the expected distributions in randomized trials with integrity concerns. Background And Objectives: Comparing observed and expected distributions of baseline continuous variables in randomized controlled trials (RCTs) can be used to assess publication integrity. We explored whether baseline categorical variables could also be used. Methods: The observed and expected (binomial) distribution of all baseline categorical variables were compared in four sets of RCTs: two controls, and two with publication integrity concerns. We also compared baseline calculated and reported P-values. Results: The observed and expected distributions of baseline categorical variables were similar in the control datasets, both for frequency counts (and percentages) and for between-group differences in frequency counts. However, in both sets of RCTs with publication integrity concerns, about twice as many variables as expected had between-group differences in frequency counts of one or 2, and far fewer variables than expected had between-group differences of >4 (P < 0.001 for both datasets). Furthermore, about one in six reported P-values for baseline categorial variables differed by > 0.1 from the calculated P-value in trials with publication integrity concerns. Conclusion: Comparing the observed and expected distributions and reported and calculated P-values of baseline categorical variables may help in the assessment of publication integrity of a body of RCTs. 
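The integrity check described in the last abstract above (PUBMED:36584733) rests on the fact that, under genuine 1:1 randomization, the between-arm difference in the count of a baseline categorical characteristic follows a predictable distribution, so a systematic excess of very small differences across many variables is a warning sign. The Python sketch below simulates that expected distribution for a single variable; the arm size, trait total and simulation settings are hypothetical, and this is only an illustration of the idea, not the authors' actual method.

# Monte Carlo sketch of the expected between-arm difference in a baseline count
# under 1:1 randomization. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def simulated_abs_diff(n_per_arm: int, total_with_trait: int, n_sim: int = 100_000):
    # Allocate the patients who have the trait between two equal arms at random
    # (hypergeometric draw), then return the absolute between-arm difference.
    in_arm_a = rng.hypergeometric(ngood=total_with_trait,
                                  nbad=2 * n_per_arm - total_with_trait,
                                  nsample=n_per_arm,
                                  size=n_sim)
    in_arm_b = total_with_trait - in_arm_a
    return np.abs(in_arm_a - in_arm_b)

diffs = simulated_abs_diff(n_per_arm=50, total_with_trait=40)
values, counts = np.unique(diffs, return_counts=True)
for v, c in zip(values[:6], counts[:6]):
    print(f"|between-arm difference| = {v}: expected proportion ~ {c / diffs.size:.3f}")
# These expected proportions can then be compared with the differences observed
# across the baseline tables of the trials under scrutiny.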
abstract_id: PUBMED:30771826 Fragility of randomized clinical trials of treatment of clavicular fractures. Background: Statistical significance, as reported by the P value, has traditionally been the most commonly reported way to determine whether a difference exists between clinical interventions. Unfortunately, P values alone convey little about the robustness of a study's conclusions. An emerging metric, the fragility index (FI), helps to address this challenge by quantifying the number of events per outcome group that would need to be reversed to the alternative outcome in order to raise the P value above the 0.05 threshold. Methods: Using a systematic search strategy, we identified randomized controlled trials (RCTs) pertaining to clavicular fractures published in the last decade (2007-2017). Studies included for analysis involved 2 parallel arms, were published in English, allocated patients to treatment and control arms in a 1:1 ratio, and reported statistical significance (P < .05) for dichotomous variables. The FI was determined based on the Fisher exact test, using previously published methods. Results: Fifteen RCTs were included. The median FI was 2 (range, 0-17). Eleven studies (73.3%) had an FI of 2 or less. Seven of the trials (46.7%) reported that the number of patients lost to follow-up exceeded the FI. Conclusions: The median FI reported in the recent literature on clavicular fractures is only 2. The FI is a useful metric for analyzing the robustness of study conclusions and should complement other methods of critical data evaluation, including the P value and effect sizes. Future efforts are needed to increase institutional collaboration and patient recruitment to strengthen the robustness of RCT conclusions, especially in the realm of clavicular fracture management. abstract_id: PUBMED:28803445 A systematic review of primary outcomes and outcome measure reporting in randomized trials evaluating treatments for pre-eclampsia. Background: An evaluation of outcome reporting is required to develop a core outcome set. Objectives: To assess primary outcomes and outcome measure reporting in pre-eclampsia trials. Search Strategy: Five online databases were searched from inception to January 2016 using terms including "preeclampsia" and "randomized controlled trial". Selection Criteria: Randomized controlled trials evaluating treatments for pre-eclampsia published in any language were included. Data Collection And Analysis: Primary outcomes and data on outcome measure reporting were systematically extracted and categorized. Main Results: Overall, 79 randomized trials, including data from 31 615 women, were included. Of those, 38 (48%) reported 35 different primary outcomes; 28 were maternal outcomes and seven were fetal/neonatal outcomes. Three randomized trials reported composite outcomes, incorporating between six and nine outcome components. The method of definition or measurement was infrequently or poorly reported. Even when outcomes were consistent across trials, different methods of definition or measurement were frequently described. Conclusions: In randomized trials evaluating interventions for pre-eclampsia, critical information related to the primary outcome, including definition and measurement, is regularly omitted. Developing a core outcome set for pre-eclampsia trials would help to inform primary outcome selection and outcome measure reporting. Answer: The relationship among outcome variables in randomized trials is infrequently reported.
A study examining reports of randomized controlled trials (RCTs) found that although 92% of the 200 RCTs reviewed had more than one outcome, and those outcomes could plausibly have been interrelated, such relations were reported in only 1% of the studies. Furthermore, among the studies that used composite outcomes, none showed a cross-tabulation of the components, which would depict the relations between them (PUBMED:25886370). This lack of reporting on the relationships between outcomes makes it difficult for readers to understand which outcomes are redundant, which provide unique information, and which are most responsive to changes in the independent variables. The study suggests that mandatory posting of datasets or requirements for detailed appendices could help future investigators and readers see these important cross-tabulations (PUBMED:25886370).
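As a hedged illustration of the fragility index described in the clavicular-fracture review above (PUBMED:30771826), the Python sketch below follows one common formulation of the FI: non-events are converted to events in the arm with fewer events, one at a time, and Fisher's exact test is recomputed until the P value is no longer below .05; the number of switches made is the FI. This is not the authors' code, and the example counts are hypothetical.

from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    # assumes the initial 2x2 table is statistically significant at `alpha`
    a, b = events_a, events_b
    fi = 0
    while fisher_exact([[a, n_a - a], [b, n_b - b]])[1] < alpha:
        if a <= b:          # add an event to the arm with fewer events
            a += 1
        else:
            b += 1
        fi += 1
    return fi

# hypothetical two-arm trial: 20/50 events versus 6/50 events
print(fragility_index(20, 50, 6, 50))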
Instruction: Does pharmacologic treatment in patients with established coronary artery disease and diabetes fulfil guideline recommended targets? Abstracts: abstract_id: PUBMED:24691153 Does pharmacologic treatment in patients with established coronary artery disease and diabetes fulfil guideline recommended targets? A report from the EUROASPIRE III cross-sectional study. Purpose: The aim was to investigate the use of cardioprotective drug therapies (aspirin or other antiplatelet agents, β-blockade, renin-angiotensin-aldosterone-system-blockade (RAAS-blockade) and statins) and treatment targets achieved in a large cohort of patients with established coronary artery disease and diabetes across Europe. Methods And Results: EUROASPIRE III is an observational cross-sectional study of stable coronary artery disease patients aged 18-80 years from 76 centres in 22 European countries conducted in 2006-2007. The glycaemic status (prevalent, incident or no diabetes), the guideline treatment targets achieved and the use of pharmacotherapies were assessed at one visit 6-36 months after the index event. Of all 6588 patients investigated (women 25%), 4295 (65%) had no diabetes, 752 (11%) had incident diabetes and 1541 (23%) had prevalent diabetes. All four drugs were used in 44% of the patients with no diabetes, 51% with incident diabetes and 50% with prevalent diabetes respectively. Individual prescriptions for patients with no, incident and prevalent diabetes were respectively: aspirin or other antiplatelet agents 91, 93, and 91%; β-blockers: 81, 84, and 79%; RAAS-blockers: 77, 76, and 68%; statins: 80, 80, and 79%. The proportion of patients with coronary artery disease and prevalent diabetes reaching the treatment targets were 20% for blood pressure, 53% for low density lipoprotein cholesterol (LDL-cholesterol) and 22% for haemoglobin A1c (HbA1c). Conclusion: This European study demonstrates a low use of cardioprotective drug therapies among patients with a combination of coronary artery disease and diabetes, which will be contributing to the poor achievement of risk factor treatment targets for cardiovascular prevention. abstract_id: PUBMED:28905359 Guideline-Recommended Medications and Physical Function in Older Adults with Multiple Chronic Conditions. Background/objectives: The benefit or harm of a single medication recommended for one specific condition can be difficult to determine in individuals with multiple chronic conditions and polypharmacy. There is limited information on the associations between guideline-recommended medications and physical function in older adults with multiple chronic conditions. The objective of this study was to estimate the beneficial or harmful associations between guideline-recommended medications and decline in physical function in older adults with multiple chronic conditions. Design: Prospective observational cohort. Setting: National. Participants: Community-dwelling adults aged 65 and older from the Medicare Current Beneficiary Survey study (N = 3,273). Participants with atrial fibrillation, coronary artery disease, depression, diabetes mellitus, or heart failure were included. Measurements: Self-reported decline in physical function; guideline-recommended medications; polypharmacy (taking <7 vs ≥7 concomitant medications); chronic conditions; and sociodemographic, behavioral, and health risk factors. 
Results: The risk of decline in function in the overall sample was highest in participants with heart failure (35.4%, 95% confidence interval (CI) = 26.3-44.5) and lowest for those with atrial fibrillation (20.6%, 95% CI = 14.9-26.2). In the overall sample, none of the six guideline-recommended medications was associated with decline in physical function across the five study conditions, although in the group with low polypharmacy exposure, there was lower risk of decline in those with heart failure taking renin angiotensin system blockers (hazard ratio (HR) = 0.40, 95% CI = 0.16-0.99) and greater risk of decline in physical function for participants with diabetes mellitus taking statins (HR = 2.27, 95% CI = 1.39-3.69). Conclusions: In older adults with multiple chronic conditions, guideline-recommended medications for atrial fibrillation, coronary artery disease, depression, diabetes mellitus, and heart failure were largely not associated with self-reported decline in physical function, although there were associations for some medications in those with less polypharmacy. abstract_id: PUBMED:24275017 Guideline-adherent therapy in patients with cardiovascular diseases in Taiwan. Background/purpose: Aggressive and persistent control of risk factors is recommended for prevention of secondary comorbidities in patients with cardiovascular diseases. This study aimed to evaluate guideline recommendations for achieving targets for lipid and blood pressure (BP) control in patients with cardiovascular diseases in Taiwan. Methods: This multicenter cohort study was conducted in 14 hospitals in Taiwan. A total of 3316 outpatients who had established cerebrovascular disease (CVD), coronary artery disease (CAD), or both were recruited. Risk factors for comorbid conditions such as high BP, sugar, hemoglobin A1C, abnormal lipids, lipoproteins, and medication use were compared between patients with CVD, CAD, or both. Results: Of all patients, 503 (15.2%) had CVD only, 2568 (77.4%) had CAD only, and 245 (7.4%) had both CVD and CAD. Compared with patients who had only CAD, those with CVD were older, had higher frequency of hypertension, and lower frequency of diabetes mellitus. Patients with CAD were more likely to receive lipid-lowering and antihypertensive drugs than those with CVD (p < 0.001). Only 54.8% and 55.9% of patients achieved the recommended lipid and BP control targets, respectively. Patients with CVD (adjusted odds ratio: 0.61; 95% confidence interval: 0.48-0.78; p < 0.001) and women (adjusted odds ratio: 0.65; 95% confidence interval: 0.55-0.78; p < 0.001) were less likely to achieve the recommended lipid and BP targets. Conclusion: The guideline-recommended targets for lipids and BP in patients with CAD and CVD were still suboptimal in Taiwan. Greater efforts are required to achieve the targets, particularly in patients with CVD and in women. abstract_id: PUBMED:19213947 Applying the evidence: do patients with stroke, coronary artery disease, or both achieve similar treatment goals? Background And Purpose: The importance of early and aggressive initiation of secondary prevention strategies for patients with both coronary artery disease (CAD) and cerebrovascular disease (CVD) is emphasized by multiple guidelines. However, limited information is available on cardiovascular protection and stroke prevention in an outpatient setting from community-based populations. 
We sought to evaluate and compare differences in treatment patterns and the attainment of current guideline-recommended targets in unselected high-risk ambulatory patients with CAD, CVD, or both. Methods: This multicenter, prospective, cohort study was conducted from December 2001 to December 2004 among ambulatory patients in a primary care setting. The prospective Vascular Protection and Guidelines-Oriented Approach to Lipid-Lowering Registries recruited 4933 outpatients with established CAD, CVD, or both. All patients had a complete fasting lipid profile measured within 6 months before enrollment. The primary outcome measure was the achievement of blood pressure (BP) <140/90 mm Hg (or <130/80 mm Hg for patients with diabetes) and LDL cholesterol <2.5 mmol/L (<97 mg/dL) according to the Canadian guidelines in place at that time (similar to the National Cholesterol Education Program's value of 100 mg/dL). Secondary outcomes include use of antithrombotic, antihypertensive, and lipid-modifying therapies. Results: Of the 4933 patients, 3817 (77%) had CAD only; 647 (13%) had CVD only; and 469 (10%) had both CAD and CVD. Mean+/-SD age was 67+/-10 years, and 3466 (71%) were male. Mean systolic and diastolic BPs were 130+/-16 and 75+/-9 mm Hg, respectively. Minor but significant differences were observed on baseline BP, total cholesterol, and LDL cholesterol measurements among the 3 groups. Overall, 83% of patients were taking a statin and 93% were receiving antithrombotic therapy (antiplatelet and/or anticoagulant agents). Compared with patients with CAD, those with CVD only were less likely to achieve the recommended BP (45.3% vs 57.3%, respectively; P<0.001) and lipid (19.4% vs 30.5%, respectively; P<0.001) targets. Among patients with CVD only, women were less likely to achieve the recommended BP and lipid targets compared with their male counterparts (for LDL cholesterol <2.5 mmol/L, 18.7% vs 23.8%, respectively; P=0.048). In multivariable analysis, patients with CVD alone were less likely to achieve treatment success (BP or lipid targets) after adjusting for age, sex, diabetes, and use of pharmacologic therapy. Conclusions: Despite the proven benefits of available antihypertensive and lipid-lowering therapies, current management of hypertension and dyslipidemia continues to be suboptimal. A considerable proportion of patients failed to achieve guideline-recommended targets, and this apparent treatment gap was more pronounced among patients with CVD and women. Quality improvement strategies should target these patient subgroups. abstract_id: PUBMED:23461429 Guideline-recommended medications: variation across Medicare Advantage plans and associated mortality. Objectives: To evaluate variation in the prescription of guideline-recommended medications across Medicare Advantage (MA) plans and to determine whether such variation is associated with increased mortality. Methods: Observational study of 111,667 patients aged 65 years or older receiving care in 203 MA plans. We linked data from the Medicare Health Outcomes (HOS) Survey cohort 9 (April 2006-May 2008) with the Medicare Part D prescription benefit files (January 1, 2006-December 31, 2007) to examine variation in treatment across MA plans and its association with differences in observed (O)/expected (E) mortality ratio for 5 high-volume chronic conditions: diabetes, coronary artery disease (CAD), congestive heart failure (CHF), chronic obstructive pulmonary disease (COPD)/asthma, and depression. 
Results: Analysis of variance confirmed that the 203 MA plans differed significantly in their use of guideline-recommended treatment (P≤0.02). Those MA plans with higher use of angiotensin-converting enzyme inhibitors/angiotensin II receptor blockers (r=-0.40; P<0.0001) and beta-blockers (r=-0.27; P<0.0001) in patients with CHF were significantly associated with lower O/E mortality ratios. Those MA plans with higher use of multiple guideline-recommended medications were significantly associated with lower O/E mortality ratios in CHF (r=-0.45; P<0.0001) and diabetes (r=-0.14; P<0.042). There were no significant associations between the variation in performance indicators and mortality ratios in patients with CAD and COPD/asthma. Those MA plans with higher use of antidepressant medications had significantly higher O/E mortality ratios (r=0.28, P<0.0001). Conclusions: There was wide variation across MA plans in the prescription of guideline-recommended medications that had a measurable relationship to the mortality of elderly patients with CHF and diabetes. These findings can serve to both motivate and target quality improvement programs. abstract_id: PUBMED:24041994 Management of risk factors among ambulatory patients at high cardiovascular risk in Canada: a follow-up study. Background: Limited longitudinal data are available on attainment of guideline-recommended treatment targets among ambulatory patients at high risk for cardiovascular events. Methods: The Vascular Protection registry and the Guidelines Oriented Approach to Lipid Lowering registry recruited 8056 ambulatory patients at high risk for, or with established cardiovascular disease; follow-up was not protocol-mandated. We stratified the study population according to the availability of 6-month follow-up data into 2 groups, and compared their clinical characteristics, medication profile, and attainment of contemporaneous guideline-recommended blood pressure (BP) and lipid targets both at enrollment and at 6-month follow-up. Results: Of the 8056 patients, only 5371 (66.7%) patients had 6-month follow-up, who had significant increases in the use of statins and antihypertensive medications at 6 months compared with at enrollment (all P < 0.001). Compared with at time of enrollment, more patients attained the BP target (45.3% vs 42.3%), low-density lipoprotein cholesterol (LDL-C) target (62.8% vs 45.8%), and both targets (29.7% vs 21.6%) at 6-month follow-up (all P < 0.001). In multivariable analysis, independent predictors of attainment of BP target included history of coronary artery disease and heart failure (all P ≤ 0.001). On the other hand, advanced age, diabetes, coronary artery disease, previous coronary revascularization, and use of statin therapy were independently associated with achievement of LDL-C target (all P < 0.005). Conclusions: Most (> 50%) patients without 6-month follow-up did not attain guideline-recommended BP and LDL-C targets at enrollment. Although BP and lipid control improved at 6 months among patients with follow-up, most still failed to achieve optimal BP and lipid targets. Effective ongoing quality improvement measures and follow-up are warranted. abstract_id: PUBMED:31123935 Attainment of Guideline-Directed Medical Treatment in Stable Ischemic Heart Disease Patients With and Without Chronic Kidney Disease. 
Background: Stable ischemic heart disease (SIHD) is prevalent in patients with chronic kidney disease (CKD); however, whether guideline-directed medical therapy (GDMT) is adequately implemented in patients with SIHD and CKD is unknown. Hypothesis: Use of GDMT and achievement of treatment targets would be higher in SIHD patients without CKD than in patients with CKD. Methods: This was a retrospective study of 563 consecutive patients with SIHD (mean age 67.8 years, 84% Caucasians, 40% females). CKD was defined as an estimated glomerular filtration rate (eGFR) of < 60 mL/min/1.73m2 using the four-variable MDRD Study equation. We examined the likelihood of achieving GDMT targets (prescription of high-intensity statins, antiplatelet agents, renin-angiotensin-aldosterone system inhibitors (RAASi), and low-density lipoprotein cholesterol levels < 70 mg/dL, blood pressure < 140/90 mmHg, and hemoglobin A1C < 7% if diabetes) in patients with (n = 166) and without CKD (n = 397). Results: Compared with the non-CKD group, CKD patients were significantly older (72 vs 66 years; p < 0.001), more commonly female (49 vs 36%; p = 0.002), had a higher prevalence of diabetes (46 vs 34%; p = 0.004), and left ventricular systolic ejection fraction (LVEF) < 40% (23 vs. 10%, p < 0.001). All GDMT goals were achieved in 26% and 24% of patients with and without CKD, respectively (p = 0.712). There were no between-group differences in achieving individual GDMT goals with the exception of RAASi (CKD vs non-CKD: adjusted risk ratio 0.73, 95% CI 0.62-0.87; p < 0.001). Conclusions: Attainment of GDMT goals in SIHD patients with CKD was similar to patients without CKD, with the exception of lower rates of RAASi use in the CKD group. abstract_id: PUBMED:16887414 Contemporary management of dyslipidemia in high-risk patients: targets still not met. Purpose: Our objective was to evaluate treatment patterns and the attainment of current National Cholesterol Education Program (NCEP)-recommended lipid targets in unselected high-risk ambulatory patients. Methods: Between December 2001 and December 2004, the prospective Vascular Protection and Guidelines Oriented Approach to Lipid Lowering Registries recruited 8056 outpatients with diabetes, established cardiovascular disease (CVD), or both, who had a complete lipid profile measured within 6 months before enrollment. The primary outcome measure was treatment success, defined as the achievement of LDL-cholesterol<2.6 mmol/L (100 mg/dL) according to NCEP guidelines. We examined patient characteristics and use of lipid-modifying therapy in relation to treatment outcome, which included the recently proposed optional LDL-cholesterol target (<1.8 mmol/L [70 mg/dL]) for very high-risk patients. Results: Overall, 78.2% of patients were treated with a statin and 51.2% had achieved the recommended LDL-cholesterol target. Treatment success rate was highest in diabetic patients with CVD (59.6%), followed by nondiabetic patients with CVD (51.8%), and lowest (44.8%) in diabetic patients without CVD (P<.0001). Compared with untreated patients, those on statins were more likely to achieve target (34.4% vs 55.9%, P<.0001). Of the patients who failed to meet target, only 9.9% were taking high-dose statin, while 29.3% were not prescribed any statin therapy. Among very high-risk patients, 20.8% attained the optional LDL-cholesterol goal. 
In multivariable analysis, advanced age, male sex, diabetes, coronary artery disease, coronary revascularization, and use of statin were associated with treatment success (all P<.0001). Conclusion: Despite the well-established benefits of available lipid-modifying drugs, current management of dyslipidemia continues to be suboptimal, with a substantial proportion of patients failing to achieve guideline-recommended lipid targets. There remains an important opportunity to improve the quality of care for these high-risk patients. abstract_id: PUBMED:28948527 Variations in the referral patterns to pharmacologic and exercise myocardial perfusion imaging. Background: Myocardial perfusion imaging (MPI) is commonly utilized for the non-invasive evaluation of patients with suspected coronary artery disease (CAD). It is either performed with exercise or pharmacologic stress. The objective of this study is to compare the referral patterns and diagnostic findings in patients referred for pharmacologic vs exercise MPI. Methods And Results: This was a prospective study of 429 consecutive patients who were referred for MPI at the American University of Beirut Medical Center (23% had pharmacologic stress with dipyridamole and 77% had exercise stress testing). Patients referred to pharmacologic stress were older, had a higher percentage of women, and a higher prevalence of diabetes and hypertension. There were more abnormal scans in the pharmacologic stress group (38% vs 20%, P < 0.001), as well as a higher prevalence of ischemia (21% vs 13%, P < 0.001) and impaired left ventricular function with an ejection fraction < 50% (19% vs 7.9%, P < 0.001). The significant predictors for referral to pharmacologic stress by multivariable logistic regression analysis were older age (OR = 2.01 (1.57-2.57), P < 0.001) and diabetes (OR = 2.04 (1.19-3.49), P = 0.009). Conclusion: Patients referred for pharmacologic stress MPI are at a higher risk than those referred for exercise stress MPI with more CAD risk factors, older age, and a higher prevalence of abnormal MPI findings. abstract_id: PUBMED:37293139 Guideline-directed medical therapies for comorbidities among patients with atrial fibrillation: results from GARFIELD-AF. Aims: This study aimed to identify relationships in recently diagnosed atrial fibrillation (AF) patients with respect to anticoagulation status, use of guideline-directed medical therapy (GDMT) for comorbid cardiovascular conditions (co-GDMT), and clinical outcomes. The Global Anticoagulant Registry in the FIELD (GARFIELD)-AF is a prospective, international registry of patients with recently diagnosed non-valvular AF at risk of stroke (NCT01090362). Methods And Results: Guideline-directed medical therapy was defined according to the European Society of Cardiology guidelines. This study explored co-GDMT use in patients enrolled in GARFIELD-AF (March 2013-August 2016) with CHA2DS2-VASc ≥ 2 (excluding sex) and ≥1 of five comorbidities-coronary artery disease, diabetes mellitus, heart failure, hypertension, and peripheral vascular disease (n = 23 165). Association between co-GDMT and outcome events was evaluated with Cox proportional hazards models, with stratification by all possible combinations of the five comorbidities. Most patients (73.8%) received oral anticoagulants (OACs) as recommended; 15.0% received no recommended co-GDMT, 40.4% received some, and 44.5% received all co-GDMT. 
At 2 years, comprehensive co-GDMT was associated with a lower risk of all-cause mortality [hazard ratio (HR) 0.89 (0.81-0.99)] and non-cardiovascular mortality [HR 0.85 (0.73-0.99)] compared with inadequate/no GDMT, but cardiovascular mortality was not significantly reduced. Treatment with OACs was beneficial for all-cause mortality and non-cardiovascular mortality, irrespective of co-GDMT use; only in patients receiving all co-GDMT was OAC associated with a lower risk of non-haemorrhagic stroke/systemic embolism. Conclusion: In this large prospective, international registry on AF, comprehensive co-GDMT was associated with a lower risk of mortality in patients with AF and CHA2DS2-VASc ≥ 2 (excluding sex); OAC therapy was associated with reduced all-cause mortality and non-cardiovascular mortality, irrespective of co-GDMT use. Clinical Trial Registration: Clinical Trial Registration-URL: http://www.clinicaltrials.gov. Unique identifier: NCT01090362. Answer: The pharmacologic treatment in patients with established coronary artery disease (CAD) and diabetes does not appear to fully meet guideline-recommended targets. The EUROASPIRE III study, which is a cross-sectional study conducted across Europe, found that the use of cardioprotective drug therapies was low among patients with a combination of CAD and diabetes. Only 44% to 51% of these patients were on all four recommended drug therapies (aspirin or other antiplatelet agents, β-blockers, renin-angiotensin-aldosterone-system-blockers, and statins). Furthermore, the proportion of patients with CAD and prevalent diabetes reaching the treatment targets was only 20% for blood pressure, 53% for low-density lipoprotein cholesterol (LDL-cholesterol), and 22% for hemoglobin A1c (HbA1c) (PUBMED:24691153). In Taiwan, a multicenter cohort study revealed that only 54.8% and 55.9% of patients with cardiovascular diseases achieved the recommended lipid and blood pressure control targets, respectively. This indicates that guideline-recommended targets for lipids and blood pressure in patients with CAD and cerebrovascular disease (CVD) were suboptimal (PUBMED:24275017). Similarly, a study of ambulatory patients in Canada found that a considerable proportion of patients with CAD, CVD, or both failed to achieve guideline-recommended targets for blood pressure and LDL cholesterol. This treatment gap was more pronounced among patients with CVD and women (PUBMED:19213947). In the context of stable ischemic heart disease (SIHD) patients with chronic kidney disease (CKD), the attainment of guideline-directed medical therapy (GDMT) goals was similar to patients without CKD, except for lower rates of renin-angiotensin-aldosterone system inhibitors use in the CKD group (PUBMED:31123935). Overall, these studies suggest that there is a gap between the pharmacologic treatment being provided to patients with established CAD and diabetes and the achievement of guideline-recommended targets, indicating a need for improved management strategies to meet these targets.
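For readers unfamiliar with the survival models cited in these abstracts (for example, the stratified Cox proportional hazards analysis described for GARFIELD-AF, PUBMED:37293139), the sketch below shows the general shape of such an analysis in Python with the lifelines package. The column names and the few rows of data are invented for illustration; this is not the registry's analysis code.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_days":    [700, 365, 540, 200, 730, 300, 680, 150, 720, 400, 600, 250],
    "died":         [0,   1,   1,   1,   0,   1,   0,   1,   0,   1,   1,   1],
    "all_co_gdmt":  [1,   0,   1,   0,   1,   0,   1,   0,   0,   1,   1,   0],  # all recommended co-GDMT received
    "oac":          [1,   1,   0,   0,   1,   1,   1,   0,   1,   0,   1,   1],  # oral anticoagulant use
    "comorbidity_pattern": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],                 # stratification variable
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="died", strata=["comorbidity_pattern"])
cph.print_summary()   # hazard ratios and 95% confidence intervals for co-GDMT and OAC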
Instruction: Do gastrotomies require repair after endoscopic transgastric peritoneoscopy? Abstracts: abstract_id: PUBMED:20438886 Do gastrotomies require repair after endoscopic transgastric peritoneoscopy? A controlled study. Background: The optimal method for closing gastrotomies after transgastric instrumentation has yet to be determined. Objective: To compare gastrotomy closure with endoscopically delivered bioabsorbable plugs with no closure. Design: Prospective, controlled study. Setting: Animal laboratory. Subjects: Twenty-three dogs undergoing endoscopic transgastric peritoneoscopy between July and August 2007. Interventions: Endoscopic anterior wall gastrotomies were performed with balloon dilation to allow passage of the endoscope into the peritoneal cavity. The plug group (n = 12) underwent endoscopic placement of a 4 x 6-cm bioabsorbable mesh plug in the perforation, whereas the no-treatment group (n = 11) did not. Animals underwent necropsy 2 weeks after the procedure. Main Outcome Measurements: Complications related to gastrotomy closure, gastric burst pressures, relationship of burst perforation to gastrotomy, and the degree of adhesions and inflammation at the gastrotomy site. Results: After the gastrotomy, all dogs survived without any complications. At necropsy, burst pressures were 77 +/- 11 mm Hg and 76 +/- 15 mm Hg (P = .9) in the plug group and no-treatment group, respectively. Perforations occurred at the site of the gastrotomy in 2 of 12 animals in the plug group and in none of the 11 dogs in the no-treatment group (P = .5). Finally, there were minimal adhesions in all dogs (11/11) in the no-treatment group and minimal adhesions in 3 and moderate adhesions or inflammatory masses in 9 of the 12 animals in the plug group (P = .004). Limitations: Small number of subjects, animal model, no randomization. Gastrotomy trauma during short peritoneoscopy may not be applicable to longer procedures. Conclusions: After endoscopic gastrotomy, animals that were left untreated did not show any clinical ill effects and demonstrated adequate healing, with fewer adhesions and less inflammation compared with those treated with a bioabsorbable plug. abstract_id: PUBMED:18855060 Transgastric endoscopic peritoneoscopy does not require decontamination of the stomach in humans. Introduction: Natural orifice translumenal endoscopic surgery (NOTES) is a rapidly evolving field that provides endoscopic access to the peritoneum via a natural orifice. One important requirement of this technique is the need to minimize the risk of clinically significant peritoneal contamination. We report the bacterial load and contamination of the peritoneal cavity in ten patients who underwent diagnostic transgastric endoscopic peritoneoscopy. Methods: Patients participating in this trial were scheduled to undergo diagnostic laparoscopy for evaluation of presumed pancreatic cancer. Findings at diagnostic laparoscopy were compared with those of diagnostic transgastric endoscopic peritoneoscopy, using an orally placed gastroscope, blinding the endoscopist to the laparoscopic findings. We performed no gastric decontamination. Diagnostic findings, operative times, and clinical course were recorded. Gastroscope and peritoneal fluid aspirates were obtained prior to and after the gastrotomy. Each sample was sent for bacterial colony counts, culture, and identification of species. Results: Ten patients, with an average age of 63.7 years, have completed the protocol. 
All patients underwent diagnostic laparoscopy followed by successful transgastric access and diagnostic peritoneoscopy. The average time for laparoscopy was 7.2 min, compared with 18 min for transgastric instrumentation. Bacterial sampling was obtained in all ten patients. The average number of colony-forming units (CFU) in the gastroscope aspirate was 132.1 CFU/ml, peritoneal aspirates prior to creation of a gastrotomy showed 160.4 CFU/ml, and peritoneal sampling after gastrotomy had an average of 642.1 CFU/ml. There was no contamination of the peritoneal cavity with species isolated from the gastroscope aspirate. No infectious complications or leaks were noted at 30-day follow-up. Conclusions: There was no clinically significant contamination of the peritoneal cavity from the gastroscope after transgastric endoscopic instrumentation in humans. Transgastric instrumentation does contaminate the abdominal cavity, but the pathogens do not mount a clinically significant response in terms of either the species or the bacterial load. abstract_id: PUBMED:23368513 Direct incision versus submucosal tunneling as a method of creating transgastric accesses for natural orifice transluminal endoscopic surgery (NOTES) peritoneoscopy: randomized controlled trial. Aim: The optimal approach for creating accesses for transgastric peritoneoscopy is still uncertain. The present study aims to assess the feasibility of carrying out transgastric submucosal tunnel (SMT) peritoneoscopy and to determine whether this approach improves or restricts access to various sectors within the peritoneal cavity. Methods: This was a randomized comparative study carried out in an in-vivo survival porcine model. Sixty-six beads in six swine were visualized and touched via gastrotomies created by either direct incision (DI) or SMT. The influence of the type of gastrotomy on improving or restricting access to particular sites within the peritoneal cavity for natural orifice transluminal endoscopic surgery (NOTES) peritoneoscopy was compared. The main outcome measurements were localization score of beads, overall procedural time, morbidities and mortalities. Results: A significantly higher mean (SD) localization score was observed in peritoneoscopies carried out in the DI group (P < 0.001). Both the visualization and the touching scores were significantly better with the DI technique, and the overall yields of NOTES peritoneoscopy with DI and SMT were 72.73% and 60.6%, respectively (P = 0.043). Significantly more beads that were not touched in the SMT group were located in the sub-phrenic area (P = 0.013). The overall procedural time was significantly shorter in the DI group (P = 0.004). No major morbidities or mortalities occurred in any procedures. Conclusions: SMT resulted in lower visualization and touching scores for transgastric NOTES peritoneoscopy. Alternate methods to improve the diagnostic yield in the sub-phrenic area are required. abstract_id: PUBMED:21251061 Initial experience from the transgastric endoscopic peritoneoscopy and biopsy: a stepwise approach from the laboratory to clinical application. Background And Aim: Natural-orifice translumenal endoscopic surgery (NOTES) is a new minimally invasive technique that gives access to the abdominal cavity via transgastric, transcolonic, transvaginal or transvesical routes. The aim of the present study was to evaluate the safety and feasibility of transgastric endoscopic peritoneoscopy and biopsy from laboratory to clinical application.
Methods: With the animals under general anesthesia, a sterile esophageal overtube was placed and a gastric antibiotic lavage was performed. Subsequently, a needle-knife and through-the-scope dilating balloon were used to make an anterior gastric wall incision through which a therapeutic gastroscope was advanced into the peritoneal cavity. After 2 weeks, another transgastric endoscopic exploration was performed in a different location of the stomach. The peritoneal cavity was examined before the gastric incision was closed. After 4 weeks of observation, necropsy was performed. In the clinical application, after gastric lavage, the first step was the creation of the gastrotomy under general anesthesia, sometimes under direct vision of the laparoscope. The endoscope could then be maneuvered within the peritoneal cavity, and peritoneoscopy and biopsy were performed. Biopsies could be obtained from any suspicious areas using punch biopsy forceps. The gastrotomy was then closed with clips. Follow-up gastroscopy was performed after one week. Results: Twenty-eight transgastric endoscopic peritoneoscopies and biopsies in pigs and a total of five transgastric human endoscopic peritoneoscopies and biopsies have been performed. All procedures were completed satisfactorily in the pig model and in all patients. There were no intraoperative or postoperative complications. Conclusions: The advantages of peritoneoscopy and biopsy appeared to be enhanced by this approach. Patients had minor postoperative pain and minimal scarring. It is safe and feasible to use transgastric endoscopic peritoneoscopy and biopsy in humans. abstract_id: PUBMED:22858574 Specialized instrumentation facilitates stable peritoneal access, gastric decompression, and visualization during transgastric endoscopic peritoneoscopy. Purpose: The lack of high-fidelity instrumentation has impeded the development and implementation of natural orifice transluminal endoscopic surgery (NOTES). A steerable flexible trocar (SFT), a rotary access needle (RAN), and an articulating needle knife were developed as components of a flexible instrument set to facilitate transgastric peritoneal access and transluminal abdominal procedures. This cohort study aimed to assess the safety, feasibility, and efficacy of these devices during transgastric peritoneoscopy. Methods: Ten morbidly obese patients undergoing laparoscopic Roux-en-Y gastric bypass participated in the study. Following laparoscopic access, transgastric peritoneal access was established using the SFT and RAN, and transgastric peritoneoscopy was performed. NOTES adhesiolysis was performed in 2 patients with significant intra-abdominal adhesions due to prior surgery. Outcome measures included time to enter the peritoneal cavity, ability to visualize each quadrant of the abdomen, ability to perform adhesiolysis, and complications. Results: Ten patients with a median body mass index of 47.5 kg/m² were enrolled. Successful transgastric access was achieved in 8 of the 10 patients. One procedure was aborted because of difficulty creating the gastrotomy. Another procedure was aborted because of difficult passage of the device through the oropharynx. An upper esophageal laceration occurred in one patient. Transgastric peritoneal access required 17.4 ± 5.5 minutes, and peritoneoscopy averaged 24.7 ± 7.6 minutes. The 4 abdominal quadrants were visualized and were accessible with the endoscope in all patients.
Conclusions: The SFT and RAN facilitate transgastric peritoneal access and visualization of difficult-to-reach areas of the peritoneum. These devices provide advanced instrumentation for transgastric NOTES procedures; however, care must be taken during the transoral insertion to avoid complications. abstract_id: PUBMED:17701250 Natural-orifice transgastric endoscopic peritoneoscopy in humans: Initial clinical trial. Background: Natural-orifice translumenal endoscopic surgery (NOTES) is a possible advancement for surgical interventions. We initiated a pilot study in humans to investigate feasibility and develop the techniques and technology necessary for NOTES. Reported herein is the first human clinical trial of NOTES, performing transoral transgastric diagnostic peritoneoscopy. Methods: Patients were scheduled to undergo diagnostic laparoscopic evaluation of a pancreatic mass. The findings of traditional laparoscopy were recorded by anatomical abdominal quadrant. A second surgeon, blinded to the laparoscopic findings, performed transgastric peritoneoscopy. Diagnostic findings between the two methods were compared and operative times and clinical course were recorded. Definitive care was based on findings at diagnostic laparoscopy. Results: Ten patients completed the protocol with an average age of 67.6 years. All patients underwent diagnostic laparoscopy followed by successful transgastric access and diagnostic endoscopic peritoneoscopy. The average time of diagnostic laparoscopy was 12.3 minutes compared to 24.8 minutes for the transgastric route. Transgastric abdominal exploration corroborated the decision to proceed to open exploration made during traditional laparoscopic exploration in 9 of 10 patients. Peritoneal or liver biopsies were obtained in four patients by traditional laparoscopy and in one patient by the transgastric access route. Findings were confirmed by laparotomy in nine patients. Eight patients underwent pancreaticoduodenectomy and two underwent palliative gastrojejunostomy and/or hepaticojejunostomy. Conclusions: Transgastric diagnostic peritoneoscopy is safe and feasible. This study demonstrates the initial steps of NOTES in humans, providing a potential platform for incisionless surgery. Technical issues, including instrumentation, visualization, intra-abdominal manipulation, and gastric closure need further development. abstract_id: PUBMED:19122940 Diagnostic transgastric flexible peritoneoscopy: is pure natural orifice transluminal endoscopic surgery a fantasy? We present the first transgastric peritoneoscopy in a 20-year-old man. The objectives were to evaluate the impact of the site of viscerotomy on the technical feasibility of natural orifice transluminal endoscopic surgery (NOTES), assess transgastric peritoneoscopy as a complementary procedure, determine the safety and efficacy of NOTES, and attempt inspection/biopsy of the gallbladder. The patient was admitted with a benign gastric outlet obstruction, chronic cholecystitis and radiological suspicion of a mass in the gallbladder which was not visualised on diagnostic laparoscopy. Complementary transgastric peritoneoscopy was performed to gain deeper penetration of the tumour with the flexible tip of the gastroscope. The visceral "aperture" was created in the antrum where gastrojejunal anastomosis would be fashioned. Laparoscopic transillumination of the anterior gastric wall facilitated this part of the procedure. 
During transgastric peritoneoscopy, the gallbladder and structures in the upper and left hemi-abdomen appeared retrograde due to the unusual location of the gastrotomy. The right hemi-abdomen and pelvis were easily examined with a "straight shaft" approach. The gallbladder could not be identified with exploratory laparoscopy and transgastric peritoneoscopy. Due to the risk of visceral injury, open gastrojejunal anastomosis and cholecystectomy were performed. Intraoperatively, an inflamed, thick-walled gallbladder was found adherent to the proximal duodenum. Transgastric peritoneoscopy was safely performed in our patient. The postoperative course was uneventful. Our patient showed significant improvement at 13 weeks after surgery without any procedure-related complication. In conclusion, transgastric peritoneoscopy may be used to complement diagnostic laparoscopy. Laparoscopic assistance during transluminal access allows simple tasks inside the peritoneal cavity to be performed safely. abstract_id: PUBMED:26275155 Gastrotomy Healing After Transgastric Peritoneoscopy: A Randomized Study in a Pig Model. Introduction: Reliable closure and infection prevention are the main barriers to implementation of pure transgastric peritoneoscopy. The primary aim of this study was to assess healing of over-the-scope clip (OTSC)-closed gastrotomies. Materials And Methods: Pure transgastric peritoneoscopy was performed in 7 pigs. The pigs were randomized to 14 or 28 postoperative days (POD) of follow-up. Decontamination of the access route was performed before instrumentation. A full necropsy was performed. Closure was evaluated with histopathological examination of excised gastrorrhaphies. Results: Three pigs were allowed 14 POD of follow-up, and 4 pigs were allowed 28 POD of follow-up. Survival was achieved in 6 of the 7 animals; 1 pig was euthanized due to diffuse peritonitis. Based on our definition, full-thickness healing had only been achieved in a single pig allowed 28 POD. With respect to clinical relevancy, full-thickness healing was deemed achieved in 4 of 6 pigs completing follow-up and in all pigs allowed and surviving 28 POD. Access required repeated punctures and the use of several endoscopic instruments. Conclusions: Full-thickness healing of the gastrotomy was found in only a single case when adhering to the per-protocol definition. Endoscopic ultrasonography-guided access was difficult; it lacks reproducibility and needs refinement. Despite a combined decontamination regimen, infectious complications still occurred. abstract_id: PUBMED:20054581 Diagnostic transgastric endoscopic peritoneoscopy: extension of the initial human trial for staging of pancreatic head masses. Background: The validity of natural orifice transluminal endoscopic surgery (NOTES) was confirmed in a human trial of 10 patients undergoing diagnostic transgastric endoscopic peritoneoscopy (DTEP) for staging of pancreatic head masses. This report is an update with 10 additional patients in the series and includes bacterial contamination data. Methods: The patients in this human trial were scheduled to undergo diagnostic laparoscopy for abdominal staging of a pancreatic head mass. A second surgeon, blinded to the laparoscopic findings, performed a transgastric endoscopic peritoneoscopy (TEP). The findings of laparoscopic exploration were compared with those of the TEP. Diagnostic findings, operative times, and clinical course were recorded.
Bacterial contamination data were collected for the second cohort of 10 patients. Bacterial samples were collected from the scope before use and the abdominal cavity before and after creation of the gastrotomy. Samples were assessed for bacterial counts and species identification. Definitive care was rendered based on the findings from laparoscopy. Results: In this study, 20 patients underwent diagnostic laparoscopy followed by DTEP. The average time for completion of diagnostic laparoscopy was 10 min compared with 21 min for TEP. The experience acquired during the initial 10 procedures translated to a 7-min decrease in TEP time for the second 10 cases. For 19 of the 20 patients, DTEP corroborated laparoscopic findings for surgical decision making. One endoscopic and five laparoscopic biopsies were performed. Pancreaticoduodenectomy was performed for 14 patients and palliative gastrojejunostomy for 6 patients. No cross-contamination of the peritoneum or infectious complications were noted. No significant complications related to either the endoscopic or laparoscopic approach occurred. Conclusions: This study supports the authors' previous conclusions that the transgastric approach to diagnostic peritoneoscopy is feasible, safe, and accurate. The lack of documented bacterial contamination further supports the use of this technique. Technical issues, including intraabdominal manipulation and gastric closure, require further investigation. abstract_id: PUBMED:22179468 Transgastric pure-NOTES peritoneoscopy and endoscopic ultrasonography for staging of gastrointestinal cancers: a survival and feasibility study. Background: Human natural orifice transluminal endoscopic surgery (NOTES) has mainly been based on simultaneous laparoscopic assistance (hybrid NOTES), forgoing the theoretical benefits of the NOTES technique. This is due to a lack of NOTES-specific instruments and endoscopes, making pure-NOTES procedures difficult and time consuming. An area where pure NOTES could be adopted at its present stage of development is minimally invasive staging of gastrointestinal (GI) cancer. The aim of this study is to evaluate the feasibility of combining transgastric (TG) pure-NOTES peritoneoscopy and intraperitoneal endoscopic ultrasonography (ip-EUS) with intraluminal EUS (il-EUS) for peritoneal evaluation. Methods: This was a feasibility and survival study where il-EUS followed by ip-EUS and peritoneoscopy was performed in 10 pigs subjected to TG pure NOTES. A score was given with regard to achieved visualisation of predefined anatomical structures. Survival was assessed at postoperative day (POD) 14. Results: All animals survived until POD 14. Median total procedural time was 94 min (range 74-130 min). Median time for il-EUS, ip-EUS and peritoneoscopy was 11 min (range 7-14 min), 13 min (range 8-20 min) and 10 min (range 6-23 min). Il-EUS, ip-EUS and peritoneoscopy resulted in a score of 15/15 points (range 14-15 points), 6/9 points (range 1-8 points) and 12/13 points (range 8-13 points). Conclusions: TG pure-NOTES peritoneoscopy and ip-EUS combined with il-EUS is feasible and provides sufficient peritoneal evaluation. The technique could have potential for minimally invasive staging of GI cancers. Answer: Based on the controlled study involving twenty-three dogs undergoing endoscopic transgastric peritoneoscopy, it was found that animals that did not receive treatment for gastrotomy closure did not show any clinical ill effects and demonstrated adequate healing. 
These untreated animals also had fewer adhesions and less inflammation at the gastrotomy site compared to those treated with a bioabsorbable plug (PUBMED:20438886). This suggests that gastrotomies may not require repair after endoscopic transgastric peritoneoscopy, at least in the animal model used in this study. However, it is important to note that this study had limitations, including a small number of subjects, use of an animal model, lack of randomization, and the fact that the trauma from short peritoneoscopy may not be applicable to longer procedures. Therefore, while the study provides evidence that gastrotomy repair may not be necessary in some cases, further research, particularly in humans, would be needed to confirm these findings and their applicability to clinical practice.
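The between-group comparisons quoted in this answer can be reproduced approximately from the summary figures reported in PUBMED:20438886. The short Python sketch below is an assumed re-analysis, not the authors' code, and it treats the quoted spread of the burst pressures (77 +/- 11 versus 76 +/- 15 mm Hg) as standard deviations.

from scipy import stats

# burst pressure: Welch two-sample t-test from summary statistics (plug n=12, no-treatment n=11)
t_stat, p_pressure = stats.ttest_ind_from_stats(
    mean1=77, std1=11, nobs1=12,
    mean2=76, std2=15, nobs2=11,
    equal_var=False)

# perforation at the gastrotomy site: Fisher's exact test on 2/12 versus 0/11
_, p_perforation = stats.fisher_exact([[2, 10], [0, 11]])

print(f"burst pressure P = {p_pressure:.2f}")    # close to the reported P = .9
print(f"perforation P = {p_perforation:.2f}")    # close to the reported P = .5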
Instruction: Does amblyopia have a functional impact? Abstracts: abstract_id: PUBMED:30957266 Functional limitations recognised by adults with amblyopia and strabismus in daily life: a qualitative exploration. Purpose: Patients' perceptions about the functional impact of amblyopia and strabismus in daily life have not been explored extensively. Therefore, this study aimed to explore the lived experiences of adults with these conditions and understand the functional limitations they face in their day-to-day life. Methods: A qualitative study design was adopted. Participants over 18 years of age, with a primary diagnosis of amblyopia (with or without strabismus) were recruited from the community and various eye care practices in South Australia and Victoria, Australia. Participants took part in either focus group discussions or individual interviews and described the functional limitations they experienced in their daily life due to their eye condition. These sessions were audio recorded, transcribed verbatim, coded inductively, and analysed iteratively to form emergent themes. Results: Thirty-seven adult participants took part in the study: 23 (62%) had strabismic amblyopia; 5 (14%) anisometropic amblyopia;, 7 (19%) combined-mechanism amblyopia; and 2 (5%) deprivational amblyopia. Their median age was 54 years (range: 21-82 years) and 19 (51%) were female. Participants reported several challenges in performing everyday tasks such as driving (e.g. judging distances, changing lanes), reading (e.g. fine print, reading for prolonged time) and sports (e.g. catching a ball). They also articulated trouble in navigating safely (e.g. using stairs, bumping into objects), performing work-tasks (e.g. taking longer than peers to complete tasks) and other routine tasks (e.g. chopping vegetables with care). Conclusions: Several functional limitations were encountered by adults living with amblyopia and strabismus. Participants recognised these limitations in their normal day-to-day life and related the challenges they faced to symptoms associated with their eye condition. By presenting rich in-depth qualitative data, the paper demonstrates qualitative evidence of the functional impacts associated with amblyopia and strabismus. abstract_id: PUBMED:32511967 Understanding the Impact of Residual Amblyopia on Functional Vision and Eye-related Quality of Life Using the PedEyeQ. Purpose: To evaluate the effect of residual amblyopia on functional vision and eye-related quality of life (ER-QOL) in children and their families using the Pediatric Eye Questionnaire (PedEyeQ). Design: Prospective cross-sectional study. Methods: Seventeen children with residual amblyopia (no current treatment except glasses), 48 visually normal controls without glasses, and 19 controls wearing glasses (aged 8-11 years) completed the Child 5-11 year PedEyeQ. One parent for each child completed the Proxy 5-11 PedEyeQ, Parent PedEyeQ. Rasch-calibrated domain scores were calculated for each questionnaire domain and compared between amblyopic children and controls. 
Results: PedEyeQ scores were significantly lower (worse) for children with residual amblyopia than for controls without glasses across all domains: Child PedEyeQ greatest mean difference 18 points worse on Functional vision domain (95% confidence interval [CI] -29 to -7; P < .001); Proxy PedEyeQ greatest mean difference 31 points worse on Functional vision domain (95% CI -39 to -24; P < .001); Parent PedEyeQ greatest mean difference 34 points worse on the Worry about child's eye condition domain (95% CI -46 to -22; P < .001). Compared with controls wearing glasses, PedEyeQ scores were lower for residual amblyopia on the Child Frustration/worry domain (P = .03), on 4 of 5 Proxy domains (P ≤ .05), and on 3 of 4 Parent domains (P ≤ .05). Conclusions: Residual amblyopia affects functional vision and ER-QOL in children. Parents of amblyopic children also experience lower quality of life. These data help broaden our understanding of the everyday-life impact of childhood residual amblyopia. abstract_id: PUBMED:29484704 The functional impact of amblyopia. Amblyopia is the most common disorder managed in paediatric ophthalmic practice in industrialised countries. Reports on the impact of amblyopia on tasks relevant to the activities of children, or on skills pertinent to their education and quality of life, is leading to greater understanding of the functional disabilities associated with the condition. This review considers the extent to which amblyopia affects the ability to carry out everyday tasks, with particular attention to studies of motor skills and reading proficiency in children. Collectively, these studies show that amblyopia results in poorer outcomes on tests of skills required for proficiency in everyday tasks and which relate to childhood academic performance. However, the relative contributions that the documented vision anomalies inherent in amblyopia contribute to various functional disabilities is not fully determined. Recent reports have demonstrated improvement following treatment in standardised measures of fine motor skills involved in practical, everyday tasks. Including measurement of functional performance skills in amblyopia treatment trials is desirable to show treatment effect on crucial, real-world activities. abstract_id: PUBMED:17003430 The VF-14 and psychological impact of amblyopia and strabismus. Purpose: To assess the impact of amblyopia, strabismus and glasses on subjective visual and psychological function among amblyopes. Methods: Questionnaires were administered to 120 teenagers with amblyopia (cases), with residual amblyopia after treatment, or with or without strabismus and 120 control subjects (controls) Cases underwent ophthalmic examination including cycloplegic refraction. Two questionnaires (visual function 14 [VF-14] and a newly designed eight-item questionnaire) were administered to assess the psychological impact score of general daily life, having a weaker eye, glasses wear, and current noticeable strabismus. Questionnaires were validated in 60 subjects in each group by a second administration of the questionnaire. The VF-14 scores, psychological impact scores, and clinical data were compared. Results: The VF-14 and psychological impact scores were highly reproducible. The mean VF-14 score for the control group was 95.5 and for the cases was 78.9 (P < 0.0001), but the scores did not correlate with the severity of amblyopia. 
The psychological impact score in general daily life was sensitive in discriminating between mild (median score 31) and moderate to severe (median score 56) amblyopes (P < 0.02). The cases segregated into two clear groups; those who scored high (large detrimental psychological impact) on psychological impact, with subjectively noticeable manifest strabismus, and those who scored low (low detrimental psychological impact), without noticeable strabismus. The subjective experience of patching treatment differentiated the two groups best of all. Conclusions: Subjective visual and psychological functions are altered compared with normal subjects due to amblyopia, strabismus, and a previous unpleasant patching experience. The mean VF-14 score was similar to that previously published for patients with glaucoma. The study underlines that amblyopia and/or strabismus have an impact on teenagers' subjective visual function and well-being. abstract_id: PUBMED:24698617 Functional and psychosocial impact of strabismus on Singaporean children. Purpose: To quantify the effects of strabismus in Singaporean children using the Intermittent Exotropia Questionnaire (IXTQ) and the Adult Strabismus 20 Questionnaire (AS20). Methods: Consecutive strabismus patients 5-16 years of age were recruited along with an equal number of age-matched controls with eye conditions other than strabismus and amblyopia (group A) and controls with no known eye conditions (group B). All children completed the IXTQ; those 8-16 years of age also completed the AS20 questionnaire. Parents completed the parental proxy IXTQ (pp-IXTQ) and AS20 (pp-AS20) and a parental IXTQ (PIXT). Results: A total of 60 patients and 60 age-matched controls in each group were included. Children with strabismus had lower IXTQ (70.1 ± 19.0) and AS20 (80.0 ± 13.8) scores than those in group B (IXTQ, 90.3 ± 11.8 [P < 0.001]; AS20, 90.0 ± 10.9 [P < 0.001]) and group A (IXTQ, 80.6 ± 14.9 [P = 0.001]; AS20, 81.6 ± 18.3 [P = 0.691]). Among children with strabismus, child IXTQ scores were significantly lower than parental proxy scores (70.1 ± 19.0 vs 76.4 ± 15.8 [P = 0.026]), but there was no difference in control group scores or with AS20 scores. Item-level analysis suggested that children's worry focused on what others thought about them and their ability to make friends, whereas parents were more concerned about eyesight and whether surgery was required. Conclusions: The IXTQ and AS20 were better at differentiating between children with strabismus and those with no eye condition than between children with strabismus and other eye conditions. Parental proxies were accurate in predicting child scores but parents were more likely to underestimate the psychosocial effects of their children's strabismus. abstract_id: PUBMED:24661911 Abnormal functional connectivity density in children with anisometropic amblyopia at resting-state. Amblyopia is a developmental disorder resulting from anomalous binocular visual input in early life. Task-based neuroimaging studies have widely investigated cortical functional impairments in amblyopia, but changes in spontaneous neuronal functional activities in amblyopia remain largely unknown. In the present study, functional connectivity density (FCD) mapping, an ultrafast data-driven method based on fMRI, was applied for the first time to investigate changes in cortical functional connectivities in amblyopia during the resting-state. 
We quantified and compared both short- and long-range FCD in the brains of both children with anisometropic amblyopia (AAC) and normally sighted children (NSC). In contrast to the NSC, the AAC showed significantly decreased short-range FCD in the inferior temporal/fusiform gyri, parieto-occipital and rostrolateral prefrontal cortices, as well as decreased long-range FCD in the premotor cortex, dorsal inferior parietal lobule, frontal-insular and dorsal prefrontal cortices. Furthermore, most regions with reduced long-range FCD in the AAC showed decreased functional connectivity with the occipital and posterior parietal cortices. The results suggest that chronically poor visual input in amblyopia not only impairs the brain's short-range functional connections in visual pathways and in the frontal cortex, which is important for cognitive control, but also affects long-range functional connections among the visual areas, posterior parietal and frontal cortices that subserve visuomotor and visually guided actions, visuospatial attention modulation and the integration of salient information. This study provides evidence for abnormal spontaneous brain activity in amblyopia. abstract_id: PUBMED:18423988 Structural and functional deficits in human amblyopia. Many neuroimaging tools have been used to assess the site of the cortical deficits in human amblyopia. In this paper, we aimed to detect the structural and functional deficits in humans with amblyopia with the aid of anatomic magnetic resonance imaging (aMRI) and functional MRI (fMRI). We designed a visual stimulus to investigate the functional deficits and delineated the V1/V2 areas by retinotopic mapping. We then performed brain parcellation to calculate subcortical structure volumes for each individual and reconstructed the cortical surfaces to measure cortical thickness. Finally, statistical comparisons were carried out to identify structural abnormalities and their relationship to the functional deficits. Compared with normal controls, hemispheric differences were found in subjects with unilateral amblyopia, and the functional deficits appeared to accompany changes in cortical volume, especially in the occipital lobe. These results may provide insight into the neural substrates of amblyopia. abstract_id: PUBMED:23844297 Altered functional connectivity of the primary visual cortex in subjects with amblyopia. Amblyopia, which usually occurs during early childhood and results in poor or blurred vision, is a disorder of the visual system that is characterized by a deficiency in an otherwise physically normal eye or by a deficiency that is out of proportion with the structural or functional abnormalities of the eye. Our previous study demonstrated alterations in the spontaneous activity patterns of some brain regions in individuals with anisometropic amblyopia compared to subjects with normal vision. To date, it remains unknown whether patients with amblyopia show characteristic alterations in the functional connectivity patterns in the visual areas of the brain, particularly the primary visual area. In the present study, we investigated the differences in the functional connectivity of the primary visual area between individuals with amblyopia and normal-sighted subjects using resting-state functional magnetic resonance imaging.
Our findings demonstrated that the cerebellum and the inferior parietal lobule showed altered functional connectivity with the primary visual area in individuals with amblyopia, and this finding provides further evidence for the disruption of the dorsal visual pathway in amblyopic subjects. abstract_id: PUBMED:36282454 Amblyopia: progress and promise of functional magnetic resonance imaging. Amblyopia is a neurodevelopmental disorder characterized by functional deficits in the visual cortex. Functional magnetic resonance imaging (fMRI) is the most commonly used neuroimaging technique for investigating amblyopia. Herein, we systematically searched a PubMed database from inception to December 2021 to highlight the current progress and promises about fMRI technology in amblyopia; amblyopia's neural mechanism, the comparison of different types of amblyopia, and the evaluation of the therapeutic effect were explored. Relevant articles published in English and appropriate cross-references were considered for inclusion, including basic studies, imaging techniques, clinical diagnostic and therapeutic studies, case series, and reviews. abstract_id: PUBMED:22712767 Does amblyopia have a functional impact? Findings from the Dunedin Multidisciplinary Health and Development Study. Background: Existing data on the functional impact of amblyopia are conflicting. The functional impact of amblyopia is a critical component of the viability and effectiveness of childhood vision screening programmes and treatment regimes. Design: Prospective longitudinal birth cohort (the Dunedin Multidisciplinary Health and Development Study). Participants: One thousand thirty-seven children born in Dunedin, New Zealand, between April 1972 and March 1973, assessed from ages 3 to 32 years. Methods: Comparison of study members with no amblyopia, recovered amblyopia, possible amblyopia or amblyopia according to both classic (6/12 visual acuity or worse in at least one eye, or a two-line or greater differential between the visual acuity in both eyes) and modern (6/9 visual acuity or worse in at least one eye) definitions of amblyopia. Main Outcome Measures: Childhood motor development, teenage self-esteem and adult socioeconomic status (assessed by occupation, education, reading ability and income). Results: There was no evidence of poorer motor development, lower self-esteem or reduced adult socioeconomic status in study members with amblyopia or recovered amblyopia when compared with those with no amblyopia. Conclusions: Amblyopia or having recovered amblyopia does not functionally impact on childhood motor development, teenage self-esteem or adult socioeconomic status within this cohort. The wide range of visual deficits and adaptations that are known to occur in amblyopic vision do not translate into important 'real life' outcomes for the study members with amblyopia or recovered amblyopia. The age-related cumulative lifetime risk of bilateral visual impairment in amblyopia will be assessed in future studies. Answer: Yes, amblyopia does have a functional impact. Studies have shown that amblyopia results in functional limitations and affects the ability to carry out everyday tasks. Adults with amblyopia and strabismus report challenges in performing daily activities such as driving, reading, sports, navigating safely, and performing work tasks (PUBMED:30957266). 
Children with residual amblyopia experience lower functional vision and eye-related quality of life compared to controls, and their parents also report a lower quality of life due to concerns about their child's eye condition (PUBMED:32511967). Amblyopia has been associated with poorer outcomes on tests of skills required for proficiency in everyday tasks and academic performance in children (PUBMED:29484704). Teenagers with amblyopia or strabismus have altered subjective visual and psychological functions compared to normal subjects, which can impact their well-being (PUBMED:17003430). Singaporean children with strabismus show lower scores on questionnaires assessing the psychosocial effects of their condition, indicating a significant impact on their quality of life (PUBMED:24698617). Neuroimaging studies have also revealed that amblyopia is associated with abnormal functional connectivity density and altered functional connectivity in the brain, suggesting that the condition affects spontaneous brain activities and neural substrates (PUBMED:24661911; PUBMED:18423988; PUBMED:23844297). However, one longitudinal study found no evidence of poorer motor development, lower self-esteem, or reduced adult socioeconomic status in individuals with amblyopia or recovered amblyopia, suggesting that the range of visual deficits in amblyopia may not translate into significant 'real life' outcomes within that cohort (PUBMED:22712767).
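As an aside on methodology: several of the amblyopia studies cited above rest on the same basic analysis, comparing questionnaire scores between an affected group and a control group and reporting the mean difference with a 95% confidence interval and p-value. The sketch below illustrates that calculation in Python with SciPy; the score arrays are invented placeholders, not data from any of the cited studies.

```python
import numpy as np
from scipy import stats

# Hypothetical quality-of-life questionnaire scores (0-100 scale);
# illustrative values only, not data from the cited studies.
amblyopia_scores = np.array([62, 70, 55, 68, 74, 60, 58, 65, 71, 63])
control_scores = np.array([85, 90, 78, 88, 92, 81, 86, 79, 93, 84])

# Mean difference (amblyopia minus control): negative = worse in the amblyopia group.
diff = amblyopia_scores.mean() - control_scores.mean()

# Welch's t-test, which does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(amblyopia_scores, control_scores, equal_var=False)

# 95% confidence interval for the mean difference (Welch-Satterthwaite degrees of freedom).
v1 = amblyopia_scores.var(ddof=1) / len(amblyopia_scores)
v2 = control_scores.var(ddof=1) / len(control_scores)
se = np.sqrt(v1 + v2)
dof = (v1 + v2) ** 2 / (v1 ** 2 / (len(amblyopia_scores) - 1) + v2 ** 2 / (len(control_scores) - 1))
margin = stats.t.ppf(0.975, dof) * se
print(f"mean difference = {diff:.1f} (95% CI {diff - margin:.1f} to {diff + margin:.1f}), p = {p_value:.4g}")
```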
Instruction: Prosthetic replacement of the medial meniscus in cadaveric knees: does the prosthesis mimic the functional behavior of the native meniscus? Abstracts: abstract_id: PUBMED:15262640 Prosthetic replacement of the medial meniscus in cadaveric knees: does the prosthesis mimic the functional behavior of the native meniscus? Unlabelled: Meniscus replacement by a polymer meniscus prosthesis in dogs resulted in generation of new meniscal tissue. Hypothesis: Optimal functioning of the prosthesis would involve realistic deformation and motion patterns of the prosthesis during knee joint motion. Study Design: Controlled laboratory study. Methods: The movements of the meniscus were determined during knee joint flexion and extension with and without internal and external tibial torque by means of roentgen stereophotogrammetric analysis. Subsequently, the meniscus in 6 human cadaveric knee joints was replaced by a meniscus prosthesis. Results: All different parts of the meniscus showed a posterior displacement during knee joint flexion. The anterior horn was more mobile than the posterior horn. The prosthesis mimicked the movements of the meniscus. However, the excursions of the prosthesis on the tibial plateau were less. The knee joint laxity was not significantly higher after replacement with the meniscus prosthesis. Conclusions: The prosthesis approximated the behavior of the native meniscus. Improvement in both the gliding characteristics of the prosthetic material and the fixation of the prosthesis may improve the function. Clinical Relevance: The meniscus prosthesis needs to be optimized to achieve a better initial function in the knee joint. abstract_id: PUBMED:34758053 Quantifying the differential functional behavior between the medial and lateral meniscus after posterior meniscus root tears. Meniscus tears of the knee are among the most common orthopedic knee injury. Specifically, tears of the posterior root can result in abnormal meniscal extrusion leading to decreased function and progressive osteoarthritis. Despite contemporary surgical treatments of posterior meniscus root tears, there is a low rate of healing and an incidence of residual meniscus extrusion approaching 30%, illustrating an inability to recapitulate native meniscus function. Here, we characterized the differential functional behavior of the medial and lateral meniscus during axial compression load and dynamic knee motion using a cadaveric model. We hypothesized essential differences in extrusion between the medial and lateral meniscus in response to axial compression and knee range of motion. We found no differences in the amount of meniscus extrusion between the medial and lateral meniscus with a competent posterior root (0.338mm vs. 0.235mm; p-value = 0.181). However, posterior root detachment resulted in a consistently increased meniscus extrusion for the medial meniscus compared to the lateral meniscus (2.233mm vs. 0.4705mm; p-value < 0.0001). Moreover, detachment of the posterior root of the medial meniscus resulted in an increase in extrusion at all angles of knee flexion and was most pronounced (4.00mm ± 1.26mm) at 30-degrees of knee flexion. In contrast, the maximum mean extrusion of the lateral meniscus was 1.65mm ± 0.97mm, occurring in full extension. Furthermore, only the medial meniscus extruded during dynamic knee flexion after posterior root detachment. 
Given the differential functional behaviors between the medial and lateral meniscus, these findings suggest that posterior root repair requires reducing overall meniscus extrusion and recapitulating the native functional responses specific to each meniscus. abstract_id: PUBMED:28164045 Ipsilateral Medial and Lateral Discoid Meniscus with Medial Meniscus Tear. Introduction: Discoid meniscus is a well-documented knee pathology, and there are many cases of medial or lateral discoid meniscus reported in the literature. However, ipsilateral concurrent medial and lateral discoid meniscus is very rare, and only a few cases have been reported. Herein, we report a case of concurrent medial and lateral discoid meniscus. Case Report: A 27-year-old Japanese man complained of pain at the medial joint space of his right knee and was diagnosed with complete medial and lateral discoid menisci. On magnetic resonance imaging, although the lateral discoid meniscus had no tear, the medial discoid meniscus had a horizontal tear. Arthroscopic examination of his right knee similarly revealed that the medial discoid meniscus had a horizontal tear. In addition, the discoid medial meniscus had an anomalous insertion into the anterior cruciate ligament, and there was mild fibrillation of the medial tibial cartilage surface. We performed arthroscopic partial meniscectomy for the torn medial discoid meniscus but not for the asymptomatic lateral discoid meniscus. The latest follow-up at 18 months indicated satisfactory results. Conclusion: We report a rare case of ipsilateral medial and lateral discoid meniscus with a medial meniscus tear. The medial discoid meniscus with a tear was treated with partial meniscectomy, whereas the lateral discoid meniscus without a tear was only followed up. abstract_id: PUBMED:26814740 Replacement of the Meniscus with a Collagen Implant (CMI). Objective: Replacement of an almost completely absent medial meniscus with a collagen implant (CMI), reconstruction of form and function of the medial meniscus, delay of the development of arthrosis deformans. Indications: Subtotal degenerative or traumatic loss of the medial meniscus, stable meniscal periphery, stable anterior and posterior meniscal insertions, joint with stable ligaments. Contraindications: Complete loss of the medial meniscus. Untreated knee ligament instability. Extreme varus deformity. Extensive cartilaginous damage, i.e., levels IV and VI as described by Bauer and Jackson. Advanced unicompartmental or generalized arthrosis. Replacement of the lateral meniscus. Surgical Technique: Standard anterior arthroscopy portals. Resection of the medial meniscus leaving a complete and stable outer rim. Revitalization of the periphery to promote healing. Measurement of defect size. Insertion and fixation of the CMI with nonresorbable suture material using an inside-out technique. Postoperative Management: Postoperative knee brace with limited motion in extension/flexion of 0/0/60° until week 4, 0/0/90° until week 6. Continuous passive motion within the limits of motion from the 1st postoperative day, actively assisted physiotherapy. No weight bearing for 6 weeks, then increased weight bearing for 2 weeks until full weight bearing is achieved. Cycling can commence from 3 months postoperatively. Full sporting activity after 6 months.
Results: 60 patients (19-68 years, average 41.6 years) with subtotal loss of the medial meniscus and varus morphotype were treated from January 2001 to May 2004 as part of a prospective, randomized, arthroscopically controlled study. The sample consisted of 30 patients with high tibial valgus osteotomy combined with implantation of a CMI, and 30 patients with valgization correction osteotomy only. The CMI had to be removed from one patient because of a dislocation. Evaluation with the Lysholm score, the IKDC (International Knee Documentation Committee) score, and subjective pain data revealed only slight, nonsignificant differences for 39 patients after 24 months (CMI and correction n = 23; correction only n = 16). The chondroprotective effect of the CMI in the long term remains to be seen. abstract_id: PUBMED:30043117 3D strain in native medial meniscus is comparable to medial meniscus allograft transplant. Purpose: Injury or degeneration of the meniscus has been associated with the development of osteoarthritis of the knee joint. Meniscal allograft transplant (MAT) has been shown to reduce pain and restore function in patients who remain symptomatic following meniscectomy. The purpose of this study is to evaluate and compare the three-dimensional (3D) strain in native medial menisci and in allograft-transplanted medial menisci in both the loaded and unloaded states. Methods: Ten human cadaveric knees underwent medial MAT, utilizing soft-tissue anterior and posterior root fixation via transosseous sutures tied over an anterolateral proximal tibial cortical bone bridge. The joint was imaged first in the non-loaded state, then was positioned at 5° of flexion and loaded to 1× body weight (650 ± 160 N) during MR image acquisition. Anatomical landmarks were chosen from each image to create a tibial coordinate system, and the marker positions were then input into a custom-written program (Matlab R2014a) to calculate the 3D strain from the unloaded and loaded marker positions. Six independent strains were obtained: three principal strains and three shearing strains. Results: No statistically significant difference was found between the middle and posterior strains in the native knee compared to the meniscus allograft. This would suggest that soft-tissue fixation of meniscal allografts results in similar time zero principal and shear strains in comparison to the native meniscus. Conclusion: These results suggest that time zero MAT performs in a similar manner to the native meniscus. Optimizing MAT strain behavior may lead to potential improvements in its chondroprotective effect. abstract_id: PUBMED:33890130 Use of contralateral lateral meniscus for medial meniscal allograft transplantation: a cadaveric study. Purpose: Meniscal allografts are a preferred alternative to meniscectomy in cases of irreparable meniscal tears in young patients. Biological meniscal allograft transplantation requires a cadaveric donor, limiting its availability for transplantation. We are exploring the possibility of using the contralateral lateral meniscus for medial meniscal allograft transplantation, as it can be easily procured from the proximal tibial cuts of total knee replacement. Methods: Ten paired knees from five formalin-fixed Indian male cadavers were dissected. The outer and inner circumferences of the medial and lateral menisci, and the area of the articular surface of the medial tibial plateau covered by the native medial meniscus and by the transplanted lateral meniscus, were noted. Measurements were taken using the ImageJ software (National Institutes of Health).
The mean of the recordings from two independent observers was taken as the final value. Inter-observer and intra-observer reliability were also calculated. Results: The mean inner circumference of the medial meniscus was significantly larger than that of the lateral meniscus (p < 0.0001). However, the outer circumferences were not significantly different from each other (p = 0.1). The area of the tibial plateau covered by the native medial meniscus was smaller than the area covered by the transplanted lateral meniscus, though the difference was not statistically significant. Inter-observer reliability and intra-observer reliability were good (ICC 0.904 and 0.927, respectively). Conclusion: Based on measurements of the outer circumference of the medial and lateral menisci, a lateral meniscal allograft can be matched for transplantation on the contralateral medial side from a donor with the same tibial plateau dimensions. Further clinical studies are necessary to prove the clinical significance of this cadaveric study. Level Of Evidence: Diagnostic study. abstract_id: PUBMED:24367782 The insertion of the anterior horn of the medial meniscus: an anatomic study. Objective: The purpose of this study was to identify the various patterns of insertion of the anterior horn of the medial meniscus in Ghanaian subjects. Study: The study involved 35 cadaveric knees (26 males and 9 females). The Berlet and Fowler classification was used to classify the insertion of the anterior horn of the medial meniscus. Findings: The distribution of the insertion pattern was as follows: 42.9% (15) had type I insertion, 45.7% (16) had type II, and type III and IV insertions were each found in 5.7% (2) of the dissected knees. Type II insertion had the highest incidence, which deviates from what has been reported in the literature. The incidence of the anterior intermeniscal ligament (AIML) was 34.3%, which was much lower than most studies have reported. Conclusion: The findings of the study may suggest that the pattern of insertion of the anterior horn of the medial meniscus differs in the Ghanaian population; further research is needed in this area. abstract_id: PUBMED:27163099 Meniscus delivery: a maneuver for easy arthroscopic access to the posterior horn of the medial meniscus. Pathology of the posterior horn of the medial meniscus is common, and the posterior horn is often difficult to approach during arthroscopy for various reasons. We describe an easy maneuver to facilitate "delivery of the medial meniscus" during arthroscopy. abstract_id: PUBMED:25766391 Comparison of the biomechanical tensile and compressive properties of decellularised and natural porcine meniscus. Meniscal repair is widely used as a treatment for meniscus injury. However, where meniscal damage has progressed such that repair is not possible, approaches for partial meniscus replacement are now being developed which have the potential to restore the functional role of the meniscus in stabilising the knee joint, absorbing and distributing stress during loading, and preventing early degenerative joint disease. One attractive potential solution to the current lack of meniscal replacements is the use of decellularised natural biological scaffolds, derived from xenogeneic tissues, which are produced by treating the native tissue to remove the immunogenic cells. The current study investigated the effect of decellularisation on the biomechanical tensile and compressive (indentation and unconfined) properties of the porcine medial meniscus through an experimental-computational approach.
The results showed that decellularised medial porcine meniscus maintained the tensile biomechanical properties of the native meniscus, but had a lower initial tensile elastic modulus. In compression, decellularised medial porcine meniscus generally showed a lower elastic modulus and higher permeability compared to the native meniscus. These changes in the biomechanical properties, which ranged from less than 1% to 40%, may be due to the reduction of glycosaminoglycan (GAG) content during the decellularisation process. The predicted biomechanical properties for the decellularised medial porcine meniscus were within the reported range for the human meniscus, making it an appropriate biological scaffold for consideration as a partial meniscus replacement. abstract_id: PUBMED:24133310 Effect of partial and complete posterior cruciate ligament transection on medial meniscus: A biomechanical evaluation in a cadaveric model. Background: The relationship between medial meniscus tear and posterior cruciate ligament (PCL) injury has not been fully explained. We aimed to investigate the biomechanical effect of partial and complete PCL transection on different parts of the medial meniscus at different flexion angles under static loading conditions. Materials And Methods: Twelve fresh human cadaveric knee specimens were divided into four groups: PCL intact (PCL-I), anterolateral bundle transection (ALB-T), posteromedial bundle transection (PMB-T) and PCL complete transection (PCL-T) groups. Strain on the anterior horn, body, and posterior horn of the medial meniscus was measured under different axial compressive tibial loads (200-800 N) at 0°, 30°, 60° and 90° of knee flexion in each group. Results: Compared with the PCL-I group, the PCL-T group had higher strain on the whole medial meniscus at 30°, 60° and 90° flexion in all loading conditions and at 0° flexion with 400, 600 and 800 N loads. In the ALB-T group, strain on the whole meniscus increased at 30°, 60° and 90° flexion under all loading conditions and at 0° flexion with 800 N only. The PMB-T group exhibited higher strain at 0° flexion with 400 N, 600 N and 800 N, at 30° and 60° flexion with 800 N, and at 90° flexion under all loading conditions. Conclusions: Partial PCL transection triggers strain concentration on the medial meniscus, and the effect is more pronounced with higher loading conditions at higher flexion angles. Answer: The prosthetic replacement of the medial meniscus in cadaveric knees was investigated to determine if the prosthesis could mimic the functional behavior of the native meniscus. The study involved replacing the meniscus in human cadaveric knee joints with a meniscus prosthesis and analyzing the movements of the meniscus during knee joint flexion and extension with and without internal and external tibial torque using roentgen stereophotogrammetric analysis (PUBMED:15262640). The results showed that the prosthesis did mimic the movements of the meniscus, with all parts of the meniscus displaying a posterior displacement during knee joint flexion. The anterior horn was found to be more mobile than the posterior horn. However, the excursions of the prosthesis on the tibial plateau were less than those of the native meniscus, and knee joint laxity was not significantly higher after replacement with the meniscus prosthesis.
The study concluded that the prosthesis approximated the behavior of the native meniscus, but improvements in the gliding characteristics of the prosthetic material and the fixation of the prosthesis could potentially enhance its function (PUBMED:15262640). This suggests that while the prosthetic replacement does not perfectly replicate the native meniscus's behavior, it does provide a close approximation, which could be improved with further optimization of the prosthesis design and materials.
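A note on the strain analysis mentioned in PUBMED:30043117: the authors computed 3D strain from marker positions in the unloaded and loaded states using a custom Matlab program whose details are not given in the abstract. The sketch below shows one standard way such a computation can be set up — fitting an affine deformation to marker coordinates and extracting Green-Lagrange principal strains — written in Python with NumPy and using invented marker coordinates; it illustrates the general technique, not the cited authors' code.

```python
import numpy as np

# Hypothetical marker coordinates (mm) in a tibial coordinate system,
# before (unloaded) and after (loaded) axial loading. Illustrative values only.
unloaded = np.array([[0.0, 0.0, 0.0],
                     [10.0, 0.0, 0.0],
                     [0.0, 10.0, 0.0],
                     [0.0, 0.0, 5.0],
                     [10.0, 10.0, 5.0]])
loaded = np.array([[0.1, 0.0, 0.0],
                   [10.4, 0.1, -0.1],
                   [0.2, 10.3, 0.0],
                   [0.1, 0.1, 4.8],
                   [10.5, 10.4, 4.7]])

# Fit a homogeneous (affine) deformation x = F X + t to the markers by least squares.
X = np.hstack([unloaded, np.ones((len(unloaded), 1))])  # append 1 to carry the translation
coeffs, *_ = np.linalg.lstsq(X, loaded, rcond=None)      # 4 x 3 solution matrix
F = coeffs[:3, :].T                                       # deformation gradient

# Green-Lagrange strain tensor and its principal values (eigenvalues).
E = 0.5 * (F.T @ F - np.eye(3))
principal_strains = np.linalg.eigvalsh(E)
print("principal Green-Lagrange strains:", principal_strains)
```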
Instruction: Does a reduction in family medicine clerkship time affect educational outcomes? Abstracts: abstract_id: PUBMED:11411971 Does a reduction in family medicine clerkship time affect educational outcomes? Background And Objectives: Little is known about the relationship between the length of a family medicine clerkship and its educational outcomes. After our family medicine clerkship time decreased from 6 weeks to 4 weeks in July 1997, we studied how this change in clerkship length affected educational outcomes. Methods: Educational outcomes for the 2-year periods before and after the change were examined and compared whenever possible. Outcome measurements included student ratings of different aspects of the clerkship and student performance on clerkship examinations. Results: Students' exposure to common clinical problems was unaffected by the change. For the 4-week clerkship, there was a slight increase in student ratings of the adequacy of number of patients seen, the opportunity to follow-up with patients, the ability to develop health promotion plans, and overall satisfaction. Because the combinations of examinations used differed each year, student performance on clerkship examinations could not be directly compared. Conclusions: Educational outcomes of the 4-week clerkship were similar to the 6-week clerkship. A few key outcomes improved. Various curricular and structural changes instituted for the 4-week clerkship contributed to the stability in outcomes. Reports from other medical schools may give additional insight into understanding this relationship. abstract_id: PUBMED:37540529 The Impact of Tele-Education on Family Medicine Clerkship Students' Learning Outcomes. Background And Objectives: The COVID-19 pandemic necessitated rapid changes to medical education for student and patient protection. A dearth of published US studies examine resulting clinical education outcomes due to pandemic-induced curricula changes. We describe adaptations made to a family medicine clerkship to move it from traditional in-person delivery to virtual only, and then from virtual to hybrid; and compare educational outcomes of students across delivery types. Methods: We stratified 386 medical students in their third year completing their 8-week family medicine clerkship by type of content delivery, including in person, virtual only, and hybrid instruction. We examined the impact of these changes on three clerkship learning outcomes: the midblock assessment score, the National Board of Medical Examiners (NBME) exam score, and the final numeric score (FNS). Results: In our sample, 164 (42.5%) received content in person, 36 (9.3%) received virtual only, and 186 (48.2%) received hybrid content. Students receiving virtual only (M=76.4, SD=9.1) had significantly higher midblock assessment scores (F=8.06, df=2, P=.0004) than students receiving hybrid (M=71.7, SD=8.8) and in-person training (M=74.5, SD=7.2). No significant differences existed in students' NBME exam scores or FNSs across delivery types. Conclusions: Students receiving virtual-only or hybrid content performed at least as well on three clerkship-related educational outcomes as their pre-COVID peers participating in person. Further research is needed to understand how changes to medical education affected student learning and skill development. abstract_id: PUBMED:31722098 Grade Inflation in the Family Medicine Clerkship. Background And Objectives: Medical educators perceive grade inflation to be a serious problem. 
There is some literature discussing the magnitude of the problem and ways to remediate it, but little literature is available in the field of family medicine. We sought to examine what methods of remediating grade inflation have been tried by family medicine clerkship directors, and what factors influence the chosen method of addressing this problem. Methods: We conducted a national Council of Academic Family Medicine's (CAFM) Educational Research Alliance (CERA) survey of family medicine clerkship directors, inquiring about their perceptions of the seriousness of grade inflation, whether it was perceived as a remediable problem, and what methods had been tried within the last 3 years to address this problem. Results: The response rate was 69%. Clerkship directors' perceptions that grade inflation is a serious problem either nationally or in their own clerkship did not correlate with how they weighted the objective versus subjective portions of the clerkship grade. Clerkship directors who agreed that grade inflation was a remediable problem had a higher percentage of nonexamination objective criteria and a lower percentage of subjective criteria in their grading formula. Clerkship directors who agreed grade inflation is a problem in their clerkship were more likely to have tried giving feedback to graders on grade distribution than those who didn't think grade inflation was a problem. Conclusions: Family medicine clerkship directors perceive grade inflation to be a serious problem, both at a national level and in their clerkships. Various methods of addressing grade inflation have been tried by family medicine clerkship directors. abstract_id: PUBMED:34019682 Active Learning Versus Traditional Teaching Methods in the Family Medicine Clerkship. Background And Objectives: Active learning, defined as a variety of teaching methods that engage the learner in self-evaluation and personalized learning, is emerging as the new educational standard. This study aimed to evaluate how family medicine clerkship directors are incorporating active learning methods into the clerkship curriculum. Methods: Data were collected via a Council of Academic Family Medicine Educational Research Alliance survey of family medicine clerkship directors. Participants answered questions about the number and type of teaching faculty in their department, the various teaching methods used in their family medicine clerkship, and what challenges they had faced in implementing active learning methods. Results: The survey response rate was 64%; 97% of family medicine clerkships use active learning techniques. The most common were online modules, problem-based learning, and hands-on workshops. The number of teaching faculty was significantly correlated with hours spent in live (not online) active teaching. One-third of clerkship directors felt challenged by lack of resources for adopting active learning. Clerkship directors did not cite lack of expertise as a challenge to implementing active learning. Time dedicated to clerkship director duties or the presence of a dedicated educator in the department was not associated with the adoption of active learning. Conclusions: The use of active learning in the family medicine clerkship is required both by educational standards and student expectations. Clerkship directors may feel challenged by lack of resources in their attempts to adopt active learning. However, there are many methods of active learning, such as online modules, that are less faculty time intensive. 
abstract_id: PUBMED:9597530 Tracking the contribution of a family medicine clerkship to the clinical curriculum. Background And Objectives: Medical educators are working to articulate the objectives and measure the outcomes of medical education. In clinical training, faculty need methods to identify both the principal educational contributions of individual clerkships and how prior experiences influence student learning. Methods: We analyzed students' perceived acquisition of clinical knowledge and skills on a 4-week, community-based family medicine clerkship. The data represent 349 third-year medical students who participated in the clerkship during a 2-year time period. Results were summarized by three different combinations of prior clerkship experiences and overall. Results: Students reported gains as a result of the clerkship for the majority of medical problems and procedures. However, there were differences in the clerkship's perceived contribution depending on the timing and sequence of clinical rotations. Even when the family medicine clerkship followed all other primary care rotations, students perceived that the clerkship contributed to gains in knowledge of undifferentiated and commonly seen problems; applications of health promotion, disease prevention, and patient education; importance of family dynamics in patient care; business aspects of medical practice; and appreciation of family practice. Conclusions: The results demonstrate how a required family medicine clerkship can enhance the clinical learning that occurs on other rotations. The study also demonstrates that it is possible to track a clerkship's contribution to student development and to understand how a clerkship's role may change according to students' prior experiences. abstract_id: PUBMED:12269537 Clerkship order and performance on family medicine and internal medicine National Board of Medical Examiners Exams. Background And Objectives: Taking National Board of Medical Examiners (NBME) subject examinations later in the year is known to lead to higher scores. The effect of taking these exams in a particular order is not well understood. Methods: Scores on family medicine and internal medicine examinations from 312 students in 2 academic years were analyzed to determine the effect of clerkship order on student performance. US Medical Licensing Examination (USMLE) Step 1 scores were used to control for prior academic achievement. Students were separated into groups based on the time of year they took each clerkship and prior experiences. Results: When controlling for USMLE scores, NBME scores varied in relation to time of year and order of clerkship experiences. Students who took internal medicine first performed better on the family medicine exam. Taking psychiatry, obstetrics-gynecology, or surgery clerkships prior to the internal medicine exam improved scores on the internal medicine examination. Conclusions: The timing and order of family medicine and internal medicine clerkship experiences affects performance on the NBME family medicine and internal medicine exams. Clerkship directors should consider this effect when evaluating medical students. abstract_id: PUBMED:8326845 Innovative educational practices in a required family medicine clerkship. This paper traces the 7-year evolution of a required clerkship in Family Practice from the time of initial grant application to the current academic year. 
Results of experience in the areas of student placement, preceptor recruitment, curriculum development, test construction, grading schema, and course evaluation are described. A major focus is streamlining administrative systems to decrease the course director's paperwork. The changing needs of the department, medical school, and students are reflected in the adaptations of the clerkship to those needs. abstract_id: PUBMED:6717308 A comprehensive clerkship in family medicine. A 6-week family medicine clerkship that is comprehensive in its approach to the academic base of family medicine and in its integration of clinical and didactic experiences is discussed. The goals of the clerkship are to enable students to: acquire knowledge of common illnesses in ambulatory care; integrate compassion, concern and empathy in the delivery of comprehensive and personalized care; and experience the medical and social issues of family medicine. The curriculum includes 50 hours of didactic preparation in the family life cycle, common illnesses, patient interviewing, problem-oriented medical records, genograms, compliance, and clinical problem-solving. Clinical experiences include placement in a family practice centre, an experience in a drug-abuse treatment centre and a brief rotation in geriatric medicine. Students are evaluated on their performance in clinical settings and in an objective test of primary care concepts. Test performance shows significant growth from student pre- to post-test scores. Student evaluations have been very favourable toward the clerkship, and self-reports reveal growth in knowledge of common illnesses, problems of the elderly, problems of drug abuse and the role of the family doctor. abstract_id: PUBMED:9827343 Implementing problem-based learning in a family medicine clerkship. Background And Objectives: Problem-based learning (PBL) has been implemented in the curriculum of many medical schools, but limited information is available about the outcomes of this learning technique. The educational intervention presented in this paper implemented a PBL component in our third-year family medicine clerkship and measured the outcomes of this curricular change. Methods: One third of the curricular time devoted to didactic teaching in our family medicine clerkship was replaced with PBL activities. Simulated cases were developed and presented to students who, with the aid of faculty facilitators, studied the cases, gathered information about the cases, and developed diagnostic and management plans for the cases. The outcome of the intervention was measured by a) comparing students' scores on the National Board of Medical Examiners (NBME) family medicine clerkship examination to scores achieved by students in the year before PBL was introduced and b) students' evaluations of the relevance and success of PBL in the clerkship curriculum. Results: Students' NBME clerkship examination scores increased from a mean of 66 the year before PBL began to 73 after PBL was implemented. More than 80% of students reported that PBL was a good way to learn family medicine, and 85% reported that the PBL technique provided sufficient information to formulate learning issues. Conclusions: PBL can be introduced into a third-year family medicine clerkship curriculum with general acceptance by students. Students rated the technique highly, and their examination scores improved. abstract_id: PUBMED:24911295 Exploring interprofessional education in the family medicine clerkship: a CERA study.
Background And Objectives: The need for interprofessional education (IPE) to improve teamwork skills has been recognized by several national organizations. The purpose of this study was to investigate IPE integration in family medicine clerkships and factors associated with IPE's success. Methods: A survey of clerkship directors at US allopathic medical schools was conducted through the Council of Academic Family Medicine Educational Research Alliance (CERA). Respondents were asked (1) whether IPE was part of the curriculum, (2) the educational methods used, (3) which health professions students and faculty participated in IPE, (4) what outcomes were measured, (5) the types of faculty development provided, and (6) the barriers encountered when implementing IPE. Results: The response rate was 66% (88/134), and 38% reported incorporating IPE into the clerkship, with most offering IPE in clinical activities. A wide variety of health professions students and faculty participated in clerkship IPE activities. One-third of the respondents offered faculty development. Most agreed that third-party funding (85%), IPE team training (94%), clearly defined roles (94%), and dedicated time during clinical care for team meetings (93%) were vital for IPE to succeed. Many programs did not measure IPE-specific outcomes (49%). Eighty percent reported at least one barrier to implementing IPE. The most common barriers were scheduling conflicts (46%) and lack of IPE experience (40%). No one reported a lack of institutional support for IPE. Conclusions: Few clerkships offered IPE. However, family medicine is in a unique position to highlight the value of interprofessional teamwork for students and should recognize and promote IPE opportunities. Answer: A reduction in family medicine clerkship time does not necessarily affect educational outcomes negatively. A study that examined the impact of reducing the family medicine clerkship from 6 weeks to 4 weeks found that educational outcomes were similar between the two durations. In fact, there was a slight increase in student ratings for various aspects of the clerkship, such as the adequacy of the number of patients seen, the opportunity to follow up with patients, the ability to develop health promotion plans, and overall satisfaction. However, direct comparison of student performance on clerkship examinations was not possible due to differences in the combinations of examinations used each year (PUBMED:11411971). Furthermore, during the COVID-19 pandemic, adaptations were made to move a family medicine clerkship from traditional in-person delivery to virtual only, and then to a hybrid model. The study comparing educational outcomes across these delivery types found that students receiving virtual-only or hybrid content performed at least as well on three clerkship-related educational outcomes as their pre-COVID peers who participated in person. There were no significant differences in students' National Board of Medical Examiners (NBME) exam scores or final numeric scores across the different delivery types (PUBMED:37540529).
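The tele-education study cited above (PUBMED:37540529) reports its midblock comparison as an F-test across three delivery formats. For readers unfamiliar with that analysis, the snippet below shows how a one-way ANOVA of scores across three groups can be run in Python with SciPy; the scores are made-up placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical midblock assessment scores for three delivery formats;
# placeholder values only, not the cohort data from the cited study.
in_person = [74, 76, 71, 78, 73, 75, 72, 77]
virtual_only = [79, 81, 76, 83, 78, 80]
hybrid = [70, 72, 69, 74, 71, 73, 68, 75]

# One-way ANOVA across the three groups (F-statistic with 2 numerator df here).
f_stat, p_value = stats.f_oneway(in_person, virtual_only, hybrid)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```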
Instruction: Is the delay to surgery for isolated hip fracture predictive of outcome in efficient systems? Abstracts: abstract_id: PUBMED:16612294 Is the delay to surgery for isolated hip fracture predictive of outcome in efficient systems? Background: Adverse outcomes for patients with isolated hip fracture have been documented when preoperative delay is longer than 48 hours. An efficient system will have the capacity to repair all hip fractures within 48 hours. We hypothesized that in an efficient system, there would be a medical justification for a delay greater than 48 hours. The purpose of this study was to identify the causes and outcome of delay for hip surgery in an efficient system. Methods: All patients with isolated hip fracture admitted to a regional trauma center from April 1993 to March 2003 were reviewed. Demographics, presence of comorbidity, preoperative delay, complications, and mortality were collected. Univariate and multivariate analysis were carried out. Results: The cohort included 977 patients. Overall mortality was 12.2%. Surgery was performed within 24 hours in 53% of cases and within 48 hours in 87% of cases. The presence of comorbidity partly explained longer (>48 hours) surgical delays. Multivariate analysis revealed that age greater than 65, male sex, and the presence of pulmonary and cardiac comorbid conditions or an active cancer but not surgical delay were associated with mortality and complications. However, surgical delay was associated with longer postsurgical hospital stay, independently of the presence of comorbidity or increasing age. Conclusions: Preoperative delay does not entail adverse outcomes when the surgery is delayed to allow for treatment of comorbid medical conditions. Preoperative delay is associated with a longer hospital stay. The presence of comorbidity only partly explains preoperative delay and adverse outcomes. A prospective study coding for the severity of comorbid conditions and the justification of the preoperative delay will be required to fully elucidate the link between delay and outcome. abstract_id: PUBMED:16917474 "Is the delay to surgery for isolated hip fracture predictive of outcome in efficient systems?" from the April 2006 issue of The Journal of Trauma. N/A abstract_id: PUBMED:28956102 Prolonged pre-operative hospital stay as a predictive factor for early outcomes and mortality after geriatric hip fracture surgery: a single institution open prospective cohort study. Introduction: The aim of this open prospective cohort study was to determine if a prolonged pre-operative hospital stay is a true predictor of higher morbidity or mortality in geriatric patients with hip fractures. Materials And Methods: We analysed early outcome parameters, such as functional independence measure (FIM), at discharge and four months post-operatively, peri-operative nonsurgical complications, intra-hospital and one year mortality compared with prolonged pre-operative hospital stay in 308 patients from a continuous cohort of 344. Results: Average pre-operative stay was 8.39 ± 5.80 days. Delaying surgery for > 72 hours was independently predictive for general complications and lower motor FIM gain at four months. All findings worsen progressively after the fifth day of delay. Pre-operative period was not found to be an independent predictor of mortality. Conclusion: In all observed outcome parameters except mortality, pre-operative delay > 72 hours was shown to be a true predictive factor. 
abstract_id: PUBMED:33990873 Analysis of the effects of a delay of surgery in patients with hip fractures: outcome and causes. This study analyzed characteristics of hip fracture patients who did not undergo surgery within 24 hours after hospitalization, as recommended by the Belgian quality standards. Reasons for delay were analyzed. Delay in surgery for hip fracture was related to the medical condition of the patients. Introduction: To compare patients with optimal timing to patients with a delay in hip surgery, with respect to outcome (postoperative complications and mortality) and reasons for delay. Methods: A retrospective analysis of medical records compared patients operated on within 24h (Group A) to patients operated on more than 24h after admission (Group B). A follow-up period of 5 years after release or up to the time of data collection was used. Reasons for delay in relation to mortality were analyzed descriptively. Descriptive statistics were used for patient demographics and complications. Associations with delayed surgery and with mortality were analyzed using binary logistic regression. Additionally, a survival analysis was provided for overall mortality. Results: A total of 536 and 304 patients were included in Groups A and B, respectively. The most prominent reason for delaying surgery was the patient not being medically fit (20.7%). Surgical delay was associated with more cardiovascular (p = 0.010), more pulmonary (p < 0.001), and fewer hematologic complications (p = 0.037). Thirty-day mortality was higher with increasing age (p < 0.001), with hematologic (p < 0.001) or endocrine-metabolic complications (p = 0.001), and lower when no complications occurred (p = 0.004). Mortality at the end of data collection was higher for patients with delayed surgery (OR = 2.634, p < 0.001), increased age (p = 0.006), male gender (p < 0.001), institutionalized patients (p = 0.009), pulmonary complications (p = 0.002), and having no endocrine-metabolic complications (p = 0.003). Survival analysis showed better survival for patients operated on within 24h (p < 0.001). Conclusions: Delayed surgery for patients with hip fractures was associated with unfavorable concomitant medical conditions. Survival was higher for patients operated on within 24h of admission. abstract_id: PUBMED:38143132 Potential predictors for surgical delay in patients with intertrochanteric fractures and their impact on hospitalization length, in a Latin American trauma center. Introduction: The incidence of intertrochanteric fractures is increasing, and health institutions must know the profile of their patients. This paper describes the relationship of clinical characteristics and the care process with surgical delay and prolonged hospitalization length in patients with intertrochanteric fractures admitted to a Latin-American trauma center. Materials And Methods: Retrospective, comparative, cross-sectional study. The medical records of patients admitted for intertrochanteric fracture between August 1st 2019 and May 31st 2021 were reviewed to extract data regarding clinical characteristics, causes of surgery delay, and hospitalization length. Regression models were used to identify potential predictor variables for surgical delay and hospitalization length. Results: 362 cases with intertrochanteric fractures were surgically treated during the study period. The mean time from admission to surgery was 4.2 ± 3.8 days. In 33.1% of the cases, surgery was performed within the first 48 h.
A history of coronary heart disease (CHD) and chronic kidney disease were potential predictors of surgery delay (p<0.005). Only CHD was independently associated with surgery delay (OR 5.267 [95% CI 1.201-23.100]; p = 0.028). Hospitalization was extended in cases where surgery was performed after 48 h (10.1 ± 6.2 days vs 5.9 ± 3.0 days; p<0.001). The regression model showed that for each day that passed from fracture to admission and each day from admission to surgery, the hospitalization duration increased by 3.7 and 4.4 days, respectively. Discussion: Patients with intertrochanteric fractures have comorbidities that potentially delay their surgical treatment and prolong hospitalization duration. The efficient use of hospital resources and the proper early evaluation of cardiac pathologies conducted during admission could positively impact the achievement of surgical treatment within the first 48 h after the fracture, reducing hospitalization duration. abstract_id: PUBMED:17996004 Predictive value of six risk scores for outcome after surgical repair of hip fracture in elderly patients. Background: Hip fracture surgery is associated with high post-operative mortality and poor functional results: the excess mortality is 20% in the first year; of those patients who survive, only 50% recover their previous ability to walk. The purpose of this study was to assess the predictive value of six functional status and/or surgical risk scoring systems with regard to serious complications after hip fracture surgery in the elderly. Methods: We performed a prospective study of a consecutive series of 232 patients (aged 65 years or older) undergoing hip fracture surgery. We pre-operatively applied the American Society of Anesthesiologists classification, the Barthel index, the Goldman index, the Physiological and Operative Severity Score for the enUmeration of Mortality and Morbidity (POSSUM) scoring system, the Charlson index and the Visual Analogue Scale for Risk (RISK-VAS) scale. These scales were evaluated with respect to three variables: incidence of serious complications, the ability to walk after a 3-month period and 90-day survival. The predictive value of the different scales was assessed by the calculated area under a receiver operating characteristic curve. Results: The RISK-VAS scale, the POSSUM scoring system and the Charlson index reached a sufficient predictive value with regard to serious post-operative complications. The Barthel index and the RISK-VAS scale were those most useful for predicting ambulation at 3 months. None of the scales proved to be capable of predicting 90-day mortality. Conclusions: A simple index such as the RISK-VAS scale was the best predictor of serious post-operative complications. The functional level before the fracture, measured with the Barthel index, had a major influence on ambulation recovery. abstract_id: PUBMED:30539154 Isolated hip fracture in the elderly and time to surgery: is there an outcome difference? Background: Early operative intervention for hip fractures in the elderly is advised to reduce mortality and morbidity. Postoperative complications impose a significant burden on patient outcomes and cost of medical care. Our aim was to determine the relationship between time to surgery and postoperative complications/mortality in patients with hip fracture. Methods: This is a retrospective review of data collected from our institution's trauma registry for patients ≥65 years old with isolated hip fracture and subsequent surgery from 2015 to 2017.
Patients were stratified into two groups based on time to surgery after admission: group 1: <48 hours versus group 2: >48 hours. Demographic variables included age, gender, race, and Injury Severity Score (ISS). The outcome variables included intensive care unit length of stay (ICU-LOS), deep venous thrombosis (DVT), pulmonary embolism (PE) rate, mortality, and 30-day readmission rates. Analysis of variance was used, with significance defined as a p value <0.05. Results: A total of 485 patients with isolated hip fracture required surgical intervention. Of those, 460 had surgery <48 hours and 25 had surgery >48 hours postadmission. The average ISS was the same in both groups. The average ICU-LOS was significantly higher in the >48 hours group compared with the <48 hours group (4.0 vs. 2.0, p<0.0002). There was no statistically significant difference between groups when comparing DVT and PE rates, 30-day readmission, or mortality rates. Discussion: Time to surgery may affect overall ICU-LOS in patients with hip fracture requiring surgical intervention. Time to surgery does not affect complication rates, 30-day readmission, or mortality. Future research should investigate long-term outcomes such as functional status and disability-adjusted life years. Level Of Evidence: III. Retrospective/prognostic cohort study. abstract_id: PUBMED:33992000 Delay in Hip Fracture Repair in the Elderly: A Missed Opportunity Towards Achieving Better Outcomes. Background: Hip fractures are a major cause of morbidity and mortality in the elderly. The American Academy of Orthopedic Surgeons (AAOS) recommends surgical repair within 48 hours of admission, as this is associated with lower postoperative mortality and complications. This study demonstrates the association of patient demographics, level of care, and hospital region with delay in hip fracture repair in the elderly. Methods: The National Trauma Data Bank (NTDB) was queried for elderly patients (age >65 years) who underwent proximal femoral fracture repair. Identified patients were subcategorized into two groups: hip fracture repair in <48 hours, and hip fracture repair >48 hours after admission. Patient and hospital characteristics were collected. Outcome variables were time from the day of admission to surgery and inpatient mortality. Results: Out of 69,532 patients, 28,031 were included after inclusion criteria were applied. 23,470 (83.7%) patients underwent surgical repair within 48 hours. The overall median time to procedure was 21 (interquartile range [IQR] 7-38) hours. Females were less likely to undergo a delay in hip fracture repair (odds ratio [OR; 95% confidence interval {CI}]: 0.82 [0.76-0.88], P < 0.05), and patients with higher Injury Severity Scores (ISS ≥25) had higher odds of delay in surgical repair (OR; 95% CI: 1.56 [1.07-2.29], P < 0.05). Patients treated at hospitals in the Western regions of the United States had lower odds of delay, and those treated in the Northeast and the South had higher odds of delay compared to hospitals in the Midwest (taken as the standard). There was no association between trauma level designation and odds of undergoing delay in hip fracture repair. Conclusion: Variables related to patient demographics and hospital characteristics are associated with delay in hip fracture repair in the elderly. This study delineates key determinants of delay in hip fracture repair in elderly patients.
abstract_id: PUBMED:31893000 Admission delay is associated with worse surgical outcomes for elderly hip fracture patients: A retrospective observational study. Background: The influence of surgical delay on mortality and morbidity has been studied extensively among elderly hip fracture patients. However, most studies only focus on the timing of surgery when patients have already been hospitalized, without considering pre-admission waiting time. Therefore, the present study aims to explore the influence of admission delay on surgical outcomes. Methods: In this retrospective study, we recorded admission timing and the interval from admission to surgery for each included patient. Other covariates were also collected to control for confounding. The primary outcome was 1-year mortality. The secondary outcomes were 1-month mortality, 3-month mortality, ICU admission and postoperative pneumonia. We mainly used multivariate logistic regression to determine the effect of admission timing on postoperative outcomes. An additional survival analysis was also performed to assess the impact of admission delay on survival status in the first year after operation. Results: The proportion of patients hospitalized on day 0, day 1, day 2 after injury was 25.4%, 54.7% and 66.3%, respectively, and 12.6% of patients presented to hospital more than one week after injury. Mean time from admission to surgery was 5.2 days (standard deviation 2.8 days). Hospitalization more than one week after injury was a risk factor for 1-year mortality (OR 1.762, 95% CI 1.026-3.379, P=0.041). Conclusion: Admission delay of more than one week is significantly associated with higher 1-year mortality. As a supplement to the current guidelines, which emphasize early surgery after admission, we also advocate early admission once patients are injured. abstract_id: PUBMED:37867607 Delay to Surgical Treatment in Geriatric Hip Fracture Patients. Background: Hip fractures in the geriatric population are frequently encountered. There is increasing focus on minimizing the delay to surgery in these patients. This study was designed to evaluate factors responsible for a delay to surgery in a geriatric hip fracture population and how time to surgery affects mortality. Methods: A retrospective cohort of patients sustaining low energy geriatric hip fractures in either an American College of Surgeons (ACS) verified Level 1 trauma center or a local university affiliated community teaching hospital was reviewed. The following variables were evaluated as independent risk factors for delay to surgery: demographic data, surgical details, use of cardiology resources, treatment center, and comorbidities. As a secondary objective, the effect of time to surgery on 1-year mortality was analyzed. Results: 1157 patients met inclusion criteria. The following factors increased the risk of delay to surgery greater than 48 hours: male sex, treatment in a community hospital (versus trauma center), older age, multiple comorbidities (eg, cardiovascular-related conditions or other fractures), cardiology consultation, and an American Society of Anesthesiologists physical status score of 3 or 4. Cardiology consultation was the strongest independent predictor of risk for delay to surgery of >48 hours (odds ratio, 6.68; 95% confidence interval, 4.40 to 10.14; P < .001). The 1-year mortality of patients did not differ when surgical treatment occurred before 48 hours or after 48 hours (Log-rank test P = .109).
Conclusion: The presence of cardiovascular comorbidities and cardiology consultations can delay surgical treatments for hip fractures in patients greater than 65 years old, but the delay did not influence 1-year all-cause mortality. Level Of Evidence: Level IV. Answer: The delay to surgery for isolated hip fracture in efficient systems does not necessarily predict adverse outcomes if the delay is medically justified. In a study of 977 patients, it was found that while age over 65, male sex, and the presence of certain comorbid conditions were associated with mortality and complications, surgical delay itself was not associated with these adverse outcomes. However, surgical delay was linked to longer post-surgical hospital stays, regardless of comorbidity or age (PUBMED:16612294). Another study indicated that a pre-operative delay of more than 72 hours was independently predictive of general complications and lower motor Functional Independence Measure (FIM) gain at four months, with outcomes worsening progressively after the fifth day of delay. However, the pre-operative period was not found to be an independent predictor of mortality (PUBMED:28956102). An analysis of hip fracture patients who did not undergo surgery within 24 hours, as recommended by Belgian quality standards, showed that delayed surgery was associated with more cardiovascular and pulmonary complications, but fewer hematologic complications. Mortality at the end of data collection was higher for patients with delayed surgery, and survival analysis showed better survival for patients operated on within 24 hours of admission (PUBMED:33990873). In a Latin American trauma center, a history of coronary heart disease (CHD) and chronic kidney disease were potential predictors of surgery delay. Hospitalization was extended in cases where surgery was performed after 48 hours, and the efficient use of hospital resources and proper early evaluation of cardiac pathologies could positively impact the achievement of surgical treatment within the first 48 hours after the fracture, reducing hospitalization duration (PUBMED:38143132). Overall, while certain studies suggest that surgical delay can be associated with worse outcomes, such as increased complications and longer hospital stays, the relationship between delay and mortality is not always clear-cut. The presence of comorbidities and the medical fitness of the patient are important factors that may justify surgical delays in efficient systems, and these delays do not necessarily lead to adverse outcomes if they are medically warranted (PUBMED:16612294; PUBMED:28956102; PUBMED:33990873; PUBMED:38143132).
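Editorial note: several of the abstracts above summarize delay-outcome associations as odds ratios with 95% confidence intervals (for example, OR 5.267, 95% CI 1.201-23.100 for CHD and surgical delay). The sketch below shows how an unadjusted odds ratio and its Wald confidence interval are computed from a 2x2 table. The counts are hypothetical, and the published estimates came from multivariable models, so this illustrates only the arithmetic behind such figures, not any cited result.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = delayed surgery with risk factor, b = delayed without,
    c = timely surgery with risk factor,  d = timely without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts chosen only to illustrate the calculation;
# they are not taken from any of the cited studies.
print(odds_ratio_ci(8, 22, 15, 217))
```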
Instruction: Purging disorder: an ominous variant of bulimia nervosa? Abstracts: abstract_id: PUBMED:25643935 Behavioral, emotional, and situational context of purging episodes in anorexia nervosa. Objective: The current study examined behavioral, emotional, and situational factors involved in purging among women with anorexia nervosa (AN). Method: Women with AN (n=118) completed a two-week ecological momentary assessment protocol involving daily reports of eating disorder behaviors, mood, and stressful events. Generalized estimating equations examined the likelihood and context of purging following eating episodes involving both overeating and loss of control (binge eating; BE); loss of control only (LOC); overeating only (OE); and neither loss of control nor overeating (non-pathological eating; NE). Results: Relative to NE, purging was more likely to occur following BE, LOC, and OE (Wald chi-square = 18.05; p < .001). BE was more strongly associated with subsequent purging than LOC but not OE; the latter two did not differ from one another. Negative affect predicted purging following NE (Wald chi-square = 7.71; p = .005). Discussion: Binge eating involving large amounts of food was the strongest predictor of purging in AN, which challenges the notion that loss of control is the most salient aspect of experiencing distress in bulimia nervosa and BE disorder. Parallel to findings from the BE literature, negative affect strongly predicted purging following NE. Further research should clarify the function and triggers of purging in AN. abstract_id: PUBMED:24185981 The role of loss of control eating in purging disorder. Objective: Purging Disorder (PD), an Other Specified Feeding or Eating Disorder (APA, 2013), is characterized by recurrent purging in the absence of binge eating. Though objectively large binge episodes are not present, individuals with PD may experience a loss of control (LOC) while eating a normal or small amounts of food. The present study sought to examine the role of LOC eating in PD using archival data from 101 women with PD. Method: Participants completed diagnostic interviews and self-report questionnaires. Analyses examined the relationship between LOC eating and eating disorder features, psychopathology, personality traits, and impairment in bivariate models and then in multivariate models controlling for purging frequency, age, and body mass index. Results: Across bivariate and multivariate models, LOC eating frequency was associated with greater disinhibition around food, hunger, depressive symptoms, negative urgency, distress, and impairment. Discussion: LOC eating is a clinically significant feature of PD and should be considered in future definitions of PD. Future research should examine whether LOC eating better represents a dimension of severity in PD or a specifier that may impact treatment response or course. abstract_id: PUBMED:31976573 Preliminary examination of insulin and amylin levels in women with purging disorder. Objective: This preliminary study explored whether differences in meal-stimulated insulin or amylin release are linked to altered ingestive behaviors in individuals with bulimia nervosa (BN) or purging disorder (PD). Method: Women with BN (n = 15), PD (n = 16), or no eating disorder (n = 18) underwent structured clinical interviews and assessments of gut hormone and subjective responses to a fixed test meal. 
Multilevel model analyses were used to explore whether gut hormone responses contribute to subjective responses to the test meal and whether these associations differed by group. Results: Insulin and amylin levels significantly increased following the test meal. Women with PD showed greater insulin release compared to those with BN, but not controls. Multilevel models support significant group × insulin interactions predicting subjective ratings of nausea and urge to vomit, with a stronger association between higher insulin responses and higher nausea and urge to vomit in women with PD and BN. Amylin responses did not differ by group. Conclusion: Increased sensitivity to the effects of insulin on nausea and urge to vomit may be linked to purging in both PD and BN. Differences in postprandial insulin levels may be linked to purging behavior in the absence versus presence of binge eating. abstract_id: PUBMED:24777645 Eating patterns in youth with restricting and binge eating/purging type anorexia nervosa. Objective: To describe eating patterns in youth with restricting and binge/purge type anorexia nervosa (AN) and to examine whether eating patterns are associated with binge eating or purging behaviors. Method: Participants included 160 children and adolescents (M = 15.14 ± 2.17 years) evaluated at The University of Chicago Eating Disorders Program who met criteria for DSM-5 restrictive type AN (AN-R; 75%; n = 120) or binge eating/purging type AN (AN-BE/P; 25%; n = 40). All participants completed the eating disorder examination on initial evaluation. Results: Youth with AN-R and AN-BE/P differed in their eating patterns, such that youth with AN-R consumed meals and snacks more regularly relative to youth with AN-BE/P. Among youth with AN-BE/P, skipping dinner was associated with a greater number of binge eating episodes (r = -.379, p < .05), while skipping breakfast was associated with a greater number of purging episodes (r = -.309, p < .05). Discussion: Youth with AN-R generally follow a regular meal schedule, but are likely consuming insufficient amounts of food across meals and snacks. In contrast, youth with AN-BE/P tend to have more irregular eating patterns, which may play a role in binge eating and purging behaviors. Adult monitoring of meals may be beneficial for youth with AN, and particularly those with AN-BE/P who engage in irregular eating patterns. abstract_id: PUBMED:24825485 Risk factors for binge eating and purging eating disorders: differences based on age of onset. Objective: To (1) determine whether childhood risk factors for early onset binge eating and purging eating disorders also predict risk for later-onset binge eating and purging disorders, and (2) compare the utility of childhood and early adolescent variables in predicting later-onset disorders. Method: Participants (N = 1,383) were drawn from the Western Australian Pregnancy Cohort (Raine) Study, which has followed children from pre-birth to age 20. Eating disorders were assessed when participants were aged 14, 17, and 20. Risk factors for early onset eating disorders have been reported previously (Allen et al., J Am Acad Child Psychiat, 48, 800-809, 2009). This study used logistic regression to determine whether childhood risk factors for early onset disorders, as previously identified, would also predict risk for later-onset disorders (n = 145). Early adolescent predictors of later-onset disorders were also examined.
Results: Consistent with early onset cases, female sex and parent-perceived child overweight at age 10 were significant multivariate predictors of binge eating and purging disorders with onset in later adolescence. Eating, weight, and shape concerns at age 14 were also significant in predicting later-onset disorders. In the final stepwise multivariate model, female sex and eating, weight, and shape concerns at age 14 were significant in predicting later-onset eating disorders, while parent-perceived child overweight at age 10 was not. Discussion: There is overlap between risk factors for binge eating and purging disorders with early and later onset. However, childhood exposures may be more important for early than later onset cases. abstract_id: PUBMED:29219202 Disturbance of gut satiety peptide in purging disorder. Objective: Little is known about biological factors that contribute to purging after normal amounts of food, the central feature of purging disorder (PD). This study comes from a series of nested studies examining ingestive behaviors in bulimic syndromes and specifically evaluated the satiety peptide YY (PYY) and the hunger peptide ghrelin in women with PD (n = 25), bulimia nervosa-purging (BNp) (n = 26), and controls (n = 26). Based on distinct subjective responses to a fixed meal in PD (Keel, Wolfe, Liddle, DeYoung, & Jimerson), we tested whether postprandial PYY response was significantly greater and ghrelin levels significantly lower in women with PD compared to controls and women with BNp. Method: Participants completed structured clinical interviews, self-report questionnaires, and laboratory assessments of gut peptide and subjective responses to a fixed meal. Results: Women with PD demonstrated a significantly greater postprandial PYY response compared to women with BNp and controls, who did not differ significantly. PD women also endorsed significantly greater gastrointestinal distress, and PYY predicted gastrointestinal distress. Ghrelin levels were significantly greater in PD and BNp compared to controls, but did not differ significantly between eating disorders. Women with BNp endorsed significantly greater postprandial hunger, and ghrelin predicted hunger. Discussion: PD is associated with a unique disturbance in PYY response. Findings contribute to growing evidence of physiological distinctions between PD and BNp. Future research should examine whether these distinctions account for differences in clinical presentation as this could inform the development of specific interventions for patients with PD. abstract_id: PUBMED:26969189 The importance of loss of control while eating in adolescents with purging disorder. Objective: Although many individuals with purging disorder (PD) report loss of control (LOC) eating, it is unclear whether they differ from those who do not, or from other eating disorders involving purging and/or LOC. Method: We compared PD with LOC (PD-LOC), PD without LOC (PD-noLOC), bulimia nervosa (BN), and anorexia nervosa-binge/purge subtype (AN-B/P) on measures of eating-related and general psychopathology in treatment-seeking adolescents. Results: PD-LOC comprised ∼30% of PD diagnoses. PD-LOC and PD-noLOC did not differ from one another, or from BN and AN-B/P, on most measures of psychopathology, with some exceptions. PD-noLOC was similar to AN-B/P (p = 0.99) and significantly different from BN on eating concerns (p < 0.001), while PD-LOC was similar to BN, AN-B/P, and PD-noLOC on this measure (ps ≥ 0.06).
PD-LOC reported higher self-esteem than BN, AN-B/P, and PD-noLOC (ps < 0.001). Discussion: PD was largely similar to other eating disorders characterized by purging, regardless of whether LOC eating was present. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:801-804). abstract_id: PUBMED:35366014 Evaluating the predictive validity of purging disorder by comparison to bulimia nervosa at long-term follow-up. Objective: The current study sought to examine the predictive validity of the purging disorder diagnosis at long-term follow-up by comparing naturalistic outcomes with bulimia nervosa. Method: Women with purging disorder (N = 84) or bulimia nervosa (N = 133) who had completed comprehensive baseline assessments as part of one of three studies between 2000 and 2012 were sought for follow-up assessment. Nearly all (94.5%) responded to recruitment materials and 150 (69% of sought sample; 83.3% non-Hispanic white; 33.40 [7.63] years old) participated at an average of 10.59 (3.71) years follow-up. Participants completed the Eating Disorder Examination, the Structured Clinical Interview for DSM-IV, and a questionnaire battery. Diagnostic groups were compared on eating disorder (illness status, recovery status, and eating pathology) and related outcomes. Group differences in predictors of outcome were explored. Results: There were no significant differences in eating disorder presence (p = .70), recovery status (p = .87), and level of eating pathology (p = .17) between diagnostic groups at follow-up. Post hoc equivalence tests indicated group differences were smaller than a medium effect size (p's ≤ .005). Groups differed in diagnosis at follow-up (p = .002); diagnostic stability was more likely than cross-over to bulimia nervosa for women with baseline purging disorder (p = .004). Discussion: Although purging disorder and bulimia nervosa do not differ in long-term outcomes, the relative stability in clinical presentation suggests baseline group differences in clinical presentation may be useful in augmenting treatments for purging disorder. Public Significance Statement: While purging disorder is classified as an "other specified" eating disorder, individuals who experience this disorder have comparable negative long-term outcomes as those with bulimia nervosa. This highlights the importance of screening for and treating purging disorder as a full-threshold eating disorder. abstract_id: PUBMED:38189475 Momentary skills use predicts decreased binge eating and purging early in day treatment: An ecological momentary assessment study. Objective: Emerging research indicates that skills acquisition may be important to behavior change in cognitive behavior therapy (CBT) for eating disorders. This study investigated whether skills use assessed in real time during the initial 4 weeks of CBT-based day treatment was associated with momentary eating disorder behavior change and rapid response to treatment. Methods: Participants with DSM-5 bulimia nervosa or purging disorder (N = 58) completed ecological momentary assessments (EMA) several times daily for the first 28 days of treatment. EMA assessed skills use, the occurrence of binge eating and/or purging, and state negative affect. Rapid response was defined as abstinence from binge eating and/or purging in the first 4 weeks of treatment. 
Results: Greater real-time skills use overall, and use of "planning ahead," "distraction," "social support," and "mechanical eating" skills in particular, were associated with a lower likelihood of engaging in binge eating or purging during the same period. After controlling for baseline group differences in overall difficulties with emotion regulation, rapid and non-rapid responders did not differ in overall skills use, or skills use at times of higher negative affect, during the EMA period. Discussion: Momentary use of skills appears to play an important role in preventing binge eating and purging, and certain skills appear to be particularly helpful. These findings contribute to the literature elucidating the processes by which CBT treatments for eating disorders work by providing empirical evidence that skills use helps to prevent binge eating and purging behaviors. Public Significance: Individuals with eating disorders learn new skills during treatment to help them improve their symptoms. This study shows that for people with eating disorders, using skills helps prevent eating disorder behaviors in the moment. Certain skills may be particularly helpful, including planning ahead, distracting activities, support from others, and focusing on eating meals and snacks regardless of how one is feeling. These findings help us better understand how treatments work. abstract_id: PUBMED:27790806 Differences in Cortisol Awakening Response between Binge-Purging and Restrictive Patients with Anorexia Nervosa. Malnutrition and childhood trauma were shown to affect in opposite way the cortisol awakening response (CAR) of patients with anorexia nervosa (AN). To assess the influence of binge-purging behaviour on the CAR of AN patients, we measured the CAR of restrictive AN (ANR) or binge-purging AN (ANBP) patients without history of childhood maltreatment. Seventeen ANBP women, 18 ANR women and 42 healthy women collected saliva samples at awakening and after 15, 30 and 60 min, and filled in the Eating Disorder Inventory-2 (EDI-2). ANR and ANBP patients exhibited a CAR significantly higher than healthy women. Moreover, the CAR of ANBP women was even higher than that of ANR women and positively correlated with the bulimia subitem scores of the EDI-2. Present findings show, for the first time, differences in the CAR between ANBP and ANR subtypes, which may suggest a possible connection between the HPA axis functioning and binge-purging. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association. Answer: Purging disorder (PD) is a variant of eating disorder characterized by recurrent purging behaviors to influence weight or shape in the absence of binge eating episodes typically seen in bulimia nervosa (BN). The research on PD suggests that it is a clinically significant condition with its own unique features and challenges. Studies have shown that individuals with PD may experience a loss of control (LOC) while eating normal or small amounts of food, which is associated with greater disinhibition around food, hunger, depressive symptoms, negative urgency, distress, and impairment (PUBMED:24185981). This suggests that LOC eating is a significant feature of PD and should be considered in future definitions and treatments of the disorder. Physiological differences have also been observed in individuals with PD. 
For example, women with PD showed greater insulin release compared to those with BN, and there was a stronger association between higher insulin responses and higher nausea and urge to vomit in women with PD and BN (PUBMED:31976573). Additionally, PD is associated with a unique disturbance in the satiety peptide YY (PYY) response, which contributes to greater gastrointestinal distress in these individuals (PUBMED:29219202). The eating patterns of individuals with PD also differ from those with other eating disorders. For instance, youth with binge eating/purging type anorexia nervosa (AN-BE/P) tend to have more irregular eating patterns, which may play a role in binge eating and purging behaviors (PUBMED:24777645). Furthermore, the presence of LOC while eating in adolescents with PD does not significantly differentiate them from those without LOC or from other eating disorders involving purging and/or LOC (PUBMED:26969189). Long-term outcomes for individuals with PD are comparable to those with BN, indicating that PD should be treated as a full-threshold eating disorder due to its significant negative outcomes (PUBMED:35366014). Moreover, momentary skills use during treatment, such as planning ahead and seeking social support, has been found to predict decreased binge eating and purging in individuals with PD (PUBMED:38189475). In summary, while PD shares some similarities with BN, it also has distinct features that warrant its recognition as a separate clinical entity. The research indicates that PD is not a less severe form of BN but rather an ominous variant with its own set of challenges, including physiological differences, the significance of LOC eating, and the need for specific interventions (PUBMED:24185981; PUBMED:31976573; PUBMED:29219202). The importance of recognizing and treating PD as a serious eating disorder is underscored by its significant physiological and psychological impacts, as well as its long-term outcomes that are as severe as those seen in BN (PUBMED:35366014). Additionally, differences in the cortisol awakening response (CAR) between restrictive AN (ANR) and binge-purging AN (ANBP) patients suggest a possible connection between the hypothalamic-pituitary-adrenal (HPA) axis functioning and binge-purging behaviors (PUBMED:27790806). This finding may have implications for understanding the neuroendocrine aspects of PD, given that PD involves purging without the presence of binge eating. Overall, the evidence suggests that PD is a significant and distinct eating disorder that requires attention and tailored treatment approaches. It is not simply a variant of BN but a disorder with its own clinical presentation, risk factors, and treatment needs.
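Editorial note: one of the cited studies (PUBMED:25643935) analyzed repeated ecological momentary assessment records with generalized estimating equations, which account for the correlation among observations from the same participant. A minimal sketch of that type of analysis on synthetic data follows; the variable names, sample sizes, and effect sizes are invented for illustration and do not reproduce the study's dataset or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for momentary (EMA) records: repeated observations
# nested within participants. All names and values are hypothetical.
rng = np.random.default_rng(0)
n_subjects, n_obs = 40, 20
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_obs),
    "binge": rng.integers(0, 2, n_subjects * n_obs),
    "negative_affect": rng.normal(0, 1, n_subjects * n_obs),
})
# Purging made loosely dependent on binge eating and negative affect.
linpred = -1.5 + 1.2 * df["binge"] + 0.5 * df["negative_affect"]
df["purge"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

# GEE with an exchangeable working correlation accounts for the fact that
# observations from the same person are not independent.
model = smf.gee("purge ~ binge + negative_affect", groups="subject",
                data=df, family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```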
Instruction: BRAF, KRAS and PIK3CA mutations in colorectal serrated polyps and cancer: primary or secondary genetic events in colorectal carcinogenesis? Abstracts: abstract_id: PUBMED:18782444 BRAF, KRAS and PIK3CA mutations in colorectal serrated polyps and cancer: primary or secondary genetic events in colorectal carcinogenesis? Background: BRAF, KRAS and PIK3CA mutations are frequently found in sporadic colorectal cancer (CRC). In contrast to KRAS and PIK3CA mutations, BRAF mutations are associated with tumours harbouring CpG Island methylation phenotype (CIMP), MLH1 methylation and microsatellite instability (MSI). We aimed to determine the frequency of KRAS, BRAF and PIK3CA mutations in the process of colorectal tumourigenesis using a series of colorectal polyps and carcinomas. In the series of polyps, CIMP, MLH1 methylation and MSI were also studied. Methods: Mutation analyses were performed by PCR/sequencing. Bisulfite-treated DNA was used to study CIMP and MLH1 methylation. MSI was detected by pentaplex PCR and Genescan analysis of quasimonomorphic mononucleotide repeats. Chi-square test and Fisher's exact test were used to perform association studies. Results: KRAS, PIK3CA or BRAF mutations occurred in 71% of polyps and were mutually exclusive. KRAS mutations occurred in 35% of polyps. PIK3CA was found in one of the polyps. V600E BRAF mutations occurred in 29% of cases, all of them classified as serrated adenoma. CIMP phenotype occurred in 25% of the polyps and all were mutated for BRAF. MLH1 methylation was not detected and all the polyps were microsatellite stable. The comparison between the frequency of oncogenic mutations in polyps and CRC (MSI and MSS) led us to demonstrate that KRAS and PIK3CA are likely to precede both types of CRC. BRAF mutations are likely to precede MSI carcinomas since the frequency found in serrated polyps is similar to what is found in MSI CRC (P = 0.9112), but statistically different from what is found in microsatellite stable (MSS) tumours (P = 0.0191). Conclusion: Our results show that BRAF, KRAS and PIK3CA mutations occur prior to malignant transformation demonstrating that these oncogenic alterations are primary genetic events in colorectal carcinogenesis. Further, we show that BRAF mutations occur in association with CIMP phenotype in colorectal serrated polyps and verified that colorectal serrated polyps and MSI CRC show a similar frequency of BRAF mutations. These results support that BRAF mutations harbour a mild oncogenic effect in comparison to KRAS and suggest that BRAF mutant colorectal cells need to accumulate extra epigenetic alterations in order to acquire full transformation and evolve to MSI CRC. abstract_id: PUBMED:21932420 Oncogenic PIK3CA mutations in colorectal cancers and polyps. Oncogenic PIK3CA mutations contribute to colorectal tumorigenesis by activating AKT signaling to decrease apoptosis and increase tumor invasion. A synergistic association of PIK3CA mutation with KRAS mutation has been suggested to increase AKT signaling and resistance to anti-epidermal growth factor receptor inhibitor therapy for advanced colorectal cancer, although studies have been conflicting. We sought to clarify this by examining PIK3CA mutation frequency in relation to other key molecular features of defined pathways of tumorigenesis. PIK3CA mutation was assessed by high resolution melt analysis in 829 colorectal cancer samples and 426 colorectal polyps.
Mutations were independently correlated with clinicopathological features including patient age, sex and tumor location as well as molecular features including microsatellite instability, KRAS and BRAF mutation, MGMT methylation and the CpG Island Methylator Phenotype (CIMP). Mutations at the helical (Exon 9) and catalytic (Exon 20) domain hotspots were also examined independently. Overall, PIK3CA mutation was positively correlated with KRAS mutation (p < 0.001), MGMT methylation (p = 0.007) and CIMP (p < 0.001). Novel, exon-specific associations linked Exon 9 mutations to a subgroup of cancers characterized by KRAS mutation, MGMT methylation and CIMP-Low, whilst Exon 20 mutations were more closely linked to features of serrated pathway tumors including BRAF mutation, microsatellite instability and CIMP-High or Low. PIK3CA mutations were uncommonly, but exclusively, seen in tubulovillous adenomas (4/124, 3.2%) and 1/4 (25.0%) tubulovillous adenomas with a focus of cancer. These data provide insight into the molecular events driving traditional versus serrated pathway tumorigenesis. abstract_id: PUBMED:24433726 Immunophenotypes and gene mutations in colorectal precancerous lesions and adenocarcinoma. Objective: To analyze immunophenotypes and gene mutations of colorectal precancerous lesions and adenocarcinoma, and to compare the differences in carcinogenetic mechanisms between the two precancerous lesions. Methods: Fifty-three cases of colorectal serrated lesions including 30 hyperplastic polyps, 20 sessile serrated adenomas (SSA) and 3 mixed polyps were collected from January 2006 to June 2012. Forty-five cases of traditional adenomas and 50 cases of colorectal adenocarcinomas were also recruited. Thirty hyperplastic polyps, 20 cases of SSA, 3 mixed polyps and 45 traditional adenomas were investigated by immunohistochemistry for the expression of DNA mismatch repair (MMR) proteins (MLH1, MSH2 and MSH6) and DNA methyltransferase MGMT. Mutations of KRAS, BRAF and PIK3CA genes in 10 cases of SSAs, 10 traditional adenomas, 1 mixed polyp and 50 colorectal adenocarcinomas were analyzed by PCR followed by direct Sanger sequencing. Results: (1) Only 3 cases of hyperplastic polyps lost MLH1 expression, and none of the SSAs or traditional adenomas showed loss of MLH1. The negative expression rates of MSH2, MSH6 and MGMT in hyperplastic polyps and SSA were significantly higher than those of traditional adenomas. (2) KRAS mutation was found in 5/10 cases of SSAs, 5/10 traditional adenomas and 1/1 mixed polyp. (3) Colorectal adenocarcinomas harbored the mutations of KRAS (48%, 24/50), BRAF (6%, 3/50) and PIK3CA (4%, 2/50). Conclusions: Immunophenotypic and gene mutation profiles are different between colorectal serrated lesions and traditional adenomas. Alterations of MMR and MGMT expression play important roles in the pathogenesis of "serrated neoplasm". KRAS mutation is a significant genetic change in the early phase of colorectal carcinogenesis. abstract_id: PUBMED:34372923 Evaluation of global and intragenic hypomethylation in colorectal adenomas improves patient stratification and colorectal cancer risk prediction. Background: Aberrant DNA hypomethylation of the long interspersed nuclear elements (LINE-1 or L1) has been recognized as an early event of colorectal transformation. Simultaneous genetic and epigenetic analysis of colorectal adenomas may be an effective and rapid strategy to identify key biological features leading to accelerated colorectal tumorigenesis.
In particular, global and/or intragenic LINE-1 hypomethylation of adenomas may represent a helpful tool for improving colorectal cancer (CRC) risk stratification of patients after surgical removal of polyps. To verify this hypothesis, we analyzed a cohort of 102 adenomas derived from 40 high-risk patients (who developed CRC in a post-polypectomy of at least one year) and 43 low-risk patients (who did not develop CRC in a post-polypectomy of at least 5 years) for their main pathological features, the presence of hotspot variants in driver oncogenes (KRAS, NRAS, BRAF and PIK3CA), global (LINE-1) and intragenic (L1-MET) methylation status. Results: In addition to a significantly higher adenoma size and an older patients' age, adenomas from high-risk patients were more hypomethylated than those from low-risk patients for both global and intragenic LINE-1 assays. DNA hypomethylation, measured by pyrosequencing, was independent from other parameters, including the presence of oncogenic hotspot variants detected by mass spectrometry. Combining LINE-1 and L1-MET analyses and profiling the samples according to the presence of at least one hypomethylated assay improved the discrimination between high and low risk lesions (p = 0.005). Remarkably, adenomas with at least one hypomethylated assay identified the patients with a significantly (p < 0.001) higher risk of developing CRC. Multivariable analysis and logistic regression evaluated by the ROC curves proved that methylation status was an independent variable improving cancer risk prediction (p = 0.02). Conclusions: LINE-1 and L1-MET hypomethylation in colorectal adenomas are associated with a higher risk of developing CRC. DNA global and intragenic hypomethylation are independent markers that could be used in combination to successfully improve the stratification of patients who enter a colonoscopy surveillance program. abstract_id: PUBMED:29181128 Telomere shortening in non-tumorous and tumor mucosa is independently related to colorectal carcinogenesis in precancerous lesions. Telomere shortening is associated with colorectal carcinogenesis and recent studies have focused on its characteristics in both normal mucosa and tumor tissues. To clarify the role of telomeres in colorectal carcinogenesis, we analyzed telomere shortening in normal and tumor regions of 93 colorectal precursor lesions. Telomere length was examined in 61 tubular adenomas (TAs) and 32 serrated polyps (SPs), and PIK3CA expression, KRAS mutation, BRAF mutation, and MSI were also analyzed. Telomere length was similar in normal and tumor tissues of TAs and SPs. In normal tissues of TAs, telomere shortening was associated with PIK3CA amplification (81.3% vs. 18.8%, p < 0.001), whereas it was associated with BRAF mutation in normal tissues of SPs (66.7% vs. 23.1%, p = 0.060). According to the analysis on tumor tissues, KRAS and BRAF mutations were mutually exclusive in TAs and SPs (p < 0.001), and telomere shortening was associated with mitochondrial microsatellite instability (63.6% vs. 36.4%, p = 0.030). These data suggested a pivotal role of telomere shortening in normal colorectal tissue for proceeding to TAs or SPs along with PIK3CA amplification and BRAF mutation, respectively. Moreover, telomeres in TAs may collaborate with mitochondrial instability for disease progression. abstract_id: PUBMED:24935274 Progress and opportunities in molecular pathological epidemiology of colorectal premalignant lesions. 
Molecular pathological epidemiology (MPE) is an integrative molecular and population health science that addresses the molecular pathogenesis and heterogeneity of disease processes. The MPE of colonic and rectal premalignant lesions (including hyperplastic polyps, tubular adenomas, tubulovillous adenomas, villous adenomas, traditional serrated adenomas, sessile serrated adenomas/sessile serrated polyps, and hamartomatous polyps) can provide unique opportunities for examining the influence of diet, lifestyle, and environmental exposures on specific pathways of carcinogenesis. Colorectal neoplasia can provide a practical model by which both malignant epithelial tumor (carcinoma) and its precursor are subjected to molecular pathological analyses. KRAS, BRAF, and PIK3CA oncogene mutations, microsatellite instability, CpG island methylator phenotype, and LINE-1 methylation are commonly examined tumor biomarkers. Future opportunities include interrogation of comprehensive genomic, epigenomic, or panomic datasets, and the adoption of in vivo pathology techniques. Considering the colorectal continuum hypothesis and emerging roles of gut microbiota and host immunity in tumorigenesis, detailed information on tumor location is important. There are unique strengths and caveats, especially with regard to case ascertainment by colonoscopy. The MPE of colorectal premalignant lesions can identify etiologic exposures associated with neoplastic initiation and progression, help us better understand colorectal carcinogenesis, and facilitate personalized prevention, screening, and therapy. abstract_id: PUBMED:21263251 Sessile serrated adenoma with early neoplastic progression: a clinicopathologic and molecular study. Sessile serrated adenoma (SSA), also referred to as sessile serrated polyp, has been proposed as a precursor lesion to microsatellite unstable carcinoma. However, the mechanism of stepwise progression from SSA to early invasive carcinoma has been unclear. The purpose of this study was to elucidate the histologic characteristics and possible role of p53, β-catenin, BRAF, KRAS, and PIK3CA in the development and progression of SSA. We analyzed 12 cases of SSA with neoplastic progression (SSAN), including 7 cases of intraepithelial high-grade dysplasia (HGD) and 5 cases of submucosal invasive carcinoma, and compared them with 53 SSAs and 66 hyperplastic polyps (HPs) by immunohistochemistry and gene mutation analysis. Histologically, 75% (9 of 12) of SSANs showed tubular or tubulovillous growth patterns rather than serrated ones in the HGD/intramucosal carcinoma component. All 5 SSANs with invasive carcinoma lost their serrated structure and developed increased extracellular mucin in their submucosal carcinoma component, a consistent feature of mucinous adenocarcinoma. Nuclear accumulations of β-catenin and p53 were observed in 50% (6 of 12) and 41.7% (5 of 12) of SSANs, respectively, and were exclusively present in HGD/carcinoma areas. By contrast, neither nuclear β-catenin nor p53 expressions were seen in HPs or SSAs (P<0.0001). BRAF mutations (V600E) were observed in 45.8% (11 of 24) of HPs, 60.9% (14 of 23) of SSAs, and 63.6% (7 of 11) of SSANs, and were equally found in both SSA and carcinoma/HGD areas of the individual SSANs. KRAS exon 1 mutations were uncommon in all 3 groups (4.2%, 4.4%, and 0%, respectively). No mutations of PIK3CA exon 9 or exon 20 were found in any cases that were examined. 
These findings suggest that BRAF mutations may be associated with the pathogenesis of SSA, but progression to HGD or early invasive carcinoma may be associated with other factors, such as alterations of p53 and β-catenin. In addition, our histologic observations suggest a possible close association between SSAN and mucinous adenocarcinoma. abstract_id: PUBMED:26808395 Clinicopathological, endoscopic, and molecular characteristics of the "skirt" - a new entity of lesions at the margin of laterally spreading tumors. Background And Study Aim: A slightly elevated flat lesion with wide pits has occasionally been observed at the margin of laterally spreading tumors (LSTs) and is known as a "skirt." The aim of this study was to evaluate the clinicopathological, endoscopic, and genetic characteristics of a skirt. Patients And Methods: Consecutive LSTs were examined to evaluate the pathological, endoscopic, and genetic characteristics. Pathological characteristics, including the dimension of the cryptic opening (DCO), width of the individual gland (WIG), DCO to WIG ratio, and microvessel diameter were elucidated and compared with those of hyperplastic polyps, low grade dysplasia (LGD), and normal mucosa. The endoscopic findings of pit and microvascular patterns were assessed. Gene mutation analyses were performed for the KRAS, BRAF, NRAS, and PIK3CA genes. Results: A skirt was identified in 35 of 1023 LSTs, and 80 % of lesions with a skirt had a component of either intramucosal or submucosal adenocarcinoma. The DCO, WIG, and DCO to WIG ratio of the skirt were significantly larger than those of other lesions. The microvessel diameters in skirts were significantly smaller than those in LGDs. Regarding the endoscopic findings, 30 skirts showed pits with a coral reef-like appearance, and all skirt regions were found to have a type I capillary pattern. KRAS mutation at codon 146 was found in the nodular part in one of five LSTs with a skirt. Conclusion: The skirt is a newly identified lesion distinct from hyperplastic polyps and LGDs, suggesting the presence of a novel pathway for rectal carcinogenesis from LSTs with a skirt. abstract_id: PUBMED:22808230 Expression of Abelson interactor 1 (Abi1) correlates with inflammation, KRAS mutation and adenomatous change during colonic carcinogenesis. Background: Abelson interactor 1 (Abi1) is an important regulator of actin dynamics during cytoskeletal reorganization. In this study, our aim was to investigate the expression of Abi1 in colonic mucosa with and without inflammation, colonic polyps, colorectal carcinomas (CRC) and metastases as well as in CRC cell lines with respect to BRAF/KRAS mutation status and to find out whether introduction of KRAS mutation or stimulation with TNFalpha enhances Abi1 protein expression in CRC cells. Methodology/principal Findings: We immunohistochemically analyzed Abi1 protein expression in 126 tissue specimens from 95 patients and in 5 colorectal carcinoma cell lines with different mutation status by western immunoblotting. We found that Abi1 expression correlated positively with KRAS, but not BRAF mutation status in the examined tissue samples. Furthermore, Abi1 is overexpressed in inflammatory mucosa, sessile serrated polyps and adenomas, tubular adenomas, invasive CRC and CRC metastasis when compared to healthy mucosa and BRAF-mutated as well as KRAS wild-type hyperplastic polyps. Abi1 expression in carcinoma was independent of microsatellite stability of the tumor. 
Abi1 protein expression correlated with KRAS mutation in the analyzed CRC cell lines, and upregulation of Abi1 could be induced by TNFalpha treatment as well as transfection of wild-type CRC cells with mutant KRAS. The overexpression of Abi1 could be abolished by treatment with the PI3K-inhibitor Wortmannin after KRAS transfection. Conclusions/significance: Our results support a role for Abi1 as a downstream target of inflammatory response and adenomatous change as well as oncogenic KRAS mutation via PI3K, but not BRAF activation. Furthermore, they highlight a possible role for Abi1 as a marker for early KRAS mutation in hyperplastic polyps. Since the protein is a key player in actin dynamics, our data encourage further studies concerning the exact role of Abi1 in actin reorganization upon enhanced KRAS/PI3K signalling during colonic tumorigenesis. abstract_id: PUBMED:20833970 Cathepsin B expression and survival in colon cancer: implications for molecular detection of neoplasia. Background And Aims: Proteases play a critical role in tumorigenesis and are upregulated in colorectal cancer and neoplastic polyps. In animal models, cathepsin B (CTSB)-activatable imaging agents show high enzyme activity within intestinal tumors. Methods: We conducted a prospective cohort study of 558 men and women with colon cancer with tumors that were accessible for immunohistochemical assessment. We used Cox proportional hazards models, stratified by stage, to compute colon cancer-specific and overall mortality according to tumoral expression of CTSB. Results: Among 558 participants, 457 (82%) had tumors that expressed CTSB (CTSB positive) and 101 (18%) had tumors that did not express CTSB (CTSB negative). CTSB expression was not associated with disease stage (P = 0.19). After a median follow-up of 11.6 years, there were 254 total and 155 colon cancer-specific deaths. Compared with participants with CTSB-negative tumors, participants with CTSB-positive tumors experienced a multivariate hazard ratio for colon cancer-specific mortality of 1.99 (95% confidence interval, 1.19-3.34) and overall mortality of 1.71 (95% confidence interval, 1.16-2.50). CTSB expression was independently associated with KRAS (P = 0.01) and BRAF mutation (P = 0.04), but not microsatellite instability status, CpG island methylator phenotype status, PIK3CA mutation, LINE-1 methylation, TP53 expression, or PTGS2 (cyclooxygenase-2) expression. Among 123 individuals with adenomas, 91% expressed CTSB. Conclusions: As assessed by immunohistochemistry, CTSB is expressed in the vast majority of colon cancers, independent of stage, and is significantly associated with higher risk of colon cancer-specific and overall mortality. Impact: These results support the potential of CTSB as a target for image detection of neoplastic lesions in humans. Answer: The mutations in BRAF, KRAS, and PIK3CA are considered primary genetic events in colorectal carcinogenesis. These mutations occur prior to malignant transformation, indicating their role in the early stages of tumor development. In a study of colorectal polyps and carcinomas, it was found that mutations in KRAS, PIK3CA, or BRAF were present in 71% of polyps and were mutually exclusive. Specifically, KRAS mutations were found in 35% of polyps, PIK3CA mutations were rare but present in one of the polyps, and V600E BRAF mutations occurred in 29% of cases, all of which were classified as serrated adenoma.
The BRAF mutations were associated with the CpG Island methylation phenotype (CIMP) in 25% of the polyps, although MLH1 methylation was not detected, and all polyps were microsatellite stable. The frequency of BRAF mutations in serrated polyps was similar to that in microsatellite instability (MSI) colorectal cancer (CRC), suggesting that BRAF mutations likely precede MSI carcinomas. In contrast, the frequency of BRAF mutations was statistically different from that found in microsatellite stable (MSS) tumors, indicating a distinct pathway (PUBMED:18782444). Additionally, PIK3CA mutations were found to be positively correlated with KRAS mutations, MGMT methylation, and CIMP. Exon-specific associations linked Exon 9 mutations to a subgroup of cancers characterized by KRAS mutation, MGMT methylation, and CIMP-Low, while Exon 20 mutations were more closely linked to features of serrated pathway tumors, including BRAF mutation, microsatellite instability, and CIMP-High or Low (PUBMED:21932420). Furthermore, KRAS mutations were identified as significant genetic changes in the early phase of colorectal carcinogenesis, with mutations found in serrated lesions, traditional adenomas, and colorectal adenocarcinomas (PUBMED:24433726). In summary, the presence of BRAF, KRAS, and PIK3CA mutations in colorectal polyps and their association with specific molecular phenotypes such as CIMP and MSI support the notion that these oncogenic alterations are primary genetic events in the process of colorectal tumorigenesis.
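Editorial note: the first abstract (PUBMED:18782444) compares BRAF mutation frequencies between serrated polyps and MSI versus MSS carcinomas using chi-square and Fisher's exact tests (P = 0.9112 and P = 0.0191). The sketch below shows how such a two-group comparison of mutation frequencies is typically run. The counts are hypothetical placeholders, since the abstracts do not report the full per-group denominators, so the printed p-values will not match the published ones.

```python
from scipy.stats import fisher_exact

def compare_mutation_rates(mut_a, total_a, mut_b, total_b):
    """Two-sided Fisher's exact test comparing mutation frequencies
    between two lesion groups, given mutated counts and group sizes."""
    table = [[mut_a, total_a - mut_a],
             [mut_b, total_b - mut_b]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    return odds_ratio, p_value

# Hypothetical counts for illustration only.
print(compare_mutation_rates(10, 34, 12, 40))  # e.g. serrated polyps vs MSI CRC
print(compare_mutation_rates(10, 34, 4, 80))   # e.g. serrated polyps vs MSS CRC
```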
Instruction: Pharmacologic interventions after an LD50 cocaine insult in a chronically instrumented rat model: are beta-blockers contraindicated? Abstracts: abstract_id: PUBMED:1676569 Pharmacologic interventions after an LD50 cocaine insult in a chronically instrumented rat model: are beta-blockers contraindicated? Study Purpose: To evaluate drug management of acute cocaine toxicity in a new animal model. The study null hypothesis was that no drug would affect outcome compared with a placebo control. Model: Chronically instrumented, conscious, unrestrained rats subjected to an LD50 dose (1.4 mg/100 g body wt) of IV cocaine. Methods: Rapid injection of IV cocaine by IV vascular access port followed by injection of therapeutic study drugs. Outcome (survival vs death) with drug treatment was compared with a placebo group. PHARMACOLOGIC INTERVENTIONS: Normal saline placebo (0.2 mL/100 g), diazepam (0.5 mg/100 g), chlorpromazine (0.2 mg/100 g), propranolol (1.0 mg/100 g), labetalol (3.0 mg/100 g), or verapamil (0.1 mg/100 g) was given. There were ten rats in each treatment group. Allocation to treatment groups was nonrandomized. Statistical Methods: Fisher's exact test. Results: IV injection of cocaine was followed by tonic-clonic seizure activity in all treatment groups. No study drug significantly improved survival compared with the placebo group. However, no animal treated with propranolol survived (P less than .05 vs saline control), and only one of ten animals treated with labetalol survived (P = .14 vs placebo group). Conclusion: In this conscious animal model subjected to an LD50 IV cocaine insult, chlorpromazine and diazepam, previously shown to be of value in other animal models, had no effect on survival. Verapamil also did not affect outcome. Outcome was adversely affected by treatment with beta-blocking agents. abstract_id: PUBMED:8694050 Pregnancy enhances cocaine-induced stimulation of uterine contractions in the chronically instrumented rat. Objective: Our purpose was to test whether cocaine stimulates uterine activity in nonpregnant and pregnant rats. Study Design: The carotid artery and jugular vein were chronically catheterized, and a microballoon probe was inserted into the uterine cavity of 15 pregnant and 14 nonpregnant female rats. Conscious animals received a bolus dose of either cocaine or saline solution intravenously. Cardiovascular and uterine contractile responses were studied. Results: Cocaine (2.5 mg/kg) induced a marked increase in uterine activity and arterial blood pressure in both pregnant and nonpregnant animals without producing systemic toxicity. The maximum change in uterine contractions was greater in the pregnant group than in the nonpregnant group, and blood pressure responses were transient in both. Conclusion: This study is the first demonstration that cocaine stimulates the rat uterus in vivo, with a greater increase in contractions in pregnant compared with nonpregnant animals. These differences are not related to the hemodynamic response or pharmacokinetic profile of cocaine. abstract_id: PUBMED:20206876 Cocaine and beta-blockers: the paradigm. Cocaine is one of the most commonly used substances of abuse. The use of beta-blockers in cocaine induced acute coronary syndrome has long been a matter of debate. While it is widely believed that beta-blockers are contraindicated in cocaine toxicity, there appears to be some recognizable role for certain beta-blockers in ameliorating the cardiovascular as well as central nervous system effects of cocaine. 
This article explores the role of beta-blockers in the management of cocaine toxicity. abstract_id: PUBMED:28785697 Early use of beta blockers in patients with cocaine-associated chest pain. Background: The most common symptom of cocaine abuse is chest pain. Cocaine-induced chest pain (CICP) shares pathophysiological pathways with the acute coronary syndromes (ACS). A key event is increased activity of the adrenergic system. Beta blockers (BBs), a cornerstone in the treatment of ACS, are felt to be contraindicated in the patient with CICP due to the potential for an "unopposed alpha adrenergic effect" (UAE). Objectives: To identify signs of UAE and in-hospital complications in patients who received a BB while having cocaine-induced chest pain. Methods: We performed a retrospective review of 378 patients admitted to a medical unit because of CICP. Twenty-six of these were given a BB at the time of admission while having CICP. We compared these patients to a control group paired by age, sex, race and history of hypertension who did not receive a BB while having CICP. Blood pressure, heart rate, length of stay and in-hospital cardiovascular complications were compared. Results: No statistically significant differences were found between the two groups except for a longer length of stay in the case group. This was felt to be due to unrelated causes. Conclusions: This study does not support the presence of a UAE in patients with ongoing CICP who were treated early with a BB. There were no in-hospital cardiovascular complications in the group of patients who had an early dose of a BB while having CICP. Implications: BBs appeared safe when given early on admission to patients with CICP. abstract_id: PUBMED:10461893 Nicotinic acetylcholine receptor antagonistic activity of monoamine uptake blockers in rat hippocampal slices. The aim of our study was to investigate the effect of different monoamine uptake blockers on the nicotine-evoked release of [3H]noradrenaline ([3H]NA) from rat hippocampal slices. We found that desipramine (DMI), nisoxetine, cocaine, citalopram, and nomifensine inhibit the nicotine-evoked release of [3H]NA with an IC50 of 0.36, 0.59, 0.81, 0.93, and 1.84 microM, respectively. These IC50 values showed no correlation with the inhibitory effect (Ki) of monoamine uptake blockers on the neuronal NA transporter (r = 0.17, slope = 0.02), indicating that the NA uptake system is not involved in the process. In whole-cell patch clamp experiments neither drug blocked Na+ currents at 1 microM in sympathetic neurons from rat superior cervical ganglia, and only DMI produced a pronounced inhibition (52% decrease) at 10 microM. Comparison of the effect of DMI and tetrodotoxin (TTX) on the electrical stimulation- and nicotine-evoked release of [3H]NA showed that DMI, in contrast to TTX, inhibits only the nicotine-induced response, indicating that the target of DMI is not the Na+ channel. Our data suggest that monoamine uptake blockers with different chemical structure and selectivity are able to inhibit the nicotinic acetylcholine receptors in the CNS. Because these compounds are widely used in the therapy of depressed patients, our findings may have great importance in the evaluation of their clinical effects. abstract_id: PUBMED:7853196 Calcium channel blockers antagonize some of cocaine's cardiovascular effects, but fail to alter cocaine's behavioral effects.
The effects of cocaine alone and in combination with the calcium channel blockers nimodipine, verapamil and diltiazem were determined for different groups of squirrel monkeys on cardiovascular function, schedule-controlled behavior and drug self-administration. Cocaine alone (0.3 mg/kg) produced increases in both blood pressure and heart rate. All three calcium channel blockers antagonized the pressor effect, but were ineffective against the tachycardiac effect of cocaine. Nimodipine was the most potent agent in antagonizing the pressor effect of cocaine. Response rates for monkeys responding on a second-order schedule of food presentation were increased by intermediate doses of cocaine (0.1-1.0 mg/kg) and were primarily decreased at a higher dose (3.0 mg/kg). Quarter-life values, an index of response patterning, were only decreased by cocaine. None of the calcium channel blockers altered cocaine's effects on either response rate or response patterning. In the self-administration experiments, the training dose of 56 micrograms/kg cocaine maintained high rates of responding on a simple fixed-ratio schedule. As with schedule-controlled behavior, none of the calcium channel blockers altered cocaine self-administration even when administered before self-administration sessions during 5 consecutive days. These results suggest that the calcium channel blockers may be useful in treating cardiovascular-related complications after cocaine use, but they would not be effective as long-term treatment agents for cocaine abuse. abstract_id: PUBMED:30562494 Clinical Outcomes After Treatment of Cocaine-Induced Chest Pain with Beta-Blockers: A Systematic Review and Meta-Analysis. Background: Recent guidelines have suggested avoiding beta-blockers in the setting of cocaine-associated acute coronary syndrome. However, the available evidence is both scarce and conflicted. The purpose of this systematic review and meta-analysis is to investigate the evidence pertaining to the use of beta-blockers in the setting of acute cocaine-related chest pain and its implication on clinical outcomes. Methods: Electronic databases were systematically searched to identify literature relevant to patients with cocaine-associated chest pain who were treated with or without beta-blockers. We examined the end-points of in-hospital all-cause mortality and myocardial infarction. Pooled risk ratios (RR) and their 95% confidence intervals (CI) were calculated for all outcomes using a random-effects model. Results: Five studies with a total of 1447 patients were included. Our analyses found no differences between patients treated with or without beta-blockers for either myocardial infarction (RR 1.08; 95% CI, 0.61-1.91) or all-cause mortality (RR 0.75; 95% CI, 0.46-1.24). Heterogeneity among included studies was low to moderate. Conclusion: This systematic review and meta-analysis suggests that beta-blocker use is not associated with adverse clinical outcomes in patients presenting with acute chest pain related to cocaine use. abstract_id: PUBMED:16242230 Differential cytotoxic responses of PC12 cells chronically exposed to psychostimulants or to hydrogen peroxide. Repeated abuse of stimulant drugs, cocaine and amphetamine, is associated with extraneuronal dopamine accumulation in specific brain areas. Dopamine may be cytotoxic through the generation of reactive oxygen species, namely hydrogen peroxide (H2O2), resulting from dopamine oxidative metabolism. 
In this work, we studied the cytotoxicity in PC12 cells (a dopaminergic neuronal model) chronically and/or acutely exposed to cocaine or amphetamine, as compared to H2O2 exposure. Chronic cocaine treatment induced sensitization to acute cocaine insult and increased cocaine-evoked accumulation of extracellular dopamine, although no changes in dihydroxyphenylacetic acid (DOPAC) levels were observed. Moreover, dopamine was depleted in cells chronically exposed to amphetamine and acute amphetamine toxicity persisted in these cells, indicating that dopamine was not involved in amphetamine cytotoxicity. PC12 cells chronically treated with H2O2 were totally resistant to acute H2O2, but not to acute cocaine or amphetamine exposure, suggesting that the toxicity induced by these stimulant drugs is unrelated to adaptation to oxidative stress. Interestingly, chronic cocaine treatment largely, but not completely, protected the cells against a H2O2 challenge, whilst a decrement in intracellular ATP was observed. This study shows that chronic treatment of PC12 cells with cocaine or H2O2 modifies the cytotoxic response to an acute exposure to these agents. abstract_id: PUBMED:28399647 β-Blockers, Cocaine, and the Unopposed α-Stimulation Phenomenon. Cocaine abuse remains a significant worldwide health problem. Patients with cardiovascular toxicity from cocaine abuse frequently present to the emergency department for treatment. These patients may be tachycardic, hypertensive, agitated, and have chest pain. Several pharmacological options exist for treatment of cocaine-induced cardiovascular toxicity. For the past 3 decades, the phenomenon of unopposed α-stimulation after β-blocker use in cocaine-positive patients has been cited as an absolute contraindication, despite limited and inconsistent clinical evidence. In this review, the authors of the original studies, case reports, and systematic review in which unopposed α-stimulation was believed to be a factor investigate the pathophysiology, pharmacology, and published evidence behind the unopposed α-stimulation phenomenon. We also investigate other potential explanations for unopposed α-stimulation, including the unique and deleterious pharmacologic properties of cocaine in the absence of β-blockers. The safety and efficacy of the mixed β-/α-blockers labetalol and carvedilol are also discussed in relation to unopposed α-stimulation. abstract_id: PUBMED:20498415 Beta-blockers for chest pain associated with recent cocaine use. Background: Although beta-blockers prevent adverse events after myocardial infarction, they are contraindicated when chest pain is associated with recent cocaine use. Recommendations against this use of beta-blockers are based on animal studies, small human experiments, and anecdote. We sought to test the hypothesis that beta-blockers are safe in this setting. Methods: We performed a retrospective cohort study of consecutive patients admitted to the San Francisco General Hospital, San Francisco, California, with chest pain and urine toxicologic test results positive for cocaine, from January 2001 to December 2006. Mortality data were collected from the National Death Index. Results: Of 331 patients with chest pain in the setting of recent cocaine use, 151 (46%) received a beta-blocker in the emergency department. 
There were no meaningful differences in electrocardiographic changes, troponin levels, length of stay, use of vasopressor agents, intubation, ventricular tachycardia or ventricular fibrillation, or death between those who did and did not receive a beta-blocker. After adjusting for potential confounders, systolic blood pressure significantly decreased a mean 8.6 mm Hg (95% confidence interval, 14.7-2.5 mm Hg) in those receiving a beta-blocker in the emergency department compared with those who received their first beta-blocker in the hospital ward (P = .006). Over a median follow-up of 972 days (interquartile range, 555-1490 days), after adjusting for potential confounders, patients discharged on a beta-blocker regimen exhibited a significant reduction in cardiovascular death (hazard ratio, 0.29; 95% confidence interval, 0.09-0.98) (P = .047). Conclusion: beta-Blockers do not appear to be associated with adverse events in patients with chest pain with recent cocaine use. Answer: The question of whether beta-blockers are contraindicated following an LD50 cocaine insult in a chronically instrumented rat model is addressed in the study by PUBMED:1676569. In this study, the researchers evaluated the effects of various pharmacologic interventions, including beta-blockers, on survival after an LD50 dose of intravenous cocaine was administered to conscious rats. The results showed that no study drug significantly improved survival compared with the placebo group. However, it was found that no animal treated with propranolol, a beta-blocker, survived, and only one of ten animals treated with labetalol, another beta-blocker, survived. These findings suggest that beta-blocking agents adversely affected the outcome in this animal model. The study's conclusion indicates that in the specific context of acute cocaine toxicity in a conscious rat model, beta-blockers may be contraindicated as they did not improve survival and, in the case of propranolol, were associated with a 100% mortality rate. This supports the notion that beta-blockers could be harmful in the setting of acute cocaine toxicity, at least in this animal model. It is important to note that while this study provides evidence against the use of beta-blockers in acute cocaine toxicity in rats, clinical decisions in human cases of cocaine toxicity should also consider other research and clinical guidelines. For instance, other studies, such as PUBMED:28785697 and PUBMED:30562494, suggest that beta-blockers may not be associated with adverse clinical outcomes in patients presenting with acute chest pain related to cocaine use. Therefore, while beta-blockers may be contraindicated in the specific scenario of an LD50 cocaine insult in rats, their use in humans with cocaine-induced cardiovascular complications may still be considered under certain circumstances, taking into account the potential risks and benefits.
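Note on the statistics cited in the answer above: several of the cited studies report risk ratios or hazard ratios with 95% confidence intervals, and the key reading rule is whether the interval includes 1. The following minimal Python sketch, using hypothetical counts that are not taken from any of the studies summarized here, shows how a risk ratio and its Wald 95% confidence interval are derived from a simple 2x2 table.

import math

def risk_ratio(events_exposed, n_exposed, events_control, n_control):
    # Risk ratio (relative risk) from a 2x2 table
    rr = (events_exposed / n_exposed) / (events_control / n_control)
    # Wald standard error of log(RR)
    se = math.sqrt(1 / events_exposed - 1 / n_exposed + 1 / events_control - 1 / n_control)
    low = math.exp(math.log(rr) - 1.96 * se)
    high = math.exp(math.log(rr) + 1.96 * se)
    return rr, (low, high)

rr, ci = risk_ratio(12, 150, 11, 148)  # hypothetical counts for illustration only
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")

A confidence interval that spans 1.0, such as the pooled RR of 1.08 (0.61-1.91) quoted above, is read as no statistically significant difference between the compared groups.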
Instruction: Biomechanical analysis of pin placement for pediatric supracondylar humerus fractures: does starting point, pin size, and number matter? Abstracts: abstract_id: PUBMED:22706457 Biomechanical analysis of pin placement for pediatric supracondylar humerus fractures: does starting point, pin size, and number matter? Background: Several studies have examined the biomechanical stability of smooth wire fixation constructs used to stabilize pediatric supracondylar humerus fractures. An analysis of varying pin size, number, and lateral starting points has not been performed previously. Methods: Twenty synthetic humeri were sectioned in the midolecranon fossa to simulate a supracondylar humerus fracture. Specimens were all anatomically reduced and pinned with a lateral-entry configuration. There were 2 main groups based on specific lateral-entry starting point (direct lateral vs. capitellar). Within these groups pin size (1.6 vs. 2.0 mm) and number of pins (2 vs. 3) were varied and the specimens biomechanically tested. Each construct was tested in extension, varus, valgus, internal, and external rotation. Data for fragment stiffness (N/mm or N mm/degree) were analyzed with a multivariate analysis of variance and Bonferroni post hoc analysis (P<0.05). Results: The capitellar starting point provided for increased stiffness in internal and external rotation compared with a direct lateral starting point (P<0.05). Two 2.0-mm pins were statistically superior to two 1.6-mm pins in internal and external rotation. There was no significant difference found comparing two versus three 1.6-mm pins. Conclusions: The best torsional resistances were found in the capitellar starting group along with increased pin diameter. The capitellar starting point enables the surgeon to engage sufficient bone of the distal fragment and maximizes pin separation at the fracture site. In our anatomically reduced fracture model, the addition of a third pin provided no biomechanical advantage. Clinical Relevance: Consider a capitellar starting point for the more distally placed pin in supracondylar humerus fractures, and if the patient's size allows, a larger pin construct will provide improved stiffness with regard to rotational stresses. abstract_id: PUBMED:28083455 Multicenter Study of Pin Site Infections and Skin Complications Following Pinning of Pediatric Supracondylar Humerus Fractures. Introduction: Pediatric supracondylar humerus fractures are the most common elbow fractures in pediatric patients. Surgical fixation using pins is the primary treatment for displaced fractures. Pin site infections may follow supracondylar humerus fracture fixation; the previously reported incidence rate in the literature is 2.34%, but there is significant variability in reported incidence rates of pin site infection. This study aims to define the incidence rate and determine pre-, peri-, and postoperative factors that may contribute to pin site infection following operative reduction, pinning, and casting. Methods: A retrospective chart analysis was performed over a one-year period on patients that developed pin site infection. A cast care form was added to Nemours' electronic medical records (EMR) system (Epic Systems Corp., Verona, WI) to identify pin site infections for retrospective review. The cast care form noted any inflamed or infected pins. Patients with inflamed or infected pin sites underwent a detailed chart review. 
Preoperative antibiotic use, number and size of pins used, method of postoperative immobilization, pin dressings, whether postoperative immobilization was changed prior to pin removal, and length of time pins were in place were recorded. Results: A total of 369 patients underwent operative reduction, pinning, and casting. Three patients developed a pin site infection. The pin site infection incidence rate was 3/369=0.81%. Descriptive statistics were reported for the three patients that developed pin site infections and three patients that developed pin site complications. Conclusion: Pin site infection development is low. Factors that may contribute to the development of pin site infection include preoperative antibiotic use, length of time pins are left in, and changing the cast prior to pin removal. abstract_id: PUBMED:29922066 Medial comminution as a risk factor for the stability after lateral-only pin fixation for pediatric supracondylar humerus fracture: an audit. Background And Purpose: Closed reduction and lateral-only pin fixation is one of the common treatment methods for displaced supracondylar fracture in children. However, several risk factors related to the stability have been reported. The aim of this study was to evaluate the medial comminution as a potential risk factor related to the stability after appropriate lateral-only pin fixation for Gartland type III supracondylar humerus fracture. Methods: Sixty-seven patients with type III supracondylar fractures who were under the age of 12 years were included. Immediate postoperative and final Baumann and humerocapitellar angles were measured. Pin separation at fracture site was evaluated to estimate the proper pin placement. Presence of the medial comminution was recorded when two pediatric orthopedic surgeons agreed on the loss of cortical contact at the medial column by the small butterfly fragment or comminuted fracture fragments. Factors including age, sex, body mass index, pin number, pin separation at fracture site, and medial comminution were analyzed. Results: Medial comminution was noted in 20 patients (29.8%). The average pin separation at fracture site was significantly decreased in patients with medial comminution compared to patients without medial comminution (P=0.017). The presence of medial comminution was associated with a 4.151-fold increase in the log odds of a Baumann angle change greater than the average difference between the immediate postoperative and final follow-up angles (P=0.020). Conclusion: When lateral-only pin fixation is applied for Gartland type III supracondylar humerus fracture in children, the medial comminution may be a risk factor for the stability because of the narrow pin separation at fracture site. We recommend additional medial pin fixation for supracondylar humerus fracture with medial comminution. abstract_id: PUBMED:10912605 A technique to determine proper pin placement of crossed pins in supracondylar fractures of the elbow. Supracondylar humerus fractures are the most common elbow injury in children. Stable fractures can be treated with closed reduction and casting, whereas unstable fractures require percutaneous pinning. Studies have shown that there is a biomechanical advantage of crossed pin fixation as opposed to two lateral pin fixation. However, medial pin placement has the risk of injuring the ulnar nerve. This modification of technique was used on 46 patients, aged 12 months to 14 years (median age, 3.6 years).
Two patients had an ulnar sensory and motor neurapraxia, and two patients had cubitus varus deformities postoperatively. Thus, a safe, easy, and reproducible technique of crossed pin fixation is described here. abstract_id: PUBMED:35620123 Anxiety surrounding supracondylar humerus pin removal in children. Purpose: The purpose of this study was to quantify the anxiety experienced by patients undergoing pin removal in clinic following closed reduction and percutaneous pinning for supracondylar humerus fractures. Methods: We prospectively enrolled 53 patients (3-8 years) treated for supracondylar humerus fracture with closed reduction and percutaneous pinning between July 2018 and February 2020. Demographic and injury data were recorded. Heart rate and the Face, Legs, Activity, Cry, and Consolability scale were measured immediately before pin removal and after pin removal, and crossover control values were obtained at the subsequent follow-up clinic visit. Results: All patients experienced anxiety immediately prior to pin removal (95% confidence interval, 94%-100%) with a median Face, Legs, Activity, Cry, and Consolability score of 7 (interquartile range, 6-8). In addition, 98% of subjects experienced an elevated heart rate (95% confidence interval, 88%-100%). Patients experienced a median 73% reduction in Face, Legs, Activity, Cry, and Consolability score and mean 21% reduction in heart rate from prior to pin removal to after pin removal (p < 0.001). All 45 patients who completed their follow-up visit had a control Face, Legs, Activity, Cry, and Consolability score of 0 and a mean control heart rate of 89.7 bpm. Twenty-five of these 45 subjects (56%) had an elevated control heart rate for their age and sex. Mean heart rate prior to pin removal was 36% higher than control heart rate. There were no sex differences detected in Face, Legs, Activity, Cry, and Consolability scores or heart rate. Conclusions: Pediatric patients experience high levels of anxiety when undergoing pin removal following closed reduction and percutaneous pinning for supracondylar humerus fractures. This is an area of clinical practice where intervention may be warranted to decrease patient anxiety. Level Of Evidence: II. abstract_id: PUBMED:20179560 Biomechanical analysis of lateral pin placements for pediatric supracondylar humerus fractures. Aim: Several clinical studies have shown that lateral pinning alone is of equal stability to crossed pins in the treatment of supracondylar fractures. The aim of this study was to compare the stability of parallel and varied divergent lateral pin configurations to provide an easily reproducible technique for optimal pin placement. Methods: Twelve third-generation synthetic composite humeri were osteotomized at the level of the coronoid and olecranon fossae to simulate a humeral supracondylar fracture. Each fracture was reduced and fixed using two 1.6 mm (0.062 inches) Kirschner wires (1 fixed, 1 varied) in 4 different positions (from parallel to divergent with respect to fixed wire), and sequentially tested in extension, varus, and valgus as well as internal and external rotations using an MTS 858 Minibionix materials testing load frame (MTS Corporation, Eden Prairie, MN). A 2-way analysis of variance was carried out to compare construct stiffness in all 5 modes of testing according to both pin position and testing sequence. A level of P<0.05 was considered statistically significant. 
Results: The best torsional, valgus, and extension resistances were found with position 4, which was the most divergent configuration. For both internal and external rotations, position 4 showed statistically higher stiffness as compared with all other configurations (P<0.05). In resistance to extension, both positions 3 and 4 were stiffer than either position 1 or 2 (P<0.05). For resistance in varus testing, position 3 showed statistically greater stiffness than all other pin positions (P<0.05). Although there was no statistical difference among all 4 positions in valgus testing, position 4 showed greater resistance when compared with other positions. Conclusions: The lateral pin placed parallel to the metaphyseal flare of the lateral humeral cortex, in combination with a second diverging pin crossing the fracture site at the medial edge of the coronoid fossa (position 4), provided the optimum fixation for supracondylar fractures of the humerus. Clinical Relevance: Using these readily available landmarks, the treating surgeon can reproducibly provide appropriate pinning treatment for most of these fractures. abstract_id: PUBMED:26972812 Increased pin diameter improves torsional stability in supracondylar humerus fractures: an experimental study. Background: Pediatric supracondylar humerus fractures are the most common elbow fractures seen in children, and account for 16% of all pediatric fractures. Closed reduction and percutaneous pin fixation is the current treatment technique of choice for displaced supracondylar fractures of the distal humerus in children. The purpose of this study was to determine whether pin diameter affects the torsional strength of supracondylar humerus fractures treated by closed reduction and pin fixation. Methods: Pediatric sawbone humeri simulating a Gartland type III fracture were utilized. Four different pin configurations were compared. Specimens were subjected to a torsional load producing internal rotation of the distal fragment. The stability provided by 1.25- and 1.6-mm pins was compared. Results: The amount of torque required to produce 15° and 25° of rotation was greater using larger diameter pins in all models tested. The two lateral and one medial large pin (1.6 mm) configuration required the highest amount of torque to produce both 15° and 25° of rotation. Conclusions: In a synthetic pediatric humerus model of supracondylar humerus fractures, larger diameter pins (1.6 mm) provided increased stability compared with small diameter pins (1.25 mm). Fixation using larger diameter pins created a stronger construct and improved the strength of fixation. abstract_id: PUBMED:36980108 Extra Lateral Pin or Less Radiation? A Comparison of Two Different Pin Configurations in the Treatment of Supracondylar Humerus Fracture. Background: Closed reduction and percutaneous fixation are the most commonly used methods in the surgical treatment of supracondylar humerus fractures. The choice of pin configuration affects stability and remains controversial. The aim of this study was to investigate the relationship between surgical duration and radiation dose/duration for different pinning fixations. Methods: A total of 48 patients with Gartland type 2, 3, and 4 supracondylar fractures of the humerus were randomized into two groups: 2 lateral and 1 medial (2L1M) pin fixation (n = 26) and 1 lateral 1 medial (1L1M) pin fixation (n = 22). A primary assessment was performed regarding surgical duration, radiation duration, and radiation dose.
A secondary assessment included clinical outcome, passive range of motion, radiographic measurements, Flynn's criteria, and complications. Results: There were 26 patients in the first group (2L1M) and 22 patients in the second group (1L1M). There was no statistical difference between the groups regarding age, sex, type of fracture, or Flynn's criteria. The overall mean surgical duration with 1L1M fixation (30.59 ± 8.72) was statistically lower (p = 0.001) when compared to the 2L1M Kirschner wire (K-wire) fixation (40.61 ± 8.25). The mean radiation duration was 0.76 ± 0.33 s in the 1L1M K-wire fixation and 1.68 ± 0.55 s in the 2L1M K-wire fixation. The mean radiation dose of the 2L1M K-wire fixation (2.45 ± 1.15 mGy) was higher than that of the 1L1M K-wire fixation (0.55 ± 0.43 mGy) (p = 0.000). Conclusions: The current study shows that although there is no difference between the clinical and radiological outcomes, radiation dose exposure is significantly lower for the 1L1M fixation method. abstract_id: PUBMED:28435603 The Rubber Stopper: A Simple and Inexpensive Technique to Prevent Pin Tract Infection following Kirschner Wiring of Supracondylar Fractures of Humerus in Children. Percutaneous pinning after closed reduction is commonly used to treat supracondylar fractures of the humerus in children. Minor pin tract infections frequently occur. The aim of this study was to prevent pin tract infections using a rubber stopper to reduce irritation of the skin against the Kirschner (K) wire following percutaneous pinning. Between July 2011 and June 2012, seventeen children with closed supracondylar fracture of the humerus of Gartland types 2 and 3 were treated with this technique. All patients were treated with closed reduction and percutaneous pinning and followed up prospectively. Only one patient, who was a hyperactive child, developed pin tract infection due to softening of the plaster slab. We found using the rubber stopper to be a simple and inexpensive method to reduce pin tract infections following percutaneous pinning. abstract_id: PUBMED:28251295 Management of pin tract infection in pediatric supracondylar humerus fractures: a comparative study of three methods. The objective of this study was to prospectively compare the incidence of pin tract infection in pediatric supracondylar humerus fractures managed with pin care daily or every other day or weekly. We hypothesized that there were some differences between these three methods. From June 2012 to May 2015, 135 children with supracondylar humerus fractures were randomized to postoperative pin care by cleaning pin tracts daily (group A, 45 cases) or cleaning every 2 days (group B, 45 cases) or cleaning weekly (group C, 45 cases). The three groups were comparable with respect to age, gender, affected side, body mass index (BMI), fracture type, injury to surgery time, number of intraoperative percutaneous pinning, and follow-up time. We collected data on pin retention time, union time, and pin tract infection. The average follow-up time was 4.5 ± 1.3 months in group A, 4.2 ± 1.6 months in group B, and 4.3 ± 1.4 months in group C. The patient demographics and intraoperative variables of the three groups were comparable. No significant difference between these three groups was found in union time and pin fixation time. Of the 135 children, 48 (35.6%) cases had pin tract infection. Grade I infections (Checketts-Otterburns classification) occurred in around 28.9% of the 270 pins and grade II infections in around 6.7%.
We found no differences between the three groups as regards the frequency and severity of pin tract infections (both P > 0.05). However, complaints of pain were more frequent in group A than in the other two groups (P < 0.05). Conclusions: All of the three methods were effective for the management of pin site infection in pediatric supracondylar humerus fractures. However, excessively frequent care, as well as daily pin care, had the disadvantages of provoking the child's fear and parental anxiety. What is Known: • Pin site infection is a common complication after fracture fixation and bone lengthening using percutaneous pins or wires. • Closed reduction and percutaneous K-wire fixation are the mainstay of treatment in pediatric supracondylar humeral fractures. What is New: • All of the three methods were effective for the management of pin site infection. • Excessively frequent care, as well as daily pin care, has the disadvantages of provoking the child's fear and parental anxiety. Answer: Yes, biomechanical analysis indicates that the starting point, pin size, and number do matter in the pin placement for pediatric supracondylar humerus fractures. A study found that a capitellar starting point provided increased stiffness in internal and external rotation compared to a direct lateral starting point. Two 2.0-mm pins were superior to two 1.6-mm pins in internal and external rotation, and there was no significant difference found when comparing two versus three 1.6-mm pins. The capitellar starting point allows for better engagement of the distal fragment bone and maximizes pin separation at the fracture site. However, the addition of a third pin did not provide a biomechanical advantage in an anatomically reduced fracture model (PUBMED:22706457). Another study showed that medial comminution could be a risk factor for stability after lateral-only pin fixation for Gartland type III supracondylar humerus fractures. Medial comminution was associated with a narrower pin separation at the fracture site, which could lead to instability. Therefore, additional medial pin fixation may be recommended for fractures with medial comminution (PUBMED:29922066). Furthermore, a biomechanical analysis demonstrated that larger diameter pins (1.6 mm) provided increased stability compared to smaller diameter pins (1.25 mm), especially in torsional loads. The use of larger diameter pins created a stronger construct and improved the strength of fixation (PUBMED:26972812). In summary, the biomechanical stability of pin constructs used to stabilize pediatric supracondylar humerus fractures is influenced by the starting point of pin insertion, the size of the pins, and to a lesser extent, the number of pins used. A capitellar starting point and larger pin diameters are recommended for improved stiffness and stability, particularly under rotational stresses.
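As an aside on the Bonferroni post hoc testing mentioned in the biomechanical studies above: the adjustment simply divides the overall significance threshold by the number of pairwise comparisons. The short Python sketch below is a generic illustration with assumed numbers (four pin configurations), not a reproduction of the authors' analysis.

n_groups = 4  # e.g., four pin positions compared in one test mode (assumed for illustration)
comparisons = n_groups * (n_groups - 1) // 2  # number of pairwise comparisons
alpha = 0.05
adjusted_alpha = alpha / comparisons  # Bonferroni-corrected per-comparison threshold
print(f"{comparisons} pairwise comparisons -> per-comparison alpha = {adjusted_alpha:.4f}")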
Instruction: Is use of PPIs related to increased intraepithelial lymphocytes in the colon? Abstracts: abstract_id: PUBMED:22855290 Is use of PPIs related to increased intraepithelial lymphocytes in the colon? Background: The use of proton pump inhibitors (PPIs) is thought to increase the incidence of microscopic colitis (MC), although the exact mechanisms are not fully understood. Increased infiltration of intraepithelial lymphocytes (IELs) is a pathologic finding of MC (including collagenous or lymphocytic colitis). Aims: We investigated whether PPI use is associated with increased IEL infiltration and inflammation in the lamina propria. Methods: We retrospectively reviewed the medical records and histological reports of 78 patients receiving PPIs who had no symptoms of diarrhea, and their age- and gender-matched controls. The levels of IELs and inflammation in the lamina propria were assessed independently by two pathologists using H&E and immunohistochemical staining for CD3 and CD8. Results: The IEL count was significantly higher in the PPI group than in controls (12.92 ± 6.27 vs. 8.10 ± 4.21 per 100 epithelial cells, p < 0.001), as was the extent of inflammation (1.74 ± 0.90 vs. 0.86 ± 0.78, p < 0.001). PPI use was associated with increased IEL infiltration in a multivariate analysis (OR, 3.232; 95% CI, 1.631-6.404, p < 0.001). Within the PPI group, however, the IEL count was not significantly associated with gender, age, type of PPI, or duration of PPI use. Conclusions: The use of PPIs has a significant association with increased IEL infiltration for subjects without symptoms of diarrhea. This finding suggests that such histological alterations, seen in the early phase of MC, possibly represent a stage of the disease even before the onset of symptoms. abstract_id: PUBMED:31114440 Different distribution of mucosal-associated invariant T cells within the human cecum and colon. Introduction: Mucosal-associated invariant T (MAIT) cells are innate-like T cells that are involved in anti-bacterial immunity. MAIT cells are found in the intestines, but their role and distribution within the large intestine have not been fully elucidated. Therefore, we investigated the distribution of MAIT cells within the cecum and colon. Material And Methods: Surgically resected tissues of the cecum and colon were obtained from 4 patients with cecal appendix cancer and 8 patients with colorectal cancer, respectively. Lymphocytes were isolated from the intestinal epithelium (intraepithelial lymphocytes - IELs) and the underlying lamina propria (lamina propria lymphocytes - LPLs), and then, MAIT cells were analyzed by flow cytometry. Results: Compared with the colon, the cecum showed a significantly increased frequency of MAIT cells among IELs (p < 0.01). CD69 expression on MAIT cells was significantly increased in the cecum and colon compared with that in the blood, and the frequency of natural killer group 2, member A+ cells among MAIT cells was significantly increased in the cecum. Conclusions: These results suggest that the distribution of MAIT cells was different between the cecum and colon and that MAIT cells were more likely to be activated, especially in the intestinal epithelium of the cecum than in the colon and blood. abstract_id: PUBMED:21840070 Age-dependent changes in intraepithelial lymphocytes (IELs) of the small intestine, cecum, and colon from young adult to aged mice.
We previously reported the regional differences in the IELs present in the proximal (P), middle (M), and distal (D) parts of the small intestine, cecum (Ce), and colon (Co) of mice. In this study, we investigated the age-dependent changes in the regional differences of IELs from young adult to aged mice. In this experiment, 3-, 6-, 12-, 18-, and 24-month-old mice were examined. IELs were separately isolated from 5 parts of the intestines and analyzed by flow cytometry. Regional differences in the number and phenotype of IELs showed the same trends in all age groups. The number of IELs was highest in 6-month-old mice and then gradually decreased with age. As to IEL subsets, age-related changes were not seen except for a few subsets among the age groups. We conclude that age-related decreases in IELs in mouse small intestine may be one of the aging phenomena of the intestinal immune system. Such age-related decreases in IELs may be related to the increased liability to intestinal infections in the elderly. abstract_id: PUBMED:32920702 Distribution of histopathological features along the colon in microscopic colitis. Purpose: The diagnosis of microscopic colitis (MC), which comprises collagenous colitis (CC) and lymphocytic colitis (LC), relies on histological assessment of mucosal biopsies from the colon. The optimal biopsy strategy for reliable diagnosis of MC is controversial. The aim of this study was to evaluate the distribution of histopathological features of MC throughout the colon. Methods: Mucosal biopsies from multiple colonic segments of patients with MC who participated in one of the three prospective European multicenter trials were analyzed. Histological slides were stained with hematoxylin-and-eosin, a connective tissue stain, and CD3 in selected cases. Results: In total, 255 patients were included, 199 and 56 patients with CC and LC, respectively. Both groups exhibited a gradient with more pronounced inflammation in the lamina propria in the proximal colon compared with the distal colon. Similarly, the thickness of the subepithelial collagenous band in CC showed a gradient with higher values in the proximal colon. The mean number of intraepithelial lymphocytes was > 20 in all colonic segments in patients within both subgroups. Biopsies from 86 to 94% of individual segments were diagnostic, rectum excluded. Biopsies from non-diagnostic segments often showed features of another subgroup of MC. Conclusion: Although the severity of the histological changes in MC differed in the colonic mucosa, the minimum criteria required for the diagnosis were present in the random biopsies from the majority of segments. Thus, our findings show MC to be a pancolitis, rectum excluded, questioning previously proclaimed patchiness throughout the colon. abstract_id: PUBMED:23155335 High densities of serotonin and peptide YY cells in the colon of patients with lymphocytic colitis. Aim: To investigate colonic endocrine cells in lymphocytic colitis (LC) patients. Methods: Fifty-seven patients with LC were included. These patients were 41 females and 16 males, with an average age of 49 years (range 19-84 years). Twenty-seven subjects that underwent colonoscopy with biopsies were used as controls. These subjects underwent colonoscopy because of gastrointestinal bleeding or health worries, where the source of bleeding was identified as haemorrhoids or angiodysplasia. They were 19 females and 8 males with an average age of 49 years (range 18-67 years).
Biopsies from the right and left colon were obtained from both patients and controls during colonoscopy. Biopsies were fixed in 4% buffered paraformaldehyde, embedded in paraffin and cut into 5 μm-thick sections. The sections were immunostained by the avidin-biotin-complex method for serotonin, peptide YY (PYY), pancreatic polypeptide (PP), enteroglucagon and somatostatin cells. The cell densities were quantified by computerised image analysis using Olympus software. Results: The colon of both the patients and the control subjects was macroscopically normal. Histopathological examination of colon biopsies from controls revealed normal histology. All patients fulfilled the diagnostic criteria required for LC: an increase in intraepithelial lymphocytes (> 20 lymphocytes/100 epithelial cells) and surface epithelial damage with increased lamina propria plasma cells and absent or minimal crypt architectural distortion. In the colon of both patients and control subjects, serotonin-, PYY-, PP-, enteroglucagon- and somatostatin-immunoreactive cells were primarily located in the upper part of the crypts of Lieberkühn. These cells were basket- or flask-shaped. There was no statistically significant difference between the right and left colon in controls with regard to the densities of serotonin- and PYY-immunoreactive cells (P = 0.9 and 0.1, respectively). Serotonin cell density in the right colon in controls was 28.9 ± 1.8 and in LC patients 41.6 ± 2.6 (P = 0.008). In the left colon, the corresponding figures were 28.5 ± 1.9 and 42.4 ± 2.9, respectively (P = 0.009). PYY cell density in the right colon of the controls was 10.1 ± 1 and of LC patients 41 ± 4 (P = 0.00006). In the left colon, PYY cell density in controls was 6.6 ± 1.2 and in LC patients 53.3 ± 4.6 (P = 0.00007). Conclusion: The change in serotonin cells could be caused by an interaction between immune cells and serotonin cells, and that of PYY density might be secondary. abstract_id: PUBMED:12143251 Ulcerative colon T-cell lymphoma: an unusual entity mimicking Crohn's disease and may be associated with fulminant hemophagocytosis. Primary gastrointestinal T-cell lymphoma is uncommon. Most arise from the small intestine and are usually associated with chronic celiac disease; the so-called enteropathy associated T-cell lymphoma. Primary colon T-cell lymphoma is much more rare. We present two patients with primary colon T-cell lymphoma. Both patients had chronic diarrhea and significant weight loss. Endoscopically, the lymphoma was characterized by the presence of multiple skipped ulcers distributed from the terminal ileum to the descending colon. It was differentiated from Crohn's disease by the absence of fistula or thickening of the intestinal walls. Histologically, the lymphoma was composed of medium to large atypical cells located in the ulcer base with extension to the muscular layer and the adjacent atrophic mucosa. Occasional increased intraepithelial lymphocytes were also seen. Immunohistochemically, the lymphoma cells and intraepithelial lymphocytes were CD3+, CD4-, CD56- and CD8-. It was difficult to diagnose this unusual lymphoma by biopsy because most biopsy specimens showed mixed inflammation within which the lymphoma cells were sometimes hard to identify. Both patients died of fulminant hemophagocytic syndrome and Epstein-Barr virus genome was detected in the lymphoma cells using in situ hybridization on the final surgical specimens.
Our study indicates that it is important to recognize this ulcerative colon T-cell lymphoma and to differentiate it from inflammatory bowel disease because of its much more aggressive clinical behavior. abstract_id: PUBMED:29289834 Differential ratios of fish/corn oil ameliorated the colon carcinoma in rat by altering intestinal intraepithelial CD8+ T lymphocytes, dendritic cells population and modulating the intracellular cytokines. Intraepithelial lymphocytes (IELs) play a crucial role in maintaining intestinal homeostasis, yet their role in colon cancer pathogenesis remains unknown. Here, we posited that the modulation of intestinal immune response via dietary interventions might be a viable strategy for restraining colon carcinoma. In the above context, we studied the effect of differential ratios of fish oil (FO) and corn oil (CO) on the gut immune response in experimentally induced colon cancer. Male Wistar rats were divided into six groups: Group I received a purified diet, while Groups II and III were fed the diet supplemented with differential ratios of FO and CO, i.e., 1:1 and 2.5:1, respectively. The groups were further subdivided into control and carcinogenic groups, treated with ethylenediaminetetraacetic acid (EDTA) or N,N'-dimethylhydrazine dihydrochloride (DMH), respectively. The initiation phase comprised the animals sacrificed 48 h after the last injection, whereas the post-initiation phase comprised the animals sacrificed 12 weeks after the treatment regimen. CD8+ T cells, CD8/αβ TCR cells, and dendritic cells increased significantly on treatment with DMH as compared to control. However, on treatment with differential ratios of FO and CO these cells decreased significantly. The intracellular cytokine interferon gamma (IFN-γ) and the cytotoxic granule components Perforin and Granzyme decreased significantly in the initiation phase, but in the post-initiation phase IFN-γ and Perforin increased considerably on carcinogen treatment as compared to the control group. On treatment with FO and CO in the initiation phase, IFN-γ, Perforin and Granzyme expression increased significantly. However, in the post-initiation phase, treatment with differential ratios of FO and CO led to a significant decrease in IFN-γ and Perforin and an increase in Granzyme in these groups. Altogether, FO supplementation appeared to activate the immune response that may further attenuate the process of carcinogenesis, in a dose- and time-dependent manner. abstract_id: PUBMED:28812548 Bacteroidales recruit IL-6-producing intraepithelial lymphocytes in the colon to promote barrier integrity. Interactions between the microbiota and distal gut are important for the maintenance of a healthy intestinal barrier; dysbiosis of intestinal microbial communities has emerged as a likely contributor to diseases that arise at the level of the mucosa. Intraepithelial lymphocytes (IELs) are positioned within the epithelial barrier, and in the small intestine they function to maintain epithelial homeostasis. We hypothesized that colon IELs promote epithelial barrier function through the expression of cytokines in response to interactions with commensal bacteria. Profiling of bacterial 16S ribosomal RNA revealed that candidate bacteria in the order Bacteroidales are sufficient to promote IEL presence in the colon, and these IELs in turn produce interleukin-6 (IL-6) in a MyD88 (myeloid differentiation primary response 88)-dependent manner.
IEL-derived IL-6 is functionally important in the maintenance of the epithelial barrier as IL-6-/- mice were noted to have increased paracellular permeability, decreased claudin-1 expression, and a thinner mucus gel layer, all of which were reversed by transfer of IL-6+/+ IELs, leading to protection of mice in response to Citrobacter rodentium infection. Therefore, we conclude that microbiota play a homeostatic role in epithelial barrier function through regulation of IEL-derived IL-6. abstract_id: PUBMED:28967957 Colon Immune-Related Adverse Events: Anti-CTLA-4 and Anti-PD-1 Blockade Induce Distinct Immunopathological Entities. Background And Aim: Immune checkpoint inhibitors targeting CTLA-4 and PD-1 improve survival in cancer patients but may induce immune-related adverse events, including colitis. The immunological characteristics of anti-CTLA-4 [αCTLA-4]- and anti-PD-1 [αPD-1]-related colitis have been poorly described. The aim of the present study was to compare the immunological and histological characteristics of αCTLA-4-induced colitis and αPD-1-induced colitis. Methods: Colonic biopsies from patients with αCTLA-4-induced colitis, αPD-1-induced colitis, and inflammatory bowel disease [IBD] were analysed by immunohistochemistry and flow cytometry. Tumour necrosis factor alpha [TNFα] concentration was assessed in biopsy supernatants. Results: CD8+ T cells were found in the lamina propria and epithelium in αPD-1-induced colitis, whereas CD4+ T cells were found in the lamina propria in αCTLA-4-induced colitis. Few or no intraepithelial lymphocytes were observed in αCTLA-4-induced colitis. No difference in numbers of mucosal regulatory T cells was observed between αCTLA-4- or αPD-1-induced colitis and IBD patients. Higher numbers of activated ICOS+ conventional CD4+ T cells were observed in αCTLA-4-induced colitis compared with patients with IBD. Among ICOS+CD4+ T cells, conventional CD4+ T cells were the main T cell population in patients with αCTLA-4-induced colitis, whereas Treg cells were predominant in IBD or αPD-1-induced colitis. High mucosal TNFα concentrations were observed in αCTLA-4-induced colitis. Low mucosal TNFα concentrations were associated with steroid sensitivity. Conclusions: These observations show that αCTLA-4- and αPD-1-induced colitis have distinct immunological characteristics. Mucosal TNFα concentration might detect patients at risk of developing corticosteroid resistance after CTLA-4 blockade. abstract_id: PUBMED:2423309 Immunohistological characterization of intraepithelial and lamina propria lymphocytes in control ileum and colon and in inflammatory bowel disease. Using monoclonal antibodies to T and B lymphocytes, to natural killer cells, and to HLA-DR antigen, we characterized the lymphocyte population within the epithelial and lamina propria regions in control intestine and colon, and in grossly involved and in grossly uninvolved intestine and colon of patients with active inflammatory bowel disease. There were significantly more intraepithelial T cells in control ileum than in control colon. In comparison to control, there was a heterogeneity of alterations in intraepithelial and lamina propria T lymphocyte subsets (T11+, T8+, T4+) in inflammatory bowel disease. B lymphocytes were not detected within the lamina propria, except when found in and adjacent to lymphoid aggregates. Leu 7+ cells were uncommon in the lamina propria of control ileum and colon and in diseased tissues. The majority of intraepithelial lymphocytes did not express HLA-DR.
Epithelial cells of control colon did not express HLA-DR while epithelial cells of control ileal tissues and of diseased colonic and ileal specimens expressed HLA-DR antigen. Only small numbers of lamina propria T cells expressed HLA-DR in both control and disease tissues. There was intense expression of HLA-DR by monocytes and modest expression of HLA-DR by capillary and lymphatic endothelial cells. The induction of HLA-DR expression by diseased colonic epithelium and the observation that lymphatic endothelium expresses HLA-DR are new observations, and we established that Leu 7+ cells are present in very small numbers in both normal and diseased intestine and colon. Answer: Yes, the use of proton pump inhibitors (PPIs) is related to increased intraepithelial lymphocytes (IELs) in the colon. A study investigated whether PPI use is associated with increased IEL infiltration and inflammation in the lamina propria. The results showed that the IEL count was significantly higher in the PPI group than in controls, as was the extent of inflammation. PPI use was associated with increased IEL infiltration in a multivariate analysis, suggesting that PPIs have a significant association with increased IEL infiltration for subjects without symptoms of diarrhea. This finding indicates that such histological alterations, seen in the early phase of microscopic colitis, possibly represent a stage of the disease even before the onset of symptoms (PUBMED:22855290).
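For readers unfamiliar with the odds ratio reported in the answer above (OR 3.232; 95% CI 1.631-6.404), the sketch below shows how an unadjusted odds ratio and a Wald 95% confidence interval are computed from a 2x2 table in Python. The counts are hypothetical, and a simple 2x2 calculation does not reproduce the adjusted estimate from the study's multivariate model.

import math

def odds_ratio(a, b, c, d):
    # a, b: outcome present/absent in the exposed group
    # c, d: outcome present/absent in the unexposed group
    or_value = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR), Wald approximation
    low = math.exp(math.log(or_value) - 1.96 * se)
    high = math.exp(math.log(or_value) + 1.96 * se)
    return or_value, (low, high)

or_value, ci = odds_ratio(30, 48, 12, 66)  # hypothetical counts for illustration only
print(f"OR = {or_value:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")

An odds ratio greater than 1 with a confidence interval that excludes 1 is interpreted as a positive association, which is how the PPI finding above is read.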
Instruction: Are those who use specific complementary and alternative medicine therapies less likely to be immunized? Abstracts: abstract_id: PUBMED:20005248 Are those who use specific complementary and alternative medicine therapies less likely to be immunized? Objective: Some authorities are concerned that the use of complementary and alternative medications (CAM) may replace recommended preventive health practices. This study was done to determine if users of individual types of CAM were less likely to receive recommended immunizations. Methods: We used data from the 2007 National Health Interview Survey of over 23,000 adult, non-institutionalized U.S. citizens using bivariate and multivariate analysis to determine if users of individual types of CAM were less likely to receive influenza and/or pneumococcal vaccinations. Results: Using a weighted logistic regression analysis, we found that respondents who used chiropractic care were less likely to receive flu shots (OR=0.68, CI=0.55,0.83, p<0.001). There was a mildly positive trend toward receiving the pneumococcal vaccine in users of deep breathing exercises and toward not receiving both in followers of qi gong. Prayer use was prevalent and had a positive impact on receiving immunizations, especially in Blacks and those in poor health. Regular exercise, having a primary care provider and more frequent office visits were also positively associated with receiving immunizations. Conclusion: Chiropractic users are less likely to get flu shots, perhaps reflecting their national body's attitude, which could affect morbidity and mortality. Providers should be aware of their patients' CAM use and encourage accepted primary care practices. abstract_id: PUBMED:34304794 Complementary and Alternative Medicine Therapies for Irritable Bowel Syndrome. Complementary and alternative medicine (CAM) is a term used to define a broad range of therapies, most commonly grouped into natural products, mind-body medicine, and traditional systems of medicine. Patients with irritable bowel syndrome (IBS) commonly use CAM therapies, although there are many barriers that may keep patients and providers from talking about a patient's CAM use. Despite limited quantity and quality of evidence of CAM for IBS, providers can better counsel patients on CAM use by understanding pitfalls related to CAM use and by learning what is known about CAM. abstract_id: PUBMED:36774743 Complementary and alternative medicine use in narcolepsy. Background: Management of narcolepsy includes behavior strategies and symptomatic pharmacological treatment. In the general population, complementary and alternative medicine (CAM) use is common in Europe (30%), also in chronic neurological disorders (10-20%). The aim of our study was to evaluate frequency and characteristics of CAM use in German narcolepsy patients. Methods: Demographic and disease-related data, as well as the frequency and impact of CAM use, were assessed in an online survey. Commonly used CAM treatments were predetermined in a questionnaire based on the National Center for Complementary and Alternative Medicine and included the domains: (1) alternative medical systems; (2) biologically based therapies; (3) energy therapies; (4) mind-body interventions, and (5) manipulative and body-based therapies. Results: We analyzed data from 254 questionnaires. Fifteen percent of participants were using CAM for narcolepsy at the time of survey administration, and an additional 18% of participants reported past use.
Among the 33% of CAM users, vitamins/trace elements (54%), homoeopathy (48%) and meditation (39%) were used most frequently. Fifty-four percent of the users described CAM as helpful. CAM users more frequently described having side effects from their previous medication (p = 0.001) and more frequently reported not complying with pharmacological treatment than non-CAM users (21% vs. 8%; p = 0.024). Discussion: The use of CAM in narcolepsy patients is common. Our results indicate that many patients still feel the need to improve their symptoms, sleepiness and psychological well-being in particular. Frequent medication change, the experience of adverse events and low adherence to physician-recommended medication appear more frequent in CAM users. The impact of CAM, however, seems to be limited. abstract_id: PUBMED:23915414 Keeping childbearing safe: midwives' influence on women's use of complementary and alternative medicine. The use of complementary and alternative medicine during pregnancy is common. However, many modalities have not been well researched and safety concerns have been raised. This article describes a grounded theory study which explored how midwives interact with women regarding use of these therapies. Participants were recruited from metropolitan hospitals in Victoria, Australia. Twenty-five midwives were interviewed and a subgroup was also observed. The findings revealed that when working with women interested in complementary and alternative medicine, midwives usually aimed to facilitate informed decisions whilst prioritising safety. However, participants assessed the risk associated with various therapies differently. Although many endorsed the use of various therapies, only a few were integrated into practice. In conclusion, midwives play an important role in mediating women's behaviour towards complementary and alternative medicine. Yet, currently many do not have the appropriate education to appreciate the associated risks, as well as the potential benefits. abstract_id: PUBMED:27339090 Complementary and alternative medicine therapies for chronic pain. Pain afflicts over 50 million people in the US, with 30.7% of US adults suffering from chronic pain. Despite advances in therapies, many patients will continue to deal with ongoing symptoms that are not fully addressed by the best conventional medicine has to offer them. These patients frequently turn to therapies outside the usual purview of conventional medicine (herbs, acupuncture, meditation, etc.) called complementary and alternative medicine (CAM). Academic and governmental groups are also starting to incorporate CAM recommendations into chronic pain management strategies. Thus, for any physician who cares for patients with chronic pain, having some familiarity with these therapies, including their risks and benefits, will be key to helping guide patients in making evidence-based, well-informed decisions about whether or not to use such therapies. On the other hand, if a CAM therapy has evidence of both safety and efficacy, then not making it available to a patient who is suffering does not meet the need of the patient. We summarize the current evidence of a wide variety of CAM modalities that have potential for helping patients with chronic pain in this article. The triad of chronic pain symptoms, ready access to information on the internet, and growing patient empowerment suggest that CAM therapies will remain a consistent part of the healthcare of patients dealing with chronic pain.
abstract_id: PUBMED:23926702 Complementary and alternative medicine in oncology. Complementary and alternative medicine is frequently used by cancer patients. The main benefit of complementary medicine is that it gives patients the chance to become active. Complementary therapy can reduce the side effects of conventional therapy. However, we have to give due consideration to side effects and interactions: the latter being able to reduce the effectiveness of cancer therapy and so to jeopardise the success of therapy. Therefore, complementary therapy should be managed by the oncologist. It shares a common concept of cancerogenesis with conventional therapy. Complementary therapy can be assessed in studies. Alternative medicine, in contrast, rejects common rules of evidence-based medicine. It starts from its own concepts of cancerogenesis, which are often in line with the thinking of lay persons. Alternative medicine is offered either as an "alternative" to recommended cancer treatment or is used at the same time but without due regard for the interactions. Alternative medicine poses a high risk to patients. In the following two parts of the article, the most important complementary and alternative therapies cancer patients use nowadays are presented and assessed according to published evidence. abstract_id: PUBMED:33838089 Complementary and alternative medicine therapies and COVID-19: a systematic review. Objectives: Despite the high prevalence of coronavirus and various treatment approaches, including complementary and alternative medicine (CAM), there is still no definitive treatment for coronavirus. The present study aimed to evaluate the effect of CAM interventions on COVID-19 patients. Content: Four databases (Web of Science, PubMed, Scopus, and EMBASE) were searched from the inception of databases until July 16, 2020. Keywords included complementary and alternative medicine therapies and Coronavirus. Summary And Outlook: Of the 1,137 studies searched, 14 studies performed on 972 COVID-19 patients entered the final stage of the systematic review. The results showed that different CAM interventions (acupuncture, Traditional Chinese medicine [TCM], relaxation, Qigong) significantly improved various psychological symptoms (depression, anxiety, stress, sleep quality, negative emotions, quality of life) and physical symptoms (inflammatory factors, physical activity, chest pain, and respiratory function) in COVID-19 patients. The results showed that various CAM interventions have a positive effect on improving the various dimensions of coronavirus disease, but since there are few studies in this regard, further studies using different CAM approaches are recommended. abstract_id: PUBMED:27863612 Complementary and alternative medicine use in children with cystic fibrosis. Purpose: To estimate the overall prevalence of complementary and alternative medicine use among children with cystic fibrosis, determine specific modalities used, predictors of use and subjective helpfulness or harm from individual modalities. Results: Of 53 children attending the cystic fibrosis clinic in London, Ontario (100% recruitment), 79% had used complementary and alternative medicine. The most commonly used modalities were air purifiers, humidifiers, probiotics, and omega-3 fatty acids. Family complementary and alternative medicine use was the only independent predictor of overall use. The majority of patients perceived benefit from specific modalities for cystic fibrosis symptoms.
Conclusions: Given the high frequency and number of modalities used and lack of patient and disease characteristics predicting use, we recommend that health care providers routinely ask about complementary and alternative medicine among all pediatric cystic fibrosis patients and assist patients in understanding the potential benefits and risks to make informed decisions about its use. abstract_id: PUBMED:27707901 Complementary and Alternative Medicine Use in Modern Obstetrics: A Survey of the Central Association of Obstetricians & Gynecologists Members. The use of complementary and alternative medicine during pregnancy is currently on the rise. A validated survey was conducted at the Central Association of Obstetricians and Gynecologists annual meeting to evaluate the knowledge, attitude, and practice of general obstetricians and gynecologists and maternal-fetal medicine specialists in America. We obtained 128 responses: 73 electronically (57%) and 55 via the paper survey (43%). Forty-five percent reported personally using complementary and alternative medicine and 9% of women respondents used complementary and alternative medicine during pregnancy. Overall, 62% had advised their patients to utilize some form of complementary and alternative medicine in pregnancy. Biofeedback, massage therapy, meditation, and yoga were considered the most effective modalities in pregnancy (median [semi-interquartile range] = 2 [0.5]). Maternal-fetal medicine specialists were significantly more likely to disagree on the use of complementary and alternative medicine for risk reduction of preterm birth compared to obstetricians and gynecologists (P = .03). As the use of complementary and alternative medicine continues to rise in reproductive-age women, obstetricians will play an integral role in incorporating complementary and alternative medicine use with conventional medicine. abstract_id: PUBMED:11040547 The use of complementary and alternative therapies. The National Center for Complementary and Alternative Medicine (NCCAM) was created by Congress in 1998 as part of the National Institutes of Health. As interest in alternative and complementary therapies among healthcare providers and consumers has increased in recent years, the NCCAM has provided research funding to determine the efficacy of various types of unconventional treatments. The Center also provides research training and acts as a clearing-house for information dissemination to practitioners and the general public. Inclusion and exclusion criteria for alternative and complementary medicine as defined by the NCCAM are provided. Interest in these non-traditional treatments will likely continue to increase over the next few years. The NCCAM is playing a vital role as it provides avenues to determine how these therapies can lead to enhanced quality of life for individuals as we enter the new millennium. Answer: The study reported in abstract PUBMED:20005248 aimed to determine if users of individual types of complementary and alternative medicine (CAM) were less likely to receive recommended immunizations. The results from the 2007 National Health Interview Survey indicated that respondents who used chiropractic care were less likely to receive flu shots (OR=0.68, CI=0.55,0.83, p<0.001). However, there was a mildly positive trend toward receiving the pneumococcal vaccine in users of deep breathing exercises and toward not receiving both in followers of qi gong.
Prayer use was prevalent and had a positive impact on receiving immunizations, especially in Blacks and those in poor health. Regular exercise, having a primary care provider, and more frequent office visits were also positively associated with receiving immunizations. The conclusion drawn from this study is that chiropractic users are less likely to get flu shots, which could affect morbidity and mortality. Health providers should be aware of their patients' CAM use and encourage accepted primary care practices.
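A brief arithmetic gloss on the odds ratio cited in this answer may help readers unfamiliar with the measure (this is a standard interpretation, not additional data from the study): an OR of 0.68 means that the odds of receiving a flu shot among chiropractic users were about 0.68 times the odds among non-users, i.e. roughly 32% lower; equivalently, non-users had about 1/0.68 ≈ 1.5 times the odds of being vaccinated. Because the reported confidence interval (0.55-0.83) lies entirely below 1, the reduction is statistically significant, consistent with the quoted p<0.001.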
Instruction: Association between response inhibition and working memory in adult ADHD: a link to right frontal cortex pathology? Abstracts: abstract_id: PUBMED:17046725 Association between response inhibition and working memory in adult ADHD: a link to right frontal cortex pathology? Background: We sought to assess the relationship between response inhibition and working memory in adult patients with attention-deficit/hyperactivity disorder (ADHD) and neurosurgical patients with frontal lobe damage. Methods: The stop-signal reaction time (SSRT) test and a spatial working memory (SWM) task were administered to 20 adult patients with ADHD and a group of matched controls. The same tasks were administered to 21 patients with lesions to right frontal cortex and 19 patients with left frontal lesions. Results: The SSRT test, but not choice reaction time, was significantly associated with search errors on the SWM task in both the adult ADHD and right frontal patients. In the right frontal patients, impaired performance on both variables was correlated with the volume of damage to the inferior frontal gyrus. Conclusions: Response inhibition and working memory impairments in ADHD may stem from a common pathologic process rather than being distinct deficits. Such pathology could relate to right frontal-cortex abnormalities in ADHD, consistent with prior reports, as well as with the demonstration here of a significant association between SSRT and SWM in right frontal patients. abstract_id: PUBMED:27194334 High Working Memory Load Increases Intracortical Inhibition in Primary Motor Cortex and Diminishes the Motor Affordance Effect. Unlabelled: Motor affordances occur when the visual properties of an object elicit behaviorally relevant motor representations. Typically, motor affordances only produce subtle effects on response time or on motor activity indexed by neuroimaging/neuroelectrophysiology, but sometimes they can trigger action itself. This is apparent in "utilization behavior," where individuals with frontal cortex damage inappropriately grasp affording objects. This raises the possibility that, in healthy-functioning individuals, frontal cortex helps ensure that irrelevant affordance provocations remain below the threshold for actual movement. In Experiment 1, we tested this "frontal control" hypothesis by "loading" the frontal cortex with an effortful working memory (WM) task (which ostensibly consumes frontal resources) and examined whether this increased EEG measures of motor affordances to irrelevant affording objects. Under low WM load, there were typical motor affordance signatures: an event-related desynchronization in the mu frequency and an increased P300 amplitude for affording (vs nonaffording) objects over centroparietal electrodes. Contrary to our prediction, however, these affordance measures were diminished under high WM load. In Experiment 2, we tested competing mechanisms responsible for the diminished affordance in Experiment 1. We used paired-pulse transcranial magnetic stimulation over primary motor cortex to measure long-interval cortical inhibition. We found greater long-interval cortical inhibition for high versus low load both before and after the affording object, suggesting that a tonic inhibition state in primary motor cortex could prevent the affordance from provoking the motor system. Overall, our results suggest that a high WM load "sets" the motor system into a suppressed state that mitigates motor affordances. 
Significance Statement: Is an irrelevant motor affordance more likely to be triggered when you are under low or high cognitive load? We examined this using physiological measures of the motor affordance while working memory load was varied. We observed a typical motor affordance signature when working memory load was low; however, it was abolished when load was high. Further, there was increased intracortical inhibition in primary motor cortex under high working memory load. This suggests that being in a state of high cognitive load "sets" the motor system to be imperturbable to distracting motor influences. This makes a novel link between working memory load and the balance of excitatory/inhibitory activity in the motor cortex and potentially has implications for disorders of impulsivity. abstract_id: PUBMED:24819224 Hypoactivation in right inferior frontal cortex is specifically associated with motor response inhibition in adult ADHD. Adult ADHD has been linked to impaired motor response inhibition and reduced associated activation in the right inferior frontal cortex (IFC). However, it is unclear whether abnormal inferior frontal activation in adult ADHD is specifically related to a response inhibition deficit or reflects a more general deficit in attentional processing. Using functional magnetic resonance imaging, we tested a group of 19 ADHD patients with no comorbidities and a group of 19 healthy control volunteers on a modified go/no-go task that has been shown previously to distinguish between cortical responses related to response inhibition and attentional shifting. Relative to the healthy controls, ADHD patients showed increased commission errors and reduced activation in inferior frontal cortex during response inhibition. Crucially, this reduced activation was observed when controlling for attentional processing, suggesting that hypoactivation in right IFC in ADHD is specifically related to impaired response inhibition. The results are consistent with the notion of a selective neurocognitive deficit in response inhibition in adult ADHD associated with abnormal functional activation in the prefrontal cortex, whilst ruling out likely group differences in attentional orienting, arousal and motivation. abstract_id: PUBMED:29339471 Laminar recordings in frontal cortex suggest distinct layers for maintenance and control of working memory. All of the cerebral cortex has some degree of laminar organization. These different layers are composed of neurons with distinct connectivity patterns, embryonic origins, and molecular profiles. There are little data on the laminar specificity of cognitive functions in the frontal cortex, however. We recorded neuronal spiking/local field potentials (LFPs) using laminar probes in the frontal cortex (PMd, 8A, 8B, SMA/ACC, DLPFC, and VLPFC) of monkeys performing working memory (WM) tasks. LFP power in the gamma band (50-250 Hz) was strongest in superficial layers, and LFP power in the alpha/beta band (4-22 Hz) was strongest in deep layers. Memory delay activity, including spiking and stimulus-specific gamma bursting, was predominately in superficial layers. LFPs from superficial and deep layers were synchronized in the alpha/beta bands. This was primarily unidirectional, with alpha/beta bands in deep layers driving superficial layer activity. The phase of deep layer alpha/beta modulated superficial gamma bursting associated with WM encoding. 
Thus, alpha/beta rhythms in deep layers may regulate the superficial layer gamma bands and hence maintenance of the contents of WM. abstract_id: PUBMED:38238909 A role of frontal association cortex in long-term object recognition memory of objects with complex features in rats. Perirhinal cortex is a brain area that has been considered crucial for object recognition memory (ORM). However, with the use of an ORM enhancer named RGS14414 as a gain-in-function tool, we show here that frontal association cortex, and not the perirhinal cortex, is essential for the ORM of objects with complex features that consisted of detailed drawings on the object surface (complex ORM). Expression of RGS14414 in the rat brain frontal association cortex induced the formation of long-term complex ORM, whereas the expression of the same memory enhancer in perirhinal cortex failed to produce this effect. Instead, RGS14414 expression in perirhinal cortex caused the formation of ORM of objects with simple features that consisted of the shape of the object (simple ORM). Further, a selective elimination of frontal association cortex neurons by treatment with the immunotoxin Ox7-SAP completely abrogated the formation of complex ORM. Thus, our results suggest that frontal association cortex plays a key role in the processing of high-order recognition memory information in the brain. abstract_id: PUBMED:15050513 Inhibition and the right inferior frontal cortex. It is controversial whether different cognitive functions can be mapped to discrete regions of the prefrontal cortex (PFC). The localisationist tradition has associated one cognitive function - inhibition - by turns with dorsolateral prefrontal cortex (DLPFC), inferior frontal cortex (IFC), or orbital frontal cortex (OFC). Inhibition is postulated to be a mechanism by which PFC exerts its effects on subcortical and posterior-cortical regions to implement executive control. We review evidence concerning inhibition of responses and task-sets. Whereas neuroimaging implicates diverse PFC foci, advances in human lesion-mapping support the functional localization of such inhibition to right IFC alone. Future research should investigate the generality of this proposed inhibitory function to other task domains, and its interaction within a wider network. abstract_id: PUBMED:31354440 Reduced Visual and Frontal Cortex Activation During Visual Working Memory in Grapheme-Color Synaesthetes Relative to Young and Older Adults. The sensory recruitment model envisages visual working memory (VWM) as an emergent property that is encoded and maintained in sensory (visual) regions. The model implies that enhanced sensory-perceptual functions, as in synaesthesia, entail a dedicated VWM-system, showing reduced visual cortex activity as a result of neural specificity. By contrast, sensory-perceptual decline, as in old age, is expected to show enhanced visual cortex activity as a result of neural broadening. To test this model, young grapheme-color synaesthetes, older adults and young controls engaged in a delayed pair-associative retrieval and a delayed matching-to-sample task, consisting of achromatic fractal stimuli that do not induce synaesthesia. While a previous analysis of this dataset (Pfeifer et al., 2016) has focused on cued retrieval and recognition of pair-associates (i.e., long-term memory), the current study focuses on visual working memory and considers, for the first time, the crucial delay period in which no visual stimuli are present, but working memory processes are engaged.
Participants were trained to criterion and demonstrated comparable behavioral performance on VWM tasks. Whole-brain and region-of-interest-analyses revealed significantly lower activity in synaesthetes' middle frontal gyrus and visual regions (cuneus, inferior temporal cortex), respectively, suggesting greater neural efficiency relative to young and older adults in both tasks. The results support the sensory recruitment model and can explain age and individual WM-differences based on neural specificity in visual cortex. abstract_id: PUBMED:12642175 Frontal cortex BDNF levels correlate with working memory in an animal model of Down syndrome. Individuals with Down syndrome (DS) develop most neuropathological hallmarks of Alzheimer's disease early in life, including loss of cholinergic markers in the basal forebrain. Ts65Dn mice, an animal model of DS, perform poorly on tasks requiring spatial memory and also exhibit basal forebrain pathology beginning around 6 months of age. We evaluated memory as well as brain-derived neurotrophic factor (BDNF) and nerve growth factor (NGF) protein levels in basal forebrain, frontal cortex, hippocampus, and striatum in Ts65Dn mice at the age when cholinergic degeneration is first observed, and compared values to normosomic controls. Six-month-old Ts65Dn mice exhibited impairments in working and reference memory as assessed on a water radial-arm maze. The working memory deficit was related to the inability of Ts65Dn mice to successfully sustain performance as the working memory load increased. Coupled with cognitive performance deficiencies, Ts65Dn mice also exhibited lower frontal cortex BDNF protein levels than controls. Further, BDNF levels were negatively correlated with working memory errors during the latter portion of testing in Ts65Dn mice, thereby suggesting that lower BDNF protein levels in the frontal cortex may be associated with the observed working memory impairment. abstract_id: PUBMED:14584557 Frontal cortex as the central executive of working memory: time to revise our view. For historical reasons (Bianchi, 1895; Harlow, 1968; Luria, 1966; Shallice, 1982), a specific link between the central executive of working memory and the frontal cortex was originally suggested by Baddeley (1986). This review discusses the evidence against such a univocal link. Two executive processes investigated in neuropsychology are discussed: inhibition (WCST, Stroop, proactive interference, go-no go, Stop signal and the Hayling test) and dual-task management. The evidence reviewed demonstrates (i) that executive processes involve links between different brain areas, not exclusively with the frontal cortex, (ii) that patients with no evidence of frontal damage present with executive deficits, and (iii) that patients with frontal lesions do not always show executive deficits. In conclusion, this review suggests that it is time for a more dynamic and flexible view of the neural substrate of executive processes to be considered. It also confirms, as recently suggested by Baddeley (1996, 1998a, 1998b), that the study of frontal patients cannot be used as a primary source of evidence to understand CE functions. abstract_id: PUBMED:35486121 The CannTeen study: verbal episodic memory, spatial working memory, and response inhibition in adolescent and adult cannabis users and age-matched controls. Background: Preclinical and human studies suggest that adolescent cannabis use may be associated with worse cognitive outcomes than adult cannabis use. 
We investigated the associations between chronic cannabis use and cognitive function in adolescent and adult cannabis users and controls. We hypothesised user-status would be negatively associated with cognitive function and this relationship would be stronger in adolescents than adults. Methods: As part of the 'CannTeen' project, this cross-sectional study assessed cognitive performance in adolescent cannabis users (n = 76; 16-17-year-olds), adolescent controls (n = 63), adult cannabis users (n = 71; 26-29-year-olds) and adult controls (n = 64). Users used cannabis 1-7 days/week. Adolescent and adult cannabis users were matched on cannabis use frequency (4 days/week) and time since last use (2.5 days). Verbal episodic memory (VEM) was assessed using the prose recall task, spatial working memory (SWM) was assessed using the spatial n-back task, and response inhibition was assessed with the stop-signal task. Primary outcome variables were: delayed recall, 3-back discriminability, and stop signal reaction time, respectively. Results: Users had worse VEM than controls (F(1,268) = 7.423, p = 0.007). There were no significant differences between user-groups on SWM or response inhibition. Null differences were supported by Bayesian analyses. No significant interactions between age-group and user-group were found for VEM, SWM, or response inhibition. Conclusions: Consistent with previous research, there was an association between chronic cannabis use and poorer VEM, but chronic cannabis use was not associated with SWM or response inhibition. We did not find evidence for heightened adolescent vulnerability to cannabis-related cognitive impairment. Answer: The association between response inhibition and working memory in adult ADHD has been investigated, and there is evidence suggesting a link to right frontal cortex pathology. A study assessing the relationship between response inhibition and working memory in adult patients with ADHD found that the stop-signal reaction time (SSRT) test, which measures response inhibition, was significantly associated with search errors on a spatial working memory (SWM) task in both adult ADHD patients and patients with lesions to the right frontal cortex. This association was not observed in patients with left frontal lesions. Furthermore, in patients with right frontal cortex damage, impaired performance on both the SSRT and SWM tasks was correlated with the volume of damage to the inferior frontal gyrus, indicating a potential common pathologic process underlying both response inhibition and working memory impairments in ADHD that could be related to right frontal-cortex abnormalities (PUBMED:17046725). Additional research supports the involvement of the right inferior frontal cortex (IFC) in motor response inhibition in adult ADHD. A study using functional magnetic resonance imaging (fMRI) found that ADHD patients showed increased commission errors and reduced activation in the inferior frontal cortex during response inhibition tasks, even when controlling for attentional processing. This suggests that hypoactivation in the right IFC in ADHD is specifically related to impaired response inhibition (PUBMED:24819224). Moreover, the right inferior frontal cortex has been implicated in inhibition processes more broadly. 
A review of evidence concerning inhibition of responses and task-sets supports the functional localization of such inhibition to the right IFC alone, suggesting a specific role for this region in the implementation of executive control (PUBMED:15050513). In summary, the evidence indicates that there is an association between response inhibition and working memory in adult ADHD, and this relationship may be linked to pathology in the right frontal cortex, particularly the right inferior frontal gyrus and right IFC. These findings suggest that the right frontal cortex plays a crucial role in the cognitive deficits observed in ADHD.
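As a clarifying note on the key measure in this answer: the stop-signal reaction time (SSRT) is not observed directly but estimated from the stop-signal task. Under the commonly used staircase (tracking) procedure, which adjusts the stop-signal delay until stopping succeeds on about half of the stop trials, SSRT is conventionally approximated as the mean go reaction time minus the mean stop-signal delay; for example, a mean go RT of 500 ms with a mean stop-signal delay of 250 ms yields an estimated SSRT of about 250 ms, and longer SSRTs indicate poorer response inhibition. The worked numbers here are illustrative only and are not taken from the cited studies.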
Instruction: Are patients with cirrhotic stage primary sclerosing cholangitis at risk for the development of hepatocellular cancer? Abstracts: abstract_id: PUBMED:9314129 Are patients with cirrhotic stage primary sclerosing cholangitis at risk for the development of hepatocellular cancer? Background/aims: The risk of cholangiocarcinoma in primary sclerosing cholangitis is widely recognized to be 8-30%, whereas the risk of acquiring hepatocellular carcinoma in primary sclerosing cholangitis is unknown. As in other chronic liver diseases, the presence of hepatocellular carcinoma in a patient with primary sclerosing cholangitis undergoing evaluation for orthotopic liver transplantation would clearly impact on the candidacy, diagnostic evaluation, and alternative treatment options. Thus, the aim of our study was to determine the prevalence of hepatocellular carcinoma in patients undergoing liver transplantation for primary sclerosing cholangitis. Methods: The records of the 520 patients undergoing orthotopic liver transplantation at our institution between 1985 and May 1995 were reviewed. Of the 134 patients with primary sclerosing cholangitis, three (2%) had hepatocellular carcinoma. In the 386 patients without primary sclerosing cholangitis undergoing orthotopic liver transplantation, 22 (6%) had hepatocellular carcinoma. Results: Neither the duration of primary sclerosing cholangitis (range 7-23 years) nor the presence of ulcerative colitis (two of three patients) distinguished those patients with primary sclerosing cholangitis plus hepatocellular carcinoma from those with primary sclerosing cholangitis alone. None of the three patients with primary sclerosing cholangitis plus hepatocellular carcinoma had evidence for hepatitis B or C, alpha-1-antitrypsin deficiency, or hemochromatosis. None of the tumors was of the fibrolamellar variety of hepatocellular carcinoma. Conclusions: The prevalence of hepatocellular carcinoma in patients with primary sclerosing cholangitis undergoing orthotopic liver transplantation is 2%. These data suggest that patients with advanced cirrhotic-stage primary sclerosing cholangitis are at increased risk for developing hepatocellular carcinoma and should be screened for hepatocellular carcinoma as well as for cholangiocarcinoma prior to orthotopic liver transplantation. abstract_id: PUBMED:28650098 Disparities in Eurotransplant liver transplantation wait-list outcome between patients with and without model for end-stage liver disease exceptions. The sickest-first principle in donor-liver allocation can be implemented by allocating organs to patients with cirrhosis with the highest Model for End-Stage Liver Disease (MELD) scores. For patients with other risk factors, standard exceptions (SEs) and nonstandard exceptions (NSEs) have been developed. We investigated whether this system of matched MELD scores achieves similar outcomes on the liver transplant waiting list for various diagnostic groups in Eurotransplant (ET) countries with MELD-based individual allocation (Belgium, the Netherlands, and Germany). A retrospective analysis of the ET wait-list outflow from December 2006 until December 2015 was conducted to investigate the relation of the unified MELD-based allocation to the risk of a negative wait-list outcome (death on the waiting list or delisting as too sick) as opposed to a positive wait-list outcome (transplantation or delisting as recovered). 
A total of 16,926 patients left the waiting list with a positive (11,580) or negative (5346) outcome; 3548 patients had an SE, and 330 had an NSE. A negative outcome was more common among patients without an SE or NSE (34.3%) than among patients with an SE (22.6%) or NSE (18.6%; P < 0.001). Analysis by model-based recursive partitioning detected 5 risk groups with different relations of matched MELD to a negative outcome. In Germany, we found the following: (1) no SE or NSE, SE for biliary sepsis (BS); (2) SE for hepatocellular carcinoma (HCC), hepatopulmonary syndrome (HPS), or portopulmonary hypertension (PPH); and (3) SE for primary sclerosing cholangitis (PSC) or polycystic liver disease (PcLD). In Belgium and the Netherlands, we found the following: (4) SE or NSE, or SE for HPS or PPH; and (5) SE for BS, HCC, PcLD, or PSC. In conclusion, SEs and NSEs do not even out risks across different diagnostic groups. Patients with SEs or NSEs appear advantaged relative to patients with cirrhosis without SEs or NSEs. Liver Transplantation 23 1256-1265 2017 AASLD. abstract_id: PUBMED:28611259 Prevalence, Risk Factors, and Survival of Patients with Intrahepatic Cholangiocarcinoma. Purpose: To investigate the prevalence, related risk factors, and survival of intrahepatic cholangiocarcinoma in a Mexican population. Material And Methods: We conducted a cross-sectional study at Medica Sur Hospital in Mexico City with approval of the local research ethics committee. We found cases by reviewing all clinical records of in-patients between October 2005 and January 2016 who had been diagnosed with malignant liver tumors. Clinical characteristics and comorbidities were obtained to evaluate the probable risk factors and the Charlson index. The cases were staged based on the TNM staging system for bile duct tumors used by the American Joint Committee on Cancer and median patient survival rates were calculated using the Kaplan-Meier method. Results: We reviewed 233 cases of hepatic cancer. Amongst these, hepatocellular carcinomas represented 19.3% (n = 45), followed by intrahepatic cholangiocarcinomas, which accounted for 7.7% (n = 18). The median age of patients with intrahepatic cholangiocarcinoma was 63 years, and most of them presented with cholestasis and intrahepatic biliary ductal dilation. Unfortunately, 89% (n = 16) of them were in an advanced stage and 80% had multicentric tumors. Median survival was 286 days among patients with advanced stage tumors (25th-75th interquartile range, 174-645 days). No correlation was found between the presence of comorbidities defined by the Charlson index and survival. We evaluated the presence of definite and probable risk factors for the development of intrahepatic cholangiocarcinoma, that is, smoking, alcohol consumption, and primary sclerosing cholangitis. Discussion: We found an overall prevalence of intrahepatic cholangiocarcinoma of 7.7%; unfortunately, these patients were diagnosed at advanced stages. Smoking and primary sclerosing cholangitis were the positive risk factors for its development in this population. abstract_id: PUBMED:29519259 'Will I receive a liver transplant in time?'; chance of survival of patients on the liver transplant waiting list Objective: To calculate the chance of receiving a liver transplant for patients on the liver transplant waiting list in the Netherlands. Design: Retrospective cohort research.
Method: Data of all patients in the Netherlands on the waiting list for liver transplantation, from the introduction of the model of end-stage liver disease score on 16th December 2006 through to 31st December 2013 were collected. Survival analysis was computed with competing risk analyses. Results: A total of 851 patients were listed, of whom 236 patients with hepatocellular carcinoma, 147 patients with primary sclerosing cholangitis, 142 patients with post-alcoholic liver disease, 93 patients with metabolic liver disease, 78 with viral hepatitis and 155 patients listed for other indications. The median waiting time till transplantation was 196 days. The chance to be transplanted at two years from listing was 65% and the risk of death was 17%. Patients with metabolic liver disease had the highest chance of undergoing liver transplantation. Patients with viral hepatitis were at highest risk of death while on the list, as well as having the lowest chance of undergoing liver transplantation. Conclusion: Our study shows a 65% chance of getting transplanted in time after a median waiting time of 6 months in the Netherlands. Sadly, 1 in 6 patients die before liver transplantation can be performed, with the highest risk of death occurring in patients with viral hepatitis. abstract_id: PUBMED:20819196 Multicentric evaluation of model for end-stage liver disease-based allocation and survival after liver transplantation in Germany--limitations of the 'sickest first'-concept. Since the introduction of model for end-stage liver disease (MELD) in 2006, post-orthotopic liver transplantation (OLT) survival in Germany has declined. The aim of this study was to evaluate risk factors and prognostic scores for outcome. All adult OLT recipients in seven German transplant centers after MELD implementation (December 2006-December 2007) were included. Recipient data were analyzed for their influence on 1-year outcome. A total of 462 patients (mean calculated MELD = 20.5, follow-up: 1 year) were transplanted for alcoholic cirrhosis (33.1%), hepatocellular carcinoma (26.6%), Hepatitis-C (17.1%), Hepatitis-B (9.5%), primary sclerosing cholangitis (5.6%) and late graft-failure after first OLT before December 2006 (8.7%). 1-year patient survival was 75.8% (graft survival 71.2%) correlating with MELD parameters and serum choline esterase. MELD score >30 [odds ratio (OR) = 4.17, confidence interval: 2.57-6.78, 12-month survival = 52.6%, c-statistic = 0.669], hyponatremia (OR = 2.07), and pre-OLT hemodialysis (OR = 2.35) were the main death risk factors. In alcoholic cirrhosis (n = 153, mean MELD = 21.1) and hepatocellular carcinoma (n = 123, mean MELD = 13.5), serum bilirubin and the survival after liver transplantation score were independent outcome parameters, respectively. MELD >30 currently represents a major risk factor for outcome. Risk factors differ in individual patient subgroups. In the current German practice of organ allocation to sicker patients, outcome prediction should be considered to prevent results below acceptable standards. abstract_id: PUBMED:23295054 Activation of FoxO3a/Bim axis in patients with Primary Biliary Cirrhosis. Background/aims: Impaired regulation of apoptosis has been suggested to play a role in the pathogenesis of Primary Biliary Cirrhosis (PBC). In this study, we analysed a signalling pathway that comprises the transcription factor FoxO3a and its downstream target Bim, a Bcl-2 interacting mediator of apoptosis.
Materials & Methods: The tissues examined included livers explanted from patients with cirrhotic PBC, primary sclerosing cholangitis (PSC), alcoholic liver disease (ALD) and liver biopsies from patients with non-cirrhotic PBC. Large margin resections of hepatocellular carcinoma were used as controls. Results: Expression of FoxO3a and Bim mRNA was significantly enhanced in both non-cirrhotic and end-stage PBC (2.2-fold and 4.3-fold increases, respectively), but not in the other disorders. Similarly, FOXO3a protein level was increased in end-stage PBC (P < 0.05 vs. control). A significant increase in Bim mRNA in non-cirrhotic and cirrhotic PBC was observed (2.2-fold and 8.2-fold respectively). In addition, the most pro-apoptotic isoform of Bim dominated in livers of PBC patients (2.5-fold increase vs. control; P < 0.05). Enhanced FoxO3a and Bim expression was associated with a substantial activation of caspase-3 in PBC (2-fold increase vs. controls; P < 0.0001), whereas it was decreased in both ALD and PSC (46% and 67% reductions respectively). The relationship between FoxO3a and Bim was further investigated in the livers of FoxO-deficient mice. The somatic deletion of FoxO3a caused a significant decrease in Bim, but not caspase-3, protein expression, confirming the crucial role of FoxO3a in induction of Bim gene transcription. Conclusions: Our results imply that the FoxO3/Bim signalling pathway can be of importance in the livers of patients with PBC. abstract_id: PUBMED:26501124 Volatile Biomarkers in Breath Associated With Liver Cirrhosis - Comparisons of Pre- and Post-liver Transplant Breath Samples. Background: The burden of liver disease in the UK has risen dramatically and there is a need for improved diagnostics. Aims: To determine which breath volatiles are associated with the cirrhotic liver and hence diagnostically useful. Methods: A two-stage biomarker discovery procedure was used. Alveolar breath samples of 31 patients with cirrhosis and 30 healthy controls were mass spectrometrically analysed and compared (stage 1). 12 of these patients had their breath analysed after liver transplant (stage 2). Five patients were followed longitudinally as in-patients in the post-transplant period. Results: Seven volatiles were elevated in the breath of patients versus controls. Of these, five showed a statistically significant decrease post-transplant: limonene, methanol, 2-pentanone, 2-butanone and carbon disulfide. On an individual basis limonene has the best diagnostic capability (the area under a receiver operating characteristic curve (AUROC) is 0.91), but this is improved by combining methanol, 2-pentanone and limonene (AUROC curve 0.95). Following transplant, limonene shows wash-out characteristics. Conclusions: Limonene, methanol and 2-pentanone are breath markers for a cirrhotic liver. This study raises the potential to investigate these volatiles as markers for early-stage liver disease. By monitoring the wash-out of limonene following transplant, graft liver function can be non-invasively assessed. abstract_id: PUBMED:9362191 Functional measurement of nonfibrotic hepatic mass in cirrhotic patients. Objectives: We have postulated that the perfused hepatic mass (PHM) can be estimated by quantitative (volumetric) liver spleen scan (QLSS) using single photon emission computed tomography assessment of sulfur colloid distribution between liver, spleen, and bone marrow. Thus, this parameter should correlate with the amount of functioning tissue in the liver.
As a "gold standard" estimate of the nonfibrotic functioning hepatic mass, the weight of the liver at autopsy or transplant was corrected for the amount of scar tissue present. QLSS parameters were correlated with functional hepatic mass in 13 patients with advanced liver disease with liver available at transplant (8 patients) or autopsy (5 patients) who had prior QLSS. Methods: Greater than 1000 mm² of liver tissue was assessed histologically in all patients and from more than 2 regions of the liver in 9 of 13 patients. The total fibrosis score (TFS) (range, 0-17.5) was calculated as a semiquantitative estimate of hepatic fibrosis. The ratio of functioning tissue was calculated as (1 - TFS/20) and the amount of functioning tissue as the nonfibrotic weight (NFW): NFW = liver weight x (1 - TFS/20). QLSS parameters were measured postprandially and 30 min after injection of 5 mCi of technetium Tc 99m sulfur colloid. Pixel and total counts from the liver, spleen, and bone marrow as well as organ length were measured. Liver/bone marrow index and liver/spleen index were calculated. The perfused hepatic mass (PHM) was defined as the mean of the liver/bone marrow index and liver/spleen index. Results: All patients had cirrhosis: alcoholic (1 patient), alcoholic with alcoholic hepatitis (1 patient), hepatitis B (3 patients), hepatitis C (6 patients), hepatitis C with hepatocellular carcinoma (1 patient), and primary sclerosing cholangitis (n = 1). The ratio of functioning tissue was 0.54 +/- 0.07; liver weight 1215 +/- 317 g; and NFW = 658 +/- 193 g. The PHM = 55 +/- 14. The PHM calculated from the QLSS correlated strongly with the NFW (functioning tissue) at autopsy/transplant: NFW = 13 PHM - 55 (r = 0.9505; p < 0.0001). Conclusions: In cirrhotic patients (a) we have confirmed that the sulfur colloid distribution by QLSS is determined by the perfused hepatic mass, and (b) the amount of functioning tissue can be precisely estimated by QLSS parameters. abstract_id: PUBMED:26182318 Overlap syndrome of autoimmune hepatitis and primary sclerosing cholangitis complicated with hepatocellular carcinoma. Development of hepatocellular carcinoma (HCC) in patients with autoimmune liver disease is less common than in those with other types of chronic liver disease. Here we report a rare case of overlap syndrome consisting of autoimmune hepatitis (AIH) and primary sclerosing cholangitis (PSC) that was subsequently complicated with HCC. A 72-year-old man was initially diagnosed as being in the cirrhotic stage of AIH on the basis of blood chemistry tests and histological examinations. Computed tomography and magnetic resonance cholangiography 20 months later showed diffuse stricturing of the intrahepatic bile duct with dilatation of the areas between the strictures, compatible with the findings of PSC, which resulted in a diagnosis of AIH/PSC overlap syndrome. The level of serum protein induced by vitamin K absence or antagonist II increased 22 months later, and HCC was diagnosed by radiological examinations. Four cycles of transarterial infusion therapy with cisplatin were performed, but the patient died one year later. Sequential overlap of PSC may have played a part in accelerating AIH disease progression, leading to the development of HCC in this patient. Therefore, HCC surveillance may be important in advanced stages of autoimmune disease, especially in the cirrhotic stage.
abstract_id: PUBMED:33544906 De Novo Colorectal and Pancreatic Cancer in Liver-Transplant Recipients: Identifying the Higher-Risk Populations. Background And Aims: Gastrointestinal (GI) malignancies are common after liver transplantation. The aim of this study was to identify the risk and timing of the more common GI malignancies, colorectal and pancreatic cancer, to aid in optimizing potential posttransplant screening practices. Approach And Results: Data from the United Network for Organ Sharing database of all adult liver-transplant recipients from 1997 to 2017 were analyzed and a comparison made with cancer incidence from general population data using Surveillance, Epidemiology, and End Results data. Of 866 de novo GI malignancies, 405 colorectal and 216 pancreas were identified. The highest cumulative incidence for colorectal cancer occurred in recipients with primary sclerosing cholangitis (PSC), recipients over the age of 50 with non-alcoholic steatohepatitis (NASH) and hepatocellular carcinoma (HCC)/cholangiocarcinoma (CCA), and females >50 years with alcohol-associated liver disease and HCC/CCA, with risk increasing above the general population within 5 years of transplant. Patients with PSC and HCC/CCA or NASH and HCC/CCA have the highest cumulative incidence of pancreatic cancer also rising within 5 years following transplant, with those patients >50 years old conferring the highest risk. Conclusions: These data identify a high-risk cohort that warrants consideration for intensified individualized screening practices for colorectal cancer after liver transplantation. In addition to recipients with PSC, further study of recipients with NASH and HCC/CCA and females with alcohol-associated liver disease and HCC/CCA may be better tailored to colorectal cancer screening ideals. Higher-risk patient populations for pancreatic cancer (PSC and NASH with HCC/CCA) would benefit from further study to determine potential screening practices. GI malignancies occur at higher rates in liver-transplant patients compared with the general population. In the era of individualized medicine, this study identifies the highest-risk transplant recipients (PSC and NASH cirrhosis with coexisting HCC/CCA) who may benefit from altered screening practices for these malignancies. Answer: Yes, patients with advanced cirrhotic-stage primary sclerosing cholangitis (PSC) are at increased risk for developing hepatocellular carcinoma (HCC). A study aimed at determining the prevalence of HCC in patients undergoing liver transplantation for PSC found that 2% of the patients with PSC had HCC. This suggests that even though the risk of cholangiocarcinoma is more widely recognized in PSC, there is still a significant risk for HCC development in these patients. The study concluded that patients with advanced cirrhotic-stage PSC should be screened for both HCC and cholangiocarcinoma prior to orthotopic liver transplantation (PUBMED:9314129). Additionally, another study reported a case of overlap syndrome consisting of autoimmune hepatitis (AIH) and PSC that was complicated with the development of HCC, indicating that surveillance for HCC may be important in advanced stages of autoimmune liver diseases, especially in the cirrhotic stage (PUBMED:26182318).
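A worked numerical illustration of the formulas reported in the QLSS abstract above (PUBMED:9362191), using only that abstract's own mean values rather than any new data: with a mean functioning-tissue ratio of 0.54 and a mean liver weight of 1215 g, the nonfibrotic weight works out to NFW = 1215 x 0.54 ≈ 656 g, in line with the reported 658 +/- 193 g; likewise, inserting the mean perfused hepatic mass of 55 into the reported regression NFW = 13 PHM - 55 gives 13 x 55 - 55 = 660 g, again consistent with the measured value. This arithmetic simply restates the abstract's reported means and regression to show how the two estimates of functioning hepatic mass agree.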
Instruction: Dorsal dartos flap in Snodgrass hypospadias repair: how to use it? Abstracts: abstract_id: PUBMED:29118537 Choosing an ideal vascular cover for Snodgrass repair. Aim: The aim of this study is to compare tunica vaginalis (TV), dorsal dartos, and ventral dartos flap as a second layer vascular cover during Snodgrass repair. Materials And Methods: Data of 83 patients who underwent primary hypospadias repair with the Snodgrass technique (age range: 1.6-12 years) were retrospectively collected and compared. They were divided into three groups. Group A (26 patients) included cases using TV flap, Group B (36 patients) included those where dorsal dartos from prepuce was used as second cover, and Group C (21 patients) included those with ventral dartos as cover. Results: In Group A, no complications were recorded. Mild scrotal edema was present in 5 patients and was managed conservatively. In Group B, there were 8 fistulas, 2 glans breakdowns, and 1 meatal stenosis. In Group C, there were 3 fistulas and 1 glans breakdown. Conclusion: TV flap is better than dorsal dartos and ventral dartos as vascular cover for primary hypospadias repair with the Snodgrass technique. abstract_id: PUBMED:17406134 Dorsal double-layer dartos flap for preventing fistulae formation in the Snodgrass technique. Introduction: The Snodgrass technique and its modifications have become a preferred method for all varieties of hypospadias in the past decade. However, fistula is the most common complication of this technique. The aim of this study was to investigate the importance of the single and double flap to prevent fistula formation in the Snodgrass procedure. Materials And Methods: Tubularized incised plate urethroplasty, using a single or a double flap, was undertaken in 74 consecutive boys (median age 6.6 years, range 1-15) within the last 4 years. In the first 29 patients (group 1), a dorsolateral flap was rotated laterally for covering the neourethra and in the remaining 45 patients (group 2) the neourethra was covered with dorsal double dartos flaps. Result: In group 1, a fistula was detected in 4 patients and partial glanular dehiscence in 1 patient. There was no fistula formation in group 2. Conclusion: For preventing fistula formation, urethral covering by a well-vascularized dorsal double-layer dartos flap should be the basic part of the Snodgrass procedure.
Conclusion: We suggest that urethral covering should be part of the Snodgrass procedure. A dorsal well-vascularized dartos flap, buttonholed ventrally, is a good choice for preventing fistula. Redundancy of the flap and its excellent vascularization depend on the harvesting technique. abstract_id: PUBMED:19857999 Snodgrass hypospadias repair with onlay overlapping double-layered dorsal dartos flap without urethrocutaneous fistula: experience of 156 cases. Objective: To evaluate the neourethra covering created by a vascularized overlapping double-layered dorsal dartos flap for preventing urethrocutaneous fistula in the Snodgrass hypospadias repair (tubularized incised plate). Patients And Methods: Between March 2003 and January 2008, 156 boys (mean age, 4.5 years) were enrolled for hypospadias repair. Preoperative position of the urethral meatus was subcoronal in 37, at the distal shaft in 61 and mid-shaft in 58 boys. All patients underwent the Snodgrass hypospadias repair. The neourethra was then covered with an overlapping double-layered dorsal dartos flap before glans and skin closure. Results: All 156 patients underwent successful reconstruction. With a mean follow-up of 23 months (range 6-42), all boys had a satisfactory subjective cosmetic and functional result with a vertically oriented, slit-like meatus at the tip of the glans. No urethrocutaneous fistula or urethral stenosis occurred. Conclusion: As the neourethra covering is an integral part of the Snodgrass hypospadias repair, a dorsal well-vascularized double-layered dartos flap is a good choice for preventing urethrocutaneous fistula formation. abstract_id: PUBMED:16707207 Longitudinal dorsal dartos flap for prevention of fistula after a Snodgrass hypospadias procedure. Objectives: The Snodgrass technique represents the procedure of choice for distal hypospadias. Fistula formation is the most common complication, with varying reported rates. We evaluated the importance of a urethral covering using vascularized dorsal subcutaneous tissue for fistula prevention. Methods: Our study included 126 patients, aged 10 months to 16 years, who underwent hypospadias repair from April 1998 through June 2005. Of the patients, 89 had distal, 30 had midshaft and 7 had penoscrotal hypospadias. All patients underwent standard tubularized incised plate urethroplasty, which was followed by reconstruction of new surrounding urethral tissue. A longitudinal dorsal dartos flap was harvested and transposed to the ventral side by the buttonhole manoeuvre. The flap was sutured to the glans and the corpora cavernosa to completely cover the neourethra with well-vascularized subcutaneous tissue. Results: Mean follow-up was 32 (6-87) months. A successful result without fistula was achieved in all 126 patients. In six patients, temporary stenosis of the glandular urethra occurred and was resolved by dilation. Conclusions: A urethral covering should be performed as part of the Snodgrass procedure. A dorsal well-vascularized dartos flap that is buttonholed ventrally represents a good choice for fistula prevention. Redundancy of the flap and its excellent vascularization depend on the harvesting technique.
Background and objective Tubularized incised plate (TIP) urethroplasty is an easy and popular technique for repairing hypospadias; however, urethrocutaneous fistula (UCF) is a frequently reported complication. Different techniques are used to reduce this complication. We aimed to compare the rate of UCF after single dartos and double dartos TIP urethroplasty in children with distal and mid penile hypospadias. Methods A randomized controlled trial (NCT04699318) was conducted in the Department of Pediatric Surgery, Mayo Hospital, Pakistan from August 2017 to February 2018, after ethical approval. After informed consent, a total of 60 patients with distal and mid penile hypospadias who were uncircumcised and had no chordee and no previous surgery were randomly allocated to two groups using computer-generated table numbers. Group A underwent single dartos TIP urethroplasty and Group B underwent double dartos TIP urethroplasty. The catheter was removed on day 10 post-operatively in both groups and the primary outcome (UCF) was noted after a week of catheter removal. The rate of UCF was compared using the chi-square test, and a p-value of <0.05 was taken as significant. Data were stratified to check for effect modifiers. Results Out of 60 children, eight (13.3%) developed UCF. In Group A, seven (23.3%) developed UCF and in Group B, one (3.3%) developed UCF (p-value 0.02). In both groups, no patient (0%) had urethral disruption, penile torsion, skin necrosis or meatal stenosis. Conclusion Additional covering of the neo-urethra by a double dartos layer significantly reduces the fistula rate after tubularized incised plate urethroplasty in both primary distal and mid penile hypospadias. abstract_id: PUBMED:21442394 Comparison of dartos flap and dartos flap plus spongioplasty to prevent the formation of fistulae in the Snodgrass technique. Objective: The aim of our study was to evaluate the role of paraurethral spongial tissue plus dartos flap using an additional urethral cover to prevent fistula formation in patients who underwent surgery with the Snodgrass technique. Patients And Methods: A retrospective study was performed on 161 patients aged 10 months to 15 years who underwent midpenile and distal hypospadias repair using the Snodgrass technique. The patients were assigned to one of two groups. In Group I (75 patients), the neourethra was covered with the dartos flap, and in Group II (86 patients), the neourethra was covered with the dartos flap plus spongioplasty. Results: Urethral fistulae were encountered in six cases (8%) in Group I, and no fistulae were encountered in Group II. Conclusion: The use of corpus spongiosum as an intermediate layer in urethral coverage, combined with the dartos flap, reduces the likelihood of fistula formation. This procedure can be applied easily and effectively to prevent the formation of fistulae. abstract_id: PUBMED:26783086 Comparison of tubularized incised plate urethroplasty combined with a meatus-based ventral dartos flap or dorsal dartos flap in hypospadias. Purpose: Tubularized incised plate urethroplasty (TIPU) is the preferred surgical option for distal and mid-shaft hypospadias repair. Neourethra dartos flap coverage is routinely used as a protective layer with good results. We modified the meatus-based ventral dartos flap (MBVDF) to TIPU by dissecting the proximal mid-ventral dartos attached urethra and leaving the subcutaneous fascia connecting the meatus, and retrospectively compared the outcomes of using MBVDF with a single dorsal dartos flap (DDF) on the complication rates of TIPU.
Methods: We present 2 surgeons' experiences with 356 patients with distal and mid-shaft hypospadias between January 2010 and December 2014. Patients were divided into two groups. Group DDF included 185 patients (mean age 29 months) who underwent TIPU with DDF rotated laterally covering the suture lines of the neourethra. Group MBVDF included 171 patients (mean age 26 months) who underwent TIPU with MBVDF covering the suture lines of the neourethra. Statistical analysis of baseline patient information and complications was performed with the two-independent-sample t test and the chi-square test or Fisher's exact test. Results: There were no statistical differences in age, type of hypospadias, and follow-up time between the two groups. The mean operative time in the group MBVDF (68.93 ± 8.32 min) was significantly shorter than in the group DDF (73.60 ± 9.06 min). Ventral skin necrosis (2.7%) and penile rotation (3.8%) were significantly more frequent in group DDF than in group MBVDF, in which neither occurred. The differences in other complication rates including fistula rate (2.7 vs 2.9%) between the groups were not statistically significant. Conclusion: DDF and MBVDF with TIPU are similarly effective methods for decreasing fistula in hypospadias repair. MBVDF with TIPU may be an easier method and can avoid ventral skin necrosis and penile rotation. abstract_id: PUBMED:18758223 Dorsal dartos flap in Snodgrass hypospadias repair: how to use it? Purpose: To introduce a new technique for covering the neourethra with dorsal dartos subcutaneous tissue in Snodgrass hypospadias repair. Material And Methods: The study included 26 patients with primary hypospadias, aged 2-22 years (average 7.86 years), operated on from June 2002 to August 2006. Of the patients, 21 had distal, 3 midshaft and 2 penoscrotal hypospadias. The standard technique of tubularized incised plate (TIP) with double-layer covering of the neourethra by subcutaneous tissue was used in all cases of reconstruction. The mean follow-up period was 4.5 months (range 3-12 months).
The patients were divided into two groups; in group A the urethra was covered by a single layer of dartos fascia, and in group B the urethra was covered by double layers of dartos flap. Results: Of 91 patients who opted for hypospadias repair during the time of the study, 62 had a TIP urethroplasty with a dorsal dartos flap; of these 62, three did not fulfil the requirement of the minimum follow-up, so 59 were eligible for the study (32 in group A and 27 in group B). Preoperative clinical data were comparable in both groups. At a mean of 12.2 months of follow-up, there was no reported fistula in group B, while two patients in group A developed a urethrocutaneous fistula (P = 0.19). Meatal stenosis occurred in two patients in group A and one in group B (P = 0.66). Conclusion: There was no significant difference in subsequent urethrocutaneous fistula between a double-layered dorsal dartos flap and single layer for covering the urethra as a part of TIP urethroplasty for repairing hypospadias. Answer: The dorsal dartos flap is used in Snodgrass hypospadias repair as a vascularized tissue layer to cover the neourethra and prevent fistula formation, which is a common complication of the procedure. The technique involves harvesting a flap of well-vascularized subcutaneous tissue from the dorsal side of the penis and transposing it to the ventral side to cover the neourethra. This can be done by a buttonhole maneuver, where the flap is passed through a small opening to reach the ventral side, and then sutured to the glans wings around the neomeatus and to the corpora cavernosa over the neourethra (PUBMED:15892822, PUBMED:16707207). Different modifications of the dorsal dartos flap have been described, including the use of a single-layered flap (PUBMED:26579282), a double-layered flap (PUBMED:17406134, PUBMED:19857999), and an overlapping double-layered flap (PUBMED:19857999). The double-layered dorsal dartos flap has been shown to be particularly effective in preventing fistula formation, with studies reporting no fistulae when this method was used (PUBMED:17406134, PUBMED:19857999). In one study, the dorsal dartos flap was used in a double-layer covering technique for the neourethra, resulting in a successful outcome without fistula formation in all patients (PUBMED:18758223). Another study compared the use of a single dorsal dartos flap with a meatus-based ventral dartos flap and found that both methods were similarly effective in decreasing fistula rates, with the ventral approach potentially being easier and avoiding complications such as ventral skin necrosis and penile rotation (PUBMED:26783086). Overall, the dorsal dartos flap is a critical component of the Snodgrass hypospadias repair, providing a well-vascularized tissue layer to protect the neourethra and significantly reduce the risk of fistula formation. The technique for harvesting and positioning the flap is important for ensuring adequate vascularization and preventing complications.
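A quick arithmetic check of the randomized comparison cited in this answer (PUBMED:33754103), included only to show where the reported p-value comes from: with 7 of 30 fistulas after single-dartos repair versus 1 of 30 after double-dartos repair, the expected count under the null hypothesis is 4 fistulas per group, and an uncorrected 2 x 2 chi-square statistic is (7-4)^2/4 + (1-4)^2/4 + (23-26)^2/26 + (29-26)^2/26 ≈ 5.2 on 1 degree of freedom, giving p ≈ 0.02, which matches the trial's reported value. Note that this recomputation assumes the simple uncorrected chi-square test; with a continuity correction or Fisher's exact test the p-value would be somewhat larger.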
Instruction: Does age at the time of elective cardiac surgery or catheter intervention in children influence the longitudinal development of psychological distress and styles of coping of parents? Abstracts: abstract_id: PUBMED:12645496 Does age at the time of elective cardiac surgery or catheter intervention in children influence the longitudinal development of psychological distress and styles of coping of parents? Aims: To assess the influence of age at a cardiac procedure of children, who underwent elective cardiac surgery or interventional cardiac catheterisation for treatment of congenital cardiac defects between 3 months and 7 years of age, on the longitudinal development of psychological distress and styles of coping of their parents. Methods: We used the General Health Questionnaire to measure psychological distress, and the Utrecht Coping List to measure styles of coping. Parents completed questionnaires on average respectively 5 weeks prior to, and 18.7 months after, cardiac surgery or catheter intervention for their child. Results: Apart from one exception, no significant influence was found of the age at which children underwent elective cardiac surgery or catheter intervention on the pre- to postprocedural course of psychological distress and the styles of coping of their parents. Across time, parents of children undergoing surgery reported, on average, significantly higher levels of psychological distress than parents of children who underwent catheter intervention. After the procedure, parents of children who underwent either procedure reported significantly lower levels of psychological distress, and showed a weaker tendency to use several styles of coping, than did their reference groups. Conclusion: Age of the children at the time of elective cardiac surgery or catheter intervention did not influence the course of psychological distress of their parents, nor the styles of coping used by the parents. Future research should investigate in what way the age at which these cardiac procedures are performed influences the emotional and cognitive development of the children. abstract_id: PUBMED:10824905 Psychological distress and styles of coping in parents of children awaiting elective cardiac surgery. Aims: We sought to assess the level of psychological distress, and the styles of coping of, parents of children with congenital heart disease. The study was based on questionnaires, which were completed, on average, four weeks, with a range from 0.1 to 22.1 weeks, prior to elective cardiac surgery or elective catheter intervention. Methods: We used the General Health Questionnaire, and the Utrecht Coping List, to compare scores from parents of those undergoing surgery, with scores of reference groups, and with scores of the parents of those undergoing intervention. Results: Overall, in comparison with our reference groups, the parents of the 75 children undergoing surgery showed elevated levels of psychological distress, manifested as anxiety, sleeplessness, and social dysfunctioning. They also demonstrated less adequate styles of coping, being, for example, less active in solving problems. With only one exception, no differences were demonstrated in parental reactions to whether cardiac surgery or catheter intervention had been planned. The mothers of the 68 patients who were to undergo cardiac surgery, however, reported greater psychological distress and manifested greater problems with coping than did the fathers. 
Conclusion: Elevated levels of psychological distress, and less adequate styles of coping, were found in the parents of patients about to undergo cardiac surgery, especially the mothers, when compared to reference groups. Future research should investigate whether these difficulties persist, and whether this will influence the emotional development of their children with congenital cardiac malformations. abstract_id: PUBMED:15691401 Psychological functioning in parents of children undergoing elective cardiac surgery. Purpose: To assess levels of distress, the marital relationship, and styles of coping of parents of children with congenital heart disease, to evaluate any change in these parameters following elective cardiac surgery for their child, and to compare these parents with parents of children undergoing another form of hospital treatment, and with parents of healthy children. Design: A prospective study in which parents were assessed the day before the surgical procedure being undergone by their child, and 12 months afterwards. Participants: We assessed three groups of parents of 75 children, aged from birth to 16.9 years. The first was a group whose children were undergoing surgery because of congenital heart disease, the second was a group whose children were undergoing transplantation of bone marrow, and the third was a group whose children were healthy. Measures used for assessment included the General Health Questionnaire, the Dyadic adjustment scale, and the Utrecht coping list. Results: Parents in both groups of children undergoing surgery had significantly higher rates of distress prior to the surgical procedures than did the parents of the healthy children, but within those whose children were undergoing cardiac surgery, there were no differences between parents of children with cyanotic and acyanotic lesions. Following treatment, there was a significant reduction in the levels of distress in both groups whose children had undergone surgery. There were few differences between any of the groups on the other parameters, and the evaluated indexes showed stability over time. Conclusion: Despite elevated levels of psychological distress prior to surgical procedures, which had fallen after one year, the stability of other parameters of parental functioning over time suggests that the surgical interventions are of less significance than either factors attributable to the presence of chronic illness, or the individual characteristics of the parents. abstract_id: PUBMED:35999437 Relationship of depression, impulsivity, distress intolerance and coping styles with maladaptive eating patterns in bariatric candidates. Purpose: The study aimed to investigate the problematic eating patterns and understand their relationship to psychological constructs, including stress intolerance, coping mechanisms and impulsivity, and psychiatric symptoms among bariatric surgery candidates. Methods: The bariatric candidates were evaluated by psychiatric interview and standard scales assessing maladaptive eating behaviors (Eating Attitudes Test (EAT), Bulimia Investigatory Test-Edinburgh (BITE), Dutch Eating Behavior Questionnaire (DEBQ)), depression (Beck Depression Inventory (BDI)), psychiatric symptoms (Brief Symptom Inventory (BSI)), and psychological constructs (Distress Intolerance Index (DSI), Coping Styles Scale (CSS), UPPS Impulsive Behavior Scale(UPPS)). Results: More than half (57.8%) had maladaptive eating behaviors, and 23.6% had binge-eating behavior. 
Depression and anxiety predicted EAT, BITE, and DEBQ emotional and external eating sub-scale scores; distress intolerance, helpless coping style, and impulsivity predicted maladaptive eating behaviors in bariatric candidates. Conclusion: Maladaptive eating patterns play an essential role in the failure to lose weight and in weight regain and are predicted by depression, anxiety, and psychological constructs in this study. Evaluation of pathological trait characteristics besides discrete psychiatric syndromes should be recommended in the pre-operation process to plan relevant interventions in the long-term management of weight. Level Of Evidence: Level III, evidence obtained from well-designed cohort analytic studies. abstract_id: PUBMED:36068609 Association between psychological distress at each point of the treatment of esophageal cancer and stress coping strategy. Background: Patients with esophageal cancer often feel depressed and are fearful of metastasis and death. Esophagectomy is an invasive procedure with a high incidence of complications. The objective of this study was to examine the association between psychological distress at each point of the treatment of esophageal cancer and stress coping strategy. Methods: In total, 102 of 152 consecutive patients who attended the outpatient clinic at Toranomon Hospital between April 2017 and April 2019 met the eligibility criteria for inclusion in this study. Questionnaires designed to identify psychological distress and stress coping strategies were longitudinally administered at 5 time points from the time of the first outpatient consultation to 3 months after esophagectomy. Results: Although 'fighting spirit' (OR 0.836, 95% CI 0.762-0.918; p < 0.001) and 'anxious preoccupation' (OR 1.482, 95% CI 1.256-1.748; p < 0.001) were strongly related to psychological distress before treatment, as treatment progressed, 'helpless/hopeless' (OR 1.337, 95% CI 1.099-1.626; p = 0.004) became strongly related to psychological distress after esophagectomy. There were no relationships between psychological distress and individual patient characteristics, with the exception of 'history of surgery' and 'final staging'. The concordance index was 0.864 at time 1, 0.826 at time 2, 0.839 at time 3, 0.830 at time 4, and 0.840 at time 5. Conclusions: The relationship between psychological distress and coping strategies was stronger at each point of the treatment of esophageal cancer than that between psychological distress and individual patient characteristics. This study uses prospective basic clinical data and may provide baseline information for risk stratification for psychological management and for future clinical studies in these patients. abstract_id: PUBMED:31573371 Psychological distress and coping following eye removal surgery. Purpose: Psychological distress is reasonably well documented in people with facial disfigurement; however, in patients following eye removal surgery this has not been studied adequately. We hypothesised that lower distress levels would be associated with age and more adaptive coping strategies and that women would be more likely to report higher levels of distress and, therefore, use maladaptive coping strategies. Methods: This exploratory, cross-sectional study measured distress and coping in a sample of 56 post-enucleation or evisceration patients.
The Hospital Anxiety and Depression Scale and the Brief COPE measured distress and coping strategies. Results: In all, 25.5% and 10.9% of the sample had high levels of anxiety and depression, respectively. Significant associations were found between levels of distress, coping strategies and demographic variables (p < .05). There were significant differences in coping strategies between those with higher and lower levels of distress (p < .05). Females reported higher levels of anxiety (U = 202.5, p < .01) and depression (U = 229, p < .05) than males. Those who experienced enucleation or evisceration aged between 20 and 39 years reported significantly higher levels of depression compared with other age groups (U = 68.5, p < .01). Conclusions: There was a relatively low level of distress across the whole sample, but we found high levels of distress in a considerable proportion (18.18%) of participants. Participants' coping strategies and levels of distress were correlated. Females and participants aged between 20 and 39 years at the time of eye removal were particularly vulnerable to distress. abstract_id: PUBMED:17483394 Psychosocial mediation of religious coping styles: a study of short-term psychological distress following cardiac surgery. Although religiousness and religious coping styles are well-documented predictors of well-being, research on the mechanisms through which religious coping styles operate is sparse. This prospective study examined religious coping styles, hope, and social support as pathways of the influence of general religiousness (religious importance and involvement) on the reduced postoperative psychological distress of 309 cardiac patients. Results of structural equation modeling indicated that, controlling for preoperative distress, gender, and education, religiousness contributed to positive religious coping, which in turn was associated with less distress via a path fully mediated by the secular factors of social support and hope. Furthermore, negative religious coping styles, although correlated at the bivariate level with preoperative distress but not with religiousness, were associated both directly and indirectly with greater post-operative distress via the same mediators. abstract_id: PUBMED:23991679 A randomized controlled trial of the effectiveness of a therapeutic play intervention on outcomes of children undergoing inpatient elective surgery: study protocol. Aim: To report a trial protocol to determine if a therapeutic play intervention leads to significant reduction in perioperative anxiety, negative emotional manifestations and postoperative pain of children undergoing inpatient elective surgery and in their parents' perioperative anxiety. Background: Children undergoing surgery often experience anxiety and postoperative pain and exhibit negative emotional manifestations pre-operatively. Previous studies report that therapeutic play intervention has positive effects on anxiety reduction, while few studies have examined the effects of such intervention on children undergoing major elective surgery. Design: Randomized controlled trial with repeated measures is proposed. Methods: This study will recruit 106 pairs of 6-14-year-old children undergoing elective surgery in a Singaporean public hospital and their parents (protocol approved in October 2011). Eligible participants will be randomly allocated to either a control group (receiving routine care) or an experimental group (receiving a 1-hour therapeutic play intervention plus routine care).
Outcome measures include children's anxiety, emotional manifestation and postoperative pain, their parents' anxiety and process evaluation. Data will be collected at baseline (3-7 days before the operation), on the day of surgery and around 24 hours after the surgery. Discussion: This study will identify a clinically useful and potentially effective approach to prepare children for surgery by reducing anxiety of both children and their parents during the perioperative period. The reduction of anxiety may lead to reduction of postoperative pain, which will eventually improve the physical and psychological well-being of children. This study was funded by the National Medical Research Council in Singapore. abstract_id: PUBMED:25212474 A randomized controlled trial of the effectiveness of an educational intervention on outcomes of parents and their children undergoing inpatient elective surgery: study protocol. Aim: To report a study protocol that tests the effectiveness of an educational intervention on outcomes of parents and their children who undergo inpatient elective surgery. Background: Inadequate children's postoperative pain management remains a global problem. Parents are required to be involved in their child's pain assessment and management, yet they often lack relevant knowledge and skills. Education is an effective strategy for enhancing a person's knowledge, attitudes and behaviour. However, few studies have examined its effectiveness in parents and their children undergoing inpatient elective surgery. Design: Randomized controlled trial and embedded qualitative process evaluation. Methods: One hundred and sixty-two pairs of participants (each comprised of one parent and his/her child undergoing inpatient elective surgery) will be recruited (protocol approved in January 2013). Participants will be randomized to either a Control group (routine care), an Intervention group 1 (routine care and an educational intervention with face-to-face teaching), or an Intervention group 2 (routine care and an educational intervention without face-to-face teaching). Outcome measures will include parents' knowledge, attitude and behaviour related to postoperative pain management; their child's postoperative pain; and parents' satisfaction with their child's pain management at baseline and around 6, 12 and 24 hours after the operation. Discussion: A standardized educational intervention protocol and detailed study procedure have been developed in this study to improve parents' knowledge, attitude and behaviour related to postoperative pain management and reduce their child's postoperative pain. abstract_id: PUBMED:28504212 Psychological Adjustment of Parents of Children Born with Atypical Genitalia 1 Year after Genitoplasty. Purpose: We examined the psychological adjustment of parents of children born with moderate to severe genital atypia 12 months after their child underwent genitoplasty. Materials And Methods: Parents were recruited longitudinally from a multicenter collaboration of 10 pediatric hospitals with specialty care for children with disorders/differences of sex development and/or congenital adrenal hyperplasia. Parents completed measures of depressive and anxious symptoms, illness uncertainty, quality of life, posttraumatic stress and decisional regret. Results: Compared to levels of distress at baseline (before genitoplasty) and 6 months after genitoplasty, data from 25 mothers and 20 fathers indicated significant improvements in all psychological distress variables. 
However, a subset of parents continued endorsing clinically relevant distress. Some level of decisional regret was endorsed by 28% of parents, although the specific decision that caused regret was not specified. Conclusions: Overall the majority of parents were coping well 1 year after their child underwent genitoplasty. Level of decisional regret was related to having a bachelor's level of education, increased levels of illness uncertainty preoperatively and persistent illness uncertainty at 12 months after genitoplasty but was unrelated to postoperative complications. Answer: No, the age at which children underwent elective cardiac surgery or catheter intervention did not influence the longitudinal development of psychological distress and styles of coping of their parents. This conclusion is drawn from a study that assessed the influence of the child's age at the time of the cardiac procedure on the psychological distress and coping styles of their parents. The study utilized the General Health Questionnaire to measure psychological distress and the Utrecht Coping List to measure styles of coping. Parents completed these questionnaires both before (on average 5 weeks prior) and after (18.7 months on average) the cardiac procedure. The results indicated that, with one exception, there was no significant impact of the child's age at the time of the procedure on the parents' psychological distress or coping styles. It was observed that parents of children undergoing surgery reported higher levels of psychological distress than those whose children underwent catheter intervention, but after the procedure, both groups reported significantly lower levels of distress and showed a weaker tendency to use several coping styles compared to their reference groups (PUBMED:12645496).
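Several abstracts in this record, such as the esophageal cancer study (PUBMED:36068609), report odds ratios with 95% confidence intervals for coping styles. As a reminder of how such figures are derived, the sketch below fits a logistic regression on synthetic data and converts the coefficient into an odds ratio with a Wald 95% confidence interval; the variable names are hypothetical and this is not the authors' model.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
coping_score = rng.normal(0.0, 1.0, n)                     # hypothetical coping-style score
true_logit = -0.5 + 0.4 * coping_score                     # arbitrary synthetic effect
distress = rng.random(n) < 1 / (1 + np.exp(-true_logit))   # binary distress outcome

fit = sm.Logit(distress, sm.add_constant(coping_score)).fit(disp=False)
beta, se = fit.params[1], fit.bse[1]
odds_ratio = np.exp(beta)
ci_low, ci_high = np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
print(f"OR = {odds_ratio:.3f}, 95% CI {ci_low:.3f}-{ci_high:.3f}")

The concordance indices quoted in the same abstract measure how well such a model ranks patients by risk.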
Instruction: Is there a benefit of preoperative meningioma embolization? Abstracts: abstract_id: PUBMED:23581584 The role of preoperative embolization for intracranial meningiomas. Object: As endovascular techniques have become more advanced, preoperative embolization has become an increasingly used intervention in the management of meningiomas. To date, however, no consensus has been reached on the use of this technique. To clarify the role of preoperative embolization in the management of meningiomas, the authors conducted a systematic review of case reports, case series, and prospective studies to increase the current understanding of the management options for these common lesions and complications associated with preoperative embolization. Methods: A PubMed search was performed to include all relevant studies in which the management of intracranial meningiomas with preoperative embolization was reported. Immediate complications of embolization were reported as major (sustained) or minor (transient) deficits, death, or no neurological deficits. Results: A total of 36 studies comprising 459 patients were included in the review. Among patients receiving preoperative embolization for meningiomas, 4.6% (n = 21) sustained complications as a direct result of embolization. Of the 21 patients with embolization-induced complications, the incidence of major complications was 4.8% (n = 1) and the mortality rate was 9.5% (n = 2). Conclusions: Preoperative embolization is associated with an added risk for morbidity and mortality. Preoperative embolization may be associated with significant complications, but careful selection of ideal cases for embolization may help reduce any added morbidity with this procedure. Although not analyzed in the authors' study, embolization may still reduce rates of surgical morbidity and mortality and therefore may still have a potential benefit for selected patients. Future prospective studies involving the use of preoperative embolization in certain cases of meningiomas may further elucidate its potential benefit and risks. abstract_id: PUBMED:11126901 Is there a benefit of preoperative meningioma embolization? Objective: To evaluate the effect of preoperative embolization of meningiomas on surgery and outcomes. Methods: In a prospective study, 60 consecutive patients with intracranial meningiomas who were treated in two neurosurgical centers were included. In Center A, embolization was performed for none of the patients (n = 30). In Center B, 30 consecutive patients with embolized meningiomas were treated. Preoperatively, tumor size and location, neurological status, and Barthel scale score were recorded. In Center B, the extent of tumor devascularization was evaluated using angiography and postembolization magnetic resonance imaging. Intraoperatively, blood loss, the numbers of blood units transfused, and the observations of the neurosurgeon concerning hemostasis, tumor consistency, and intratumoral necrosis were recorded. Postoperatively, the neurological status and duration of hospitalization were recorded. Six months after surgery, the outcomes were assessed using the Barthel scale and neurological examinations. Results: The mean tumor sizes were 22.9 cc in Center A and 29.6 cc in Center B (P > 0.1). The mean blood losses did not differ significantly (646 ml in Center A versus 636 ml in Center B; P > 0.5).
However, for a subgroup of patients with subtotal devascularization (>90% of the tumor) on postembolization magnetic resonance imaging scans in Center B, blood loss was less, compared with the entire group in Center A (P < 0.05). The observations of the neurosurgeon regarding hemostasis, tumor consistency, and intratumoral necrosis did not differ significantly. There were no surgery-related deaths in either center. The rates of surgical morbidity, with permanent neurological worsening, were 20% (n = 6) in Center A and 16% (n = 5) in Center B. There was one permanent neurological deficit (3%) caused by embolization. Conclusion: In this preliminary study, only complete embolization had an effect on blood loss. The value of preoperative embolization for all meningiomas must be reconsidered, especially in view of the high costs and risks of embolization. abstract_id: PUBMED:11504133 Is there a benefit of preoperative meningioma embolization? N/A abstract_id: PUBMED:28725517 Preoperative Embolization for Skull Base Meningiomas. The results of preoperative embolization for skull base meningiomas were retrospectively evaluated to confirm the efficacy of this procedure. Skull base meningiomas that were treated with preoperative embolization were evaluated in 20 patients. The occluded arteries, embolic materials, treatment time, excision rate, neurologic manifestations, and complications were analyzed. The embolic material was 80% liquid, 30% coils, and 15% particles. The surgery was normally completed within 3 to 5 hours. Blood loss was normally approximately 250 mL, excluding four patients having the following conditions: malignant meningioma, a large tumor located on the medial side of the sphenoidal ridge, the petroclival tumor, and infiltrated tumor into the sigmoid sinus. The mean excision rate was 90%, achieving a Simpson grade III, but 10% were graded as Simpson grade IV. No permanent complications due to the preoperative embolization occurred. No neurologic symptoms occurred after excision. Current cerebral endovascular treatment is sophisticated, and the complication rate has markedly decreased. Although a direct comparison with and without preoperative embolization was not possible, preoperative embolization should be actively used as part of the treatment for this benign tumor, with better understanding of dangerous anastomoses. abstract_id: PUBMED:26412880 Preoperative embolization of meningiomas with low-concentration n-butyl cyanoacrylate. The aim of this study was to determine the clinical safety and efficacy of preoperative embolization of meningiomas with low-concentration n-butyl cyanoacrylate (NBCA). Nineteen cases of hypervascular intracranial meningiomas were treated by preoperative embolization with 14% NBCA, using a wedged superselective catheterization of feeding arteries and reflux-hold-reinjection technique. Clinical data of the patients and radiological and intra-surgical findings were reviewed. All tumors were successfully devascularized without any neurological complications. Marked reduction of tumor staining with extensive NBCA penetration was achieved in 13 cases. Perioperative blood transfusion was only required in two cases. These results indicate that preoperative embolization of meningiomas with low-concentration NBCA is both safe and effective.
abstract_id: PUBMED:3226479 Preoperative selective embolization of intracranial meningiomas The authors report preliminary experiences with preoperative embolization of intracranial meningiomas describing the technique using a specially prepared gelfoam as embolizing material. Embolization was done in 3 cases, in 2 of them transient facial nerve paresis appeared as complication. The effectiveness of embolization was assessed by means of selective control intraoperative angiography and histological examination. In the conclusions the authors stress the effectiveness of preoperative embolization for reducing intraoperative bleeding and indicate the usefulness of the procedure in selected cases of other brain tumours. abstract_id: PUBMED:28611936 Large Transcalvarial Meningioma: Surgical Resection Aided by Preoperative Embolization. Meningiomas are the most common type of primary brain tumors, accounting for about 30% of all brain tumors. Meningiomas originate from the meninges and can be associated with any part of the skull. Classification of meningiomas is based upon the World Health Organization (WHO) classification system and prognosis of meningiomas can be determined via histologic grading. Surgery is the gold standard treatment option for all types of meningiomas. Due to the high vascularity of some meningiomas, surgical resection can lead to certain complications including intraoperative blood loss and hemorrhage. Strategies for complication avoidance include preoperative embolization of the meningioma vascular supply. Preoperative embolization has been shown to assist in surgical resection of selected tumors and decrease intraoperative blood loss. We present a case of successful preoperative embolization for a large, complex, transcalvarial meningioma along with a literature review on this topic. abstract_id: PUBMED:30210986 Safety and Efficacy of Preoperative Embolization in Patients with Meningioma. Preoperative embolization for intracranial meningioma has remained controversial for several decades. In this study, we retrospectively reviewed our experience of embolization using particulate embolic material and coil to clarify the therapeutic efficacy, safety, and risk of complication. Methods We reviewed 69 patients who underwent embolization with particulate embolic material followed by surgical resection. An additional 6 procedures were included for patients in whom recurrence was treated, for a total of 75 procedures of preoperative embolization. We analyzed the following clinical data: age, sex, tumor size pathology, complications related to embolization, and surgeon's opinion on the intraoperative ease of debulking and blood transfusion. Embolization was performed mainly from the branches of the external carotid artery. Results No allogenic blood transfusions were needed for any patients. The surgeon had the opinion that whitening and softening of the tumor allowed for easy debulking during decompression of the tumor in most of the patients. Hemorrhagic complications were seen in two patients after embolization. Emergency tumor removal was performed in both of those patients, and they were recovered well after surgery. Transient cranial nerve palsy was seen in one patient. One ischemic complication and one allergic complication occurred. Conclusion Preoperative embolization could give us an advantage in surgery for meningioma. The procedure reduces intraoperative blood loss and operating time by softening the tumor consistency. 
However, we must pay attention to the possibility of embolic complications and be prepared to perform an emergency craniotomy, particularly in patients with large meningiomas. abstract_id: PUBMED:33723970 Does preoperative embolization improve outcomes of meningioma resection? A systematic review and meta-analysis. Current evidence regarding the benefit of preoperative embolization (POE) of meningiomas is inconclusive. This systematic review and meta-analysis aims to evaluate the safety profile of the procedure and to compare outcomes in embolized versus non-embolized meningiomas. PubMed was queried for studies after January 1990 reporting outcomes of POE. Pertinent variables were extracted and synthesized from eligible articles. Heterogeneity was assessed using I², and a random-effects model was employed to calculate pooled 95% CI effect sizes. Publication bias was assessed using funnel plots and Harbord's and Begg's tests. Meta-analyses were used to assess estimated blood loss and operative duration (mean difference; MD), gross-total resection (odds ratio; OR), and postsurgical complications and postsurgical mortality (risk difference; RD). Thirty-four studies encompassing 1782 preoperatively embolized meningiomas were captured. The pooled immediate complication rate following embolization was 4.3% (34 studies, n = 1782). Although heterogeneity was moderate to high (I² = 35-86%), meta-analyses showed no statistically significant differences in estimated blood loss (8 studies, n = 1050, MD = 13.9 cc, 95% CI = -101.3 to 129.1), operative duration (11 studies, n = 1887, MD = 2.4 min, 95% CI = -35.5 to 30.8), gross-total resection (6 studies, n = 1608, OR = 1.07, 95% CI = 0.8-1.5), postsurgical complications (12 studies, n = 2060, RD = 0.01, 95% CI = -0.04 to 0.07), and postsurgical mortality (12 studies, n = 2060, RD = 0.01, 95% CI = 0-0.01). Although POE is relatively safe, no clear benefit was observed in operative and postoperative outcomes. However, results must be interpreted with caution due to heterogeneity and selection bias between studies. Well-controlled future investigations are needed to define the patient population most likely to benefit from the procedure. abstract_id: PUBMED:24289125 Controversies in the role of preoperative embolization in meningioma management. The role of preoperative embolization in meningioma management remains controversial, even though 4 decades have passed since it was first described. It has been shown to offer benefits such as decreased blood loss and "softening of the tumor" during subsequent resection. However, the actual benefits remain unclear, and the potential harm of an additional procedure along with the cost of embolization have limited its use to a small proportion of the meningiomas treated. In this article the authors retrospectively reviewed their experience with preoperative embolization of meningiomas over the previous 6 years (March 2007-March 2013). In addition, they performed a MEDLINE search using a combination of the terms "meningioma," "preoperative," and "embolization" to analyze the indications, embolizing agents, timing, and complications reported during preoperative embolization of meningiomas. In this retrospective review, 18 cases (female/male ratio 12:6) were identified in which endovascular embolization was used prior to resection of an intracranial meningioma. Craniotomy for tumor resection was performed within 4 days after endovascular embolization in all cases, with an average time to surgery of 1.9 days.
The average duration of surgery was 4 hours and 18 minutes, and the average blood loss was 574 ml, with a range of 300-1000 ml. Complications following endovascular therapy were identified in 3 (16.7%) of 18 cases, including one each of transient hemiparesis, permanent hemiparesis, and tumor swelling. The literature review returned 15 articles consisting of a study population greater than 25 patients. No randomized controlled study was found. The use of small polyvinyl alcohol particles (45-150 μm) is more effective in preoperative devascularization than larger particles (150-250 μm), but is criticized due to the higher risk of complications such as cranial nerve palsies and postprocedural hemorrhage. Time to surgery after embolization is inconsistently reported across the articles, and conclusions on the appropriate timing of surgery could not be drawn. The overall complication rate reported after treatment with preoperative meningioma embolization ranges from as high as 21% in some of the older literature to approximately 6% in recent literature describing treatment with newer embolization techniques. The evidence in the literature supporting the use of preoperative meningioma embolization is mainly from case series, and represents Level III evidence. Due to the lack of randomized controlled clinical trials, it is difficult to draw any significant conclusions on the overall usefulness of preoperative embolization during the management of meningiomas to consider it a standard practice. Answer: The benefit of preoperative meningioma embolization remains a subject of debate in the medical community. Some studies suggest that preoperative embolization can be beneficial in certain cases. For instance, a retrospective evaluation found that preoperative embolization for skull base meningiomas was effective, with no permanent complications and no neurologic symptoms occurring after excision, suggesting that it should be actively used as part of the treatment for this benign tumor (PUBMED:28725517). Another study indicated that preoperative embolization with low-concentration n-butyl cyanoacrylate (NBCA) was safe and effective, successfully devascularizing tumors without neurological complications (PUBMED:26412880). However, other studies have reported mixed results. One study found that only complete embolization had an effect on reducing blood loss, and the value of preoperative embolization for all meningiomas must be reconsidered due to high costs and risks (PUBMED:11126901). Another systematic review and meta-analysis concluded that while preoperative embolization is relatively safe, no clear benefit was observed in operative and postoperative outcomes, suggesting that well-controlled future investigations are needed to define the patient population most likely to benefit from the procedure (PUBMED:33723970). Furthermore, a systematic review reported that preoperative embolization is associated with an added risk for morbidity and mortality, with complications occurring in 4.6% of patients, and emphasized the importance of careful selection of cases for embolization to reduce added morbidity (PUBMED:23581584). Another study highlighted that preoperative embolization could reduce intraoperative blood loss and operating time by softening the tumor consistency, but also warned of the possibility of embolic complications (PUBMED:30210986). 
In summary, while there may be benefits to preoperative embolization in reducing blood loss and facilitating tumor resection in certain cases, the procedure is not without risks and its value must be weighed against potential complications and costs. The evidence is not conclusive, and more research is needed to determine the specific circumstances under which preoperative embolization is most beneficial for patients with meningiomas.
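The meta-analysis cited in this answer (PUBMED:33723970) pools study-level mean differences with a random-effects model and reports I² for heterogeneity. The Python sketch below shows one common way such pooling is done, the DerSimonian-Laird estimator; the effect sizes and variances are placeholders rather than the actual study data, and the published analysis may have used different software or estimators.

import numpy as np

def random_effects_pool(effects, variances):
    # DerSimonian-Laird random-effects pooling of study-level effect sizes.
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                                   # fixed-effect weights
    pooled_fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled_fixed) ** 2)         # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = 1.0 / (variances + tau2)                       # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # heterogeneity, %
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2, i2

# Placeholder mean differences in intraoperative blood loss (cc) and their variances.
pooled, ci, tau2, i2 = random_effects_pool([20.0, -5.0, 40.0, 10.0],
                                           [100.0, 225.0, 400.0, 150.0])
print(f"Pooled MD = {pooled:.1f} cc, 95% CI ({ci[0]:.1f}, {ci[1]:.1f}), tau2 = {tau2:.1f}, I2 = {i2:.0f}%")

A pooled confidence interval that straddles zero, as in the blood-loss and operative-duration comparisons above, is what underlies the conclusion of "no clear benefit."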
Instruction: Can we predict the failure of electrical cardioversion of acute atrial fibrillation? Abstracts: abstract_id: PUBMED:25534241 Can we predict the failure of electrical cardioversion of acute atrial fibrillation? The FinCV study. Background: Data on predictors of failure of electrical cardioversion of acute atrial fibrillation are scarce. Methods: We explored 6,906 electrical cardioversions of acute (<48 hours) atrial fibrillation in 2,868 patients in a retrospective multicenter study. Results: The success rate of electrical cardioversion was 94.2%. In 26% of unsuccessful cardioversions, the cardioversion was performed successfully later. Antiarrhythmic drug therapy, short (<12 hours) duration of atrial fibrillation episode, advanced age, permanent pacemaker, history of atrial fibrillation episodes within 30 days before cardioversion, and β-blockers were independent predictors of unsuccessful electrical cardioversion. In the subgroup of patients with cardioversion of the first atrial fibrillation episode (N = 1,411), the short duration of episode (odds ratio [OR] = 2.28; 95% confidence interval [CI] 1.34-3.90, P = 0.003) and advanced age (OR = 1.03; 95% CI 1.02-1.05, P < 0.001) were the only independent predictors of unsuccessful cardioversion. After successful cardioversion, the rate of early (<30 days) clinical recurrence of atrial fibrillation was 17.3%. The index cardioversion being performed due to the first atrial fibrillation episode was the only predictor of remaining in sinus rhythm. Conclusion: A short (<12 hours) duration of acute atrial fibrillation is a significant predictor of unsuccessful cardioversion, especially during the first attack. First atrial fibrillation episode was the only predictor of remaining in sinus rhythm. abstract_id: PUBMED:31461577 Neuromuscular electrical stimulation is feasible in patients with acute heart failure. Aims: In acute heart failure (AHF), immobilization occurs because of unstable haemodynamics and dyspnoea, leading to protein wasting. Neuromuscular electrical stimulation (NMES) has been reported to preserve muscle mass and improve functional outcomes in chronic disease. NMES may be effective against the protein wasting frequently manifested in patients with AHF; however, whether NMES can be implemented safely without any adverse effect on haemodynamics has remained unknown. This study aimed to examine the feasibility of NMES in patients with AHF. Methods And Results: Patients with AHF were randomly assigned to the NMES or control group. The stimulation intensity in the NMES group was set at 10-20% of the maximal voluntary contraction level, whereas in the control group it was limited to a visible or palpable level of muscle contraction. The sessions were performed 5 days per week from the day after admission. Before the study implementation, we set the feasibility criteria with the following items: (i) change in systolic blood pressure (BP) > ±20 mmHg during the first session; (ii) increase in heart rate (HR) > +20 b.p.m. during the first session; (iii) development of sustained ventricular arrhythmia, atrial fibrillation (AF), and paroxysmal supraventricular tachycardia during all sessions; (iv) incidence of new-onset AF during the hospitalization period < 40%; and (v) completion of the planned sessions by >70% of patients. The criteria for feasibility were set as follows: the percentage of subjects meeting any of (i)-(iii) was <20% of the total, and both (iv) and (v) were satisfied.
A total of 73 patients (median age 72 years, 51 men) who completed the first session were analysed (NMES group, n = 34; control group, n = 39). Systolic BP and HR variations were not significantly different between the two groups (systolic BP, P = 0.958; HR, P = 0.665). Changes in BP > ±20 mmHg or HR > +20 b.p.m. were observed in three cases in the NMES group (8.8%) and five in the control group (12.8%). New-onset arrhythmia was not observed in any session in either group. During hospitalization, one patient newly developed AF in the NMES group (2.9%), and one developed AF (2.6%) and two developed lethal ventricular arrhythmias in the control group. Thirty-one patients in the NMES group (91%) and 33 patients in the control group (84%) completed the planned sessions during hospitalization. This study fulfilled the preset feasibility criteria. Conclusions: NMES is feasible in patients with AHF from immediately after admission. abstract_id: PUBMED:32789837 Impact of successful restoration of sinus rhythm in patients with atrial fibrillation and acute heart failure: Results from the Korean Acute Heart Failure registry. Background: Restoring and maintaining sinus rhythm (SR) in patients with atrial fibrillation (AF) failed to show superior outcomes over rate control strategies in prior randomized trials. However, there are sparse data on their outcomes in patients with acute heart failure (AHF). Methods: From December 2010 to February 2014, 5,625 patients with AHF from 10 tertiary hospitals were enrolled in the Korean Acute Heart Failure registry, including 1,961 patients whose initial electrocardiogram showed AF. Clinical outcomes of patients who restored SR by pharmacological or electrical cardioversion (SR conversion group, n = 212) were compared to those of patients who showed a persistent AF rhythm (AF persistent group, n = 1,662). Results: All-cause mortality both in-hospital and during the follow-up (median 2.5 years) was significantly lower in the SR conversion group than in the AF persistent group after adjustment for risk factors (adjusted hazard ratio [HR]; 95% confidence interval [CI] = 0.26 [0.08-0.88], p = 0.031 and 0.59 [0.43-0.82], p = 0.002, for mortality in-hospital and during follow-up, respectively). After 1:3 propensity score matching (SR conversion group = 167, AF persistent group = 501), successful restoration of SR was associated with lower all-cause mortality (HR [95% CI] = 0.68 [0.49-0.93], p = 0.015), heart failure rehospitalization (HR [95% CI] = 0.66 [0.45-0.97], p = 0.032), and composite of death and heart failure rehospitalization (HR [95% CI] = 0.66 [0.51-0.86], p = 0.002). Conclusions: Patients with AHF and AF had significantly lower mortality in-hospital and during follow-up if rhythm treatment for AF was successful, underscoring the importance of restoring SR in patients with AHF. abstract_id: PUBMED:36474152 Development and validation of a nomogram for predicting atrial fibrillation in patients with acute heart failure admitted to the ICU: a retrospective cohort study. Introduction: Acute heart failure is a serious condition. Atrial fibrillation is the most frequent arrhythmia in patients with acute heart failure. The occurrence of atrial fibrillation in heart failure patients worsens their prognosis and leads to a substantial increase in treatment costs. There is no tool that can effectively predict the onset of atrial fibrillation in patients with acute heart failure in the ICU currently.
Materials And Methods: We retrospectively analyzed the MIMIC-IV database of patients admitted to the intensive care unit (ICU) for acute heart failure and who were initially in sinus rhythm. Data on demographics, comorbidities, laboratory findings, vital signs, and treatment were extracted. The cohort was divided into a training set and a validation set. Variables selected by LASSO regression and multivariate logistic regression in the training set were used to develop a model for predicting the occurrence of atrial fibrillation in acute heart failure in the ICU. A nomogram was drawn and an online calculator was developed. The discrimination and calibration of the model were evaluated. The performance of the model was tested using the validation set. Results: This study included 2342 patients with acute heart failure, 646 of whom developed atrial fibrillation during their ICU stay. Using LASSO and multiple logistic regression, we selected six significant variables: age, prothrombin time, heart rate, use of vasoactive drugs within 24 h, Sequential Organ Failure Assessment (SOFA) score, and Acute Physiology Score (APS) III. The C-index of the model was 0.700 (95% CI 0.672-0.727) and 0.682 (95% CI 0.639-0.725) in the training and validation sets, respectively. The calibration curves also performed well in both sets. Conclusion: We developed a simple and effective model for predicting atrial fibrillation in patients with acute heart failure in the ICU. abstract_id: PUBMED:33125341 Stroke risk scores to predict hospitalization for acute decompensated heart failure in atrial fibrillation patients. Introduction. Atrial fibrillation (AF) is the most frequent hospitalized arrhythmia. It is associated with increased risk of death, stroke and heart failure (HF). Stroke risk scores, especially CHA2DS2-VASc, have also been applied to populations with different diseases. There is, however, limited data focusing on the ability of these scores to predict HF decompensation. Methods. We conducted a retrospective observational study on a cohort of 204 patients admitted for cardiovascular pathology to the Cardiology Ward of our tertiary University Hospital. We aimed to determine whether the stroke risk scores could predict hospitalisations for acute decompensated HF in AF patients. Results. C-statistics for CHADS2 and R2CHADS2 showed a modest predictive ability for hospitalisation with decompensated HF (CHADS2: AUC 0.631, p = 0.003, 95% CI 0.560-0.697; R2CHADS2: AUC 0.619, 95% CI 0.548-0.686, p = 0.004), and a marginal correlation for CHA2DS2-VASc (AUC 0.572, 95% CI 0.501-0.641, with a p value of only 0.09), while the other scores failed to show a correlation. A CHADS2 ≥ 2 showed a RR = 2.96, p < 0.0001 for decompensated HF compared to a score <2. For R2CHADS2 ≥ 2, RR = 2.41, p = 0.001 compared to a score <2. For CHA2DS2-VASc ≥ 2, RR = 2.18, p = 0.1, compared to CHA2DS2-VASc <2. The correlation coefficients showed a weak correlation for CHADS2 (r = 0.216; p = 0.001) and even weaker for R2CHADS2 (r = 0.197; p = 0.0047) and CHA2DS2-VASc (r = 0.14; p = 0.035). Conclusions. Among AF patients, CHADS2, CHA2DS2-VASc and R2CHADS2 were associated with the risk of hospitalisation for decompensated HF while ABC and ATRIA failed to show an association. However, predictive accuracy was modest and the clinical utility for this outcome remains to be determined. abstract_id: PUBMED:10490566 Lack of prevention of heart failure by serial electrical cardioversion in patients with persistent atrial fibrillation.
Objective: To investigate the occurrence of heart failure complications, and to identify variables that predict heart failure in patients with (recurrent) persistent atrial fibrillation, treated aggressively with serial electrical cardioversion and antiarrhythmic drugs to maintain sinus rhythm. Design: Non-randomised controlled trial; cohort; case series; mean (SD) follow-up duration 3.4 (1.6) years. Setting: Tertiary care centre. Subjects: Consecutive sampling of 342 patients with persistent atrial fibrillation (defined as > 24 hours duration) considered eligible for electrical cardioversion. Interventions: Serial electrical cardioversions and serial antiarrhythmic drug treatment, after identification and treatment of underlying cardiovascular disease. Main Outcome Measures: Heart failure complications: development or progression of heart failure requiring the institution or addition of drug treatment, hospital admission, or death from heart failure. Results: Development or progression of heart failure occurred in 38 patients (11%), and 22 patients (6%) died from heart failure. These complications were related to the presence of coronary artery disease (p < 0.001, risk ratio 3.2, 95% confidence interval (CI) 1.6 to 6.5), rheumatic heart disease (p < 0.001, risk ratio 5.0, 95% CI 2.4 to 10.2), cardiomyopathy (p < 0.001, risk ratio 5.0, 95% CI 2.0 to 12.4), atrial fibrillation for < 3 months (p = 0.04, risk ratio 2.0, 95% CI 1.0 to 3.7), and poor exercise tolerance (New York Heart Association class III at inclusion, p < 0.001, risk ratio 3.5, 95% CI 1.9 to 6.7). No heart failure complications were observed in patients with lone atrial fibrillation. Conclusions: Aggressive serial electrical cardioversion does not prevent heart failure complications in patients with persistent atrial fibrillation. These complications are predominantly observed in patients with more severe underlying cardiovascular disease. Randomised comparison with rate control treatment is needed to define the optimal treatment for persistent atrial fibrillation in relation to heart failure. abstract_id: PUBMED:23867596 Characterization of acute heart failure hospitalizations in a Portuguese cardiology department. Introduction And Aims: We describe the clinical characteristics, management and outcomes of patients hospitalized with acute heart failure in a south-west European cardiology department. We sought to identify the determinants of length of stay and heart failure rehospitalization or death during a 12-month follow-up period. Methods And Results: This was a retrospective cohort study including all patients admitted during 2010 with a primary or secondary diagnosis of acute heart failure. Death and readmission were followed through 2011. Of the 924 patients admitted, 201 (21%) had acute heart failure, 107 (53%) of whom had new-onset acute heart failure. The main precipitating factors were acute coronary syndrome (63%) and arrhythmia (14%). The most frequent clinical presentations were heart failure after acute coronary syndrome (63%), chronic decompensated heart failure (47%) and acute pulmonary edema (21%). On admission 73% had left ventricular ejection fraction <50%. Median length of stay was 11 days and in-hospital mortality was 5.5%. The rehospitalization rate was 21% and 24% at six and 12 months, respectively. All-cause mortality was 16% at 12 months.
The independent predictors of rehospitalization or death were heart failure hospitalization during the previous year (hazard ratio [HR] 3.177), serum sodium <135 mmol/l on admission (HR 1.995) and atrial fibrillation (HR 1.791). Reduced left ventricular ejection fraction was associated with a lower risk of rehospitalization or death (HR 0.518). Conclusions: Our patients more often presented new-onset acute heart failure, due to an acute coronary syndrome, with reduced left ventricular ejection fraction. Several predictive factors of death or rehospitalization were identified that may help to select high-risk patients to be followed in a heart failure management program after discharge. abstract_id: PUBMED:28017305 Predicting Unsuccessful Electrical Cardioversion for Acute Atrial Fibrillation (from the AF-CVS Score). Electrical cardioversion (ECV) is the standard treatment for acute atrial fibrillation (AF), but identification of patients with increased risk of ECV failure or early AF recurrence is of importance for rational clinical decision-making. The objective of this study was to derive and validate a clinical risk stratification tool for identifying patients at high risk for unsuccessful outcome after ECV for acute AF. Data on 2,868 patients undergoing 5,713 ECVs of acute AF in 3 Finnish hospitals from 2003 through 2010 (the FinCV study data) were included in the analysis. Patients from western (n = 3,716 cardioversions) and eastern (n = 1,997 cardioversions) hospital regions were used as derivation and validation datasets. The composite of cardioversion failure and recurrence of AF within 30 days after ECV was recorded. A clinical scoring system was created using logistic regression analyses with a repeated-measures model in the derivation data set. A multivariate analysis for prediction of the composite end point resulted in identification of 5 clinical variables for increased risk: Age (odds ratio [OR] 1.31, confidence interval [CI] 1.13 to 1.52), not the First AF (OR 1.55, CI 1.19 to 2.02), Cardiac failure (OR 1.52, CI 1.08 to 2.13), Vascular disease (OR 1.38, CI 1.11 to 1.71), and Short interval from previous AF episode (within 1 month before ECV, OR 2.31, CI 1.83 to 2.91) [hence, the acronym, AF-CVS]. The c-index for the AF-CVS score was 0.67 (95% CI 0.65 to 0.69) with Hosmer-Lemeshow p value 0.84. With high (>5) scores (i.e., 12% to 16% of the patients), the rate of composite end point was ∼40% in both cohorts, and among low-risk patients (score <3), the composite end point rate was ∼10%. In conclusion, the risk of ECV failure and early recurrence of AF can be predicted with simple patient and disease characteristics. abstract_id: PUBMED:18393653 Oxidative stress in patients with acute heart failure. Oxidative stress (OS) is a keystone in the pathology of the ischemia reperfusion sequence (acute coronary syndromes, cardiac surgery, transplantation). In heart failure, the implication of OS is less understood. This study was intended to evaluate OS in acute heart failure. Criteria for inclusion were consecutive patients hospitalized in our cardiology department for a first pulmonary edema that revealed a dilated cardiomyopathy (DCM). Exclusion criteria included known cardiomyopathy, smoker, acute coronary syndrome, and treatment with angiotensin converting enzyme inhibitors (ACEI) or angiotensin II receptor blockers (ARAII).
OS was evaluated in blood samples: thiobarbituric acid-reactive substances (TBARS), total antioxidant status (TAS), plasma alpha-tocopherol, vitamin A, and beta-carotene. Standard biochemical parameters including CRP, fibrinogen, lipids, and creatinine were assayed. Ten patients (80% men, mean age 55.3 ± 7.9 years) were included and followed for a 6-month period. The etiologies of DCM were alcohol (n = 3), anti-cancer drugs (n = 2), valvulopathies (n = 2), or idiopathic (n = 3). In acute heart failure, TBARS were elevated (1.69 micromol/L; normal value 0.6-4.2 micromol/L) and TAS was decreased (0.96 mmol/L; normal value 1.3-1.9 pmol/L). OS was more pronounced when patients had atrial or ventricular arrhythmia. Nevertheless, liposoluble antioxidant parameters (beta-carotene, vitamin A, alpha-tocopherol) remained within usual values. At the end of follow-up, patients had returned to a stable condition, OS markers revealed normal values, and every Holter ECG showed no supraventricular or ventricular arrhythmias. In acute heart failure, oxygen-free radicals are increased. We thus hypothesized that a modification in OS could be responsible for arrhythmias and complications of acute heart failure. abstract_id: PUBMED:12729852 Prevention and management of chronic heart failure with electrical therapy. Sudden cardiac death is responsible for >40% of patients with heart failure losing their lives. Thus, the prevention of life-threatening cardiac arrhythmias is a major goal in the management of heart failure. In several randomized clinical trials, electrical therapy with the implantable cardioverter defibrillator (ICD) has proved superior to medical antiarrhythmic therapy in both the secondary and primary prevention of sudden cardiac death in patients with reduced left ventricular function. In addition to the severity of left ventricular dysfunction, the etiology of the cardiomyopathy appears to be a determinant in the benefit derived from this form of electrical therapy. Whereas patients with ischemic cardiomyopathy clearly show improved survival with ICD therapy, outcome data in patients with nonischemic cardiomyopathy are less convincing. The major challenge lies in the risk stratification of patients with heart failure for arrhythmic death. Catheter ablation is another form of electrical therapy that can help in the treatment of patients with heart failure. In patients with a tachycardia-mediated cardiomyopathy because of drug-refractory atrial fibrillation with rapid ventricular response, catheter ablation of the atrioventricular node and pacemaker implantation can effectively restore a physiologic heart rate, often with dramatic regression of left ventricular dysfunction. In patients with frequent ICD therapies because of frequent recurrences of ventricular tachyarrhythmias, catheter ablation of ventricular tachycardia can be an effective adjunctive therapy. New catheter ablation techniques and new atrial pacing algorithms can also significantly reduce the atrial fibrillation burden in patients with heart failure who are particularly susceptible to decompensation because of atrial fibrillation. Pacing for hemodynamic benefit in heart failure has evolved from dual-chamber pacing modes with optimized atrioventricular delay to biventricular pacing resulting in cardiac resynchronization. This new treatment modality for advanced heart failure has been shown to result in significant symptomatic and hemodynamic improvement.
Answer: Yes, we can predict the failure of electrical cardioversion of acute atrial fibrillation to some extent. The FinCV study (PUBMED:25534241) identified several independent predictors of unsuccessful electrical cardioversion, including antiarrhythmic drug therapy, short duration of atrial fibrillation episode (<12 hours), advanced age, presence of a permanent pacemaker, history of atrial fibrillation episodes within 30 days before cardioversion, and the use of β-blockers. Specifically, for patients undergoing cardioversion of their first atrial fibrillation episode, a short duration of the episode and advanced age were the only independent predictors of unsuccessful cardioversion. Additionally, the study found that the first atrial fibrillation episode was the only predictor of remaining in sinus rhythm after successful cardioversion. Another study (PUBMED:28017305) developed the AF-CVS score, a clinical risk stratification tool, to identify patients at high risk for unsuccessful outcome after electrical cardioversion for acute atrial fibrillation. The score includes variables such as Age, not the First AF episode, Cardiac failure, Vascular disease, and Short interval from the previous AF episode, which were associated with an increased risk of cardioversion failure and early AF recurrence. These findings suggest that certain patient characteristics and clinical factors can be used to predict the likelihood of failure of electrical cardioversion in patients with acute atrial fibrillation. However, it is important to note that while these predictors can guide clinical decision-making, they are not absolute, and individual patient circumstances may vary.
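The AF-CVS abstract above derives its score from logistic-regression results and reports a c-index of 0.67. The Python sketch below illustrates that general workflow on synthetic data: fit a logistic regression, convert coefficients to integer points, and check discrimination with the c-index. The simulated log-odds loosely mirror the odds ratios quoted in the abstract, but the resulting point values are not the published AF-CVS weights, and the original study additionally used a repeated-measures model because patients contributed several cardioversions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Five synthetic binary predictors standing in for: advanced age, not the first
# AF episode, cardiac failure, vascular disease, short interval since last AF.
X = rng.integers(0, 2, size=(n, 5))
true_beta = np.log([1.31, 1.55, 1.52, 1.38, 2.31])   # log of the quoted ORs (synthetic truth)
p = 1 / (1 + np.exp(-(-2.0 + X @ true_beta)))
y = rng.random(n) < p                                 # composite: ECV failure or early recurrence

model = LogisticRegression().fit(X, y)
beta = model.coef_.ravel()
points = np.round(beta / np.abs(beta).min()).astype(int)  # smallest effect = 1 point
score = X @ points

print("Integer points per predictor:", points)
print("C-index, logistic model:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
print("C-index, integer score: ", round(roc_auc_score(y, score), 3))

Rounding coefficients to integer points usually costs a little discrimination relative to the full model, which is the trade-off accepted when a bedside score such as AF-CVS is built.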
Instruction: Asthma prescription patterns for children: can GPs do better? Abstracts: abstract_id: PUBMED:26932952 Acute Rhinosinusitis: Prescription Patterns in a Real-World Setting. Objective: Understand real-world prescription patterns for patients presenting with a first diagnosis of ARS and evaluate adherence to published medical guidelines. Study Design: Retrospective administrative database analysis. Setting: US-based outpatient settings. Methods: From a US claims database (MarketScan), 99,033 patients were identified with acute rhinosinusitis (ARS) in 2012 ("index"), with a complete medical and prescription history for 12 months preindex and 18 months postindex and no diagnoses of asthma or chronic rhinosinusitis. Of these, a random 10,000-patient sample was generated matched for age and sex to the initial cohort. Prescriptions and procedures at index, as well as complications up to 12 months postindex, were analyzed. Results: Nearly 90% of all patients received a prescription at index. Antibiotics were prescribed for 84.8% of patients, followed by antitussives (16.2% for adults, 6.2% for pediatrics), nasal corticosteroids (15.5% for adults, 7.5% for pediatrics), and systemic corticosteroids (10.3% for adults, 5.5% for pediatrics), with 49% of adults and 29% of children receiving >1 medication at first visit. Macrolides were the most frequently prescribed antibiotics (35.6% adults, 28.6% pediatrics), followed by amoxicillin/clavulanate and amoxicillin. Within 12 months of index, 3 patients presented with meningitis and 3 with orbital cellulitis. Conclusion: Significant variability in ARS treatment was observed, highlighting the need for heightened awareness of existing guidelines. abstract_id: PUBMED:21190392 Asthma prescription patterns for children: can GPs do better? Background: Assessing prescription patterns of asthma medication for children is helpful to optimize prescribing by general practitioners (GPs). The aim was to explore prescription patterns in children with physician-diagnosed asthma and its determinants in general practice. Methods: We used the Second Dutch National Survey of General Practice (DNSGP-2) with children aged 0-17 years registered in 87 general practices. All children with at least one asthma prescription were included (n = 2993). Prescription rates and prescription of continuous (≥3 prescriptions/year) versus intermittent asthma medication were calculated. Data, including several GP characteristics, were analysed using multivariate logistic regression accounting for clustering within practices. Results: During one year, 16% of the children with physician-diagnosed asthma (n = 3562) received no asthma medication. Of the 2993 children with asthma receiving asthma medication (on average 2.9 prescriptions/year), 61% received one or two prescriptions, 39% received three or more. Continuous medication with a bronchodilator and/or a corticosteroid was prescribed in 22% of these children. One out of 5 children receiving continuous medication was prescribed a bronchodilator only. In 7.5% of the prescriptions, asthma medications other than bronchodilators or corticosteroids were prescribed. Prescribing asthma medication varied widely between practices, but none of the child or GP determinants had an independent effect on prescribing continuous versus intermittent medication. Conclusion: In general practice, the annual number of asthma prescriptions per child with asthma is relatively low.
One in 20 children is prescribed bronchodilators only continuously, indicating room for improvement. Child and GP characteristics cannot be used for targeting educational efforts. abstract_id: PUBMED:35557495 Prescription Patterns of Oral Corticosteroids for Asthma Treatment and Related Asthma Phenotypes in University Hospitals in Korea. Purpose: Oral corticosteroids (OCSs) are frequently prescribed for asthma management despite their adverse effects. An understanding of the pattern of OCS treatment is required to optimize asthma treatment and reduce OCS usage. This study evaluated the prescription patterns of OCSs in patients with asthma. Methods: This is a retrospective multicenter observational study. We enrolled adult (≥18 years) patients with asthma who had been followed up by asthma specialists in 13 university hospitals for ≥3 years. Lung function tests, the number of asthma exacerbations, and prescription data, including the days of supply and OCS dosage, were collected. The clinical characteristics of OCS-dependent and exacerbation-prone asthmatic patients were evaluated. Results: Of the 2,386 enrolled patients with asthma, 27.7% (n = 660) were OCS users (the median daily dose of OCS was 20 mg/day prednisolone equivalent, for a median of 14 days/year). OCS users were more likely to be female, to be treated at higher asthma treatment steps, and to show poorer lung function and more frequent exacerbations in the previous year than non-OCS users. A total of 88.0% of OCS users were treated with OCS burst with a mean dose of 21.6 ± 10.2 mg per day prednisolone equivalent, for 7.8 ± 3.2 days per event and 2.4 times per year. There were 2.1% (51/2,386) of patients with OCS-dependent asthma and 9.5% (227/2,386) with exacerbation-prone asthma. These asthma phenotypes were consistent over the 3 consecutive years in 47.1% of OCS-dependent asthmatic patients and 34.4% of exacerbation-prone asthmatic patients when assessed annually over the 3-year study period. Conclusions: We used real-world data from university hospitals in Korea to describe the OCS prescription patterns and relievers in asthma. Novel strategies are required to reduce the burden of OCS use in patients with asthma. abstract_id: PUBMED:30937136 Trends and prescription patterns of traditional Chinese medicine use among subjects with allergic diseases: A nationwide population-based study. Background: The alarmingly rising prevalence of allergic diseases has led to substantial healthcare and economic burdens worldwide. The integrated use of traditional Chinese medicines (TCM) and Western medicines has been common in treating subjects with allergic diseases in clinical practice in Taiwan. However, limited studies have been conducted to evaluate long-term trends and prescription patterns of TCM use among subjects with allergic diseases. Thus, we conducted a nationwide population-based study to characterize TCM use among subjects with allergic diseases. Methods: A total of 241,858 subjects with diagnosed atopic dermatitis, asthma or allergic rhinitis in the period of 2003-2012 were identified from the National Health Insurance Research Database (NHIRD) in Taiwan and included in this study. We assessed trends and prescribed patterns related to TCM (both single herbs and herbal formulas) among the study subjects over the 10-year study period. Results: The overall proportions of TCM use were 30.5%, 29.0% and 45.7% in subjects with atopic dermatitis, asthma and allergic rhinitis, respectively.
We found increasing trends of TCM use among subjects having atopic dermatitis and asthma, with annual increases of 0.91% and 0.38%, respectively, over the 10-year study period, while the proportion remained steadily high (from 46.6% in 2003 to 46.3% in 2012) among subjects having allergic rhinitis. Moreover, the number of hospitalizations due to allergic diseases in TCM users was significantly smaller than that in non-TCM users for all three allergic diseases. Conclusion: A notable proportion (30%-50%) of subjects with allergic diseases in Taiwan has used TCM, with the highest proportion of TCM use found in subjects with allergic rhinitis, whereas increasing trends of TCM use are found among subjects with atopic dermatitis and asthma. Our results suggest that TCM use may help reduce the severe episodes of allergic diseases necessitating hospitalizations. abstract_id: PUBMED:36051434 A Cross-Sectional Study on Prescription Patterns of Short-Acting β2-Agonists in Patients with Asthma: Results from the SABINA III Colombia Cohort. Purpose: Overuse of short-acting β2-agonists (SABAs) for asthma is associated with a significant increase in exacerbations and healthcare resource use. However, limited data exist on the extent of SABA overuse outside of Europe and North America. As part of the multi-country SABA use IN Asthma (SABINA) III study, we characterized SABA prescription patterns in Colombia. Patients And Methods: This observational, cross-sectional cohort study of SABINA III included patients (aged ≥12 years) with asthma recruited from seven sites in Colombia. Demographics, disease characteristics (including investigator-defined asthma severity guided by the 2017 Global Initiative for Asthma report), and asthma treatments prescribed (including SABAs and inhaled corticosteroids [ICS]) in the 12 months preceding the study were recorded using electronic case report forms during a single study visit. Results: Of 250 patients analyzed, 50.4%, 33.2%, and 16.4% were enrolled by pulmonologists, general medicine practitioners, and allergists, respectively. Most patients were female (74.0%) and had moderate-to-severe asthma (67.6%). Asthma was partly controlled or uncontrolled in 57.6% of patients, with 15.6% experiencing ≥1 severe exacerbation 12 months before the study visit. In total, 4.0% of patients were prescribed SABA monotherapy and 55.6%, SABA in addition to maintenance therapy. Overall, 39.2% of patients were prescribed ≥3 SABA canisters in the 12 months before the study visit; 25.2% were prescribed ≥10 canisters. Additionally, 17.6% of patients purchased SABAs over the counter, of whom 43.2% purchased ≥3 canisters. Maintenance medication in the form of ICS or ICS/long-acting β2-agonist fixed-dose combination was prescribed to 36.0% and 66.8% of patients, respectively. Conclusion: Our findings suggest that prescription/purchase of ≥3 SABA canisters was common in Colombia, highlighting a public health concern. There is a need to improve asthma care by aligning clinical practices with the latest evidence-based treatment recommendations to improve asthma management across Colombia. abstract_id: PUBMED:26299480 Analysis of prescription pattern and guideline adherence in the management of asthma among medical institutions and physician specialties in Taiwan between 2000 and 2010.
Purpose: The aim of this study was to evaluate prescription patterns of antiasthmatic medications in ambulatory care, guideline adherence by physician specialties and medical institutions, and the rate of hospitalization and emergency department visits due to asthma exacerbation. Methods: The ambulatory visits between 2000 and 2010 from the Taiwan Longitudinal Health Insurance Database 2000 were analyzed for prescription trends. Seven classes of antiasthmatic medications were identified for prescription trend analysis. Prescription patterns of different medical institutions and physician specialties were further evaluated. Findings: We studied 4495 patients with newly diagnosed asthma in 2000. Estimates indicated an increased use in fixed-dose combination of inhaled corticosteroids and long-acting β2-agonists (3.6% in 2002 to 28.8% in 2010) with decreased use of inhaled corticosteroids (14.5% in 2001 to 7.3% in 2010). Xanthine was still the most frequently used medication for asthmatic patients (60.2% in 2001 and 45.2% in 2010). Another marked increase was the use of leukotriene receptor antagonists (2.6% in 2001 to 6.0% in 2010). In the studied population, the rate of hospital admission or emergency department visit moderately decreased from 1.42% to 0.59% during 10 years. Physicians in medical centers and regional hospitals, as well as asthma specialists, dominated the increased use of fixed-dose combinations of inhaled corticosteroids and long-acting β2-agonists and leukotriene receptor antagonists. Implications: Physicians in academic medical centers and asthma specialists achieved better adherence to the core recommendations of the international guidelines for asthma management. The reasons for guideline nonadherence among physicians in district hospitals and primary care clinics deserve health care professionals' attention and require further investigation. abstract_id: PUBMED:26545450 Prescription of asthma action plans in the Aquitaine region of France Introduction: Although guidelines recommend the prescription of written asthma action plans (WAAP), their use remains limited. Methods: A prospective survey was performed from 2013 to 2014. We interviewed respiratory physicians, paediatric respiratory physicians and allergologists taking care of asthmatic patients and practicing in the Aquitaine region of France, using computerized questionnaires, regarding their everyday practice in the use of WAAP. Results: A total of 59/143 (41%) clinicians, with a mean age of 47 years, participated in the study. A total of 41/59 (69.5%) were using a WAAP (12 different models with very inhomogeneous contents, mostly targeting symptoms only). WAAP prescribers were younger than non-prescribers, were more often female, working mostly in the Gironde area, with mixed hospital and private-based activity, and were paediatric-respiratory physicians or respiratory physicians. The severity of asthma had little influence on WAAP prescriptions. Conclusion: In the Aquitaine region, prescription of WAAPs remains inadequate and shows large disparities. WAAP users are mostly younger female specialists. abstract_id: PUBMED:33790552 Prescription Patterns in Patients with Chronic Obstructive Pulmonary Disease and Osteoporosis. Objective: Patients with chronic obstructive pulmonary disease (COPD) have a higher risk of osteoporosis. Few studies have addressed the prescription patterns in osteoporosis patients with COPD. 
The purpose of this study was to conduct a retrospective study of the prescription patterns in patients with COPD and osteoporosis in Taiwan. Methods: The study was conducted with data from the Taiwan National Health Insurance Research Database from January 1, 2003, to December 31, 2016. We selected the COPD population in Taiwan older than 40 years with at least one prescription for a bronchodilator. We excluded patients who had osteoporosis, fracture, asthma, or cancer before the diagnosis of COPD. After the diagnosis of COPD, patients who did not have osteoporosis were also excluded. We followed this COPD and osteoporosis cohort until they had been prescribed medication for osteoporosis. Results: There were 13,407 patients with COPD and osteoporosis who received osteoporosis treatment. Among the patients who received treatment, the majority were female (n = 9136), accounting for 68.14% of all treated patients. A total of 53.4% of the patients had been prescribed steroids at least once within the last year before receiving a diagnosis of osteoporosis. A total of 34.61% of the patients received systemic corticosteroids with a daily dose equivalent to 5 mg of prednisolone within the 3 months prior to the diagnosis of osteoporosis. The older the patient was, the higher the probability of the prescription of medication for osteoporosis. Patients with depression had a high probability of receiving medication for osteoporosis with adjusted hazard ratio of 1.141 (95% confidence interval, 1.072-1.214). Conclusion: The rate of prescriptions for the treatment of osteoporosis in patients with COPD was low. Physicians need to be aware of this issue and treat osteoporosis more aggressively in patients with COPD. abstract_id: PUBMED:26928754 Prescription patterns, adherence and characteristics of non-adherence in children with asthma in primary care. Unlabelled: Adherence to treatment remains important for successful asthma management. Knowledge about asthma medication use and adherence in real-life offers opportunities to improve asthma treatment in children. Objective: To describe prescription patterns, adherence and factors of adherence to drugs in children with asthma. Methods: Population-based cohort study in a Dutch primary care database (IPCI), containing medical records of 176,516 children, aged 5-18 years, between 2000 and 2012. From asthma medication prescriptions, age, gender, seasonal and calendar year rates were calculated. Adherence was calculated using medication possession ratio (MPR) and ratio of controller to total asthma drug (CTT). Characteristics of children with high-vs.-low adherence were compared. Results: The total asthma cohort (n = 14,303; 35,181 person-years (PY) of follow-up) was mainly treated with short-acting β2-agonists (SABA; 40 users/100 PY) and inhaled corticosteroids (ICS; 32/100 PY). Median MPR for ICS was 56%. Children with good adherence (Q4 = MPR > 87%) were younger at start of ICS, more often visited specialists and had more exacerbations during follow-up compared to children with low adherence (Q1 = MPR < 37%). Conclusion: In Dutch primary care children with asthma were mainly prescribed SABA and ICS. Adherence to ICS was relatively low. Characteristics of children with good adherence were compatible with more severe asthma, suggesting that adherence is driven by treatment need or intensity of medical follow-up.
abstract_id: PUBMED:16618607 Differences in the prescription patterns of anti-asthmatic medications for children by pediatricians, family physicians and physicians of other specialties. Background: Prescription patterns of anti-asthma medications in children vary among doctors in different disciplines and settings, and may reflect differences in treatment outcome. The purpose of this study was to analyze the prescribing patterns of anti-asthma drugs by pediatricians, family physicians and other practitioners. Methods: Data for a total of 225,537 anti-asthma prescriptions were collected from the National Health Insurance Research Database for the period from January 1, 2002 to March 31, 2002. These medications included inhaled and oral adrenergics, inhaled and oral corticosteroids, xanthine derivatives, and leukotriene receptor antagonists prescribed by general pediatricians, family physicians and physicians in other disciplines. Results: Oral beta2-agonist was the most commonly prescribed drug used as monotherapy, with prescription rates of 70.4%, 46.9% and 58.0% by pediatricians, family physicians and other physicians, respectively. A xanthine derivative was the next most commonly prescribed monotherapy. Oral corticosteroid combined with oral beta2-agonist, followed by oral beta2-agonist combined with a xanthine derivative were the two most commonly prescribed dual-agent combined therapies by all three physician categories. The prescription rate for inhaled corticosteroid monotherapy was 7.8% by pediatricians, 5.6% by family physicians, and 8.0% by other physicians. The prescription rate for inhaled adrenergic was the highest in family physicians (14.9%), followed by the other physicians (7.2%), and was lowest in pediatricians (3.1%). Conclusion: Pediatricians and family physicians appeared to share similar opinions on the medical management of children with asthma in that both most commonly prescribed oral beta2-agonists and xanthine derivatives, either alone or in combination. Family physicians were least likely to prescribe an inhaled corticosteroid and most likely to prescribe an inhaled adrenergic agent. Answer: The study titled "Asthma prescription patterns for children: can GPs do better?" (PUBMED:21190392) aimed to assess prescription patterns of asthma medication for children and optimize prescribing by general practitioners (GPs). The study used data from the Second Dutch National Survey of General Practice (DNSGP-2) with children aged 0-17 years registered in 87 general practices. It included all children with at least one asthma prescription (n = 2993). The findings revealed that during one year, 16% of the children with physician-diagnosed asthma (n = 3562) received no asthma medication. Of the children receiving asthma medication, 61% received one or two prescriptions, while 39% received three or more. Continuous medication with a bronchodilator and/or a corticosteroid was prescribed in 22% of these children. Notably, one out of 5 children receiving continuous medication was prescribed a bronchodilator only. In 7.5% of the prescriptions, asthma medications other than bronchodilators or corticosteroids were prescribed. The study concluded that the annual number of asthma prescriptions per child with asthma is relatively low in general practice, and one in 20 children is prescribed bronchodilators only continuously, indicating room for improvement. Child and GP characteristics could not be used for targeting educational efforts. 
The study suggests that there is potential for GPs to improve asthma management in children by increasing adherence to guidelines and optimizing prescription patterns. The low number of annual prescriptions and the continuous prescription of bronchodilator-only therapy in some children highlight areas where GPs could enhance their prescribing practices to better align with evidence-based recommendations for asthma treatment in children.
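The "continuous versus intermittent" labels used in the GP study above reduce to simple counting over one year of dispensing records. The sketch below illustrates that arithmetic in Python; the record layout and the toy prescriptions are hypothetical and are not drawn from the DNSGP-2 data, so it is only a schematic of the classification rule (≥3 prescriptions/year = continuous, with bronchodilator-only users flagged separately).

```python
# Illustrative only: classifying per-child asthma prescription patterns.
# Record layout and example data are hypothetical, not from DNSGP-2.
from collections import defaultdict

# (child_id, drug_class) pairs dispensed in one calendar year
prescriptions = [
    (1, "bronchodilator"), (1, "bronchodilator"), (1, "bronchodilator"),
    (2, "bronchodilator"), (2, "corticosteroid"),
    (3, "bronchodilator"), (3, "corticosteroid"),
    (3, "corticosteroid"), (3, "bronchodilator"),
]

per_child = defaultdict(list)
for child_id, drug_class in prescriptions:
    per_child[child_id].append(drug_class)

# Continuous medication = three or more prescriptions in the year
continuous = {c: drugs for c, drugs in per_child.items() if len(drugs) >= 3}
# Children on continuous medication whose prescriptions were bronchodilators only
broncho_only = [c for c, drugs in continuous.items() if set(drugs) == {"bronchodilator"}]

n = len(per_child)
print(f"Continuous medication: {100 * len(continuous) / n:.0f}% of treated children")
print(f"Bronchodilator-only continuous use: {100 * len(broncho_only) / n:.0f}%")
```

In a full analysis the same per-child counts would then be related to child and GP characteristics, as the study did with multivariate logistic regression accounting for clustering within practices.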
Instruction: Heart and coronary artery protection in patients with mediastinal Hodgkin lymphoma treated with intensity-modulated radiotherapy: dose constraints to virtual volumes or to organs at risk? Abstracts: abstract_id: PUBMED:18037182 Heart and coronary artery protection in patients with mediastinal Hodgkin lymphoma treated with intensity-modulated radiotherapy: dose constraints to virtual volumes or to organs at risk? Background And Purpose: To increase heart and coronary artery protection in patients with mediastinal Hodgkin lymphoma treated with intensity-modulated radiotherapy (IMRT). Materials And Methods: Twenty patients with early-stage mediastinal Hodgkin lymphoma entered the study. IMRT was delivered to the initially involved lymph node volumes. Various virtual volumes (VVs) were designed to improve the protection of the heart and the origin of the coronary arteries, which were the organs at risk (OARs), while preserving adequate PTV coverage. The results obtained with VVs were then compared with those obtained with dose constraints assigned to OARs. Results: The most satisfactory VV was obtained using the PTV expansion concept. The best compromise between adequate PTV coverage and OAR protection was obtained with dose constraints assigned to the PTV expansion VV and to the origin of the coronary arteries. Conclusions: IMRT can be improved by using dose constraints assigned to the PTV expansion VV and/or to the origin of the coronary arteries. abstract_id: PUBMED:21705151 Dosimetric benefits of intensity-modulated radiotherapy combined with the deep-inspiration breath-hold technique in patients with mediastinal Hodgkin's lymphoma. Purpose: To assess the additional benefits of using the deep-inspiration breath-hold (DIBH) technique with intensity-modulated radiotherapy (IMRT) in terms of the protection of organs at risk for patients with mediastinal Hodgkin's disease. Methods And Materials: Patients with early-stage Hodgkin's lymphoma with mediastinal involvement were entered into the study. Two simulation computed tomography scans were performed for each patient: one using the free-breathing (FB) technique and the other using the DIBH technique with a dedicated spirometer. The clinical target volume, planning target volume (PTV), and organs at risk were determined on both computed tomography scans according to the guidelines of the European Organization for Research and Treatment of Cancer. In both cases, 30 Gy in 15 fractions was prescribed. The dosimetric parameters retrieved for the statistical analysis were PTV coverage, mean heart dose, mean coronary artery dose, mean lung dose, and lung V20. Results: There were no significant differences in PTV coverage between the two techniques (FB vs. DIBH). The mean doses delivered to the coronary arteries, heart, and lungs were significantly reduced by 15% to 20% using DIBH compared with FB, and the lung V20 was reduced by almost one third. The dose reduction to organs at risk was greater for masses in the upper part of the mediastinum. IMRT with DIBH was partially implemented in 1 patient. This combination will be extended to other patients in the near future. Conclusions: Radiation exposure of the coronary arteries, heart, and lungs in patients with mediastinal Hodgkin's lymphoma was greatly reduced using DIBH with IMRT. The greatest benefit was obtained for tumors in the upper part of the mediastinum. The possibility of a wider use in clinical practice is currently under investigation in our department. 
abstract_id: PUBMED:16169675 Is intensity-modulated radiotherapy better than conventional radiation treatment and three-dimensional conformal radiotherapy for mediastinal masses in patients with Hodgkin's disease, and is there a role for beam orientation optimization and dose constraints assigned to virtual volumes? Purpose: To evaluate the role of beam orientation optimization and the role of virtual volumes (VVs) aimed at protecting adjacent organs at risk (OARs), and to compare various intensity-modulated radiotherapy (IMRT) setups with conventional treatment with anterior and posterior fields and three-dimensional conformal radiotherapy (3D-CRT). Methods And Materials: Patients with mediastinal masses in Hodgkin's disease were treated with combined modality therapy (three to six cycles of adriamycin, bleomycin, vinblastine, and dacarbazine [ABVD] before radiation treatment). Contouring and treatment planning were performed with Somavision and CadPlan Helios (Varian Systems, Palo Alto, CA). The gross tumor volume was determined according to the prechemotherapy length and the postchemotherapy width of the mediastinal tumor mass. A 10-mm isotropic margin was added for the planning target volume (PTV). Because dose constraints assigned to OARs led to unsatisfactory PTV coverage, VVs were designed for each patient to protect adjacent OARs. The prescribed dose was 40 Gy to the PTV, delivered according to guidelines from International Commission on Radiation Units and Measurements Report No. 50. Five different IMRT treatment plans were compared with conventional treatment and 3D-CRT. Results: Beam orientation was important with respect to the amount of irradiated normal tissues. The best compromise in terms of PTV coverage and protection of normal tissues was obtained with five equally spaced beams (5FEQ IMRT plan) using dose constraints assigned to VVs. When IMRT treatment plans were compared with conventional treatment and 3D-CRT, dose conformation with IMRT was significantly better, with greater protection of the heart, coronary arteries, esophagus, and spinal cord. The lungs and breasts in women received a slightly higher radiation dose with IMRT compared with conventional treatments. The greater volume of normal tissue receiving low radiation doses could be a cause for concern. Conclusions: The 5FEQ IMRT plan with dose constraints assigned to the PTV and VV allows better dose conformation than conventional treatment and 3D-CRT, notably with better protection of the heart and coronary arteries. Of concern is the "spreading out" of low doses to the rest of the patient's body. abstract_id: PUBMED:34358649 A Systematic Review on Intensity Modulated Radiation Therapy for Mediastinal Hodgkin's Lymphoma. Background: Secondary malignant neoplasms (SMNs) and cardiovascular diseases induced by chemotherapy and radiotherapy represent the main cause of excess mortality for early-stage Hodgkin lymphoma patients, especially when the mediastinum is involved. Conformal radiotherapy techniques such as Intensity-Modulated Radiation Therapy (IMRT) could allow a reduction of the dose to the organs-at-risk (OARs) and therefore limit long-term toxicity. Methods: We performed a systematic review of the current literature regarding comparisons between IMRT and conventional photon beam radiotherapy, or between different IMRT techniques, for the treatment of mediastinal lymphoma. 
Results And Conclusions: IMRT allows a substantial reduction of the volumes of OARs exposed to high doses, reducing the risk of long-term toxicity. This benefit is counterbalanced by the increase of volumes receiving low doses, which could potentially increase the risk of SMNs. Treatment planning should be personalized based on patient and disease characteristics. Dedicated techniques such as "butterfly" VMAT often provide the best trade-off. abstract_id: PUBMED:24735767 Dosimetric advantages of a "butterfly" technique for intensity-modulated radiation therapy for young female patients with mediastinal Hodgkin's lymphoma. Purpose: High cure rates for Hodgkin's lymphoma must be balanced with long-term treatment-related toxicity. Here we report an intensity-modulated radiation therapy (IMRT) technique that achieves adequate target coverage for mediastinal disease while minimizing high- and low-dose exposure of critical organs. Methods And Materials: Treatment plans for IMRT and conventional anteroposterior-posteroanterior (AP-PA) techniques, with comparable coverage of the planning target volume (PTV), were generated for 9 female patients with mediastinal Hodgkin's lymphoma assuming use of inclined positioning, daily breath-hold, and CT-on-rails verification. Our "butterfly" IMRT beam arrangement involved anterior beams of 300°-30° and posterior beams of 160°-210°. Percentages of normal structures receiving 30 Gy (V30), 20 Gy (V20), and 5 Gy (V5) were tabulated for the right and left breasts, total lung, heart, left and right ventricles, left anterior descending coronary artery (LAD), and spinal cord. Differences in each variable, conformity index, homogeneity index, and V107% between the two techniques were calculated (IMRT minus conventional). Results: Use of IMRT generally reduced the V30 and V20 to critical structures: -1.4% and +0.1% to the right breast, -1.7% and -0.9% to the left breast, -14.6% and -7.7% to the total lung, -12.2% and -10.5% to the heart, -2.4% and -14.2% to the left ventricle, -16.4% and -8.4% to the right ventricle, -7.0% and -14.2% to the LAD, and -52.2% and -13.4% to the spinal cord. Differences in V5 were +6.2% for right breast, +2.8% for left breast, +12.9% for total lung, -3.5% for heart, -8.2% for left ventricle, -1.5% for right ventricle, +0.1% for LAD, and -0.1% for spinal cord. Use of IMRT significantly reduced the volume of tissue receiving 107% of the dose (mean 754 cm³ reduction). Conclusions: This butterfly technique for IMRT avoids excess exposure of heart, breast, lung, and spinal cord to doses of 30 or 20 Gy; mildly increases V5 to the breasts; and decreases the V107%. abstract_id: PUBMED:28188697 A case study evaluating deep inspiration breath-hold and intensity-modulated radiotherapy to minimise long-term toxicity in a young patient with bulky mediastinal Hodgkin lymphoma. Radiotherapy plays an important role in the treatment of early-stage Hodgkin lymphoma, but late toxicities such as cardiovascular disease and second malignancy are a major concern. Our aim was to evaluate the potential of deep inspiration breath-hold (DIBH) and intensity-modulated radiotherapy (IMRT) to reduce cardiac dose from mediastinal radiotherapy. A 24-year-old male with early-stage bulky mediastinal Hodgkin lymphoma received involved-site radiotherapy as part of a combined modality programme. Simulation was performed in free breathing (FB) and DIBH. The target and organs at risk were contoured on both datasets.
Free breathing-3D conformal (FB-3DCRT), DIBH-3DCRT, FB-IMRT and DIBH-IMRT were compared with respect to target coverage and doses to organs at risk. A 'butterfly' IMRT technique was used to minimise the low-dose bath. In our patient, both DIBH (regardless of mode of delivery) and IMRT (in both FB and DIBH) achieved reductions in mean heart dose. DIBH improved all lung parameters. IMRT reduced high dose (V20), but increased low dose (V5) to lung. DIBH-IMRT was chosen for treatment delivery. Advanced radiotherapy techniques have the potential to further optimise the therapeutic ratio in patients with mediastinal lymphoma. Benefits should be assessed on an individualised basis. abstract_id: PUBMED:17629183 Coronary artery disease following mediastinal irradiation. Mediastinal irradiation is a known cause of late onset cardiac complications including coronary artery disease. We describe a 58-year-old female patient, without any of the traditional coronary risk factors, who presented with inferior infarction 23 years after radiotherapy for Hodgkin's lymphoma of the mediastinum. Coronary angiography demonstrated severe ostial stenoses of both coronary arteries. The patient underwent coronary artery bypass grafting and is doing well 10 months later. The therapeutic value of mediastinal irradiation is unquestionable. However, it may be associated with late complications from the irradiated tissues, including the heart. Long-term follow up of cancer survivors who have received mediastinal irradiation should therefore include annual cardiac ultrasound examinations, as well as functional testing for the detection of myocardial ischaemia. abstract_id: PUBMED:21885321 Intensity modulated radiotherapy for intrathoracic cancers: a dangerous liaison? Our experience in the treatment of Hodgkin lymphoma mediastinal masses. IMRT is a seducing treatment option in patients with Hodgkin lymphoma mediastinal masses due to the complex form of the tumour masses and their proximity to organs at risk such as the heart and the coronary arteries. This treatment delivery technique remains risky owing to respiratory movements and heart beats. The concomitant use of IMRT and respiratory gating is enticing, but a number of theoretical and practical hurdles remain to be resolved before it can be used in clinical daily practice. abstract_id: PUBMED:3578337 Left main coronary artery stenosis following mediastinal irradiation. The association of mediastinal radiation therapy and coronary artery disease has been documented over the past three decades. This report describes a case of left main coronary artery stenosis eight years after radiation therapy in a 27-year-old woman. The patient was a young woman with no risk factors for coronary artery disease who had development of new-onset angina at rest. At coronary arteriography, the patient was found to have a tight ostial left main stenosis. The association of mediastinal radiation therapy with fixed and vasospastic coronary artery disease is reviewed. With many patients treated by radiation therapy now surviving their thoracic malignancies, an enlarging young population may be susceptible to the early development of ischemic heart disease. abstract_id: PUBMED:31146071 Inclusion of heart substructures in the optimization process of volumetric modulated arc therapy techniques may reduce the risk of heart disease in Hodgkin's lymphoma patients.
Background And Purpose: Radiotherapy is an effective treatment for Hodgkin's lymphoma (HL), but increases the risk of long-term complications such as cardiac events and second cancers. This study aimed to reduce the risk of cardiovascular events through an optimization of the dose distribution on heart substructures in mediastinal HL patients with the adoption of different volumetric modulated arc therapy (VMAT) techniques, while maintaining the same risk of second cancer induction on lungs and breasts. Materials And Methods: Thirty patients (15 males and 15 females, 15 bulky lesions) treated between 2012 and 2017 at our institution were selected. Disease extent was mediastinum plus neck (n = 10), mediastinum plus unilateral axilla (n = 10) and mediastinum alone (n = 10). Lungs, breasts, whole heart and sub-structures (coronary arteries, valves and chambers) were contoured as organs at risk and included in the optimization process. A "first-generation" multi-arc butterfly VMAT (B-VMAT) planning solution was compared to a full-arc butterfly VMAT (FaB-VMAT) approach, consisting of a full arc plus a non-coplanar arc. Lifetime attributable risk (LAR) of second breast and lung cancer and relative risk (RR) of coronary artery disease (CAD) and chronic heart failure (CHF) were estimated. Results: FaB-VMAT resulted in lower mean dose to whole heart (7.6 vs 6.9 Gy, p = 0.003), all coronary arteries (16.1 vs 13.5 Gy, p < 0.001), left ventricle (4.2 vs 3.4 Gy, p = 0.007) and in lower V20Gy to the lungs (15% vs 14%, p = 0.008). A significantly lower RR for CAD and CHF was observed for FaB-VMAT. The risk of second breast and lung cancer was comparable between the two solutions, with the exception of female patients with mediastinal bulky involvement, where B-VMAT resulted in lower mean dose (2.8 vs 3.5 Gy, p = 0.03) and V4Gy (22% vs 16%, p = 0.04) to breasts, with a significant reduction in LAR (p = 0.03). Conclusions: FaB-VMAT significantly decreased the RR for CAD and CHF compared to B-VMAT, with almost the same overall risk of lung and breast cancer induction. These results are influenced by the different anatomical presentations, supporting the need for an individualized approach. Answer: In the treatment of patients with mediastinal Hodgkin lymphoma using intensity-modulated radiotherapy (IMRT), there has been a focus on protecting the heart and coronary arteries from radiation exposure. The question of whether to apply dose constraints to virtual volumes (VVs) or directly to the organs at risk (OARs) has been explored in several studies. One study found that the most satisfactory results for protecting the heart and coronary arteries were obtained by using the PTV expansion concept, which involves applying dose constraints to a PTV expansion VV and to the origin of the coronary arteries. This approach provided a good compromise between adequate PTV coverage and OAR protection (PUBMED:18037182). Another study assessed the additional benefits of using the deep-inspiration breath-hold (DIBH) technique with IMRT. The results showed that the mean doses delivered to the coronary arteries, heart, and lungs were significantly reduced by 15% to 20% using DIBH compared with free-breathing (FB), and the lung V20 was reduced by almost one third. This suggests that combining DIBH with IMRT can greatly reduce radiation exposure to the heart and lungs, particularly for tumors in the upper part of the mediastinum (PUBMED:21705151).
A comparison of various IMRT setups with conventional treatment and three-dimensional conformal radiotherapy (3D-CRT) indicated that IMRT with dose constraints assigned to VVs allowed better dose conformation than conventional treatment and 3D-CRT, notably with better protection of the heart and coronary arteries. However, there was a concern about the increased volume of normal tissue receiving low radiation doses (PUBMED:16169675). A systematic review on IMRT for mediastinal Hodgkin's lymphoma concluded that IMRT allows a substantial reduction of the volumes of OARs exposed to high doses, reducing the risk of long-term toxicity. However, this benefit is counterbalanced by the increase of volumes receiving low doses, which could potentially increase the risk of secondary malignant neoplasms (SMNs) (PUBMED:34358649). In summary, both approaches—applying dose constraints to VVs and directly to OARs—have been used to protect the heart and coronary arteries during IMRT for mediastinal Hodgkin lymphoma. The choice between the two may depend on individual patient and disease characteristics, with the goal of achieving the best possible balance between effective treatment and minimizing long-term toxicity.
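Several of the endpoints quoted above are dose-volume metrics (mean organ dose, lung V20, V30, V5, V107%), i.e., the percentage of an organ's volume receiving at least a given dose. As a point of reference, the sketch below shows how such metrics are typically derived from a per-voxel dose grid and an organ mask; the dose values and the "lung" mask are invented for illustration and do not correspond to any plan in the cited studies.

```python
# Illustrative only: dose-volume metrics from a toy per-voxel dose grid.
import numpy as np

rng = np.random.default_rng(0)
dose_gy = rng.gamma(shape=2.0, scale=4.0, size=(40, 40, 40))  # toy dose grid in Gy
lung_mask = np.zeros_like(dose_gy, dtype=bool)
lung_mask[5:35, 5:20, 5:35] = True                            # toy "lung" region

def v_x(dose, mask, threshold_gy):
    """Percentage of the masked organ volume receiving at least threshold_gy."""
    organ = dose[mask]
    return 100.0 * np.count_nonzero(organ >= threshold_gy) / organ.size

print(f"Mean lung dose: {dose_gy[lung_mask].mean():.1f} Gy")
print(f"V20: {v_x(dose_gy, lung_mask, 20):.1f}%   V5: {v_x(dose_gy, lung_mask, 5):.1f}%")
```

Comparing such metrics between competing plans (for example free breathing versus DIBH, or AP-PA versus IMRT) is what underlies the percentage reductions reported in these abstracts.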
Instruction: Use of rapid ABG analyzer in measurement of potassium concentration: does it agree with venous potassium concentration? Abstracts: abstract_id: PUBMED:19626812 Use of rapid ABG analyzer in measurement of potassium concentration: does it agree with venous potassium concentration? Background: Because extreme venous potassium abnormality can be life threatening, rapid measurement of potassium level is essential. Traditional biochemical analysis for venous potassium takes time and delays management in seriously ill patients. Analysis from arterial blood gas (ABG) may be an alternative method that is faster. Objective: To determine agreement between potassium obtained from venous and arterial blood gas in emergency patients, Siriraj Hospital. Material And Method: Cross-sectional study performed in 53 patients who presented to the emergency department of Siriraj Hospital. Potassium level measured from ABG was compared to venous route. Results: The mean of venous, arterial potassium and difference between each pair were 3.95, 3.46, and 0.49 mmol/L respectively. The Intraclass Correlation Coefficient between each pair of two methods and 95% CI of agreement were 0.904 and 0.839 to 0.943, p < 0.01. Conclusion: The agreement between ABG and venous potassium measurement is confirmed. Clinicians can use ABG's potassium level as a guideline for treatment instead of using the conventional venous potassium level. abstract_id: PUBMED:35371883 Accuracy of Potassium Measurement Using Blood Gas Analyzer. Introduction: Newer blood gas analyzers can measure both blood gases and electrolytes in both arterial and venous blood samples. They are small, compact, and mobile point of care test (POCT) devices. They can produce results in as short as five minutes. We aimed at assessing the accuracy of potassium (K) level measured by gas analyzer (index test) by comparing that to the regular laboratory machine (reference standard) in our hospital. Our goal is to use POCT result of potassium so we may start insulin infusion within five to 10 minutes of arrival of diabetic ketoacidosis (DKA) patients to the emergency room (ER). It takes an average of 30 minutes to get the result using the reference standard machine. Potassium level is needed urgently in cases of DKA before initiating insulin infusion. That is true also during cardiopulmonary resuscitation (CPR) and while replacing K in severe hypokalemia and during the management of hyperkalemia. Methods: We looked into the potassium results from 265 patients who had venous blood gas (VBG) or arterial blood gas (ABG) samples and compared that to results of potassium in venous blood samples of these same patients done simultaneously or within two hours. All patients who had blood gas and venous blood drawn simultaneously or within two hours were eligible irrespective of gender, age, diagnosis, and location in the hospital. Data were collected between January 2019 and June 2019. We excluded all cases that were receiving IV fluids, diuretics, or potassium supplements. Samples examined were from all different areas of the hospital including emergency room (ER), intensive care unit (ICU), and general floors. All ages and all diagnoses were included. Results: We used the Bland-Altman method to analyze our data. More than 95% of the data fell within ± 2 standard deviations (SD) of the mean difference, strongly suggestive of agreement between the index test and the standard reference of the laboratory methods. The bias was 0.19.
Lin's concordance correlation coefficient was 0.6584. Conclusion: Findings of this study support the use of POCT blood gas analyzer for measuring potassium when the results are needed urgently. When measuring potassium, blood gas analyzers are as accurate as automated analyzers. They produce results in five minutes or so and can be relied upon when potassium level is needed urgently. They are cost-effective and may be available at the bedside. abstract_id: PUBMED:25849375 Analysis of bias in measurements of potassium, sodium and hemoglobin by an emergency department-based blood gas analyzer relative to hospital laboratory autoanalyzer results. Objective: The emergency departments (EDs) of Chinese hospitals are gradually being equipped with blood gas machines. These machines, along with the measurement of biochemical markers by the hospital laboratory, facilitate the care of patients with severe conditions who present to the ED. However, discrepancies have been noted between the Arterial Blood Gas (ABG) analyzers in the ED and the hospital laboratory autoanalyzer in relation to electrolyte and hemoglobin measurements. The present study was performed to determine whether the ABG and laboratory measurements of potassium, sodium, and hemoglobin levels are equivalent, and whether ABG analyzer results can be used to guide clinical care before the laboratory results become available. Materials And Methods: Study power analyses revealed that 200 consecutive patients who presented to our ED would allow this prospective single-center cohort study to detect significant differences between ABG- and laboratory-measured potassium, sodium, and hemoglobin levels. Paired arterial and venous blood samples were collected within 30 minutes. Arterial blood samples were measured in the ED by an ABL 90 FLEX blood gas analyzer. The biochemistry and blood cell counts of the venous samples were measured in the hospital laboratory. The potassium, sodium, and hemoglobin concentrations obtained by both methods were compared by using paired Student's t-test, Spearman's correlation, Bland-Altman plots, and Deming regression. Results: The mean ABG and laboratory potassium values were 3.77±0.44 and 4.2±0.55, respectively (P<0.0001). The mean ABG and laboratory sodium values were 137.89±5.44 and 140.93±5.50, respectively (P<0.0001). The mean ABG and laboratory hemoglobin values were 12.28±2.62 and 12.35±2.60, respectively (P = 0.24). Conclusion: Although there are statistical differences and acceptable biases between ABG- and laboratory-measured potassium and sodium, the biases do not exceed USCLIA-determined limits. In parallel, there are no statistical differences and biases beyond USCLIA-determined limits between ABG- and laboratory-measured hemoglobin. Therefore, all three variables measured by ABG were reliable. abstract_id: PUBMED:25754102 Evaluation of the potassium adsorption capacity of a potassium adsorption filter during rapid blood transfusion. The concentration of extracellular potassium in red blood cell concentrates (RCCs) increases during storage, leading to risk of hyperkalemia. A potassium adsorption filter (PAF) can eliminate the potassium at normal blood transfusion. This study aimed to investigate the potassium adsorption capacity of a PAF during rapid blood transfusion. We tested several different potassium concentrations under a rapid transfusion condition using a pressure bag. The adsorption rates of the 70-mEq/l model were 76.8%.
The PAF showed good potassium adsorption capacity, suggesting that this filter may provide a convenient method to prevent hyperkalemia during rapid blood transfusion. abstract_id: PUBMED:37095624 Heparin Concentration in Evacuated Tubes and Its Effect on pH, Ionized Calcium, Lactate, and Potassium in Venous Blood Gas Analysis. Arterial blood specimens collected in evacuated tubes are unacceptable for blood gas analysis. However, evacuated tubes are routinely used for venous blood-gas analysis. The impact of the blood to heparin ratio on venous blood in evacuated tubes is unclear. Venous blood was drawn into lithium and sodium heparin evacuated tubes that were 1/3 full, 1/2 full, 2/3 full, and fully filled. Specimens were analyzed for pH, ionized calcium (iCa), lactate, and potassium on a blood-gas analyzer. The results for specimens filled only 1/3 full for lithium and sodium heparin tubes revealed a significant increase in pH and a significant decrease in the iCa. Underfilling the lithium and sodium heparin evacuated tubes did not significantly impact the lactate or potassium results. Venous whole-blood specimens should be filled to at least 2/3 full for accurate pH and iCa results. abstract_id: PUBMED:761376 On-line continuous potentiometric measurement of potassium concentration in whole blood during open-heart surgery. We describe a flow-through system with an ion-selective electrode for measurement of blood potassium ion concentration, continuously and on-line off the extracorporeal blood circulation in an operating theater during human open-heart surgery. Comparison measurements were made with the SMA flame photometer (blood plasma) and an Orion SS 30 sodium/potassium analyzer (whole blood). The potassium concentration values obtained with the flow-through system agree well with the ones determined with the flame photometer. The time delay of the measurement with the flow-through system was relatively long (2 min) but delays of only 10-20 s seem feasible. Short time delays can deepen insight and simplify rational treatment under surgery conditions. abstract_id: PUBMED:27303138 Are sodium and potassium results on arterial blood gas analyzer equivalent to those on electrolyte analyzer? Objectives: The present study was conducted with the aim to compare the sodium (Na) and potassium (K) results on arterial blood gas (ABG) and electrolyte analyzers both of which use direct ion selective electrode technology. Materials And Methods: This was a retrospective study in which data were collected for simultaneous ABG and serum electrolyte samples of a patient received in Biochemistry Laboratory during February to May 2015. The ABG samples received in heparinized syringes were processed on Radiometer ABL80 analyzer immediately. Electrolytes in serum sample were measured on ST-100 Sensa Core analyzer after centrifugation. Data were collected for 112 samples and analyzed with the help of Excel 2010 and Statistical software for Microsoft excel XLSTAT 2015 software. Results: The mean Na level in serum sample was 139.4 ± 8.2 mmol/L compared to 137.8 ± 10.5 mmol/L in ABG (P < 0.05). The mean difference between the results was 1.6 mmol/L. Mean K level in serum sample was 3.8 ± 0.9 mmol/L as compared to 3.7 ± 0.9 mmol/L in ABG sample (P < 0.05). The mean difference between the results was 0.14 mmol/L. Statistically significant difference was observed in results of two instruments in low Na (<135 mmol/L) and normal K (3.5-5.2 mmol/L) ranges.
The 95% limit of agreement for Na and K on both instruments was 9.9 to -13.2 mmol/L and 0.79 to -1.07 mmol/L respectively. Conclusions: The clinicians should be cautious in using the electrolyte results of electrolyte and ABG analyzer in an interchangeable manner. abstract_id: PUBMED:29536889 Spurious elevation of serum potassium concentration measured in samples with thrombocytosis. Background: Several factors can lead to falsely elevated values of serum potassium. Thrombocytosis is one of these factors, since breakage or activation of platelets during blood coagulation in vitro may lead to spurious release of potassium. The purpose of the study was to evaluate to which extent the platelet count may impact on potassium in both serum and plasma. Methods: The study population consisted of 42 subjects with platelet counts between 20 and 750×10⁹/L. In each sample potassium was measured in both serum and plasma using an indirect potentiometric method on the analyzer Modular P800 (Roche, Milan, Italy). Platelet count was performed with the hematological analyzer Advia 120 (Siemens, Milano, Italy). Results: Significant differences were found between potassium values in serum and in plasma. A significant correlation was also observed between serum potassium values and the platelet count in whole blood, but not with the age, sex, erythrocyte and leukocyte counts in whole blood. No similar correlation was noticed between plasma potassium and platelet count in whole blood. The frequency of hyperkalemia was also found to be higher in serum (20%) than in plasma (7%) in samples with a platelet count in whole blood >450×10⁹/L. Conclusions: The results of this study show that platelets in the biological samples may impact on potassium measurement when exceeding 450×10⁹/L. We henceforth suggest that potassium measurement in plasma may be more accurate than in serum, especially in subjects with thrombocytosis. abstract_id: PUBMED:28553133 Inter-instrumental comparison for the measurement of electrolytes in patients admitted to the intensive care unit. Objective: To investigate whether benchtop auto-analyzers (AAs) and arterial blood gas (ABG) analyzers, for measuring electrolyte levels of patients admitted to intensive care units (ICU), are equal and whether they can be used interchangeably. Materials And Method: This study was conducted on 98 patients admitted to the ICU of the Institute of Medicine, Kathmandu, Nepal between 15 October and 15 December 2016. The sample for AA was collected from the peripheral vein through venipuncture, and that for ABG analyzer was collected from radial artery simultaneously. Electrolyte levels were measured with ABG analyzer in the ICU itself, and with benchtop AA in the central clinical biochemistry laboratory. Results: The mean value for sodium by AA was 144.6 (standard deviation [SD] 7.63) and by ABG analyzer 140.1 (SD 7.58), which was significant (p-value <0.001). The mean value for potassium by AA was 3.6 (SD 0.52) and by ABG analyzer 3.58 (SD 0.66). The Bland-Altman 95% limits of agreement between methods were -4.45 to 13.11 mmol/L for sodium (mean difference 4.3 mmol/L) and -1.15 to 1.24 mmol/L for potassium (mean difference 0.04 mmol/L). The United States Clinical Laboratory Improvement Amendments accepts a 0.5 mmol/L difference in measured potassium levels and a 4 mmol/L difference in measured sodium levels, in the gold standard measure of the standard calibration solution.
The Passing-Bablok regression with 95% confidence interval has an intercept of zero and slope one for both sodium and potassium, and the 95% range of random differences is -6.32 to 6.32 for sodium and -0.84 to 0.84 for potassium, showing no significant deviation from linearity. Conclusion: It can be concluded that AA and ABG analyzers may be used interchangeably for measurement of potassium in the Institute of Medicine, while the same cannot be concluded for the measurement of sodium, because of the significant difference in sodium measurement by the two instruments. abstract_id: PUBMED:30936997 Efficiency evaluation and safety monitoring of tailored rapid potassium supplementation strategy for fatal severe hypokalemia. Stringent regulations have been established for the intravenous administration of potassium to avoid hyperkalemia in the clinic. The standard approach, however, often does not work well for treating severe hypokalemia. In the present study, a rabbit model of hypokalemia was used to develop a tailored rapid potassium supplementation strategy and the effectiveness and safety of this new strategy were assessed. A total of 20 rabbits with induced severe hypokalemia were randomly divided into two equal treatment groups. All of the animals were injected with 3% KCl through the auricular marginal veins by a micro-injection pump; the target serum potassium concentration was 4 mmol/l. The conventional treatment group was administered a continued potassium infusion at the standard infusion rate of 0.4 mmol/kg/h. The tailored rapid supplementation group was treated in two steps: First, a loading dose of potassium was rapidly injected for 5 min and this step was repeated until the serum potassium concentration was increased to 3.5 mmol/l. After this increase in serum potassium concentration, a sustained potassium infusion at a constant dose was performed. Electrocardiogram, blood pressure, respiratory rate, serum potassium concentration, urine volume and vital signs were monitored in real-time. No hyperkalemia occurred in either of the two groups. However, compared with the conventional group, the tailored rapid group had a significantly shorter duration of potassium infusion and arrhythmia, and a higher survival rate. In conclusion, these results demonstrate that the tailored rapid potassium supplementation strategy shortened the time of hypokalemia and is a safe and better treatment option to remedy life-threatening arrhythmia caused by severe hypokalemia with a high success rate. Answer: The use of rapid ABG analyzers for the measurement of potassium concentration has been studied to determine if it agrees with venous potassium concentration. The findings from various studies indicate that there is a general agreement between potassium levels measured from arterial blood gas (ABG) and venous samples, although some discrepancies have been noted. One study conducted at Siriraj Hospital found that the mean difference between venous and arterial potassium was 0.49 mmol/L, with an Intraclass Correlation Coefficient of 0.904, suggesting a high level of agreement between the two methods (PUBMED:19626812). Another study supported the use of point-of-care blood gas analyzers for measuring potassium, as they were found to be as accurate as automated analyzers and produced results quickly, which is crucial in urgent situations such as during cardiopulmonary resuscitation or the management of diabetic ketoacidosis (PUBMED:35371883).
However, another study noted statistical differences and acceptable biases between ABG- and laboratory-measured potassium, but the biases did not exceed US Clinical Laboratory Improvement Amendments (CLIA)-determined limits, indicating that ABG measurements were reliable (PUBMED:25849375). Similarly, a retrospective study comparing ABG and serum electrolyte samples found significant differences in results between the two instruments, suggesting that clinicians should be cautious when using the results interchangeably (PUBMED:27303138). An inter-instrumental comparison study concluded that benchtop auto-analyzers and ABG analyzers could be used interchangeably for the measurement of potassium in an ICU setting, although this was not the case for sodium measurements due to significant differences (PUBMED:28553133). In summary, the evidence suggests that rapid ABG analyzers generally agree with venous potassium concentration measurements, making them a viable option for rapid assessment of potassium levels in emergency and critical care settings. However, some studies advise caution due to observed discrepancies, and it is important to consider the specific context and potential biases when interpreting ABG potassium measurements.
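Most of the agreement statistics quoted in this answer (bias, 95% limits of agreement, intraclass or concordance correlation) come from standard method-comparison analyses such as Bland-Altman. The sketch below shows the Bland-Altman step for paired ABG and venous potassium values; the numbers are fabricated for demonstration and are not data from any of the cited studies.

```python
# Illustrative only: Bland-Altman agreement for paired potassium measurements.
import numpy as np

abg_k    = np.array([3.4, 4.1, 3.0, 5.2, 4.4, 3.8, 2.9, 4.0])  # mmol/L, blood gas analyzer
venous_k = np.array([3.6, 4.3, 3.3, 5.4, 4.5, 4.1, 3.1, 4.2])  # mmol/L, laboratory analyzer

diff = abg_k - venous_k
bias = diff.mean()                 # mean difference between methods
sd = diff.std(ddof=1)              # SD of the differences
lo, hi = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"Bias: {bias:.2f} mmol/L")
print(f"95% limits of agreement: {lo:.2f} to {hi:.2f} mmol/L")
# These limits can then be set against a clinical allowance
# (e.g., the 0.5 mmol/L potassium difference accepted by US CLIA).
```

Whether the resulting limits are narrow enough to treat the two methods as interchangeable is a clinical judgment, which is why the studies above reach slightly different conclusions from similar numerical agreement.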
Instruction: Does mode of delivery affect sexual functioning of the man partner? Abstracts: abstract_id: PUBMED:17451485 Does mode of delivery affect sexual functioning of the man partner? Introduction: Recent surveys showed that the major reasons for avoiding vaginal delivery were the fear of childbirth and the concern for postpartum sexual health. Although sexual dysfunction is a disorder that affects a couple rather than an individual, all studies investigating the relationship between the mode of delivery and sexual problems have been conducted only in cohorts of women. Aim: To determine the effect of mode of delivery on quality of sexual relations and sexual functioning of men by using the Golombok-Rust Inventory of Sexual Satisfaction (GRISS). Main Outcome Measure: Mean score of sexual function and prevalence of sexual dysfunction in overall and specific areas of the GRISS were compared among the three groups. Methods: A total of 107 men accompanying their wives in outpatient clinics of obstetrics and gynecology met inclusion/exclusion criteria. Three groups of men were defined; men whose partners had: (i) "elective cesarean delivery" (N = 21; mean age 32.2 +/- 3.8 years); (ii) "vaginal delivery with mediolateral episiotomy" (N = 36; mean age 31.4 +/- 4.5 years); and (iii) "not given birth" (N = 50; mean age 28.8 +/- 4.0 years). Results: Mean overall sexual function score (normal value < 25 points) was 20.5 +/- 8.2 in the elective caesarean group, 19.3 +/- 6.5 in the vaginal delivery group, and 18.8 +/- 9.3 in the nulliparae group (P = 0.731). Prevalence of sexual dysfunction in men was 28.6% in the elective caesarean group, 19.4% in the vaginal delivery group, and 30.0% in the nulliparae group (P = 0.526). Conclusion: Overall sexual function of men was not affected by their partner's parity and mode of delivery. An elective cesarean section simply because of concerns about sexual function would not provide additional benefit to men, and could deny women a possible vaginal delivery, which is generally assumed to be safer than cesarean section. abstract_id: PUBMED:25170409 The effect of mode of delivery on postpartum sexual functioning in primiparous women. Objective: To evaluate the effect of mode of delivery on postpartum sexual functioning in primiparous women. Methods: In this cross-sectional descriptive study, 150 primiparous women in postpartum period, who attended the family planning or vaccination clinics, were enrolled for the study. Eighty-one had vaginal delivery with episiotomy and 69 had experienced cesarean section. Sexual function was evaluated by the Female Sexual Function Index within 3 and 6 months postpartum. Results: About 29% in vaginal delivery group and 37% in cesarean delivery group had resumed their sexual intercourses four weeks after delivery (p=0.280). There were no significant differences between mode of delivery and sexual functioning, including desire, arousal, lubrication, orgasm, satisfaction and pain. Conclusion: The present study showed that postpartum sexual functioning was not associated with the type of delivery. abstract_id: PUBMED:29392439 Mode of delivery, childbirth experience and postpartum sexuality. Purpose: Although childbirth has been studied extensively with regard to postpartum sexuality, the association between the psychological aspects of childbirth and postpartum sexuality has rarely been examined.
This research is aimed at studying the possible association of mode of delivery, childbirth experience, sexual functioning, and sexual satisfaction. Methods: Three hundred seventy-six primiparous and nulliparous women completed this web-based survey 100-390 days postpartum. The participants completed a socio-demographic and delivery details questionnaire, the Childbirth Perception Questionnaire (CPQ), the Index of Sexual Satisfaction (ISS) and the Sexual Function Questionnaire's Medical Impact Scale (SFQ-MIS). Results: Structural equation modeling showed that there are indirect effects of mode of delivery on sexual functioning and sexual satisfaction through childbirth experience. Specific significant indirect paths were found: mode of delivery to sexual functioning through childbirth experience [B = - 0.26, p = 0.023, 95% CI = (- 0.40, - 0.10)] and from mode of delivery to sexual satisfaction through childbirth experience [B = 0.11, p = 0.013, 95% CI = (0.05, 0.21)]. No significant direct effects were found between mode of delivery and sexual functioning or sexual satisfaction. Conclusions: The results point to the association of the psychological experience of childbirth, sexual functioning and sexual satisfaction. In addition, we found a possible indirect link between mode of delivery and postpartum sexuality. It can be concluded that the psychological factors associated with childbirth are important to the understanding of postpartum sexuality. abstract_id: PUBMED:26857530 Impact of Mode of Delivery on Female Postpartum Sexual Functioning: Spontaneous Vaginal Delivery and Operative Vaginal Delivery vs. Cesarean Section. Introduction: Several studies have explored the association between modes of delivery and postpartum female sexual functioning, although with inconsistent findings. Aim: To investigate the impact of mode of delivery on female postpartum sexual functioning by comparing spontaneous vaginal delivery, operative vaginal delivery, and cesarean section. Methods: One hundred thirty-two primiparous women who had a spontaneous vaginal delivery, 45 who had an operative vaginal delivery, and 92 who underwent a cesarean section were included in the study (N = 269). Postpartum sexual functioning was evaluated 6 months after childbirth using the Female Sexual Function Index. Time to resumption of sexual intercourse, postpartum depression, and current breastfeeding also were assessed 6 months after delivery. Main Outcome Measures: Female Sexual Function Index total and domain scores and time to resumption of sexual intercourse at 6 months after childbirth. Results: Women who underwent an operative vaginal delivery had poorer scores on arousal, lubrication, orgasm, and global sexual functioning compared with the cesarean section group and lower orgasm scores compared with the spontaneous vaginal delivery group (P &lt; .05). The mode of delivery did not significantly affect time to resumption of sexual intercourse. Women who were currently breastfeeding had lower lubrication, more pain at intercourse, and longer time to resumption of sexual activity. Conclusion: Operative vaginal delivery might be associated with poorer sexual functioning, but no conclusions can be drawn from this study regarding the impact of pelvic floor trauma (perineal laceration or episiotomy) on sexual functioning because of the high rate of episiotomies. Overall, obstetric algorithms currently in use should be refined to decrease further the risk of operative vaginal delivery. 
abstract_id: PUBMED:25548646 Association between sexual health and delivery mode. Introduction: Female sexual function changes considerably during pregnancy and the postpartum period. In addition, women's physical and mental health, endocrine secretion, and internal and external genitalia vary during these times. However, there are limited studies on the relationship between delivery and sexual function. Aim: The present study aimed to demonstrate the association between sexual function and delivery mode. Methods: Mothers who delivered a single baby at term were recruited for the study, and 435 mothers were analyzed. Main Outcome Measures: The Female Sexual Function Questionnaire (SFQ28) scores and mothers' backgrounds were assessed at 6 months after delivery. Results: The delivery mode affected the SFQ28 partner domain. Episiotomy affected the arousal (sensation) domain. Multiple regression analysis revealed that maternal age and cesarean section were significantly associated with several SHQ28 domains. Conclusion: This study suggests that routine episiotomies at delivery should be avoided to improve postpartum maternal sexual function. Maternal age and cesarean section were found to affect postpartum sexual health. abstract_id: PUBMED:32641223 Associations of Affect, Action Readiness, and Sexual Functioning. Introduction: Emotions are theorized to contain the components of affect and action readiness. Affect guides behavior by causing an approach or withdrawal orientation. Action readiness is the individual's degree of willingness to interact with the environment. Emotions contribute to changes in behavior and physiological responses. Aim: The present study applied these notions to sexuality and examined the associations between affect, action readiness, and sexual functioning. Methods: Participants were male patients with urologic condition (N = 70) with and without sexual problems. Main Outcome Measure: Affect and action readiness were jointly assessed using the latent factor of affective polarity of the Positive and Negative Affect Schedule. Trait affective polarity was assessed questioning generally experienced feelings. State affective polarity was assessed after exposure to an erotic stimulus and questioning momentaneously experienced feelings. Sexual functioning was assessed using the International Index of Erectile Functioning questionnaire. Results: A significant increase of approach-oriented action readiness was found after erotic stimulation, relative to trait levels. In addition, significant associations were found between state approach-oriented action readiness and various aspects of sexual functioning. Interventions based on principles of positive psychology might be developed to reinforce action readiness in men with erectile dysfunction. The strength of the current research concerns the introduction of action readiness as a potential psychological factor implied in sexual functioning. Limitations pertain to the use of the algorithm used to calculate state approach-oriented action readiness and the use of the current sample of patients with urological conditions, limiting generalizability of findings. Conclusion: Action readiness was found to correlate positively with all aspects of sexual functioning. Further research into the role of action readiness in sexuality is recommended. Henckens MJMJ, de Vries P, Janssen E, et al. Associations of Affect, Action Readiness, and Sexual Functioning. Sex Med 2020;8:691-698. 
abstract_id: PUBMED:33445643 The Impact of Intimate Partner Violence on Sexual Attitudes, Sexual Assertiveness, and Sexual Functioning in Men and Women. Background: Intimate Partner Violence (IPV) causes physical, sexual, or psychological harm. The association between psychosexual (sexual assertiveness, erotophilia, and attitude towards sexual fantasies) and sexual function (sexual desire, sexual excitation, erection, orgasm capacity, and sexual satisfaction), and the experience of physical and non-physical IPV was assessed. Methods: Data from 3394 (1766 women, 1628 men) heterosexual adults who completed the Spanish version of the Index of Spouse Abuse, scales measuring psychosexual and sexual function, and demographic characteristics were collected. Results: For men, poorer sexual health was associated with an experience of physical abuse (F = 4.41, p < 0.001) and non-physical abuse (F = 4.35, p < 0.001). For women, poorer sexual health was associated with physical abuse (F = 13.38, p < 0.001) and non-physical abuse (F = 7.83, p < 0.001). Conclusion: The experience of physical or non-physical abuse has a negative association with psychosexual and sexual functioning in both men and women. abstract_id: PUBMED:29706579 Does Endometriosis Affect Sexual Activity and Satisfaction of the Man Partner? A Comparison of Partners From Women Diagnosed With Endometriosis and Controls. Background: Endometriosis-associated pain and dyspareunia influence female sexuality, but little is known about men's experiences in affected couples. Aim: To investigate how men partners experience sexuality in partnership with women with endometriosis. Methods: A multi-center case-control study was performed between 2010 and 2015 in Switzerland, Germany, and Austria. 236 partners of endometriosis patients and 236 partners of age-matched control women without endometriosis with a similar ethnic background were asked to answer selected, relevant questions of the Brief Index of Sexual Functioning and the Global Sexual Functioning questionnaire, as well as some investigator-derived questions. Outcomes: We sought to evaluate sexual satisfaction of men partners of endometriosis patients, investigate differences in sexual activities between men partners of women with and without endometriosis, and identify options to improve partnership sexuality in couples affected by endometriosis. Results: Many partners of endometriosis patients reported changes in sexuality (75%). A majority of both groups was (very) satisfied with their sexual relationship (73.8% vs 58.1%, P = .002). Nevertheless, more partners of women diagnosed with endometriosis were not satisfied (P = .002) and their sexual problems more strongly interfered with relationship happiness (P = .001) than in partners of control women. Frequencies of sexual intercourse (P < .001) and all other partnered sexual activities (oral sex, petting) were significantly higher in the control group. The wish for an increased frequency of sexual activity (P = .387) and sexual desire (P = .919) did not differ statistically between both groups. Clinical Translation: There is a need to evaluate qualitative factors that influence sexual satisfaction in endometriosis patients. Conclusions: This is one of the first studies to investigate male sexuality affected by endometriosis. The meticulous verification of diagnosis and disease stage according to operation reports and histology allows for a high reliability of diagnosis.
Our men's response rate of almost 50% is higher compared to other studies. Recruiting men through their woman partner may have caused selection bias. The adjustment to the specific situation in endometriosis by selecting questions from the Brief Index of Sexual Functioning and Global Sexual Functioning and adding investigator-derived questions likely influenced the validity of the questionnaires. Despite the fact that both partners of endometriosis patients and of control women largely reported high sexual satisfaction, there are challenges for some couples that arise in the context of a sexual relationship when one partner has endometriosis. Challenges such as sexuality-related pain or a reduced frequency of sexual activities should be addressed by health care professionals to ameliorate any current difficulties and to prevent the development or aggravation of sexual dysfunction. Hämmerli S, Kohl Schwartz AS, Geraedts K, et al. Does Endometriosis Affect Sexual Activity and Satisfaction of the Man Partner? A Comparison of Partners From Women Diagnosed With Endometriosis and Controls. J Sex Med 2018;15:853-865. abstract_id: PUBMED:24580794 Let's talk about sex: lower limb amputation, sexual functioning and sexual well-being: a qualitative study of the partner's perspective. Aims And Objectives: To describe the impact of patients' lower limb amputations on their partners' sexual functioning and well-being. Background: Annually, about 3300 major lower limb amputations are performed in the Netherlands. An amputation may induce limitations in performing marital activities, including expression of sexual feelings between partners. However, up until now, little attention has been paid towards this aspect in both research and clinical practice. The lack of studies on sexual activities and lower limb amputation is even more apparent with respect to partners of patients with such an amputation. Previous studies have shown, however, that the presence of a disease or disability may have a large impact not only on the patient's but also on the partner's sexual activities. Design: Qualitative thematic analysis. Methods: Semi-structured interviews. The questions used in the interview were inspired by a generic framework about chronic disease and sexual functioning and well-being. In total, 16 partners of patients with a lower limb amputation who were at least 18 years old were recruited in different rehabilitation centres. Results: Seven major themes (i.e. importance of sexuality, thoughts about sexuality before the amputation, changes in sexual functioning and sexual well-being, amputation as the main cause of these changes, acceptance of the amputation, role confusion and communication about sexuality) were derived from the interviews. Minor changes in sexual functioning and sexual well-being were reported by the participants. Problems participants did encounter were solved by the couples themselves. For some participants, their sexual well-being improved after the amputation. Conclusion And Relevance To Clinical Practice: Participants in our study reported minor changes in their sexual well-being. Most of them indicated that communication about the changes expected and how to cope with these would have been helpful. It is therefore important that professionals address sexuality during the rehabilitation process with patients and partners. abstract_id: PUBMED:25856338 The relationship between mode of delivery and sexual health outcomes after childbirth. 
Introduction: Several factors are implicated in women's sexuality after childbirth. Nevertheless, there is conflicting evidence about the influence of mode of delivery (MD). Aim: To prospectively evaluate the relationship between MD and sexual health outcomes after childbirth. Methods: A prospective cohort study conducted between May 2005 and March 2007 included 831 pregnant women recruited from primary care clinics of the public sector in São Paulo, Brazil. The exposure variable was MD: uncomplicated vaginal delivery (spontaneous vaginal delivery without episiotomy or any kind of perineal laceration); complicated vaginal delivery (either forceps or normal, with episiotomy or any kind of perineal laceration); and cesarean delivery. Socio-demographic and obstetric data were obtained through a questionnaire applied during the antenatal and postnatal period. Crude and adjusted risk ratios, with 95% confidence intervals, were calculated using Poisson regression to examine the associations between MD and sexual health outcomes. Main Outcome Measures: The three main sexual health outcomes were later resumption of sexual life, self-perception of decline of sexual life (DSL), and presence of sexual desire. Results: One hundred and forty-one women (21.9%) resumed sexual life 3 or more months after delivery. Although 87.1% of women had desire, DSL occurred in 21.1% of the cohort. No associations were found between MD and sexual health outcomes. Conclusions: Women's sexuality after childbirth was not influenced by the type of delivery. Efforts to improve the treatment of sexual problems after childbirth should focus beyond MD. Answer: The mode of delivery does not appear to significantly affect the sexual functioning of the male partner. A study using the Golombock-Rust Inventory of Sexual Satisfaction (GRISS) to assess the quality of sexual relations and sexual functioning of men found that overall sexual function scores and the prevalence of sexual dysfunction in men were not significantly different among those whose partners had elective cesarean delivery, vaginal delivery with mediolateral episiotomy, or had not given birth (PUBMED:17451485). This suggests that concerns about sexual function should not be a deciding factor for elective cesarean sections, as elective cesarean does not provide additional benefits to men in this regard.
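As a rough illustration of the group comparisons summarised in this answer, the sketch below runs a one-way ANOVA on a continuous score across three partner groups and a chi-square test on dysfunction prevalence. The choice of tests and all values are assumptions for demonstration only, not the analyses actually reported in PUBMED:17451485.

```python
# Minimal sketch of three-group comparisons: one-way ANOVA on a continuous
# score and a chi-square test on dysfunction prevalence. Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cesarean = rng.normal(20.5, 8.2, 21)   # GRISS-like scores, elective cesarean group
vaginal  = rng.normal(19.3, 6.5, 36)   # vaginal delivery with episiotomy group
nullip   = rng.normal(18.8, 9.3, 50)   # partners had not given birth

f_stat, p_scores = stats.f_oneway(cesarean, vaginal, nullip)

# counts of [dysfunction, no dysfunction] per group (synthetic)
table = np.array([[6, 15],
                  [7, 29],
                  [15, 35]])
chi2, p_prev, dof, _ = stats.chi2_contingency(table)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_scores:.3f}")
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_prev:.3f}")
```

Non-significant p-values from both tests would correspond to the "no difference across delivery-mode groups" conclusion drawn above.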
Instruction: Is thyroid stunning clinically relevant? Abstracts: abstract_id: PUBMED:30599051 Thyroid stunning in radioiodine-131 therapy of benign thyroid diseases. Purpose: Existence and cause of thyroid stunning was controversially discussed for decades but the underlying mechanism remains unclear. Numerous studies describe thyroid stunning in radioiodine-131 therapy (RIT) of differentiated thyroid carcinoma. However, there are no studies evaluating thyroid stunning in benign thyroid diseases caused by the radioiodine uptake test (RIUT). Therefore, the influence of pre-therapeutic tracer radiation dose on therapeutic iodine-131 uptake was evaluated retrospectively. Methods: A total of 914 RIT patients were included. Exclusion criteria were anti-thyroid drugs, pre- and/or intra-therapeutic effective half-lives (EHL) beyond 8.04 days and externally performed RIUT or 24 h RIUT. All patients received RIUT 1 week before RIT. Thyroid volume was estimated via ultrasound. Tracer radiation dose to the thyroid was calculated retrospectively. The dependence of changes in the pre-therapeutic to the therapeutic extrapolated-maximum-131I-uptake (EMU) from the dose in RIUT was evaluated statistically. Results: EMU in RIUT ranged from 0.10 to 0.82 (median: 0.35) and EMU in RIT ranged from 0.10 to 0.74 (median: 0.33). Averaged over the whole cohort the therapeutic EMU decreased significantly (2.3% per Gray intra-thyroidal tracer radiation dose). A disease-specific evaluation showed dose-dependent thyroid stunning from 1.2% per Gray in solitary toxic nodules (n = 327) to 21% per Gray in goiters (n = 135) which was significant for the subgroups of disseminated autonomies (n = 114), multifocal autonomies (n = 178) and goiters (p &lt; 0.05) but not for Graves' diseases (n = 160) and solitary toxic nodules (p &gt; 0.05). Conclusions: The presented data indicate for the first time a significant dependence of pre-therapeutic radiation dose on thyroid stunning in goiter and disseminated and multifocal autonomy. To achieve the desired intra-thyroidal radiation dose, RIT activity should be adapted depending on the dose in RIUT. abstract_id: PUBMED:24863093 Is thyroid stunning clinically relevant? A retrospective analysis of 208 patients. Objective: Current guidelines have advised against the performance of (131)I-iodide diagnostic whole body scintigraphy (dxWBS) to minimize the occurrence of stunning, and to guarantee the efficiency of radioiodine therapy (RIT). The aim of the study was to evaluate the impact of stunning on the efficacy of RIT and disease outcome. Subjects And Methods: This retrospective analysis included 208 patients with differentiated thyroid cancer managed according to a same protocol and followed up for 12-159 months (mean 30 ± 69 months). Patients received RIT in doses ranging from 3,700 to 11,100 MBq (100 mCi to 300 mCi). Post-RIT-whole body scintigraphy images were performed 10 days after RIT in all patients. In addition, images were also performed 24-48 hours after therapy in 22 patients. Outcome was classified as no evidence of disease (NED), stable disease (SD) and progressive disease (PD). Results: Thyroid stunning occurred in 40 patients (19.2%), including 26 patients with NED and 14 patients with SD. A multivariate analysis showed no association between disease outcome and the occurrence of stunning (p = 0.3476). Conclusion: The efficacy of RIT and disease outcome do not seem to be related to thyroid stunning. 
abstract_id: PUBMED:32173798 Correction for hyperfunctioning radiation-induced stunning (CHRIS) in benign thyroid diseases. Purpose: Radioiodine-131 treatment has been a well-established therapy for benign thyroid diseases for more than 75 years. However, the physiological reasons of the so-called stunning phenomenon, defined as a reduced radioiodine uptake after previous diagnostic radioiodine administration, are still discussed controversially. In a recent study, a significant dependence of thyroid stunning on the pre-therapeutically administered radiation dose could be demonstrated in patients with goiter and multifocal autonomous nodules. A release of thyroid hormones to the blood due to radiation-induced destruction of thyroid follicles leading to a temporarily reduced cell metabolism was postulated as possible reason for this indication-specific stunning effect. Therefore, the aim of this study was to develop dose-dependent correction factors to account for stunning and thereby improve precision of radioiodine treatment in these indications. Methods: A retrospective analysis of 313 patients (135 with goiter and 178 with multifocal autonomous nodules), who underwent radioiodine uptake testing and radioiodine treatment, was performed. The previously determined indication-specific values for stunning of 8.2% per Gray in patients with multifocal autonomous nodules and 21% per Gray in patients with goiter were used to modify the Marinelli equation by the calculation of correction factors for hyperfunctioning radiation-induced stunning (CHRIS). Subsequently, the calculation of the required activity of radioiodine-131 to obtain an intra-therapeutic target dose of 150 Gy was re-evaluated in all patients. Furthermore, a calculation of the hypothetically received target dose by using the CHRIS-calculated values was performed and compared with the received target doses. Results: After integrating the previously obtained results for stunning, CHRIS-modified Marinelli equations could be developed for goiter and multifocal autonomous nodules. For patients with goiter, the mean value of administered doses calculated with CHRIS was 149 Gy and did not differ from the calculation with the conventional Marinelli equation of 152 Gy with statistical significance (p = 0.60). However, the statistical comparison revealed a highly significant improvement (p &lt; 0.000001) of the fluctuation range of the results received with CHRIS. Similar results were obtained in the subgroup of patients with multifocal autonomous nodules. The mean value of the administered dose calculated with the conventional Marinelli equation was 131 Gy and therefore significantly below the CHRIS-calculated radiation dose of 150 Gy (p &lt; 0.05). Again, the fluctuation range of the CHRIS-calculated radiation dose in the target volume was significantly improved compared with the conventional Marinelli equation (p &lt; 0.000001). Conclusions: With the presented CHRIS equation it is possible to calculate a required individual stunning-independent radioiodine activity for the first time by only using data from the radioiodine uptake testing. The results of this study deepen our understanding of thyroid stunning in benign thyroid diseases and improve precision of dosimetry in radioiodine-131 therapy of goiter and multifocal autonomous nodules. abstract_id: PUBMED:26581218 Statistical and radiobiological analysis of the so-called thyroid stunning. 
Background: The origin of the reduction in thyroid uptake after a low activity iodine scan, so-called stunning effect, is still controversial. Two explanations prevail: an individual cell stunning that reduces its capability to store iodine without altering its viability, and/or a significant cell-killing fraction that reduces the number of cells in the tissue still taking up iodine. Our aim is to analyze whether this last assumption could explain the observed reduction. Methods: The survival fraction after administration of a small radioiodine activity was computed by two independent methods: the application of the statistical theory underlying tissue control probability on recent clinical studies of thyroid remnant (131)I ablation and the use of the radiosensitivities reported in human thyroid cell assays for different radioiodine isotopes. Results: Both methods provided survival fractions in line with the uptake reduction observed after a low (131)I activity scan. The second method also predicts a similar behavior after a low (123)I or (124)I activity scan. Conclusions: This study shows that the cell-killing fraction is sufficient to explain the uptake reduction effect for (131)I and (123)I after a low activity scan and that even if some still living cells express a stunning effect just after irradiation (as shown in vitro), they will mostly die with time. As the β/α value is very low, this therapy fractionation should not impact the patient outcome in agreement with recent studies. However, in case of huge uptake heterogeneity, pre-therapy scan could specifically kills high-uptake cells and by the way could reduce the cross irradiation to the low-uptake cells during the therapy, resulting in a reduction of the ablation success rate. abstract_id: PUBMED:38066359 Isolation of Bacteriophages for Clinically Relevant Bacteria. The isolation of bacteriophages targeting most clinically relevant bacteria is reasonably straightforward as long as its targeted host does not have complex chemical, physical, and environmental requirements. Often, sewage, soil, feces, and different body fluids are used for bacteriophage isolation procedures, and following enrichment, it is common to obtain more than a single phage in a sample. This chapter describes a simple method for the enrichment and isolation of bacteriophages from liquid and solid samples that can be adapted for different clinically important aerobic bacteria. abstract_id: PUBMED:12804101 Thyroid stunning. Debates regarding thyroid stunning-a phenomenon whereby a diagnostic dose of radioiodine decreases uptake of a subsequent therapeutic dose by remnant thyroid tissue or by functioning metastases-have been fueled by inconsistent research findings. Quantitative studies evaluating radioiodine uptake and qualitative studies using visual observations both compare thyroid function on the diagnostic scan (DxSCAN) versus the posttreatment whole-body scan (RxWBS). The variability of findings may be the result of a lack of consensus in clinical nuclear medicine regarding many parameters of radioiodine usage including the need to obtain a pretreatment diagnostic scan, appropriate therapeutic dose, time between therapy dose administration and DxSCAN, and how successful ablation is measured. In the studies considered in this review, those that used (123)I rather than (131)I for DxSCAN, allowed less time to elapse between diagnostic and therapy dose, and more time between therapy dose and RxWBS (at least 1 week), did not observe stunning. 
However, groups that recognized stunning did not demonstrate any difference in outcomes (determined by successful first-time ablation). Whether stunning is a temporary phenomenon whereby stunned tissue eventually rejuvenates, or whether observed stunning actually constitutes "partial ablation," is yet to be delineated. abstract_id: PUBMED:29119429 Isolation of Bacteriophages for Clinically Relevant Bacteria. A number of bacteriophages deposited in different culture collections target clinically relevant bacterial hosts. In this chapter, we describe a method for isolating bacteriophage plaques for the most common bacteria involved in nosocomial infections. abstract_id: PUBMED:21272684 Thyroid stunning: fact or fiction? Stunning of thyroid tissue by diagnostic activities of (131)I has been described by some investigators and refuted by others. The support both for and against stunning has at times been enthusiastic and vigorous. We present the data from both sides of the debate in an attempt to highlight the strengths and deficiencies in the investigations cited. Clinical, animal, and in vitro studies are included. There are considerable differences in clinical practice, such as the administered activity for diagnostic whole-body scan, delay between diagnostic scan and treatment, time between treatment and posttherapy scanning, and timing of follow-up studies, that have to be analyzed with care. Other factors that often cannot be judged, such as levels of thyroid-stimulating hormone and serum iodine at time of diagnostic testing versus treatment could have an influence on stunning. Larger diagnostic doses and longer delays to therapy appear to increase the likelihood of stunning. The stunning effect of early-absorbed radiation from the therapy should also be considered. abstract_id: PUBMED:16000993 Thyroid stunning in vivo and in vitro. Aim: To review published in-vivo and in-vitro quantitative dosimetric studies on thyroid stunning in order to derive novel data applicable in clinical practice. Methods: A non-linear regression analysis was applied to describe the extent of thyroid stunning in thyroid remnants, as a function of the radiation absorbed dose of diagnostic radioiodine-131, in thyroid cancer patients investigated in four in-vivo studies. The regression curves were fitted using individual patient absorbed doses or the mean absorbed doses for the groups of patients. Fitted curves were compared with two recent models, the first found in patients with benign thyroid disease and the second found in cultured thyroid cells after (131)I irradiation. Results: The extrapolated absorbed doses for the onset of thyroid stunning were 0 Gy delivered to thyroid cells in vitro, and ≤4 Gy and 34 Gy delivered to thyroid cells in vivo (malignant and benign conditions, respectively). Thyroid stunning amounted to roughly 50% in the case of 2 Gy delivered to thyroid cells in vitro, and in the case of ≤30 Gy and 472 Gy delivered to thyroid cells in vivo (malignant and benign conditions, respectively). Conclusions: There is no scintigraphically sufficient diagnostic amount of (131)I that can be given prior to (131)I therapy for thyroid cancer that does not cause thyroid stunning, i.e. it is not recommended to deliver pre-therapeutically more than a few gray (<5 Gy) into thyroid remnants. More investigations are required to confirm the proposed in-vitro and benign in-vivo models, but characteristic absorbed doses presented so far for in-vitro vs. in-vivo malignant vs.
in-vivo benign thyroid environments differ roughly by an order of magnitude. abstract_id: PUBMED:36830455 Relevant Indicators of Consciousness after Waterbath Stunning in Turkeys and Stunning Efficiency in Commercial Conditions. The prevalence and inter-observer repeatability of the four most valid and feasible indicators of consciousness after waterbath stunning in turkeys were evaluated before bleeding (i.e., tonic convulsion, breathing, spontaneous blinking and vocalization) and during bleeding (i.e., fluttering, breathing, spontaneous swallowing and head shaking). Furthermore, correlations between them were calculated to further understand the association between such indicators. This study compared the assessments of four observers on 7877 turkeys from 28 flocks slaughtered in eight different abattoirs. Repeatability between observers was assessed by combining the crude percentage of agreement and the Fleiss' kappa. Before bleeding, tonic seizure was observed to be not reliable under commercial conditions and spontaneous blinking and vocalization had poor repeatability. During bleeding, spontaneous swallowing was not observed and head shaking had poor repeatability. Hence, the most relevant indicators of consciousness before bleeding is breathing while during bleeding is breathing and wing flapping. Moreover, the stunning efficiency of the key electrical parameters used in the different slaughterhouses was assessed. Therefore, a refined list of indicators of the state of consciousness after waterbath stunning is proposed to be used in commercial turkey slaughterhouses to ensure consistency of controls. On the other hand, the range of the prevalence of turkeys with indicators of consciousness within a batch found before bleeding [0-16%] and after bleeding [0-18%] highlights the importance of controls. Answer: Thyroid stunning is a phenomenon that has been debated in the context of radioiodine therapy for thyroid conditions. The clinical relevance of thyroid stunning is not entirely clear, as studies have shown mixed results. One study found that thyroid stunning occurred in 19.2% of patients, but a multivariate analysis showed no association between disease outcome and the occurrence of stunning (PUBMED:24863093). This suggests that the efficacy of radioiodine therapy (RIT) and disease outcome may not be significantly affected by thyroid stunning. Another study indicated that thyroid stunning is significant in benign thyroid diseases such as goiter and multifocal autonomy, with a dependence on the pre-therapeutic radiation dose (PUBMED:30599051). This study suggests that to achieve the desired intra-thyroidal radiation dose, RIT activity should be adapted depending on the dose in the radioiodine uptake test (RIUT). A further study developed correction factors for hyperfunctioning radiation-induced stunning (CHRIS) to improve the precision of radioiodine treatment in patients with goiter and multifocal autonomous nodules (PUBMED:32173798). This indicates that while stunning may be clinically relevant, it can be accounted for in treatment planning. An analysis of the so-called thyroid stunning concluded that the cell-killing fraction is sufficient to explain the uptake reduction effect after a low activity scan, and that therapy fractionation should not impact patient outcome (PUBMED:26581218). This suggests that while stunning may occur, it may not be clinically significant in terms of patient outcomes. 
In summary, while thyroid stunning is a recognized phenomenon, its clinical relevance appears to be variable and may depend on the specific context of the disease and treatment protocol. Some studies suggest that stunning does not significantly impact the efficacy of RIT or disease outcomes, while others highlight the need to adjust treatment based on the occurrence of stunning to achieve optimal therapeutic results.
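The dosimetry reasoning above can be illustrated with a Marinelli-type activity calculation plus a naive linear stunning correction. This is not the published CHRIS equation: the constant 24.67 is only a commonly quoted factor for 131I (mass in grams, target dose in Gy, uptake in percent, effective half-life in days), and the linear "uptake loss per Gray" term and the 8% figure are assumptions that loosely mirror the per-Gray reductions reported in the abstracts.

```python
# Illustrative sketch (not the published CHRIS formula): Marinelli-type
# activity planning for 131I therapy with an assumed linear stunning term.

def marinelli_activity_mbq(target_dose_gy, mass_g, uptake_pct, t_eff_days, k=24.67):
    """Required 131I activity (MBq) to deliver target_dose_gy to the thyroid.
    k = 24.67 is a commonly quoted constant for these units (assumption)."""
    return k * target_dose_gy * mass_g / (uptake_pct * t_eff_days)

def stunning_corrected_uptake(uptake_pct, pretest_dose_gy, loss_per_gy):
    """Uptake expected at therapy after a pre-therapeutic test dose,
    assuming a linear fractional uptake reduction per Gray (assumption)."""
    return uptake_pct * max(0.0, 1.0 - loss_per_gy * pretest_dose_gy)

# Hypothetical example: 60 g goiter, 150 Gy target dose, 35% peak uptake,
# 6-day effective half-life, 2 Gy delivered by the uptake test, and an
# assumed 8% uptake loss per Gray (close to the figure cited for
# multifocal autonomy in the abstracts).
uncorrected = marinelli_activity_mbq(150, 60, 35.0, 6.0)
corrected = marinelli_activity_mbq(150, 60, stunning_corrected_uptake(35.0, 2.0, 0.08), 6.0)

print(f"activity without stunning correction: {uncorrected:.0f} MBq")
print(f"activity with assumed stunning correction: {corrected:.0f} MBq")
```

The point of such a correction is simply that if the test dose lowers the uptake expected at therapy, a somewhat higher administered activity is needed to keep the delivered thyroid dose on target.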
Instruction: Are the bipolar disorders best modelled categorically or dimensionally? Abstracts: abstract_id: PUBMED:27028495 Are the bipolar disorders best modelled categorically or dimensionally? Objective: Considerable debate exists as to whether the bipolar disorders are best classified according to a categorical or dimensional model. This study explored whether there is evidence for a single or multiple subpopulations and the degree to which differing diagnostic criteria correspond to bipolar subpopulations. Method: A mixture analysis was performed on 1081 clinically diagnosed (and a reduced sample of 497 DSM-IV diagnosed) bipolar I and II disorder patients, using scores on hypomanic severity (as measured by the Mood Swings Questionnaire). Mixture analyses were conducted using two differing diagnostic criteria and two DSM markers to ascertain the most differentiating and their associated clinical features. Results: The two subpopulation solution was most supported although the entropy statistic indicated limited separation and there was no distinctive point of rarity. Quantification by the odds ratio statistic indicated that the clinical diagnosis (respecting DSM-IV criteria, but ignoring 'high' duration) was somewhat superior to DSM-IV diagnosis in allocating patients to the putative mixture analysis groups. The most differentiating correlate was the presence or absence of psychotic features. Conclusion: Findings favour the categorical distinction of bipolar I and II disorders and argue for the centrality of the presence or absence of psychotic features to subgroup differentiation. abstract_id: PUBMED:32829199 The bipolar disorders: A case for their categorically distinct status based on symptom profiles. Background: It is unclear whether the bipolar disorders (i.e. BP-I/BP-II) differ dimensionally or categorically. This study sought to clarify this issue. Methods: We recruited 165 patients, of which 69 and 96 had clinician-assigned diagnoses of BP-I and BP-II respectively. Their psychiatrists completed a data sheet seeking information on clinical variables about each patient, while the patients completed a different data sheet and scored a questionnaire assessing the prevalence and severity of 96 candidate manic/hypomanic symptoms. Results: We conducted a series of analyses examining a set (and two sub-sets) of fifteen symptoms that were significantly more likely to be reported by the clinically diagnosed BP-I patients. Latent class analyses favoured two-class solutions, while mixture analyses demonstrated bimodality, thus arguing for a BP-I/BP-II categorical distinction. Statistically defined BP-I class members were more likely when manic to have experienced psychotic features and over-valued ideas. They were also more likely to have been hospitalised, and to have been younger when they received their bipolar diagnosis and first experienced a depressive or manic episode. Limitations: The lack of agreement between some patients and managing clinicians in judging the presence of psychotic features could have compromised some analyses. It is also unclear whether some symptoms (e.g. grandiosity, noting mystical events) were capturing formal psychotic features or not. Conclusions: Findings replicate our earlier study in providing evidence to support the modelling of BP-I and BP-II as categorically discrete conditions. This should advance research into aetiological factors and determining optimal (presumably differing) treatments for the two conditions. 
abstract_id: PUBMED:22030135 Does testing for bimodality clarify whether the bipolar disorders are categorically or dimensionally different to unipolar depressive disorders? Background: It has been held that if bipolar disorder is categorically distinct, it should differentiate from unipolar depressive disorders by showing bimodality or a 'zone of rarity' in bipolar symptom scores. Two previous studies have failed to demonstrate bimodality. We undertook a third study. Methods: A total of 1106 patients attending the Black Dog Institute Depression Clinic completed the Mood Disorders Questionnaire (MDQ), in addition to undergoing clinical assessment by an Institute psychiatrist. Results: The distributions of scores for the total number of hypomanic symptoms endorsed by unipolar and bipolar patients were both skewed, with the bipolar group endorsing a high number of hypomanic symptoms and the unipolar group endorsing few symptoms--and so giving the impression of an 'even' distribution generated by two quite distinctly differing sub-groups. However, formal statistical analyses involving mixed modelling provided no clear evidence that a bimodal distribution provided a better fit to the data than a unimodal one. Conclusions: Failure to statistically demonstrate a 'point of rarity' did not marry with visual inspection of the plotted data--which clearly suggested two groups putatively capturing those with bipolar and unipolar disorders respectively. The paper considers some limitations to the emphasis on 'bimodality' in differentiating potentially differing conditions. abstract_id: PUBMED:14690770 Psychotic bipolar disorders: dimensionally similar to or categorically different from schizophrenia? For over a century, clinicians have struggled with how to conceptualize the primary psychoses, which include psychotic mood disorders and schizophrenia. Indeed, the nature of the relationship between mood disorders and schizophrenia is an area of ongoing controversy. Psychotic bipolar disorders have characteristics such as phenomenology, biology, therapeutic response, and brain imaging findings, suggesting both commonalities with and dissociations from schizophrenia. Taken together, these characteristics are in some instances most consistent with a dimensional view, with psychotic bipolar disorders being intermediate between non-psychotic bipolar disorders and schizophrenia spectrum disorders. However, in other instances, a categorical approach appears useful. Although more research is clearly necessary to address the dimensional versus categorical controversy, it is feasible that at least in the interim, a mixed dimensional/categorical approach could provide additional insights into pathophysiology and management options, which would not be available utilizing only one of these models. abstract_id: PUBMED:18777228 How should mood disorders be modelled? Classification of any mental disorder is likely to have clinical utility only if it is based on a valid underlying model. The depressive disorders have long provoked debates as to whether a categorical or a dimensional model is all explanatory. This paper will argue that no single (categorical or dimensional) model is likely to be valid, and that a mix of models is required to classify, diagnose and shape management decisions for the mood disorders. After reviewing limitations to the dimensionally based official classificatory systems (DSM-IV and ICD-10), and noting some of the consequences, a set of alternative strategies is outlined. 
In essence, identifying syndromal 'fuzzy sets' from phenotypic and aetiological clustering, a model that occurs in the rest of medicine. abstract_id: PUBMED:17764909 Reviewing the diagnostic validity and utility of mixed depression (depressive mixed states). Objective: To review the diagnostic validity and utility of mixed depression, i.e. co-occurrence of depression and manic/hypomanic symptoms. Methods: PubMed search of all English-language papers published between January 1966 and December 2006 using and cross-listing key words: bipolar disorder, mixed states, criteria, utility, validation, gender, temperament, depression-mixed states, mixed depression, depressive mixed state/s, dysphoric hypomania, mixed hypomania, mixed/dysphoric mania, agitated depression, anxiety disorders, neuroimaging, pathophysiology, and genetics. A manual review of paper reference lists was also conducted. Results: By classic diagnostic validators, the diagnostic validity of categorically-defined mixed depression (i.e. at least 2-3 manic/hypomanic symptoms) is mainly supported by family history (the current strongest diagnostic validator). Its diagnostic utility is supported by treatment response (negative effects of antidepressants). A dimensionally-defined mixed depression is instead supported by a non-bi-modal distribution of its intradepression manic/hypomanic symptoms. Discussion: Categorically-defined mixed depression may have some diagnostic validity (family history is the current strongest validator). Its diagnostic utility seems supported by treatment response. abstract_id: PUBMED:30378461 Revising Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, criteria for the bipolar disorders: Phase I of the AREDOC project. Objective: To derive new criteria sets for defining manic and hypomanic episodes (and thus for defining the bipolar I and II disorders), an international Task Force was assembled and termed AREDOC reflecting its role of Assessment, Revision and Evaluation of DSM and other Operational Criteria. This paper reports on the first phase of its deliberations and interim criteria recommendations. Method: The first stage of the process consisted of reviewing Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, and recent International Classification of Diseases criteria, identifying their limitations and generating modified criteria sets for further in-depth consideration. Task Force members responded to recommendations for modifying criteria and from these the most problematic issues were identified. Results: Principal issues focussed on by Task Force members were how best to differentiate mania and hypomania, how to judge 'impairment' (both in and of itself and allowing that functioning may sometimes improve during hypomanic episodes) and concern that rejecting some criteria (e.g. an imposed duration period) might risk false-positive diagnoses of the bipolar disorders. Conclusion: This first-stage report summarises the clinical opinions of international experts in the diagnosis and management of the bipolar disorders, allowing readers to contemplate diagnostic parameters that may influence their clinical decisions. The findings meaningfully inform subsequent Task Force stages (involving a further commentary stage followed by an empirical study) that are expected to generate improved symptom criteria for diagnosing the bipolar I and II disorders with greater precision and to clarify whether they differ dimensionally or categorically. 
abstract_id: PUBMED:33486278 Categorical differentiation of the unipolar and bipolar disorders. There has been a longstanding debate as to whether the bipolar disorders differ categorically or dimensionally, with some dimensional or spectrum models including unipolar depressive disorders within a bipolar spectrum model. We analysed manic/hypomanic symptom data in samples of clinically diagnosed bipolar I, bipolar II and unipolar patients, employing latent class analyses to determine if separate classes could be identified. Mixture analyses were also undertaken to determine if a unimodal, bimodal or a trimodal pattern was present. For both a refined 15-item set and an extended 30-item set of manic/hypomanic symptoms, our latent class analyses favoured three-class solutions, while mixture analyses identified trimodal distributions of scores. Findings argue for a categorical distinction between unipolar and bipolar disorders, as well as between bipolar I and bipolar II disorders. Future research should aim to consolidate these results in larger samples, particularly given that the size of the unipolar group in this study was a salient limitation. abstract_id: PUBMED:28088728 The association between insomnia-related sleep disruptions and cognitive dysfunction during the inter-episode phase of bipolar disorder. Sleep disturbance and cognitive dysfunction are two domains of impairment during inter-episode bipolar disorder. Despite evidence demonstrating the importance of sleep for cognition in healthy and sleep-disordered samples, this link has been minimally examined in bipolar disorder. The present study tested the association between insomnia-related sleep disruptions and cognitive dysfunction during inter-episode bipolar disorder. Forty-seven participants with bipolar disorder and a comorbid insomnia diagnosis (BD-Insomnia) and 19 participants with bipolar disorder without sleep disturbance in the last six months (BD-Control) participated in the study. Two domains of cognition were assessed: working memory and verbal learning. Insomnia-related sleep disruptions were assessed both categorically (i.e., insomnia diagnosis) and dimensionally (i.e., total wake time, total sleep time, total wake time variability, and total sleep time variability). Hierarchical linear regressions, adjusting for participant age, demonstrated that insomnia diagnosis did not have an independent or interactive effect on cognition. However, regardless of insomnia diagnosis, greater total sleep time variability predicted poorer working memory and verbal learning performance. Further, following sleep treatment, a reduction in total wake time predicted improved working memory performance and a reduction in total sleep time variability predicted improved verbal learning performance. These findings raise the possibility that sleep disturbance may contribute to cognitive dysfunction in bipolar disorder and highlight the importance of treating sleep disturbance in bipolar disorder. abstract_id: PUBMED:16488021 The DSM-IV and ICD-10 categories of recurrent [major] depressive and bipolar II disorders: evidence that they lie on a dimensional spectrum. Background: Presently it is a hotly debated issue whether unipolar and bipolar disorders are categorically distinct or lie on a spectrum. We used the ongoing Ravenna-San Diego Collaboration database to examine this question with respect to major depressive disorder (MDD) and bipolar II (BP-II). 
Methods: The study population in FB's Italian private practice setting comprised consecutive 650 outpatients presenting with major depressive episode (MDE) and ascertained by a modified version of the Structured Clinical Interview for DSM-IV. Differential assignment of patients into MDD versus BP-II was made on the basis of discrete hypomanic episodes outside the timeframe of an MDE. In addition, hypomanic signs and symptoms during MDE (intra-MDE hypomania) were systematically assessed and graded by the Hypomania Interview Guide (HIG). The frequency distributions of the HIG total scores in each of the MDD, BP-II and the combined entire sample were plotted using the kernel density estimate. Finally, bipolar family history (BFH) was investigated by structured interview (the Family History Screen). Results: There were 261 MDD and 389 BP-II. As in the previous smaller samples, categorically defined BP-II compared with MDD had significantly earlier age at onset, higher rates of familial bipolarity (mostly BP-II), history of MDE recurrences (&gt;or=5), and atypical features. However, examining hypomania scores dimensionally, whether we examined the MDD, BP-II, or the combined sample, kernel density estimate distribution of these scores had a normal-like shape (i.e., no bimodality). Also, in the combined sample of MDE, we found a dose-response relationship between BFH loading and intra-MDE hypomania measured by HIG scores. Limitations: Although the interviewer (FB) could not be blind to the diagnostic status of his private patients, the systematic rigorous interview process in a very large clinical population minimized any unintended biases. Conclusions: Unlike previous studies that have examined the number of DSM-IV hypomanic signs and symptoms both outside and during MDE, the present analyses relied on the more precise hypomania scores as measured by the HIG. The finding of a dose-response relationship between BFH and HIG scores in the sample at large strongly suggests a continuity between BP-II and MDD. Our data indicate that even in those clinically depressed patients without past hypomanic episodes (so-called "unipolar" MDD), such scores are normally rather than bimodally distributed during MDE. Moreover, the absence of a 'zone of rarity' in the distribution of hypomanic scores in the combined total, MDD and BP-II MDE samples, indicates that MDD and BP-II exist on a dimensional spectrum. From a nosologic perspective, our data are contrary to what one would expect from a categorical unipolar-bipolar distinction. In practical terms, intra-MDE hypomania and BFH, especially in recurrent MDD, represent strong indicators of bipolarity. Answer: The debate over whether bipolar disorders are best modeled categorically or dimensionally is ongoing, with evidence supporting both perspectives. Some studies have found support for a categorical distinction between bipolar I and II disorders, arguing for the centrality of the presence or absence of psychotic features to subgroup differentiation (PUBMED:27028495). Further research replicates these findings, providing evidence to support the modeling of BP-I and BP-II as categorically discrete conditions, with statistically defined BP-I class members more likely to have experienced psychotic features and other distinguishing clinical features (PUBMED:32829199). On the other hand, there are studies that suggest a dimensional approach may be appropriate. 
For instance, psychotic bipolar disorders have characteristics suggesting both commonalities with and dissociations from schizophrenia, which could be consistent with a dimensional view, placing psychotic bipolar disorders intermediate between non-psychotic bipolar disorders and schizophrenia spectrum disorders (PUBMED:14690770). Additionally, a study on mixed depression supports a dimensionally-defined mixed depression, as it is supported by a non-bi-modal distribution of its intradepression manic/hypomanic symptoms (PUBMED:17764909). Moreover, some studies have failed to demonstrate a clear 'point of rarity' or bimodality in bipolar symptom scores, which would be expected if bipolar disorder were categorically distinct from unipolar depressive disorders (PUBMED:22030135). This suggests that the distinction may not be as clear-cut as a categorical model would imply. A mixed dimensional/categorical approach has also been proposed, which could provide additional insights into pathophysiology and management options that would not be available utilizing only one of these models (PUBMED:18777228). This is echoed by the AREDOC project, which aims to derive new criteria sets for defining manic and hypomanic episodes and thus for defining the bipolar I and II disorders, potentially clarifying whether they differ dimensionally or categorically (PUBMED:30378461). Lastly, a study analyzing manic/hypomanic symptom data in samples of clinically diagnosed bipolar I, bipolar II, and unipolar patients found evidence for a categorical distinction between unipolar and bipolar disorders, as well as between bipolar I and bipolar II disorders (PUBMED:33486278).
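A minimal sketch of the mixture-analysis logic invoked throughout these abstracts: fit Gaussian mixtures with one, two and three components to symptom scores and compare BIC, a clearly lower BIC for multi-component fits being read as evidence against a single continuum. The data below are synthetic and the Gaussian assumption is a simplification of the published methods.

```python
# Sketch of a uni- vs multimodality check on hypomanic symptom scores
# using Gaussian mixture models and BIC. Synthetic data only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Two synthetic subpopulations of symptom scores (e.g., BP-II-like and BP-I-like)
scores = np.concatenate([rng.normal(8, 2.5, 300),
                         rng.normal(16, 3.0, 200)]).reshape(-1, 1)

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(scores)
    print(f"{k} component(s): BIC = {gm.bic(scores):.1f}")
```

Note that, as the abstracts themselves caution, absence of a statistically demonstrable second mode (a "zone of rarity") does not by itself prove a single dimensional population, particularly with skewed or overlapping score distributions.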
Instruction: Is lobectomy really more effective than sublobar resection in the surgical treatment of second primary lung cancer? Abstracts: abstract_id: PUBMED:23657547 Is lobectomy really more effective than sublobar resection in the surgical treatment of second primary lung cancer? Objectives: Sublobar resection for early-stage lung cancer is still a controversial issue. We sought to compare sublobar resection (segmentectomy or wedge resection) with lobectomy in the treatment of patients with a second primary lung cancer. Methods: From January 1995 to December 2010, 121 patients with second primary lung cancer, classified by the criteria proposed by Martini and Melamed, were treated at our Institution. We had 23 patients with a synchronous tumour and 98 with metachronous. As second treatment, we performed 61 lobectomies (17 of these were completion pneumonectomies), 38 atypical resections and 22 segmentectomies. Histology was adenocarcinoma in 49, squamous in 38, bronchoalveolar carcinomas in 14, adenosquamous in 8, large cells in 2, anaplastic in 5 and other histologies in 5. Results: Overall 5-year survival from second surgery was 42%; overall operative mortality was 2.5% (3 patients), while morbidity was 19% (22 patients). Morbidity was comparable between the lobectomy group, sublobar resection and completion pneumonectomies (12.8, 27.7 and 30.8%, respectively, P = 0.21). Regarding the type of surgery, the lobectomy group showed a better 5-year survival than sublobar resection (57.5 and 36%, respectively, P = 0.016). Compared with lobectomies, completion pneumonectomies showed a significantly less-favourable survival (57.5 and 20%, respectively, P = 0.001). Conclusions: From our experience, lobectomy should still be considered as the treatment of choice in the management of second primary lung cancer, but sublobar resection remains a valid option in high-risk patients with limited pulmonary function. Completion pneumonectomy was a negative prognostic factor in long-term survival. abstract_id: PUBMED:37088839 Sublobar resection versus lobectomy in the treatment of synchronous multiple primary lung cancer. Objective: Although synchronous multiple primary lung cancers (sMPLCs) are common in clinical practice, the choice of surgical modalities for the main lesion is still at the stage of exploration. This study is designed to analyze the prognosis of sMPLCs and single primary lung cancers with similar tumor stages and to explore whether sublobar resection has a similar prognosis as lobectomy for sMPLCs. Methods: One-hundred forty-one cases of sMPLCs were selected, including the following: 65 cases underwent lobectomy for main lesions, and 76 cases underwent sublobar resection for main lesions. One thousand one hundred forty-four cases of single primary lung cancer were matched at 1:1 by propensity score matching. Then, the patients with sMPLCs were divided into a lobectomy group and a sublobar group according to the first tumor stage. Ninety-eight cases of patients with sMPLCs were matched. The short-term perioperative effect, 5-year disease-free survival (DFS) rate, and 5-year overall survival (OS) rate between the two groups were compared. Results: There was no significant difference in OS between sMPLCs and single primary lung cancer after lobectomy (77.1% vs. 77.2%, P = 0.157) and sublobar resection (98.7% vs. 90.7%, P = 0.309). There was no significant difference in OS (86.7% vs. 83.9%, P = 0.482) or DFS (67.6 vs. 
87.7%, P = 0.324) between the lobectomy group and sublobar group with sMPLCs. The sublobar resection group obtained a lower incidence of postoperative complications (40.8% vs. 16.3%, P = 0.007) and shorter postoperative hospital stay (11.22 vs. 9.27, P = 0.049). Conclusion: The prognosis of patients with sMPLCs generally depends on the main tumor state, which has no statistical difference regardless of sublobar resection or lobectomy, and the perioperative period of sublobar resection is safer than that of lobectomy. abstract_id: PUBMED:34273141 Comparison of perioperative and survival outcomes between sublobar resection and lobectomy of patients who underwent a second pulmonary resection. Background: Repeat pulmonary resection is widely accepted in clinical practice. This study aimed to compare sublobar resection (segmentectomy or wedge resection) with lobectomy in the treatment of patients who underwent a second pulmonary resection. Methods: This study retrospectively included patients who underwent lobectomy or sublobar resection for second pulmonary resection. 1:1 propensity score matching (PSM) was performed to balance selection bias. Clinicopathological features, perioperative and survival outcomes of lobectomy and sublobar resection were compared. Results: A total of 308 patients who underwent second pulmonary resection were identified: 71 (23.1%) who underwent lobectomy and 237 (76.9%) who underwent sublobar resection. After PSM, 58 patients for each group were selected with well-balanced clinicopathological characteristics. In patients who underwent sublobar resection, significantly shorter chest tube duration (days) (median, 4 vs. 2, p < 0.001) and postoperative hospital stay (days) (median, 6 vs. 4, p < 0.001) were observed. There was no significant difference in overall survival between these two groups after the second and first surgery (p = 0.65, p = 0.98), respectively. Subgroup analysis according to the type of the first resection showed consistent results. Conclusions: Sublobar resection may be considered as an alternative option for second pulmonary resection due to its perioperative advantages and similar survival outcomes compared with lobectomy. abstract_id: PUBMED:34281703 Sublobar resection is comparable to lobectomy for screen-detected lung cancer. Objective: Sublobar resection is frequently offered to patients with small, peripheral lung cancers, despite the lack of outcome data from ongoing randomized clinical trials. Sublobar resection may be a particularly attractive surgical strategy for screen-detected lung cancers, which have been suggested to be less biologically aggressive than cancers detected by other means. Using prospective data collected from patients undergoing surgery in the National Lung Screening Trial, we sought to determine whether extent of resection affected survival for patients with screen-detected lung cancer. Methods: The National Lung Screening Trial database was queried for patients who underwent surgical resection for confirmed lung cancer. Propensity score matching analysis (lobectomy vs sublobar resection) was done (nearest neighbor, 1:1, matching with no replacement, caliper 0.2). Demographics, clinicopathologic and perioperative outcomes, and long-term survival were compared in the entire cohort and in the propensity-matched groups. Multivariable logistic regression analysis was done to identify factors associated with increased postoperative morbidity or mortality.
Results: We identified 1029 patients who underwent resection for lung cancer in the National Lung Screening Trial, including 821 patients (80%) who had lobectomy and 166 patients (16%) who had sublobar resection, predominantly wedge resection (n = 114, 69% of sublobar resection). Patients who underwent sublobar resection were more likely to be female (53% vs 41%, P = .004) and had smaller tumors (1.5 cm vs 2 cm, P < .001). The sublobar resection group had fewer postoperative complications (22% vs 32%, P = .010) and fewer cardiac complications (4% vs 9%, P = .033). For stage I patients undergoing sublobar resection, there was no difference in 5-year overall survival (77% for both groups, P = .89) or cancer-specific survival (83% for both groups, P = .96) compared with patients undergoing lobectomy. On multivariable logistic regression analysis, sublobar resection was the only factor associated with lower postoperative morbidity/mortality (odds ratio, 0.63; 95% confidence interval, 0.40-0.98). To compare surgical strategies in balanced patient populations, we propensity matched 127 patients from each group undergoing sublobar resection and lobectomy. There were no differences in demographics or clinical and tumor characteristics among matched groups. There was again no difference in 5-year overall survival (71% vs 65%, P = .40) or cancer-specific survival (75% vs 73%, P = .89) for patients undergoing lobectomy and sublobar resection, respectively. Conclusions: For patients with screen-detected lung cancer, sublobar resection confers survival similar to lobectomy. By decreasing perioperative complications and potentially preserving lung function, sublobar resection may provide distinct advantages in a screened patient cohort. abstract_id: PUBMED:24966794 Early lung cancer in the elderly: sublobar resection provides equivalent long-term survival in comparison with lobectomy. Aim Of The Study: The worldwide population shift towards older ages will inevitably lead to more elderly patients being diagnosed with non-small cell lung cancer (NSCLC). It still remains controversial whether sublobar resection is effective in such cases at an early stage. To answer this question, we need to understand the clinical characteristics of these tumors. Material And Methods: From 2004 to 2010, a total of 167 patients with stage I non-small cell lung cancer (NSCLC) of age ≥ 70 years underwent complete resection in our institution. The clinical data were retrospectively analyzed as regards gender, stage of disease, histology, smoking status, smoking amount, drinking status, surgical approaches and overall survival. Survival was analyzed by the Kaplan-Meier method and log-rank test. Results: The overall 5-year survival rate was 62.4%. There were 122 (73.1%) patients who underwent standard lobectomy resection and 45 (26.9%) patients underwent sublobar resection. Patients with different surgical approaches (lobectomy and sublobar resection) had nearly the same 5-year survival rate (60.9% vs. 63.4%, p = 0.558). Gender (p = 0.023), smoking status (p = 0.045) and smoking amount (p = 0.007) significantly influenced the prognosis. Conclusions: In elderly stage I NSCLC patients, sublobar resection is considered to be an appropriate treatment in comparison with lobectomy, as this procedure provides equivalent long-term survival. abstract_id: PUBMED:26094172 Cost-effectiveness of stereotactic radiation, sublobar resection, and lobectomy for early non-small cell lung cancers in older adults.
Objectives: Stereotactic ablative radiation (SABR) is a promising alternative to lobectomy or sublobar resection for early lung cancer, but the value of SABR in comparison to surgical therapy remains debated. We examined the cost-effectiveness of SABR relative to surgery using SEER-Medicare data. Materials And Methods: Patients age ≥66 years with localized (<5 cm) non-small cell lung cancers diagnosed from 2003-2009 were selected. Propensity score matching generated cohorts comparing SABR with either sublobar resection or lobectomy. Costs were determined via claims. Median survival was calculated using the Kaplan-Meier method. Incremental cost-effectiveness ratios (ICERs) were calculated and cost-effectiveness acceptability curves (CEACs) were constructed from joint distribution of incremental costs and effects estimated by non-parametric bootstrap. Results: In comparing SABR to sublobar resection, 5-year total costs were $55,120 with SABR vs. $77,964 with sublobar resection (P<0.001) and median survival was 3.6 years with SABR vs. 4.1 years with sublobar resection (P=0.95). The ICER for sublobar resection compared to SABR was $45,683/life-year gained, yielding a 46% probability that sublobar resection is cost-effective. In comparing SABR to lobectomy, 5-year total costs were $54,968 with SABR vs. $82,641 with lobectomy (P<0.001) and median survival was 3.8 years with SABR vs. 4.7 years with lobectomy (P=0.81). The ICER for lobectomy compared to SABR was $28,645/life-year gained, yielding a 78% probability that lobectomy is cost-effective. Conclusion: SABR is less costly than surgery. While lobectomy may be cost-effective compared to SABR, sublobar resection is less likely to be cost-effective. Assessment of the relative value of SABR versus surgical therapy requires further research. abstract_id: PUBMED:33061635 Postoperative Short-term Outcomes Between Sublobar Resection and Lobectomy in Patients with Lung Adenocarcinoma. Background: To investigate postoperative temporary consequences of the enrolled patients with lung adenocarcinoma. Patients And Methods: We analyzed the clinical data of patients with lung adenocarcinoma admitted by the same surgical team of Peking Union Medical College Hospital (PUMCH) from July 2019 to December 2019. Statistical methods including propensity score matching (PSM) analysis was used to analyze the differences among them. Results: A total of 108 patients were enrolled, including 50 patients with sublobar resection and 58 patients with lobectomy. Before PSM, there were statistically significant differences in age (p=0.015), hospitalization costs (p=0.042), lymphadenectomy (p=0.000), pathological staging (p=0.000), number of lymph nodes removed (p=0.000), number of positive lymph nodes (p=0.034), chest drainage duration (p=0.000), total chest drainage (p=0.000), length of postoperative hospital stays (p=0.000), postoperative D-dimer level (p=0.030) and perioperative lymphocyte margin (LM) (p=0.003) between sublobar resection and lobectomy. After PSM, there were statistical differences in number of lymph nodes removed (p=0.000), chest drainage duration (p=0.031) and total chest drainage (p=0.002) between sublobar resection and lobectomy.
Whether with PSM analysis or not, there were no significant differences in other blood test results, such as inflammation indicators, postoperative neutrophil-lymphocyte ratio (NLR), albumin level, perioperative activity of daily living (ADL) scale scoring margin, complications, postoperative admission to intensive care unit (ICU) and readmission within 30 days. NLR was associated with total chest drainage (p=0.000), length of postoperative hospital stays (p=0.000), postoperative D-dimer level (p=0.050) and ADL scale scoring margin (p=0.003) between sublobar resection and lobectomy. Conclusion: Sublobar resection, including wedge resection and segmentectomy, was as safe and feasible as lobectomy in our study, and they shared similar short-term outcomes. Postoperative NLR could be used to detect the clinical outcomes of patients. Secondary resectability of pulmonary function (SRPF) should be the main purpose of sublobar resection. abstract_id: PUBMED:36063054 Lobar versus sublobar resection in clinical stage IA primary lung cancer with occult N2 disease. Objectives: Sublobar resection is increasingly being utilized for early-stage lung cancers, but optimal management when final pathology shows unsuspected mediastinal nodal disease is unclear. This study tested the hypothesis that lobectomy has improved survival compared to sublobar resection for clinical stage IA tumours with occult N2 disease. Methods: The use of sublobar resection and lobectomy for patients in the National Cancer Database who underwent primary surgical resection for clinical stage IA non-small-cell lung cancer with pathologic N2 disease between 2010 and 2017 was evaluated using logistic regression. Survival was assessed with Kaplan-Meier analysis, log-rank test and Cox proportional hazards model. Results: A total of 2419 patients comprised the study cohort, including 320 sublobar resections (13.2%) and 2099 lobectomies (86.8%). Older age, female sex, smaller tumour size and treatment at an academic facility predicted the use of sublobar resection. Patients undergoing lobectomy had larger tumours (2.40 vs 2.05 cm, P < 0.001) and more lymph nodes examined (11 vs 5, P < 0.001). Adjuvant chemotherapy use was similar between the 2 groups (sublobar 79.4% vs lobectomy 77.4%, P = 0.434). Sublobar resection was not associated with worse survival compared to lobectomy in both univariate (5-year survival 46.6% vs 45.2%, P = 0.319) and multivariable Cox proportional hazards analysis (hazard ratio 0.97, P = 0.789). Conclusions: Clinical stage IA non-small cell lung cancer patients with N2 disease on final pathology have similar long-term survival with either sublobar resection or lobectomy. Patients with occult N2 disease after sublobar resection may not require reoperation for completion lobectomy but should instead proceed to adjuvant chemotherapy. abstract_id: PUBMED:35760619 Outcomes after sublobar resection versus lobectomy in non-small cell carcinoma in situ. Objective: Guidelines for treatment of non-small cell lung cancer identify patients with tumors ≤2 cm and pure carcinoma in situ histology as candidates for sublobar resection. Although the merits of lobectomy, sublobar resection, and lymph node (LN) sampling have been investigated in early-stage non-small cell lung cancer, evaluation of these modalities in patients with IS disease can provide meaningful clinical information. This study aims to compare these operations and their relationship with regional LN sampling in this population.
Methods: The National Cancer Database was used to identify patients diagnosed with non-small cell lung cancer clinical Tis N0 M0 with a tumor size ≤2 cm from 2004 to 2017. The χ2 tests were used to examine subgroup differences by type of surgery. Kaplan-Meier method and Cox proportional hazard model were used to compare overall survival. Results: Of 707 patients, 56.7% (401 out of 707) underwent sublobar resection and 43.3% (306 out of 707) underwent lobectomy. There was no difference in 5-year overall survival in the sublobar resection group (85.1%) compared with the lobectomy group (88.9%; P = .341). Multivariable survival analyses showed no difference in overall survival (hazard ratio, 1.044; P = .885) in the treatment groups. LN sampling was performed in 50.9% of patients treated with sublobar resection. In this group, LN sampling was not associated with improved survival (84.9% vs 85.0%; P = .741). Conclusions: We observed no difference in overall survival between sublobar resection and lobectomy in patients with cTis N0 M0 non-small cell lung cancer with tumors ≤2 cm. Sublobar resection may be an appropriate surgical option for this population. LN sampling was not associated with improved survival in patients treated with sublobar resection. abstract_id: PUBMED:37713465 Sublobar resection reduces the risk of postoperative cognitive dysfunction compared with lobectomy. Objectives: Sublobar resection, including wedge resection and segmentectomy, is non-inferior to lobectomy in early-stage non-small cell lung cancer treatment. We aimed to compare the risk of postoperative cognitive dysfunction (POCD) between sublobar resection and lobectomy. Methods: We conducted a prospective cohort study. Patients with sublobar resection or lobectomy were divided into the sublobar group or the lobar group, respectively. Cognition was assessed before and after surgery with Montreal Cognitive Assessment and Minimum Mental State Examination tests. POCD is defined as Z score of Montreal Cognitive Assessment change ≤-1.96. Propensity score matching (PSM) was performed to make demographics well-balanced between the 2 groups. Results: A total of 335 patients were enrolled. Both the postoperative 1-day POCD rate (sublobar 5.5% vs lobar 18.2%, P < 0.001) and the postoperative 1-month POCD rate (sublobar 7.9% vs lobar 21.8%, P < 0.001) were significantly lower in the sublobar group compared with lobar group, with demographics unbalanced between the 2 groups. In the 133 demographics-matched pairs obtained by PSM, both the postoperative 1-day POCD rate (sublobar 5.3% vs lobar 17.3%, P = 0.005) and the postoperative 1-month POCD rate (sublobar 8.3% vs lobar 18.8%, P = 0.018) remained significantly lower in the sublobar group than in the lobar group. The incidences of postoperative 1-day (P = 0.109) and postoperative 1-month (P = 0.026) Minimum Mental State Examination abnormity were also lower in the sublobar group than in the lobar group but only the latter was with statistical significance after PSM. Conclusions: Sublobar resection has an advantage over lobectomy in preventing POCD. Our findings might be a reference for selecting the most suitable type of resection for non-small-cell lung cancer patients. Answer: The effectiveness of lobectomy compared to sublobar resection in the treatment of second primary lung cancer has been explored in several studies, with varying results.
One study suggests that lobectomy should still be considered the treatment of choice for second primary lung cancer, as it showed better 5-year survival rates compared to sublobar resection (57.5% vs. 36%, respectively) (PUBMED:23657547). However, sublobar resection remains a valid option for high-risk patients with limited pulmonary function, and completion pneumonectomy was associated with significantly less favorable survival. In contrast, another study found no significant difference in overall survival (OS) or disease-free survival (DFS) between lobectomy and sublobar resection for synchronous multiple primary lung cancers (sMPLCs). The sublobar resection group had a lower incidence of postoperative complications and shorter postoperative hospital stays, suggesting that sublobar resection is safer in the perioperative period than lobectomy (PUBMED:37088839). A retrospective study comparing sublobar resection with lobectomy in patients who underwent a second pulmonary resection found no significant difference in overall survival between the two groups. Sublobar resection was associated with significantly shorter chest tube duration and postoperative hospital stay, indicating perioperative advantages (PUBMED:34273141). For screen-detected lung cancer, sublobar resection conferred survival similar to lobectomy while decreasing perioperative complications and potentially preserving lung function, offering distinct advantages in a screened patient cohort (PUBMED:34281703). In elderly patients with stage I NSCLC, sublobar resection provided equivalent long-term survival compared to lobectomy (PUBMED:24966794). Additionally, stereotactic ablative radiation (SABR) was found to be less costly than surgery, with lobectomy potentially being cost-effective compared with SABR, but sublobar resection being less likely to be cost-effective (PUBMED:26094172). Furthermore, sublobar resection was as safe and feasible as lobectomy in patients with lung adenocarcinoma, sharing similar short-term outcomes (PUBMED:33061635). In clinical stage IA primary lung cancer with occult N2 disease, sublobar resection did not result in worse survival compared to lobectomy (PUBMED:36063054).
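The incremental cost-effectiveness ratios quoted from PUBMED:26094172 follow from simple arithmetic on the reported 5-year costs and median survival times. A minimal Python sketch of that calculation is given below; the input figures are taken from the abstract, the willingness-to-pay threshold is an assumed illustrative value only, and the recomputed ICERs differ slightly from the published ones because the study used unrounded survival estimates.

```python
# Illustrative ICER arithmetic using the figures reported in PUBMED:26094172.
# The $100,000/life-year willingness-to-pay threshold is an assumed example value.

def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra life-year gained."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# SABR vs sublobar resection (5-year total cost in USD, median survival in years)
icer_sublobar = icer(cost_new=77_964, cost_ref=55_120, effect_new=4.1, effect_ref=3.6)
# SABR vs lobectomy
icer_lobectomy = icer(cost_new=82_641, cost_ref=54_968, effect_new=4.7, effect_ref=3.8)

# ~$45,688/life-year with these rounded inputs (abstract reports $45,683)
print(f"Sublobar resection vs SABR: ${icer_sublobar:,.0f} per life-year gained")
# ~$30,748/life-year with these rounded inputs (abstract reports $28,645)
print(f"Lobectomy vs SABR: ${icer_lobectomy:,.0f} per life-year gained")

# A strategy is typically called cost-effective when its ICER falls below a
# willingness-to-pay threshold (assumed here purely for illustration).
ASSUMED_THRESHOLD = 100_000  # USD per life-year
for label, value in [("sublobar resection", icer_sublobar), ("lobectomy", icer_lobectomy)]:
    print(label, "cost-effective at assumed threshold:", value < ASSUMED_THRESHOLD)
```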
Instruction: Quality compensation programs: are they worth all the hype? Abstracts: abstract_id: PUBMED:33227075 COVID-19 pandemic hype: Losers and gainers. Background: Ever since the COVID-19 pandemic was declared by the WHO as a Global Public Health Emergency, the COVID-19 pandemic has been hyped by the media. Aim: To review the literature on COVID-19 pandemic hype, losers and gainers. Method: Literature on COVID-19 pandemic hype, January to August 2020, was retrieved from PubMed, Google Scholar and news media, and reviewed. Results: The COVID-19 pandemic has been hyped directly by highly disturbing messages from the WHO, news of famous people getting infected and dying because of the coronavirus, and highlighted news in media. Indirect hype has been driven by fake news and overambitious attempts to contain the virus. There have been many losers and gainers of the corona hype. Conclusion: The COVID-19 pandemic hype has caused huge loss to the world community, but substantial gains are also being witnessed. Media coverage should be balanced. Intensive public awareness programs coupled with best possible medical treatment to symptomatic cases are recommended. abstract_id: PUBMED:37250465 Impact of hype on clinicians' evaluation of trials - a pilot study. Objective: The purpose of this study was to determine the practicality of using a teleconferencing platform to assess the effect of hype on clinicians' evaluations of reports of clinical trials in spinal care. Methods: Twelve chiropractic clinicians were interviewed via a videoconferencing application. Interviews were recorded and timed. Participant behaviour was monitored for compliance with the protocol. Differences between participants' numerical ratings of hyped and non-hyped abstracts based on four measures of quality were analysed using pairwise comparisons (Wilcoxon signed rank test for independent samples). In addition, a linear mixed effects model was fitted with condition (i.e. hype vs. no hype) as a fixed effect and participant and abstract as random effects. Results: The interviews and data analysis were conducted without significant technical difficulty. Participant compliance was high, and no harms were reported. There were no statistically significant differences in the quality rankings of hyped versus non-hyped abstracts. Conclusion: The use of a videoconferencing platform to measure the effects of hype on clinicians' evaluations of abstracts of clinical trials is practical and an adequately powered study is justified. Lack of statistically significant results may well be due to low participant numbers. abstract_id: PUBMED:33332059 Is COVID-19 a hype? The answer to the question whether COVID-19 is a hype or not depends on how we define a hype. The article loosely builds on philosophical discussions about hypes in knowledge work and information sciences. The central idea is to make clear that hypes always imply a certain overload of information and that the paradoxical outcome of this is that it is not just information that is piling up but also disinformation. It is argued that it is in this sense (and only in this sense) that COVID-19 is a hype. How we respond to this hype depends very strongly on subjective sensitivities towards both information and disinformation. abstract_id: PUBMED:33659419 In vitro AMPylation/Adenylylation of Alpha-synuclein by HYPE/FICD.
One of the major histopathological hallmarks of Parkinson's disease are Lewy bodies (LBs) -cytoplasmic inclusions, enriched with fibrillar forms of the presynaptic protein alpha-synuclein (α-syn). Progressive deposition of α-syn into LBs is enabled by its propensity to fibrillize into insoluble aggregates. We recently described a marked reduction in α-syn fibrillation in vitro upon posttranslational modification (PTM) by the Fic (Filamentation induced by cAMP) family adenylyltransferase HYPE/FICD (Huntingtin yeast-interacting protein E/FICD). Specifically, HYPE utilizes ATP to covalently decorate key threonine residues in α-syn's N-terminal and NAC (non-amyloid-β component) regions with AMP (adenosine monophosphate), in a PTM termed AMPylation or adenylylation. Status quo in vitro AMPylation reactions of HYPE substrates, such as α-syn, use a variety of ATP analogs, including radiolabeled α-32P-ATP or α-33P-ATP, fluorescent ATP analogs, biotinylated-ATP analogs (N6-[6-hexamethyl]-ATP-Biotin), as well as click-chemistry-based alkyl-ATP methods for gel-based detection of AMPylation. Current literature describing a step-by-step protocol of HYPE-mediated AMPylation relies on an α-33P-ATP nucleotide instead of the more commonly available α-32P-ATP. Though effective, this former procedure requires a lengthy and hazardous DMSO-PPO (dimethyl sulfoxide-polyphenyloxazole) precipitation. Thus, we provide a streamlined alternative to the α-33P-ATP-based method, which obviates the DMSO-PPO precipitation step. Described here is a detailed procedure for HYPE mediated AMPylation of α-syn using α-32P-ATP as a nucleotide source. Moreover, our use of a reusable Phosphor screen for AMPylation detection, in lieu of the standard, single-use autoradiography film, provides a faster, more sensitive and cost-effective alternative. abstract_id: PUBMED:31589804 Blocking the Hype-Hypocrisy-Falsification-Fakery Pathway is Needed to Safeguard Science. In chemistry and other sciences, hype has become commonplace, compounded by the hypocrisy of those who tolerate or encourage it while disapproving of the consequences. This reduces the credibility and trust upon which all science depends for support. Hype and hypocrisy are but first steps down a slippery slope towards falsification of results and dissemination of fake science. Systemic drivers in the contemporary structure of the science establishment encourage exaggeration and may lure the individual into further steps along the hype-hypocrisy-falsification-fakery continuum. Collective, concerted intervention is required to effectively discourage entry to this dangerous pathway and to restore and protect the probity and reputation of the science system. Chemists must play and active role in this effort. abstract_id: PUBMED:36570720 Sentiment and hype of business media topics and stock market returns during the COVID-19 pandemic. We examine COVID-19 related topics discussed in the printed edition of the Wall Street Journal. Using text analytics and topic modeling algorithms, we discover 15 distinct topics and present differences in their sentiment (polarity) and hype (intensity of coverage) trends throughout 2020. Importantly, the hype of the topic, not the sentiment, relates to stock market returns. In particular, the hype scores for Debt market and Financial markets have the strongest positive relation to the stock market performance. abstract_id: PUBMED:25678711 HypE-specific nanobodies as tools to modulate HypE-mediated target AMPylation. 
The covalent addition of mono-AMP to target proteins (AMPylation) by Fic domain-containing proteins is a poorly understood, yet highly conserved post-translational modification. Here, we describe the generation, evaluation, and application of four HypE-specific nanobodies: three that inhibit HypE-mediated target AMPylation in vitro and one that acts as an activator. All heavy chain-only antibody variable domains bind HypE when expressed as GFP fusions in intact cells. We observed localization of HypE at the nuclear envelope and further identified histones H2-H4, but not H1, as novel in vitro targets of the human Fic protein. Its role in histone modification provides a possible link between AMPylation and regulation of gene expression. abstract_id: PUBMED:28517985 Personalised Medicine: The Promise, the Hype and the Pitfalls. In engaging critically with personalised medicine and mapping pitfalls which mark its progress this project aims to stimulate conversations which deal intelligently with controversies for the sake of consensus. We aim to ask the ethical questions which will lead to the improvement of healthcare and we take an open-minded approach to finding answers to them over time. What is or should be meant by 'personalised medicine' is a major theme of this issue. It is a debate bound up with question of both values in the sense of ethical reflection and value in the sense of economic return. This editorial discusses and interrelates the articles of the issue under four headings: the promise and the hype of personalised medicine; the human person and the communication of risk; data sharing and participation; value, equity and power. A key intention throughout is to provoke discourse and debate, to identify aspirations which are more grounded in myth or hype than reality and to challenge them; and to identify focussed, practical questions which need further examination. abstract_id: PUBMED:15291820 Analysis of the transcarbamoylation-dehydration reaction catalyzed by the hydrogenase maturation proteins HypF and HypE. The hydrogenase maturation proteins HypF and HypE catalyze the synthesis of the CN ligands of the active site iron of the NiFe-hydrogenases using carbamoylphosphate as a substrate. HypE protein from Escherichia coli was purified from a transformant overexpressing the hypE gene from a plasmid. Purified HypE in gel filtration experiments behaves predominantly as a monomer. It does not contain statistically significant amounts of metals or of cofactors absorbing in the UV and visible light range. The protein displays low intrinsic ATPase activity with ADP and phosphate as the products, the apparent K(m) being 25 micro m and the k(cat) 1.7 x 10(-3) s(-1). Removal of the C-terminal cysteine residue of HypE which accepts the carbamoyl moiety from HypF affected the K(m) (47 micro m) but not significantly the k(cat) (2.1 x 10(-3) s(-1)). During the carbamoyltransfer reaction, HypE and HypF enter a complex which is rather tight at stoichiometric ratios of the two proteins. A mutant HypE variant was generated by amino acid replacements in the nucleoside triphosphate binding region, which showed no intrinsic ATPase activity. The variant was active as an acceptor in the transcarbamoylation reaction but did not dehydrate the thiocarboxamide to the thiocyanate. The results obtained with the HypE variants and also with mutant HypF forms are integrated to explain the complex reaction pattern of protein HypF. 
abstract_id: PUBMED:22975157 Procedural justice and quality of life in compensation processes. Background: There is considerable evidence that being involved in compensation processes has a negative impact on claimants' health. Previous studies suggested that this negative effect is caused by a stressful compensation process: claimants suffered from a lack of communication, a lack of information, and feelings of distrust. However, these rather qualitative findings have not been quantitatively investigated yet. This observational study aimed to fill this gap of knowledge, investigating the claimants' perceived fairness of the compensation process, the provided information, and the interaction with lawyers and insurance companies, in relation to the claimants' quality of life. Method: Participants were individuals injured in traffic accidents, older than 18 years, who were involved in a compensation process in the Netherlands. They were recruited by three claims settlement offices. Outcome measures were procedural, interactional, and informational justice, and quality of life. Results: Participants (n=176) perceived the interaction with lawyers to be fairer than the interaction with insurance companies (p<.001). The length of hospital stay was positively associated with procedural justice (β=.31, p<.001). Having trunk/back injury was negatively related to procedural justice (β=-.25, p=.001). Whiplash injury and length of time involved in the claim process were not associated with any of the justice scales. Finally, procedural justice was found to be positively correlated with quality of life (rs=.22, p=.004). Discussion: The finding that the interaction with insurance companies was considered less fair than the interaction with lawyers may imply that insurers could improve their interaction with claimants, e.g. by communicating more directly. The result that claimants with mild injuries and with trunk/back injuries considered the compensation process to be less fair than those with respectively severe injuries and injuries to other body parts suggests that especially the former two require an attentive treatment. Finally, the fact that procedural justice was positively correlated with quality of life could implicate that it is possible to improve claimants' health in compensation processes by enhancing procedural justice, e.g. by increasing the ability for claimants to express their views and feelings and by involving claimants in the decision-making process. Answer: The abstracts provided offer a range of perspectives on the concept of "hype" in various contexts, including the COVID-19 pandemic, scientific research, and compensation processes. However, none of the abstracts directly address the effectiveness of quality compensation programs in relation to their "hype." The abstract from PUBMED:37250465 discusses a study that aimed to assess the effect of hype on clinicians' evaluations of clinical trials, but it does not provide conclusive evidence due to the small number of participants. The study found no statistically significant differences in quality rankings between hyped and non-hyped abstracts, suggesting that hype may not necessarily influence professional judgments in this context. The abstract from PUBMED:22975157 examines the relationship between perceived fairness in compensation processes and quality of life among individuals involved in traffic accident claims.
It suggests that enhancing procedural justice, such as by improving communication and involving claimants in decision-making, could positively impact claimants' quality of life. This implies that the perceived fairness and quality of a compensation process can have tangible benefits for participants, which could be interpreted as a validation of the "worth" of quality compensation programs. While the abstracts do not provide a direct answer to the question of whether quality compensation programs are worth the hype, they do suggest that the perceived quality and fairness of such programs can have significant impacts on individuals' evaluations and well-being. Therefore, it could be inferred that quality compensation programs that are perceived as fair and effectively communicated may indeed be worth the attention they receive, as they can contribute positively to the experiences and outcomes of the individuals involved (PUBMED:22975157).
Instruction: Are gloves sufficiently protective when hairdressers are exposed to permanent hair dyes? Abstracts: abstract_id: PUBMED:25407590 Are gloves sufficiently protective when hairdressers are exposed to permanent hair dyes? An in vivo study. Background: The use of permanent hair dyes exposes hairdressers to contact allergens such as p-phenylenediamine (PPD), and the preventive measures are insufficient. Objectives: To perform an in vivo test to study the protective effect of gloves commonly used by hairdressers. Patients/materials/methods: Six gloves from Sweden, Italy and Germany were studied: two vinyl, one natural rubber latex, two nitrile, and one polyethylene. The hair dye used for the provocation was a dark shade permanent dye containing PPD. The dye was mixed with hydrogen peroxide, and 8 PPD-sensitized volunteers were tested with the gloves as a membrane between the hair dye and the skin in a cylindrical open chamber system. Three exposure times (15, 30 and 60 min) were used. Results: Eczematous reactions were found when natural rubber latex, polyethylene and vinyl gloves were tested with the dye. The nitrile gloves gave good protection, even after 60 min of exposure to the hair dye. Conclusions: Many protective gloves used by hairdressers are unsuitable for protection against the risk of elicitation of allergic contact dermatitis caused by PPD. abstract_id: PUBMED:32311093 Use of protective gloves by hairdressers: A review of efficacy and potential adverse effects. Occupational hand eczema is common among hairdressers, and protective gloves are important in limiting exposure to irritants and allergens. Various glove types may differ in their protective ability, and their use may lead to hand eczema due to skin irritancy and allergy. MEDLINE was searched for studies investigating permeation of gloves to irritants and allergens used in the hairdressing trade, as well as adverse effects of glove use affecting hairdressers. Forty-four studies were identified; nine reported on permeation. Of those, two in vitro studies found nitrile rubber (NR) gloves to give the best protection when handling hair dyes. Polyethylene (PE) gloves had the lowest reported break-through time. The prevalence of sensitization to rubber materials in European hairdressers was as follows: thiuram mix, median 2.5% (range 0%-8.2%), weighted average 3.0% (95% confidence interval [CI] 3.0%-3.1%); mercapto mix, median 0.4% (range 0%-3.3%), weighted average 0.5% (95% CI 0.47%-0.50%), mercaptobenzothiazole, median 0.6% (range 0%-6.6%), weighted average 0.7% (95% CI 0.6%-0.7%), NRL-type I allergy, median 1.3% (range 1%-16.4%), weighted average 4.0% (95% CI 3.6%-4.5%). In conclusion, NR gloves provide the best skin protection for hairdressers, although natural rubber latex (NRL) and polyvinylchloride (PVC) gloves may be sufficient in most cases. PE gloves are not recommended. Synthetic rubber gloves with low or no levels of accelerators are preferred. abstract_id: PUBMED:28795423 Hairdressers' skin exposure to hair dyes during different hair dyeing tasks. Background: The high risk of occupational skin disease among hairdressers, caused by skin exposure to irritants and sensitizers, such as hair dye substances, is of great concern. 
Objectives: The aim of the present study was to assess how the various tasks involved in hair dyeing contribute to hairdressers' exposure to hair dye, in order to enable the formulation of well-founded recommendations on working routines that will reduce exposure and prevent occupational disease. Methods: Skin exposure to hair dye was measured for 20 hairdressers applying highlights and all-over hair colour with the hand rinsing technique. Resorcinol was used as a proxy for hair dye exposure. Results: Applying hair dye and cutting the newly dyed hair were the tasks that contributed most to exposure in treatments for highlights. After cutting all-over-coloured hair, all hairdressers had measurable amounts of hair dyes on both hands. Conclusions: Hairdressers are exposed to hair dye ingredients during all steps of the hair dyeing procedure. Cutting newly dyed hair contributes significantly to exposure. For the prevention of occupational disease resulting from hair dye exposure, we suggest cutting hair before dyeing it, and wearing gloves during all other work tasks. abstract_id: PUBMED:16081465 Occupational dermal exposure to permanent hair dyes among hairdressers. Skin exposure to permanent hair dye compounds was assessed in 33 hairdressers using a previously evaluated hand rinse method. Hand rinse samples were collected from each hand before the start of hair dyeing, after application of the dye and after cutting the newly-dyed hair. Sixteen of the hairdressers did not use gloves during dye application, and none used gloves while cutting the dyed hair. The samples were analysed for pertinent aromatic amines and resorcinol (RES) using an HPLC method. 10 of 54 hair dye mixtures contained 1,4-phenylenediamine (PPD), 40 toluene-2,5-diaminesulphate (TDS), and 44 RES. After application of the hair dye, PPD was found in samples from 4 hairdressers, TDS in 12 and RES in 21. PPD was found in samples from 3 of the 17 hairdressers that used gloves during application of the hair dye, TDS in 5 and RES in 11. In the group that did not use gloves during the application of hair dye (n = 16) PPD was found in samples from 1 hairdresser, TDS in 7 and RES in 11. After cutting the dyed hair, PPD was found in samples from 5 hairdressers, TDS in 14 and RES in 20. Analysis of samples of newly-dyed hair cuttings revealed the presence of aromatic amines and/or RES in 11/12 samples. Our conclusion is that hairdressers' skin is exposed to allergenic compounds during hair dyeing. Exposure occurs from dye application, from cutting newly-dyed hair and from background exposure. The exposure loadings are in the level, where there is a risk of sensitization and/or elicitation of contact allergy (i.e. for PPD 22-939 nmol per hand). The glove use observed in this study was often improper, and was insufficient to prevent exposure. To reduce exposure, improved skin protection and work routines are important. abstract_id: PUBMED:25209114 The influence of hydrogen peroxide on the permeability of protective gloves to resorcinol in hairdressing. Background: Hairdressers are exposed to hair dye chemicals, for example resorcinol and hydrogen peroxide. Adequate skin protection is an important preventive measure against occupational skin disease. Objectives: To examine whether hydrogen peroxide may cause deterioration of protective gloves. 
Methods: Permeation of resorcinol through gloves of polyvinylchloride (PVC) (n = 8), natural rubber latex (NRL) (n = 5) and nitrile rubber (NR) (n = 5) was studied in a two-compartment cell, with resorcinol as an indicator for hair dyes. The amount of resorcinol that had permeated was analysed with a high-performance liquid chromatography instrument. Cumulative breakthrough time and permeation rate were compared for hydrogen peroxide-pretreated and untreated gloves. Results: The cumulative breakthrough time was > 1 hr but < 4 hr for all tested gloves. Pretreatment of PVC gloves resulted in a slightly decreased breakthrough time, and pretreatment of NRL gloves decreased the permeation rate. No change was recorded in NR gloves. Conclusions: Treatment with hydrogen peroxide had a minor effect on permeation in the tested gloves. NR gloves provided the best protection. However, taking the allergy risk of rubber gloves into account, plastic gloves are recommended in hairdressing. PVC gloves may be used, but not for > 1 hr. Disposable gloves should never be reused, regardless of material. abstract_id: PUBMED:22568839 A quantification of occupational skin exposures and the use of protective gloves among hairdressers in Denmark. Background: Occupational hand eczema is common in hairdressers, owing to excessive exposure to wet work and hairdressing chemicals. Objectives: To quantify occupational skin exposure and the use of protective gloves among hairdressers in Denmark. Methods: A register-based study was conducted comprising all graduates from hairdressing vocational schools from 1985 to 2007 (n = 7840). The participants received a self-administered postal questionnaire in May 2009, including questions on hairdressing tasks performed in the past week at work and the extent of glove use. A response rate of 67.9% (n = 5324) was obtained. Results: Of the respondents, 55.7% still worked as hairdressers, and they formed the basis of this study. Daily wet work was excessive; 86.6% had wet hands for ≥2 hr, and 54% for ≥ 4 hr. Glove use was fairly frequent for full head hair colouring and bleaching procedures (93-97.7%), but less frequent for highlighting/lowlighting procedures (49.7-60.5%) and permanent waving (28.3%). Gloves were rarely worn during hair washing (10%), although this was more frequently the case after hair colouring procedures (48.9%). Conclusions: Occupational skin exposure was excessive among hairdressers; the extent of wet work and chemical treatments was high, and glove use was inconsistent, especially for certain hair colouring procedures and wet work tasks. abstract_id: PUBMED:25817606 Glove use among hairdressers: difficulties in the correct use of gloves among hairdressers and the effect of education. Background: Hand eczema is frequent among Danish hairdressers, and they are advised to use gloves as protection. However, studies indicate that a significant proportion use gloves inappropriately. Objectives: To determine whether hairdressers and apprentices use protective gloves in the correct way, and to determine whether a demonstration of correct use could cause an improvement. Methods: Forty-three hairdressers and apprentices were asked to perform a hair wash while wearing gloves. The shampoo used was contaminated with an ultraviolet (UV) trace material. Two rounds of hair washing were carried out by each person, interrupted by a demonstration of how to use gloves correctly. Photographs were taken to compare UV contamination before and after the demonstration.
Results: All of the participants (100%) had their hands contaminated during the first round; the area ranged between 0.02 and 101.37 cm(2) (median 3.62 cm(2)). In the second round, 55.8% were contaminated (range 0.00-3.08 cm(2); median 0.01 cm(2)). The reduction in contaminated skin areas was statistically significant (p < 0.001), proving an effect of the glove demonstration. There were no significant differences between hairdressers and apprentices. Conclusions: Hairdressers and apprentices lack knowledge on how to handle gloves correctly. A short demonstration of correct glove use made a significant difference in the skin protection provided by gloves. abstract_id: PUBMED:20080812 Personal use of hair dyes and temporary black tattoos in Copenhagen hairdressers. Background: Hairdressers are occupationally and personally exposed to hair dye substances and adverse reactions from the skin are well known. Currently, little is known about personal exposure to hair dye ingredients and temporary black tattoos. Objectives: To investigate hairdressers' professional and personal risk exposures and to compare the frequency of temporary tattoos among hairdressers and subjects from the general population. Methods: A questionnaire was sent to 1679 Copenhagen hairdressers and 1063 (63.3%) responded; 3471 subjects from the general population in Copenhagen were asked about temporary black tattoos. Results: Of the female hairdressers, 38.3% had coloured hair within the previous week. Adverse skin reactions to own hair dye were reported in 29.5%. In the hairdresser population, no significant association was observed between self-reported adverse skin reactions to hair dye and having had a temporary black tattoo when adjusted for sex, age, and atopy. A total of 19.0% of hairdressers (43.5% of apprentices) and 6.3% of participants from the general population had ever had a temporary black tattoo performed at one point. There were no differences in frequency of eczema after temporary tattooing between hairdressers and subjects in the general population. Almost all hairdressers (99.2%) used gloves for hair colouring, 51% for high/low lighting, 39.6% for perming and 21.1% used gloves for shampooing. Conclusions: In conclusion, skin reactions to hair colour are frequent among Copenhagen hairdressers. Temporary black tattoos were more frequent among hairdressers than in a sample of the general population and increased with decreasing age. abstract_id: PUBMED:20443120 Internal exposure of hairdressers to permanent hair dyes: a biomonitoring study using urinary aromatic diamines as biomarkers of exposure. Purpose: To determine whether the occupational exposure of hairdressers to permanent hair dyes can be quantified by the use of biological monitoring of urinary aromatic diamines as one of the main constituents and to compare these levels to those recently determined in persons after personal application of hair dyes. Methods: Fifty-two hairdressers (40 female and 12 male) from 16 hairdresser salons in and around the city of Aachen took part in this field study. Subjects were asked to document all operations associated with possible exposure to permanent hair dyes like mixing colour, application of colour, washing after dyeing, and cutting of freshly coloured hair. Excretion of aromatic diamines 2,5-toluylene diamine (2,5-TDA) and p-phenylene diamine (p-PDA) as main constituents of commercially available hair dyes was measured in urine samples using a highly specific and accurate GC/MS-method.
Urine samples were taken at 5 points of time during the work week: pre-shift before the start of the work week, pre- and post-shift on the third day of the work week and finally pre- and post-shift on the last day of a work week in order to meet different workloads and possible accumulative effects over the week. Nineteen persons matched for age served as a control group and gave spot urine samples. Results: Although the levels were generally low, we could determine a significantly higher internal exposure to 2,5-TDA in hairdressers (medians ranged from <0.2 μg/g creatinine up to 1.7 μg/g creatinine at various sampling times, with a maximum of 155.8 μg/g creatinine) compared to the control group (median <0.2 μg/g creatinine, maximum 3.33 μg/g creatinine). At the same time, p-PDA was detectable only in selected cases in the group of hairdressers but not in the control group. Overall, there was neither an intra-shift effect seen nor an effect across the work week. There was also no significant difference in urinary excretion of participants who reported wearing protective gloves compared to those who reported not wearing protective gloves. Conclusion: The internal exposure to aromatic diamines in hairdressers using permanent hair dyes can be determined using biological monitoring. The extent of exposure is low compared to subjects after personal application of hair dyes, who excreted more than 200 times higher amounts of aromatic diamines. This slight work-related exposure might be reduced by the strict adherence to the use of suitable gloves as well as long-sleeved clothing. abstract_id: PUBMED:17030383 Occupational exposure of hairdressers to [14C]-para-phenylenediamine-containing oxidative hair dyes: a mass balance study. We monitored the exposure of hairdressers to oxidative hair dyes for 6 working days under controlled conditions. Eighteen professional hairdressers (3/day) coloured hairdresser's training heads bearing natural human hair (hair length: approximately 30 cm) for 6 h/working day with a dark-shade oxidative hair dye containing 2% [14C]-para-phenylenediamine (PPD). Three separate phases of hair dyeing were monitored: (A) dye preparation/hair dyeing, (B) rinsing/shampooing/conditioning and (C) cutting/drying/styling. Ambient air and personal monitoring samples (vapours and particles), nasal and hand rinses were collected during all study phases. Urine (pre-exposure, quantitative samples for the 0-12, 12-24, 24-48 h periods after start of exposure) and blood samples (blank, 4, 8 or 24 h) were collected from all exposed subjects. Radioactivity was determined in all biological samples and study materials, tools and washing liquids, and a [14C]-mass balance was performed daily. No adverse events were noted during the study. Waste, equipment, gloves and coveralls contained 0.41+/-0.16%, dye mixing bowls 2.88+/-0.54%, hair wash 45.47+/-2.95%, hair+scalp 53.46+/-4.06% of the applied radioactivity, respectively. Plasma levels were below the limit of quantification (10 ng PPDeq/mL). Total urinary 0-48 h excretion of [14C] levels ranged from a total of <2-18 microg PPDeq and was similar in subjects exposed during the different phases of hair dyeing. Minimal air levels at or slightly above the limit of quantification were found in a few personal air monitoring samples during the phases of hair dyeing and hair cutting, but not during the rinsing phase. Air area monitoring samples or nasal rinses contained no measurable radioactivity.
Hand residues ranged from 0.006 to 0.15 microg PPDeq/cm2, and were found predominantly after the cutting/drying phase. The mean mass balance of [14C] across the six study days was 102.50+/-2.20%. Overall, the mean, total systemic exposure of hairdressers to oxidative hair dyes during a working day including 6 hair dyeing processes was estimated to be <0.36 microg PPDeq/kg body weight/working day. Our results suggest that (a) current safety precautions for the handling of hair dyes offer sufficient protection against local and systemic exposure and (b) professional exposure to oxidative hair dyes does not pose a risk to human health. Answer: Gloves can provide protection for hairdressers exposed to permanent hair dyes, but their effectiveness varies depending on the type of gloves used and how they are used. An in vivo study showed that nitrile gloves gave good protection against permanent hair dyes containing p-phenylenediamine (PPD), even after 60 minutes of exposure, while natural rubber latex, polyethylene, and vinyl gloves did not provide adequate protection (PUBMED:25407590). Another review supported the finding that nitrile rubber gloves offer the best protection when handling hair dyes, with polyethylene gloves having the lowest reported break-through time. It also suggested that synthetic rubber gloves with low or no levels of accelerators are preferred over natural rubber latex gloves to minimize the risk of allergy (PUBMED:32311093). However, the use of gloves among hairdressers is not always consistent or correct. For example, gloves are less frequently worn for highlighting/lowlighting procedures and permanent waving, and rarely for hair washing, even though these tasks can contribute to exposure (PUBMED:22568839). A study on the correct use of gloves found that all participants had their hands contaminated during a hair wash without proper glove use, but a demonstration of correct glove use significantly reduced contamination (PUBMED:25817606). Additionally, a study on the influence of hydrogen peroxide on the permeability of protective gloves found that treatment with hydrogen peroxide had a minor effect on permeation in tested gloves, with nitrile rubber gloves providing the best protection (PUBMED:25209114). Despite the availability of protective gloves, hairdressers' skin is still exposed to allergenic compounds during hair dyeing, and exposure occurs from dye application, cutting newly-dyed hair, and from background exposure (PUBMED:16081465). In conclusion, while gloves can be protective, their efficacy depends on the material and proper usage. Nitrile gloves are recommended for the best protection, and education on correct glove use is crucial to minimize exposure and prevent occupational skin diseases among hairdressers (PUBMED:25407590; PUBMED:32311093; PUBMED:22568839; PUBMED:25817606; PUBMED:25209114; PUBMED:16081465).
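The glove permeation comparisons above (PUBMED:25209114) rest on the idea of a cumulative breakthrough time: the first sampling time at which the amount of test substance that has crossed the glove membrane exceeds a defined amount per unit area. The sketch below illustrates that calculation on made-up HPLC readings; the threshold, the cell area, and the sample values are purely hypothetical and are not taken from the study.

```python
# Hypothetical illustration of how a cumulative breakthrough time could be read
# off permeation-cell measurements. Neither the threshold nor the readings come
# from PUBMED:25209114; they are assumed values for the sketch only.

THRESHOLD_UG_PER_CM2 = 1.0   # assumed cumulative breakthrough criterion
CELL_AREA_CM2 = 3.14         # assumed exposed membrane area of the test cell

# (time in minutes, total resorcinol detected in the receptor fluid in micrograms)
readings = [(15, 0.1), (30, 0.6), (60, 2.2), (120, 9.8), (240, 40.5)]

def breakthrough_time(samples, area_cm2, threshold):
    """Return the first sampling time at which cumulative permeation per area exceeds the threshold."""
    for minutes, total_ug in samples:
        if total_ug / area_cm2 >= threshold:
            return minutes
    return None  # no breakthrough within the observation period

bt = breakthrough_time(readings, CELL_AREA_CM2, THRESHOLD_UG_PER_CM2)
print(f"Cumulative breakthrough time: {bt} min" if bt else "No breakthrough observed")
# With these made-up numbers the breakthrough falls between 1 and 4 h, the range
# reported for the tested gloves in the abstract.
```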
Instruction: Do high proinsulin and C-peptide levels play a role in autonomic nervous dysfunction? Abstracts: abstract_id: PUBMED:9286948 Do high proinsulin and C-peptide levels play a role in autonomic nervous dysfunction?: Power spectral analysis in patients with non-insulin-dependent diabetes and nondiabetic subjects. Background: Immunoreactive insulin has been shown to predict the development of parasympathetic autonomic neuropathy. It is possible that constituents of immunoreactive insulin could explain this association. In this cross-sectional study, the relationship of specific insulin, C-peptide, and proinsulin with autonomic nervous dysfunction was evaluated in 57 NIDDM patients and 108 control subjects. Methods And Results: The frequency-domain analysis of heart rate variability was determined by using spectral analysis from stationary regions of registrations while the subjects breathed spontaneously in a supine position. Total power was divided into three frequency bands: low (0 to 0.07 Hz), medium (MFP, 0.07 to 0.15 Hz), and high (HFP, 0.15 Hz to 0.50 multiplied by the frequency equal to the mean RR interval). In NIDDM patients, total power, the three frequency bands (P<.001 for each), and the MFP/HFP ratio (P=.016), which expresses sympathovagal balance, were reduced compared with control subjects. Fasting proinsulin (r(s)=-.324, P=.014 for diabetics and r(s)=-.286, P=.003 for control subjects), C-peptide (r(s)=-.492, P<.001 for diabetics and r(s)=-.304, P=.001 for control subjects), and total immunoreactive insulin (r(s)=-.291, P=.028 for diabetics and r(s)=-.228, P=.017 for control subjects) were inversely related to MFP/HFP. For proinsulin and C-peptide the results did not change after controlling for the effects of age, body mass index, and fasting glucose. Conclusions: Both proinsulin and C-peptide levels were significantly associated with the sympathovagal balance of autonomic nervous function in NIDDM patients and control subjects, but this study cannot determine whether these compounds are directly involved in autonomic nervous dysfunction. abstract_id: PUBMED:7129312 The occurrence of proinsulin and C-peptide in healthy humans and in diabetics Chemistry, biochemistry and physiology of proinsulin and C-peptide are summarized. A short characterization of the radioimmunological methods for measuring C-peptide and proinsulin follows. The determination of C-peptide and proinsulin which was mainly carried out in serum or plasma essentially improved our knowledge about the function of the beta-cells in the islets of Langerhans in healthy subjects and diabetic patients. The paper reports on the occurrence and the course of C-peptide and proinsulin in healthy subjects and in diabetics of type I and II. abstract_id: PUBMED:2598472 The effect of thyroid disease on proinsulin and C-peptide levels. C-peptide and proinsulin levels were studied in hyper and hypothyroidism both pre and post-treatment and in comparison to matched normals. Fasting C-peptide was reduced in untreated hyperthyroidism (0.4 +/- 0.2 (mean +/- SEM) vs 0.7 +/- 0.2 nmol/l, P less than 0.05) but returned to normal levels following treatment. Fasting proinsulin was elevated in untreated hyperthyroidism (3.6 +/- 0.7 vs 2.4 +/- 0.5 pmol/l, P less than 0.05) also returning to normal after treatment. A similar pattern was seen after oral glucose. The increased proinsulin and reduced C-peptide suggest there may be a defect of proinsulin processing in hyperthyroidism.
Fasting C-peptide was reduced in untreated hypothyroidism (0.4 +/- 0.1 vs 0.7 +/- 0.1 nmol/l, P less than 0.05) and also returned to normal after treatment. Fasting proinsulin did not differ significantly from controls. However, proinsulin was reduced after oral glucose (4.7 +/- 0.7 vs. 7.9 +/- 2.0 pmol/l, P less than 0.05) as was C-peptide (0.9 +/- 0.2 vs 2.6 +/- 0.3 nmol/l, P less than 0.05). Both returned to normal after treatment. These findings suggest there are abnormalities of proinsulin and C-peptide levels in both hyper and hypothyroidism. abstract_id: PUBMED:344111 On the role of the proinsulin C-peptide. Intracellular cleavage of protein and polypeptide precursors is now recognized as a widely occurring biosynthetic mechanism. As this field has developed, proinsulin and its cleavage patterns and secretory products have served as useful models for investigations of other systems. A particularly relevant aspect of the proprotein concept is the simple mechanism it provides for the coördinate synthesis and discharge of related peptides from endocrine or other secretory cells. This report reviews briefly the role of the proinsulin C-peptide, first in terms of its special biosynthetic functions, which are unique to the assembly of the two-chain insulin structure, and then with regard to its more general implications for other biosynthetic and secretory systems. abstract_id: PUBMED:1507496 A method for determination of proinsulin levels in serum using both insolubilized anti-insulin antibody and anti-C-peptide antibody A method is described for determination of proinsulin levels in serum. The principle of the assay is that proinsulin reacts with both anti-insulin and anti-C-peptide antibodies. The assay procedure is as follows; Anti-insulin antibody fixed to bacterial cell wall and insolubilized is incubated with test serum to form a complex of proinsulin-anti-insulin antibody (solid phase), followed by washing twice with buffer to eliminate free C-peptide. Then, glycin-HCl buffer is added to dissociate the bound proinsulin. After centrifugation, the supernatant is neutralized with NaOH and proinsulin in it is measured using RIA kit for CPR assay. The assay is simple, sensitive and reproducible. Neither insulin nor C-peptide contained in test serum influences the proinsulin levels determined by this assay. The mean +/- S.D. of the fasting serum proinsulin levels of healthy donors was 7.0 +/- 2.6 PM/l. A patient with insulinoma showed extremely high serum proinsulin level, which decreased to the normal range after extirpation of insulinoma. abstract_id: PUBMED:6379242 Plasma levels of proinsulin, insulin and C-peptide in chronic renal, hepatic and muscular disorders. Proinsulin, insulin and C-peptide levels were investigated in chronic renal, hepatic and muscular disorders. The proinsulin levels in human plasma were determined by radioimmunoassay using insulin-degrading enzyme (IDE). The fasting levels of proinsulin in 29 patients with chronic renal failure (0.95 +/- 0.05) were significantly higher than those in 10 patients with liver cirrhosis (0.46 +/- 0.04), six with muscular dystrophy (0.37 +/- 0.02) and 52 normal subjects (0.24 +/- 0.02 ng/ml, mean +/- S.E.). The fasting levels of insulin and C-peptide in chronic renal failure were also the highest among these groups. 
The insulin levels in liver cirrhosis and muscular dystrophy were significantly greater than those in normal subjects and increased molar ratios of proinsulin to total insulin immunoreactivity in chronic renal failure were observed. These results suggest that the kidney, liver and muscle are related to circulating insulin levels and that the kidney plays a particularly important role in circulating proinsulin levels. It can be concluded that increases in these peptides are due to a hypersecretion of B-cells, a decreased degradation or excretion. abstract_id: PUBMED:10228759 Increased blood proinsulin and decreased C-peptide levels in patients with pancreatic cancer. Background/aims: Abnormal glucose tolerance during oral glucose tolerance test (OGTT) is frequently observed in patients with pancreatic cancer. The abnormality shown in previous studies, however, was characterized mainly by analyses based on immunoreactive insulin or C-peptide response during OGTT, despite their cross-reactivity with proinsulin. The mechanisms responsible for glucose intolerance in patients with pancreatic cancer remain controversial. Methodology: Both proinsulin and C-peptide responses during 75 g of OGTT were determined without influence of immunologic cross-reactivity in 32 patients with pancreatic cancer and 32 control subjects of similar age, sex, fasting blood glucose levels, and OGTT pattern. Results: The pancreatic cancer patients had higher proinsulin and lower C-peptide levels than the control subjects both in the non-diabetic and diabetic groups. The ratio of the sum of five proinsulin values observed at 0, 30, 60, 120, and 180 min to that of the five C-peptide values (sigma proinsulin/sigma C-peptide ratio) was 6.1 +/- 3.2% in patients with pancreatic cancer and 2.5 +/- 1.0% in control subjects (p < 0.05), while it was not associated with the diabetic pattern in OGTT. The sigma proinsulin/sigma C-peptide ratio was not associated with tumor size, location or resectability but was associated with the number of islets left within or close to cancer stroma. The increased sigma proinsulin/sigma C-peptide ratio decreased after tumor removal. Conclusions: Patients with pancreatic cancer are characterized by increased proinsulin secretion and decreased C-peptide production during OGTT probably due to impaired proinsulin conversion. Although further studies are required in a large scale of patients, measurement of proinsulin and C-peptide levels during OGTT should serve as an early marker to identify high risk groups of the disease. abstract_id: PUBMED:14515209 An improved solid phase synthesis of human proinsulin C-peptide Human proinsulin C-peptide with C-terminal glutamine amide could be prepared through solid phase method by combining the gamma-carboxyl group of glutamic acid with the amino group of MBHA resin. The protecting groups were cleaved by HF. MBHA resin is relatively inexpensive. The new method is another way for the preparation of human proinsulin C-peptide. The preparation of a human proinsulin C-peptide analogue using PAM resin was also reported. abstract_id: PUBMED:373354 In vitro conversion of proinsulin to insulin by cathepsin B and role of C-peptide. Cathepsin B, purified from isolated islets of Langerhans, when incubated with proinsulin under in vitro conditions could convert proinsulin to insulin and C-peptide, releasing free arginine and lysine.
When C-peptide, prepared from rat pancreas, was added to the incubation system consisting of proinsulin and cathepsin B, it completely inhibited the conversion of proinsulin to insulin. abstract_id: PUBMED:577034 Human proinsulin. C-peptide radioimmunoassay method. 125I labeling of human proinsulin C-peptide. 125I-labelled human-C-peptide was prepared by chloramine T method, enzymic method and active ester method, respectively. Using respective 125I-labelled human-C-peptides in human proinsulin-C-peptide RIA, we compared the binding (Bo/T%) to antibody, displacement by standard human-C-peptide, the recovery test and stability. The usable 125I-labelled antigen for human proinsulin-C-peptide RIA could be prepared by chloramine T method and enzymic method which labelled 125I to tyrosyl human proinsulin connecting peptide, and active ester method which conjugates 125I-labelled active ester to human proinsulin connecting peptide. The differences among those 125I-labelled antigens were not observed in displacement (B/Bo%) by standard human-C-peptide and the recovery test. In the case of constant preparation of 125I-labelled antigen for RIA, the enzymic method was the best from the viewpoint that the reaction ratio is stable and the stability of Bo/T% is good. Answer: High proinsulin and C-peptide levels have been associated with autonomic nervous dysfunction. A study involving patients with non-insulin-dependent diabetes mellitus (NIDDM) and control subjects found that both proinsulin and C-peptide levels were significantly inversely related to the sympathovagal balance of autonomic nervous function, as measured by the medium frequency to high-frequency power (MFP/HFP) ratio of heart rate variability (HRV) spectral analysis. This relationship persisted even after controlling for age, body mass index, and fasting glucose levels. However, the study could not determine whether proinsulin and C-peptide are directly involved in causing autonomic nervous dysfunction (PUBMED:9286948). The findings suggest that there is a significant association between these peptides and autonomic nervous function in both diabetic patients and non-diabetic control subjects. However, further research would be needed to establish a causal relationship and to understand the underlying mechanisms by which proinsulin and C-peptide levels might influence autonomic nervous system activity.
Instruction: Day case surgery training for surgical trainees: a disappearing act? Abstracts: abstract_id: PUBMED:20005311 Day case surgery training for surgical trainees: a disappearing act? Introduction: Over the past decade there has been considerable change to surgical training such as modernising medical careers which have raised concerns over exposure to operative experience. With the National Health Service (NHS) plan aiming for the majority of elective surgical cases to be performed as day cases we sought to assess the level of exposure modern day surgical trainees obtain in day case surgery. Methods: An anonymous electronic questionnaire survey was completed by 100 surgical trainees in surgical training across the United Kingdom (UK) from a variety of sub-specialities. 16 questions pertinent to day case surgery exposure were answered. Results: The majority of the trainees who completed the survey felt day case surgery is a vital part of their training as a surgeon. Only less than one-third of all the trainees had formal timetabled day case surgery lists. Of the 31 trainees who had scheduled day lists only 58% (n = 18) were consistently able to attend. The most common reasons for being unable to attend were rota issues and lack of encouragement from seniors. 90 trainees (90%) were not satisfied with their overall Day Case Surgery training. Conclusions: The survey reveals that the modern surgical trainee is gaining a low and inconsistent level of exposure to day case surgery despite being aware of the importance of this modality of training. An urgent review is required to ensure trainees become actively involved in day case surgery and are not missing out on this vital training opportunity. abstract_id: PUBMED:31290472 Paediatric day-case surgery in a new paediatric surgical unit in Northwestern Nigeria. Background: Day-case surgery is defined as when the surgical day-case patient is admitted for investigation or operation on a planned non-resident basis and who nonetheless requires facilities for recovery. A significant number of our patients were treated as day cases. This study was conducted to audit paediatric day-case surgery practice at our centre, to determine the indications as well as morbidity and mortality from day-case surgeries. Patients And Methods: This is a prospective study over a period of 14 months. The patients scheduled for surgeries were assessed in the paediatric surgical outpatient clinic and information obtained for each of the patients included age, sex, diagnosis, type of operation, type of anaesthesia and post-operative complications. The data were analysed using SPSS version 15.0 for windows. Results: A total of 182 patients were operated during the study period. The age range of patients was 0.5-156 months and the mean age was 46.6 months. There were 152 male patients (83.5%) and 30 female patients (16.5%). Most of the patients had intact prepuce for circumcision (34.1%). Two patients who had herniotomy developed superficial surgical site infections which were managed as outpatients. There were no readmissions or mortality. Conclusion: Intact prepuce for circumcision, as well as hernias and hydroceles, are the most common day cases in our centre and are associated with low morbidity and no mortality. abstract_id: PUBMED:12484585 Is day case surgery the key to basic surgical training? The logbooks of 5 senior house officers (SHOs) were audited to determine progression of surgical skills on a single vascular firm.
Total surgical experience and, in particular, experience in varicose vein and arterio-venous fistula surgery, performed in the day-case unit (DCU), were examined. Trainees were divided into those undertaking their first surgical SHO post (group 1, n = 2) and those who had had previous surgical exposure (group 2, n = 3) on the basic surgical training rotation. SHOs were exposed to a mean of 273 (+/- 41 SD) operative cases in 6 months. Emergency work comprised 15% (+/- 7%) of workload. Day cases accounted for 35% (+/- 3%) of elective workload. A mean of 66 (+/- 5) varicose vein and AVF cases were undertaken in the DCU. This represented 82% (+/- 6%) of day-case operative experience for the firm. SHOs undertook 12 (+/- 6) VV/AVF cases unassisted, 35 (+/- 5) cases with senior assistance, and 20 (+/- 11) as first assistance in the DCU. All SHOs progressed to being able to perform arterial bypass and amputation (with senior assistance) during their time on the firm. There was no significant difference in experience or progression to major vascular surgery between group 1 and group 2 in this study except in lower limb amputation procedures. It is concluded that vascular surgical firms can provide a good introduction to surgical skills. Most experience as first operator was gained in the DCU and we suggest that those undergoing basic surgical training might benefit from an attachment to the DCU early in their rotations. abstract_id: PUBMED:32550051 Surgical Skills Day: Bridging the Gap. Background The General Medical Council (GMC) requires all newly qualified doctors to be competent in certain surgical skills, including the provision of basic wound closure. Yet there is a profound lack of undergraduate competence in, and exposure to, basic surgical skills such as wound closure. The Surgical Skills Day (SSD) aimed to provide medical students with additional skills training. Methods Student self-assessment and instructors' assessment forms were completed prior to and following a workshop on basic wound closure skills. Paired t-tests were used to statistically compare the two pre and post-instruction data sets. Results A total of 46 students attended the SSD; 29 consented to the skills assessment. 100% (n = 29) self-reported improved competency in at least one of the skills following tuition (p < 0.001). Instructors' assessment agreed that 100% (n = 29) of students improved in at least one of the skills assessed (p < 0.001). 100% of the attendees agreed that additional practical surgical skills should be incorporated into the undergraduate curriculum. 64% (n = 21) of students also confirmed that they were more likely to pursue a career in surgery following the SSD. Conclusion Current clinical teaching in basic suturing is unsuitable for long term retention. SSDs can improve skills acquisition and elevate student confidence. This data builds on our previous work by documenting the high efficacy in skills acquisition as a result of SSD tuition. We recommend that SSDs be integrated into medical school curricula in order to address shortcomings in current undergraduate programmes. abstract_id: PUBMED:33544661 Day case laparoscopic cholecystectomy: Identifying patients for a 'COVID-Cold' isolated day-case unit during the pandemic. Background: The UK practice of laparoscopic cholecystectomy has reduced during the COVID-19 pandemic due to cancellation of non-urgent operations. Isolated day-case units have been recommended as 'COVID-cold' operating sites to resume surgical procedures.
This study aims to identify patients suitable for day case laparoscopic cholecystectomy (DCLC) at isolated units by investigating patient factors and unexpected admission. Method: Retrospective analysis of 327 patients undergoing DCLC between January and December 2018 at Ysbyty Gwynedd (District General Hospital; YG) and Llandudno General Hospital (isolated unit; LLGH), North Wales, UK. Results: The results showed that 100% of DCLCs in LLGH were successful; 71.4% of elective DCLCs were successful at YG. Increasing age (p = 0.004), BMI (p = 0.01), ASA Score (p = 0.006), previous ERCP (p = 0.05), imaging suggesting cholecystitis (p = 0.003) and thick-walled gallbladder (p = 0.04) were significantly associated with failed DCLC on univariate analysis. Factors retaining significance (OR, 95% CI) after multiple regression include BMI (1.82, 1.05-3.16; p = 0.034), imaging suggesting cholecystitis (4.42, 1.72-11.38; p = 0.002) and previous ERCP (5.25, 1.53-18.00; p = 0.008). Postoperative complications are comparable in BMI <35kg/m2 and 35-39.9kg/m2. Conclusions: Current patient selection for isolated day unit is effective in ensuring safe discharge and could be further developed with greater consideration for patients with BMI 35-39.9kg/m2. As surgical services return, this helps identify patients suitable for laparoscopic cholecystectomy at isolated COVID-free day units. abstract_id: PUBMED:33555439 Day case superficial parotidectomy-does it work? Purpose: To establish if day case superficial parotidectomy is feasible, safe and does not result in excess readmissions. Method: A retrospective review was carried out of all patients listed for superficial parotidectomy with day case intent by a single surgeon between January 2016 and December 2019 inclusively. The reasons for failure of same day discharge were established. Postoperative complications and readmissions were recorded. Our approach for a superficial parotidectomy typically includes the use of a 10Fr suction drain which is removed at 4 h postoperatively if the output is less than 30 ml. Results: Ninety-one consecutive superficial parotidectomies listed for day case surgery were eligible for inclusion. Seventeen patients failed to be discharged on the same day and were admitted giving a day case success rate of 81%. Most of these (n = 9) occurred in the first year of adopting day case surgery. The most common reason to admit patients was a late finish (n = 8, 47%). Six patients (25%) were admitted due to anaesthetic complications. One patient had a surgical complication requiring admission. Conclusion: Our series demonstrates that day case superficial parotidectomy using a surgical drain is feasible, safe and does not result in an unacceptable readmission rate. In our experience, surgical complications are an uncommon cause for day case failure. The most common cause for day case failure was a late finish. Postoperative complications including bleeding, seroma/salivary collection and facial nerve palsy were in keeping with or better than those quoted in the literature. abstract_id: PUBMED:26676613 Are we ready for day-case partial nephrectomy? Fast-track and day-case surgeries are gaining more and more importance. Their development was eased by the diffusion of minimal invasive surgical strategies and the consequential morbidity reduction. In the field of kidney cancer, seven cases of ambulatory radical nephrectomy were previously reported in the international literature.
Regarding robotic partial nephrectomy (PN), short postoperative pathways resulting in patients' discharge on postoperative day 1 were shown to be safe and feasible. We report our initial experience of robot-assisted PN discharged on postoperative day zero and discuss the criteria for adequate patient selection. Indeed, outpatient PN will obviously not be suitable for all patients, and careful selection will be mandatory. Both specific baseline patient's factors and postoperative events will have to be recognized for the first ones and prevented for the second ones. Safety, patient satisfaction, cost efficiency, and reproducibility will be the key factors to assess and promote day-case PN. abstract_id: PUBMED:1407631 Surgical day hospital: technical possibilities and organizational model The Authors propose an organizational model for a surgical day hospital program, which is being used for a pilot day surgery unit in the I Department of Surgery of the Rome University "La Sapienza". The program requires little capital investment, as it is closely linked, geographically and administratively, to the main surgical unit, and uses the present staff, facilities and support services. The model is based on a computerized LAN (Local Area Network), providing fast recording, scheduling, management and transfer of medical data for each patient. The present situation is reported in detail. Data from the authors' outpatient department for 1988, have been recorded and elaborated. The results show a low use of surgical day care, limited to minor surgical procedures, and with not a single operation performed under general anesthesia. The authors hope to see a growth in the use of day surgery and a more selective use of inpatient care. abstract_id: PUBMED:29234971 Elective ambulatory surgical care in Ireland-why it needs to be better coded, classified and managed. Background: In the year to July 2017, surgical disciplines accounted for 73% of the total national inpatient and day case waiting list and, of these, day cases accounted for 72%. Their proper classification is therefore important so that patients can be managed and treated in the most suitable and efficient setting. Aims: We set out to sub-classify the different elective surgical day cases treated in Irish public hospitals in order to assess their need to be managed as day cases and the consistency of practice between hospitals. Methods: We analysed all elective day cases that came under the care of surgeons between January 2014 and December 2016 and sub-classified them into those that were (A) true day case surgical procedures; (B) minor surgery or outpatient procedures; (C) gastrointestinal endoscopies; (D) day case, non-surgical interventions and (E) unclassified or having no primary procedure identified. Results: Of 813,236 day case surgical interventions performed over 3 years, 26% were adjudged to accord with group A, 41% with B, 23% with C, 5% with D and 5% with E. The ratio of A to B procedures did not vary significantly across the range of hospital types. However, there were some notable variations in coding and practices between hospitals. Conclusion: Our findings show that many day cases should have been performed as outpatient procedures and that there were variations in coding and practices between hospitals that could not be easily explained. Outpatient procedure coding and a better, more consistent, classification of day cases are both required to better manage this group of patients.
abstract_id: PUBMED:8732215 The training implications of day-case surgery. The marked increase in the use of day surgery has major implications for the provision of surgical training. It is essential that day surgery units be used effectively for surgical training. We identify current constraints on such training and suggest methods for improvement. Answer: The training in day case surgery for surgical trainees appears to be facing significant challenges and may be at risk of diminishing. A survey conducted among 100 surgical trainees in the UK revealed that less than one-third had formal timetabled day case surgery lists, and of those who did, only 58% were consistently able to attend due to rota issues and lack of encouragement from seniors. A staggering 90% of trainees were not satisfied with their overall day case surgery training (PUBMED:20005311). This suggests that despite recognizing the importance of day case surgery in their training, modern surgical trainees are not getting adequate exposure to this essential aspect of surgical practice. The situation is further complicated by the fact that day case surgery is becoming increasingly prevalent, with the NHS plan aiming for the majority of elective surgical cases to be performed as day cases. This shift underscores the need for surgical trainees to gain experience in day case procedures. However, the current state of training does not seem to support this need, indicating a disconnect between the goals of healthcare systems and the realities of surgical training. Moreover, the importance of day case surgery in training is highlighted by studies showing that it can be a key component in developing surgical skills. For instance, an audit of the logbooks of senior house officers on a vascular firm showed that most experience as first operator was gained in the day-case unit, suggesting that such units could be beneficial for trainees early in their rotations (PUBMED:12484585). Additionally, initiatives like the Surgical Skills Day have been shown to improve medical students' competency in basic surgical skills, such as wound closure, which are essential for day case procedures (PUBMED:32550051). In conclusion, the evidence suggests that day case surgery training is indeed at risk of becoming a disappearing act for surgical trainees, which could have implications for the quality of future surgical practice. An urgent review and restructuring of surgical training programs may be required to ensure that trainees receive the necessary exposure and experience in day case surgery to meet the evolving demands of healthcare systems.
Instruction: Management of cerebellopontine angle lipomas: need for long-term radiologic surveillance? Abstracts: abstract_id: PUBMED:24691513 Management of cerebellopontine angle lipomas: need for long-term radiologic surveillance? Objective: To date, only a very limited number of lipomas of the cerebellopontine angle (CPA) have been reported. Our objective was to examine clinical and radiologic features of CPA lipomas and determine the most appropriate management plan. Study Design: Retrospective case series. Setting: Tertiary referral center. Patients: Patients with CPA lipomas were identified through the skull base multidisciplinary meeting database. Interventions: Radiologic surveillance and clinical assessment. Main Outcome Measures: Tumor growth, assessed through radiologic measurements on serial magnetic resonance imaging, demographics, presenting symptoms, and any correlation between weight gain and lipoma growth were among the examined factors. Results: Of the 15 patients with CPA lipomas, six were female and nine were male, with an average age at presentation of 50.2 years (range, 31.7-76.4 yr) and an average follow-up time of 51.7 months (range, 6-216 mo). The lipomas were unilateral in all cases, nine on the right (60%) and six on the left (40%) side. None of the lipomas increased in size. All patients were treated conservatively. Sensorineural hearing loss was the main presenting symptom (80%) followed by tinnitus (46.7%) and vertigo (20%). None of the patients suffered from facial nerve dysfunction. There was no correlation between weight gain and tumor growth. Conclusion: CPA lipomas can be diagnosed accurately with appropriate magnetic resonance imaging techniques and be managed conservatively with safety. Cochleovestibular are the most common presenting symptoms, whereas facial nerve involvement is rare. CPA lipomas do not tend to grow and can be monitored on a less regular basis. abstract_id: PUBMED:3871257 Cerebellopontine angle lipoma. Lipomas rarely occur intracranially. Moreover, the cerebellopontine angle is one of the more unusual sites of such hamartomas. Of the 11 reported cases, all but three caused symptoms related to compression of the cranial nerves in the cerebellopontine angle. Only three separate cases have been studied by computed tomography, and in one the fat density was not recognized. This report deals with the clinical presentations, surgical management, and radiologic assessment of these lesions. abstract_id: PUBMED:9879134 Cholesteatoma of the cerebellopontine angle. Congenital cholesteatoma is the third most common tumor found in the cerebellopontine angle. It must be differentiated from acoustic neuromas, meningiomas, metastatic tumors, arachnoid cysts and lipomas. Symptoms include hemifacial spasm, progressive facial paralysis, hearing loss, tinnitus, vertigo, pain and otorrhea. Radiologic and magnetic resonance imaging frequently can be useful to establish a preoperative diagnosis. The treatment of choice is total removal of the lesion. Complete removal with preservation of normal structures is the most difficult and technically exacting procedure performed by the neurotologic surgeon. The clinical features and results from a series of 19 cases, nine of which extended into the cerebellopontine angle, are discussed. abstract_id: PUBMED:11939092 Cerebellopontine angle lipoma: clinical case Lipomas of the cerebellopontine angle are extremely rare. 
These tumors are probably maldevelopment lesions which can cause slowly progressive neurological symptoms. Including the present case, 90 lipomas in this localization have been described in the literature. The authors report a case of cerebellopontine angle lipoma in a 44-year-old male patient who suffered right hearing loss and tinnitus during seven months. The literature concerning this rare cerebellopontine angle tumor is reviewed. The symptoms, radiological features and surgical management are discussed. abstract_id: PUBMED:3488339 Computed tomography of a cerebellopontine angle lipoma. In this report we document the clinical, radiologic, surgical, and pathologic features of a cerebellopontine angle (CPA) lipoma, including the CT visualization of the seventh and eighth cranial nerves passing through the middle of the lesion, a feature previously undescribed. Comparison is made with other reported CPA lipomas. abstract_id: PUBMED:16798407 Lipoma of the cerebellopontine angle. Lipomas of the cerebellopontine angle (CPA) are unusual tumors that typically present with hearing loss, tinnitus, dizziness, and occasionally facial neuropathies. We describe the case of a healthy 42-year-old woman who presented with left-sided hearing loss and facial synkinesis. T1-weighted magnetic resonance imaging revealed an enhancing lesion of the left CPA with no signal on fat suppression sequences. Despite conservative therapy, the patient developed progressive hemifacial spasm, and a suboccipital craniotomy approach was used to debulk the tumor, which encased cranial nerves V, VII, VIII, IX, X, and XI. Surgical histopathology demonstrated mature adipocytes, consistent with lipoma. Two years after surgery, the patient remains free of facial nerve symptoms. Cerebellopontine angle lipomas are rare lesions of the skull base and are reliably diagnosed with T1-weighted and fat suppression magnetic resonance sequences, which we recommend in the routine radiologic workup of CPA tumors. Accurate preoperative diagnosis is crucial because most CPA lipomas should be managed conservatively. Partial surgical resection is indicated only to alleviate intractable cranial neuropathies or relieve brainstem compression. abstract_id: PUBMED:26043142 Nonschwannoma tumors of the cerebellopontine angle. Although the preponderance of cerebellopontine angle lesions are schwannomas, focused attention to patient clinical history, imaging studies, and tissue biopsies when indicated will aid in detection of less common lesions that might otherwise be misdiagnosed. This is most critical for pathologies that dictate different management paradigms be undertaken. abstract_id: PUBMED:2234906 Fluctuating hearing loss in lipoma of the cerebellopontine angle. Two cases of lipoma of the cerebellopontine angle are presented. Both of them showed fluctuating hearing loss, and one case exhibited Menière-like symptoms. In one of the cases, continued clinical surveillance is considered in view of a high chance of postoperative morbidity. abstract_id: PUBMED:8052393 Cerebellopontine angle lipoma: case report. Lipomas of the cerebellopontine angle are rare, although well described in the literature. A review of the literature with an emphasis on pathogenesis and management is presented. abstract_id: PUBMED:8363852 Cerebellopontine angle lipoma--case report Intracranial lipomas are rare. Cerebellopontine angle lipomas are extremely rare. We report a case of right cerebellopontine angle lipoma.
A 47-year-old woman has been suffering from right trigeminal neuralgia for about 1 year. Neurological examination showed right 5th and 8th cranial nerve dysfunction. MRI demonstrated the high intensity mass lesion at right cerebellopontine angle. The lesion was demonstrated as negative intensity in an opposed image by using Dixon's technique. CT showed the lesion as a very low density area (-104 Hounsfield units). Subtotal removal of the tumor was performed following right suboccipital craniectomy. Pathological diagnosis was lipoma. We review the literature on this rare tumor and discuss the clinical symptoms, radiological findings and management. We suggest that the operation should be prudently performed for the case with only hearing disturbance because there is a high possibility that hearing disturbance will worsen. Answer: Cerebellopontine angle (CPA) lipomas are rare intracranial tumors that can present with various symptoms, primarily related to cranial nerve compression. The management of CPA lipomas has been a subject of discussion, particularly regarding the need for long-term radiologic surveillance. A retrospective case series study at a tertiary referral center found that CPA lipomas can be managed conservatively with safety. In this study, none of the lipomas increased in size over an average follow-up time of 51.7 months, suggesting that these tumors do not tend to grow and can be monitored on a less regular basis (PUBMED:24691513). The main presenting symptoms were sensorineural hearing loss, tinnitus, and vertigo, while facial nerve dysfunction was rare. There was no correlation between weight gain and tumor growth, which further supports the notion that these lipomas are stable over time. Other reports in the literature have described the clinical presentations and radiologic assessment of CPA lipomas, noting that they can cause slowly progressive neurological symptoms (PUBMED:3871257, PUBMED:11939092). However, the consensus seems to be that accurate preoperative diagnosis is crucial and that most CPA lipomas should be managed conservatively. Partial surgical resection is indicated only to alleviate intractable cranial neuropathies or relieve brainstem compression (PUBMED:16798407). Given the evidence that CPA lipomas are generally non-growing and can be managed conservatively, long-term radiologic surveillance may not be necessary for all patients. Instead, a tailored approach based on individual patient symptoms and tumor characteristics may be more appropriate. This approach would likely involve initial imaging to establish a baseline, followed by less frequent monitoring unless new symptoms arise or significant changes are noted on follow-up imaging. It is important to note that while CPA lipomas are rare, focused attention to clinical history and imaging studies is essential for accurate diagnosis and appropriate management (PUBMED:26043142).
Instruction: Do we really need to coat the novel silicone intranasal splints with antibiotics? Abstracts: abstract_id: PUBMED:27105976 Do we really need to coat the novel silicone intranasal splints with antibiotics? Purpose: The novel silicone intranasal splints are suggested to resist biofilm formation due to their surface characteristics. We aimed to ascertain the necessity of coating these splints with antibiotics to prevent splint associated infections, in vitro. Materials And Methods: Pieces of Doyle II airway nasal splints made of medical grade silicone were divided into two test groups, treated with either (i) 0.2% nitrofurazone solution or (ii) 0.2% nitrofurazone containing ointment, and a control group, treated with (iii) 0.9% saline. Splint pieces were then incubated with Staphylococcus aureus solutions at 37°C for 48 and 96h. Following this, the splint pieces were incubated in 20ml Mueller Hinton agar and appearing colonies were counted. Results: Following 48 and 96h of incubation, the colonization rates in the saline group were significantly higher than the nitrofurazone ointment group (p<0.001). The colonization rates in the liquid nitrofurazone group were significantly lower in comparison to the nitrofurazone ointment group (p<0.001 and p=0.019 respectively). Conclusions: The method of coating the splints with antibiotic was superior to using uncoated splints in terms of preventing S. aureus colonization. The rather smooth surfaces of the splints were insufficient to block bacterial colonization and coating them with antibiotics seems to be beneficial for the prevention of infections. abstract_id: PUBMED:33470781 Polysiloxane Nanofilaments Infused with Silicone Oil Prevent Bacterial Adhesion and Suppress Thrombosis on Intranasal Splints. Like all biofluid-contacting medical devices, intranasal splints are highly prone to bacterial adhesion and clot formation. Despite their widespread use and the numerous complications associated with infected splints, limited success has been achieved in advancing their safety and surface biocompatibility, and, to date, no surface-coating strategy has been proposed to simultaneously enhance the antithrombogenicity and bacterial repellency of intranasal splints. Herein, we report an efficient, highly stable lubricant-infused coating for intranasal splints to render their surfaces antithrombogenic and repellent toward bacterial cells. Lubricant-infused intranasal splints were prepared by creating superhydrophobic polysiloxane nanofilament (PSnF) coatings using surface-initiated polymerization of n-propyltrichlorosilane (n-PTCS) and further infiltrating them with a silicone oil lubricant. Compared with commercially available intranasal splints, lubricant-infused, PSnF-coated splints significantly attenuated plasma and blood clot formation and prevented bacterial adhesion and biofilm formation for up to 7 days, the typical duration for which intranasal splints are kept. We further demonstrated that the performance of our engineered biointerface is independent of the underlying substrate and could be used to enhance the hemocompatibility and repellency properties of other medical implants such as medical-grade catheters. abstract_id: PUBMED:27682847 Quantification of biofilm formation on silicone intranasal splints: An in vitro study. Objectives: Biofilms are associated with persistent infections and resistant to conventional therapeutic strategies.
The aim of this study was to investigate the quantity of biofilm produced on silicone intranasal splints. Methods: Quantity of biofilm formation on silicone splints (SS) was tested on 15 strains of Staphylococcus aureus and Moraxella catarrhalis, respectively. Antimicrobial susceptibility testing was performed in accordance with European Committee on Antimicrobial Susceptibility Testing recommendations. Results: All tested strains formed different amounts of biofilm on SS: 66.7% S. aureus and 93.3% M. catarrhalis were weak biofilm producers and 33.3% S. aureus and 6.7% M. catarrhalis were moderate biofilm producers. S. aureus formed significantly higher quantity of biofilm compared with M. catarrhalis (p < 0.05). Multidrug resistant S. aureus produced significantly higher amount of biofilm compared with non-multidrug resistant strains (p < 0.05). Conclusion: Quantity of biofilm on SS is highly dependent on bacterial species and their resistance patterns. Future studies are needed to ascertain another therapeutic option for prophylaxis prior to SS placement. abstract_id: PUBMED:7073611 Magnetic intranasal splints. A magnet-containing silicone rubber intranasal splint is described for use following septoplasty. The splints hold the septal flaps in place by magnetic attraction and eliminate the need for packing. They produce minimal discomfort, are easily inserted and removed, and allow nasal breathing while in place. abstract_id: PUBMED:31390881 Intranasal Septal Splints: Prophylactic Antibiotics and Nasal Microbiology. Objectives: Intranasal septal splints are often used in nasal septal surgeries. Routine use of postoperative antibiotics is an accepted practice, although data regarding its efficacy in preventing postsurgical complications are limited. This study aimed to examine bacterial colonization on septal splints following prophylactic antibiotic therapy and the association with postoperative infections. Methods: Fifty-five patients underwent septoplasty by a single surgeon between March 2015 and April 2016. All had intranasal septal splints and were given antibiotic prophylaxis for 7 days until removal of splints. Nasal cultures were taken before surgery, and septal splints were examined for bacterial colonization following their removal. Results: Thirty-six patients (65%) had positive nasal culture prior to surgery. The most common isolates were Staphylococcus aureus (30%) and Enterobacteriaceae species (66%). All these patients had postoperative bacterial colonization on septal splints. In 15 patients with negative preoperative cultures, bacteria were isolated postoperatively. An increased resistance profile was documented postoperatively in 9 patients (16%), including two with multidrug resistance. In two of these patients preoperative wild-type strains acquired antibiotic resistance postoperatively. No adverse drug reactions to antibiotics were reported. Conclusions: Increased bacterial growth and emergence of resistant strains were observed on intranasal septal splints despite prophylactic antibiotic treatment. Nonetheless, this did not translate into clinical infection. Thus, considering antibiotics overuse and increasing bacterial resistance, further research is needed to determine the role of antibiotic prophylaxis in the setting of intranasal splints. abstract_id: PUBMED:25621239 The value of intranasal splints after partial inferior turbinectomy. To assess the value of using the intranasal septal splint after partial inferior turbinectomy (PIT) surgery.
Prospective, randomized comparative study. The study was conducted over a period of 2 years from January 2012 to January 2014 at Minia University hospital, Minia, Egypt. A total of 100 patients underwent bilateral PIT. They were randomly divided into 2 groups. Group A included 50 patients who had PIT with intranasal splints and group B included 50 patients who had PIT without splints. A comparison was made between the 2 groups regarding the postoperative pain, degree of nasal obstruction and the degree of tissue healing and adhesions formation at 2 time points (2 and 4 weeks postoperatively). At 2 weeks postoperatively: visual analogue score (VAS) for the pain was 5 in group A versus 2.1 in group B (P = 0.01), VAS for nasal obstruction was 6 in group A versus 5 in group B (P = 0.328), 70 % of patients had good healing in group A versus 24 % in group B (P = 0.02). At 4 weeks postoperatively: VAS for the pain was 1.5 in group A versus 1.8 in group B (P = 0.423), VAS for nasal obstruction was 7 in group A versus 6 in group B (P = 0.353), 80 % of patients had good healing in group A versus 54 % in group B (P = 0.03). The use of intranasal septal splints after PIT without septal surgery can cause increased postoperative pain in the short term follow-up period with significant evidence of decreasing rates of intranasal adhesions. abstract_id: PUBMED:29380712 History of intranasal splints. Objective: Intranasal splints have long been utilised as a post-operative adjunct in septoplasty, intended to reduce the risk of adhesions and haematoma formation, and to maintain alignment during healing. Methods: A Medline literature review of the history and evolution of intranasal splint materials and designs was performed. Advantages and disadvantages of various splints are discussed. Results: Intranasal splints fashioned from X-ray film were first reported in 1955. Since then, a variety of materials have been utilised, including polyethylene coffee cup lids, samarium cobalt magnets and dental utility wax. Most contemporary splints are produced from silicon rubber or polytetrafluoroethylene (Teflon). Designs have varied in thickness, flexibility, shape, absorption and the inclusion of built-in airway tubes. Future directions in splint materials and designs are discussed. Conclusion: Intranasal splints have steadily evolved since 1955, with numerous novel innovations. Despite their simplicity, they play an important role in nasal surgery and will continue to evolve over time. abstract_id: PUBMED:1555312 Intranasal splints and their effects on intranasal adhesions and septal stability. Intranasal splints have been used to maintain septal stability and prevent intranasal adhesions following septal surgery. However, their efficacy and attendant morbidity have received surprisingly little attention. Our prospective study of 100 adults was divided into patients undergoing septoplasty or submucous resection of the nasal septum alone (n = 50) and those undergoing combined septal and inferior turbinate surgery (n = 50). All patients were randomized to have paired silicon rubber splints inserted for 7 days or not at all. All noses were additionally packed with 2 pieces of Jelonet for 12-20 h and examined and cleaned at 1 and 6 weeks post-operatively. The position of the septum, patency of the airways, presence of adhesions and degree of discomfort were recorded.
Statistical analysis of the 89 complete sets of results obtained indicated splints added significantly to post-operative discomfort in both groups, with no demonstrable benefit to the patient. abstract_id: PUBMED:2614233 Treatment of auricular haematoma by silicone rubber splints. A modification of existing techniques for the treatment of auricular haematoma is described. This method employs incision, drainage and the application of silicone rubber splints (Silastic). Existing methods of treatment are reviewed and discussed. A retrospective review of 16 cases using this technique showed a satisfactory cosmetic result in all cases. abstract_id: PUBMED:24065190 Influence of surface properties of Merocel® (polyvinyl acetal) and silicone nasal splints on biofilm formation. The objective of the study was to investigate biofilm formation on Merocel(®) and silicone nasal splint after nasal septal surgery. 50 patients who were scheduled to undergo nasal septal surgery were included in this study. The patients were randomized into receiving an insert of Merocel(®) or silicone splint after septoplasty. In group 1 (8 females, 17 males) and group 2 (10 females, 15 males), Merocel (®) packs or silicone splints were inserted into nasal cavities at the end of the procedures, respectively. All packs were removed 48 h after insertion, and samples were taken from the packs under sterilized conditions. Scanning electron microscopy was performed to observe biofilm formation on the surfaces of Merocel(®) and silicone splints. Biofilm formation was observed in 25 (100%) and 3 (12%) of the Merocel(®) and silicone splint samples, respectively. Our study revealed that biofilm formation on Merocel(®) packs is significantly higher than silicone splints, mainly due to the different texture and surface properties of these materials. Considering the hazardous effects of biofilm formation on humans, our observations in this study may guide surgeons to choose the most appropriate packing material after nasal septal surgery. Answer: The necessity of coating novel silicone intranasal splints with antibiotics has been a subject of investigation due to concerns about biofilm formation and associated infections. One study found that coating silicone intranasal splints with antibiotics was superior to using uncoated splints in preventing Staphylococcus aureus colonization, suggesting that the smooth surfaces of the splints alone were insufficient to block bacterial colonization (PUBMED:27105976). However, another study reported that polysiloxane nanofilament coatings infused with silicone oil on intranasal splints significantly prevented bacterial adhesion and biofilm formation for up to 7 days, indicating that such coatings could enhance the antithrombogenicity and bacterial repellency of intranasal splints without the need for antibiotics (PUBMED:33470781). Moreover, research has shown that the quantity of biofilm on silicone splints is highly dependent on bacterial species and their resistance patterns, with multidrug-resistant Staphylococcus aureus producing significantly higher amounts of biofilm compared to non-multidrug resistant strains (PUBMED:27682847). This suggests that antibiotic coating might be particularly important when dealing with resistant bacterial strains. Despite the use of prophylactic antibiotics, increased bacterial growth and emergence of resistant strains were observed on intranasal septal splints, although this did not translate into clinical infection (PUBMED:31390881). 
This raises questions about the role of antibiotic prophylaxis in the setting of intranasal splints and suggests that further research is needed to determine its necessity. In conclusion, while there is evidence that antibiotic coatings can prevent bacterial colonization on silicone intranasal splints, alternative coating strategies such as lubricant-infused polysiloxane nanofilaments have also shown promise in preventing biofilm formation. The decision to coat intranasal splints with antibiotics may depend on factors such as the risk of infection, the presence of resistant bacterial strains, and the availability of effective non-antibiotic coatings. Further research is warranted to optimize the prevention of infections associated with intranasal splints.