input
stringlengths
6.82k
29k
Instruction: Do unequal societies cause death among the elderly? Abstracts: abstract_id: PUBMED:33693274 Cause of Death and Associated Factors in Elderly Patients With Atrial Fibrillation - Long-Term Retrospective Study. Background: Oral anticoagulant (OAC) therapy reduces the risk of stroke in patients with atrial fibrillation (AF). This study elucidated the causes of death and related factors in elderly Japanese AF patients. Methods and Results: Over a median (interquartile range [IQR]) follow-up period of 46 (20-76) months, there were 171 all-cause deaths (28% cardiovascular, 46% non-cardiovascular, and 26% unknown causes) among 389 AF patients (median [IQR] age 80 [74-85] years; CHAD2DS2-VASc score 5 [4-6]). Cox regression analysis indicated that diabetes was associated with an increase in all-cause death (hazard ratio [HR] 1.48; 95% confidence interval [CI] 1.02-2.13), whereas hypercholesterolemia (HR 0.53; 95% CI 0.35-0.79), pre-existing heart failure (HR 0.67; 95% CI 0.48-0.95), and OAC use (HR 0.62; 95% CI 0.44-0.88) were associated with reductions in all-cause death. Pre-existing heart failure was associated with both cardiovascular (HR 3.03; 95% CI 1.33-8.20) and non-cardiovascular (HR 0.44; 95% CI 0.30-0.65) deaths, in opposite directions. OAC use was associated with a reduction in cardiovascular death (HR 0.34, 95% CI 0.17-0.69). The predominance of non-cardiovascular death and death-related factors were equivalent regardless of when observations started (before 2009 or in 2009 and later). Conclusions: The predominant cause of death in elderly Japanese AF patients was non-cardiovascular. Distinct clinical factors were associated with cardiovascular and non-cardiovascular death. abstract_id: PUBMED:12045792 Diabetes mellitus mortality among elderly as an underlying or secondary cause of death Objective: To analyze diabetes mellitus-related mortality among elderly and the rate of undereporting of diabetes mellitus as a cause of death when statistical data on diabetes exclusively on the underlying cause of death are considered. Methods: A total of 2.974 death certificates of elderly people living in a housing project in the city of Rio de Janeiro were revised. The study period was 1994. Of them, 291 deaths were due to diabetes mellitus, 150 as the underlying and 141 as the secondary cause of death. The proportion of deaths where diabetes was stated as the underlying cause in relation to the total of diabetes deaths was calculated globally and categorized by sex and age groups. Results: Of the 291 deaths studied, 138 (47.4%) were men and 153 (52.6%) were women. Mortality rates showed a continuous age increase and were higher among men, though sex difference was smaller when only the underlying cause was considered. It was found a higher proportion of deaths (22%) occurring at home. Overall rate of diabetes deaths as the underlying cause was 51.5%, with higher rates seen in women. Conclusions: The analysis of mortality statistics based exclusively on the underlying cause of death can yield misleading profiles due to unrandomized underreporting. There is a need of further studies with diabetic elderly cohorts for a more accurate mortality analysis in this population group. abstract_id: PUBMED:24361802 Noncardiovascular death, especially infection, is a significant cause of death in elderly patients with acutely decompensated heart failure. Background: Despite marked improvements in treatment strategies for heart failure (HF), the mortality rate of elderly patients with HF is still high. Detailed causes of death have not been fully understood. Methods And Results: We studied 459 consecutive patients with acute decompensated HF (ADHF) emergently admitted to our hospital from 2007 to 2011. Patients were divided into 2 groups: <75 years old (younger group; n = 225) and ≥75 years old (elderly group; n = 234). All-cause death, cardiovascular death, and noncardiovascular death were assessed as adverse outcomes. Compared with the younger group, the elderly group was characterized by a higher proportion of women and hypertensive patients and higher left ventricular ejection fraction. During a mean follow-up of 20.7 months, a total of 174 patients (37.9%) died. All-cause death was significantly higher in the elderly group than in the younger group (46.6% vs 28.9%; P < .0001), and this difference was caused by an increase in noncardiovascular deaths (20.9% vs 9.3%; P < .001), especially deaths due to infection (10.7% vs 4.0%; P < .01). Cardiovascular deaths did not differ between the 2 groups. Conclusions: Noncardiovascular deaths, most of which were caused by infection, were frequent among elderly patients with ADHF. abstract_id: PUBMED:29914024 Impact of Mild Cognitive Impairment on Mortality and Cause of Death in the Elderly. Background: Mild cognitive impairment (MCI) is a cognitive state that lies on the continuum between normal aging and dementia, and the prevalence of MCI is higher than dementia. However, the risk for mortality of people with MCI has been far less studied than that of people with dementia, and the population attributable risk percent (PAR%) of death attributable to MCI has not been estimated yet. Objective: To investigate the impact of MCI on mortality and the cause of death in the elderly, and to estimate the PAR% of deaths attributable to MCI. Methods: Data came from 7,315 elderly subjects aged ≥60 years without dementia from two cohort studies with diagnostic assessments of MCI at baseline. Deaths among participants were confirmed through the nationwide mortality database of Statistics Korea. Results: MCI increased the risk of mortality in a multivariate Cox proportional model adjusting for age, sex, education, smoking, alcohol drinking, chronic illness, depression, vascular components, and cohort (hazard ratio = 1.59, 95% confidence interval 1.30, 1.94). PAR% of death attributable to MCI was 10.7% for age 65-74 years, 16.0% for age 75-84 years, and 24.2% for age ≥85 years. In the elderly with MCI, mortality risks from cerebrovascular disease, respiratory disease, and external causes were higher than in the cognitively normal elderly. Conclusions: Our results suggest that the mortality risk of MCI in Asian countries may be comparable to that in Western countries, and MCI can contribute to the death of the elderly as much as dementia. abstract_id: PUBMED:24622228 Place and cause of death in community-dwelling disabled elderly people Aim: To examine the place and cause of death in community-dwelling disabled elderly people. Methods: The baseline data of 1,875 participants and their caregivers in the Nagoya Longitudinal Study for Frail Elderly were used for the analysis. Cox proportional hazard models were used to assess the associations between the variables and the place of death during the 3-year follow-up period. Results: During the observation period of three years, 454 died (hospital death: 347, home death: 107). In total, the rates of pneumonia-, cancer- and heart failure-related death were 22.7%, 14.5%, and 13.2%, respectively. Among the home deaths, 22.4% were age-related deaths and 18.7% were heart failure-related deaths. Females, older, and participants with dementia were more likely to die at home, while those with cancer or a spouse caregiver were more likely to die in the hospital. There were no differences in the levels of caregiver burden or formal service use between the cases of home and hospital death. Multivariate Cox hazard models revealed that home death was associated with an older age and the absence of diabetes mellitus and cancer at baseline. Conclusions: We demonstrated that death at home among community-dwelling disabled elderly is associated with an older age, and the absence of diabetes mellitus and cancer. Due to the lack of important factors that should be addressed, a further study is required in the future. abstract_id: PUBMED:26976464 Isolation and unnatural death of elderly people in the aging Japanese society. Purpose: Japanese society has reached an unprecedented level of aging, with elderly people accounting for 25.1% of the population in October 2013. These changes have created concerns regarding deaths among the elderly. In this study, we compared recent forensic autopsy cases with cases from about 20 years ago, with the goal of understanding the context of death among the elderly within Japanese society today. Methods: We investigated the forensic autopsy records of 297 people aged 65 years or above. In order to examine the effect of residential circumstances, we classified these cases into two groups: people who lived alone (group A) and those who lived with their family (group B). Forty-five of these autopsy cases were conducted about 20 years ago (1989 to 1993) and 252 cases were recent (2009 to 2013). The cases were limited to people who had been found dead or in a critical situation at home. We investigated the first finder, the period of time elapsed between death and discovery, and the cause of death. Results: The incidence of the first finder being a family member was more than 20% greater in group B compared with group A. The proportions of cases for which it took more than three days for someone to find the body or an abnormal situation were about 14% and 7% in groups A and B, respectively, 20 years ago, and about 48% and 19% among the recent cases. These proportions were significantly higher among the recent cases. Among recent cases, a post-mortem elapsed time of more than 3 days occurred more often in group A than group B (p=0.0002). None of the older cases had an unknown cause of death in either group. However, among the recent cases from both groups, 20-30% of cases resulted in unknown causes of death. The incidences of unknown causes of death were significantly higher among the recent cases in both groups (p=0.015) and in group B alone (p=0.037). The incidences of murder cases were significantly lower in group B among the recent cases (p=0.0022). Discussion: Elderly people who live alone are not easily found or aided when in critical situations, and they may only be discovered after death. Prolongation of the postmortem interval (PMI) results in the deterioration of the corpse making determination of cause of death problematic. The results of this study suggest that there are three factors that isolate elderly people and increase the difficulty in determining their cause of death: reduced communication with family members, reduced communication with neighbors or the community, and the increasing prevalence of the nuclear family. In group B, the prolonged discovery time and the increased incidence of unknown causes of death suggest reduced communication with family members, even though the incidence of being found by a family member was higher than in group A. The murder rate was significantly lower in group B, which may suggest that cases of domestic murder were overlooked. Support for a safe life and peaceful ending for the elderly requires a system based on three factors: remote monitoring to ensure safety, the establishment of elderly groups providing mutual support, and increased visits from welfare workers. Understanding the circumstances of the elderly who die alone is beneficial to countries facing an aging society with weakened family or community structures, and who hope for better support for the elderly. abstract_id: PUBMED:28556657 Demographic and social predictors of place of death in the group of elderly Analysis of data from the 2814 Medical certificates of death was performed to reveal socio-demographic characteristics of citizens who died at elderly age, depending on the place of death and alcohol anamnesis. Divorced or single elderly people with lower educational status had greater odds ratio to die «on the street» or «at home» rather than in a «medical organization» (respectively to characteristics of demographic status and the structure of causes of death). Clinically verified diagnosis of «Mental and behavioral disorders due to alcohol» (F10) in anamnesis was not associated with the «choice» of place of death among elderly people. Obtained results can be used to develop the regional programs in the field of public health. abstract_id: PUBMED:24985723 Cause of death the first year after curative colorectal cancer surgery; a prolonged impact of the surgery in elderly colorectal cancer patients. Background: The 1-year mortality after colorectal cancer surgery is high and explains age related differences in colorectal cancer survival. To gain better insight in its etiology, cause of death for these patients was studied. Methods: All 1924 patients who had a resection for stage I-III colorectal cancer from 2006 to 2008 in the Western region of the Netherlands were identified. Data were merged with cause of death data from the Central Bureau of Statistics Netherlands. To calculate excess mortality as compared to the general population, national data were used. Results: Overall 13.2% of patients died within the first postoperative year. One-year mortality increased with age. It was as high as 43% in elderly patients that underwent emergency surgery. In 75% of patients, death was attributed to the colorectal cancer. In 25% of all patients, registered deaths were attributed to postoperative complications. Elderly patients with comorbidity more frequently died due to complications (p < 0.01). Death of other causes was similar to background mortality according to age group. Conclusion: In the presently studied cohort of patients that died within one year of surgery, cause of death was predominantly attributed to colorectal cancer. However, because it is not to be expected that in this cohort the number of deaths from recurrences is very high, the excess 1-year mortality indicates a prolonged impact of the surgery, especially in elderly patients. Therefore, in these patients we should focus on limiting the physiological impact of the surgery and be more involved in the post-hospital period. abstract_id: PUBMED:15727970 Fall-induced deaths among elderly people. Falls and fall-induced injuries in older people are a major public health problem in modern societies with aging populations. Injury is the fifth leading cause of death in older adults, and most of these fatal injuries are related to falls. We have assessed the nationwide trends in fall-induced deaths of older people in Finland for more than 3 decades (1971-2002), and the results showed that the number of fall-induced deaths among elderly Finns is clearly increasing, especially among men. abstract_id: PUBMED:28146162 Use of primary diagnosis during hospitalization in the Unified Health System (Sistema Único de Saúde) to qualify information regarding the underlying cause of natural deaths among the elderly. Introduction:: Ill-defined causes of death are common among the elderly owing to the high frequency of comorbidities and, consequently, to the difficulty in defining the underlying cause of death. Objective:: To analyze the validity and reliability of the "primary diagnosis" in hospitalization to recover the information on the underlying cause of death in natural deaths among the elderly whose deaths were originally assigned to "ill-defined cause" in their Death Certificate. The hospitalizations occurred in the state of Rio de Janeiro, in 2006. Methods:: The databases obtained in the Information Systems on Mortality and Hospitalization were probabilistically linked. The following data were calculated for hospitalizations of the elderly that evolved into deaths with a natural cause: concordance percentages, Kappa coefficient, sensitivity, specificity, and the positive predictive value of the primary diagnosis. Deaths related to "ill-defined causes" were assigned to a new cause, which was defined based on the primary diagnosis. Results:: The reliability of the primary diagnosis was good, according to the total percentage of consistency (50.2%), and fair, according to the Kappa coefficient (k = 0.4; p < 0.0001). Diseases related to the circulatory system and neoplasia occurred with the highest frequency among the deaths and the hospitalizations and presented a higher consistency of positive predictive values per chapter and grouping of the International Classification of Diseases. The recovery of the information on the primary cause occurred in 22.6% of the deaths with ill-defined causes (n = 14). Conclusion:: The methodology developed and applied for the recovery of the information on the natural cause of death among the elderly in this study had the advantage of effectiveness and the reduction of costs compared to an investigation of the death that is recommended in situations of non-linked and low positive predictive values. Monitoring the mortality profile by the cause of death is necessary to periodically update the predictive values. Answer: The provided abstracts do not directly address the question of whether unequal societies cause death among the elderly. However, they do offer insights into various factors that contribute to mortality in elderly populations, including medical conditions, social circumstances, and environmental factors. For instance, diabetes was found to be associated with an increase in all-cause death among elderly Japanese patients with atrial fibrillation (PUBMED:33693274). Non-cardiovascular deaths, particularly due to infection, were frequent among elderly patients with acutely decompensated heart failure (PUBMED:24361802). Mild cognitive impairment was shown to increase the risk of mortality, with higher mortality risks from cerebrovascular disease, respiratory disease, and external causes compared to cognitively normal elderly (PUBMED:29914024). Social factors also play a role in elderly mortality. Elderly individuals who are divorced, single, or have a lower educational status are more likely to die outside of medical organizations (PUBMED:28556657). Isolation and living alone were associated with unnatural death and difficulty in determining the cause of death in elderly Japanese society (PUBMED:26976464). Additionally, the place and cause of death in community-dwelling disabled elderly people were influenced by factors such as age, presence of diabetes mellitus, and cancer (PUBMED:24622228). While these abstracts do not explicitly link societal inequality to elderly mortality, they suggest that a combination of health-related issues and social determinants, such as living conditions, social support, and access to healthcare, can influence mortality outcomes. These factors may be exacerbated in unequal societies where disparities in income, education, and social support are more pronounced, potentially leading to higher mortality rates among the elderly. However, to conclusively answer the question, more targeted research on the impact of societal inequality on elderly mortality would be necessary.
Instruction: Interstate variability of the statute of limitations for medical liability: a cause for concern? Abstracts: abstract_id: PUBMED:18362806 Interstate variability of the statute of limitations for medical liability: a cause for concern? Background: Widespread variation in state laws produces significant differences in state-to-state liability risk and insurance cost. These variations may produce higher medical liability for certain pediatric practitioners because many states have a significantly longer period of potential liability for physicians caring for children. This study reviewed state laws for variability in the period of medical liability for pediatric and adult patients and considered some of the impacts resulting from such variability. Methods: Statutes of limitations applicable to claims for medical malpractice were reviewed for all 50 states and the District of Columbia, as were laws that affect the statute of limitations based on a patient's age. For each state, the average, minimum, and maximum values for the statute of limitations in years were calculated for the following age groups: newborn, 6, 12, and 25 years. Results: In most states, the length of time a claimant has to bring a claim is longer for the youngest patients and decreases as patients get older. In the United States, the average time to bring a claim in years for a newborn and those aged 6, 12, and 25 years was 12, 8, 5, and 2 years, respectively. The maximum time to bring a claim in years for a newborn and those aged 6, 12, and 25 years was 23, 17, 11, and 5 years, respectively. Conclusions: States demonstrate significant variability in the duration and application of statutes of limitations, with younger patients typically having a longer period to file a liability lawsuit. In most regions, pediatric practitioners face increased liability risk based upon significantly longer limitation periods for minor patients. Studies have suggested that medical students are considering specialty liability risk when choosing careers. Increased liability risk faced by pediatric specialists may lead to fewer practitioners entering these fields. Extended periods of liability for pediatric specialists need additional study to evaluate the effects on liability insurance, specialty choice, and specialty care access. Level Of Evidence: Descriptive study (audit of existing laws), level of evidence 2. abstract_id: PUBMED:38297503 Repressed Memories (of Sexual Abuse Against Minors) and Statutes of Limitations in Europe: Status Quo and Possible Alternatives. One of the most heated debates in psychological science concerns the concept of repressed memory. We discuss how the debate on repressed memories continues to surface in legal settings, sometimes even to suggest avenues of legal reform. In the past years, several European countries have extended or abolished the statute of limitations for the prosecution of sexual crimes. Such statutes force legal actions (e.g., prosecution of sexual abuse) to be applied within a certain period of time. One of the reasons for the changes in statutes of limitations concerns the idea of repressed memory. We argue that from a psychological standpoint, these law reforms can be detrimental, particularly when they are done to endorse unfounded psychological theories. The validity of testimonies is compromised many years after the alleged facts, and abolishing the statute of limitations increases the chance that even more (false) recovered memories of abuse might enter the courtroom. We propose solutions to these changes such as establishing an independent expert committee evaluating claims of sexual abuse. abstract_id: PUBMED:32765577 Stem Cell Statute in Jordan: Leading the Way. The use of stem cells in research has caused much controversy and ethical dilemma. The primary source of stem cells is human embryos, a source which has been confronted with objections based on ethical, moral, and religious positions. Jordan has passed the first of its-kind Statute in the region, aiming at regulating the use of stem cells both for therapeutic and research purposes. The Statute adopted a regulatory approach between the restrictive and intermediate. The Statute, however, pays more attention to stem cell banking in many of its articles. Many critical aspects in regulating stem cell research activities are overlooked. This is including but not limited to the process of informed consent, protecting privacy, maintaining confidentiality, the need for a national entity responsible for regulating embryonic stem cell (ESC) research, and requirements of monitoring activity. The authors recommend further review of the current Statute in light of the deficiencies discussed so as to develop a more comprehensive and coherent Statute. abstract_id: PUBMED:37292262 A systematic reviews and meta-analyses of interruption of the statute of limitations for civil claims: A comparative study of Arab legislations. Background: This study deals with interruption of statutory time limits for civil claims. It is based on the assumption that the reason for granting an interruption of the statutory time limit is the manifestation of an intention to claim one's right, instead of keeping silent about it. Methods: The analytical-comparative method is used to analyse and compare the provisions on the interruption of prescription. This study also includes a review of the literature related to the phenomenon being investigated. Therefore, The selection of data complies with the Systematic Reviews and Meta-Analyses (PRISMA). The research design includes an analysis of different legislations and a review of the most significant previous studies which afford a useful guide for discerning between straightforward cases (like the filing of a lawsuit or the initiation of an executive procedure by the creditor) and more difficult ones (like the initiation of merely precautionary proceedings, or the lodging of a lawsuit that is turned down for reasons of lack of jurisdiction or of outright inadmissibility). Results: Interruption, unlike suspension, entails the start of a fresh statutory time limit. Moreover, pronouncement of lack of jurisdiction does not prevent the lawsuit from coming into being, since it is a rejection of the lawsuit on merely formal grounds, and it does not affect the basis of the claim. Conclusions: The selected jurisdictions agree that claims that are merely precautionary, but that do not involve realization of the substantive entitlement, do not necessarily entail an interruptive effect. abstract_id: PUBMED:37831635 Interstate Telemedicine for Urologic Cancer Care. Purpose: US states eased licensing restrictions on telemedicine during the COVID-19 pandemic, allowing interstate use. As waivers expire, optimal uses of telemedicine must be assessed to inform policy, legislation, and clinical care. We assessed whether telemedicine visits provided the same patient experience as in-person visits, stratified by in- vs out-of-state residence, and examined the financial burden. Materials And Methods: Patients seen in person and via telemedicine for urologic cancer care at a major regional cancer center received a survey after their first appointment (August 2019-June 2022) on satisfaction with care, perceptions of communication during their visit, travel time, travel costs, and days of work missed. Results: Surveys were completed for 1058 patient visits (N = 178 in-person, N = 880 telemedicine). Satisfaction rates were high for all visit types, both interstate and in-state care (mean score 60.1-60.8 [maximum 63], P > .05). More patients convening interstate telemedicine would repeat that modality (71%) than interstate in-person care (61%) or in-state telemedicine (57%). Patients receiving interstate care had significantly higher travel costs (median estimated visit costs $200, IQR $0-$800 vs median $0, IQR $0-$20 for in-state care, P < .001); 55% of patients receiving interstate in-person care required plane travel and 60% required a hotel stay. Conclusions: Telemedicine appointments may increase access for rural-residing patients with cancer. Satisfaction outcomes among patients with urologic cancer receiving interstate care were similar to those of patients cared for in state; costs were markedly lower. Extending interstate exemptions beyond COVID-19 licensing waivers would permit continued delivery of high-quality urologic cancer care to rural-residing patients. abstract_id: PUBMED:37546142 ChatGPT in Radiology: The Advantages and Limitations of Artificial Intelligence for Medical Imaging Diagnosis. This review article provides an overview of using artificial intelligence (AI) in radiology. It discusses the advantages and limitations of ChatGPT, a large language model, for medical imaging diagnosis. ChatGPT has shown great promise in improving the accuracy and efficiency of radiological diagnoses by reducing interpretation variability and errors and improving workflow efficiency. However, there are also limitations, including the need for high-quality training data, ethical considerations, and further research and development to improve its performance and usability. Despite these challenges, ChatGPT has the potential to significantly impact radiology and medical imaging diagnosis. The review article highlights the need for continued research and development, coupled with ethical and regulatory considerations, to ensure that ChatGPT is used to its full potential in improving radiological diagnoses and patient care. abstract_id: PUBMED:25989192 State restrictions on the interstate practice of dermatopathology are unconstitutional: the tissue is the issue. The regulation of the interstate practice of dermatopathology, teledermatopathology and teledermatology is the subject of discussion in many recent articles. Laws and regulations in many states complicate such interstate practice, requiring the dermatopathologist to be licensed in the state where the biopsy is taken if the dermatopathologist practices in and is licensed in a different state. To date, this discussion has been from the viewpoint of the dermatopathologist or dermatologist. But, when seen from the patient's viewpoint, its apparent that most such state regulation may be void as an unconstitutional infringement of a patient's fundamental right of unrestricted interstate travel under the Constitution of the United States of America. abstract_id: PUBMED:27257097 The Contagion of Interstate Violence: Reminders of Historical Interstate (but Not Intrastate) Violence Increase Support for Future Violence Against Unrelated Third-Party States. Five experiments investigated the war contagion phenomenon in the context of international relations, hypothesizing that reminders of past inter- (but not intra-) state war will increase support for future, unrelated interstate violence. After being reminded of the Korean War as an interstate rather than intrastate conflict, South Koreans showed stronger support for violent responses to new, unrelated interstate tensions (Study 1). Replicating this war contagion effect among Americans, we demonstrated that it was mediated by heightened perceived threat from, and negative images of, a fictitious country unrelated to the past war (Study 2), and moderated by national glorification (Study 3). Study 4, using another international conflict in the U.S. history, provided further conceptual replication. Finally, Study 5 included a baseline in addition to the inter- versus intrastate manipulation, yielding further support for the generalized effect of past interstate war reminders on preferences for aggressive approaches to new interstate tensions. abstract_id: PUBMED:38368777 Investigating pedestrian-vehicle crashes on interstate highways: Applying random parameter binary logit model with heterogeneity in means. In the U.S., the interstate highway system is categorized as a controlled-access or limited-access route, and it is unlawful for pedestrians to enter or cross this type of highway. However, pedestrian-vehicle crashes on the interstate highway system pose a distinctive safety concern. Most of these crashes involve 'unintended pedestrians', drivers who come out of their disabled vehicles, or due to the involvement in previous crashes on the interstate. Because these are not 'typical pedestrians', a separate investigation is required to better understand the pedestrian crash problem on interstate highways and identify the high-risk scenarios. This study explored 531 KABC (K = Fatal, A = Severe, B = Moderate, C = Complaint) pedestrian injury crashes on Louisiana interstate highways during the 2014-2018 period. Pedestrian injury severity was categorized into two levels: FS (fatal/severe) and IN (moderate/complaint). The random parameter binary logit with heterogeneity in means (RPBL-HM) model was utilized to address the unobserved heterogeneity (i.e., variations in the effect of crash contributing factors across the sample population) in the crash data. Some of the factors were found to increase the likelihood of pedestrian's FS injury in crashes on interstate highways, including pedestrian impairment, pedestrian action, weekend, driver aged 35-44 years, and spring season. The interaction of 'pedestrian impairment' and 'weekend' was found significant, suggesting that alcohol-involved pedestrians were more likely to be involved in FS crashes during weekends on the interstate. The obtained results can help the 'unintended pedestrians' about the crash scenarios on the interstate and reduce these unexpected incidents. abstract_id: PUBMED:25734402 State medical licensure for telemedicine and teleradiology. Physician medical licensure is state based for historical and constitutional reasons. It may also provide the best method for guaranteeing patient protection from unqualified, incompetent, impaired, or unprofessional practitioners of medicine. However, a significant cost for physicians practicing telemedicine is having to obtain multiple state medical licenses. There is reasonable likelihood that model legislation for the practice of telemedicine across state boundaries will be passed in the next few years, providing physicians with a simpler process for license reciprocity in multiple states via interstate licensing compacts. Physicians would have to be licensed in the state in which the patient resides. Patient complaints would still be adjudicated by the medical licensing board in the state where the patient resides according applicable state legislation. Answer: Yes, interstate variability of the statute of limitations for medical liability is a cause for concern. The study reviewed in abstract PUBMED:18362806 highlights that there is widespread variation in state laws, which leads to significant differences in state-to-state liability risk and insurance costs. This variability can particularly affect pediatric practitioners, as many states have a longer period of potential liability for physicians caring for children. The study found that in the United States, the average time to bring a claim for a newborn and those aged 6, 12, and 25 years was 12, 8, 5, and 2 years, respectively, with the maximum time to bring a claim being even longer. This suggests that pediatric practitioners face increased liability risk based on the significantly longer limitation periods for minor patients. The variability in statutes of limitations may influence medical students' specialty choices, potentially leading to fewer practitioners entering fields with higher liability risks, such as pediatrics. This could have further implications for liability insurance, specialty choice, and access to specialty care, indicating that the interstate variability of the statute of limitations for medical liability is indeed a cause for concern that requires additional study (PUBMED:18362806).
Instruction: Can Bladder Wall Thickness Measurement Be Used for Detecting Bladder Outlet Obstruction? Abstracts: abstract_id: PUBMED:18423703 Ultrasound measurement of bladder wall thickness in the assessment of voiding dysfunction. Purpose: Ultrasound measurement of bladder wall thickness has been proposed as a useful diagnostic parameter in patients with bladder outlet obstruction and other voiding dysfunctions. We assessed bladder wall thickness measurement as a noninvasive test in patients with suspected bladder outlet obstruction or overactive bladder syndrome. Materials And Methods: Transabdominal ultrasound measurement of bladder wall thickness was performed during urodynamic study in 180 patients with nonneurogenic voiding dysfunction. Two measurements of anterior bladder wall thickness, 1 cm apart in the midline and averaged, were obtained at 200 ml filling. Bladder wall thickness findings were correlated with urodynamic diagnoses. Results: A total of 180 patients with an average age of 62 years (range 20 to 94) were recruited, comprising 73 males and 107 females. Of the patients 69 had normal urodynamics, 39 had bladder outlet obstruction, 38 had increased bladder sensation on cystometry and 34 had detrusor overactivity. Bladder wall thickness was 1.1 to 4.5 mm in all groups. Males had a slightly thicker bladder wall than females (mean 2.1 vs 1.9 mm, p = 0.064). Mean bladder wall thickness in patients with normal urodynamics, bladder outlet obstruction, detrusor overactivity and increased bladder sensation was 2.0, 2.1, 1.9 and 1.8 mm, respectively. No significant difference was found between the groups (ANOVA p = 0.064, not significant). In particular there was no difference in bladder wall thickness between patients with normal urodynamics, and those with bladder outlet obstruction (p = 0.31) or detrusor overactivity (p = 0.309). Conclusions: Bladder wall thickness is remarkably uniform in patients with nonneurogenic voiding dysfunction. Therefore, it cannot reliably predict bladder outlet obstruction or detrusor overactivity. Bladder wall thickness measurement does not provide an alternative to urodynamic studies for diagnosing voiding dysfunction. abstract_id: PUBMED:36103042 Bladder wall thickness measured by CT can predict bladder outlet obstruction in men: a retrospective cohort study. Purpose: To evaluate whether bladder wall thickness (BWT) measured by CT can be used to predict bladder outlet obstruction in men with low urinary tract symptoms (LUTS). Methods: From 2015 to 2018, a total of 120 men with lower urinary tract symptoms who underwent both urodynamic studies and CT tests of the lower abdomen or pelvis were involved. Bladder wall thickness values were measured by CT scanning. Results: Based on the urodynamic studies, 120 men were categorized into two groups, including 70/120 men (58.3%) in the bladder outlet obstruction (BOO) group and 50/120 men (41.7%) in the non-BOO group. The mean BWT was thicker in the BOO group than in the non-BOO group (3.87 vs. 2.75 mm, p < 0.001). The mean maximum bladder capacity (MBC) was lower in the BOO group than in the non-BOO group (263.42 vs. 308.96 ml, p < 0.001). The mean detrusor pressure at maximum urinary flow rate (PdetQmax) was higher in the patients in the BOO group than in those in the non-BOO group (102.28 vs. 49.25 cmH2O, p < 0.001). The ROC curve showed that BWT was a good predictor with an AUC of 0.855 (95% CI 0.785-0.924, p < 0.001). At the cutoff value of 3.20 mm, the predictive sensitivity of BWT for BOO was 72.9%, and the specificity was 90%. Conclusion: Increased bladder wall thickness was correlated with bladder outlet obstruction in men with LUTS. Bladder wall thickness measured by CT scans may be a noninvasive parameter to predict bladder outlet obstruction in men with LUTS. abstract_id: PUBMED:26142716 Can Bladder Wall Thickness Measurement Be Used for Detecting Bladder Outlet Obstruction? Objective: To investigate the relationship between bladder wall thickness (BWT) and uroflowmetric parameters and the International Prostate Symptoms Score (IPSS) in patients with lower urinary tract symptoms (LUTS). Patients And Methods: A total of 236 male patients who had LUTS-related benign prostatic enlargement with serum prostate-specific antigen level ≤4 ng/mL were included in this study. Age and duration of LUTS and IPSS were recorded. BWT was measured using 7.5 mHz suprapubic ultrasonography before uroflowmetry and postvoid residual (PVR) was calculated thereafter. The relationship between BWT and poor indicators for bladder outlet obstruction (BOO) (IPSS >19, Qmax <15 mL/min, PVR >100 cm(3)) was investigated. Results: The mean age was 62.5 ± 8.1 (39-77) years and the mean BWT was 3.8 ± 1.5 (1.4-8.7) mm. The mean IPSS, Qmax, PVR, and duration of LUTS were 17.7, 13.7 mL/min, 89.9, and 46.5 months, respectively. A positive correlation was found between BWT and IPSS, PVR and duration of LUTS, whereas a negative correlation was found between BWT and Qmax (P <.001). BWT increased when number of BOO parameters increased. BWT was 2.9 in patients without BOO parameters whereas BWT was 3.5, 4.1, and 4.5 mm in patients with any one, any two, and all parameters of BOO, respectively. Conclusion: BWT increased when number of BOO parameters increased. We believe that measurement of BWT is an easy, quick, and repeatable test to predict BOO severity. abstract_id: PUBMED:9474143 The diagnosis of bladder outlet obstruction in men by ultrasound measurement of bladder wall thickness. Purpose: The objective of the study was to investigate specificity and sensitivity of bladder wall thickness in the diagnosis of bladder outlet obstruction. Materials And Methods: The study included 174 patients referred to our prostate centers for lower urinary tract symptoms. Free uroflowmetry and pressure-flow studies were performed in duplicate as part of the diagnostic evaluation. After the 2 voiding studies were done the bladder was filled to 150 ml. and wall thickness was measured via suprapubic ultrasound. Bladder outlet obstruction was diagnosed and graded according to the Abrams-Griffiths and Schäfer nomogram as well as to the group specific urethral resistance algorithm. Results: A significant correlation (r > 0.6, p < or = 0.007) was found between bladder wall thickness and all parameters of the pressure-flow study. A bladder wall thickness of 5 mm. appeared to be the best cutoff point to diagnose bladder outlet obstruction, since 63.3% of patients with bladder wall thickness less than 5 mm. were unobstructed while 87.5% of those with a bladder wall thickness 5 mm. or greater were obstructed. Bladder wall thickness out performed uroflowmetry in terms of specificity and sensitivity in the diagnosis of outlet obstruction as demonstrated by an area under curve value of 0.860 versus 0.688 in the receiver operator characteristics analysis. Conclusions: Measurement of bladder wall thickness appears to be a useful predictor of outlet obstruction with a diagnostic value exceeding free uroflowmetry although it does not represent a substitution to invasive urodynamics. These data support the hypothesis that the relationships between morphology and function are of clinical importance. abstract_id: PUBMED:37635584 Correlation between bladder wall thickness and uroflowmetry in West African patients with benign prostatic enlargement. Background: Pressure flow urodynamic study remains the gold standard for the diagnosis of bladder outlet obstruction; however, their use is limited by their relative unavailability in our environment, cost, and invasiveness. Measurement of bladder wall thickness (BWT) by transabdominal ultrasonography is a promising tool that can be used to diagnose bladder outlet obstruction in our environment where pressure-flow urodynamic study is not readily available. Objective: The study aimed to correlate BWT with uroflowmetry and to establish a BWT cut-off in patients with lower urinary tract symptoms (LUTS) due to benign prostatic enlargement. Materials And Methods: This was a prospective one-year study of patients with LUTS due to benign prostatic enlargement. The patients were divided into obstructed and non-obstructed groups with Q- max of 10 ml/s serving as the cut-off value. Receiver Operator Curve (ROC) was used to evaluate the performance of BWT in diagnosing BOO. Statistical significance was set at P < 0.05. Results: The mean BWT and Q-max were 4.53 ± 2.70 mm and 15.06 ± 9.43 ml/s. There was a negative correlation between BWT and Q-max (r = -0.452, P = 0.000), Q-average (r = -0.336, P = 0.000), and voided volume (r = -0.228, P = 0.046). A BWT cut-off of 5.85 mm was found to be the best threshold to differentiate obstructed from non-obstructed patients with a sensitivity and specificity of 70 and 88.2 percent respectively. Conclusion: Bladder wall thickness showed an inverse relationship with maximum flow rate with high sensitivity and specificity. This non-invasive test can be used as a screening tool for BOO in our setting, where the pressure flow urodynamic study is not readily available. abstract_id: PUBMED:27447991 Ultrasound detrusor wall thickness measurement in combination with bladder capacity can safely detect detrusor underactivity in adult men. Purpose: Detrusor underactivity (DU) has lately gained increasing interest because this bladder condition is an important cause of post-void residual urine and lower urinary tract symptoms (LUTS) in adult men. Until now, DU can only be diagnosed by pressure-flow measurement. Therefore, the aim of this study was to search for noninvasive tests which can safely predict DU in adult men. Methods: Unselected, treatment-naïve male patients aged ≥40 years with uncomplicated, non-neurogenic LUTS were prospectively evaluated. All men received-after standard assessment of male LUTS-ultrasound detrusor wall thickness (DWT) measurements at a bladder filling ≥250 ml and computer urodynamic investigation. DU was defined as incomplete bladder emptying (>30 ml) in the absence of bladder outlet obstruction or dysfunctional voiding. Classification and regression tree (CART) analysis was used to determine parameters and threshold values for DU. Results: The study population consisted of 143 consecutive men with medians of 62 years, IPSS 16, and prostate volume 35 ml. In total, 33 patients (23.1 %) had DU. CART analysis showed that all men with DWT ≤ 1.23 mm plus bladder capacity >445 ml had DU. This multivariate model has a sensitivity of 42 %, specificity of 100 %, positive predictive value of 100 %, and negative predictive value of 85 %. Conclusions: This study showed that all men with ultrasound DWT ≤ 1.23 mm + bladder capacity >445 ml have DU. Combination of these two tests could help physicians to diagnose DU noninvasively in clinical practice. A prospective independent study should confirm these results. abstract_id: PUBMED:19468439 The use of ultrasound-estimated bladder weight in diagnosing bladder outlet obstruction and detrusor overactivity in men with lower urinary tract symptoms. Objectives: Measurement of bladder weight using ultrasound estimates of bladder wall thickness and bladder volume is an emerging clinical measurement technique that may have a role in the diagnosis of lower urinary tract dysfunction. We have reviewed available literature on this technique to assess current clinical status. Methods: A systematic literature search was carried out within PubMed and MedLine to identify relevant publications. These were then screened for relevance. Preliminary results from our clinical experiments using the technique are also included. Results: We identified 17 published papers concerning the technique which covered clinical studies relating ultrasound-estimated bladder wall thickness to urodynamic diagnosis in men, women, and children together with change in response to treatment of bladder outlet obstruction. The original manual technique has been challenged by a commercially available automated technique. Conclusion: Ultrasound-estimated bladder weight is a promising non-invasive technique for the categorization of storage and voiding disorders in both men and women. Further studies are needed to validate the technique and assess accuracy of diagnosis. abstract_id: PUBMED:20846683 Ultrasound estimated bladder weight and measurement of bladder wall thickness--useful noninvasive methods for assessing the lower urinary tract? Purpose: In the last decade interest has arisen in the use of ultrasound derived measurements of bladder wall thickness, detrusor wall thickness and ultrasound estimated bladder weight as potential diagnostic tools for conditions known to induce detrusor hypertrophy. However, to date such measurements have not been adopted into clinical practice. We performed a comprehensive review of the literature to assess the potential clinical usefulness of these measurements. Materials And Methods: A MEDLINE® search was conducted to identify all published literature up to June 2009, investigating measurements of bladder wall thickness, detrusor wall thickness and ultrasound estimated bladder weight. Results: Measurements of bladder and detrusor wall thickness, and ultrasound estimated bladder weight have been studied in men, women and children. A convincing trend has been shown in the ability of these measurements to differentiate men with from those without bladder outlet obstruction. In addition, measurements of bladder wall thickness have revealed a considerable difference between detrusor overactivity and urodynamic stress incontinence. A number of confounding variables and a lack of standardized methodology has resulted in discrepancies among studies. Therefore, reproducible diagnostic ranges or cutoff values have not been established. Conclusions: Ultrasound derived measurements of bladder and detrusor wall thickness, and ultrasound estimated bladder weight are potential noninvasive clinical tools for assessing the lower urinary tract. abstract_id: PUBMED:28394496 Change of Ultrasound Estimated Bladder Weight and Bladder Wall Thickness After Treatment of Bladder Outlet Obstruction With Dutasteride. Objectives: To investigate the change of bladder wall hypertrophy to relieve bladder outlet obstruction (BOO) by treatment with 5α-reductase inhibitor. Methods: Men who have BOO confirmed by urodynamic study (BOO index ≥40) were treated with dutasteride 0.5 mg once a day for 6 months. We measured ultrasound estimated bladder weight (UEBW), UEBW divided by body surface area (UEBW/BSA), and bladder wall thickness (BWT) before and after treatment. Changes in LUTS parameters were assessed by using the International Prostate Symptom Score, uroflowmetry, residual urine volume, prostate volume, serum prostate-specific antigen (PSA), and LUTS outcome scores (LOS). Correlation between the change of LUTS parameters and UEBW, UEBW/BSA, and BWT were evaluated. We assessed the changes of bladder wall hypertrophy according to the results of benefit, satisfaction, and willingness to continue (BSW) questionnaire. Results: Thirty patients completed the 6-month study. The mean UEBW was 47.10 ± 7.79 g before and 50.07 ± 5.39 g after dutasteride treatment (P = 0.259). The mean UEBW/BSA was 26.47 ± 4.30 g/m2 before and 28.2 ± 3.53 g/m2 after treatment (P = 0.253), and there was no definite change in mean BWT after treatment (P = 0.301). Most LUTS parameters including LOS significantly improved. Increased BOO index value was related to decreased BWT (ρ = 0.361, P = 0.049). There was no definite change in mean UEBW, UEBW/BSA, and BWT according to the results of the BSW questionnaire. Conclusions: There was no change in UEBW, UEBW/BSA and BWT despite improving most clinical parameters suggesting BOO. The changes of bladder wall hypertrophy parameters still have limitations to directly reflect the relief of BOO. abstract_id: PUBMED:26676698 Potential Biomarkers Utilized to Define and Manage Overactive Bladder Syndrome. Clinical diagnosis of overactive bladder (OAB) syndrome has great variation and usually can only be based on subjective symptoms. Measurement of urgency severity score in adjunct with voiding diary may reflect the occurrence of OAB and incontinence severity in daily life. Urodynamic study can detect detrusor overactivity (DO), but not in all OAB patients. A more objective way and less invasive tool to diagnose and assess therapeutic outcome in OAB patients is needed. Recent investigations of the potential biomarkers for OAB include urinary and serum biomarkers and bladder wall thickness. Evidence has also shown that urinary proteins, such as nerve growth factor (NGF) and prostaglandin E2 (PGE2 ) levels increase in patients with OAB, bladder outlet obstruction (BOO) and DO. Patients with OAB have significantly higher urinary NGFlevels and urinary NGF levels decrease after antimuscarinic therapy and further decrease after detrusor botulinum toxin injections. However, the sensitivity of single urinary protein in the diagnosis of OAB is not high and several lower urinary tract diseases may also have elevated urinary NGF levels. Searching for a group of inflammatory biomarkers by microsphere-based array in urine might be a better method in differential diagnosis of OAB from interstitial cystitis, urinary tract infection (UTI) or urolithiasis. Bladder wall thickness has been widely investigated in the diagnosis of BOO and pediatric voiding dysfunction.The role of bladder wall thickness in the diagnosis of OAB, however, has not reach a consistent conclusion. We hereby review the latest medical advances in this field. Answer: Bladder wall thickness (BWT) measurement has been investigated as a potential noninvasive method for detecting bladder outlet obstruction (BOO). Several studies have explored the relationship between BWT and BOO with varying results. One study found that BWT measured by CT can predict BOO in men with low urinary tract symptoms (LUTS), with a mean BWT significantly thicker in the BOO group compared to the non-BOO group. The study suggested that BWT measured by CT scans may be a noninvasive parameter to predict BOO in men with LUTS (PUBMED:36103042). Another study reported a positive correlation between BWT and indicators for BOO, such as the International Prostate Symptoms Score (IPSS), postvoid residual (PVR), and duration of LUTS. The study concluded that BWT increased with the number of BOO parameters and suggested that BWT measurement is an easy, quick, and repeatable test to predict BOO severity (PUBMED:26142716). Similarly, another study indicated that a significant correlation exists between BWT and parameters of the pressure-flow study. It suggested that a BWT of 5 mm was a good cutoff point to diagnose BOO, with BWT outperforming uroflowmetry in terms of specificity and sensitivity in the diagnosis of outlet obstruction (PUBMED:9474143). In contrast, a study found that BWT is remarkably uniform in patients with nonneurogenic voiding dysfunction and could not reliably predict BOO or detrusor overactivity. The study concluded that BWT measurement does not provide an alternative to urodynamic studies for diagnosing voiding dysfunction (PUBMED:18423703). Another study in West African patients with benign prostatic enlargement showed an inverse relationship between BWT and maximum flow rate, suggesting that BWT could be used as a screening tool for BOO in settings where pressure-flow urodynamic study is not readily available (PUBMED:37635584). In summary, while some studies support the use of BWT measurement as a predictor of BOO, others suggest that it may not be a reliable standalone diagnostic tool. The variability in findings indicates that further research and standardization of measurement techniques may be necessary before BWT measurement can be widely adopted for detecting BOO.
Instruction: Is "prepectoral edema" a morphologic sign for malignant breast tumors? Abstracts: abstract_id: PUBMED:25784323 Is "prepectoral edema" a morphologic sign for malignant breast tumors? Rationale And Objectives: A variety of morphologic and kinetic signs of benign or malignant breast lesions contribute to a final diagnosis and differential diagnosis in magnetic resonance (MR) mammography (MRM). As a new sign, prepectoral edema (PE) in patients without any history of previous biopsy, operation, radiation, or chemotherapy was detected during routine breast MR examinations. The purpose of this study was to retrospectively evaluate the role of this morphologic sign in the differential diagnosis of breast lesions. Materials And Methods: Between January 2005 and October 2006, a total of 1109 consecutive MRM examinations have been performed in our institution. In this study, only patients who would later be biopsied or operated in our own hospital were included. They had no previous operation, biopsy, intervention, chemotherapy, hormone replacement therapy, or previous mastitis. In total, 162 patients with 180 lesions were included, histologically correlated later-on by open biopsy (124 patients and 136 lesions) or core biopsy (38 patients and 44 lesions). The evaluations were performed by four experienced radiologists in consensus. Results: One hundred eighty evaluated lesions included 104 malignant lesions (93 invasive and 11 noninvasive cancers) and 76 benign lesions. PE was detected in 2.6% of benign lesions (2 of 76), in none of the Ductal cacinoma in situ (DCIS) cases (0 of 11), and in 25.8% of malignant lesions (24 of 93; P < .000). PE was found significantly more frequently in presence of malignant tumors >2 cm in diameter (48.5%, 17 of 35 vs. 13.8%, 8 of 58; P < .001). PE was not statistically associated to malignant tumor type, presence or absence of additional DCIS, and number of lesions. This resulted in the following diagnostic parameters for PE as an indicator for malignancy: sensitivity of 19.3%, specificity of 97.3%, positive predictive value (PPV) of 92.3%, negative predictive value of 48%, and accuracy of 57.7%. Conclusions: In case of occurrence, the "PE sign" seems to be a specific indicator for malignant tumors with a high PPV, independent from its entity. abstract_id: PUBMED:28373472 Prognostic Value of "Prepectoral Edema" in MR-mammography. Background/aim: Previous studies have indicated a highly significant correlation between invasive tumors and accompanying prepectoral edema (PE) in MR-mammography (MRM). The aim of the present study was to identify prognostic factors associated with PE as a diagnostic sign. Materials And Methods: A total of 1,109 consecutive MRM exams were included in this study. Exclusion criteria were previous operation, biopsy, intervention, chemotherapy, hormone replacement therapy (HRT) or previous mastitis. One hundred and sixty-two patients with 180 lesions were evaluated and histologically correlated. Diagnostic evaluations were performed by four experienced radiologists in consensus. Results: One hundred and eighty lesions included 104 malignant lesions (93 invasive and 11 non-invasive) and 76 benign lesions. PE was detected significantly more frequently in presence of lymphangiosis carcinomatosa ((53.8%; 14/26) vs. (9.8%; 4/41)) (p<0,000). PE significantly correlates with positive axillary nodal status ((19.4% (12/62) vs. 44.4% (12/27)) (p=0.020), as well as pathologic enhancement of the pectoral muscle (5.4% (5/93) vs. 22.7% (20/88)) (p=0,015). PE significantly correlates with higher tumor grading (G3) (33.9% vs. 13.9%) (p<0.05). There was no significant difference in a positive vs. negative estrogen (p=0.681) and progesterone (p=0.751) and/or human epidermal growth factor receptor-2 (HER-2) (p=0.726) receptor status accompanied by PE. Conclusion: The presence of PE may be a strong prognostic indicator for lymphatic spread and the cancerous infiltration of lymph nodes. It is also associated with the infiltration of the pectoral muscle, as well as high tumor grading. There is no correlation between prepectoral edema and positive tumor receptor status. abstract_id: PUBMED:23341125 Is evaluation of the presence of prepectoral edema on T2-weighted with fat-suppression 3 T breast MRI a simple and readily available noninvasive technique for estimation of prognosis in patients with breast cancer? Background: The specificity of breast MRI is only moderate. The unsatisfactory specificity of breast MRI has prompted evaluation of high signal intensity (SI) on T2-weighted imaging (T2WI). The purpose of the study was to investigate the prevalence of prepectoral edema determined using high SI on T2WI with fat-suppression 3 T MRI and to correlate its presence with prognostic factors of breast cancer. Methods: The retrospective study comprised 589 consecutive histopathologically confirmed lesions, 460 malignant and 129 benign, identified by 3 T MRI. Presence of prepectoral edema was evaluated on T2WI with fat suppression, and its diagnostic value for malignancies and correlation with clinicopathological findings in histopathologically confirmed breast cancer were assessed. Results: Prepectoral edema was present in 54 of the 460 breast cancers (9 % of the total 589) and none of the 129 benign lesions. Its sensitivity and specificity were 12 and 100 %, respectively. The positive predictive value was 100 %. Young age (p = 0.01), large tumor size (p < 0.0001), high histological grade (p < 0.0001), invasive ductal carcinoma (p < 0.0001), high lymphovascular invasion degree (p < 0.0001), high axillary lymph node positivity (p < 0.0001), high inflammatory breast cancer rate (p < 0.0001), high neoadjuvant chemotherapy rate (p < 0.0001), and chemoresistant breast cancers (p < 0.0001) were significantly associated with prepectoral edema. There was no association of the morphological lesion type on MRI and dynamic enhancement imaging pattern with the presence of prepectoral edema. Conclusion: Prepectoral edema has low prevalence but is specific for breast cancer and correlated with prognostic factors. abstract_id: PUBMED:25336185 Focal breast edema associated with malignancy on T2-weighted images of breast MRI: peritumoral edema, prepectoral edema, and subcutaneous edema. Edema, defined as high signal intensity on T2-weighted images, is a useful breast MRI finding, which compensates for the moderate specificity of breast MRI. Although diffuse breast edema can be caused by benign and malignant etiologies, focal edema is commonly associated with malignancy except for previous procedures including biopsy and surgery. For more accurate comprehension of the appropriate usage of breast edema, focal breast edema should be divided into three different types: peritumoral edema, prepectoral edema, and subcutaneous edema. It is useful to review and clarify these types of focal breast edema based on pathological findings as this may help predict the likelihood of malignancy in breast MRI lesions. abstract_id: PUBMED:18206620 Inflammatory breast carcinoma in magnetic resonance imaging: a comparison with locally advanced breast cancer. Rationale And Objectives: Although inflammatory breast carcinoma (IBC) accounts for 1%-4% of all breast cancer cases, the appearance of this highly malignant tumor in magnetic resonance imaging (MRI) is still not well characterized. The aim of this study was to identify typical imaging features of IBC in comparison with noninflammatory locally advanced breast carcinoma (LABC). Materials And Methods: MRIs of 48 patients with IBC were compared with an equivalent cohort of 52 subjects with LABC. Age and histopathologic subtype were equivalent between the two groups. To delineate characteristic features, a multitude of dynamic and morphologic parameters were evaluated using T1- and T2-weighted sequences. Results: No significant differences of prevalences could be found for the following criteria: dynamic tumor signal characteristics, prominent vessels, perifocal edema, axillary lymph node involvement, morphology of focal masses, and morphologic pattern of non-mass like enhancement. Otherwise, the quantity of focal masses and the spatial distribution of the tumoral infiltration significantly differed between the two cancer groups. The following parameters occurred more frequently in the IBC cases: edema (cutaneous/subcutaneous 81.3%, perimamillar 70.8%, diffuse 89.6%, prepectoral 72.9%, intramuscular pectoral 41.7%), thickening (75.0%) and pathologic enhancement (60.4%) of Cooper's ligaments, skin thickening (83.3%), punched-out sign (initially strong, focal increase of some dermal or subcutaneous parts followed by slow-continuous enhancement of the surrounding skin; 56.3%). Conclusions: Inflammatory breast carcinoma seems to represent a specific biological entity resulting in typical MRI characteristics. Some of the parameters are supposed to visualize the characteristic extensive lymphovascular infiltration and therefore may improve the diagnosis of IBC. abstract_id: PUBMED:20017008 Inflammatory breast cancer: MR imaging findings. Purpose: This paper describes the magnetic resonance (MR) imaging features of primary inflammatory breast cancer (IBC). Materials And Methods: Two radiologists reviewed the MR examinations of 14 women with a pathological diagnosis of IBC. Images were assessed for skin thickening, oedema, nipple retraction, architectural distortion, type and extent of parenchymal and cutaneous enhancement and enhancement kinetics over time, axillary and internal mammary lymphadenopathy, pectoral muscle enhancement and additional findings. Results: Skin thickening was identified in eight patients (58%), oedema in nine (64%), nipple retraction in two (14%), architectural distortion in eight (58%), mass-like enhancement in five (36%), non-mass-like enhancement in nine (64%) with washout enhancement curve in 12 (86%) and plateau curve in two (14%), axillary lymphadenopathy in 12 (86%) and internal mammary artery lymphadenopathy in two (14%), and pectoral muscle enhancement in one (7%). Additional findings included increased breast volume in two patients (14%), prepectoral fluid in four (28%) and hypertrophic internal mammary artery in three (21%). Conclusions: The most characteristic MR findings of IBC are skin thickening, oedema, architectural distortion, mass-like enhancement with washout curve and axillary lymphadenopathy; less frequent ones are nipple retraction, mass-like enhancement and internal mammary lymphadenopathy. Prepectoral fluid is frequent but is not a sign of infiltration. abstract_id: PUBMED:28370761 Preoperative MRI features associated with lymphovascular invasion in node-negative invasive breast cancer: A propensity-matched analysis. Purpose: In node-negative disease, the presence of lymphovascular invasion (LVI) is reported to be an unfavorable prognostic factor. Thus, the aim of this study was to evaluate whether preoperative breast MRI features are associated with LVI in patients with node-negative invasive breast cancer by a propensity-matched analysis. Materials And Methods: Among 389 patients with node-negative invasive ductal breast cancer who had preoperative breast 3.0 Tesla MRI with precontrast T2-weighted fat-suppressed, pre- and dynamic postcontrast T1-weighted fat-suppressed sequences, 61 patients with LVI (LVI group) were matched with 183 patients without LVI (no LVI group) at a ratio of 1:3 in terms of age, histologic grade, tumor size, and hormone receptor status. Two radiologists reviewed the MRI features, following profiles of focal breast edema (peritumoral, prepectoral, subcutaneous), intratumoral T2 signal intensity, adjacent vessel sign, and increased ipsilateral whole-breast vascularity, in addition to 2013 Breast Imaging Reporting and Data System lexicon. Results: The presence of peritumoral edema (45.9% [28/61] versus 30.6% [56/183], P = 0.030) and adjacent vessel sign (82.0% [50/61] versus 68.3% [125/183], P = 0.041) was significantly associated with LVI. Prepectoral edema was also more frequently observed in the LVI group than in the no LVI group with borderline significance (26.2% [16/61] versus 15.3% [28/183], P = 0.055). In cases of nonmass enhancement, regional enhancement was more frequently found in the LVI group than in the no LVI group (60.0% [3/4] versus 5.9% [1/4], P = 0.042). Conclusion: Preoperative breast MRI features may be associated with LVI in patients with node-negative invasive breast cancer. Level Of Evidence: 3 Technical Efficacy: Stage 3 J. Magn. Reson. Imaging 2017;46:1037-1044. abstract_id: PUBMED:35134676 Magnetic resonance imaging findings associated with recurrence in idiopathic granulomatous mastitis. Purpose: To evaluate magnetic resonance imaging (MRI) findings related to recurrence of idiopathic granulomatous mastitis (IGM). Methods: Demographic data [age, number of births, duration of lactation period, body mass index (BMI) and presence of recurrence] of 71 patients who were diagnosed with IGM were analyzed retrospectively. Characteristics of IGM (maximum width, location, involvement of the retroareolar region, deep tissue, skin), fibroglandular density (FGD), background parenchymal enhancement (BPE), distribution and pattern of contrast enhancement, presence of prepectoral edema, abscesses, fistulae, axillary lymphadenopathies on MRI and apparent diffusion coefficient (ADC) values from the pathological area were recorded. Results: The recurrence rate in patients was 59% (42/71). We found a statistically significant relationship between recurrence and BPE (p = 0.028) and mean ADC (p = 0.035) values (for the cut-off of 1.00 × 10-3 mm2/s; sensitivity = 61.9%, specificity = 69%, AUC = 0.648). However, patients' age (p = 0.346), lactation period (p = 0.470), number of births (p = 0.774), BMI (p = 0.630) maximum width of the area of enhancement (p = 0.112), involvement of the retroareolar region (p = 0.290), deep tissue (p = 0.285), skin (p = 0.230), distribution (p = 0.857) and enhancement pattern (p = 0.157), presence of prepectoral edema (p = 0.094), abscesses (p = 0.441), fistulae (p = 0.809), lymphadenopathies (p = 0.571), and FGT (p = 0.098) were not significantly associated with recurrence. Conclusion: Our results revealed that recurrent IGM patients showed high BPE and lower mean ADC values. We think that high BPE and low mean ADC (<1.00 × 10-3 mm2/s) on MRI at the diagnosis stage may be a sign of possible future recurrence, and it will be beneficial to follow the patients more closely and arrange the treatment algorithms accordingly. abstract_id: PUBMED:37025369 Nonconvulsive status epilepticus as sign of tumor recurrence. Status epilepticus (SE) can be a sign of brain tumor progression or recurrence, but there are few reports of nonconvulsive status epilepticus (NCSE) being a sign of tumor progression or recurrence. Moreover, much remains to be elucidated about its clinical course, and outcome. This is the first report of NCSE associated with the progression of a metastatic brain tumor treated by surgical excision of the tumor. The patient was 74-year-old woman. She had a history of craniotomy for tumor resection and gamma knife treatment for multiple metastatic brain tumors originating from breast cancer. She suddenly developed dysarthria and right hemiparesis, followed by convulsive seizures in the right side of her body. Magnetic resonance imaging showed tumor recurrence in the left parietal lobe and worsening edematous changes around the tumor. Antiseizure medication was initiated, however her seizures did not improve; therefore, tumor resection was performed. Postoperatively, her consciousness, seizures, and electroencephalogram findings improved. NCSE caused by brain tumors may be refractory to treatment with antiseizure medications, and early surgical treatment may be useful for seizure control. abstract_id: PUBMED:7065058 The morphologic response of the human endometrium to long-term treatment with progestational agents. The effects of progestational agents on the human endometrium are masked by the proliferating action of endogenous estrogens or by an estrogen compound used for oral contraception. It is therefore difficult to describe the morphology of the endometrial reaction caused by progestational agents alone. The endometria of 12 women were examined, 11 of whom were on a long-term, low-dose regimen of progestogens for contraception; the twelfth was on a short-term, high-dose regimen of injectable progestogen for adjuvant therapy of breast cancer. Endometria from the patients were evaluated with respect to the following parameters; gland mitoses, gland tortuosity, pseudostratification, basal vacuolation, signs of secretion, stromal edema, predecidual reaction, and leukocytic infiltration. The morphologic examination was performed with the light microscope using different stainings and with the scanning electron microscope. Answer: Yes, "prepectoral edema" (PE) is considered a morphologic sign for malignant breast tumors. Studies have shown that PE is detected significantly more frequently in the presence of malignant lesions compared to benign ones. In a study evaluating 180 lesions, PE was found in 25.8% of malignant lesions and only 2.6% of benign lesions, indicating a high specificity for malignancy (PUBMED:25784323). Furthermore, PE has been associated with lymphangiosis carcinomatosa, positive axillary nodal status, and pathologic enhancement of the pectoral muscle, suggesting its role as a strong prognostic indicator for lymphatic spread and cancerous infiltration of lymph nodes (PUBMED:28373472). Additionally, the presence of PE has been correlated with unfavorable prognostic factors in breast cancer, such as young age, large tumor size, high histological grade, invasive ductal carcinoma, high lymphovascular invasion degree, high axillary lymph node positivity, inflammatory breast cancer rate, high neoadjuvant chemotherapy rate, and chemoresistant breast cancers (PUBMED:23341125). Moreover, PE has been identified as a specific MRI characteristic of inflammatory breast carcinoma (IBC), which is a highly malignant tumor, and may help improve the diagnosis of IBC by visualizing the extensive lymphovascular infiltration (PUBMED:18206620). In summary, the occurrence of PE on breast MRI is a specific indicator for malignant tumors and is associated with several prognostic factors indicating a more aggressive disease course.
Instruction: Does treatment of paradoxical vocal fold movement disorder decrease asthma medication use? Abstracts: abstract_id: PUBMED:27861929 Does treatment of paradoxical vocal fold movement disorder decrease asthma medication use? Objectives/hypothesis: To determine whether diagnosis and treatment of paradoxical vocal fold movement disorder (PVFMD) leads to decreased asthma medication use. Secondary objectives include determining initial rate of asthma medication use, characterizing symptom improvement, and correlating with pulmonary function testing (PFT). Study Design: Prospective observational study. Methods: Patients newly diagnosed with PVFMD at a single institution were recruited to participate. Medication questionnaires were completed at the initial visit, at the first return visit for therapy, and at 6 months. PFTs were reviewed when available. Results: Sixty-six patients were recruited; the study was closed early because findings reached significance. Fifty-six patients (85%) were taking asthma medication at presentation. Forty-four patients presented with PFTs, and two-thirds were normal. Forty-two patients completed follow-up questionnaires; 79% decreased asthma medication use (P < .001), and 82% reported symptom improvement. Seventy-seven percent of patients participated in therapy and 23% did not, with equal rates of decrease in asthma medication use between these groups. Outcomes did not vary based on PFT pattern (i.e., obstructive vs. nonobstructive, P = .75). Conclusions: Diagnosis and treatment of PVFMD lead to a decline in asthma medication use. This decrease occurred alongside symptom improvement and irrespective of PFT findings. Use of asthma medication in this patient population is high, at 85%. Level Of Evidence: 4. Laryngoscope, 127:1531-1537, 2017. abstract_id: PUBMED:24286687 Paradoxic vocal fold movement disorder. Paradoxical Vocal Fold Movement Disorder (PVFMD) is a cause of dyspnea that can mimic or occur alongside asthma or other pulmonary disease. Treatment with Laryngeal Control Therapy is very effective once the entity is properly diagnosed and contributing comorbidities are managed appropriately. In understanding the etiology of PVFMD, focus has broadened beyond psychiatric factors alone to include the spectrum of laryngeal irritants (laryngopharyngeal reflux, allergic and sinus disease, sicca, and possibly obstructive sleep apnea). The following is a discussion of the history, terminology, epidemiology, diagnosis, comorbid conditions, and treatment of this entity. abstract_id: PUBMED:32418667 Paradoxical Vocal Fold Movement: A Retrospective Analysis. Background: Paradoxical vocal fold motion (PVFM) is a disorder in which the vocal folds adduct inappropriately during inspiration resulting in episodic dyspnea and sometimes respiratory distress. Diagnosis is obtained through careful history, physical examination, flexible laryngoscopic examination with provocative maneuvers, and laryngeal electromyography. The pathogenesis and clinical findings of this disorder are not known. Objectives: To determine characteristics of patients with confirmed PVFM and to evaluate efficacy of current treatments. Methods: A retrospective chart review of the patients with PVFM who presented at a quaternary care laryngology office between January 1, 2007 and August 31, 2019 was performed. Comorbidities, laboratories tests, imaging, 24-hours pH impedance testing, and laryngeal EMG results were analyzed. Dyspnea Index questionnaire before and after treatment was used to evaluate the efficiency of treatments for PVFM. Results: The average age of the 40 patients was 30.25 years. Forty-five percent of patients were under the age of 18, and 80% were female. Twenty-five percent of patients were serious athletes, and 40% of patients were students. Sixty-five percent had a previous diagnosis of asthma. One third of patients had concurrent psychiatric diagnosis. There was no family history of PVFM in the cohort. There were no other common findings. Treatment for laryngopharyngeal reflux (LPR) was used only when there was evidence of LPR; and 93% of our 40 patients received LPR treatment. Ninety percent of patients who received botulinum toxin, voice therapy (VT), and LPR treatment had subjective improvement. Patients with just VT and LPR treatment had a 43% subjective improvement rate; and the difference was statistically significant at P of 0.021. There was no statistical difference between VT and LPR treatment versus VT or LPR treatment alone. Conclusion: Botulinum toxin, VT, and LPR treatment regimen is currently the most effective management for patients with paradoxical vocal fold movement disorder. More research is needed to determine the etiology of this disorder. abstract_id: PUBMED:28144169 The forgotten cause of stridor in the emergency department. Paradoxical Vocal Fold Movement Disorder is where the larynx exhibits paradoxical vocal cords closure during respiration, creating partial airway obstruction. Causes of vocal fold movement disorder are multifactorial, and patients describe tightness of throat, difficulty getting air in, have stridor, and do not respond to inhalers. We propose using transnasal laryngoscopy examination, which will show narrowing of vocal cords on inspiration, and The Pittsburgh Vocal Cord Dysfunction Index with a cutoff score of ≥4 to distinguish vocal fold movement disorder from asthma and other causes of stridor. Management of paradoxical vocal fold movement disorder involves a combination of pharmacological, psychological, psychiatric, and speech training. Paradoxical vocal fold movement disorder is a very treatable cause of stridor, so long as it is identified and other organic causes are excluded. abstract_id: PUBMED:19793042 Relations between symptoms, durations and triggers of attacks and concurrent diseases in paradoxical vocal fold movement disorder Objectives: To discover the relation between initial symptoms, triggers and durations of attacks of paradoxical vocal fold movement disorder (PVFMD) and concurrent laryngeal and respiratory diseases. Patients And Methods: Twenty one PVFMD patients (17 females, 4 males; mean age 50 year; range 31 to 72 years) were included in the study. Diagnosis of PVFMD was established when inspiratory and/or expiratory vocal cord adduction was seen during laryngostroboscopy. Results: Laryngeal and respiratory diseases accompanying PVFMD were laryngopharyngeal reflux (n=15; 71%), asthma (n=11; 52%), allergic rhinitis (n=6; 28%), chronic sinusitis (n=1; 4%), right vocal cord paresis (n=1; 4%) and chronic obstructive pulmonary disease (n=2; 9%), (p=0.346). The most common symptoms were inspiratory stridor (n=16; 76%), dyspnea (n=13; 60%) and cough (n=9; 42%), (p=0.346). There was no significant relation between concurrent diseases and the frequencies of symptoms (0.091). Triggers for attacks were found to be air pollutants including dust, smoke and temperature variations (n=8; 38%), exercise (n=7; 33%), perfumes and detergents (n=6; 28%), animal fur and pollen (n=4; 19%), phonation and loud speaking (n=3; 14%) and emotional stress-anxiety (n=1; 4%), (p=0.308). Durations of attacks were several minutes in 14 patients (66%), several seconds in five patients (23%) and more than one hour in two patients (9%), (p=0.338). Conclusion: Durations and triggers of attacks and initial symptoms in paradoxical vocal fold movement disorder seems not to be related with concurrent laryngeal and respiratory diseases. abstract_id: PUBMED:27697409 The Ability of Patient-Symptom Questionnaires to Differentiate PVFMD From Asthma. Objectives: Goals of the current study were to (1) conduct initial validation of a new Paradoxical Vocal Fold Movement Disorder Screening Questionnaire (PVFMD-SQ); (2) determine if symptom-based questionnaires can discriminate between patients with confirmed PVFMD and those with diagnosed uncontrolled asthma without clinical suspicion for PVFMD; and (3) determine if a new questionnaire with diagnostic specificity could be created from a combination of significant items on previously validated questionnaires. Methods: This is a prospective, case-controlled study of patients with PVFMD only and asthma only, who completed five questionnaires: Dyspnea Index, Reflux Symptom Index, Voice Handicap Index-10, Sino-Nasal Questionnaire, and PVFMD-SQ. Factor analysis was completed on the new PVFMD-SQ, and the discrimination ability of selected factors was assessed by receiver operating characteristics curve. The factor with the greatest discriminatory ability was selected to create one diagnostic questionnaire, and scores for each participant were calculated to estimate how well the factor correlated with a PVFMD or asthma diagnosis. Mean scores on all questionnaires were compared to test their discriminatory ability. Results: Patients with PVFMD showed greater voice handicap and reflux symptoms than patients with asthma. A 15-item one-factor questionnaire was developed from the original PVFMD-SQ, with a sensitivity of 89% and specificity of 73% for diagnosing asthma versus PVFMD. The combined questionnaires resulted in four factors, none of which showed discriminatory ability between PVFMD and asthma. Conclusion: This study represents the first time that a patient symptom-based screening tool has shown diagnostic sensitivity to differentiate PVFMD from asthma in a cohort of symptomatic patients. abstract_id: PUBMED:27553595 Can PFTS Differentiate PVFMD From Subglottic Stenosis? Objective: To determine if the results of routine spirometry and flow volume loops (Pulmonary Function Tests (PFTs)) can be used to distinguish Paradoxic Vocal Fold Movement Disorder (PVFMD) from Subglottic Stenosis (SGS). Methods: PFT records and medical history of 49 patients with diagnosis of PVFMD and 39 patients with SGS were compared. Groups were then subdivided to compare PFTs in patients with and without smoking history or lung disease. Results: Peak expiratory flow rate (PEFR) and Expiratory Disproportion Index (ratio of forced expiratory volume in 1 second (FEV1) over PEFR (FEV1/PEFR)) were both significantly different between patients with SGS and those with PVFMD (p<0.02). FEV1 was also significantly different, but only in the patients with no smoking history or lung disease (p<0.02). Conclusions: The findings suggest that spirometry can be a valuable tool in distinguishing SGS from PVFMD. Previous studies have shown the utility of PFTs for distinguishing SGS from pulmonary pathology, but this is the first study to exploit PFTs for distinction of SGS from PVFMD. Given that the flexible laryngoscopic exam used to diagnose PVFMD does not always visualize the subglottis completely, PFTs can be used to increase suspicion of SGS and direct further work-up. abstract_id: PUBMED:21051397 Diagnostic criteria for the classification of vocal cord dysfunction. Vocal cord dysfunction (VCD) is a syndrome characterized by paroxysms of glottic obstruction due to true vocal cord adduction resulting in symptoms such as dyspnea and noisy breathing. Since first described as a distinct clinical entity in 1983, VCD has inadvertently become a collective term for a variety of clinical presentations due to glottic disorders. Despite an increased understanding of laryngeal function over the past 25 years, VCD remains a poorly understood and characterized entity. Disparities in the literature regarding etiology, pathophysiology, and management may be due to the historic approach to this patient population. Additionally, disorders clearly not due to paroxysms of true vocal cord adduction, such as laryngomalacia, vocal cord paresis, and CNS causes, need to be differentiated from VCD. Although a psychologic origin for VCD has been established, gastroesophageal reflux disease (GERD), nonspecific airway irritants, and exercise have also been associated with intermittent laryngeal obstruction with dyspnea and noisy breathing. VCD has been repeatedly misdiagnosed as asthma; however, the relationship between asthma and VCD is elusive. There are numerous case reports on VCD, but there is a paucity of prospective studies. Following an in-depth review of the medical literature, this article examines the available retrospective and prospective evidence to present an approach for evaluation of VCD including: (1) evaluation of factors associated with VCD, (2) differential diagnosis of movement disorders of the upper airway, and (3) clinical, spirometric, and endoscopic criteria for the diagnosis. abstract_id: PUBMED:25473555 Dynamic 320-slice CT larynx for detection and management of idiopathic bilateral vocal cord paralysis. Idiopathic bilateral vocal cord paralysis (VCP) is a rare and difficult condition often undiagnosed and frequently confused with asthma and other respiratory conditions. Accurate diagnosis is crucial since 80% of cases patients require surgical intervention, such as tracheostomy or laser surgery, to relieve symptoms. The "gold standard" for diagnosing VCP has been laryngoscopy. In this case study, we demonstrate for the first time that idiopathic bilateral VCP can be accurately diagnosed by means of a novel noninvasive methodology: dynamic volume 320-slice computed tomography larynx. Three-dimensional reconstruction of laryngeal motion during the breathing cycle permitted functional assessment of the larynx showing absence of vocal cord movements. The new methodology may be valuable for noninvasive diagnosis of vocal cord movement disorders before and for follow-up after surgery. abstract_id: PUBMED:7970287 Asthmatic extrathoracic upper airway obstruction: laryngeal dyskinesis. Laryngeal dyskinesis is a functional asthma-like disorder refractory to bronchodilator regimens. Patients treated with electroencephalographic neurofeedback training demonstrate clinical improvement with reversal of their variable extrathoracic upper airway obstruction. Answer: Yes, treatment of paradoxical vocal fold movement disorder (PVFMD) leads to a decrease in asthma medication use. A prospective observational study found that 79% of patients with PVFMD decreased their use of asthma medication following diagnosis and treatment of PVFMD. This decrease in medication use occurred alongside symptom improvement and was irrespective of pulmonary function testing (PFT) findings. Notably, 85% of the patients in the study were taking asthma medication at presentation, suggesting a high initial rate of asthma medication use in this patient population (PUBMED:27861929).
Instruction: An increasing use of defunctioning stomas after low anterior resection for rectal cancer. Is this the way to go? Abstracts: abstract_id: PUBMED:30483951 The application of defunctioning stomas after low anterior resection of rectal cancer. Defunctioning stomas are frequently used by colorectal surgeons after unsatisfactory anastomosis. The primary purpose of constructing a defunctioning stoma is to prevent an anastomotic leakage or to alleviate the detrimental consequences of it. However, the construction of defunctioning stomas is not free and is associated with adverse impacts on the patient. Stoma-related complications can develop in different stages and can impair a patient's quality of life. Furthermore, one in every four to six defunctioning stomas turns into a non-closure stoma. Since no definite indications for the creation of a defunctioning stoma are available, surgeons have to carefully weigh their benefits against their adverse effects. Thus, the precise selection of patients who should undergo the creation of a defunctioning stoma is of great importance, and an alternative method for preventing anastomotic leakage is needed. abstract_id: PUBMED:23632318 An increasing use of defunctioning stomas after low anterior resection for rectal cancer. Is this the way to go? Background: The last decade there has been an increased awareness of the problem of anastomotic leakage after low anterior resection for rectal cancer, which may have led to more defunctioning stomas. In this study, current use of defunctioning stomas was assessed and compared to the use of defunctioning stomas at the time of the TME-trial together with associated outcomes. Methods: Eligible patients with rectal cancer undergoing low anterior resection were selected from the Dutch Surgical Colorectal Audit (DSCA, n = 988). Similar patients were selected from the TME-trial (n = 891). The percentages of patients with a defunctioning stoma, anastomotic leakage and postoperative mortality rates were studied. Multivariable models were used to study possible confounding on the outcomes. Results: At the time of the TME-trial, 57% of patients received a defunctioning stoma. At the time of the DSCA, 70% of all patients received a defunctioning stoma (p < 0.001). Anastomotic leakage rates were similar (11.4% and 12.1%; p = 0.640). The postoperative mortality rate differed (3.9% in the TME-trial vs. 1.1% in the DSCA; p < 0.001), but was not associated with a more frequent use of a stoma (OR 1.80, 95% CI 0.91-3.58). Conclusion: In current surgical practice, 70% of patients undergoing LAR for rectal cancer receives a defunctioning stomas. This percentage seems increased when compared to data from the TME-trial. Clinically relevant anastomotic leakage rates remained similar. Therefore, current routine use of defunctioning stomas should be questioned. abstract_id: PUBMED:34310840 Defunctioning stomas may reduce chances of a stoma-free outcome after anterior resection for rectal cancer. Aim: To investigate the conflicting consequences of faecal diversion on stoma outcomes and anastomotic leakage in anterior resection for rectal cancer, including interaction effects determined by the extent of mesorectal excision. Method: Anterior resections between 2007 and 2016 were identified using the Swedish Colorectal Cancer Registry. National Patient Registry data were added to determine stoma outcome 2 years after surgery. Tumour distance from the anal verge constituted a proxy for extent of mesorectal excision [total mesorectal excision (TME): ≤10 cm; partial mesorectal excision (PME): 13-15 cm]. With confounder-adjusted probit regression, the total effect of defunctioning stoma on permanent stoma, and the interaction effect of extent of mesorectal excision, were estimated together with the indirect effect through anastomotic leakage. Baseline risks, risk differences (RDs) and relative risks (RRs) were reported. Results: The main study cohort included 4529 patients. Defunctioning stomas influenced the absolute permanent stoma risk (TME: RD 0.11 [95% CI 0.09-0.13]; PME: RD 0.15 [95% CI 0.13-0.16]). The baseline risk was higher in TME, with a resulting greater RR in PME (2.23 [95% CI 1.43-3.02] vs 4.36 [95% CI 3.05-5.68]). The indirect reduction in permanent stoma rates, due to the alleviating effect of faecal diversion on anastomotic leakage, was small (TME: 0.89 [95% CI 0.81-0.96]; PME: 0.96 [95% CI 0.91-1.00]). Conclusion: In anterior resection for rectal cancer, defunctioning stomas may reduce chances of a stoma-free outcome. Considering leakage reduction benefits, consequences of routine diversion in TME might be fairly balanced, while this seems questionable in PME. abstract_id: PUBMED:29777268 Defunctioning Stomas Result in Significantly More Short-Term Complications Following Low Anterior Resection for Rectal Cancer. Background: Studies suggest that defunctioning stomas reduce the rate of anastomotic leakage and urgent reoperations after anterior resection. Although the magnitude of benefit appears to be limited, there has been a trend in recent years towards routinely creating defunctioning stomas. However, little is known about post-operative complication rates in patients with and without a defunctioning stoma. We compared overall short-term post-operative complications after low anterior resection in patients managed with a defunctioning stoma to those managed without a stoma. Methods: A retrospective cohort study of patients undergoing elective low anterior resection of the rectum for rectal cancer. The primary outcome was overall 90-day post-operative complications. Results: Two hundred and three patients met the inclusion criteria for low anterior resection. One hundred and forty (69%) had a primary defunctioning stoma created. 45% received neoadjuvant radiotherapy. Patients with a defunctioning stoma had significantly more complications (57.1 vs 34.9%, p = 0.003), were more likely to suffer multiple complications (17.9 vs 3.2%, p < 0.004) and had longer hospital stays (13.0 vs 6.9 days, p = 0.005) than those without a stoma. 19% experienced a stoma-related complication, 56% still had a stoma 1 year after their surgery, and 26% were left with a stoma at their last follow-up. Anastomotic leak rates were similar but there was a significantly higher reoperation rate among patients managed without a defunctioning stoma. Conclusion: Patients selected to have a defunctioning stoma had an absolute increase of 22% in overall post-operative complications compared to those managed without a stoma. These findings support the more selective use of defunctioning stomas. Study Registration: Registered at www.researchregistry.com (UIN: researchregistry3412). abstract_id: PUBMED:28399874 Early closure of defunctioning stoma increases complications related to stoma closure after concurrent chemoradiotherapy and low anterior resection in patients with rectal cancer. Background: After a low anterior resection, creating a defunctioning stoma is vital for securing the anastomosis in low-lying rectal cancer patients receiving concurrent chemoradiotherapy. Although it decreases the complication and reoperation rates associated with anastomotic leakage, the complications that arise before and after stoma closure should be carefully evaluated and managed. Methods: This study enrolled 95 rectal cancer patients who received neoadjuvant concurrent chemoradiotherapy and low anterior resection with anastomosis of the bowel between July 2010 and November 2012. A defunctioning stoma was created in 63 patients during low anterior resection and in another three patients after anastomotic leakage. Results: The total complication rate from stoma creation to closure was 36.4%. Ileostomy led to greater renal insufficiency than colostomy did and significantly increased the readmission rate (all p < 0.05). The complication rate related to stoma closure was 36.0%. Patients with ileostomy had an increased risk of developing complications (p = 0.017), and early closure of the defunctioning stoma yielded a higher incidence of morbidity (p = 0.006). Multivariate analysis revealed that a time to closure of ≤109 days was an independent risk factor for developing complications (p = 0.007). Conclusions: The optimal timing of stoma reversal is at least 109 days after stoma construction in rectal cancer patients receiving concurrent chemoradiotherapy and low anterior resection. abstract_id: PUBMED:22193436 Defunctioning stoma in low anterior resection for rectal cancer: a meta- analysis of five recent studies. Background/aims: The necessity of a defunctioning stoma in low anterior resection with total mesorectal excision for rectal cancer remains controversial. This meta-analysis evaluates the advantages of prophylactic stomas in patients undergoing low anterior resection and assesses postoperative outcomes of patients with or without a defunctioning stoma. Methodology: Studies and relevant literatures regarding the formation of defunctioning stomas after low anterior resection were searched through PubMed and Embase. The rates of anastomotic leakage and re-operation related to leakage with or without defunctioning stoma were pooled and compared using a meta-analysis. The risk ratios were calculated with 95% confidence intervals to evaluate the influence of defunctioning stomas. Results: Five recent studies including 878 patients in total were included in this meta-analysis. These studies demonstrated that defunctioning stomas significantly reduced the rate of postoperative anastomotic leakage and reoperation after low anterior resection, the pooled risk ratio was 0.34 (95% CI=0.22-0.53, p<0.00001) and 0.27 (95% CI=0.16-0.48, p<0.00001), respectively. Conclusions: Defunctioning stomas can be useful to minimize the rate of anastomotic leakage and re-operation related to leakage. Furthermore, anorectal function was not affected. However, the influence of a defunctioning stoma on long-term mortality and the quality of life in patients treated for rectal cancer is inconclusive. abstract_id: PUBMED:32908969 Risk factors for nonclosure of defunctioning stoma and stoma-related complications among low rectal cancer patients after sphincter-preserving surgery. Background: Defunctioning stoma is widely used to reduce anastomotic complications in rectal cancer surgery. However, the complications of stoma and stoma reversal surgery should not be underestimated. Furthermore, in some patients, stoma reversal failed. Here, we investigated the complications of defunctioning stoma surgery and subsequent reversal surgery and identify risk factors associated with the failure of getting stoma reversed. Methods: In total, 154 patients who simultaneously underwent low anterior resection and defunctioning stoma were reviewed. Patients were divided into two groups according to whether their stoma got reversed or not. The reasons that patients received defunctioning stoma and experienced stoma-related complications and the risk factors for failing to get stoma reversed were analysed. Results: The mean follow-up time was 47.54 (range 4.0-164.0) months. During follow-up, 19.5% of the patients suffered stoma-related long-term complications. Only 79 (51.3%) patients had their stomas reversed. The morbidity of complications after reversal surgery was 45.6%, and these mainly consisted of incision-related complications. Multivariate analyses showed that pre-treatment comorbidity (HR = 3.17, 95% CI 1.27-7.96, P = 0.014), postoperative TNM stage (HR = 2.55, 95% CI 1.05-6.18, P = 0.038), neoadjuvant therapy (HR = 2.75, 95% CI 1.07-7.05, P = 0.036), anastomosis-related complications (HR = 4.52, 95% CI 1.81-11.29, P = 0.001), and disease recurrence (HR = 24.83, 95% CI 2.90-213.06, P = 0.003) were significant independent risk factors for a defunctioning stoma to be permanent. Conclusions: Defunctioning stoma is an effective method to reduce symptomatic anastomotic leakage, but the stoma itself and its reversal procedure are associated with high morbidity of complications, and many defunctioning stomas eventually become permanent. Therefore, surgeons should carefully assess preoperatively and perform defunctioning stomas in very high risk patients. In addition, doctors should perform stoma reversal surgery more actively to prevent temporary stomas from becoming permanent. abstract_id: PUBMED:27466522 Comparison of Transumbilical and Conventional Defunctioning Ileostomy in Laparoscopic Anterior Resections for Rectal Cancer. Background/aim: Laparoscopic surgery has made possible anterior resections with small incisions suitable for creating stomas. We retrospectively compared surgical results and stomal complications between transumbilical defunctioning ileostomy (TDI) and conventional defunctioning ileostomy (CDI) in laparoscopic anterior resections for rectal cancer. Patients And Methods: We compared patients who underwent laparoscopic anterior resection with TDI (n=47) with those undergoing CDI (n=27) for rectal cancer between February 2011 and January 2015. Results: For the initial operations, the TDI group had significantly less intraoperative blood loss (30.3 ml vs. 117.0 ml; p=0.014). For stomal closure, the TDI group experienced significantly fewer wound infections (2 vs. 8 cases; p=0.002) and bowel obstructions (none vs. 3 cases; p=0.039). No significant differences in stomal complication rates were observed. Conclusion: TDI is associated with better surgical results and fewer complications than CDI after laparoscopic anterior resection for rectal cancer. abstract_id: PUBMED:24222144 Transumbilical defunctioning ileostomy: A new approach for patients at risks of anastomotic leakage after laparoscopic low anterior resection. Background: The use of a protective defunctioning stoma in rectal cancer surgery has been reported to reduce the rates of reoperation for anastomotic leakage, as well as mortality after surgery. However, a protective defunctioning stoma is not often used in cases other than low rectal cancer because of the need for stoma closure later, and hesitation by patients to have a stoma. We outline a novel and patient-friendly procedure with an excellent cosmetic outcome. This procedure uses the umbilical fossa for placement of a defunctioning ileostomy followed by a simple umbilicoplasty for ileostomy closure. Patients And Methods: This study included a total of 20 patients with low rectal cancer who underwent a laparoscopic low anterior resection with defunctioning ileostomy (10 cases with a conventional ileostomy in the right iliac fossa before March 2012, and 10 subsequent cases with ileostomy at the umbilicus) at the Jikei University Hospital in Tokyo from August 2011 to January 2013. The clinical characteristics of the two groups were compared: operative time, blood loss, length of hospital stay and postoperative complications of the initial surgery, as well as the stoma closure procedure. Results: There were no differences between the groups in the median operative time for initial surgery (248 min vs. 344 min), median blood loss during initial surgery (0 ml vs. 115 ml), and median hospital stay after initial surgery (13 days vs. 16 days). Complication rates after the initial surgery were similar. There were no differences between the groups in median operative time for stoma closure (99 min vs. 102 min), median blood loss during stoma closure (7.5 ml vs. 10 ml), and median hospital stay after stoma closure (8 days in both groups). Complications after stoma closure such as wound infection and intestinal obstruction were comparable. Thus, no significant differences in any factor were found between the two groups. Conclusion: The transumbilical protective defunctioning stoma is a novel solution to anastomotic leakage after laparoscopic rectal cancer surgery, with patient-friendliness as compared to conventional procedures in light of the cosmetic outcome. abstract_id: PUBMED:25683994 Risk factors of permanent stomas in patients with rectal cancer after low anterior resection with temporary stomas. Purpose: The aim of this study was to identify risk factors influencing permanent stomas after low anterior resection with temporary stomas for rectal cancer. Materials And Methods: A total of 2528 consecutive rectal cancer patients who had undergone low anterior resection were retrospectively reviewed. Risk factors for permanent stomas were evaluated among these patients. Results: Among 2528 cases of rectal cancer, a total of 231 patients had a temporary diverting stoma. Among these cases, 217 (93.9%) received a stoma reversal. The median period between primary surgery and stoma reversal was 7.5 months. The temporary and permanent stoma groups consisted of 203 and 28 patients, respectively. Multivariate analysis showed that independent risk factors for permanent stomas were anastomotic-related complications (p=0.001) and local recurrence (p=0.001). The 5-year overall survival for the temporary and permanent stoma groups were 87.0% and 70.5%, respectively (p<0.001). Conclusion: Rectal cancer patients who have temporary stomas after low anterior resection with local recurrence and anastomotic-related complications may be at increased risk for permanent stoma. Answer: The use of defunctioning stomas after low anterior resection for rectal cancer has been a topic of debate among surgeons. While defunctioning stomas are intended to prevent anastomotic leakage or mitigate its consequences, their application is not without adverse effects. Stoma-related complications can impair patient quality of life, and a significant proportion of these stomas may become permanent, with one in every four to six patients not undergoing closure (PUBMED:30483951). Recent data indicates that the use of defunctioning stomas has increased, with 70% of patients receiving a stoma compared to 57% during the TME-trial period. Despite this increase, anastomotic leakage rates have remained similar, leading to questions about the routine use of defunctioning stomas (PUBMED:23632318). Moreover, defunctioning stomas may actually reduce the chances of a stoma-free outcome after anterior resection for rectal cancer. The baseline risk of permanent stoma is higher in total mesorectal excision (TME), and the relative risk is greater in partial mesorectal excision (PME). The reduction in permanent stoma rates due to the alleviating effect of faecal diversion on anastomotic leakage is small, suggesting that the routine use of defunctioning stomas, especially in PME, may not be justified (PUBMED:34310840). Additionally, patients with defunctioning stomas have been found to experience more short-term complications, multiple complications, and longer hospital stays compared to those without a stoma. This supports a more selective use of defunctioning stomas (PUBMED:29777268). While defunctioning stomas can reduce the rate of anastomotic leakage and reoperation, their influence on long-term mortality and quality of life remains inconclusive (PUBMED:22193436). Furthermore, the morbidity associated with stoma and its reversal surgery is high, and many defunctioning stomas eventually become permanent due to various risk factors (PUBMED:32908969). In conclusion, while defunctioning stomas can be beneficial in certain high-risk cases, their increasing use after low anterior resection for rectal cancer should be carefully considered. Surgeons must weigh the benefits against the potential for increased complications, longer hospital stays, and the risk of permanent stoma formation. The decision to use a defunctioning stoma should be made on a case-by-case basis, taking into account the individual patient's risk factors and the potential impact on their quality of life.
Instruction: Is Bax/Bcl-2 ratio considered as a prognostic marker with age and tumor location in colorectal cancer? Abstracts: abstract_id: PUBMED:25864810 Is Bax/Bcl-2 ratio considered as a prognostic marker with age and tumor location in colorectal cancer? Background: Bax and Bcl-2 are the major members of Bcl-2 family whose play a key role in tumor progression or inhibition of intrinsic apoptotic pathway triggered by mitochondrial dysfunction. Therefore, the balance between pro- and anti-apoptotic members of this family can determine the cellular fate. Methods: In this study, the relative level of mRNA expression of Bax and Bcl-2 genes was determined using RNA extraction, cDNA synthesis and RT-qPCR technique from 22 tumoral tissues and adjacent non-tumoral tissues from adenocarcinoma colorectal cancer. Results: The potential prognostic and predictive significance of Bax and Bcl-2 gene expression and Bax/Bcl-2 ratio were demonstrated in colorectal cancer. The significant correlation between qPCR data and different clinicopathologic parameters of colorectal carcinoma, including age, gender, tumor size, tumor stage, tumor location, and tumor differentiation was also examined. Interestingly, no significant correlation was seen between Bax and Bcl-2 expressions and clinicopathological parameters of colorectal cancer. However, Bax/Bcl-2 ratio was statistically correlated with age and tumor location. Patients with age above 50 showed decreased levels of Bax/Bcl-2 ratio. Moreover, the Bax/Bcl-2 ratio was significantly lower in tumors resected from colon compared to sigmoid colon, rectosigmoid and rectum tumors. Conclusion: This study indicates a significant correlation between age and tumor location with Bax/Bcl-2 expression ratio, suggesting predictive value as a potential molecular marker of colorectal cancer. abstract_id: PUBMED:22811811 Bax expression is a candidate prognostic and predictive marker of colorectal cancer. Objective: Since the anti-tumor activity of 5-fluorouracil (5-FU) is due to induction of apoptosis, we assessed the value of expression of key apoptotic molecules (Bax, Bcl-2 and p53) in predicting the efficacy of 5-FU therapy for colorectal adenocarcinomas (CRCs). Methods: Archival tissues of CRCs from 56 patients who received a complete regimen of 5-FU-based chemotherapy after surgery, and 56 patients matched for age, gender, ethnicity, tumor stage, tumor location, and tumor differentiation who had undergone only surgery (without any pre- or post-surgery therapy), were evaluated for immunophenotypic expression of Bax, Bcl-2, and p53. Also, these CRCs were evaluated for Bax mutations. The predictive capacity or prognostic value of these markers was assessed by estimating overall survival. Results: The majority of low Bax expressing CRCs have exhibited mutations at the G (8) tract. There was no significant difference in overall survival rates between the categories of surgery alone and 5-FU-treated patients. However, a better survival was observed for patients who received chemotherapy when their CRCs had low Bax/Bcl2 ratio (HR, 1.55; 95% CI: 1.46-31.00). Patients who received surgery alone and whose CRCs lacked Bax expression had 5.33 times higher mortality than those with high Bax expression (95% CI: 1.78-15.94), when controlled for tumor stage and other confounders. Bcl-2 and nuclear p53 accumulation had no predictive value in either patient group. Conclusion: These findings are the first to demonstrate that high Bax expression is a good prognosticator for patients who underwent surgery alone, and that patient with low Bax/Bcl-2 expression ratio benefit from 5-FU-based adjuvant therapies. abstract_id: PUBMED:29374142 The BAX/BAK-like protein BOK is a prognostic marker in colorectal cancer. The intrinsic or mitochondrial apoptosis pathway is controlled by the interaction of antiapoptotic and pro-apoptotic members of the BCL-2 protein family. Activation of this death pathway plays a crucial role in cancer progression and chemotherapy responses. The BCL-2-related ovarian killer (BOK) possesses three BCL-2 homology domains and has been proposed to act in a similar pro-apoptotic pathway as the pro-apoptotic proteins BAX and BAK. In this study, we showed that stage II and III colorectal cancer patients possessed decreased levels of BOK protein in their tumours compared to matched normal tissue. BOK protein levels in tumours were also prognostic of clinical outcome but increased BOK protein levels surprisingly associated with earlier disease recurrence and reduced overall survival. We found no significant association of BOK protein tumour levels with ER stress markers GRP78 or GRP94 or with cleaved caspase-3. In contrast, BOK protein levels correlated with Calreticulin. These data indicate BOK as a prognostic marker in colorectal cancer and suggest that different activities of BOK may contribute to cancer progression and prognosis. abstract_id: PUBMED:10334520 Analysis of the p53/BAX pathway in colorectal cancer: low BAX is a negative prognostic factor in patients with resected liver metastases. Purpose: To determine the prognostic value of the central downstream apoptosis effector BAX in relation to its upstream regulator p53 in R0-resected hepatic metastases of colorectal cancer. Patients And Methods: Retrospective analysis of 41 patients who underwent potentially curative resection of liver metastases from colarectal cancer was performed. Tumor DNA was screened for p53 mutations by single-stranded conformational polymorphism polymerase chain reaction and for BAX frameshift mutations by fragment length analysis. Protein expression of BAX, p21, and p53 was investigated by immunohistochemistry. Results: Overall median survival was 40.2 months. Tumors with BAX frameshift mutations were considered microsatellite mutator phenotype-positive and were excluded from further prognostic analyses. Patients with high BAX protein expression had a median survival of 53.6 months compared with 35.4 months for patients with low BAX expression (P < .05). The negative prognostic value of low BAX expression was more evident in those patients with wild-type p53 (median survival, 54.0 v 23.3 months for BAX-negative tumors; P < .01). Low BAX expression was an independent negative prognostic marker in multivariate regression analysis for all patients independent of the p53 status (relative risk, 3.03, P = .03), especially for p53 wild-type tumors (relative risk, 8.21; P = .0095). Conclusion: We conclude that low BAX expression is an independent negative prognostic marker in patients with hepatic metastases of colorectal cancer. The best survival was seen in patients with an intact p53-to-BAX pathway; ie, wild-type p53- and BAX-positive tumors. Thus, analysis of apoptosis signaling pathways (here, p53 in concert with its downstream death effector, BAX) might yield more prognostic power in future studies as compared with analysis of single genes such as p53 alone. abstract_id: PUBMED:19453036 Comparative analysis of prognostic significance of molecular markers of apoptosis with clinical stage and tumor differentiation in patients with colorectal cancer: a single institute experience. Background/aims: The most important parameter determining the outcome of colorectal cancer (CRC) is the presence of metastases, which occur in 45-50% of all cases. The balance between proliferation and apoptosis is a key factor for tumor growth, and thus--for metastasis. Evaluation of markers for proliferation and apoptosis could therefore be helpful in predicting tumor behavior in early stage of carcinogenesis. Methodology: Seventy-two biopsies from cases of colorectal cancer (CRC) were immunostained for the proliferation/apoptosis-related proteins Bcl-2, Bax and p53. The resected specimens were also subjected for routine pathologic assessment as part of Tumor, Node and Metastases (TNM) staging. Results: Comparing the marker protein expression with standard prognostic factors such as clinical stage and grade of differentiation revealed a lack of correlation between markers and standard prognostic factors in cases where clinical stage favors good prognosis (I and II stage). We found lack of correlation in 52% of diagnosed patients by tumor grade and 46% in patients by clinical stage. Conclusions: Co-expression of Bax with p53 protein is associated with poor clinical outcome, especially in cases without concomitant expression of bcl-2. The blocked apoptosis and inability of the organism to "liquidate" the neoplastic transformation of the cell (loss/mutation of p53), which we establish in our study in the half the patients with high and moderately differentiated carcinoma and separately in 46% of the patients with favorable prognosis by clinical stage is a reason for fast progression, too. The presence of a low correlation between the staging and the results of the molecular profiling suggest that the staging system needs to improve to address more precisely the issues of therapeutic options and patient survival. Using a panel of markers rather than a single marker is a step in this direction. abstract_id: PUBMED:10023006 Prognostic significance of Bcl-2, Bcl-xL/S, Bax and Bak expressions in colorectal carcinomas. The immunohistochemical expressions of the apoptosis-related proteins Bcl-2, Bcl-xL/S, Bax and Bak were investigated in tumor specimens selected from 58 consecutive patients undergoing surgery for advanced colorectal carcinoma. The expression patterns in 50 specimens of adjacent normal colonic mucosa were also examined. In the normal colonic mucosa, Bcl-2-positive epithelial cells tended to be located at the base of the crypts, while the Bcl-xL/S-, Bax- and Bak-positive epithelial cells tended to be located at the luminal surface. The intracellular expression patterns of Bcl-2 and Bax were diffuse cytoplasmic, whereas those of Bcl-xL/S and Bak were granular cytoplasmic. In the adenocarcinomas, the intracellular expression patterns of all antibodies were diffuse cytoplasmic, and the percentages of Bcl-2-, Bcl-xL/S-, Bax- and Bak-positive cases (>20% of cancer cells labeled) were 29%, 43%, 45% and 69%, respectively. Bax expression was significantly correlated with less lymph vessel invasion (p=0.02) and less depth of invasion (p=0.04). In relation to prognosis (5-year-survival), the patients with Bax-positive tumors had significantly better prognoses than the patients who had Bax-negative tumors (p<0.05). However, the Bcl-2, Bcl-xL/S and Bak expressions were not related to any clinicopathological factors examined. Thus, Bax expression may be an additional prognostic marker in colorectal carcinomas. abstract_id: PUBMED:19167459 Apoptosis signaling proteins as prognostic biomarkers in colorectal cancer: a review. Colorectal cancer is a leading cause of cancer related mortality in the Western world. In recent years, combination 5-fluorouracil based adjuvant chemotherapy as first line treatment of this disease has led to improved disease free and overall survival. However drug resistance, both innate and acquired, remains an obstacle in the effective treatment of this disease. Apoptotic pathways are frequently altered in both tumor progression and drug resistance; therefore proteins associated with this pathway may have potential as prognostic biomarkers for this disease. Identification of clinical biomarkers that are able to identify patients who are more likely to respond to specific chemotherapy will lead to more personalized, effective, and less toxic therapy. This review focuses on the current status of apoptosis related proteins as biomarkers for colorectal cancer and discusses the possible application of systems approaches in this context. abstract_id: PUBMED:35108639 Downregulation of CPT2 promotes proliferation and inhibits apoptosis through p53 pathway in colorectal cancer. Background: Downregulation of Carnitine palmitoyltransferase-2 (CPT2) has been shown to be highly associated with the progression of several cancers, but little known about its expression, biological functions and mechanisms in colorectal cancer (CRC). Methods: Bioinformatics analysis of The Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO) data sets was used to explore the expression of CPT2, the relationship between CPT2 expression and clinicopathologic features, as well as the overall survival of CRC. Cox's proportional hazards regression model was used to analyze independent prognostic factors of CRC. In vitro, CRC tissues were analyzed by RT-qPCR, IHC, IF and western blotting to verify CPT2 expression. Colony formation, CCK-8, cell cycle, apoptosis, transwell and wound healing assays were performed to examine the functions of CPT2 in CRC. In vivo, nude mouse xenograft experiment was used to further examine the effect of CPT2 on tumorigenesis. Furthermore, gene set enrichment analysis (GSEA) was conducted to explore the downstream pathway of CPT2. The regulation of p53 pathway by CPT2 was verified by RT-qPCR and Western blotting. Results: CPT2 expression was frequently downregulated in CRC and correlated with poor prognosis. Low CPT2 expression was significantly associated with age, lymph node metastasis, distant metastasis and TMN stage. Univariate and multivariate analysis indicated that low CPT2 expression was an independent prognostic factor for poorer overall survival. Functionally, overexpression of CPT2 in CRC cells induced growth suppression, cell cycle arrest at the G1 phase, enhanced apoptosis and reduced cell migration and invasion. Conversely, knockdown of CPT2 contributed to cell proliferation, migration and invasion, increased the proportion of S phase cells, decreased the proportion of G1 phase cells and inhibited apoptosis. Mechanistically, we found that CPT2 overexpression can increase p53 expression by activating p-p53, leading to p21, Bax, cleaved caspase-9, cleaved caspase-3 and cleaved PARP activation and Bcl2, MDM2 deactivation, thereby inhibiting tumor proliferation and promoting apoptosis. CPT2 knockdown yielded opposite results. Conclusion: These findings suggest that CPT2 may be a novel prognostic marker of CRC and downregulation of CPT2 can promote proliferation and inhibit apoptosis through p53 pathway in CRC. Strategies targeting CPT2 may be developed as therapies for CRC. abstract_id: PUBMED:11775833 Change of apoptotic status in the human colorectal adenoma-carcinoma sequences and its correlation with carcinogenesis and prognosis. Objective: To assess apoptotic status during the development of colorectal cancer and its prognostic value. Methods: The apoptotic frequency of 168 fresh adenocarcinoma specimens and primary cultured cells at 2, 12, 24 and 48 hours (9 normal mucosa, 4 adenomas and 9 adenocarcinomas) were measured by flow cytometry (FCM). Apoptotic indices (AI) in situ for 25 adenomas and 77 adenocarcinomas were visualized by TdT-mediated dUTP nick end labeling (TUNEL), Ki-s5 labeling indices (KI), bcl-2, bax, waf1 and p53 were immunostained with ABC method. Results: The culture-related apoptosis at 24-48 hours in vitro was obviously decreased in cultured tumor cells when compared with mucosa cells. Spontaneous apoptosis in situ occurred more frequently in tumor with aneuploid type at late stage. There was positive relationship between apoptosis and proliferative activity, determined by both TUNEL and FCM methods. The well-differentiated or early stage lesions with intensive bcl-2/bax expression were significantly more likely to have low AI. p53 accumulation and waf1 depression were mainly related to KI, whereas bax and waf1 overexpression led to a comparatively higher AI/KI ratio. bcl-2 and KI were found to be independent risk factors. Conclusions: The data suggest that the depressed susceptibility to inductive apoptosis may contribute to the initial phase of tumorigenesis, and spontaneous apoptosis in vivo may serve as a marker of tumor progression. The bcl-2 and KI may be valuable in predicting prognosis in colorectal cancer. abstract_id: PUBMED:28656255 Low expression of microRNA-320b correlates with tumorigenesis and unfavorable prognosis in glioma. Accumulating evidence demonstrates that dysregulated microRNAs (miRNAs) play a critical role in tumorigenesis and progression of various cancers. miR-320b, a member of miR‑320 family, was revealed downregulated in numerous human cancers, including nasopharyngeal carcinoma and colorectal cancer. However, the function of miR‑320b in human glioma remained poorly defined. In this study, we report that miR‑320b was lowly expressed in glioma tissues and cell lines in contrast with controls, being closely correlated with histological malignancy of glioma. Furthermore, patients with low expression of miR‑320b were associated with poor prognostic outcomes. In vitro functional assays indicated that overexpression of miR‑320b could markedly enhance cell apoptosis rate and suppress cell proliferation, migration and invasion. miR-320b mimic impaired cell cycle and metastasis through inhibiting the expression of G1/S transition key regulator Cyclin D1 as well as decreasing the expression level of MMP2 and MMP9. Additionally, upregulation of miR‑320b could markedly promote apoptosis by increasing the level of Bax and reducing Bcl-2 expression in glioma. Taken together, our data suggested that miR‑320b might serve as a novel prognostic marker and potential therapeutic target for glioma. Answer: Yes, the Bax/Bcl-2 ratio is considered as a prognostic marker with age and tumor location in colorectal cancer. A study indicated that there is a significant correlation between the Bax/Bcl-2 expression ratio and both age and tumor location in colorectal cancer. Patients above the age of 50 showed decreased levels of the Bax/Bcl-2 ratio. Additionally, the Bax/Bcl-2 ratio was significantly lower in tumors resected from the colon compared to those from the sigmoid colon, rectosigmoid, and rectum, suggesting its predictive value as a potential molecular marker of colorectal cancer (PUBMED:25864810). Moreover, other studies have supported the prognostic significance of Bax expression in colorectal cancer. For instance, high Bax expression was shown to be a good prognosticator for patients who underwent surgery alone, and patients with a low Bax/Bcl-2 expression ratio benefited from 5-FU-based adjuvant therapies (PUBMED:22811811). Another study found that low BAX expression was an independent negative prognostic marker in patients with hepatic metastases of colorectal cancer (PUBMED:10334520). Additionally, Bax expression was significantly correlated with less lymph vessel invasion and less depth of invasion, and patients with Bax-positive tumors had significantly better prognoses than those with Bax-negative tumors (PUBMED:19453036). These findings collectively suggest that the Bax/Bcl-2 ratio, as well as the expression levels of Bax alone, have prognostic significance in colorectal cancer and are associated with various clinicopathological parameters, including age and tumor location.
Instruction: Is daidzein non-metabolizer a high risk for prostate cancer? Abstracts: abstract_id: PUBMED:12411567 Is daidzein non-metabolizer a high risk for prostate cancer? A case-controlled study of serum soybean isoflavone concentration. Background: It has been postulated that soybean isoflavones act as inhibitory factors in prostate cancer. However, to date there have been no case-controlled clinical studies carried out to compare the circulating concentrations of isoflavones in prostate cancer patients and control subjects. Methods: The serum levels of genistein, daidzein and equol were determined and compared in 253 experimental subjects (141 prostate cancer patients and 112 cancer-free controls). Results: The serum concentrations of isoflavones were compared in hospitalized and non-hospitalized subjects and for both the prostate cancer patients and the controls the concentrations were lower in the hospitalized subjects. The serum concentrations of genistein and daidzein were compared in subjects <70 years of age and subjects >/=70 years old and the levels were significantly lower in the younger group. Contrary to our expectation, comparison of the patient group and the control group revealed the serum concentrations of isoflavones to be higher in the patient group. Daidzein non-metabolizers were compared in the hospitalized experimental subjects of the patient group and the control group and they were significantly more common in the patient group. The poorly differentiated cancer patient group included a significantly lower percentage of daidzein metabolizers. Conclusions: The above findings revealed that equol itself or some unknown factor regulating the metabolism of daidzein is deeply involved in the biology of prostate cancer. Future studies are urgently needed to compare the incidence of daidzein metabolizers among various countries. abstract_id: PUBMED:15734719 Gut bacterial metabolism of the soy isoflavone daidzein: exploring the relevance to human health. The indigenous intestinal microflora are involved in a variety of processes within the human body, and are important for maintaining host health. As such, interindividual differences in the ability to harbor certain intestinal bacteria might be associated with interindividual differences in health and/or disease susceptibility. In the last decade there has been considerable interest in phytoestrogen intakes in relation to human health. Daidzein, an isoflavone phytoestrogen found in soy, is metabolized to equol and O-desmethylangolensin (O-DMA) by intestinal bacteria. The specific bacterium/bacteria responsible for equol and O-DMA production in humans have yet to be identified definitively, but in vitro and animal studies have suggested that equol and O-DMA are more biologically active than their precursor daidzein. Interestingly, substantial interindividual differences in daidzein metabolism exist; following soy or daidzein consumption, approximately 30%-50% of the human population produce equol, and approximately 80%-90% produce O-DMA. Observational and intervention studies in humans have suggested that the ability to produce equol and O-DMA may be associated with reduced risk of certain diseases including breast and prostate cancers. However, relatively few studies have been conducted to date. In this review, we discuss the available evidence for a relationship between daidzeinmetabolizing phenotypes and human health, and suggest potential mechanisms for some of the reported relationships. abstract_id: PUBMED:20924663 Urinary phytoestrogens and risk of prostate cancer in Jamaican men. We evaluated the relationship of spot urinary concentrations of phytoestrogens with total prostate cancer and tumor grade in a hospital-based case-control study in Jamaica. Urine samples were analyzed for genistein, daidzein, equol (isoflavones), and enterolactone (lignan) among newly diagnosed cases (n = 175) and controls (n = 194). Urinary concentrations of enterolactone (lignan) were higher among cases. There were no significant differences in median concentrations of isoflavone excretion. Compared with non-producers of equol (reference tertile), men who produced equol were at decreased risk of total prostate cancer (tertile 2: OR, 0.42; CI, 0.23-0.75) (tertile 3: OR, 0.48; CI, 0.26-0.87) (p (trend), 0.020) and high-grade disease (tertile 2: OR, 0.31; CI, 0.15-0.61) (tertile 3: OR, 0.29; CI, 0.13-0.60) (p (trend), 0.001). Higher concentrations of enterolactone were positively related to total prostate cancer (OR, 1.85; CI, 1.01-3.44; p (trend), 0.027) as well as high-grade disease (OR, 2.46; CI, 1.11-5.46; p (trend), 0.023). There were no associations between urinary excretion of genistein and daidzein with risk of prostate cancer. Producers of equol (isoflavone) may be at reduced risk of total- and high-grade prostate cancer whereas enterolactone may increase the likelihood of disease. abstract_id: PUBMED:23870911 Combination of low dose of genistein and daidzein has synergistic preventive effects on isogenic human prostate cancer cells when compared with individual soy isoflavone. The reduced incidence of prostate cancer (PCa) in Asia countries has been attributed to high soy diets, and major soy isoflavones, in particular daidzein and genistein, are thought to be the source of the beneficial and anti-cancer effects of soy foods. However, attention has been drawn to the safety of using high levels of soy isoflavones in humans, which is especially the concern for consumers taking regular soy isoflavone dietary supplements. The main objective of this study is thus to identify a soy isoflavone combination with lower levels of daidzein and genistein to be a more efficacious and safer chemo-preventive agent for PCa. The anticancer effects of daidzein and genistein, and their combinations on early-stage androgen-dependent PCa cells (LNCaP) and bone metastatic LNCaP-derivative PCa cells (C4-2B) were compared. Cells were treated with varying concentrations of daidzein, genistein (25-200 μM) or their combinations (25 or 50 μM) and cell proliferation, apoptosis, cell cycles and cellular uptakes of the isoflavones were measured after 48 h. Daidzein and genistein showed a synergistic effect on inhibiting cell proliferation and inducing apoptosis of both PCa cells. Twenty-five μM daidzein/50 μM genistein and 50 μM daidzein/50 μM genistein significantly increased the apoptotic effects on C4-2B cells although they did not show any effect when used individually. Except 50 μM daidzein/50 μM genistein, all other combinations had no impacts on cell cycles. For treatment with soy isoflavone combination, genistein was always better taken up than daidzein by both LNCaP and C4-2B cells. abstract_id: PUBMED:7656220 Genetic damage and the inhibition of 7,12-dimethylbenz[a]anthracene-induced genetic damage by the phytoestrogens, genistein and daidzein, in female ICR mice. Populations consuming soybeans have reduced rates of breast, colon and prostate cancer possibly due, in part, to the presence in soybeans of two estrogenic isoflavones, genistein and daidzein. This study investigated the genotoxicity of these soya isoflavones and their interactions with 7,12-dimethylbenz[a]anthracene (DMBA)-induced sister chromatid exchanges (SCE) in bone marrow cells and DNA adduct formations in liver and mammary glands of mice. Groups of female ICR mice were pretreated i.p. with daidzein and/or genistein (10-20 mg/kg per day for 6 days or 50 mg/kg per 12 h for 3 days) or with the solvent, dimethylsulfoxide (DMSO). The mice were implanted with bromodeoxyuridine (BrdU) tablets s.c., and treated with DMBA (50 mg/kg) i.p. and colchicine (4 mg/kg) i.p. 24, 23, and 2 h before sacrifice, respectively. In bone marrow cells. DMBA alone induced 11.73 +/- 1.42 SCE/cell compared to 4.35 +/- 0.83 SCE/cell in the DMSO treated controls (P = 0.001). DMBA induced 20% fewer SCE (P < 0.05) in mice pretreated with daidzein, genistein or a combination of genistein and daidzein (6 x 20 mg/kg per day for 6 days) when compared to mice that received no pretreatments. Genistein at 50 mg/kg per 12 h for 3 days also inhibited DMBA-induced SCE by 20%. However, treatment for 3 days with 50 mg/kg per 12 h of genistein or daidzein alone, or a combination of daidzein plus genistein (without DMBA treatment) also induced more SCE than treatment with only the solvent (DMSO, P < 0.05). Pretreatment with both the low and the high doses of daidzein plus genistein or the high dose of genistein reduced the replication index of bone marrow cells when compared to pretreatment with DMSO (P < 0.05). Pretreatment with genistein reduced DMBA-induced DNA adduct formation by 34%, but this was only marginally significant (P = 0.08) due to the large inter-individual variability in adduct levels. These results show that genistein and daidzein suppress SCE and possibly DNA adduct formation induced by the known carcinogen, DMBA. This response to a low dose isoflavone exposure may be partly responsible for the protective effect against endocrine cancers of soya consumption. abstract_id: PUBMED:12038603 Determination of the isoflavonoids genistein and daidzein in biological samples by gas chromatography-mass spectrometry. Background: The marked differences in the incidences of both breast and prostate cancer between the East and the West have been attributed to habitual diet. Traditionally, Japanese and Far Eastern people in general consume large quantities of soya and soya-derived foodstuffs. Diphenolic soya phytoestrogens have weak oestrogenic and anti-oestrogenic properties and have been implicated in preventing or limiting the early processes associated with breast and prostate carcinogenesis. Methods: We have developed a gas chromatography-mass spectrometry procedure that is suitable for measurement of the phytoestrogens daidzein and genistein in serum, urine and tissue samples. Results: In serum samples of Japanese subjects mean (standard deviation) concentrations of daidzein [men, 281 (375.5) nmol/L; women, 246 (369.4) nmol/L] and genistein (men, 493 (604.4) nmol/L; women, 502 (717.6) nmol/L] were approximately 15 times higher than the mean levels achieved in British men [daidzein, 18.2 (20.4) nmol/L; genistein, 34.1 (27.2) nmol/L] and women [daidzein, 13.5 (11.6) nmol/L; genistein, 30.1 (31.2) nmol/L]. In pharmacokinetic studies of British subjects, maximum levels of daidzein and genistein were achieved within 6-8 h after the consumption of a cereal bar containing 20 mg of soya isoflavonoids; these levels were very similar to the mean levels achieved in normal Japanese subjects. Unlike serum, the mean daidzein concentration in urine from British subjects was higher than the mean genistein concentration (1.66 and 0.72 micromol per 24 h, respectively); following soy supplementation, urinary isoflavonoid levels were increased at least 10-fold. Conclusions: Serum daidzein and genistein concentrations are lower in British subjects than in Japanese subjects; this may be due to dietary differences. abstract_id: PUBMED:29971774 Circulating isoflavone and lignan concentrations and prostate cancer risk: a meta-analysis of individual participant data from seven prospective studies including 2,828 cases and 5,593 controls. Phytoestrogens may influence prostate cancer development. This study aimed to examine the association between prediagnostic circulating concentrations of isoflavones (genistein, daidzein, equol) and lignans (enterolactone and enterodiol) and the risk of prostate cancer. Individual participant data were available from seven prospective studies (two studies from Japan with 241 cases and 503 controls and five studies from Europe with 2,828 cases and 5,593 controls). Because of the large difference in circulating isoflavone concentrations between Japan and Europe, analyses of the associations of isoflavone concentrations and prostate cancer risk were evaluated separately. Prostate cancer risk by study-specific fourths of circulating concentrations of each phytoestrogen was estimated using multivariable-adjusted conditional logistic regression. In men from Japan, those with high compared to low circulating equol concentrations had a lower risk of prostate cancer (multivariable-adjusted OR for upper quartile [Q4] vs. Q1 = 0.61, 95% confidence interval [CI] = 0.39-0.97), although there was no significant trend (OR per 75 percentile increase = 0.69, 95 CI = 0.46-1.05, ptrend = 0.085); Genistein and daidzein concentrations were not significantly associated with risk (ORs for Q4 vs. Q1 = 0.70, 0.45-1.10 and 0.71, 0.45-1.12, respectively). In men from Europe, circulating concentrations of genistein, daidzein and equol were not associated with risk. Circulating lignan concentrations were not associated with the risk of prostate cancer, overall or by disease aggressiveness or time to diagnosis. There was no strong evidence that prediagnostic circulating concentrations of isoflavones or lignans are associated with prostate cancer risk, although further research is warranted in populations where isoflavone intakes are high. abstract_id: PUBMED:17252558 By modulating androgen receptor coactivators, daidzein may act as a phytoandrogen. Background: To identify the phytoandrogen from phytohormone, we established an assay to assess the androgenicity of phytoestrogens by using androgen receptor (AR) cofactors to modulate the AR transcriptional activity. Methods: A Dual-luciferase reporter assay was used to evaluate the transcriptional activity of AR stimulated by the phytoestrogen daidzein. Results: The Dual luciferase data showed that daidzein can enhance androgenic effects in AR negative PC-3 cells cotransfected with AR and AR cofactors. In AR and ARA70 positive LNCaP cells, daidzein can enhance ARA55-mediated induction of AR transcriptional activity. With increasing amounts of transfected ARA55, AR transcriptional activity was enhanced by daidzein in a dose-dependent manner. Conclusions: Although daidzein is a phytoestrogen, it can create androgenic effects when cells are cotransfected with AR cofactors. When screening for phytoandrogens, the modulating effects of AR cofactors with AR should be considered in the assay system. abstract_id: PUBMED:23534704 Influence of isoflavone intake and equol-producing intestinal flora on prostate cancer risk. Background: The age-adjusted incidence rate of prostate cancer (PCa) has been reported to be lower among Asians than Western populations. A traditional Japanese meal, high in soybean products or isoflavones, may be associated with a decreased risk of PCa. Equol, which is converted from daidzein by human intestinal flora, is biologically more active than any other isoflavone aglycone. Materials And Methods: We reviewed not only recent epidemiological studies on association of isoflavones with PCa risk, but also recent research on human intestinal bacteria responsible for converting daidzein into equol. Studies were systematically searched from the database published within the last 5 years of from 2008-2012. Results: Five out of 6 articles showed significant association of isoflavones with a decreased risk of PCa, and two of them consistently showed that equol-producers carry a significantly reduced risk of PCa. Furthermore, 5 human intestinal bacteria that can convert daidzein into equol were identified in the last 5 years. Conclusions: If equol can reduce risk of PCa, a possible strategy for reducing the risk of PCa may be to increase the proportion of equol-producers by changing the intestinal flora to carrying an equol-producing bacterium with dietary alteration or probiotic technology. abstract_id: PUBMED:31011360 High dietary fat intake lowers serum equol concentration and promotes prostate carcinogenesis in a transgenic mouse prostate model. Background: Consumption of diet high in soy products is suggested to contribute to lower prostate cancer incidence in Asian men. But little has been known about the influences of dietary patterns on gut microbiota and microbiota-mediated isoflavone metabolism. Here, we determined the influences of western pattern diet on prostate carcinogenesis, gut microbiota and microbiota-mediated equol metabolism using a transgenic adenocarcinoma of mouse prostate (TRAMP) model. Methods: We mimicked the western pattern diet using a high fat diet (HFD). TRAMP mice were fed with either control diet (CD) or HFD. At the age of 24 weeks, mice were orally administered daidzein over a 4-day period, and then sacrificed. The serum daidzein and equol were analyzed by ultra high performance liquid chromatography. Fecal microbiome was analyzed with fecal 16S rRNA pyrosequencing, and prostate was dissected and performed with histopathology. Results: HFD could promote prostate carcinogenesis in TRAMP mice (p = 0.045). The daidzein showed no significant differences between CD and HFD groups, while equol was significantly decreased in HFD group (p = 0.019). Fecal microbiotas differed between the two groups, 21 microbial phylotypes were increased and 11 phylotypes were decreased in abundance in HFD group, including decreased abundance of equol-producing bacterium Adlercreutzia (0.08% vs. 0.27%). Conclusions: HFD may promote prostate carcinogenesis through adversely affecting equol-producing bacterium. Further functional validations are required to ascertain the mechanism of those HFD-responsive bacteria in carcinogenesis. Answer: The available evidence suggests that daidzein non-metabolizers may have a higher risk for prostate cancer. A case-controlled study found that daidzein non-metabolizers were significantly more common in the prostate cancer patient group compared to the control group, and the poorly differentiated cancer patient group included a significantly lower percentage of daidzein metabolizers (PUBMED:12411567). This indicates that the ability to metabolize daidzein, potentially into compounds like equol, could be related to prostate cancer risk. Further supporting this, a study found that men who produced equol, a metabolite of daidzein, were at decreased risk of total prostate cancer and high-grade disease (PUBMED:20924663). This suggests that the metabolism of daidzein into equol may have a protective effect against prostate cancer. Additionally, a review discussed the potential association between the ability to produce equol and O-DMA, both metabolites of daidzein, and reduced risk of certain diseases including prostate cancer (PUBMED:15734719). However, it also noted that relatively few studies have been conducted to date, indicating that more research is needed to confirm these findings. Another study found that high dietary fat intake lowered serum equol concentration and promoted prostate carcinogenesis in a transgenic mouse prostate model, suggesting that the presence of equol-producing bacteria in the gut, which are responsible for daidzein metabolism, may be important for reducing prostate cancer risk (PUBMED:31011360). In contrast, a meta-analysis of individual participant data from seven prospective studies did not find strong evidence that prediagnostic circulating concentrations of isoflavones or lignans, including daidzein, are associated with prostate cancer risk (PUBMED:29971774). However, this study did note that further research might be warranted in populations where isoflavone intakes are high. Overall, while there is some evidence to suggest that daidzein non-metabolizers may have a higher risk for prostate cancer, more research is needed to fully understand the relationship between daidzein metabolism and prostate cancer risk.
Instruction: Response to oCRH in depressed and nondepressed adolescents: does gender make a difference? Abstracts: abstract_id: PUBMED:8682757 Response to oCRH in depressed and nondepressed adolescents: does gender make a difference? Objective: To examine the hypothesis that hypothalamic-pituitary-adrenal responses to stress vary across gender, contributing to gender differences in the prevalence of depression. Method: This study examined gender differences between depressed (n = 21) and control (n = 20) adolescents in adrenocorticotropic hormone (ACTH) and cortisol response to two ovine corticotropin-releasing hormone (oCRH) tests, at baseline and following a cognitive stressor. Results: Boys had higher (p < .05) measures of ACTH than girls, regardless of depression status, whereas corresponding cortisol parameters were similar in both groups. Cortisol measures were higher (p < .05) at time 1 than at time 2 in both groups, a phenomenon that might reflect the novelty of the situation. Conclusions: Gender differences in hormone responses may be related to differences in peripheral metabolism of ACTH, resulting in changes of immunoreactivity but not bioactivity or a different set point of the hypothalamic-pituitary-adrenal axis. The pattern of ACTH and cortisol responses to oCRH and the 24-hour excretion of free cortisol was normal in adolescents with depression, probably reflecting normal negative feedback mechanisms at this age or that most of these patients suffer from atypical rather than melancholic depression. abstract_id: PUBMED:18562014 Characteristics of depressed and nondepressed adult offspring of depressed and matched nondepressed parents. Background: Our aim was to compare adults who were depressed or nondepressed offspring of depressed or matched nondepressed parents on functioning. Methods: Participants were adult children of depressed (n=143) or nondepressed (n=197) parents who participated in a larger study. They completed self-report measures of depression symptoms, medical conditions and pain, family and social functioning, life stressors and coping, and help used for mental health problems. Results: In the depressed-parent group, depressed offspring had poorer personal functioning than did nondepressed offspring. Factors associated with offspring depressed status were being unmarried and having a diagnosed medical condition, more severe pain, a more severe recent stressor, and more reliance on emotional discharge coping. In the nondepressed-parent offspring, factors associated with depressed status were more disability, family disagreements and disorganization, negative events, and reliance on emotional discharge coping. Depressed offspring of depressed parents had more severe depression than depressed offspring of nondepressed parents; they also had more medical conditions, pain, disability, and severe stressors and, accordingly, relied more on approach coping. In contrast, nondepressed offspring of depressed or nondepressed parents were quite similar on functioning. Limitations: Measures were self-report and participants were not followed continuously. Conclusions: Because parental depression increased the risk of impairment among depressed offspring, family history should be considered in the treatment of depression. Offspring of depressed parents who are not experiencing depression are often able to maintain normal functioning in adulthood. abstract_id: PUBMED:3834390 The perception of performance in stress: the utilisation of cognitive facts by nondepressed and depressed students. Three experiments are reported in which expectancies about performance in stressful conditions by nondepressed and depressed nonclinical populations were examined. The first experiment was concerned with estimates of either errors or response rates made in advance, with regard to the likely competence level of a (hypothetical) person allegedly working in conditions of either loud noise, fatigue, sleep loss, social stress, or incentive. Nondepressed subjects as well as depressed subjects provided negative expectancies. The second experiment involved obtaining an estimate of personal competence in conditions where subjects were instructed that personal performance on the task would be required after the estimate had been provided. Nondepressed subjects differed from depressed subjects in that the estimates of the former were less negative in terms of the magnitude of the estimates provided. A third experiment was designed to see whether the negative expectancies about performance in stress exhibited both by nondepressed and by depressed subjects would be used in making allowances for the competence of a typist on the basis of a typescript allegedly produced under high noise conditions. An unexpected effect was that depressed subjects judged the typist more harshly and failed to make allowance for adverse working conditions in the way that nondepressed subjects did. The results are discussed in terms of the implications for understanding cognitive factors in depression. abstract_id: PUBMED:11358016 Evaluating the continuity of symptomatology between depressed and nondepressed individuals. Debate is contentious concerning whether depression should be viewed as a distinct category or as a continuum including overlapping normal and clinical phenomena. A nonparametric item response model was used to evaluate whether the probability of expressing individual symptoms differed between nondepressed and clinically depressed adults experiencing similar levels of overall severity. Even though depressed and nondepressed individuals were equated in terms of overall severity, differences on specific symptoms emerged. Depressed mood, anhedonia, and suicidality were more likely to be expressed in depressed than in nondepressed individuals, whereas hypochondriasis and middle insomnia were more likely to be expressed in nondepressed individuals at similar levels of severity. Such differences are inconsistent with the view of depression as a simple continuum. abstract_id: PUBMED:3208568 Infants of depressed mothers show "depressed" behavior even with nondepressed adults. To determine whether the "depressed" behavior (e.g., less positive affect and lower activity level) of infants noted during interactions with their "depressed" mothers generalizes to their interactions with nondepressed adults, 74 3-6-month-old infants of "depressed" and nondepressed mothers were videotaped in face-to-face interactions with their mothers and with nondepressed female strangers. "Depressed" mothers and their infants received lower ratings on all behaviors than nondepressed mothers and infants. Although the infants of "depressed" versus nondepressed mothers also received lower ratings with the stranger adult, very few differences were noted between those infants' ratings when interacting with their mother versus the stranger, suggesting that their "depressed" style of interacting is not specific to their interactions with depressed mothers but generalizes to their interactions with nondepressed adults as early as 3 months of age. abstract_id: PUBMED:3559899 Judgment of contingency: cognitive biases in depressed and nondepressed subjects. In this research I investigated whether the use of relevant affective outcomes influences depressed and nondepressed subjects' judgment of contingency. Similar to previous studies (Alloy & Abramson, 1979, Experiments 1 and 2), Experiments 1 and 2 confirmed that when the outcome is affectively neutral (i.e., the onset of a light) depressed subjects make accurate judgments of contingency, whereas nondepressed subjects show (in noncontingent situations) a significant illusion of control. In Experiments 3 and 4 (a contingency situation and a noncontingency situation, respectively) different types of sentences (negative self-referent, negative other-referent, positive self-referent, positive other-referent) were used as outcomes. Although depressed subjects were more reluctant to show biased judgments than were the nondepressed subjects, in noncontingency situations depressed subjects made overestimated judgments of contingency when the outcomes were negative self-referent sentences. Results are discussed with regard to current cognitive theories of depression, particularly the learned helplessness model. abstract_id: PUBMED:6870237 Verbal behaviors in depressed and nondepressed mentally retarded persons. Thirty depressed and 30 nondepressed mentally retarded adults were selected from an initial pool of 110 patients seen at either inpatient or outpatient psychiatric settings in western Pennsylvania. These two groups were compared on each of 14 operationally defined behaviors in an analogue interview setting. Despite the fact that these behaviors are typically considered to characterize depression, no difference was found between the depressed and nondepressed groups. A discussion of the possibilities for these results is made. abstract_id: PUBMED:21112092 Discrimination of facial expression by 5-month-old infants of nondepressed and clinically depressed mothers. Five-month-old infants of nondepressed and clinically depressed mothers were habituated to either a face with a neutral expression or the same face with a smile. Infants of nondepressed mothers subsequently discriminated between neutral and smiling facial expressions, whereas infants of clinically depressed mothers failed to make the same discrimination. abstract_id: PUBMED:11883553 Smiling, frowning, and autonomic activity in mildly depressed and nondepressed men in response to emotional imagery of social contexts. The study examined self-reported emotion and facial muscle and autonomic activity of depressed and nondepressed men in response to the social context of emotional situations. 20 university men, assessed on the Beck Depression Inventory, were asked to imagine happy and sad situations with and without visualizing other people. No differences were found between men classified as depressed and nondepressed on self-reported emotion and facial muscle activity. Smiling did not show differences between social contexts although self-reported happiness was increased during happy-social compared to happy-solitary imagery. Adjusting smiling for social context differences in happiness showed less smiling during happy-social than during happy-solitary imagery. In contrast, self-reported sadness and frowning were greater during sad-social compared to sad-solitary imagery. No differences between social contexts were found when frowning was adjusted for social context differences in sadness. Depressed-scoring men showed higher mean heart rate during sad-social than sad-solitary imagery whereas nondepressed-scoring men showed higher mean heart rate during happy social compared to happy-solitary imagery. The results indicate that men may frown more when sad but generally do not smile more during happy-social imagery, independent of depression. Depressed mood may affect heart rate during sad imagery but may not alter facial muscle activity and self:reported emotion in men. abstract_id: PUBMED:17113157 Frontal functions in depressed and nondepressed Parkinson's disease patients: impact of severity stages. Severity of Parkinson's disease (PD) and frontal impairment are positively correlated. Testing frontal functions in depressed/nondepressed PD patients with different severity stages may reveal whether depression leads to this impairment. We aimed to relate severity of PD to frontal functional impairment and to test if negative stimuli/depressive symptoms interfered with frontal tasks. The Stroop test and the Emotional Stroop test were performed by 46 PD patients, 18 of whom were depressed. The Hoehn and Yahr scale assessed severity of the disease. We calculated the difference in seconds for each Stroop card and the interference index (C/D) between depressed and nondepressed patients sharing the same severity of disease. The differences among the groups (depressed and nondepressed) according to the severity of the disease (mild and moderate) were compared using the Mann-Whitney test. The depressed patients had a poorer performance on the test than the nondepressed PD patients, although the difference was not statistically significant. In conclusion, there is a clinically relevant but not statistically significant difference on the performance of frontal tasks between depressed and nondepressed PD patients. Neither depression nor the severity of the disease were determinant to the poorer performance on the Stroop and the Emotional Stroop tests. Answer: The study with PUBMED:8682757 investigated whether there are gender differences in the hypothalamic-pituitary-adrenal (HPA) axis responses to stress, which could contribute to gender differences in the prevalence of depression. The study examined adrenocorticotropic hormone (ACTH) and cortisol responses to two ovine corticotropin-releasing hormone (oCRH) tests in depressed and control adolescents. The results showed that boys had higher measures of ACTH than girls, regardless of depression status, while cortisol parameters were similar in both groups. The study concluded that gender differences in hormone responses might be related to differences in peripheral metabolism of ACTH or a different set point of the HPA axis. The pattern of ACTH and cortisol responses to oCRH and the 24-hour excretion of free cortisol was normal in adolescents with depression, suggesting normal negative feedback mechanisms at this age or that most of these patients suffer from atypical rather than melancholic depression. Therefore, the study suggests that gender does make a difference in the response to oCRH in adolescents, with boys showing higher ACTH levels than girls, but cortisol levels were similar between genders.
Instruction: Using volume criteria: do California hospitals measure up? Abstracts: abstract_id: PUBMED:12943816 Using volume criteria: do California hospitals measure up? Background: Many studies have demonstrated a significant relationship between high procedural volume and better outcomes. As the public becomes increasingly aware of this medical literature, consumer groups have collaborated with medical researchers to operationalize this body of evidence. One such organization, the Leapfrog Group, has proposed annual volume criteria for four operations: coronary bypass grafting (CABG), abdominal aortic aneurysm (AAA), carotid endarterectomy (CEA), and esophageal cancer resection (ECR). This study analyzes California hospitals within the context of these volume criteria. Materials And Methods: Using the California inpatient database from 2000, we identified all CABG, AAA, CEA, and ECR operations performed at metropolitan hospitals. The volume of each of the four operations was tabulated by hospital and evaluated. Comparisons were made between academic and nonacademic hospitals. Results: Most hospitals in California did not meet Leapfrog's volume criteria. Only 2 hospitals of 287 (0.7%) met the volume criteria for the operations that it performed. Of the 71 (25%) hospitals that performed all four procedures, none met the volume criteria of all four procedures. In fact, only 10% of California hospitals performing these operations were high-volume hospitals based on Leapfrog's volume criteria. When comparing academic to nonacademic hospitals, academic hospitals performed more AAA operations than nonacademic hospitals (36 vs 12, P = 0.02). Although academic hospitals tended to have higher caseloads for CABG, CEA, and ECR, these did not reach statistical significance. Also, academic hospitals were more likely to be high volume for AAA (43.8% vs 7.0%, P < 0.01) and for ECR (23.1% vs 4.0%, P < 0.01). Conclusions: California's hospital system is far from being regionalized. Although academic hospitals appear better positioned than nonacademic hospitals, the vast majority of all hospitals do not meet Leapfrog's volume criteria. As efforts to use volume as a proxy measure of quality gain momentum, hospitals and physicians will be forced to measure and report quality. As such, surgeons need to decide between accepting volume as an adequate measure of quality and developing other possibly more direct and reliable methods. abstract_id: PUBMED:29086474 When high-volume PCI operators in high-volume hospitals move to lower volume hospitals-Do they still maintain high volume and quality of outcomes? Objectives: The aim of this quasi-experimental study was to examine whether high-volume percutaneous coronary intervention (PCI) operators still maintain high volume and quality of outcomes when they moved to lower volume hospitals. Background: Systematic reviews have indicated that high-volume PCI operators and hospitals have higher quality outcomes. However, little is known on whether high PCI volume and high quality outcomes are mainly due to operator characteristics (i.e., skill and experience) and is portable across organizations or whether it is due to hospital characteristics (i.e., equipment, team, and management system) and is less portable. Methods: We used Taiwan National Health Insurance claims data 2000-2012 to identify 98 high-volume PCI operators, 10 of whom moved from one hospital to another during the study period. We compared the PCI volume, risk-adjusted mortality ratio, and major adverse cardiovascular event (MACE) ratio before and after moving. Results: Of the 10 high-volume operators who moved, 6 moved from high- to moderate- or low-volume hospitals, with median annual PCI volumes (interquartile range) of 130 (117-165) in prior hospitals and 54 (46-84) in subsequent hospitals (the hospital the operator moved to), and the remaining 4 moved from high to high-volume hospitals, with median annual PCI volumes (interquartile range) of 151 (133-162) in prior hospitals and 193 (178-239) in subsequent hospitals. No significant differences were observed in the risk-adjusted mortality ratios and MACE ratios between high-volume operators and matched controls before and after moving. Conclusions: High-volume operators cannot maintain high volume when they moved from high to moderate or low-volume hospitals; however, the quality of care is maintained. High PCI volume and high-quality outcomes are less portable and more hospital bound. abstract_id: PUBMED:28882913 Hospital volume and mortality for 25 types of inpatient treatment in German hospitals: observational study using complete national data from 2009 to 2014. Objectives: To explore the existence and strength of a relationship between hospital volume and mortality, to estimate minimum volume thresholds and to assess the potential benefit of centralisation of services. Design: Observational population-based study using complete German hospital discharge data (Diagnosis-Related Group Statistics (DRG Statistics)). Setting: All acute care hospitals in Germany. Participants: All adult patients hospitalised for 1 out of 25 common or medically important types of inpatient treatment from 2009 to 2014. Main Outcome Measure: Risk-adjusted inhospital mortality. Results: Lower inhospital mortality in association with higher hospital volume was observed in 20 out of the 25 studied types of treatment when volume was categorised in quintiles and persisted in 17 types of treatment when volume was analysed as a continuous variable. Such a relationship was found in some of the studied emergency conditions and low-risk procedures. It was more consistently present regarding complex surgical procedures. For example, about 22 000 patients receiving open repair of abdominal aortic aneurysm were analysed. In very high-volume hospitals, risk-adjusted mortality was 4.7% (95% CI 4.1 to 5.4) compared with 7.8% (7.1 to 8.7) in very low volume hospitals. Theminimum volume above which risk of death would fall below the average mortality was estimated as 18 cases per year. If all hospitals providing this service would perform at least 18 cases per year, one death among 104 (76 to 166) patients could potentially be prevented. Conclusions: Based on complete national hospital discharge data, the results confirmed volume-outcome relationships for many complex surgical procedures, as well as for some emergency conditions and low-risk procedures. Following these findings, the study identified areas where centralisation would provide a benefit for patients undergoing the specific type of treatment in German hospitals and quantified the possible impact of centralisation efforts. abstract_id: PUBMED:34676647 In-hospital survival after pancreatoduodenectomy is greater in high-volume hospitals versus lower-volume hospitals: a meta-analysis. Background: Variation in cut-off values for what is considered a high volume (HV) hospital has made assessments of volume-outcome relationships for pancreaticoduodenectomy (PD) challenging. Accordingly, we performed a systematic review and meta-analysis comparing in-hospital mortality after PD in hospitals above and below HV thresholds of various cut-off values. Method: PubMed/MEDLINE, Embase and Cochrane Library were searched to 4 January 2021 for studies comparing in-hospital mortality after PD in hospitals above and below defined HV thresholds. After data extraction, risk of bias was assessed using the Downs and Black checklist. A random-effects model was used for meta-analysis, including meta-regressions. Registration: PROSPERO, CRD42021224432. Results: From 1855 records, 17 observational studies of moderate quality were included. Median HV cut-off was 25 PDs/year (IQR: 20-32). Overall relative risk of in-hospital mortality was 0.37 (95% CI: 0.30, 0.45), that is, 63% less in HV hospitals. All subgroup analyses found an in-hospital survival benefit in performing PDs at HV hospitals. Meta-regressions from included studies found no statistically significant associations between relative risk of in-hospital mortality and region (USA vs. non-USA; p = 0.396); or 25th percentile (p = 0.231), median (p = 0.822) or 75th percentile (p = 0.469) HV cut-off values. Significant inverse relationships were found between PD hospital volume and other outcomes. Conclusion: In-hospital survival was significantly greater for patients undergoing PDs at HV hospitals, regardless of HV cut-off value or region. Future research is required to investigate regions where low-volume centres have specialized PD infrastructure and the potential impact on mortality. abstract_id: PUBMED:26731180 Association between Hospital Birth Volume and Maternal Morbidity among Low-Risk Pregnancies in Rural, Urban, and Teaching Hospitals in the United States. Objectives This study aims to examine the relationship between hospital birth volume and multiple maternal morbidities among low-risk pregnancies in rural hospitals, urban non-teaching hospitals, and urban teaching hospitals, using a representative sample of U.S. hospitals. Study Design Using the 2011 Nationwide Inpatient Sample from 607 hospitals, we identified 508,146 obstetric deliveries meeting low-risk criteria and compared outcomes across hospital volume categories. Outcomes include postpartum hemorrhage (PPH), chorioamnionitis, endometritis, blood transfusion, severe perineal laceration, and wound infection. Results Hospital birth volume was more consistently related to PPH than to other maternal outcomes. Lowest-volume rural (< 200 births) and non-teaching (< 650 births) hospitals had 80% higher odds (adjusted odds ratio [AOR] = 1.80; 95% CI = 1.56-2.08) and 39% higher odds (AOR = 1.39; 95% CI = 1.26-1.53) of PPH respectively, than those in corresponding high-volume hospitals. However, in urban teaching hospitals, delivering in a lower-volume hospital was associated with 14% lower odds of PPH (AOR = 0.86; 95% CI = 0.80-0.93). Deliveries in rural hospitals had 31% higher odds of PPH than urban teaching hospitals (AOR = 1.31; 95% CI = 1.13-1.53). Conclusions Low birth volume was a risk factor for PPH in both rural and urban non-teaching hospitals, but not in urban teaching hospitals, where higher volume was associated with greater odds of PPH. abstract_id: PUBMED:33382499 Decline in Inpatient Volume at Rural Hospitals. Purpose: To investigate (1) all-payer inpatient volume changes at rural hospitals and (2) whether trends in inpatient volume differ by organizational and geographic characteristics of the hospital and characteristics of the patient population. Methods: We used a retrospective, longitudinal study design. Our study sample consisted of rural hospitals between 2011 and 2017. Inpatient volume was measured as inpatient average daily census (ADC). Additional measured hospital characteristics included census region, Medicare payment type, ownership type, number of beds, local competition, total margin, and whether the hospital was located in a Medicaid expansion state. Measured characteristics of the local patient population included total population size, percent of population aged 65 years or older, and percent of population in poverty. To identify predictors of inpatient volume trends, we fit a linear multiple regression model using generalized estimating equations. Findings: Rural hospitals experienced an average change in ADC of -13% between 2011 and 2017. We found that hospital characteristics (eg, census region, Medicare payment type, ownership type, total margin, whether the hospital was located in a Medicaid expansion state) and patient population characteristics (eg, percent of population in poverty) were significant predictors of inpatient volume trends. Conclusions: Trends in inpatient volume differ by organizational and geographic characteristics of the hospital and characteristics of the patient population. Researchers and policy makers should continue to explore the causal mechanisms of inpatient volume decline and its role in the financial viability of rural hospitals. abstract_id: PUBMED:24483138 Birth volume and the quality of obstetric care in rural hospitals. Background: Childbirth is the most common reason for hospitalization in the United States. Assessing obstetric care quality is critically important for patients, clinicians, and hospitals in rural areas. Methods: The study used hospital discharge data from the Statewide Inpatient Databases, Healthcare Cost and Utilization Project, Agency for Healthcare Research and Quality, for 9 states (Colorado, Iowa, Kentucky, New York, North Carolina, Oregon, Vermont, Washington, and Wisconsin) to identify all births in rural hospitals with 10 or more births/year in 2002 (N = 94,356) and 2010 (N = 103,880). Multivariate logistic regression was used to assess the relationship between hospital annual birth volume, measured as low (10-110), medium (111-240), medium-high (241-460) or high (>460), and 3 measures of obstetric care quality (low-risk cesarean rates for term, vertex, and singleton pregnancies with no prior cesarean; nonindicated cesarean; and nonindicated induction) and 2 patient safety measures (episiotomy and perineal laceration). Results: The odds of low-risk and nonindicated cesarean were lower in medium-high and high-volume rural hospitals compared with low-volume hospitals after controlling for maternal demographic and clinical factors. In low-volume hospitals, odds of labor induction without medical indication were higher than in medium-volume hospitals, but not significantly different from medium-high or high-volume hospitals. Odds of episiotomy were greater in medium-high or high-volume hospitals than in low-volume hospitals. The likelihood of perineal laceration did not differ significantly by birth volume. Conclusions: Obstetric quality and safety outcomes vary significantly across rural hospitals by birth volume. Better performance is not consistently associated with either lower or higher volume facilities. abstract_id: PUBMED:34645916 Outcomes in high and low volume hospitals in patients with acute hematochezia in a cohort study. Outcomes of acute lower gastrointestinal bleeding have not been compared according to hospital capacity. We aimed to perform a propensity score-matched cohort study with path and mediation analyses for acute hematochezia patients. Hospitals were divided into high- versus low-volume hospitals for emergency medical services. Rebleeding and death within 30 days were compared. Computed tomography, early colonoscopy (colonoscopy performed within 24 h), and endoscopic therapies were included as mediators. A total of 2644 matched pairs were yielded. The rebleeding rate within 30 days was not significant between high- and low-volume hospitals (16% vs. 17%, P = 0.44). The mortality rate within 30 days was significantly higher in the high-volume cohort than in the low-volume cohort (1.7% vs. 0.8%, P = 0.003). Treatment at high-volume hospitals was not a significant factor for rebleeding (odds ratio [OR] = 0.91; 95% confidence interval [CI], 0.79-1.06; P = 0.23), but was significant for death within 30 days (OR = 2.03; 95% CI, 1.17-3.52; P = 0.012) on multivariate logistic regression after adjusting for patients' characteristics. Mediation effects were not observed, except for rebleeding within 30 days in high-volume hospitals through early colonoscopy. However, the direct effect of high-volume hospitals on rebleeding was not significant. High-volume hospitals did not improve the outcomes of acute hematochezia patients. abstract_id: PUBMED:28844631 Improving Value in Total Hip and Knee Arthroplasty: The Role of High Volume Hospitals. Background: Recent healthcare reform efforts have focused on improving the quality of total joint replacement care while reducing overall costs. The purpose of this study is to determine if higher volume centers have lower costs and better outcomes than lower volume hospitals. Methods: We queried the Centers for Medicare and Medicaid Services (CMS) Inpatient Charge Data and identified 2702 hospitals that performed a total of 458,259 primary arthroplasty procedures in 2014. Centers were defined as low (performing <100 total joint arthroplasty [TJA] per year) or high volume and mean total hospital-specific charges and inpatient payments were obtained. Patient satisfaction scores as well 30-day risk-adjusted complication and readmission scores were obtained from the multiyear CMS Hospital Compare database. Results: Of all the hospitals, 1263 (47%) hospitals were classified as low volume and performed 60,895 (12%) TJA cases. Higher volume hospitals had lower mean total hospital-specific charges ($56,323 vs $60,950, P < .001) and mean Medicare inpatient payments ($12,131 vs $13,289, P < .001). Higher volume facilities had a lower complication score (2.96 vs 3.16, P = .015), and a better CMS hospital star rating (3.14 vs 2.89, P < .001). When controlling for hospital geographic and demographic factors, lower volume hospitals are more likely to be in the upper quartile of inpatient Medicare costs (odds ratio 2.127, 95% confidence interval 1.726-2.621, P < .001). Conclusion: Hospitals that perform <100 TJA cases per year may benefit from adopting the practices of higher volume centers in order to improve quality and reduce costs. abstract_id: PUBMED:29787533 High-volume hospitals are associated with lower mortality among high-risk emergency general surgery patients. Introduction: We have previously demonstrated that Emergency General Surgery (EGS) patients treated at high-volume hospitals experience lower mortality rates than those treated at low-volume hospitals. However, EGS comprises a wide spectrum of diseases. Our goal was to determine which EGS diseases had better outcomes at high-volume hospitals. Methods: We undertook a retrospective analysis of the National Inpatient Sample database for 2013 (a nationwide representative sample). Patients with EGS diseases were identified using American Association for the Surgery of Trauma definitions. A hierarchical logistic regression model was used to measure risk-adjusted probability of death, adjusting for age, sex, race, ethnicity, insurance type, and comorbidities. Patients were then grouped into 16 risk groups based upon their predicted probability of death. We then compared observed mortality rates at high- versus low-volume hospitals within each risk group. Results: Nationwide, 3,006,615 patients with EGS diseases were treated at 4,083 hospitals in 2013. Patients with predicted risk of death of 4% or higher (275,615 patients, 9.2%) had lower observed mortality rates at high-volume hospitals than at low-volume hospitals (7.7% vs. 10.2%, p < 0.001). We estimated that 1,002 deaths were potentially preventable if high-risk patients who were treated at low-volume hospitals were instead transferred to high-volume hospitals. Conclusion: EGS patients with predicted risk of death of 4% or higher experience lower mortality rates at high-volume hospitals than at low-volume hospitals. A regional system of EGS care that enables rapid transfer of high-risk patients to high-volume hospitals may prevent several deaths. Level Of Evidence: Prognostic and epidemiological, level III; Therapeutic/Case Management, level IV. Answer: Based on the abstracts provided, California hospitals largely do not meet the volume criteria set by the Leapfrog Group for certain procedures. The study in abstract PUBMED:12943816 found that most hospitals in California did not meet Leapfrog's volume criteria for coronary bypass grafting (CABG), abdominal aortic aneurysm (AAA) repair, carotid endarterectomy (CEA), and esophageal cancer resection (ECR). Only 2 out of 287 hospitals met the volume criteria for the operations they performed, and none of the 71 hospitals that performed all four procedures met the volume criteria for all four. Academic hospitals performed more AAA operations than nonacademic hospitals and were more likely to be high volume for AAA and ECR, but overall, the vast majority of hospitals did not meet the volume criteria. This is significant because numerous studies have demonstrated a relationship between high procedural volume and better outcomes. For example, abstract PUBMED:34676647 reports that in-hospital survival after pancreatoduodenectomy is greater in high-volume hospitals, and abstract PUBMED:28882913 indicates that lower in-hospital mortality is associated with higher hospital volume for 20 out of 25 types of treatment studied. Similarly, abstract PUBMED:29787533 suggests that high-risk emergency general surgery patients have lower mortality rates at high-volume hospitals. However, it's important to note that volume is not the only factor affecting quality of care. Abstract PUBMED:29086474 indicates that high-volume PCI operators could not maintain their high volume when moving to lower volume hospitals, but the quality of care was maintained, suggesting that high-quality outcomes are more hospital-bound and less portable. Additionally, abstract PUBMED:34645916 found that high-volume hospitals did not improve outcomes for acute hematochezia patients. In conclusion, while academic hospitals in California are somewhat better positioned, the overall hospital system in the state does not meet the proposed volume criteria for certain procedures, which could have implications for the quality of care provided.
Instruction: Collecting system involvement by renal tumor: are CT measurements reliable enough? Abstracts: abstract_id: PUBMED:18846376 Collecting system involvement by renal tumor: are CT measurements reliable enough? Objectives: Smaller kidney lesions which are more often detected recently by accidental imaging are amenable for nephron sparing approach whether at open surgery, laparoscopy or ablative techniques. The pretreatment planning is based on multiplanner CT expected to well define the relationship of the lesion to the major renal blood vessels and collecting system (CS). This study is aimed to compare the pre-surgical CT measurements of the distance from tumor to CS to the actual distances observed on radical nephrectomy specimens. Patients And Methods: Contrast CT of 39 patients with renal cell carcinoma (RCC) underwent measurements of the distance between CS and renal tumor. All measurements were confronted with the measurements performed on radical nephrectomy specimens of the same patients. Results: Of all 39 patients in 34 (87%) CT showed a contact relation between the tumor and the CS. In fact, the CS involvement has been histologically proven only in three (7.6%) cases. Cutting off the measurements at thresholds of 2 and 5 mm also showed a significant discrepancy between CT and specimen measurements. Conclusions: The trend of NSS and ablative techniques stressed out the importance of pretreatment measurements of the distance between the tumor and the CS. This study as performed on radical nephrectomy specimens points out the overestimated proximity of the tumor to the CS. These data if confirmed by other studies, may play a role while planning the management of NS approaches. abstract_id: PUBMED:32890620 An Enhancing Mass Abutting the Collecting System. We present a clinical imaging question focusing on a patient with a history of gross hematuria and an enhancing mass abutting the left renal collecting system. In patients with nondiagnostic cytology and imaging, upper tract urothelial carcinoma should be explored as a potential etiology prior to definitive surgical management. abstract_id: PUBMED:12853843 Effect of intentional cryo-injury to the renal collecting system. Purpose: We determined the short-term and long-term sequelae of intentional cryoablation of the renal pelvicaliceal system and evaluated whether continuous irrigation of the renal pelvicaliceal system with warm saline protects it against cryo-injury. Materials And Methods: In 12 swine open bilateral renal cryoablation using an argon gas based system was performed to create a cryolesion in the lower pole that was intentionally extended into the collecting system. A single cryoprobe was used to create a 3 cm ice ball in group 1 (6 animals) and 2, 3 mm cryoprobes were used to create a 4.5 cm ice ball in group 2 (6). In all 12 right kidneys pelvicaliceal warming (range 38C to 42C) was performed using continuous retrograde saline irrigation through an indwelling 5Fr ureteral catheter. In all 12 left kidneys cryoablation was performed without pelvicaliceal warming. Real-time confirmation of caliceal involvement by the cryolesion was obtained by retrograde ureteropyelogram. Immediately after cryo-injury 6 left and 6 right kidneys were harvested for histology and the animals with a solitary kidney were followed for 1 to 3 months. Results: Nadir cryoprobe tip temperature was -136C with a mean cryolesion time of 10.5 minutes. Cryolesion size was comparable in the right vs left kidneys in groups 1 and 2 (2.9 vs 3.0 and 4.7 vs 4.6 cm, respectively). Similarly cryoablation time was comparable between the right and left kidneys in groups 1 and 2 (11.3 vs 10.8 and 11.9 vs 12.2 minutes, respectively). Two animals died of aspiration pneumonia (1) and wound dehiscence (1). In all 10 surviving animals no instance of urinary extravasation was noted. At 1-month followup regrowth of normal urothelium occurred with some scarring of the lamina propria or underlying smooth muscle. Adjacent renal parenchyma was replaced by fibrous scar. At 3 months the cryo-injured collecting system was completely healed with a fibrous scar. There were no appreciable histological differences between the kidneys with or without warm pelvicaliceal irrigation. Conclusions: Our data suggest that absent physical puncture injury of the collecting system with the cryoprobe tip the cryodamaged renal collecting system heals by secondary intention in a watertight manner. These data have clinical relevance for facilitating cryoablation of a small, localized, central renal tumor in proximity to the pelvicaliceal system. abstract_id: PUBMED:12913694 Collecting system invasion in renal cell carcinoma: impact on prognosis and future staging strategies. Purpose: To define further the prognostic impact of urothelial invasion in renal cell carcinoma (RCC) we examined the outcome in patients presenting to our institution with kidney cancer treated with nephrectomy. Materials And Methods: We reviewed the medical records of 895 patients with RCC who were treated with nephrectomy between 1989 and 1999. Median followup was 31 months. Kaplan-Meier survival curves were constructed with respect to 1997 TNM stage, Fuhrman grade and University of California-Los Angeles Integrated Staging System stage, comparing patients with and without collecting system invasion. Univariate and multivariate analyses were performed. Overall survival was defined as time from nephrectomy to time of death or last followup. Results: Of the 895 patients 124 (14%) demonstrated collecting system invasion. Patients with collecting system invasion were more likely to be symptomatic and have associated metastases and/or positive nodes at diagnosis. Urothelial invasion was evident in 21 of 329 T1, 12 of 131 T2, 84 of 388 T3 and 7 of 47 T4 tumors. Three-year overall survival for patients with vs without collecting system invasion by stage was 67% vs 81% for T1, 60% vs 69% for T2, 31% vs 46% for T3 and 29% vs 12% for T4 disease. Patients with urothelial invasion incurred a significant increase in the likelihood of death and were at 1.4 times greater risk of death compared with patients without collecting system invasion. Conclusions: Our findings suggest that collecting system invasion in RCC cases is associated with specific clinical findings as well as poor prognostic variables and it has a profound impact on prognosis in low stage tumors. abstract_id: PUBMED:11992044 Renal cell carcinoma invading the urinary collecting system: implications for staging. Purpose: Current TNM staging of renal cell carcinoma is based on the tumor propensity for local extension (T), nodal involvement (N) and metastatic spread (M). Locally advanced renal cell carcinoma may involve the perirenal fat, adrenal glands, renal vein, vena cava and/or urinary collecting system. The existing TNM classification does not reflect the ability of renal cell carcinoma to invade the urothelium. We evaluated the incidence and characteristics as well as overall and cancer specific survival of renal cell carcinoma invading the urinary collecting system. Methods And Materials: We reviewed pathological findings in 504 kidneys from 475 patients with renal cell carcinoma who presented to our institution in a 3-year period. Urothelial involvement required evidence of gross or histological invasion of the renal calices, infundibulum, pelvis or ureter. Demographic and survival data were obtained from medical records and an institutional cancer registry for tumors invading the urothelium. Stage specific survival data were then compared with tumors not involving the urinary collecting system. Results: Definitive urothelial involvement by the primary tumor was interpretable in 426 of 504 kidneys. Invasion of the collecting system was identified in 61 of 426 cases (14%). Mean diameter of the invading lesions was 10.2 cm. (range 3 to 26). The majority of cases showed clear cell and sarcomatoid histology. Invasion by a papillary lesion was rare. Involvement of the collecting system was most common at the renal poles. Of 61 lesions invading the collecting system 48 (79%) were stage pT3 or greater, while only 13 (21%) were pathologically localized stage pT2 or less. Vascular invasion was identified in 38 renal cell carcinoma cases (62%) with urothelial involvement. A total of 16 cases (26%) were associated with vena caval thrombus. Invading tumors were high Fuhrman grade III or IV in 43 cases (70%). Overall disease specific survival was poor with a median of 19 months. In patients with localized stage pT1 or pT2N0M0 disease and urothelial invasion median disease specific survival was 46 months. Conclusions: Renal cell carcinoma lesions involving the renal collecting system are characteristically large, high grade and high stage. Clear cell carcinoma most commonly invades, while invasion by papillary tumors is rare. Overall the prognosis for high stage lesions with urothelial involvement is poor and does not appear significantly different from the reported disease specific survival of patients with high stage lesions without urothelial invasion. Localized tumors 4 cm. or less, which are amenable to elective nephron sparing surgery, rarely invade the urothelium. However, when a low stage pT2 or less renal lesion involves the urinary space, survival appears worse than equivalently staged renal cell carcinoma without invasion. Including urothelial invasion into current TNM staging systems for renal cell carcinoma is unlikely to provide significant additional prognostic or therapeutic information. abstract_id: PUBMED:16698352 Clinical sequelae of radiographic iceball involvement of collecting system during computed tomography-guided percutaneous renal tumor cryoablation. Objectives: Percutaneous renal tumor cryoablation is being evaluated as a treatment option for small renal tumors. However, when tumors are located centrally, involvement of the collecting system by the radiographic iceball can occur. We reviewed our series of computed tomography (CT)-guided percutaneous renal tumor cryoablation to identify those cases in which there appeared to be involvement of the collecting system by the radiographic iceball and to determine any clinical sequelae of such involvement. Methods: Retrospective review of the medical records identified 6 patients who had undergone CT-guided percutaneous renal tumor cryoablation with evidence of collecting system involvement. Measurements of the tumor size, size of the radiographic iceball, and the size of the immediate postprocedure "cryozone" (region of apparent treatment on contrast-enhanced CT) were obtained from the preprocedure, intraprocedure, and immediate postprocedure CT scans. Follow-up imaging was obtained beginning at 3 to 6 months. Results: Six patients were identified who met the inclusion criteria and had at least 3 months of documented follow-up. Despite the apparent involvement of the collecting system during the cryoablation procedure, no patient developed clinical signs or symptoms or radiographic evidence of a urine leak or fistula formation. Furthermore, no evidence of ureteral narrowing or stricture formation has been found to date, with a mean follow-up of 167.7 days (range 90 to 288). Conclusions: We observed no clinically appreciable urine leaks despite what appeared to be obvious involvement of the collecting system by the radiographic iceball. However, care should be exercised to avoid this insult when possible until additional research has confirmed its safety. abstract_id: PUBMED:19616244 Urinary collecting system invasion is an independent prognostic factor of organ confined renal cell carcinoma. Purpose: We evaluated urinary collecting system invasion as a prognostic parameter of renal cell carcinoma. Materials And Methods: A total of 1,124 patients who underwent nephrectomy for a renal tumor at 5 European centers were included in this retrospective study. Several variables were analyzed including urinary collecting system invasion, age, sex, TNM stage, Fuhrman grade, histological subtype, Eastern Cooperative Oncology Group performance status and cancer specific survival. Results: There were 771 males (68.6%) and 353 females (31.4%) in this study, and median age was 61 years (range 14 to 88). Median tumor size was 6 cm (range 1 to 24). Tumors were organ confined and Fuhrman grade was recorded as 1 or 2 in 67.1% and 62.3% of cases, respectively. Symptoms were present at diagnosis, and Eastern Cooperative Oncology Group performance status was 1 or more in 50.3% and 16.1% of the cases, respectively. Median followup was 43 months (range 1 to 299). At the end of followup 246 patients (21.9%) died of cancer. In 132 cases (11.7%) urinary collecting system invasion was noted. Urinary collecting system invasion was associated with symptoms, TNM stage, Fuhrman grade, tumor size (p <0.001) and Eastern Cooperative Oncology Group performance status (p = 0.003), but not with histological subtype (p = 0.7). On univariate analysis TNM stage, Fuhrman grade, symptoms, Eastern Cooperative Oncology Group performance status, tumor size and urinary collecting system invasion (p = 0.0001) were significant predictors of cancer specific survival. Urinary collecting system invasion was an independent prognostic parameter only in the setting of pT1-T2 tumors. When the urinary collecting system was invaded the 5 and 10-year probabilities of survival were 43% and 41%, respectively. Conclusions: Urinary collecting system invasion appears to be an independent prognostic parameter of organ confined renal cell carcinoma. Our data support the need to integrate this parameter in further TNM revisions. abstract_id: PUBMED:31842655 Renal cryoablation - does deep endophytic ablation affect the renal collecting system? Objective: To investigate to what extent the urothelium of the renal collecting system is affected when performing deep endophytic cryoablation.Methods: The study was conducted as an in vivo animal model with a total of 15 female pigs. Each animal was subjected to bilateral endophytioc renal cryoablation and randomized to a postoperative follow-up period of either one, two or four weeks. At the end of follow-up all animals had a magnetic resonance imaging (MRI) examination and bilateral nephrectomy was performed. On MRI-imaging the extent of the cryolesions, as well as signs of urinomas or fistulas, were examined. Histopathologic examinations were performed to investigate the effect on the urothelium.Results: All animals tolerated the procedure well without any postoperative complications. MRI examinations found the renal collecting system to be involved in the cryolesions at all three stages of follow-up and revealed no signs of hematomas, urinomas or fistula formations. Epithelial edema was found at all three stages of follow-up while significant parenchymal fibrosis adjacent to the urothelium was most pronounced in the four weeks of follow-up group. The urothelium was significantly affected with luminal hemorrhage as well as hemorrhage in and underneath the urothelium and urothelial dissociation from the underlying renal parenchyma. Despite these impacts on the urothelium, this was found to be intact and vital at all three stages of follow-up, in sharp contrast to the renal parenchyma that underwent fibrotic changes.Conclusions: In this, in vivo non-tumor pig model CA effectively destroyed the renal parenchyma while the impacted renal urothelium remained intact and did not undergo fibrotic changes, nor was urinomas or fistulas observed. abstract_id: PUBMED:17244281 Prognostic relevance of capsular involvement and collecting system invasion in stage I and II renal cell carcinoma. Objective: To define the prognostic relevance of capsular involvement (invasion with no penetration) and collecting-system invasion in patients with stage I (pT1N0M0) and stage II (pT2N0M0) renal cell carcinoma (RCC), by evaluating the outcome of patients treated with nephrectomy. Patients And Methods: In all, 519 patients from a kidney cancer database treated with nephrectomy for stage I and II RCC between 1985 and 2005 were assessed retrospectively. The primary endpoint was recurrence-free survival time. The prognostic relevance of capsular involvement and collecting-system invasion were examined using univariate and multivariate survival analysis. Results: Capsular involvement and collecting-system invasion were evident in 112 (21.6%) and 39 (7.5%) patients, respectively. Capsular involvement was associated with higher Fuhrman grades and larger tumours. The incidence of collecting-system invasion was higher in patients with microvascular invasion. The median follow-up was 49 months. In univariate analysis, patients with capsular involvement and collecting-system invasion had a worse prognosis than patients without (P = 0.007 and <0.001, respectively). In multivariate analysis, capsular involvement (hazard ratio 1.84, P = 0.036) and collecting-system invasion (3.78, P < 0.001) were independent prognostic factors of recurrence-free survival. Interestingly, there was no survival difference between patients with capsular involvement in stage I/II and patients with invasion of perinephric tissue (pT3aN0M0). Conclusions: These findings suggest that capsular involvement and collecting-system invasion are poor prognostic findings in stage I and II RCC. They should both be considered when planning the follow-up. A revised pT3a stage including patients with capsular involvement could improve its prognostic validity. abstract_id: PUBMED:22935663 Wilms tumor and a duplex collecting system: a case report and review of literature. Wilms tumor (WT) is described as a component of many different clinical conditions and genetic syndromes. However, the simultaneous occurrence of WT and a duplex collecting system is extremely rare. We report a case of a 4-year-old boy diagnosed with WT and a left duplex collecting system. The patient underwent a left radical nephrectomy. The histologic examination diagnosed it to be a case of WT with favorable histology. The patient received the chemotherapy regimen for stage 1 WT and was in complete remission during the follow-up period of 2 years. Further investigations are required for assessing the need for screening of WT in children with genitourinary malformations. Answer: The reliability of CT measurements in assessing the involvement of the collecting system by renal tumors has been questioned. In a study comparing pre-surgical CT measurements of the distance from the tumor to the collecting system with actual distances observed on radical nephrectomy specimens, it was found that CT often overestimated the proximity of the tumor to the collecting system. In 87% of cases, CT showed contact between the tumor and the collecting system, but histological proof of involvement was only present in 7.6% of cases. This discrepancy was significant even when different measurement thresholds were applied (PUBMED:18846376). This overestimation has implications for treatment planning, especially for nephron-sparing approaches and ablative techniques, where accurate assessment of tumor proximity to the collecting system is crucial. The findings suggest that CT measurements may not be reliable enough and could potentially influence the management of such approaches (PUBMED:18846376). Additionally, the impact of collecting system invasion on prognosis and staging strategies for renal cell carcinoma has been studied. Collecting system invasion is associated with worse clinical outcomes and is considered an independent prognostic factor in organ-confined renal cell carcinoma (PUBMED:19616244, PUBMED:17244281). It has been suggested that collecting system invasion should be integrated into future TNM revisions due to its prognostic significance (PUBMED:19616244). In the context of renal cryoablation, studies have shown that the renal collecting system can heal by secondary intention in a watertight manner after cryo-injury, suggesting clinical relevance for cryoablation of small, localized, central renal tumors near the pelvicaliceal system (PUBMED:12853843). However, another study indicated that despite apparent involvement of the collecting system during cryoablation, no clinical signs or symptoms or radiographic evidence of urine leak or fistula formation were observed, although caution is still advised (PUBMED:16698352). In summary, CT measurements may not be entirely reliable for assessing collecting system involvement by renal tumors, and this has significant implications for treatment planning and prognosis. Collecting system invasion is an important prognostic factor and should be carefully evaluated in the management of renal tumors.
Instruction: Can clinical and endoscopic findings accurately predict early-stage adenocarcinoma? Abstracts: abstract_id: PUBMED:16333557 Can clinical and endoscopic findings accurately predict early-stage adenocarcinoma? Background: The presentation and management of esophageal cancer are changing, as more patients are diagnosed at an earlier stage of the disease in which endoscopic treatment methods may be contemplated. Therefore, we conducted a study to determine whether symptomatic and endoscopic findings can accurately identify node-negative early-stage adenocarcinoma. Methods: A total of 213 consecutive patients (171 men and 42 women) with resectable esophageal adenocarcinoma seen from 1992 to 2002 were evaluated. None of these patients received neoadjuvant chemotherapy or radiation therapy. Using a multivariable model, model-based probabilities of early-stage disease (T1 im/sm N0) were calculated for each combination of the following three features: no dysphagia as main symptom at presentation, tumor length <or=2 cm, and noncircumferential lesion. Results: Eighty-two percent of the patients with all three characteristics presented with early-stage disease. Even in the setting of small, visible, noncircumferential tumors/nodules in patients without dysphagia, 14% of the patients harbored node metastasis. Conclusions: Simple clinical and endoscopic findings predicted early-stage disease in 82% of cases, whereas a small but significant percentage had node metastasis. Because node metastasis predisposes to local failure in nonresectional treatment options such as endoscopic mucosal resection and photodynamic therapy, such findings should have a significant bearing on treatment decisions. abstract_id: PUBMED:34845554 Long-term outcomes of endoscopic mucosal resection for early-stage esophageal adenocarcinoma. Background: With growing application of endoscopic therapy for early-stage esophageal cancer, we sought to review our experience of endoscopic mucosal resections (EMRs). The aim of our study was to understand the natural course of these patients, especially with positive margins. Methods: A prospectively maintained database of all patients undergoing endoscopic therapies at Georgetown University Hospital for esophageal cancer was used for the analysis between 2010 and 2020. Results: Of 80 patients in the EMR database, 35 were performed as index cases for esophageal adenocarcinoma. Majority (74.3%) had a pre-treatment ultrasound confirming absence of regional adenopathy. There were no post-EMR bleeding or perforation events requiring re-intervention. Complete R0 resection was achieved in 22/35 (62.9%) after initial EMR. Thirteen patients had positive margins. Of these 13 patients, only 7 patients underwent repeat endoscopic resection, 2 underwent subsequent esophagectomy, 2 received definitive radiation given poor surgical candidacy, and 2 were lost to follow-up. Overall and 5-year survival of all patients undergoing EMR was 67.9 months and 85%, respectively. Subset analysis of the 13 patients with R1 resection demonstrated an overall survival of 49.2 months and 60% 5-year survival vs overall survival of 78.9 months and 93% 5-year survival for R0 resection. At a median follow-up of 60.5 months, cancer recurrence occurred in 3 patients. All of them were successfully managed with repeat EMR. Conclusions: Endoscopic resections represent a safe and effective treatment for early-stage esophageal cancer. Patients with high-risk features should be counseled to undergo an esophagectomy if they are operable candidates. abstract_id: PUBMED:38089923 Pathologically diagnosed early-stage gastric adenocarcinoma with enteroblastic differentiation after endoscopic submucosal dissection: A case report. A 77-year-old male patient underwent esophagogastroduodenoscopy at his family doctor, and an easily hemorrhagic depressed lesion was noted near the anterior wall of the gastric antrum. A biopsy revealed moderately differentiated tubular adenocarcinoma > poorly differentiated adenocarcinoma, and the patient was referred to our department for further examination. A 15-mm 0-IIc lesion is seen near the anterior wall of the gastric antrum and narrow band imaging magnifying endoscopy revealed obscured glandular duct structures and corkscrew pattern vascular structures. We diagnosed the patient with early-stage gastric cancer [L, Ant, 15mm, cType0-IIc, cT1(M-SM1), cN0, cM0, cStage IA] after an esopahogastroduodenoscopy examination at our hospital, and endoscopic submucosal dissection was performed. Histopathological images with hematoxylin and eosin staining showed tumor cells with pale cytoplasm and the immunostaining for alpha-fetoprotein, sal-like protein 4, and Glypican3 was positive. The patient was pathologically diagnosed with gastric adenocarcinoma with enteroblastic differentiation, pT1b1 (SM, 0.4 mm), type 0-IIc, 15 mm, UL (-), Ly0, and V0. Gastric adenocarcinoma with enteroblastic differentiation is one of the representative histological types of alpah-fetoprotein-producing gastric cancer. Alpha-fetoprotein-producing gastric cancer is infrequent, accounting for at least 3% of all gastric cancers, and is generally highly malignant. Most cases are already advanced upon diagnosis, and finding them in the early stage is rare. Therefore, pathological findings that may indicate the gastric adenocarcinoma with enteroblastic differentiation should be noted even in early gastric cancer. abstract_id: PUBMED:29417228 Comparison of long-term clinical outcomes between endoscopic and surgical resection for early-stage adenocarcinoma of the esophagogastric junction. Background: The aim of this study was to analyze long-term clinical and oncologic outcomes in patients with early-stage adenocarcinoma of the esophagogastric junction (AEG) managed with either endoscopic resection (ER) or surgery. Methods: The inclusion criteria were AEG, meeting classic or expanded indications for ER of early gastric cancer, and complete resection. A total of 66 patients with Siewert type II AEG were included (ER group, n = 38; vs. surgery group, n = 28). Results: The mean age of the ER group was greater than that of the surgery group (mean ± SD, 66.9 ± 9.7 vs. 58.5 ± 10.4 years, respectively; p = 0.001). Compared to the ER group, macroscopically flat or depressed-type lesions were more common (47.4 vs. 89.3%; p = 0.001), and mean lesion size was larger in the surgery group (13.3 ± 8.4 vs. 18.6 ± 11.0 mm; p = 0.039). One intensive care unit admission and subsequent surgery-related death occurred in the surgery group (1/28 vs. 0/38 in the ER group; p = 0.424). During follow-up, recurrence was detected in both groups (4/38 vs. 1/28; p = 0.385). Overall survival and 5-year disease-free survival did not differ between the groups (93.3 vs. 92.9%; p = 0.282 and 88.0 vs. 100.0%; p = 0.066). Conclusions: Once complete resection is achieved in patients with AEG who met the expanded criteria for endoscopic submucosal dissection of gastric cancer, there was no significant difference in clinical outcomes between ER and surgery. abstract_id: PUBMED:33673151 Clinical Analysis of Early-Stage Pancreatic Cancer and Proposal for a New Diagnostic Algorithm: A Multicenter Observational Study. Early diagnosis of pancreatic ductal adenocarcinoma (PDAC) is challenging but essential for improving its poor prognosis. We established a multicenter study to clarify the clinicopathological features, and to propose new algorithm for early diagnosis of PDAC. Ninety-six patients with stage 0 and IA PDAC were enrolled from 13 high-volume centers. Overall, 70% of the patients were asymptomatic. The serum pancreatic enzyme levels were abnormal in half of the patients. The sensitivity of endoscopic ultrasonography (EUS) for detecting small PDAC was superior to computed tomography and magnetic resonance imaging (MRI) (82%, 58%, and 38%, respectively). Indirect imaging findings were useful to detect early-stage PDAC; especially, main pancreatic duct stenosis on MRI had the highest positive rate of 86% in stage 0 patients. For preoperative pathological diagnosis, the sensitivity of endoscopic retrograde cholangiopancreatography (ERCP)-associated pancreatic juice cytology was 84%. Among the stage IA patients, EUS-guided fine-needle aspiration revealed adenocarcinoma in 93% patients. For early diagnosis of PDAC, it is essential to identify asymptomatic patients and ensure close examinations of indirect imaging findings and standardization of preoperative pathological diagnosis. Therefore, a new diagnostic algorithm based on tumor size and imaging findings should be developed. abstract_id: PUBMED:27444824 Correlation between endoscopic forceps biopsies and endoscopic mucosal resection with endoscopic ultrasound in patients with Barrett's esophagus with high-grade dysplasia and early cancer. Background: Patients with Barrett's esophagus (BE) and high-grade dysplasia (HGD) or intramucosal cancer (IMC) on endoscopic forceps biopsies are referred to endoscopic therapy even though forceps biopsies do not reflect the disease extent accurately. Endoscopic mucosal resection (EMR) and endoscopic ultrasound (EUS) are frequently used for staging prior to endoscopic therapy. Our aims were to evaluate: (1) if endoscopic forceps biopsies correlated with EMR histology in these patients; (2) the utility of EUS compared to EMR; and (3) if accuracy of EUS varied based on grade of differentiation of tumor. Methods: This is a retrospective review of patients referred to endoscopic therapy of BE with HGD or early esophageal adenocarcinoma (EAC) who underwent EMR from 2006 to 2011. Age, race, sex, length of Barrett's segment, hiatal hernia size, number of endoscopies and biopsy results and EUS findings were abstracted. Results: A total of 151 patients underwent EMR. In 50 % (75/151) of patients, EMR histology was consistent with endoscopic forceps biopsy findings. EMR resulted in change in diagnosis with upstaging in 21 % (32/151) and downstaging in 29 % (44/151). In patients with HGD on EMR, EUS staging was T0 in 74.1 % (23/31) but upstaged in 25.8 % (8/31). In patients with IMC on EMR, EUS findings were T1a in 23.6 % (9/38), upstaged in 18.4 % (7/38) and downstaged in 57.8 % (22/38). EUS accurately identified EMR histology in all submucosal cancers. Grade of differentiation was reported in 24 cancers on EMR histology. There was no correlation between grade and EUS staging. Conclusions: EUS is of limited utility in accurate staging of BE patients with HGD or early EAC. Endoscopic forceps biopsy correlated with EMR findings in only 50 % of patients. Irrespective of the endoscopic forceps biopsy results, all BE patients with visible lesions should be referred to EMR. abstract_id: PUBMED:28446997 Endoscopic submucosal dissection and endoscopic mucosal resection for early stage esophageal cancer. Mortality from esophageal cancer remains high despite advances in medical therapy. Although the incidence of squamous cell carcinoma of the esophagus remains unchanged, the incidence of the esophageal adenocarcinoma has increased over time. Gastroesophageal reflux disease (GERD and obesity are contributing factors to the development of Barrett's esophagus and subsequent development of adenocarcinoma. Early recognition of the disease can lead to resection of esophageal cancer prior to the development of lymphovascular invasion. Various modalities have been implemented to aid identification of precancerous lesions and early esophageal cancer. Chromoendoscopy, narrowband imaging and endoscopic ultrasound examination are typically used for evaluating early esophageal lesions. Recently, confocal laser endomicroscopy (CLE) and volumetric laser scanning were implemented with promising results. Endoscopic management of early esophageal cancer may be done using endoscopic mucosal resection (EMR) or endoscopic submucosal dissection (ESD). Both techniques allow resection of the mucosa (and possibly a portion of the submucosa) containing the early tumor without interruption of deeper layers. A submucosal injection creating a cushion coupled with snare resection or cap assisted mucosal suction followed by ligation and snare resection are the most common techniques of EMR. EMR can remove lesions less than 2 cm in size en bloc. Larger lesions may require resection in piecemeal fashion. This may limit assessment of the margins of the lesion and orienting the lesion's border. ESD offers en bloc dissection of the lesion regardless of its size. ESD is performed with specialized needle knives, which allow incision followed by careful dissection of the lesion within the submucosal layer. Tumor recurrence after ESD is rare but the technique is labor intensive and has an increased risk of perforation. Esophageal stenosis remains a concern after extensive EMR or ESD. Dilation with balloon or stent placement is usually sufficient to treat post-resection stenosis. abstract_id: PUBMED:36372896 Early gastric mixed neuroendocrine-non-neuroendocrine neoplasms with endoscopic findings of neuroendocrine cell carcinoma components exposed on the mucosal surface: a case report. Background: Gastric mixed neuroendocrine-non-neuroendocrine neoplasms are rare malignant tumors. The lack of specific findings makes it difficult to diagnose endoscopically. We report the case of early gastric mixed neuroendocrine-non-neuroendocrine neoplasms treated by endoscopic submucosal dissection. Case Presentation: An 81-year-old Japanese female underwent esophagogastroduodenoscopy for screening and was treated with endoscopic submucosal dissection for the diagnosis of early gastric cancer. Histopathologically, the lesion was diagnosed as mixed neuroendocrine-non-neuroendocrine neoplasms (tubular adenocarcinoma 2 60%, endocrine cell carcinoma 40%), pT1b(submucosa (SM) 900 μm), pUL(-), Ly(+), v(-), pHM0, pVM0. After additional surgical resection without adjuvant chemotherapy, she has had no recurrences or metastases for 3 years. Conclusions: Comparing narrow-band imaging magnified endoscopic findings with pathological findings, the depressed area with a lack of surface structure was consistent with the neuroendocrine cell carcinoma component, while narrow-band imaging magnification findings showed non-network vessels. In this case, we examined endoscopic findings of early stage mixed neuroendocrine-non-neuroendocrine neoplasms in detail and compared it with the pathological findings. We believe that these endoscopic findings contribute to the diagnosis of mixed neuroendocrine-non-neuroendocrine neoplasms and can lead to its early detection. abstract_id: PUBMED:30725253 Exploring the possibility of endoscopic submucosal dissection for clinical submucosal invasive early gastric cancers. Background: The current standard treatment modality for clinical submucosal invasive (cT1b) early gastric cancer (EGC) is surgery. However, there are discrepancies in T staging between pre- and post-operative findings, and in cases of overestimation, patients may lose the opportunity to preserve the stomach. The aim of this study was to analyze surgical outcomes of cT1b EGC and determine the pre-treatment factors favoring ESD. Methods: Patients who underwent gastrectomy for cT1b EGC with a tumor size of 30 mm or less in diameter and differentiated-type histology were retrospectively reviewed from January 2010 to December 2014. According to the final surgical pathologic results, two groups were classified: patients whose pathologic results qualified for current ESD indication (ESD-qualified group, n = 203) and patients whose pathologic results made them ineligible for ESD (ESD-disqualified group, n = 261). The preoperative clinical characteristics were compared. Results: Forty-three percent of the patients (203/464) who underwent gastrectomy for cT1b EGC qualified for ESD; their endoscopic lesion tended to be smaller than 20 mm in size and located in the distal part of stomach. In addition, the ESD-qualified group showed a significantly higher proportion of well-differentiated tubular adenocarcinoma on endoscopic biopsy and of the flat/depressed type in the endoscopic evaluation. Conclusion: Forty-three percent of the patients with cT1b EGC who underwent gastrectomy had a chance to preserve their stomach by ESD. Therefore, pre-treatment factors such as endoscopic lesion size, location, histology, and gross type should be considered for treatment modality selection for cT1b EGC. abstract_id: PUBMED:32842429 Retrospective investigation of patients receiving additional surgery after endoscopic non-curative resection for early colorectal cancer Objective: To investigate the clinical application of additional surgery after non-curative endoscopic resection for early colorectal cancer. Methods: A retrospectively descriptive cohort study was conducted. Inclusion criteria: (1) pathologically confirmed primary colorectal adenocarcinoma;(2) receiving additional surgery after endoscopic resection; (3) semi-elective operation. Exclusion criteria: familial adenomatous polyposis, appendiceal neoplasms, anal canal neoplasms, neuroendocrine tumors, and surgery because of perforation or bleeding after endoscopic resection. Indications of additional surgery: (1) pathologically positive lateral or basal resection margin; (2) submucosal invasion depth ≥ 1000 μm; (3) lymphovascular invasion; (4) poorly differentiated, undifferentiated or mucinous adenocarcinoma; (5) more than grade G2 in tumor budding; (6) incomplete resection or piecemeal specimen with margin impossible to evaluate; (7) patient's consent due to undetermined pathology. According to the above criteria, clinical data of 92 patients at the Colorectal Surgery Department, the First Affiliated Hospital of Nanjing Medical University between January 2013 and December 2018 were collected. Demographic data, pathological examinations, operative methods and outcomes were analyzed. Results: There were 61 (66.3%) male and 31 female (33.7%) patients with an average age of (58.2±10.7) years. The average BMI was (23.8±3.5) kg/m(2). The lesions located in the right-sided colon, left-sided colon and rectum in 19, 37 and 36 patients respectively. Sixteen patients received endoscopic snare resection, 45 received endoscopic mucosal resection and 31 received endoscopic submucosal dissection. Reasons for additional surgery included endoscopic specimen with pathologically positive margin (n=22, 23.9%), submucosal invasion depth ≥ 1000 μm (n=9, 9.8%), lymphovascular invasion (n=4, 4.3%), poorly differentiated, undifferentiated or mucinous adenocarcinoma (n=5, 5.4%), piecemeal resection (n=13, 14.1%), undetermined pathology (n=52, 56.5%). The median duration from endoscopic resection to additional surgery was 16 days. Thirty-four patients (37.0%) received preoperative endoscopic localization with carbon nanoparticles suspension injection and 5 (5.4%) were marked with titanium clip. Seventy-four patients (80.4%) received laparoscopic surgery, 17 (18.5%) received open surgery, while 1 patient (1.1%) was converted to open surgery due to missing titanium clip. Three patients (3.3%) were treated with transanal excision, 2 (2.2%) with bowel resection, and 87 (94.6%) with radical excision. After additional surgery, histopathological examination of surgical specimens revealed the presence of residual tumor in 5 patients (5.4%), lymph node metastasis in 8 (8.7%), lymphovascular invasion in 1 (1.1%) and tumor deposit in 1 (1.1%). Twelve patients (13.0%) developed postoperative complications, including 4 mid-low rectal cancer patients (4.3%) with anastomostic leakage or bleeding. After surgery, according to the TNM staging system, 83 patients (90.2%) were classified as TNM stage 0-I, 9 (9.8%) as TNM stage II-IV. One patient of stage IV with liver metastasis underwent concomitant hepatectomy. One patient of stage II received regular follow-up after operation. Seven cases of stage III and 1 of stage IV received postoperative chemotherapy. Eighty-five patients (92.4%) were followed up with a median time of 12.8 (IQR: 8.1, 24.3) months. No recurrence or metastasis was observed. Conclusions: Surgery is an effective salvage measure for non-curative endoscopic resection of early colorectal cancer. Since surgery may have complications, indications of the additional surgery should be considered carefully. Preoperative endoscopic localization should be performed in order to ensure the safety and efficacy of surgery. Answer: Clinical and endoscopic findings can provide a significant indication of early-stage adenocarcinoma, but they are not foolproof. A study found that a combination of no dysphagia as the main symptom at presentation, tumor length ≤2 cm, and noncircumferential lesion predicted early-stage disease (T1 im/sm N0) in 82% of cases. However, even with these characteristics, 14% of patients still harbored node metastasis (PUBMED:16333557). This suggests that while clinical and endoscopic findings can be predictive, they are not definitive, and a small but significant percentage of patients may have more advanced disease than indicated by these findings alone. Moreover, endoscopic mucosal resection (EMR) has been shown to be a safe and effective treatment for early-stage esophageal cancer, with a complete R0 resection achieved in 62.9% of cases after initial EMR. However, patients with positive margins after EMR, indicating non-curative resection, have a lower overall survival compared to those with R0 resection, highlighting the importance of accurate staging and resection (PUBMED:34845554). Endoscopic ultrasound (EUS) is frequently used for staging prior to endoscopic therapy, but its utility is limited in accurately staging Barrett's esophagus patients with high-grade dysplasia or early esophageal adenocarcinoma (EAC). Endoscopic forceps biopsy correlated with EMR findings in only 50% of patients, suggesting that all patients with visible lesions should be referred to EMR regardless of biopsy results (PUBMED:27444824). In conclusion, while clinical and endoscopic findings can be indicative of early-stage adenocarcinoma, they are not completely reliable for accurate prediction. Additional diagnostic methods, such as EMR and EUS, play a crucial role in confirming the stage of the disease and guiding appropriate treatment decisions.
Instruction: Is green tea drinking associated with a later onset of breast cancer? Abstracts: abstract_id: PUBMED:20006278 Is green tea drinking associated with a later onset of breast cancer? Background: Studies have found that tea polyphenols inhibit aromatase. Because of the substantial difference in levels of estrogens between premenopausal and postmenopausal women, the relationship between tea consumption and breast cancer risk may depend on menopausal status. Methods: We examined this hypothesis in the Shanghai Women's Health Study, a population-based cohort study of 74,942 Chinese women. Results: We found a time-dependent interaction between green tea consumption and age of breast cancer onset (p for interaction, 0.03). In comparison with non-tea drinkers, women who started tea-drinking at 25 years of age or younger had a hazard ratio (HR) of 0.69 (95% confidence interval [CI]: 0.41-1.17) to develop premenopausal breast cancer. On the other hand, compared with non-tea drinkers, women who started tea drinking at 25 years of age or younger had an increased risk of postmenopausal breast cancer with an HR of 1.61 (95% CI: 1.18-2.20). Additional analyses suggest regularly drinking green tea may delay the onset of breast cancer. Conclusions: Further studies are needed to confirm our findings. abstract_id: PUBMED:19074205 Drinking green tea modestly reduces breast cancer risk. Green tea is a commonly consumed beverage in China. Epidemiological and animal data suggest tea and tea polyphenols may be preventive against various cancers, including breast cancer. Catechol-O-methyltransferase (COMT) catalyzes catechol estrogens and tea polyphenols. The COMT rs4680 AA genotype leads to lower COMT activity, which may affect the relationship between green tea consumption and breast cancer risk. We evaluated whether regular green tea consumption was associated with breast cancer risk among 3454 incident cases and 3474 controls aged 20-74 y in a population-based case-control study conducted in Shanghai, China during 1996-2005. All participants were interviewed in person about green tea consumption habits, including age of initiation, duration of use, brew strength, and quantity of tea. Odds ratios (OR) and 95% CI were calculated for green tea consumption measures and adjusted for age and other confounding factors. Compared with nondrinkers, regular drinking of green tea was associated with a slightly decreased risk for breast cancer (OR, 0.88; 95% CI, 0.79-0.98). Among premenopausal women, reduced risk was observed for years of green tea drinking (P-trend = 0.02) and a dose-response relationship with the amount of tea consumed per month was also observed (P-trend = 0.046). COMT rs4680 genotypes did not have a modifying effect on the association of green tea intake with breast cancer risk. Drinking green tea may be weakly associated with a decreased risk of breast cancer. abstract_id: PUBMED:9600118 Influence of drinking green tea on breast cancer malignancy among Japanese patients. Inhibitory effects of green tea on carcinogenesis have been investigated in numerous laboratory studies using (-)-epigallocatechin gallate (EGCG) or crude green tea extract, and there is also some epidemiologic evidence. Further, EGCG has been reported to inhibit the growth of cancer cells, lung metastasis in an animal model, and urokinase activity. In this study, we first examined the association between consumption of green tea prior to clinical cancer onset and various clinical parameters assessed at surgery among 472 patients with stage I, II, and III breast cancer. We found that increased consumption of green tea was closely associated with decreased numbers of axillary lymph node metastases among premenopausal patients with stage I and II breast cancer and with increased expression of progesterone receptor (PgR) and estrogen receptor (ER) among postmenopausal ones. Since these are potential prognostic factors, we then investigated the prognosis of breast cancer with special reference to consumption of green tea, in a follow-up study of these patients. We found that increased consumption of green tea was correlated with decreased recurrence of stage I and II breast cancer (P < 0.05 for crude disease-free survival); the recurrence rate was 16.7 or 24.3% among those consuming > or = 5 cups or < or = 4 cups per day, respectively, in a seven-year follow-up of stage I and II breast cancer, and the relative risk of recurrence was 0.564 (95% confidence interval, 0.350-0.911) after adjustment for other lifestyle factors. However, no improvement in prognosis was observed in stage III breast cancer. Our results indicate that increased consumption of green tea prior to clinical cancer onset is significantly associated with improved prognosis of stage I and II breast cancer, and this association may be related to a modifying effect of green tea on the clinical characteristics of the cancer. abstract_id: PUBMED:22889409 Green tea drinking and subsequent risk of breast cancer in a population-based cohort of Japanese women. Introduction: Although many in-vitro and animal studies have demonstrated a protective effect of green tea against breast cancer, findings from epidemiological studies have been inconsistent, and whether high green tea intake reduces the risk of breast cancer remains unclear. Methods: In this Japan Public Health Center-based Prospective Study, 581 cases of breast cancer were newly diagnosed in 53,793 women during 13.6 years' follow-up from the baseline survey in 1990-94. After the 5-year follow-up survey in 1995-98, 350 cases were newly diagnosed in 43,639 women during 9.5 years' follow-up. The baseline questionnaire assessed the frequency of total green tea drinking while the 5-year follow-up questionnaire assessed that of two types of green tea, Sencha and Bancha/Genmaicha, separately. Results: Compared with women who drank less than 1 cup of green tea per week, the adjusted hazard ratio [HR] for women who drank 5 or more cups per day was 1.12 (95% confidence interval [CI] 0.81-1.56; P for trend = 0.60) in the baseline data. Similarly, compared with women who drank less than 1 cup of Sencha or Bancha/Genmaicha per week, adjusted HRs for women who drank 10 or more cups per day were 1.02 (95% CI 0.55-1.89; P for trend = 0.48) for Sencha and 0.86 (0.34-2.17; P for trend = 0.66) for Bancha/Genmaicha. No inverse association was found regardless of hormone receptor-defined subtype or menopausal status. Conclusions: In this population-based prospective cohort study in Japan we found no association between green tea drinking and risk of breast cancer. abstract_id: PUBMED:23750333 Black tea, green tea and risk of breast cancer: an update. Previous meta-analysis indicated conflicting results in case-control versus cohort studies on the association of green tea with breast cancer risk, and conflicting results were also found in case-control versus cohort studies in another meta-analysis on the association of black tea with breast cancer risk. Many studies were published after the previous meta-analysis. Besides, the dose-response relationship of tea consumption with breast cancer risk is unclear. Thus the association of tea consumption with breast cancer risk was assessed incorporating new publications. Summary relative risk (RR) for highest versus lowest level of tea consumption was calculated based on fixed or random effect models. Dose-response relationship was assessed by restricted cubic spline model and multivariate random-effect meta-regression. The combined results from 9 studies suggested no significant association between green tea consumption and breast cancer risk (RR = 0.82, 95% CI = 0.64-1.04). No significant association was found among cohort studies and case-control studies after sensitivity analysis, respectively. A linear but not significant dose-response association was found between green tea consumption and breast cancer risk. The combined results from 25 studies demonstrated no significant association between black tea consumption and breast cancer risk (RR = 0.98, 95% CI = 0.93-1.03), and no significant association was found in subgroup analysis. A linear but not significant dose-response association was found between black tea consumption and breast cancer risk. Based on the current evidence, black tea and green tea might not contribute significantly to breast cancer risk, respectively. abstract_id: PUBMED:25114865 Green tea compounds in breast cancer prevention and treatment. Breast cancer is the most common cancer among women. In recent years, many in vitro and in vivo studies indicate that green tea possesses anti-cancer effects. The epidemiological studies, however, have produced inconclusive results in humans. Likewise, results from animal models about the preventive or therapeutic effects of green tea components are inconclusive. The mechanisms by which green tea intake may influence the risk of breast cancer in humans remain elusive mechanisms by which green tea intake may influence. Here, we review recent studies of green tea polyphenols and their applications in the prevention and treatment of breast cancer. Furthermore, we discuss the effect of green tea components on breast cancer by reviewing epidemiological studies, animal model studies and clinical trials. At last, we discuss the mechanisms by which green tea components suppress the development and recurrence of breast cancer. A better understanding of the mechanisms will improve the utilization of green tea in breast cancer prevention and therapy and pave the way to novel prevention and treatment strategies for breast cancer. abstract_id: PUBMED:24380073 Biological effects of green tea capsule supplementation in pre-surgery postmenopausal breast cancer patients. Regular green tea intake has been associated with an inverse risk of breast cancer. There are compelling experimental evidence that green tea, particularly, epigallocatechin gallate, the most potent green tea catechin, possesses a range of anti-cancer properties. We conducted a pre-surgical study of green tea capsules vs. no-green tea in women with primary breast cancer to determine the effects of green tea supplementation on markers of biological response. Postmenopausal women with ductal carcinoma in situ (DCIS) or stage I or II breast cancer took green tea capsules (940 mg per day) for an average of 35 days prior to surgery (n = 13) or received no green tea (n = 18). Paired diagnostic core biopsy and surgical specimen samples were analyzed for cell proliferation (Ki-67), apoptosis (caspase-3), and angiogenesis (CD34) separately in benign and malignant cell components. There were no significant changes in caspase-3 and CD34 in the green tea and no green tea groups and there were no significant differences in the change in these markers between the two groups. However, Ki-67 levels declined in both benign and malignant cell components in the green tea group; the decline in Ki-67 positivity in malignant cells was not statistically significant (P = 0.10) but was statistically significant in benign cells (P = 0.007). Ki-67 levels in benign and malignant cells did not change significantly in the no green tea group. There was a statistically significant difference in the change in Ki-67 in benign cells (P = 0.033) between the green tea and the no green tea groups. The trend of a consistent reduction in Ki-67 in both benign and malignant cells in the green tea group warrants further investigations in a larger study of breast cancer patients or high-risk women. abstract_id: PUBMED:10202393 Mechanistic findings of green tea as cancer preventive for humans. Based on our initial work with green tea, in which repeated topical applications of (-)-epigallocatechin gallate (EGCG), the main green tea polyphenol, inhibited tumor promotion in a two-stage carcinogenesis experiment on mouse skin (Phytother Res 1, 44-47, 1987), numerous scientists have since provided so much additional evidence of the benefits of drinking green tea that it is now an acknowledged cancer preventive in Japan, and will possibly soon be recognized as such in other countries. Our work has so far produced several important results with EGCG and green tea: a wide range of target organs in animal experiments for cancer prevention, wide bioavailability of 3H-EGCG in various organs of mice, delayed cancer onset of patients with a history of consuming over 10 cups of green tea per day, and absence of any severe adverse effects among volunteers who took 15 green tea tablets per day (2.25 g green tea extracts, 337.5 mg EGCG, and 135 mg caffeine) for 6 months. This paper introduces three new findings: 1) EGCG interacted with the phospholipid bilayer membrane resulting in confirmation of the sealing effect of EGCG; 2) EGCG inhibited TNF-alpha gene expression in the cells and TNF-alpha release from the cells; 3) high consumption of green tea was closely associated with decreased numbers of axillary lymph node metastases among premenopausal Stage I and II breast cancer patients, and with increased expression of progesterone and estrogen receptors among postmenopausal ones. These results provide new insights into our understanding of the mechanisms of action of tea polyphenols and green tea extract as a cancer preventive. abstract_id: PUBMED:17183063 Green tea and the prevention of breast cancer: a case-control study in Southeast China. Breast cancer is the most common malignancy in women worldwide. Tea has anticarcinogenic effects against breast cancer in experimental studies. However, epidemiologic evidence that tea protects against breast cancer has been inconsistent. A case-control study was conducted in Southeast China between 2004 and 2005. The incidence cases were 1009 female patients aged 20-87 years with histologically confirmed breast cancer. The 1009 age-matched controls were healthy women randomly recruited from breast disease clinics. Information on duration, frequency, quantity, preparation, type of tea consumption, diet and lifestyle were collected by face-to-face interview using a validated and reliable questionnaire. Conditional logistic regression analyses were used to estimate odds ratios (ORs) and associated 95% confidence intervals. Compared with non-tea drinkers, green tea drinkers tended to reside in urban, have better education and have higher consumption of coffee, alcohol, soy, vegetables and fruits. After adjusting established and potential confounders, green tea consumption was associated with a reduced risk of breast cancer. The ORs were 0.87 (0.73-1.04) in women consuming 1-249 g of dried green tea leaves per annum, 0.68 (0.54-0.86) for 250-499 g per annum, 0.59 (0.45-0.77) for 500-749 g per annum and 0.61 (0.48-0.78) for >or=750 g per annum, with a statistically significant test for trend (P < 0.001). Similar dose-response relationships were observed for duration of drinking green tea, number of cups consumed and new batches prepared per day. We conclude that regular consumption of green tea can protect against breast cancer. More research to closely examine the relationship between tea consumption and breast cancer risk is warranted. abstract_id: PUBMED:32143309 Beneficial Properties of Green Tea Catechins. Green tea (Camellia sinesis) is widely known for its anticancer and anti-inflammatory properties. Among the biologically active compounds contained in Camellia sinesis, the main antioxidant agents are catechins. Recent scientific research indicates that the number of hydroxyl groups and the presence of characteristic structural groups have a major impact on the antioxidant activity of catechins. The best source of these compounds is unfermented green tea. Depending on the type and origin of green tea leaves, their antioxidant properties may be uneven. Catechins exhibit the strong property of neutralizing reactive oxygen and nitrogen species. The group of green tea catechin derivatives includes: epicatechin, epigallocatechin, epicatechin gallate and epigallocatechin gallate. The last of these presents the most potent anti-inflammatory and anticancer potential. Notably, green tea catechins are widely described to be efficient in the prevention of lung cancer, breast cancer, esophageal cancer, stomach cancer, liver cancer and prostate cancer. The current review aims to summarize the potential anticancer effects and molecular signaling pathways of major green tea catechins. It needs to be clearly emphasized that green tea as well as green tea catechols cannot replace the standard chemotherapy. Nonetheless, their beneficial effects may support the standard anticancer approach. Answer: The association between green tea drinking and the onset of breast cancer has been investigated in several studies with varying results. Some studies suggest that green tea consumption may be associated with a delayed onset of breast cancer. For instance, a study within the Shanghai Women's Health Study found a time-dependent interaction between green tea consumption and age of breast cancer onset, suggesting that women who started drinking green tea at 25 years of age or younger had a reduced risk of developing premenopausal breast cancer, but an increased risk of postmenopausal breast cancer (PUBMED:20006278). This indicates that the relationship between tea consumption and breast cancer risk may depend on menopausal status. Other studies have found a modest reduction in breast cancer risk associated with green tea drinking. A population-based case-control study in Shanghai, China, reported that regular drinking of green tea was associated with a slightly decreased risk for breast cancer, particularly among premenopausal women (PUBMED:19074205). Similarly, a study among Japanese patients found that increased consumption of green tea was associated with decreased recurrence of stage I and II breast cancer, suggesting a beneficial effect on prognosis (PUBMED:9600118). However, not all studies have found a protective effect of green tea against breast cancer. A population-based prospective cohort study in Japan found no association between green tea drinking and the risk of breast cancer (PUBMED:22889409). A meta-analysis also indicated no significant association between green tea consumption and breast cancer risk, suggesting that green tea might not contribute significantly to breast cancer risk (PUBMED:23750333). In summary, while some studies suggest that green tea consumption may be associated with a later onset or reduced risk of breast cancer, particularly in premenopausal women, the evidence is not consistent across all research. Further studies are needed to confirm these findings and to better understand the potential mechanisms by which green tea intake may influence breast cancer risk (PUBMED:20006278, PUBMED:19074205, PUBMED:9600118, PUBMED:22889409, PUBMED:23750333).
Instruction: Correlation of 11C-methionine PET and diffusion-weighted MRI: is there a complementary diagnostic role for gliomas? Abstracts: abstract_id: PUBMED:38324045 Evaluating the Diagnostic Efficacy of 99mTc-Methionine Single-Photon Emission Computed Tomography-Computed Tomography: A Head-to-Head Comparison with 11C-Methionine Positron Emission Tomography-Magnetic Resonance Imaging in Glioma Patients. Background: Amino acid positron emission tomography (PET) imaging plays a significant role in the diagnosis of gliomas and in differentiating tumor recurrence from necrosis. In this study, the authors have evaluated the diagnostic efficacy of [99mTc]Tc-methionine single-photon emission computed tomography-computed tomography (SPECT-CT) in comparison with [11C]methionine PET-magnetic resonance imaging (MRI) in delineating tumors. Methods: Thirty-one (primary: 16 and postoperative: 15) patients of confirmed (either MRI or histopathological proven) glioma underwent both [99mTc]Tc-methionine SPECT-CT and [11C]methionine PET-MRI. A comparative analysis was performed between SPECT, PET, and MR images to calculate the concordance between the modalities and to evaluate the diagnostic efficacy of the [99mTc]Tc-methionine SPECT. Results: [99mTc]Tc-methionine SPECT showed comparable uptake in the tumor lesions in comparison to [11C]methionine PET. A significant and strong positive correlation was observed between the volume of tumor (Vt) in PET and Vt MR (p < 0.004). Likewise, a significant and strong positive correlation was found between Vt SPECT and Vt MR. [99mTc]-methionine has a sensitivity and specificity of 91% and 75%, respectively, compared with 82% and 100% for [11C]methionine in postoperative cases to differentiate the tumor recurrence from necrosis. The sensitivity and specificity of [99mTc]Tc-methionine was 92% and 100%, respectively, compared with 92% and 67% for [11C]methionine in primary tumors. Conclusion: [99mTc]Tc-methionine SPECT-CT is as equally good as [11C]methionine for diagnosing and differentiating it from necrosis especially in high-grade glioma. abstract_id: PUBMED:35171292 Preoperative [11C]methionine PET to personalize treatment decisions in patients with lower-grade gliomas. Background: PET with radiolabeled amino acids is used in the preoperative evaluation of patients with glial neoplasms. This study aimed to assess the role of [11C]methionine (MET) PET in assessing molecular features, tumor extent, and prognosis in newly diagnosed lower-grade gliomas (LGGs) surgically treated. Methods: One hundred and fifty-three patients with a new diagnosis of grade 2/3 glioma who underwent surgery at our Institution and were imaged preoperatively using [11C]MET PET/CT were retrospectively included. [11C]MET PET images were qualitatively and semi-quantitatively analyzed using tumor-to-background ratio (TBR). Progression-free survival (PFS) rates were estimated using the Kaplan-Meier method and Cox proportional-hazards regression was used to test the association of clinicopathological and imaging data to PFS. Results: Overall, 111 lesions (73%) were positive, while thirty-two (21%) and ten (6%) were isometabolic and hypometabolic at [11C]MET PET, respectively. [11C]MET uptake was more common in oligodendrogliomas than IDH-mutant astrocytomas (87% vs 50% of cases, respectively). Among [11C]MET-positive gliomas, grade 3 oligodendrogliomas had the highest median TBRmax (3.22). In 25% of patients, PET helped to better delineate tumor margins compared to MRI only. In IDH-mutant astrocytomas, higher TBRmax values at [11C]MET PET were independent predictors of shorter PFS. Conclusions: This work highlights the role of preoperative [11C]MET PET in estimating the type of suspected LGGs, assessing tumor extent, and predicting biological behavior and prognosis of histologically confirmed LGGs. Our findings support the implementation of [11C]MET PET in routine clinical practice to better manage these neoplasms. abstract_id: PUBMED:24709984 Correlation of 11C-methionine PET and diffusion-weighted MRI: is there a complementary diagnostic role for gliomas? Introduction: 11C-Methionine (MET) PET and diffusion-weighted (DW) MRI are commonly used for evaluation of gliomas. We assessed the correlation between MET uptake and diffusion restriction measured on DW MRI in glioma. Materials And Methods: Thirty-one patients with gliomas, who were initially examined with MET PET and DW MRI, were enrolled retrospectively. MET PET and apparent diffusion coefficient (ADC) images were coregistered to each other, using rigid-body transformation. Tumor-to-normal count density ratio of the cortex (TNR) and normalized apparent diffusion coefficient (nADC) value were measured for each voxel on the MET PET and ADC map. The maximum TNR (TNRmax) and minimum nADC (nADCmin) were obtained for each tumor. Correlations between those parameters were evaluated. Results: The TNRmax and nADCmin values of a glioma were significantly correlated (r=-0.42). TNRmax and nADCmin were significantly correlated with glioma grades. Furthermore, TNRmax and nADCmin showed a trend for correlation with the Ki-67 index. We analyzed the correlation between voxel-based TNR and ADC within a tumor and observed no correlation between them. Regions with high MET uptake did not correspond with regions with low nADC. Conclusion: We found a negative correlation between TNRmax and nADCmin for each glioma; however, MET uptake and ADC within a tumor were independent of each other and were heterogeneous. The two parameters represent different biological features; thus, as a comprehensive approach, MET PET and DW MRI might have a complementary role in the characterization of gliomas. abstract_id: PUBMED:25270372 Spatial distribution of malignant tissue in gliomas: correlations of 11C-L-methionine positron emission tomography and perfusion- and diffusion-weighted magnetic resonance imaging. Background: The prognosis of glioma patients is contingent on precise target selection for stereotactic biopsies and the extent of tumor resection. (11)C-L-methionine (MET) positron emission tomography (PET) demonstrates tumor heterogeneity and invasion with high diagnostic accuracy. Purpose: To compare the spatial tumor distribution delineated by MET PET with that by perfusion- and diffusion-weighted magnetic resonance imaging (MRI), in order to understand the diagnostic value of these MRI methods, when PET is not available. Material And Methods: Presurgical MET PET and MRI, including perfusion- and diffusion-weighted MRI, were acquired in 13 patients (7 high-grade gliomas, 6 low-grade gliomas). A quantitative volume of interest analysis was performed to compare the modalities objectively, supplemented by a qualitative evaluation that assessed the clinical applicability. Results: The inaccuracy of conventional MRI was confirmed (area under the curve for predicting voxels with high MET uptake = 0.657), whereas cerebral blood volume (CBV) maps calculated from perfusion data improved accuracy (area under the curve = 0.760). We considered CBV maps diagnostically comparable to MET PET in 5/7 cases of high-grade gliomas, but insufficient in all cases of low-grade gliomas when evaluated subjectively. Cerebral blood flow and apparent diffusion coefficient maps did not contribute to further accuracy. Conclusion: Adding perfusion-weighted MRI to the presurgical protocol can increase the diagnostic accuracy of conventional MRI and is a simple and well-established method compared to MET PET. However, the definition of low-grade gliomas with subtle or no alterations on cerebral blood volume maps remains a diagnostic challenge for stand-alone MRI. abstract_id: PUBMED:30725422 L-[Methyl-11C] Methionine-Positron-Emission Tomography (MET-PET). Methionine (MET) dependence is a cancer-specific metabolic abnormality that is due to MET overuse for aberrant transmethylation reactions. [11C]-MET is very useful for positron-emission tomography (PET) due to MET overuse in malignant tumors. Many benefits of MET-PET have been demonstrated. MET-PET can differentiate recurrent glioma and necrosis. [11C]-MET-PET can also predict prognosis in gliomas better than [18F]-FDG PET. [11C]-MET-PET is better than MRI for predicting survival in low-grade glioma (LGG). MET-PET has greater specificity for detecting residual tumor after surgery than MRI. abstract_id: PUBMED:37695488 The prognostic power of [11C]methionine PET in IDH-wildtype diffuse gliomas with lower-grade histological features: venturing beyond WHO classification. Purpose: IDH-wildtype (IDH-wt) diffuse gliomas with histological features of lower-grade gliomas (LGGs) are rare and heterogeneous primary brain tumours. [11C]Methionine (MET) positron emission tomography (PET) is commonly used to evaluate glial neoplasms at diagnosis. The present study aimed to assess the prognostic value of MET PET in newly diagnosed, treatment naïve IDH-wt gliomas with histological features of LGGs. Methods: Patients with a histological diagnosis of IDH-wt LGG who underwent preoperative (< 100 days) MET PET/CT and surgery were retrospectively included. Qualitative and semi-quantitative analyses of MET PET images were performed. Progression-free survival (PFS) and overall survival (OS) were analysed by Kaplan-Meier curves. Cox proportional-hazards regression was used to test the association of imaging and clinical data to PFS and OS. Results: We included 48 patients (M:F = 25:23; median age 55). 39 lesions were positive and 9 negative at MET PET. Positive MET PET was significantly associated with shorter median PFS (15.7 months vs. not reached, p = 0.0146) and OS time (32.6 months vs. not reached, p = 0.0253). Incomplete surgical resection and higher TBRmean values were independent predictors of shorter PFS on multivariate analysis (p < 0.001 for both). Higher tumour grade and incomplete surgical resection were independent predictors of OS at multivariate analysis (p = 0.027 and p = 0.01, respectively). Conclusion: MET PET is useful for the prognostic stratification of patients with IDH-wt glial neoplasms with histological LGGs features. Considering their huge biological heterogeneity, the combination of MET PET and molecular analyses may help to improve the prognostic accuracy in these diffuse gliomas subset and influence therapeutic choices accordingly. abstract_id: PUBMED:38095773 Individualized discrimination of tumor progression from treatment-related changes in different types of adult-type diffuse gliomas using [11C]methionine PET. Purpose: This study aimed to assess the ability of [11C]methionine (MET) PET in distinguishing between tumor progression (TP) and treatment-related changes (TRCs) among different types of adult-type diffuse gliomas according to the 2021 World Health Organization classification and predict overall survival (OS). Methods: We retrospectively selected 113 patients with adult-type diffuse gliomas with suspected TP who underwent MET PET imaging. Maximum and mean tumor-to-background ratios (TBRmax, TBRmean) and metabolic tumor volume (MTV) were calculated. Diagnoses were verified by histopathology (n = 50) or by clinical/radiological follow-up (n = 63). The diagnostic performance of MET PET parameters was evaluated through receiver operating characteristic (ROC) analysis and area under the curve (AUC) calculation. Survival analysis employed the Kaplan-Meier method and Cox proportional-hazards regression. Results: TP and TRCs were diagnosed in 76 (67%) and 37 (33%) patients, respectively. ROC analysis revealed TBRmax had the best performance in differentiating TP from TRCs with a cut-off of 1.96 in IDH-mutant astrocytoma (AUC, 0.87; sensitivity, 93%; specificity 69%), 1.80 in IDH-mutant and 1p/19q-codeleted oligodendroglioma (AUC, 0.96; sensitivity, 100%; specificity, 89%), and 2.13 in IDH wild-type glioblastoma (AUC, 0.89; sensitivity, 89%; specificity, 78%), respectively. On multivariate analysis, higher TBRmean and MTV were significantly correlated with shorter OS in all IDH-mutant gliomas, as well as in IDH-mutant astrocytoma subgroup. Conclusion: This work confirms that MET PET has varying diagnostic performances in distinguishing TP from TRCs within three types of adult-type diffuse gliomas, and highlights its high diagnostic accuracy in IDH-mutant and 1p/19q-codeleted oligodendroglioma and potential prognostic value for IDH-mutant gliomas, particularly IDH-mutant astrocytoma. abstract_id: PUBMED:38301247 The role of preoperative [11C]methionine PET in defining tumor-related epilepsy and predicting short-term postoperative seizure control in temporal lobe low-grade gliomas. Objective: Surgery is the mainstay of treatment for low-grade glioma (LGG)-related epilepsy. However, the goal of achieving both oncological radical resection and seizure freedom can be challenging. PET with [11C]methionine (MET) has been recently introduced in clinical practice for the management of patients with LGGs, not only to monitor the response to treatments, but also as a preoperative tool to define the metabolic tumor extent and to predict tumor grading, type, and prognosis. Still, its role in defining tumor-related epilepsy and postoperative seizure outcomes is limited. The aim of this preliminary study was to investigate the role of MET PET in defining preoperative seizure characteristics and short-term postoperative seizure control in a cohort of patients with newly diagnosed temporal lobe low-grade gliomas (tLGGs). Methods: Patients with newly diagnosed and histologically proven temporal lobe grade 2/3 gliomas (2021 WHO CNS tumor classification) who underwent resection at the authors' institution between July 2011 and March 2021 were included in this retrospective study. MET PET images were acquired, fused with MRI scans, and qualitatively and semiquantitatively analyzed. Any eventual PET/MRI involvement of the temporomesial area, seizure characteristics, and 1-year seizure outcomes were reported. Results: A total of 52 patients with tLGGs met the inclusion criteria. MET PET was positive in 41 (79%) patients, with a median metabolic tumor volume of 14.56 cm3 (interquartile range [IQR] 6.5-28.2 cm3). The median maximum and mean tumor-to-background ratio (TBRmax, TBRmean) were 2.24 (IQR 1.58-2.86) and 1.53 (IQR 1.37-1.70), respectively. The metabolic tumor volume was found to be related to the presence of seizures at disease onset, but only in noncodeleted tumors (p = 0.014). Regarding patients with uncontrolled seizures at surgery, only the temporomesial area PET involvement showed a statistical correlation both in the univariate (p = 0.058) and in the multivariate analysis (p = 0.030). At 1-year follow-up, seizure control was correlated with MET PET-derived semiquantitative data. Particularly, higher TBRmax (p = 0.0192) and TBRmean (p = 0.0128) values were statistically related to uncontrolled seizures 1 year after surgery. Conclusions: This preliminary study suggests that MET PET may be used as a preoperative tool to define seizure characteristics and outcomes in patients with tLGGs. These findings need to be further validated in larger series with longer epileptological follow-ups. abstract_id: PUBMED:34630267 Can High b Value Diffusion Be a Surrogate Marker for PET-A MR/PET Study in Neurooncology Set Up. Purpose: Hybrid whole-body magnetic resonance/positron emission tomography (MR/PET) systems are new diagnostic tools enabling the simultaneous acquisition of morphologic and multiparametric functional data, which allow for a diversified characterization of oncological diseases. This study aimed to compare the diagnostic ability of MRI with the diffusion-weighted image (DWI), and simultaneous integrated positron emission tomography MR/PET to detect malignant lesions and elucidate the utility and limitations of these imaging modalities in preoperative and postoperative follow up in cancer patients. Material and Methods: A total of 45 patients undergoing simultaneous MR/PET for CNS ICSOL in our institution between January 2016 and July 2020 were considered in this study. Post-processing was done in Siemens syngo software to generate a b2000 image. This image was then inverted to grayscale and compared with the NAC image of PET. Results: The lesion-based sensitivity, specificity, positive predictive value, and negative predictive value for DWI were 92.3, 83.3, 97.3, and 62.5%, respectively (at 95% CI and p was 0.000). The lesion-based sensitivity, specificity, positive predictive value, and negative predictive value for PET were 97.4, 71.4, 94.9, and 83.3%, respectively (at 95% CI and p was 0.000). The lesion-based sensitivity and specificity of DWI were comparable with those of PET. Conclusions: Although DWI and FDG-PET reflect different tissue properties, there is an association between the measures of both methods in CNS tumors probably because of the coupling of cellularity with tumor metabolism as seen on FDG and other PET tracers. Our study shows that DWI acts as a surrogate biomarker for FDG PET and other tracers in tumors. The method of DWI image generation is simple, radiation-free, and cost-effective in a clinical setup. The simultaneous DWI-PET study provides evidence and confirms the role of DWI in surveillance imaging of tumors. abstract_id: PUBMED:34066294 Voxelwise Principal Component Analysis of Dynamic [S-Methyl-11C]Methionine PET Data in Glioma Patients. Recent works have demonstrated the added value of dynamic amino acid positron emission tomography (PET) for glioma grading and genotyping, biopsy targeting, and recurrence diagnosis. However, most of these studies are based on hand-crafted qualitative or semi-quantitative features extracted from the mean time activity curve within predefined volumes. Voxelwise dynamic PET data analysis could instead provide a better insight into intra-tumor heterogeneity of gliomas. In this work, we investigate the ability of principal component analysis (PCA) to extract relevant quantitative features from a large number of motion-corrected [S-methyl-11C]methionine ([11C]MET) PET frames. We first demonstrate the robustness of our methodology to noise by means of numerical simulations. We then build a PCA model from dynamic [11C]MET acquisitions of 20 glioma patients. In a distinct cohort of 13 glioma patients, we compare the parametric maps derived from our PCA model to these provided by the classical one-compartment pharmacokinetic model (1TCM). We show that our PCA model outperforms the 1TCM to distinguish characteristic dynamic uptake behaviors within the tumor while being less computationally expensive and not requiring arterial sampling. Such methodology could be valuable to assess the tumor aggressiveness locally with applications for treatment planning and response evaluation. This work further supports the added value of dynamic over static [11C]MET PET in gliomas. Answer: The correlation between 11C-methionine (MET) PET and diffusion-weighted (DW) MRI in the evaluation of gliomas has been investigated, and findings suggest that there is indeed a complementary diagnostic role for these imaging modalities in glioma characterization. A study found a significant negative correlation between the maximum tumor-to-normal count density ratio (TNRmax) on MET PET and the minimum normalized apparent diffusion coefficient (nADCmin) on DW MRI for each glioma, indicating that these two parameters are related to glioma grades and potentially the Ki-67 index, which is a marker of proliferation. However, within individual tumors, MET uptake and ADC were found to be independent of each other and heterogeneous, suggesting that MET PET and DW MRI represent different biological features of gliomas. Therefore, using both MET PET and DW MRI could provide a more comprehensive approach to characterizing gliomas (PUBMED:24709984). Additionally, other studies have highlighted the utility of MET PET in various aspects of glioma diagnosis and management. For instance, MET PET has been shown to be useful in differentiating recurrent glioma from necrosis, predicting prognosis, and detecting residual tumor after surgery with greater specificity than MRI (PUBMED:30725422). It has also been demonstrated that MET PET can be valuable in preoperative evaluation, helping to delineate tumor margins and assess molecular features, tumor extent, and prognosis in lower-grade gliomas (PUBMED:35171292). Furthermore, MET PET has been found to be a useful prognostic tool in IDH-wildtype diffuse gliomas with lower-grade histological features (PUBMED:37695488) and in distinguishing tumor progression from treatment-related changes in adult-type diffuse gliomas (PUBMED:38095773). In summary, the correlation between MET PET and DW MRI suggests that they have a complementary diagnostic role in gliomas, with each modality providing unique insights into the biological characteristics of the tumors. This complementary use can enhance the accuracy of glioma characterization, aiding in diagnosis, treatment planning, and prognostic assessment (PUBMED:24709984).
Instruction: Does computer-assisted surgical navigation total knee arthroplasty reduce venous thromboembolism compared with conventional total knee arthroplasty? Abstracts: abstract_id: PUBMED:18756342 Does computer-assisted surgical navigation total knee arthroplasty reduce venous thromboembolism compared with conventional total knee arthroplasty? Introduction: The study aims to show that total knee arthroplasty using computer-assisted surgical navigation without intramedullary rodding is safer than conventional intramedullary techniques in preventing venous thromboembolism. Methods: 30 patients were grouped into groups of 10. Groups A and B had conventional intramedullary rodding of the femur and/or tibia. Group C had no rodding of the femur and tibia using computer-assisted surgical navigation. The degree, duration and size of the embolic shower were captured by a transoesophageal echocardiography probe. The echogenic emboli were graded according to the Mayo Clinic score. Haemodynamic parameters such as pulse oximetry oxygen saturation, end-tidal carbon dioxide, heart rate and mean arterial pressure were also recorded. Results: There was a significant difference in the size of the emboli and the Mayo Clinic score when comparing the groups with intramedullary rodding and those without. There was also a significant difference in the pulse oximetry oxygen saturation and heart rate when the group without intramedullary rodding was compared with groups with rodding. Conclusion: Surgical navigation total knee arthroplasty may be safer than conventional total knee replacement with intramedullary rodding in preventing venous thromboembolism. abstract_id: PUBMED:31610791 Effects of computer-assisted navigation versus the conventional technique for total knee arthroplasty on levels of plasma thrombotic markers: a prospective study. Background: Venous thromboembolism (VTE) is a major sequela after total knee arthroplasty (TKA). We prospectively compared the differences in the perioperative plasma D-dimer and fibrinogen levels between the individuals undergoing TKA via computer-assisted navigation and via a conventional method as the surrogate comparison for VTE. There were 174 patients fulfilling the inclusion criteria and providing valid informed consent between September 2011 and November 2013. There were 69 females and 20 males in the navigation-assisted group (median age: 71.00 years), while the conventional group was composed of 59 females and 26 males (median age: 69.00 years). Blood samples were obtained prior to and at 24 and 72 h after surgery for measurement of the levels of plasma D-dimer and fibrinogen. Results: A significantly lower plasma D-dimer level 24 h after TKA (p = 0.001) and a milder postoperative surge 24 h after TKA (p = 0.002) were observed in patients undergoing navigation-assisted TKA. The proportions of subjects exceeding the plasma D-dimer cut-off values of 7.5, 8.6 and 10 mg/L 24 h after TKA were all significantly higher in the conventional group than in the navigation-assisted group (p = 0.024, 0.004, and 0.004, respectively). Conclusions: A lower plasma D-dimer level and a milder surge in the plasma D-dimer level were observed in patients undergoing navigation-assisted TKA in comparison with patients undergoing conventional TKA 24 h after surgery. These findings may supplement the known advantages of navigation-assisted TKA. abstract_id: PUBMED:33148341 Unicompartmental knee arthroplasty and revision total knee arthroplasty have a lower risk of venous thromboembolism disease at 30 days than primary total knee arthroplasty. Background: While multiple studies have demonstrated a lower venous thromboembolism disease (VTED) risk for unicompartmental knee arthroplasty (UKA) compared to primary total knee arthroplasty (TKA), recent reports have shown that revision TKA also had a lower VTED risk compared to primary TKA, an unexpected finding because of its theoretical increased risk. Given the paucity of up-to-date comparative studies, our goal was to perform a high-powered VTED risk comparison study of UKA and revision TKA to primary TKA using recent data. Methods: The National Surgical Quality Improvement Program (NSQIP) database was queried between 2011 and 2018, and we identified 213,234 patients for inclusion: 191,810 primary TKA, 9294 UKA, and 12,130 revision TKA. Demographics, medical comorbidities, and possible VTE risk factors were collected. Thirty-day outcomes, including deep vein thrombosis (DVT), pulmonary embolism (PE), and all-cause VTED were compared between knee arthroplasty types. Results: On multivariate analysis, UKA was significantly associated with lower rates of DVT [OR 0.44 (0.31-0.61); P < 0.001], PE [OR 0.42 (0.28-0.65); P < 0.001], and all-cause VTED [OR 0.42 (0.32-0.55); P < 0.001] when compared to primary TKA. Revision TKA was significantly associated with lower rates of PE [OR 0.62 (0.47-0.83); P = 0.002], and all-cause VTED [OR 0.82 (0.70-0.98); P = 0.029] when compared to primary TKA. Conclusions: Utilizing recent data from a nationwide patient cohort and controlling for confounding variables, our results showed that both revision TKA and UKA had a lower risk of VTED compared to primary TKA, corroborating the results of recent investigations. Additional prospective investigations are needed to explain this unexpected result. abstract_id: PUBMED:24238825 Topical administration of tranexamic acid in primary total hip and total knee arthroplasty. Major blood loss is a known potential complication in total hip and total knee arthroplasty. We conducted a prospective, stratified, randomized, double-blind, placebo-controlled trial that evaluated 100 patients undergoing total knee or total hip arthroplasty to evaluate the effect on blood loss using the topical application of tranexamic acid. Participants received either 2 g of topical tranexamic acid or the equivalent volume of placebo into the joint prior to surgical closure. Tranexamic acid resulted in a lower mean maximum decline in postoperative hemoglobin levels when compared to placebo (P = 0.013). Patients in the tranexamic acid group demonstrated an improved but non-significant reduction in the units of blood transfused compared to placebo (P = 0.423). There was no clinically significant increase in complications in the tranexamic acid group, including no incidence of venous thromboembolism. abstract_id: PUBMED:36579565 The Differential Systemic Biological Effects between Computer Navigation and Conventional Total Knee Arthroplasty (TKA) Surgeries: A Prospective Study. Distal femur reaming-free total knee arthroplasty (TKA) was reported to possess lower risk of acute myocardial infarction (AMI) or venous thromboembolism (VTE) than conventional TKA in a retrospective population-based study. We tried to offer prospective biological evidence by comparing the levels of AMI and VTE serum surrogate markers among the patients undertaking navigation and conventional TKAs to support these observations. Thirty-four participants undertaking navigation TKA and 34 patients receiving conventional TKA were recruited between February 2013 and December 2015. Blood samples were drawn from all participants before TKA, and 24 and 72 h after TKA, to assess the concentration of soluble P-selectin, matrix metalloproteinase-9 (MMP-9), C-reactive protein (CRP), and interleukin-8 (IL-8) between the participants undergoing navigation and conventional TKAs. We showed that significantly lower serum levels of soluble P-selectin 24 h after, as well as CRP 24 and 72 h after TKA could be observed in the navigation cohort. The more prominent surge of serum soluble P-selectin and CRP were perceived 24 and 72 h after TKA among the participants undergoing conventional TKA. Based upon our prospective biological evidence, the merits of navigation TKA are strengthened by lower levels of AMI and VTE serum surrogate markers. abstract_id: PUBMED:32211475 Total knee arthroplasty in hemophilia A. Hemophilia A is a rare genetic disorder involving a deficiency of clotting factor VIII. Coagulation factor replacement therapy has prolonged the life expectancy of patients with hemophilia, but recurrent hemarthrosis of major joints is often a common occurrence. Therefore, orthopaedic adult reconstructive surgeons increasingly encounter hemophilic arthropathy in young adults and consider treating with total joint arthroplasty. In this report, the authors describe a patient with hemophilia A and severe knee osteoarthritis, who was subsequently treated with primary total knee arthroplasty. This rare case is an opportunity to explore a variety of unique clinical scenarios specific to patients with hemophilia, including the maintenance of optimal factor VIII levels through clotting factor infusions and prevention of a venous thromboembolic event. abstract_id: PUBMED:35440299 Prophylaxis for Venous Thromboembolic Events in Elective Total Hip and Total Knee Arthroplasty. Venous thromboembolism (VTE) is a serious complication after major orthopaedic operations, such as a total hip (THA) and knee (TKA) arthroplasty. Therefore, perioperative VTE prophylaxis is recommended; a multitude of modern options are available, including both pharmacologic (aspirin, unfractionated and lowmolecular- weight heparin, vitamin K antagonists, and novel oral anticoagulants) and/or mechanical interventions (early mobilization, graduated compression stockings, intermittent pneumatic compression devices, and venous foot pumps). However, because of the abundance of these possibilities, it is crucial to understand the benefits and drawbacks of each VTE prophylaxis option to ensure that the optimal treatment plan is developed for each patient. The American College of Chest Physicians (AACP) and the American Academy of Orthopaedic Surgeons (AAOS) have both published individual guidelines on VTE prophylaxis regimens, alongside numerous studies evaluating the efficacy and outcomes of the different prophylaxis modalities. The purpose of this review is to provide a summary of the evidence on VTE prophylaxis after elective total hip and knee arthroplasty based on current guidelines and highlight the major concerns and potential complications. abstract_id: PUBMED:35236467 Does intraoperative mechanical prophylaxis prevent venous thromboembolism in total knee arthroplasty? - effectiveness of passive-assisted ankle motion in surgical/non-surgical side. Background: Gradual compression stocking (GCS) and intermittent pneumatic compression device (IPCD) are used for intraoperative mechanical prophylaxis against venous thromboembolism (VTE) during total knee arthroplasty (TKA). In this study, we applied a passive-assisted ankle motion in combination with GCS and IPCD during TKA and evaluated its effectiveness in preventing postoperative VTE. Methods: We included 77 patients who underwent primary unilateral TKA. Patients were divided into group A (53 patients who underwent GCS and IPCD on their non-surgical side limb) and group B (24 patients who underwent passive ankle dorsiflexion motion in addition to GCS and IPCD on their non-surgical side limb). Deep vein thrombosis (DVT) was assessed using lower extremity ultrasonography (US). The incidence of VTE in each affected limb was compared between the two groups. Results: US was performed 4 days after surgery on average. The incidence of DVT in groups A and B was 47.2 and 70.8 %, respectively. In group A, 22.6 % of DVTs were found only on the surgical side, 11.3 % on the non-surgical side, and 13.2 % on both sides. On the other hand, in group B, 41.7 % of DVTs were found only on the surgical side, 4.2 % on the non-surgical side, and 25.0 % on both sides. No significant difference in the incidence of VTE was noted between the 2 groups. Conclusions: The intraoperative application of passive ankle motion plus GCS and IPCD might not further reduce the incidence of postoperative DVT in TKA patients. abstract_id: PUBMED:31530463 Have Venous Thromboembolism Rates Decreased in Total Hip and Knee Arthroplasty? Background: Venous thromboembolism (VTE) is a major cause of morbidity, mortality, and healthcare costs in arthroplasty patients. In an effort to reduce VTEs, numerous strategies and guidelines have been implemented, but their impact remains unclear. The purpose of this study is to compare annual trends in 30-day VTE, deep vein thrombosis (DVT), pulmonary embolism (PE), and all-cause mortality in (1) total hip arthroplasty (THA) and (2) total knee arthroplasty (TKA). Methods: The American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database identified 363,530 patients who received a TKA or THA from 2008 to 2016. Bivariate analysis was performed to assess the association between the year in which surgery was performed and demographics and comorbidities. Bimodal multivariate logistic regression models for THA and TKA were developed for 2009-2016 using 2008 as a reference. Results: Overall incidence of VTE, DVT, PE, and mortality for THA were 0.6%, 0.4%, 0.3%, and 0.2%, respectively. Based off of multivariate regression VTE, DVT, PE, and mortality rates have shown no significant (P > .05) change from 2008 to 2016 in THA patients. Overall incidence of VTE, DVT, PE, and mortality for TKA were 1.4%, 0.9%, 0.6%, and 0.1%, respectively. Multivariate regression revealed reductions when compared to 2008 for VTEs and DVTs from 2009 to 2016 (P < .05) for TKA patients. A significant reduction in PEs (P = .002) was discovered for 2016, while no significant change was observed in mortality (P > .05). Conclusion: Approximately 1 in 71 patient undergoing TKA, and 1 in 167 undergoing THA developed a VTE within 30 days after surgery. Our study demonstrated that VTE incidence rates have decreased in TKA, while remaining stable in THA over the past 8 years. Further research to determine the optimal prophylaxis algorithm that would allow for a personalized, efficacious, and safe thromboprophylaxis regimen is needed. Level Of Evidence: III. abstract_id: PUBMED:25458092 Topical versus intravenous tranexamic acid in total knee arthroplasty. The objective of this study is to compare the effectiveness of intravenous versus topical application of tranexamic acid in patients undergoing knee arthroplasty. All patients who underwent primary knee arthroplasty at our total joint center over a 12-month period were included in the study. One surgeon utilized 1 g of IV TXA at time of incision in all patients (n=373) except those with a documented history of venous thromboembolism (VTE). Two surgeons utilized a topical application of TXA for all patients without exception (n=198) in which the joint was injected after capsular closure with 3 g TXA/100 mL saline. The transfusion rate was 0% in the topical group vs. 2.4% in the IV group and this was statistically significant (P<0.05). Answer: Yes, computer-assisted surgical navigation total knee arthroplasty (TKA) appears to reduce the risk of venous thromboembolism (VTE) compared with conventional total knee arthroplasty. A study that compared the two methods found a significant difference in the size of emboli and the Mayo Clinic score when comparing groups with intramedullary rodding (conventional method) to the group without rodding using computer-assisted surgical navigation. Additionally, there was a significant difference in pulse oximetry oxygen saturation and heart rate when the group without intramedullary rodding was compared with groups with rodding, suggesting that surgical navigation TKA may be safer in preventing VTE (PUBMED:18756342). Another study reported significantly lower plasma D-dimer levels 24 hours after TKA and a milder postoperative surge in patients undergoing navigation-assisted TKA compared to those undergoing conventional TKA. This indicates that navigation-assisted TKA may have a lower risk of VTE (PUBMED:31610791). Furthermore, a prospective study comparing the levels of acute myocardial infarction (AMI) and VTE serum surrogate markers among patients undergoing navigation and conventional TKAs found significantly lower serum levels of soluble P-selectin 24 hours after, as well as C-reactive protein (CRP) 24 and 72 hours after TKA in the navigation cohort. This suggests that navigation TKA may have lower levels of AMI and VTE serum surrogate markers, supporting the idea that it is associated with a reduced risk of VTE (PUBMED:36579565). In summary, evidence from these studies suggests that computer-assisted surgical navigation TKA may reduce the risk of VTE compared to conventional TKA.
Instruction: Determining the potency of neuromuscular blockers: are traditional methods flawed? Abstracts: abstract_id: PUBMED:20430764 Determining the potency of neuromuscular blockers: are traditional methods flawed? Background: Traditionally, the clinical potency of neuromuscular blocking drugs has been measured using linear regression analysis (LRA) after log dose and probit or logit data transformation. However, probit and logit analyses are meant to handle only quantal responses with binomial error distributions, not continuous data such as per cent of maximal response. Some statisticians now consider this approach outmoded and assert that non-linear regression (NLR) is the preferred way to analyse sigmoidal dose-response relationships. We were interested in the degree to which the method of regression analysis alters calculated ED(50) and ED(95) values. Methods: We analysed raw data for succinylcholine, rocuronium, rapacuronium, and cisatracurium from previously published studies using both LRA and NLR to determine the ED(50) and ED(95) values and the respective slopes of the dose-response relationships. We also estimated drug potency using the Hill equation (HE) using the slopes obtained from LRA and NLR. Results: ED(50) values calculated by NLR, LRA, or the HE were interchangeable. LRA resulted in ED(95) values that were 13-18% lower than those determined by NLR. The 95% confidence limits (CL) for the ED(50) did not exceed +/-8% of the estimated value no matter how it was calculated vs +/-20-30% for the ED(95). Conclusions: The ED(50) is a very robust parameter. When comparing the potency of neuromuscular blockers, it is this value rather than the ED(95) that should be used. The CL for the ED(95), regardless of how it is calculated, are so wide that this parameter must be viewed, at best, as an approximation. abstract_id: PUBMED:21068050 Neuromuscular dose-response studies: determining sample size. Background: Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. Methods: The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. Results: The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasing greater sample sizes (e.g. an 'n' of 37 for a ±12% error). Conclusions: On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance. abstract_id: PUBMED:19317862 Acceleromyography and mechanomyography for establishing potency of neuromuscular blocking agents: a randomized-controlled trial. Background: Acceleromyography (AMG) is increasingly being used in neuromuscular research, including in studies establishing the potency of neuromuscular blocking and reversal agents. However, AMG is insufficiently validated for use interchangeably with the gold standard, mechanomyography (MMG) for this purpose. The aim of this study was to compare AMG and MMG for establishing dose-response relationship and potency, using rocuronium as an example. Methods: We included 40 adult patients in this randomized-controlled single-dose response study. Anaesthesia was induced and maintained with propofol and opioid. Neuromuscular blockade was induced with rocuronium 100, 150, 200 or 250 microg/kg. Neuromuscular monitoring was performed with AMG (TOF-Watch SX) with pre-load (Hand Adapter) at one arm and MMG (modified TOF-Watch SX) on the other, using 0.1 Hz single twitch stimulation. Dose-response relationships were determined for both recording methods using log (dose) against probit (maximum block). The obtained slopes of the regression lines, ED(50), ED(95) and the maximum block were compared. Results: The ED(50) and ED(95) [95% confidence interval (CI)] for AMG were 185 microg/kg(167-205 microg/kg) and 368 microg/kg(288-470 microg/kg), compared with 174 microg/kg(159-191 microg/kg) and 338 microg/kg(273-418 microg/kg) for MMG. There were no statistically significant biases in maximum block, ED(50), ED(95) or slopes obtained with the two methods. Conclusion: Our results indicate that any possible difference between AMG and MMG is so small that it justifies AMG to be used for establishing the potency of neuromuscular blocking agents. However, the wide CIs show that we cannot rule out a 13% higher ED(50) and a 26% higher ED(95) for AMG. abstract_id: PUBMED:9232540 Neuromuscular blockade: offset anomalies. Are they simply potency-related receptor bonding effects? Rapid making and breaking of bonds between quaternary ammonium compounds and cholinergic receptors is typical of ion-pair bonding, which is weak, and ion-pair reactions, which are extremely fast. These properties explain the observed rapid association and dissociation of turbocurarine at receptors. The time course receptor offset is determined by two factors, buffered diffusion due to repetitive bonding, and a potency-related offset-retarding effect. The strength of the latter is a function of chemical structure, which determines the microscopic molecular rate of drug-receptor association and dissociation. Together, buffered diffusion and the potency-related offset-retarding effect provide a complete rational physico-chemical explanation for the marked, yet variable, differences between onset and offset times of non-depolarizing neuromuscular blocking agents. The influence of a potency-related offset-retarding effect together with differing structural requirements for neuromuscular blocking potency and plasma carboxyesterase hydrolysis, suggests that a high potency ultrashort duration block is unlikely to be achieved in a non-depolarizing compound metabolized by plasma esterases alone. abstract_id: PUBMED:36627449 Discovery and Control of Succinimide Formation and Accumulation at Aspartic Acid Residues in The Complementarity-Determining Region of a Therapeutic Monoclonal Antibody. Purpose: Succinimide formation and isomerization alter the chemical and physical properties of aspartic acid residues in a protein. Modification of aspartic acid residues within complementarity-determining regions (CDRs) of therapeutic monoclonal antibodies (mAbs) can be particularly detrimental to the efficacy of the molecule. The goal of this study was to characterize the site of succinimide accumulation in the CDR of a therapeutic mAb and understand its effects on potency. Furthermore, we aimed to mitigate succinimide accumulation through changes in formulation. Methods: Accumulation of succinimide was identified through intact and reduced LC-MS mass measurements. A low pH peptide mapping method was used for relative quantitation and localization of succinimide formation in the CDR. Statistical modeling was used to correlate levels of succinimide with basic variants and potency measurements. Results: Succinimide accumulation in Formulation A was accelerated when stored at elevated temperatures. A strong correlation between succinimide accumulation in the CDR, an increase in basic charge variants, and a decrease in potency was observed. Statistical modeling suggest that a combination of ion exchange chromatography and potency measurements can be used to predict succinimide levels in a given sample. Reformulation of the mAb to Formulation B mitigates succinimide accumulation even after extended storage at elevated temperatures. Conclusion: Succinimide formation in the CDR of a therapeutic mAb can have a strong negative impact on potency of the molecule. We demonstrate that thorough characterization of the molecule by LC-MS, ion exchange chromatography, and potency measurements can facilitate changes in formulation that mitigate succinimide formation and the corresponding detrimental changes in potency. abstract_id: PUBMED:10743000 Electrodiagnosis in disorders of neuromuscular transmission Introduction: Electrophysiological studies are of recognized use in the confirmation of alterations of neuromuscular transmission in further determining their physiopathological characteristics, helping to differentiate them from other conditions with secondary effects on their function. Development And Conclusions: In our study we review the physiopathology of these disorders which compromise the safety factor of the neuromuscular junction, by presynaptic or postsynaptic alterations, and forms the basis of the results of electrophysiological studies. We describe the techniques currently most used: repetitive stimulation and single fibre electromyography complemented by conventional electromyography. Their application and findings in the commonest syndromes are discussed. Finally, their sensitivity, specificity and difficulties are considered. abstract_id: PUBMED:2924073 1,3,8- and 1,3,7-substituted xanthines: relative potency as adenosine receptor antagonists at the frog neuromuscular junction. 1. The ability of 1,3,8-substituted xanthines (1,3-dipropyl-8-(4-(2-aminoethyl)amino)carbonylmethyloxyphenyl) xan thine (XAC), 1,3-dipropyl-8-(4-carboxymethyloxyphenyl)xanthine (XCC), 1,3-dipropyl-8-(2-amino-4-chlorophenyl)xanthine (PACPX), 1,3-dipropyl-8-cyclopentylxanthine (DPCPX), 1,3-diethyl-8-phenylxanthine (DPX) and 8-phenyltheophylline (8-PT)), of 1,3,7-substituted xanthines (1-propargyl-3,7-dimethylxanthine (PGDMX) and caffeine), and of a 3-substituted xanthine (enprofylline) to antagonize the inhibitory effect of 2-chloroadenosine (CADO) on the amplitude of nerve-evoked twitches was investigated in innervated sartorius muscles of the frog. 2. All the 1,3,8-substituted xanthines, in concentrations virtually devoid of effect on neuromuscular transmission, shifted to the right, in a near parallel manner the log concentration-response curve for CADO. Linear Schild plots with slopes near to unity at concentration-ratios less than 14 were obtained for XAC, XCC, DPCPX, DPX and 8-PT. 3. The order of potency of the 1,3,8-substituted xanthines as antagonists of the effect of CADO was XAC (Ki = 23 nM) greater than or equal to DPCPX (35 nM) greater than 8-PT (200 nM) greater than or equal to DPX (295 nM) greater than XCC (1905 nM) greater than or equal to PACPX (2291 nM). No correlation was found between the potency of these xanthines as antagonists of the adenosine receptor at the frog neuromuscular junction and their reported potency as antagonists of the A1- or A2-adenosine receptors. 4. The 1,3,7-substituted xanthines, PGDMX and caffeine, in concentrations virtually devoid of effect on neuromuscular transmission, also caused parallel shifts to the right of the log concentration-response curves for CADO, but were less potent than the 1,3,8-substituted xanthines. PGDMX was more than 20 times more potent than caffeine. 5. Enprofylline in concentrations up to 100 microM did not antagonize the inhibitory effect of CADO on neuromuscular transmission. 6. It is concluded that the antagonist profile of the adenosine receptor mediating inhibition of transmission at the frog neuromuscular junction is different from the antagonist profile of the A1- and A2-adenosine receptors. abstract_id: PUBMED:8261906 Comparative neuromuscular blocking potency of pipecuronium and pancuronium. The effects of pipecuronium bromide (Pi.) and pancuronium bromide (Pa.) on the contractile response of rat-phrenic nerve diaphragm and frog's musculus rectus abdominis preparation were studied. Pi. and Pa. were found to have a dose-dependent reduction in the contractile response of the tested preparation. Trials were made to estimate the potency of Pi. in a comparison with Pa. In this respect Pi. exhibited a more potent effect than Pa. The duration of action is about twice as long as that of Pa. in equieffective doses. Neostigmine rapidly and completely antagonises the neuromuscular blockade caused by Pi. and Pa. abstract_id: PUBMED:9870089 Reduced neuromuscular blocking potency of atracurium in patients with purulent intrathoracic diseases Objective: Based on personal observations the neuromuscular blocking potency of atracurium was supposed to be diminished in purulent intrathoracic diseases. This hypothesis was tested in a prospective clinical trial. Methods: 52 adult patients undergoing general anaesthesia (methohexitone, sufentanil, flunitrazepam, N2O, enflurane) for elective thoracic surgery were investigated. After the intubation dose of 0.6 mg/kg atracurium was applied continuously to maintain a 90% suppression of the evoked compound electromyogram. According to the intraoperatively established diagnosis patients were allocated to three categories: 1) non-malignant tumor as the control group (n = 15), 2) lung cancer (n = 22), 3) purulent intrathoracic process without tumor (n = 15). The groups were compared regarding onset time, DUR 10% and maintenance dose of atracurium. Results: Patients with lung cancer did not differ significantly from the controls regarding efficiency of atracurium. In contrast, patients with a purulent intrathoracic process showed a significantly longer onset time (6.3 +/- 2.5 vs. 2.9 +/- 0.8 min, p < 0.001), and a significantly shorter DUR 10% (23 +/- 6 vs. 36 +/- 10 min, p < 0.001) compared to the control group. Mean infusion rate of atracurium to maintain a 90% suppression of the evoked compound electromyogram was significantly higher in patients with a purulent process compared to the controls (10.5 +/- 3.2 vs. 6.0 +/- 1.2 micrograms/kg.min, p < 0.001). Conclusion: Our results support the hypothesis that patients with a purulent intrathoracic disease show a clear reduction in neuromuscular blocking potency of atracurium. abstract_id: PUBMED:20837723 Determining the potency of neuromuscular blockers. N/A Answer: Traditional methods for determining the potency of neuromuscular blockers may indeed have some limitations. The clinical potency of these drugs has historically been measured using linear regression analysis (LRA) after log dose and probit or logit data transformation. However, this approach is now considered by some statisticians to be outdated because probit and logit analyses are intended for quantal responses with binomial error distributions, not continuous data such as the percent of maximal response. Non-linear regression (NLR) is suggested as the preferred method for analyzing sigmoidal dose-response relationships (PUBMED:20430764). When comparing the potency of neuromuscular blockers, the ED(50) value, which represents the dose required to achieve 50% of the maximal effect, is a robust parameter and should be used rather than the ED(95), which is the dose required to achieve 95% of the maximal effect. The confidence limits for the ED(95) are so wide that this parameter must be viewed as an approximation at best. The study found that the ED(50) values calculated by NLR, LRA, or the Hill equation were interchangeable, but LRA resulted in ED(95) values that were 13-18% lower than those determined by NLR (PUBMED:20430764). Furthermore, when planning dose-response studies of neuromuscular blockers, investigators have rarely used a priori power analysis to determine the minimal sample size their protocols require. A proposed method for determining sample size based on the coefficient of variation around the mean ED(50) suggests that a sample size of not less than 24 subjects should be adequate for determining a drug's clinical potency with a reasonable degree of assurance (PUBMED:21068050). In summary, while traditional methods for determining the potency of neuromuscular blockers have been widely used, they may not be the most accurate or up-to-date approaches. Non-linear regression and careful sample size determination are recommended for more precise and reliable results.
Instruction: Carotid artery pseudo-occlusion: does end-diastolic velocity suggest need for treatment? Abstracts: abstract_id: PUBMED:19223385 Carotid artery pseudo-occlusion: does end-diastolic velocity suggest need for treatment? Objective: We reviewed our institution's experience with carotid artery pseudo-occlusion (CAPO), to investigate whether internal carotid artery (ICA) end-diastolic velocity (EDV) as measured by duplex ultrasonography, was a predictor of need for further intervention. Methods: From February 2003 to January 2008, 7478 patients underwent duplex ultrasonographic evaluation of their carotid arteries. Diagnosis of CAPO included the appearance of a narrow flow jet (string sign) on power doppler images, low velocities in the ICA and additional criteria listed below. Results: Ten patients (0.13%) were identified as having a CAPO. All patients were asymptomatic and had an EDV<78 cm/s. Occlusion or functional occlusion was identified in nine patients on contrast imaging studies. Eight of these patients were treated medically without neurologic complication on follow-up. Two patients were treated with interventions and were asymptomatic at follow up. The mean follow up for the entire group was 12 months. Conclusions: Although this is a low volume study, there is evidence to suggest that asymptomatic patients with low EDV in the setting of carotid artery pseudo-occlusion found of duplex, may be safely managed medically. abstract_id: PUBMED:25678628 Surgical treatment for pseudo-occlusion of the internal carotid artery. Objectives: Carotid artery pseudo-occlusion is a rare condition and its natural history and clinicopathological characteristics are not well defined. We reported our 7-year experience in the surgical treatment of carotid artery pseudo-occlusion to determine the real benefit of the surgical option. Methods: From January 2006 to December 2013, 1414 patients were treated for high-grade stenosis of the internal carotid artery, 33 (2.3%) presented with a carotid pseudo-occlusion (26 males and 7 females, mean age: 70 ± 10). Nineteen patients were symptomatic, and 14 asymptomatic. Carotid artery pseudo-occlusion was identified by duplex scan (segmental occlusion at the origin of internal carotid artery with very thin distal flow) and the diagnostic confirmation was obtained by angio-computed-tomography (CT) scan. The operation was performed under general anaesthesia and constant Electroencephalography (EEG) monitoring. The follow-up was performed by duplex scan at discharge, 30 days, 6 months and yearly. Results: Politetrafluoroetilene (PTFE) patch endarterectomy, eversion endarterectomy and carotid bypass were performed in 20 (61%), 10 (30%) and 3 patients (9%), respectively. No mortality or stroke was observed in postoperative period. Four patients presented with an asymptomatic postoperative thrombosis of the internal carotid artery. No restenosis was observed. Conclusions: Surgical treatment for carotid artery pseudo-occlusion is safe and effective. abstract_id: PUBMED:2968689 Use of diastolic velocity ratios to predict significant carotid artery stenosis. Duplex scanning of the carotid bifurcation has emerged as an accurate noninvasive means of predicting and quantifying carotid arterial stenoses. Compared with the more widely reported measurements of spectral broadening and peak frequency ratios, measurements of diastolic velocity ratios have theoretical advantages in predicting carotid artery stenosis. The use of diastolic velocity ratios between the internal and common carotid arteries was prospectively studied in 30 consecutive patients to determine its accuracy in predicting significant stenosis of the internal carotid artery when compared with angiography. A total of 55 carotid bifurcations were studied, and the use of diastolic velocity ratios correctly predicted high-grade stenosis (greater than or equal to 75% diameter reduction) in 52 cases (95%). We conclude that diastolic velocity ratios may be used to accurately detect significant internal carotid artery stenosis. abstract_id: PUBMED:25496685 Endovascular treatment of internal carotid artery pseudo-aneurysm presenting with epistaxis. A case report. Recurrent epistaxis is a rare presentation of internal carotid artery C4/C5 segment pseudo-aneurysm rupture. We describe a case of a traumatic internal carotid artery pseudo-aneurysm with recurrent epistaxis as a leading symptom that was finally managed with endovascular treatment with stent-assisted coil placement. Clopidogrel and acetylsalicylic acid orally were introduced in the therapy for further stent thrombosis prevention and epistaxis did not recur on six-month follow-up. Endovascular treatment with stent-assisted coil placement seems to be a good method for pseudo-aneurysm treatment while keeping the lumen of the parent artery patent. abstract_id: PUBMED:8430567 Detection and quantification of carotid artery stenosis: efficacy of various Doppler velocity parameters. Objective: The purpose of this study was to evaluate and compare the accuracies of the different Doppler velocity parameters used to grade stenoses of the carotid artery. Materials And Methods: Seven velocity parameters determined with Doppler spectra were compared with angiographic findings in 138 carotid bifurcations: the peak systolic and peak end-diastolic velocities in the internal and common carotid arteries, the ratios of peak systolic and peak end-diastolic velocities between the internal and common carotid arteries, and the ratio of peak systolic velocity in the internal carotid artery to peak end-diastolic velocity in the common carotid artery. A receiver-operating-characteristic (ROC) analysis was performed, with and without correction for the bias introduced by preferentially selecting patients for angiography, if the Doppler results indicated a lesion. Multiple regression analysis was performed to predict the percent stenosis and to predict the probability of a 70% or greater stenosis. Results: Of the Doppler parameters, the peak systolic velocity in the internal carotid artery and its ratio to the peak end-diastolic velocity in the common carotid artery were the most accurate and had equivalent test performance (ROC area = 0.94; after correction for verification bias, ROC area = 0.78). In multiple regression analysis, the peak systolic velocity in the internal carotid artery was the most significant parameter, although its ratio to the peak end-diastolic velocity in the common carotid artery had incremental value in detecting a 70% or greater stenosis, and the peak systolic velocity in the common carotid artery provided additional information for quantifying the stenosis. Conclusion: The peak systolic velocity in the internal carotid artery is the best single velocity parameter for quantifying a stenosis and for detecting a 70% or greater stenosis. abstract_id: PUBMED:14718815 Carotid artery stenting: is there a need to revise ultrasound velocity criteria? Objectives: Ultrasound (US) velocity criteria have not been well-established for patients undergoing carotid artery stenting (CAS). A potential source of error in using US after CAS is that reduced compliance in the stented artery may result in elevated velocity relative to the native artery. We measured arterial compliance in the stented artery, and developed customized velocity criteria for use early after CAS. Methods: US was performed before and within 3 days after CAS, and after 1 month in a subset of 26 patients. Post-procedural peak systolic velocity (PSV) and end-diastolic velocity (EDV) of the internal carotid artery (ICA), PSV/EDV ratio, and internal carotid artery to common carotid artery ratio (ICA/CCA) were recorded. These were compared with degree of in-stent residual stenosis determined at carotid angiography performed at the completion of CAS. Peterson's elastic modulus (Ep) and compliance (Cp) of the ICA were determined in a subgroup of 20 patients at the distal end of the stent and in the same region in the native ICA before stenting. Results: Ninety CAS procedures were analyzed. Mean (+/-SD) angiographic residual stenosis after CAS was 5.4 +/- 9.1%, whereas corresponding PSV by US was 120.4 +/- 32.4 cm/s; EDV, 41.4 +/- 18.6 cm/s; PSV/EDV ratio, 3.3 +/- 1.2; and ICA/CCA ratio, 1.6 +/- 0.5. PSV was unchanged at 1 month. Post-CAS PSV and ICA/CCA ratio correlated most with degree of stenosis (P <.0001 for both). Only six patients demonstrated in-stent residual stenosis 20% or greater, but the standard US threshold of PSV 130 cm/s or greater (validated for >20% ICA stenosis in our laboratory) categorized 38 of 90 patients as having stenosis 20% or greater. Receiver operator curve analysis demonstrated that a combined threshold of PSV 150 cm/s or greater and ICA/CCA ratio 2.16 or greater were optimal for detecting residual stenosis of 20% or greater, with sensitivity 100%, specificity 98%, positive predictive value 75%, and negative predictive value 100%. After placement of a stent, the ICA demonstrated significantly increased Ep (1.2 vs 4.4 x 10(3) mm Hg; P =.004) and decreased Cp (9.8 vs 3.2 %mm Hg x 10(-2); P =.0004). Conclusions: Currently accepted US velocity criteria validated in our laboratory for nonstented ICAs falsely classified several stented ICAs with normal diameter on carotid angiograms as having residual in-stent stenosis 20% or greater. We propose a new criterion that defines PSV less than 150 cm/s, with ICA/CCA ratio less than 2.16, as the best correlate to a normal lumen (0%-19% stenosis) in the recently stented ICA. This was associated with increased stiffness of the stented ICA (increased Ep, decreased Cp). These preliminary results suggest that placement of a stent in the carotid artery alters its biomechanical properties, which may cause an increase in US velocity measurements in the absence of a technical error or residual stenotic disease. abstract_id: PUBMED:21206185 Riles type 1A common carotid artery occlusion diagnosed by specific external carotid artery Doppler waveform pattern in carotid ultrasonography. Case report. A 67-year-old man was admitted for evaluation of left homonymous hemianopsia. Carotid ultrasonography showed that the right common carotid artery (CCA) was occluded up to just proximal to the carotid bifurcation, and the patent external carotid artery showed retrograde flow to the patent internal carotid artery via the carotid bifurcation. The Doppler waveform pattern of the external carotid artery showed high end-diastolic flow velocity and low pulsatility index. The diagnosis was Riles type 1A CCA occlusion. Digital subtraction angiography and iodine-123 N-isopropyl-p-iodoamphetamine single photon emission computed tomography were performed to confirm the collateral circulation and adequate intracranial hemodynamic sufficiency. Nonsurgical treatment with antiplatelet therapy was performed for the CCA occlusion. No stroke events have occurred within the 2-year follow-up period. abstract_id: PUBMED:17257803 Carotid artery velocity characteristics after carotid artery angioplasty and stenting. Objective: Correlation of carotid duplex ultrasound (DUS) flow velocities with carotid artery stenosis before and after carotid endarterectomy is well established. With the evolution of catheter-based techniques, carotid stenosis increasingly is being treated with angioplasty and stenting (CAS). CAS changes the physical properties of the arterial wall, which may alter blood flow velocities compared with the nonstented carotid. Opinions differ about whether DUS is a reliable tool to assess technical outcome and recurrent stenosis after CAS. This study correlated carotid DUS flow velocity findings with carotid arteriography after CAS. Methods: Data from 77 pairs of carotid arteriograms with corresponding DUS after CAS in 68 patients were reviewed. Preintervention and postintervention DUS and carotid arteriogram data were evaluated for each patient. Peak systolic velocities (PSV), end-diastolic velocities (EDV), and internal carotid artery/common carotid artery ratios (ICA/CCA) were correlated with the post-CAS arteriogram. Results: The mean preintervention PSV was 390 +/- 110 cm/s (range, 216 to 691 cm/s), and the average EDV was 134 +/- 51 cm/s (range, 35 to 314 cm/s). Postintervention DUS was obtained a mean of 5 days after CAS (range, 1 to 30 days). Sixty (81%) post-CAS arteriograms were normal, and each corresponded to a normal postintervention DUS (PSV range, 30 to 118 cm/s; EDV range, 18 to 60 cm/s). In 14 arteries (19%), completion arteriograms revealed residual stenoses of 20% to 40% in 13, and 50% in one. The mean PSV was 175 cm/s (range, 137 to 195 cm/s), and the mean EDV was 44 cm/s (range, 20 to 62 cm/s). All velocities exceeded the threshold of a 50% stenosis by DUS criteria for a nonstented carotid artery. In three arteries (2 patients), high-grade recurrent stenoses detected by DUS developed that required reintervention during follow-up. This high-grade restenosis was confirmed by arteriography in each patient, providing an additional three correlations. Conclusions: Normal DUS imaging reliably identifies arteriographically normal carotid arteries after CAS. Carotid velocities are disproportionately elevated with mild and moderate degrees of stenoses, and velocity criteria for quantitating stenoses in these patients require modification. However, DUS appropriately identifies severe recurrent stenoses after CAS. abstract_id: PUBMED:37249031 Effect of anterior communicating artery patency on the flow velocity in bilateral carotid artery stenosis after carotid endarterectomy. Background: The effect of anterior communicating artery (ACoA) patency on the flow velocity of the extracranial carotid arteries is unclear. Methods: A total of 285 patients with carotid artery stenosis were included between January 2019 and January 2021. All patients received unilateral carotid endarterectomy (CEA). The patients were classified into ACoA-patent (161) and ACoA-nonpatent (124) groups using digital subtraction angiography (DSA) and/or computed tomography angiography (CTA). The peak systolic velocity (PSV) and end-diastolic velocity (EDV) measured by carotid duplex ultrasonography (CDU) were compared between both groups, pre- and post-CEA. Results: There was no significant difference in the risk factors for cerebrovascular disease between the two groups. Within 1 week after CEA, the PSV and EDV on operative and nonoperative carotid (contralateral carotid in the same patient) arteries decreased significantly (both p < 0.01). Comparison of nonoperative carotid artery pre- and post-CEA between the two groups showed that post-CEA PSV and EDV in the ACoA-patent group were significantly lower than that of pre-CEA (PSV and EDV, t = 11.507 and 6.716, respectively, both p < 0.001) (according to the Society of Radiologists in Ultrasound Consensus Conference [SRUCC] PSV standard). There was no significant difference in the ACoA-nonpatent group (PSV: t = 1.924, p = 0.057; EDV: t = 1.237, p = 0.218). In the nonoperative carotid artery of the ACoA-patent group, the degree of stenosis assessed by CDU was inconsistent with that of DSA/CTA (κ = 0.294), whereas that in the ACoA-nonpatent group had a high consistency (κ = 0.982). Among 161 ACoA-patent cases, 68 showed overestimated stenosis. Conclusions: The patent ACoA increases PSV and EDV, causing an overestimation of carotid artery stenosis. abstract_id: PUBMED:19015163 The effect of age and other atherosclerotic risk factors on carotid artery blood velocity in individuals ranging from young adults to centenarians. To evaluate the effect of age and other risk factors for atherosclerosis on arterial blood velocity, carotid arteries in 179 healthy individuals ranging from 21 to 102 years old were examined using color Doppler ultrasonography. Velocity in common and internal carotid arteries decreased consecutively from young adults to very elderly people except for peak internal carotid artery velocity. Peak common carotid artery velocity in the elderly (> or = 65 years old) people was inversely associated with age and diastolic blood pressure and directly associated with pulse pressure. Minimum velocity of common carotid artery was inversely correlated with age and diastolic blood pressure in the elderly people. In elderly group, peak internal carotid artery velocity correlated only with serum high-density lipoprotein cholesterol. Minimum internal carotid artery velocity correlated inversely with systolic blood pressure in adults and diastolic blood pressure in elderly people. Blood velocity in the very elderly population approaches the critical level for thrombogenesis. Answer: The relationship between internal carotid artery (ICA) end-diastolic velocity (EDV) and the need for treatment in cases of carotid artery pseudo-occlusion (CAPO) has been investigated in various studies. According to one study, asymptomatic patients with low EDV in the setting of CAPO found on duplex ultrasonography may be safely managed medically without neurologic complication on follow-up (PUBMED:19223385). This suggests that a low EDV does not necessarily indicate the need for immediate intervention in asymptomatic patients. Another study reported on the surgical treatment of carotid artery pseudo-occlusion, indicating that surgical treatment is safe and effective (PUBMED:25678628). However, this study does not specifically address the role of EDV in determining the need for surgery. The use of diastolic velocity ratios has been shown to accurately detect significant internal carotid artery stenosis (PUBMED:2968689), but this study does not directly address pseudo-occlusion. In the context of carotid artery stenting, ultrasound velocity criteria have been questioned, as the reduced compliance in the stented artery may result in elevated velocity relative to the native artery (PUBMED:14718815). This study suggests that the presence of a stent can alter the biomechanical properties of the carotid artery, which may affect EDV measurements. Overall, while low EDV in CAPO may not necessitate treatment in asymptomatic patients (PUBMED:19223385), the decision to treat should be individualized based on a comprehensive assessment of the patient's condition, including symptoms, risk factors, and other diagnostic findings. It is important to consider that the studies referenced do not provide a definitive answer to whether EDV alone should dictate the need for treatment in CAPO, and clinical judgment remains crucial.
Instruction: Does obesity prolong anesthesia in children undergoing common ENT surgery? Abstracts: abstract_id: PUBMED:24824287 Does obesity prolong anesthesia in children undergoing common ENT surgery? Objectives: To report the epidemiology of obesity in a pediatric surgical population and determine whether obesity is a risk factor for longer anesthesia duration. Background: Childhood obesity is a significant public health problem in the United States. Epidemiologic studies on pediatric surgical populations have been limited to states with very high prevalence of adult obesity (Michigan, Texas). Data from other states and more recent data since 2006 are unavailable. Methods: We examined anesthesia records for surgical patients age 2-18 years at Columbia University Medical Center from January 2009 to December 2010. Patients undergoing bariatric surgery or those with records missing preoperative height or weight data were excluded. Body mass index (BMI) was calculated as weight (kg)/height (m(2) ). BMI ≥95th percentile according to national growth charts were considered obese. Results: We reviewed 9522 patients of which 1639 were obese (17.2%). The sex-age category interaction on obesity was not significant using logistic regression (P = 0.11). Among surgical groups, the otolaryngology (ENT) cohort had the highest obesity rate (21.7%, 360/1656). Obese children who had tonsillectomy, adenoidectomy, or both did not have a prolonged anesthetic (P = 0.33) or surgical duration (P = 0.61) compared with nonobese children, adjusting for surgeon, season, surgical procedure code, and ASA status. Conclusion: Children presenting for surgery, particularly the ENT cohort, have a high prevalence of obesity. Obese and nonobese children who had tonsillectomy, adenoidectomy, or both had comparable durations of anesthesia. Therefore, obesity did not lead to longer anesthetic duration. abstract_id: PUBMED:34089150 Perioperative respiratory adverse events during ambulatory anesthesia in obese children. Obesity is one of the most common clinical conditions in the pediatric population with an increasing prevalence ranging from 20 to 30% worldwide. It is well known that during ambulatory anesthesia, obese children are more prone to develop perioperative respiratory adverse events (PRAEs) associated with obesity. To avoid or at least minimize these adverse effects, a thorough preoperative assessment should be undertaken as well as consideration of specific anesthetic approaches such as preoxygenation before induction of anesthesia and optimizing drug dosing. The use of short-acting opioid and nonopioid analgesics and the frequent implementation of regional anesthesia should also be included. Noninvasive airway management, protective mechanical ventilation, and complete reversion of neuromuscular blockade and awake extubation also proved to be beneficial in preventing PRAEs. During the postoperative period, continuous monitoring of oxygenation and ventilation is mandatory in obese children. In the current review, we sought to provide recommendations that might help to reduce the severity of perioperative respiratory adverse events in obese children, which could be of particular importance for reducing the rate of unplanned hospitalizations and ultimately improving the overall postoperative recovery. abstract_id: PUBMED:28243748 Pilot study comparing post-anesthesia care unit length of stay in moderately and severely obese children. Objective: Obesity is a risk factor for surgical complications in adults and children. Differences in postsurgical outcomes according to severity of obesity [moderate: 95-98th age-gender-specific body mass index (BMI) percentile versus severe: ≥99th percentile] in children remain unclear. This study compared post-anesthesia care unit (PACU) stay and hospital admission between severely obese children and moderately obese children undergoing surgery. Methods: In a retrospective review over a 6-month period, obese children, 2-18 years of age undergoing surgery were identified. Multivariate mixed-effects regression was used to compare PACU length of stay (LOS) need for opioid analgesia, and hospital admission between moderately and severely obese patients. Results: There were 1324 records selected for inclusion. PACU LOS did not significantly differ between moderately obese (50 ± 36 min) and severely obese patients (55 ± 38 min). There were no differences between moderately and severely obese patients in use of opioids in the PACU. Yet, severely obese patients were more likely to require inpatient admission than moderately obese patients. Conclusions: The duration of PACU stay still averaged less than 1 h in our cohort, suggesting that the majority of these patients can be cared for safely in the outpatient setting. Future studies should focus on identifying the co-morbid conditions that may prolong postoperative PACU stay or result in unplanned hospital admission in moderately and severely obese patients. Our preliminary data suggest that these factors may include a younger age and the complexity or duration of the surgical procedure. abstract_id: PUBMED:18421267 Does high BMI influence hospital charges in children undergoing adenotonsillectomy? Background: Obesity is a highly prevalent chronic problem with health and fiscal consequences. Data from adults and nonsurgical pediatric patients suggest that obesity has serious implications for the US economy. Objective: Our goal was to describe the impact of BMI on hospital charges in children undergoing adenotonsillectomy (AT). Methods And Procedures: We carried out a retrospective comparative analysis of the electronic anesthesia record and the charges from billing data from a large tertiary institution on children aged 3-18 years who had AT during the year 2005-2007. The main outcome measures were mean total hospital charges, likelihood of admission, and length of hospital stay (LOS). Results: Of 1,643 children, 68.9% were aged <10 years, 76% were whites, and 74.1% had private commercial insurance. Most (75.3%) children were discharged on the day of surgery. Obese and overweight children were more likely to be admitted than their normal-weight peers (X(2)=26.3, P<0.001). Among those admitted, BMI showed a positive correlation with LOS (r=0.20, P<0.001). Obese and overweight patients had significantly higher total hospital charges than their healthy-weight counterparts (P=0.001). Anesthesia, postanesthesia care unit (PACU), and pharmacy and laboratory charges were also higher for obese than normal-weight children (P<0.05). Discussion: Overweight and obese children undergoing AT accrued higher hospital charges and had longer postoperative LOS than their healthy-weight peers. If these findings are extendable to other surgical procedures, they could have far-reaching implications for the US economy. abstract_id: PUBMED:32640705 Hormonal Blood Pressure Regulationduring General Anesthesia Usinga Standardized Propofol Dosagein Children and Adolescents SeemsNot to Be Affected by Body Weight. Obesity in pediatric surgical patients is a challenge for the anesthesiologist. Despite potentially beneficial properties, propofol might also induce hypotension. This study examined whether a dose adjustment in overweight children could avoid hypotension and if there would be differences regarding hormonal regulation in children under anesthesia. Fifty-nine children undergoing surgery under general anesthesia were enrolled in this prospective observational trial. Participants were allocated into two groups according to their BMI. The induction of anesthesia was conducted using propofol ("overweight": 2 mg/kgBW, "regular": 3.2 mg/kgBW). The maintenance of anesthesia was conducted as total intravenous anesthesia. Hormone levels of renin, angiotensin II, aldosterone, copeptin, norepinephrine and epinephrine were assessed at different timepoints. Blood pressure dropped after the administration of propofol in both groups, with a nadir 2 min after administration-but without a significant difference in the strength of reduction between the two groups. As a reaction, an increase in the plasma levels of renin, angiotensin and aldosterone was observed, while levels of epinephrine, norepinephrine and copeptin dropped. By adjusting the propofol dosage in overweight children, the rate of preincision hypotension could be reduced to the level of normal-weight patients with a non-modified propofol dose. The hormonal counter regulation was comparable in both groups. The release of catecholamines and copeptin as an indicator of arginine vasopressin seemed to be inhibited by propofol. abstract_id: PUBMED:38487133 Comparing Perioperative Outcomes of Total Intravenous Anesthesia (TIVA) With Volatile Anesthesia in Patients With Obesity: A Systematic Review. In this systematic review, the perioperative outcomes of total intravenous anesthesia (TIVA) and volatile anesthesia were compared in obese adults (BMI ≥ 30 kg/m²) undergoing elective surgery. The review analyzed data from 12 randomized-controlled trials involving 935 patients, sourced from PubMed/MEDLINE (Medical Literature Analysis and Retrieval System Online), Cochrane, Scopus, and Web of Science databases. The focus was on intraoperative vital signs, emergence time, postoperative nausea and vomiting (PONV), duration of post-anesthesia care unit (PACU) stay, and ICU admission rates. Findings showed that TIVA (using propofol) might reduce PONV, but there were no significant differences in other outcomes compared to volatile anesthesia (with desflurane as the most common agent). The review highlights the need for more research, especially comparing sevoflurane with TIVA, to establish clear clinical guidelines for anesthesia in obese patients. abstract_id: PUBMED:32156303 Salivary cortisol levels and anxiety in melanoma patients undergoing sentinel lymph node excision under local anesthesia versus general anesthesia: a prospective study. Background: Sentinel lymph node excision (SLNE) can be performed in tumescent local anesthesia (TLA) or general anesthesia (GA). Perioperative cortisol level changes and anxiety are common in surgical interventions and might be influenced by the type of anesthesia. In this study, we intended to determine whether the type of anesthesia impacts the patients' perioperative levels of salivary cortisol (primary outcome) and the feeling of anxiety evaluated by psychological questionnaires (secondary outcome). Methods: All melanoma patients of age undergoing SLNE at the University Hospital Essen, Germany, could be included in the study. Exclusion criteria were patients' intake of glucocorticoids or psychotropic medication during the former 6 months, pregnancy, age under 18 years, and BMI ≥ 30 as salivary cortisol levels were reported to be significantly impacted by obesity and might confound results. Results: In total, 111 melanoma patients undergoing SLNE were included in our prospective study between May 2011 and April 2017 and could choose between TLA or GA. Salivary cortisol levels were measured three times intraoperatively, twice on the third and second preoperative day and twice on the second postoperative day. To assess anxiety, patients completed questionnaires (Hospital Anxiety and Depression Scale (HADS), State-Trait Anxiety Inventory (STAI)) perioperatively. Patients of both groups exhibited comparable baseline levels of cortisol and perioperative anxiety levels. Independent of the type of anesthesia, all patients showed significantly increasing salivary cortisol level from baseline to 30 min before surgery (T3) (TLA: t = 5.07, p < 0.001; GA: t = 3.09, p = 0.006). Post hoc independent t tests showed that the TLA group exhibited significantly higher cortisol concentrations at the beginning of surgery (T4; t = 3.29, p = 0.002) as well as 20 min after incision (T5; t = 277, p = 0.008) compared to the GA group. Conclusions: The type of anesthesia chosen for SLNE surgery significantly affects intraoperative cortisol levels in melanoma patients. Further studies are mandatory to evaluate the relevance of endogenous perioperative cortisol levels on the postoperative clinical course. Trial Registration: German Clinical Trials Register DRKS00003076, registered 1 May 2011. abstract_id: PUBMED:27726001 Perioperative adverse respiratory events in overweight and obese children Background: Overweight and obesity are increasing problems in pediatric anesthesia. This observational study was designed to examine how airway-related complications occur in overweight children and adolescents during general anesthesia and if this is a relevant problem in Germany. Methods: From October 2008 until August 2009, at the university clinic in Leipzig, 504 in- and outpatients, aged 2-18 years, ASA I-III, undergoing elective procedures (ENT and pediatric surgery), were observed. With the aid of special data sheets, the following parameters were determined: Mallampati Score, difficult mask ventilation and intubation, use of a Guedel/Wendl tube, Cormack-Lehane Score, number of intubation attempts, airway obstructions (broncho- and laryngospasms), coughing as a sign of airway irritation, and decreases in oxygen saturation >10 %. Results: Overweight and obese children had a significantly higher Mallampati Score and a significantly higher prevalence of coughing (p < 0.05). None of the other parameters showed any significant differences between the groups. However, the incidence of desaturation was 9.5 % in overweight children and 6.3 % in children of normal weight, and that of airway obstructions was 4.1 vs 2.7 %. Conclusion: This study demonstrated a very low incidence of respiratory problems, which may be caused by the low proportion of morbidly obese children and the older age of overweight children in comparison with other studies. abstract_id: PUBMED:23748552 Incidences and predictors of difficult laryngoscopy in adult patients undergoing general anesthesia : a single-center analysis of 102,305 cases. Background: Hypoxemia caused by difficulties in airway management presents a major cause for perioperative morbidity and mortality. The ability to predict difficult laryngoscopy more accurately would enable anesthesiologists to take specific precautions to reduce airway risks and prevent patient-threatening events. Methods: Over a 6-year period of time, all anesthesia records with a documented direct laryngoscopic view were retrieved from the electronic data management system and statistically processed. The Cormack-Lehane four-point scale of grading laryngoscopy was used to assess visibility of the vocal cords. Results: Of 102,306 cases, the overall rate of difficult laryngoscopy was 4.9 %. Male gender (6.5 %), Mallampati score III and IV (17.3 %), obesity with a BMI ≥35 kg/m(2) (6.1 %), as well as physical status ASA III or IV (6.2 %), were identified as risk factors for difficult laryngoscopy. Patients undergoing surgery in the departments of oromaxillofacial (8.9 %), ear nose throat surgery (ENT) (7.4 %), and cardiac surgery (7.0 %) showed the highest rates of difficult laryngoscopy. Conclusions: The results indicate that the risk for difficult airway situations might substantially differ between surgical patient groups. In hospitals with departmental structures and spatially separated operating rooms, the deduction might be increased awareness and particular structural preparation for difficult airway situations in the respective subspecialties. abstract_id: PUBMED:23602008 Evaluation of propofol anesthesia in morbidly obese children and adolescents. Background: Poor characterization of propofol pharmacokinetics and pharmacodynamics in the morbidly obese (MO) pediatric population poses dosing challenges. This study was conducted to evaluate propofol total intravenous anesthesia (TIVA) in this population. Methods: After IRB approval, a prospective study was conducted in 20 MO children and adolescents undergoing laparoscopic surgery under clinically titrated propofol TIVA. Propofol doses/infusion rates, hemodynamic variables, times to induction and emergence, and postoperative occurrence of respiratory adverse events (RAE) were recorded, along with intraoperative blinded Bispectral Index/BIS and postoperative Ramsay sedation scores (RSS). Study subjects completed awareness questionnaires on postoperative days 1 and 3. Propofol concentrations were obtained at predetermined intra- and post-operative time points. Results: Study subjects ranged 9 - 18 years (age) and 97 - 99.9% (BMI for age percentiles). Average percentage variability of hemodynamic parameters from baseline was ≈ 20%. Patients had consistently below target BIS values (BIS < 40 for >90% of maintenance phase), delayed emergence (25.8 ± 22 minutes), increased somnolence (RSS ≥ 4) in the first 30 minutes of recovery from anesthesia and 30% incidence of postoperative RAE, the odds for which increased by 14% per unit increase in BMI (p ≤ 0.05). Mean propofol concentration was 6.2 mg/L during maintenance and 1.8 mg/L during emergence from anesthesia. Conclusions: Our findings indicate clinical overestimation of propofol requirements and highlight the challenges of clinically titrated propofol TIVA in MO adolescents. In this setting, it may be advantageous to titrate propofol to targeted BIS levels until more accurate weight-appropriate dosing regimens are developed, to minimize relative overdosing and its consequences. Answer: No, obesity does not appear to prolong anesthesia in children undergoing common ENT surgery. A study examining anesthesia records for surgical patients aged 2-18 years at Columbia University Medical Center from January 2009 to December 2010 found that obese children who had tonsillectomy, adenoidectomy, or both did not have a prolonged anesthetic or surgical duration compared with nonobese children, after adjusting for various factors (PUBMED:24824287). This suggests that obesity did not lead to longer anesthetic duration in this pediatric surgical population.
Instruction: Unexplained mental retardation: is brain MRI useful? Abstracts: abstract_id: PUBMED:24949052 Study on MRI changes in phenylketonuria in patients referred to mofid hospital/iran. Objective: Phenylketonuria is one of the most common metabolic disorders and the first known cause of mental retardation in pediatrics. As Screening for phenylketonuria (PKU) is not a routine neurometabolic screening test for neonates in Iran, many PKU cases may be diagnosed after developing the clinical symptoms. One of the findings of PKU is myelination disorders, which is seen as hypersignal regions in T2-weighted (T2W) and FLAIR sequences of brain MRI. The aim of our study was to assess MRI changes in PKU patients referred to Mofid Children's Hospital, 2010-2011. Materials & Methods: We studied all PKU cases referred to our clinic as a referral neurometabolic center in Iran for brain MRI and assessed the phenylalanine level at the time of Imaging. The mean phenylalanine level (in one year), clinical manifestations, and MRI pattern based on Thompson scoring, were evaluated. Results: The mean age of our study group was 155±99 months and the mean diagnosis age was 37±27.85 months. There were 15 patients with positive and 15 with negative family history. The mean phenylalanine level at the time of imaging was 9.75±6.28 and the mean 1 year phenylalanine level was 10.28±4.82. Seventy percent of our patients had MRI involvement, in whom 20% showed atrophic changes, in addition to white matter involvement. Based on modified Thompson scoring, the score for our study group was 4.84. The maximum involvement in MRI was in occipital region, followed by parietal, frontal, and temporal zones. There was not any correlation between MRI score and patients' age. But we found significant relationship between MRI score and the age of regimen cessation. No correlation was seen between phenylalanine level (at the time of Imaging) and MRI score. But there was a relationship between mean 1 year phenylalanine level and MRI score. Conclusion: According to the results of this study, brain MRI and white matter involvement can be used for evaluation of long-term control of phenylalanine level in PKU cases. abstract_id: PUBMED:15739114 Unexplained mental retardation: is brain MRI useful? Background: Mental retardation (MR), defined as an IQ below 70, is a frequent cause of consultation in paediatrics. Objective: To evaluate the yield of brain MRI in the diagnostic work-up of unexplained MR in children. Patients And Methods: The MRI features and clinical data of 100 patients (age 1-18 years) affected with non-progressive MR of unknown origin were compared to an age-matched control group (n=100). Two radiologists conducted an independent review of the MRI scans. Results: Univariate and multivariate analyses showed a higher incidence of brain anomalies in the MR group than in the control group (53 vs 17, OR=5.7 [2.9-11.1]), for signal abnormalities within the periventricular white matter (OR=20.3 [2.6-155.3]), lateral ventricular dilatation (OR=15.6 [2.0-124]), mild corpus callosum abnormalities (shortness, atrophy) (OR=6.8 [1.8-25.6]) and subtle cerebellar abnormalities, including fissure enlargement (OR=5.2 [1.1-26.2]). The diagnostic value of MRI abnormalities was considered good in 5% of patients (Alexander disease n=1, diffuse cortical malformation n=1, leukomalacia n=1, vermian agenesis n=1, commissural agenesis n=1), and weak in 48% of patients, in whom non-specific abnormalities did not lead to a diagnosis. Some clinical features resulted in a significantly higher percentage of abnormal MRI scans: abnormal neurological examination (82% vs 47%, P=0.008), abnormal skull circumference (66% vs 49%, P=0.04). Motor delay was associated with cerebellar abnormalities (P=0.01). Conclusions: This study confirms the weak diagnostic yield of MRI in mentally retarded children. The use of a control group has enabled us to identify the neuroimaging markers frequently associated with MR. Subgrouping patients according to neuroimaging markers and clinical signs should help identify those who would benefit from molecular studies. abstract_id: PUBMED:27709120 MRI features in 17 patients with l2 hydroxyglutaric aciduria. l-2-Hydroxyglutaric (l-2-HG) aciduria is a rare inherited metabolic disease usually observed in children. Patients present a very slowly progressive deterioration with cerebellar ataxia, mild or severe mental retardation, and various other clinical signs including extrapyramidal and pyramidal symptoms, and seizures Goffette et al. [1]. This leukencephalopathy was first described in 1980 Duran et al. [2]. Brain magnetic resonance imaging (MRI) demonstrates nonspecific subcortical white matter (WM) loss, cerebellar atrophy and changes in dentate nuclei and putamen Steenweg et al. [3]. The diagnosis is highlighted by increased levels of l-2-HG in body fluids such as urine and cerebrospinal fluid. The purpose of this study is to retrospectively describe the brain MRI features in l-2-HG aciduria. abstract_id: PUBMED:37496380 Correlating MRI-based brain volumetry and cognitive assessment in people with Down syndrome. Introduction: Down syndrome (DS) is the most common genetic cause of intellectual disability. Children and adults with DS show deficits in language performance and explicit memory. Here, we used magnetic resonance imaging (MRI) on children and adults with DS to characterize changes in the volume of specific brain structures involved in memory and language and their relationship to features of cognitive-behavioral phenotypes. Methods: Thirteen children and adults with the DS phenotype and 12 age- and gender-matched healthy controls (age range 4-25) underwent an assessment by MRI and a psychological evaluation for language and cognitive abilities. Results: The cognitive profile of people with DS showed deficits in different cognition and language domains correlating with reduced volumes of specific regional and subregional brain structures, confirming previous related studies. Interestingly, in our study, people with DS also showed more significant parahippocampal gyrus volumes, in agreement with the results found in earlier reports. Conclusions: The memory functions and language skills affected in studied individuals with DS correlate significantly with the reduced volume of specific brain regions, allowing us to understand DS's cognitive-behavioral phenotype. Our results provide an essential basis for early intervention and the design of rehabilitation management protocols. abstract_id: PUBMED:27158271 Methods for acquiring MRI data in children with autism spectrum disorder and intellectual impairment without the use of sedation. Background: Magnetic resonance imaging (MRI) has been widely used in studies evaluating the neuropathology of autism spectrum disorder (ASD). Studies are often limited, however, to higher functioning individuals with ASD. MRI studies of individuals with ASD and comorbid intellectual disability (ID) are lacking, due in part to the challenges of acquiring images without the use of sedation. Methods: Utilizing principles of applied behavior analysis (ABA), we developed a protocol for acquiring structural MRI scans in school-aged children with ASD and intellectual impairment. Board certified behavior analysts worked closely with each child and their parent(s), utilizing behavior change techniques such as pairing, shaping, desensitization, and positive reinforcement, through a series of mock scanner visits to prepare the child for the MRI scan. An objective, quantitative assessment of motion artifact in T1- and diffusion-weighted scans was implemented to ensure that high-quality images were acquired. Results: The sample consisted of 17 children with ASD who are participants in the UC Davis Autism Phenome Project, a longitudinal MRI study aimed at evaluating brain developmental trajectories from early to middle childhood. At the time of their initial scan (2-3.5 years), all 17 children had a diagnosis of ASD and development quotient (DQ) <70. At the time of the current scan (9-13 years), 13 participants continued to have IQs in the range of ID (mean IQ = 54.1, sd = 12.1), and four participants had IQs in the normal range (mean = 102.2, sd = 7.5). The success rate in acquiring T1-weighted images that met quality assurance for acceptable motion artifact was 100 %. The success rate for acquiring high-quality diffusion-weighted images was 94 %. Conclusions: By using principles of ABA in a research MRI setting, it is feasible to acquire high-quality images in school-aged children with ASD and intellectual impairment without the use of sedation. This is especially critical to ensure that ongoing longitudinal studies of brain development can extend from infancy and early childhood into middle childhood in children with ASD at all levels of functioning, including those with comorbid ID. abstract_id: PUBMED:33977271 High resolution structural and functional MRI of the hippocampus in young adults with Down syndrome. Down syndrome is the phenotypic consequence of trisomy 21, with clinical presentation including both neurodevelopmental and neurodegenerative components. Although the intellectual disability typically displayed by individuals with Down syndrome is generally global, it also involves disproportionate deficits in hippocampally-mediated cognitive processes. Hippocampal dysfunction may also relate to Alzheimer's disease-type pathology, which can appear in as early as the first decade of life and becomes universal by age 40. Using 7-tesla MRI of the brain, we present an assessment of the structure and function of the hippocampus in 34 individuals with Down syndrome (mean age 24.5 years ± 6.5) and 27 age- and sex-matched typically developing healthy controls. In addition to increased whole-brain mean cortical thickness and lateral ventricle volumes (P < 1.0 × 10-4), individuals with Down syndrome showed selective volume reductions in bilateral hippocampal subfields cornu Ammonis field 1, dentate gyrus, and tail (P < 0.005). In the group with Down syndrome, bilateral hippocampi showed widespread reductions in the strength of functional connectivity, predominately to frontal regions (P < 0.02). Age was not related to hippocampal volumes or functional connectivity measures in either group, but both groups showed similar relationships of age to whole-brain volume measures (P < 0.05). Finally, we performed an exploratory analysis of a subgroup of individuals with Down syndrome with both imaging and neuropsychological assessments. This analysis indicated that measures of spatial memory were related to mean cortical thickness, total grey matter volume and right hemisphere hippocampal subfield volumes (P < 0.02). This work provides a first demonstration of the usefulness of high-field MRI to detect subtle differences in structure and function of the hippocampus in individuals with Down syndrome, and suggests the potential for development of MRI-derived measures as surrogate markers of drug efficacy in pharmacological studies designed to investigate enhancement of cognitive function. abstract_id: PUBMED:35945102 A nationwide survey of monocarboxylate transporter 8 deficiency in Japan: Its incidence, clinical course, MRI and laboratory findings. Background: Monocarboxylate transporter 8 (MCT8) deficiency is an X-linked recessive developmental disorder characterized by initially marked truncal hypotonia, later athetotic posturing, and severe intellectual disability caused by mutations in SLC16A2, which is responsible for the transport of triiodothyronine (T3) into neurons. We conducted a nationwide survey of patients with MCT8 deficiency to clarify their current status. Methods: Primary survey: In 2016-2017, we assessed the number of patients diagnosed with MCT8 deficiency from 1027 hospitals. Secondary survey: in 2017-2018, we sent case surveys to 31 hospitals (45 cases of genetic diagnosis), who responded in the primary survey. We asked for: 1) perinatal history, 2) developmental history, 3) head MRI findings, 4) neurophysiological findings, 5) thyroid function tests, and 5) genetic test findings. Results: We estimated the prevalence of MCT8 deficiency to be 1 in 1,890,000 and the incidence of MCT8 deficiency per million births to be 2.12 (95 % CI: 0.99-3.25). All patients showed severe psychomotor retardation, and none were able to walk or speak. The significantly higher value of the free T3/free T4 (fT3/fT4) ratio found in our study can be a simple and useful diagnostic biomarker (Our value 11.60 ± 4.14 vs control 3.03 ± 0.38). Initial white matter signal abnormalities on head MRI showed recovery, but somatosensory evoked potentials (SEP) showed no improvement, suggesting that the patient remained dysfunctional. Conclusion: For early diagnosis, including in mild cases, it might be important to consider the clinical course, early head MRI, SEP, and fT3/fT4 ratio. abstract_id: PUBMED:23139673 Alterations of the cerebral white matter in a middle-aged patient with turner syndrome: an MRI study. A 52-year-old woman with intellectual disability was admitted to the hospital due to pneumonia. MRI of her brain showed diffuse hyperintensities on T(2)-weighted and fluid attenuated inversion recovery images in the bilateral cerebral white matter. Laboratory examination revealed sustained high levels of serum KL-6. Karyotyping revealed partial monosomy of the X chromosome. This is the first case showing diffuse white matter lesions in the brain, and sustained high levels of serum KL-6 in Turner syndrome. abstract_id: PUBMED:25565930 Dihydropyrimidine dehydrogenase deficiency in two malaysian siblings with abnormal MRI findings. Dihydropyrimidine dehydrogenase (DPD) deficiency is an autosomal recessive disorder of the pyrimidine metabolism. Deficiency of this enzyme leads to an accumulation of thymine and uracil and a deficiency of metabolites distal to the catabolic enzyme. The disorder presents with a wide clinical spectrum, ranging from asymptomatic to severe neurological manifestations, including intellectual disability, seizures, microcephaly, autistic behavior, and eye abnormalities. Here, we report on an 11-year-old Malaysian girl and her 6-year-old brother with DPD deficiency who presented with intellectual disability, microcephaly, and hypotonia. Brain MRI scans showed generalized cerebral and cerebellar atrophy and callosal body dysgenesis in the boy. Urine analysis showed strongly elevated levels of uracil in the girl and boy (571 and 578 mmol/mol creatinine, respectively) and thymine (425 and 427 mmol/mol creatinine, respectively). Sequence analysis of the DPYD gene showed that both siblings were homozygous for the mutation c.1651G>A (pAla551Thr). abstract_id: PUBMED:33295251 Identifying Clinical Clues in Children With Global Developmental Delay / Intellectual Disability With Abnormal Brain Magnetic Resonance Imaging (MRI). Global developmental delay / intellectual disability are common pediatric conditions. Brain magnetic resonance imaging (MRI), although an important diagnostic tool in the evaluation of these patients, often requires general anesthesia. Recent literature suggests that unnecessary general anesthesia exposure should be avoided in early years because of possible long-term negative neurodevelopmental sequelae. This study sought to identify clinical clues associated with brain MRI abnormalities in children with global developmental delay / intellectual disability in an attempt to provide guidance to physicians on selecting patients who would benefit from an MRI. Retrospective chart review analysis was conducted for patients presenting to a pediatric neurology tertiary care center between 2014 and 2017 for a first clinic evaluation for global developmental delay / intellectual disability. Detailed clinical history and physical examination findings were analyzed and correlated with brain MRI findings. The majority (218/327, 67%) of children referred for evaluation of global developmental delay / intellectual disability underwent complete clinical and radiologic evaluations. Mean age was 37.9 months (±32.5 standard deviation) and 116 were males (53%). Motor deficits were predominant in most subjects (122/218, 56%). Abnormal MRI findings were observed in 153 children (70%), with the most prevalent abnormalities noted within the white matter (104/153, 68%), corpus callosum (77/153, 50%), and the hippocampus (50/153, 33%). Abnormal MRI findings were prevalent in children with predominant motor delay (84, 69%) and cognitive disability (3, 100%) as well as those with visual and hearing impairment (P < .05). The presence of facial dysmorphisms (57/71, P = .02); cranial nerve abnormalities (79/100; P = .007) and abnormal reflexes (16, P = .01) on examination also correlated significantly with increased MRI abnormalities. Answer: Brain MRI can be useful in the diagnostic work-up of unexplained mental retardation (MR) in children, as it can reveal a higher incidence of brain anomalies compared to control groups without MR. The study by PUBMED:15739114 found that brain MRI in children with non-progressive MR of unknown origin showed a higher incidence of signal abnormalities within the periventricular white matter, lateral ventricular dilatation, mild corpus callosum abnormalities, and subtle cerebellar abnormalities. However, the diagnostic value of MRI abnormalities was considered good in only 5% of patients, and weak in 48% of patients, where non-specific abnormalities did not lead to a diagnosis. The study suggests that subgrouping patients according to neuroimaging markers and clinical signs could help identify those who would benefit from molecular studies. In the context of specific metabolic disorders, such as phenylketonuria (PKU), brain MRI can be used to evaluate long-term control of phenylalanine levels, as white matter involvement can be indicative of the disease's impact on the brain (PUBMED:24949052). Similarly, for other inherited metabolic diseases like l-2-hydroxyglutaric aciduria, brain MRI can demonstrate nonspecific subcortical white matter loss, cerebellar atrophy, and changes in dentate nuclei and putamen, which are important for diagnosis (PUBMED:27709120). For individuals with Down syndrome, MRI-based brain volumetry can correlate with cognitive assessment, revealing deficits in cognition and language domains that correlate with reduced volumes of specific regional and subregional brain structures (PUBMED:37496380). High-resolution structural and functional MRI of the hippocampus in young adults with Down syndrome has also been used to detect subtle differences in structure and function that may relate to cognitive deficits (PUBMED:33977271). In summary, while brain MRI can reveal structural anomalies associated with MR, its diagnostic yield can be variable. It is most useful when combined with clinical data and specific neuroimaging markers to guide further molecular or genetic studies. MRI can also be critical in understanding the neuropathology of specific genetic or metabolic conditions associated with MR.
Instruction: Do we need screening for thrombophilia prior to kidney transplantation? Abstracts: abstract_id: PUBMED:15240853 Do we need screening for thrombophilia prior to kidney transplantation? Background: There is increased risk for the occurrence of deep venous thrombosis (DVT) and renovascular thrombosis after kidney transplantation. A disruption of the blood homeostasis caused by surgery and leading to clotting and bleeding malfunctions is widely accepted. However, other causes such as inherited or acquired disorders of the clotting system may further increase the risk of thrombosis. Here, we summarize and review data on possible causes, incidence and ways to prevent the occurrence of DVT and/or renovascular thrombosis after kidney transplantation. Results: The incidence of DVT after kidney transplantation is 6.2-8.3% and approximately 25% of these patients suffer from pulmonary embolism. The DVT occurs primarily on the side of the transplant with an increased risk throughout the first 5 months after transplantation. Thereby, 2-12% of the patients develop renovascular thromboses, most of which are related directly to the surgery. However, inherited or acquired thrombophilia may also play an important role. A severe course is known for prothrombin gene G20210A polymorphism, which can result in graft loss. A great diversity of prophylactic treatments is available but adjustment to the underlying circumstances is crucial for a favourable outcome. Low-dose heparin prophylaxis for at least 2-3 weeks can be used as standard therapy to prevent the occurrence of DVT after kidney transplantation. However, this may not be sufficient for concurrent disorders of the blood homeostasis such as elevated levels of antiphospholipid antibodies, lupus anticoagulant, prothrombin gene G20210A polymorphism or a combined inherited thrombophilia. These patients may need a prophylactic anticoagulation with coumarins starting prior to transplantation and being continued for at least 1 year or even lifelong. Only randomized trials can answer the question concerning optimal duration and safety of coumarins in this setting. Conclusions: DVT and/or renovascular thromboses are severe complications after kidney transplantation. Inherited and acquired thrombophilia, apart from surgery and abnormal anatomy itself, have to be considered and proper prophylactic treatment initiated. abstract_id: PUBMED:18516702 The impact of inherited thrombophilia on surgery: a factor to consider before transplantation? This article mainly reviews hypercoagulability--and specifically inherited thrombophilia--in different types of surgery including kidney transplantation, simultaneous kidney and pancreas transplantation, orthopedic surgery, vascular surgery, cardiac surgery and other categories of surgical procedures, with a major focus on its associated complications and the need to screen or not. A search was conducted using Medline and cross-referencing for articles related to thrombophilia, screening for it, and its association with surgical complications post-op. Thrombosis associated predisposing entities like factor V Leiden, Prothrombin and Methylene tetrahydrofolate reductase gene mutations, in addition to Protein C and S deficiencies, are discussed. In addition, common and relatively uncommon complications of thrombophilia in the above mentioned surgeries will be covered in length, as well as the necessity to screen for thrombophilia prior to the surgical step. This topic is gaining more and more importance for both surgeons and internists (especially Hematologists) and we, herein, present a general review of the published literature as an update on the subject. abstract_id: PUBMED:26413003 Role of Special Coagulation Studies for Preoperative Screening of Thrombotic Complications in Simultaneous Pancreas-Kidney Transplantation. Background: Vascular thrombosis is a well-known complication after simultaneous pancreas-kidney (SPK) transplantation procedures. The role of preoperative special coagulation studies to screen patients at high risk for vascular thrombosis is unclear and not well studied. Methods: This study reports a retrospective medical record review of 83 SPK procedures performed between April 2007 and June 2013 in a single institution. All SPK transplantation recipients underwent preoperative screening for hypercoagulable state. Results: Eighteen of 83 patients (21.69%) were diagnosed with vascular thrombosis of the pancreas. Of the 23 patients with at least 1 positive screening test, only 4 had a thrombotic event (17.39%). On the other hand, 14 of 60 patients with negative screening tests developed vascular thrombosis (23.33%). The hypercoagulable screening workup had a positive predictive value of 17.39% and a negative predictive value of 76.67%. The workup also demonstrated low sensitivity (22.22%) and specificity (70.77%). Conclusion: No differences were seen in patient or graft survival between groups at 12 months. This retrospective study did not show any benefit of using special coagulation studies to rule out patients at risk for vascular thrombosis after SPK transplantation. abstract_id: PUBMED:33535938 Relevance of Inherited Thrombophilia Screening in Adult Kidney Transplant Recipients. Objectives: Thrombophilia has been implicated in posttransplant thrombosis. Data concerning the impact of thrombophilia on thrombotic risk in renal graft recipients are inconclusive. We evaluated whether identification of thrombophilia in patients during pretransplant laboratory screening was a predictor of posttransplant outcomes. Materials And Methods: We conducted a prospective single-center longitudinal study that included adult recipients who underwent kidney transplant from January 2011 to December 2017. Cardiovascular risk factors, personal history of thrombosis, and data concerning kidney transplant episodes were recorded. Before kidney transplant, all patients were systematically screened for thrombophilia. For thrombophilia screening for antithrombin, protein C, protein S deficiencies, and activated protein C resistance, reagents from Stago were used (Stachrom AT, Staclot Protein C, Staclot Protein S, and Staclot APCR). The endpoint was a thrombotic event within 2 years after kidney transplant. Results: Among 75 end-stage renal disease candidates for kidney transplant, 46 kidney transplant recipients were screened for thrombophilia. Thirty-six of the patients were men. The median age was 37 years (interquartile range, 33-43 years). Renal replacement therapy (36 hemodialysis and 10 peritoneal dialysis) was started in all patients. Forty-five patients received a kidney from a living donor. Among the 46 patients, 4 (9%) had a thrombophilia abnormality (3 with protein C deficiency and 1 with activated protein C resistance). Thrombotic events occurred during the follow-up in 7 cases (15%) with no anterior thrombophilia abnormality; 2 of these concerned the kidney transplant. Only 1 patient had loss of kidney graft immediately after kidney transplant. There was no association between pretransplant thrombophilia and posttransplant thrombotic events. Conclusions: Our results suggest that the utility of universal, comprehensive preoperative thrombophilia testing is not beneficial to determine risk of postoperative thrombosis. Thrombophilia testing may be considered in a select population with a history of pretransplant thrombotic events. abstract_id: PUBMED:30381880 Utility of thrombophilia screening in pediatric renal transplant recipients. Thrombosis after kidney transplantation may result in catastrophic outcomes, including graft loss. Thrombophilia has been implicated in post-transplant thrombosis; data, however, are inconclusive on the impact of acquired and inherited thrombophilia and resultant thrombosis in renal graft recipients. We aimed to evaluate whether identifying children with thrombophilia during the pretransplant evaluation predicted post-transplant outcomes. We reviewed 100 kidney transplants performed in 100 children, aged 1-18 years, in a single-center retrospective study. Routine pretransplant comprehensive thrombophilia evaluation was completed. Thrombophilia was demonstrated in 36% patients (N = 36). TEs occurred in 11 patients before kidney transplant. Low PS and antithrombin were found in 9/86 (10.5%) and 2/89 (2.2%) children, respectively. Heterozygosity for FLV and PGM were found in 5/81 (6.2%) and 1/93(1.1%) children, respectively. A post-transplant thrombotic event occurred in 10 children (10%); six involved the renal transplant. The association between a history of a pretransplant thrombotic event and post-operative renal graft thrombosis approached, but did not reach significance (P = 0.071). There was no association between preoperative screening abnormalities and post-operative TEs. Graft loss due to a thrombotic event occurred in two patients; none had underlying thrombophilia. Our data suggest that the utility of universal, comprehensive preoperative thrombophilia testing is not beneficial in determining risk of post-operative graft thrombosis. Thrombophilia testing may be considered in a select population with a history of pretransplant thrombotic event. abstract_id: PUBMED:12392294 Management of thrombophilia in renal transplant patients. Renal allograft recipients with thrombophilia (a hypercoagulable state) are at higher risk for early allograft loss. Following an episode of allograft renal vein thrombosis in a patient subsequently diagnosed with protein C deficiency, we adopted universal screening for hypercoagulable risk factors. Patients with a history of a thromboembolic event underwent laboratory screening for thrombophilia. Eight patients with a defined hypercoagulable disorder or a strong clinical history of thrombosis even in the absence of hematologic abnormalities were treated with anticoagulation following renal transplantation. We reviewed the outcomes of these eight patients and all renal transplant recipients at our center who developed thrombotic complications after renal transplantation. Since the introduction of universal screening for hypercoagulable risk factors, 235 consecutive transplants were performed without allograft thrombosis. Eight patients with evidence of thrombophilia, recognized before renal transplantation, received perioperative heparin and postoperative oral anticoagulation. Two of these eight patients developed perinephric hematomas requiring evacuation, blood transfusion, and temporary withholding of anticoagulation. Of interest, two of the remaining 227 patients, not identified with thrombophilia before surgery, developed thrombotic complications after renal transplantation. A hypercoagulable disorder was subsequently documented in each. Identifying patients with thrombophilia before transplantation and defining their management presents many challenges. The risk of allograft thrombosis must be weighed against the risk of perioperative bleeding and the need for long-term anticoagulation. Recommendations for managing thrombophilia in renal transplant recipients are suggested based on our experience and review of the literature. abstract_id: PUBMED:12923451 Is screening for factor V Leiden and prothrombin G20210A mutations in renal transplantation worthwhile? Results of a large single-center U.K. study. This single center study is the largest series of renal transplant recipients and donors screened for the commonest prothrombotic genotypes. A total of 562 transplant recipients and 457 kidney donors were genotyped for the factor V Leiden and prothrombin G20210A mutations. The prevalence of heterozygous factor V Leiden was 3.4% and 2.6% and prothrombin G20210A was 2.0% and 1.1% in recipients and donors, respectively, similar frequencies to that of the general U.K. population. The 30-day and 1-year graft survival rates in recipients with thrombophilic mutations were 93% and 93%, compared with 88% and 82% in patients without these mutations (log-rank P=0.34). Thrombophilia in recipients (odds ratio 0.55; confidence interval 0.06-2.29; P=0.56) or in donors (odds ratio 1.53; confidence interval 0.27-5.74; P=0.46) did not correlate with graft loss at 30 days after transplantation. In contrast to recent reports, this study did not demonstrate an association between thrombophilia and renal allograft loss, and routine screening is not recommended. abstract_id: PUBMED:16499600 Successful renal transplantation in a patient with heterozygous prothrombin gene, factor V Leiden mutation and heparin-induced thrombocytopenia using r-hirudin as anticoagulant. Vascular complications remain the most common cause of early renal allograft loss in patients with end-stage renal failure. Underlying thrombophilic disorders increase the risk of early graft thrombosis. A male adolescent with high-risk thrombophilia because of combined heterozygous factor V Leiden (G1691A) and prothrombin gene (G20210A) mutation developed HIT II. Hemodialysis and subsequent renal transplantation were undertaken using recombinant hirudin, a direct and selective thrombin inhibitor, as an anticoagulant. Primary function in the transplanted kidney was excellent. No thrombotic or hemorrhagic events have occurred and follow-up showed excellent long-term graft survival. Patients on HD have an increased risk for the development of HIT, and therefore, they need repetitive screening for the development of acquired thrombotic risk factors (e.g. HIT II or lupus anticoagulant). R-hirudin is efficacious and safe on both HD and following renal transplantation. abstract_id: PUBMED:18760065 Arteriovenous thrombosis in chronic renal failure patients receiving renal replacement therapy. Objective: To determine the frequency of thrombotic complications and to identify factors associated with arteriovenous thrombosis in patients of chronic renal failure receiving renal replacement therapy. Study Design: A descriptive study. Place And Duration Of Study: The study was carried out at Sindh Institute of Urology and Transplantation (SIUT), Karachi, from May 2003 to December 2003. Patients And Methods: Of the 3000 patients evaluated, 61 End Stage Renal Disease (ESRD) patients on regular dialysis, having recent renal transplant, were selected for the study after informed consent. These patients had arteriovenous thrombosis with temporary central lines thrombosis and vascular access problems. Cases of congenital or acquired thrombotic disorders, e.g. with malignancy, DIC, liver disease, systemic lupus erythematosus or other immunologic diseases, pregnancy or women using oral contraceptives, were excluded. Similarly, patients taking any type of anticoagulant therapy during the preceding one week were not included in the study. Findings were recorded in a structured questionnaire. Laboratory analysis was done after clinical and radiological evaluation. Thrombophilia screening included antithrombin, protein C, protein S deficiencies and lupus anticoagulant. Results: Forty-seven out of 61 patients selected were positive for thrombophilia screening with protein C deficiency in 26.2%, protein S deficiency in 16.3%, antithrombin in 5%, lupus anticoagulant in 13.1% and combined deficiency was observed in 16.3%. Conclusion: Of the 3000 patients, 61 with frequency of 2% were found to be deficient in one or had combined deficiency of these. Thus, the study of ESRD patients presenting with arteriovenous thromboembolism emphasizes the need to reconsider the perception that this clinical entity is rare and requires further studies. abstract_id: PUBMED:10528648 Renal allograft thrombosis: can thrombophilia explain the inexplicable? Renal allograft thrombosis remains a preventable cause of early allograft thrombosis. It should not be considered simply an unpredictable and poorly understood consequence of surgery. Extrapolated data from the general population and early data from renal patients supports the concept that the interplay of non-inherited hypercoagulability of renal disease with inherited thrombophilia, and the altered environmental milieu of transplantation predisposes to thrombosis (summarized in Figure 2). We should not accept the inevitability of a constant attrition of grafts to thrombosis and need to continue to identify risk factors and confirm appropriate screening and interventions for its prevention, almost certainly requiring collaborative multicentre trials. In the future, just as we now expand the specificity of HLA gene typing with molecular biology, genotyping for recognized thrombophilia genes in patients at risk will expand our ability to recognize and prevent thrombosis with targeted interventions drawn from the increasing array of anticoagulants now available. The contribution of thrombophilia to non-immune mechanisms of chronic allograft loss is also a potentially important but neglected area of research. Answer: Screening for thrombophilia prior to kidney transplantation is a topic of debate. The risk of deep venous thrombosis (DVT) and renovascular thrombosis is increased after kidney transplantation, and inherited or acquired thrombophilia may further elevate this risk (PUBMED:15240853). The incidence of DVT after kidney transplantation is reported to be between 6.2-8.3%, with a significant proportion of these patients also suffering from pulmonary embolism. Prothrombin gene G20210A polymorphism, in particular, is known for a severe course that can result in graft loss (PUBMED:15240853). However, the utility of preoperative thrombophilia testing is not clear-cut. Some studies have found no benefit in using special coagulation studies to rule out patients at risk for vascular thrombosis after simultaneous pancreas-kidney transplantation (PUBMED:26413003), and others suggest that universal preoperative thrombophilia testing may not be beneficial in determining the risk of postoperative thrombosis (PUBMED:33535938, PUBMED:30381880). Moreover, a large single-center UK study did not demonstrate an association between thrombophilia and renal allograft loss, suggesting that routine screening for common prothrombotic genotypes may not be warranted (PUBMED:12923451). On the other hand, some centers have adopted universal screening for hypercoagulable risk factors, and in patients identified with thrombophilia, perioperative heparin and postoperative oral anticoagulation have been used, although this approach also carries the risk of perioperative bleeding (PUBMED:12392294). Additionally, the presence of thrombophilia has been implicated in arteriovenous thrombosis in chronic renal failure patients receiving renal replacement therapy, indicating that thrombophilia screening may be relevant in certain populations (PUBMED:18760065). In conclusion, while DVT and renovascular thromboses are serious complications after kidney transplantation, and thrombophilia can play a role in these events, the evidence is mixed regarding the necessity and benefit of routine preoperative thrombophilia screening. It may be more appropriate to consider thrombophilia testing in select populations, such as those with a history of pretransplant thrombotic events (PUBMED:33535938, PUBMED:30381880). Randomized trials and further research are needed to determine the optimal approach to thrombophilia screening in kidney transplant candidates.
Instruction: Frequency and assessment of symptoms in hospitalized patient with advanced chronic diseases: is there concordance among patients and doctors? Abstracts: abstract_id: PUBMED:18769802 Frequency and assessment of symptoms in hospitalized patient with advanced chronic diseases: is there concordance among patients and doctors? Background: Physicians tend to over or underestimate symptoms reported by patients. Therefore standardized symptom scoring systems have been proposed to overcome this drawback. Aim: To estimate the prevalence and the diagnostic accuracy of physical and psychological symptoms and delirium in patients admitted to an internal medicine service at a university hospital. Material And Methods: We studied 58 patients, 45 with metastasic cancer and 13 with other advanced chronic diseases. The following scales were used: the Confusion Assessment Method for the diagnosis of delirium; the Edmonton Symptom Assessment Scale (ESAS) for pain and other physical symptoms; the Hospital Anxiety and Depression Scale to assess anxiety and depression. The ESAS was simultaneously applied to patients without delirium and their doctors to assess the level of diagnostic concordance. Results: Twenty two percent of patients had delirium. Among the 45 patients without delirium, 11 (25%) had at least eight symptoms and 39 (88.6%) had four symptoms. The prevalence of symptoms was very high, ranging from 22 to 78%. Pain, restlessness, anorexia and sleep disorders were the most common. The concordance between symptoms reported by patients and those recorded by doctor was very low, with a Kappa index between 0.001 and 0.334. Conclusions: In our sample of chronic patients, there is a very high frequency of psychological and physical symptoms that are insufficiently recorded by the medical team. abstract_id: PUBMED:33749957 'Reluctant pioneer': A qualitative study of doctors' experiences as patients with long COVID. Background: The coronavirus disease (COVID-19) pandemic has had far-reaching effects upon lives, healthcare systems and society. Some who had an apparently 'mild' COVID-19 infection continue to suffer from persistent symptoms, including chest pain, breathlessness, fatigue, cognitive impairment, paraesthesia, muscle and joint pains. This has been labelled 'long COVID'. This paper reports the experiences of doctors with long COVID. Methods: A qualitative study; interviews with doctors experiencing persistent symptoms were conducted by telephone or video call. Interviews were transcribed and analysis conducted using an inductive and thematic approach. Results: Thirteen doctors participated. The following themes are reported: making sense of symptoms, feeling let down, using medical knowledge and connections, wanting to help and be helped, combining patient and professional identity. Experiencing long COVID can be transformative: many expressed hope that good would come of their experiences. Distress related to feelings of being 'let down' and the hard work of trying to access care. Participants highlighted that they felt better able to care for, and empathize with, patients with chronic conditions, particularly where symptoms are unexplained. Conclusions: The study adds to the literature on the experiences of doctors as patients, in particular where evidence is emerging and the patient has to take the lead in finding solutions to their problems and accessing their own care. Patient And Public Contribution: The study was developed with experts by experience (including co-authors HA and TAB) who contributed to the protocol and ethics application, and commented on analysis and implications. All participants were given the opportunity to comment on findings. abstract_id: PUBMED:26658427 The Association of Types of Training and Practice Settings with Doctors' Empathy and Patient Enablement among Patients with Chronic Illness in Hong Kong. Background: The increase in non-communicable disease (NCD) is becoming a global health problem and there is an increasing need for primary care doctors to look after these patients although whether family doctors are adequately trained and prepared is unknown. Objective: This study aimed to determine if doctors with family medicine (FM) training are associated with enhanced empathy in consultation and enablement for patients with chronic illness as compared to doctors with internal medicine training or without any postgraduate training in different clinic settings. Methods: This was a cross-sectional questionnaire survey using the validated Chinese version of the Consultation and Relational Empathy (CARE) Measure as well as Patient Enablement Instrument (PEI) for evaluation of quality and outcome of care. 14 doctors from hospital specialist clinics (7 with family medicine training, and 7 with internal medicine training) and 13 doctors from primary care clinics (7 with family medicine training, and 6 without specialist training) were recruited. In total, they consulted 823 patients with chronic illness. The CARE Measure and PEI scores were compared amongst doctors in these clinics with different training background: family medicine training, internal medicine training and those without specialist training. Generalized estimation equation (GEE) was used to account for cluster effects of patients nested with doctors. Results: Within similar clinic settings, FM trained doctors had higher CARE score than doctors with no FM training. In hospital clinics, the difference of the mean CARE score for doctors who had family medicine training (39.2, SD = 7.04) and internal medicine training (35.5, SD = 8.92) was statistically significant after adjusting for consultation time and gender of the patient. In the community care clinics, the mean CARE score for doctors with family medicine training and those without specialist training were 32.1 (SD = 7.95) and 29.2 (SD = 7.43) respectively, but the difference was not found to be significant. For PEI, patients receiving care from doctors in the hospital clinics scored significantly higher than those in the community clinics, but there was no significant difference in PEI between patients receiving care from doctors with different training backgrounds within similar clinic setting. Conclusion: Family medicine training was associated with higher patient perceived empathy for chronic illness patients in the hospital clinics. Patient enablement appeared to be associated with clinic settings but not doctors' training background. Training in family medicine and a clinic environment that enables more patient doctor time might help in enhancing doctors' empathy and enablement for chronic illness patients. abstract_id: PUBMED:27456375 Managing relapsed myeloma: The views of patients, nurses and doctors. Purpose: The study aim was to explore whether there were differing opinions on the current management of relapsed myeloma between patients and health care professionals, a topic which has never been explored previously in the literature. Methods: This qualitative study was undertaken at a regional specialist haematology centre in Ireland. Individual interviews were undertaken with multiple myeloma patients with relapsed disease (n = 8). Three focus groups were also undertaken with haematology nurse specialists, haematology doctors and staff nurses working in a haematology day unit (n = 17). The analysis of interview data was guided by thematic analysis. Results: Two central themes were interpreted from the interview data: 'shared decision making with the expert patient' and 'an unpredictable disease journey'. Patients felt well informed regarding their illness but faced difficult decisions at times. Nurses and doctors stressed the importance of the early introduction of palliative care but acknowledged difficulties due to myeloma being unpredictable. Conclusion: Managing relapsed myeloma was fraught with complex issues. Patients developed alternative ways of coping with their disease including adopting the role of the 'expert patient', continuing to battle their disease and living with a chronic illness. Health care professionals struggle to manage the realities of a disease where the prognosis is improving. Uncertainty around patients' care and difficulties such as when 'enough is enough', continue to cause challenges. The transition to a chronic disease offers hope to patients and an opportunity for health care professionals to implement holistic care plans encouraging patients to be self proactive. abstract_id: PUBMED:28936138 Prevalence and overlap of Disease Management Program diseases in older hospitalized patients. Many countries, like Denmark, have tailored Disease Management Programs (DMPs) based on patients having single chronic diseases [defined institutionally as "program diseases" (PDs)], which can complicate treatment for those with multiple chronic diseases. The aims of this study were (a) to assess the prevalence and overlap among acutely hospitalized older medical patients of PDs defined by the DMPs, and (b) to examine transitions between different departments during hospitalization and mortality and readmission within two time intervals among patients with the different PDs. We conducted a registry study of 4649 acutely hospitalized medical patients ≥65 years admitted to Copenhagen University Hospital, Hvidovre, Denmark, in 2012, and divided patients into six PD groups (type 2 diabetes, chronic obstructive pulmonary disease, cardiovascular disease, musculoskeletal disease, dementia and cancer), each defined by several ICD-10 codes predefined in the DMPs. Of these patients, 904 (19.4%) had 2 + PDs, and there were 47 different combinations of the six different PDs. The most prevalent pair of PDs was type 2 diabetes with cardiovascular disease in 203 (22.5%) patients, of whom 40.4% had an additional PD. The range of the cumulative incidence of being readmitted within 90 days was between 28.8% for patients without a PD and 46.6% for patients with more than one PD. PDs overlapped in many combinations, and all patients had a high probability of being readmitted. Hence, developing strategies to create a new generation of DMPs applicable to older patients with comorbidities could help clinicians organize treatment across DMPs. abstract_id: PUBMED:36096846 Gaps in expectations and current practices of pharmacy services among doctors and patients - an exploratory study in a Sri Lankan tertiary care hospital providing free healthcare. The pharmacist is an important link between doctor and patient. To optimise patient care, it is essential that expectations of doctors and patients regarding pharmacy services are met. Hence the objective of this study was to assess the satisfaction levels of doctors and patients on pharmacy services currently provided, and their expectations from pharmacy services. This cross sectional study was conducted in selected clinics of a university based teaching hospital. Questionnaires developed in-house by referring previously published resources, content validated by a group of experts, and face validated through a pilot study were used. Doctors and patients of chronic disease clinics were selected for the study. All doctors involved in prescribing for more than six months, and patients or their regular care givers attending clinics for more than one year were included. Mentally incapacitated patients were excluded. An interviewee administered questionnaire was distributed to doctors and an interviewer administered questionnaire was used for patients. Exploratory factor analysis (EFA) (principal component analysis with Varimax rotation) was conducted to divide variables of the questionnaires into reliable components. Response rate of doctors was 82.3%. Among them 59.6% (50/84) doctors said that they have a good relationship with pharmacists, and 89.3% (75/84) expected communication with pharmacists more often. EFA for doctors' perceptions resulted in four components. A statistically significant difference was observed between doctors' expectation (95.9% - 81/84) and current practice (22.6% - 19/84) on communicating medication issues (p<0.001). A total of 380 patients participated. EFA for patients' perceptions resulted in ten components. The majority considered pharmacists as an integral part of the healthcare system (98.7% - 375/380) and experts in medication (84.7% - 322/380). They further perceived that dispensed medications are safe (82.9% - 315/380) and of good quality (76.3% - 290/380). Further 95.5% (363/380) were satisfied with dispensing label information. A statistically significant difference was found between the expectations (93% - 353/380) and satisfaction levels (86.5% - 329/380) of patients on pharmacy services (p=0.003). According to findings, both doctors and patients held a positive perception on pharmacy services and pharmacists, but the statistically significant gap reported between expectations and current level of pharmacy service, highlighting both the potential and scope for service improvement. abstract_id: PUBMED:20112108 Prevalence and associated factors of depressive symptoms among Chinese doctors: a cross-sectional survey. Objectives: Doctors, the major workforce in hospitals, are doing heavy emotional and physical work which may lead to depressive symptoms. However, in China, few studies are available pertaining to the prevalence and associated factors of depressive symptoms among doctors. The aim of this study was to evaluate the prevalence of depressive symptoms and to explore its associated factors among Chinese doctors in public hospitals. Methods: This cross-sectional study was performed during the period of September/October 2008. The study population comprised of 1,890 doctors registered and working in the 20 national hospitals in Liaoning province, northeast of China. A questionnaire that comprised depressive symptoms assessed by the Chinese Version of the Center for Epidemiologic Studies Depression Scale (CES-D), demographic factors, work conditions, occupational stress, and coping strategies was distributed to these doctors. A total of 1,488 effective respondents became our subjects (effective response rate 78.7%). Multivariate logistic regression was used to explore the factors related to depressive symptoms. Results: The prevalence of depressive symptoms among doctors was 65.3%. Multivariate logistic analyses showed that high role insufficiency (OR 2.15, 95% CI 1.66-2.78), worse doctor-patient relationship (OR 2.07, 95% CI 1.62-2.64), having a chronic disease (OR 1.73, 95% CI 1.31-2.27), serious role boundary (OR 1.54, 95% CI 1.21-2.00), and role overload (OR 1.42, 95% CI 1.11-1.81) were positively associated with depressive symptoms; whereas adequate rational coping (OR 0.58, 95% CI 0.45-0.76) and social support (OR 0.75, 95% CI 0.57-0.98) were negatively associated with depressive symptoms. Conclusions: Most Chinese doctors probably have depressive symptoms. Role insufficiency, doctor-patient relationship, and rational coping seemed to be crucial in relation to depressive symptoms. Efficient interventions such as taking further education course, improving communications with patients, and improving the ability of rational coping should be considered by health administrators aiming at improving the quality of Chinese doctors' mental health from the view point of depressive symptoms. abstract_id: PUBMED:23525140 Prevalence of gastroesophageal reflux disease symptoms and related factors in patients with rheumatoid arthritis. Gastroesophageal reflux disease (GERD) is common in patients with many chronic diseases, but has not been well recognized in rheumatoid arthritis (RA). We investigated the prevalence of GERD symptoms in 278 outpatients with RA and their association with such clinical factors as age, sex, height, weight, body mass index, medications drugs, and functional status evaluated by the Modified Health Assessment Questionnaire (MHAQ). GERD symptoms were evaluated by Frequency Scale for the Symptoms of GERD (FSSG). The mean FSSG score for all patients was 5.6, and 82 patients were considered to have GERD symptoms (FSSG score ≥8), thus the overall prevalence of GERD symptoms was 29.5%. MHAQ score and height were significantly higher and lower, respectively, and prednisolone usage was significantly more in the patients with GERD symptoms than those without. These three clinical factors were also significantly associated with GERD symptoms by univariate logistic regression. Multivariate logistic regression analysis demonstrated that MHAQ was the only clinical factor related to GERD symptoms. In conclusion, the prevalence of GERD symptoms in RA patients was high and strongly associated with decreased functional status, suggesting that physicians should pay attention to GERD symptoms in RA management, especially for patients with low functional status. abstract_id: PUBMED:28382400 Associations Among Depressive Symptoms, Wellness, Patient Involvement, Provider Cultural Competency, and Treatment Nonadherence: A Pilot Study Among Community Patients Seen at a University Medical Center. Treatment nonadherence is a pernicious problem associated with increasing rates of chronic diseases, escalating healthcare costs, and rising mortality in some patients. Although researchers have suggested numerous factors related to treatment nonadherence, several understudied aspects warrant attention, such as primary-care settings, provider cultural competence, and patient involvement. Adding to the research base, the present pilot study examined 88 primarily Black American and White American community patients from a large university medical center in the southern part of the United States. The study explored two research questions: (a) To what extent are there associations among depressive symptoms, wellness, patient involvement, cultural competency, and treatment nonadherence in a racially diverse community patient population? And (b) to what extent do the study exploratory variables and background characteristics predict treatment nonadherence, both separately and jointly? Depressive symptoms, the patient's perception of a provider's cultural competence, and marital/partnered status were found to be statistically significantly associated with treatment nonadherence, but not entirely in the directions expected. abstract_id: PUBMED:31619216 Burnout among postgraduate doctors in Colombo: prevalence, associated factors and association with self-reported patient care. Background: Postgraduate doctors are prone to burnout due to occupational and educational stressors. Sri Lankan situation is unknown. This study determines burnout among postgraduate doctors in Colombo: Prevalence, associated factors, and association with self-reported patient care. Methods: A cross-sectional study was conducted among 278 postgraduate doctors from eight specialties working in Colombo district, attached to the main postgraduate training institute for medical professionals. A self-administered questionnaire was used. It comprised of the Copenhagen Burnout Inventory and an author-developed questionnaire, which was used to assess, associated factors and self-reported patient care. Prevalence of burnout was calculated. Associations were analysed using chi-square and binary logistic regression. Results: The response rate was 88.1% (n = 245). The prevalence of personal, work-related and client-related burnout was 41.6% (95% CI = 35.5-47.8%), 30.6% (95% CI = 24.8-36.4%), 8.9% (95% CI = 5.4-12.5%) respectively. Personal burnout was positively associated with, the trainee being a female, having a chronic disease, being involved in frequent unhealthy habits, having doctor parents, having home-work demands and having emotional demands. It was negatively associated with, having frequent healthy habits, being satisfied with skill development opportunities, and frequent use of deep studying. Work-related burnout was positively associated with, female gender, being involved in frequent unhealthy habits, having home-work demands and having emotional demands. It was negatively associated with, frequent use of deep methods of studying. Client-related burnout was positively associated with having emotional demands and negatively associated with being satisfied with training. The frequent self-reported, suboptimal patient-care practices: poor communication, poor clinical practice, poor response to patient's needs and poor communication during handing over were associated positively with client-related burnout. Conclusions: Most postgraduate doctors in Colombo have high personal and work-related burnout but client-related burnout is less. The factors associated with burnout need to be addressed by the programme managers of the postgraduate courses. Preventive measures should be introduced to reduce burnout among future postgraduate trainees of Colombo. Answer: The study conducted by researchers and reported in PUBMED:18769802 aimed to estimate the prevalence and diagnostic accuracy of physical and psychological symptoms, including delirium, in patients admitted to an internal medicine service at a university hospital. The study included 58 patients, 45 with metastatic cancer and 13 with other advanced chronic diseases. Various scales were used to assess symptoms, including the Confusion Assessment Method for delirium, the Edmonton Symptom Assessment Scale (ESAS) for pain and other physical symptoms, and the Hospital Anxiety and Depression Scale for anxiety and depression. The ESAS was applied to both patients without delirium and their doctors to assess the level of diagnostic concordance. The results showed that 22% of patients had delirium. Among the 45 patients without delirium, 25% had at least eight symptoms, and 88.6% had four symptoms. The prevalence of symptoms was very high, ranging from 22 to 78%, with pain, restlessness, anorexia, and sleep disorders being the most common. However, the concordance between symptoms reported by patients and those recorded by doctors was very low, with a Kappa index between 0.001 and 0.334. The conclusion of the study was that there is a very high frequency of psychological and physical symptoms in chronic patients that are insufficiently recorded by the medical team. This indicates that there is a lack of concordance between patients and doctors regarding the assessment of symptoms in hospitalized patients with advanced chronic diseases.
Instruction: Can selective arterial clamping with fluorescence imaging preserve kidney function during robotic partial nephrectomy? Abstracts: abstract_id: PUBMED:26968420 Selective Arterial Clamping Versus Hilar Clamping for Minimally Invasive Partial Nephrectomy. Partial nephrectomy has become an accepted treatment of cT1 renal masses as it provides improved long-term renal function compared to radical nephrectomy (Campbell et al. J Urol. 182:1271-9, 2009). Hilar clamping is utilized to help reduce bleeding and improve visibility during tumor resection. However, concern over risk of kidney injury with hilar clamping has led to new techniques to reduce length of warm ischemia time (WIT) during partial nephrectomy. These techniques have progressed over the years starting with early hilar unclamping, controlled hypotension during tumor resection, selective arterial clamping, minimal margin techniques, and off-clamp procedures. Selective arterial clamping has progressed significantly over the years. The main question is what are the exact short- and long-term renal effects from increasing clamp time. Moreover, does it make sense to perform these more time-consuming or more complex procedures if there is no long-term preservation of kidney function? More recent studies have shown no difference in renal function 6 months from surgery when selective arterial clamping or even hilar clamping is employed, although there is short-term improved decline in estimated glomerular filtration rate (eGFR) with selective clamping and off-clamp techniques (Komninos et al. BJU Int. 115:921-8, 2015; Shah et al. 117:293-9, 2015; Kallingal et al. BJU Int. doi: 10.1111/bju.13192, 2015). This paper reviews the progression of total hilar clamping to selective arterial clamping (SAC) and the possible difference its use makes on long-term renal function. SAC may be attempted based on surgeon's decision-making, but may be best used for more complex, larger, more central or hilar tumors and in patients who have renal insufficiency at baseline or a solitary kidney. abstract_id: PUBMED:24909960 Can selective arterial clamping with fluorescence imaging preserve kidney function during robotic partial nephrectomy? Objective: To compare renal functional outcomes in robotic partial nephrectomy (RPN) with selective arterial clamping guided by near-infrared fluorescence (NIRF) imaging to a matched cohort of patients who underwent RPN without selective arterial clamping and NIRF imaging. Methods: From April 2011 to December 2012, NIRF imaging-enhanced RPN with selective clamping was used in 42 cases. Functional outcomes of successful cases were compared with a cohort of patients, matched by tumor size, preoperative estimated glomerular filtration rate (eGFR), functional kidney status, age, sex, body mass index, and American Society of Anesthesiologists score, who underwent RPN without selective clamping and NIRF imaging. Results: In matched-pair analysis, selective clamping with NIRF was associated with superior kidney function at discharge, as demonstrated by postoperative eGFR (78.2 vs 68.5 mL/min/1.73 m(2); P = .04), absolute reduction of eGFR (-2.5 vs -14.0 mL/min/1.73 m(2); P <.01), and percent change in eGFR (-1.9% vs -16.8%; P <.01). Similar trends were noted at 3 month follow-up, but these differences became nonsignificant (P[eGFR] = .07; P[absolute reduction of eGFR] = .10; and P[percent change in eGFR] = .07). In the selective clamping group, a total of 4 perioperative complications occurred in 3 patients, all of which were Clavien grade I-III. Conclusion: Use of NIRF imaging was associated with improved short-term renal functional outcomes when compared with RPN without selective arterial clamping and NIRF imaging. With this effect attenuated at later follow-up, randomized prospective studies and long-term assessment of kidney-specific functional outcomes are needed to further assess the benefits of this technology. abstract_id: PUBMED:24615844 Console-integrated real-time three-dimensional image overlay navigation for robot-assisted partial nephrectomy with selective arterial clamping: early single-centre experience with 17 cases. Background: The objective of this study was to describe our early experience with robot-assisted partial nephrectomy (RAPN) incorporating selective arterial clamping, using an image overlay navigation system. Methods: Three-dimensional images were reconstructed from computed tomography using open source processing software, OsiriX, and were directly visualized on the screen of a da Vinci surgeon's console with TilePro multi-input display functions. Using this imaging system, RAPN with selective arterial clamping was performed in 17 patients with renal tumours. Results: The intraoperative image overlay navigation system made it possible to clearly show the tumour position and vascular supply within the console's field of view, and facilitate selective arterial clamping during RAPN, resulting in a mean decrease in estimated glomerular filtration rates at both 1 and 4 weeks after RAPN < 10 ml/min/1.73 m(2) . Conclusions: RAPN with selective arterial clamping using this innovative imaging system could be a useful alternative to conventional RAPN. abstract_id: PUBMED:25698588 Near-infrared fluorescence imaging with intraoperative administration of indocyanine green for robotic partial nephrectomy. Near-infrared fluorescence (NIRF) imaging with intraoperative administration of indocyanine green (ICG) is a technology with emerging applications in urologic surgery. ICG is a water-soluble dye that fluoresces bright green when viewed under near-infrared light (700-1000 nm). This technology has been applied to robotic partial nephrectomy, first to potentially allow for the differentiation of renal tumor from normal parenchyma. In this application, it has been hypothesized that normal kidney tissue fluoresces green, while the tumor commonly remains hypofluorescent, thereby aiding tumor excision. Secondly, NIRF imaging with ICG has been employed to facilitate selective arterial clamping during robotic partial nephrectomy, allowing for a regional perfusion deficit in the kidney to be readily identified and therefore targeted at a given tumor. Recent studies have shown the associated decrease in global ischemia to minimize resultant loss of renal function at certain time endpoints. This review presents the most recent studies and evidence on the intraoperative administration of indocyanine green for robotic partial nephrectomy. abstract_id: PUBMED:23253629 Near-infrared fluorescence imaging to facilitate super-selective arterial clamping during zero-ischaemia robotic partial nephrectomy. Unlabelled: WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: There is concern that warm ischaemia time during partial nephrectomy may have an adverse impact on postoperative renal function. As a result, there is increased interest in developing a safe and effective method for performing non-ischaemic partial nephrectomy. Several novel approaches have recently been described. We present our initial experience performing zero-ischaemia partial nephrectomy using near-infrared fluorescence imaging to facilitate super-selective arterial clamping. We report the operative and early postoperative outcomes from such cases as compared with a matched cohort of patients undergoing traditional partial nephrectomy with clamping of the main renal artery. We show that this technique is both safe and effective and may lead to improved renal preservation at short-term follow-up. Objective: To describe a novel technique of eliminating renal ischaemia during robotic partial nephrectomy (RPN) using near-infrared fluorescence (NIRF) imaging. Patients And Methods: Over an 8-month period (March 2011 to November 2011), 34 patients were considered for zero-ischaemia RPN using the da Vinci NIRF system. Targeted tertiary/higher-order tumour-specific branches were controlled with robotic bulldog(s) or neurosurgical aneurysm micro-bulldog(s). Indocyanine green dye was given, and NIRF imaging used to confirm super-selective ischaemia, defined as darkened tumour/peri-tumour area with green fluorescence of remaining kidney. Matched pair analysis was performed by matching each patient undergoing zero-ischaemia RPN (n = 27) to a previous conventional RPN (n = 27) performed by the same surgeon. Results: Of 34 patients, 27 (79.4%) underwent successful zero-ischaemia RPN; seven (20.6%) required conversion to main renal artery clamping (ischaemia time <30 min) for the following reasons: persistent tumour fluorescence after clamping indicating inadequate tumoral devascularization (n = 5), and parenchymal bleeding during RPN (n = 2). Matched-pair analysis showed comparable outcomes between cohorts, except for longer operating time (256 vs 212 min, P = 0.02) and superior kidney function (reduction of estimated glomerular filtration rate (-1.8% vs -14.9%, P = 0.03) in the zero-ischaemia cohort. All surgical margins were negative. Conclusions: In this pilot study, we show that zero-ischaemia RPN with NIRF is a safe alternative to conventional RPN with main renal artery clamping. Eliminating global ischaemia may improve functional outcomes at short-term follow-up. abstract_id: PUBMED:32125072 Selective clamping during robot-assisted partial nephrectomy in patients with a solitary kidney: is it safe and does it help? Objectives: To obtain the most accurate assessment of the risks and benefits of selective clamping in robot-assisted partial nephrectomy (RAPN) we evaluated outcomes of this technique vs those of full clamping in patients with a solitary kidney undergoing RAPN. Patients And Methods: Data from institutional review board-approved retrospective and prospective databases from 2006 to 2019 at multiple institutions with sharing agreements were evaluated. Patients with a solitary kidney were identified and stratified based on whether selective or full renal artery clamping was performed. Both groups were analysed with regard to demographics, risk factors, intra-operative complications, and postoperative outcomes using chi-squared tests, Fisher's exact tests, t-tests and Mann-Whitney U-tests. Results: Our initial cohort consisted of 4112 patients, of whom 72 had undergone RAPN in a solitary kidney (51 with full clamping and 21 with selective clamping). There were no significant differences in demographics, tumour size, baseline estimated glomerular filtration rate (eGFR), or warm ischaemia time (WIT) between the groups (Table 1). Intra-operative outcomes, including estimated blood loss, operating time, and intra-operative complications were similar in the two groups. Short- and long-term postoperative percentage change in eGFR, frequency of acute kidney injury (AKI), and frequency of de novo chronic kidney disease (CKD) were also not significantly different between the two techniques. Conclusion: In a large cohort of patients with solitary kidney undergoing RAPN, selective clamping resulted in similar intra-operative and postoperative outcomes compared to full clamping and conferred no additional risk of harm. However, selective clamping did not appear to provide any functional advantage over full clamping as there was no difference observed in the frequency of AKI, CKD or change in eGFR. Short WIT in both groups (<15 min) may have prevented identification of benefits in the selective clamping group; a similar study analysing cases with longer WIT may elucidate any beneficial effects of selective clamping. abstract_id: PUBMED:27635284 Is robotic partial nephrectomy convenient for solitary kidney? Nephron sparing surgery (NSS) is the gold standard treatment option for patients with a solitary kidney in order to preserve renal function. Open partial nephrectomy (OPN) has been long considered the standard care for NSS. Robotic partial nephrectomy (RPN) is being gradually used more commonly even for solitary kidney and complex tumors. There was no difference between RPN and OPN regarding the rate of intraoperative-postoperative complications and positive surgical margin (PSM) (RPN: 7.5%, OPN: 8%) for patients with solitary kidney who underwent partial nephrectomy for small renal masses. Warm ischemia time (WIT) in all of our studies was within the safe range of <25 minutes which is acceptable ischemia time for robotic approaches. More studies are needed in order to evaluate kidney function. In conclusion with increasing experience, solitary kidney tumors can be managed safely with robotic approach. For patients having complex tumors with a potential of WIT >25 minutes, administration of intracorporeal ice slush during surgery may be considered. abstract_id: PUBMED:32410919 Robotic Partial Nephrectomy with Indocyanine Green Fluorescence Navigation. Partial nephrectomy (PN) is a recommended type of treatment of localised renal tumors. Real-time intraoperative imaging technique, such as fluorescence imaging with indocyanine green (ICG) administration helps to improve intraoperative and postoperative outcomes in patients who underwent PN. Our work presents results of patients who underwent robotic PN with ICG navigation. A total of 37 patients underwent robotic PN with application of ICG between April 2015 and May 2019. A total amount of 5 mg of ICG was applied intravenously, and then robotic PN was performed with fluorescent imaging. ICG was used by the surgeon's decision according to unfavourable anatomical properties of tumor or to high R.E.N.A.L. nephrometry score. An exact border between perfused and nonperfused tissue was detected, and exact tumor's branch of the renal artery was clamped. Robotic PN with ICG-fluorescence imaging navigation was performed in 37 cases with a preoperative average diameter of tumor of 31 mm. The mean surgery time was 133 minutes, and the mean estimated blood loss was 190 mL. Arterial clamping was performed in 35 cases. The mean duration of warm ischemia was 14 minutes. Application of ICG enabled specific tumor-supplying vessel clamping in 25 cases. Two complications of grade II according to the Clavien-Dindo classification occurred intraoperatively, and one complication of grade III was observed. Renal function changes showed favourable results for the cases with superselective clamping. Finally, an administration of ICG eases superselective clamping of tumor-specific branch of renal artery and helps to preserve normal renal function with acceptable oncological results. abstract_id: PUBMED:36737298 A Novel Three-dimensional Planning Tool for Selective Clamping During Partial Nephrectomy: Validation of a Perfusion Zone Algorithm. Background: Selective clamping during robot-assisted partial nephrectomy (RAPN) requires extensive knowledge on patient-specific renal vasculature, obtained through imaging. Objective: To validate an in-house developed perfusion zone algorithm that provides patient-specific three-dimensional (3D) renal perfusion information. Design, Setting, And Participants: Between October 2020 and June 2022, 25 patients undergoing RAPN at Ghent University Hospital were included. Three-dimensional models, based on preoperative computed tomography (CT) scans, showed the clamped artery's ischemic zone, as calculated by the algorithm. Surgical Procedure: All patients underwent selective clamping during RAPN. Indocyanine green (ICG) was administered to visualize the true ischemic zone perioperatively. Surgery was recorded for a postoperative analysis. Measurements: The true ischemic zone of the clamped artery was compared with the ischemic zone predicted by the algorithm through two metrics: (1) total ischemic zone overlap and (2) tumor ischemic zone overlap. Six urologists assessed metric 1; metric 2 was assessed objectively by the authors. Results And Limitations: In 92% of the cases, the algorithm was sufficiently accurate to plan a selective clamping strategy. Metric 1 showed an average score of 4.28 out of 5. Metric 2 showed an average score of 4.14 out of 5. A first limitation is that ICG can be evaluated only at the kidney surface. A second limitation is that mainly patients with impaired renal function are expected to benefit from this technology, but contrast-enhanced CT is required at present. Conclusions: The proposed new tool demonstrated high accuracy when planning selective clamping for RAPN. A follow-up prospective study is needed to determine the tool's clinical added value. Patient Summary: In partial nephrectomy, the surgeon has no information on which specific arterial branches perfuse the kidney tumor. We developed a surgeon support system that visualizes the perfusion zones of all arteries on a three-dimensional model and indicates the correct arteries to clamp. In this study, we validate this tool. abstract_id: PUBMED:26459499 Renal Functional and Perioperative Outcomes of Selective Versus Complete Renal Arterial Clamping During Robot-Assisted Partial Nephrectomy: Early Single-Center Experience With 39 Cases. Objective: The objective of this study was to investigate clinical outcomes in patients undergoing selective versus conventional complete renal arterial clamping during robot-assisted partial nephrectomy (RAPN). Methods: This study included 19 patients with renal tumors who received RAPN incorporating selective arterial clamping (group 1). The renal functional as well as perioperative outcomes in group 1 were compared with those in 20 patients with renal tumors undergoing RAPN with total clamping of the renal artery (group 2) during the same period. Results: In group 1, tumor resection under selective arterial clamping could be completed in all patients without intraoperative conversion to conventional RAPN with total clamping. There were no significant differences in the tumor size, RENAL nephrometry score, or preoperative estimated glomerular filtration rate (eGFR) between groups 1 and 2. Furthermore, no significant differences were noted in the estimated blood loss, operative time, or warm ischemia time between the 2 groups. Although there was no significant difference in the rate of decrease in eGFR 4 weeks after RAPN between the 2 groups, the rate of decrease in eGFR 1 week after RAPN in group 1 was significantly lower than that in group 2. The choice of selective or total clamping was also identified as an independent predictor of a postoperative decrease in eGFR by > 10% at 1 week, but not 4 weeks, after RAPN. Conclusions: A precise segmental clamping technique is feasible and safe for performing RAPN, resulting in an improved postoperative renal function, particularly early after surgery. Answer: Yes, selective arterial clamping with fluorescence imaging can preserve kidney function during robotic partial nephrectomy. Studies have shown that selective clamping with near-infrared fluorescence (NIRF) imaging is associated with superior kidney function at discharge when compared to robotic partial nephrectomy (RPN) without selective arterial clamping and NIRF imaging. This is demonstrated by better postoperative estimated glomerular filtration rate (eGFR), less absolute reduction of eGFR, and a smaller percent change in eGFR. However, these differences in renal function outcomes were noted to attenuate at later follow-up, indicating that the benefits of selective arterial clamping with fluorescence imaging may be more pronounced in the short term (PUBMED:24909960). Additionally, NIRF imaging with intraoperative administration of indocyanine green (ICG) has been employed to facilitate selective arterial clamping during RPN, allowing for a regional perfusion deficit in the kidney to be readily identified and therefore targeted at a given tumor. This technique has been associated with a decrease in global ischemia, which can minimize the resultant loss of renal function at certain time endpoints (PUBMED:25698588). Moreover, studies have validated the use of selective arterial clamping during RPN in patients with a solitary kidney, showing that it resulted in similar intra-operative and postoperative outcomes compared to full clamping and conferred no additional risk of harm. Although selective clamping did not show a functional advantage over full clamping in terms of frequency of acute kidney injury (AKI), chronic kidney disease (CKD), or change in eGFR, it is suggested that the short warm ischemia time (WIT) in both groups might have masked any potential benefits (PUBMED:32125072). In conclusion, selective arterial clamping with fluorescence imaging can preserve kidney function during robotic partial nephrectomy, particularly in the short term, and it is a safe technique that does not increase the risk of harm compared to full clamping.
Instruction: Propofol-ketamine combination: a choice with less complications and better hemodynamic stability compared to propofol? Abstracts: abstract_id: PUBMED:26329313 Propofol-ketamine combination: a choice with less complications and better hemodynamic stability compared to propofol? On a prospective study in a group of colonoscopy patients. Background And Aims: In this study, we compared duration for reaching desired Ramsay Sedation Score (RSS) and postoperative recovery according to Modified Aldrete Score (MAS) of propofol and propofol-ketamine combination in a group of colonoscopy patients. Rates of cardiovascular, respiratory, laryngospasm, visual and nausea/vomiting complications were also compared as secondary outcomes. Methods: This is a double-blinded prospective randomized controlled trial. 95 patients were included and blocked randomized to either propofol (GroupP, n: 47) or propofol-ketamine (GroupPK, n: 48). GroupP patients received 0.5 mg/kg propofol and GroupPK received 0.5 mg/kg ketamine-propofol. Subjects were monitorized noninvasively preoperatively and every 5 min during procedure. RSS was recorded for every minute before starting procedure and for every 5 min during procedure. Recovery after colonoscopy was evaluated according to MAS. Same observer checked for MAS just after procedure in postoperative 1 min and for every 3 min during follow-up. Postoperative respiratory depression was defined as rate <10/min, hypercapnia/hypercarbia-arterial CO2 tension >50 mmHg or SO2 <90 while hypotension was defined as a decrease of 20 % in mean blood pressure compared to initial values. Results: GroupPK patients needed shorter duration for achieving RSS ≥ 4 (p: 0.038) but longer duration for achieving MAS ≥ 9 (p: 0.005). GroupP's intraoperative blood pressures and heart rates were significantly lower compared to initial values. We observed that respiratory depression (19.1 vs 0 %, p: 0.001), hypotension (29.8 vs 10.4 %, p: 0.018), and nausea/vomiting (17 vs 4.2 %, p: 0.041) were significantly more common in GroupP. Conclusion: Propofol-ketamine combination is an advantageous choice in means of achieving sedation in a shorter period of time, a better hemodynamic stability, less nausea and vomiting and respiratory complication rates. Yet it seems that this choice might be related with longer recovery duration. abstract_id: PUBMED:25885832 A clinical comparison of ketofol (ketamine and propofol admixture) versus propofol as an induction agent on quality of laryngeal mask airway insertion and hemodynamic stability in children. Background: In spite of being the preferred induction agent for LMA insertion, propofol has many undesirable side effects including dose-related cardiorespiratory depression and local pain at injection site. Ketofol as a novel induction agent has been introduced recently with comparable efficacy and improved hemodynamic control. Objective: To investigate ketofol as a suitable induction agent alternative to propofol for insertion of LMA in children considering insertion conditions, hemodynamic stability, local pain at injection site, and recovery. Materials And Methods: In this randomized, double-blind study, 100 children were randomly assigned into two groups of 50 patients each in which induction was performed with either propofol or ketofol. Providers were given one 20 ml syringe [represent either 2 mg/kg of propofol (P group) or 0.75 mg/kg of ketamine and 1.5 mg/kg of propofol (KP group)] and one 10 ml syringe for rescue if needed [represent 1 mg/kg of propofol (P group) or 0.25 mg/kg of ketamine and 0.5 mg/kg of propofol (KP group)]. After monitoring with bispectral index (BIS), general anesthesia was induced by infusion with a syringe perfuser at a constant rate of 250 ml/h with either of the two agents till the BIS values decreased to 40. Mean arterial pressure (MAP), heart rate (HR) were measured every 30 seconds up to 5 minutes after LMA placement. The time till BIS values decreased to 40 was measured. All children were evaluated for incidence of apnea, pain on injection, jaw relaxation, conditions for LMA insertion, and complications such as muscle rigidity, hallucinations, and excessive secretions. Results: Induction time (time to reach BIS of 40) was faster in the KP group (150 ± 23.5 seconds) than in the P group (205 ± 37.4 seconds). The incidence of injection pain was significantly lower in the KP group (10%) than in the P group (80%). Excellent jaw relaxation and full mouth opening were higher in the KP group [45 patients (90%)] than in the P group [38 patients (76%)]. Excellent LMA insertion conditions were observed in 45 patients (90%) in the KP group and 38 patients (76%) in the P group. The KP group showed preserved hemodynamic stability (mean blood pressure, heart rate) with less incidence and duration of apnea compared to the P group. Conclusion: ketofol is a safe and effective alternative induction agent for LMA insertion in children with rapid onset of action and lower incidence of injection pain. It provided better LMA insertion conditions, improved hemodynamic stability with less prolonged apnea when compared with propofol. abstract_id: PUBMED:33145214 Comparison of Ketofol (Combination of Ketamine and Propofol) and Propofol Anesthesia in Aneurysmal Clipping Surgery: A Prospective Randomized Control Trial. Background: The maintenance of hemodynamic stability is of pivotal importance in aneurysm surgeries. While administering anesthesia in these patients, the fluctuations in blood pressure may directly affect transmural pressure, thereby precipitating rupture of aneurysm and various other associated complications. We aimed to compare the effects of ketofol with propofol alone when used as an induction and maintenance anesthetic agent during surgical clipping of intracranial aneurysms. Materials And Methods: Forty adult, good-grade aneurysmal subarachnoid hemorrhage patients posted for aneurysm neck clipping were included in the study. The patients were randomized into two groups. One group received a combination of ketamine and propofol (1:5 ratio) and the other group received propofol for induction and maintenance of anesthesia. Intraoperative hemodynamic stability, intraventricular pressure, and quality of brain relaxation were studied in both the groups. Results: The patients were comparable with respect to demographic profile, Hunt and Hess grade, world federation of neurological surgeons (WFNS) grade, Fisher grade, duration of anesthesia, duration of surgery, optic nerve sheath diameter, and baseline hemoglobin. Intraoperative hemodynamics were better maintained in the ketofol group during induction, with only 15% of patients having >20% fall in mean arterial pressure (from baseline) intraoperatively, compared to 45% of patients receiving propofol alone (P = 0.038). The mean intraventricular pressure values in both the groups were in the normal range and the quality of brain relaxation was similar, with no significant difference (P > 0.05). Conclusion: Ketofol combination (1:5) as compared to propofol alone provides better hemodynamic stability on induction as well as maintenance anesthesia without causing an increase in intracranial pressure. Effect of ketofol on cerebral oxygenation and quality of emergence need to be evaluated further by larger multicentric, randomized control trials. abstract_id: PUBMED:24251011 Hemodynamic Stability during Induction of Anesthesia in ElderlyPatients: Propofol + Ketamine versus Propofol + Etomidate. Introduction: Various methods have been recommended to prevent hemodynamic instability caused by propofol induction. Current study evaluates hemodynamic effects of ketamine and propofol in comparison to etomidate and propofol during anesthesia induction. Methods: Sixty-two patients over 50 years old undergoing elective surgeries were randomly assigned to ketamine + propofol (ketofol) (n=30) and etomidate + propofol (etofol) (n=32) groups. Patients in ketofol group were induced with ketamine 0.75 mg/kg and propofol 1 mg/kg. In etofol group, induction was performed with etomidate 0.2 mg/kg and propofol 1 mg/kg. Hemodynamic states before and after induction, first, third and sixth minutes after intubation were measured and compared between groups. Results: There was no difference between groups in systolic (SAP), diastolic (DAP) and mean arterial pressure (MAP), heart rate (HR) and blood oxygen saturation (SaO2). There was significant decrease in SAP, DAP and MAP after induction and 6 minutes after intubation and in HR after induction than values before induction. There was significant increase in SaO2 in all evaluated periods than before induction in etofol group; however, the difference in ketofol group was not significant. Conclusion: Both methods of induction -ketamine + propofol and etomidate + propofol- are effective in maintaining hemodynamic stability and preventing hemodynamic changes due to propofol administration. abstract_id: PUBMED:36171936 Comparison of haemodynamic changes between propofol and ketofol as induction agents in patients undergoing laparoscopic surgeries under general anaesthesia. Background And Aims: Laparoscopic surgeries are more commonly performed procedure nowadays because of its advantages however generation of pneumoperitoneum causes significant physiological changes. Propofol is the most commonly used induction agent but its use is limited by its side effects like dose-dependent hypotension and myocardial depression. So by combining propofol with ketamine to form ketofol may result in better hemodynamic stability. The aim of this study was to compare the haemodynamic changes in patients undergoing laparoscopic surgery under general anesthesia using propofol and ketofol as induction agents. Material And Methods: In this prospective randomized double blind study, 80 patients of ASA I/II undergoing elective laparoscopic surgery were randomized into two Groups. Group A received 1 mg/kg of propofol + 1 mg/kg ketamine made up to a total volume of 20 ml with normal saline and Group B received 2 mg/kg propofol + normal saline to make up to an equal volume. Hemodynamic profiles like HR, SBP, DBP, and MAP were recorded at different time intervals until pneumoperitoneum. Postoperative recovery profile and complications were recorded. All data were entered in MS excel and analyzed using SPSS Version 20.0. Repeated measures ANOVA and Chi-square test were used to test the level of significance. Results: Demographic character and duration of surgery were comparable. SBP, DBP, MAP and HR showed statistically significant difference in both groups in various time intervals with P < 0.05 with Group A (ketofol ) having a better hemodynamic stability. Recovery profile in ketofol group took longer duration (4.95 min) compared to propofol group B (1.8 min). Postoperative nausea and vomiting were significant (P = 0.004) in ketofol group. Conclusion: We concluded that ketofol had a better hemodynamic stability compared to propofol as an induction agent, however time for recovery in ketofol group took a longer period compared to propofol group, with no complication in either groups. abstract_id: PUBMED:24392397 Hemodynamic responses with different dose of ketamine and propofol in day care gynecological surgeries. Background: Day care gynaecological surgeries mandate use of hemodynamically stable combination of commonly used intravenous agents, propofol & ketamine. Hence we proposed to evaluate the hemodynamic profile of different dose combination of propofol & ketamine as induction agents in ambulatory gynecological surgeries. Material & Methods: Thirty adult patients scheduled for day care gynecological surgeries were randomly divided into five Groups. Group I received inj. propofol 2mg/kg i.v + inj. Normal saline (NS); Group II received ketamine 2mg/kg i.v + inj. NS; Group III was given inj. Propofol 2mg/kg i.v followed by inj. Ketamine 1 mg/kg i.v.; Group IV received inj. Ketamine 2 mg/kg i.v followed by inj. propofol 1 mg/kg i.v; Group V received combination of inj. Propofol 1 mg/kg i.v and inj. Ketamine 1 mg/kg i.v. After administration of the drug, non invasive systolic, diastolic and mean arterial pressures, heart rate, respiratory rate, arterial oxygen saturation and ECG were recorded every minute for five minutes. Results: Statistically significant hemodynamic alterations were observed in Group I & II, with Group I having the highest incidence of apnea (23.3%). No significant changes in hemodynamics were seen in Group V. Conclusion: We conclude that the combination of 1mg Propofol and 1mg ketamine produced better hemodynamic stability in comparison to other Groups. abstract_id: PUBMED:34670425 Hemodynamic Effects of Ketamine Compared With Propofol or Dexmedetomidine as Continuous ICU Sedation. Background: Ketamine has seen increased use for sedation in the intensive care unit. In contrast to propofol or dexmedetomidine, ketamine may provide a positive effect on hemodynamics. Objective: The objective of this study was to compare the development of clinically significant hypotension or bradycardia (ie, negative hemodynamic event) between critically ill adults receiving sedation with ketamine and either propofol or dexmedetomidine. Methods: This was a retrospective cohort study of adults admitted to an intensive care unit at an academic medical center between January 2016 and January 2021. Results: Patients in the ketamine group (n = 78) had significantly less clinically significant hypotension or bradycardia compared with those receiving propofol or dexmedetomidine (n = 156) (34.6% vs 63.5%; P < 0.001). Patients receiving ketamine also experienced smaller degree of hypotension observed by percent decrease in mean arterial pressure (25.3% [17.4] vs 33.8% [14.5]; P < 0.001) and absolute reduction in systolic blood pressure (26.5 [23.8] vs 42.0 [37.8] mm Hg; P < 0.001) and bradycardia (15.5 [24.3] vs 32.0 [23.0] reduction in beats per minute; P < 0.001). In multivariate logistic regression modeling, receipt of propofol or dexmedetomidine was the only independent predictor of a negative hemodynamic event (odds ratio [OR]: 3.3, 95% confidence interval [CI], 1.7 to 6.1; P < 0.001). Conclusion And Relevance: Ketamine was associated with less clinically relevant hypotension or bradycardia when compared with propofol or dexmedetomidine, in addition to a smaller absolute decrease in hemodynamic parameters. The clinical significance of these findings requires further investigation. abstract_id: PUBMED:38137737 Intrahospital Transport of Critically Ill Patients with Subarachnoid Hemorrhage-Frequency, Timing, Complications, and Clinical Consequences. Background: Patients with subarachnoid hemorrhage (SAH) often necessitate intra-hospital transport (IHT) during intensive care treatment. These transfers to facilities outside of the neurointensive care unit (NICU) pose challenges due to the inherent instability of the hemodynamic, respiratory, and neurological parameters that are typical in these patients. Methods: In this retrospective, single-center cohort study, a total of 108 IHTs were analyzed for demographics, transport rationale, clinical outcomes, and pre/post-IHT monitoring parameters. After establishing clinical thresholds, the frequency of complications was calculated, and predictors of thresholds violations were determined. Results: The mean age was 55.7 (+/-15.3) years, with 68.0% showing severe SAH (World Federation of Neurosurgical Societies Scale 5). IHTs with an emergency indication made up 30.8% of all transports. Direct therapeutic consequences from IHT were observed in 38.5%. On average, the first IHT occurred 1.5 (+/-2.0) days post-admission and patients were transported 4.3 (+/-1.8) times during their stay in the NICU. Significant parameter changes from pre- to post-IHT included mean arterial pressure, systolic blood pressure, oxygen saturation, blood glucose levels, temperature, dosages of propofol and ketamine, tidal volume, inspired oxygen concentration, Horovitz index, glucose, pH, intracranial pressure, and cerebral perfusion pressure. Relevant hemodynamic thresholds were violated in 31.5% of cases, while respiratory complications occurred in 63.9%, and neurological complications in 20.4%. For hemodynamic complications, a low heart rate with a threshold of 61/min (OR 0.96, 95% CI 0.93-0.99, p = 0.0165) and low doses of midazolam with a threshold of 17.5 mg/h (OR 0.97, 95% CI 0.95-1.00, p = 0.0232) significantly predicted adverse events. However, the model did not identify significant predictors for respiratory and neurological outcomes. Conclusions: Conclusively, IHTs in SAH patients are associated with relevant changes in hemodynamic, respiratory, and neurological monitoring parameters, with direct therapeutic consequences in 4/10 IHTs. These findings underscore the importance of further studies on the clinical impact of IHTs. abstract_id: PUBMED:18339162 Addition of ketamine to propofol for initiation of procedural anesthesia in children reduces propofol consumption and preserves hemodynamic stability. Background: There is no ideal anesthesia protocol to perform short invasive procedures in pediatric oncology. The combination of propofol and ketamine may offer advantages over propofol alone. Methods: In a prospective, randomized, double-blind study, we analyzed 63 consecutive procedures performed in 47 oncology children. All patients received 1 mug/kg fentanyl, followed by propofol 1 mg/kg in group P (n=33) or propofol 0.5 mg/kg and ketamine 0.5 mg/kg in group PK (n=30) for the initiation of anesthesia. The need for supplementation with propofol and/or fentanyl to maintain an adequate level of anesthesia was recorded. The hemodynamic and respiratory profile, recovery time and the occurrence of side effects were compared. Results: Significantly more children required propofol (100% vs. 83.3%) and fentanyl (75.5% vs. 43.3%) rescue doses, and developed hypotension (63.6% vs. 23.4%) and bradycardia (48.5 vs. 23.4%) in group P compared with group PK, with a comparable incidence of respiratory adverse events and recovery times. However, 40% of children in group PK were agitated following recovery compared with 6% in group P. Conclusions: The combination of propofol and ketamine for invasive procedures in pediatric oncology resulted in reduced propofol and fentanyl consumption and preserved hemodynamic stability, but more children in the combination group recovered with agitation. abstract_id: PUBMED:38396358 Interleaved Propofol-Ketamine Maintains DBS Physiology and Hemodynamic Stability: A Double-Blind Randomized Controlled Trial. Background: The gold standard anesthesia for deep brain stimulation (DBS) surgery is the "awake" approach, using local anesthesia alone. Although it offers high-quality microelectrode recordings and therapeutic-window assessment, it potentially causes patients extreme stress and might result in suboptimal surgical outcomes. General anesthesia or deep sedation is an alternative, but may reduce physiological testing reliability and lead localization accuracy. Objectives: The aim is to investigate a novel anesthesia regimen of ketamine-induced conscious sedation for the physiological testing phase of DBS surgery. Methods: Parkinson's patients undergoing subthalamic DBS surgery were randomly divided into experimental and control groups. During physiological testing, the groups received 0.25 mg/kg/h ketamine infusion and normal saline, respectively. Both groups had moderate propofol sedation before and after physiological testing. The primary outcome was recording quality. Secondary outcomes included hemodynamic stability, lead accuracy, motor and cognitive outcome, patient satisfaction, and adverse events. Results: Thirty patients, 15 from each group, were included. Intraoperatively, the electrophysiological signature and lead localization were similar under ketamine and saline. Tremor amplitude was slightly lower under ketamine. Postoperatively, patients in the ketamine group reported significantly higher satisfaction with anesthesia. The improvement in Unified Parkinson's disease rating scale part-III was similar between the groups. No negative effects of ketamine on hemodynamic stability or cognition were reported perioperatively. Conclusions: Ketamine-induced conscious sedation provided high quality microelectrode recordings comparable with awake conditions. Additionally, it seems to allow superior patient satisfaction and hemodynamic stability, while maintaining similar post-operative outcomes. Therefore, it holds promise as a novel alternative anesthetic regimen for DBS. © 2024 The Authors. Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society. Answer: The combination of propofol and ketamine, often referred to as "ketofol," has been studied as an alternative to propofol alone for inducing anesthesia in various clinical settings. The evidence from multiple studies suggests that ketofol may offer several advantages over propofol alone, including better hemodynamic stability and fewer complications. A prospective study in colonoscopy patients found that the propofol-ketamine combination led to a shorter duration for achieving the desired Ramsay Sedation Score (RSS) and better hemodynamic stability, with less respiratory depression, hypotension, and nausea/vomiting compared to propofol alone. However, recovery duration according to the Modified Aldrete Score (MAS) was longer with ketofol (PUBMED:26329313). In pediatric patients undergoing laryngeal mask airway (LMA) insertion, ketofol provided better LMA insertion conditions, improved hemodynamic stability, and a lower incidence of injection pain and prolonged apnea compared to propofol (PUBMED:25885832). For aneurysmal clipping surgery, ketofol offered better hemodynamic stability during induction and maintenance of anesthesia without increasing intracranial pressure, compared to propofol alone (PUBMED:33145214). In elderly patients, both ketofol and etomidate-propofol combinations were effective in maintaining hemodynamic stability during anesthesia induction (PUBMED:24251011). In patients undergoing laparoscopic surgeries, ketofol resulted in better hemodynamic stability compared to propofol, although recovery time was longer with ketofol (PUBMED:36171936). A study evaluating different dose combinations of propofol and ketamine for day care gynecological surgeries concluded that a combination of 1mg propofol and 1mg ketamine produced better hemodynamic stability compared to other groups (PUBMED:24392397). In the intensive care unit, ketamine was associated with less clinically relevant hypotension or bradycardia when compared with propofol or dexmedetomidine (PUBMED:34670425). In summary, the propofol-ketamine combination appears to be a choice with less complications and better hemodynamic stability compared to propofol alone in various clinical scenarios. However, it is important to note that in some cases, ketofol may be associated with a longer recovery duration.
Instruction: Enterocele revealed by simultaneous evacuation proctography and peritoneography: does "defecation block" exist? Abstracts: abstract_id: PUBMED:8686626 Enterocele revealed by simultaneous evacuation proctography and peritoneography: does "defecation block" exist? Objective: Pelvic floor weakness may allow prolapse of the bowel into the rectogenital space, forming an enterocele. Enteroceles are believed to obstruct defecation by rectal compression and are therefore considered an abnormal finding on evacuation proctography. With a technique combine evacuation proctography and peritoneography, we prospectively studied constipated patients to reveal the pelvic peritoneal recesses during evacuation and to determine if enterocele actually impairs rectal emptying. Subjects And Methods: Fifty constipated patients were studied prospectively, Using 20 ml of water-soluble contrast medium, we performed peritoneography, then evacuation protography with 120 ml of intrarectal paste. Lateral evacuation and posteroanterior stress views were analyzed by computerized video capture. Anatomic features and functional measurements of rectal emptying were noted. Posteroanterior views were compared with views in 31 subjects undergoing peritoneography for investigation of groin pain. Results: Technical failure in three patients left 47 for analysis. A deep rectogenital pouch was seen in 36 patients (77%). Of these, 12 (58%) contained viscera that formed an enterocele, but the remaining 15 patients (42%) showed no visceral filling. Most pouches were apparent only during straining (31 cases, 86%). Peritoneal descent was greater than in controls (p < .0001), of whom only three had small rectogenital pouches. Patients with enterocele were compared with those who had a rectogenital pouch but no visceral filling and those who had no pouch. Standard anatomic measurements by evacuation proctography were not significantly different, but patients with enterocele evacuated more rapidly (p = .008) and completely (p = .021) than did the other two groups. Conclusion: Combined evacuation proctography and peritoneography is a new technique to diagnose pelvic hernias that occur during evacuation. This technique has shown that a deep rectogenital pouch is common in constipated patients and that just over half such pouches fill with viscera. However, because an enterocele does not impair rectal evacuation, this proctographic finding should be interpreted with caution. abstract_id: PUBMED:7580780 Evacuation proctography combined with positive contrast peritoneography to demonstrate pelvic floor hernias. Background: To demonstrate the pelvic peritoneal recesses during voiding, evacuation proctography was combined with positive contrast peritoneography. Methods: In 30 constipated patients, peritoneography was performed, followed by proctography. Results: During evacuation rectogenital herniation developed in 20 patients (66%), without visceral filling in 12 (40%). Conclusions: Posterior pelvic floor hernia is common during defecation. Less than half fill with bowel and many may not be apparent on standard proctography. abstract_id: PUBMED:8290201 Evacuation proctography (defecography): an aid to the investigation of pelvic floor disorders. Patients with pelvic floor disorders frequently have associated anorectal dysfunction, which can be evaluated by a variety of laboratory tests. Evacuation proctography (defecography) documents the process of rectal evacuation and therefore demonstrates disorders of defecation, particularly those of an obstructive nature. It provides objective information about rectocele size and emptying and demonstrates coexistent enteroceles, many of which are missed on physical examination. This radiographic technique is the method of choice for recognizing rectal intussusception, the mechanism by which rectal prolapse occurs. Proctography suggests the diagnosis of spastic pelvic floor (anismus), a disorder amenable to biofeedback therapy. Proctography has limited application in anal incontinence unless there are associated obstructive symptoms. Evaluation of bladder dysfunction is aided by concomitant cystography. Gynecologists managing pelvic floor disorders should assess coexistent anorectal dysfunction, as undiagnosed enteroceles and disorders of defecation are an important cause of persistent or recurrent symptoms following pelvic floor repair. A comprehensive interdisciplinary approach to pelvic floor disorders is recommended. abstract_id: PUBMED:18409183 Defecation proctography and translabial ultrasound in the investigation of defecatory disorders. Objectives: Defecation proctography is the standard method used in the investigation of obstructed defecation. Translabial ultrasound has recently been shown to demonstrate rectocele, enterocele and rectal intussusception. We performed a comparative clinical study to determine agreement between the two methods. Methods: Thirty-seven women scheduled to undergo defecation proctography for obstructed defecation were recruited. Using both proctography and translabial ultrasound, we determined the anorectal angle, presence of a rectocele and rectocele depth, rectal intussusception and prolapse. Measurements were obtained by operators blinded to all other data. All patients rated discomfort on a scale of 0-10. Results: Six women did not attend defecation proctography, leaving 31 cases for comparison. The mean age was 53 years. Patients rated discomfort at a median of 1 (range 0-10) for ultrasound and 7 (range 0-10) for defecation proctography (P < 0.001). Defecation proctography suggested rectocele and rectal intussusception/prolapse more frequently than did ultrasound. While the positive predictive value of ultrasound (considering defecation proctography to be the definitive test) was 0.82 for rectocele and 0.88 for intussusception/prolapse, negative predictive values were only 0.43 and 0.27, respectively. Cohen's kappa values were 0.26 and 0.09, respectively. There was poor agreement between ultrasound and defecation proctography measurements of anorectal angle and rectocele depth. Conclusions: Translabial ultrasound can be used in the initial investigation of defecatory disorders. It is better tolerated than defecation proctography and also yields information on the lower urinary tract, pelvic organ prolapse and levator ani. Agreement between ultrasound and defecation proctography in the measurement of quantitative parameters was poor, but when intussusception or rectocele was diagnosed on ultrasound these results were highly predictive of findings on defecation proctography. abstract_id: PUBMED:1620863 Pelvic prolapse: assessment with evacuation proctography (defecography) The contribution of evacuation proctography (EP) to the evaluation of pelvic prolapse was assessed in 74 consecutive patients. A rectocele was demonstrated in 73 patients (99%); large rectoceles frequently showed barium trapping, but there was no correlation between these findings and rectal symptoms. An enterocele was detected at evacuation proctography in 13 patients (18%) (including two enteroceles seen only retrospectively), and a sigmoidocele was shown in four patients (5%). Physical examination resulted in detection of only seven enteroceles and of none of the sigmoidoceles. In 48 patients (65%), additional findings were evident at EP, including excessive pelvic floor descent, anal incontinence, rectal intussusception, and spastic pelvic floor. These data suggest that EP is particularly useful in the preoperative evaluation of pelvic prolapse if the patient has anorectal symptoms or is at risk for an enterocele. EP contributes to surgical planning by enabling identification of clinically unsuspected enteroceles and sigmoidoceles and coexistent disorders of rectal evacuation. abstract_id: PUBMED:9275895 Dynamic MR imaging compared with evacuation proctography when evaluating anorectal configuration and pelvic floor movement. Objective: The aim of this study was to determine the agreement between measurements of the anorectal configuration made with dynamic MR imaging and with evacuation proctography. Subjects And Methods: Ten women with constipation were examined sequentially using both standard evacuation proctography and dynamic MR imaging. Correlation and agreement between standard measurements of anorectal configuration at rest and during straining or evacuation of the rectum were evaluated. Results: Significant correlation existed for anorectal junction descent (r = .7, p = .023), change in the anorectal angle (r = .78, p = .008), the position of the rectal axis on straining (r = .62, p = .032), and rectocele depth (r = .82, p = .004). The mean of the measurement differences for anorectal junction descent as measured using the two techniques was -0.23 cm (SD, 1.40 cm). The mean of the measurement differences for change of the anorectal angle was +15.3 degrees (SD, 27.92 degrees). Conclusion: Standard measurements of anorectal configuration made using evacuation proctography and dynamic MR imaging show significant correlation. However, in our study statistical agreement was poor for measurement of anorectal junction descent and anorectal angle change as seen on imaging obtained with the two techniques. abstract_id: PUBMED:22941114 Transperineal ultrasound compared to evacuation proctography for diagnosing enteroceles and intussusceptions. Introduction: This study concerns the level of agreement between transperineal ultrasound and evacuation proctography for diagnosing enteroceles and intussusceptions. Method: In a prospective observational study, 50 consecutive women who were planned to have an evacuation proctography underwent transperineal ultrasound too. Sensitivity, specificity, positive (PPV) and negative predictive value, as well as the positive and negative likelihood ratio of transperineal ultrasound were assessed in comparison to evacuation proctography. To determine the interobserver agreement of transperineal ultrasound, the quadratic weighted kappa was calculated. Furthermore, receiver operating characteristic curves were generated to show the diagnostic capability of transperineal ultrasound. Results: For diagnosing intussusceptions (PPV 1.00), a positive finding on transperineal ultrasound was predictive of an abnormal evacuation proctography. Sensitivity of transperineal ultrasound was poor for intussusceptions (0.25). For diagnosing enteroceles, the positive likelihood ratio was 2.10 and the negative likelihood ratio, 0.85. There are many false-positive findings of enteroceles on ultrasonography (PPV 0.29). The interobserver agreement of the two ultrasonographers assessed as the quadratic weighted kappa of diagnosing enteroceles was 0.44 and that of diagnosing intussusceptions was 0.23. Conclusion: An intussusception on ultrasound is predictive of an abnormal evacuation proctography. For diagnosing enteroceles, the diagnostic quality of transperineal ultrasound was limited compared to evacuation proctography. abstract_id: PUBMED:17161573 Dynamic imaging of posterior compartment pelvic floor dysfunction by evacuation proctography: techniques, indications, results and limitations. The clinical management of patients with anorectal and pelvic floor dysfunction is often difficult. Evacuation proctography has evolved from a method to evaluate the anorectum for functional disorders to its current status as a practical method for evaluating anorectal dysfunction and pelvic floor prolapse. It has a high observer accuracy and yield of positive diagnosis. Clinicians find it of major benefit and has altered management from surgical to medical and vice versa in a significant number of cases. abstract_id: PUBMED:8267346 Defecating proctography: local experience. Between October 1990 to November 1991, defecating proctography was performed on a select group of patients with complaints of persistent constipation or sensation of incomplete evacuation. Out of the 27 patients studied, a high percentage (88.8%) showed some form of anatomical or functional abnormality of the defecating mechanism. As defecating proctography is a relatively new mode of investigation locally, we briefly describe our method and results. These include rectocele formation, intrarectal mucosal prolapse, intussusception and pubo-rectalis paradox. Some of these cases may be amenable to surgical correction. abstract_id: PUBMED:7656736 Simultaneous dynamic proctography and peritoneography for pelvic floor disorders. Purpose: We sought to evaluate a new diagnostic technique for the identification of rectal and pelvic floor pathology in patients with obstructed defecation, pelvic fullness/prolapse, and/or chronic intermittent pelvic floor pain. Methods: Thirteen symptomatic women with either a nondiagnostic physical examination or nondiagnostic dynamic proctography (DPG) were studied. After placement of intraperitoneal and intrarectal contrast material, resting and straining pelvic x-rays were obtained in all patients, and defecation was videotaped using fluoroscopy. Results: Simultaneous DPG and peritoneography identified clinically suspected and unsuspected enteroceles in 10 of the 13 patients studied. An enterocele or other pelvic floor hernia was ruled out by the technique in three of the women studied. Rectoceles and rectal prolapse that were identified during physical examination were confirmed by DPG with peritoneography. Simultaneous DPG and peritoneography also gave a qualitative assessment of the severity and clinical significance of the identified pelvic floor disorders. Results of simultaneous DPG and peritoneography affected operative treatment planning in 85 percent of patients studied. Conclusion: Simultaneous DPG and peritoneography identifies both rectal and pelvic floor pathology and provides a qualitative assessment of pelvic floor pathology severity, which allows for better treatment planning in selected patients with obstructed defecation and pelvic prolapse. Answer: The concept of "defecation block" suggests that an enterocele, which is a prolapse of the bowel into the rectogenital space, could obstruct defecation by compressing the rectum. However, the findings from simultaneous evacuation proctography and peritoneography indicate that this may not be the case. In a study involving 50 constipated patients, a deep rectogenital pouch was common, and over half of these pouches filled with viscera to form an enterocele. Despite this, the presence of an enterocele did not impair rectal evacuation. In fact, patients with enteroceles evacuated more rapidly and completely than those without, suggesting that an enterocele does not necessarily act as a "defecation block" (PUBMED:8686626). This finding is supported by another study where evacuation proctography combined with positive contrast peritoneography showed that rectogenital herniation developed in 66% of constipated patients during evacuation, but less than half filled with bowel, and many may not be apparent on standard proctography (PUBMED:7580780). These studies suggest that while enteroceles are a common finding in constipated patients, they do not necessarily obstruct defecation, and their presence should be interpreted with caution when considering their role in defecatory dysfunction.
Instruction: Percutaneous transluminal therapy of occluded saphenous vein grafts: can the challenge be met with ultrasound thrombolysis? Abstracts: abstract_id: PUBMED:9884375 Percutaneous transluminal therapy of occluded saphenous vein grafts: can the challenge be met with ultrasound thrombolysis? Background: Percutaneous transluminal treatment of a thrombotic vein graft yields poor results. We have previously reported our experience with transluminal percutaneous coronary ultrasound thrombolysis (CUT) in the setting of acute myocardial infarction (AMI). This report describes the first experience with ultrasound thrombolysis in thrombus-rich lesions in saphenous vein grafts (SVGs), most of which were occluded. Methods And Results: The patients (n=20) were mostly male (85%), aged 64+/-4 years old. The presenting symptom was AMI in 2 patients (10%) and unstable angina in the rest. Fifteen patients (75%) had totally occluded SVGs. The median age of clots was 6 days (range, 0 to 100 days). The ultrasound thrombolysis device has a 1.6-mm-long tip and fits into a 7F guiding catheter over a 0.014-in guidewire in a "rapid-exchange" system. CUT (41 kHz, 18 W, </=6 minutes) led to device success in 14 (70%) of the patients and residual stenosis of 65+/-28%. Procedural success was obtained in 13 (65%) of the patients, with a final residual stenosis of 5+/-8%. There was a low rate of device-related adverse events: 1 patient (5%) had a non-Q-wave myocardial infarction, and distal embolization was noted in 1 patient (5%). Adjunct PTCA or stenting was used in all patients. There were no serious adverse events during hospitalization. Conclusions: Ultrasound thrombolysis in thrombus-rich lesions in SVGs offers a very promising therapeutic option. abstract_id: PUBMED:6965314 Percutaneous transluminal angioplasty in the management of occlusive disease involving the coronary arteries and saphenous vein bypass grafts: preliminary results. Since January, 1978, we have evaluated 18 patients in whom segmental occlusive disease of the aorta-coronary saphenous vein graft or the native coronary circulation was present. The significantly occluded vessels were restored to relatively normal circulation by means of percutaneous transluminal dilatation in 10 of these patients, with technical failures in seven patients and the occurrence of a thrombosis during the procedure in one other. Successful dilatation occurred in six of the seven patients with saphenous vein graft stenosis. Percutaneous transluminal angioplasty of the native coronary arteries in 11 patients resulted in five successful dilatations. All dilatation procedures are performed with the use of local anesthesia with an open-heart team standing by. In the event of technical failure or incipient thrombosis with developing infarction, the patient is transferred for immediate bypass. On the basis of these early results, we have developed guidelines of the indications for percutaneous transluminal dilatation of the coronary arteries and their saphenous vein grafts. The need for improved instrumentation to broaden these guidelines is discussed. Case histories of five patients are presented, and a summary table for all 18 patients is also provided. Certain inherent complications are discussed, but our preliminary results are encouraging, particularly with regard to stenoses of saphenous vein bypass grafts. abstract_id: PUBMED:2805060 A complication of prolonged urokinase infusion into a chronically occluded aortocoronary saphenous vein graft. Recanalization of a chronically occluded aortocoronary saphenous vein graft was performed, using a prolonged intracoronary infusion of urokinase followed by percutaneous transluminal coronary angioplasty (PTCA). Despite an angiographically successful result, the patient developed acute myocardial infarction, presumably secondary to distal migration of partially lysed thrombus. One week after successful angioplasty, the graft was once again proximally occluded. abstract_id: PUBMED:14517927 Combined distal embolization protection and rheolytic thrombectomy to facilitate percutaneous revascularization of totally occluded saphenous vein grafts. Totally occluded saphenous vein grafts are difficult to treat percutaneously with a higher likelihood of distal embolization and slow-flow or no-reflow during percutaneous interventions. The PercuSurge system, which utilizes a distal balloon occlusive device, has been shown to improve clinical outcomes during saphenous vein graft (SVG) interventions. This device may not be optimal in the setting of heavy thrombus or debris burden, a situation frequently encountered in totally occluded SVGs. Rheolytic thrombectomy facilitates percutaneous interventions by effectively removing intraluminal thrombus and debris but lacks distal embolization protection. We report our experience with the synergistic use of balloon-based distal embolization protection (PercuSurge) and rheolytic thrombectomy (AngioJet) to optimize percutaneous revascularization of totally occluded SVGs. abstract_id: PUBMED:37869736 Percutaneous closure of giant saphenous vein graft aneurysm. Background: Aneurysmal dilatation of saphenous vein grafts used for coronary artery bypass grafting is a rare complication. These aneurysms are often large in calibre and pose a risk of rupture with significant haemorrhage. Case Summary: We describe a case whereby a large saphenous vein graft aneurysm is closed percutaneously using a vascular plug to cease flow and promote thrombosis of the aneurysm whilst reconstructing the occluded native artery to negate ischaemia. Conclusion: Saphenous vein graft aneurysms following coronary artery bypass graft are rare and late complications. The preferred modality of closure is via percutaneous approach that requires meticulous planning to achieve a good outcome. abstract_id: PUBMED:24137182 Placement of vena cava filter via percutaneous puncture of the great saphenous vein. The aim of this study was to investigate the feasibility and safety of vena cava filter (VCF) placement via percutaneous puncture of the great saphenous vein (GSV) in the prevention of pulmonary embolisms. Using ultrasound positioning, VCF placement via percutaneous puncture of the GSV was performed on 12 patients with deep vein thrombosis (DVT) in the lower extremities. Transcatheter thrombolysis was conducted simultaneously. The postoperative filter position, puncture wound recovery and fluency of the GSV were observed. All filters were successfully released, with accurate positioning. No hematoma was observed at the puncture point during the perioperative period. In certain patients, local petechiae appeared around the puncture point during the thrombolysis period, which did not require special treatment. Re-examination using ultrasound revealed unobstructed blood flow in the GSV. VCF placement via percutaneous puncture of the GSV is a new filter placement method. The feasibility and safety of this method for the prevention of pulmonary embolisms has been demonstrated in a small number of sample cases. abstract_id: PUBMED:8478570 Immunocytochemical investigations of occluded saphenous vein grafts Occluded saphenous vein aorto-coronary bypass grafts were investigated using two types of monoclonal antibodies, specific to either the muscle actin (HHF 35) or the macrophage (HAM 56). The early post operative occlusions of saphenous vein grafts were associated with the deposition of mural thrombi with little intimal proliferation. The late occlusions of saphenous vein grafts were associated with intimal fibrous proliferation without mural thrombus formation. Proliferated smooth muscle cells were a major constituent of intimal fibrous proliferation in both early and late occluded grafts. Macrophages were not detected in both groups. The proliferation of smooth muscle cells in the intima may play a role in the development of late occlusion, rather than the organizational process of accumulated mural thrombi. abstract_id: PUBMED:20146345 Proximal protection in recanalization of totally occluded saphenous vein grafts in acute coronary syndrome. Percutaneous treatment of old, degenerated saphenous vein grafts (SVG) is associated with a high likelihood of major adverse cardiac events. When an acute coronary syndrome (ACS) develops in a patient with old SVG, fresh thrombus may superimpose on an old, degenerative atheroma: a sudden increase in the athero-thrombotic burden ensues with consequent, frequent total occlusion of the lumen. In this scenario, transluminal recanalization of the graft is usually associated with the highest chance of distal embolization and no-reflow and positioning of an embolic protection device (EPD) is almost mandatory. However, distal EPD are difficult to place when the vessel is totally occluded and do not completely avoid distal embolization. We report two cases of totally occluded SVG in patients admitted for ACS that were recanalized with the aid of a proximal EPD system with angiographic and clinical success. abstract_id: PUBMED:2353647 Transluminal atherectomy of saphenous vein aortocoronary bypass grafts. Angioplasty of stenotic saphenous vein aortocoronary bypass grafts is often unsatisfactory because of the relatively high incidence of acute complications and restenosis. During an initial evaluation of transluminal coronary atherectomy, 14 patients had atherectomy of saphenous vein graft lesions (15 grafts). Atherectomy was successful in 13 of 14 patients, decreasing the mean diameter of stenosis from 85 to 15%. In 1 patient, the lesion could not be crossed by the atherectomy device. The following 3 minor complications occurred: 1 embolus of atheromatous material; 1 air embolism; and 1 transient thrombosis leading to subendocardial myocardial infarction. Of the 14 patients, 8 underwent angiography 4 to 6 months after atherectomy; 5 patients had restenosis and 3 had widely patent grafts. Four other patients were clinically evaluated at 3 months after atherectomy. Two were asymptomatic, 1 had class II angina and 1 had class III angina. Transluminal atherectomy achieved excellent immediate results with a low incidence of major complications in the treatment of stenosed saphenous vein bypass grafts. However, preliminary follow-up results suggest a high incidence of restenosis. abstract_id: PUBMED:7788634 A new use of the Simpson percutaneous atherectomy catheter: resection of retained valve cusps of an in-situ vein graft. An in-situ saphenous vein graft occluded because of retained valve cusps. After aspiration thrombectomy, percutaneous transluminal balloon angioplasty was performed, but reocclusion occurred 11 months later. Following a second aspiration thrombectomy, the restenosed retained cusps were resected using the Simpson percutaneous atherectomy catheter. The graft remained patent until the patient's death from unrelated causes 6 months later. Answer: Percutaneous transluminal therapy of occluded saphenous vein grafts (SVGs) presents a significant challenge due to the high thrombus burden often encountered in these lesions. However, ultrasound thrombolysis has emerged as a promising therapeutic option for addressing this issue. A study reported the use of coronary ultrasound thrombolysis (CUT) in thrombus-rich lesions in SVGs, most of which were occluded. The patients treated with CUT experienced a device success rate of 70% and procedural success in 65% of cases, with a low rate of device-related adverse events, suggesting that ultrasound thrombolysis can effectively meet the challenge of treating occluded SVGs (PUBMED:9884375). The use of ultrasound thrombolysis in this context is particularly promising given the historical difficulties associated with percutaneous transluminal treatment of thrombotic vein grafts. Earlier attempts at percutaneous transluminal angioplasty in occlusive disease involving SVGs had mixed results, with technical failures and occurrences of thrombosis during the procedure (PUBMED:6965314). Moreover, complications such as distal embolization and no-reflow have been associated with percutaneous interventions in SVGs, as seen in cases where prolonged urokinase infusion was used for recanalization (PUBMED:2805060). To mitigate the risk of distal embolization, combined approaches using distal embolization protection and rheolytic thrombectomy have been reported to facilitate percutaneous revascularization of totally occluded SVGs (PUBMED:14517927). Additionally, proximal protection systems have been used successfully in recanalization of totally occluded SVGs in acute coronary syndrome, further demonstrating the evolution of techniques to improve outcomes in these challenging scenarios (PUBMED:20146345). In conclusion, while percutaneous transluminal therapy of occluded SVGs is challenging, the use of ultrasound thrombolysis, along with other adjunctive techniques, has shown promising results in overcoming these difficulties and improving patient outcomes.
Instruction: Are low back pain and low physical capacity risk indicators for dropout among recently qualified eldercare workers? Abstracts: abstract_id: PUBMED:20709891 Are low back pain and low physical capacity risk indicators for dropout among recently qualified eldercare workers? A follow-up study. Background: A high job turnover and dropout among eldercare workers has led to a significant shortage of qualified manpower in the Danish eldercare sector. Objectives: The predictive effect of some non-work-related causes for leaving the eldercare sector 2 years after qualification, physical capacity, duration, and severity of previous low back pain, was investigated. Design And Setting: A 2-year prospective cohort study of all the Danish eldercare workers, who finished their education during 2004 (n = 6347). Questionnaire data from 2004 were followed up by register data on attachment to labour market, educational status, and association to trade from Statistics Denmark in 2006. Methods: Data on physical capacity, duration, and severity of low back pain the last 12 months among the female participants were analysed by multinomial logistic regression to estimate odds-ratios for being either in the eldercare sector, in other health- and welfare sectors, in all other sectors, under education, or outside labour market, 2 years after qualification. Results: Disability due to and duration of low back pain were significant predictors for dropout from the eldercare sector 2 years after qualification. Low physical capacity was not. Data on duration of low back pain suggest a trend towards a dose-response relationship: The longer the duration of low back pain, the higher odds for dropout. Conclusions: Low back pain and disability due to low back pain during the last year of education were independent predictors for dropout from the eldercare sector 2 years after qualification. However, low self-rated physical capacity did not predict job dropout or turnover. abstract_id: PUBMED:27633246 Processes, barriers and facilitators to implementation of a participatory ergonomics program among eldercare workers. This study aimed to investigate the processes of a participatory ergonomics program among 594 eldercare workers with emphasis on identified risk factors for low back pain and solutions, and reveal barriers and facilitators for implementation. Sixty-nine per cent of the identified risk factors were physical ergonomic, 24% were organisational and 7% were psychosocial risk factors. Most solutions were organisational (55%), followed by physical (43%) and psychosocial solutions (2%). Internal factors (e.g. team or management) constituted 47% of the barriers and 75% of the facilitators. External factors (e.g. time, financial resources, collaboration with resident or relatives) constituted 53% of the barriers and 25% of the facilitators. This study revealed the processes and implementation of a participatory ergonomics program among eldercare workers. The findings can be transferred to workers, workplaces, health and safety professionals, and researchers to improve future participatory ergonomics programs. abstract_id: PUBMED:9201863 Risk indicators of low back pain among workers in Japan. Association of familial and physical factors with low back pain. Study Design: A questionnaire was given to 3,042 Japanese workers at a factory in 1992. It surveyed age, gender, weight, height, job classification, and work environment, as well as the perceived causes, onset age, and characteristics of low back pain. Family history of low back pain among first-degree relatives and perception of physical condition also were assessed. Objectives: To investigate the risk indicators of low back pain in Japanese workers with various kinds of job classifications in a manufacturing company. Summary Of Background Data: Risk indicators of low back pain in Japanese patients have not been fully investigated in previous studies. Methods: Prevalence rates and characteristics of low back pain were examined among 3,042 factory workers (2,517 men and 525 woman) with jobs with varying physical demands. In the analysis of risk indicators of low back pain, the odds ratios and 95% confidence intervals were computed. In addition, a multiple logistic analysis was performed to evaluate risk indicators of low back pain. Results: Point and lifetime prevalence of low back pain were correlated with the physical demands of the job. The perceived cause found to be most associated with low back pain were lifting in workers with jobs requiring moderate to heavy work and sports activity in sedentary workers. Family history of low back pain in parents, siblings, and children was a risk indicator of low back pain. The average age of the first attack of low back pain in workers with a family history of it in their parents was significantly younger than that in workers with no family history in a multiple logistic analysis in male workers, the physical work demands, age, and family history of low back pain in their parents were risk indicators; however, obesity was not a risk indicator. Physical and mental conditions of workers also were associated with low back pain. Conclusions: The physical job demands show a clear association with the point and lifetime prevalence of low back pain, and improvements in work conditions may decrease low back symptoms among workers. It is likely that a family history of low back pain and physical and mental conditions of workers also should be considered in the management of low back pain. abstract_id: PUBMED:24739372 Effort-reward imbalance and low back pain among eldercare workers in nursing homes: a cross-sectional study in Kagawa Prefecture, Japan. Objectives: To clarify effort-reward imbalance among eldercare workers in nursing homes and to examine the association between the effort-reward imbalance (ERI) and low back pain (LBP) among them. Methods: A total of 549 questionnaires were distributed to a random sample comprising 23% of the 79 special elderly nursing homes in Kagawa in 2013. A total of 467 eldercare workers (response rate 85%) participated in this study. Of the 467 eligible respondents, 372 (80%) completed all items of the ERI questionnaire. Complete data were available for 342 (73%) eldercare workers. Results: Of the 342 respondents, 215 (63%) had LBP at the time of the study, and 291 (85%) showed a critical "high cost/low gain" condition as determined by an effort-reward ratio >1. Multiple logistic regression analyses adjusted for age, gender, previous LBP experience, high-risk work and K6 score showed that eldercare workers with a high ERI had a higher risk for LBP than workers with a low ERI (OR, 1.96; 95% CI, 1.02-3.77). Conclusions: Most eldercare workers have a high ERI, and their LBP is associated with their ERI. Balancing effort and reward may be an important factor for improving LBP among eldercare workers in nursing homes. abstract_id: PUBMED:35441893 Associations between perceived quantitative work demands at different organisational levels and pain and sickness absence in eldercare workers: a multi-level longitudinal analysis. Purpose: Eldercare work is characterised by high quantitative work demands and high occurrence of musculoskeletal pain and sickness absence. Our aim was to investigate the association between quantitative demands aggregated at the different organizational levels of eldercare and low back pain (LBP) and sickness absence due to pain among workers. Methods: This study was conducted in 527 eldercare workers from 105 wards across 20 nursing homes in Denmark. We collected workers' perceived quantitative demands at baseline and workers' LBP and sickness absence repeatedly over the following year. We aggregated worker-level quantitative demands to the ward and nursing home-levels, and used mixed-effects regression models to investigate the associations between quantitative demands at different organizational levels and LBP and sickness absence over 1 year. Results: Across all models, increased quantitative demands (0-100 scale) at the worker-level was associated with an increased likelihood (OR 1.02) and intensity of LBP (β = 0.01). We did not identify any associations between quantitative demands at the ward-level and either of our outcomes. Across all models, increased quantitative demands at the nursing home-level was associated with increased days with sickness absence due to pain (β = 0.03 to 0.06). Conclusion: In eldercare, workers' perceived quantitative demands are associated with the presence and intensity of LBP. Further, quantitative demands across the overall nursing home-level are associated with sickness absence due to pain among eldercare workers. These results are of relevance to developing organisational interventions targeting quantitative demands to reduce sickness absence in eldercare. abstract_id: PUBMED:22814306 Does self-assessed physical capacity predict development of low back pain among health care workers? A 2-year follow-up study. Study Design: Prospective cohort study. Objective: To determine the prognostic value of self-assessed physical capacity for the development of low back pain (LBP) among female health care workers without LBP. Summary Of Background Data: High physical capacities in terms of strength, endurance, flexibility, and balance are assumed to prevent LBP among persons with high physical work demands. However, the few existing studies investigating this relationship show contrasting findings. Methods: Female health care workers answered a questionnaire about physical capacity in 2004, and days with LBP in 2005 and 2006. The odds ratios (ORs) for developing nonchronic (1-30 d of the past 12 mo) and persistent (>30 d of the past 12 mo) LBP in 2006 from self-assessed physical capacity were investigated with multiadjusted logistic regressions among female health care workers without LBP in 2005 (n = 1612). Results: Health care workers with low and medium physical capacity had increased risk of developing nonchronic LBP (OR = 1.52 [CI = 1.05-2.20] and OR = 1.37 [CI = 1.01-1.84], respectively), and health care workers with low physical capacity had an increased risk of developing persistent LBP (OR = 2.13 [CI = 1.15-3.96]), referencing those with high physical capacity. Conclusion: Self-assessed low physical capacity is a strong predictor for developing nonchronic and persistent LBP among pain-free female health care workers. Future intervention studies should investigate whether increased physical capacity, for example, through exercise training prevents development of LBP among female health care workers. abstract_id: PUBMED:30289190 Association between pain-related fear and presenteeism among eldercare workers with low back pain. Background: Presenteeism (work productivity loss at work) and low back pain (LBP) are prevalent among eldercare workers. Presenteeism is a serious emerging problem in an occupational setting. While many studies report the impact of pain-related fear on absenteeism, its impact on presenteeism remains unclear. Methods: We conducted a cross-sectional study to investigate the association between pain-related fear and presenteeism among 505 eldercare workers with LBP. We measured pain-related fear using the 11-item Tampa Scale for Kinesiophobia (TSK-11). We used the Work Limitations Questionnaire (WLQ) to estimate productivity loss due to presenteeism with items including Time Management, Mental-Interpersonal Demands, Physical Demands and Output Demands. The presenteeism was categorized into no (<5%), mild (5%-10.9%), moderate (11%-16.9%) and severe presenteeism (≥17%). We further performed ordinal logistic regression analyses, and the covariates were age, sex, pain intensity, pain disability and psychosocial factors. Multiple imputation was conducted to provide informed estimates for observations with missing data. Results: After adjustment, a higher TSK score was significantly associated with a higher presenteeism (proportional OR = 1.10, 95% CI = 1.06-1.15). A significant association of TSK with all WLQ subscales was retained even after the adjustment (Time Management: proportional OR = 1.04, 95% CI = 1.01-1.08; Mental-Interpersonal Demands: proportional OR = 1.08, 95% CI = 1.04-1.12; Physical Demands: proportional OR = 1.04, 95% CI = 1.01-1.08; and Output Demands: proportional OR = 1.06, 95% CI = 1.02-1.10). Conclusions: Our findings suggest that pain-related fear is an important factor related to presenteeism among eldercare workers with LBP. Significance: This study describes an independent association of pain-related fear with presenteeism among eldercare workers with low back pain. Pain-related fear could be considered a vital factor of presenteeism in addition to absenteeism. abstract_id: PUBMED:28719460 The Association Between Perceived Stress and Low Back Pain Among Eldercare Workers in Japan. Objective: We investigated the association between perceived stress and low back pain (LBP) among Japanese eldercare workers. Methods: Self-reported questionnaires were completed regarding perceived stress (Perceived Stress Scale-14), presence of memorable LBP in the preceding month, and other relevant factors (age, sex, body mass index, smoking status, job demands, job control, and social support) among 954 eldercare workers. Univariate and multivariate logistic regression analysis were performed to estimate odds ratios (OR) and 95% confidence intervals (95%CI). Results: Data from 571 eldercare workers were analyzed. In the multivariate logistic regression model, higher perceived stress was significantly associated with a higher prevalence of memorable LBP (adjusted OR [95% CI] = 1.05 [1.02 to 1.08]). Conclusions: Higher perceived stress was independently associated with LBP among eldercare workers. Our study indicates the importance of considering perceived stress as one of the factors related to LBP. abstract_id: PUBMED:23775865 Do self-reported psychosocial working conditions predict low back pain after adjustment for both physical work load and depressive symptoms? A prospective study among female eldercare workers. Objectives: To investigate whether psychosocial working conditions predict the development of low back pain (LBP) in female eldercare workers while adjusting for physical workload and depressive symptoms. Methods: We investigated risk for developing LBP for between 1 and 30 days in the past year and developing LBP for more than 30 days in the past year at follow-up in 1537 female eldercare workers with no LBP in the year before baseline. Data were analysed using logistic regression analysis adjusted for sociodemographic variables, health behaviours, physical workload and depressive symptoms at baseline. Follow-up ranged from 18 to 22 months. Results: High emotional demands, high and medium role conflicts and low influence predicted risk for reporting LBP for between 1 and 30 days in the past year at follow-up after adjustment for sociodemographics, health behaviours and physical workload. All associations became statistically non-significant when adjusted for depressive symptoms. Low and medium influence at work and high emotional demands predicted risk of reporting LBP for more than 30 days in the past year at follow-up, after adjustment for sociodemographics, health behaviours and physical workload. For employees with low (OR 4.16; 95% CI 1.36 to 12.75) and medium (OR 3.93; 95% CI 1.37 to 11.22) influence, this risk remained statistically significant after adjustment for depressive symptoms. Conclusions: Most psychosocial working conditions in this study were no longer associated with risk of LBP after adjustment for depressive symptoms. However, low and medium influence at work predicted risk for LBP for more than 30 days after adjustment for both physical workload and depressive symptoms. abstract_id: PUBMED:37829938 The Prevalence and Risk Factors of Low Back Pain Among Office Workers in Saudi Arabia. Background and objective Low back pain (LBP) is a significant and prevalent musculoskeletal disorder associated with the workplace that impacts individuals, families, communities, healthcare systems, and companies. Although LBP prevalence and risk factors have been studied in various professional categories in Saudi Arabia, there is no data on the prevalence of LBP among office workers and related risk factors. This study aimed to determine the prevalence of LBP among office workers in Saudi Arabia and identify major risk factors. Methodology This cross-sectional study was conducted among office workers in Saudi Arabia of both genders aged over 18 years living in five geographical regions: Central, Western, Southern, Eastern, or Northern. A self-administered questionnaire was distributed among office workers using an online survey. The questionnaire comprised sociodemographic characteristics, the prevalence of work-related LBP, and work-related characteristics. Results Among 604 office workers, 51.7% were males and 43.5% were aged between 18 and 30 years old. The prevalence of work-related LBP in this study was 59.9%. Independent risk factors for LBP include being overweight or obese, sleep disturbance, previous history of back trauma, increasing years of working in the office, changes made to workstation/work habits to reduce the risk of LBP, and frequent work stress. Protective risk factors for LBP include regular physical exercise and satisfaction with the current job. Conclusions LBP was widely prevalent among office workers in Saudi Arabia. Office workers with elevated body mass index (BMI) who had a previous history of back trauma and had sleep disturbance or work stress in the past month were at significant risk for LBP. Occupational health and safety programs are vital for building ergonomically safe working conditions, and regular physical exercise promotion could alleviate the risk of LBP at work. Answer: Yes, low back pain and low physical capacity are risk indicators for dropout among recently qualified eldercare workers. A study found that disability due to and duration of low back pain were significant predictors for dropout from the eldercare sector 2 years after qualification. However, low physical capacity itself was not a predictor for job dropout or turnover (PUBMED:20709891). Additionally, self-assessed low physical capacity is a strong predictor for developing nonchronic and persistent low back pain among pain-free female health care workers, which could contribute to dropout rates (PUBMED:22814306). Furthermore, the presence of low back pain in eldercare workers is associated with effort-reward imbalance (PUBMED:24739372), perceived stress (PUBMED:28719460), and pain-related fear, which can lead to presenteeism (PUBMED:30289190). These factors may contribute to the decision to leave the job. Additionally, psychosocial working conditions, such as low influence at work, can predict the risk for low back pain for more than 30 days, even after adjusting for depressive symptoms (PUBMED:23775865). Therefore, addressing low back pain and its associated factors, such as physical and psychosocial work conditions, is important for retaining recently qualified eldercare workers.
Instruction: Is serum homocysteine level elevated in colorectal tumor? Abstracts: abstract_id: PUBMED:15725713 Is serum homocysteine level elevated in colorectal tumor? Background/aims: Although it has been known that folate will participate in colorectal carcinogenesis, the relationship between blood folate level and colorectal cancer is less consistent. The blood folate level does not reflect the systemic folate status. By contrast, serum homocysteine has become a sensitive marker for the folate deficiency. We attempted to explain the correlation between folate and colorectal cancer according to the serum homocysteine level. Methods: We reviewed the clinical records, including alcohol history of 184 patients taking the colonoscopy and measurement of the serum homocysteine level at Health Promotion Center from 2001 to 2002. One hundred fifty-one of 184 were included, excluding 33 patients with previous history of colonic polyp, cerebrovascular, cardiovascular attack and thromboembolism. They were divided into the normal control (n=111) and the adenomatous polyp group (n=40). We had selected the colorectal cancer group (n=50) from the collection list of the tissue and blood bank less than 3 months storage interval. Results: There was no significant difference in the mean serum homocysteine level among three groups. However, in the subjects with high alcohol consumption, there was a significant difference in the mean serum homocysteine between the normal control (n=7) and the adenomatous polyp group (n=9) (10.2 vs 15.1 mumol/L, p<0.05). Conclusions: There was no correlation of serum homocysteine and colorectal tumor. However, in the subjects with high alcohol consumption, high serum homocysteine might be related to the development of adenomatous polyp. abstract_id: PUBMED:21264789 Serum folate and homocysteine levels are associated with colon tumorigenesis in end-stage renal disease patients. The aim of this study was to evaluate the effect of folate and homocysteine on colon tumorigenesis by performing colonoscopy and examining serum folate and homocysteine levels in end-stage renal disease (ESRD) patients. We performed colonoscopy in 72 ESRD patients who were undergoing hemodialysis and also measured their serum folate and homocysteine levels. Serum folate and homocysteine concentrations of the 72 ESRD patients were 6.0±3.9 μg/l and 37.3±25.5 μmol/l, respectively. Colorectal neoplasia was detected in 47 (65%) of the patients. Compared to a control group, ESRD patients had significantly more and larger neoplasia (P=0.002 and 0.001, respectively). Multivariate analysis revealed that ESRD patients with lower levels of serum homocysteine had significantly more and larger neoplasia than those with higher levels (P=0.02 and 0.03, respectively). In addition, patients with a shorter duration of hemodialysis were likely to have larger neoplasia. ESRD patients had higher than normal serum homocysteine levels. Interestingly, patients with lower homocysteine levels were likely to carry more and larger colorectal neoplasia. These results suggest that suppression of folate metabolism and an elevated serum homocysteine concentration are inversely associated with colon tumorigenesis in ESRD patients. abstract_id: PUBMED:36760381 The association of serum folate and homocysteine on venous thromboembolism in patients with colorectal cancer: a cross-sectional study. Background: Venous thromboembolism is a common complication in patients with colorectal cancer who exhibit high homocysteine and low folate levels. However, whether venous thrombosis is the result of a direct effect of folic acid or the presence of a homocysteine-mediated mediating effect cannot be determined. This study aimed to explore the association and mediating effects of serum folate and homocysteine on venous thromboembolism in patients with colorectal cancer. Methods: This study included patients with colorectal cancer who were admitted to the First Hospital of Shanxi Medical University from May 2020 to May 2022. The patients' medical records were reviewed to collect information on general demographic characteristics, the prevalence of venous thromboembolism on admission, laboratory blood indices, serum folate, and serum homocysteine. SPSS 26.0 software was used for data collation and statistical analysis; the χ2 test was utilized for univariate analysis and unconditional logistic regression was applied for multivariate analysis. R 4.1.2 was used to perform the mediating effect test. Results: A total of 236 colorectal cancer patients were investigated. The prevalence of colorectal cancer combined with venous thromboembolism was 15.3%; serum folate was <10.75 nmol/L in 25.4% of patients; and serum homocysteine was ≥22 µmol/L in 30.5% of patients. After controlling for confounding factors, the risk of venous thromboembolism was 2.48 times greater [95% confidence interval (CI): 1.04 to 5.94] in patients with low serum folate (<10.75 nmol/L) than in those with high serum folate (≥10.75 nmol/L). Also, the risk of venous thromboembolism was greater in those with high serum homocysteine (≥22 µmol/L) [odds ratio (OR) =2.99. 95% CI: 1.11 to 8.08]. The mediating effect test showed no direct effect of serum folate on venous thromboembolism combined with colorectal cancer, and a full mediating effect of serum homocysteine between serum folate and venous thromboembolism combined with colorectal cancer, with a mediating effect value of 0.002 and a total effect value of 0.0054. Conclusions: Serum folate influences the formation of venous thromboembolism through serum homocysteine. It is recommended that the nutritional supplementation of patients be enhanced to control serum folate and serum homocysteine levels. abstract_id: PUBMED:10206314 Serum folate, homocysteine and colorectal cancer risk in women: a nested case-control study. Accumulating evidence suggests that folate, which is plentiful in vegetables and fruits, may be protective against colorectal cancer. The authors have studied the relationship of baseline levels of serum folate and homocysteine to the subsequent risk of colorectal cancer in a nested case-control study including 105 cases and 523 matched controls from the New York University Women's Health Study cohort. In univariate analyses, the cases had lower serum folate and higher serum homocysteine levels than controls. The difference was more significant for folate (P < 0.001) than for homocysteine (P = 0.04). After adjusting for potential confounders, the risk of colorectal cancer in the subjects in the highest quartile of serum folate was half that of those in the lowest quartile (odds ratio, OR = 0.52, 95% confidence interval, CI = 0.27-0.97, P-value for trend = 0.04). The OR for the highest quartile of homocysteine, relative to the lowest quartile, was 1.72 (95% CI = 0.83-3.65, P-value for trend = 0.09). In addition, the risk of colorectal cancer was almost twice as high in subjects with below-median serum folate and above-median total alcohol intake compared with those with above-median serum folate and below-median alcohol consumption (OR = 1.99, 95% CI = 0.92-4.29). The potentially protective effects of folate need to be confirmed in clinical trials. abstract_id: PUBMED:20501764 Serum adiponectin, leptin, C-peptide, homocysteine, and colorectal adenoma recurrence in the Polyp Prevention Trial. Background: Serum adiponectin, leptin, C-peptide, and homocysteine are indicators for obesity, hyperinsulinemia, and chronic inflammation, which have all been associated with colorectal cancer. Aims: To determine whether serum adiponectin, leptin, C-peptide, and homocysteine are associated with fat, fiber, fruit and vegetable, flavonol, or dry bean intake and colorectal adenoma recurrence. Methods: Using logistic regression, we estimated odds ratios (OR) and 95% confidence intervals (95% CI) for adenoma recurrence in 627 participants from the control arm of the Polyp Prevention Trial, a 4-year trial that examined the effectiveness of a low-fat, high-fiber, high-fruit and vegetable diet on adenoma recurrence. Results: Serum concentrations of C-peptide and homocysteine were inversely related to fiber, fruit and vegetable, and flavonol intake and positively related to percentage of calories from fat (all P(trend) < or = 0.01). High homocysteine concentrations were associated with any (4th versus 1st quartile: OR, 2.26; 95% CI, 1.30-3.94) and more than one adenoma recurrence (OR, 2.11; 95% CI, 1.01-4.40). Individuals in the highest, versus lowest, tertile of serum leptin concentration had a decreased risk of advanced adenoma recurrence (OR, 0.22; 95% CI, 0.06-0.79). Conclusion: Our results suggest that serum homocysteine may serve as an indicator of dietary exposure, including a low-fat and high-fiber, high-fruit and vegetable, and high-flavonol diet, as well as colorectal adenoma recurrence. Impact: Discovering biomarkers that are both modifiable and can predict cancer risk is critical. We identified serum homocysteine as a novel indicator that is modified by diet and predicts risk of adenoma recurrence. abstract_id: PUBMED:35158926 Longitudinal Analysis of 1α,25-dihidroxyvitamin D3 and Homocysteine Changes in Colorectal Cancer. Background: 1α,25-dihydroxycholecalciferol (1,25(OH)2D3) and homocysteine are known to play a role in the pathophysiology of colorectal cancer (CRC). In health, the two changes are inversely proportional to each other, but little is known about their combined effect in CRC. Methods: The serum 1,25(OH)2D3 and the homocysteine levels of eighty-six CRC patients were measured, who were enrolled into four cohorts based on the presence of metastases (Adj vs. Met) and vitamin D3 supplementation (ND vs. D). Results: 1,25(OH)2D3 was constant (Adj-ND), increased significantly (Adj-D, p = 0.0261), decreased (Met-ND), or returned close to the baseline after an initial increase (Met-D). The longitudinal increase in 1,25(OH)2D3 (HR: 0.9130, p = 0.0111) positively affected the overall survival in non-metastatic CRC, however, this effect was cancelled out in those with metastasis (p = 0.0107). The increase in homocysteine negatively affected both the overall (HR: 1.0940, p = 0.0067) and the progression-free survival (HR: 1.0845, p = 0.0073). Lower 1,25(OH)2D3 and/or higher homocysteine level was characteristic for patients with higher serum lipids, albumin, total protein, white blood cell and platelet count, male sex, and right-sided tumors. No statistically justifiable connection was found between the target variables. Conclusions: A measurement-based titration of vitamin D3 supplementation and better management of comorbidities are recommended for CRC. abstract_id: PUBMED:29876016 Meta-analysis of homocysteine-related factors on the risk of colorectal cancer. The major objective of this meta-analysis was to examine the association between homocysteine and related measurements with the risk of colorectal cancer (CRC) and adenomatous polyps (AP). Many studies presented an association between methyltetrahydrofolate reductase (MTHFR) gene polymorphisms and risk of CRC. Yet, there have been variances on what homocysteine-related and dietary factors play on the risk of CRC or AP, in association with folate-related one carbon metabolism pathways. We pooled analyses to examine comprehensively all homocysteine related factors including blood tests measurements, dietary, and lifestyle factors for their associations with the risk of CRC and AP. We located 86 articles published from 1995 to 2017. The results revealed that elevated homocysteine levels and decreased vitamin B12 levels in the blood were associated with increased risks of CRC and AP, with case-control studies having greater significant effect sizes compared to that of cohort-control studies. Decreased methionine and vitamin B6 levels in the blood increased the risk of CRC. MTHFR 677 TT and CT polymorphisms were interacting with elevated homocysteine levels to increase the risk of CRC. Decreased dietary fiber, methionine, vitamin B9 or folate, and vitamin B6 intakes were associated with increased risks of CRC; whereas, increased dietary B12 intake, alcohol intake, and smoking were associated with increased risk of CRC. Further studies can be conducted to examine the mechanistic differences of blood levels of homocysteine-related and dietary factors, including different types of dietary fiber, for their effects on decreasing the homocysteine toxicity to prevent CRC. abstract_id: PUBMED:32994931 Cysteine and homocysteine as biomarker of various diseases. Cysteine and homocysteine (Hcy), both sulfur-containing amino acids (AAs), produced from methionine another sulfur-containing amino acid, which is converted to Hcy and further converted to cysteine. This article aims to highlight the link between cysteine and Hcy, and their mechanisms, important functions, play in the body and their role as a biomarker for various types of diseases. So that using cysteine and Hcy as a biomarker, we can prevent and diagnose many diseases. This review concluded that hyperhomocysteinemia (elevated levels of homocysteine) is considered as toxic for cells and is associated with different health problems. Hyperhomocysteinemia and low levels of cysteine associated with various diseases like cardiovascular diseases (CVD), ischemic stroke, neurological disorders, diabetes, cancer like lung and colorectal cancer, renal dysfunction-linked conditions, and vitiligo. abstract_id: PUBMED:23441607 Higher plasma homocysteine is associated with increased risk of developing colorectal polyps. Colorectal adenomas are considered to be precursors of colorectal cancer. B-vitamins (i.e., folate, vitamin B(6) and B(12)) are involved in homocysteine metabolism and play an important role as coenzymes in 1-carbon metabolism, which is thought to have a critical role in the progression of colorectal polyps. The purpose of this study was to examine the effects of B-vitamins and homocysteine on the risk of developing colorectal polyps. Forty-eight participants with colorectal polyps [29 adenomatous polyps (AP), 19 hyperplastic polyps (HP)], and 96 age- and sex-matched healthy controls were recruited. Fasting blood was drawn from each participant to measure hematological parameters, plasma pyridoxal 5'-phosphate (PLP), serum folate and vitamin B(12), and plasma homocysteine. Participants with AP and HP had significantly higher plasma homocysteine levels than did healthy controls. There was no significant difference in serum folate and vitamin B(12) and plasma PLP among the 3 groups. B-vitamins had no significant effect on the risk of colorectal polyps. However, participants with higher plasma homocysteine [odds ratio (OR) = 1.87, 95% confidence interval (CI) = 1.13, 3.08) level exhibited significantly increased risk of colorectal polyps after adjusting for potential confounders. Plasma homocysteine was a strong predictor of the risk of colorectal polyps in participants with adequate B-vitamins status. abstract_id: PUBMED:20032440 Serum homocysteine, cholesterol, retinol, alpha-tocopherol, glycosylated hemoglobin and inflammatory response during therapy with bevacizumab, oxaliplatin, 5-fluorouracil and leucovorin. Background: Targeted agents present with a new spectrum of side-effects, including toxicities that negatively impact the risk of atherosclerosis. The aim of the present study was to evaluate the effect of the combination of targeted therapy and chemotherapy on serum homocysteine and other laboratory parameters of cardiovascular risk in patients with metastatic colorectal carcinoma. Patients And Methods: Thirty-one patients with metastatic colorectal carcinoma treated with the combination of bevacizumab, oxaliplatin, 5-fluorouracil and leucovorin were studied before and during the therapy. Results: Serum homocysteine decreased significantly throughout the course of treatment. Total cholesterol and low-density lipoprotein cholesterol also decreased significantly during the first month of therapy. In contrast, serum retinol significantly increased during the second and third months of treatment. A significant increase in glycosylated hemoglobin was also observed. After an initial rise, serum C-reactive protein (CRP) and carcinoembryonic antigen (CEA) were significantly lower compared to baseline throughout the course of treatment. Serum ferritin increased throughout most of the course of treatment. A significant correlation was observed between CRP and high-density lipoprotein cholesterol, retinol, ferritin, and CEA. CEA correlated with hemoglobin, retinol, and ferritin. Retinol correlated significantly with hemoglobin. Conclusion: Tumor control, reflected in lower CEA, resulted in suppression of the acute phase response and generally in favorable effects on laboratory parameters indicative of risk factors of atherosclerosis, including lower homocysteine concentrations, and lower total and LDL cholesterol. Answer: The relationship between serum homocysteine levels and colorectal tumors is complex and varies across different studies. Some studies have found no significant difference in serum homocysteine levels among individuals with colorectal tumors compared to controls. For instance, one study found no correlation between serum homocysteine levels and colorectal tumors, although it did note that high serum homocysteine might be related to the development of adenomatous polyps in subjects with high alcohol consumption (PUBMED:15725713). Conversely, other studies have suggested an association between serum homocysteine levels and colorectal tumorigenesis. A study on end-stage renal disease (ESRD) patients found that lower levels of serum homocysteine were associated with more and larger colorectal neoplasia, suggesting an inverse relationship between serum homocysteine concentration and colon tumorigenesis in ESRD patients (PUBMED:21264789). Another study indicated that elevated homocysteine levels were associated with an increased risk of developing colorectal polyps, which are considered precursors to colorectal cancer (PUBMED:23441607). Furthermore, a meta-analysis reported that elevated homocysteine levels and decreased vitamin B12 levels in the blood were associated with increased risks of colorectal cancer and adenomatous polyps (PUBMED:29876016). Additionally, a study found that high homocysteine concentrations were associated with colorectal adenoma recurrence (PUBMED:20501764). In the context of venous thromboembolism in patients with colorectal cancer, one study found that serum folate influences the formation of venous thromboembolism through serum homocysteine, suggesting a mediating effect of homocysteine between serum folate and venous thromboembolism in these patients (PUBMED:36760381). Overall, while some studies do not find a direct correlation between serum homocysteine levels and colorectal tumors, others suggest that elevated homocysteine levels may be associated with an increased risk of colorectal tumorigenesis or recurrence. The evidence indicates that the relationship may be influenced by factors such as renal disease status, alcohol consumption, and folate metabolism.
Instruction: Is there a need for a GP consultant at a university hospital? Abstracts: abstract_id: PUBMED:18823571 Is there a need for a GP consultant at a university hospital? Background: Patients in hospital can develop complaints unrelated to the condition they are admitted for. The treating specialist will then call upon a co-specialist who is specialized in the clinical picture associated with the new complaint. For such a complaint, the GP is usually the first contact, when the patient is not in hospital. Normally specialists only encounter patients GPs have selected for referral. The risk of the specialist overestimating the predictive value of 'unselected' complaints and symptoms of a serious condition is high. This may lead to an overuse of diagnostic treatments. Such treatments weigh more heavily on the patient, cause inadequate use of hospital facilities and, as a consequence, generate higher costs. Because of these considerations, we wished to investigate if there is a need for the GP as a consultant for new complaints during hospital admittance. Method: The files of a random sample of patients who had an interdisciplinary consultation during their stay in hospital were judged by an expertpanel whether the consultation fitted the expertise of a GP. Results: In 28 out of 84 files the consultation fitted the expertise of a GP; most cases concerned a specific condition that is not part of the specialist's expertise, most frequently dermatological problems. In a minority of cases the specialist is confronted with a clinical problem with symptoms of which the cause is not clear, for example fever. Conclusion: Generally, the consultations concern serious, often very complex conditions, i.e. cases that should be assessed by a specialist. Nevertheless, the expert panel's judgment of the interdisciplinary consultations shows that in more than half of the dermatological cases and in a limited number of consultations by a specialist of internal medicine and geriatrics the problems fit the GP's expertise. Given the morbidity in academic hospitals we suppose that the results of a similar study in a peripheral hospital might even show more perspective for a GP consultant. These results offer sufficient arguments to start a pilotstudy into the role of a GP consultant in hospital. abstract_id: PUBMED:3373464 The role of the hospital consultant in general practitioner prescribing. A questionnaire was designed and posted to 600 general medical practitioners in the West Midlands. The results indicated that the prescribers were reassured by the hospital consultant although independence was maintained in prescribing decisions. The hospital medical team was recognized for its expertise rather than the consultant alone. A table was constructed from the results to show the therapy areas in which doctors prefer to refer their patients. Consultant's influence in prescribing is most likely where communication between GP and consultant is optimal. The generalized influence of the consultant as implied by the Greenfield report remains unproven. Any influence is much more specific and may depend upon therapy area, the consultant specialty, and the standing of the hospital medical team. The innovativeness of the prescriber which was also considered may also have an influence as to how the consultant's recommendations will be accepted. abstract_id: PUBMED:8163913 Indications for paediatric tonsillectomy GP versus Consultant perspective. Tonsillectomy is a commonly performed operation in children. Although the justification for this procedure has been debatable in the past, more rigorous criteria in defining the need for tonsillectomies are becoming established. The role of the primary care physician in the management of tonsillar disease in children is important as the decision to refer a child for tonsillectomy is made by the general practitioner. A questionnaire survey of 400 general practitioners (GPs) and 31 consultant ENT surgeons was carried out to establish the important criteria used in deciding the need for paediatric tonsillectomy. We found that in general there was a good correlation between GP and Consultant indications for tonsillectomy. However there were two important areas of difference, while most of the GPs felt that recurrent ear infections and 'glue ear' were important indications for tonsillectomy, Consultants did not. Therefore we must be aware of the current recommendations for tonsillectomy and inform present and future GPs. abstract_id: PUBMED:35271417 University students coping with COVID-19 challenges: Do they need help? Objective: This study examined students' perceived levels of coping and need for help, and the relationship with their risks and protective factors during the early stages of the COVID-19 pandemic in Israel. Participants: A sample of 4,710 university students responded anonymously to an online questionnaire. Methods: The risk and protective factors assessed were: specific COVID-related functional difficulties, levels of social support, and trust in the university authorities and the government in handling the crisis. Results: The most frequent COVID-19 related difficulties were: worry for the health of family members (64.4%) and difficulties with online learning (63.2%). Almost half of the students (47.3%) reported that they needed at least a little help. Hierarchical regression analyses for coping and need for help showed that COVID-related difficulties predicted lower levels of perceived coping and a higher need for help. Conclusions: Universities need to build specific services for students to provide concrete support throughout this continuous health crisis. abstract_id: PUBMED:28992145 Burnout syndrome among non-consultant hospital doctors in Ireland: relationship with self-reported patient care. Objective: Intensive workload and limited training opportunities for Irish non-consultant hospital doctors (NCHDs) has a negative effect on their health and well-being, and can result in burnout. Burnout affects physician performance and can lead to medical errors. This study examined the prevalence of burnout syndrome among Irish NCHDs and its association with self-reported medical error and poor quality of patient care. Methods: A cross-sectional quantitative survey-based design. Setting: All teaching hospitals affiliated with University College Cork. Participants: NCHDs of all grades and specialties. Intervention(s): The following instruments were completed by all participants: Maslach Burnout Inventory-Human Service Survey (MBI-HSS), assessing three categories of burnout syndrome: Emotional exhaustion (EE), Personal Achievement (PA) and Depersonalization (DP); questions related to self-reported medical errors/poor patient care quality and socio-demographic information. Main Outcome Measure(s): Self-reported measures of burnout and poor quality of patient care. Results: Prevalence of burnout among physicians (n = 265) was 26.4%. There was a significant gender difference for EE and DP, but none for PA. A positive weak correlation was observed between EE and DP with medical error or poor patient care. A negative association was reported between PA and medical error and reduced quality of patient care. Conclusions: Burnout is prevalent among NCHDs in Ireland. Burnout syndrome is associated with self-reported medical error and quality of care in this sample population. Measures need to be taken to address this issue, with a view to protecting health of NCHDs and maintaining quality of patient care. abstract_id: PUBMED:27739603 The Ambiguous Effect of GP Competition: The Case of Hospital Admissions. In the theoretical literature on general practitioner (GP) behaviour, one prediction is that intensified competition induces GPs to provide more services resulting in fewer hospital admissions. This potential substitution effect has drawn political attention in countries looking for measures to reduce the growth in demand for hospital care. However, intensified competition may induce GPs to secure hospital admissions a signal to attract new patients and to keep the already enlisted ones satisfied, resulting in higher admission rates at hospitals. Using both static and dynamic panel data models, we aim to enhance the understanding of whether such relations are causal. Results based on ordinary least square (OLS) models indicate that aggregate inpatient admissions are negatively associated with intensified competition both in the full sample and for the sub-sample patients aged 45 to 69, while outpatient admissions are positively associated. Fixed-effect estimations do not confirm these results though. However, estimations of dynamic models show significant negative (positive) effects of GP competition on aggregate inpatient (outpatient) admissions in the full sample and negative effects on aggregate inpatient admissions and emergency admissions for the sub-sample. Thus, intensified GP competition may reduce inpatient hospital admissions by inducing GPs to provide more services, whereas, the alternative hypothesis seems valid for outpatient admissions. © 2016 The Authors. Health Economics Published by John Wiley & Sons, Ltd. abstract_id: PUBMED:23760692 Introducing an annualised contract for a consultant team in a district general hospital. The majority of physicians work a weekly timetable consisting of programmed activities (PAs) defined by the consultant contract. This paper describes the implementation of an annualised contract within a gastroenterology department, which is located across two district general hospital sites within the same trust. The perceived benefits of the system include the introduction of a new out-of-hours emergency endoscopy service, more efficient backfilling of vacant endoscopy lists and greater transparency of work patterns and workload between colleagues and within the trust. abstract_id: PUBMED:25745644 Are hospital admissions reduced by Acute Medicine consultant telephone triage of medical referrals? The NHS in England is facing well-documented pressures related to increasing acute hospital admissions at a time when the acute medical bed-base is shrinking, doctors working patterns are increasingly fragmented and many acute hospital trusts are operating a financial deficit. Novel strategies are required to reduce pressure on the acute medical take. We conducted a prospective cohort study to assess the impact of acute medicine consultant triage of referrals to the acute medical take on the number of acute hospital admissions as compared to a historical control cohort. The introduction of an acute medicine consultant telephone triage service was associated with a 21% reduction in acute medical admissions during whole the study period. True admission avoidance was achieved for 28.5% of referrals triaged by an acute medicine consultant. The greatest benefit was seen for consultant-triage of GP referrals; 43% of all GP referrals resulted in a decision not to admit and in 25% the referral was avoided by giving advice alone. Consultant telephone triage of referrals to the acute medical take substantially reduces the number of acute medical admissions as compared to triage by a trained band 6 or higher nurse coordinator. Our service is cost effective and can be job-planned using 6 full-time equivalent acute medicine consultants. The telephone triage service also provides additional benefits to admission numbers beyond its hours of operation and the general management of the acute medical take. abstract_id: PUBMED:21337780 Implementation of computerized material management system at the University of São Paulo University Hospital This is an experience report on the process of implementing a computerized materials management system at the University Hospital of the University of São Paulo. The system was called Materials Management System (SGM). The process comprised four phases: choice of the model and information tool; restructuring of the logistic materials process at the hospital; restructuring of the support areas and establishment of the SGM system itself. A study carried out at the Surgical Center after the establishment demonstrated that, when comparing materials consumption and inventories in the SGM with the Traditional System, the consumed quantity and the cost of inventory materials at the Unit decreased. abstract_id: PUBMED:12040625 Absenteeism of nursing workers at a university hospital The purpose of this study was to identify the rates of absenteeism of nursing workers from a University Hospital due to diseases. Therefore, the author verified the number of absences due to health problems registered during 12 months, calculating the frequency rate as well as the percentage of wasted time. Data were collected and transcribed after consultation to reports organized by the human resources department of the institution studied. Results showed that the higher rate of absence occurred in the Pediatric Unit (If = 0.35) and the higher percentage of wasted time occurred in the Emergency Unit (Tp = 4.19). The author concluded that the rate of absenteeism due to disease was high among the workers, indicating the need of creating a database in order to optimize the registration of absences and their control as well as of stimulating the development of future research. Answer: The need for a GP consultant at a university hospital is suggested by the findings in the abstract from PUBMED:18823571. The study found that in 28 out of 84 files reviewed, the consultation during a patient's hospital stay fitted the expertise of a GP, with most cases concerning specific conditions not part of the specialist's expertise, particularly dermatological problems. Additionally, there were cases where the specialist faced clinical problems with unclear causes, such as fever, which could also fit within a GP's expertise. The expert panel's judgment indicated that more than half of the dermatological cases and a limited number of consultations by specialists in internal medicine and geriatrics could have been addressed by a GP. The authors concluded that these results provide sufficient arguments to start a pilot study into the role of a GP consultant in a hospital setting. They also speculated that a similar study in a peripheral hospital might show even more perspective for a GP consultant, given the morbidity in academic hospitals.
Instruction: Is the Kinect system suitable for evaluation of the hip joint range of motion and as a screening tool for femoroacetabular impingement (FAI)? Abstracts: abstract_id: PUBMED:25391527 Is the Kinect system suitable for evaluation of the hip joint range of motion and as a screening tool for femoroacetabular impingement (FAI)? Background: In the clinical evaluation of femoroacetabular impingement (FAI), there is a lack of quantitative, reliable and informative assessment methods for the overall functional capability of an individual. Objective: We compared clinical and radiological measurements of the hip joint with a new methodology based on the concept of 3-dimensional reachable workspace using Microsoft Kinect. Methods: We assessed the correlation between the alpha angle of Nötzli on full-length radiographs and the clinical internal rotation. We evaluated the accuracy of joint positions and angles of the hip between the Kinect system and clinical examination including range of motion (ROM). Results: The results of our clinical trial with 24 study participants showed a significant difference between normal internal rotation (> 21°) and reduced internal rotation (⩽ 21°) in comparison to the radiological alpha angle of Nötzli (P=0.026). The acquired reachable Kinect data demonstrated a moderate agreement between the Kinect and clinical examination (correlation coefficients between 0.230 and 0.375). Conclusions: The findings suggest that a higher grade alpha angle of Nötzli accompanies reduced clinical internal rotation. The Kinect system provides reliable results of hip ROM. However, further test series must be performed for the application of Kinect in the clinical evaluation of FAI. abstract_id: PUBMED:21751065 In vitro validation and reliability study of electromagnetic skin sensors for evaluation of end range of motion positions of the hip. There is growing evidence that femoroacetabular impingement (FAI) is a probable risk factor for the development of early osteoarthritis in the nondysplastic hip. As FAI arises with end range of motion activities, measurement errors related to skin movement might be higher than anticipated when using previously reported methods for kinematic evaluation of the hip. We performed an in vitro validation and reliability study of a noninvasive method to define pelvic and femur positions in end range of motion activities of the hip using an electromagnetic tracking device. Motion data, collected from sensors attached to the bone and skin of 11 cadaver hips, were simultaneously obtained and compared in a global reference frame. Motion data were then transposed in the hip joint local coordinate systems. Observer-related variability in locating the anatomical landmarks required to define the local coordinate system and variability of determining the hip joint center was evaluated. Angular root mean square (RMS) differences between the bony and skin sensors averaged 3.2° (SD 3.5°) and 1.8° (SD 2.3°) in the global reference frame for the femur and pelvic sensors, respectively. Angular RMS differences between the bony and skin sensors in the hip joint local coordinate systems ranged at end range of motion and dependent on the motion under investigation from 1.91 to 5.81°. The presented protocol for evaluation of hip motion seems to be suited for the 3-D description of motion relevant to the experimental and clinical evaluation of femoroacetabular impingement. abstract_id: PUBMED:23263850 Evaluation of range of motion restriction within the hip joint. In total hip arthroplasty, determining the impingement free range of motion requirement is a complex task. This is because in the native hip, motion is restricted by both impingement as well as soft tissue restraint. The aim of this study is to determine a range of motion benchmark which can identify motions which are at risk from impingement and those which are constrained due to soft tissue. Two experimental methodologies were used to determine motions which were limited by impingement and those motions which were limited by both impingement and soft tissue restraint. By comparing these two experimental results, motions which were limited by impingement were able to be separated from those motions which were limited by soft tissue restraint. The results show motions in extension as well as flexion combined with adduction are limited by soft tissue restraint. Motions in flexion, flexion combined with abduction and adduction are at risk from osseous impingement. Consequently, these motions represent where the maximum likely damage will occur in femoroacetabular impingement or at most risk of prosthetic impingement in total hip arthroplasty. abstract_id: PUBMED:34144389 Restoring range of motion in reduced acetabular version by increasing femoral antetorsion - What about joint load? Background: Acetabular retroversion results in reduced range of motion, and is thought to contribute to femoroacetabular impingement. Severe retroversion can be corrected with a periacetabular osteotomy, which is a technically demanding intervention. In this study, we investigated whether increasing femoral antetorsion is a potential alternative to restore the range of motion and how this approach would affect hip joint loading. Methods: Six different finite element models of the same subject were built from MRI and used to simulate different load scenarios during stance phase, including healthy and pathological configurations with different acetabular version and femoral torsion angles. The subject's gait was analysed in our gait lab and motion data as well as joint reaction forces were integrated into the model. Hip range of motion, hip abductor muscle forces as well as localization and magnitude of hip joint loads were determined. Findings: The negative effects of acetabular retroversion on hip range of motion including flexion and internal rotation can be reversed by increasing femoral anteversion. The rotation of the femur furthermore affected muscular functionality by shortening the moment arms of the hip abductor muscles, resulting in increased abductor muscle forces, joint reaction forces and hip joint loading. Interpretation: Even though increased femoral antetorsion can compensate for the loss of hip range of motion due to reduced acetabular version, rotational ostotomy of the proximal femur is likely to alter muscular moment arms and therefore increase hip joint load, conflicting the goal of a long-term healthy joint. abstract_id: PUBMED:30509854 Hip muscle weakness and reduced joint range of motion in patients with femoroacetabular impingement syndrome: a case-control study. Background: Femoroacetabular impingement (FAI) syndrome is a hip joint motion-related clinical disorder characterized by abnormal contact between the hip joint structures. Abnormal hip morphology and joint pain may impair the hip joint range of motion (ROM) and muscle function. However, FAI effects on hip joint ROM and muscle strength remain controversial. Objectives: The purpose of this study was to compare hip joint ROM and muscle strength between FAI syndrome patients and healthy controls. Methods: Twenty FAI syndrome male patients and 20 healthy male controls (CG) matched for age (FAI=28±6 years; CG=27±5 years), body mass (FAI=81±12kg; CG=80±13kg) and height (FAI=177±6cm; CG=178±6cm), participated in the study. Hip joint ROM for flexion, internal rotation and external rotation were assessed through goniometry. Maximal isometric strength for hip flexion, extension, abduction and adduction were evaluated through hand-held dynamometry. Results: Hip joint ROM was significantly lower in FAI syndrome patients compared with CG for passive flexion (-4%; effect size - ES=0.65), active internal rotation (-42%; ES=1.60), active external rotation (-28%; ES=1.46) and passive external rotation (-23%; ES=1.63). FAI patients' hip extensors (-34%; ES=1.46), hip adductors (-33%; ES=1.32), and hip flexors (-25%; ES=1.17) were weaker compared to the CG subjects. Conclusions: FAI syndrome patients presented both hip muscle weakness and reduced joint ROM compared to match CG. abstract_id: PUBMED:34585303 Hip joint range of motion is restricted by pain rather than mechanical impingement in individuals with femoroacetabular impingement syndrome. Introduction: Discerning whether range of motion (ROM) is restricted by morphology or other pain sources is challenging in patients with femoroacetabular impingement syndrome (FAIS). Computed tomography (CT) motion simulation provides a hypothetical ROM based on morphology. This study aimed to explore associations between ROM measured using CT motion simulation and maximum passive ROM measured clinically using three dimensional (3D) motion analysis in patients with FAIS, prior to and post arthroscopic hip surgery. Materials And Methods: Eight males with FAIS (in total 12 hip joints) were included in this explorative feasibility study. Participants were examined using CT according to a low-dose protocol prior to and 7-months post arthroscopic surgery. Software was used to simulate at which ROM the impingement would occur. With the hip in 90 degrees' flexion, maximum passive range of internal hip rotation, and maximum passive internal hip rotation coupled with adduction was examined clinically using 3D motion analysis pre- and postoperatively. Spearman rank correlation coefficients and linear regressions examined associations between methods. Results: Preoperatively, the correlation between maximum internal hip rotation measured using CT motion simulation and 3D motion analysis was strong (r = 0.71, p = 0.009). Linear regressions demonstrated that maximal internal rotation measured using CT motion simulation was predominantly larger than when measured using 3D motion analysis. Postoperatively, and when maximum internal rotation was coupled with adduction, no correlations were found between the two methods. Conclusions: The hypothetical morphology restricted ROM is larger than clinically assessed pain restricted ROM, both prior to and post hip arthroscopy. These findings suggest that ROM is restricted by pain rather than mechanical, morphology-based impingement in individuals with FAIS. abstract_id: PUBMED:28695136 Range of Hip Joint Motion Is Correlated With MRI-Verified Cam Deformity in Adolescent Elite Skiers. Background: Radiologically verified cam-type femoroacetabular impingement (FAI) has been shown to correlate with reduced internal rotation, reduced passive hip flexion, and a positive anterior impingement test. Purpose: To validate how a clinical examination of the hip joint correlates with magnetic resonance imaging (MRI)-verified cam deformity in adolescents. Study Design: Cross-sectional study; Level of evidence, 3. Methods: The sample group consisted of 102 adolescents with the mean age 17.7 ± 1.4 years. The hip joints were examined using MRI for measurements of the presence of cam (α-angle ≥55°) and clinically for range of motion (ROM) in both supine and sitting positions. The participants were divided into a cam and a noncam group based on the results of the MRI examination. Passive hip flexion, internal rotation, anterior impingement, and the FABER (flexion, abduction, and external rotation) test were used to test both hips in the supine position. With the participant sitting, the internal/external rotation of the hip joint was measured in 3 different positions of the pelvis (neutral, maximum anteversion, and retroversion) and lumbar spine (neutral, maximum extension, and flexion). Results: Differences were found between the cam and noncam groups in terms of the anterior impingement test (right, P = .010; left, P = .006), passive supine hip flexion (right: mean, 5°; cam, 117°; noncam, 122° [P = .05]; and left: mean, 8.5°; cam, 116°; noncam, 124.5° [P = .001]), supine internal rotation (right: mean, 4.9°; cam, 24°; noncam, 29° [P = .022]; and left: mean, 4.8°; cam, 26°; noncam, 31° [P = .028]), sitting internal rotation with the pelvis and lumbar spine in neutral (right: mean, 7.95°; cam, 29°; noncam, 37° [P = .001]; and left: mean, 6.5°; cam, 31.5°; noncam, 38° [P = .006]), maximum anteversion of the pelvis and extension of the lumbar spine (right: mean, 5.2°; cam, 20°; noncam, 25° [P = .004]; and left: mean, 5.85°; cam, 20.5; noncam, 26.4° [P = .004]), and maximum retroversion of the pelvis and flexion of the spine (right: mean, 8.4°; cam, 32.5°; noncam, 41° [P = .001]; and left: mean, 6.2°; cam, 36°; noncam, 42.3° [P = .012]). The cam group had reduced ROM compared with the noncam group in all clinical ROM measures. Conclusion: The presence of cam deformity on MRI correlates with reduced internal rotation in the supine and sitting positions, passive supine hip flexion, and the impingement test in adolescents. abstract_id: PUBMED:34510222 Less hip range of motion is associated with a greater alpha angle in people with longstanding hip and groin pain. Purpose: A higher alpha angle has been proposed to correlate with lower hip range of motion, but the association in people with longstanding hip and groin pain is currently unclear. The aims were to: (1) assess the association between range of motion and alpha angle in patients with longstanding hip and groin pain; (2) examine if a cut-off value in range of motion variables could identify patients with an alpha angle above or below 60°. Methods: Seventy-two participants were consecutively recruited from an orthopaedic department after referral for hip- and groin-related pain. Passive hip range of motion was measured in flexion, internal rotation with 90° hip flexion, internal rotation in neutral hip position, external rotation with 90° hip flexion, and abduction. The alpha angle was calculated from a frog-leg lateral radiograph. Linear regression examined the association between range of motion and alpha angle, and an ROC-curve analysis was performed to identify the sensitivity and specificity of range of motion cut-offs. Results: Lower range of motion in internal rotation in flexion, external rotation, and abduction were associated with higher alpha angle. Internal rotation of 27° or less displayed good sensitivity (81%) and specificity (85%) to detect an alpha angle above 60°, while a cut-off of 41° in external rotation and 27° in abduction showed a sensitivity of 72% and specificity of 50% and 60%, respectively. Conclusion: Less internal rotation in flexion, external rotation, and abduction are associated with a greater alpha angle in a cohort of people with longstanding hip and groin pain. A cut-off of 27° in internal rotation has good sensitivity and specificity to identify people with an alpha angle above or below 60° and have the potential to be used in the clinical setting to identify patients that require further imaging, or that are unlikely to have cam morphology. Level Of Evidence: II. abstract_id: PUBMED:20411346 An integrated platform for hip joint osteoarthritis analysis: design, implementation and results. Purpose: We present a software designed to improve hip joint osteoarthritis (OA) understanding using 3D anatomical models, magnetic resonance imaging (MRI) and motion capture. Methods: In addition to a standard static clinical evaluation (anamnesis, medical images examination), the software provides a dynamic assessment of the hip joint. The operator can compute automatically and in real-time the hip joint kinematics from optical motion capture data. From the estimated motion, the software allows for the calculation of the active range of motion, the congruency and the center of rotation of the hip joint and the detection and localization of the femoroacetabular impingement region. All these measurements cannot be performed clinically. Moreover, to improve the subjective reading of medical images, the software provides a set of 3D measurement tools based on MRI and 3D anatomical models to assist and improve the analysis of hip morphological abnormalities. Finally, the software is driven by a medical ontology to support data storage, processing and analysis. Results: We performed an in vivo assessment of the software in a clinical study conducted with 30 professional ballet dancers, a population who are at high risk of developing OA. We studied the causes of OA in this selected population. Our results show that extreme motion exposes the morphologically "normal" dancer's hip to recurrent superior or posterosuperior FAI and to joint subluxation. Conclusion: Our new hip software includes all the required materials and knowledge (images data, 3D models, motion, morphological measurements, etc.) to improve orthopedists' performances in hip joint OA analysis. abstract_id: PUBMED:20807405 Validity and test-retest reliability of manual goniometers for measuring passive hip range of motion in femoroacetabular impingement patients. Background: The aims of this study were to evaluate the construct validity (known group), concurrent validity (criterion based) and test-retest (intra-rater) reliability of manual goniometers to measure passive hip range of motion (ROM) in femoroacetabular impingement patients and healthy controls. Methods: Passive hip flexion, abduction, adduction, internal and external rotation ROMs were simultaneously measured with a conventional goniometer and an electromagnetic tracking system (ETS) on two different testing sessions. A total of 15 patients and 15 sex- and age-matched healthy controls participated in the study. Results: The goniometer provided greater hip ROM values compared to the ETS (range 2.0-18.9 degrees; P < 0.001); good concurrent validity was only achieved for hip abduction and internal rotation, with intraclass correlation coefficients (ICC) of 0.94 and 0.88, respectively. Both devices detected lower hip abduction ROM in patients compared to controls (P < 0.01). Test-retest reliability was good with ICCs higher 0.90, except for hip adduction (0.82-0.84). Reliability estimates did not differ between the goniometer and the ETS. Conclusions: The present study suggests that goniometer-based assessments considerably overestimate hip joint ROM by measuring intersegmental angles (e.g., thigh flexion on trunk for hip flexion) rather than true hip ROM. It is likely that uncontrolled pelvic rotation and tilt due to difficulties in placing the goniometer properly and in performing the anatomically correct ROM contribute to the overrating of the arc of these motions. Nevertheless, conventional manual goniometers can be used with confidence for longitudinal assessments in the clinic. Answer: The Kinect system has been explored as a tool for evaluating hip joint range of motion (ROM) and for screening femoroacetabular impingement (FAI). A study comparing clinical and radiological measurements of the hip joint with a methodology based on 3-dimensional reachable workspace using Microsoft Kinect found that the Kinect system provided reliable results of hip ROM. The study showed a significant difference between normal internal rotation and reduced internal rotation in comparison to the radiological alpha angle of Nötzli, suggesting that a higher grade alpha angle accompanies reduced clinical internal rotation. However, the Kinect data demonstrated only moderate agreement with clinical examination, with correlation coefficients between 0.230 and 0.375. The study concluded that while the Kinect system shows promise, further test series must be performed for its application in the clinical evaluation of FAI (PUBMED:25391527). In summary, the Kinect system has potential as a tool for evaluating hip joint ROM and could be useful in screening for FAI, but additional research and validation are needed before it can be widely adopted in clinical practice.
Instruction: Is long-term bladder deterioration inevitable following successful isolated bladder outlet procedures in children with neuropathic bladder dysfunction? Abstracts: abstract_id: PUBMED:18355848 Is long-term bladder deterioration inevitable following successful isolated bladder outlet procedures in children with neuropathic bladder dysfunction? Purpose: Bladder decompensation is well described following artificial urinary sphincter implantation in neurogenic bladders. We evaluated the long-term results of various bladder outlet procedures in a subset of patients with neurogenic bladder and isolated outlet deficiency. Materials And Methods: We retrospectively reviewed the charts of 15 consecutive patients who underwent bladder outlet procedures during a 10-year period for urinary incontinence associated with neuropathic bladder dysfunction. Postoperative success was defined as a dry interval of at least 4 hours. Results: Preoperative evaluation showed a smooth bladder in 11 patients with vesicoureteral reflux and hydronephrosis in 2. Using the minimal acceptable capacity for age, mean percent expected bladder capacity for age was 89% +/- 25%, capacity below 20 cm H(2)O was 81% and capacity below 30 cm H(2)O was 89%. Mean preoperative expected capacity for age was 60% +/- 18%. Mean postoperative followup was 11.2 years. Postoperatively, 11 patients achieved initial dryness but 9 subsequently presented with recurrent incontinence and 2 presented with upper tract deterioration. Four cases failed the initial bladder outlet procedure. Salvage procedures included augmentation cystoplasty in all 15 patients, combined with repeat bladder outlet procedure in 4 and bladder neck closure in 2. Mean time to augmentation cystoplasty was 39.6 +/- 28 months. Conclusions: Isolated bladder outlet procedures for neurogenic incontinence portend a poor long-term outcome, requiring augmentation cystoplasty despite the use of anticholinergic medications and strict followup. Preoperative urodynamic evaluation does not predict the need or timing from the initial bladder outlet procedure for future augmentation cystoplasty. abstract_id: PUBMED:23707450 Promising long-term outcome of bladder autoaugmentation in children with neurogenic bladder dysfunction. Purpose: We evaluated the long-term outcome of bladder autoaugmentation in children with neurogenic bladder dysfunction. Materials And Methods: Data were compiled from the records of 25 children with a median age of 9.3 years (range 0.9 to 14.2) who underwent detrusor myotomy between 1992 and 2008. All patients were diagnosed with small bladder capacity, low compliance and high end filling pressures, and were unresponsive to clean intermittent catheterization and anticholinergics. Results: Median followup was 6.8 years (range 0.1 to 15.6). Median postoperative bladder capacity was unchanged or decreased to 95 ml (range 25 to 274) during the first 3 months compared to a median preoperative capacity of 103 ml (14 to 250). At 5 months postoperatively median bladder capacity increased significantly to 176 ml (range 70 to 420, p<0.01). This increase remained significant during the rest of followup. Median bladder compliance doubled after 1 year to 10 ml/cm H2O (range 1 to 31, p<0.05) compared to the preoperative level, and further increased to 17 ml/cm H2O (5 to 55) at 5 years (p<0.05). Median maximal detrusor pressure was 43 cm H2O (range 8 to 140) preoperatively. This value decreased significantly postoperatively (p<0.01) and at final followup it was 26 cm H2O (range 6 to 97). Kidney function developed normally in all patients except 1 with persistent uremia. Reflux was alleviated in 7 of 9 cases. Of the patients 18 became continent on clean intermittent catheterization. Conclusions: Bladder autoaugmentation in children with neurogenic bladder dysfunction offers, after a transient decrease in bladder capacity, a long lasting increase in capacity and compliance, while the end filling pressure decreases. abstract_id: PUBMED:20206979 Last resort in devastated bladder outlet: bladder neck closure and continent vesicostomy--long-term results and comparison of different techniques. Objectives: To compare surgical techniques and long-term outcomes in patients undergoing bladder neck closure (BNC) and continent vesicostomy for devastated bladder outlet. Methods: A total of 17 patients who underwent BNC, omental interposition, and continent vesicostomy between 1994 and 2008 were analyzed. Indication for surgery was recurrent anastomotic stricture combined with postradical prostatectomy incontinence (n = 10), postvulvectomy (n = 1), and neurogenic bladder dysfunction (n = 6). Diversion was performed in 8 patients with normal bladder capacity (>300 mL in adults) through a Mitrofanoff appendicovesicostomy (n = 4) or ileal intussusception valve (n = 4). Simultaneous ileocecal bladder augmentation was performed in 9 patients with primarily reduced bladder capacity, and either the in situ embedded appendix (n = 4) or an ileal intussusception valve (n = 5) served as the continent outlet. The stoma was placed in the lower abdomen using the "butterfly technique" (n = 8) or in the umbilicus (n = 9). Results: Medium follow-up was 68 months (range: 12-129). Primary BNC was successful in all patients and primary continence rate was 82%. Three patients (18%) suffered from continence failure, caused by reduced bladder capacity in 2 cases. The third patient presented with an iatrogenic destruction of his Mitrofanoff appendicovesicostomy. These patients were successfully reconstructed by ileocecal bladder augmentation with an ileal intussusception valve as the continent outlet. Four patients (23%) developed stomal stenosis (3/8 with an abdominal stoma and 1/9 with an umbilical stoma). Patients with simultaneous bladder augmentation had higher bladder capacity. No patients developed ureteral obstruction. Conclusions: This technique is an effective, last resort treatment for patients with nonreconstructible bladder outlet. abstract_id: PUBMED:28169459 Long-term effectiveness and complication rates of bladder augmentation in patients with neurogenic bladder dysfunction: A systematic review. Aims: To systematically evaluate effectiveness and safety of bladder augmentation for adult neuro-urological patients. Methods: The Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) statement was followed for review of publications. The Medline, Embase, and Cochrane controlled trial databases and clinicaltrial.gov were searched until January 2015. No limitations were placed on date or language. Non-original articles, conference abstracts, and publications involving children and animals were excluded. Risk-of-bias and confounder assessment was performed. Results: A total of 20 studies including 511 patients were eligible for inclusion. The level of evidence for the included studies was low, most level 4 studies with only one level 3 study. The data were narratively synthesized. Across all studies high risk-of bias and confounding was found. Primary outcomes were assessed in 16 of the 20 studies and showed improved quality of life and anatomical changes as well as stable renal function. The secondary outcomes were reported in 17 of the 20 studies and urodynamic parameters and continence all demonstrated improvement after bladder reconstruction. Long-term complications continued up to 10 years post-operatively, including bowel dysfunction in 15% of the patients, stone formation in 10%, five bladder perforations and one bladder cancer. Conclusions: Available studies are not plentiful and of relatively poor quality, appropriately designed prospective studies are urgently needed. Despite this, bladder augmentation appears to be a highly effective procedure at protecting the upper urinary tract and improving quality of life. However, it is associated with relatively high morbidity in both the short and long term. abstract_id: PUBMED:30084093 Bladder augmentation in children: current problems and experimental strategies for reconstruction. Bladder augmentation is a demanding surgical procedure and exclusively offered for selected children and has only a small spectrum of indications. Paediatric bladder voiding dysfunction occurs either on a basis of neurological dysfunction caused by congenital neural tube defects or on a basis of rare congenital anatomic malformations. Neurogenic bladder dysfunction often responds well to a combination of specific drugs and/or intermittent self-catheterization. However, selected patients with spinal dysraphism and children with congenital malformations like bladder exstrophy and resulting small bladder capacity might require bladder augmentation. Ileocystoplasty is the preferred method of bladder augmentation to date. Because of the substantial long-and short-term morbidity of augmentation cystoplasty, recent studies have tried to incorporate new techniques and technologies, such as the use of biomaterials to overcome or reduce the adverse effects. In this regard, homografts and allografts have been implemented in bladder augmentation with varying results, but recent studies have shown promising data in terms of proliferation of urothelium and muscle cells by using biological silk grafts. abstract_id: PUBMED:18206925 The evolution of obstruction induced overactive bladder symptoms following urethrolysis for female bladder outlet obstruction. Purpose: Bladder outlet obstruction following stress incontinence surgery may present as a spectrum of lower urinary tract symptoms. We evaluated the prevalence and impact of persistent overactive bladder symptoms following urethrolysis for iatrogenic bladder outlet obstruction. Materials And Methods: In a retrospective review we identified 40 patients who underwent urethrolysis. All patients underwent a standardized urological evaluation. Patients identified with genitourinary erosion, neurogenic bladder dysfunction and preexisting overactive bladder were excluded. Urethrolysis outcomes were determined by subjective bladder symptoms and objective parameters. Validated questionnaires were completed to assess symptom bother, patient satisfaction and quality of life. Statistical analyses were performed using Stata, version 9.0. Results: A total of 40 patients were included in the study with a mean +/- SD followup of 13 +/- 11 months (range 3 to 38). Of the patients 34 patients presented with obstructive symptoms, while 36 had overactive bladder symptoms. Obstructive symptoms resolved in 28 of the 34 patients (82%), while overactive bladder symptoms resolved completely in only 12 (35%) and they were significantly improved in 4 (12%). Overall 20 patients (56%) were on antimuscarinics for refractory overactive bladder and 8 ultimately required sacral neuromodulation. Pre-urethrolysis detrusor overactivity was more likely in patients with persistent overactive bladder symptoms than in those in whom overactive bladder symptoms resolved (70% vs 38%). Patients with persistent overactive bladder had significantly greater symptom severity/bother, and decreased perception of improvement and quality of life following urethrolysis. Conclusions: Following urethrolysis overactive bladder symptoms may remain refractory in 50% or greater of patients, which has a negative impact on quality of life and the impression of improvement after surgery. Detrusor overactivity demonstrated preoperatively may be useful for predicting who may have persistent overactive bladder symptoms despite an effective urethrolysis procedure. abstract_id: PUBMED:29388934 The application of the physical factors for the medical rehabilitation of the children presenting with neurogenic dysfunction of the bladder This article describes the modern technologies for the medical rehabilitation of the children presenting with neurogenic dysfunction of the bladder aimed at improving the act of urination at all levels of innervation. The most frequent variant of neurogenic dysfunction of the bladder in the children is the hyper-reflexive one; it is this variant in which the most pronounced disorders of urodynamics due to intravesical hypertension and detrusor hypoxia are observed. The urodynamic disorders are known to contribute to the formation of the characteristic clinical picture associated with the hyper-reflexive urinary bladder dysfunction in the children characterized by the mandatory urination syndrome, pollakiuria, imperative urges, and mandatory urinary incontinence in the combination with enuresis. The treatment of urinary bladder dysfunction includes both behavioural and medication therapeutic modalities. The "golden standard" for the pharmacotherapy of neurogenic bladder dysfunction of the hyper-reflexive type in the childhood is the use of M-cholinoblockers, such as oxybutynin. The popularity of the physiotherapeutic methods is attributable to their influence on the main links of pathogenesis, the absence of side effects, and the possibility of application in the children of the early age. The combined treatment of neurogenic bladder dysfunction includes the use of a wide range of physiotherapeutic methods the action of which is aimed at regulating the act of urination at all levels of innervation of the bladder, normalizing the tone of the muscles of the bladder, eliminating sphincter insufficiency, improving circulation and accelerating the maturation of the neuromuscular apparatus of the pelvic organs. At present, the natural and preformed methods of physiotherapy are finding the wide application for the management of neurogenic dysfunction of the bladder in the children in the conjunction with therapeutic physical exercises, massage, and the biological feedback (BFB) technique. abstract_id: PUBMED:804046 Intermittent catheterization in neurogenic bladder dysfunction. Experience with intermittent catheterization in 449 patients with traumatic cord bladder during a 9-year period is presented. There were 317 patients who were discharged from the hospital free of the catheter and maintain balanced bladder function on long-term followup. Of these catheter-free patients 45 per cent have sterile urine at any one time. Intermittent catheterization is the method of choice to establish a catheter-free state and reduce infection among spinal cord injury patients. It is a useful conservative method in long-term therapy of patients with non-traumatic neurogenic bladder dysfunction. abstract_id: PUBMED:38057255 Effect of bladder volume and compliance on ultrasonographic measurement of bladder wall thickness in children with neurogenic bladder dysfunction. Background: Spina bifida is the most common cause of neurogenic bladder dysfunction in children. Measurement of bladder wall thickness (BWT) is non-invasive, low-cost, and may be complementary to urodynamics in demonstrating bladder condition. It is still unknown the wall thickness response to volume change in bladders with different compliance states. Objective: To determine the most appropriate measurement technique by revealing the differences in measurements performed in different compliance and volume conditions in children with neurogenic bladder. Study Design: In the prospective study in 2022, patients with spina bifida who continued their urological follow-up in our spina bifida center were included. Patients with a diagnosis of the neurogenic bladder who performed clean intermittent catheterization at least 4-6 times a day and had a recent urodynamic result in the last 6 months were included in the study. According to urodynamic results, patients were divided into two groups normocompliant (NC) and low-compliant (LC) bladders. BWT measurements were made from the anterior, posterior, right lateral, and left lateral walls of the bladder 3 times as full, half-volume, and an empty bladder. Results: The study included 50 patients (NC group n:21, LC group n:29). The mean age of 24 female and 26 male patients was 6.45 ± 4.07 years. In all BWT measurements, a significant increase in wall thickness was observed with decreasing bladder volume. The best correlation between bladder volume and BWT was found in the anterior wall with a weak negative correlation (p = 0.049, r = -0.280). However, lateral wall measurements were significantly higher in low compliant patients compared to normocompliant patients when the bladder was full. The mean right lateral wall thickness was 1.58 ± 0.68 mm in the NC group and 2.18 ± 1.35 mm in the LC group (p = 0.044). Left lateral wall thickness was 1.45 ± 0.44 mm in the NC group and 2.02 ± 1.4 mm in the LC group (p = 0.033). Discussion: Although standardization has been tried to be achieved in BWT measurements, we understand from the studies in the literature that there is no unity in practice. Conclusion: Lateral wall measurements were found to be significantly higher in LC patients compared to normocompliants in full bladder suggesting that more accurate BWT follow-up can be performed with lateral wall measurements in neurogenic bladder patients if bladder compliance is low. The present study seems to be the first study in the literature in which bladder compliance and ultrasonographic bladder wall thickness measurements were evaluated together in children with neurogenic bladder dysfunction. abstract_id: PUBMED:10588270 Bladder dysfunction in children with vesico-ureteric reflux. Vesico-ureteric reflux and non-neurogenic bladder dysfunction are closely related, although a causal relationship has been established only for severe forms of detrusor-sphincter dyscoordination. There are several urodynamic studies reporting high frequency of bladder instability and/or detrusor-sphincter dyscoordination in children with reflux. The latter includes an element of functional outflow obstruction and is the most serious, since it accompanies kidney damage. When instability is the only urodynamic abnormality damage is absent. There are indications that treatment of bladder dysfunction increases spontaneous resolution of reflux and, furthermore, that bladder dysfunction is a negative prognostic factor following antireflux surgery. Recently also, gross reflux in infant boys was seen to associate with bladder dysfunction in addition to earlier finding of congenital malformation of the ureterovesical junction. However, no comparisons have emerged on the outcome following treatment of bladder dysfunction and following observation only. In conclusion, children with reflux on chemoprophylaxis prior to reimplantation must always be assessed for bladder dysfunction. This is especially important when there are recurrent urinary tract infections. Answer: Long-term bladder deterioration is not necessarily inevitable following successful isolated bladder outlet procedures in children with neuropathic bladder dysfunction, but it appears to be a significant risk. According to a study (PUBMED:18355848), isolated bladder outlet procedures for neurogenic incontinence had a poor long-term outcome, with many patients requiring augmentation cystoplasty despite the use of anticholinergic medications and strict follow-up. The preoperative urodynamic evaluation did not predict the need or timing for future augmentation cystoplasty after the initial bladder outlet procedure. However, other studies suggest that there are procedures that can have promising long-term outcomes. For instance, bladder autoaugmentation showed a significant long-term increase in bladder capacity and compliance, with a decrease in end filling pressure in children with neurogenic bladder dysfunction (PUBMED:23707450). Additionally, bladder neck closure and continent vesicostomy have been effective as a last resort treatment for patients with nonreconstructible bladder outlet (PUBMED:20206979). A systematic review (PUBMED:28169459) indicated that bladder augmentation appears to be highly effective at protecting the upper urinary tract and improving quality of life, but it is associated with relatively high morbidity in both the short and long term. Furthermore, the use of biomaterials and new techniques in bladder augmentation is being explored to reduce the adverse effects associated with traditional augmentation cystoplasty (PUBMED:30084093). In conclusion, while there is a risk of long-term bladder deterioration following isolated bladder outlet procedures in children with neuropathic bladder dysfunction, it is not inevitable, and there are various surgical options and management strategies that can lead to improved outcomes and bladder function over the long term. However, the potential for complications and the need for additional interventions, such as augmentation cystoplasty, should be considered and discussed with patients and their families.
Instruction: Does socioeconomic status affect outcomes in early inflammatory arthritis? Abstracts: abstract_id: PUBMED:3358803 The relationship of socioeconomic status to subsequent health status in systemic lupus erythematosus. We examined the relationship of socioeconomic status to health status, as determined by the Arthritis Impact Measurement Scales, in 78 systemic lupus erythematosus patients who had been entered into a prospective study. After controlling for age, disease duration, and disease severity, a significant relationship between socioeconomic status and outcome was not demonstrated. All study subjects had health insurance for medical services. The results have potential implications for health care policy. abstract_id: PUBMED:25399388 Does socioeconomic status affect outcomes in early inflammatory arthritis? Data from a canadian multisite suspected rheumatoid arthritis inception cohort. Objective: To assess the effect of socioeconomic status (SES) on outcomes in patients with early inflammatory arthritis, using data from the Canadian Early Arthritis Cohort (CATCH) study. Methods: In an incident cohort, 2023 patients were recruited, and allocated to low SES or high SES groups based on education and income. Outcomes at baseline and 12 months were analyzed in relation to SES including the 28-joint Disease Activity Score (DAS28), Simplified Disease Activity Index (SDAI), pain, patient's global assessment scale (PtGA), the Health Assessment Questionnaire-Disability Index (HAQ-DI), and the SF12-v2 Health Survey, using the ANOVA, chi-squared test, and regression analyses. Results: The CATCH population had 43% with high school education or less and 37% in the low-income group (< 50,000 Can$ per annum household income). The low-education group had higher DAS28 at baseline (p = 0.045), becoming nonsignificant at 12 months and lower physical component score on SF12-v2 at baseline (p = 0.022). Patients in the low-income group presented with higher HAQ-DI (p = 0.017), pain (p = 0.035), PtGA (p = 0.004), and SDAI (p = 0.022). Low-income versus high-income groups were associated with an OR above the median for HAQ-DI (1.20; 95% CI 1.00-1.45), PtGA (1.27; 95% CI 1.06-1.53), and SDAI (1.25; 95% CI 1.02-1.52) at baseline. The association with low income persisted at 12 months for HAQ-DI (OR 1.30; 95% CI 1.02-1.67), but not for other variables. Conclusion: Low SES was initially associated with higher disease activity, pain, and PtGA, and poorer function. At 1 year, outcomes were similar to those with high SES, with the exception of HAQ-DI. abstract_id: PUBMED:38419335 Impact of socioeconomic status on biologics utilization in rheumatoid arthritis: revealing inequalities and healthcare efficiency. Background/aims: This cross-sectional study aimed to investigate biologics treatment disparities in rheumatoid arthritis (RA) patients based on socioeconomic status (SES). Methods: Data from the KOrean Observational Study Network for Arthritis (KORONA) database were analyzed to assess various factors associated with SES, health behaviors, and biologics use. Logistic regression and structured equation modeling (SEM) were utilized for data analysis. Results: Among 5,077 RA patients included, 393 (7.7%) patients were identified as biologics users. Within the entire cohort, 31.8% of the participants were in the low-income and low-education groups, and 39.3% of the participants were in the high-income and high-education groups. Despite the patients with low income or low education experienced higher disease activity at diagnosis, had more comorbidities, exhibited higher medication compliance, underwent more check-ups, and had more hospital admissions than their counterparts, the odds of patients with low-income receiving biologics were 34% lower (adjusted odds ratio = 0.76, 95% confidence interval: 0.60-0.96, p = 0.021) after adjustment for demographics and comorbidities. SEM and pathway analyses confirmed the negative impact of low SES on biologics use. Conclusions: The findings suggest that SES plays a significant role in biologics use among RA patients, indicating potential healthcare inefficiencies for low SES patients. Moreover, adverse healthcare habits negatively affect biologics use in RA patients. The study highlights the importance of considering socioeconomic factors while discussing biologics use and promoting equitable access to biologics for optimal RA management. abstract_id: PUBMED:35190241 Does Dual-Eligible Medicare/Medicaid Insurance Status as a Surrogate for Socioeconomic Status Compromise Total Knee Arthroplasty Outcomes? Background: Medicare/Medicaid dual-eligible patients who undergo primary total knee arthroplasty (TKA) demonstrate poor outcomes when compared to patients with other payers. We compare Medicare/Medicaid dual-eligible patients vs Medicare and Medicaid only patients at a single hospital center. Methods: All patients who underwent TKA for aseptic arthritis between August 9, 2016 and December 30, 2020 with either Medicare or Medicaid insurance were retrospectively reviewed. 4599 consecutive TKA (3749 Medicare, 286 Medicare/Medicaid dual eligibility, and 564 Medicaid) were included. Groups were compared using appropriate tests for direct comparisons and regression analysis. Results: Patients with dual eligibility and Medicaid insurance were less likely to be white and married, more likely to be female and current smokers, and more likely to have COPD, mild liver disease, diabetes mellitus, malignancy, and HIV/AIDS, but had a lower age-adjusted Charleson Comorbidity Index when compared to Medicare patients. When controlling for smoking status and medical comorbidities, patients with dual eligibility and Medicaid insurance stayed in the hospital 0.64 and 0.39 additional days (P < .001), respectively, were more likely to be discharged to subacute rehab (RR 2.01, 1.49, P < .001) and acute rehab (RR 2.22, 2.46, P = .007, < .001), and were 2.14 and 1.73 times more likely to return to the ED within 90 days (P < .001) compared to Medicare patients. Conclusion: Value-based healthcare may disincentivize treating patients with low socioeconomic status, represented by Medicaid and dual-eligible insurance status, by their association with increased postoperative healthcare utilization, and less risky patients may be prioritized. abstract_id: PUBMED:23401367 Independent influences of current and childhood socioeconomic status on health outcomes in a North Carolina family practice sample of arthritis patients. Objective: Compelling evidence suggests that socioeconomic status (SES) is a determinant of health outcomes among persons with arthritis. SES in early life has likewise been associated with various aspects of health, but the connection between childhood SES and health among people with arthritis remains to be investigated. The purpose of this study was to determine the influences of current and childhood SES on self-reported disability, depression, and physical and mental health among people with self-reported doctor-diagnosed arthritis. Methods: Data originated from a North Carolina network of primary care centers. Participants with self-reported arthritis with complete sociodemographic and relevant health information were retained in our sample (n = 782). We created summary measures for current and childhood SES from indicators of education, occupation, and homeownership, using parental SES as a proxy for participants' childhood SES. Linear regression models were used to assess the associations between health outcomes and SES variables separately and together, adjusting for key covariates. Results: Lower childhood and current SES scores were associated with worse disability and physical health. Current SES was furthermore associated with mental health and depressive symptoms. Associations of low current and childhood SES with health outcomes remained significant when concurrently included in a linear model. Conclusion: Childhood and current SES are both determinants of health among persons with arthritis. This underscores the importance of childhood SES as a determinant of adult health among individuals with arthritis. Further studies should focus on these associations in different populations and across different types of arthritis. abstract_id: PUBMED:37089622 The relation between socioeconomic status and patient symptoms before and one year after lower extremity arthroplasty. Aims: To determine whether there was a relation between socioeconomic status (SES) and patient symptoms before and one year after total knee arthroplasty (TKA), and/or total hip arthroplasty (THA) and whether a change in symptoms was clinically relevant. Patients And Methods: A secondary analysis of a prospective cohort study was conducted on SES and osteoarthritis symptoms of patients (≥45 years old) who received a primary TKA or THA between 2016 and 2018. The relation between SES and respectively pre- and postoperative and change in patient-reported outcome measures including the Oxford Knee Score (OKS), Oxford Hip Score (OHS), Western Ontario and McMaster Universities Arthritis Index (WOMAC), the visual analog scale (VAS) for pain and the EuroQol 5-Dimensions (EQ-5D) were assessed using linear mixed-effects regression models adjusted for age and sex. The following potential confounding variables were considered in the regression models: body mass index (BMI), American Society of Anesthesiologists (ASA)- classification, Charnley-classification, smoking status, and alcohol consumption. Results: Patients with lower SES were mostly female, had a higher BMI and ASA-classification compared to patients with a higher SES. Patients with lower SES reported lower OKS (β = 3.78, P = 0.001). Patients undergoing THA reported lower scores for the OHS (β = 4.78, P = 0.001), WOMAC (β = 11.7, P = 0.001), and less pain (VAS, β = -0.91, P = 0.001). No statistically significant differences between SES groups were seen in the quality of life and health status as measured with the EQ-5D. Conclusion: Patients with a lower socioeconomic status reported worse symptoms and showed less clinically relevant improvement at one-year follow-up. abstract_id: PUBMED:34042052 Racial/Ethnic and Socioeconomic Disparities in Osteoarthritis Management. This article reviews the literature on racial and socioeconomic disparities in the management of osteoarthritis. Treatments investigated include arthritis education, dietary weight management, exercise/physical therapy, pharmacologic therapy with nonsteroidal antiinflammatory drugs and opioids, intra-articular steroid injections, and total joint replacement. The amount of evidence for each treatment modality varied, with the most evidence available for racial and socioeconomic disparities in total joint arthroplasty. Black patients, Hispanic patients, and patients with low socioeconomic status (SES) are less likely to undergo total joint replacement than white patients or patients with high SES, and generally have worse functional outcomes and more complications. abstract_id: PUBMED:33773863 The Impact of Race and Socioeconomic Status on Total Joint Arthroplasty Care. Background: Racial minorities and patients from lower socioeconomic backgrounds are less likely to undergo total joint arthroplasty (TJA) for degenerative joint disease (DJD). However, when these patients do present for care, little is known about the overall severity of DJD and surgical wait times. Methods: A retrospective cohort of 407 patients (131 black and 276 white) who presented to an arthroplasty clinic and went on to receive TJA was established. Severity of osteoarthritis was assessed radiographically via Kellgren-Lawrence (KL) grade. Preoperative Knee Society Score (KSS) and Harris Hip Score (HHS) were used to measure joint pain and function. Multivariate regression modeling and analysis of covariance were used to examine racial and socioeconomic differences in KL grade, KSS, HHS, and time to surgery. Results: Black patients presented with significantly greater KL scores than white patients (P = .046, odds ratio = 1.65, 95% confidence interval [1.01, 2.70]). In contrast, there were no statistically significant racial differences in the mean preoperative KSS (P = .61) or HHS (P = .69). Black patients were also found to wait, on average, 35% longer for TJA (P = .03, hazard ratio = 1.35, 95% confidence interval [1.04, 1.75]). Low income was associated with higher KL grade (P = .002), lower KSS (P = .07), and lower HHS (P = .001). Conclusion: Despite presenting with more advanced osteoarthritis, black patients reported similar levels of joint dysfunction and had longer surgical wait times when compared with white patients. Lower socioeconomic status was similarly associated with more severe DJD. abstract_id: PUBMED:9184510 Socioeconomic status and racial and ethnic differences in functional status associated with chronic diseases. Objectives: This study examined the relationships between wealth and income and selected racial and ethnic differences in health. Methods: Cross-sectional data on a national sample of 9744 men and women aged 51 through 61 from the 1992 Health and Retirement Survey were analyzed to examine the association between socioeconomic status and racial and ethnic differences in functional status among those with hypertension, diabetes, a heart condition, and arthritis. Results: Compared with Whites, African Americans report higher rates of hypertension, diabetes, and arthritis, while Hispanics report higher rates of hypertension and diabetes and a lower rate of heart conditions. Accounting for differences in education, income, and wealth had little effect on these prevalence differences. In general, among those with chronic diseases, African Americans and Hispanics reported worse function than Whites. This disadvantage was eliminated in every case by controlling for socioeconomic status. Conclusions: While socioeconomic status, including wealth, accounts for much of the difference in functional status associated with these chronic diseases. It plays a relatively small role in explaining differences in the prevalence of chronic disease, possibly reflecting different causal pathways. abstract_id: PUBMED:15526761 Health status and socioeconomic factors related to home remedy practices among elderly subjects living in a community in Okinawa Objective: The purpose of the present study was to clarify health status and socioeconomic characteristics of elderly persons who practice home remedies as complementary alternative medicine. Whether such usage is associated with health-related factors was also assessed. Methods: The subjects are 243 independent elderly living at home in A-village of Okinawa prefecture, aged 65 years or older. A self-administered questionnaire was mailed to all subjects in 2001. Included were questions on socioeconomic indicators, health status, IADL, health habits, social network, social support and home remedy practices. All data were statistically analyzed by Student's t test and the chi-square test. In addition logistic regression was used to calculate the odds ratios for current practitioners for each of the factors. Results: The present study included 200 subjects who responded and completed all questions. (Table 2). The main results are as follows: The practitioner rate for home remedies was 50.8% in males and 71.9% in females, the difference being significant (P<0.05). Usage was found to be significantly related to hypertension (P<0.05), arthritis and rheumatic disease (P<0.01), some health habits and provision nursing of social support (OR = P<0.05). Logistic regression analysis adjusted for age, gender, and income revealed current practitioners of acupuncture to have a significantly higher level of history of hospital admission in the previous year (OR = 0.2 P<0.05), food control habits (OR = 4.3 P<0.05) and provision nursing of social support (OR = 4.2 P<0.05) than non-current practitioners. Conclusion: The findings of this study suggest that home remedy practices are associated with health-related habit and provision of social support for good quality of elderly life. The purpose is not only physical care but also health maintenance and promotion. Answer: Yes, socioeconomic status (SES) does affect outcomes in early inflammatory arthritis. Data from the Canadian Early Arthritis Cohort (CATCH) study showed that patients with low SES, as determined by education and income levels, had higher disease activity, pain, and poorer function at the onset of the disease. Specifically, the low-education group had higher Disease Activity Score (DAS28) at baseline, and the low-income group presented with higher Health Assessment Questionnaire-Disability Index (HAQ-DI), pain, and Simplified Disease Activity Index (SDAI) scores. Although some of these differences became nonsignificant at the 12-month follow-up, the association with low income persisted for HAQ-DI, indicating that low SES was initially associated with worse disease outcomes in early inflammatory arthritis (PUBMED:25399388).
Instruction: Is major pulmonary resection by video-assisted thoracic surgery an adequate procedure in clinical stage I lung cancer? Abstracts: abstract_id: PUBMED:15136385 Is major pulmonary resection by video-assisted thoracic surgery an adequate procedure in clinical stage I lung cancer? Objective: Although several studies have shown that video-assisted thoracic surgery (VATS) for major pulmonary resection is less invasive than open thoracotomy, VATS for lung cancer has been performed in only a limited number of institutions. We aimed to review our experience of VATS for major pulmonary resections, and to determine its safety and adequacy in stage I lung cancer. Methods: Between August 1999 and March 2003, we performed major pulmonary resection by VATS in 106 patients with lung cancer and preoperatively determined clinical stage I disease. We evaluated the number of procedures converted to open thoracotomy and the reasons for conversion, the intraoperative blood loss, interval between surgery and chest tube removal, length of postoperative hospital stay, postoperative complications, mortality rate, prognoses, and patterns of recurrence. Results: We successfully performed VATS in 95 patients, whereas in another 11 patients (10%) conversion to open thoracotomy was required. The operative procedures were lobectomy in 86 patients, segmentectomy in 8 patients, and bilobectomy in 1 patient. In 95 patients who underwent VATS, postoperative complications developed in 9 patients (9%), and 1 patient (1%) died from pneumonia. In the 86 patients without complications, the mean postoperative hospital stay was 7.6 days (range, 4 to 15 days). In a mean follow-up period of 25 months (range, 6 to 48 months) in patients with non-small cell lung cancer (NSCLC), including the one perioperative death, the 3-year survival rate was 93% in 82 patients with clinical stage I disease, and 97% in 68 patients with pathologic stage I disease. The 3-year disease-free survival rate was 79% in patients with clinical stage I disease, and 89% in patients with pathologic stage I disease. Local recurrence was observed in six patients (6%): recurrence in mediastinal lymph nodes in five patients, and in the bronchial stump in one patient. Conclusions: Major pulmonary resection by VATS is acceptable in view of its low perioperative mortality and morbidity, and is an adequate procedure for the achievement of local control and good prognosis in patients with clinical stage I NSCLC. abstract_id: PUBMED:38472125 Early Clinical Outcomes of Thoracoscopic Major Pulmonary Resection and Thymectomy Using Novel Articulating Endoscopic Forceps. Background: Video-assisted thoracoscopic surgery (VATS) is recognized as a safe and effective treatment modality for early-stage lung cancer and anterior mediastinal masses. Recently, novel articulating instruments have been developed and introduced to endoscopic surgery. Here, we share our early experiences with VATS major pulmonary resection and thymectomy performed using ArtiSential articulating instruments. Methods: At the Seoul Metropolitan Government-Seoul National University Boramae Medical Center, 500 patients underwent VATS pulmonary resection between July 2020 and April 2023, while 43 patients underwent VATS thymectomy between January 2020 and April 2023. After exclusion, 224 patients were enrolled for VATS major pulmonary resection, and 38 were enrolled for VATS thymectomy. ArtiSential forceps were utilized in 35 of the 224 patients undergoing pulmonary resection and in 12 of the 38 individuals undergoing thymectomy. Early clinical outcomes were retrospectively analyzed. Results: No significant differences were observed in sex, age, surgical approach, operation time, histological diagnosis, or additional procedures between the patients who underwent surgery using novel articulating instruments and the group treated with conventional endoscopic instruments for both VATS major pulmonary resection and thymectomy. However, the use of the novel articulating endoscopic forceps was associated with a significantly larger number of dissected lymph nodes (p=0.028) and lower estimated blood loss (p=0.009) in VATS major pulmonary resection. Conclusion: Major pulmonary resection and thymectomy via VATS using ArtiSential forceps were found to be safe and effective, with early clinical outcomes comparable to established methods. Further research into long-term clinical outcomes and cost-effectiveness is warranted. abstract_id: PUBMED:37488809 Feasibility and safety of one-stage bilateral video-assisted thoracic surgery for resection of bilateral multiple pulmonary nodules Objective: To evaluate the feasibility and safety of one- stage bilateral video-assisted thoracic surgery (VATS) for resection of bilateral multiple pulmonary nodules (BMPNs). Methods: We analyzed the clinical characteristics, pathological features, perioperative outcomes and follow-up data of 41 patients with BMPNs undergoing one-stage bilateral VATS from July, 2011 to August, 2021. Results: One-stage bilateral VATS was performed uneventfully in 40 of the patients, and conversion to open surgery occurred in 1 case. The surgical approaches included bilateral lobectomy (4.9%), lobar-sublobar resection (36.6%) and sublobar-sublobar resection (58.5%) with a mean operative time of 196.3±54.5 min, a mean blood loss of 224.6±139.5 mL, a mean thoracic drainage duration of 4.7±1.1 days and a mean hospital stay of 14±3.8 days. Pathological examination revealed bilateral primary lung cancer in 15 cases, unilateral primary lung cancer in 21 cases and bilateral benign lesions in 5 cases. A total of 112 pulmonary nodules were resected, including 67 malignant and 45 benign lesions. Postoperative complications included pulmonary infection (5 cases), respiratory failure (2 cases), asthma attack (2 cases), atrial fibrillation (2 cases), and drug-induced liver injury (1 case). No perioperative death occurred in these patients, who had a 1-year survival rate of 97.6%. Conclusion: With appropriate preoperative screening and perioperative management, one-stage bilateral VATS is feasible and safe for resection of BMPNs. abstract_id: PUBMED:31451146 Advances in the Study of the Effects of Video-assisted Thoracoscopic Segmentectomy on Pulmonary Function The National Comprehensive Cancer Network (NCCN) clinical practice guideline of non-small cell lung cancer (NSCLC) in 2018 points out that anatomic pulmonary resection is a preferred option for early stage NSCLC. With the development of video-assisted thoracoscopy, minimally invasive thoracic surgery represented by thoracoscopy has been widely used in clinical practice. Video-assisted thoracoscopic segmentectomy has become one of the treatment options for early stage NSCLC. Clinical studies have found that sublobar resection can achieve similar results and preserve more pulmonary function in the treatment of early stage NSCLC compared with lobectomy, but the changes of pulmonary function after segmentectomy are still controversial. This article focuses on the research progress of pulmonary function changes in NSCLC patients after video-assisted thoracoscopic segmentectomy. . abstract_id: PUBMED:33131358 Comparison of single-stage and two-stage bilateral video-assisted thoracic surgery. Objective: Single-stage sequential bilateral video-assisted thoracoscopic surgery (VATS) is a controversial procedure. In the present study, we retrospectively compared the outcomes of single-stage and two-stage VATS. Methods: This study involved patients who underwent single-stage sequential bilateral VATS (SS-VATS group) or two-stage VATS at a 3-month interval (TS-VATS group) for treatment of non-small cell lung cancer from 2010 to 2018. The major outcome was the comparison of intraoperative changes. Results: The inspiratory peak pressure was higher, the incidences of intraoperative hypoxia and unstable hemodynamics were higher, the surgical time was longer, and the durations of the intensive care unit stay and postoperative hospitalization were longer in the SS-VATS group than in the TS-VATS group. However, the chest tube duration, incidence of postoperative mechanical ventilation, and clinical complications were not different between the two groups. Conclusions: Compared with two-stage VATS, single-stage sequential bilateral VATS can be performed for successful treatment of bilateral pulmonary lesions with a shorter total time and higher cost-effectiveness in terms of anesthesia and hospitalization but with a higher incidence of intraoperative adverse effects and a longer hospital stay. abstract_id: PUBMED:14650596 Video-assisted S6b-subsegmental resection after computed tomography guided localization of pulmonary nodules. A 38-year-old man, who had undergone surgery for gastric cancer one year previously, was found to have two pulmonary nodules (PNs: 10 mm in diameter) on chest radiography. Computed tomography (CT) revealed one of these nodules to be located near the B6b in the right lung hilus, while the other was located in the superficial region of the left lower lobe. Video-assisted thoracic surgery (VATS) was performed, for both diagnostic and therapeutic purposes. In this procedure, after preoperative CT-guided marking, simultaneous subsegmental resection of the right S6b and VATS wedge resection of the left lower lobe were performed successfully with adequate surgical margin. Histological diagnosis was compatible with metastatic pulmonary tumor from a gastric carcinoma primary. This case demonstrates that preoperative CT-guided localization can facilitate safe VATS subsegmental resection of a small deep pulmonary nodule. abstract_id: PUBMED:24040548 Video-assisted thoracic surgery (VATS) left upper sleeve lobectomy with partial pulmonary artery resection. Compare with pneumonectomy, sleeve lobectomy has advantage in long-term outcomes and cost-effective measures. As a result, sleeve lobectomy procedure is considered and practiced as the standard therapy for central lung cancers which are anatomically suitable regardless of lung function. In some cases, the lesion invades hilar and mediastinal vessels, the surgeon may faces more technically challenging in vascular reconstruction procedures together with the sleeve lobectomy procedure. The advent of minimally invasive surgery brings numerous advantages, such as shorter hospital stay, alleviated post-operative pain, faster recover and so on, to the patient. In return, it did demands all the surgeons to master the VATS surgery operative skill to meet the patient's expectation. Although mirror the pathway of open sleeve surgery, the video-assisted bronchial sleeve lobectomy (VABSL) brings numerous obstacles for the surgeon: One needs to accommodate transmission from direct-view to locally 2D screen, from multi-angle multi-direction operation field to one directional operation field, before he can adapt to the VATS operative skill. In addition, VATS surgery did have its' learning curve for the surgeon and the assistant. Here we present a video of a patient underwent sleeve lobectomy with partial pulmonary artery resection for communicating operative techniques. abstract_id: PUBMED:19713346 Impact of video-assisted thoracoscopic major lung resection on immune function. Video-assisted thoracoscopic major lung resection for early stage non-small-cell lung carcinoma has been associated with less postoperative pain, better preserved pulmonary function, shorter hospital stay, and enhanced tolerance of adjuvant chemotherapy compared to thoracotomy. Initial concerns regarding safety, oncological clearance, and cost effectiveness were unfounded. Several recent trials have reported improved long-term survival in patients with early stage non-small-cell lung carcinoma undergoing video-assisted thoracoscopic major lung resection, compared to the open technique, although there are inconsistencies. Interestingly, the immune status and autologous tumor killing ability of lung cancer patients have previously been associated with long-term survival. Video-assisted thoracoscopic lung resection results in an attenuated postoperative inflammatory response, but more importantly, it better preserves postoperative immune function. Circulating natural killer and T-cell numbers, T-cell oxidative activity, and levels of immunochemokines such as insulin growth factor binding protein-3 are higher after video-assisted thoracoscopic surgery than after thoracotomy. Recently, interest has developed in the role of the angiogenesis factor, vascular endothelial growth factor, after cancer surgery. Whether differences in immunological and biochemical mediators contribute towards improved long-term survival following video-assisted thoracoscopic major lung resection for cancer remains to be confirmed. abstract_id: PUBMED:27076942 Single-stage bilateral pulmonary resections by video-assisted thoracic surgery for multiple small nodules. Background: Surgical treatment is thought to be the most effective strategy for multiple small nodules. However, in general, one-stage bilateral resection is not recommended due to its highly invasive nature. Methods: Clinical records of patients undergoing one-stage bilateral resections of multiple pulmonary nodules between January 2009 and September 2014 in a single institution were retrospectively reviewed. Results: Simultaneous bilateral pulmonary resection by conventional video-assisted thoracic surgery (VATS) was undertaken in 29 patients. Ground glass opacity (GGO) accounted for 71.9% (46/64) of total lesions, including 26 pure GGO and 20 mixed GGO lesions. One case underwent bilateral lobectomy that was complicated by postoperative dyspnea. Lobar-sublobar (L/SL) resection and bilateral sublobar resection (SL-SL) were conducted in 16 and 12 cases, respectively, and most of these cases had uneventful postoperative courses. There was no significant difference with regard to postoperative complications (P=0.703), duration of use of chest drains (P=0.485), between one- and two-stage groups. Mean postoperative follow-up in cases of primary lung cancer was 31.4 (range, 10-51) months. There was neither recurrence nor deaths at final follow-up. Conclusions: Single-stage bilateral surgery in selected cases with synchronous bilateral multiple nodules (SBMNs) is feasible and associated with satisfactory outcomes. abstract_id: PUBMED:30962997 One-stage video-assisted thoracic surgery for bilateral multiple pulmonary nodules. Background: At present, we have many diagnostic methods, but there are no specific guidelines on the diagnosis and treatment of bilateral multiple pulmonary nodules (BMPNs). Clinically, one-stage video-assisted thoracic surgery (VATS) for BMPNs is not the usual treatment. Methods: We retrospectively analyzed the clinical data of 18 patients with bilateral pulmonary nodules who underwent one-stage VATS from January 2016 to Jun 2018 in the Department of Thoracic Surgery, Tongji Hospital, Tongji Medical College, at the Huazhong University of Science & Technology. All patients were suspected of multiple primary lung cancer (MPLC) and underwent diagnostic and therapeutic one-stage VATS. Results: All operations were completed successfully. Duration of chest drain use was 3-15 (5.28±2.67) days. The overall duration of hospitalization after surgery was 8-19 (12.58±5.87) days. The incidence of postoperative complications was 16.67% (3 patients in all). The postoperative complications included 1 patient with subcutaneous emphysema, 1 patient with a persistent pulmonary air leak and 1 patient with the development of pleural effusion after removal of the drainage tube. There was no postoperative death. Conclusions: Through rigorous preoperative assessment, it was demonstrated that one-stage VATS is safe and feasible for patients with BMPNs. Patients can avoid progression of the contralateral tumor, the trauma of the second operation, and the various burdens of medical treatment. Answer: Yes, major pulmonary resection by video-assisted thoracic surgery (VATS) is considered an adequate procedure in clinical stage I lung cancer. A study conducted between August 1999 and March 2003, which included 106 patients with preoperatively determined clinical stage I lung cancer, found that VATS was successfully performed in 95 patients. The study reported low perioperative mortality and morbidity rates, with a 3-year survival rate of 93% in patients with clinical stage I disease and 97% in those with pathologic stage I disease. The 3-year disease-free survival rate was 79% for clinical stage I disease and 89% for pathologic stage I disease. These results suggest that VATS is an adequate procedure for achieving local control and a good prognosis in patients with clinical stage I non-small cell lung cancer (NSCLC) (PUBMED:15136385). Furthermore, recent advancements in VATS, such as the use of novel articulating endoscopic forceps, have shown to be safe and effective, with early clinical outcomes comparable to established methods. The use of these novel instruments was associated with a significantly larger number of dissected lymph nodes and lower estimated blood loss in VATS major pulmonary resection, indicating that the procedure continues to improve and evolve (PUBMED:38472125). Additionally, studies on the feasibility and safety of one-stage bilateral VATS for resection of bilateral multiple pulmonary nodules (BMPNs) have shown that with appropriate preoperative screening and perioperative management, the procedure is feasible and safe, with a 1-year survival rate of 97.6% (PUBMED:37488809). Research has also indicated that VATS segmentectomy can preserve more pulmonary function compared to lobectomy in the treatment of early stage NSCLC, which is an important consideration for maintaining the quality of life in lung cancer patients (PUBMED:31451146). In summary, the evidence suggests that major pulmonary resection by VATS is an adequate and effective procedure for patients with clinical stage I lung cancer, offering good local control, survival rates, and potentially better preservation of pulmonary function.
Instruction: The IDEFICS community-oriented intervention programme: a new model for childhood obesity prevention in Europe? Abstracts: abstract_id: PUBMED:21483418 The IDEFICS community-oriented intervention programme: a new model for childhood obesity prevention in Europe? Background And Objectives: The European Union-as well as other parts of the world-faces a major challenge of increasing incidence of overweight/obesity. In particular, the increase in childhood obesity gives rise to a strong imperative for immediate action. Yet, little is known about the effectiveness of community interventions, and further research in this field is needed. There is, however, a growing consensus that such research should start from the paradigm that the current living environments tend to counteract healthy lifestyles. Questioning these environments thoroughly can help to develop new pathways for sustainable health-promoting communities. Against this background, the IDEFICS (Identification and prevention of dietary- and lifestyle-induced health effects in children and infants) study developed and implemented innovative community-oriented intervention programmes for obesity prevention and healthy lifestyle primarily in children aged 2-10 years in eight European countries: Sweden, Estonia, Germany, Belgium, Hungary, Italy, Spain and Cyprus. Materials And Methods: The IDEFICS community-oriented intervention study mobilised an integrated set of interventional efforts at different levels of society, with the aim of facilitating the adoption of a healthy obesity-preventing lifestyle. The overall programme has been composed of 10 modules: three at community level, six at school level and one for parents. The main focus was on diet, physical activity and stress-coping capacity. The sphere of action encompassed both children and their (grand) parents, schools, local public authorities and influential stakeholders in the community. All materials for the interventions were centrally developed and culturally adapted. Results: So far, the following has been achieved: focus group research, literature review and expert consultations were done in an early phase as a basis for the development of the intervention modules. The intervention mapping protocol was followed as guide for structuring the intervention research. The overall intervention programme's duration was 2 years, but a longer-term follow-up programme is under development. Conclusions: This large-scale European effort is expected to contribute significantly to the understanding of this major public health challenge. abstract_id: PUBMED:27730262 The IDEFICS primary prevention as a good practice example Background: Worldwide the prevalence of childhood overweight and obesity is strikingly high. Prevention programs are therefore of high priority at a national and international level. In the framework of the IDEFICS study a primary prevention program was developed, implemented and evaluated. Objective: This paper investigates to what degree the IDEFICS intervention may serve as good practice example. For this purpose, the single modules are described and the achieved effects are discussed. Methods: In eight European countries 16,228 children aged 2 to 9.9 years were recruited from kindergartens and schools. About half of them participated in a primary prevention program. In each country the intervention region was matched to a control region with a similar socio-demographic profile. All children participated in an extensive examination program at baseline, which was repeated two years later to assess the development of the children and the intervention effects. In addition, a further follow-up examination took place five years after the intervention in the framework of the I.Family study. Results: After two years the IDEFICS intervention showed only minor effects on an individual level, but sustainable effects on the community and the setting level. After five years a beneficial change in the consumption of sugar and water could be observed in the intervention families and children who were overweight and obese at baseline showed favorable weight trajectories. Discussion: The reasons for the weak intervention effects may be, among others, due to the limited penetration of intervention messages, an insufficient intensity of local intervention activities and our limited ability to induce structural changes of the obesogenic environment. abstract_id: PUBMED:26707014 Behavioural effects of a community-oriented setting-based intervention for prevention of childhood obesity in eight European countries. Main results from the IDEFICS study. Objective: The objective of this paper is to evaluate the behavioural effects, as reported by the parents of the participating boys and girls, of the IDEFICS (Identification and prevention of Dietary- and lifestyle-induced health EFfects In Children and infantS) intervention. Methods: The effectiveness of the IDEFICS intervention was evaluated through a cluster-controlled trial in eight European countries (control and intervention communities in each country) including more than 16,000 children. The 2- to 9.9-year-old children in the intervention group were exposed to a culturally adapted intervention that aimed to prevent childhood obesity through the community, schools/kindergartens and family. Parents completed questionnaires to measure water, soft drink and fruit juice intake; fruit and vegetable intake; daily TV viewing and other sedentary behaviours; daily physical activity levels and strengthening of the parent-child relationships at baseline and follow-up (2 years later). Mixed models with an additional random effect for country were used to account for the clustered study design, and results were stratified by sex. Results: The pan-European analysis revealed no significant time by condition interaction effects, neither for boys nor girls, i.e. the analysis revealed no intervention effects on the behaviours of the IDEFICS children as reported by their parents (F = 0.0 to 3.3, all p > 0.05). Also very few significances were found in the country-specific analyses. Positive intervention effects were only found for sport club participation in Swedish boys, for screen time in weekends for Spanish boys and for TV viewing in Belgian girls. Conclusion: Although no expected intervention effects as reported by the parents on diet, physical activity and sedentary behaviours could be shown for the overall IDEFICS cohort, a few favourable intervention effects were found on specific behaviours in some individual countries. More in-depth analyses of the process evaluation data are needed to obtain more insight into the relationship between the level of exposure to the intervention and its effect. abstract_id: PUBMED:26707012 The IDEFICS intervention trial to prevent childhood obesity: design and study methods. Introduction: One of the major research dimensions of the Identification and prevention of Dietary- and lifestyle-induced health EFfects In Children and infantS (IDEFICS) study involved the development, implementation and evaluation of a setting-based community-oriented intervention programme for primary prevention of childhood obesity. In this supplement of Obesity Reviews, a compilation of key results of the IDEFICS intervention is packaged in a series of complementary papers. Objective: This paper describes the overall design and methods of the IDEFICS intervention in order to facilitate a comprehensive reading of the supplement. In addition, some 'best practice' examples are described. Results: The IDEFICS intervention trial was conducted to assess whether the IDEFICS intervention prevented obesity in young children aged 2 to 9.9 years. The study was a non-randomized, quasi-experimental trial with one intervention matched to one control region in each of eight participating countries. The intervention was designed following the intervention mapping framework, using a socio-ecological theoretical approach. The intervention was designed to address several key obesity-related behaviours in children, parents, schools and community actors; the primary outcome was the prevalence of overweight/obesity according to the IOTF criteria based on body mass index. The aim was to achieve a reduction of overweight/obesity prevalence in the intervention regions. The intervention was delivered in school and community settings over a 2-year period. Data were collected in the intervention and control cohort regions at baseline and 2 years later. Conclusion: This paper offers an introductory framework for a comprehensive reading of this supplement on IDEFICS intervention key results. abstract_id: PUBMED:26707018 Implementation of the IDEFICS intervention across European countries: perceptions of parents and relationship with BMI. Objectives: The primary aim of the present study was to obtain insight into parents' perception of the IDEFICS intervention at the school or preschool/kindergarten and community levels and whether they received specific materials related to the intervention. The secondary aim was to analyse whether parents who reported higher levels of exposure to the IDEFICS intervention had children with more favourable changes in body mass index (BMI) z-scores between baseline and after 2 years of intervention. Methods: Process evaluation of the IDEFICS intervention investigated the implementation of the 2-year intervention in the intervention communities. Intervention group parents (n = 4,180) in seven countries (Belgium, Cyprus, Estonia, Germany, Italy, Spain and Sweden) answered questions about their exposure to the IDEFICS study. To analyse the relationship between exposure and BMI z-score, a composite score was calculated for exposure at the setting and at the community levels. Results: The frequency of parental exposure to the IDEFICS messages not only through the community but also through the (pre)school/kindergarten was lower than what was intended and planned. The dose received by the parents was considerably higher through the (pre)school/kindergarten settings than that through the community in all countries. Efforts by the settings or communities related to fruit and vegetable consumption (range 69% to 97%), physical activity promotion (range 67% to 91%) and drinking water (range 49% to 93%) were more visible and also realized more parental involvement than those related to TV viewing, sleep duration and spending time with the family (below 50%). Results showed no relation of parental exposure at the setting or the community level on more favourable changes in children's BMI z-scores for the total sample. Country-specific analyses for parental exposure at the setting level showed an expected positive effect in German girls and an unexpected negative effect in Italian boys. Conclusion: Parental exposure and involvement in the IDEFICS intervention in all countries was much less than aimed for, which might be due to the diverse focus (six key messages) and high intensity and duration of the intervention. It may also be that the human resources invested in the implementation and maintenance of intervention activities by the study centres, the caretakers and the community stakeholders were not sufficient. Higher levels of parental exposure were not related to more favourable changes in BMI z-scores. abstract_id: PUBMED:21347766 Standardized development of the IDEFICS intervention and its implementation in Germany The prevalence of overweight/obesity in childhood has also been rising at an alarming rate in Germany during recent years. Central components of successful intervention measures include the underpinning of the program with a theoretical model and the inclusion of several target groups as well as schools and kindergartens as whole units. Evaluation of the program is necessary for the development of evidence-based interventions. These central components are highlighted using the example of the development and implementation of the IDEFICS intervention. The IDEFICS intervention was developed using the intervention mapping protocol and aims for a healthy diet, more physical activity, and relaxation. For the implementation of the IDEFICS intervention, ten modules targeting different levels were developed. The implementation is illustrated using Germany as an example. Difficulties in the implementation arose due to unclear responsibilities and the necessary cultural adaptation of internationally developed modules. However, the strengths (e.g., inclusion of the socially disadvantaged, the implementation in a school/kindergarten setting, and a scientific evaluation) also need to be stressed. abstract_id: PUBMED:26707013 Effects of a community-oriented obesity prevention programme on indicators of body fatness in preschool and primary school children. Main results from the IDEFICS study. Background/objectives: Childhood obesity is a major public health concern but evidence-based approaches to tackle this epidemic sustainably are still lacking. The Identification and prevention of Dietary- and lifestyle-induced health EFfects In Children and infantS (IDEFICS) study investigated the aetiology of childhood obesity and developed a primary prevention programme. Here, we report on the effects of the IDEFICS intervention on indicators of body fatness. Subjects/methods: The intervention modules addressed the community, school and parental level, focusing on diet, physical activity and stress-related lifestyle factors. A cohort of 16,228 children aged 2-9.9 years - about 2000 per country - was equally divided over intervention and control regions. (Participating countries were Sweden, Germany, Estonia, Hungary, Cyprus, Italy, Spain and Belgium.) We compared the prevalence of overweight/obesity and mean values of body mass index z-score, per cent body fat and waist-to-height ratio over 2 years of follow-up. Mixed models adjusting for age and socioeconomic status of the parents and with an additional random effect for country accounted for the clustered study design. Results: The prevalence of overweight and obesity increased in both the intervention and control group from 18.0% at baseline to 22.9% at follow-up in the control group and from 19.0% to 23.6% in the intervention group. The difference in changes between control and intervention was not statistically significant. For the cohort as a whole, the changes in indicators of body fatness did not show any clinically relevant differences between the intervention and control groups. Changes in favour of intervention treatment in some indicators were counterbalanced by changes in favour of the control group in some other indicators. Conclusions: Over the 2-year-observation period, the IDEFICS primary prevention programme for childhood obesity has not been successful in reducing the prevalence of overweight and obesity nor in improving indicators of body fatness in the target population as a whole. abstract_id: PUBMED:21806806 Using the intervention mapping protocol to develop a community-based intervention for the prevention of childhood obesity in a multi-centre European project: the IDEFICS intervention. Background: The prevalence of childhood obesity has increased during the past decades and is now considered an urgent public health problem. Although stabilizing trends in obesity prevalence have been identified in parts of Europe, preventive efforts in children are still needed. Using the socio-ecological approach as the underlying theoretical perspective, the IDEFICS project aimed to develop, implement and evaluate a community-based intervention for the prevention of childhood obesity in eight European countries. The aim of the present manuscript was to describe the content and developmental process of the IDEFICS intervention. Methods: The intervention mapping protocol (IMP) was used to develop the community-based intervention for the prevention of childhood obesity in 3 to 10 years old children. It is a theory- and evidence-based tool for the structured planning and development of health promotion programs that requires the completion of six different steps. These steps were elaborated by two coordinating centers and discussed with the other participating centers until agreement was reached. Focus group research was performed in all participating centers to provide an informed basis for intervention development. Results: The application of the IMP resulted in an overall intervention framework with ten intervention modules targeting environmental and personal factors through the family, the school and the community. The summary results of the focus group research were used to inform the development of the overall intervention. The cultural adaptation of the overall intervention was realised by using country specific focus group results. The need for cultural adaptation was considered during the entire process to improve program adoption and implementation. A plan was developed to evaluate program effectiveness and quality of implementation. Conclusions: The IDEFICS project developed a community-based intervention for the prevention of childhood obesity by using to the intervention mapping heuristic. The IDEFICS intervention consists of a general and standardized intervention framework that allows for cultural adaptation to make the intervention feasible and to enhance deliverability in all participating countries. The present manuscript demonstrates that the development of an intervention is a long process that needs to be done systematically. Time, human resources and finances need to be planned beforehand to make interventions evidence-based and culturally relevant. abstract_id: PUBMED:26707017 Effect of the IDEFICS multilevel obesity prevention on children's sleep duration. Background: According to recent findings, short sleep duration is associated with overweight in children. However, primary prevention efforts aimed at achieving adequate sleep among children are scarce. Therefore, the 'Identification and prevention of Dietary-induced and lifestyle-induced health EFfects In Children and infantS' (IDEFICS) study implemented a multilevel intervention that included sleep duration as a key behavioural target. The aim of this study is to evaluate sleep duration among children participating in the IDEFICS study. Methods: The IDEFICS nocturnal sleep intervention was included as part of stress reduction educational messages aimed at parents and children. Sleep was assessed by a parental 24-h recall (only weekdays; n = 8,543) and by a diary (weekdays and weekends separately; n = 4,150). Mixed linear models tested the intervention effect on sleep duration change between baseline when children were 2-9.9 years of age (2007/2008) and follow-up (2009/2010). Logistic mix models were used to study the intervention effect on the presence of TV in the children's bedroom (one of the intervention messages; n = 8,668). Additionally, parents provided qualitative data regarding exposure to the intervention. Results: About 51.1% of the parents in the intervention regions reported awareness of the sleep intervention. A small intervention effect was seen on weeknight sleep duration in that the decrease in sleep duration over 2 years was smaller in the intervention (15 min) as compared with control regions (19 min) (p = 0.044). There was no overall intervention effect on weekend sleep duration or on the presence of a TV in the bedroom. A small significant time effect between baseline and follow-up was found on bedroom TV presence depending on self-reported intervention exposure (3% increase in TV presence in exposed versus 6.6% increase in non-exposed). Children without a TV in the bedroom had longer nocturnal sleep duration. Discussion: The sleep component of the intervention did not lead to clinically relevant changes in sleep duration. Future interventions aimed at young children's sleep duration could benefit from more specific and intense messaging than that found in the IDEFICS intervention. Future research should use objective measures of sleep duration as well as intermediate outcomes (sleep knowledge, sleep environment and sleep practices). abstract_id: PUBMED:26707016 Effectiveness of the IDEFICS intervention on objectively measured physical activity and sedentary time in European children. Background: This paper reports on the effectiveness of the prevention of dietary- and lifestyle-induced health effects in children and infants (IDEFICS) intervention on objectively measured physical activity (PA) and sedentary time (ST) in 2- to 9.9-year-old European boys and girls. Methods: The intervention was evaluated after 2 years through a non-randomized cluster-controlled trial in eight European countries (one control and one intervention community per country). All children in the intervention group received a culturally adapted childhood obesity prevention programme through the community, schools/kindergartens and family. A random sub-sample of children participating in the IDEFICS study wore an accelerometer at baseline and follow-up for at least 3 days (n = 9,184). Of this sample, 81% provided valid accelerometer data at baseline (n = 7,413; 51% boys; 6.21 ± 1.76 years; boys: 617 ± 170 cpm day(-1) ; girls 556 ± 156 cpm day(-1) ) and 3,010 children provided valid accelerometer data at baseline and during the follow-up survey 2 years later. Results: In boys and girls, no significant differences in PA and ST were found between intervention and control groups over 2 years. Strong temporal effects were found in the total sample of boys and girls: the percentage of time spent in light PA per day decreased by 4 percentage points in both boys and girls between baseline and follow-up (both: p < 0.001), while time spent in ST per day increased by 4 percentage points in both sexes over time (both: p < 0.001). Percentage of time spent in moderate-to-vigorous PA per day remained stable over time in boys and girls. Conclusion: Despite the socio-ecological approach and implementation of a culturally adapted intervention in each country, no effects of the IDEFICS intervention were found on children's objectively measured PA and ST. Behavioural interventions for children may need to enhance specificity and intensity at the family level using other behaviour change techniques and more direct strategies to reach parents. Answer: The IDEFICS (Identification and prevention of dietary- and lifestyle-induced health effects in children and infants) community-oriented intervention programme was developed as a response to the increasing incidence of overweight and obesity among children in Europe. It aimed to facilitate the adoption of a healthy, obesity-preventing lifestyle in children aged 2-10 years across eight European countries: Sweden, Estonia, Germany, Belgium, Hungary, Italy, Spain, and Cyprus (PUBMED:21483418). The intervention programme was designed to address diet, physical activity, and stress-coping capacity through a set of interventional efforts at different societal levels, including children, their (grand)parents, schools, local public authorities, and community stakeholders. The programme consisted of 10 modules, with three at the community level, six at the school level, and one for parents, and was culturally adapted for implementation in the participating countries (PUBMED:21483418). Despite the comprehensive design and implementation efforts, the results of the IDEFICS intervention were mixed. After two years, only minor effects were observed on an individual level, but there were sustainable effects on the community and setting levels. Five years post-intervention, a beneficial change in the consumption of sugar and water was noted among intervention families, and children who were overweight or obese at baseline showed favorable weight trajectories (PUBMED:27730262). However, the overall behavioural effects of the intervention, as reported by parents, did not show significant changes in diet, physical activity, and sedentary behaviours for the IDEFICS children (PUBMED:26707014). Additionally, the intervention did not result in a significant reduction in the prevalence of overweight and obesity or improvements in indicators of body fatness in the target population as a whole (PUBMED:26707013, PUBMED:26707018). The reasons for the limited effectiveness of the intervention may include insufficient penetration of intervention messages, inadequate intensity of local activities, and challenges in inducing structural changes in the obesogenic environment (PUBMED:27730262). Furthermore, the intervention's impact on children's sleep duration and objectively measured physical activity and sedentary time was not clinically relevant (PUBMED:26707017, PUBMED:26707016). In conclusion, while the IDEFICS intervention programme represents a significant effort to address childhood obesity in Europe, its effectiveness as a new model for prevention has been limited. The programme's outcomes suggest that future interventions may need to enhance specificity, intensity, and direct strategies to engage parents and effect behavioural change (PUBMED:26707016).
Instruction: Acquiring laparoscopic skill proficiency: does orientation matter? Abstracts: abstract_id: PUBMED:32556761 Educational system for acquiring appropriate laparoscopic colorectal surgical skills: analysis in a Japanese high-volume cancer center. Background: Acquiring appropriate laparoscopic technique is necessary to safely perform laparoscopic surgery. The Endoscopic Surgical Skill Qualification System of the Japanese Society of Endoscopic Surgery, which was established to improve the quality of laparoscopic surgery in Japan, provides training to become an expert laparoscopic surgeon. In this study, we describe our educational system, in a Japanese highest volume cancer center, and evaluate the system according to the pass rate for the Endoscopic Surgical Skill Qualification System examination. Methods: We assessed 14 residents who trained for more than 2 years from 2012 to 2018 in our department. All teaching surgeons, qualified by the Endoscopic Surgical Skill Qualification System, participated in all surgeries as supervisors. For the first 3 months, trainees learned as the scopist, then as the first assistant for 3 months, and then by performing laparoscopic surgery as an operator during ileocecal resection or sigmoidectomy. Trainees apply for this training in their second year of residency or later. All laparoscopic procedures in our department are standardized in detail. Results: The cumulative pass rate was 75% (12/16), and 87% (12/14) of the trainees eventually passed, while the general pass rate was approximately 30%. On average, those who passed in their second or third year had experienced 94 procedures as the surgeon, 177 as the first assistant, and 199 as the scopist. The number of laparoscopic procedures and the learning curves did not differ between successful and failed applicants. Conclusions: Through our educational system, residents successfully acquired laparoscopic skills with a much higher pass rate in the Endoscopic Surgical Skill Qualification System examination than the general standard. Laparoscopic practice under supervision by experienced surgeons with standardized procedures and accurate understanding of the relevant anatomy is very helpful to achieving appropriate laparoscopic technique. abstract_id: PUBMED:34675789 Associations Between Binocular Depth Perception and Performance Gains in Laparoscopic Skill Acquisition. The ability to perceive differences in depth is important in many daily life situations. It is also of relevance in laparoscopic surgical procedures that require the extrapolation of three-dimensional visual information from two-dimensional planar images. Besides visual-motor coordination, laparoscopic skills and binocular depth perception are demanding visual tasks for which learning is important. This study explored potential relations between binocular depth perception and individual variations in performance gains during laparoscopic skill acquisition in medical students naïve of such procedures. Individual differences in perceptual learning of binocular depth discrimination when performing a random dot stereogram (RDS) task were measured as variations in the slope changes of the logistic disparity psychometric curves from the first to the last blocks of the experiment. The results showed that not only did the individuals differ in their depth discrimination; the extent with which this performance changed across blocks also differed substantially between individuals. Of note, individual differences in perceptual learning of depth discrimination are associated with performance gains from laparoscopic skill training, both with respect to movement speed and an efficiency score that considered both speed and precision. These results indicate that learning-related benefits for enhancing demanding visual processes are, in part, shared between these two tasks. Future studies that include a broader selection of task-varying monocular and binocular cues as well as visual-motor coordination are needed to further investigate potential mechanistic relations between depth perceptual learning and laparoscopic skill acquisition. A deeper understanding of these mechanisms would be important for applied research that aims at designing behavioral interventions for enhancing technology-assisted laparoscopic skills. abstract_id: PUBMED:15547564 Acquiring laparoscopic skill proficiency: does orientation matter? Objective: This study was undertaken to determine whether side-on laparoscopic operating orientation alters time-to-skill mastery compared with head-on orientation. Study Design: One hundred thirty-one medical students were randomly assigned by operating axis (camera position to operating field) and completed 10 attempts at each of 5 previously validated laparoscopic skills stations. Time-to-completion was recorded for each attempt, generating an orientation and skill station learning curve. Statistical analysis was performed by using repeated measures analysis of variance and linear, polynomial, and logarithmic models with 95% CIs. Results: Sixty-eight students were randomly assigned to head-on orientation and 63 to side-on orientation. Comparing median time-to-completion by station, head-on attempts were faster than side-on attempts for every station. Comparing learning curves by orientation, side-on learning curves were steeper than head-on learning curves for every station, except one. Asymptotes were reached in both strata by the tenth attempt. Conclusion: Greater initial disorientation is seen with side-on orientation compared with head-on orientation. This can be overcome with practice. abstract_id: PUBMED:35103857 Involvement of a skill-qualified surgeon favorably influences outcomes of laparoscopic cholecystectomy performed for acute cholecystitis. Objective: The Endoscopic Surgical Skill Qualification System (ESSQS) was developed by the Japan Society for Endoscopic Surgery as a means of subjectively assessing the proficiency of laparoscopic surgeons. We conducted a study to evaluate how involvement of an ESSQS skill-qualified (SQ) surgeon influences short-term outcomes of laparoscopic cholecystectomy performed for acute cholecystitis. Previous reports suggest that assessment of the video-rating system is a potential tool to discriminate laparoscopic surgeons' proficiency and top-rated surgeons face less surgical mortality and morbidity in bariatric surgery. Methods: Data from the National Clinical Database regarding laparoscopic cholecystectomy performed for acute cholecystitis between January 2016 and December 2018 were analyzed. Outcomes were compared between patients grouped according to involvement vs. non-involvement of an SQ surgeon. Outcomes were also compared between patients grouped according to whether their operation was performed by biliary tract-, stomach-, or colon-qualified surgeon. Results: Of the 309,998 laparoscopic cholecystectomies during the study period, 65,295 were suitable for inclusion in the study and 13,670 (20.9%) were performed by an SQ surgeon. Patients' clinical characteristics did not differ between groups. Thirty-day mortality was significantly lower in the SQ group (0.1%) 16/13,670 than in the non-SQ group (0.2%) 140/51,625 (P = 0.001). Thirty-day mortality was [0.1% (9/7173)] in the biliary tract-qualified group, [0.2% (5/3527)] in the stomach-qualified group, and [0.1% (2/3240)] in the colon-qualified group. Conclusion: Surgeons with ESSQS certification outperform the non-skilled surgeons in terms of surgical mortality in 30 and 90 days. Further verification of the value of the ESSQS is warranted and similar systems may be needed in countries across the world to ensure patient safety and control the quality of surgical treatments. abstract_id: PUBMED:34142238 Effect of endoscopic surgical skill qualification system for laparoscopic multivisceral resection: Japanese multicenter analysis. Background: The efficacy of laparoscopic multivisceral resection (Lap-MVR) has been reported by several experienced high-volume centers. The Endoscopic Surgical Skill Qualification System (ESSQS) was established in Japan to improve the skill of laparoscopic surgeons and further develop surgical teams. We examined the safety and feasibility of Lap-MVR in general hospitals, and evaluated the effects of the Japanese ESSQS for this approach. Methods: We retrospectively reviewed 131 patients who underwent MVR between April 2016 and December 2019. Patients were divided into the laparoscopic surgery group (LAC group, n = 98) and the open surgery group (OPEN group, n = 33). The clinicopathological and surgical features were compared between the groups. Results: Compared with the OPEN group, BMI was significantly higher (21.9 vs 19.3 kg/m2, p = 0.012) and blood loss was lower (55 vs 380 ml, p < 0.001) in the LAC group. Operation time, postoperative complications, and postoperative hospital stay were similar between the groups. ESSQS-qualified surgeons tended to select the laparoscopic approach for MVR (p < 0.001). In the LAC group, ESSQS-qualified surgeons had superior results to those without ESSQS qualifications in terms of blood loss (63 vs 137 ml, p = 0.042) and higher R0 resection rate (0% vs 2.0%, p = 0.040), despite having more cases of locally advanced tumor. In addition, there were no conversions to open surgery among ESSQS-qualified surgeons, and three conversions among surgeons without ESSQS qualifications (0% vs 15.0%, p = 0.007). Multivariate analysis revealed blood loss (odds ratio 1.821; 95% CI 1.324-7.654; p = 0.010) as an independent predictor of postoperative complications. Laparoscopic approach was not a predictive factor. Conclusions: The present multicenter study confirmed the feasibility and safety of Lap-MVR even in general hospitals, and revealed superior results for ESSQS-qualified surgeons. abstract_id: PUBMED:37428230 Usefulness of the endoscopic surgical skill qualification system for laparoscopic colectomy for transverse colon cancer: a Japanese multicenter analysis. Purpose: Laparoscopic colectomy for transverse colon cancer (TCC) can be technically demanding due to the anatomical complexity of the region. In Japan, the Endoscopic Surgical Skill Qualification System (ESSQS) was established to improve the skill of laparoscopic surgeons and further develop surgical teams. We examined the safety and feasibility of laparoscopic colectomy for TCC and evaluated the effects of the Japanese ESSQS in facilitating this approach. Methods: We retrospectively reviewed 136 patients who underwent laparoscopic colectomy for TCC between April 2016 and December 2021. Patients were divided into an ESSQS-qualified surgeon group (surgery performed by an ESSQS-qualified surgeon, n = 52) and a non ESSQS-qualified surgeon (surgery performed by a non ESSQS-unqualified surgeon, n = 84). Clinicopathological and surgical features were compared between groups. Results: Postoperative complications occurred in 37 patients (27.2%). The proportion of patients who developed postoperative complications was lower in the ESSQS-qualified surgeon group (8.0%) than that in the non ESSQS-qualified surgeon group (34.5%; p < 0.017). Multivariate analysis revealed "Operation by ESSQS-qualified surgeon surgeon" (odds ratio (OR) 0.360, 95% confidence interval (CI) 0.140-0.924; p = 0.033), blood loss (OR 4.146, 95% CI 1.688-10.184; p = 0.002), and clinical N status (OR 4.563, 95% CI 1.814-11.474; p = 0.001) as factors independently associated with postoperative complications. Conclusion: The present multicenter study confirmed the feasibility and safety of laparoscopic colectomy for TCC and revealed that ESSQS-qualified surgeon achieved better surgical outcomes. abstract_id: PUBMED:18391304 Towards a virtual basic laparoscopic skill trainer (VBLaST). Surgical skill training is a long and tedious process of acquiring fine motor skills. To overcome the drawbacks of the existing toolbox trainer systems, we develop, for the first time, a virtual basic laparoscopic skill trainer (VBLaST) whereby tasks, such as the ones available in the FLS toolbox system, may be performed on the computer. abstract_id: PUBMED:11708511 Transfer of training in acquiring laparoscopic skills. Background: Building on skills already learned in acquiring more complex or related skills is termed transfer of training (TOT). This study examined the TOT effects of previous open and laparoscopic surgical experience on a laparoscopic training module. Study Design: Intracorporeal knot tying was chosen for evaluating TOT among three groups of surgical residents: interns (n = 11) with limited open and laparoscopic surgical experience, junior residents (n = 9) with recent and ongoing open and laparoscopic surgical experience, and senior residents (n = 8) with remote and limited laparoscopic experience but ongoing open surgical experience. After receiving a lecture, demonstration, and written instructions on three knot-tying techniques, residents rotated through three performance stations, one for each technique, over 2 days. After 15 minutes of practice, the residents were videotaped completing a test knot. Time to completion and economy of motion were recorded and analyzed. Results: Junior residents had fewer performance errors than senior residents (reported as mean +/- standard error of the mean) and were significantly faster than interns. No significant differences between interns and senior residents for mean time or error performance were observed. Senior residents did not demonstrate TOT from open surgical experience to laparoscopic knot tying. No significant differences were obtained across the three sessions for errors or for time. Conclusion: No evidence was found for TOT from open surgical experience to newly introduced laparoscopic knot-tying techniques or from one skill training session to a different skill session at least 4 hours later. This study indicates that specific minimally invasive surgery training is needed to develop laparoscopic surgery skills. abstract_id: PUBMED:28385488 What are the Training Gaps for Acquiring Laparoscopic Suturing Skills? Introduction: Advanced laparoscopic suturing is considered a challenging skill to acquire. The aim of this study was to investigate the learning process for advanced laparoscopic suturing in the operating room to understand the obstacles trainees face when trying to master the skill. Methods: A qualitative methodology using semistructured interviews and field observations was used. Data were analyzed using a Grounded Theory approach. Participants were general surgery residents and surgeons with advanced minimally invasive surgery (MIS) experience. Results: Ten MIS surgeons across different institutions and 15 local general surgery residents were interviewed. The semistructured interviews and field observations of 9 advanced MIS operations (27h) yielded the following 6 themes around the acquisition of laparoscopic suturing skills for residents: complexity, training misalignment, variability of opportunities, inconsistency of techniques, lack of feedback, and differing expectations. Conclusion: There are several unmet training needs around laparoscopic suturing skills. Training for advanced laparoscopic skills requires more emphasis on coaching and the development of advanced models. This study heralded the need to incorporate advanced laparoscopic skills into the surgical simulation curriculum. abstract_id: PUBMED:36864351 Skill-qualified surgeons positively affect short-term outcomes after laparoscopic gastrectomy for gastric cancer: a survey of the National Clinical Database of Japan. Background: Proficiency of the operating surgeon is one of the most critical factors potentially associated with reductions in complications and surgery-related mortality. With video-rating systems having shown potential for assessing laparoscopic surgeons' proficiency, the Endoscopic Surgical Skill Qualification System (ESSQS) was developed by the Japan Society for Endoscopic Surgery to subjectively assess the proficiency of laparoscopic surgeons by rating applicants' non-edited case videos. We conducted a study to evaluate how ESSQS skill-qualified (SQ) surgeon involvement influences short-term outcomes of laparoscopic gastrectomy performed for gastric cancer. Methods: Data from the National Clinical Database regarding laparoscopic distal and total gastrectomy performed for gastric cancer between January 2016 and December 2018 were analyzed. Operative mortality, defined as 30-day mortality or 90-day in-hospital mortality, and anastomotic leakage rates were compared per involvement vs. non-involvement of an SQ surgeon. Outcomes were also compared per involvement of a gastrectomy-, colectomy-, or cholecystectomy-qualified surgeon. The association between the area of qualification and operative mortality/anastomotic leakage was also analyzed with a generalized estimating equation logistic regression model used to account for patient-level risk factors and institutional differences. Results: Of 104,093 laparoscopic distal gastrectomies, 52,143 were suitable for inclusion in the study; 30,366 (58.2%) were performed by an SQ surgeon. Of 43,978 laparoscopic total gastrectomies, 10,326 were suitable for inclusion; 6501 (63.0%) were performed by an SQ surgeon. Gastrectomy-qualified surgeons outperformed non-SQ surgeons in terms of both operative mortality and anastomotic leakage. They also outperformed cholecystectomy- and colectomy-qualified surgeons in terms of operative mortality or anastomotic leakage in distal and total gastrectomy, respectively. Conclusion: The ESSQS appears to discriminate laparoscopic surgeons who can be expected to achieve significantly improved gastrectomy outcomes. Answer: Yes, orientation does matter when acquiring laparoscopic skill proficiency. A study was conducted to determine whether side-on laparoscopic operating orientation alters time-to-skill mastery compared with head-on orientation. The study involved 131 medical students who were randomly assigned to either head-on or side-on orientation and completed multiple attempts at various laparoscopic skills stations. The results showed that attempts made with head-on orientation were faster than those made with side-on orientation for every station. Additionally, the learning curves for side-on orientation were steeper than those for head-on orientation for almost all stations, indicating greater initial disorientation with side-on orientation. However, with practice, this disorientation could be overcome, and asymptotes were reached in both orientations by the tenth attempt (PUBMED:15547564). This suggests that while orientation does impact the initial learning phase, with sufficient practice, proficiency can be achieved regardless of the orientation.
Instruction: Does EMS perceived anatomic injury predict trauma center need? Abstracts: abstract_id: PUBMED:23627418 Does EMS perceived anatomic injury predict trauma center need? Objective: Our objective was to determine the predictive value of the anatomic step of the 2011 Field Triage Decision Scheme for identifying trauma center need. Methods: Emergency medical services (EMS) providers caring for injured adults transported to regional trauma centers in three midsized communities were interviewed over two years. Patients were included, regardless of injury severity, if they were at least 18 years old and were transported by EMS with a mechanism of injury that was an assault, motor vehicle or motorcycle crash, fall, or pedestrian or bicyclist struck. The interview was conducted upon emergency department (ED) arrival and collected physiologic condition and anatomic injury data. Patients who met the physiologic criteria were excluded. Trauma center need was defined as nonorthopedic surgery within 24 hours, intensive care unit admission, or death prior to hospital discharge. Data were analyzed by calculating descriptive statistics, including positive likelihood ratios (+LRs) with 95% confidence intervals (CIs). Results: A total of 11,892 interviews were conducted. One was excluded because of missing outcome data and 1,274 were excluded because they met the physiologic step. EMS providers identified 1,167 cases that met the anatomic criteria, of which 307 (26%) needed the resources of a trauma center (38% sensitivity, 91% specificity, +LR 4.4; CI: 3.9-4.9). Criteria with a +LR ≥5 were flail chest (9.0; CI: 4.1-19.4), paralysis (6.8; CI: 4.2-11.2), two or more long-bone fractures (6.3; CI: 4.5-8.9), and amputation (6.1; CI: 1.5-24.4). Criteria with a +LR >2 and <5 were penetrating injury (4.8; CI: 4.2-5.6) and skull fracture (4.8; CI: 3.0-7.7). Only pelvic fracture (1.9; CI: 1.3-2.9) had a +LR less than 2. Conclusions: The anatomic step of the Field Triage Guidelines as determined by EMS providers is a reasonable tool for determining trauma center need. Use of EMS perceived pelvic fracture as an indicator for trauma center need should be reevaluated. Key words: wounds and injury; triage; emergency medical services; emergency medical technicians. abstract_id: PUBMED:32119577 Does Mechanism of Injury Predict Trauma Center Need for Children? Objective: To determine if the Mechanism of Injury Criteria of the Field Triage Decision Scheme (FTDS) are accurate for identifying children who need the resources of a trauma center. Methods: EMS providers transporting any injured child ≤15 years, regardless of severity, to a pediatric trauma center in 3 midsized communities over 3 years were interviewed. Data collected through the interview included EMS observed physiologic condition, suspected anatomic injuries, and mechanism. Patients were then followed to determine if they needed the resources of a trauma center by reviewing their medical record after hospital discharge. Patients were considered to need a trauma center if they received an intervention included in a previously published consensus definition. Data were analyzed with descriptive statistics including positive likelihood ratios (+LR) and 95% confidence intervals (95%CI). Results: 9,483 provider interviews were conducted and linked to hospital outcome data. Of those, 230 (2.4%) met the consensus definition for needing a trauma center. 1,572 enrolled patients were excluded from further analysis because they met the Physiologic or Anatomic Criteria of the FTDS. Of the remaining 7,911 cases, 62 met the consensus definition for needing a trauma center (TC). Taken as a whole, the Mechanism of Injury Criteria of the FTDS identified 14 of the remaining 62 children who needed the resources of a trauma center for a 77% under-triage rate. The mechanisms sustained were 36% fall (16 needed TC), 28% motor vehicle crash (MVC) (20 needed TC), 7% struck by a vehicle (10 needed TC), <1% motorcycle crash (none needed TC), and 29% had a mechanism not included in the FTDS (16 needed TC). Of those who sustained a mechanisms not listed in the FTDS, the most common mechanisms were sport related injuries not including falls (24% of 2,283 cases with a mechanism not included) and assault (13%). Among those who fell from a height greater than 10 feet, 4 needed a TC (+LR 5.9; 95%CI 2.8-12.6). Among those in a MVC, 41 were reported to have been ejected and none needed a TC, while 31 had reported meeting the intrusion criteria and 0 needed a TC. There were 32 reported as having a death in the same vehicle, and 2 needed a TC (+LR 7.42; 95%CI: 1.90-29.0). Conclusion: Over a quarter of the children who needed the resources of a trauma center were not identified using the Physiologic or Anatomic Criteria of the Field Triage Decision Scheme. The Mechanism of Injury Criteria did not apply to over a quarter of the mechanisms experienced by children transported by EMS for injury. Use of the Mechanism Criteria did not greatly enhance identification of children who need a trauma center. More work is needed to improve the tool used to assist EMS providers in the identification of children who need the resources of a trauma center. abstract_id: PUBMED:21870946 Does mechanism of injury predict trauma center need? Objective: To determine the predictive value of the mechanism-of-injury step of the American College of Surgeons Field Triage Decision Scheme for determining trauma center need. Methods: Emergency medical services (EMS) providers caring for injured adult patients transported to the regional trauma center in three midsized communities over two years were interviewed upon emergency department (ED) arrival. Included was any injured patient, regardless of injury severity. The interview collected patient physiologic condition, apparent anatomic injury, and mechanism of injury. Using the 1999 Scheme, patients who met the physiologic or anatomic steps were excluded. Patients were considered to need a trauma center if they had nonorthopedic surgery within 24 hours, had intensive care unit admission, or died prior to hospital discharge. Data were analyzed by calculating positive likelihood ratios (+LRs) and 95% confidence intervals (CIs) for each mechanism-of-injury criterion. Results: A total of 11,892 provider interviews were conducted. Of those, one was excluded because outcome data were not available, and 2,408 were excluded because they met the other steps of the Field Triage Decision Scheme. Of the remaining 9,483 cases, 2,363 met one of the mechanism-of-injury criteria, 204 (9%) of whom needed the resources of a trauma center. Criteria with a +LR ≥ 5 were death of another occupant in the same vehicle (6.8; CI: 2.7-16.7), fall >20 feet (5.3; CI: 2.4-11.4), and motor vehicle crash (MVC) extrication time >20 minutes (5.1; CI: 3.2-8.1). Criteria with a +LR between >2 and <5 were intrusion >12 inches (4.2; CI: 2.9-5.9), ejection (3.2; CI: 1.3-8.2), and deformity >20 inches (2.5; CI: 1.9-3.2). The criteria with a +LR ≤ 2 were MVC speed >40 mph (2.0; CI: 1.7-2.4), pedestrian/bicyclist struck at a speed >5 mph (1.2; CI:1.1-1.4), bicyclist/pedestrian thrown or run over (1.2; CI: 0.9-1.6), motorcycle crash at a speed >20 mph (1.2; CI: 1.1-1.4), rider separated from motorcycle (1.0; CI: 0.9-1.2), and MVC rollover (1.0; CI: 0.7-1.5). Conclusion: Death of another occupant, fall distance, and extrication time were good predictors of trauma center need when a patient did not meet the anatomic or physiologic conditions. Intrusion, ejection, and vehicle deformity were moderate predictors. Key words: wounds and injury; triage; emergency medical services; emergency medical technicians; predictors; mechanism of injury; trauma center. abstract_id: PUBMED:18924008 Establishing the need for trauma center care: anatomic injury or resource use? Objective: It remains unclear whether the "need" for care at a trauma center should be based on anatomic injury (the current standard) or specialized resource use. We investigated whether anatomic injury severity scores adequately explain hospital resource use. Methods: This was a retrospective cohort study including children and adults meeting statewide trauma criteria and transported to 48 hospitals from 1998 to 2003. The injury severity score (ISS) was considered as both continuous (range 0-75) and categorical (0-8, 9-15, and >or= 16) terms. Specialized resource use was defined as: major surgery (with and without orthopedic intervention), mechanical ventilation > 96 hours, blood transfusion, intensive care unit (ICU) stay >or= 2 days, or in-hospital mortality. Resource use was assessed as both a binary variable and a continuous term. Descriptive statistics and simple and multivariable linear regressions were used to compare ISS and resource use. Results: 33,699 injured persons were included in the analysis. Within mild, moderate, and serious anatomic injury categories, 8%, 26%, and 69%, respectively, had specialized resource use. When the resource use definition included orthopedic surgery, 12%, 49%, and 76%, respectively, had specialized resource use. Whereas there was fair correlation between ISS and additive resource use (rho = 0.61), ISS explained only 37% of the variability in resource use (adjusted R-squared = 0.37). Resource use within anatomic injury categories differed by age group. Conclusions: The standard anatomic injury criterion for trauma center "need" (i.e., ISS >or= 16) misclassifies a substantial number of injured persons requiring critical trauma resources. Out-of-hospital trauma triage guidelines based on anatomic injury may need revision to account for patients with resource need. abstract_id: PUBMED:34016920 Making the call in the field: Validating emergency medical services identification of anatomic trauma triage criteria. Background: The National Field Triage Guidelines were created to inform triage decisions by emergency medical services (EMS) providers and include eight anatomic injuries that prompt transportation to a Level I/II trauma center. It is unclear how accurately EMS providers recognize these injuries. Our objective was to compare EMS-identified anatomic triage criteria with International Classification of Diseases-10th revision (ICD-10) coding of these criteria, as well as their association with trauma center need (TCN). Methods: Scene patients 16 years and older in the NTDB during 2017 were included. National Field Triage Guidelines anatomic criteria were classified based on EMS documentation and ICD-10 diagnosis codes. The primary outcome was TCN, a composite of Injury Severity Score greater than 15, intensive care unit admission, urgent surgery, or emergency department death. Prevalence of anatomic criteria and their association with TCN was compared in EMS-identified versus ICD-10-coded criteria. Diagnostic performance to predict TCN was compared. Results: There were 669,795 patients analyzed. The ICD-10 coding demonstrated a greater prevalence of injury detection. Emergency medical service-identified versus ICD-10-coded anatomic criteria were less sensitive (31% vs. 59%), but more specific (91% vs. 73%) and accurate (71% vs. 68%) for predicting TCN. Emergency medical service providers demonstrated a marked reduction in false positives (9% vs. 27%) but higher rates of false negatives (69% vs. 42%) in predicting TCN from anatomic criteria. Odds of TCN were significantly greater for EMS-identified criteria (adjusted odds ratio, 4.5; 95% confidence interval, 4.46-4.58) versus ICD-10 coding (adjusted odds ratio 3.7; 95% confidence interval, 3.71-3.79). Of EMS-identified injuries, penetrating injury, flail chest, and two or more proximal long bone fractures were associated with greater TCN than ICD-10 coding. Conclusion: When evaluating the anatomic criteria, EMS demonstrate greater specificity and accuracy in predicting TCN, as well as reduced false positives compared with ICD-10 coding. Emergency medical services identification is less sensitive for anatomic criteria; however, EMS identify the most clinically significant injuries. Further study is warranted to identify the most clinically important anatomic triage criteria to improve our triage protocols. Level Of Evidence: Care management, Level IV; Prognostic, Level III. abstract_id: PUBMED:29669612 Prehospital Blood Product Administration Opportunities in Ground Transport ALS EMS - A Descriptive Study. IntroductionHemorrhage remains the major cause of preventable death after trauma. Recent data suggest that earlier blood product administration may improve outcomes. The purpose of this study was to determine whether opportunities exist for blood product transfusion by ground Emergency Medical Services (EMS). Methods: This was a single EMS agency retrospective study of ground and helicopter responses from January 1, 2011 through December 31, 2015 for adult trauma patients transported from the scene of injury who met predetermined hemodynamic (HD) parameters for potential transfusion (heart rate [HR]≥120 and/or systolic blood pressure [SBP]≤90). Results: A total of 7,900 scene trauma ground transports occurred during the study period. Of 420 patients meeting HD criteria for transfusion, 53 (12.6%) had a significant mechanism of injury (MOI). Outcome data were available for 51 patients; 17 received blood products during their emergency department (ED) resuscitation. The percentage of patients receiving blood products based upon HD criteria ranged from 1.0% (HR) to 5.9% (SBP) to 38.1% (HR+SBP). In all, 74 Helicopter EMS (HEMS) transports met HD criteria for blood transfusion, of which, 28 patients received prehospital blood transfusion. Statistically significant total patient care time differences were noted for both the HR and the SBP cohorts, with HEMS having longer time intervals; no statistically significant difference in mean total patient care time was noted in the HR+SBP cohort. Conclusions: In this study population, HD parameters alone did not predict need for ED blood product administration. Despite longer transport times, only one-third of HEMS patients meeting HD criteria for blood administration received prehospital transfusion. While one-third of ground Advanced Life Support (ALS) transport patients manifesting HD compromise received blood products in the ED, this represented 0.2% of total trauma transports over the study period. Given complex logistical issues involved in prehospital blood product administration, opportunities for ground administration appear limited within the described system. MixFM, ZielinskiMD, MyersLA, BernsKS, LukeA, StubbsJR, ZietlowSP, JenkinsDH, SztajnkrycerMD. Prehospital blood product administration opportunities in ground transport ALS EMS - a descriptive study. Prehosp Disaster Med. 2018;33(3):230-236. abstract_id: PUBMED:28463096 Border Injuries: An Analysis of Prehospital Demographics, Mechanisms, and Patterns of Injuries Encountered by USBP EMS Agents in the El Paso (Texas USA) Sector. Study Objective The aim of this study was to evaluate Emergency Medical Services (EMS), use, injury mechanisms, prehospital assessments, and injuries among those receiving aid from the United States Border Patrol (USBP) in the El Paso (Texas USA) Sector. Design: This is a time-series, retrospective analysis of all prehospital data for injuries among patients receiving care from USBP EMS on the US Mexico border in the El Paso sector from February 6, 2014 to February 6, 2016. Results: A total of 473 documented EMS encounters occurred in this two-year period and demonstrated a male gender predominance (male 63%; female 37%) with the most prominent ages between 22-40 years old. The most prevalent EMS call types were medical (55%) and trauma (42%). The most common chief complaints were an injured or painful extremity (35%) and rash (13%). The most common USBP EMS provider primary impression was traumatic injury (34%), followed by fever/infection (17%) and extremity injury (7%); however, the most common secondary impression was also extremity injury (20%). The most common mechanism of injury was fall (26%) and motor vehicle accident (MVA; 22%). The USBP EMS was the first provider on scene in 96% of the MVAs. Conclusion: The author reports on injury patterns, mechanisms, chief complaints, EMS impressions, as well as demographics of patients reporting to USBP EMS. A knowledge of these injury patterns will be useful to EMS administrators and physicians along the US Mexico border. Baker RA . Border injuries: an analysis of prehospital demographics, mechanisms, and patterns of injuries encountered by USBP EMS agents in the El Paso (Texas USA) Sector. Prehosp Disaster Med. 2017;32(4):431-436. abstract_id: PUBMED:31559807 Comparison of American guidelines for field triage and Polish criteria as qualification to a trauma center. Introduction: Trauma is the third cause of death among the general population in Poland, and the first in people aged 1-44 years. Trauma centers are hospitals dedicated to treating patients with multiple organ injuries, in a complex way that endeavours to ensure a lower mortality rate, shorter hospital stay and better outcomes if the patients are transferred to such a center. Worldwide, there are many models on how to treat a trauma patient, but them to be qualified for the procedure, the selection of potential patients is crucial. Objective: The aim of the study was to compare the Polish model for qualification to a trauma center and American Guidelines for Field Triage. Material And Methods: Retrospective analysis of medical documentation recorded between 1 January 2014 - 31 December 2014 was undertaken. The study concerned trauma patients admitted to the Emergency Department of the Regional Trauma Center at the Copernicus Memorial Hospital in Łódź, Poland. Inclusion criterion was initial diagnosis 'multiple-organ injury' among patients transported by the Emergency Medical Service (EMS). Results: In the period indicated, 3,173 patients were admitted to the Emergency Department at the Copernicus Memorial Hospital. From among them, 159 patients were included in the study. Only 13.2% of the patients fulfilled the Polish Qualification Criteria to Trauma Center in comparison to 87.4% who fulfilled the American Guidelines for Field Triage. Conclusions: Polish qualification criteria do not consider the large group of patients with severe injuries (ISS>15), but indicate patients with minimal chance of survival. Polish criteria do not consider the mechanism of injury, which is a relevant predictive indicator of severe or extremely severe injuries (ISS>15). Further studies should be undertaken to improve the qualification and treatment of trauma patients in Poland. abstract_id: PUBMED:33084560 Clinical handover from emergency medical services to the trauma team: A gap analysis. Objectives: There has been limited evaluation of handover from emergency medical services (EMS) to the trauma team. We sought to characterize these handover practices to identify areas of improvement and determine if handover standardization might be beneficial for trauma team performance. Methods: Data were prospectively collected over a nine-week period by a trained observer at a Canadian level one trauma centre. A randomized scheduled was used to capture a representative breadth of handovers. Data collected included outcome measures such as duration of handover, structure of the handover, and information shared, process measures such as questions and interruptions from the trauma team, and perceptions of the handover from nurses, trauma team leaders and EMS according to a bidirectional Likert scale. Results: 79 formal verbal handovers were observed. Information was often missing regarding airway (present 22%), breathing (54%), medications (59%), and allergies (54%). Handover structure lacked consistency beyond the order of identification and mechanism of injury. Of all questions asked, 35% were questioning previously given information. The majority of handovers (61%) involved parallel conversations between team members while EMS was speaking. There was a statistically significant disparity between the self-evaluation of EMS handovers and the perceived quality determined by nurses and trauma team leaders. Conclusions: We have identified the need to standardize handover due to poor information content, a lack of structure and active listening, information repetition, and discordant expectations between team members. These data will guide the development of a co-constructed framework integrating the perspectives of all team members. abstract_id: PUBMED:21226551 Motor vehicle intrusion alone does not predict trauma center admission or use of trauma center resources. Background: The Centers for Disease Control and Prevention (CDC) published its Guidelines for Field Triage of Injured Patients in 2009. These CDC guidelines define criteria for the triage of trauma patients to trauma centers, and include physiologic, anatomic, and high-risk mechanism-of-injury criteria. One of the mechanism criteria used for motor vehicle crashes (MVCs) is intrusion >12 inches at the occupant site or >18 inches at any site. Objective. We hypothesized that motor vehicle intrusion, as the sole criterion for transport to a trauma center, is neither sensitive nor specific for predicting which patients will utilize trauma center resources. Methods: This was a retrospective, observational study of a cohort of MVC victims transported to a single trauma center. The authors extracted MVC characteristics, patient demographics, and final diagnoses from emergency medical services (EMS) and emergency department (ED) charts. Two outcomes were examined: admission to the trauma center for a traumatic injury and use of trauma center resources, defined as in-hospital death, admission to an intensive care unit, operative intervention of any type, spinal injury, or intracranial hemorrhage. Results: During the 18-month study period, 608 patients met the inclusion criteria in our study; 36 patients met the trauma center triage criteria based solely on intrusion. The sensitivity, specificity, and positive predictive value (PPV) of intrusion were 38.9%, 92.8%, and 25.5%, respectively, for trauma center admission, and 58.3%, 91.9%, and 12.7%, respectively, for trauma center resource utilization. We conducted one post hoc adjustment to modify intrusion to entrapment. This change greatly increased the PPV of the CDC guidelines for both trauma center admissions and trauma center utilization, without any reduction in sensitivity. Conclusions: Motor vehicle intrusion alone is a poor predictor of the need for trauma center admission or trauma center resource utilization. A modest change to the CDC guidelines from intrusion to entrapment may reduce overtriage while maintaining a high sensitivity for serious injury. Answer: Yes, EMS perceived anatomic injury does predict trauma center need. According to the study reported in PUBMED:23627418, the anatomic step of the Field Triage Guidelines as determined by EMS providers is a reasonable tool for determining trauma center need. The study found that certain criteria such as flail chest, paralysis, two or more long-bone fractures, and amputation had a positive likelihood ratio (LR) greater than 5, indicating a strong association with the need for trauma center resources. However, the use of EMS perceived pelvic fracture as an indicator for trauma center need should be reevaluated, as it had a +LR less than 2. Moreover, the study in PUBMED:34016920 supports the predictive value of EMS-identified anatomic injuries for trauma center need (TCN). It found that EMS-identified anatomic criteria were less sensitive but more specific and accurate for predicting TCN compared to ICD-10 coding. EMS providers demonstrated a marked reduction in false positives but higher rates of false negatives in predicting TCN from anatomic criteria. Penetrating injury, flail chest, and two or more proximal long bone fractures were associated with greater TCN when identified by EMS than by ICD-10 coding. These findings suggest that while EMS perceived anatomic injuries are not perfect predictors of trauma center need, they are valuable indicators that can guide EMS providers in making triage decisions. Further study and refinement of the criteria may improve the accuracy and utility of EMS assessments in the field.
Instruction: Does a morning dose of Methylphenidate Retard reduce hyperkinetic symptoms in the afternoon? Abstracts: abstract_id: PUBMED:15565893 Does a morning dose of Methylphenidate Retard reduce hyperkinetic symptoms in the afternoon? Objectives: In order to treat children with Attention-deficit/Hyperactivity Disorder (ADHD) with a once-a-day stimulant several galenic approaches have been tried. The long acting methylphenidate (MPH, Medikinet-Retard) is a preparation with a two-step dynamic to release MPH (step one: acute; step two: prolonged). The efficacy of Medikinet-Retard, a new long-acting methylphenidate preparation, is analyzed based on the assessment of parents in the afternoon. Methods: In a multicenter drug treatment study (placebo controlled, randomized, double-blind) 85 children (normal intelligence, age 6 to 16 years, diagnosis of ADHD according to DSM-IV) were investigated over 4 weeks with weekly visits. Forty-three children received Medikinet-Retard and forty-two children placebo. The weekly dose titration depending on body weight and symptomatology allowed a final maximum of 60 mg. The effects on ADHD as perveived by the parents were assessed weekly with a German symptom checklist for ADHD according to DSM-IV and ICD-10 (FBB-HKS). The differences between baseline and last week of treatment were compared statistically between groups. Results: There was a large and statistically significant positive drug effect on ADHD symptomatology. The effect size of these differences was d = 1.2 (total score). Effects were found on inattention, hyperactivity and impulsity on the respective subscales. The efficacy of Medikinet-Retard was evaluated by the parents on an average as good. The rate of responders was four-times higher in the verum-group. The correlations of the changed scores in the parent ratings with the respective change scores in the teacher ratings were in the medium range. Conclusion: This is the first study with a German long-acting methylphenidate preparation (Medikinet-Retard). According to data based on parents' assessments, the drug showed very good clinical efficacy and safety in children with ADHD. Its two step galenic release of methylphenidate seems to be appropriate for a once-a-day (morning) stimulant in schoolchildren. abstract_id: PUBMED:10585981 A comparison of morning-only and morning/late afternoon Adderall to morning-only, twice-daily, and three times-daily methylphenidate in children with attention-deficit/hyperactivity disorder. Objective: 1) To compare standard twice-daily methylphenidate (MPH) dosing with a single morning dose of MPH and of Adderall during a typical school-day time period, and 2) to conduct a dose-response study of the effects of a late-afternoon (3:30 PM) dose of MPH and Adderall on evening behavior and side effects. Design: Within-subject, placebo-controlled, crossover design. Setting: Intensive summer treatment program with a comprehensive behavioral approach. Study Participants: Twenty-one children with attention-deficit/hyperactivity disorder (19 boys and 2 girls), between the ages of 6 and 12 years. Interventions: Children received, in random order with daily crossovers, each of the following conditions: 1) placebo, 2) 0.3 mg/kg of MPH received 3 times, 3) 0.3 mg/kg of MPH received twice (7:30 AM and 11:30 AM) with 0.15 mg/kg received at 3:30 PM, 4) 0.3 mg/kg of MPH received once in the morning only, 5) 0.3 mg/kg of Adderall received at 7:30 AM and at 3:30 PM, 6) 0.3 mg/kg of Adderall once in the morning with 0.15 mg/kg received at 3:30 PM, 7) 0.3 mg/kg of Adderall received in the morning only. Outcome Measures: Daily rates of behaviors in social and academic settings, and standardized ratings from counselors and teachers, were assessed for the hours between 8:00 AM and 3:30 PM (a typical school-day). Relative sizes of the medication effects were compared hourly between first daily ingestion (7:30 AM) and 4:45 PM to assess the time course of the 2 drugs. Effects of the 3:30 PM doses on functioning in the evenings at home were evaluated using parent ratings of behavioral and side effects. Results: A single morning dose of Adderall produced equivalent behavioral effects to those of MPH received twice-daily and behavioral effects of that single morning dose lasted throughout the school-day period. One morning dose of MPH was less effective than either 2 daily doses of MPH or 1 dose of Adderall, and seemed to wear off in the early to mid-afternoon. For some children a single morning dose of MPH maintained their behavior for an entire school day in the context of the summer treatment program. On parent ratings of evening behavior, 0.3 mg/kg of MPH at 3:30 PM was superior to 0.15 mg/kg at 3:30 PM, but there was no difference between the 2 doses of Adderall. Compared with placebo at 3:30 PM, only the 0.3 mg/kg dose of MPH caused significant improvement in parent ratings. In placebo versus Adderall comparisons, all doses, even the condition that consisted of Adderall in the morning and placebo at 3:30 PM, produced a significant change in evening behavior. Conclusions: The results show that, at least in the context of an intensive behavioral intervention, a single morning dose of Adderall had behavioral effects throughout an entire school day period that were equivalent to standard twice-daily MPH dosing. These results indicate that Adderall may be used as a long-acting stimulant for children for whom midday dosing is a problem. Further study including dose-response comparisons, effects in regular school settings, and direct comparisons with comparable doses of MPH and d-amphetamine will help to clarify the time course and relative advantages of Adderall. abstract_id: PUBMED:32847239 Early identification of stimulant treatment responders, partial responders and non-responders using objective measures in children and adolescents with hyperkinetic disorder. Background: The aim of this study was to evaluate stimulant medication response following a single dose of methylphenidate (MPH) in children and young people with hyperkinetic disorder using infrared motion analysis combined with a continuous performance task (QbTest system) as objective measures. The hypothesis was put forward that a moderate testdose of stimulant medication could determine a robust treatment response, partial response and non-response in relation to activity, attention and impulse control measures. Methods: The study included 44 children and young people between the ages of 7-18 years with a diagnosis of hyperkinetic disorder (F90 & F90.1). A single dose-protocol incorporated the time course effects of both immediate release MPH and extended-release MPH (Concerta XL, Equasym XL) to determine comparable peak efficacy periods post intake. Results: A robust treatment response with objective measures reverting to the population mean was found in 37 participants (84%). Three participants (7%) demonstrated a partial response to MPH and four participants (9%) were determined as non-responders due to deteriorating activity measures together with no improvements in attention and impulse control measures. Conclusion: Objective measures provide early into prescribing the opportunity to measure treatment response and monitor adverse reactions to stimulant medication. Most treatment responders demonstrated an effective response to MPH on a moderate testdose facilitating a swift and more optimal titration process. abstract_id: PUBMED:337493 Methylphenidate in hyperkinetic children: differences in dose effects on learning and social behavior. Methylphenidate (Ritalin) is widely prescribed for hyperkinetic children. This study showed a peak enchancement of learning in children after being given a dose of 0.3 milligram per kilogram of body weight, and a decrement in learning in those given larger doses; social behavior showed the most improvement in children given 1.0 milligram per kilogram. These results had been hypothesized from theoretical dose-response curves which indicate different target behaviors would improve at different doses. abstract_id: PUBMED:2780127 Dose-response effects of methylphenidate on academic performance and overt behavior in hyperactive children. In the present study, the effects of 0.3 mg/kg and 1.0 mg/kg of methylphenidate on the overt behavior and academic functioning of 12 children with an established diagnosis of attention deficit disorder with hyperactivity were evaluated. A double-blind, placebo-control, within-subject (crossover) design was used, in which each child was tested four times in each drug condition. Drug conditions were alternated on a bidaily basis and each child received two different drug conditions each day. The academic tasks were designed for evaluation of the relationship between task complexity and dose. Whereas overt behavior improved with increasing dose, academic functioning was improved with methylphenidate, but did not vary with either dose or task complexity. Also, investigated were potential carryover effects of a morning dose of methylphenidate on performance in the afternoon. Behavioral and academic improvements produced by a dose of 0.3 mg/kg in the morning were no longer evident in the afternoon, but a morning dose of 1.0 mg/kg produced behavioral improvements that were clinically and statistically discernible in the afternoon, although the academic improvements had dissipated. abstract_id: PUBMED:677278 The behavioral symptoms of hyperkinetic children who successfully responded to stimulant drug treatment. Hyperkinetic children who successfully responded to drug treatment were compared with controls on a series of objective measures of activity and attentional problems. Although group differences were found on most measures, there were no indications that these children were more homogeneous in their behavioral symptoms than the hyperkinetic population in general. abstract_id: PUBMED:21790298 Comparison of the efficacy of two different modified release methylphenidate preparations for children and adolescents with attention-deficit/hyperactivity disorder in a natural setting: comparison of the efficacy of Medikinet(®) retard and Concerta(®)--a randomized, controlled, double-blind multicenter clinical crossover trial. Objective: The comparison of the efficacy of Medikinet(®) retard and Concerta(®) trial was a multisite, randomized, double-blind, crossover trial that aimed at comparing the effects of two different modified release methylphenidate preparations (Medikinet retard: 50% immediate release (IR); Concerta: 22% IR) in a natural setting across the day in 113 randomized children and adolescents with attention-deficit/hyperactivity disorder (age range 6-16 years). The duration of the study per patient was 3 weeks. Methods: The primary outcome variable was the German version of the "Swanson, Kotkin, Agler, M-Flynn, and Pelham scale" in the first 3 hours of school as assessed by teachers. Results: Medikinet retard with a higher IR component than Concerta (and an equivalent daily dose) was superior to Concerta (p=0.0009), and Medikinet retard with similar IR components in the morning as Concerta (but a lower daily dose) was noninferior to Concerta with regard to the primary outcome. Further, exploratory analyses on teacher and parent ratings on attention-deficit/hyperactivity disorder and on externalizing symptoms during the day revealed no evidence for the superiority of Concerta over Medikinet retard in an equivalent daily dosage throughout the day. Conclusion: Children and adolescents may be treated with a lower daily dose of Medikinet retard (which has a similar IR component as Concerta) without resulting in a clinically relevant worse effect during school time. abstract_id: PUBMED:29172680 Efficacy and Safety of HLD200, Delayed-Release and Extended-Release Methylphenidate, in Children with Attention-Deficit/Hyperactivity Disorder. Objective: Evening-dosed HLD200 is a delayed-release and extended-release methylphenidate (DR/ER-MPH) formulation consisting of uniform, dual-layered microbeads with an inner drug-loaded core. DR/ER-MPH is designed to delay the initial release of drug by 8-10 hours, and thereafter, provide a controlled, extended drug release to target onset of effect upon awakening that lasts into the evening. This phase 3 study evaluated the safety and efficacy of DR/ER-MPH on symptoms and temporal at-home functional impairment in children with attention-deficit/hyperactivity disorder (ADHD). Methods: This 3-week, randomized, double-blind, multicenter, placebo-controlled, parallel-group, forced-dose titration trial evaluated DR/ER-MPH (40-80 mg/day) in children aged 6-12 years with ADHD. Primary efficacy endpoint was the ADHD rating scale-IV (ADHD-RS-IV), and the key secondary endpoints were the Before-School Functioning Questionnaire (BSFQ), and Parent Rating of Evening and Morning Behavior-Revised, morning (PREMB-R AM) and evening (PREMB-R PM). Safety measures included spontaneously reported treatment-emergent adverse events (TEAEs) and two TEAEs of special interest, appetite suppression and insomnia (with direct questioning on sleep disturbance). Results: One hundred sixty-one participants were included in the intent-to-treat population (DR/ER-MPH, n = 81; placebo, n = 80). After 3 weeks, DR/ER-MPH achieved significant improvements versus placebo in ADHD symptoms (least-squares [LS] mean ADHD-RS-IV: 24.1 vs. 31.2; p = 0.002), and at-home early morning (LS mean BSFQ: 18.7 vs. 28.4; p < 0.001; LS mean PREMB-R AM: 2.1 vs. 3.6; p < 0.001) and late afternoon/evening (LS mean PREMB-R PM: 9.4 vs. 12.2; p = 0.002) functional impairment. Commonly reported TEAEs (≥10%) were insomnia and decreased appetite. Conclusions: DR/ER-MPH was generally well tolerated and demonstrated significant improvements versus placebo in ADHD symptoms and at-home functional impairments in the early morning, late afternoon, and evening in children with ADHD. abstract_id: PUBMED:7630692 Effects of late-afternoon methylphenidate administration on behavior and sleep in attention-deficit hyperactivity disorder. Objective: This study evaluated the effects on behavior and sleep of methylphenidate (MPH) administered at 4 PM to children with attention-deficit hyperactivity disorder (ADHD). Methodology: Twelve children admitted to a child psychiatric inpatient service with ADHD participated in a double-blind, crossover study in which they received a 4 PM dose of either 15 mg of MPH, 10 mg of MPH, or a placebo in random order for 12 consecutive days. Ratings of behavior, including ADHD symptoms, pertaining to the period from dose administration until sleep onset, were supplied nightly by hospital staff. Sleep latency and sleep adequacy were also assessed for each night. Results: MPH resulted in markedly improved behavioral control compared with placebo; there was no difference between 15-mg and 10-mg MPH doses. MPH did not alter sleep latencies observed with the placebo. Children were more often rated as less tired on awakening after nights that they received 10 mg of MPH compared with 15 mg of MPH and the placebo. Weight loss was apparent among 83% of the patients, but dinner intake did not vary with third-dose condition. Conclusions: Morning and noon administration of stimulants to children with ADHD is a near-universal practice, but many clinicians avoid a third, late-afternoon administration for fear of inducing insomnia. This study's findings show that children with ADHD derive substantial symptom reduction from MPH administered in late afternoon, with no untoward effects on sleep. Therefore, three-times-a-day dosing should be considered for those children exhibiting ADHD symptoms in the evening. Adverse effects on sleep latency were not apparent in the sample overall. Nonetheless, monitoring for possible aggravation of sleep problems and weight loss remains sound treatment practice. abstract_id: PUBMED:18157763 Methylphenidate of retard forms in children and adolescents with ADHD - an overview As yet, stimulants remain the preferred means of treating attention-deficit/hyperactivity disorder (ADHD) pharmacologically. They are indicated when measures based on behavioural therapy or psychoeducation alone are not sufficient. How-ever, the period of effectiveness of immediate release stimulants is often not satisfactory. A variety of retarded forms of methylphenidate have now been developed and approved for the German market. This paper presents an overview of clinical studies on effectiveness, period of effectiveness and the profile of side effects of different forms stimulants available in Germany. In clinical practice, the new retard products represent effective alternatives. There is an advantage in administering this drug in a once-daily single dose. At the same time, the side effects that are caused by an extended period of effectiveness have to be studied in detail. A more precise adaptation to the daily obligations and needs of children and adolescents is needed. Possibly, a combination with unretareded MPH might lead to a better effect/side-effect profile. Answer: Yes, a morning dose of Methylphenidate Retard (Medikinet-Retard) has been shown to reduce hyperkinetic symptoms in the afternoon. In a multicenter drug treatment study, Medikinet-Retard demonstrated a large and statistically significant positive effect on ADHD symptomatology, including inattention, hyperactivity, and impulsivity, based on parents' assessments in the afternoon. The efficacy of Medikinet-Retard was evaluated by the parents as good, and the rate of responders was four times higher in the verum-group compared to the placebo group. The study concluded that the two-step galenic release of methylphenidate in Medikinet-Retard is appropriate for once-a-day (morning) stimulant administration in schoolchildren, indicating its effectiveness in reducing symptoms into the afternoon (PUBMED:15565893).
Instruction: Do the Health of the Nation Outcome Scales measure outcome? Abstracts: abstract_id: PUBMED:25520001 The structure and reliability of the Health of the Nation Outcome Scales. Objective: The Health of the Nation Outcome Scales (HoNOS) has been suggested as an outcome measure for the evaluation of mental health services in several countries; however, the scoring, reliability and structure of HoNOS are all ambiguous. Methods: A total of 80,161 completed HoNOS forms were checked for internal reliability and structural integrity. Parallel analysis was used to investigate the likelihood that HoNOS consists of 12 independent scales. The reliability of the scale was investigated using ordinal reliability analysis. Confirmatory factor analysis was used to investigate the factor structure. Results: Parallel analysis suggested that HoNOS had three underlying factors and could not be considered as 12 independent scales. The 12-item scale had good reliability. Confirmatory factor analysis found that a one factor, four-item scale produced the most acceptable fit. Conclusions: HoNOS should not be considered as 12 independent scales. Although the four-item scale is acceptable psychometrically, it has practical problems, as it focuses only on the social aspects of outcome measurement. It is possible that there are other factor structures that would encompass more items, but any study of these should test for invariance between groups of service users. abstract_id: PUBMED:33032447 Assessing the severity of psychiatric disorders using the Health of the Nation Outcome Scales: An equipercentile linking analysis. Objective: The Health of the Nation Outcome Scales was developed as an overall measure of mental health, applicable to the complete range of psychiatric disorders. Meanwhile, it is a benchmark tool for service providers and is also used for the allocation of costs and funding. The ability of the Health of the Nation Outcome Scales to assess and differentiate the severity of psychiatric disorders is largely unknown, as it is the interpretation of a change in score. We aim to establish Health of the Nation Outcome Scales cut-off and benchmark values for severity and improvement - respectively change, using equipercentile linking to the Clinical Global Impression scales. Methods: In a clinical sample of 30,616 individuals with a psychiatric disorder, we used a multivariate regression analysis to determine the correlation between the scales and possible confounders. We used an equipercentile linking analysis of the Clinical Global Impressions severity scale with the Health of the Nation Outcome Scales sum score to establish cut-off values for severity. The linking of the Health of the Nation Outcome Scales sum score difference and the percentage of change to the Clinical Global Impression improvement scale determined benchmark values for change (i.e. improvement or deterioration). Results: The Health of the Nation Outcome Scales and Clinical Global Impression scales showed a Spearman correlation of 0.38 (p < 0.000). Clinical Global Impression-Severity: 'borderline-ill' corresponded to Health of the Nation Outcome Scales score 3-5; 'mildly ill' to 6-10; 'moderately ill' to 11-16; 'markedly ill' to 17-25; 'severely ill' to 26-35; and 'extremely ill' to a score ⩾36. The Spearman correlation between the percentage change of the Health of the Nation Outcome Scales was 0.39 (p > 0.000); Clinical Global Impression-Improvement: 'minimally improved' corresponded to Health of the Nation Outcome Scales reduction of 4 points or 9%; 'much-improved' to 12 points or 48%; 'very-much-improved' to ⩾20 points or ⩾90%. Clinical Global Impression-Improvement: 'minimally worse' corresponded to an increase of 6 points or 25%; 'much-worse' to 12 points or 54%; and finally, 'very-much-worse' to >18 points or ⩾82%. Conclusion: Our results allow for the comparison of severity and response to treatment of patients with a psychiatric disorder, independent of diagnosis. abstract_id: PUBMED:25135434 An evaluation of the independence of the Health of the Nation Outcome Scales. Objective: In this study the independence of the scales/items in the Health of the Nation Outcome Scales (HoNOS) was empirically investigated. Method: Parallel analysis using random column permutation and bootstrapping were used to compare the factor structure, intercorrelations and Cronbach's alpha from the original HoNOS study and also recently collected HoNOS ratings. Random permutation ensures that the data has the same distributions as the data it is based on, but that the variables are now independent. Results: It is shown that both of the real HoNOS data sets are significantly different to the independent items data in many ways. An examination of fit statistics from confirmatory factor analysis is also used to show that the independence model is a very poor fit to the data. Conclusions: It is clear that the 12 HoNOS scales are unlikely to be independent. There is a need for more research to clarify the appropriate structure of HoNOS, and also to consider whether some of the items need either replacing or augmenting. abstract_id: PUBMED:11761636 Validation of the French version of the Health of the Nation Outcome Scales (HoNOS-F) Objective: This study reports the validation of the French version of the Health of the Nation Outcome Scales (HoNOS-F), a questionnaire developed to measure health and social functioning of people with mental illness. Method: Once each statement was tested for readability, the scale was administered to 3 samples of people suffering from severe mental disorders to estimate its reliability and validity. More specifically, tests were run to establish the internal consistency, the stability, and the interrater reliability of the HoNOS-F. Confirmative factor analyses and mean differences according to age, sex, and diagnosis were also conducted to evaluate respectively construct- and criterion-related validity. Results: Coefficients obtained from the various tests show that the scale is reliable only when the total score is used. The confirmatory factor analyses indicate that the observed data do not fit the 2 proposed models, a unidimensional model and a 4-dimension model. However, the scale did show criterion-related validity. Conclusions: Results of the present study converge with those obtained on the original widely used English version. Therefore, we suggest that clinicians use the questionnaire by referring to each item separately and by considering such patient characteristics as age, sex, and diagnosis. We also suggest that researchers wishing to evaluate health and social functioning of persons with serious mental disorders use the total score. Caution is, however, warranted when interpreting the total score for a French-speaking population, because the factorial solution 1-dimension model did not prove to be satisfactory. abstract_id: PUBMED:10616602 Validation of the Health of the Nation Outcome Scales. Background: The Health of the Nation Outcome Scales (HoNOS) were developed to assess the outcome of severe mental disorders in relation to the Health of the Nation Strategy for England. Aims: To validate the HoNOS. Method: One hundred and fifteen patients were assessed by keyworkers using HoNOS, and by research workers using SCAN, SBS, and SRPS, and completing HoNOS in the light of this additional material. These assessments were repeated after 6 weeks. Results: The performance of HoNOS in the hands of keyworkers was generally poor, in relation both to the research workers' rating of HoNOS and to the criterion instruments. Performance was particularly poor when the change in scores was used as a measure of outcome. Conclusions: There are serious problems in using HoNOS as a routine measure of clinical status in busy psychiatric services. Its performance is probably related to the training and experience of keyworkers. Sequential ratings are not a good method for assessing outcome. Managers and planners should be cautious in adopting HoNOS, but it is worthy consideration in developing a suite of locally agreed outcome measures. abstract_id: PUBMED:27790607 The Use of the Health of the Nation Outcome Scales for Assessing Functional Change in Treatment Outcome Monitoring of Patients with Chronic Schizophrenia. Background: Schizophrenia is a severe mental disorder that is characterized not only by symptomatic severity but also by high levels of functional impairment. An evaluation of clinical outcome in treatment of schizophrenia should therefore target not only assessing symptom change but also alterations in functioning. This study aimed to investigate whether there is an agreement between functional- and symptom-based outcomes in a clinical sample of admissions with chronic forms of schizophrenia. Methods: A full 3-year cohort of consecutive inpatient admissions for schizophrenia (N = 205) was clinically rated with the Positive and Negative Symptom Scale (PANSS) and the Health of the Nation Outcome Scales (HoNOS) as measures of functioning at the time of admission and discharge. The sample was stratified twofold: first, according to the degree of PANSS symptom improvement during treatment with the sample being divided into three treatment response groups: non-response, low response, and high response. Second, achievement of remission was defined using the Remission in Schizophrenia Working Group criteria based on selected PANSS symptoms. Repeated measures analyses were used to compare the change of HoNOS scores over time across groups. Results: More than a half of all admissions achieved a symptom reduction of at least 20% during treatment and around one quarter achieved remission at discharge. Similarly, HoNOS scores improved significantly between admission and discharge. Interaction analyses indicated higher functional improvements to be associated with increasing levels of treatment response. Conclusion: Functional improvement in individuals treated for schizophrenia was linked to a better clinical outcome, which implies a functional association. Thus, improvement of functioning represents an important therapeutic target in the treatment of schizophrenia. abstract_id: PUBMED:16313678 A review of the psychometric properties of the Health of the Nation Outcome Scales (HoNOS) family of measures. Background: The Health of the Nation Outcome Scales was developed to routinely measure outcomes for adults with mental illness. Comparable instruments were also developed for children and adolescents (the Health of the Nation Outcome Scales for Children and Adolescents) and older people (the Health of the Nation Outcome Scales 65+). All three are being widely used as outcome measures in the United Kingdom, Australia and New Zealand. There is, however, no comprehensive review of these instruments. This paper fills this gap by reviewing the psychometric properties of each. Method: Articles and reports relating to the instruments were retrieved, and their findings synthesised to assess the instruments' validity (content, construct, concurrent, predictive), reliability (test-retest, inter-rater), sensitivity to change, and feasibility/utility. Results: Mostly, the instruments perform adequately or better on most dimensions, although some of their psychometric properties warrant closer examination. Conclusion: Collectively, the Health of the Nation Outcome Scales family of measures can assess outcomes for different groups on a range of mental health-related constructs, and can be regarded as appropriate for routinely monitoring outcomes. abstract_id: PUBMED:33883058 The interrater reliability of a routine outcome measure for infants and pre-schoolers aged under 48 months: Health of the Nation Outcome Scales for Infants. Background: A review of Australian mental health services identified a gap in routine outcome measures addressing social, emotional and behavioural domains for pre-schoolers and infants. The Child and Adolescent Mental Health Information Development Expert Advisory Panel Working Group developed the Health of the Nation Outcome Scales for Infants (HoNOSI), a clinician-reported routine outcome measure for use with those aged under 4 years. Prior psychometric testing showed that the HoNOSI was considered to show face validity, and that it met the standards for concurrent validity and internal consistency. Aims: We aimed to investigate the interrater reliability of the HoNOSI. Method: Forty-five infant mental health clinicians completed HoNOSI ratings on a set of five case vignettes. Results: Quadratic weighted kappa interrater reliability estimates showed the HoNOSI to have Almost Perfect interrater reliability for the HoNOSI total score. Of the 15 scales, one had Moderate, seven had Substantial and seven had Almost Perfect interrater reliability. Ten of the fifteen scales and the total score exceeded the COnsensus-based Standards for the Selection of Health Measurement INstruments criteria for interrater reliability (κw ≥ 0.7). Conclusions: There has been a clear need for a routine outcome measure for use with infants and pre-schoolers. This study provides evidence of interrater reliability. The current findings, combined with the face and concurrent validity studies, support further examination of HoNOSI in real-world settings. abstract_id: PUBMED:22225572 Validation of the Health of the Nation Outcome Scales as a routine measure of outcome in early intervention programmes. Aim: So far, no study has assessed the validity of the Health of the Nation Outcome Scales (HoNOS) in patients enrolled in early intervention programmes, nor has any study evaluated the validity of the HoNOS in people at ultra high-risk (UHR) of psychosis. This study set out to assess the validity and reliability of the HoNOS as a measure of outcome in the patients enrolled in an early intervention programme. Methods: The concurrent, discriminant and predictive validity, and the reliability of the HoNOS as a measure of outcome in an early intervention programe were assessed in 87 first-episode psychosis (FEP) patients, and in 81 patients at UHR of psychosis. Results: Reliability indexes were good in the FEP sample, and less good in the UHR sample. HoNOS total scores differentiated between FEP and UHR patients, and the HoNOS subscales proved able to assess a specific profile of symptoms in the two samples, demonstrating a helpful adjunctive measure of health status without complete overlap with other scales. Sensitivity to change was also very good, again with differences between FEP and UHR patients. HoNOS scores at intake did not predict failure to attain remission in FEP patients. There were too few cases of transition to psychosis (n = 2) to assess predictive validity of HoNOS in the UHR sample. Conclusion: HoNOS possesses satisfactory sensitivity and validity to be used in the routine assessment in early intervention programmes. abstract_id: PUBMED:20333952 The Health of Nation Outcome Scale as clinical outcome indicator in elderly psychiatry The Health of the Nation Outcome Scales65+(HoNOS65+) seems to be a useful instrument for effect measuring in elderly psychiatry. The instrument is implemented as an outcome measure within a psychiatric unit for older persons. From 244 patients a repeated assessment is completed: at the start of the treatment and at discharge. Our goal is to determine the sensitivity of change of the Health of the Nation Outcome Scales65+ for patients who are in treatment for a limited period. The sensitivity of change is examined for outpatients and inpatients and for four diagnostic groups (mood disorders, organic disorders, psychotic disorders and others) according to their DSM-IV diagnoses. Comparisons between first and second assessment were done using pairwise t-tests. Inpatients as well as outpatients showed a significant change between intake and discharge. This also holds for all four diagnostic groups. The conclusion is that the HoNOS 65+ is sensitive to change and meets the criteria for a clinical outcome indicator in elderly psychiatric patients. Answer: The Health of the Nation Outcome Scales (HoNOS) are designed to measure the outcome of mental health services and have been used as an overall measure of mental health, applicable to a wide range of psychiatric disorders (PUBMED:33032447). The scales have been implemented as outcome measures in various settings, including adult mental health services, early intervention programs, and elderly psychiatry (PUBMED:10616602, PUBMED:22225572, PUBMED:20333952). Studies have shown that HoNOS can be a useful tool for assessing functional change and treatment outcomes in patients with chronic schizophrenia, indicating that functional improvement is linked to better clinical outcomes (PUBMED:27790607). Additionally, the HoNOS65+ has been found to be sensitive to change and meets the criteria for a clinical outcome indicator in elderly psychiatric patients (PUBMED:20333952). However, the structure and reliability of HoNOS have been questioned. Research has suggested that HoNOS should not be considered as 12 independent scales due to its underlying factor structure, which seems to consist of fewer factors (PUBMED:25520001, PUBMED:25135434). Moreover, the performance of HoNOS in the hands of keyworkers was found to be generally poor, especially when change in scores was used as a measure of outcome (PUBMED:10616602). This suggests that the training and experience of the individuals administering HoNOS can significantly affect its reliability. Despite these concerns, the HoNOS family of measures has been widely used and is considered appropriate for routinely monitoring outcomes, performing adequately on most dimensions of validity and reliability (PUBMED:16313678). The scales have also been validated in different languages and contexts, such as the French version (HoNOS-F), although caution is advised when interpreting the total score due to unsatisfactory factorial solutions (PUBMED:11761636). In conclusion, the Health of the Nation Outcome Scales do measure outcomes in mental health settings, but their reliability and the interpretation of scores can be influenced by the factor structure, the training of the assessors, and the context in which they are used. Further research is needed to clarify the appropriate structure of HoNOS and to consider potential modifications to improve its psychometric properties.
Instruction: Can plasma bile salt, triglycerides, and apoA-V levels predict liver regeneration? Abstracts: abstract_id: PUBMED:22955952 Can plasma bile salt, triglycerides, and apoA-V levels predict liver regeneration? Background: Preoperative portal vein embolization (PVE) is used to increase the future remnant liver (FRL) in patients requiring extensive liver resection. Computed tomography (CT) volumetry, performed not earlier than 3-6 weeks after PVE, is commonly employed to assess hypertrophy of the FRL following PVE. Early parameters to predict effective hypertrophy are therefore desirable. The aim of the present study was to assess plasma bile salt levels, triglycerides (TG), and apoA-V in the prediction of the hypertrophy response during liver regeneration. Methods: Serum bile salt, TG, and apoA-V levels were determined in 20 patients with colorectal metastases before PVE, and 5 h, 1, and 21 days after PVE, as well as prior to and after (day 1-7, and day 21) subsequent liver resection. These parameters were correlated with liver volume as measured by CT volumetry (%FRL-V), and liver function was determined by technetium-labeled mebrofenin hepatobiliary scintigraphy using single photon emission computed tomography. Results: Triglyceride levels at baseline correlate with volume increase of the future remnant liver (FRL-V) post-PVE. Also, bile salts and TG 5 h after PVE positively correlated with the increase in FRL volume (r=0.672, p=0.024; r=0.620, p=0.042, resp.) and liver function after 3 weeks (for bile salts r=0.640, p=0.046). Following liver surgery, TG levels at 5 h and 1 day after resection were associated with liver remnant volume after 3 months (r=0.921, p=0.026 and r=0.981, p=0.019, resp). Plasma apoA-V was increased during liver regeneration. Conclusions: Bile salt and TG levels at 5 h after PVE/resection are significant early predictors of liver volume and functional increase. It is suggested that these parameters can be used for early timing of volume assessment and resection after PVE. abstract_id: PUBMED:23511381 Alcohol and plasma triglycerides. Purpose Of Review: This study reviews recent developments concerning the effects of alcohol on plasma triglycerides. The focus will be on population, intervention and metabolic studies with respect to alcohol and plasma triglycerides. Recent Findings: Alcohol consumption and fat ingestion are closely associated and stimulated by each other via hypothalamic signals and by an elevated cephalic response. A J-shaped relationship between alcohol intake and plasma triglycerides has been described. A normal body weight, polyphenols in red wine and specific polymorphisms of the apolipoprotein A-V and apolipoprotein C-III genes may protect against alcohol-associated hypertriglyceridemia. In contrast, obesity exaggerates alcohol-associated hypertriglyceridemia and therefore the risk of pancreatitis. Summary: High alcohol intake remains harmful since it is associated with elevated plasma triglycerides, but also with cardiovascular disease, alcoholic fatty liver disease and the development of pancreatitis. Alcohol-induced hypertriglyceridemia is due to increased very-low-density lipoprotein secretion, impaired lipolysis and increased free fatty acid fluxes from adipose tissue to the liver. However, light to moderate alcohol consumption may be associated with decreased plasma triglycerides, probably determined by the type of alcoholic beverage consumed, genetic polymorphisms and lifestyle factors. Nevertheless, patients should be advised to reduce or stop alcohol consumption in case of hypertriglyceridemia. abstract_id: PUBMED:16752169 Plasma apolipoprotein A5 and triglycerides in type 2 diabetes. Aims/hypothesis: Variation in the human apolipoprotein (APO) A5 gene (APOA5) is associated with elevated plasma triglycerides. However, data on the exact role of plasma concentrations of APOA5 in human triglyceride homeostasis are lacking. In the present study, we estimated plasma APOA5 levels in patients with type 2 diabetes at baseline and during atorvastatin treatment, a lipid-lowering treatment that results in a reduction in plasma triglycerides and APOC3. Subjects, Materials And Methods: Plasma APOA5 concentration was measured by ELISA in 215 subjects with type 2 diabetes, who were taken from the Diabetes Atorvastatin Lipid-lowering Intervention (DALI) study, a 30-week randomised, double-blind, placebo-controlled study, and given atorvastatin 10 mg or 80 mg daily. Results: At baseline, average plasma APOA5 concentration was 25.7+/-15.6 mug/100 ml. Plasma APOA5 (R (s)=0.40), APOC3 (R (s)=0.72) and APOE (R (s)=0.45) were positively correlated with plasma triglyceride levels (all p<0.001). In multiple linear regression analysis, adjusted for age and sex, the variation in plasma triglycerides was explained mostly by APOC3 (52%) and only to a small extent by APOA5 (6%) and APOE (1%). Atorvastatin treatment decreased plasma triglycerides, APOA5, APOC3 and APOE (all p<0.0001). After treatment, APOC3 remained the major determinant of plasma triglyceride levels (59%), while the contributions of APOA5 and APOE were insignificant (2 and 3%). Conclusions/interpretation: Our findings reveal a positive association between plasma APOA5 and triglycerides in patients with type 2 diabetes. Treatment with atorvastatin decreased plasma APOA5, APOC3, APOE and triglycerides. In contrast to APOC3, APOA5 is not a major determinant of triglyceride metabolism in these patients. abstract_id: PUBMED:32801009 Genetic Variants Associated With Increased Plasma Levels of Triglycerides, via Effects on the Lipoprotein Lipase Pathway, Increase Risk of Acute Pancreatitis. Background & Aims: Almost one third of adults in the West have increased plasma levels of triglycerides. Even mild to moderate hypertriglyceridemia (2-10 mmol/L or 177-886 mg/dL) is associated with an increased risk of acute pancreatitis. However, it is not clear whether hypertriglyceridemia is a cause or result of acute pancreatitis. Lipoprotein lipase degrades plasma triglycerides. Variants in LPL, APOA5, APOC3, ANGPTL3, and ANGPTL4, which regulate the lipoprotein lipase pathway, result in increased or reduced plasma triglyceride levels. We investigated associations between these variants and acute pancreatitis in a study of the general population. Methods: In a prospective cohort study, men and women randomly selected from the area of Copenhagen were invited to complete a questionnaire, undergo a physical examination, and provide blood samples for biochemical and genetic analyses, from 2003 through 2015. We obtained triglyceride measurements from 117,427 participants. We examined for 15 genetic variants that are associated with lipoprotein lipase function in DNA samples from 102,888 participants and analyzed data from 117,427 participants in observational analyses. Diagnoses of acute pancreatitis (970 diagnoses among participants in the genetic analysis and 527 among participants in the observational study) were obtained from Danish registries. We performed a 1-sample Mendelian randomization analysis in which specific variants were used as markers of the plasma level of triglycerides to determine the association between the plasma level of triglyceride and acute pancreatitis. We calculated unweighted, internally weighted, and externally weighted allele scores for each participant by adding numbers of triglyceride-increasing alleles. Results: The highest genetic allele score correlated with a higher plasma level of triglycerides of 0.54 mmol/L (48 mg/dL). Among participants with the highest vs the lowest genetic allele score, the odds ratio for acute pancreatitis was 1.55 (95% CI, 1.08-2.23). Using instrumental variable analysis, integrating the effect of genotype on both triglycerides levels and risk of acute pancreatitis, we associated higher unweighted allele scores with an increased risk of acute pancreatitis (odds ratio [OR], 1.76; 95% CI, 1.16-2.65), as well as internally weighted higher allele scores (OR, 1.41; 95% CI, 1.01-1.97) and externally weighted higher allele scores (OR, 1.44; 95% CI, 1.01-2.04). Every 1 mmol/L (89 mg/dL) increase in triglycerides was observationally associated with an increase in OR of 1.09 (95% CI, 1.05-1.14) after multivariable adjustment. Conclusions: Based on an analysis of individuals with genetic variants associated with an increased level of triglycerides, via their effects on the lipoprotein lipase pathway, we associated an increased plasma levels of triglycerides with an increased risk of acute pancreatitis. Strategies to reduce plasma levels of triglycerides, by increasing lipoprotein lipase function, might be developed for prevention of acute pancreatitis. abstract_id: PUBMED:26505974 Apolipoprotein A-V is present in bile and its secretion increases with lipid absorption in Sprague-Dawley rats. Apolipoprotein (apo) A-V is a protein synthesized only in the liver that dramatically modulates plasma triglyceride levels. Recent studies suggest a novel role for hepatic apoA-V in regulating the absorption of dietary triglycerides, but its mode of action on the gut remains unknown. The aim of this study was to test for apoA-V in bile and to determine whether its secretion is regulated by dietary lipids. After an overnight recovery, adult male Sprague-Dawley bile fistula rats indeed secreted apoA-V into bile at a constant rate under fasting conditions. An intraduodenal bolus of intralipid (n = 12) increased the biliary secretion of apoA-V but not of other apolipoproteins, such as A-I, A-IV, B, and E. The lipid-induced increase of biliary apoA-V was abolished under conditions of poor lymphatic lipid transport, suggesting that the stimulation is regulated by the magnitude of lipids associated with chylomicrons transported into lymph. We also studied the secretion of apoA-V into bile immediately following bile duct cannulation. Biliary apoA-V increased over time (∼6-fold increase at hour 16, n = 8) but the secretions of other apolipoproteins remained constant. Replenishing luminal phosphatidylcholine and taurocholate (n = 9) only enhanced apoA-V secretion in bile, suggesting that the increase was not due to depletion of phospholipids or bile salts. This is the first study to demonstrate that apoA-V is secreted into bile, introducing a potential route of delivery of hepatic apoA-V to the gut lumen. Our study also reveals the uniqueness of apoA-V secretion into bile that is regulated by mechanisms different from other apolipoproteins. abstract_id: PUBMED:16844773 Benefit of farnesoid X receptor inhibition in obstructive cholestasis. The nuclear hormone receptors farnesoid X receptor (FXR) and pregnane X receptor have been implicated in regulating bile acid, lipid, carbohydrate, and xenobiotic metabolism. Bile duct ligation was used to increase endogenous bile acids and evaluate the roles of these receptors in modulating cholestatic liver injury. FXR knockout (KO) mice were found to be protected from obstructive cholestasis. Concurrent deletion of FXR also could ameliorate an increase in liver injury that is seen usually in pregnane X receptor KO mice with cholestasis. Mechanisms proposed for this protection include the lowering of bile acid concentrations and altered expression of the hepatic transporters Mdr1, Mdr2, BSEP, and Mrp4. FXR KO mice also exhibit a biphasic lipid profile after bile duct ligation, with an increase in high-density lipoprotein cholesterol and triglycerides by day 6. The expression of apolipoprotein AV was reduced in these mice, implicating FXR in triglyceride regulation. We show that FXR modulates cholestasis by controlling bile acids within the hepatocyte and is involved in bile acid synthesis, bile excretion via BSEP, and serum export via Mrp4. This study strongly suggests a potential clinical role for FXR antagonists in the treatment of obstructive cholestatic liver disorders. abstract_id: PUBMED:23000317 The paradox of ApoA5 modulation of triglycerides: evidence from clinical and basic research. Unlabelled: Apolipoprotein A5 (ApoA5) is a key regulator of plasma triglycerides (TG), even though its plasma concentration is very low compared to other known apoproteins. Over the years, researchers have attempted to elucidate the molecular mechanisms by which ApoA5 regulates plasma TG in vivo. Though still under debate, two theories broadly describe how ApoA5 modulates TG levels: (i) ApoA5 enhances the catabolism of TG-rich lipoproteins and (ii) it inhibits the rate of production of very low-density lipoprotein (VLDL), the major carrier of TGs. This review will summarize the basic and clinical studies that describe the importance of ApoA5 in TG metabolism. Population studies conducted in various countries have demonstrated an association between single nucleotide polymorphisms (SNPs) in ApoA5 and the increased risk to cardiovascular disease and metabolic syndrome (including diabetes and obesity). ApoA5 is also highly expressed during liver regeneration and is an acute phase protein associated with HDL, which is independent of its effects on TG metabolism. Conclusion: Despite considerable evidences available from clinical and basic research studies on the role of ApoA5 in TG metabolism and its indirect link to metabolic diseases, additional investigations are needed to understand the paradoxical role of this important apoprotein is modulated by both diet and its polymorphism variants. abstract_id: PUBMED:16401313 Interaction of the G182C polymorphism in the APOA5 gene and fasting plasma glucose on plasma triglycerides in Type 2 diabetic subjects. Aim: Apolipoprotein AV (APOA5) is an important determinant of plasma triglyceride concentration. This study aimed to investigate the relationship of an amino acid substitution at position 182 (G182C) of the apolipoprotein AV (APOA5) gene with triglyceride concentration in a Taiwanese population. Methods: This study enrolled two cohorts: non-diabetic subjects (112 males and 89 females) aged 50.3+/-11.0 years (mean+/-sd) and diabetic subjects (106 males and 96 females) aged 62.1+/-10.3 years. The relationship between the G182C polymorphism (rs 2075291) and plasma triglycerides was examined. Demographic and metabolic parameters including age, sex, body mass index, fasting plasma glucose and total cholesterol were also obtained. Results: The G182C polymorphism was a determinant of plasma triglycerides in both non-diabetic (P=0.022) and diabetic (P=0.003) groups, independent of age, gender, fasting plasma glucose, body mass index and total cholesterol. In the diabetic group, this genetic polymorphism interacts significantly (P=0.032) with fasting plasma glucose concentration on plasma triglycerides after adjustment for age, sex, body mass index and total cholesterol. Conclusions: In conclusion, the G182C polymorphism of the APOA5 gene affects plasma triglycerides in both non-diabetic and diabetic populations. The observed interaction of gene and glycaemic control further indicates a multifactorial nature of clinical phenotypes in subjects with Type 2 diabetes. abstract_id: PUBMED:11577099 Apolipoprotein A-V: a novel apolipoprotein associated with an early phase of liver regeneration. Liver regeneration in response to various forms of liver injury is a complex process, which ultimately results in restoration of the original liver mass and function. Because the underlying mechanisms that initiate this response are still incompletely defined, this study was aimed to identify novel factors. Liver genes that were up-regulated 6 h after 70% hepatectomy (PHx) in the rat were selected by cDNA subtractive hybridization. Besides known genes associated with cell proliferation, several novel genes were isolated. The novel gene that was most up-regulated was further studied. Its mRNA showed a liver-specific expression and encoded a protein comprising 367 amino acids. The mouse and human cDNA analogues were also isolated and appeared to be highly homologous. The human gene analogue was located at an apolipoprotein gene cluster on chromosome 11q23. The protein encoded by this gene had appreciable homology with apolipoproteins A-I and A-IV. Maximal expression of the gene in the rat liver and its gene product in rat plasma was observed 6 h after PHx. The protein was present in plasma fractions containing high density lipoprotein particles. Therefore, we have identified a novel apolipoprotein, designated apolipoprotein A-V, that is associated with an early phase of liver regeneration. abstract_id: PUBMED:16750165 Overexpression of apolipoprotein AV in the liver reduces plasma triglyceride and cholesterol but not HDL in ApoE deficient mice. It has been shown that adenovirus-mediated overexpression of human ApoAV (hApoAV) in C57BL/6 mice results in decreased plasma triglyceride (TG) and total cholesterol (TC) levels with a major reduction occurring in the HDL fraction. In order to study the effect of ApoAV on hypercholesterolemic mice, an adenoviral vector expressing hApoAV was constructed and injected into ApoE deficient mice. High levels of hApoAV mRNA in the liver and ApoAV proteins in the liver and plasma were detected. The treatment reduced plasma TG levels by 50% and 75%, and TC levels by 45% and 58% at day 3 and 7, respectively, after treatment as compared with a control group treated with Ad-hAP (human alkaline phosphatase). Plasma HDL-C levels remained unaltered, which were different from normolipidemic mice. These findings suggest that ApoAV might serve as a therapeutic agent for hyperlipidemic disorder. Answer: Yes, plasma bile salt, triglycerides, and apoA-V levels can predict liver regeneration. A study found that triglyceride levels at baseline correlate with the volume increase of the future remnant liver (FRL-V) post-portal vein embolization (PVE). Additionally, bile salts and triglycerides measured 5 hours after PVE positively correlated with the increase in FRL volume and liver function after 3 weeks. Following liver surgery, triglyceride levels at 5 hours and 1 day after resection were associated with liver remnant volume after 3 months. Plasma apoA-V was also increased during liver regeneration, suggesting that these parameters can be used for early timing of volume assessment and resection after PVE (PUBMED:22955952). Furthermore, apoA-V is a protein synthesized in the liver that modulates plasma triglyceride levels and has been implicated in the regulation of dietary triglyceride absorption. It is present in bile, and its secretion increases with lipid absorption, indicating a potential role in liver regeneration (PUBMED:26505974). The expression of apoA-V is upregulated during the early phase of liver regeneration, further supporting its association with liver regenerative processes (PUBMED:11577099). Overexpression of apoA-V in the liver has been shown to reduce plasma triglyceride and cholesterol levels in mice, which could be relevant in the context of liver regeneration (PUBMED:16750165). In summary, plasma bile salt, triglycerides, and apoA-V levels have been shown to correlate with liver volume and function following PVE and liver resection, and apoA-V, in particular, has been associated with liver regenerative processes, suggesting that these parameters can indeed predict liver regeneration.
Instruction: Is home blood pressure monitoring feasible and well accepted in nephrectomized patients for renal cancer? Abstracts: abstract_id: PUBMED:25531211 Is home blood pressure monitoring feasible and well accepted in nephrectomized patients for renal cancer? (STAFF study). Aims: Few studies have looked at blood pressure (BP) evolution after nephrectomy, except for in living kidney donors with no clinical relevant modifications. STAFF is a pilot, open-label, observational study to evaluate the feasibility of following BP by home blood pressure monitoring (HBPM) after nephrectomy for cancer. Major Findings: 56 patients (66.1% previously treated for hypertension) were included between November 1, 2011, and December 31, 2012; 95.8% of the patients realized five of six primary end-points in HBPM, but the last monitoring session at 6 months was often lacking (60%) probably because of a lack of understanding. When BP was controlled before surgery, 36% of the patients underwent new hypertension or hypertension dysregulation, without any correlating factor found; 33% of the patients presented the presence of proteinuria or an increase during the follow-up. Previous hypertension or high body mass index were risk factors for proteinuria increase (p = 0.036 and 0.032) but not treatment by an renin-angiotensin system blocker. There was no statistical link between HTA control and proteinuria. Conclusion: Our study shows that most patients undergoing nephrectomy for cancer are able to follow HBPM. It should be encouraged for detecting high BP or proteinuria, especially if antiangiogenic therapies are envisaged because of the supplementary risk of hypertension and proteinuria induced by these treatments. abstract_id: PUBMED:18634377 Clinical value of the estimation of blood pressure circadian rhythm Arterial blood pressure is one of the main factors of the development of organ vascular changes. Clinical studies have demonstrated that not only blood pressure absolute values but its 24h profile as well may be of significance for the rate and severity of organ damage. The study analyses the results and observations of the so far carried out investigations concerning the lack of nocturnal fall of blood pressure on the development of vascular changes in brain, carotid artery, myocardial hypertrophy and renal lesions. The results of the study indicate that in subjects with the lack of nocturnal fall of blood pressure the frequency of organ changes is more pronounced. Thus, for normal monitoring of the course and the treatment of arterial hypertension, its 24h monitoring seems to be an indispensable procedure. When evaluating blood pressure 24h profile age, sex, organ changes, physical activity and real sleep-wake period should be taken into account. In hypertensive patients without organ changes with nondippers blood pressure profile, repetition of 24h blood pressure monitoring seems to be an indication. abstract_id: PUBMED:18172185 Home blood-pressure monitoring in patients receiving sunitinib. N/A abstract_id: PUBMED:23924525 The late effects of anticancer therapy after childhood Wilm's tumor: the role of diastolic function and ambulatory blood pressure monitoring. Objective: Wilms' tumor, or nephroblastoma, is the most common primary malignant renal tumor of childhood. The excellent outcome now expected for most children with this tumor is attributed to the combination of effective adjuvant chemotherapy, improved surgical and anesthetic techniques and also the radiosensitivity of the tumor. The numerous organ systems are subject to the late effects of anticancer therapy. The aim of this study was to investigate the blood pressure profile and ambulatory blood pressure monitoring, and also cardiac diastolic functions and pulmonary venous flow in 25 children with unilateral Wilms' tumor in remission. Methods: The patient group consists of 25 patients who successfully completed anticancer treatment for unilateral Wilms' tumor. Thirty-three age-, weight- and height-matched healthy children were considered as a control group for an echocardiographic study. Also, 20 age-, weight- and height-matched healthy children were considered as a control group for the ambulatory blood pressure monitoring study. Results: In our study, 24 h, daytime and night-time systolic blood pressure and night-time diastolic blood pressure measurements were found to be significantly increased in the patient group compared with healthy children. We detected diastolic filling pattern abnormalities. We also found increase in pulmonary venous flow (systolic and diastolic) in Wilms' tumor group. Conclusions: We suggest the regular follow-up of survivors of Wilms' tumor for care and prevention of cardiovascular diseases. abstract_id: PUBMED:10736793 Nocturnal arterial pressure and the cardiovascular risk factors in patients with arterial hypertension Background: Blood pressure (BP) has a circadian rhythm. Most of the people presents a BP fall between 10-20% during nighttime hours (dipper condition). Measurement of these circadian variations is performed by continuous blood pressure ambulatory monitoring. We have studied the possible relation between blood pressure nocturnal fall and cardiovascular risk factors in hypertensive patients. Methods: We selected 100 hypertensive patients from the Hypertension and Lipids Unit of San Cecilio University Hospital of Granada, Spain. They were divided into two groups: dippers group and non-dippers, depending of whether or not the fall of nocturnal systolic and diastolic BP was > 10%. All patients included in the study underwent complete clinical history, exhaustive physical examination, complementary examinations, urine and hematology tests, and continuous blood pressure ambulatory monitoring by the Space Labs system. Results: The heart rate was found to be significantly higher (p = 0.0253) in the hypertensive dippers group than in the non-dippers. The latter showed higher plasma creatinine values (p = 0.0343) and lower potassemia values (p = 0.0140) than the dippers group. The dippers group presented concentrations of HDL-cholesterol significantly higher (p = 0.008) and diurnal diastolic BP values (p = 0.0211) also higher than the other group. Conclusions: Non dippers hypertensive patients present a higher number of cardiovascular risk factors as well as a higher tendency to renal lesions, and worse prognosis than the dippers group. abstract_id: PUBMED:21051269 Home blood transfusion Context: The development of alternatives to hospitalization including home medical care (HAD), an aging population and a more secure transfusion raises the question of the feasibility of home blood transfusion. The legislation allows the home blood transfusion under specified conditions, but when they are met, the texts on nursing care and the transfusion gesture may hamper this progress. Methods: We report our experience of 3 years: a protocol was established to do home blood transfusions by trained transfusion nurses from the HAD. Six patients were eligible for transfusion at home but only three of them could be treated at home. Moreover, since late 2009, the Nursing Department no longer allows this practice for legal reasons. At the same time, a questionnaire was sent to 224 HAD to find out about their practice on the subject. Discussion: In the light of practices in different countries, earnings for the quality of life of the patient, lack of space in hospitals and the aging population, it seems essential to change the law to permit a rational transfusion, thoughtful, safe for the patient at home and for caregivers who are involved. abstract_id: PUBMED:34988664 Blood pressure and glomerular filtration rate in youth with tuberous sclerosis complex. Renal involvement is very common in tuberous sclerosis complex (TSC) and is characterized by the development of angiomyolipoma and cysts. The aims of the present study were to assess kidney function and clinical features of renal involvement in TSC, including kidney function and blood pressure (BP) levels in children, adolescents and young adults. Non-selected patients with a definite diagnosis of TSC attending the paediatric neurology outpatient department of a tertiary hospital were included in a cross-sectional study. All participants had a renal imaging study within 6 months of ambulatory blood pressure (BP) and glomerular filtration rate (GFR) assessment. Data on demographics, history, genotype, kidney function at diagnosis and last imaging were collected. Twenty patients were enrolled in this study with a median age of 15 years (IQR range 9 to 18). About 23.5% of the participants had ambulatory hypertension. Systolic BP levels correlated significantly with GFRDTPA values despite the absence of hyperfiltration. Patients that developed hypertension and possibly those with angiomyolipoma or cysts had higher GFR levels in childhood and adolescence. All the patients with ambulatory hypertension had angiomyolipoma or cysts on renal imaging studies. Conclusions: Hypertension may present with increased frequency in young patients with kidney disease associated with TSC. Routine ambulatory BP measurement should be part of the annual clinical assessment in patients with TSC. What Is Known: • Nearly half of the patients with TSC have a premature decline in their renal function in their fifth decade of life. • Hypertension and hyperfiltration have been proposed as modifiable factors of progression of renal decline in patients with TSC-related renal disease. What Is New: • Hypertension is prevalent in youth with tuberous sclerosis complex. • SBP levels have a positive relation with GFR levels within the normal range of GFRDTPA values. abstract_id: PUBMED:1785710 Peroperative massive pulmonary embolism of tumoral origin. Value of extensive monitoring A case is reported of a 47-year-old female patient who suffered from massive tumour embolism during a nephrectomy for a renal carcinoma invading the inferior vena cava. Intraoperative monitoring consisted in direct blood pressure measurement (radial artery cannula), central haemodynamic monitoring (Swan-Ganz catheter), pulse oximetry and capnography. During the surgical manipulation of the suprahepatic vena cava, Petco2 suddenly decreased (from 25 mmHg to 14 mmHg), together with Spo2 (from 99% to 89%), and the mean pulmonary arterial pressure increased from 18 mmHg to 40 mmHg. The drop in arterial blood pressure to 50/30 mmHg, initiated an immediate sternotomy. After clamping the superior and inferior venae cavae, numerous tumour fragments were removed from the pulmonary artery. Cardiac activity restarted after internal cardiac massage, 1 mg adrenaline, 1 g calcium chloride and 150 mmol of molar sodium bicarbonate. The whole procedure lasted 30 min. Arterial blood pressure became stable at 110/50 mmHg, pulmonary arterial and wedge pressures at 20 and 5 mmHg. The Spo2 increased to 98%, and Petco2 to 25 mmHg. The nephrectomy was then carried out, the patient being given 5 micrograms.kg-1.min-1 dobutamine and 3 micrograms.kg-1.min-1 dopamine. At the end of surgery, systolic blood pressure was 120 mmHg, mean pulmonary arterial pressure 25 mmHg, and PaCO2 34 mmHg. The patient left the intensive care unit after twelve days. After one year of follow-up, no complication had occurred. The value of cardiopulmonary bypass in nephrectomy for renal carcinoma invading the vena cava, or the renal vein, is discussed.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:11553028 Blood pressure variability and organ damage. 1. Blood pressure variability (BPV) is expressed as the standard deviation of the average blood pressure (BP). Blood pressure variability is increased in hypertensive patients and animals. However, BPV is not necessarily related to the BP level. 2. For nearly any level of 24 h mean BP, hypertensive patients in whom the BPV is low have a lower prevalence and severity of organ damage than patients in whom the 24 h BPV is high. This observation has been confirmed further in spontaneously hypertensive rats with direct pathological analysis for organ damage. 3. In sinoaortic-denervated (SAD) rats, 24 h average BP is normal and BPV is markedly increased. Myocardial damage, renal lesions and vascular remodelling are seen in these animals 4 weeks after SAD. 4. Haemodynamic effects and activation of the renin- angiotensin system are hypothesized to contribute to organ damage induced by increased BPV. 5. Blood pressure variability is of potential importance in antihypertensive therapy. abstract_id: PUBMED:32144048 Incidence and Risk Factors of Hypertension Following Partial Nephrectomy in Patients With Renal Tumors: A Cross-sectional Study of Postoperative Home Blood Pressure and Antihypertensive Medications. Introduction: We aimed to evaluate the incidence and risk factors for nephrectomy-related hypertension (NR-HT) in patients with renal tumors who underwent partial nephrectomy (PN) or radical nephrectomy (RN). Patients And Methods: A retrospective cross-sectional follow-up survey of postoperative home blood pressure (BP) and defined daily dose (DDD) of antihypertensive medications was conducted in patients with renal tumors who underwent PN (210 patients) or RN (120 patients), and they were compared. We evaluated the incidence and risk factors for NR-HT, defined as the addition of antihypertensive medications in doses of 1 DDD or more after surgery, or postoperative BP of 140/90 mmHg with an increase of 20 mmHg from preoperative BP with no reduction in dose of antihypertensive medications. Results: Both systolic (mean, 124 vs. 129 mmHg; P < .001) and diastolic BP (mean, 74 vs. 79 mmHg; P < .001) significantly increased after PN compared with RN. Systolic (P < .001) and diastolic (P = .003) BP increased significantly more after PN than after RN, and NR-HT was more frequent after PN than after RN (16% vs. 5%; P = .002). PN (odds ratio [OR], 2.93; P = .022) and higher postoperative peak C-reactive protein (OR, 2.34; P = .017) were independently associated with NR-HT. When limited to only the patients who underwent PN, acute kidney injury (OR, 2.65; P = .036) and higher postoperative peak C-reactive protein (OR, 2.54; P = .016) were independent risk factors for NR-HT. Conclusion: PN may cause postoperative progression of hypertension possibly through renal parenchymal damage. Answer: Yes, home blood pressure monitoring (HBPM) is feasible and well accepted in nephrectomized patients for renal cancer. The STAFF study, which was a pilot, open-label, observational study, found that most patients undergoing nephrectomy for cancer were able to follow HBPM. Specifically, 95.8% of the patients realized five of six primary end-points in HBPM. However, the last monitoring session at 6 months was often lacking (60%), which may have been due to a lack of understanding. The study concluded that HBPM should be encouraged for detecting high blood pressure or proteinuria, particularly if antiangiogenic therapies are considered, due to the additional risk of hypertension and proteinuria induced by these treatments (PUBMED:25531211).
Instruction: Intent to become pregnant as a predictor of exposures during pregnancy: is there a relation? Abstracts: abstract_id: PUBMED:26699790 Association Between Healthcare Provider Type and Intent to Breastfeed Among Expectant Mothers. Background: The primary purpose of this study was to determine the association between type of healthcare provider delivering prenatal care and intent to exclusively breastfeed. Methods: A self-report survey was administered to 455 expectant mothers. Logistic regression was performed to determine the association between prenatal care provider type [obstetrician; other primary care physician (family doctor/general practitioner/internist/or other physician); midwife/nurse midwife; more than one provider; and other] with intent to breastfeed (exclusive/non-exclusive). Results: Having a midwife/nurse midwife as a prenatal care provider was associated with intent to breastfeed compared to having an obstetrician (OR 2.544, 95 % CI 1.385-4.675). There was no difference in intent between women with another primary care physician and an obstetrician. Women with another type of health care provider, no prenatal care from a health professional, or no knowledge of who is providing prenatal care were less likely to intend to breastfeed (OR 0.228, CI 0.068-0.766) as compared to those with an obstetrician. Discussion/conclusions: Provider type is associated with intent to breastfeed among pregnant women. Women's intent to breastfeed is an important predictor of breastfeeding initiation, continuation, and duration that may be assessed by healthcare providers during the prenatal period. A consideration of what features of provider care are associated with improved breastfeeding outcomes and characteristics of women seeking prenatal care with midwives may serve to formulate future prenatal care policies and education during prenatal care visits. abstract_id: PUBMED:16050563 Intent to become pregnant as a predictor of exposures during pregnancy: is there a relation? Objective: To evaluate the relationship between intent to be pregnant and selected maternal exposures during pregnancy. Study Design: In 1982-1983, 3,029 women who gave birth to infants without birth defects from 1968-1980 (the control mothers from a large case-control study of birth defects) completed a detailed telephone interview. This analysis examined behaviors reported in the third month of pregnancy because most women would be aware of their pregnancies by the end of the first trimester, and our primary interest was assessing exposures that occurred after pregnancy recognition. Results: Women who reported unintended pregnancies tended to be younger, nonwhite and less educated, and tended to have higher gravidity than women who reported intended pregnancies. After adjustingfor maternal age, education, race and previous adverse pregnancy outcome, women who reported unintended pregnancies were more likely to report smoking (odds ratio [OR] = 1.4, 95% confidence interval [CI] 1.2, 1.7), illicit drug use (OR=3.4, 95% CI 1.9, 6.4), not taking vitamins (OR = 1.4, 95% CI 1.2, 1.7), and alcohol use (OR=1.2, 95% CI 0.99, 1.4) than women who had intended pregnancies. No association was observed between pregnancy intention and medication use. Conclusion: These results suggest that women who report having unintended pregnancies are more likely to have some exposures that may result in adverse pregnancy outcomes. abstract_id: PUBMED:35123438 Impact of the fear of Covid-19 infection on intent to breastfeed; a cross sectional survey of a perinatal population in Qatar. Objectives: Infection control measures during the Covid-19 pandemic have focused on limiting physical contact and decontamination by observing cleaning and hygiene rituals. Breastfeeding requires close physical contact and observance of hygienic measures like handwashing. Worries around contamination increase during the perinatal period and can be expressed as increase in obsessive compulsive symptoms. These symptoms have shown to impact breastfeeding rates. This study attempts to explore any relationship between the Covid-19 pandemic and perinatal obsessive-compulsive symptomatology and whether the Covid-19 pandemic has any impact on intent to breastfeed. Methods: A cross sectional survey of perinatal women attending largest maternity centre in Qatar was carried out during the months of October to December 2020. Socio-demographic information, intent to breastfeed and information around obsessive compulsive thoughts around Covid-19 pandemic were collected using validated tools. Results: 15.7% respondents report intent to not breastfeed. 21.4% respondents reported obsessive-compulsive symptoms. 77.3% respondents believed the biggest source of infection was from others while as only 12% of the respondents believed that the source of infection was through breastfeeding and 15.7% believed the vertical transmission as the main source of risk of transmission. Conclusions: The rates of Obsessive-compulsive symptoms were increased and the rates of intent to breastfeed were decreased when compared with pre pandemic rates. The obsessive-compulsive symptoms and the intent to not breastfeed were significantly associated with fear of infection to the new-born. Obsessive-compulsive symptoms were not significantly correlated with intent to breastfeed and can be seen as adaptive strategies utilized by women to continue breastfeeding in the context of fear of infection. abstract_id: PUBMED:36352345 Racial and Ethnic Differences in Maternal and Child COVID-19 Vaccination Intent Among Pregnant and Postpartum Women in the USA (April-June 2020): an Application of Health Belief Model. This study investigated racial/ethnic differences in pregnant and postpartum women's intentions to receive the COVID-19 vaccination (maternal COVID-19 vaccination intent) and intentions to vaccinate their children against COVID-19 (child COVID-19 vaccination intent) during the early months of the COVID-19 pandemic (April-June 2020). This study also assessed Health Belief Model constructs to examine their influence on maternal and child COVID-19 vaccination intent by race/ethnicity. This study includes 489 US pregnant and postpartum women (18-49 years) recruited via Prolific Academic to complete a 55-item cross-sectional online survey. Crude and adjusted logistic regression analyses were conducted to determine the associations between race/ethnicity, maternal COVID-19 vaccination intent, and child COVID-19 vaccination intent. Among pregnant women, the odds of maternal COVID-19 vaccination intent (aOR = 2.20, 95% CI: .862, 5.61) and child COVID-19 vaccination intent (aOR = .194, 95% CI: .066, .565) among NH Black women were statistically significantly lower than that of NH White women after adjustment for demographic, health, and health belief model variables. Among postpartum women, although some racial differences in maternal or child COVID-19 vaccination intent were observed, these differences were not statistically significant in unadjusted and adjusted models. The findings have implications for future research and interventions which should adopt a racial health equity lens and identify strategies grounded in institutional trustworthiness and systems perspectives to address racial/ethnic disparities in COVID-19 vaccination intent among pregnant and postpartum women during novel pandemics. abstract_id: PUBMED:35725781 COVID-19 vaccination coverage and intent among women aged 18-49 years by pregnancy status, United States, April-November 2021. Background: Pregnant and postpartum women are at increased risk for severe illness from COVID-19. We assessed COVID-19 vaccination coverage, intent, and attitudes among women of reproductive age overall and by pregnancy status in the United States. Methods: Data from the National Immunization Survey Adult COVID Module collected during April 22-November 27, 2021, were analyzed to assess COVID-19 vaccination (receipt of ≥1 dose), intent for vaccination, and attitudes towards vaccination among women aged 18-49 years overall and by pregnancy status (trying to get pregnant, currently pregnant, breastfeeding, and not trying to get pregnant or currently pregnant or breastfeeding). Logistic regression and predictive marginals were used to generate unadjusted and adjusted prevalence ratios (PRs and aPRs). Trend analyses were conducted to assess monthly changes in vaccination and intent. Results: Our analyses included 110,925 women aged 18-49 years. COVID-19 vaccination coverage (≥1 dose) was 63.2% overall (range from 53.3% in HHS Region 4 to 76.5% in HHS Region 1). Vaccination coverage was lowest among pregnant women (45.1%), followed by women who were trying to get pregnant (49.5%), women who were breastfeeding (51.5%), and all other women (64.9%). Non-Hispanic (NH) Black women who were pregnant or breastfeeding had significantly lower vaccination coverage (aPR: 0.74 and 0.66, respectively) than NH White women. Discussion: Our findings are consistent with other studies showing lower vaccination coverage among pregnant individuals, with substantially lower vaccination coverage among NH Black women who are pregnant or breastfeeding. Given the overlapping and disproportionate risks of COVID-19 and maternal mortality among Black women, it is critical that COVID-19 vaccination be strongly recommended for these populations and all women of reproductive age. Healthcare and public health providers may take advantage of every opportunity to encourage vaccination and enlist the assistance of community leaders, particularly in communities with low vaccination coverage. abstract_id: PUBMED:35699596 COVID-19 Vaccination and Intent Among Pregnant Women, United States, April 2021. Objectives: National data on COVID-19 vaccination coverage among pregnant women are limited. We assessed COVID-19 vaccination coverage and intent, factors associated with COVID-19 vaccination, reasons for nonvaccination, and knowledge, attitudes, and beliefs related to COVID-19 illness and vaccination among pregnant women in the United States. Methods: Data from an opt-in internet panel survey of pregnant women conducted March 31-April 16, 2021, assessed receipt of ≥1 dose of any COVID-19 vaccine during pregnancy. The sample included 1516 women pregnant any time during December 1, 2020-April 16, 2021, who were not fully vaccinated before pregnancy. We used multivariable logistic regression to determine variables independently associated with receipt of COVID-19 vaccine. Results: As of April 16, 2021, 21.7% of pregnant women had received ≥1 dose of COVID-19 vaccine during pregnancy, 24.0% intended to receive a vaccine, 17.2% were unsure, and 37.1% did not intend to receive a vaccine. Pregnant women with (vs without) a health care provider recommendation (adjusted prevalence ratio [aPR] = 4.86), those who lived (vs not) with someone with a condition that could increase risk for serious medical complications of COVID-19 (aPR = 2.11), and those who had received (vs not) an influenza vaccination (aPR = 2.35) were more likely to receive a COVID-19 vaccine. Common reasons for nonvaccination included concerns about safety risk to baby (37.2%) or self (34.6%) and about rapid vaccine development (29.7%) and approval (30.9%). Conclusions: Our findings indicate a continued need to emphasize the benefits of COVID-19 vaccination during pregnancy and to widely disseminate the recommendations of the Centers for Disease Control and Prevention and other clinical professional societies for all pregnant women to be vaccinated. abstract_id: PUBMED:29282269 Effect of second child intent on delivery mode after Chinese two child policy implementation: a cross sectional and prospective observational study of nulliparous women in Chongqing. Objectives: To assess the influence of second child intent on the delivery preferences and final delivery modes of nulliparous women, particularly caesarean delivery on maternal request (CDMR), after implementation of China's two child policy. Design: Cross sectional and prospective observational study. Setting: A tertiary teaching hospital in Chongqing, China. Participants: 1000 low risk nulliparous women were initially involved, and were divided into two groups based on their intent on having a second child. 814 women who completed all interviews were analysed. Main Outcome Measures: Preferred mode of delivery in second trimester and at admission; final delivery mode, CDMR. Results: 814 women who completed all interviews were analysed, in whom 51.2% intended to have a second child. The preference for caesarean delivery (CD) at the second trimester between the intent and no intent group was 6.2% versus 17.9% (adjusted RR 0.35, 95% CI 0.20 to 0.63) and increased to 24.2% versus 37.3% (adjusted RR 0.65, 95% CI 0.45 to 0.94) at admission, respectively. The overall CD rates between the two groups were 41.0% versus 50.4% (adjusted RR 0.84, 95% CI 0.60 to 1.18), and the CDMR rates were 16.7% versus 29.0% (adjusted RR 0.56, 95% CI 0.37 to 0.86), respectively. Conclusions: Women who intend to have a second child are less likely to request a CD, and high rates of CD and CDMR in nulliparous women may have decreased after implementation of China's two child policy. abstract_id: PUBMED:19782354 The hidden infertile: infertile women without pregnancy intent in the United States. A national probability sample reveals two relatively distinct groups of infertile women: those with intent, who have experienced a period of 12 or more months during which they tried to conceive but did not, and those without intent, who had a period of at least 12 months during which they could have conceived and did not but who do not describe themselves as having tried to become pregnant at that time. Those with intent are more likely to identify as having a fertility problem, to be distressed, and to pursue infertility treatment than those without intent, suggesting that many women do not realize that they meet the medical criteria for infertility and may wait longer to get help, therefore lowering their chances of conception. abstract_id: PUBMED:35450797 Exploring the relationship between midwives' work environment, women's safety culture, and intent to stay. Background: The shortage of midwives is a concern for healthcare systems as it compromises the quality maternity care. Various studies argue that a favorable work environment increases nurses' job satisfaction and intention to continue working at their current workplace. Aim: To analyze the work environment and its relationship with women's clinical safety culture and midwives' intention to stay in their current job and the midwifery profession. Methods: A cross-sectional, correlational study was performed on N = 218 midwives working in Spain. Standardized instruments were used, including The Practice Environment Scale of the Nursing Work Index (PES-NWI) and the Hospital Survey on Patient Safety Culture (HSOPSC). Descriptive and bivariable statistics were used. The study followed the STROBE guidelines. Results: The work environment in the labor wards was mixed, according to the PES-NWI classification. The mean total score of the PES-NWI significantly and positively correlated with the mean total score of the HSOPSC (rs = 0.498, p < 0.001), indicating that as the quality of midwives' work environment increased, women's clinical safety increased. Significant correlations were observed between the midwives' intent to stay in the hospital where they work and features of women's safety culture. Conclusion: The results of this study showed significant relationships between the work environment, women's safety culture, and midwives' intentions to leave their job/profession. Creating a favorable working environment could be a potentially effective strategy that encourages improvement in the women's safety culture in healthcare organizations and greater intention of midwives to stay at their current job. abstract_id: PUBMED:36759394 Fertility Desire and Associations with Condomless Sex, Antiretroviral Adherence, and Transmission Potential in a Cohort of Kenyan Women Living with HIV in Sero-discordant Relationships: A Mixed Methods Study. For women living with HIV (WLH) in serodiscordant partnerships, decisions about childbearing can challenge condom use and antiretroviral adherence. In a prospective cohort of 148 WLH in serodiscordant partnerships, 58 (39%) wanted more children in the future but were not currently trying to conceive (fertility desire), and 32 (22%) were currently trying to become pregnant (fertility intent). Detection of prostate specific antigen (PSA) in vaginal secretions, a marker for recent condomless sex, was lowest in women with fertility desire and highest in women with fertility intent. Detectable viral load followed a similar pattern. Risk of HIV transmission, when condomless sex and PSA detection occurred concurrently, was three to fourfold higher at visits with fertility intent compared to visits with fertility desire. Qualitative interviews underscored the importance women place on childbearing and suggested that they had limited information about the role of antiretroviral therapy in reducing sexual HIV transmission. Answer: Yes, there is a relation between the intent to become pregnant and exposures during pregnancy. A study found that women who reported unintended pregnancies were more likely to report smoking, illicit drug use, not taking vitamins, and alcohol use than women who had intended pregnancies (PUBMED:16050563). This suggests that the intent to become pregnant can influence maternal behaviors and exposures that may affect pregnancy outcomes.
Instruction: Supporting health insurance expansion: do electronic health records have valid insurance verification and enrollment data? Abstracts: abstract_id: PUBMED:25888586 Supporting health insurance expansion: do electronic health records have valid insurance verification and enrollment data? Objective: To validate electronic health record (EHR) insurance information for low-income pediatric patients at Oregon community health centers (CHCs), compared to reimbursement data and Medicaid coverage data. Materials And Methods: Subjects Children visiting any of 96 CHCs (N = 69 189) from 2011 to 2012. Analysis The authors measured correspondence (whether or not the visit was covered by Medicaid) between EHR coverage data and (i) reimbursement data and (ii) coverage data from Medicaid. Results: Compared to reimbursement data and Medicaid coverage data, EHR coverage data had high agreement (87% and 95%, respectively), sensitivity (0.97 and 0.96), positive predictive value (0.88 and 0.98), but lower kappa statistics (0.32 and 0.49), specificity (0.27 and 0.60), and negative predictive value (0.66 and 0.45). These varied among clinics. Discussion/conclusions: EHR coverage data for children had a high overall correspondence with Medicaid data and reimbursement data, suggesting that in some systems EHR data could be utilized to promote insurance stability in their patients. Future work should attempt to replicate these analyses in other settings. abstract_id: PUBMED:33013204 Status and determinants of enrollment and dropout of health insurance in Nepal: an explorative study. Background: Compared to other countries in the South Asia Nepal has seen a slow progress in the coverage of health insurance. Despite of a long history of the introduction of health insurance (HI) and a high priority of the government of Nepal it has not been able to push rapidly its social health insurance to its majority of the population. There are many challenges while to achieve universal health insurance in Nepal ranging from existing policy paralysis to program operation. This study aims to identify the enrollment and dropout rates of health insurance and its determinants in selected districts of Nepal. Methods: The study was conducted while using a mixed method including both quantitative and qualitative approaches. Numerical data related to enrollment and dropout rates were taken from Health Insurance Board (HIB) of Nepal. For the qualitative data, three districts, Bardiya, Chitwan, and Gorkha of Nepal were selected purposively. Enrollment assistants (EA) of social health insurance program were taken as the participants of study. Focus group discussions (FGD) were arranged with the selected EAs using specific guidelines along with unstructured questions. The results from numerical data and focus group discussions are synthesized and presented accordingly. Results: The findings of the study suggested variation in enrollment and dropout of health insurance in the districts. Enrollment coverage was 13,545 (1%), 249,104 (5%), 1,159,477 (9%) and 1,676,505 (11%) from 2016 to 2019 among total population and dropout rates were 9121(67%), 110,885 (44%) and 444,967 (38%) among total enrollment from 2016 to 2018 respectively. Of total coverage, more than one-third proportion was subsidy enrollment-free enrollment for vulnerable groups. The population characteristics of unwilling and dropout in social health insurance came from relatively well-off families, government employees, businessman, migrants' people, some local political leaders as well as the poor class families. The major determinants of poor enrollment and dropout were mainly due to unavailability of enough drugs, unfriendly behavior of health workers, and indifferent behavior of the care personnel to the insured patients in health care facilities and prefer to take health service in private clinic for their own benefits. The long maturation time to activate health service, limited health package and lack of copayment in different types of health care were the factors related to inefficient program and policy implementation. Conclusion: There is a high proportion of dropout and subsidy enrollment, the key challenge for sustainability of health insurance program in Nepal. Revisiting of existing HI policy on health care packages, more choices on copayment, capacity building of enrollment assistants and better coordination between health insurance board and health care facilities can increase the enrollment and minimize the dropout. abstract_id: PUBMED:34332583 Health insurance enrollment and vision health in rural China: an epidemiological survey. Background: Vision health is an important aspect of health worldwide. Visual impairment (VI) is associated with poor quality of life and is usually more prevalent in rural areas. To help rural populations obtain vision care, health insurance policies have emerged throughout the world. However, some existing literatures show that health insurance enrollment's impact on the overall physical health of rural population has been minimal. Focusing on vision health among adults in rural China, our study aims to investigates the impact of health insurance on vision health, heterogeneity of the effect, and the moderating effect of health insurance enrollment on the impact of chronic physical diseases and basic eye diseases on vision health. Methods: Primary data were collected through a nation-wide epidemiological survey of vision health conducted in rural China in 2018, with a sample size of 28,787 used in our statistical analysis. Instrumental variables regression and Heckman selection models were conducted to examine the impact of health insurance enrollment and reimbursement ratio adults' vision health outcomes. Subsample regressions by sex, age, education level, and whether with eye diseases were further conducted to explore the heterogeneity in our results. We then examined whether health insurance enrollment moderates the impact of chronic physical diseases and basic eye diseases on vision health through the method of introducing interaction terms. Results: Participating in health insurance reduced the probability of VI by 2.15 %. The reimbursement rate increasing by 1 % point may reduce the probability of worsening VI by 6.12 %. Men (-0.0235, P = 0.0002) benefit more from insurance enrollment than women (-0.0201, P = 0.0082) with respect to vision health. From the young adult group to the oldest group, the marginal effect of health insurance increased from - 0.0068 (P = 0.0394) to -0.0753 (P < 0.0001). The marginal effect on VI was most significant in people with lower education levels and weakened with increased education levels. People with basic eye diseases (-0.0496, P = 0.0033) benefit more from participating insurance than the people without basic eye diseases (-0.0196, P = 0.0001) with respect to vision health. The moderating effects of health insurance enrollment on the impacts of cerebral infarction (-0.1225, P < 0.0001), diabetes (-0.0398, P = 0.0245), hyperlipidemia (-0.1364, P = 0.0271), mental illness (-0.1873, P = 0.0010), glaucoma (-0.1369, P = 0.0073), diabetic retinopathy (-0.1560, P = 0.0043), and retinal vein obstruction (-0.2018, P = 0.0155) on vision health were significantly negative. Conclusions: The results suggest that participation in health insurance and higher health insurance reimbursement ratios reduced the risk of VI in the sampled adults. Health insurance has the most significant effect in in vulnerable groups. Heath insurance enrollment moderates the impacts of several chronic physical and basic eye conditions on vision health. Our findings have potential implications for reforming health insurance policies to improve vision health conditions in rural areas of developing countries. abstract_id: PUBMED:37064679 Enrollment of reproductive age women in community-based health insurance: An evidence from 2019 Mini Ethiopian Demographic and Health Survey. Background: Universal health coverage (UHC) is aimed at ensuring that everyone has access to high-quality healthcare without the risk of financial ruin. Community-based health insurance (CBHI) is one of the essential means to achieve the sustainable development goals (SDGs) global health priority of UHC. Thus, this study assessed health insurance enrollment and associated factors among reproductive age women in Ethiopia. Methods: We computed the health insurance enrollment of reproductive-age women using secondary data from the recent Ethiopian Mini Demographic and Health Surveys (EMDHS) 2019. The EMDHS was a community-based cross-sectional study carried out in Ethiopia from March 21 to June 28, 2019. Cluster sampling with two stages was employed for the survey. The study comprised 8885 (weighted) reproductive-age women. STATA 14 was used for data processing and analysis. Bivariate and multivariable logistic regression analyses were conducted. Adjusted odds ratio (AOR) with 95% confidence interval (CI) was reported and statistical significance was set at a value of p < 0.05. Results: Of the 8,885 study participants, 3,835 (43.2, 95% CI; 42.1, 44.2%) of women had health insurance. Women aged 20-24 years, 25-29 years, and 30-34 years less likely to enroll in health insurance compared to their younger counterparts (15-19 years). Women living in rural areas, had greater than five family sizes, living with a female household head, and having more than five living children were negatively associated with enrollment in health insurance. Besides, health insurance enrollment among reproductive-age women is significantly affected by region and religious variation. Conclusion: The overall CBHI enrolment among reproductive-age women in Ethiopia was low. To achieve the SDGs of reducing maternal mortality ratio and neonatal mortality, improving reproductive-age women's access to health insurance is essential. The national, regional, and local officals, policymakers, NGOs, program planners, and other supporting organizations working on improving health insurance enrollment of reproductive age women need to create awareness and support them based on these significant factors. abstract_id: PUBMED:29658152 Extending health insurance coverage to the informal sector: Lessons from a private micro health insurance scheme in Lagos, Nigeria. Background: As a growing number of low- and middle-income countries commit to achieving universal health coverage, one key challenge is how to extend coverage to informal sector workers. Micro health insurance (MHI) provides a potential model to finance health services for this population. This study presents lessons from a pilot study of a mandatory MHI plan offered by a private insurance company and distributed through a microfinance bank to urban, informal sector workers in Lagos, Nigeria. Methods: Study methods included a survey of microfinance clients, key informant interviews, and a review of administrative records. Results: Demographic, health care seeking, and willingness-to-pay data suggested that microfinance clients, particularly women, could benefit from a comprehensive MHI plan that improved access to health care and reduced out-of-pocket spending on health services. However, administrative data revealed declining enrollment, and key informant interviews further suggested low use of the health insurance plan. Key implementation challenges, including changes to mandatory enrollment requirements, insufficient client education and marketing, misaligned incentives, and weak back-office systems, undermined enrollment and use of the plan. Conclusions: Mandatory MHI plans, intended to mitigate adverse selection and facilitate private insurers' entry into new markets, present challenges for covering informal sector workers, including when distributed through agents such as a microfinance bank. Properly aligning the incentives of the insurer and the agent are critical to effectively distribute and service insurance. Further, an urban environment presents unique challenges for distributing MHI, addressing client perceptions of health insurance, and meeting their health care needs. abstract_id: PUBMED:31735937 Factors Affecting Enrollment in Government Health Insurance Program in Kailali District. Background: Health care financial burden on households is high in Nepal. High health care expenditure is a major obstacle in achieving universal health coverage. The health insurance is expected to reduce healthcare expenditure. However, only small segments of the population are covered by health insurance in Nepal.This study assessed the factors affecting enrollment in government health insurance program in the first piloted district, Kailali, Nepal. Methods: A cross-sectional survey was conducted among 1048 households located in 26 wards of Kailali district after 21 months of the implementation of social health insurance program, Nepal. The sample was selected in two stages, first stage being the selection of wards and second, being the households. Results: The higher level of household economic status was associated with increased odds of enrollment in health insurance program (ORs=4.99, 5.04, 5.13, 8.05, for second, third, fourth, and the highest quintile of households, respectively). A higher level of head's education was associated with increased odds of health insurance enrollment (ORs = 1.58, 1.78, 2.36, for primary, secondary, tertiary education, respectively). Presence of chronic illness in the household was positively associated with increased odds of health insurance enrollment (OR= 1.29). Conclusions: The poor and low educated groups were less benefited by social health insurance program in Kailali district, Nepal. Hence, policymakers should focus to implement income-based premium scheme for ensuring equal access to healthcare.Since household with chronic illness leads to high odds of being enrolled, a compulsory health insurance scheme can make the program financially sustainable. abstract_id: PUBMED:35837529 Summary of the Current Status of Health Insurance Enrollment in Connecticut. The state of Connecticut is considering a number of policy options to improve health insurance affordability, access, and equity. To create policies designed to increase insurance coverage and access to care in underserved communities and reduce racial and ethnic disparities, state policymakers need an accurate picture of the current distributions of insurance enrollment across these dimensions. The authors combine data from the American Community Survey Public Use Microdata Sample, which includes demographic characteristics, as well as insurance status, with various data sources from the state to provide a fuller picture of insurance enrollment among those under the age of 65 in Connecticut. They also use existing high-level estimates of 2020 insurance enrollment to provide estimates of how enrollment in the state was affected during the early months of the pandemic. The authors find that insurance enrollment in Connecticut in 2019 was generally high but that there were substantial differences in insurance coverage by race and ethnicity. Asian individuals had the highest rates of employer-sponsored insurance coverage, and Black individuals had the highest rates of Medicaid coverage. Hispanic individuals had a higher rate of Medicaid coverage than non-Hispanic individuals. High-level estimates of changes in insurance coverage during the early months of the COVID-19 pandemic suggest that uninsurance decreased slightly, Medicaid coverage increased, and private insurance coverage fell. This study provides the state of Connecticut with estimates of enrollment in detailed health insurance categories by age, gender, race, and ethnicity and highlights the need for better, more-detailed health insurance enrollment data. abstract_id: PUBMED:32331351 Predictors of Health Insurance Enrollment among HIV Positive Pregnant Women in Kenya: Potential for Adverse Selection and Implications for HIV Treatment and Prevention. Background: The global push to achieve the 90-90-90 targets designed to end the HIV epidemic has called for the removing of policy barriers to prevention and treatment, and ensuring financial sustainability of HIV programs. Universal health insurance is one tool that can be used to this end. In sub-Saharan Africa, where HIV prevalence and incidence remain high, the use of health insurance to provide comprehensive HIV care is limited. This study looked at the factors that best predict social health insurance enrollment among HIV positive pregnant women using data from the Academic Model Providing Access to Healthcare (AMPATH) in western Kenya. Methods: Cross-sectional clinical encounter data were extracted from the electronic medical records (EMR) at AMPATH. We used univariate and multivariate logistic regressions to estimate the predictors of health insurance enrollment among HIV positive pregnant women. The analysis was further stratified by HIV disease severity (based on CD4 cell count <350 and 350>) to test the possibility of differential enrollment given HIV disease state. Results: Approximately 7% of HIV infected women delivering at a healthcare facility had health insurance. HIV positive pregnant women who deliver at a health facility had twice the odds of enrolling in insurance [2.46 Adjusted Odds Ratio (AOR), Confidence Interval (CI) 1.24-4.87]. They were 10 times more likely to have insurance if they were lost to follow-up to HIV care during pregnancy [9.90 AOR; CI 3.42-28.67], and three times more likely to enroll if they sought care at an urban clinic [2.50 AOR; 95% CI 1.53-4.12]. Being on HIV treatment was negatively associated with health insurance enrollment [0.22 AOR; CI 0.10-0.49]. Stratifying the analysis by HIV disease severity while statistically significant did not change these results. Conclusions: The findings indicated that health insurance enrollment among HIV positive pregnant women was low mirroring national levels. Additionally, structural factors, such as access to institutional delivery and location of healthcare facilities, increased the likelihood of health insurance enrollment within this population. However, behavioral aspects, such as being lost to follow-up to HIV care during pregnancy and being on HIV treatment, had an ambiguous effect on insurance enrollment. This may potentially be because of adverse selection and information asymmetries. Further understanding of the relationship between insurance and HIV is needed if health insurance is to be utilized for HIV treatment and prevention in limited resource settings. abstract_id: PUBMED:28749314 Designing Health Information Technology Tools to Prevent Gaps in Public Health Insurance. Background: Changes in health insurance policies have increased coverage opportunities, but enrollees are required to annually reapply for benefits which, if not managed appropriately, can lead to insurance gaps. Electronic health records (EHRs) can automate processes for assisting patients with health insurance enrollment and re-enrollment. Objective: We describe community health centers' (CHC) workflow, documentation, and tracking needs for assisting families with insurance application processes, and the health information technology (IT) tool components that were developed to meet those needs. Method: We conducted a qualitative study using semi-structured interviews and observation of clinic operations and insurance application assistance processes. Data were analyzed using a grounded theory approach. We diagramed workflows and shared information with a team of developers who built the EHR-based tools. Results: Four steps to the insurance assistance workflow were common among CHCs: 1) Identifying patients for public health insurance application assistance; 2) Completing and submitting the public health insurance application when clinic staff met with patients to collect requisite information and helped them apply for benefits; 3) Tracking public health insurance approval to monitor for decisions; and 4) assisting with annual health insurance reapplication. We developed EHR-based tools to support clinical staff with each of these steps. Conclusion: CHCs are uniquely positioned to help patients and families with public health insurance applications. CHCs have invested in staff to assist patients with insurance applications and help prevent coverage gaps. To best assist patients and to foster efficiency, EHR based insurance tools need comprehensive, timely, and accurate health insurance information. abstract_id: PUBMED:33107609 State policies limiting premium surcharges for tobacco and their impact on health insurance enrollment. Objective: The Affordable Care Act allows insurers to charge up to 50% higher premiums to tobacco users, making tobacco use the only behavioral factor that can be used to rate premiums in the nongroup insurance market. Some states have set more restrictive limits on rating for tobacco use, and several states have outlawed tobacco premium surcharges altogether. We examined the impact of state level tobacco surcharge policy on health insurance enrollment decisions among smokers. Study Design: We compared insurance enrollment in states that did and did not allow tobacco surcharges, using a difference-in-difference approach to compare the policy effects among smokers and nonsmokers. We also used geographic variation in tobacco surcharges to examine how the size of the surcharge affects insurance coverage, again comparing smokers to nonsmokers. Data Collection: We linked data from two components of the Current Population Survey-the 2015 and 2019 Annual Social and Economic Supplement and the Tobacco Use Supplement, which we combined with data on marketplace plan premiums. We also collected qualitative data from a survey of smokers who did not have insurance through an employer or public program. Principal Findings: Allowing a tobacco surcharge reduced insurance enrollment among smokers by 4.0 percentage points (P = .01). Further, smokers without insurance through an employer or public program were 9.0 percentage points less likely (P < .01) to enroll in a nongroup plan if they were subject to a tobacco surcharge. In states with surcharges, enrollment among smokers was 3.4 percentage points lower (P < .01) for every 10 percentage point increase in the tobacco surcharge. Conclusions: Tobacco use is the largest cause of preventable illness in the United States. State tobacco surcharge policy may have a substantial impact on whether tobacco users choose to remain insured and consequently their ability to receive care critical for preventing and treating tobacco-related disease. Answer: The study conducted on low-income pediatric patients at Oregon community health centers (CHCs) aimed to validate electronic health record (EHR) insurance information compared to reimbursement data and Medicaid coverage data. The results indicated that EHR coverage data had high agreement with Medicaid data and reimbursement data, with 87% and 95% correspondence, respectively. The sensitivity of EHR data was also high (0.97 and 0.96), as was the positive predictive value (0.88 and 0.98). However, the study found lower kappa statistics (0.32 and 0.49), specificity (0.27 and 0.60), and negative predictive value (0.66 and 0.45). These metrics varied among clinics. The findings suggest that in some systems, EHR data could be utilized to promote insurance stability in patients, although future work should attempt to replicate these analyses in other settings (PUBMED:25888586).
Instruction: Is coronary angiography necessary for vascular surgery patients who have positive results of dipyridamole thallium scans? Abstracts: abstract_id: PUBMED:9201157 Is coronary angiography necessary for vascular surgery patients who have positive results of dipyridamole thallium scans? Purpose: Because dipyridamole thallium (DT) scanning is a useful predictor of perioperative cardiac events, a positive results of a DT scan is frequently the basis for performing more invasive cardiac evaluation and for consideration for performing coronary revascularization procedures before performing peripheral vascular surgery. The rationale for this approach has been that the treatment of anatomically significant coronary artery disease would lower the risk of performing a subsequent vascular operation. However, the benefit of performing aggressive diagnostic and therapeutic cardiac procedures in such patients remains unproved. To examine this issue, data from patients who underwent coronary angiography because of thallium redistribution were compared with data from matched control subjects who underwent peripheral vascular operations without further cardiac evaluation. Methods: The medical records of 70 consecutive patients who underwent coronary angiography because of the presence of two or more segments of redistribution on DT scan were reviewed and compared with 70 other patients matched with respect to age, gender, peripheral vascular operation, and number of segments of redistribution on DT scan who did not undergo additional cardiac evaluation. Results: DT scans were performed on 934 preoperative peripheral vascular surgery patients to help in the assessment of operative risk. Ischemic responses, defined as two or more segments of redistribution, were observed in 297. Of these, 70 underwent cardiac catheterization and 25 underwent coronary revascularization procedures. Adverse outcomes affected 46% of the coronary angiography group and 44% of the control group (p = NS). Patients who underwent coronary angiography and were considered for myocardial revascularization had fewer cardiac events with a subsequent vascular operation than did the control subjects. However, any possible benefit from invasive cardiac evaluation was offset by the three deaths and two myocardial infarctions (MIs) that complicated the cardiac evaluation. There was no significant difference between the angiography group and the matched control subjects with respect to perioperative nonfatal MI (13% vs 9%), fatal MI (4% vs 3%), late nonfatal MI (16% vs 19%), or late cardiac death (10% vs 13%). In long-term follow-up, MIs occurred later in patients who underwent coronary angiography than the control subjects (p = 0.049), but this difference was not associated with an improvement in the overall survival rate. Conclusions: The risks of extended cardiac evaluation and treatment did not produce any improvement in either the perioperative or the long-term survival rate. For most vascular surgery patients who have a positive result of a DT scan, coronary angiography does not provide any additional useful information. abstract_id: PUBMED:3999243 Selection of patients for preoperative coronary angiography: use of dipyridamole-stress--thallium myocardial imaging. To identify patients likely to benefit from preoperative coronary angiography, a method utilizing pharmacologically induced coronary vasodilatation in conjunction with serial thallium 201 myocardial perfusion imaging was investigated. Fifty-four patients admitted for elective aortic or femoropopliteal procedures were studied. There were no cardiac ischemic complications in 32 patients with normal scans or persistent defects (scar). In contrast, 7 of 15 patients with thallium redistribution (ischemia) on pre-operative scanning had perioperative ischemic events, including one death and two acute infarcts. An additional seven patients with positive scans (redistribution) underwent coronary angiography prior to vascular surgery; surgically important two- or three-vessel disease was confirmed in all. Dipyridamole-thallium imaging facilitates selection of the subset of truly high-risk patients in whom preoperative coronary angiography may be warranted. abstract_id: PUBMED:3871502 Determination of cardiac risk by dipyridamole-thallium imaging before peripheral vascular surgery. To evaluate the severity of coronary artery disease in patients with severe peripheral vascular disease requiring operation, we performed preoperative dipyridamole-thallium imaging in 54 stable patients with suspected coronary artery disease. Of the 54 patients, 48 had peripheral vascular surgery as scheduled without coronary angiography, of whom 8 (17 per cent) had postoperative cardiac ischemic events. The occurrence of these eight cardiac events could not have been predicted preoperatively by any clinical factors but did correlate with the presence of thallium redistribution. Eight of 16 patients with thallium redistribution had cardiac events, whereas there were no such events in 32 patients whose thallium scan either was normal or showed only persistent defects (P less than 0.0001). Six other patients also had thallium redistribution but underwent coronary angiography before vascular surgery. All had severe multivessel coronary artery disease, and four underwent coronary bypass surgery followed by uncomplicated peripheral vascular surgery. These data suggest that patients without thallium redistribution are at a low risk for postoperative ischemic events and may proceed to have vascular surgery. Patients with redistribution have a high incidence of postoperative ischemic events and should be considered for preoperative coronary angiography and myocardial revascularization in an effort to avoid postoperative myocardial ischemia and to improve survival. Dipyridamole-thallium imaging is superior to clinical assessment and is safer and less expensive than coronary angiography for the determination of cardiac risk. abstract_id: PUBMED:2353615 Perioperative and long-term prognostic value of intravenous dipyridamole thallium scintigraphy in patients with peripheral vascular disease. The prognostic value of long-term risk stratification of patients with peripheral vascular disease who undergo intravenous dipyridamole thallium scintigraphy has not been well studied. We screened 131 patients with peripheral vascular disease who underwent intravenous dipyridamole thallium testing to determine cardiac event rates over an average follow-up of 18 +/- 10 months. Of the 131 patients, 111 subsequently had peripheral vascular surgery. The patients with abnormal thallium scans after dipyridamole had a significantly higher risk of death or myocardial infarction, both in the perioperative phase (7% versus 0%; p less than 0.001) and at late follow-up (17% versus 6%; p less than 0.01). The risk of a cardiac event was two-fold greater when a reversible as compared to a fixed thallium defect was present. Multivariate analysis selected the number of thallium segments with perfusion defects, prior history of angina pectoris, and chest pain during dipyridamole testing as perioperative predictors of a cardiac event. A reversible thallium defect was the only predictor of death or nonfatal myocardial infarction during late follow-up. Thus intravenous dipyridamole thallium scintigraphy is a useful noninvasive test for risk stratification of patients before peripheral vascular surgery and provides prognostic information as to the risk of a cardiac event in the 2-year period after the test. A reversible thallium defect is associated with a significant increased risk and would indicate that coronary angiography should be considered and preoperative coronary revascularization. abstract_id: PUBMED:3209610 Dipyridamole-thallium scan for screening of coronary artery disease prior to vascular surgery. Patients with extracranial cerebrovascular disease and peripheral vascular disease are at increased risk of ischaemic heart disease with resulting increased risk of early and late mortality following vascular reconstruction. Over a two year period, 67 patients undergoing carotid or aortic surgery were investigated preoperatively with dipyridamole-thallium scan. There were three positive scans in the first 17 patients; two of these patients suffered a postoperative myocardial infarction (one fatal), compared to none of 14 who had a negative scan (p = 0.02). There were seven positive scans in the next 50 patients. Patients with left main trunk or triple vessel disease were recommended to have coronary artery bypass prior to or combined with the vascular reconstruction. There were no deaths or postoperative myocardial infarction in this group, this improvement in morbidity being statistically significant (p = 0.01). Dipyridamole-thallium scanning is an effective screening procedure for coronary artery disease. Patients with a positive scan are at increased risk of postoperative myocardial infarction following vascular reconstruction. Further investigation with coronary arteriography prior to vascular reconstruction is recommended in patients with positive scans. Coronary artery bypass should be performed prior to or combined with the vascular reconstruction in patients with left main trunk or triple vessel disease. abstract_id: PUBMED:7783301 The role of coronary angiography and coronary revascularization before noncardiac vascular surgery. Objective: To determine whether preoperative coronary angiography and revascularization improve short-term outcomes in patients undergoing noncardiac vascular surgery. Design: Decision analysis. Patients: Patients undergoing elective vascular surgery who had either no angina or mild angina and a positive dipyridamole-thallium scan result. Interventions: Three strategies were compared. The first strategy was to proceed directly to vascular surgery. The second was to perform coronary angiography, followed by selective coronary revascularization, before proceeding to vascular surgery and to cancel vascular surgery in patients with severe inoperable coronary artery disease (CAD). The third was to perform coronary angiography, followed by selective coronary revascularization, before proceeding to vascular surgery and to perform vascular surgery in patients with inoperable CAD. Main Outcome Measures: Mortality, nonfatal myocardial infarction, stroke, uncorrected vascular disease, and cost. All outcomes were assessed within 3 months. Results: Proceeding directly to vascular surgery led to lower morbidity and cost in the base case analysis. The coronary angiography strategy led to higher mortality if vascular surgery would proceed in patients with inoperable CAD, but led to slightly lower mortality if vascular surgery were canceled in patients with inoperable CAD. The coronary angiography strategy also led to lower mortality when vascular surgery was particularly risky. Conclusions: Decision analysis indicates vascular surgery without preoperative coronary angiography generally leads to better outcomes. Preoperative coronary angiography should be reserved for patients whose estimated mortality from vascular surgery is substantially higher than average. abstract_id: PUBMED:7487725 Adenosine thallium-201 scans in patients undergoing elective non-cardiac surgery. Dipyridamole Thallium-201 (201Tl) scintigraphy has been used widely for assessment of patients prior to vascular surgery. Recently, Adenosine has been reported to be a safe and useful alternative to Dipyridamole. The purposes of this study were to evaluate the safety and feasibility of the use of Adenosine, to evaluate the correlation of Adenosine 201Tl scans with coronary angiograms (when available) and to evaluate the effect of scan results on clinician management style. Fifty adults with abdominal aortic aneurysm or other vascular disease underwent an intravenous infusion of Adenosine in conjunction with initial and delayed planar 201Tl scans. Images were interpreted qualitatively and quantitatively by the consensus of two or more experienced observers with patients showing transient left ventricular dilatation or redistribution in one or more myocardial segments reported as being at high risk of peri-operative cardiac events. Of the 50 subjects studied, 49 tolerated the maximum infusion dose with 60% experiencing minor transient symptoms. Low (n = 30) and high risk (n = 20) patients were defined according to Adenosine 201Tl scans. Age, gender and clinical characteristics were similar in both groups. Thirteen (65%) high risk subjects had coronary angiography compared with only three (10%) low risk patients. Patients with high-risk 201Tl scans were also more likely to proceed to coronary revascularization prior to non-cardiac surgery [5/20 (25%) vs 1/30 (3%)]. The positive predictive value of high risk 201Tl scans for coronary artery disease was 85%. Thus, Adenosine is considered a useful and safe alternative to Dipyridamole.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:2805321 Failure of dipyridamole-thallium myocardial imaging to detect severe coronary disease. Three patients referred for peripheral vascular surgery who died of coronary artery disease complications despite normal dipyridamole-thallium scans are discussed. Although recent literature has shown enthusiasm for this test in this clinical setting, the dipyridamole-thallium scans are not absolute and patients remain at risk for major coronary artery events. Careful clinical screening and awareness of the signs of left main coronary disease on thallium images are important in the evaluation of these patients. abstract_id: PUBMED:2297954 Dipyridamole thallium imaging may not be a reliable screening test for coronary artery disease in patients undergoing vascular surgery. Dipyridamole thallium imaging has been proposed for cardiac risk stratification in patients undergoing peripheral vascular surgery. The purpose of this study was to define the benefit of this investigation in routine preoperative evaluation of these patients. The outcome of 86 patients undergoing vascular surgery procedures was examined in light of preoperative clinical assessment and dipyridamole SPECT thallium imaging (DST). Fifty-one patients (59%) were considered at high risk on clinical grounds, and 22 patients (26%) had perfusion defects. Ten patients suffered a perioperative coronary event, including unstable angina, myocardial infarction, or cardiac death. Seven of the patients with such events were among the 51 clinically high-risk subjects (14%). Three perioperative events occurred in the group of 19 patients with positive DST images who underwent surgery (16%), but the DST test failed to identify 7 patients who suffered coronary events. The frequency of abnormal thallium imaging was similar to the prevalence of angiographically significant coronary disease reported previously at this center, but considerably less than the rate of abnormal thallium imaging in past studies of vascular surgery patients. The application of the test to a low to moderate risk population is probably responsible for its lower predictive accuracy for coronary events. DST is not an ideal routine noninvasive technique for risk stratification in patients undergoing vascular surgery. abstract_id: PUBMED:2756871 Usefulness of the severity and extent of reversible perfusion defects during thallium-dipyridamole imaging for cardiac risk assessment before noncardiac surgery. Thallium-dipyridamole imaging is a very sensitive test for predicting cardiac events after noncardiac surgery, but it lacks specificity. To improve specificity, a semiquantitative scoring system was developed that combined dipyridamole-induced reversible left ventricular dilatation with scintigraphic indexes for severity and extent of reversible perfusion defects. Using this scoring system, patients were classified into low, intermediate and high risk subgroups. Thallium-dipyridamole imaging was performed in 66 patients before major general and vascular surgery. Thirty-nine patients classified as low risk (30 with normal scans and 9 with fixed defects) underwent surgery uneventfully. Surgery was cancelled in 6 patients with extensive thallium redistribution and coronary angiography was performed because of severe coronary artery disease in 5 and idiopathic dilated cardiomyopathy in 1. In the remaining 21 patients with thallium redistribution, a positive statistical correlation (p = 0.001) between scintigraphic indexes of severity and extent, and cardiac events was noted. Using cutoff values for the scintigraphic indexes, patients with reversible defects could be classified into intermediate and high risk subgroups. Only 1 of 11 patients at intermediate risk developed a complication, whereas 8 of 10 patients at high risk had a postoperative event (7 deaths and 1 myocardial infarction). Thus, using scintigraphic indexes for severity and extent, patients with reversible defects can be stratified into an intermediate risk subgroup that can safely undergo surgery and a high risk subgroup that requires coronary angiography. Answer: The necessity of coronary angiography for vascular surgery patients with positive dipyridamole thallium (DT) scans is debated. Some studies suggest that coronary angiography may not be necessary for most vascular surgery patients with a positive DT scan result, as it does not provide additional useful information that improves perioperative or long-term survival rates (PUBMED:9201157). However, other studies indicate that DT scanning can identify high-risk patients who may benefit from preoperative coronary angiography, as it can detect severe multivessel coronary artery disease and potentially improve outcomes by guiding preoperative coronary revascularization (PUBMED:3999243, PUBMED:3871502, PUBMED:2353615, PUBMED:3209610). The decision to perform coronary angiography should be individualized based on the patient's risk profile and the estimated risk of the vascular surgery. For instance, patients with reversible thallium defects are at a higher risk of cardiac events and may be considered for coronary angiography and myocardial revascularization to avoid postoperative myocardial ischemia and improve survival (PUBMED:3871502, PUBMED:2353615). Conversely, patients without thallium redistribution are at a low risk for postoperative ischemic events and may proceed with vascular surgery without further cardiac evaluation (PUBMED:3871502). A decision analysis study suggests that proceeding directly to vascular surgery without preoperative coronary angiography generally leads to better outcomes, and coronary angiography should be reserved for patients with a substantially higher than average estimated mortality from vascular surgery (PUBMED:7783301). Additionally, the use of adenosine as an alternative to dipyridamole for thallium scans has been found to be safe and useful, with high-risk scan results prompting more frequent coronary angiography and revascularization prior to non-cardiac surgery (PUBMED:7487725). However, it is important to note that DT scans are not absolute, and patients may still be at risk for major coronary events even with normal scans (PUBMED:2805321). Furthermore, the predictive accuracy of DT scans for coronary events may be lower in populations with a low to moderate risk, suggesting that DT scans may not be an ideal routine noninvasive technique for all patients undergoing vascular surgery (PUBMED:2297954). In summary, while DT scans can be helpful in risk stratification, the decision to perform coronary angiography In summary, while DT scans can be helpful in risk stratification, the decision to perform coronary angiography should be based on a comprehensive assessment of the patient's clinical profile, the severity and extent of reversible perfusion defects on the DT scan, and the estimated risk associated with the planned vascular surgery. Some patients, particularly those with extensive reversible defects, may benefit from further evaluation and possible revascularization before surgery, while others with low-risk profiles may proceed without additional cardiac intervention (PUBMED:2756871). The goal is to balance the potential benefits of preoperative coronary angiography and revascularization against the risks and costs associated with these procedures to optimize patient outcomes.
Instruction: Is ultrasound examination sufficient in the evaluation of patients with internal carotid artery severe stenosis or occlusion? Abstracts: abstract_id: PUBMED:12646775 Is ultrasound examination sufficient in the evaluation of patients with internal carotid artery severe stenosis or occlusion? Background And Purpose: Carotid endarterectomy (CE) has been shown to be beneficial in patients with symptomatic high-grade internal carotid artery (ICA) stenosis. Some authors have suggested that when ultrasound shows a stenosis 70-99%, CE can be performed without further imaging study. However, ultrasound findings that suggest an ICA occlusion, not confirmed by angiography but which instead show a near-occlusion usually benefit from CE. The objectives of this study are: (1). to evaluate how angiography-obtained information on intracranial arteries affects the treatment decision in patients with 70-99% ICA stenosis, and (2). to evaluate when a symptomatic ICA occlusion shown by ultrasound could actually be a patent artery and therefore benefit from CE. Materials And Methods: We prospectively collected the cerebral angiograms of 133 consecutive patients with TIA or non-disabling stroke due to large artery disease where ultrasound suggested a stenosis >or=70% or occlusion of symptomatic ICA; we calculated the frequency of intracranial vascular malformations and intracranial artery disease (IAD) located in the infraclinoid or supraclinoid portion of the ICA, and in the anterior or middle cerebral artery. Results: Ultrasound showed 31 ICA occlusions and 102 ICA with 70-99% stenosis. All the patients with an ICA stenosis 70-99% on ultrasound examination had the degree of stenosis confirmed by angiography. Two out of 31 patients did not have a complete occlusion but angiography showed a near-occlusion and consequently they underwent CE. Sixty-five (62.5%) out of 104 patients with patent ICA had IAD (mild 26.9%, moderate 21.2%, and severe 14.4%). Five patients (4.8%) had intracranial vascular malformations (4 aneurysms and 1 arteriovenous malformation). One patient had disabling stroke during angiography. Seven patients (6.7%) did not undergo CE after angiography (1 patient had an aneurysm >10 mm, 1 patient had a very tight stenosis of the basilar artery, 5 patients had intracranial stenosis more severe than the extracranial stenosis). Conclusions: In patients that on the basis of ultrasound examination can benefit from CE, information on intracranial arteries is necessary. Moreover, complete occlusion cannot be detected with certainty only by ultrasound examination. abstract_id: PUBMED:33715027 Near-occlusion is difficult to diagnose with common carotid ultrasound methods. Purpose: To assess the sensitivity and specificity of common carotid ultrasound method for carotid near-occlusion diagnosis. Methods: Five hundred forty-eight patients examined with both ultrasound and CTA within 30 days of each other were analyzed. CTA graded by near-occlusion experts was used as reference standard. Low flow velocity, unusual findings, and commonly used flow velocity parameters were analyzed. Results: One hundred three near-occlusions, 272 conventional ≥50% stenosis, 162 <50% stenosis, and 11 occlusions were included. Carotid ultrasound was 22% (95%CI 14-30%; 23/103) sensitive and 99% (95%CI 99-100%; 442/445) specific for near-occlusion diagnosis. Near-occlusions overlooked on ultrasound were found misdiagnosed as occlusions (n = 13, 13%), conventional ≥50% stenosis (n = 65, 63%) and < 50% stenosis (n = 2, 2%). No velocity parameter or combination of parameters could identify the 65 near-occlusions mistaken for conventional ≥50% stenoses with >75% sensitivity and specificity. Conclusion: Near-occlusion is difficult to diagnose with commonly used carotid ultrasound methods. Improved carotid ultrasound methods are needed if ultrasound is to retain its position as sole preoperative modality. abstract_id: PUBMED:32325231 Follow-up after carotid stenting with the CASPER stent system: A duplex ultrasound evaluation. Objective: To report results of duplex ultrasound evaluation of consecutive patients after carotid stenting with the double layer Carotid Artery Stent designed to Prevent Embolic Release (CASPER) stent system. Methods: Between January 2014 and June 2017, a single-center, retrospective study of 101 consecutive patients (21.8% female; median age, 72.1 years) was performed. Patients with internal carotid artery stenosis treated with the CASPER stent were included. Eligibility criteria for stenting included stenosis of ≥70% of the vessel diameter (or ≥50% diameter with ulceration) in symptomatic carotid artery stenosis or ≥80% stenosis in asymptomatic patients at the carotid artery bifurcation or the proximal cervical internal carotid artery. Duplex ultrasound examination was performed before and within 24 hours of implantation as well as at 14 days, and 3, 6, and 12 months. Results: At the 12-month follow-up visit, moderate in-stent restenosis (ISR) (≥50% and <70%) was detected in three stents (2.8%) and severe (≥70%) ISR in two (1.9%; including one case of stent occlusion). All but the two latter patients remained asymptomatic during the follow-up period. One patient required retreatment for ISR after a minor stroke and another patient with stent occlusion also re-presented with a minor stroke. Multivariable logistic regression was unable to detect any significant factors associated with ISR. Conclusions: Duplex ultrasound examination after carotid stenting is a useful tool for patient follow-up and determination of ISR. We found a low incidence of ISR assessed by duplex ultrasound examination at 12 months after CASPER stenting, but further studies are warranted. abstract_id: PUBMED:3278698 The limitations of diagnosis of carotid occlusion by Doppler ultrasound. Duplex scanning has been advocated as an acceptable alternative to angiography in the preoperative evaluation of carotid artery stenosis. To evaluate the accuracy of carotid Doppler in differentiating severe carotid stenosis from occlusion, we compared the results of angiography with duplex scanning in 124 carotid arteries (62 patients) and with continuous-wave Doppler in 662 carotid arteries (331 patients). The specificity was 95-99%, sensitivity was 86-96%, and accuracy was 95-98%. Duplex scanning wrongly identified occlusion in four arteries and failed to detect occlusion in one artery. In making decisions prior to carotid endarterectomy, even infrequent errors are unacceptable. We recommend angiography of all surgical candidates with apparent severe stenosis when the internal carotid artery cannot be clearly identified on duplex, or to distinguish apparent occlusion from undetectably low blood flow. abstract_id: PUBMED:2690925 Carotid arteries in central retinal vessel occlusion as assessed by Doppler ultrasound. Doppler ultrasound was used to detect possible flow changes in the carotid arteries of patients with central retinal artery and vein occlusion. Twenty-three patients with central retinal artery occlusion (mean age 56, SD 11, years) were examined 4 to 48 months after the development of the occlusion and compared with age and sex matched control subjects with no history of any disease known to be associated with pathological changes in carotid vessels. Significant stenosis or occlusion of one or more carotid arteries was discovered in eight patients with retinal artery occlusion, while the ultrasonic findings were normal in all the controls (p less than 0.005). Blood flow was evaluated by the same method in 16 patients with central retinal vein occlusion (mean age 57, SD 9, years) six to 48 months after the event. A control group was chosen according to the same criteria as in previous comparison. Pathological ultrasonic findings were observed neither in the patients with retinal vein occlusion nor in the control group. The results suggested a possible aetiological relation between pathological changes in the carotid arteries and occlusion of the central retinal artery, but not occlusion of the central retinal vein. abstract_id: PUBMED:10753281 Use of ultrasound contrast in the diagnosis of carotid artery occlusion. Objective: The purpose of this study was to evaluate the use of an echo-enhancing agent in patients with carotid artery occlusion to improve the sensitivity and specificity of carotid color flow ultrasonography. Method: Between January 1997 and December 1998, a prospective study involving 85 cases of carotid artery occlusion in 84 patients was carried out. After a baseline duplex ultrasonography (DU) diagnosis, a second (DU) along with an echo-enhancement agent (SHU-508-A [Levovist]) study was carried out (echo enhancement ultrasonography diagnosis [DUEE]). In 82 cases, a contrast angiography was performed to confirm the diagnosis, whereas in the other three cases the diagnoses were confirmed with surgery. Results: From the 85 internal carotid artery occlusions diagnosed at the initial DU examination, seven came out to be false occlusions in the DUEE examination (8,2%). There was a 100% correlation of the cases between the DUEE examination and the contrast angiography in the 82 cases in which this had been done. In three of the cases, the diagnosis was confirmed surgically because these displayed severe stenoses according to the DUEE studies in symptomatic patients, and so they required urgent treatment. Conclusions: The DUEE study is a potent diagnosis tool that allows the differentiation between true carotid artery occlusions and pseudo-occlusions. abstract_id: PUBMED:29309110 Color Doppler imaging features in patients presenting central retinal artery occlusion with and without giant cell arteritis. Introduction: Central retinal artery obstruction (CRAO) represents an abrupt diminution of blood flow through the CRA that is severe enough to cause ischemia of the inner retina with permanent unilateral visual loss. We presented the role of color Doppler imaging (CDI) of orbital vessels and of extracranial duplex sonography (EDS) in the etiological diagnosis of CRAO in two patients with clinical suspicion of unilateral CRAO. Case Report: Patients were examined following the protocol which included CDI of orbital vessels and EDS. Both patients had no emboli visible on ophthalmoscopy. The B-scan ultrasound evaluation of the first patient found a small round, moderately reflective echo within the right optic nerve, 1.5 mm behind the optic disc (emboli of cholesterol). CDI of retrobulbar vessels revealed the normal right ophthalmic artery (OA) hemodynamic parameters, but the first patient had no arterial flow signal on CDI at the distance of 1.5 mm behind the right optic disc. In contrast, the left eye had the normal aspect on CDI of retrobulbar vessels. The right internal carotid artery EDS identified a severe stenosis at its origin as CRA’s emboli source. The second patient had characteristic CDI findings for giant cell arteritis (GCA) with eye involvement: severe diminished blood flow velocities, especially end-diastolic velocities, in both CRAs. Less abnormalities were observed in the posterior ciliary arteries, and in the ophthalmic arteries. The second patient had no systemic symptoms or signs of GCA. Conclusion: In the presented cases, the ultrasound investigation enabled prompt differentiation between central retinal artery occlusion of embolic mechanism and CRAO caused by GCA. abstract_id: PUBMED:33944975 Defining carotid near-occlusion with full collapse: a pooled analysis. Purpose: Create a new definition of near-occlusion with full collapse to predicting recurrent stroke. Methods: Pooled analysis of two studies. Patients with symptomatic ≥ 50% carotid stenoses were included. Outcome was preoperative recurrent ipsilateral ischemic stroke or retinal artery occlusion within 28 days of presenting event. We analyzed several artery diameters on computed tomography angiography and stenosis velocity on ultrasound. Results: A total of 430 patients with symptomatic ≥ 50% carotid stenosis were included, 27% had near-occlusion. By traditional definition, 27% with full collapse and 11% without full collapse reached the outcome (p = 0.047). Distal internal carotid artery (ICA) diameter, ICA ratio, and ICA-to-external carotid artery ratio were associated with the outcome. Best new definition of full collapse was distal ICA diameter ≤ 2.0 mm and/or ICA ratio ≤ 0.42. With this new definition, 36% with full collapse and 4% without full collapse reached the outcome (p < 0.001). Conclusions: Defining near-occlusion with full collapse as distal ICA diameter ≤ 2.0 mm and/or ICA ratio ≤ 0.42 seems to yield better prognostic discrimination than the traditional appearance-based definition. This novel definition can be used in prognostic and treatment studies of near-occlusion with full collapse. abstract_id: PUBMED:24572006 Prehospital stroke diagnostics based on neurological examination and transcranial ultrasound. Background: Transcranial color-coded sonography (TCCS) has proved to be a fast and reliable tool for the detection of middle cerebral artery (MCA) occlusions in a hospital setting. In this feasibility study on prehospital sonography, our aim was to investigate the accuracy of TCCS for neurovascular emergency diagnostics when performed in a prehospital setting using mobile ultrasound equipment as part of a neurological examination. Methods: Following a '911 stroke code' call, stroke neurologists experienced in TCCS rendezvoused with the paramedic team. In patients with suspected stroke, TCCS examination including ultrasound contrast agents was performed. Results were compared with neurovascular imaging (CTA, MRA) and the final discharge diagnosis from standard patient-centered stroke care. Results: We enrolled '232 stroke code' patients with follow-up data available in 102 patients with complete TCCS examination. A diagnosis of ischemic stroke was made in 73 cases; 29 patients were identified as 'stroke mimics'. MCA occlusion was diagnosed in ten patients, while internal carotid artery (ICA) occlusion/high-grade stenosis leading to reversal of anterior cerebral artery flow was diagnosed in four patients. The initial working diagnosis 'any stroke' showed a sensitivity of 94% and a specificity of 48%. 'Major MCA or ICA stroke' diagnosed by mobile ultrasound showed an overall sensitivity of 78% and specificity of 98%. Conclusions: The study demonstrates the feasibility and high diagnostic accuracy of emergency transcranial ultrasound assessment combined with neurological examinations for major ischemic stroke. Future combination with telemedical support, point-of-care analysis of blood serum markers, and probability algorithms of prehospital stroke diagnosis including ultrasound may help to speed up stroke treatment. abstract_id: PUBMED:37992947 Duplex ultrasound and cross-sectional imaging in carotid artery occlusion diagnosis. Objective: Investigations into imaging modalities in the diagnosis of extracranial carotid artery occlusion (CAO) have raised questions about the inter-modality comparability of duplex ultrasound (DUS) and cross-sectional imaging (CSI). This study examines the relationship between DUS and CSI diagnoses of extracranial CAO. Methods: This single-institution retrospective analysis studied patients with CAO diagnosed by DUS from 2010 to 2021. Patients were identified in our office-based accredited vascular laboratory database. Imaging and clinical data was obtained via our institutional electronic medical record. Primary outcome was discrepancy between DUS and CSI modalities. Secondary outcomes included incidence of stroke and intervention subsequent to CAO diagnosis. Results: Of our 140-patient cohort, 95 patients (67.9%) had DUS follow-up (mean, 42.7 ± 31.3 months). At index duplex, 68.0% of individuals (n = 51) were asymptomatic. Seventy-five patients (53.6%) had CSI of the carotids after DUS CAO diagnosis; 18 (24%) underwent magnetic resonance imaging and 57 (76%) underwent computed tomography. Indications for CSI included follow-up of DUS findings of carotid stenosis/occlusion (44%), stroke/transient ischemic attack (16%), other symptoms (12%), preoperative evaluation (2.7%), unrelated pathology follow-up (9.3%), and outside institution imaging with unavailable indications (16%). When comparing patients with CSI and those without, there were no differences with regard to symptoms at diagnosis, prior neck interventions, or hypertension. There was a significant difference between cross-sectionally imaged and non-imaged patients in anti-hypertensive medications (72% vs 53.8%; P = .04). Despite initial DUS diagnoses of carotid occlusion, 10 patients (13.3%) ultimately had CSI indicating patent carotids. Four of these 10 patients had stenoses of ∼99% (with 1 string sign), four of 70% to 99%, one of 50% to 69%, and one of less than 50% on CSI. The majority of patients (70%) had CSI within 1 month of the index ultrasound. There were no significant relationships between imaging discrepancies and body mass index, heart failure, upper body edema, carotid artery calcification, and neck hardware. Eight individuals (10.7%) underwent ipsilateral revascularization; 62.5% (n = 5) were carotid endarterectomy procedures, and the remaining three procedures were a transcervical carotid revascularization, subclavian to internal carotid artery bypass, and transfemoral carotid artery stenting. Eight patients (10.7%) underwent contralateral revascularization, with the same distribution of procedures as those ipsilateral to occlusions. Two of the 10 patients with discrepancies underwent carotid endarterectomy, and one underwent carotid stenting. Conclusions: In our experience, duplex diagnosis of CAO is associated with a greater than 10% discordance when compared with CSI. These patients may benefit from closer surveillance as well as confirmatory computed tomography or magnetic resonance angiography. Further work is needed to determine the optimal diagnostic modality for CAO. Answer: Ultrasound examination alone may not be sufficient in the evaluation of patients with internal carotid artery (ICA) severe stenosis or occlusion. Several studies have highlighted the limitations and challenges associated with using ultrasound as the sole diagnostic modality for this purpose. One study found that while ultrasound confirmed the degree of stenosis in patients with 70-99% ICA stenosis, it could not detect complete occlusion with certainty. In some cases, what was thought to be an occlusion on ultrasound was actually a near-occlusion confirmed by angiography, and these patients could benefit from carotid endarterectomy (CE) (PUBMED:12646775). Another study reported that carotid ultrasound was only 22% sensitive for diagnosing near-occlusion, indicating that near-occlusion is difficult to diagnose with commonly used carotid ultrasound methods (PUBMED:33715027). Duplex ultrasound has been shown to be useful for patient follow-up after carotid stenting, as in the case of the CASPER stent system, but the study also suggests that further research is warranted to confirm these findings (PUBMED:32325231). Historical data also suggest that carotid Doppler may incorrectly identify occlusion in some arteries and fail to detect occlusion in others, leading to the recommendation of angiography in certain cases to confirm the diagnosis (PUBMED:3278698). The use of ultrasound contrast agents has been shown to improve the sensitivity and specificity of carotid color flow ultrasonography in diagnosing carotid artery occlusion, suggesting that enhanced ultrasound methods may be more reliable than standard ultrasound alone (PUBMED:10753281). Additionally, a study on prehospital stroke diagnostics found that transcranial ultrasound combined with neurological examination can accurately diagnose major ischemic stroke, but this requires experienced stroke neurologists and may not be widely applicable (PUBMED:24572006). Finally, a retrospective analysis found that duplex diagnosis of carotid artery occlusion had a greater than 10% discordance when compared with cross-sectional imaging (CSI), indicating that some patients diagnosed with occlusion by ultrasound actually had patent carotids on CSI (PUBMED:37992947). In conclusion, while ultrasound is a valuable tool in the evaluation of ICA stenosis and occlusion, it may not always be sufficient as a standalone diagnostic method.
Instruction: Does aggressive surgery only benefit patients with less advanced ovarian cancer? Abstracts: abstract_id: PUBMED:35790254 Highly Aggressive Surgery Benefits in Patients With Advanced Ovarian Cancer. Background/aim: We investigated whether highly aggressive surgery has survival and perioperative complication benefit in patients with advanced ovarian cancer. Patients And Methods: This retrospective study included 209 patients with stage III/IV ovarian cancer who underwent aggressive surgery [surgical complexity score (SCS) ≥8] between January 2008 and December 2018. Patients were categorized into the SCS 8-12 (less aggressive surgery, 83 patients) and SCS ≥13 (highly aggressive surgery, 126 patients) groups. Survival outcomes and perioperative complications between the groups were compared. Patient suitability for primary debulking surgery or neoadjuvant chemotherapy (NACT) followed by interval debulking surgery (IDS) was based on the evaluation of performance status, tumor load, and ascites. If patients were suitable for NACT/IDS, the residual tumor margins were marked at the initial laparotomy. The previously marked lesions were removed during IDS, even in patients with macroscopic tumor resolution. Results: Prevalence rates of stage IV disease, poor performance status, presence of omental cake, peritoneal cancer index ≥15, and IDS performed were significantly higher in the highly aggressive surgery group than in the less aggressive surgery group. The median progression-free survival (PFS) and overall survival (OS) were not significantly different between the groups (PFS, 32 and 31 months, respectively; p=0.622; OS, 99 and 75 months, respectively; p=0.390). The incidence of severe perioperative complications was not significantly different between the less aggressive group (4.8%) and the highly aggressive surgery group (6.4%) (p=0.767). Conclusion: Highly aggressive surgery with appropriate selection regardless of the timing of cytoreduction benefits patients with advanced ovarian cancer. abstract_id: PUBMED:16314640 Does aggressive surgery only benefit patients with less advanced ovarian cancer? Results from an international comparison within the SCOTROC-1 Trial. Purpose: Studies indicate that ovarian cancer patients who have been optimally debulked survive longer. Although chemotherapy has been variable, they have defined standards of care. Additionally, it is suggested that patients from the United Kingdom (UK) have inferior survival compared with some other countries. We explored this within the context of a large, international, prospective, randomized trial of first-line chemotherapy in advanced ovarian cancer (docetaxel-carboplatin v paclitaxel-carboplatin; SCOTROC-1). The Scottish Randomised Trial in Ovarian Cancer surgical study is a prospective observational study examining the impact on progression-free survival (PFS) of cytoreductive surgery and international variations in surgical practice. Patients And Methods: One thousand seventy-seven patients were recruited (UK, n = 689; Europe, United States, and Australasia, n = 388). Surgical data were available for 889 patients. These data were analyzed within a Cox model. Results: There were three main observations. First, more extensive surgery was performed in non-UK patients, who were more likely to be optimally debulked (< or = 2 cm residual disease) than UK patients [corrected] (71.3% v 58.4%, respectively; P < .001). Second, optimal debulking was associated with increased PFS mainly for patients with less extensive disease at the outset (test for interaction, P = .003). Third, UK patients with no visible residual disease had a less favorable PFS compared with patients recruited from non-UK centers who were similarly debulked (hazard ratio = 1.85; 95% CI, 1.16 to 2.97; P = .010). This observation seems to be related to surgical practice, primarily lymphadenectomy. Conclusion: Increased PFS associated with optimal surgery is limited to patients with less advanced disease, arguing for case selection rather than aggressive debulking in all patients irrespective of disease extent. Lymphadenectomy may have beneficial effects on PFS in optimally debulked patients. abstract_id: PUBMED:32500467 Aggressive surgery for advanced ovarian cancer decreases the risk of intraperitoneal recurrence. Background: This study investigated the pattern of first recurrence of advanced ovarian cancer before and after the introduction of aggressive surgery. Methods: We investigated 291 patients with stage III/IV epithelial ovarian, fallopian tube, and peritoneal cancer. Aggressive surgery including gastrointestinal and upper abdominal surgeries was introduced for advanced ovarian cancer in 2008. The site and time until first recurrence were compared between 70 patients treated without aggressive surgery (2000-2007) and 221 patients who underwent aggressive surgery (2008-2016). Results: The intraperitoneal recurrence rate was significantly lower in patients treated during 2008-2016 than in patients treated during 2000-2007 (55% [82/149] vs. 81% [46/57], p < 0.001). The median time to intraperitoneal recurrence was significantly longer during 2008-2016 than during 2000-2007 (36.2 months, 95% confidence interval [CI] 31.7-60.0 vs. 14.6 months, 95% CI 11.3-20.1, log-rank test: p < 0.001). However, extraperitoneal recurrence rate was significantly higher during 2008-2016 than during 2000-2007 (27% [40/149] vs. 2% [1/57], p < 0.001). Extraperitoneal recurrence occurred during 2008-2016 in the pleura/lungs and the para-aortic lymph nodes above the renal vessels. Cox proportional hazards regression analysis revealed that treatment period (HR 0.49, 95% CI 0.34-0.71, p < 0.001) and bevacizumab use (HR 0.58, 95% CI 0.39-0.87, p = 0.009) were independently associated with intraperitoneal recurrence; stage IV disease (HR 1.87, 95% CI 1.14-3.06, p = 0.034) was independently associated with extraperitoneal recurrence. Conclusion: Aggressive surgery reduced intraperitoneal recurrence and prolonged time to recurrence, contributing to better patient survival. abstract_id: PUBMED:19027146 Aggressive and complex surgery for advanced ovarian cancer: an economic analysis. Introduction: In an initial cohort we demonstrated that aggressive surgery correlates with improved survival in patients with advanced ovarian cancer yet the economic implications of maximal surgical efforts are unknown. Objective: To evaluate inpatient costs, survival, and cost-effectiveness of alternative primary surgical approaches among advanced ovarian cancer patients. Methods: All patients with a diagnosis of stage IIIC-IV ovarian cancer between 1994 and 2003 were identified and classified by surgical complexity score (SCS) (1=simple, 2=intermediate, and 3=complex). We used clinical and administrative data to estimate costs associated with inpatient stay, survival, and the 5-year cost-effectiveness of complex vs. simple surgery measured in costs per life-year gained. Results: 486 consecutive patients were identified of whom 28%, 50%, and 22% were classified as SCS 1, 2, and 3, respectively. Kaplan-Meier estimated survival differed by SCS group (p<0.001) with an average survival gain of 1.32 years with complex vs. simple surgery (SCS group 3 vs. 1). Inpatient costs significantly differed between SCS groups (mean costs SCS 1: $21,914; SCS 2: $27,408; SCS 3: $33,678; p<0.001). Analyses suggest incremental cost-effectiveness ratios of $4950 and $8912 per life-year gained, comparing SCS groups 2 vs. 1 and 3 vs. 1 respectively. Conclusions: Complex surgery for ovarian cancer cytoreduction carries a survival benefit at increased direct medical cost. However, preliminary cost-effectiveness results suggest complex surgery provides good value for money spent. Future research on the cost and quality of life implications of surgical morbidity during follow-up is warranted to formally assess the cost-effectiveness of complex vs. simple surgical procedures. abstract_id: PUBMED:2227548 Aggressive chemosurgical debulking in patients with advanced ovarian cancer. From July 1986 to June 1989, 43 evaluable patients with advanced ovarian cancer were treated on protocol with initial cytoreductive surgery, two courses of high-intensity intravenous Cytoxan (1000 mg/m2) and cisplatin (120-200 mg/m2) chemotherapy, and repeat debulking laparotomy in an effort to maximize response to a subsequent four cycles of intraperitoneal platinum-based chemotherapy. Two patients were stage IIIA, 2 stage IIIB, 28 stage IIIC, and 11 stage IV. Five tumors were grade 1, 9 grade 2, and 29 grade 3. Thirty-eight (88%) patients had bulky tumor (5-25 cm) found at first laparotomy; 25 of these had greater than 1-cm residual after initial debulking. Following two cycles of intensive intravenous chemotherapy 18 of these 25 had greater than 1-cm disease found at second laparotomy; 12 of 18 underwent secondary cytoreduction to less than 1 cm. Thus, 30 of these 38 (79%) patients entered the intraperitoneal phase of the protocol with less than 1-cm disease. Four patients had 2- to 5-cm tumor at initial laparotomy; two of four were debulked to less than 1-cm residual. All four were found to have less than 1-cm disease at second laparotomy. This combination regimen was well tolerated. There was one treatment-related death. In sum, 42 of 43 patients had tumor greater than 2 cm at staging laparotomy and 38 (88%) had large, bulky disease (5-25 cm); 34 of 43 (79%) entered the intraperitoneal phase of the protocol with optimal (less than 1-cm) disease. Aggressive chemosurgical cytoreduction in patients with bulky advanced ovarian cancer can leave a large proportion of patients with minimal residual disease and maximize their chances of responding to subsequent intraperitoneal chemotherapy. abstract_id: PUBMED:26713279 Selecting the best strategy of treatment in newly diagnosed advanced-stage ovarian cancer patients. Although it is assumed that the combination of chemotherapy and radical surgery should be indicated in all newly diagnosed advanced-stage ovarian cancer patients, one of the main raised questions is how to select the best strategy of initial treatment in this group of patients, neoadjuvant chemotherapy followed by interval debulking surgery or primary debulking surgery followed by adjuvant chemotherapy. The selection criteria to offer one strategy over the other as well as a stepwise patient selection for initial treatment are described. Selecting the best strategy of treatment in newly diagnosed advanced stage ovarian cancer patients is a multifactorial and multidisciplinary decision. Several factors should be taken into consideration: (1) the disease factor, related to the extension and localization of the disease as well as tumor biology; (2) the patient factor, associated with patient age, poor performance status, and co-morbidities; and (3) institutional infrastructure factor, related to the lack of prolonged operative time, an appropriate surgical armamentarium, as well as well-equipped intensive care units with well-trained personnel. abstract_id: PUBMED:26197773 Role of aggressive surgical cytoreduction in advanced ovarian cancer. Ovarian cancer is the eighth most frequent cancer in women and is the most lethal gynecologic malignancy worldwide. The majority of ovarian cancer patients are newly diagnosed presenting with advanced-stage disease. Primary cytoreductive surgery and adjuvant taxane- and platinum-based combination chemotherapy are the standard treatment for advanced ovarian cancer. A number of studies have consistently shown that successful cytoreductive surgery and the resultant minimal residual disease are significantly associated with survival in patients with this disease. Much has been written and even more debated regarding the competing perspectives of biology of ovarian cancer versus the value of aggressive surgical resection. This review will focus on the current evidences and outcomes supporting the positive impact of aggressive surgical effort on survival in the primary management of ovarian cancer. abstract_id: PUBMED:29683880 Economic Analysis of Neoadjuvant Chemotherapy Versus Primary Debulking Surgery for Advanced Epithelial Ovarian Cancer Using an Aggressive Surgical Paradigm. Objectives: Neoadjuvant chemotherapy (NACT) versus primary debulking surgery (PDS) for advanced epithelial ovarian cancer (AEOC) remains controversial in the United States. Generalizability of existing trial results has been criticized because of less aggressive debulking procedures than commonly used in the United States. As a result, economic evaluations using input data from these trials may not accurately reflect costs and outcomes associated with more aggressive primary surgery. Using data from an ongoing trial performing aggressive debulking, we investigated the cost-effectiveness and cost-utility of NACT versus PDS for AEOC. Methods: A decision tree model was constructed to estimate differences in short-term outcomes and costs for a hypothetical cohort of 15,000 AEOC patients (US annual incidence of AEOC) treated with NACT versus PDS over a 1-year time horizon from a Medicare payer perspective. Outcomes included costs per cancer-related death averted, life-years and quality-adjusted life-years (QALYs) gained. Base-case probabilities, costs, and utilities were based on the Surgical Complications Related to Primary or Interval Debulking in Ovarian Neoplasms trial. Base-case analyses assumed equivalent survival; threshold analysis estimated the maximum survival difference that would result in NACT being cost-effective at $50,000/QALY and $100,000/QALY willingness-to-pay thresholds. Probabilistic sensitivity analysis was used to characterize model uncertainty. Results: Compared with PDS, NACT was associated with $142 million in cost savings, 1098 fewer cancer-related deaths, and 1355 life-years and 1715 QALYs gained, making it the dominant treatment strategy for all outcomes. In sensitivity analysis, NACT remained dominant in 99.3% of simulations. Neoadjuvant chemotherapy remained cost-effective at $50,000/QALY and $100,000/QALY willingness-to-pay thresholds if survival differences were less than 2.7 and 1.4 months, respectively. Conclusions: In the short term, NACT is cost-saving with improved outcomes. However, if PDS provides a longer-term survival advantage, it may be cost-effective. Research is needed on the role of patient preferences in tradeoffs between survival and quality of life. abstract_id: PUBMED:28800941 Survival and safety associated with aggressive surgery for stage III/IV epithelial ovarian cancer: A single institution observation study. Objective: We evaluated the efficacy and safety of aggressive surgery for advanced ovarian cancer at a non-high-volume center. Materials And Methods: We evaluated consecutive patients with stage III/IV ovarian, fallopian, and peritoneal cancer undergoing elective aggressive surgery from January 2008 to December 2012, which encompassed the first 5years after implementing an aggressive surgery protocol. After receiving appropriate training for 9months, a gynecological surgical team began performing multi-visceral resections. Primary debulking surgery was chosen when the team considered that optimal surgery was achievable on the initial laparotomy, otherwise interval debulking surgery was chosen (the protocol treatments). Analysis was performed on an intention-to-treat basis (full-set analysis), and outcomes were compared to those of patients who underwent standard surgery between 2000 and 2007. Results: Of 106 consecutive patients studied, 87 (82%) underwent aggressive surgery per protocol and 19 were excluded. Serous carcinoma was the most common disease (78%), followed by clear cell carcinoma (7%), and 32% of the patients had stage IV disease. The respective median progression-free and overall survival rates increased from 14.6 and 38.1months before implementation, respectively, to 25.0 and 68.5months after implementation, respectively. Complete resection was achieved in 83 of the 106 patients (78%), and the surgical complexity score was high (>8) in 61 patients (58%); although there was no mortality within 12weeks of surgery, major complications occurred in 8 patients. Conclusions: We confirmed that outcomes improved after implementing aggressive surgery for advanced ovarian cancer, without causing a significant increase in mortality. Factors enhancing survival outcomes are discussed. abstract_id: PUBMED:35644730 End-of-life care for patients with advanced ovarian cancer in the Netherlands: A retrospective registry-based analysis. Objective: Patients with advanced ovarian cancer have a poor prognosis and can experience debilitating symptoms in the last phase of life. Several analyses, mainly performed in the United States (US), show high rates of chemotherapy administration and hospital visits near the end-of-life in this patient category. No large European studies are available, while the organisation of palliative care differs between the US and Europe. We aimed to analyse the intensity of inpatient care near the end-of-life in the Netherlands and perform a cross-study comparison with previous reports. Methods: All patients with ovarian cancer that died in 2016 and 2017 were identified from the Vektis database, a data warehouse including all insurance declarations in the Netherlands. For the last 6 months of life the following parameters of aggressive care were extracted: administration of chemotherapy, emergency room (ER) visits, surgical procedures, hospital and intensive care unit (ICU) admissions. The intensity of inpatient care was compared to previously reported European and US data. Results: Data on medical care use was available for 1775 patients. During the last 6 months of life, half of the ovarian cancer patients were admitted to hospital. Chemotherapy administration near the end-of-life was infrequent: 12% in the last month of life. Surgery and ICU admissions in the final 6 months of life were rare (<10%). Our cohort showed the lowest percentages of all five indicators of aggressive care reported thus far. Conclusion: Aggressive medical care use in the final 6 months of life in this Dutch cohort of ovarian cancer patients was lower than in other previously reported cohorts. Answer: The evidence from the provided abstracts suggests that the benefit of aggressive surgery in patients with advanced ovarian cancer may depend on various factors, including the extent of the disease and the patient's overall condition. One study within the SCOTROC-1 Trial found that increased progression-free survival (PFS) associated with optimal debulking surgery was mainly observed in patients with less extensive disease at the outset, suggesting that aggressive surgery may benefit patients with less advanced ovarian cancer more than those with more extensive disease (PUBMED:16314640). This study also noted that UK patients with no visible residual disease had a less favorable PFS compared to non-UK patients who were similarly debulked, potentially due to differences in surgical practices such as lymphadenectomy. However, other studies have shown benefits of aggressive surgery in advanced ovarian cancer. A retrospective study indicated that highly aggressive surgery with appropriate patient selection, regardless of the timing of cytoreduction, benefits patients with advanced ovarian cancer, without significantly increasing severe perioperative complications (PUBMED:35790254). Another study reported that aggressive surgery reduced intraperitoneal recurrence and prolonged time to recurrence, contributing to better patient survival (PUBMED:32500467). An economic analysis suggested that complex surgery for ovarian cancer cytoreduction carries a survival benefit at increased direct medical cost, but the cost-effectiveness results suggest that complex surgery provides good value for money spent (PUBMED:19027146). In summary, while one study suggests that the benefit of aggressive surgery may be more pronounced in patients with less advanced disease (PUBMED:16314640), other studies indicate that appropriately selected patients with advanced ovarian cancer can benefit from aggressive surgery in terms of survival and recurrence without significantly increased complications (PUBMED:35790254, PUBMED:32500467). The decision to pursue aggressive surgery should be made on a case-by-case basis, considering the extent of the disease, the patient's condition, and the surgical expertise available (PUBMED:26713279, PUBMED:26197773).
Instruction: Can the histologic changes of cystic fibrosis-associated hepatobiliary disease be predicted by clinical criteria? Abstracts: abstract_id: PUBMED:9226524 Can the histologic changes of cystic fibrosis-associated hepatobiliary disease be predicted by clinical criteria? Background: Correlation between clinical parameters and histology changes in cystic fibrosis liver disease has not been documented. The purpose of this study was to determine the histologic spectrum of cystic fibrosis liver disease and the degree to which a clinical scoring system can identify subjects with significant histologic abnormalities. Methods: We reviewed the predictive value of physical examination, biochemical parameters, and a clinical liver score, incorporating physical examination and biochemical parameters, in predicting significant abnormalities of liver histology in 43 cystic fibrosis patients who underwent hepatic biopsy. Biopsies were scored by two masked pathologists for fibrosis, inflammation, inspissation, fatty infiltration, and congestion. Results: Significant histologic disease was present in 56% of patients despite little biochemical or physical examination evidence of disease. No single parameter used in the scoring system predicted the type or degree of the liver disease. The clinical liver score had a sensitivity of 85% and a specificity of 82% in predicting significant histologic changes, yet it was unable to predict the specific lesion. Conclusions: Significant histologic liver disease is common in cystic fibrosis, although the exact nature of the lesion cannot be predicted without liver biopsy. A clinical liver score that was developed for this may be useful in determining which patients require more definitive evaluation. abstract_id: PUBMED:24868271 Aetiopathogenesis of liver changes in the course of cystic fibrosis, considering disturbances of the bile acid profile as well as genetic and immunological factors. Liver changes observed in the course of cystic fibrosis comprise a group of complex processes of fibrosis, inflammation, remodelling, apoptosis and cholestasis as a result of abnormal functioning of the cystic fibrosis transmembrane conductance regulator (CFTR) protein, immunological reactions and response to oxidation stress. Liver lesions are only observed in 5-20% of patients with diagnosed cystic fibrosis; however, they increase mortality, reduce the lifespan and deteriorate the quality of life. Liver diseases are the most common extrapulmonary causes of death in patients with cystic fibrosis. The aim of the study was to analyse the hitherto performed studies on the aetiopathogenesis of liver changes in the course of cystic fibrosis considering disturbances of the bile acid profile as well as genetic and immunological factors. abstract_id: PUBMED:14517517 Surveillance for cystic fibrosis-associated hepatobiliary disease: early ultrasound changes and predisposing factors. Objective: To investigate routine ultrasonography (US) as an early marker and to identify risk factors for the development of cirrhosis and portal hypertension (PHT) in cystic fibrosis (CF). Study Design: A cohort of 106 children with CF aged 5.9+/-2.3 years were followed for 10.4+/-0.2 years in a CF clinic. Results: At enrollment, the US was normal, but biochemical and/or clinical disease was present in 10%. By the end of the study, 19 had developed US changes, eight with evidence of PHT. At the time of the initial US change, only 36.4% of those had, at the end of the study, either a heterogeneous or a nodular parenchyma, and only 50% of those with PHT had biochemical and/or clinical disease. Of the 30 patients treated with ursodeoxycholic acid for biochemical and/or clinical disease with (n=15) and without (n=15) associated US changes, PHT developed in six of the former and two of the latter. Univariate analysis and logistic regression showed that children with more severe disease in terms of forced expiratory volume in one second were at somewhat greater risk (P<.06) of PHT developing. Conclusion: US was an early marker of liver disease and more severe CF disease, a predictor of progressive liver disease. A controlled trial should be done to assess isolated US-detected disease as an indication for UDCA. abstract_id: PUBMED:35086832 Forskolin-induced organoid swelling is associated with long-term cystic fibrosis disease progression. Rationale: Cystic fibrosis (CF) is a monogenic life-shortening disease associated with highly variable individual disease progression which is difficult to predict. Here we assessed the association of forskolin-induced swelling (FIS) of patient-derived organoids with long-term CF disease progression in multiple organs and compared FIS with the golden standard biomarker sweat chloride concentration (SCC). Methods: We retrieved 9-year longitudinal clinical data from the Dutch CF Registry of 173 people with mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. Individual CFTR function was defined by FIS, measured as the relative size increase of intestinal organoids after stimulation with 0.8 µM forskolin, quantified as area under the curve (AUC). We used linear mixed-effect models and multivariable logistic regression to estimate the association of FIS with long-term forced expiratory volume in 1 s % predicted (FEV1pp) decline and development of pancreatic insufficiency, CF-related liver disease and diabetes. Within these models, FIS was compared with SCC. Results: FIS was strongly associated with longitudinal changes of lung function, with an estimated difference in annual FEV1pp decline of 0.32% (95% CI 0.11-0.54%; p=0.004) per 1000-point change in AUC. Moreover, increasing FIS levels were associated with lower odds of developing pancreatic insufficiency (adjusted OR 0.18, 95% CI 0.07-0.46; p<0.001), CF-related liver disease (adjusted OR 0.18, 95% CI 0.06-0.54; p=0.002) and diabetes (adjusted OR 0.34, 95% CI 0.12-0.97; p=0.044). These associations were absent for SCC. Conclusion: This study exemplifies the prognostic value of a patient-derived organoid-based biomarker within a clinical setting, which is especially important for people carrying rare CFTR mutations with unclear clinical consequences. abstract_id: PUBMED:24166775 Tracheomalacia is associated with lower FEV1 and Pseudomonas acquisition in children with CF. Background: Tracheomalacia (TM) occurs in approximately 1 in 2,100 children. Because the trachea develops abnormally in animal models of cystic fibrosis (CF), we hypothesized this may also occur in children with CF, increasing their risk of TM. Purpose: To examine the prevalence and clinical consequences of TM in children with CF. Methods: We studied children with CF born between 1995 and 2012. TM was defined as dynamic collapse of the trachea, and the severity was recorded as described in the chart. The effect of TM on patient outcomes, including FEV1 , CT changes, and acquisition of CF pathogens, was assessed using a longitudinal patient dataset. Results: Eighty-nine percent of children with CF had at least one bronchoscopy (n = 97/109). Fifteen percent of these children had TM described in any bronchoscopy report (n = 15/97). Of the patients with TM, eight had meconium ileus (P = 0.003) and all were pancreatic insufficient. Pseudomonas aeruginosa infection occurred 1.3 years earlier among children with TM (P = 0.01). Starting FEV1 values by age 8 were diminished by over 18% of predicted for patients with TM. Life-threatening episodes of airway obstruction occurred in 3 of 15 patients with CF and TM, including one leading to death. Gender, prematurity, and hepatic disease were not associated with TM. No difference was observed in the frequency of bronchiectasis. Conclusions: TM is significantly more common in infants and children with CF than in the general population and is associated with airway obstruction and earlier Pseudomonas acquisition. abstract_id: PUBMED:29473496 Modulation of Hepatic MRP3/ABCC3 by Xenobiotics and Pathophysiological Conditions: Role in Drug Pharmacokinetics. Liver transporters play an important role in the pharmacokinetics and disposition of pharmaceuticals, environmental contaminants, and endogenous compounds. Among them, the family of ATP-Binding Cassette (ABC) transporters is the most important due to its role in the transport of endo- and xenobiotics. The ABCC sub-family is the largest one, consisting of 13 members that include the cystic fibrosis conductance regulator (CFTR/ABCC7); the sulfonylurea receptors (SUR1/ABCC8 and SUR2/ABCC9) and the multidrug resistanceassociated proteins (MRPs). The MRP-related proteins can collectively confer resistance to natural, synthetic drugs and their conjugated metabolites, including platinum-containing compounds, folate anti-metabolites, nucleoside and nucleotide analogs, among others. MRPs can be also catalogued into "long" (MRP1/ABCC1, -2/C2, -3/C3, -6/C6, and -7/C10) and "short" (MRP4/C4, -5/C5, -8/C11, -9/C12, and -10/C13) categories. While MRP2/ABCC2 is expressed in the canalicular pole of hepatocytes, all others are located in the basolateral membrane. In this review, we summarize information from studies examining the changes in expression and regulation of the basolateral hepatic transporter MPR3/ABCC3 by xenobiotics and during various pathophysiological conditions. We also focus, primarily, on the consequences of such changes in the pharmacokinetic, pharmacodynamic and/or toxicity of different drugs of clinical use transported by MRP3. abstract_id: PUBMED:9007719 Systemic haemodynamic changes in patients with cystic fibrosis with and without chronic liver disease. Background/aims: There are well-documented systemic haemodynamic changes associated with chronic liver disease. Patients with cystic fibrosis may develop chronic liver disease, but it is not known whether these systemic haemodynamic changes develop and, if they do, whether they are influenced by the associated chronic lung disease. We therefore undertook a study to document the circulatory status of cystic fibrosis patients with and without chronic liver disease. Methods: Fifty-six subjects with cystic fibrosis were studied. Systemic haemodynamic and pulmonary parameters, in sub-groups both with (LD) and without (NLD) liver disease, were measured at rest and during measured exertion. Cystic fibrosis-related chronic liver disease was diagnosed using previously validated ultrasound criteria. Patients underwent assessment at rest and in the fourth minute of seated bicycle exercise at 25W. Heart rate (ECG), blood pressure (semiautomated sphygmomanometer), aortic blood velocity (pulsed Doppler suprasternal probe), arterial oxygen saturation (pulse oximeter) and respiratory variables (pneumotachometer with expired gas analysis by an automatic system) were measured. Results: A complete data set was available for 45 patients (22 LD) at rest and 40 patients (19 LD) on exercise. The patients were well matched for age, sex, height, weight, and pulmonary function. Patients with chronic liver disease had a hyperkinetic circulation while ventilatory variables before and during exercise were similar for the two groups. There was evidence that the circulatory changes were exacerbated by both deteriorating hepatic and pulmonary function. Conclusions: Cystic fibrosis patients with chronic liver disease have a hyperdynamic circulation similar to that documented in other forms of chronic liver disease. These circulatory changes are exacerbated by deteriorating hepatic and pulmonary function. abstract_id: PUBMED:7712318 Institutional and patient criteria for heart-lung transplantation. Heart-lung transplantation is a complex, expensive, and resource-intensive procedure. It is performed more often where there is coexistence of end-stage pulmonary disease with advanced cardiac disease that is not secondary to pulmonary hypertension (e.g., some forms of congenital heart disease, some situations with Eisenmenger's syndrome, and possibly the coexistence of end-stage heart and end-stage lung disease). Available data indicate that some disease states are associated with a higher probability of successful outcome after HLT than are others. Objectively validated and reliable patient selection criteria cannot be obtained from the current medical literature. Expert opinion provided to OHTA contained varying degrees of detail and uniformity with respect to patient selection (Tables 11 and 12). While most agree that "irreversible cardiopulmonary disease" is an indication for this procedure, there are no published data that have provided details as to how patients so designated were selected as candidates for HLT. For example, for chronic obstructive pulmonary disease (emphysema), neither expert opinion nor published studies of HLT provided data indicating the stage of the disease at which a decision to transplant is reasonable and objectively justifiable. A similar lack of detail for cystic fibrosis, bronchiectasis, pulmonary fibrosis, sarcoidosis, asbestosis, and other diseases was noted. Thus, there are insufficient data regarding transplant recipients to permit the development of objective criteria for a threshold for HLT in the various disease states. The circumstances are more problematic regarding suggested contraindications for HLT. While objective survival rates obtained from registry data indicated that survival bore little relationship to recipient age, various opinions provided to OHTA specified a maximum age for HLT candidates, ranging from under 45 years to 60 years. Adverse factors and contraindications, such as "other significant disease," "recurrent pulmonary emboli," "hepatic disease," "infection," and "insulin-dependent diabetes" were nonspecific and unsupported by published, objective evidence. More ambiguous contraindications included "psychiatric illness," "poor family support structure," "history of noncompliance with medical regimens," and lack of "emotional stability." While psychiatric, psychologic, or sociologic evaluations of potential HLT recipients undeniably have relevance to patient selection, use of such criteria should be supported at least by evidence of generally consistent application across the transplant community. Of note was the recommended requirement that the recipient possess "adequate financial resources," which implies that the benefits of HLT are not expected to be equitably provided to the public.(ABSTRACT TRUNCATED AT 400 WORDS) abstract_id: PUBMED:26238925 DEFI-ALPHA cohort and POLYGEN DEFI-ALPHA clinical research hospital programme. A study about clinical, biological and genetics factors associated with the occurrence and the evolution of hepatic complications in children with alpha-1 antitrypsin deficiency Introduction: The alpha-1 antitrypsin (α1-AT) deficiency, most frequently caused by homozygosity for the Z variant (SERPINA1: c.1096 G>A; Glu342Lys), can give rise to two clinical patterns: (i) respiratory impairment with emphysema (mainly in adulthood) because of a pulmonary quantitative defect in anti-elastase activity; (ii) hepatic impairment (mainly in childhood) due to the misfolding of the PiZ protein which accumulates in hepatocytes thus providing cytotoxicity. Current Knowledge: To date, the clinical and genetic factors responsible for the development of major hepatic injuries (fibrosis and portal hypertension) during childhood in PiZ patients are not known. Methods: The DEFI-ALPHA cohort, created in 2008, aims to inventory and prospectively study all α1-AT deficient children diagnosed and included after occurrence of a hepatic sign. The POLYGEN DEFI-ALPHA PHRC has recently (2013) been added to the project to identify modifiers genes by two complementary approaches: (i) the candidate genes strategy with the SERPINA1, CFTR (cystic fibrosis gene), MAN1B1 and SORL1 genes, these two latter being implied in the degradation of misfolding proteins; (ii) the whole exome sequencing (WES) strategy in families in which the PiZ proband has a PiZ brother or sister free of any hepatic sign. Expected Results: The clinical parameter we want to explain is the apparition of a portal hypertension in PiZ children. In the DEFI-ALPHA project, three criteria will be tested: (i) age of inclusion in the cohort, (ii) the way of inclusion (neo-natal icterus or later hepatic impairment) and (iii) treatment or not with ursodesoxycholic acid and, if so, its duration. Genetically, polymorphisms on the SERPINA1 and MAN1B1 genes have already been associated in the literature with different clinical evolutions of the A1ATD but very inconstantly. Our study thus aims to confirm or not this association. The CFTR and SORL1 genes have never been studied in the α1-AT deficiency. Finally, the whole exome sequencing strategy could allow the discovery of new unexpected modifiers genes in this disease. abstract_id: PUBMED:30054796 The Efficacy of MRI in the diagnostic workup of cystic fibrosis-associated liver disease: A clinical observational cohort study. Purpose: To identify independent imaging features and establish a diagnostic algorithm for diagnosis of cystic fibrosis (CF)-associated liver disease (CFLD) in CF patients compared to controls using gadoxetic acid-enhanced MRI. Methods: A total of 90 adult patients were enrolled: 50 with CF, 40 controls. The CF group was composed of two subgroups: a retrospective test subgroup (n = 33) and a prospective validation subgroup (n = 17). Controls (patients with normal liver enzymes and only benign focal liver lesions) were divided accordingly (27:13). MRI variables, including quantitative and qualitative parameters, were used to distinguish CFLD from controls using clinical symptoms, laboratory tests and Debray criteria. Disease severity was classified according to Child-Pugh and Albumin-Bilirubin (ALBI) scores. Fifteen qualitative single-lesion CF descriptors were defined. Two readers independently evaluated the images. Univariate statistical analysis was performed to obtain significant imaging features that differentiate CF patients from controls. Through multivariate analysis using chi-squared automatic interaction detector (CHAID) methodology the most important descriptors were identified. Diagnostic performance was assessed by receiver-operating characteristic (ROC) analysis. Results: Three independent imaging descriptors distinguished CFLD from controls: (1) presence of altered gallbladder morphology; (2) periportal tracking; and (3) periportal fat deposition. Prospective validation of the classification algorithm demonstrated a sensitivity of 94.1% and specificity of 84.6% for discriminating CFLD from controls. Disease severity was well associated with the imaging features. Conclusions: A short unenhanced MRI protocol can identify the three cardinal imaging features of CFLD. The hepatobiliary phase of gadoxetic acid-enhanced MRI can define CFLD progression. Key Points: • Using a multivariate classification analysis, we identified three independent imaging features, altered gallbladder morphology (GBAM), periportal tracking (PPT) and periportal fat deposition (PPFD), that could diagnose CFLD with high sensitivity, 94.1 % (95% CI: 71.3-99.9) and moderate specificity, 84.6 % (95% CI: 54.6-98.1). • Based upon the results of this study, gadoxetic acid-enhanced MRI with DWI is able to diagnose early-stage CFLD, as well as its progression. Answer: The histologic changes of cystic fibrosis-associated hepatobiliary disease cannot be reliably predicted by clinical criteria alone. A study aimed to determine the histologic spectrum of cystic fibrosis liver disease and the degree to which a clinical scoring system can identify subjects with significant histologic abnormalities found that significant histologic disease was present in 56% of patients despite little biochemical or physical examination evidence of disease. No single parameter used in the scoring system predicted the type or degree of the liver disease. The clinical liver score had a sensitivity of 85% and a specificity of 82% in predicting significant histologic changes, yet it was unable to predict the specific lesion (PUBMED:9226524). This suggests that while a clinical liver score may be useful in determining which patients require more definitive evaluation, it cannot substitute for liver biopsy in predicting the exact nature of the lesion in cystic fibrosis liver disease.
Instruction: Should we promote organ donor registries when so few registrants will end up being donors? Abstracts: abstract_id: PUBMED:17545495 Should we promote organ donor registries when so few registrants will end up being donors? Background: A major obstacle facing efforts to register organ donors is that the likelihood that any given registrant will die in such a way as to render his or her organs suitable for donation is extremely low. Such policies make sense only if the resources used to sign up an additional donor are proportional to the expected benefits. Methods: Using data on historical donation patterns, estimates of the potential donor supply, and an estimate of the monetary value of an organ donor, the authors calculate the average value to society of a registrant as a function of age at registration. Result: Under a "first-person consent" regime, the value of a registrant ages 18 to 34 years is $1900. The value of registering individuals who have not already registered is even higher because these persons are more likely to become donors. If donor families have the right of refusal, the value of a registrant is substantially less, around $840. Conclusion: Given that most donor registries are fairly limited operations, piggybacking on drivers' license registration and renewal administration, results suggest that registries are cost-effective. Of course, a complete analysis awaits concrete data on the costs of operating registries and attracting new registrants. abstract_id: PUBMED:26516670 Promoting Organ Donor Registries Through Public Education: What Is the Cost of Securing Organ Donors? Background: Transplant medicine's impact on America's public health is seriously limited by acute shortage of transplantable organs. Consequently, the United Sates has witnessed considerable investment in the promotion of organ donor registries. Although there is no evidence to support that donor registry promotion alleviates organ shortage, this belief continues to drive investments into registry promotion. In this study, return on investment in donor registry promotion was examined using cost-outcomes analysis. Methods: Cost of promoting the donor registry was estimated in US dollars whereas the outcome was measured as the number of individuals who join the registry (registrants) and their value in terms of organ donors. The study was conducted from the perspective of a regional Organ Procurement Organization (OPO). Costs were directly obtained from the OPO. The number of new registrants was obtained from the OPO and the departments of motor vehicles that maintain the donor registry. The value of registrants in terms of organ donors was computed based on a registrant's age-dependent risk of dying and age-dependent probability of becoming an organ donor. Results: Six thousand seven hundred eight individuals joined the organ donor registry (95% confidence interval [95% CI], 5429-7956) at a cost of $455 per registrant (95% CI, US $383-US $562). These individuals result in 4.2 present-day donors (95% CI, 2.5-6.6) at a cost of US $726 000 (95% CI, US $462000-US $1.2 million). Conclusions: Because the cost per registrant and cost per donor is less than society's willingness to pay, donor registry promotion offers positive return on investment. Investment in registry promotion should at the minimum be maintained at current levels. abstract_id: PUBMED:27388535 Donor registries, first-person consent legislation, and the supply of deceased organ donors. In this paper, we exploit the varied timing in state adoption of organ donor registries and first-person consent (FPC) legislation to examine corresponding changes in the supply of deceased organ donors. Results indicate that the establishment of a state organ donor registry leads to an increase in donation rates of approximately 8%, while the adoption of FPC legislation has no effect on the supply of organ donors. These results reinforce the need to encourage individuals to communicate their donation preferences, either explicitly via a registry or by discussing them with family. abstract_id: PUBMED:26992655 Patients on state organ donor registries receive similar levels of intensive care compared to those who are not: an opportunity to increase public intent to donate. The intent to donate organs is affected by the public perception that patients on state registries receive less aggressive life-saving care in order to allow organ donation to proceed. However, the association between first person authorization to donate organs and the actual care received by eventual organ donors in hospitals is unknown. From August 2010 to April 2011, all eight organ procurement organizations in United Network for Organ Sharing Region 5 prospectively recorded demographic data and organ utilization rates on all donors after neurologic determination of death (DNDDs). Critical care and physiologic parameters were also recorded at referral for imminent neurologic death and prior to authorization for donation to reflect the aggressiveness of provided care. There were 586 DNDDs and 23% were on a state registry. Compared to non-registered DNDDs, those on state registries were older but were noted to have similar critical care parameters at both referral and authorization. Furthermore, there was no significant difference in organs procured per donor or organs transplanted per donor between registered and non-registered DNDDs. Thus, DNDDs who are on state donor registries receive similar levels of intensive care compared to non-registered donors. The association noted in this study may therefore help to dispel a common misperception that decreases the intent to donate. abstract_id: PUBMED:22507140 Worldwide variability in deceased organ donation registries. The variability in deceased organ donation registries worldwide has received little attention. We considered all operating registries, where individual wishes about organ donation were recorded in a computerized database. We included registries which recorded an individual's decision to be a donor (donor registry), and registries which only recorded an individual's objection (non-donor registry). We collected information on 15 characteristics including history, design, use and number of registrants for 27 registries (68%). Most registries are nationally operated and government-owned. Registrations in five nations expire and require renewal. Some registries provide the option to make specific organ selections in the donation decision. Just over half of donor registries provide legally binding authorization to donation. In all national donor registries, except one, the proportion of adults (15+) registered is modest (<40%). These proportions can be even lower when only affirmative decisions are considered. One nation provides priority status on the transplant waiting list as an incentive to affirmative registration, while another nation makes registering a donation decision mandatory to obtain a driver's license. Registered objections in non-donor registries are rare (<0.5%). The variation in organ donor registries worldwide necessitates public discourse and quality improvement initiatives, to identify and support leading practices in registry use. abstract_id: PUBMED:23461946 Evaluating deceased donor registries: identifying predictive factors of donor designation. The objectives of this study were to evaluate and compare the performance of the deceased donor registries of the 50 states and the District of Columbia and to identify possible predictive factors of donor designation. Data were collected retrospectively by Donate Life America using a questionnaire sent to Donor Designation Collaborative state teams between 2007 and 2010. By the end of 2010, there were 94,669,081 designated donors nationwide. This accounted for 39.8 per cent of the U.S. population aged 18 years and over. The number of designated organ donors and registry-authorized recovered donors increased each year; however, the total number of recovered donors in 2010 was the lowest since 2004. Donor designation rate was significantly higher when license applicants were verbally questioned at the Department of Motor Vehicles (DMV) regarding their willingness to register as a donor and when DMV applicants were not given an option on DMV application forms to contribute money to support organ donation, compared with not being questioned verbally, and being offered an option to contribute money. State registries continue to increase the total number of designated organ donors; however, the current availability of organs remains insufficient to meet the demand. These data suggest that DMV applicants who are approached verbally regarding their willingness to register as a donor and not given an option on DMV application forms to contribute money to support organ donation might be more likely to designate themselves to be a donor. abstract_id: PUBMED:18549448 Evidence for a need to mandate kidney transplant living donor registries. Kidney disease is a global public health problem of growing proportions. Currently the best treatment for end-stage renal failure is transplantation. Living organ donation remains a complex ethical, moral and medical issue. It is based on a premise that kidney donation is associated with short-term minimal risks to harm the donor, and is outweighed by the definite advantages to the recipient. A growing number of patients with end-stage renal disease and shortage of kidney donors poses a pressing need to expand the criteria needed to accept kidney donors. The current donor registries are structured and are driven to expand donor pool. As living kidney donation is not without risks, more attention should be given to protect the donor health. After kidney donation, mild to moderate renal insufficiency may occur. Renal insufficiency, even mild, is associated with increased risks of hypertension, proteinuria and cardiovascular morbidity. We, therefore, foresee a need to mandate the establishment of renal transplant donor registries at all transplanting programs as a prerequisite to protect the long-term well being of kidney donors. These registries can collect the database necessary to develop standards of practice and guidelines for future kidney donation. abstract_id: PUBMED:24712385 Cancers among US organ donors: a comparison of transplant and cancer registry diagnoses. Transmission of cancer is a life-threatening complication of transplantation. Monitoring transplantation practice requires complete recording of donor cancers. The US Scientific Registry of Transplant Recipients (SRTR) captures cancers in deceased donors (beginning in 1994) and living donors (2004). We linked the SRTR (52,599 donors, 110,762 transplants) with state cancer registries. Cancer registries identified cancers in 519 donors: 373 deceased donors (0.9%) and 146 living donors (1.2%). Among deceased donors, 50.7% of cancers were brain tumors. Among living donors, 54.0% were diagnosed after donation; most were cancers common in the general population (e.g. breast, prostate). There were 1063 deceased donors with cancer diagnosed in the SRTR or cancer registry, and the SRTR lacked a cancer diagnosis for 107 (10.1%) of these. There were 103 living donors with cancer before or at donation, diagnosed in the SRTR or cancer registry, and the SRTR did not have a cancer diagnosis for 43 (41.7%) of these. The SRTR does not record cancers after donation in living donors and so missed 81 cancers documented in cancer registries. In conclusion, donor cancers are uncommon, but lack of documentation of some cases highlights a need for improved ascertainment and reporting by organ procurement organizations and transplant programs. abstract_id: PUBMED:34728814 Undocumented donors: how driver's licenses can help solve the U.S. organ shortage. In 2013, Illinois enacted a new law (SB 957) to allow undocumented motorists to acquire Temporary Visitor Driver's Licenses (TVDLs). We explored the impact of this legislation on organ donor registration in the state. Using Freedom of Information Act requests, we obtained the Illinois TVDL and general adult driver's license applicant organ donation statistics for the last 3 years from the Illinois Secretary of State. We found that between 2017 and 2019, TVDLs directly resulted in 91,720 newly registered organ donors. This group registered as organ donors at rates significantly higher (p < 0.0001) than the general population. In the 3 years studied, only 7.3% of general driver's license registrants became new organ donor registrants, while TVDL drivers signed up at an average rate of 44.9%. TVDLs resulted in a significant increase in registered organ donors. Similar policies implemented nationwide could increase donor registrations substantially. In addition, this finding among a population largely of Latin American origin may suggest strategies for future organ donor registration efforts globally. abstract_id: PUBMED:25936672 Living kidney donors and ESRD. There are more than 325 living kidney donors who have developed end-stage renal disease and have been listed on the Organ Procurement and Transplantation Network (OPTN)/United Network for Organ Sharing (UNOS) deceased donor kidney wait list. The OPTN/UNOS database records where these kidney donors are listed and, if they donated after April 1994, where that donation occurred. These 2 locations are often not the same. In this commentary, I examine whether a national living donor registry should be created and whether transplantation centers should be notified when one of their living kidney donors develops end-stage renal disease. I consider and refute 5 potential objections to center notification. I explain that transplantation centers should look back at these cases and input data into a registry to attempt to identify patterns that could improve donor evaluation protocols. Creating a registry and mining the information it contains is, in my view, our moral and professional responsibility to future patients and the transplantation endeavor. As individuals and as a community, we need to acknowledge the many unknown risks of living kidney donation and take responsibility for identifying these risks. We then must share information about these risks, educate prospective donors about them, and attempt to minimize them. Answer: The promotion of organ donor registries is a subject of debate, particularly given the low likelihood that any given registrant will die in a manner that allows for organ donation. However, several studies suggest that promoting organ donor registries can be cost-effective and beneficial. One study found that under a "first-person consent" regime, the value of a registrant aged 18 to 34 years is $1900, and the value of registering individuals who have not already registered is even higher because these persons are more likely to become donors. If donor families have the right of refusal, the value of a registrant is substantially less, around $840. Despite this, the study suggests that registries are cost-effective, especially since most donor registries are fairly limited operations that often piggyback on drivers' license registration and renewal administration (PUBMED:17545495). Another study examined the return on investment in donor registry promotion and found that the cost per registrant and cost per donor is less than society's willingness to pay, indicating that donor registry promotion offers a positive return on investment. Therefore, it is recommended that investment in registry promotion should at least be maintained at current levels (PUBMED:26516670). Furthermore, evidence shows that the establishment of a state organ donor registry leads to an increase in donation rates of approximately 8%, reinforcing the need to encourage individuals to communicate their donation preferences (PUBMED:27388535). Additionally, patients on state organ donor registries receive similar levels of intensive care compared to those who are not, which may help dispel a common misperception that decreases the intent to donate (PUBMED:26992655). Variability in deceased organ donation registries worldwide and the need for public discourse and quality improvement initiatives to identify and support leading practices in registry use have also been highlighted (PUBMED:22507140). Moreover, certain strategies, such as verbal questioning at the Department of Motor Vehicles (DMV) regarding willingness to register as a donor, have been found to significantly increase donor designation rates (PUBMED:23461946). In conclusion, despite the low probability that each registrant will become an organ donor, the promotion of organ donor registries appears to be justified based on their cost-effectiveness, the potential to increase donation rates, and the overall positive impact on the supply of transplantable organs.
Instruction: Is coronary flow reserve in response to papaverine really normal in syndrome X? Abstracts: abstract_id: PUBMED:8181123 Is coronary flow reserve in response to papaverine really normal in syndrome X? Background: An impaired coronary flow reserve in syndrome X has been demonstrated by many studies. Recently, however, a normal coronary flow reserve in response to papaverine was reported, but the number of patients in these studies was small. The aim of this study was to investigate whether coronary flow reserve in response to intracoronary papaverine is really impaired in syndrome X. Methods And Results: We investigated 53 syndrome X patients (typical angina, a positive exercise test, and completely normal coronary arteries on angiography) and 26 heart transplant patients with normal coronary arteries (control group). All antianginal medications were stopped 48 hours before the study. A 3.6F intracoronary Doppler catheter was positioned in the proximal left anterior descending coronary artery and was connected to a Millar velocimeter. The coronary blood flow velocity at rest and in response to a hyperemic dose of papaverine was measured. Coronary flow reserve was defined as the ratio of hyperemic coronary blood flow velocity in response to papaverine and resting coronary blood flow velocity. The coronary flow reserve (mean +/- SD) in the syndrome X group was 2.72 +/- 1.39. The coronary flow reserve in the control group was significantly higher at 5.22 +/- 1.26 (P &lt; .01). In both groups there was no significant difference in the heart rate or the mean arterial pressure during the study. Conclusions: Our study shows that coronary flow reserve in response to intracoronary papaverine is impaired in syndrome X patients. abstract_id: PUBMED:8280515 Coronary flow reserve in patients with chest pain and normal coronary arteries. Background: Many studies have shown that coronary flow reserve is reduced in patients with chest pain and angiographically normal coronary arteries. The methods used to assess coronary blood flow have varied, but in nearly all reports dipyridamole has been used to bring about vasodilatation. This study was designed to assess whether the apparent impairment of coronary flow reserve seen with dipyridamole could be reproduced with either papaverine or adenosine, which induce maximum coronary blood flow by different mechanisms. Methods: 25 patients with chest pain and angiographically normal coronary arteries were studied with an intracoronary Doppler flow probe and quantitative angiography to determine epicardial coronary artery area, coronary blood flow velocity, coronary flow reserve, and coronary vascular resistance index (CVRI, the ratio of resistance after intervention to basal resistance). All patients received papaverine 8 mg. Eight patients with positive exercise tests received intracoronary papaverine (8 and 10 mg), intracoronary adenosine (6, 20, 60 micrograms), and high-dose intravenous dipyridamole (0.84 mg/kg). Results: The velocity ratio (peak after intervention: baseline) (mean (SEM)) after 8 mg papaverine was 3.3 (0.2) (n = 25) and the coronary flow reserve was 4.1 (0.3) (n = 25). There were no differences between patients with a positive (n = 16) or negative (n = 9) exercise test. In eight patients coronary flow reserve was measured after increasing doses of papaverine, adenosine, and dipyridamole. Coronary flow reserve was 4.5 (0.3) with papaverine, 4.8 (0.3) with adenosine, and 3.5 (0.4) with dipyridamole (p = 0.08 v papaverine and adenosine). CVRI was 0.22 (0.01) with papaverine, 0.21 (0.02) with adenosine, and 0.29 (0.03) with dipyridamole (p &lt; 0.05 v papaverine, p = 0.09 v adenosine). Conclusions: These results indicate that measurement of coronary flow reserve and CVRI in patients with chest pain and normal coronary arteries depends on the pharmacological stimulus. Normal values were obtained with papaverine in all patients, irrespective of the exercise test response. In patients with a positive exercise test significantly lower values were obtained with dipyridamole than with papaverine, or adenosine. The reported impairment of coronary flow reserve in patients with angina and normal coronary arteries may reflect the variability in response to different pharmacological agents. The mechanism underlying this variability is unknown, but may involve an abnormality of adenosine metabolism in the myocardium. abstract_id: PUBMED:8871460 Coronary flow reserve and oesophageal dysfunction in syndrome X. The relative prevalence of abnormalities of coronary flow reserve and oesophageal function was ascertained in 32 syndrome X patients with typical angina chest pain, a positive exercise test, and normal coronary arteries. Coronary flow reserve in response to a hyperaemic dose of papaverine was measured using an intracoronary Doppler catheter positioned in the left anterior descending coronary artery. An abnormal coronary flow reserve was defined as being &lt; 3.0. Patients were investigated for oesophageal dysfunction by manometry and 24-hour pH monitoring. Thirteen patients had an impaired coronary flow reserve (group 1) and 19 patients had a normal flow reserve (group 2). Eight of the 13 group 1 patients (62%) and 13 of the 19 group 2 patients (68%, p = NS) had evidence of oesophageal dysfunction on either manometry or pH studies. Therefore, a total of 26 (81%) syndrome X patients had either an abnormality of coronary flow reserve or oesophageal dysfunction suggesting that chest pain in these patients may be due to myocardial ischaemia or oesophageal dysfunction, thus confirming the heterogeneous nature of this syndrome. The prevalence of oesophageal abnormalities was independent of any abnormalities of coronary flow reserve. abstract_id: PUBMED:7866906 Assessment of coronary flow reserve in patients with angina pectoris Coronary flow reserve was assessed by using a computer-assisted method in 26 normal controls and 45 patients with significant coronary artery stenosis ( &gt; or = 50% luminal narrowing) and 16 patients with syndrome X. After intracoronary administration of papaverine, the diameter, cross sectional area, blood velocity, flow volume and reserve capacity of both left anterior descending (LAD) and right coronary arteries (RCA) were lower in patients with significant coronary stenosis than in controls. Despite similar changes in the diameter and cross sectional area, the velocity, flow volume and reserve capacity of these two vessels were also lower in patients with syndrome X than in controls. We conclude that coronary flow reserve is reduced in patients with coronary artery stenosis or syndrome X due to different levels of coronary artery abnormalities. abstract_id: PUBMED:9208495 Abnormal coronary flow reserve in patients with angina pectoris and hypertensive left ventricular hypertrophy. Objective: To assess coronary flow reserve using a computer-assisted method with intracoronary papaverine in patients with angina pectoris. Patients And Methods: Coronary arterial diameter, cross-sectional area and blood flow velocity were measured during coronary arteriography before and after intracoronary papaverine in 26 control subjects, 45 patients with significant coronary artery disease (&gt; 50% luminal narrowing), 16 patients with syndrome X and 14 patients with hypertension and left ventricular hypertrophy (LVH). Results: After intracoronary administration of papaverine, proximal diameter, cross-sectional area, blood velocity, flow volume and reserve capacity of both left anterior descending and right coronary arteries were lower in patients with significant coronary stenosis than in the controls. Despite similar changes in diameter and cross-sectional area, the blood velocity, flow volume and reserve capacity of the two vessels were also lower in patients with syndrome X than in the controls. In 14 patients with hypertension and LVH, although the blood velocity and flow volume were augmented for the two arteries and did not differ from those in the controls, the flow reserve was reduced because of higher baseline blood velocity and flow volume. Conclusions: Coronary flow reserve is reduced in patients with coronary artery disease, syndrome X or hypertensive LVH, which may be related to abnormal changes at different levels of the coronary vasculature or resting flow states. abstract_id: PUBMED:8285370 Coronary flow reserve measurements with a new Judkins-style Doppler angiographic catheter. The authors assessed whether measurements obtained by Judkins-style Doppler catheters are comparable to those achieved with the intracoronary Doppler technique in 42 patients with normal coronary arteries on angiography (19 syndrome X and 23 heart transplant patients). Resting coronary flow velocity and response to a hyperemic intracoronary dose of papaverine was measured with a Judkins-style, 8F Doppler-tipped catheter positioned in the left coronary ostium and a 3.6F intracoronary Doppler catheter positioned in the proximal left anterior descending artery. Mean coronary flow velocity at rest was significantly higher with the Judkins Doppler (10.1 +/- 4.6 vs 6.3 +/- 4.5 cm/sec, p &lt; 0.01). The mean coronary flow velocity at peak hyperemia was also significantly higher with the Judkins Doppler (33.7 +/- 14.1 vs 19.7 +/- 11.5 cm/sec, p &lt; 0.01). Coronary flow reserve was 3.57 +/- 1.3 with the Judkins Doppler and 3.47 +/- 1.2 with the intracoronary Doppler (r = 0.85). A second study was performed in 14 heart transplant patients with the intracoronary Doppler positioned in the left main coronary artery. The resting and hyperemic flow velocities were again higher with the Judkins Doppler but the differences were not statistically significant. There was again a strong correlation between the Doppler catheters for coronary flow reserve measurements. The Judkins-style Doppler technique appears to be a quick, safe, and accurate alternative to the intracoronary Doppler technique. abstract_id: PUBMED:10506773 Paradoxical increase in coronary flow velocity after termination of acetylcholine infusion is a marker of the impaired vasodilatation at coronary microvessels in patients with angina and normal coronary arteries. Some patients with anginal chest pain and normal coronary arteries exhibit a paradoxical increase in coronary flow velocity after termination of acetylcholine infusion. The aim of this study was to investigate whether this paradoxical increase in the flow velocity is associated with the impaired microvascular dilatation to pharmacological stimuli. We infused graded doses of endothelium-dependent vasodilator acetylcholine (10, 50, and 100 microg/min for 2 min) and the endothelium-independent vasodilators, papaverine and nitroglycerin, into the left coronary artery in 15 patients with chest pain and normal coronary arteries. Coronary blood flow responses were evaluated by Doppler guidewire and quantitative angiography in the proximal left anterior descending coronary artery. Seven patients showed a paradoxical increase in coronary flow velocity after termination of acetylcholine infusion with the highest dose (100 microg/min), whereas eight showed no change or a decrease in the flow velocity. This was also observed in 50 microg/min of acetylcholine infusion. In patients with a paradoxical increase in the flow velocity, coronary flow velocity responses to acetylcholine during three graded doses were significantly blunted in comparison to those without a paradoxical increase. In contrast, coronary flow reserve to papaverine and nitroglycerin was similar in the two groups. Epicardial artery vasoreactivity to acetylcholine did not differ between the two groups. Papaverine and nitroglycerin also caused a similar degree of coronary dilatation in both groups. These results suggest that the paradoxical increase in coronary flow velocity observed immediately after termination of the intracoronary acetylcholine infusion is a marker of impaired nitric oxide-dependent dilatation of the coronary microvessels in patients with normal coronary arteries. Cathet. Cardiovasc. Intervent. 48:170-177, 1999. abstract_id: PUBMED:8781827 Value of intracoronary ultrasound and Doppler in the differentiation of angiographically normal coronary arteries: a prospective study in patients with angina pectoris. Background: A substantial proportion of patients undergoing heart catheterization for suspected coronary artery disease have normal angiograms. Coronary morphology and blood flow velocity can be assessed very accurately with intracoronary ultrasound and Doppler. The purpose of this study was to use both methods to classify further patients with suspected coronary artery disease but with coronary angiograms adjudged normal at the time. Methods And Results: In forty-four patients with suspected coronary artery disease and normal coronary angiograms, intracoronary ultrasound and intracoronary Doppler were performed in the left anterior descending and left main coronary arteries. Coronary flow reserve was obtained by calculating the ratio of the maximal coronary flow mean velocity after the intracoronary administration of 10 mg papaverine to the coronary flow mean velocity at rest. Of 44 patients, 16 (36%) (group I) were found to have normal coronary morphology by intracoronary ultrasound and normal (&gt; 3.0) coronary flow reserve (5.3 +/- 1.8). In seven patients (16%) (group II) there were normal intracoronary ultrasonic findings but a reduced coronary flow reserve (2.1 +/- 0.4). Plaque formation was found in a total of 21 (48%) of the 44 patients; mean plaque sizes were 3.6 +/- 1.6 mm2 for those in group III (normal coronary flow reserve) and 5.0 +/- 2.3 mm2 for those in group IV (reduced coronary flow reserve). Vessel area in both of these groups (16.3 +/- 8.0 mm2 and 19.2 +/- 6.1 mm2) was significantly larger than that of group I (14.6 +/- 5.7 mm2, P &lt; 0.01). Plaque calcification was found in 25% of those in group III and 44% of those in group IV. Thus, only 36% of the patients with normal angiograms were true normal, 48% exhibited early stage of coronary atherosclerosis, and the other 16% might be considered as syndrome X. Conclusion: Intracoronary ultrasound and Doppler can be used to differentiate further heart disease in patients with normal coronary angiograms. Only a minority were true normal. Early signs of atherosclerosis cannot be detected by coronary angiography. This may have important therapeutic and prognostic implications. abstract_id: PUBMED:8487824 Evidence of impaired endothelium-dependent coronary vasodilatation in patients with angina pectoris and normal coronary angiograms. Background: A group of patients has been described who have chest pain resembling angina and positive exercise tests, but normal coronary angiograms and no coronary-artery spasm. This constellation of features has sometimes been called syndrome X or microvascular angina. We attempted to determine whether endothelium-dependent vasodilatation of the coronary vasculature was impaired in patients with this syndrome. Methods: We infused the endothelium-dependent vasodilator acetylcholine and the endothelium-independent vasodilators papaverine and isosorbide dinitrate into the left coronary artery of 9 patients and 10 control subjects. The diameter of the left anterior descending coronary artery was assessed by quantitative angiography, and changes in coronary blood flow were estimated with the use of an intracoronary Doppler catheter. Results: Acetylcholine, given in doses of 1, 3, 10, and 30 micrograms per minute, increased coronary blood flow in a dose-dependent manner in both groups. However, the mean (+/- SD) acetylcholine-induced increases in coronary blood flow were significantly less (P &lt; 0.001) in the patient (8 +/- 14, 37 +/- 37, 59 +/- 67, and 103 +/- 77 percent, respectively) than in the controls (62 +/- 52, 186 +/- 93, 341 +/- 128, and 345 +/- 78 percent, respectively). The changes in coronary blood flow in response to 2 mg of isosorbide dinitrate (236 +/- 66 percent vs. 280 +/- 56 percent) and 10 mg of papaverine (366 +/- 168 percent vs. 411 +/- 92 percent) did not differ significantly between the patients and controls. The administration of papaverine resulted in myocardial lactate production in the patients but not in the controls. The three lower doses of acetylcholine caused a similar degree of dilatation of the left anterior descending coronary artery in the two groups, and the highest dose caused a similar degree of constriction in the two groups. Isosorbide dinitrate and papaverine caused a similar degree of dilatation in both groups. Conclusions: These findings suggest that endothelium-dependent dilatation of the resistance coronary arteries is defective in patients with anginal chest pain and normal coronary arteries, which may contribute to the altered regulation of myocardial perfusion in these patients. abstract_id: PUBMED:7985834 Measurements of human coronary vascular impedance. The aim of this study was to develop a method of measuring human coronary circulation impedance in a clinical setting. The authors measured coronary flow reserve (CFR) in 27 patients with chest pain and normal coronary arteries. A Judkins-style, 8F Doppler-tipped angiographic catheter was positioned in the left coronary ostium. Resting coronary flow velocity (RFV) and response to a hyperemic 12 mg intracoronary dose of papaverine (PFV) were measured. The signals were recorded by a recorder connected to a microprocessor with analogue-to-digital converter and a maths coprocessor. Using this the authors could obtain values for impedance at RFV (IR) and at PFV (IP). The CFR was defined as the ratio: PFV/RFV. An impedance index (II) was obtained as the ratio of coronary vascular impedance at peak hyperemia to the impedance at rest. The CFR was 3.2 +/- 1.2 and the II was 0.33 +/- 0.11. There was a strong inverse correlation between the CFR and the II (r = -0.9). The authors conclude that this new approach may allow a further insight into the coronary pathophysiology and may become useful in clinical cardiology, eg, in the assessment of heart transplant and Syndrome X patients. Answer: Coronary flow reserve (CFR) in response to papaverine is not normal in patients with syndrome X. Several studies have investigated this issue and found that patients with syndrome X, characterized by typical angina, a positive exercise test, and normal coronary arteries on angiography, have an impaired CFR when challenged with papaverine. One study with 53 syndrome X patients found that the CFR in response to intracoronary papaverine was significantly lower (2.72 +/- 1.39) compared to a control group of heart transplant patients with normal coronary arteries (5.22 +/- 1.26, P < .01), indicating that CFR is indeed impaired in syndrome X patients (PUBMED:8181123). Another study that included 25 patients with chest pain and normal coronary arteries reported that while normal values of CFR were obtained with papaverine in all patients, irrespective of the exercise test response, significantly lower values were obtained with dipyridamole than with papaverine or adenosine in patients with a positive exercise test. This suggests that the impairment of CFR in patients with angina and normal coronary arteries may reflect variability in response to different pharmacological agents (PUBMED:8280515). Furthermore, a study that ascertained the relative prevalence of abnormalities of CFR and esophageal function in 32 syndrome X patients found that 13 patients had an impaired CFR (defined as being < 3.0) in response to papaverine. This supports the notion that chest pain in these patients may be due to myocardial ischemia or esophageal dysfunction, confirming the heterogeneous nature of this syndrome (PUBMED:8871460). Additional research assessing CFR in patients with angina pectoris found that CFR is reduced in patients with significant coronary artery stenosis and in patients with syndrome X, suggesting that different levels of coronary artery abnormalities contribute to the reduced CFR in these conditions (PUBMED:7866906, PUBMED:9208495). In conclusion, the evidence indicates that CFR in response to papaverine is not normal in syndrome X, pointing to an impairment in coronary microvascular function in these patients.
Instruction: Is the neutrophil-lymphocyte ratio an indicator of progression in patients with benign prostatic hyperplasia? Abstracts: abstract_id: PUBMED:25124628 Is the neutrophil-lymphocyte ratio an indicator of progression in patients with benign prostatic hyperplasia? Purpose: The aim of this study was to evaluate inflammation parameters and assess the utility of the neutrophil- lymphocyte ratio (NLR) as a simple and readily available predictor for clinical disease activity in patients with nenign prostate hyperplasia BPH. We also aimed to investigate the relationship between inflammatory parameters with α-blocker therapy response, and evaluate the potential association between NLR and the progression of benign prostatic hyperplasia (BPH). Materials And Methods: We examined 320 consecutive patients (July 2013-December 2013) admitted to our outpatient clinic with symptoms of the lower urinary tract at Bozok University. The mean age was 60 (range, 51-75) years. Complete blood count (CBC), prostate-specific antigen (PSA), erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) were assessed. Correlations between PSA, CRP, ESR, prostate volume, International Prostate Symptom Score (IPPS), maximum urinary flow rate (Qmax), and NLR were assessed statistically. Patients were divided into two groups: high and low risk of progression. Results: NLR was positively correlated with IPSS (p=0.001, r=0.265), PSA (p=0.001, r=0.194), and negatively correlated with Qmax (p&lt;0.001, r=-0.236). High-risk patients a had a higher NLR compared with low-risk patients, based on IPSS (p&lt;0.001), PSA (p=0.013), and Qmax (p&lt;0.001); however, there were no significant differences between the groups in terms of age (p&gt;0.05), and prostate volume (p&gt;0.05). Conclusions: NLR can predict BPH progression. We propose that increased inflammation is negatively associated with clinical status in BPH patients and suggest that NLR can give information along with LUTS severity which may be used as a readikly accessible marker for patient follow-up. abstract_id: PUBMED:27464069 Association between Benign Prostatic Hyperplasia and Neutrophil-Lymphocyte Ratio, an Indicator of Inflammation and Metabolic Syndrome. Aim: There is a large body of evidence of clinical studies regarding the relation between metabolic syndrome (MetS) and benign prostatic hyperplasia (BPH)-related lower urinary tract symptoms (LUTS) in men. A possible link between inflammation and these 2 clinical entities has also been proposed. Here, we aimed at evaluating the role of neutrophil-lymphocyte ratio (NLR), an indicator of inflammation and MetS, in the pathogenesis of LUTS and other BPH-related parameters. Method: We evaluated 244 patients with benign prostate hyperplasia admitted to our outpatient clinic between January 2014 and June 2015. NLR was calculated from the peripheral blood sample for all patients. Patients' anthropometric characteristics, serum lipid levels, waist circumferences, total prostate-specific antigen (PSA) and testosterone values, prostate volumes, max values and International Prostate Symptom Score (IPSS) were recorded. All statistical analyses were conducted by SPSS version 17.0 package program, and p value &lt;0.05 was chosen as the criterion for statistical significance. Results: There was statistical significance between NLR and patients with severe IPSS score (&gt;20) and over age 60 years (p = 0.02). Patients younger than 60 years had statistical significance between NLR and PSA &lt;2.5 ng/dl (p = 0.004). When body mass index was selected, there was statistical significance between prostate volume &lt;35 ml (p = 0.009) and age &gt;60 years (p = 0.028). If 60 years of age was selected as cut-off value, there is statistical significance between age and severe IPSS, and in terms of mild erectile dysfunction, PSA &gt;2.5 ng/dl, and prostate volume &gt;35 ml. Conclusion: We found positive correlation between NLR and severe symptoms and progression of BPH. In this manner, anti-inflammatory therapy could contribute to the medical treatment of BPH. Higher NLR may be a candidate marker for severity of symptoms in BPH patients. abstract_id: PUBMED:35047413 The Values of Systemic Immune-Inflammation Index and Neutrophil-Lymphocyte Ratio in the Localized Prostate Cancer and Benign Prostate Hyperplasia: A Retrospective Clinical Study. Purpose: To evaluate the diagnostic values of systemic immune-inflammation index (SII) and neutrophil-lymphocyte ratio (NLR) in patients with localized prostate cancer (PCa). Methods: Between January 2014 and December 2019, 117 patients with benign prostate hyperplasia (BPH) and 278 patients with localized PCa who underwent radical prostatectomy (RP) were included in this study. The inflammatory markers including SII, NLR, platelet-lymphocyte ratio (PLR), lymphocyte-monocyte ratio (LMR), lymphocyte ratio (LR), neutrophil ratio (NR), mean platelet volume (MPV), and red cell distribution (RDW) of these two groups were examined and analyzed. ROC curve analysis was performed to assess the discriminative ability of inflammatory markers and their combination with tPSA for PCa. The binary logistic regression model was used to evaluate the association between significant inflammatory markers and risk of PCa. Results: The pathological results from RP specimen comprised 72 (25.90%) patients with pT1, 168 (60.43%) patients with pT2, and 38 (13.67%) patients with pT3. According to Student's t test, patients with PCa had higher NLR (p = 0.034), SII (p = 0.008), and NR (p = 0.004), and lower LR (p = 0.025), MPV (p = 0.003), and TPV (p = 0.022) compared with patients with BPH; the distribution of age, PLR, LMR, RDW, f/t PSA ratio, and BMI did not show any significant differences. The AUC for NLR, SII, NR, and tPSA was 0.697 (p = 0.015), 0.719 (p &lt; 0.001), 0.647 (p = 0.009), and 0.708 (p &lt; 0.001), with threshold values of 1.6, 471.86, 65.15%, and 12.89 ng/ml, respectively. Patients were divided into two groups according to the threshold values, respectively. By using the multivariable logistic regression models, NLR ≥ 1.6 (OR, 2.731; 95% CI, 0.937-7.961, p = 0.042), SII ≥ 471.86 (OR, 1.274; 95% CI 0.473-3.433; p = 0.033), and PSA ≥ 12.89 ng/ml (OR, 1.443; 95% CI, 0.628-3.944; p = 0.014) were independent risk factors associated with PCa. The AUC for combination of NLR, SII, and NR with tPSA was 0.705 (p &lt; 0.001), 0.725 (p &lt; 0.001), and 0.704 (p &lt; 0.001), respectively. Conclusion: This study demonstrated that SII, NLR, and NR were all independent risk factors of PCa. These factors alone could provide better screen methods for PCa before biopsy. In addition, SII is a more powerful tool among these three inflammatory markers associated with PCa. Besides, combination of SII and NLR with tPSA had not much advantage compared with themselves alone. abstract_id: PUBMED:31049087 Diagnostic Value of Platelet-To-Lymphocyte Ratio in Prostate Cancer. Background: Previous studies demonstrated the promising value of platelet-to-lymphocyte (PLR) in prostate cancer. Aim: This study was conducted to evaluate its pre-biopsy values in predicting prostate cancer. Methods: We included all benign prostatic hyperplasia (BPH) and prostate cancer (PCa) patients who underwent a prostate biopsy in Adam Malik Hospital between August 11th 2011 and August 31st 2015. The relationship between pre-biopsy variables which could be affecting the percentage of prostate cancer risk was evaluated, including age, prostate-specific antigen (PSA) level, and prostate volume (EPV). The PLR was calculated from the ratio of related platelets with their absolute lymphocyte counts. The values then analysed to evaluate their associations with the diagnosis of BPH and PCa. Results: As many as 298 patients consisted of 126 (42.3%) BPH and 172 PCa (57.7%) patients are included in this study. Mean age for both groups are 66.36 ± 7.53 and 67.99 ± 7.48 years old (p = 0.64), respectively. There are statistically significant differences noted from PSA (19.28 ± 27.11 vs 40.19 ± 49.39), EPV (49.39 ± 23.51 vs 58.10 ± 30.54), PLR (160.27 ± 98.96 vs 169.55 ± 78.07), and NLR (3.57 ± 3.23 vs 4.22 ± 2.59) features of both groups (p &lt; 0.05). The AUC of PLR is 57.9% with a sensitivity of 56.4% and specificity of 55.6% in the cut-off point of 143 (p = 0.02). Besides, the NLR cut-off point of 3.08 gives 62.8% AUC with 64.5% sensitivity and 63.5% specificity. We asked for permission from the preceding authors of Indonesian Prostate Cancer Risk Calculator (IPCRC) and calculated its value from 98 randomised patients consist of 45 (45.92%) BPH and 53 (54.08%) PCa. We found a comparable value between PLR/NLR with IPCRC in predicting prostate cancer (AUC of 67.6%, 75.3%, and 68.4%, respectively) with a statistically significant difference of all value in both groups (p &lt; 0.05). Conclusions: PLR gives promising value in predicting prostate cancer in suspected patients. We suggest a further prospective study to validate its diagnostic values so it can be used as applicable routine calculation. abstract_id: PUBMED:33960239 Can we use neutrophil to lymphocyte ratio in the diagnosis and prediction of extracapsular extension in localized prostate cancer? Purpose: We investigated role of neutrophil-to-lymphocyte ratio (NLR) in the diagnosis and prediction of extracapsular extension (ECE) in clinically localized prostate cancer (PCa). Materials And Methods: A total of 396 patients with clinically localized PCa who underwent open radical retropubic prostatectomy (RRP), and 260 patients with benign prostatic hyperplasia (BPH) who underwent suprapubic prostatectomy were included in the study. Preoperative NLR, prostate specific antigen (PSA), prostate specific antigen density (PSAD), free PSA, prostate volume (PV), free/total PSA (f/t PSA) in both groups, and relation of NLR with PSA, Gleason Score (GS), and pathologic stage in PCa group were investigated. Records of patients were analyzed retrospectively. Results: NLR, free PSA, f/t PSA, and PV were statistically higher in BHP, and PSAD was higher in PCa group. In PCa group, NLR was found to be higher in patients with PSA &gt;10 ng/ml compared to those with less than ⩽10 ng/ml. NLR increases as the preoperative GS increases, and it was higher in pT3 patients than pT2 patients. NLR was statistically higher in those patients with positive lymph nodes than those without after RRP (p = 0.029). Conclusion: NLR is not a sufficient biomarker in differentiating clinically localized PCa from BPH. NLR increases as preoperative GS and pathologic stage increases. Lymph node involved patients after RRP have statistically higher NLR. NLR can be an indicator of ECE and lymph-node involvement in clinically localized PCa. abstract_id: PUBMED:31210812 Systemic Inflammatory Response in Predicting Prostate Cancer: The Diagnostic Value of Neutrophil-To-Lymphocyte Ratio. Background: Over the past decades, the study of the microenvironment of cancer has supported the hypothesis between inflammation and cancer. Previous studies have demonstrated a promising value of platelet-to-lymphocyte (PLR) and neutrophil-to-lymphocyte ratio (NLR) as a systemic inflammatory response in prostate cancer. Aim: To evaluate their pre-biopsy values of PLR and NLR in predicting prostate cancer. Material And Methods: This is a diagnostic study with retrospective design. We included all benign prostatic hyperplasia (BPH) and prostate cancer (PCa) patients who underwent prostate biopsy in Adam Malik Hospital between August 2011 and August 2015. We used PSA value above 4 ng/dL as the threshold for the biopsy candidates. The relationship between pre-biopsy variables affecting the percentage of prostate cancer risk was evaluated, including age, prostate-specific antigen (PSA) level, and estimated prostate volume (EPV). The PLR and NLR were calculated from the ratio of related platelets or absolute neutrophil counts with their absolute lymphocyte counts. The values then analysed to evaluate their associations with the diagnosis of BPH and PCa. Results: Out of 298 patients included in this study, we defined two groups consist of 126 (42.3%) BPH and 172 PCa (57.7%) patients. Mean age for both groups are 66.36 ± 7.53 and 67.99 ± 7.48 years old (p = 0.64), respectively. There are statistically significant differences noted from both BPH and PCa groups in terms of PSA (19.28 ± 27.11 ng/dL vs 40.19 ± 49.39 ng/dL), EPV (49.39 ± 23.51 cc vs 58.10 ± 30.54 cc), PLR (160.27 ± 98.96 vs 169.55 ± 78.07), and NLR (3.57 ± 3.23 vs 4.22 ± 2.59) features of both BPH and PCa groups respectively (p &lt; 0.05). A Receiver Operating Characteristics (ROC) analysis was performed for PLR and NLR in analysing their value in predicting prostate cancer. The Area Under the Curve (AUC) of PLR is 57.9% with a sensitivity of 56.4% and specificity of 55.6% in the cut-off point of 143 (p = 0.02). The NLR cut-off point of 3.08 gives 62.8% AUC with 64.5% sensitivity and 63.5% specificity. These AUCs were comparable with the AUC of PSA alone (68.5%). We performed logistic regression between PSA, PLR, and NLR with result in the exclusion of PLR if calculated conjunctively. Therefore, NLR has a promising performance in predicting PCa in patients with PSA above 4 ng/dL (OR = 3.2; 95%CI: 1.96-5.11). We found as many as 80 (63.5%) patients with benign biopsy results with negative NLR value in this study. Conclusion: NLR has promising value in predicting prostate cancer. A further prospective study in validating its diagnostic value was needed. abstract_id: PUBMED:36547159 Comparative Prospective and Longitudinal Analysis on the Platelet-to-Lymphocyte, Neutrophil-to-Lymphocyte, and Albumin-to-Globulin Ratio in Patients with Non-Metastatic and Metastatic Prostate Cancer. Purpose: To prospectively evaluate the albumin/globulin ratio (AGR), neutrophil/lymphocyte ratio (NLR), and platelet/lymphocyte ratio (PLR) diagnostic and prognostic predictive value in a stratified population of prostate cancer (PC) cases. Methods: Population was divided based on the clinical and histologic diagnosis in: Group A: benign prostatic hyperplasia (BPH) cases (494 cases); Group B: all PC cases (525 cases); Group B1: clinically significant PC (426 cases); Group B2: non-metastatic PC (416 cases); Group B3: metastatic PC (109 cases). NLR, PLR, and AGR were obtained at the time of the diagnosis, and only in cases with PC considered for radical prostatectomy, determinations were also repeated 90 days after surgery. For each ratio, cut-off values were determined by receiver operating characteristics curve (ROC) analysis and fixed at 2.5, 120.0, and 1.4, respectively, for NLR, PLR, and AGR. Results: Accuracy in predictive value for an initial diagnosis of clinically significant PC (csPC) was higher using PLR (0.718) when compared to NLR (0.220) and AGR (0.247), but, despite high sensitivity (0.849), very low specificity (0.256) was present. The risk of csPC significantly increased only according to PLR with an OR = 1.646. The percentage of cases with metastatic PC significantly increased according to high NLR and high PLR. Accuracy was 0.916 and 0.813, respectively, for NLR and PLR cut-off, with higher specificity than sensitivity. The risk of a metastatic disease increased 3.2 times for an NLR &gt; 2.5 and 5.2 times for a PLR &gt; 120 and at the multivariate analysis. Conclusion: PLR and NLR have a significant predictive value towards the development of metastatic disease but not in relation to variations in aggressiveness or T staging inside the non-metastatic PC. Our results suggest an unlikely introduction of these analyses into clinical practice in support of validated PC risk predictors. abstract_id: PUBMED:26380014 Platelet to lymphocyte ratio plays an important role in prostate cancer's diagnosis and prognosis. Objective: To compare the platelet to lymphocyte ratio (PLR) in normal people, benign prostatic hyperplasia (BPH) patients and prostate cancer (PCA) patients, and to explore the prognostic role of PLR in PCA. Methods: 155 normal people, 168 BPH patients and 103 PCA patients were enrolled. PCA patients were divided into PLR low value group (PLR&lt;150) and PLR high value group (PLR≥150), and the difference of patients' clinical characteristics between high value group and low value group was comparative studied. Results: The differences of PLR among normal people, BPH patients and PCA patients were statistically significant. In addition, platelet counts, neutrophil counts, PSA level, LDH level, AKP level, CRP level and alkaline phosphatase level were also significantly increased in PLR high value group, while the hemoglobin level was decreased. Besides, serious events such as coma during hospitalization were also more likely to appear in PLR high value group. PCA patients had an average follow-up of 3 years, and a total of 25 cases of patients died, including 11 (16.4%) cases in the PLR low value group, and 14 (38.9%) cases in PLR high value group with. Three years survival rate of patients in high value group was significantly reduced. Additionally, PLR was a possible risk factor associated with mortality, and an independent predictor of all-cause mortality during follow-up. Conclusion: PLR is significantly increased in PCA patients, and it is an independent predictor of 3-year mortality in PCA patients. abstract_id: PUBMED:37904787 Association Between Neutrophil-to-Lymphocyte Ratio and Benign Prostatic Hyperplasia: Results from the TCLSIH Cohort Study. Purpose: The prevalence of benign prostatic hyperplasia (BPH) in the general Chinese adult male population has risen sharply over the past few decades. Increasing evidence suggests that inflammation plays an important role in the pathogenesis of BPH. To better understand the role of inflammation in the pathogenesis of BPH, we can use the neutrophil-to-lymphocyte ratio (NLR) because it is a simple and effective marker of inflammation and immunity. This study aims to prospectively investigate the association between NLR levels and the prevalence of BPH in a general Chinese adult male population. Patients And Methods: This study included a total of 15,783 male participants free from BPH at baseline. NLR was measured according to the complete blood count. BPH was defined as total prostate volume (TPV) ≥30 mL, and TPV was determined by transabdominal ultrasonography. Multivariable Cox proportional hazards models were fitted to calculate hazards ratios (HRs) and corresponding 95% confidence intervals (CIs) for BPH risk with NLR levels. Results: During a median follow-up of 2.7 years, 5078 BPH cases were documented. After adjusting for age, body mass index, smoking, alcohol, education, occupation, income, physical activity, total energy intake, personal and family history of disease, and inflammation markers, the multivariable-adjusted HRs of BPH were 1.00 (reference), 1.08 (95% CIs 0.99, 1.17), 1.10 (95% CIs1.02, 1.19), and 1.12 (95% CIs1.03, 1.21), respectively, for participants with NLR in the first, second, third, and fourth quartiles (P for trend &lt;0.01). Conclusion: Higher NLR levels were associated with a higher risk of BPH in Chinese adult male population. Our findings support the notion that NLR levels may be an important target for BPH prevention and intervention. abstract_id: PUBMED:31523936 Association between the neutrophil-to-lymphocyte ratio and intravesical prostatic protrusion in men with benign prostatic hyperplasia. Objective: To analyze the association between neutrophil-to-lymphocyte ratio (NLR) and intravesical prostatic protrusion (IPP) in men with benign prostatic hyperplasia. Methods: Two hundred and fifty men aged &gt;50 years who presented with lower urinary tract symptoms at our institution between 2014 and 2018 were analyzed. Pearson's method was used for analysis of the correlation between NLR and IPP. Multivariate logistic regression analysis was used to identify predictors of IPP. Further analysis according to total prostate volume (TPV) was performed. Results: The NLR correlated positively with IPP (Pearson's r = 0.459, P &lt; 0.001) and was an independent predictor of IPP ≥10 mm (odds ratio, 2.95; 95% confidence interval, 1.59-5.47; P = 0.0006). Among the 142 men with prostates &lt;40 cm3 , mean NLR was 2.50 ± 0.71 in those with IPP ≥10 mm and 1.71 ± 0.57 in those with IPP &lt; 10 mm (P &lt; 0.001). The NLR differed significantly between those with a prostate &lt;40 cm3 and IPP ≥10 mm and those with a larger prostate and IPP &lt; 10 mm (2.50 ± 0.71 vs 2.07 ± 0.77, respectively; P = 0.020). Conclusions: NLR can be used as a surrogate marker for presence of IPP. Its clinical value would be especially important in men with a small prostate gland but high IPP. The NLR seemed to be more strongly correlated with IPP than with TPV. Answer: Yes, the neutrophil-lymphocyte ratio (NLR) has been indicated as a potential marker for the progression of benign prostatic hyperplasia (BPH). Studies have found that NLR is positively correlated with the International Prostate Symptom Score (IPSS), prostate-specific antigen (PSA), and negatively correlated with maximum urinary flow rate (Qmax). High-risk patients for BPH progression had a higher NLR compared with low-risk patients, suggesting that increased inflammation is negatively associated with clinical status in BPH patients (PUBMED:25124628). Another study found a positive correlation between NLR and severe symptoms and progression of BPH, suggesting that anti-inflammatory therapy could contribute to the medical treatment of BPH and that higher NLR may be a candidate marker for the severity of symptoms in BPH patients (PUBMED:27464069). Furthermore, NLR has been shown to be higher in patients with clinically localized prostate cancer compared to those with BPH, indicating its potential utility in differentiating between the two conditions (PUBMED:35047413). However, one study noted that while NLR is not a sufficient biomarker for differentiating clinically localized prostate cancer from BPH, it does increase with preoperative Gleason Score and pathologic stage, suggesting it could be an indicator of extracapsular extension and lymph-node involvement in clinically localized prostate cancer (PUBMED:33960239). Additionally, higher NLR levels have been associated with a higher risk of BPH in the Chinese adult male population, supporting the notion that NLR levels may be an important target for BPH prevention and intervention (PUBMED:37904787). Overall, these findings suggest that NLR can be a useful marker for assessing the progression and severity of BPH.
Instruction: Can acute Chlamydia pneumoniae respiratory tract infection initiate chronic asthma? Abstracts: abstract_id: PUBMED:9809498 Can acute Chlamydia pneumoniae respiratory tract infection initiate chronic asthma? Background: Chlamydia pneumoniae infection can cause acute respiratory illnesses (including sinusitis, bronchitis, and pneumonia) that are sometimes associated with wheezing. Little is known about whether acute infection in a previously unexposed, nonasthmatic individual can produce persistent wheezing leading to a diagnosis of chronic asthma. Objective: We sought to determine whether patients with acute C. pneumoniae respiratory tract infections would develop chronic asthma. Methods: A consecutive series of 163 primary care outpatient adolescents and adults (average age 43, 45% male) who had acute wheezing illnesses or chronic asthma were evaluated for C. pneumoniae infection by serologic testing. A subgroup of these patients also had nasopharyngeal cultures for C. pneumoniae. Results: Twenty patients (12%) were diagnosed with C. pneumoniae infection defined by serology (15), culture isolation (3), or both (2). Of these 20, 10 patients wheezed for the first time and 6 of them subsequently developed chronic asthma (5) or chronic bronchitis (1) along with a serologic profile suggesting chronic infection. The other 10 patients diagnosed with C. pneumoniae infection already had a diagnosis of chronic asthma. In these patients initial serologic findings suggested chronic rather than acute infection. Conclusions: Acute C. pneumoniae respiratory tract infections in previously unexposed, nonasthmatic individuals can result in chronic asthma. Patients previously diagnosed with chronic asthma should be evaluated for possible chronic C. pneumoniae infection. abstract_id: PUBMED:23537827 Application of three uniplex polymerase chain reaction assays for the detection of atypical bacteria in asthmatic patients in Kuwait. Background: Respiratory infections are known to exacerbate wheezing in many asthmatic patients. We aimed to use molecular methods for the fast detection of Mycoplasma pneumoniae, Chlamydia pneumoniae and Legionella pneumophila in respiratory specimens from asthmatic patients in Kuwait. Methods: We used uniplex PCR assays to detect the three atypical bacteria in clinical specimens from 235 asthmatic and non-asthmatic patients in Kuwait. A regression analysis was used to identify the risk factors related to the bacterial type. Group comparisons for similarity were conducted and correlation coefficients were calculated using SPSS statistical software. Results: The detection limits using uniplex PCR for C. pneumoniae, L. pneumophila and M. pneumoniae were approximately 1pg, 2.4fg and 12pg of DNA, respectively. M. pneumoniae PCR positivity was more common in asthmatic patients (15%) than in non-asthmatic subjects (9%) (P&lt;0.05). A marked difference was observed between patients with acute asthma exacerbation (11%) and patients with chronic (stable) asthma (7%) among Kuwaiti patients; these percentages were 16% for non-Kuwaiti acute asthma patients and 14% for non-Kuwaiti chronic asthma patients (P&lt;0.201). There was a weak positive correlation between asthma severity and PCR positivity for M. pneumoniae. The PCR results for C. pneumoniae and L. pneumoniae were found to be statistically insignificant. Conclusions: The results of this study suggest that infection with M. pneumoniae may be related to the exacerbation of asthma symptoms and could possibly be a factor that induces wheezing. abstract_id: PUBMED:15381135 Respiratory infections and asthma: current treatment strategies. Infections such as lower respiratory illness potentially contribute to the initiation of asthma and are major factors in recurring acute exacerbations of the condition. Although typical bacterial respiratory pathogens such as Streptococcus pyogenes, Streptococcus pneumoniae and Hemophilus influenzae do not initiate asthmatic exacerbations, data from a subgroup of adults suggest a potential role for Mycoplasma pneumoniae and Chlamydia pneumoniae in the onset of asthma. Common cold viruses, predominantly respiratory syncytial virus (RSV) in young children and rhinoviruses in older children and adults, are the major causes of acute exacerbations of asthma. These exacerbations are not prevented with maintenance therapies that are used for chronic asthma, but do respond to short courses of systemic corticosteroids. There are continued attempts to produce a successful vaccine and antiviral agents for the treatment of RSV that are more effective and more practical to use than ribavirin, which is currently the only available antiviral for RSV. The prevention and treatment of rhinovirus infections have focused on the major receptor for the virus, intercellular adhesion molecule-1 (ICAM-1), which is located on respiratory epithelial cells. A multivalent, recombinant, antibody fusion protein identified as CFY196 has high avidity for ICAM-1 and has the potential to protect against rhinovirus infection. Another approach for preventing and treating rhinovirus infection uses a recombinant, soluble, truncated form of ICAM-1 in which the transmembrane and intracellular domains of the protein have been deleted. An initial clinical study on this agent demonstrated clinical efficacy in ameliorating the symptoms of experimental rhinovirus infection in volunteers, but did not significantly prevent infection. abstract_id: PUBMED:12371533 Chlamydia pneumoniae infection and its role in asthma and chronic obstructive pulmonary disease. Chlamydia pneumoniae (CP) is a common cause of respiratory tract infections, and several studies have asked whether it may play a pathogenic role in connection with bronchial asthma and chronic obstructive pulmonary disease (COPD). Evidence that CP infection is associated with these diseases is a cardinal item. However, evaluation of CP infection is hampered by difficulties in obtaining agreement on the definition of a gold standard. In the literature, serology is based on different cutoff points of antibody titres, which complicates the definition of CP seropositive findings and the classification of acute infection, chronic and past infection. In connection with acute and chronic infection, it is important to demonstrate the presence of CP by culture or polymerase chain reaction (PCR) in the respiratory tract, especially in the lower airways. Often, the results of serology is not associated with the findings by culture or PCR testing, which may involve the risk of inconclusive evidence. Evaluation of a possible presence of CP by clinical improvement after treatment with antibiotics is difficult since uncontrolled studies have been used and other microorganisms are also affected by antibiotics. Furthermore, many patients improve without antibiotics, and improvement has also been observed in patients remaining culture positive after treatment with antibiotics. It should also be noted that the antiinflammatory effects of antibiotics may improve the clinical status of patients. Despite these obstacles, studies point to the possibility that in some patients acute CP infections may lead to acute exacerbations of bronchial asthma. Whether a persistent CP infection contributes to chronic asthma or severe COPD, or whether it incites the diseases in previously healthy individuals is a question for further studies. Whether a causal relationship exists between CP infection and obstructive pulmonary disease or whether these patients are more susceptible to CP infection is unknown. Nevertheless, a cooperative role of CP in the proinflammatory mechanisms involved in these diseases remains to be examined since cellular studies show that CP stimulates the production and expression of cytokines, chemokines and adhesion molecules, actions that may amplify and prolong the inflammation. abstract_id: PUBMED:33872336 Chlamydia pneumoniae and chronic asthma: Updated systematic review and meta-analysis of population attributable risk. Background: Chlamydia pneumoniae (Cp) is an obligate intracellular human respiratory pathogen producing persisting lung infection with a plausible link to asthma pathogenesis. The population attributable risk of potentially treatable Cp infection in asthma has not been reported. Methods: The author searched from 2000 to 2020 inclusive for previously un-reviewed and new cross sectional and prospective controlled studies of Cp biomarkers and chronic asthma in both children and adults. Qualitative descriptive results and quantitative estimates of population attributable risk for selected biomarkers (specific IgG, IgA and IgE) are presented. Findings: No large, long-term prospective population-based studies of Cp infection and asthma were identified. About half of case-control studies reported one or more significant associations of Cp biomarkers and chronic asthma. Heterogeneity of results by age group (pediatric v adult asthma), severity category (severe/uncontrolled, moderate/partly controlled, mild/controlled) and antibody isotype (specific IgG, IgA, IgE) were suggested by the qualitative results and confirmed by meta-analyses. The population attributable risks for Cp-specific IgG and IgA were nul in children and were 6% (95% confidence interval 2%-10%, p = 0.002) and 13% (9%-18%, p&lt;0.00001) respectively in adults. In contrast to the nul or small population attributable risks for Cp-specific IgG and IgA, the population attributable risk for C. pneumoniae-specific IgE (children and adults combined) was 47% (39%-55%, p&lt;0.00001). In the subset of studies that reported on asthma severity categories, Cp biomarkers were positively and significantly (P&lt;0.00001) associated with asthma severity. Interpretation: C. pneumoniae-specific IgE is strongly associated with asthma and asthma severity, suggesting a possible mechanism linking chronic Cp infection with asthma in a subset of individuals with asthma. Infection biomarkers should be included in future macrolide treatment trials for severe and uncontrolled asthma. abstract_id: PUBMED:7561707 Treatment of Chlamydia pneumoniae infection in adult asthma: a before-after trial. Background: Some diseases previously believed to be noninfectious, eg, peptic ulcer disease, are now known to be caused by chronic infection. Recently, chronic Chlamydia pneumoniae infection has been suggested as a cause for adult-onset asthma. The purpose of this study was to determine whether antichlamydial treatment would affect the natural history of this disease. Methods: An open-label, before-after treatment trial was performed in a community-based, primary care office. Forty-six patients (mean age 47.7 years; range 17 to 78) with moderate to moderately severe, stable, chronic asthma were treated a median of 4 weeks (range 3 to 9) with oral doxycycline (100 mg twice daily), azithromycin (1000 mg once weekly), or erythromycin (1000 mg daily). Post-treatment pulmonary function and asthma symptoms were compared with baseline values. Follow-up was an average of 6 months (range 1.5 to 36) post-treatment. Results: Four patients with C pneumoniae respiratory tract infection developed chronic asthma, which disappeared after treatment in each case. Of the remaining 42 seroreactive patients who were treated a mean of 6 years after the development of chronic asthma, one half had either complete remission or major clinical improvement (3 and 18 patients, respectively). This improvement was significantly more likely to occur in patients with early disease (P = .01) and before the development of fixed obstruction (P &lt; .01). Conclusions: Antimicrobial therapy appeared to "cure" or significantly improve asthma in approximately one half of treated adults, and the response pattern was consistent with chlamydial pathogenesis. C pneumoniae infection in asthma may be clinically important and should be investigated further. abstract_id: PUBMED:18079229 Asthma and atypical bacterial infection. A growing body of basic and clinical science implicates the atypical bacterial pathogens Mycoplasma pneumoniae and Chlamydophila (formerly Chlamydia) pneumoniae as potentially important factors in asthma, although their exact contribution to asthma development and/or persistence remains to be determined. Evidence from human studies links both M pneumoniae and C pneumoniae to new-onset wheezing, exacerbations of prevalent asthma, and long-term decrements in lung function, suggesting that these organisms can play an important role in the natural history of asthma. Furthermore, animal models of acute and chronic infection with these organisms indicate that they have the ability to modulate allergic sensitization and pulmonary physiologic and immune response to allergen challenge. These findings raise the possibility that, in at least some individuals with asthma, antibiotic therapy might have a role in long-term treatment. While antibiotics do not currently have a defined role in the treatment of stable patients with chronic asthma, there is emerging evidence that asthma symptoms and biomarkers of airway inflammation can improve when patients who have atypical bacterial infection as a cofactor in their asthma are treated with macrolide antibiotics. Ongoing research into the importance of atypical pathogens in asthma will further elucidate whether these infections are important in disease development or whether their prevalence is increased in asthmatic subjects due to chronic airway inflammation or other, yet unidentified, predisposing factors. Current studies will further define the role of macrolide antibiotics in the treatment of stable patients with asthma, ultimately determining whether these therapeutic agents have a place in asthma management. abstract_id: PUBMED:21029940 Effects of atypical infections with Mycoplasma and Chlamydia on asthma. Mycoplasma pneumoniae and Chlamydophila pneumoniae are atypical bacteria that are frequently found in patients with asthma. A definitive diagnosis of infection is often difficult to obtain because of limitations with sampling and detection. Numerous animal studies have outlined mechanisms by which these infections may promote allergic lung inflammation and airway remodeling. In addition, there is mounting evidence from human studies suggesting that atypical bacterial infections contribute to asthma exacerbations, chronic asthma, and disease severity. The role of antimicrobials directed against atypical bacteria in asthma is still under investigation. abstract_id: PUBMED:17479603 The role of pulmonary infection in pediatric asthma. Recently, several authors have documented that respiratory infections may cause wheezing and acute exacerbation of asthma in children. Respiratory syncytial virus infections have been recognized to produce the first episode of wheezing in children who go on to develop chronic asthma. Furthermore, repeated infections caused by other common childhood viral pathogens have been proposed to affect responses of the immune system in such a way as to prevent the onset of allergic diseases and possibly asthma. Recently, it became clear that also infections by intracellular pathogens, such as Chlamydia and Mycoplasma, may cause acute and chronic wheezing in some individuals. In this review we describe the immunologic and clinical implications of the association between respiratory infections and asthma. abstract_id: PUBMED:16543054 Infections and asthma. A new paradigm is developing in regard to the interaction between infection and asthma. This paradigm comprises the acute exacerbations seen in asthma and also asthma chronicity. Viral infections have been commonly evaluated in acute exacerbations, but findings suggest viral-allergen and viral-bacterial interactions are important for chronicity. Most recently, studies are also invoking atypical bacterial infections, Mycoplasma pneumoniae and Chlamydia pneumoniae, as factors in both acute exacerbation and chronic asthma. Answer: Yes, acute Chlamydia pneumoniae respiratory tract infection can initiate chronic asthma. A study found that acute C. pneumoniae respiratory tract infections in previously unexposed, nonasthmatic individuals can result in chronic asthma. In this study, 20 patients were diagnosed with C. pneumoniae infection, and of these, 10 patients wheezed for the first time, with 6 subsequently developing chronic asthma or chronic bronchitis. The study suggests that patients previously diagnosed with chronic asthma should be evaluated for possible chronic C. pneumoniae infection (PUBMED:9809498). Furthermore, other studies have suggested a potential role for Chlamydia pneumoniae in the onset of asthma. For instance, evidence from human studies links C. pneumoniae to new-onset wheezing, exacerbations of prevalent asthma, and long-term decrements in lung function, suggesting that this organism can play an important role in the natural history of asthma (PUBMED:18079229). Additionally, a systematic review and meta-analysis indicated that C. pneumoniae-specific IgE is strongly associated with asthma and asthma severity, suggesting a possible mechanism linking chronic C. pneumoniae infection with asthma in a subset of individuals with asthma (PUBMED:33872336). Moreover, a before-after trial showed that antimicrobial therapy appeared to "cure" or significantly improve asthma in approximately one half of treated adults, and the response pattern was consistent with chlamydial pathogenesis (PUBMED:7561707). This further supports the notion that acute C. pneumoniae infection can lead to chronic asthma in some individuals.
Instruction: Does periacetabular osteotomy have depth-related effects on the articular cartilage of the hip? Abstracts: abstract_id: PUBMED:25834078 Does periacetabular osteotomy for hip dysplasia modulate cartilage biochemistry? Background: The aim of periacetabular osteotomy is to improve joint mechanics in patients with developmental dysplasia of the hip. In our study, we tried to determine whether the proteoglycan content, as measured with delayed gadolinium-enhanced magnetic resonance imaging of cartilage (dGEMRIC), can be modulated with the alteration of the hip joint biomechanics. Methods: In this prospective cohort study, thirty-seven patients (thirty-seven hips) with no or minimal osteoarthritis were treated with periacetabular osteotomy for symptomatic acetabular dysplasia. All patients had preoperative and one-year follow-up dGEMRIC scans. Twenty-eight of the thirty-seven also had two-year scans. The changes in dGEMRIC findings and hip morphology between the preoperative visit and the examinations at one and two years following the periacetabular osteotomy were assessed. Results: The mean preoperative dGEMRIC index (and standard deviation) was 561.6 ± 117.6 ms; this decreased to 515.2 ± 118.4 ms at one year after periacetabular osteotomy but subsequently recovered to 529.2 ± 99.1 ms at two years postoperatively. The decrease in the dGEMRIC index of the acetabular cartilage after surgery appears to be most pronounced at the superior aspect of the acetabulum, where the decrease in mechanical loading after periacetabular osteotomy would be most pronounced. All domains of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) demonstrated significant improvement from the preoperative to the postoperative visits (all p &lt; 0.001). Conclusions: Periacetabular osteotomy for developmental dysplasia of the hip appears to alter the mechanical loading of articular cartilage in the hip, which in turn alters the cartilage matrix composition, as demonstrated by dGEMRIC. abstract_id: PUBMED:26329795 Does periacetabular osteotomy have depth-related effects on the articular cartilage of the hip? Background: Osteoarthritis may result from abnormal mechanics leading to biochemically mediated degradation of cartilage. In a dysplastic hip, the periacetabular osteotomy (PAO) is designed to normalize the mechanics and our initial analysis suggests that it may also alter the cartilage biochemical composition. Articular cartilage structure and biology vary with the depth from the articular surface including the concentration of glycosaminoglycans (GAG), which are the charge macromolecules that are rapidly turned over and are lost in early osteoarthritis. Delayed gadolinium-enhanced magnetic resonance imaging of cartilage (dGEMRIC) enables noninvasive measurement of cartilage GAG content. The dGEMRIC index represents an indirect measure of GAG concentration with lower values indicating less GAG content. GAG content can normally vary with mechanical loading; however, progressive loss of GAG is associated with osteoarthritis. By looking at the changes in amounts of GAG in response to a PAO at different depths of cartilage, we may gain further insights into the types of biologic events that are occurring in the joint after a PAO. Questions/purposes: We (1) measured the GAG content in the superficial and deep zones for the entire joint before and after PAO; and (2) investigated if the changes in the superficial and deep zone GAG content after PAO varied with different locations within the joint. Methods: This prospective study included 37 hips in 37 patients (mean age 26 ± 9 years) who were treated with periacetabular osteotomy for symptomatic acetabular dysplasia and had preoperative and 1-year follow up dGEMRIC scans. Twenty-eight of the 37 also had 2-year scans. Patients were eligible if they had symptomatic acetabular dysplasia with lateral center-edge angle &lt; 20° and no or minimal osteoarthritis. The change in dGEMRIC after surgery was assessed in the superficial and deep cartilage zones at five acetabular radial planes. Results: The mean ± SD dGEMRIC index in the superficial zone fell from 480 ± 137 msec preoperatively to 409 ± 119 msec at Year 1 (95% confidence interval [CI], -87 to -54; p &lt; 0.001) and recovered to 451 ± 115 msec at Year 2 (95% CI, 34-65; p &lt; 0.001), suggesting that there is a transient event that causes the biologically sensitive superficial layer to lose GAG. In the deep acetabular cartilage zone, dGEMRIC index fell from 527 ± 148 msec preoperatively to 468 ± 143 msec at Year 1 (95% CI, -66 to -30; p &lt; 0.001) and recovered to 494 ± 125 msec at Year 2 (95% CI, 5-32; p = 0.008). When each acetabular radial plane was looked at separately, the change from before surgery to 1 year after was confined to zones around the superior part of the joint. The only significant change from 1 to 2 years was an increase in the superficial layer of the superior zone (1 year 374 ± 123 msec, 2 year 453 ± 117 msec, p &lt; 0.006). Conclusions: This study suggests that PAO may alter the GAG content of the articular cartilage with a greater effect on the superficial zone compared with the deeper acetabular cartilage zone, especially at the superior aspect of the joint. Some surgeons have observed that surgery itself can be a stressor that can accelerate joint degeneration. Perhaps the decrease in dGEMRIC index seen in the superficial layer may be a catabolic response to postsurgical inflammation given that some recovery was seen at 2 years. The decrease in dGEMRIC index in the deep layer seen mainly near the superior part of the joint is persistent and may represent a response of articular cartilage to normalization of increased mechanical load seen in this region after osteotomy, which may be a normal response to alteration in loading. Clinical Relevance: This study looks at the biochemical changes in the articular cartilage before and after a PAO for dysplastic hips using MRI in a similar manner to using histological methods to study alterations in articular cartilage with mechanical loading. Although PAO alters alignment and orientation of the acetabulum, its effects on cartilage biology are not clear. dGEMRIC provides a noninvasive method of assessing these effects. abstract_id: PUBMED:27190755 Bernese periacetabular osteotomy for hip dysplasia: Surgical technique and indications. For young, active patients with healthy hip cartilage, pelvic osteotomy is a surgical option in to address hip pain and to improve mechanical loading conditions related to dysplasia. Hip dysplasia may lead to arthrosis at an early age due to poor coverage of the femoral head and abnormal loading of the joint articulation. In patients with symptomatic dysplasia and closed triradiate cartilage (generally over age 10), including adolescents and young adults (generally up to around age 40), the Bernese periacetabular osteotomy (PAO) is a durable technique for addressing underlying structural deformity. The PAO involves a modified Smith-Petersen approach. Advantages of the Bernese osteotomy include preservation of the weight-bearing posterior column of the hemi-pelvis, preservation of the acetabular blood supply, maintenance of the hip abductor musculature, and the ability to effect powerful deformity correction about an ideal center of rotation. There is an increasing body of evidence that preservation of the native hip can be improved through pelvic osteotomy. In contrast to hip osteotomy and joint preservation, the role of total hip arthroplasty in young, active patients with correctable hip deformity remains controversial. Moreover, the durability of hip replacement in young patients is inherently limited. Pelvic osteotomy should be considered the preferred method to address correctable structural deformity of the hip in the young, active patient with developmental dysplasia. The Bernese PAO is technically demanding, yet offers reproducible results with good long-term survivorship in carefully selected patients with preserved cartilage and the ability to meet the demands of rehabilitation. abstract_id: PUBMED:25152876 Biomechanical factors in planning of periacetabular osteotomy. Objective: This study addresses the effects of cartilage thickness distribution and compressive properties in the context of optimal alignment planning for periacetabular osteotomy (PAO). Background: The Biomechanical Guidance System (BGS) is a computer-assisted surgical suite assisting surgeon's in determining the most beneficial new alignment of a patient's acetabulum. The BGS uses biomechanical analysis of the hip to find this optimal alignment. Articular cartilage is an essential component of this analysis and its physical properties can affect contact pressure outcomes. Methods: Patient-specific hip joint models created from CT scans of a cohort of 29 dysplastic subjects were tested with four different cartilage thickness profiles (one uniform and three non-uniform) and two sets of compressive characteristics. For each combination of thickness distribution and compressive properties, the optimal alignment of the acetabulum was found; the resultant geometric and biomechanical characterization of the hip were compared among the optimal alignments. Results: There was an average decrease of 49.2 ± 22.27% in peak contact pressure from the preoperative to the optimal alignment over all patients. We observed an average increase of 19 ± 7.7° in center-edge angle and an average decrease of 19.5 ± 8.4° in acetabular index angle from the preoperative case to the optimized plan. The optimal alignment increased the lateral coverage of the femoral head and decreased the obliqueness of the acetabular roof in all patients. These anatomical observations were independent of the choice for either cartilage thickness profile, or compressive properties. Conclusion: While patient-specific acetabular morphology is essential for surgeons in planning PAO, the predicted optimal alignment of the acetabulum was not significantly sensitive to the choice of cartilage thickness distribution over the acetabulum. However, in all groups the biomechanically predicted optimal alignment resulted in decreased joint contact pressure and improved acetabular coverage. abstract_id: PUBMED:22058293 Effect of intra-articular lesions on the outcome of periacetabular osteotomy in patients with symptomatic hip dysplasia. In order to clarify how intra-articular lesions influence the survival of a periacetabular osteotomy in patients with dysplasia of the hip, we performed an observational study of 121 patients (121 hips) who underwent a transposition osteotomy of the acetabulum combined with an arthroscopy. Their mean age was 40.2 years (13 to 64) and the mean follow-up was 9.9 years (2 to 18). Labral and cartilage degeneration tended to originate from the anterosuperior part of the acetabulum, followed by the femoral side. In all, eight hips (6.6%) had post-operative progression to Kellgren-Lawrence grade 4 changes, and these hips were associated with the following factors: moderate osteoarthritis, decreased width of the joint space, joint incongruity, and advanced intra-articular lesions (subchondral bone exposure in the cartilage and a full-thickness labral tear). Multivariate analysis indicated subchondral bone exposure on the femoral head as an independent risk factor for progression of osteoarthritis (p = 0.003). In hips with early stage osteoarthritis, femoral subchondral bone exposure was a risk factor for progression of the grade of osteoarthritis. Although the outcome of transposition osteotomy of the acetabulum was satisfactory, post-operative progression of osteoarthritis occurred at a high rate in hips with advanced intra-articular lesions, particularly in those where the degenerative process had reached the point of femoral subchondral bone exposure. abstract_id: PUBMED:24461141 The prevalence of hip labral and chondral lesions identified by method of detection during periacetabular osteotomy: arthroscopy versus arthrotomy. Purpose: Periacetabular osteotomy (PAO) has become an accepted treatment option for patients with severe hip dysplasia. In addition to correcting acetabular architecture, consideration has been given to concomitant joint evaluation and treatment of coexisting intra-articular pathologic conditions, which can be identified with hip arthroscopy or through an open arthrotomy. The purpose of this systematic review was to evaluate the prevalence of intra-articular pathologic conditions identified arthroscopically compared with that of those identified with open arthrotomy. Methods: A systematic review of the literature was performed by a search of PubMed using the words "periacetabular osteotomy," "Bernese osteotomy," and "Ganz osteotomy." Two reviewers searched for relevant articles that met established inclusion criteria, which follow: (1) they were in the English language, (2) they contained data on patients who underwent periacetabular osteotomy, (3) the study patients underwent concomitant arthroscopy or arthrotomy, and (4) the studies contained data on the intra-articular findings at the time of osteotomy. Exclusion criteria included review articles, technique articles, articles reporting on the same patient population, and articles without reported patient data. Results: After review of 361 references and abstracts, 8 articles met the inclusion criteria. A total of 775 patients who had undergone PAO were included in the 8 articles. Three articles inspected all joints at the time of PAO and accounted for 345 PAOs. Arthrotomy was used to evaluate 151 hips, and 31 (21%) labral tears were identified. Arthroscopy was used to evaluate 194 hips, and 162 (84%) labral tears were identified. The difference in prevalence is statistically significant (P &lt; .05). Two of the 3 articles reported chondral injury: 141 (73%) acetabular chondral injuries and 52 (27%) femoral chondral injuries were noted in 194 hips. Conclusions: The prevalence of intra-articular damage, including labral injuries and pathologic conditions of cartilage, at the time of PAO is substantial. Labral tears and chondral injury appear to be better recognized with concomitant arthroscopy when compared with arthrotomy at the time of PAO. Further study is needed to determine whether the treatment of chondrolabral injuries can improve patient outcome and reduce reoperations. Level Of Evidence: Level IV, systematic review of Level IV studies. abstract_id: PUBMED:18853289 Evaluation of Bernese periacetabular osteotomy: prospective studies examining projected load-bearing area, bone density, cartilage thickness and migration. The typical dysplastic hip joint is characterised by maldirection of the acetabulum and femoral neck, insufficient coverage of the femoral head focally and globally and erosions of the limbus acetabuli (1). An unknown number of persons with hip dysplasia will suffer from pain in hip or groin, decreased hip function and development of osteoarthritis at a young age. The Bernese periacetabular osteotomy is performed to prevent osteoarthritis in patients with hip dysplasia and has been carried out at Aarhus University Hospital, Denmark since 1996 with more than 500 osteotomies performed. Throughout the years, research and quality improvement of the treatment has taken place and this PhD thesis is part of that process. The aims of this PhD thesis were to evaluate outcome aspects after periacetabular osteotomy in terms of I) estimating the projected loadbearing surface before and after periacetabular osteotomy, II) estimating bone density changes in the acetabulum after periacetabular osteotomy, III) developing a technique to precisely and efficiently estimate the thickness of the articular cartilage in the hip joint and IV) examining the stability of the re-orientated acetabulum after periacetabular osteotomy. In study I, we applied a stereologic method based on 3D computed tomography (CT) to estimate the projected loadbearing surface in six normal hip joints and in six dysplastic hips. The dysplastic hips were CT scanned before and after periacetabular osteotomy. We found that the average area of the projected loadbearing surface of the femoral head preoperatively was 7.4 (range 6.5-8.4) cm2 and postoperatively 11 (9.8-14.3) cm2. The area of the projected loadbearing surface was increased significantly with a mean of 49% (34-70%) postoperatively and thus comparable with the load-bearing surface in the normal control group. Double measurements were performed and the error variance of the mean was estimated to be 1.6%. The effect of overprojection, on the projected loadbearing surface was minimal. Consequently, the stereo-logic method proved to be precise and unbiased. The study indicates that this method is applicable in monitoring the loadbearing area in the hip joint of patients undergoing periacetabular osteotomy. In study II, a method based on CT and 3D design-based sampling principles was used to estimate bone density in different regions of the acetabulum. Baseline density was measured within the first seven days following periacetabular osteotomy and compared with density two years postoperatively. Double measurements were performed on three patients, and the error variance was estimated to be 0.05. Six patients with hip dysplasia scheduled for periacetabular osteotomy were consecutively included in the study. Bone density increased significantly in the anteromedial quadrant of the acetabulum as well as in the posteromedial quadrant between the two time-points. In the anterolateral quadrant bone density was unchanged following surgery, and the same was true for the posterolateral quadrant. We suggest that the observed increase in bone density medially represents a remodelling response to an altered load distribution after periacetabular osteotomy. The described method is a precise tool to estimate bone density changes in the acetabulum. Study III. As periacetabular osteotomy is performed on dysplastic hips to prevent osteoarthritic progression, changes in the thickness of the articular cartilage is a central variable to follow over time. 26 dysplastic hips on 22 females and 4 males were magnetic resonance imaged (MRI) preoperatively. The first 13 patients were examined twice, with complete repositioning of the patient and set-up in order to obtain an estimate of the precision of the method used. To show the acetabular and femoral cartilages separately, an ankle traction device was used during MRI. This device pulled the leg distally with a load of 10 kg. The mean thickness of the acetabular cartilage was 1.26 mm, SD 0.04 mm. The mean thickness of the femoral cartilage was 1.18 mm, SD 0.06. The precision calculated as the error variance was estimated for the thickness of the acetabular cartilage to 0.01 and femoral cartilage 0.02. We suggest that the method can be advantageous for assessing the progression of osteoarthritis in dysplastic hips after periacetabular osteotomy. In study IV, 32 dysplastic hips, 27 females and 5 males were included in the study. Radiostereometric examinations (RSA) were done at one week, four weeks, eight weeks and six months. Data are presented as mean + SD. Six months postoperatively, the acetabular fragment had migrated 0.7 mm + 0.8 medially, and 0.7 mm + 0.5 proximally. Mean rotation in adduction was 0.5 degrees + 1.3. In other directions, mean migration was below 0.5 mm/degrees. There was no statistical difference between migration 8 weeks and 24 weeks postoperatively in translation or rotation. Due to the limited migration, we find our postoperative partial weight-bearing regime safe. In conclusion, the studies in the present PhD thesis indicate that the projected loadbearing area of the hip joint increases considerable in patients undergoing periacetabular osteotomy and a method to estimate this area was described. Bone density increases in the medial quadrants two years postoperative and a method is developed to precisely estimate bone density on CT images. Also a method to precisely estimate cartilage thickness was presented and we suggest that the method can be advantageous for assessing the progression of osteoarthritis in dysplastic hips after periacetabular osteotomy. Due to the very limited migration of the acetabular fragment fixated with two screws, we find our fixation sufficient and the postoperative partial weight-bearing regimen safe. abstract_id: PUBMED:33581297 Preoperative Rather Than Postoperative Intra-Articular Cartilage Degeneration Affects Long-Term Survivorship of Periacetabular Osteotomy. Purpose: To determine whether intra-articular lesions changed in short-term follow-up after periacetabular osteotomy (PAO) and whether the intra-articular lesion changes impacted the long-term survivorship of PAO. Methods: We reviewed patients with hip dysplasia who underwent PAO with arthroscopic observation between 1990 and 2001. Patients who underwent second-look arthroscopy were included. The correlations between the intra-articular lesion changes and the long-term outcome of PAO were analyzed for patients with &gt;10 years of follow-up. The possible risk factors included demographic factors (age, sex, and body mass index), radiographic factors (Tönnis grade, lateral center-edge angle, Tönnis angle, acetabular head index, crossover sign, posterior wall sign, and joint congruity), and arthroscopic findings (full-thickness lesions at the time of PAO and lesions changes at the time of second-look arthroscopy). Results: A total of 64 patients (72 hips) were studied. Second-look arthroscopy was performed at a median of 1.4 years after PAO. Intra-articular lesions were observed in 93% in the acetabulum, 81% in the femoral head, and 97% in the labrum, respectively. These lesions unchanged in 74% in the acetabulum, 76% in the femoral head, and 79% in the labrum, respectively. Cartilage repair was observed in the acetabulum and the femoral head in 24% and 17% of hips, respectively. Labral repair occurred in 10%. Intra-articular lesion changes were not a predictor of failure. Multivariate analysis identified International Cartilage Repair Society grade 4 lesion in the femoral head as an independent risk factor for failure. Conclusions: Our results suggest that PAO prevents further deterioration in mild cartilage lesions and results in cartilage repair in some cases with advanced cartilage degenerations in the short term. However, these postoperative changes were not associated with long-term survivorship. Thus, appropriate surgical indications based on the preoperative intra-articular cartilage degeneration is paramount to achieving long-term success in PAO. Level Of Evidence: Level IV, therapeutic study. abstract_id: PUBMED:23395033 Periacetabular osteotomy: intra-articular work. The goal of periacetabular osteotomy (PAO) is to correct acetabular pathomorphology and restore a more normal interplay between the acetabulum and proximal femur. After PAO, the biomechanically improved hip joint is presumed to better resist the progression of degenerative joint disease. Isolated PAO without intra-articular inspection often will underestimate the extent of hip disease in young adults. If intra-articular inspection is not performed at the time of PAO, chondrolabral injuries and dysplastic hip pathologies associated with femoroacetabular impingement will not be detected. The interaction of the acetabulum with the proximal femur is critical, and the presence of iatrogenic femoroacetabular impingement can be assessed with intra-articular inspection at the time of PAO. abstract_id: PUBMED:38296440 Editorial Commentary: Concomitant Hip Arthroscopy and Periacetabular Osteotomy Treat Both the Cause and the Effects of Hip Dysplasia: The Best of Both Worlds. In the management of true hip dysplasia, the bony deformity is nearly always accompanied by, and often causal of, intra-articular pathology. It is strongly recommended to conduct a comprehensive preoperative evaluation as well as arthroscopic evaluation and treatment of coexisting hip joint conditions prior to performing an open Bernese periacetabular osteotomy. The osteotomy improves coverage of the femoral head and corrects the abnormal joint mechanics and loading patterns, which result in cartilage damage and subsequent osteoarthritis. Arthroscopy allows treatment of the intra-articular pathology. Combining the hip arthroscopy and periacetabular osteotomy procedures into a single surgical intervention, performed by either a single surgeon or a collaborative surgical team, allows us to address both the cause and the effects of dysplasia. This once-controversial combined technique has become a standard practice in the field with widespread adoption. Answer: Yes, periacetabular osteotomy (PAO) does have depth-related effects on the articular cartilage of the hip. A study that measured the glycosaminoglycan (GAG) content in the superficial and deep zones of the entire joint before and after PAO found that the mean dGEMRIC index in the superficial zone decreased significantly from preoperative values to 1 year postoperatively and then partially recovered by Year 2. This suggests a transient event that causes the biologically sensitive superficial layer to lose GAG. In the deep acetabular cartilage zone, the dGEMRIC index also fell from preoperative values to Year 1 and partially recovered at Year 2. The changes from before surgery to 1 year after were confined to zones around the superior part of the joint, and the only significant change from 1 to 2 years was an increase in the superficial layer of the superior zone. This indicates that PAO may alter the GAG content of the articular cartilage with a greater effect on the superficial zone compared to the deeper acetabular cartilage zone, especially at the superior aspect of the joint (PUBMED:26329795).
Instruction: Are therapeutic motivation and having one's own doctor as researcher sources of therapeutic misconception? Abstracts: abstract_id: PUBMED:24855070 Are therapeutic motivation and having one's own doctor as researcher sources of therapeutic misconception? Background: Desire for improvement in one's illness and having one's own doctor functioning as a researcher are thought to promote therapeutic misconception (TM), a phenomenon in which research subjects are said to conflate research with treatment. Purpose: To examine whether subjects' therapeutic motivation and own doctor functioning as researcher are associated with TM. Methods: We interviewed 90 persons with advanced Parkinson's disease (PD) enrolled or intending to enrol in sham surgery controlled neurosurgical trials, using qualitative interviews. Subjects were compared by motivation (primarily therapeutic vs primarily altruistic or dually motivated by altruistic and therapeutic motivation), and by doctor status (own doctor as site investigator vs not) on the following: understanding of purpose of study; understanding of research procedures; perception of chance of direct benefit; and recollection and perceptions concerning the risks. Results: 60% had primarily therapeutic motivation and 44% had their own doctor as the site investigator, but neither were generally associated with increased TM responses. Overall level of understanding of purpose and procedures of research were high. Subjects responded with generally high estimates of probability of direct benefit, but their rationales were personal and complex. The therapeutic-motivation group was more sensitive to risks. Five (5.6%) subjects provided incorrect answers to the question about purpose of research, and yet, showed excellent understanding of research procedures. Conclusions: In persons with PD involved in sham surgery clinical trials, being primarily motivated by desire for direct benefit to one's illness or having one's own doctor as the site investigator were not associated with greater TM responses. abstract_id: PUBMED:31640105 Raising Awareness of Therapeutic Misconception and Optimism Around Clinical Trials in Huntington's Disease. The Huntington's disease (HD) community is moving into an exciting time with Huntingtin lowering strategies entering human clinical trials. These upcoming targeted therapeutic approaches for this devastating disease with unmet medical needs, are believed to be a last resort for many patients and their families. Recently, patients with HD were shown to be at high risk for therapeutic misconception, mistaking research for actual treatment. It is important that investigators are aware of their patient's, as well as their own, vulnerability to therapeutic misconception. To limit therapeutic misconception, information should be provided on the rationale for clinical trials and the differences between clinical research and clinical care should be carefully discussed. abstract_id: PUBMED:26709382 An exploratory study of therapeutic misconception among incarcerated clinical trial participants. Background: Therapeutic misconception, the misunderstanding of differences between research and clinical care, is widely prevalent among non-incarcerated trial participants. Yet little attention has been paid to its presence among individuals who participate in research while incarcerated. Methods: This study examined the extent to which seventy-two incarcerated individuals may experience therapeutic misconception about their participation in one of six clinical trials, and its correlation with participant characteristics and potential influences on research participation. Results: On average, participants endorsed 70% of items suggestive of therapeutic misconception. The tendency toward therapeutic misconception was significantly higher among: African Americans and Latinos compared to Whites; older and less educated participants; enrollment in a substance abuse-related trial; and correlated with a belief that the trial was the only way to obtain needed treatment. Conclusions: Therapeutic misconception may be particularly high among select incarcerated individuals and is associated with a perceived lack of treatment options. Further examination of therapeutic misconception among incarcerated research participants is needed. abstract_id: PUBMED:28857220 Presuming patient autonomy in the face of therapeutic misconception. Therapeutic misconception involves the failure of subjects either to understand or to incorporate into their own expectations the distinctions in nature and purpose of personally responsive therapeutic care, and the generic relationship between subject and investigator which is constrained by research protocols. Researchers cannot disregard this phenomenon if they are to ensure that subjects engage in research on the basis of genuine informed consent. However, our presumption of patient autonomy must be sustained unless we have compelling evidence of serious misunderstanding. This article argues that the mere expression of aspects of therapeutic misconception should not necessarily displace the presumption of subject autonomy or undermine ethical inclusion in research for at least three reasons. First, some interpretations of the empirical data do not suggest misunderstanding. Second, assessment of misestimation and optimism are delicate and value-laden, and turn quickly from questions of autonomy to questions of judgment. Third, incomplete understanding may yet be sufficient to allow a subject to engage in a substantially autonomous decision-making process. Our point is not to dismiss the possibility of genuine therapeutic misconception, but to question its frequency and fatality to the consent process. abstract_id: PUBMED:36822606 When research becomes practice: the concept of the therapeutic misconception and challenges to consent in clinical trials. Many factors influence patients' decisions to participate in clinical trials. For many, the primary motivation is the possibility that they might derive some benefit from participation. This is particularly true for patients with limited treatment options, such as patients with advanced cancer. While this is not surprising, it is potentially problematic if patients fail to recognise the distinction between research and clinical care (a phenomenon known as the 'therapeutic misconception'). This is becoming increasingly problematic as clinical trial designs become more complex, as clinical trials become more embedded in routine clinical care, and as trials are increasingly used by patients and clinicians to access new diagnostic platforms and therapies. We outline some of these recent trends, focusing on the cancer clinical trials landscape as this provides a good case study of the phenomenon. We conclude by making preliminary suggestions that changes to the consent process, perhaps using 'dynamic consent' platforms, might help to mitigate the therapeutic misconception and note the need for further research to guide strategies for improving communication and decision-making. abstract_id: PUBMED:18316464 Beyond informed consent: the therapeutic misconception and trust. The therapeutic misconception has been seen as presenting an ethical problem because failure to distinguish the aims of research participation from those receiving ordinary treatment may seriously undermine the informed consent of research subjects. Hence, most theoretical and empirical work on the problems of the therapeutic misconception has been directed to evaluate whether, and to what degree, this confusion invalidates the consent of subjects. We argue here that this focus on the understanding component of informed consent, while important, might be too narrow to capture the ethical complexity of the therapeutic misconception. We show that concerns about misplaced trust and exploitation of such trust are also relevant, and ought to be taken into account, when considering why the therapeutic misconception matters ethically. abstract_id: PUBMED:34161615 Reassessing the measurement and presence of therapeutic misconception in a phase 1 setting. Background: Therapeutic misconception (TM) refers to research subjects' failure to distinguish the goals of clinical research from standard personal care. TM has traditionally been determined by questioning the patient about the research study's purpose. Recent research, however, has questioned whether TM is as prevalent as reported due to discrepancies between patient/researcher interpretations of TM questions. The authors have created an interview tool receptive to these advancements to more accurately determine the prevalence of TM. Methods: Patients were questioned about the trial's purpose as follows: 1) "Is the trial mostly intending to help research and gain knowledge?," 2) "Is it mostly intending to help you as a person?," or 3) "Don't know." Participants were then asked what they thought this question was asking: A) "What my own intentions are for participating," B) "What the official purpose of the research study is," or C) "Not sure." A patient exhibited TM by answering that the official trial purpose was to help him or her. Results: Patients (n = 98) had a mean age of 60 years, were mostly White (64%), had a combined family annual income ≥$60,000 (61%), and 49% had a college degree. Twelve of 98 patients (12%) definitely exhibited TM. This was much lower than the author's original finding of 68% in a similar cohort. Twenty-four of 98 patients (24.5%) were unclear about what one or both questions were asking and could not be categorized. Conclusions: Previously, a patient was thought to have TM if they answered that the purpose of the trial was to benefit to him or her. An additional query about how patients interpreted that question revealed only 12% definitely had TM. Lay Summary: Therapeutic misconception (TM) refers to research subjects' failure to distinguish the goals of clinical research from standard personal care. TM signals a basic misunderstanding of the purpose of clinical research, threatening valid informed consent to participate in clinical trials. TM has traditionally been determined by questioning patients about their research study's purpose. Recent research, however, has questioned whether TM is as prevalent due to discrepancies between patient/researcher interpretations of TM questions. By developing an interview-tool receptive to these advancements, we report a lower TM estimate in the phase 1 setting (12%) than we found previously in a similar cohort (68%). abstract_id: PUBMED:23256408 Time to stop worrying about the therapeutic misconception. Work on the therapeutic misconception suggests that investigators should ensure that potential research subjects understand the fundamental differences between clinical research and clinical care. Yet, what potential research subjects should understand depends on their circumstances and the study in question. This analysis implies that researchers and review committees should stop attempting to define, measure, and dispel the therapeutic misconception, and instead should focus on what potential subjects should understand to participate in individual studies. abstract_id: PUBMED:20358792 Therapeutic misconception and clinical trials in sub-Saharan Africa: a review. Objectives: To identify possible existence of therapeutic misconception and its effects on clinical trials in sub-Saharan Africa. Data Source: Original research findings and reviews published in the English literature and author's professional experience with clinical trials in some East, Central and West African countries. Design: Review of peer-reviewed articles. Data Extraction: Online searches and requests for reprints from corresponding authors and institutional subscription. Data Synthesis: Information categorised accordingly. Results: Therapeutic misconception, defined as a conflation by research subjects of research goals and those of routine health care is considered widely prevalent globally. The subjects misunderstand the disclosures during consenting process and enroll hoping to derive personal benefits from the study. Though no study has looked at therapeutic misconception specifically in sub-Saharan Africa, available evidence suggests that it is prevalent. Therapeutic misconception is incompatible with informed voluntary consent. It may affect participation in clinical trials, subjects' safety and well-being and possibly the research findings. Conclusions: There is need for studies to identify the prevalence and effects of therapeutic misconception in the region. Researchers in sub-Saharan Africa should be aware of its existence, thus design trials in which it will not have significant effects and strengthen the consent process to reduce it. abstract_id: PUBMED:26074215 Investigating the informed consent process, therapeutic misconception and motivations of Egyptian research participants: a qualitative pilot study. Few studies have explored the informed consent process among research participants in developing countries. This study aimed to evaluate the informed consent process, therapeutic misconception and motivation for participation among Egyptians participating in clinical trials. In a cross-sectional qualitative pilot study 103 participants in 10 clinical trials responded to a questionnaire. Over 90% agreed they had time to ask questions and received adequate information about the risks prior to consenting. All participants thought the research and the drug would improve their condition; only 46.1% were aware of receiving a non-approved experimental drug and 21.3% of being randomized. Reasons for participation included: better treatment (100%), to benefit society &amp; advance science (85.4%), to receive free drugs (42.6%) and medical care (43.6%), to get hospitalized (15.8%) and to receive money or gifts (4.9%). Investigators need to emphasize the distinction between research and clinical care to address the high rate of therapeutic misconception. Answer: Therapeutic motivation and having one's own doctor as a researcher are thought to be factors that could promote therapeutic misconception (TM), which is when research subjects confuse the objectives of clinical research with personal medical care. However, a study involving persons with advanced Parkinson's disease (PD) enrolled or intending to enroll in sham surgery controlled neurosurgical trials found that neither being primarily motivated by the desire for direct benefit to one's illness nor having one's own doctor as the site investigator were associated with greater TM responses (PUBMED:24855070). Despite high estimates of the probability of direct benefit, the subjects' rationales were personal and complex, and the overall level of understanding of the purpose and procedures of research was high. The group with therapeutic motivation was more sensitive to risks, and only a small percentage of subjects provided incorrect answers about the purpose of research while showing excellent understanding of research procedures. In contrast, other studies have highlighted the presence of TM in different contexts. For instance, patients with Huntington's disease were shown to be at high risk for TM, mistaking research for actual treatment (PUBMED:31640105). Incarcerated individuals participating in clinical trials also exhibited a high tendency toward TM, particularly among certain demographics and those who believed the trial was the only way to obtain needed treatment (PUBMED:26709382). Moreover, reassessment of TM in a phase 1 setting using an improved interview tool revealed a lower prevalence of TM (12%) than previously reported, suggesting that TM might not be as widespread as once thought (PUBMED:34161615). The concept of TM is further complicated by the increasing complexity of clinical trial designs, the embedding of trials in routine clinical care, and the use of trials to access new therapies, particularly in the cancer clinical trials landscape (PUBMED:36822606). Additionally, the ethical implications of TM extend beyond informed consent to include concerns about misplaced trust and exploitation of trust (PUBMED:18316464). In conclusion, while therapeutic motivation and having one's own doctor as a researcher are potential sources of TM, the evidence suggests that their impact on TM may vary depending on the context and population studied. It is important for investigators to be aware of the potential for TM and to ensure that research participants have a clear understanding of the distinction between research and clinical care (PUBMED:20358792; PUBMED:26074215).
Instruction: Malignant melanoma and (18)F-FDG-PET: Should the whole body scan include the legs? Abstracts: abstract_id: PUBMED:12937695 Malignant melanoma and (18)F-FDG-PET: Should the whole body scan include the legs? Aim: (18)F-FDG-PET (FDG-PET) is established in staging and follow-up of malignant melanoma. The legs are affected in 10-40% at time of diagnosis even if the primary is at the arms and torso. Imaging including the legs may detect distant manifestations but increases duration of the scan by approximately 30 min. We intended to disclose the diagnostic benefit of scanning the legs and to evaluate the therapeutic benefit resulting. Patients, Methods: In this retrospective analyse 213 consecutive PET studies of 153 patients with suspected or recent malignant melanoma were re-evaluated for metastatic spread by a blinded investigator. Histopathological follow-up was assessed for confirmation. Results: Suspicious findings at the legs were depicted in 53 patients on 76 occasions. 38/53 showed pathologic uptake in the torso as well. In 15/53 patients it was restricted to the legs. One of them had a hitherto unknown, clinically relevant finding that was not apparent in palpation and inspection. In 6 other patients with primary location at the legs a validation of the positive PET findings was not possible up to now. Conclusion: Metastases and local recurrence of malignant melanoma at the legs were found in 41% of women and 27% of men. However, a long scan does not yield relevant additional data. We found isolated new manifestations at the legs in only 1/153 patients. We recommend performing a long scan only in patients with previous melanoma manifestations restricted to the legs. In all other cases a short scan of the torso and proximal thighs is sufficient. This allows a higher number of PET-scans without loss of diagnostic power and a shorter examination time. abstract_id: PUBMED:37854079 Role of 18 F-FDG PET-CT in Urethral Malignant Melanoma and Analysis of the UK Guidelines on Ano-uro-genital Melanomas. Urethral melanomas are a rare subtype of noncutaneous melanomas. The disease has a tendency to have skip lesions and early metastases as compared with cutaneous melanomas. The role of fluorine-18 fluorodeoxyglucose ( 18 F-FDG) positron emission tomography computed tomography (PET-CT) is well established in cases of cutaneous melanomas and is recommended by the National Comprehensive Cancer Network (NCCN) for stage IIB to IV cancer. There are no established guidelines on the management of noncutaneous melanomas; however, a recently published United Kingdom national guideline aims to streamline the management of ano-uro-genital melanomas. The guideline describes a very limited role in the use of 18 F-FDG PET-CT in this case scenario. The tendency to skip lesions, early metastases, involvement of brain parenchyma, and finally the usage of anti-PD-1 medications are key areas where 18 F-FDG PET-CT has shown superiority over CT scan. With this case report, we aim to highlight the strength of 18 F-FDG PET-CT in the management of urethral melanomas, which can be extrapolated to other ano-uro-genital melanomas. abstract_id: PUBMED:25204436 Therapy response monitoring of the early effects of a new BRAF inhibitor on melanoma xenograft in mice: evaluation of (18) F-FDG-PET and (18) F-FLT-PET. Inhibition of the V600E mutated BRAF kinase gene (BRAF(V600E) ) is an important and effective approach to treating melanomas. A new specific small molecule inhibitor of BRAF(V600E) , PLX3603, showed potent melanoma growth-inhibiting characteristics in preclinical studies and is currently under clinical investigation. In this study we investigated the feasibility of (18) F-FDG and (18) F-FLT-PET to monitor the early effects of the BRAF(V600E) inhibitor in mice with melanoma xenografts. SCID/beige mice with subcutaneous (s.c.) A375 melanoma xenografts, expressing BRAF(V600E) , received the BRAF(V600E) inhibitor twice daily orally (0, 25, 50 and 75 mg/kg). At 1, 3 and 7 days after start of therapy, the uptake of (18) F-FDG and (18) F-FLT in the tumor and normal tissues was determined in ex vivo tissue samples. Serial (18) F-FDG and (18) F-FLT-PET scans were acquired of animals at 1 day before and 1, 3 and 7 days after start of treatment with 75 mg/kg BRAF(V600E) inhibitor. A dose-dependent decrease in (18) F-FDG uptake in the A375 tumors was observed by ex vivo biodistribution analysis. Administration of 75 mg/kg BRAF inhibitor for 1, 3 and 7 days resulted in a significantly decreased (18) F-FDG uptake in A375 tumors (41, 35 and 51%, respectively). (18) F-FLT uptake in the A375 tumors was low at baseline and no significant changes in (18) F-FLT uptake were observed at any of the doses administered. These effects were corroborated by serial in vivo (18) F-FDG and (18) F-FLT-PET imaging. These data demonstrate that (18) F-FDG-PET can be used as an imaging biomarker to noninvasively evaluate the early effects of PLX3603. abstract_id: PUBMED:19922830 (18)F-FDG PET-CT and respiratory synchronization: effect in the detection and catalogation of pulmonary lesions Unlabelled: The respiratory movement reduces the sensitivity in the detection of pulmonary lesions. The synchronized acquisition of PET with respiratory movement (4D) can reduce this inconvenient. Our objective was assess the effect of 4D PET-CT in the metabolic activity of (18)F FDG and final classification of pulmonary lesions. Material And Methods: 12 patients with 18 pulmonary lesions with sizes within 0.8-4cm were assessed. The maximum SUV was obtained in the imaging acquired in standard conditions (3D) and the obtained in the respiratory period with higher metabolic activity in 4D images. The difference of percentage between both values was calculated. Moreover were evaluated the changes of the classification of pulmonary nodes (benign or malignant) obtained from the assessment of 4D and 3D PET images. Results: 17/18 pulmonary lesions showed an increase in the SUV(max) value in the 4D with respect to 3D image. 5 lesions showed SUV(max) values &gt;2.5 in 4D image and &lt;2.5 in 3D image. From these 3 were correctly catalogued as malignant with 4D although the number of false positive increased to 2. Conclusion: (18)F-FDG 4D PET-CT shows values of metabolic activity more realistic allowing the correct classification reducing the false negative although increasing the false positive. abstract_id: PUBMED:34073477 Complete Metabolic Response in FDG-PET-CT Scan before Discontinuation of Immune Checkpoint Inhibitors Correlates with Long Progression-Free Survival. Checkpoint inhibitors have revolutionized the treatment of patients with metastasized melanoma. However, it remains unclear when to stop treatment. We retrospectively analyzed 45 patients (median age 64 years; 58% male) with metastasized melanoma from 3 cancer centers that received checkpoint inhibitors and discontinued therapy due to either immune-related adverse events or patient decision after an (18F)2-fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET) combined with a low-dose CT scan (FDG-PET-CT) scan without signs for disease progression. After a median of 21 (range 1-42) months of immunotherapy an FDG-PET-CT scan was performed to evaluate disease activity. In these, 32 patients (71%) showed a complete metabolic response (CMR) and 13 were classified as non-CMR. After a median follow-up of 34 (range 1-70) months, 3/32 (9%) of CMR patients and 6/13 (46%) of non-CMR patients had progressed (p = 0.007). Progression-free survival (PFS), as estimated from the date of last drug administration, was significantly longer among CMR patients than non-CMR (log-rank: p = 0.001; hazard ratio: 0.127; 95% CI: 0.032-0.511). Two-year PFS was 94% among CMR patients and 62% among non-CMR patients. Univariable Cox regression showed that metabolic response was the only parameter which predicted PFS (p = 0.004). Multivariate analysis revealed that metabolic response predicted disease progression (p = 0.008). In conclusion, our findings suggest that patients with CMR in an FDG-PET-CT scan may have a favorable outcome even if checkpoint inhibition is discontinued. abstract_id: PUBMED:11948260 Clinical applications of (18)F-FDG in oncology. PET has emerged as a powerful diagnostic tool for the evaluation of cancer patients. Currently, most of these studies are performed with the glucose analog (18)F-FDG, which has been shown to accumulate avidly in most tumors. (18)F-FDG PET is now routinely used in the diagnosis, staging, and posttherapy evaluation of oncologic patients. After reading this paper, the reader should understand the physiologic basis of using (18)F-FDG in patients with different tumors, describe the role of this radiopharmaceutical in the management of oncologic patients, and identify those malignancies for which (18)F-FDG has proved to be effective in diagnosis and follow-up. abstract_id: PUBMED:23961257 (18)F-FDG PET-CT for Staging of Conjunctival Melanoma. The value of F-18 fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-FDG PET-CT) for the evaluation of cutaneous melanoma has been demonstrated previously. However, there are few reports regarding the use PET-CT for the staging of conjunctival melanoma (CM). We report here a case, a 34-year-old male with a six-month history of a pigmented nodule at the palpebral conjunctiva of the left eye, and a firm left preauricular lymph node detected on physical examination. Biopsy of the ocular lesion confirmed CM, and fine needle aspiration cytology of the preauricular node was positive for malignancy. CT showed three pulmonary nodules. An (18)F-FDG PET-CT was performed to restage the patient. The study showed hypermetabolic lesions in the left eye, and in the left preauricular node. The scan was negative for metastasis. These findings were important in guiding management of the disease in this patient. Future prospective studies should further evaluate the role of (18)F-FDG PET-CT for the staging of CM. abstract_id: PUBMED:38181300 Rare Findings of Gallbladder Metastasis From Cutaneous Melanoma on 18 F-FDG PET/CT Imaging. Abstract: We present the imaging findings of a 77-year-old man with a history of malignant cutis melanoma that metastasized to the gallbladder. A restaging 18 F-FDG PET/CT scan showed uneven thickening and elevated 18 F-FDG uptake in the gallbladder wall. Subsequently, the patient underwent laparoscopic cholecystectomy, and histopathologic findings confirmed the diagnosis of metastatic melanoma of the gallbladder. abstract_id: PUBMED:26204273 (18)F-FDG PET/CT and Melanoma: Staging, Immune Modulation and Mutation-Targeted Therapy Assessment, and Prognosis. Objective: Monoclonal antibodies that target the programmed cell death 1 (PD-1) immune checkpoint protein and its associated ligands, PD-L1 and PD-L2, and targeted inhibitors of mutated signal transduction molecules such as BRAF inhibitors show immense promise in treating patients with melanoma. We discuss the use of (18)F-FDG PET/CT for assessing therapy effectiveness, staging advanced disease, and determining prognosis of patients with melanoma. Conclusion: FDG PET/CT is useful in staging disease, assessing therapy, and determining prognosis in patients with melanoma. abstract_id: PUBMED:23106051 Is there an added clinical value of "true"whole body(18)F-FDG PET/CT imaging in patients with malignant melanoma? Accurate and reliable staging of disease extent in patients with malignant MM is essential to ensure appropriate treatment planning. The detection of recurrent or residual malignancy after primary treatment is important to allow for early intervention and to optimise patient survival. 2-deoxy-2-[(18)F]fluoro-D-glucose ((18)F-FDG) PET or PET computed tomography (PET/CT) is indicated for surveillance of malignant MM due to its high sensitivity and specificity for soft-tissue or nodal recurrences and metastases. It has been claimed that including lower extremities and skull in addition to 'eyes to thigh' images in PET/CT evaluation of metastatic MM routinely is warranted. We have studied retrospectively the reports of whole-body PET/CT scans in all patients with MM scanned in our Department from April 2005 to December 2010. All PET abnormalities in the brain/scalp and lower extremities were tabulated by location and whether they were 'expected' or 'unexpected'. Findings were correlated with pathology, other imaging studies, and clinical follow-up. In this study, 398 PET/CT examinations in 361 patients with MM were included. Results showed that twelve of the 398 (3%) scans had brain/scalp abnormalities, with only 4 (1.0%) showing unexpected abnormalities. Twenty nine of the 398 (7.2%) scans showed lower extremity abnormalities, with only 5 (1.2%) showing unexpected abnormalities. In no case was an isolated unexpected malignant lesion identified in the brain/scalp or lower extremities. In conclusion, whole body PET/CT scan showed about 1% unexpected primary or metastatic MM lesions involving the head or lower extremities, which seldom offered significant additional clinical benefit and were unlikely to change clinical management. No clinically significant change in staging would have occurred. Routine 'eyes to thighs' images were adequate for this subset of patients. Answer: The question of whether a whole-body (18)F-FDG PET scan for malignant melanoma should include the legs is addressed in several studies. One retrospective analysis of 213 consecutive PET studies of 153 patients with suspected or recent malignant melanoma found that metastases and local recurrence at the legs were present in 41% of women and 27% of men. However, the study concluded that a longer scan including the legs did not yield relevant additional data, as isolated new manifestations at the legs were found in only 1 out of 153 patients. Therefore, the recommendation was to perform a long scan only in patients with previous melanoma manifestations restricted to the legs, while a short scan of the torso and proximal thighs was deemed sufficient for other cases (PUBMED:12937695). Another study that reviewed whole-body PET/CT scans in patients with malignant melanoma found that only about 1% of scans showed unexpected primary or metastatic melanoma lesions involving the head or lower extremities. The study concluded that routine 'eyes to thighs' images were adequate for this subset of patients, as the unexpected findings were unlikely to change clinical management or significantly alter staging (PUBMED:23106051). In summary, the evidence suggests that including the legs in a whole-body (18)F-FDG PET scan for malignant melanoma may not be necessary for all patients, particularly if there is no history of melanoma manifestations in the legs. Instead, a more focused scan of the torso and proximal thighs may be sufficient for most cases, allowing for a shorter examination time and a higher number of PET scans without loss of diagnostic power.
Instruction: Is modified constraint-induced movement therapy more effective than bimanual training in improving arm motor function in the subacute phase post stroke? Abstracts: abstract_id: PUBMED:22561098 Is modified constraint-induced movement therapy more effective than bimanual training in improving arm motor function in the subacute phase post stroke? A randomized controlled trial. Objective: To compare the effect of modified constraint-induced movement therapy (mCIMT) to bimanual task-related training for patients in the subacute phase post stroke. Design: A single-blinded randomized controlled trial. Settings: Inpatient and outpatient rehabilitation clinics and the patient's home. Subjects: Thirty patients in the subacute phase post stroke (2-16 weeks) were randomized to modified constraint-induced movement therapy with an emphasis on unimanual tasks, or bimanual task-related training, emphasizing bimanual tasks. All trained with a therapist 4 hours a week for four weeks, followed by a 2-3 hours daily self-training program. Patients in the modified constraint-induced movement therapy group were supposed to wear a restraining mitt on the unaffected hand for 4 hours a day for four weeks. Main Measures: Blinded assessments at pre and post treatment and after three months with Action Research Arm Test as a primary outcome measure, Nine-Hole Peg Test and Motor Activity Log. Results: Power calculations suggested an inclusion of 60 patients, but due to recruitment difficulties the trial was stopped after an interim analysis at 30 patients. There was no difference in change (P &gt; 0.05) between the groups on any of the measures, neither at post treatment nor at follow-up assessments. From pre-intervention to follow-up assessment the modified constraint-induced movement therapy group obtained a mean change score of 17.77 (14.66) on Action Research Arm Test, the bimanual group 15.47 (13.59). Conclusion: Bimanual training was as effective as modified constraint-induced movement therapy in improving arm motor function. Wearing a mitt seems unnecessary for most patients in the subacute phase post stroke when focused affected arm training is provided. abstract_id: PUBMED:27942143 Effects of virtual reality training with modified constraint-induced movement therapy on upper extremity function in acute stage stroke: a preliminary study. [Purpose] The purpose of this study was to investigate the effects of virtual reality training combined with modified constraint-induced movement therapy on upper extremity motor function recovery in acute stage stroke patients. [Subjects and Methods] Four acute stage stroke patients participated in the study. A multiple baseline single subject experimental design was utilized. Modified constraint-induced movement therapy was used according to the EXplaining PLastICITy after stroke protocol during baseline sessions. Virtual reality training with modified constraint-induced movement therapy was applied during treatment sessions. The Manual Function Test and the Box and Block Test were used to measure upper extremity function before every session. [Results] The subjects' upper extremity function improved during the intervention period. [Conclusion] Virtual reality training combined with modified constraint-induced movement is effective for upper extremity function recovery in acute stroke patients. abstract_id: PUBMED:25776117 A one-year follow-up after modified constraint-induced movement therapy for chronic stroke patients with paretic arm: a prospective case series study. Background: Despite the confirmed short-term effects of constraint-induced movement therapy, the long-term effects have not been sufficiently verified in terms of functional improvement of the affected arm. Objective: To evaluate the long-term effects and relationship between arm use in activities of daily living and arm improvement with modified constraint-induced movement therapy in chronic stroke patients. Methods: At 1 year after completing modified constraint-induced movement therapy, arm function (Fugl-Meyer Assessment) and amount of daily arm use (motor activity log) were assessed. Results: Fourteen post-stroke patients with mild to moderate impairment of arm function were analyzed. One year after completing modified constraint-induced movement therapy, participants consistently showed improvements in arm function and amount of daily arm use (analysis of variance: Fugl-Meyer Assessment, P &lt; 0.001; Motor Activity Log, P &lt; 0.001). For the Fugl-Meyer Assessment, post-hoc tests detected significant improvements (pre versus post, P = 0.009; pre versus 1 year, P &lt; 0.0001; post versus 1 year, P &lt; 0.036). For the Motor Activity Log, post-hoc tests also detected significant improvements (pre versus post, P = 0.0001; pre versus 1 year, P &lt; 0.0001; post versus 1 year, P = 0.0014). The magnitude of the change in Fugl-Meyer Assessment score correlated significantly with the change in Motor Activity Log score (R = 0.778, P = 0.001). Conclusions: Among post-stroke patients with mild to moderate impairments of arm function, modified constraint-induced movement therapy without any other rehabilitation after intervention may improve arm function and increase arm use for 1 year. In addition, increasing arm use may represent an important factor in improving arm function, and vice versa. abstract_id: PUBMED:25246609 The effects of modified constraint-induced movement therapy combined with trunk restraint in subacute stroke: a double-blinded randomized controlled trial. Objective: To investigate the effects of a modified constraint-induced movement therapy (mCIMT) with trunk restraint in subacute stroke patients. Design: Double-blind, randomized controlled trial. Setting: Rehabilitation clinic. Participants: Eighteen subacute stroke patients with moderate motor impairment. Interventions: The patients were treated with either mCIMT combined with trunk restraint or mCIMT for 5 days per week for 4 weeks. The mCIMT combined with trunk restraint group participated in structured intervention sessions for use of the more affected upper-extremity in task-oriented activities with trunk restraint for 1 hour per day, and with the less affected upper-extremity restrained for 5 hours per day weeks. The mCIMT group followed the same protocol without trunk restraint. Main Outcome Measures: The outcome measures included the action research arm test (ARAT), the Fugl-Meyer assessment (FMA), the modified Barthel index (MBI), the motor activity log (MAL) and the maximal elbow extension angle during reaching (MEEAR) were completed at baseline and post intervention. Results: The mCIMT combined with trunk restraint group exhibited more improved in the ARAT, FMA, MBI, MAL and MEEAR compared with the mCIMT group. Statistical analyses showed significantly different in ARAT (P = 0.046), FMA (P = 0.008), MBI (P = 0.001), MAL-AOU (P = 0.024), MAL-QOM (P = 0.010) and MEEAR (P = 0.001) between groups. Conclusions: These results suggest that mCIMT combined with trunk restraint is more helpful to improve upper-extremity function than mCIMT only in subacute stroke patients with moderate motor impairment. abstract_id: PUBMED:25229024 Effect of constraint-induced movement therapy and mirror therapy for patients with subacute stroke. Objective: To evaluate the effectiveness of constraint-induced movement therapy (CIMT) and combined mirror therapy for inpatient rehabilitation of the patients with subacute stroke. Methods: Twenty-six patients with subacute stroke were enrolled and randomly divided into three groups: CIMT combined with mirror therapy group, CIMT only group, and control group. Two weeks of CIMT for 6 hours a day with or without mirror therapy for 30 minutes a day were performed under supervision. All groups received conventional occupational therapy for 40 minutes a day for the same period. The CIMT only group and control group also received additional self-exercise to substitute for mirror therapy. The box and block test, 9-hole Pegboard test, grip strength, Brunnstrom stage, Wolf motor function test, Fugl-Meyer assessment, and the Korean version of Modified Barthel Index were performed prior to and two weeks after the treatment. Results: After two weeks of treatment, the CIMT groups with and without mirror therapy showed higher improvement (p&lt;0.05) than the control group, in most of functional assessments for hemiplegic upper extremity. The CIMT combined with mirror therapy group showed higher improvement than CIMT only group in box and block test, 9-hole Pegboard test, and grip strength, which represent fine motor functions of the upper extremity. Conclusion: The short-term CIMT combined with mirror therapy group showed more improvement compared to CIMT only group and control group, in the fine motor functions of hemiplegic upper extremity for the patients with subacute stroke. abstract_id: PUBMED:26294941 Similar Effects of Two Modified Constraint-Induced Therapy Protocols on Motor Impairment, Motor Function and Quality of Life in Patients with Chronic Stroke. Modified constraint-induced movement therapy (CIMT) protocols show motor function and real-world arm use improvement. Meanwhile it usually requires constant supervision by physiotherapists and is therefore more expensive than customary care. This study compared the preliminary efficacy of two modified CIMT protocols. A two-group randomized controlled trial with pre and post treatment measures and six months follow-up was conducted. Nineteen patients with chronic stroke received 10 treatment sessions distributed three to four times a week over 22 days. CIMT3h_direct group received 3 hours of CIMT supervised by a therapist (n=10) while CIMT1.5h_direct group had 1.5 hours of supervised CIMT+1.5 hours home exercises supervised by a caregiver (n=9). Outcome measures were the Fugl-Meyer Assessment, the Motor Activity Log, and the Stroke Specific Quality of Life Scale. The modified CIMT protocols were feasible and well tolerated. Improvements in motor function, real-world arm use and quality of life did not differ significantly between treated groups receiving either 3 or 1.5 hours mCIMT supervised by a therapist. abstract_id: PUBMED:33080562 Effect of Modified Constrain Induced Movement Therapy on Fatigue and Motor Performance in Sub Acute Stroke. Objectives: There is a little available information about the fatigue status among people receiving modified constraint induced movement therapy. The study examined such changes. The effect of using restraint on motor performance was also evaluated in sub acute phase after stroke. Materials And Methods: The study was designed as two group pretest and post-test study. The experimental design included a pretest and post test measures of dependent variables fatigue and motor function. All patients were recruited from the Safdurjung Hospital. 20 patients in subacute phase of stroke (3-9 months), (N = 10) mean age±SD 51.90±15.27, MAS score mean ± SD 1.90±.316 and post stroke duration mean ±SD 6.45±2.26 were included in the experimental group and (N = 10) mean age ± SD 54.10±17.42, MAS score mean ±SD 1.52±0.52 and post stroke duration mean±SD score 4.55± 2.52 were included in the control group. The subjects in the experimental group were restrained for six hours every week day with task training for 2 h per day five times a week for three weeks and the subjects in the controlled group received task training for 2 h per day five times a week for three weeks with no restrain. Motor Performance and fatigue were measured on day to day basis by Wolf Motor Function Test Scores (WMFT) and 11th item of Barrow Neurological Institute (BNI) scale in both experimental and controlled group. Results: The restraint group exhibited significant better motor performance than the controlled group. Mean difference between Pre- WMFT scores and Post WMFT scores were (0.533±.362) as compared to controlled group (0.192±.23). No significant statistical difference was observed in the difference of mean Pre- BNI and Post - BNI scores in either of the two groups (p = .57). Difference between the experimental and controlled group in motor performance and fatigue scores were nonsignificant. Conclusions: Restraint improves motor performance in subacute therapy group and the intensive practice associated with m-CIMT may be administered without the exacerbation of fatigue. abstract_id: PUBMED:29400671 Effects of modified constraint-induced movement therapy with trunk restraint in early stroke patients: A single-blinded, randomized, controlled, pilot trial. Background: Reducing compensatory strategies during repetitive upper-limb training may be helpful in relearning motor skills. Objective: To explore the effects of modified constraint-induced movement therapy (mCIMT), additionally modified by adding trunk restraint (TR), on upper-limb function and activities of daily living (ADLs) in early post-stroke patients. Methods: Twenty-four participants with early stroke were randomly assigned to mCIMT combined with TR (mCIMT + TR) or mCIMT alone. Each group underwent twenty sessions (1 h/d, 5 d/wk for 4 weeks). Patients were assessed with the action research arm test (ARAT), the Fugl-Meyer Assessment-Upper extremity (FMA-UE), the Modified Barthel index (MBI), the Maximal elbow extension angle during reaching (MEEAR), and Motor Activity Logs (MAL-AOU and MAL-QOM). Results: The mCIMT + TR group exhibited greater improvement in the ARAT, FMA-UE, MBI, MEEAR, and MAL-AOU, and MAL-QOM than the mCIMT group. Statistical analyses showed significant differences in ARAT (P = 0.003), FMA-UE (P = 0.042), MBI (P = 0.001), MEEAR (P = 0.002), and MAL-AOU (P = 0.005) between the groups. Conclusion: Modified CIMT combined with TR may be more effective than mCIMT alone in improving upper-limb function and ADLs in patients with early stroke. abstract_id: PUBMED:27062417 Effect of Modified Constraint-Induced Movement Therapy Combined with Auditory Feedback for Trunk Control on Upper Extremity in Subacute Stroke Patients with Moderate Impairment: Randomized Controlled Pilot Trial. Background: The trunk compensatory strategy can impede the longer term functional recovery of the upper extremity (UE). The objective of this study is to investigate the effects of modified constraint-induced movement therapy (mCIMT) combined with auditory feedback for trunk control on UE function and activities of daily living among subacute stroke patients with moderate impairment. Methods: Twenty participants with hemiparesis were randomly assigned to either the mCIMT combined with auditory feedback group (n = 10) or the control group (n = 10). The mCIMT combined with auditory feedback group received the mCIMT protocol training at the same time as the auditory feedback for control of the compensatory movement of the trunk. The control group only received the mCIMT protocol. Each group underwent 20 (1 hour/day) intervention sessions (5 days/week for 4 weeks). Results: The mCIMT combined with auditory feedback group exhibited greater changes in the Action Research Arm Test (P = .027; 95% CI .429-6.171), Fugl-Meyer Assessment upper extremity (P = .034; 95% CI .360-8.039), modified Barthel Index (P = .003; 95% CI 3.465-14.536), and amount of use of motor activity log (P = .009; 95% CI .078-.476) compared to the control group. There were no significant differences in the quality of movement (P = .054, 95% CI -.005 to .457) and modified Ashworth Scale (P = .288; 95% CI -.583 to .183) grades between the 2 groups. Conclusion: These findings suggest that mCIMT combined with auditory feedback for trunk control is more helpful in improving the UE function than mCIMT alone in subacute stroke patients with moderate impairment. abstract_id: PUBMED:18212033 A study of constraint-induced movement therapy in subacute stroke patients in Hong Kong. Objective: To investigate the beneficial effect of constraint-induced movement therapy in improving the function of hemiplegic upper extremity in the early subacute stroke patients. Design: A prospective, single-blinded, randomized controlled study comparing the effectiveness of constraint-induced movement therapy or control treatment at post intervention and 12 weeks follow-up. Subjects: The inclusion criteria were 2-16 weeks after stroke, hemiparesis of the affected limb, minimal function of &gt; or =20 degrees wrist extension and &gt; or =10 degrees extension of all digits and Mini-Mental State Examination score &gt; or =17. Interventions: The intervention group underwent a programme of 10 days upper extremity training (4 hours per day) with the unaffected limb being restrained ina shoulder sling and the control group received an equivalent duration of conventional rehabilitation therapy. Main Measures: Functional level for hemiparetic upper extremity, Motor Activity Log, Action Research Arm Test and modified Barthel Index. Results: There were 23 and 20 subjects respectively in the constraint-induced movement therapy and control groups. Significant improvements were seen at post intervention and 12 weeks after constraint-induced movement therapy in functional level for hemiparetic upper extremity (P= 0.001), and in the ;amount of use' (P= 0.001) and ;how well' (P= 0.021) subscales of the Motor Activity Log. The total Action Research Arm Test score, grasp (P= 0.004), grip (P= 0.004), pinch (P= 0.032) and gross (P= 0.006) components showed significant improvement over the control group at post intervention. The grip component (P=0.019) and the total Action Research Arm Test score (P= 0.009) were superior to the control group at 12 weeks. Conclusion: Significant improvement in hand function could be achieved with constraint-induced movement therapy in subacute stroke patients, which was maintained up to 12 week follow-up. Answer: The effectiveness of modified constraint-induced movement therapy (mCIMT) compared to bimanual training in improving arm motor function in the subacute phase post-stroke has been explored in several studies, but the results are not entirely conclusive. One randomized controlled trial found no significant difference in improvement between mCIMT and bimanual task-related training, suggesting that both methods are equally effective in improving arm motor function in the subacute phase post-stroke. The study also suggested that wearing a mitt might be unnecessary when focused affected arm training is provided (PUBMED:22561098). However, other studies have shown that mCIMT, when combined with other interventions such as trunk restraint (PUBMED:25246609, PUBMED:29400671), auditory feedback for trunk control (PUBMED:27062417), or virtual reality training (PUBMED:27942143), can be more effective than mCIMT alone. These combined approaches have been reported to result in greater improvements in upper extremity function, activities of daily living, and motor performance. Furthermore, a study on the long-term effects of mCIMT in chronic stroke patients showed consistent improvements in arm function and daily arm use one year after completing the therapy, indicating that mCIMT can have lasting benefits (PUBMED:25776117). Another study compared the effects of two modified CIMT protocols and found no significant difference in improvements in motor function, real-world arm use, and quality of life between groups receiving either 3 or 1.5 hours of supervised mCIMT (PUBMED:26294941). Lastly, a study examining the effect of mCIMT on fatigue and motor performance found that restraint improves motor performance in the subacute therapy group without exacerbating fatigue (PUBMED:33080562). In conclusion, while mCIMT is effective in improving arm motor function post-stroke, it appears to be equally effective as bimanual training when used alone (PUBMED:22561098). However, when mCIMT is combined with other therapies, it may offer additional benefits (PUBMED:25246609, PUBMED:29400671, PUBMED:27062417, PUBMED:27942143). The choice of therapy may depend on individual patient needs, therapy goals, and available resources.
Instruction: Do rural and urban women experience differing rates of maternal rehospitalizations? Abstracts: abstract_id: PUBMED:26280454 Do rural and urban women experience differing rates of maternal rehospitalizations? Introduction: Conditions such as postpartum complications and mental disorders of new mothers contribute to a relatively large number of maternal rehospitalizations and even some deaths. Few studies have examined rural-urban differences in hospital readmissions, and none of them have addressed maternal readmissions. This research directly compares readmissions for patients who delivered in rural versus urban hospitals. Methods: The data for this cross-sectional study were drawn from the 2011 California Healthcare Cost and Utilization Project. Readmission rates were reported to demonstrate rural-urban differences. Generalized estimating equation models were also used to estimate the likelihood of a new mother being readmitted over time. Results: The 323 051 women who delivered with minor assistance and 158 851 women who delivered by cesarean section (C-section) were included in this study. Of those, seven maternal mortalities occurred after vaginal deliveries and 14 occurred after C-section procedures. Fewer than 1% (0.98% or 3171) women with normal deliveries were rehospitalized. The corresponding number for women delivering via C-section was 1.41% (2243). For both types of deliveries, women giving birth in a rural hospital were more likely to be readmitted. Conclusions: This is the first study examining rural-urban differences in maternal readmissions. The results indicate the importance of monitoring and potentially improving the quality of maternal care, especially when the delivery involves a C-section. More studies investigating rural health disparities in women's health are clearly necessary. abstract_id: PUBMED:36497859 Disparities in Breast Cancer Mortality Rates in Israel among Urban and Rural Women. Breast cancer is a leading cause of death. There are a number of risk factors for breast cancer mortality including parity, age, ethnicity, genetic history, and place of residence. This study examined the disparities in breast cancer-related mortality rates among women from urban areas compared to rural areas in Israel. This was a retrospective, follow-up study on mortality from breast cancer among 894,608 Israeli women born between the years of 1940 and 1960. Data was collected from the Israeli Central Bureau of Statistics, the Population Authority, the Education Ministry, and the Health Ministry. Over 80% of women lived in urban areas. A higher incidence of mortality from breast cancer in Israel was found among urban women compared to rural women (1047.8/100,000 compared to 837/100,000, respectively). Even after adjusting for sociodemographic variables, higher mortality rates were found among women from urban areas in Israel compared to women from rural areas in Israel. It is believed that environmental factors can partially explain the geographic variation of breast cancer incidence, and that breast cancer incidence is likely a complex interaction between genetic, environmental, and health factors. abstract_id: PUBMED:33183896 Pre-Pregnancy Hypertension Among Women in Rural and Urban Areas of the United States. Background: Rates of maternal mortality are increasing in the United States with significant rural-urban disparities. Pre-pregnancy hypertension is a well-established risk factor for adverse maternal and offspring outcomes. Objectives: The purpose of this study was to describe trends in maternal pre-pregnancy hypertension among women in rural and urban areas in 2007 to 2018 in order to inform community-engaged prevention and policy strategies. Methods: We performed a nationwide, serial cross-sectional study using maternal data from all live births in women age 15 to 44 years between 2007 and 2018 (CDC Natality Database). Rates of pre-pregnancy hypertension were calculated per 1,000 live births overall and by urbanization status. Subgroup analysis in standard 5-year age categories was performed. We quantified average annual percentage change using Joinpoint Regression and rate ratios (95% confidence intervals [CIs]) to compare yearly rates between rural and urban areas. Results: Among 47,949,381 live births to women between 2007 and 2018, rates of pre-pregnancy hypertension per 1,000 live births increased among both rural (13.7 to 23.7) and urban women (10.5 to 20.0). Two significant inflection points were identified in 2010 and 2016, with highest annual percentage changes between 2016 and 2018 in rural and urban areas. Although absolute rates were lower in younger compared with older women in both rural and urban areas, all age groups experienced similar increases. The rate ratios of pre-pregnancy hypertension in rural compared with urban women ranged from 1.18 (95% CI: 1.04 to 1.35) for ages 15 to 19 years to 1.51 (95% CI: 1.39 to 1.64) for ages 40 to 44 years in 2018. Conclusions: Maternal burden of pre-pregnancy hypertension has nearly doubled in the past decade and the rural-urban gap has persisted. abstract_id: PUBMED:30301357 The patterns of self-reported maternal complications in Indonesia: are there rural-urban differences? Introduction: Specific and targeted intervention is needed to reduce the relatively high maternal mortality and morbidity in Indonesia. The apparent rural-urban disparities might require different intervention strategies. Therefore, this study aims to assess patterns and sociodemographic determinants of self-reported maternal morbidity in rural and urban areas. Methods: Data from the Indonesian Demographic and Health Survey 2012 were analyzed. In this cross-sectional survey, maternal morbidity status and sociodemographic factors were obtained through questionnaire-based interviews. Women who completed the maternal morbidity questionnaire were included in the analyses (n=12 679). Descriptive statistics were used to assess differences in proportion; mixed-effects regression was used to evaluate the association between sociodemographic factors and maternal morbidity. Results: Approximately 53.3% of women reported experiencing pregnancy and/or labor complications, with a lower proportion in rural areas (51.3%) than in urban areas (55.1%). The patterns of pregnancy morbidities were similar for rural and urban women. The highest proportion of pregnancy morbidity was classified as 'other', followed by 'bleeding'. There were slight differences in the type of labor morbidities. Compared to urban women, women in rural areas had a higher prevalence of bleeding and infection, but a significantly lower prevalence of pregnancy rupture of membrane. Nulliparity was a dominant factor for self-reported maternal morbidity. Younger urban, but not rural, women represented a higher proportion of maternal morbidity. In urban areas, women with social disadvantage represented a higher proportion of morbidity. However, in rural areas, a higher prevalence of maternal morbidity was seen in women of higher socioeconomic status. These rural-urban disparities might be partially caused by differences in knowledge of maternal morbidities and sociodemographic status between rural and urban women. Conclusions: This study found similar patterns of pregnancy morbidity, but slight differences in labor morbidity patterns. Specific interventions for major maternal complications within rural or urban areas are important. There were also differences in the proportion of self-reported maternal complications by sociodemographic factors, which might be caused by differences in reporting. Intervention to improve knowledge and awareness of maternal complication is needed, specifically for socially disadvantaged women and women living in rural areas. abstract_id: PUBMED:36652639 Rural-Urban Disparities in Adverse Maternal Outcomes in the United States, 2016-2019. Objectives. To describe differences in maternal admissions to the intensive care unit (ICU) and mortality in rural versus urban areas in the United States. Methods. We performed a nationwide analysis and calculated age-standardized rates and rate ratios (RRs) of maternal ICU admission and mortality per 100 000 live births between 2016 and 2019 in rural versus urban areas. Results. From 2016 to 2019, there was no significant increase in age-standardized rates of maternal ICU admissions in rural (170.6-192.3) or urban (161.7-172.4) areas, with a significantly higher rate, albeit a relatively small difference, in rural versus urban areas (2019 RR = 1.14; 95% confidence interval [CI] = 1.04, 1.20). Maternal mortality increased in both rural (66.9-81.7 deaths per 100 000 live births) and urban (38.1-42.3) areas and was nearly 2 times higher in rural areas (2019 RR = 1.93; 95% CI = 1.71, 2.17). Conclusions. Pregnant individuals in rural areas are at higher risk for ICU admission and mortality than are their urban counterparts. Significant increases in maternal mortality occurred in rural and urban areas. Public Health Implications. Public health efforts need to focus on resource-limited rural areas to mitigate geographic disparities in maternal morbidity and mortality. (Am J Public Health. 2023;113(2): 224-227.https://doi.org/10.2105/AJPH.2022.307134). abstract_id: PUBMED:19298681 Knowledge and attitude on maternal health care among rural-to-urban migrant women in Shanghai, China. Background: In China, with the urbanization, women migrated from rural to big cities presented much higher maternal mortality rates than local residents. Health knowledge is one of the key factors enabling women to be aware of their rights and health status in order to seek appropriate health services. This study aims to assess the knowledge and attitude on maternal health care and the contributing factors to being knowledgeable among rural-to-urban migrant women in Shanghai. Methods: A cross-sectional study was conducted in a district center hospital in Shanghai where migrants gathered. Totally 475 rural-to-urban migrant pregnant women were interviewed and completed the self-administered questionnaire after obtaining informed consent. Results: The mean score of knowledge on maternal health care was 8.28 out of 12. However, only 36.6% women had attended the required 5 antenatal checks, and 58.3% of the subjects thought financial constrains being the main reason for not attending antenatal care. It was found that higher level of education (OR = 3.3, 95%CI: 1.8-3.8), husbands' Shanghai residence (OR = 4.0, 95%CI: 1.3-12.1) and better family income (OR = 3.3, 95%CI: 1.4-8.2) were associated with better knowledge. Conclusions: Rural-to-urban migrant women's unawareness of maternal health service, together with their vulnerable living status, influences their utilization of maternal health care. Tailored maternal health education and accessible services are in demands for this population. abstract_id: PUBMED:33771563 Disparities in maternal influenza immunization among women in rural and urban areas of the United States. Pregnant women and their infants are at high risk of influenza-associated complications. Although maternal immunization offers optimal protection for both, immunization rates remain low in the U.S. Women in rural communities may represent a difficult to reach group, yet immunization rates among rural-residing women have not been well evaluated. We analyzed data from the 2016-2018 Phase-8 Pregnancy Risk Assessment Monitoring System for 19 U.S. states, including 45,018 women who recently gave birth to a live infant. We compared the prevalence of influenza vaccination prior to or during pregnancy and receipt of a vaccine recommendation from a healthcare provider for rural vs. urban-residing women. We used average marginal predictions derived from multivariate logistic regression models to generate weighted adjusted prevalence ratios (aPR) and corresponding 95% CIs. Of the 45,018 respondents, 6575 resided in a rural area; 55.1% (95% CI: 53.3, 56.9) of rural-residing women and 61.3% (95% CI: 60.6, 61.9) of urban-residing women received an influenza vaccine prior to or during pregnancy. The prevalence of vaccination was 4% lower among rural-residing women (aPR: 0.96; 95% CI: 0.93, 0.99). The greatest difference in rural vs. urban immunization rates were observed for Hispanic women and women with no health insurance. Our results indicate that pregnant women residing in rural communities have lower rates of immunization. To prevent maternal and infant health disparities, it is important to better understand the barriers to maternal immunization along with efforts to overcome them. abstract_id: PUBMED:18460492 Maternal nutrition and birth size among urban affluent and rural women in India. Background: Varying results of worldwide intervention programs to pregnant mothers necessitate the need to understand the relationship between maternal nutrition and birth size among well nourished and undernourished mothers. Objective: To examine this relationship among urban affluent mothers and to compare the findings with those on rural Indian mothers. Subjects: Data collected on urban affluent mothers (n = 236) was compared with rural mothers (n = 633). Design: Mothers were contacted at 18 +/- 2 and 28 +/- 2 wk of gestation for anthropometry, dietary intakes [24-hr recall, Food Frequency Questionnaire (FFQ)] and after delivery for neonatal anthropometry. Results: Despite large differences in nutritional status of urban and rural mothers ( pre-pregnant weight 55.9 +/- 9.2 Vs 41.5 +/- 5.2 kg, respectively) maternal fat intakes at 18 wk were associated with birth weight (p &lt; 0.05), length (p &lt; 0.01) and triceps skin fold thickness (p &lt; 0.05) of the newborn in urban and rural mothers. Consumption of fruits was associated with birth length (p &lt; 0.05) in urban (18wk) and with birth weight (p &lt; 0.01) and length (p &lt; 0.01) in rural (28wk) mothers, when their energy intakes were low. Maternal consumption of milk too, was associated with newborn's triceps (p &lt; 0.01) in urban (28wk) while with birth weight (p &lt; 0.05) and length (p &lt; 0.05) in rural (18wk) mothers. The findings mainly underscore the importance of consumption of micronutrient rich foods, when energy intakes are limiting during pregnancy, for improving birth size. Conclusions: Creating nutritional awareness and motivating rural mothers for consuming micronutrient rich foods like green leafy vegetables and seasonal fruits that are easily available in rural areas, will be a much affordable solution for combating the problem of low birth weight rather than waiting for improvement in the existing nationwide programs for pregnant women. abstract_id: PUBMED:27672220 Maternal morbidity and perinatal outcomes among women in rural versus urban areas. Background: Most studies examining geographic barriers to maternity care in industrialized countries have focused solely on fetal and neonatal outcomes. We examined the association between rural residence and severe maternal morbidity, in addition to perinatal mortality and morbidity. Methods: We conducted a retrospective population-based cohort study of all women who gave birth in British Columbia, Canada, between Jan. 1, 2005, and Dec. 31, 2010. We compared maternal mortality and severe morbidity (e.g., eclampsia) and adverse perinatal outcomes (e.g., perinatal death) between women residing in areas with moderate to no metropolitan influence (rural) and those living in metropolitan areas or areas with a strong metropolitan influence (urban). We used logistic regression analysis to obtain adjusted odds ratios (ORs) and 95% confidence intervals (CIs). Results: We found a significant association between death or severe maternal morbidity and rural residence (adjusted OR 1.15, 95% CI 1.03-1.28). In particular, women in rural areas had significantly higher rates of eclampsia (adjusted OR 2.70, 95% CI 1.79-4.08), obstetric embolism (adjusted OR 2.16, 95% CI 1.14-4.07) and uterine rupture or dehiscence (adjusted OR 1.96, 95% CI 1.42-2.72) than women in urban areas. Perinatal mortality did not differ significantly between the study groups. Infants in rural areas were more likely than those in urban areas to have a severe neonatal morbidity (adjusted OR 1.14, 95% CI 1.02-1.29), to be born preterm (adjusted OR 1.06, 95% CI 1.01-1.11), to have an Apgar score of less than 7 at 5 minutes (adjusted OR 1.24, 95% CI 1.13-1.31) and to be large for gestational age (adjusted OR 1.14, 95% CI 1.10-1.19). They were less likely to be small for gestational age (adjusted OR 0.90, 95% CI 0.85-0.95) and to be admitted to an neonatal intensive care unit (NICU) (adjusted OR 0.36, 95% CI 0.33-0.38) compared with infants in urban areas. Interpretation: Compared with women in urban areas, those in rural areas had higher rates of severe maternal morbidity and severe neonatal morbidity, and a lower rate of NICU admission. Maternity care providers in rural regions need to be aware of potentially life-threatening maternal and perinatal complications requiring advanced obstetric and neonatal care. abstract_id: PUBMED:17708704 The comparison of twinning rates between urban and rural areas in China. Based on the birth record data from the National Vital Statistics in the 1990 Census of China, the present study analyzed the differences between urban and rural areas on monozygotic (MZ) and dizygotic (DZ) twin rates by maternal age in 1989. The twins by zygosity were calculated with Weinberg's differential method. Results show that the MZ and DZ twinning rates in China were associated with maternal age and that there were substantial differences between urban and rural areas. The MZ twinning rates in urban and rural areas were 2.36 pairs and 2.11 pairs per 1000 deliveries respectively, significantly lower than that in most studied populations. Furthermore, our analysis indicated that MZ twinning rates remained relatively constant for mothers under the age group of 36 to 38 years, but rose over this age group in both areas, albeit with a different slope. The DZ twinning rates were strikingly affected by maternal age, but the age for peak DZ rates was found within the age group of 33 to 35 years. In all maternal age groups except for 24 to 26 years, the DZ twinning rates in urban areas were higher than in rural areas. It remains unclear as to why the DZ twinning rates reversed to reach higher values within the older maternal age groups in China, but it is almost certain that the high twinning rates had nothing to do with in vitro fertilization. Answer: Yes, rural and urban women experience differing rates of maternal rehospitalizations. According to a study that directly compared readmissions for patients who delivered in rural versus urban hospitals, women giving birth in a rural hospital were more likely to be readmitted (PUBMED:26280454). This study included 323,051 women who delivered with minor assistance and 158,851 women who delivered by cesarean section (C-section), finding that fewer than 1% of women with normal deliveries were rehospitalized, and the corresponding number for women delivering via C-section was 1.41%. For both types of deliveries, rural women had higher readmission rates.
Instruction: Is internal podalic version a lost art? Abstracts: abstract_id: PUBMED:24056668 Rediscovering the lost art of internal podalic version: two case reports. Internal podalic version was once widely used for the management of placenta praevia. In modern obstetrics, it has been replaced by caesarean section which can lead to uterine rupture in a subsequent pregnancy. However, if the foetus is dead or very small, internal podalic version still may be carried out without major complication to avoid unnecessary caesarean section in developing countries where follow up is not satisfactory and the foetal prognosis is poor. abstract_id: PUBMED:25283062 Does internal podalic version of the non-vertex second twin still have a place in obstetrics? A Danish national retrospective cohort study. Objective: Investigate the rate of internal podalic version followed by breech extraction for a second non-vertex twin with the first delivered vaginally, and compare neonatal outcome with emergency cesarean section. Design: Cohort study. Setting: National Danish Registers. Population: Twin pregnancies (1997-2012) with gestational age ≥34 weeks; first twin delivered vaginally, second by internal podalic version and breech extraction or cesarean section. Methods: Data were collected from the Danish National Patient Register and the Danish National Birth Register. Main Outcome Measures: Rates of delivery mode, 5-min Apgar score, asphyxia, umbilical cord pH, admission to neonatal intensive care unit, treatment by mechanical ventilation, and experience level of obstetricians performing internal podalic version. Results: 457 births were available for analysis: 39 cases of internal podalic version and breech extraction and 418 cesarean section cases for second twin. Compared with the cesarean section group, the internal podalic version group had lower rates of asphyxia. Apgar scores and umbilical cord pH levels were not significantly different, although with a tendency to be higher in the internal version than the cesarean section group, however, fewer cases needed mechanical ventilation. Thirty internal versions and breech extractions were performed by obstetricians with &gt;5 years clinical experience and three by trainees. Conclusion: Cesarean sections for a second twin seem to have been frequent during the last 15 years while internal podalic version is a vanishing procedure. A slight tendency for better neonatal outcome was found in the internal podalic version and extraction group than cesarean section. abstract_id: PUBMED:31016819 Internal podalic version for delivery of high floating head during cesarean section and neonatal outcome. Aim: To evaluate the technique of internal podalic version for delivery of high floating head during cesarean section as an alternative for other methods for its delivery and its effects on neonatal outcome. Methods: This study used clinical randomized controlled study design. Three groups of patients each group has 50 patients who attended for elective cesarean section and found to have high floating head during operation, Group I: the babies were delivered by internal podalic version. Group II: the babies were delivered by short curved forceps. Group III: the babies were delivered by vacuum. All groups were compared in regard to time required for delivery of baby, fetal condition, maternal complications, fetal complications and postoperative complications. Results: Hundred-and-fifty patients were selected in the course of preparation of elective cesarean section, all had high floating fetal head, and they were randomly distributed into three groups, each group had 50 patients. There was significant relationship for the favor of Group 1 (study group) as regard the time required for the maneuver as calculated from the end of uterine incision till delivery of whole baby, (24.86, 59.48,54.52 s for Group I, Group II, Group III, respectively) with P-value &lt;0.00001*. Also there was significant relationship as regard the intraoperative maternal complications for the favor of Group I with P-value =0.004. While there was an insignificant relationship between the three groups as regard preoperative data, APGAR score, fetal birth weight, postoperative pain and postoperative maternal data. Despite that there was a significant difference as regard the postoperative neonatal complications, with P-value =0.002*, all these complications were minor and did not need long-term treatment. Conclusion: Use of internal podalic version for delivery of high floating head during cesarean section is a safe and rapid technique, with less maternal and neonatal complications. abstract_id: PUBMED:9721781 Intravenous nitroglycerin for internal podalic version of the second twin in transverse lie. Objective: To report our experience with high doses (0.1-0.2 mg per 10 kg pregnant weight) of intravenous (IV) nitroglycerin as a uterine relaxing agent for managing internal podalic version of the second twin in transverse lie with unruptured membranes. Methods: Between August 1994 and December 1997, we managed 22 cases of internal podalic version of the second twin with the administration of high doses of IV nitroglycerin. Results: Twenty internal podalic versions were completed successfully, and two cases failed. One failure was considered not related to IV nitroglycerin because the patient had a panic attack, requiring general anesthesia for sedation. The internal podalic version then succeeded. The patient with true failure of IV nitroglycerin required emergency cesarean because of acute fetal bradycardia and a nonrelaxed uterus. This was the only nontransverse lie, but with a very high face presentation. One internal podalic version was complicated by hemorrhage (2000 mL). Conclusion: Intravenous nitroglycerin to induce uterine atonia, with epidural analgesia, avoids general anesthesia and makes internal podalic version easier. In 22 cases (with success in 20) of internal podalic version of the second twin in transverse lie with unruptured membranes, IV nitroglycerin induced transient and prompt uterine relaxation without affecting maternal and fetal outcomes. abstract_id: PUBMED:9031917 Intravenous nitroglycerin for intrapartum internal podalic version of the second non-vertex twin. Objective: Authors report their experience of intravenous nitroglycerin as uterine relaxing agent for managing successfully internal podalic version of the second twin. Methods: From a retrospective study including nine observations of internal podalic version of the second non vertex twin performed with administration of intravenous nitroglycerin, between August 1994 and February 1996, authors compare their results with those reported elsewhere. Results: Two failures of internal podalic version with nitroglycerin have been observed. But one failure is not considered to be due to the NTG: it was a patient, who had a panic attack necessitating a general anesthesia for sedative purpose. The internal podalic version succeeded. The true failure of NTG needed an emergency cesarean due to acute fetal distress and a non relaxing uterus. One internal podalic version was complicated by hemorrhage. The intravenous NTG used to induce uterine atonia associated with epidural-analgesia to relief pain avoiding general anesthesia makes internal podalic version easier. Conclusion: Our results confirmed those already reported. That intravenous nitroglycerin (NTG) injection induces a transient and prompt uterine relaxation required for internal podalic version without affecting maternal and fetal prognosis. abstract_id: PUBMED:3347429 Internal podalic version with unruptured membranes for the second twin in transverse lie. Internal podalic version of the second twin in transverse lie has usually been performed after first rupturing the membranes of the second amniotic sac. In a prospective study on internal podalic version with unruptured amniotic membranes, 11 second-born twins in transverse lie were successfully delivered, with good neonatal outcome and without birth injuries. From our experience, it appears that this modified technique, when used within the presented guidelines, offers a reasonable alternative for the safe delivery of the second twin. abstract_id: PUBMED:11590284 Is internal podalic version a lost art? Optimum mode of delivery in transverse lie. Aim: To study the changing trend in the delivery of transverse lie, and its effect on neonatal outcome, in a developing country. Subjects And Method: This is a retrospective study involving records of 12 years of all patients with transverse lie. Neonatal outcome of births by internal podalic version (IPV) and lower segment caesarean section (LSCS) were compared. Results: In the first six years, 37.3% of transverse lie underwent IPV and 62.7%, LSCS. In the next six years, 15.8% underwent IPV and 84.2%, LSCS. 87.7% and 12.3% of live babies were delivered by LSCS and IPV respectively. 52% of the live born IPV were discharged compared to 95% of LSCS babies. Neonatal outcome was best when IPV was performed on second twin. Conclusion: IPV has a role in the delivery of second twin, pre-viable and dead babies. abstract_id: PUBMED:37278993 Real-time ultrasound to assist during a vaginal breech delivery. We report a novel application of intrapartum sonography, herein used to assist the internal podalic version and the vaginal delivery of a transverse-lying second twin. Following the vaginal delivery of the first cephalic twin, the internal podalic version was performed under continuous ultrasound vision, leading to the uncomplicated breech delivery of a healthy neonate. abstract_id: PUBMED:29324597 Internal Podalic Version and Breech Extraction: Enhancing Realistic Sensations in a Simulation Model. Background: Experience with internal podalic version and breech extraction is diminishing, especially in the younger generation of obstetricians. Simulation training is essential to teach and maintain these skills. Instrument: We present a mannequin-based simulation method that enhances realistic sensations during training. Experience: By positioning the fetal mannequin into a thin plastic bag filled with water, the impression of palpating, grabbing, and pulling the fetal feet through the plastic bag is similar to reaching feet through intact membranes. The unique approach of this realistic simulation model makes the experience fun and memorable, which contributes to the didactic value and success of the model. Conclusion: The simulation model we present mimics the situation and sensation obstetricians experience while performing internal podalic version and breech extraction. abstract_id: PUBMED:22727740 Rupture of membranes in case of internal podalic version: a risk for cesarean section on the second twin Objectives: To evaluate if internal version with ruptured membranes is a risk factor of cesarean section for the second twin. Patients And Methods: Two hundred and fifty-nine twins vaginal deliveries after 33 weeks of gestation from 1997 to 2009 in a level 3 maternity. A retrospective case-control study comparing two groups: cases of cesarean section on second twin and five twins vaginal deliveries following the case. Active management of the second twin delivery was performed with a short intertwin delivery. Results: Eleven cesarean sections on the second twin were performed (4.2%). The main indication was failure of internal version. The risk of cesarean section was significantly greater when the internal version was performed with ruptured membranes (OR: 25.4 IC 95% [2.3-275.7] P&lt;0.003) and when intertwin time delivery interval was increased (8.1±5.1 vs 16.7±6.3, P&lt;0.001). Discussion And Conclusion: The rupture of amniotic membranes before or during the internal podalic version is associated with a risk of failure and cesarean for the second twin. We recommend to perform the internal podalic version with unruptured membranes according to the French recommendations. Answer: Internal podalic version is a procedure that has seen a decline in practice, particularly in developed countries, where cesarean sections have become more common for managing non-vertex presentations and complications such as placenta praevia. However, it is not entirely a lost art. The procedure is still relevant and practiced in certain situations, particularly in developing countries or when the fetal prognosis is poor, to avoid unnecessary cesarean sections where follow-up care may not be satisfactory (PUBMED:24056668). In the context of twin deliveries, internal podalic version followed by breech extraction for the second non-vertex twin, when the first twin is delivered vaginally, is a procedure that has become less frequent over the years. A Danish national retrospective cohort study found that cesarean sections for the second twin have been frequent over the last 15 years, while internal podalic version is a vanishing procedure. However, the study also noted a slight tendency for better neonatal outcomes in the internal podalic version and extraction group compared to the cesarean section group (PUBMED:25283062). Moreover, internal podalic version has been evaluated as a technique for delivering a high floating head during cesarean section, and it has been found to be a safe and rapid technique with fewer maternal and neonatal complications compared to other methods such as forceps or vacuum delivery (PUBMED:31016819). The use of intravenous nitroglycerin as a uterine relaxing agent has been reported to facilitate internal podalic version, making the procedure easier and avoiding the need for general anesthesia (PUBMED:9721781; PUBMED:9031917). In summary, while internal podalic version is less commonly performed today, it is not a lost art. It remains a valuable skill in specific clinical scenarios, such as twin deliveries, and in settings where cesarean sections may not be the best option. The procedure continues to be taught and practiced, albeit less frequently, and is enhanced by simulation models to maintain the skill among obstetricians (PUBMED:29324597).
Instruction: Depression and dependency: distinct or overlapping constructs? Abstracts: abstract_id: PUBMED:1743009 Alexithymia and depression: distinct or overlapping constructs? Previous research showing moderate to high correlations between the Toronto Alexithymia Scale (TAS) and Beck Depression Inventory (BDI) has generated controversy as to whether alexithymia and depression are distinct or overlapping constructs. The present study addressed this controversy using the statistical method of factor analysis. In a sample of undergraduate university students, a correlation matrix comprising items from both the TAS and BDI yielded a four-factor solution with virtually no overlap of the significant factor loadings for the items from each scale, and with the factors corresponding closely with their respective construct. These results were replicated and cross-validated with a sample of psychiatric outpatients. The findings support the view that alexithymia is a construct that is distinct and separate from depression. abstract_id: PUBMED:28608767 Anxiety and depression: toward overlapping and distinctive features. This Special Issue of Cognition and Emotion addresses one of the cardinal concerns of affective science, which is overlapping and distinctive features of anxiety and depression. A central finding in the study of anxiety and depression is that they are moderately highly correlated with each other. This leads us to the question: What is behind this co-occurrence? Possible explanations relate to poor discriminant validity of measures; both emotional states are associated with negative affect; stressful life events; impaired cognitive processes; they share a common biological/genetic diathesis. However, despite a set of common (nonspecific) features, anxiety and depression are clearly not identical emotional states. Differences between them might be best viewed, for example, through their heterogeneous and multi-layered nature, adaptive functions and relations with regulatory processes, positive affect, and motivation or complex cognitive processes. In this introduction we consider several approaches (e.g. functional approach; tripartite model and content-specificity hypothesis) to which most research in this Special Issue is relevant. In addition, we have asked contributors to this Special Issue to indicate how their own studies on comparisons between anxiety and depression and models on anxiety and depression move this area of research to more mature science with applicability. abstract_id: PUBMED:23567406 Are depression and frailty overlapping syndromes in mid- and late-life? A latent variable analysis. Background: Depression and frailty both predict disability and morbidity in later life. However, it is unclear to what extent these common geriatric syndromes represent overlapping constructs. Objective: To examine the joint relationship between the constructs of depression and frailty. Methods: Data come from 2004-2005 wave of the Baltimore Epidemiologic Catchment Area Study, and the analysis is limited to participants 40 years and older, with complete data on frailty and depression indicators (N = 683). Depression was measured using the Diagnostic Interview Schedule, and frailty was indexed by modified Fried criteria. A series of confirmatory latent class analyses were used to assess the degree to which depression and frailty syndromes identify the same populations. A latent kappa coefficient (κl) was also estimated between the constructs. Results: Confirmatory latent class analyses indicated that depression and frailty represent distinct syndromes rather than a single construct. The joint modeling of the two constructs supported a three-class solution for depression and two-class solution for frailty, with 2.9% categorized as severely depressed, 19.4% as mildly depressed, and 77.7% as not depressed, and 21.1% categorized as frail and 78.9% as not frail. The chance-corrected agreement statistic indicated moderate correspondence between the depression and frailty constructs (κl: 66, 95% confidence interval: 0.58-0.74). Conclusions: Results suggest that depression and frailty are interrelated concepts, yet their operational criteria identify substantively overlapping subpopulations. These findings have implications for understanding factors that contribute to the etiology and prognosis of depression and frailty in later life. abstract_id: PUBMED:15305087 Exploring the relations between depression, somatization, dissociation and alexithymia--overlapping or independent constructs? Background: The aim of this study was to extend our knowledge of associations among the constructs of alexithymia, depression, somatization and dissociation. Sampling And Methods: 924 nonclinical subjects answered questions about depression (21-item Beck Depression Inventory), somatization (13-item somatization part of Symptom Check List-90), dissociation (28-item Dissociative Experiences Scale) and alexithymia (20-item Toronto Alexithymia Scale). In addition, a 12-item General Health Questionnaire (GHQ-12) was administered to detect psychiatric distress among subjects. Results: The results suggested that there was a significant clinical correlation between somatization, dissociation, depression and alexithymia (rho varied from 0.31 to 0.56). The principal component analysis revealed the presence of four components: depression, somatization, dissociation and alexithymia. The use of factor scores diminished the covariance between measures (rho varied from -0.10 to 0.01 between the factor scores). There was almost no correlation between the dissociation factor (rho = 0.06) and alexithymia factor (rho = 0.09) scores and general distress (GHQ-12). Conclusions: This study suggests that while somatization, dissociation, depression and alexithymia are distinct constructs, they correlate to a considerable extent. The use of factor analysis and factor scores should be considered to diminish covariance between the above constructs. Comparing results between factored and unfactored results may prove illuminating. As a case in point, the results suggest that the part of dissociation that coincides with other constructs (overlaps) is associated with distress, whereas the distinct part of dissociation (no shared covariance) is not associated with distress. The same applies to the alexithymia construct. Longitudinal studies are needed to show whether there is a trait such as a relatively stable dissociation component and also whether a separate state-dependent dissociation component exists that is associated with coincident distress, somatization and depression. abstract_id: PUBMED:11349244 Are alexithymia and depression distinct or overlapping constructs?: a study in a general population. The aim of this study was to investigate the relationship between alexithymia and depression in a general population sample (N = 1,888), as measured by the 20-item Toronto Alexithymia Scale (TAS-20) and the 21-item Beck Depression Inventory (BDI-21), using factor analysis. The items of the TAS-20 and the BDI-21 loaded on separate factors with only a minor overlap concerning physical worries. However, in a subset of subjects, who were both alexithymic and depressed, loadings were highly overlapping. These findings suggest that alexithymia and depression may be highly associated. Another conclusion might be that psychometric properties of the TAS should be further developed to make differentiation between alexithymia and depression possible. abstract_id: PUBMED:9476747 Depression and dependency: distinct or overlapping constructs? Background: Previous research has found a significant positive relationship between dependency and level of depression. The aim of the present study, using factor analysis method, is to test the hypothesis that dependency and depression constitute distinct dimensions. Methods: In a sample of 202 university students, a correlation matrix comprising items from both the Interpersonal Dependency Inventory and the abridged form of the Beck Depression Inventory was analysed using a principal components analysis followed by an orthogonal varimax rotation. Results: Evidence for a four-factor solution was found with almost no overlap of the significant factor loadings for the items from each scale, and with the factors corresponding closely with their respective construct. Conclusion: The findings support the view that depression and dependency constitute different constructs. Limitation: The results must be confirmed on clinical and heterogeneous nonclinical samples. Clinical Relevance: The independency between depression and dependency could lead to different therapeutic strategies notably in 'socially dependent' type of depression. abstract_id: PUBMED:25995123 Constructs of depression and distress in diabetes: time for an appraisal. Depression presents in roughly 20% of people with diabetes worldwide, and adversely affects quality of life and treatment outcomes. The causes of depression in diabetes are poorly understood, but research suggests a bi-directional association, at least for type 2 diabetes. Inconsistent findings regarding prevalence and depression treatment outcomes in patients with diabetes seem partly attributable to inconsistencies in the definition and measurement of depression and in distinguishing it from diabetes-distress, a psychological concept related to depression. We review evidence suggesting that diabetes-distress and depression are correlated and overlapping constructs, but are not interchangeable. Importantly, diabetes-distress seems to mediate the association between depression and glycaemic control. We propose a model to explain the direct and indirect effects of depression and diabetes-distress on glycaemic control. Additionally, using emerging insights from data-driven approaches, we suggest three distinct symptom profiles to define depression in patients with diabetes that could help explain differential associations between depression and metabolic abnormalities, and to tailor interventions for depression. Future research should focus on further refining depression profiles in patients with diabetes, taking into account the natural history of diabetes and depression, clinical characteristics, and diabetes-distress. The assessment of diabetes-distress and depression in research and clinical practice will be essential to identify high-risk patients with different mental health needs. abstract_id: PUBMED:34611352 Overlapping genes in natural and engineered genomes. Modern genome-scale methods that identify new genes, such as proteogenomics and ribosome profiling, have revealed, to the surprise of many, that overlap in genes, open reading frames and even coding sequences is widespread and functionally integrated into prokaryotic, eukaryotic and viral genomes. In parallel, the constraints that overlapping regions place on genome sequences and their evolution can be harnessed in bioengineering to build more robust synthetic strains and constructs. With a focus on overlapping protein-coding and RNA-coding genes, this Review examines their discovery, topology and biogenesis in the context of their genome biology. We highlight exciting new uses for sequence overlap to control translation, compress synthetic genetic constructs, and protect against mutation. abstract_id: PUBMED:34566713 Excessive Daytime Sleepiness in Depression and Obstructive Sleep Apnea: More Than Just an Overlapping Symptom. Excessive daytime sleepiness (EDS) is a significant public health concern, with obstructive sleep apnea (OSA) being a common cause, and a particular relationship exists with the severity of depression. A literature search on OSA, depression, and EDS was performed in PubMed. The chosen evidence was limited to human studies. Available evidence was systematically reviewed to ascertain the association of EDS with depression and OSA according to the general population and some specific population subgroups. In addition, effectiveness of continuous positive airway pressure (CPAP) was analyzed as a standard therapy for improving EDS and depression in patients with OSA. In the general population, patients with OSA, and some other subpopulations, the review contributed to: (1) delineating the prevalence of EDS; (2) substantiating the relationship of EDS and depression; (3) presenting the relationship between EDS and OSA; and (4) revealing that the duration of CPAP is crucial for its therapeutic effects in improving EDS and depressive symptoms in patients with OSA. abstract_id: PUBMED:33381689 Rules of Conduct for Behavior Analysts in the Presence of Hypothetical Constructs: A Commentary on Eckard and Lattal (2020). Eckard and Lattal (2020) summarized the behavioristic view of hypothetical constructs and theories, and then, in a novel and timely manner, applied this view to a critique of internal clock models of temporal control. In our three-part commentary, we aim to contribute to the authors' discussion by first expanding upon their view of the positive contributions afforded by constructs and theories. We then refine and question their view of the perils of reifying constructs and assigning them causal properties. Finally, we suggest to behavior analysts four rules of conduct for dealing with mediational theories: tolerate constructs proposed with sufficient reason; consider them seriously, both empirically and conceptually; develop alternative, behavior-analytic models with overlapping empirical domains; and contrast the various models. Through variation and selection, behavioral science will evolve. Answer: Depression and dependency have been investigated to determine whether they are distinct or overlapping constructs. A study using factor analysis on a sample of university students, which included items from both the Interpersonal Dependency Inventory and the Beck Depression Inventory, found evidence for a four-factor solution with almost no overlap of the significant factor loadings for the items from each scale. The factors corresponded closely with their respective construct, supporting the view that depression and dependency are different constructs (PUBMED:9476747). This suggests that while there may be a positive relationship between dependency and level of depression, they are not the same thing and could lead to different therapeutic strategies, particularly in the 'socially dependent' type of depression. However, these results need to be confirmed on clinical and heterogeneous nonclinical samples to ensure their generalizability.
Instruction: Functional hepatic recovery after xenotransplantation of cryopreserved fetal liver cells or soluble cell-factor administration in a cirrhotic rat model: are viable cells necessary? Abstracts: abstract_id: PUBMED:17725601 Functional hepatic recovery after xenotransplantation of cryopreserved fetal liver cells or soluble cell-factor administration in a cirrhotic rat model: are viable cells necessary? Background And Aim: Chronic liver failure results in the decrease of the number of functioning hepatocytes. It dictates the necessity of using exogenous viable cells or/and agents that can stimulate hepatic regenerative processes. Fetal liver contains both hepatic and hematopoietic stem cells with high proliferative potential, which may replace damaged cells. Also, immature cells produce fetal-specific factors which may support the injured liver. Our aim was to test the ability of human fetal liver cells and cell-free fetal-specific factors of non-hepatic origin to stimulate recovery processes in an experimental model of carbon tetrachloride-induced cirrhosis in rats. Methods: Cirrhotic rats were intrasplenically injected with fetal liver cells (1 x 10(7) cells/0.3 mL medium) or cell-free fetal-specific factors (0.3 mL/1 mg protein). Control groups received medium alone. Serum indexes, hepatic functions, and morphology were evaluated for 15 days. Result: Human fetal liver cell transplantation was shown to abrogate the mortality of cirrhotic animals, to improve serum markers, and to restore liver mitochondrial function and detoxification. Morphological patterns of liver recovery were observed by histology. In comparison, an injection of fetal-specific factors produced similar functional recovery, whilst a more limited liver regeneration was observed by histology. Conclusions: The positive effects of fetal liver cell and cell-free fetal-specific factors in experimental cirrhosis may result from the presence of stage-specific factors activating hepatocellular repair. abstract_id: PUBMED:16778153 Properties of cryopreserved fetal liver stem/progenitor cells that exhibit long-term repopulation of the normal rat liver. We have previously achieved a high level of long-term liver replacement by transplanting freshly isolated embryonic day (ED) 14 rat fetal liver stem/progenitor cells (FLSPCs). However, for most clinical applications, it will be necessary to use cryopreserved cells that can effectively repopulate the host organ. In the present study, we report the growth and gene expression properties in culture of rat FLSPCs cryopreserved for up to 20 months and the ability of cryopreserved FLSPCs to repopulate the normal adult rat liver. After thawing and placement in culture, cryopreserved FLSPCs exhibited a high proliferation rate: 49.7% Ki-67-positive on day 1 and 34.7% Ki-67-positive on day 5. The majority of cells were also positive for both alpha-fetoprotein and cytokeratin-19 (potentially bipotent) on day 5. More than 80% of cultured cells expressed albumin, the asialoglycoprotein receptor, and UDP-glucuronosyltransferase (unique hepatocyte-specific functions). Expression of glucose-6-phosphatase, carbamyl phosphate synthetase 1, hepatocyte nuclear factor 4alpha, tyrosine aminotransferase, and oncostatin M receptor mRNAs was initially negative, but all were expressed on day 5 in culture. After transplantation into the normal adult rat liver, cryopreserved FLSPCs proliferated continuously, regenerated both hepatocytes and bile ducts, and produced up to 15.1% (mean, 12.0% +/- 2.0%) replacement of total liver mass at 6 months after cell transplantation. These results were obtained in a normal liver background under nonselective conditions. This study is the first to show a high level of long-term liver replacement with cryopreserved fetal liver cells, an essential requirement for future clinical applications. abstract_id: PUBMED:18333813 Soluble factor-dependent in vitro growth and maturation of rat fetal liver cells in a three-dimensional culture system. Fetal liver cell fractions are potent sources of cells for future liver tissue engineering, by virtue of their high proliferation capacity and their potential for hepatic maturation. Recently, some types of hepatic differentiation agents have been identified from findings in stem cell biology. We therefore investigated the in vitro growth and maturation of rat fetal liver cells isolated from 17-day-old pregnant rats in poly-L-lactic acid three-dimensional (3D) macroporous scaffolds in the presence of soluble factors, such as a combination of hepatocyte growth factor, fibroblast growth factor-1, and fibroblast growth factor-4, oncostatin M, and sodium butyrate. Inclusion of all these factors in the 3D culture induced higher levels of hepatic functions and well maintained these enhanced levels during 2 weeks of culture, whereas in the monolayer culture, such functional enhancement was gradually lost after 1 week. The finally achieved functions on a per-cell basis in the 3D culture with all of the soluble factors were comparable to those of adult hepatocytes. We therefore conclude that the 3D culture system shows promise for the in vitro maturation of fetal liver cells as a means of preconditioning of the cells for engineered liver tissue equivalents in future transplantation studies. abstract_id: PUBMED:36831196 Therapeutic Cell Repopulation of the Liver: From Fetal Rat Cells to Synthetic Human Tissues. Progenitor cells isolated from the fetal liver can provide a unique cell source to generate new healthy tissue mass. Almost 20 years ago, it was demonstrated that rat fetal liver cells repopulate the normal host liver environment via a mechanism akin to cell competition. Activin A, which is produced by hepatocytes, was identified as an important player during cell competition. Because of reduced activin receptor expression, highly proliferative fetal liver stem/progenitor cells are resistant to activin A and therefore exhibit a growth advantage compared to hepatocytes. As a result, transplanted fetal liver cells are capable of repopulating normal livers. Important for cell-based therapies, hepatic stem/progenitor cells containing repopulation potential can be separated from fetal hematopoietic cells using the cell surface marker δ-like 1 (Dlk-1). In livers with advanced fibrosis, fetal epithelial stem/progenitor cells differentiate into functional hepatic cells and out-compete injured endogenous hepatocytes, which cause anti-fibrotic effects. Although fetal liver cells efficiently repopulate the liver, they will likely not be used for human cell transplantation. Thus, utilizing the underlying mechanism of repopulation and developed methods to produce similar growth-advantaged cells in vitro, e.g., human induced pluripotent stem cells (iPSCs), this approach has great potential for developing novel cell-based therapies in patients with liver disease. The present review gives a brief overview of the classic cell transplantation models and various cell sources studied as donor cell candidates. The advantages of fetal liver-derived stem/progenitor cells are discussed, as well as the mechanism of liver repopulation. Moreover, this article reviews the potential of in vitro developed synthetic human fetal livers from iPSCs and their therapeutic benefits. abstract_id: PUBMED:32772793 Direct effects of transforming growth factor-β1 signaling on the differentiation fate of fetal hepatic progenitor cells. Aim: To investigate direct roles of TGF-β1 signaling in the differentiation process of fetal hepatic progenitor cells (HPCs). Materials &amp; methods: Exogenous TGF-β1 and SB431542 were added into fetal HPCs. Then, SB431542 was intraperitoneally injected into pregnant mice for 8 days. Results: Fetal HPCs treated with TGF-β1 differentiated into cholangiocytes. However, hepatocyte marker was highly expressed after inhibiting TGF-β1 signaling. In vivo, hematopoietic cells were gradually replaced with liver cells and TGF-β1 expression was evidently decreased as fetal liver developed. Inhibition of TGF-β1 signaling caused increase of ALB+ cells, but CK19 expression was more obvious in control mice livers. Conclusion: TGF-β1 signaling may play decisive roles in fetal HPCs differentiation into functional hepatocytes or cholangiocytes. abstract_id: PUBMED:16700327 Cryopreserved fetal liver cell transplants support the chronic failing liver in rats with CCl4-induced cirrhosis. Hepatocyte transplantation is a promising method for supporting hepatic function in a broad spectrum of liver diseases. The aim of this work was to test the efficacy of human fetal liver cells to support the chronic failing liver in an experimental model of carbon tetrachloride (CCl4)-induced cirrhosis in rats. Liver cirrhosis was induced by intraperitoneal administration of CCl4 at a dose of 0.2 ml (50% v/v solution)/100 g body weight, twice a week for 3 months in rats. Ten days after stopping CCl4 administration (experimental day 0), rats received intrasplenic injection of cryopreserved fetal liver cells (FLC, 1 x 10(7) cells in 0.3 ml medium). As a cirrhotic control group, CCl4-induced cirrhotic rats were used with intrasplenic injection of an equal volume of medium alone. Animals were sacrificed on experimental day 15. Human fetal liver cell transplantation almost completely prevented the death of cirrhotic animals during the 2 weeks after treatment, while high ongoing mortality was seen in the cirrhotic control group. Cell transplantation into the spleen normalized total bilirubin and TBARSs levels and increased albumin levels in blood serum, as well as restoring mitochondrial function and liver detoxification function (assessed by cytochrome P450 contents and activity) compared with the activities seen in the cirrhosis control group. In parallel with this restoration of biochemical and functional liver indices, morphological patterns of liver recovery or regeneration after liver cell transplantation were demonstrated in day 15 samples by light microscopy. These were absent in the group that had received only medium alone. abstract_id: PUBMED:30417530 4 in 1: Antibody-free protocol for isolating the main hepatic cells from healthy and cirrhotic single rat livers. Liver cells isolated from pre-clinical models are essential tools for studying liver (patho)physiology, and also for screening new therapeutic options. We aimed at developing a new antibody-free isolation method able to obtain the four main hepatic cell types (hepatocytes, liver sinusoidal endothelial cells [LSEC], hepatic macrophages [HMΦ] and hepatic stellate cells [HSC]) from a single rat liver. Control and cirrhotic (CCl4 and TAA) rat livers (n = 6) were perfused, digested with collagenase and mechanically disaggregated obtaining a multicellular suspension. Hepatocytes were purified by low revolution centrifugations while non-parenchymal cells were subjected to differential centrifugation. Two different fractions were obtained: HSC and mixed LSEC + HMΦ. Further LSEC and HMΦ enrichment was achieved by selective adherence time to collagen-coated substrates. Isolated cells showed high viability (80%-95%) and purity (&gt;95%) and were characterized as functional: hepatocytes synthetized albumin and urea, LSEC maintained endocytic capacity and in vivo fenestrae distribution, HMΦ increased expression of inflammatory markers in response to LPS and HSC were activated upon in vitro culture. The 4 in 1 protocol allows the simultaneous isolation of highly pure and functional hepatic cell sub-populations from control or cirrhotic single livers without antibody selection. abstract_id: PUBMED:10365812 Human and rat hepatic stellate cells produce stem cell factor: a possible mechanism for mast cell recruitment in liver fibrosis. Background/aims: Mast cell numbers are markedly increased in advanced liver fibrosis. Stem cell factor may recruit mast cells to the liver following injury as it induces mast cell proliferation, survival and differentiation from resident tissue precursors. This study examines stem cell factor production in human fibrotic liver and by hepatic stellate cells during culture in vitro. Methods: Stem cell factor production was examined in human fibrotic livers by ELISA and in human and rat hepatic stellate cell cultures using reverse transcription-polymerase chain reaction (RT-PCR), Northern blotting, Western blotting and immunocytochemistry. Co-culture studies examined adhesion between hepatic stellate cells and purified mast cells. Results: RT-PCR showed stem cell factor mRNA was more consistently expressed in fibrotic human livers relative to normal, and ELISA confirmed this by showing stem cell factor protein was significantly increased 2-3-fold in homogenates of human cirrhotic liver (primary biliary cirrhosis, primary sclerosing cholangitis) relative to normal. RT-PCR detected stem cell factor mRNA in human and rat hepatic stellate cells activated by culture on plastic. This was confirmed by Western blotting, which showed that freshly isolated hepatic stellate cells expressed relatively little 30 kD stem cell factor compared to late primary culture activated hepatic stellate cells (14 day) and passaged hepatic stellate cells. As assessed by fluorescence immunocytochemistry, stem cell factor protein was homogeneously expressed by populations of culture-activated rat hepatic stellate cells. During co-culture, purified human skin mast cells adhered to hepatic stellate cell monolayers on plastic, and this adherence was inhibited &gt;50% by addition of antibodies against stem cell factor. Conclusions: Hepatic stellate cells activated in vitro produce stem cell factor. These cells may play an important role in recruiting mast cells to liver during injury and fibrosis. abstract_id: PUBMED:17259114 Isolation, culture, and differentiation induction of nestin-positive cells in fetal rat hepatic cells Objective: To establish the method for isolation, culture, and differentiation induction of nestin-positive cells in fetal rat hepatic cells. Methods: Hepatic cells were obtained from fetal rats by means of mechanical separation and hanging-drop culture, and after two days of primary culture, the medium was changed for further cell culture in the presence of 20% fetal bovine serum (containing glucose 25 ml/L, mycillin 100 U/ml, pH 7.6), 10 mmol/L nicotinamide, 1 mg/L insulin, affix N2, basic fibroblast growth factor, stem cell factor, epidermal growth factor and leukemia inhibitory factor. Result And Conclusion: Nestin-positive cells were obtained from fetal rat liver, which can differentiate into islet beta cells after culture and expansion in vitro. abstract_id: PUBMED:17647294 Comparison of hepatic properties and transplantation of Thy-1(+) and Thy-1(-) cells isolated from embryonic day 14 rat fetal liver. Unlabelled: Thy-1, a marker of hematopoietic progenitor cells, is also expressed in activated oval cells of rat liver. Thy-1(+) cells are also in rat fetal liver and exhibit properties of bipotent hepatic epithelial progenitor cells in culture. However, no information is available concerning liver repopulation by Thy-1(+) fetal liver cells. Therefore, we isolated Thy-1(+) and Thy-1(-) cells from embryonic day (ED) 14 fetal liver and compared their gene expression characteristics in vitro and proliferative and differentiation potential after transplantation into adult rat liver. Fetal liver cells selected for Thy-1 expression using immunomagnetic microbeads were enriched from 5.2%-87.2% Thy-1(+). The vast majority of alpha fetoprotein(+), albumin(+), cytokine-19(+), and E-cadherin(+) cells were found in cultured Thy-1(-) cells, whereas nearly all CD45(+) cells were in the Thy-1(+) fraction. In normal rat liver, transplanted Thy-1(+) cells produced only rare, small DPPIV(+) cell clusters, very few of which exhibited a hepatocytic phenotype. In retrorsine-treated liver, transplanted Thy-1(+) fetal liver cells achieved a 4.6%-23.5% repopulation. In contrast, Thy-1(-) fetal liver cells substantially repopulated normal adult liver and totally repopulated retrorsine-treated liver. Regarding the stromal cell-derived factor (SDF)-1/chemokine (C-X-C motif) receptor 4 (CXCR4) axis for stem cell homing, Thy-1(+) and Thy-1(-) fetal hepatic epithelial cells equally expressed CXCR4. However, SDF-1alpha expression was augmented in bile ducts and oval cells in retrorsine/partial hepatectomy-treated liver, and this correlated with liver repopulation by Thy-1(+) cells. Conclusion: Highly enriched Thy-1(+) ED14 fetal liver cells proliferate and repopulate the liver only after extensive liver injury and represent a fetal hepatic progenitor cell population distinct from Thy-1(-) stem/progenitor cells, which repopulate the normal adult liver. Answer: The question of whether viable cells are necessary for functional hepatic recovery after xenotransplantation in a cirrhotic rat model has been explored in several studies. According to the research, human fetal liver cell transplantation was shown to improve serum markers, restore liver mitochondrial function and detoxification, and lead to morphological patterns of liver recovery in a model of carbon tetrachloride-induced cirrhosis in rats (PUBMED:17725601). This suggests that viable cells can indeed contribute to functional hepatic recovery. However, the same study also found that the injection of cell-free fetal-specific factors produced similar functional recovery, although with a more limited liver regeneration observed by histology (PUBMED:17725601). This indicates that while viable cells are beneficial for hepatic recovery, soluble factors alone can also stimulate recovery processes, suggesting that viable cells may not be strictly necessary for functional improvement. Additionally, cryopreserved fetal liver stem/progenitor cells (FLSPCs) have been shown to repopulate the normal adult rat liver, proliferating continuously and regenerating both hepatocytes and bile ducts, leading to significant replacement of total liver mass (PUBMED:16778153). This demonstrates the potential of cryopreserved cells for long-term liver replacement, which is essential for clinical applications. In conclusion, while viable fetal liver cells have been proven to support functional hepatic recovery and regeneration in cirrhotic rat models, soluble cell factors alone can also induce functional improvement. Therefore, while viable cells are beneficial and can lead to more extensive regeneration, they may not be strictly necessary for the recovery of liver function.
Instruction: Does working while ill trigger serious coronary events? Abstracts: abstract_id: PUBMED:19701102 Does working while ill trigger serious coronary events? The Whitehall II study. Objective: Working while ill has been found to predict coronary heart disease. We tested if this association was due to triggering. Methods: We used a nested case-control study in an occupational cohort to examine sickness absences during a 2-year period immediately before the first coronary event for 133 cases and 928 matched controls without a history of coronary events. Working while ill was defined as no absence despite being unhealthy (suboptimal self-rated health or psychological distress). Results: The odds of a coronary event were not higher for cases who worked while ill than for correspondingly unhealthy controls who took &gt;0 to 14 days of absence per year (OR = 0.62; 95% CI = 0.28 to 1.38). These results were little affected by multiple adjustments. Conclusions: We found no evidence that working while ill acts as a short-term trigger for coronary events. abstract_id: PUBMED:35847000 A Risk-Factor Model for Antineoplastic Drug-Induced Serious Adverse Events in Cancer Inpatients: A Retrospective Study Based on the Global Trigger Tool and Machine Learning. The objective of this study was to apply a machine learning method to evaluate the risk factors associated with serious adverse events (SAEs) and predict the occurrence of SAEs in cancer inpatients using antineoplastic drugs. A retrospective review of the medical records of 499 patients diagnosed with cancer admitted between January 1 and December 31, 2017, was performed. First, the Global Trigger Tool (GTT) was used to actively monitor adverse drug events (ADEs) and SAEs caused by antineoplastic drugs and take the number of positive triggers as an intermediate variable. Subsequently, risk factors with statistical significance were selected by univariate analysis and least absolute shrinkage and selection operator (LASSO) analysis. Finally, using the risk factors after the LASSO analysis as covariates, a nomogram based on a logistic model, extreme gradient boosting (XGBoost), categorical boosting (CatBoost), adaptive boosting (AdaBoost), light-gradient-boosting machine (LightGBM), random forest (RF), gradient-boosting decision tree (GBDT), decision tree (DT), and ensemble model based on seven algorithms were used to establish the prediction models. A series of indicators such as the area under the ROC curve (AUROC) and the area under the PR curve (AUPR) was used to evaluate the model performance. A total of 94 SAE patients were identified in our samples. Risk factors of SAEs were the number of triggers, length of stay, age, number of combined drugs, ADEs occurred in previous chemotherapy, and sex. In the test cohort, a nomogram based on the logistic model owns the AUROC of 0.799 and owns the AUPR of 0.527. The GBDT has the best predicting abilities (AUROC = 0.832 and AUPR = 0.557) among the eight machine learning models and was better than the nomogram and was chosen to establish the prediction webpage. This study provides a novel method to accurately predict SAE occurrence in cancer inpatients. abstract_id: PUBMED:26044947 Psychosocial Working Environment and Risk of Adverse Cardiac Events in Patients Treated for Coronary Heart Disease. Introduction: During the last decades a possible association between psychosocial working environment and increased risk of coronary heart disease (CHD) has been debated and moderate evidence supports that high psychological demands, lack of social support and iso-strain (the combination of high job strain and lack of social support) is associated with primary CHD. Whether psychosocial working environment plays a role as risk factor for new cardiac events and readmissions in patients with existing cardiovascular disease is less studied. Methods: A cohort of patients &lt;67 years treated with percutaneous coronary intervention (PCI) was established in 2006. Three months after PCI patients answered a questionnaire about their psychosocial working environment. Patients were followed in the Danish National Patient Registry and the Danish Civil Registration System for 3+ years to identify adverse cardiac events and death. We analysed the association between psychosocial working environment and adverse cardiac events by Cox Regression. Results: A number of 528 patients had returned to work 12 weeks after PCI, while 97 were still sick-listed. We identified 12 deaths and 211 other events during follow-up. We found no statistically significant associations between psychosocial working environment and risk of adverse cardiac events and readmissions or mortality. Conclusion: The psychosocial working environment was not associated with adverse cardiac events. abstract_id: PUBMED:15623867 Working while ill as a risk factor for serious coronary events: the Whitehall II study. Objectives: Although sick, some people take no time off work, a phenomenon called "sickness presenteeism." This study examined the association between sickness presenteeism and incidence of serious coronary events. Methods: The analyses were based on a cohort of 5071 male British civil servants without previous myocardial infarction. Baseline screening included measurements of health status and coronary risk factors. Absence records were assessed for the 3 years subsequent to baseline screening. The outcome of interest was incident nonfatal myocardial infarction or fatal coronary heart disease (mean length of follow-up=9.1 years). Results: Seventeen percent of unhealthy employees took no absence during the 3-year follow-up. Their incidence of serious coronary events was twice as high as that of the unhealthy employees with moderate levels of sickness absenteeism (after adjustment for conventional risk factors, hazard ratio 1.97, 95% confidence interval=1.02, 3.83). Conclusions: Employers and employees should be aware of the potential harmful effects caused by sickness presenteeism. abstract_id: PUBMED:29623186 Trigger alerts associated with laboratory abnormalities on identifying potentially preventable adverse drug events in the intensive care unit and general ward. Background: Medication safety strategies involving trigger alerts have demonstrated potential in identifying drug-related hazardous conditions (DRHCs) and preventing adverse drug events in hospitalized patients. However, trigger alert effectiveness between intensive care unit (ICU) and general ward patients remains unknown. The objective was to investigate trigger alert performance in accurately identifying DRHCs associated with laboratory abnormalities in ICU and non-ICU settings. Methods: This retrospective, observational study was conducted at a university hospital over a 1-year period involving 20 unique trigger alerts aimed at identifying possible drug-induced laboratory abnormalities. The primary outcome was to determine the positive predictive value (PPV) in distinguishing drug-induced abnormal laboratory values using trigger alerts in critically ill and general ward patients. Aberrant lab values attributed to medications without resulting in an actual adverse event ensuing were categorized as a DRHC. Results: A total of 634 patients involving 870 trigger alerts were included. The distribution of trigger alerts generated occurred more commonly in general ward patients (59.8%) than those in the ICU (40.2%). The overall PPV in detecting a DRHC in all hospitalized patients was 0.29, while the PPV in non-ICU patients (0.31) was significantly higher than the critically ill (0.25) (p = 0.03). However, the rate of DRHCs was significantly higher in the ICU than the general ward (7.49 versus 0.87 events per 1000 patient days, respectively, p &lt; 0.0001). Although most DRHCs were considered mild or moderate in severity, more serious and life-threatening DRHCs occurred in the ICU compared with the general ward (39.8% versus 12.4%, respectively, p &lt; 0.001). Conclusions: Overall, most trigger alerts performed poorly in detecting DRHCs irrespective of patient care setting. Continuous process improvement practices should be applied to trigger alert performance to improve clinician time efficiency and minimize alert fatigue. abstract_id: PUBMED:32943106 Overlap between adverse events (AEs) and serious adverse events (SAEs): a case study of a phase III cancer clinical trial. Background: Safety data is required to be collected in all clinical trials and can be separated into two types of data, adverse events and serious adverse events. Often, these types of safety data are collected as two discrete data sets, where adverse events that also meet the criteria for seriousness should be reported in both datasets. Safety analyses are often conducted using only the adverse event dataset, which should feature all safety events reported. We investigated whether the reporting of safety in both datasets was systematically followed and explored the impact of this on safety analyses in ICON8, an ovarian cancer clinical trial. Methods: Text searches of serious adverse event data identified events that could potentially match the data reported in the adverse event dataset (looking at pre-specified AE terms only). These serious adverse events were then mapped to adverse event data according to predefined criteria: (a) event term matches, (b) date of onset and date of assessment within 30 days of each other, (c) date of assessment lies between date of onset and date of resolution and (d) events confirmed to occur in the same chemotherapy cycle. A combined dataset of all unique safety events (whether originally reported in the adverse event or serious adverse event dataset) was created and safety analyses re-performed. Results: 51,019 adverse events were reported in ICON8, of which 42,410 were included in the mapping exercise. One thousand five hundred six serious adverse event elements were reported, of which 668 were included in the mapping exercise. Sixty-one percent of serious adverse event elements was matched to an already-reported adverse event. Supplementing these additional safety events and re-performing safety analyses increased the proportion of patients with at least one grade 3 or worse safety events in all arms from 42 to 47% in the control arm and 61 to 65% and 52 to 59% in the research arms. The difference in proportions of grade 3 or worse event in the research arms compared to the control arm changed by 18% (95% confidence interval [CI] 12 to 24%) and 12% (95% CI 6 to 18%), respectively. Conclusions: There was low agreement in mapping serious adverse events to already reported adverse events, with nearly 40% of serious adverse events included in the mapping exercise not mapped to an already reported adverse event. Any analyses of safety data that use only adverse event datasets or do not clearly account for serious adverse event data will likely be missing important safety information. Reporting standards should make clear which datasets were used for analyses. abstract_id: PUBMED:24591850 Risk of spontaneous preterm birth in relation to maternal experience of serious life events during pregnancy. Background: The purpose of this study was to examine the risk of preterm birth (PTB) in relation to serious life events experienced during pregnancy in Peruvian women. Methods: This case-control study included 479 PTB cases and 480 term controls. In-person interviews asked information regarding sociodemographics, medical and reproductive histories, and serious life events experienced during pregnancy. Multivariate logistic regression procedures were used to estimate adjusted odds ratios (ORs) and 95% confidence intervals (CIs). Results: Compared with women who did not experience a serious life event during pregnancy, those who experienced the following life events had a more than two-fold increased odds of PTB: death of first-degree relative (adjusted OR 2.10; 95% CI 1.38-3.20), divorce or separation (adjusted OR 2.09; 95% CI 1.10-4.00), financial troubles (adjusted OR 2.70; 95% CI 1.85-3.94), or serious fight with partner (adjusted OR 2.40; 95% CI 1.78-3.17). Women who experienced any serious life events during pregnancy had higher odds (adjusted OR 2.29; 95% CI 1.65-3.18) of suffering spontaneous preterm labor and preterm premature rupture of membranes (adjusted OR 2.19; 95% CI 1.56-3.08), compared with women who did not experience any such events. Associations of similar directions and extent were observed for severity of PTB (ie, very, moderate, or late PTB). The magnitude of the associations increased as increased frequency of serious life events (P trend &lt;0.001). Conclusion: Experiencing serious life events during pregnancy was associated with increased odds of PTB among Peruvian women. Interventions aimed at assisting women experiencing serious life events may reduce the risk of PTB. Future studies should include objective measures of stress and stress response to understand better the biological underpinnings of these associations. abstract_id: PUBMED:28438739 Reporting Clinical End Points and Safety Events in an Acute Coronary Syndrome Trial: Results With Integrated Collection. Background: End points and adverse events (AEs) are collected separately in clinical trials, yet regulatory requirements for serious AE reporting vary across regions, so classifying end points according to seriousness criteria can be useful in global trials. Methods And Results: In the Apixaban for Prevention of Acute Ischemic Events 2 (APPRAISE-2) trial, patients with a recent acute coronary syndrome were randomized to apixaban or placebo for the prevention of recurrent ischemic events. Suspected end points (myocardial infarction, stroke, or bleeding) were adjudicated by an independent clinical events classification committee. Safety criteria were collected for suspected end points and AEs. Patient-level event rates per 100 patient-days of follow-up, modeled using Poisson regression, explored the influence of region and patient characteristics on event reporting. Overall, 13 909 events were reported by 858 sites in 39 countries; 8.4% (n=1166) were suspected end points, and 91.6% (n=12 743) were AEs. Overall, 66.0% of suspected end points were confirmed by the clinical events classification committee. Most clinical events classification committee-confirmed end points met criteria to be classified as serious (94.0%); many clinical events classification committee-negated end points also did (63.2%), but fewer AEs met seriousness criteria (17.9%). The most common seriousness criterion was hospitalization (79.9%, n=2594). Region explained 28.7% of end point- and 26.4% of serious AE-reporting variation, and patient characteristics explained an additional 25.4% of end point and 13.4% of serious AE variation. Nonserious AE-reporting variation was not explained by adjustment. Conclusions: An integrated collection of end points and serious AEs is feasible in a multinational trial and illustrates the shared characteristics of events. Tailoring event collection to fit the phase and purpose of the trial is achievable and informative. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier: NCT00831441. abstract_id: PUBMED:35647270 Serious Experience Events: Applying Patient Safety Concepts to Improve Patient Experience. Pediatric healthcare systems have successfully decreased patient harm and improved patient safety by adopting standardized definitions, processes, and infrastructure for serious safety events (SSEs). We have adopted those patient safety concepts and used that infrastructure to identify and create action plans to mitigate events in which patient experience is severely compromised. We define those events as serious experience events (SEEs). The purpose of this research brief is to describe SEE definitions, infrastructure used to evaluate potential SEEs, and creation of action plans as well as share our preliminary experiences with the approach. abstract_id: PUBMED:30285872 Serious adverse events and fatal adverse events associated with nivolumab treatment in cancer patients : Nivolumab-related serious/fatal adverse events. Background: Nivolumab, an immune checkpoint inhibitor, has revolutionized the treatment of many cancers. Due to its novel mechanisms of action, nivolumab induces a distinct profile of adverse events. Currently, the incidence and risk of developing serious adverse events (SAEs) or fatal adverse events (FAEs) following nivolumab administration are unclear. Methods: We conducted a systematic search for phase 2 and phase 3 nivolumab trials in PubMed and Embase from inception to June 2018. Data on SAEs/FAEs were extracted from each study and pooled to calculate the overall incidence and odds ratios (ORs). Results: A total of 21 trials with 6173 cancer patients were included in this study. The overall incidence of SAEs and FAEs with nivolumab were 11.2% (95% CI, 8.7-13.8%) and 0.3% (95% CI, 0.1-0.5%), respectively. The incidence of SAEs varied significantly with cancer type and clinical phase, but no evidence of heterogeneity was found for FAEs. Compared with conventional treatment, the administration of nivolumab did not increase the risk of SAEs (OR, 0.69; 95% CI, 0.34-1.40; p = 0.29) or FAEs (OR, 0.61; 95% CI, 0.27-1.39; p = 0.24). SAEs occurred in the major organ systems in a dispersed manner, with the most common toxicities appearing in the respiratory (21.4%), gastrointestinal (7.7%), and hepatic systems (6.6%). The most common cause of SAEs/FAEs was pneumonitis. Conclusions: Although nivolumab is a relatively safe antitumor agent, nononcologists should be advised of the potential adverse events. Additionally, future studies are needed to identify patients at high risk of SAEs/FAEs to aid in the development of optimal monitoring strategies and the exploration of treatments to decrease the risks. Answer: Based on the provided abstracts, there is no clear evidence that working while ill triggers serious coronary events. The Whitehall II study, as reported in abstract PUBMED:19701102, found no evidence that working while ill acts as a short-term trigger for coronary events. The study used a nested case-control design within an occupational cohort and compared sickness absences during a 2-year period immediately before the first coronary event for cases and matched controls. The results showed that the odds of a coronary event were not higher for cases who worked while ill compared to correspondingly unhealthy controls who took absences. Similarly, another study from the Whitehall II cohort, as mentioned in abstract PUBMED:15623867, examined the association between sickness presenteeism (working while ill) and the incidence of serious coronary events. Although the study found that unhealthy employees who took no absence had twice the incidence of serious coronary events compared to those with moderate levels of sickness absenteeism, this was after a follow-up of 9.1 years, suggesting a long-term rather than a short-term effect. Furthermore, the study on the psychosocial working environment and risk of adverse cardiac events in patients treated for coronary heart disease (abstract PUBMED:26044947) found no statistically significant associations between psychosocial working environment and the risk of adverse cardiac events and readmissions or mortality. In conclusion, the evidence from the provided abstracts does not support the notion that working while ill is a short-term trigger for serious coronary events. However, there may be long-term implications of working while ill on coronary health, as suggested by the increased incidence of serious coronary events among those with sickness presenteeism over a longer follow-up period.
Instruction: Could we improve notification of adverse drugs reactions in hospital? Abstracts: abstract_id: PUBMED:22874489 Could we improve notification of adverse drugs reactions in hospital? Assessment of 5 years of network PharmacoMIP's activities Background: Spontaneous reporting of adverse drug reactions (ADR) is fundamental to drug safety surveillance (pharmacovigilance) and assessment of benefit/risk ratio. However, under-reporting remains the limit of the system. Objective: The aim of this study was to assess the effect of regular visits of an Assistant in Clinical Research (CRA) on the improvement of ADR reporting in non-university hospitals. Methods: We set up an ADR report collecting system that involved regular visits in non-university hospitals, We began the visits in 2006 in 2 areas (Haute Garonne and Gers), extended to 4 other areas in 2009. We compared the reporting rate (number of reports/number of beds) of total ADRs reported by non-university hospitals in these areas before (one year) and after the start of CRA visits. Results: A total 2831 of reports were collected by the CRA: 40% were "serious" including two deaths. The results suggest an increase of 100% of the rate of reporting of ADRs. Conclusion: This study shows that regular visits increases the number of ADRs reported by non-university hospitals. Further assessment of this procedure is necessary for long term evaluation of its effectiveness. abstract_id: PUBMED:23241256 Serious adverse drug reactions in older adults notified to pharmacovigilance. Purpose: To report the serious adverse drug reactions (ADRs) in older adults notified to pharmacovigilance, to identify the incriminated drugs and to search for risk factors of occurrence. Methods: A retrospective study including 106 serious adverse drug reactions notified to pharmacovigilance in patients aged of 65 years and more, over a period of 16 years. Imputation was established according to the French method and seriousness according to the World Health Organisation (WHO) criteria. Results: Adverse drug reactions were essentially systemic. Incriminated drugs were mainly antibiotics, allopurinol and cardio-vascular drugs. Gender, age and number of administered drugs did not seem to be risk factors of serious ADRs occurrence. Among older adults, 4% died further to a serious ADRs. Conclusion: Systemic notification to pharmacovigilance will allow a better analysis of risk factors of serious ADRs occurrence and to insure safety and health to the older adults. abstract_id: PUBMED:5764699 Intensive hospital monitoring of adverse reactions to drugs. A total of 1,268 patients admitted to hospital wards were kept under surveillance by one observer throughout their stay in hospital. All drugs given to them and the occurrence of adverse reactions were recorded.Drug reactions were found in 10.2% of the 1,160 patients who received drug therapy. Most reactions were due to known pharmacological actions of the drugs. Though only four reactions were of life-threatening seriousness, 80% of the 129 reactions observed were of moderate severity. Digitalis preparations, bronchodilator drugs, and ampicillin had the highest reaction rates. It is suggested that larger surveys of adverse reactions in relation to drug usage would make a useful contribution to the problem. abstract_id: PUBMED:30914867 Evaluation of Adverse Reactions Induced by Anti-Tuberculosis Drugs in Hospital Pulau Pinang. Background: The use of multi-drug regimens in tuberculosis (TB) treatment has been associated with undesirable adverse drug reactions (ADRs). This study aims to assess the incidence and impact of ADRs on TB treatment in Hospital Pulau Pinang. Methods: This cross-sectional study was conducted via retrospective review of outpatients' medical records. Details regarding ADRs were identified by a pharmacist and verified by a consultant respiratory physician. Results: A total of 91 cases, out of 210 patients enrolled in this study, were detected with 75 patients (35.7%) experienced at least one ADR. The three most common ADRs detected were cutaneous adverse drug reactions (CADRs) (21.0%), drug-induced hepatitis (DIH) (7.1%) and gastrointestinal disturbance (4.8%). Pyrazinamide was the most common causative agent and 15.7% of all TB patients required treatment modification due to ADRs. Females were shown to have a higher tendency to develop ADRs than the males in this study (P = 0.009). The development of ADRs was shown not to affect the TB treatment outcomes (P = 0.955). Conclusion: The incidence of ADRs in this study was high so it is important to identify the risk factors for ADRs and the individuals who have those risk factors when initiating anti-TB drugs. These individuals require special attention when anti-TB drugs are initiated. abstract_id: PUBMED:8838446 A comparative study of adverse drug reactions reported through hospital and private medicine. Adverse drug reactions spontaneously reported by physicians in the hospital and the private practice were compared. All reports were sent to a French Regional Pharmacovigilance Centre over the period of 1 year (1992). Hospital-based physicians reported more adverse reactions and more serious reactions than physicians in private medicine. For both groups of reporters, the skin, central nervous system and digestive tract were the body systems most commonly involved. There were significant differences between the hospital and private practice reports as regards the drugs most frequently involved in adverse drug reactions. abstract_id: PUBMED:31403464 Genetic literacy series: clinical application of pharmacogenetics for adverse reactions to antiepileptic drugs. Adverse drug reactions are a leading cause of treatment failure with antiepileptic drugs. Adverse drug reactions are also a major source of morbidity and mortality, and a substantial burden on the use and costs of health care. Recent pharmacogenetic studies have shown that some adverse drug reactions are associated with genetic variants, which has changed how we select antiepileptic drugs for individual patients. This article, beginning with a case of an adverse drug reaction induced by carbamazepine, will answer four key questions about pharmacogenetics of adverse drug reactions: (1) What types of adverse drug reactions can be caused by antiepileptic drugs? (2) What is pharmacogenetics? (3) How does pharmacogenetics play a role in the adverse drug reactions of antiepileptic drugs? and (4) How do we apply pharmacogenetic testing in clinical practice? Our goal is to increase awareness of the contributions of genetic variation to adverse drug reactions of antiepileptic drugs. abstract_id: PUBMED:36183207 Use of tracking drugs for the search of intra-hospital adverse reactions: a pharmacovigilance study. Objective: To estimate the incidence of potential in-hospital adverse reactions with the use of alert drugs in a general hospital in southern Brazil. Method: Cross-sectional study, carried out in a hospital in southern Brazil. The electronic medical records (TASY®) of patients hospitalized between January and August 2020, who were prescribed one of the drugs earmarked for tracking adverse drug reactions, were evaluated: the drugs included flumazenil, fexofenadine hydrochloride, naloxone, promethazine, diphenhydramine and loperamide. Results: A total of 13,476 medical records were reviewed and 204 (1.5%) were included in the study in which tracker use was indicated in the management of adverse drug reactions. In this study a total of 18 different signs or symptoms were found in medical records, with pruritus/hyperemia/urticaria being the most reported symptoms (n = 76). Among the drug classes that caused most adverse drug reactions, opioids were the most mentioned (n = 44). It should be noted that in 49 medical records the information on which drug caused the adverse events was not reported. Regarding the cause of hospitalization of patients who used creening drugs, cancer was the most frequent (n = 37). Conclusions: This study indicates that the use of trackers can be a tool to estimate the occurrence of adverse drug reactions and to establish adverse events related to the use of medications, which should be reported to the pharmacovigilance service, with a view to patient safety. abstract_id: PUBMED:34756240 Reporting of suspected adverse drug reactions by nursing in Spain. An observational-descriptive retrospective study. Objective: This study aimed to gain knowledge of the nurses' involvement in the spontaneous report of suspected adverse drug reactions (ADR) in the Spanish Pharmacovigilance System for Medicinal Products for Human Use (SEFV-H), describing the principal characteristics of the reported cases, identifying points of improvement. Methods: A descriptive observational retrospective study was based on the data from FEDRA, the database created by the SEFV-H. The sample taken was the spontaneous adverse drug reactions reported to SEFV-H by nurses during the first 6 months of the 2018. Results: Complete data was provided by 6,370 suspicions of ADR reported to SEFV-H by all healthcare professionals. Only 4,8% of the samples were taken by nurses, 62,7% came from medical centers. The majority of the ADR were not considered a serious disease (78%). The most frequently adverse drug reactions reported by nurses were local reactions. The patients most involved were children and vaccines were the most reported drugs (58,3%), followed by the intravenous contrast agents used in diagnostic tests. Conclusions: Nurses report very few cases to SEFV-H and are mostly related to the administration of vaccines and are sent by nurses working in the out-of-hospital setting. Most cases are not serious and usually report known adverse reactions to the suspected drug. This observed under-notification raises the need to promote increased pharmacovigilance training among these notifying nurses so that they can continue to report, and also for those who do not do so in their daily practice, so that they can begin to do so. abstract_id: PUBMED:32476179 Adverse drug reactions' reporting in a Mexican hospital. Objectives: To summarize the procedure for reporting adverse drug reactions (ADRs) in a Mexican hospital, with the purpose of promoting and facilitating the report of ADRs by the heath team. We outline the relevance of pharmacists in performing these activities in Mexican hospitals. Methods: A pharmacovigilance study was carried out following the establishment of the Pharmacovigilance Unit of a Mexican hospital on January 2014 through March 2019. Key Findings: There is an undernotification rate of ADRs, since only 137 ADRs were notified in a five-year period, mostly by biopharmaceutical chemists holding bachelor's degrees that were acting as pharmacists. Conclusions: To improve the safety of medications, it is necessary to improve ADR notifications in Mexico's health system. One important step in this direction is to establish a Pharmacy degree with training for promoting and performing ADR notifications. abstract_id: PUBMED:8904613 Adverse drug reactions in a hospital general medical unit meriting notification to the Committee on Safety of Medicines. 1. We have retrospectively analysed data collected by a local adverse drug reactions reporting scheme in an acute hospital medical setting and have determined the numbers and types of reactions that would have merited notification as yellow card reports according to the guidelines of the Committee on Safety of Medicines. 2. The data related to 20,695 consecutive acute general medical admissions on seven general medical wards (140 beds) and were collected over 3 years, from April 1990 to March 1993. 3. Over 3 years there were 1420 reports of suspected adverse drug reactions, a rate of 68.7 per 1000 admissions. 4. If the guidelines for reporting issued by the Committee on Safety of Medicines had been strictly followed, 477 yellow cards would have been sent (23.1 per 1000 admissions). In 357 of these reports (74.8%), the reaction had caused admission to hospital. Only 31 of the 477 potential cards (6.5%) involved black triangle drugs and 10 of these were for minor reactions. 5. Only 30 of the 477 potential yellow cards (6.3%) were known to have been sent. The majority of those reactions not reported were for drug-related admissions, most of which were for well-known reactions to established drugs. 6. We have confirmed and quantified the extent of under-reporting of serious suspected adverse drug reactions to the Committee on Safety of Medicines from our hospital medical unit. Answer: Yes, we could improve the notification of adverse drug reactions (ADRs) in hospitals. Several studies have demonstrated various methods and interventions that can increase the reporting rates of ADRs. For instance, the implementation of an ADR report collecting system that involved regular visits by an Assistant in Clinical Research (CRA) to non-university hospitals led to a 100% increase in the rate of reporting of ADRs (PUBMED:22874489). This suggests that regular visits and active collection of data can significantly improve ADR reporting. Moreover, the use of tracking drugs for the search of intra-hospital adverse reactions has been studied as a pharmacovigilance tool, which can help estimate the occurrence of ADRs and establish related adverse events, contributing to patient safety (PUBMED:36183207). Additionally, the involvement of nurses in the spontaneous report of suspected ADRs has been identified as an area with potential for improvement, as nurses report very few cases, and there is a need to promote increased pharmacovigilance training among them (PUBMED:34756240). Furthermore, the relevance of pharmacists in reporting ADRs has been highlighted, with the suggestion that establishing a Pharmacy degree with training for promoting and performing ADR notifications could improve medication safety (PUBMED:32476179). The underreporting of ADRs has been a consistent issue, as shown by a study that found only a small fraction of serious suspected ADRs that merited notification were actually reported (PUBMED:8904613). In summary, improving ADR notification in hospitals can be achieved through regular and active data collection, utilizing pharmacovigilance tools, enhancing the training and involvement of healthcare professionals such as nurses and pharmacists, and establishing systems that facilitate the reporting process.
Instruction: Friends in the 'hood: Should peer-based health promotion programs target nonschool friendship networks? Abstracts: abstract_id: PUBMED:35640347 Peer parent programs in child welfare: A systematic review. Background: Mentoring, specifically peer mentoring, emerged in the child welfare setting in the early 2000s. Peer parent programs provide child welfare involved families a unique opportunity to connect with parents that have successfully navigated the child welfare system and who share similar lived experiences. No systematic review has been conducted to summarize findings associated with parental participation in peer parent programs and associated child welfare case outcomes. Objective: This study systematically summarizes the effectiveness of peer parent programs in child welfare on case outcomes, specifically permanency, time-in-care, and re-entry post reunification. Methods: Four citations were identified for review from seven academic databases. A data template was utilized to extract the following information from each study: location, target of intervention, research design, setting of intervention, intervention description, peer parent criteria, inclusion and exclusion criteria for participants, sample size, statistical tests, outcomes assessed, and key findings. Results: Several peer parent programs have been developed to support and empower child welfare involved families with the aim of improving case outcomes. All of the peer parent programs reviewed had a positive impact on reunification rates; that is, parents who participated in a peer mentor program were more likely to reunify with their child(ren) when compared to non-participants. Results regarding time-in-care and re-entry post reunification at 12 and 24 months were mixed. Conclusions: Child welfare agencies should consider developing peer parent programs as they appear to be a promising intervention to support parents and increase positive child welfare outcomes. abstract_id: PUBMED:9347070 Initial and experimental stages of tobacco and alcohol use during late childhood: relation to peer, parent, and personal risk factors. A staged model of smoking adoption has been widely applied in studies of adolescent smoking. The present study applied this model to examine the preliminary stages of tobacco and alcohol use by children. Using discriminant analysis, factors associated with the abstinence, initiation, and experimentation stages of tobacco and alcohol use were compared in a sample of 1,272 children in grades 4 and 6. Modeling of use by best friends and the perceived prevalence of use among same-age peers were most strongly related to the initiation and experimentation stages of tobacco and alcohol use. Other key factors were offers from parents and friends, adjustment to school, and behavioral self-regulation. The weakest factors were parental modeling and self-esteem. The initiation and experimentation stages are not as highly differentiated among children as other studies have found them to be among adolescents, suggesting that if initiation occurs during childhood, progression to experimentation is likely. Prevention programs could simultaneously influence children's risk of tobacco and alcohol use by targeting the common risk factors for preliminary use of these substances. abstract_id: PUBMED:32986075 Effects of Counseling by Peer Human Advisors vs Computers to Increase Walking in Underserved Populations: The COMPASS Randomized Clinical Trial. Importance: Effective and practical treatments are needed to increase physical activity among those at heightened risk from inactivity. Walking represents a popular physical activity that can produce a range of desirable health effects, particularly as people age. Objective: To test the hypothesis that counseling by a computer-based virtual advisor is no worse than (ie, noninferior to) counseling by trained human advisors for increasing 12-month walking levels among inactive adults. Design, Setting, And Participants: A cluster-randomized, noninferiority parallel trial enrolled 245 adults between July 21, 2014, and July 29, 2016, with follow-up through September 15, 2017. Data analysis was performed from March 15 to December 20, 2018. The evidence-derived noninferiority margin was 30 minutes of walking per week. Participants included inactive adults aged 50 years and older, primarily of Latin American descent and capable of walking without significant limitations, from 10 community centers in Santa Clara and San Mateo counties, California. Interventions: All participants received similar evidence-based, 12-month physical activity counseling at their local community center, with the 10 centers randomized to a computerized virtual advisor program (virtual) or a previously validated peer advisor program (human). Main Outcomes And Measures: The primary outcome was change in walking minutes per week over 12 months using validated interview assessment corroborated with accelerometry. Both per-protocol and intention-to-treat analysis was performed. Results: Among the 245 participants randomized, 193 were women (78.8%) and 241 participants (98.4%) were Latino. Mean (SD) age was 62.3 (8.4) years (range, 50-87 years), 107 individuals (43.7%) had high school or less educational level, mean BMI was 32.8 (6.8), and mean years residence in the US was 47.4 (17.0) years. A total of 231 participants (94.3%) completed the study. Mean 12-month change in walking was 153.9 min/wk (95% CI, 126.3 min/wk to infinity) for the virtual cohort (n = 123) and 131.9 min/wk (95% CI, 101.4 min/wk to infinity) for the human cohort (n = 122) (difference, 22.0, with lower limit of 1-sided 95% CI, -20.6 to infinity; P = .02); this finding supports noninferiority. Improvements emerged in both arms for relevant clinical risk factors, sedentary behavior, and well-being measures. Conclusions And Relevance: The findings of this study indicate that a virtual advisor using evidence-based strategies produces significant 12-month walking increases for older, lower-income Latino adults that are no worse than the significant improvements achieved by human advisors. Changes produced by both programs are commensurate with those reported in previous investigations of these behavioral interventions and provide support for broadening the range of light-touch physical activity programs that can be offered to a diverse population. Trial Registration: ClinicalTrials.gov Identifier: NCT02111213. abstract_id: PUBMED:23782478 Understanding the mental health of youth living with perinatal HIV infection: lessons learned and current challenges. Introduction: Across the globe, children born with perinatal HIV infection (PHIV) are reaching adolescence and young adulthood in large numbers. The majority of research has focused on biomedical outcomes yet there is increasing awareness that long-term survivors with PHIV are at high risk for mental health problems, given genetic, biomedical, familial and environmental risk. This article presents a review of the literature on the mental health functioning of perinatally HIV-infected (PHIV+) adolescents, corresponding risk and protective factors, treatment modalities and critical needs for future interventions and research. Methods: An extensive review of online databases was conducted. Articles including: (1) PHIV+ youth; (2) age 10 and older; (3) mental health outcomes; and (4) mental health treatment were reviewed. Of 93 articles identified, 38 met inclusion criteria, the vast majority from the United States and Europe. Results: These studies suggest that PHIV+ youth experience emotional and behavioural problems, including psychiatric disorders, at higher than expected rates, often exceeding those of the general population and other high-risk groups. Yet, the specific role of HIV per se remains unclear, as uninfected youth with HIV exposure or those living in HIV-affected households displayed similar prevalence rates in some studies, higher rates in others and lower rates in still others. Although studies are limited with mixed findings, this review indicates that child-health status, cognitive function, parental health and mental health, stressful life events and neighbourhood disorder have been associated with worse mental health outcomes, while parent-child involvement and communication, and peer, parent and teacher social support have been associated with better function. Few evidence-based interventions exist; CHAMP+, a mental health programme for PHIV+ youth, shows promise across cultures. Conclusions: This review highlights research limitations that preclude both conclusions and full understanding of aetiology. Conversely, these limitations present opportunities for future research. Many PHIV+ youth experience adequate mental health despite vulnerabilities. However, the focus of research to date highlights the identification of risks rather than positive attributes, which could inform preventive interventions. Development and evaluation of mental health interventions and preventions are urgently needed to optimize mental health, particularly for PHIV+ youth growing up in low-and-middle income countries. abstract_id: PUBMED:25274722 Effect of a school-based intervention on physical activity and quality of life through serial mediation of social support and exercise motivation: the PESSOA program. The aim of this study was to explore the effects of social support and behavioral regulation of exercise on physical activity (PA) and quality of life (QoL), in a Portuguese school-based intervention. We hypothesized that serial mediation effects would be present leading to greater levels of PA and QoL. The sample comprised 1042 students (549 boys), aged 10-16 years, BMI = 19.31 ± 3.51, allocated to two groups of schools: control (n = 207) and intervention (n = 835). This study will report the 24 months results of the program, which aimed to develop healthy lifestyles. Questionnaires were used to measure PA, QoL, motivation to exercise and social support. There was no direct impact of the intervention on QoL or PA. Serial mediation analyses were conducted. Social support (P &lt; 0.019) and intrinsic motivation (P = 0.085) increased more on intervention group. Indirect effects were observed in all serial mediation models. The positive indirect effects on PA and QoL were explained by the increase on peer/parent support in serial with the increase in intrinsic motivation (P &lt; 0.01). Parental support led to an increase on external motivation (P &lt; 0.05), which buffered the effects of the intervention. This school-based intervention promoted the development of social support and motivational mechanisms that explained higher levels of PA and QoL. abstract_id: PUBMED:24277649 Social support and self-esteem in adolescence. This study assessed contingencies in the effect of social support from parents and friends on adolescent self-esteem. Questionnaires were administered to 76 Israeli adolescents regarding self-esteem, stressful life events, and perceived level of support from mother, father, and friends. Maternal support had a strong effect on self-esteem. Aid from friends was influential primarily when that of mothers was absent. Paternal support had little effect, once other support sources were controlled. Despite the negative influence of stress on self-esteem, support and stress had no interactive effects. These findings, consistent with attachment theory and social provision theories, were contrary to cross-pressure or separate world models of peer/parent influence. abstract_id: PUBMED:10777974 Factors associated with delayed tobacco uptake among Vietnamese/Asian and Arabic youth in Sydney, NSW. Objective: To describe the smoking behaviour and possible reasons for delayed uptake of tobacco smoking among Arabic and Vietnamese/Asian speaking senior school students in Sydney Method: A descriptive study involving four adult in-depth interviews and five student focus groups plus a quantitative survey of 2,573 school students attending Years 10 and 11 from 12 high schools with high Vietnamese and Arabic populations was conducted in Sydney in 1998. Self-reported smoking behaviour and peer, parent, school and cultural background information was collected. Results: Students who smoke were more likely to have more than $20 a week pocket money, be from an English-speaking background, have no rules at home about smoking, have family members who smoke, not feel close to their father, spend three or more evenings a week out with friends, and have negative perceptions of the school environment and of the level of teacher support. They were less likely to smoke if they perceived their peers to be unsupportive. Conclusions: These results confirm the delayed uptake of smoking among students from a Vietnamese/Asian and Arabic-speaking backgrounds compared with those from an English-speaking background. A number of family and school factors were associated with smoking. Implications: Positive parental modelling, active parenting including awareness of or supervision of student leisure time, strict rules about not smoking and less pocket money are important strategies for preventing smoking among all adolescents. abstract_id: PUBMED:30668133 Peer-to-Peer Human Milk Sharing: Recipient Mothers' Motivations, Stress, and Postpartum Mental Health. Background: Some mothers who cannot breastfeed-partially or completely-choose to feed their infants human milk donated from a peer. Few studies have examined mothers' experiences with using donor milk; none has examined whether or not mothers' stress and mental health are associated with using donor milk from a peer. Methods: Researchers conducted semistructured individual interviews with mothers from the United States and Canada (N = 20) to answer the following questions: (a) what are recipient mothers' motivations for participation in peer-to-peer breast milk sharing and (b) what is the relationship between receiving donated milk and mothers' stress and mental health postpartum? Transcripts were coded using an inductive approach and principles of grounded theory were used to analyze data. Results: Data were organized under two themes: (a) motivations for using milk from a peer and (b) milk-sharing and stress-related experiences. Motivations included health benefits, medical need, and preference for human milk over formula. Factors inducing stress were as follows: logistical stressors of securing donor milk and fear of running out of milk. Factors reducing stress were as follows: donor milk provided relief and comfort and its use reduced mothers' self-reported symptoms of postpartum depression and anxiety. Conclusions: Mothers participated in peer-to-peer breast milk sharing primarily because of health benefits for children. However, participation also had important psychological benefits for some mothers. Additional research and open discourse are needed to support mothers who choose to use milk from a peer and to promote safety of this practice. abstract_id: PUBMED:23524992 Personal attitudes, perceived social norms, and health-risk behavior among female adolescents with chronic medical conditions. To examine whether perceived peer/parent norms or personal beliefs about adolescent substance use influence substance use among female adolescents with chronic medical conditions. Sixty-eight females reported on substance use, personal beliefs, and perceived peer/parent norms. Personal beliefs and perceived peer/parent norms were associated with adolescent's current and future substance use. Although perceived peer norms accounted for variance in current substance use, only personal beliefs accounted for variance in future alcohol use. Targeting perceived peer norms may be effective for intervention efforts among adolescents endorsing current substance use, whereas alcohol use prevention efforts should target personal beliefs. abstract_id: PUBMED:29723103 A Liberation Health Approach to Examining Challenges and Facilitators of Peer-to-Peer Human Milk Sharing. Background: Human milk sharing between peers is a common and growing practice. Although human milk has been unequivocally established as the ideal food source for infants, much stigma surrounds the practice of human milk sharing. Furthermore, there is little research examining peer-to-peer human milk sharing. Research Aim: We used the liberation health social work model to examine the experiences of mothers who have received donated human milk from a peer. Research questions were as follows: (a) What challenges do recipient mothers experience in peer-to-peer human milk sharing? (b) What supports do recipient mothers identify in peer-to-peer human milk sharing? Methods: Researchers conducted in-depth interviews with mothers ( N = 20) in the United States and Canada who were recipients of peer-to-peer human milk sharing. Researchers independently reviewed transcripts and completed open, axial, and selective coding. The authors discussed conflicts in theme identification until agreement was reached. Results: Challenges to peer-to-peer human milk sharing were (a) substantial effort required to secure human milk; (b) institutional barriers; (c) milk bank specific barriers; and (d) lack of societal awareness and acceptance of human milk sharing. Facilitators included (a) informed decision making and transparency and (b) support from healthcare professionals. Conclusion: Despite risks and barriers, participants continued to pursue peer-to-peer human milk sharing. Informed by a liberation health framework, healthcare professionals-rather than universally discouraging human milk sharing between peers-should facilitate open dialogue with parents about the pros and cons of this practice and about screening recommendations to promote safety and mitigate risk. Answer: Based on the provided abstracts, peer-based health promotion programs can indeed benefit from targeting nonschool friendship networks. The evidence suggests that peers play a significant role in influencing health behaviors and outcomes. For instance, peer parent programs in child welfare have shown positive impacts on reunification rates, indicating that parents who participated in peer mentor programs were more likely to reunify with their children compared to non-participants (PUBMED:35640347). This suggests that peer influence can be a powerful tool in improving case outcomes in child welfare settings. Similarly, the COMPASS randomized clinical trial found that counseling by a computer-based virtual advisor was no worse than counseling by trained human advisors for increasing 12-month walking levels among inactive adults (PUBMED:32986075). This indicates that peer influence, even when simulated by a computer program, can be effective in promoting physical activity. The PESSOA program, a school-based intervention, demonstrated that social support and behavioral regulation of exercise from peers and parents mediated increases in physical activity and quality of life (PUBMED:25274722). This further supports the idea that peer influence, including support from nonschool friendship networks, can be beneficial in health promotion. Moreover, the study on social support and self-esteem in adolescence highlighted the importance of support from friends, particularly when maternal support was absent (PUBMED:24277649). This underscores the potential value of peer-based interventions outside of the family context. Lastly, the study on peer-to-peer human milk sharing revealed that despite challenges, mothers continued to pursue this practice, indicating a strong influence of peer networks on health-related decisions (PUBMED:29723103). In conclusion, the evidence from the abstracts supports the notion that peer-based health promotion programs should consider targeting nonschool friendship networks as they can significantly influence health behaviors and outcomes.
Instruction: Follicular and Hürthle cell lesions of the thyroid: can inconclusive results be minimized? Abstracts: abstract_id: PUBMED:19068668 Follicular and Hürthle cell lesions of the thyroid: can inconclusive results be minimized? Objective: To assess the cytologic criteria for distinguishing neoplastic from nonneoplastic follicular cell and Hürthle cell thyroid lesions. Study Design: Ten previously described and commonly used cytologic criteria were evaluated and graded on a 0-4 scale in a consecutive series of thyroid fine needle aspirations (FNAs) reported as follicular or Hürthle cell neoplasms or lesions. Scoring was compared to subsequent surgical outcome. Results: A total of 93 (57fo llicular cell and 36 Hühle cell) cases was analyzed. No individual cytologic feature was helpful in distinguishing benign neoplarms from malignancy in either category (p &gt; 0.05), but 4 or more coexistent cytologic features in combination were identified in 50.0% of follicular neoplasms, 13.6% of Hürthle cell neoplasms and none of the nonneoplastic lesions. An unexpected number (13 of 93, 14.0%) of unrecognized papillary carcinomas, some of follicular subtype, was encountered. Conclusion: In this series, the indeterminate thyroid FNA category could have been reduced by diagnosis of samples with 4 or more of the studied criteria as definite follicular (50% of cases) or Hürthle cell (13.6% of cases) neoplasms and by more astute recognition of papillary carcinomas (14.0% of cases), which blend into this category, often as a result of less-than-optimal sampling or preservation. abstract_id: PUBMED:25759796 CBX7 Expression in Oncocytic Thyroid Neoplastic Lesions (Hürthle Cell Adenomas and Carcinomas). Background: Previous analysis of CBX7 expression in a large number of thyroid adenoma and carcinoma samples revealed a progressive reduction of CBX7 levels that was well related with the malignant grade of thyroid neoplasias. Hürthle cell tumors are unusual thyroid neoplasms characterized by the presence of particular cells called oncocytes. Objectives: In order to develop new tools for a more accurate diagnosis of Hürthle cell tumors of the thyroid, we evaluated CBX7 protein levels to verify the possible presence of an expression signature. Methods: CBX7 expression was evaluated by immunohistochemistry in a panel of thyroid tissue sections including normal thyroids, goiters, follicular adenomas and oncocytic lesions. Results: CBX7 expression was low or null in 68% of Hürthle adenomas, whereas it was comparable to normal thyroid tissue in Hürthle hyperplasias and follicular adenomas. Conclusions: Reduced expression of CBX7 suggests a more aggressive identity of Hürthle adenomas with respect to non-Hürthle ones. abstract_id: PUBMED:25810698 Appearance of Hürthle cell carcinoma soon after surgical extirpation of Hürthle cell adenoma and follicular adenoma of the thyroid gland. Background: Hürthle cell neoplasms could be benign (Hürthle cell adenoma) or malignant (Hürthle cell carcinoma). Hürthle cell carcinoma is a rare tumour, representing 5% of all differentiated thyroid carcinomas. The cytological evaluation of Hürthle cell neoplasms by fine needle aspiration biopsy (FNAB) is complicated because of the presence of Hürthle cells in both Hürthle cell adenoma and Hürthle cell carcinoma. Thus, the preoperative distinction between these two entities is very difficult and possible only with pathohistological findings of the removed tumour. Case Report: A 57-year old female patient was admitted at our Department, for investigation of nodular thyroid gland. She was euthyroid and FNAB of the nodules in both thyroid lobes were consistent of Hürthle cell adenoma with cellular atypias. After thyroidectomy the histopathology revealed Hürthle cell adenoma with high cellular content and discrete cellular atypias in the left lobe and follicular thyroid adenoma without cellular atypias in the right lobe. One year after substitution therapy, a palpable tumour on the left side of the remnant tissue was found, significantly growing with time, presented as hot nodule on (99m)Tc-sestamibi scan and conclusive with Hürthle cell adenoma with marked cellularity on FNAB. Tumorectomy was performed and well-differentiated Hürthle cell carcinoma detected. The patient received ablative dose of 100 mCi (131)I. No signs of metastatic disease are present up to date. Conclusions: The differences between Hürthle cell adenomas and Hürthle cell carcinomas could be clearly made only by histopathological evaluation. Patients with cytological diagnosis of Hürthle cell neoplasms should proceed to total thyroidectomy, especially if tumour size is &gt; 1cm, FNAB findings comprise cellular atypias and/or multiple bilateral nodules are detected in the thyroid gland. abstract_id: PUBMED:35332982 Copy number variations identified in thyroid FNA specimens are associated with Hürthle cell cytomorphology. Background: The fine-needle aspiration (FNA) diagnosis of thyroid Hürthle cell neoplasms (HCNs) remains challenging. This study explored a possible association of copy number variations (CNVs) with Hürthle cell lesions of the thyroid. Methods: Thyroid FNA cases that were diagnosed as follicular lesion of undetermined significance (FLUS) or follicular neoplasm (FN)/HCN for which the ThyroSeq version 3 genomic classifier test was performed were retrieved. Results: A total of 324 thyroid FNA cases (228 FLUS cases, 46 HCN cases, and 50 FN cases) were included in the study. FLUS cases were further classified as Hürthle cell type (follicular lesion of undetermined significance-Hürthle cell type [FLUS-HCT]; 20 cases) or non-Hürthle cell type (follicular lesion of undetermined significance-non-Hürthle cell type [FLUS-NHCT]; 208 cases). HCN and FLUS-HCT cases showed a higher prevalence of CNVs (23 of 66 [35%]) in comparison with those classified as FN or FLUS-NHCT (14 of 258 [5%]; P &lt; .001). A total of 105 patients had histopathologic follow-up. Cases with CNVs were more likely to be neoplastic (18 of 26 [69%]) and associated with Hürthle cell changes (14 of 26 [54%]) in comparison with cases without any molecular alterations (neoplastic, 8 of 24 [33%]; Hürthle cell changes, 2 of 24 [8%]; P &lt; .05). In HCN/FLUS-HCT cases with CNVs (n = 14), Hürthle cell changes (13 of 14 [93%]) and neoplasms (9 of 14 [64%]) were more likely to be seen on surgical follow-up in comparison with the 17 cases without CNVs (Hürthle cell changes, 6 of 17 [35%]; neoplastic, 3 of 17 [18%]; P &lt; .05). Conclusions: CNVs identified in thyroid FNA cases are associated with Hürthle cell morphology and are suggestive of a neoplasm with Hürthle cell features in thyroid FNAs classified as FLUS-HCT/HCN. This finding may be helpful in triaging patients who would benefit from surgical management. abstract_id: PUBMED:30668897 The prevalence and surgical outcomes of Hürthle cell lesions in FNAs of the thyroid: A multi-institutional study in 6 Asian countries. Background: Hürthle cell-rich nodules (HCNs) encompass non-neoplastic to malignant lesions. There is paucity of literature on the frequency distribution of HCNs among Bethesda categories, histologic follow-up, risk of malignancy (ROM), and risk of neoplasia (RON). The objective of this retrospective, multi-institutional study was to determine the prevalence of the cytologic diagnostic category and surgical outcomes of patients with HCN. Methods: Nine tertiary health centers representing 6 Asian countries participated. Cases were retrieved from respective databases. The Bethesda System for Reporting Thyroid Cytopathology was used. Cytology results were correlated with surgical diagnoses. Results: Of 42,190 thyroid aspirates retrieved, 760 (1.8%) had a Hürthle cell predominance. Most (61%) were categorized as atypia of undetermined significance/follicular lesion of undetermined significance, Hürthle cell type" (AUS-H); 35% were categorized as follicular neoplasm, Hürthle cell type (FN-H); and 4% were categorized as suspicious for malignancy (SFM). Histologic follow-up was available for 288 aspirates (38%). Most were benign on resection (66%), and the most common histologic diagnosis was Hürthle cell adenoma (28.5%). The ROM for AUS-H, FN-H, and SFM, as calculated on resected nodules, was 32%, 31%, and 71%, respectively; and the RON was 47%, 81%, and 77%, respectively. The 5 institutions that had an AUS-H:HCN ratio below 0.5 diagnosed HCN less frequently as AUS-H than as FN-H. Conclusions: This is the largest, contemporary, multi-institutional series of HCNs with surgical follow-up. Although there was wide interinstitutional variation in prevalence and surgical outcomes, there was no significant difference in the ROM among institutions. The categories AUS-H and FN-H had a similar ROM for resected nodules. abstract_id: PUBMED:27322887 Impact of specific patterns on the sensitivity for follicular and Hurthle cell carcinoma in thyroid fine-needle aspiration. Background: To the authors' knowledge, the impact of specific cytologic patterns in thyroid fine-needle aspiration (FNA) for follicular and Hurthle cell carcinoma is not known. Methods: The results of all thyroid FNAs performed from 1997 through 2015 with corresponding resections of follicular and Hurthle cell carcinomas were reviewed. Results: A total of 24 follicular carcinomas and 35 Hurthle cell carcinomas were identified. All FNA specimens from follicular carcinomas had a microfollicular component (3 of which were noted only in 3-dimensional groups), but only 18 cases (75%) had cytologic atypia, which most commonly consisted of enlarged nuclei with slightly pale chromatin. Diagnosing cases without cytologic atypia as benign would significantly decrease the sensitivity of the test (P = .02). Five Hurthle cell FNA specimens consisted of only scant Hurthle cells. The remaining Hurthle cell FNA specimens could be divided into those with large cell dysplasia (15 cases), those with small cell dysplasia (10 cases), cases with anisonucleosis insufficient for a diagnosis of large cell dysplasia (4 cases), and cases with Hurthle cells in flat sheets without atypia (1 case). Diagnosing cases with flat sheets without atypia as benign did not appear to affect the sensitivity of the test (P = 1.00). Conclusions: Diagnosing FNA specimens with microfollicles without atypia as benign appears to significantly reduce the sensitivity for follicular carcinoma to 75%; in contrast, diagnosing FNA specimens with Hurthle cells in flat sheets without anisonucleosis as benign does not. Cancer Cytopathol 2016;124:729-36. © 2016 American Cancer Society. abstract_id: PUBMED:34452575 Thyroid Nodule Size as a Predictor of Malignancy in Follicular and Hurthle Neoplasms. Introduction: The management of follicular (FN) and Hurthle cell neoplasms (HCN) is often difficult because of the uncertainty of malignancy risk. We aimed to assess characteristics of benign and malignant follicular and Hurthle neoplasms based on their shape and size. Materials And Methods: Patients with Follicular adenoma (FA) or carcinoma (FC) and Hurthle Cell adenoma (HCA) or carcinoma (HCC) who had preoperative ultrasonography were included. Demographic data were retrieved. Size and shape of the nodules were measured. Logistic regression analyses and odds ratios were performed. Results: A total of 115 nodules with 57 carcinomas and 58 adenomas were included. Logistic regression analysis shows that the nodule height and the patient age are predictors of malignancy (p-values = 0.001 and 0.042). A cutoff value of nodule height ≥ 4 cm. produces an odds ratio of 4.5 (p-value = 0.006). An age ≥ 55 year-old demonstrates an odds ratio of 2.4-3.6 (p-value = 0.03). Taller-than-wide shape was not statistically significant (p-value = 0.613). Conclusion: FC and HCC are larger than FA and HCA in size, with a cutoff at 4 cm. Increasing age increases the odds of malignancy with a cutoff at 55 year-old. Taller-than-wide shape is not a predictor of malignancy. abstract_id: PUBMED:32683470 Clinical impact of follicular oncocytic (Hürthle cell) carcinoma in comparison with corresponding classical follicular thyroid carcinoma. Purpose: There are controversial debates if patients with Hürthle cell carcinoma, also known as oxyphilic or oncocytic cell follicular thyroid carcinoma, have a poorer outcome. In this study, we systematically evaluated the clinical outcome in a large patient cohort following thyroidectomy and initial I-131 radioactive iodine therapy (RIT). Methods: We retrospectively evaluated a total of 378 patients with diagnosed oncocytic follicular Hürthle cell carcinoma (OFTC) (N = 126) or with classical follicular thyroid carcinoma (FTC) (N = 252). Patients received thyroidectomy and complementary I-131 RIT. Clinical data regarding basic demographic characteristics, tumor grade, persistent disease and recurrence during follow-up, and disease-free, disease-specific, and overall survival were collected during follow-up of 6.9 years (interquartile range 3.7; 11.7 years). Univariate and multivariate analyses were used to identify factors associated with disease-related and overall survival. Results: Before and after matching for risk factors, recurrence was significantly more frequently diagnosed in OFTC patients during follow-up (17% vs. 8%; p value 0.037). Likewise, OFTC patients presented with a reduced mean disease-free survival of 17.9 years (95% CI 16.0-19.8) vs. 20.1 years (95% CI 19.0-21.1) in FTC patients (p value 0.027). Multivariate analysis revealed OFTC (HR 0.502; 95% CI 0.309-0.816) as the only independent prognostic factor for disease-free survival. Distant metastases of OFTC patients were significantly less iodine-avid (p value 0.014). Mean disease-specific and overall survival did not differ significantly (p value 0.671 and 0.687) during follow-up of median 6.9 years (3.7; 11.7 years). Conclusions: Our study suggests that recurrence is more often seen in OFTC patients. OFTC patients have a poorer prognosis for disease-free survival. Thus, OFTC and FTC behave differently and should be categorized separately. However, patients suffering from OFTC present with the same overall and disease-specific survival at the end of follow-up indifferent to FTC patients after initial RIT. abstract_id: PUBMED:19393417 Fine-needle aspiration biopsies in the management of indeterminate follicular and Hurthle cell thyroid lesions. Objectives: To determine the value of fine-needle aspiration biopsies (FNABs) of the thyroid and stratify the risk of malignancy within the indeterminate FNAB diagnostic category at our institution. Study Design: Case series with chart review of preoperative FNABs of consecutive patients who underwent total thyroidectomy between 2005 and 2007. Subjects And Methods: A total of 115 cases were reviewed, and FNABs were categorized into four groups: benign, positive or suspicious for malignancy, indeterminate (follicular or Hurthle cell lesions), and nondiagnostic. Cytohistologic correlation was then established. Results: The accuracy of FNAB in detecting thyroid malignancy was 88 percent with false-negative and false-positive rates of 13 percent and 7 percent, respectively. Overall, 52 percent of the indeterminate cases were carcinomas (48 percent of follicular lesions and 62 percent of Hurthle cell lesions). In the presence of cytologic atypia, the rate of malignancy increased to 75 percent and 83 percent for the follicular and Hurthle cell lesions, respectively. Conclusions: FNAB is an accurate and helpful method for the evaluation of thyroid nodules with results directly correlating with management. Surgery should be considered for FNABs categorized as indeterminate, especially in the presence of cytologic atypia. Because of the high false-negative rate, benign FNABs require close follow-up with ultrasound examination and periodic biopsies. abstract_id: PUBMED:20637335 Serum thyroglobulin is a poor diagnostic biomarker of malignancy in follicular and Hurthle-cell neoplasms of the thyroid. Background: Serum thyroglobulin (Tg) is the most accurate biomarker for thyroid cancer recurrence. However, some clinicians measure preoperative Tg as a diagnostic cancer marker despite lack of supporting evidence. We examined whether Tg accurately predicts malignancy in follicular or Hürthle-cell neoplasms. Methods: We reviewed 366 patients who underwent thyroidectomies for follicular/Hürthle-cell neoplasms. We compared Tg in malignant versus benign tumors by univariate and receiver-operator characteristic analyses. We also examined several Tg-derived indices that normalized Tg to known confounding factors including nodule size, thyroid function, and type of Tg assay. Results: Thirty-nine patients met inclusion criteria for analysis. There were no differences between malignant (n = 16) and benign (n = 23) lesions in Tg or any of the normalized indexes. Receiver-operator characteristic analysis revealed an area under the curve of .59. Lesions with Tg levels greater than 500 mug/L had a positive predictive value of .75. Conclusions: Tg has poor accuracy for predicting malignancy in follicular or Hürthle-cell thyroid neoplasms. Answer: The challenge of distinguishing between benign and malignant follicular and Hürthle cell lesions of the thyroid based on fine needle aspirations (FNAs) is a significant one. The study by PUBMED:19068668 found that no individual cytologic feature was helpful in distinguishing benign neoplasms from malignancy in either follicular cell or Hürthle cell categories. However, the presence of four or more coexistent cytologic features in combination was identified in 50% of follicular neoplasms and 13.6% of Hürthle cell neoplasms, but not in nonneoplastic lesions. Additionally, a notable number of unrecognized papillary carcinomas were encountered, suggesting that a more astute recognition of papillary carcinomas could reduce indeterminate thyroid FNA categories. The expression of CBX7, as reported by PUBMED:25759796, was found to be low or null in 68% of Hürthle adenomas, while it was comparable to normal thyroid tissue in Hürthle hyperplasias and follicular adenomas. This suggests that reduced CBX7 expression may indicate a more aggressive identity of Hürthle adenomas compared to non-Hürthle ones. PUBMED:25810698 emphasizes that the differences between Hürthle cell adenomas and carcinomas can only be clearly made by histopathological evaluation. It recommends that patients with cytological diagnoses of Hürthle cell neoplasms should proceed to total thyroidectomy, especially under certain conditions such as tumor size greater than 1cm, FNAB findings of cellular atypias, or detection of multiple bilateral nodules in the thyroid gland. The study by PUBMED:35332982 found that copy number variations (CNVs) identified in thyroid FNA cases are associated with Hürthle cell morphology and suggest a neoplasm with Hürthle cell features. This finding could be helpful in triaging patients who would benefit from surgical management.
Instruction: Is there a north-south divide in social class inequalities in health in Great Britain? Abstracts: abstract_id: PUBMED:15117791 Is there a north-south divide in social class inequalities in health in Great Britain? Cross sectional study using data from the 2001 census. Objective: To examine individual social class inequalities in self rated general health within and between the constituent countries of Great Britain and the regions of England. Design: Cross sectional study using data from the 2001 national census. Setting: Great Britain. Participants: Adults aged between 25 and 64 in Great Britain and enumerated in the 2001 population census (n = 25.6 million). Main Outcome Measures: European age standardised rates of self rated general health, for men and women classified by the government social class scheme. Results: In each of the seven social classes, Wales and the North East and North West regions of England had high rates of poor health. There were large social class inequalities in self rated health, with rates of poor health generally increasing from class 1 (higher professional occupations) to class 7 (routine occupations). The size of the health divide varied between regions: the largest rate ratios for routine versus higher professional classes were for Scotland (2.9 for men; 2.8 for women) and London (2.9 for men; 2.4 for women). Women had higher rates of poor health compared to men in the same social class, except in class 6 (semi-routine occupations). Conclusions: A northwest-southeast divide in social class inequalities existed in Great Britain at the start of the 21st century, with each of the seven social classes having higher rates of poor health in Wales, the North East and North West regions of England than elsewhere. The widest health gap between social classes, however, was in Scotland and London, adding another dimension to the policy debate on resource allocation and targets to tackle the health divide. abstract_id: PUBMED:31297431 Theorising social class and its application to the study of health inequalities. The literature on health inequalities often uses measures of socio-economic position pragmatically to rank the population to describe inequalities in health rather than to understand social and economic relationships between groups. Theoretical considerations about the meaning of different measures, the social processes they describe, and how these might link to health are often limited. This paper builds upon Wright's synthesis of social class theories to propose a new integrated model for understanding social class as applied to health. This model incorporates several social class mechanisms: social background and early years' circumstances; Bourdieu's habitus and distinction; social closure and opportunity hoarding; Marxist conflict over production (domination and exploitation); and Weberian conflict over distribution. The importance of discrimination and prejudice in determining the opportunities for groups is also explicitly recognised, as is the relationship with health behaviours. In linking the different social class processes we have created an integrated theory of how and why social class causes inequalities in health. Further work is required to test this approach, to promote greater understanding of researchers of the social processes underlying different measures, and to understand how better and more comprehensive data on the range of social class processes these might be collected in the future. abstract_id: PUBMED:25545770 Social class inequalities in health among occupational cohorts from Finland, Britain and Japan: a follow up study. We examined whether relative occupational social class inequalities in physical health functioning widen, narrow or remain stable among white collar employees from three affluent countries. Health functioning was assessed twice in occupational cohorts from Britain (1997-1999 and 2003-2004), Finland (2000-2002 and 2007) and Japan (1998-1999 and 2003). Widening inequalities were seen for British and Finnish men, whereas inequalities among British and Finnish women remained relatively stable. Japanese women showed reverse inequalities at follow up, but no health inequalities were seen among Japanese men. Health behaviours and social relations explained 4-37% of the magnitude in health inequalities, but not their widening. abstract_id: PUBMED:32462181 Health inequalities are worsening in the North East of England. Background: The North of England, particularly the North East (NE), has worse health (e.g. 2 years lower life expectancy) and higher health inequalities compared to the rest of England. Sources Of Data: We explore this over time drawing on publicly available data. Areas Of Agreement And Controversy: Whilst overall health is improving, within-regional health inequalities are getting worse and the gap between the NE and other regions (particularly the South of England) is worsening. The gap in life expectancy is widening with substantial variation between deprived and affluent areas within the NE. Those living in the NE are more likely to have a shorter lifespan and to spend a larger proportion of their shorter lives in poor health, as well as being more likely to die prematurely from preventable diseases. Growing Points: We highlight wide, and in some cases increasing, inequalities in health outcomes between the NE and the rest of England. This health disadvantage and the north-south health divide are recognized; despite this, the situation appears to be worsening over the time. Areas Timely For Developing Research: Research to understand and reduce health inequalities is needed particularly in the NE of England where reductions could have enhanced the impact. abstract_id: PUBMED:31915863 Digital divide-social inequalities in the utilisation of digital healthcare Today, health inequalities are one of the most important issues in public health worldwide. The digitalisation of healthcare is frequently attributed with the potential to reduce health inequalities. At the same time, effective internet usage is a prerequisite of the successful utilisation of digital health interventions. This might be a new obstacle for those who lack the necessary material resources or individual skills.Evidence on how exactly digital health interventions affect health inequalities is scarce. The aim of this study was to present a narrative review of the available literature. The majority of studies showed an association between the usage of digital health interventions and sociodemographic factors. The utilisation was generally higher among younger people and those with higher education and higher income. Only few studies showed no association. Other studies reported higher utilisation among those with higher levels of health literacy while health literacy showed a social gradient to the disadvantage of those in lower socioeconomic position. With a low overall level of evidence, there is currently no indication that digital health interventions are reducing health inequalities.The studies analysed in this review indicate that existing inequalities persist in the digital realm. Further evidence is needed to create a better understanding of the importance of sociodemographic factors for digital health interventions. abstract_id: PUBMED:26640302 Class in contemporary Britain: comparing the Cultural Capital and Social Exclusion (CCSE) project and the Great British Class Survey (GBCS). The paper discusses the salience of class in Britain in relation to the experiment of the BBC-academic partnership of the Great British Class Survey (GBCS). It addresses the claimed inauguration of a third phase in class analysis in the UK sparked by the experiment. This is done by considering three main issues. First, the GBCS experiment is situated in the context of various explorations of cultural class analyses, and chiefly in relation to the Cultural Capital and Social Exclusion (CCSE) project (ESRC funded 2003-6). Secondly, the focus is on the influence of the academic turn to big data for the procedures and claims of the project, and some implications of the methodological choices. Thirdly, attention is turned to the deleterious effects of commercial and institutional pressures on the current research culture in which the experiment exists. abstract_id: PUBMED:25748068 Are the differences in adulthood ill-health across the north-south divide and between Scotland and England also evident in early childhood health indicators? Regional differences in adult morbidity and mortality within England (i.e., north-south divide or gradient) and between England and Scotland (i.e., Scottish effect) are only partly explained by adult levels of socioeconomic status or risk factors. This suggests variation in early life, and is supported by the foetal origins and life-course literature which posits that birth outcomes and subsequent, cumulative exposures influence adult health. However, no studies have examined the north-south gradient or Scottish effect in health in the earliest years of life. The aims of the study were: i) to examine health indicators in English and Scottish children at birth and age three to establish whether regional differences exist; and ii) to establish whether observed changes in child health at age three were attributable to birth and/or early life environmental exposures. Respondents included 10,639 biological Caucasian mothers of singleton children recruited to the Millennium Cohort Study (MCS) in the year 2000. Outcome variables were: gestational age and birth weight, and height, body mass index (BMI), and externalising behavioural problems at age three. Region/country was categorised as: South (reference), Midlands, North (England), and Scotland. Respondents provided information on child, maternal, household, and socioeconomic characteristics. Results indicated no significant regional variations for gestational age or birth weight. At age three there was a north-south gradient for externalising behaviour and a north-south divide in BMI which attenuated on adjustment. However, a north-south divide in height was not fully explained by adjustment. There was also evidence of a 'Midlands effect', with increased likelihood of shorter stature and behaviour problems. Results showed a Scottish effect for height and BMI in the unadjusted models, and height in the adjusted model, but a decreased likelihood of behaviour problems. Findings indicated no regional differences in health at birth, but some regional variation at age three supports the cumulative life-course model. abstract_id: PUBMED:38049789 Reducing intervention- and research-induced inequalities to tackle the digital divide in health promotion. Social inequalities are an important contributor to the global burden of disease within and between countries. Using digital technology in health promotion and healthcare is seen by some as a potential lever to reduce these inequalities; however, research suggests that digital technology risks re-enacting or evening widening disparities. Most research on this digital health divide focuses on a small number of social inequality indicators and stems from Western, educated, industrialized, rich, and democratic (WEIRD) countries. There is a need for systematic, international, and interdisciplinary contextualized research on the impact of social inequality indicators in digital health as well as the underlying mechanisms of this digital divide across the globe to reduce health disparities. In June 2023, eighteen multi-disciplinary researchers representing thirteen countries from six continents came together to discuss current issues in the field of digital health promotion and healthcare contributing to the digital divide. Ways that current practices in research contribute to the digital health divide were explored, including intervention development, testing, and implementation. Based on the dialogue, we provide suggestions for overcoming barriers and improving practices across disciplines, countries, and sectors. The research community must actively advocate for system-level changes regarding policy and research to reduce the digital divide and so improve digital health for all. abstract_id: PUBMED:35874800 Patterns of social inequality in arts and cultural participation: Findings from a nationally representative sample of adults living in the United Kingdom of Great Britain and Northern Ireland. Context: A significant amount of literature indicates the health benefits of arts engagement. However, as this engagement is socially patterned, differential access to and participation in the arts may contribute to social and health inequalities. Objective: This study aimed to uncover the patterns of participation in arts activities and engagement with culture and heritage among adults in the United Kingdom of Great Britain and Northern Ireland, and to examine whether such patterns are associated with demographic and socioeconomic characteristics. Methodology: We applied latent class analysis to data on arts and cultural participation among 30 695 people in the Understanding Society study. Multinomial logistic regression was used to identify predictors for the patterns of activity engagement. Results: For arts participation, adults were clustered into "engaged omnivores," "visual and literary arts," "performing arts" and "disengaged." For cultural engagement, adults were clustered into "frequently engaged," "infrequently engaged" and "rarely engaged." Regression analysis showed that the patterns of arts activity were structured by demographic and socioeconomic factors. Conclusion: This study reveals a social gradient in arts and cultural engagement. Given the health benefits of arts engagement, this suggests the importance of promoting equal access to arts and cultural programmes, to ensure that unequal engagement does not exacerbate health inequalities. abstract_id: PUBMED:24643674 Grim up North or Northern grit? Recessions and the English spatial health divide (1991-2010). Background: Previous research suggests that the health effects of recessions are mixed and vary spatially between countries. Using the North-South English health divide as an example, this paper examines whether there are also spatial variations within countries. Methods: Cross-sectional data on self-reported 'not good health' was obtained from the British Household Panel Survey and the Health Survey for England from 1991 to 2010. Age-adjusted generalized linear models were used to examine the effects of recessions (1990/91 and 2008/09) on self-reported health in the four English NHS Commissioning Regions (North, South, Midlands and London) with stratification by gender. Results: Over the 20-year study period, the North had consistently higher rates of 'not good health' than the South [OR 1.50 (1.46-1.55) outside recessions and OR 1.29 (1.19-1.39) during recessions]. However, during periods of recession, this health divide narrowed slightly with a 2% decrease in the prevalence of 'not good health' in the North [OR 0.91 (0.86, 0.96)]. Conclusion: This study is evidence of spatial variations in the health effects of recessions within England and the North-South divide appears to slightly reduce during recessions. Health in the North remains worse than the South. Answer: Yes, there is a north-south divide in social class inequalities in health in Great Britain. A cross-sectional study using data from the 2001 national census found that in each of the seven social classes, Wales and the North East and North West regions of England had higher rates of poor health. The study also observed large social class inequalities in self-rated health, with rates of poor health generally increasing from class 1 (higher professional occupations) to class 7 (routine occupations). The size of the health divide varied between regions, with the largest rate ratios for routine versus higher professional classes being for Scotland and London. This indicates a northwest-southeast divide in social class inequalities in health in Great Britain at the start of the 21st century (PUBMED:15117791). Further research has shown that health inequalities are worsening in the North East of England, with the region experiencing worse health outcomes, such as a lower life expectancy and higher rates of preventable diseases, compared to the rest of England. The gap in life expectancy between the North East and other regions, particularly the South of England, is widening, and within-regional health inequalities are getting worse (PUBMED:32462181). Additionally, a study on regional differences in adult morbidity and mortality within England and between England and Scotland found that these differences are only partly explained by adult levels of socioeconomic status or risk factors, suggesting that variation in early life influences adult health. This supports the life-course literature which posits that birth outcomes and subsequent exposures influence adult health. The study found some regional variation in child health at age three, which supports the cumulative life-course model (PUBMED:25748068). Overall, the evidence suggests that there is indeed a north-south divide in social class inequalities in health in Great Britain, with the divide being more pronounced in certain regions and persisting over time.
Instruction: Relationship between job demand and burnout in nurses: does it depend on work engagement? Abstracts: abstract_id: PUBMED:27111251 Relationship between job demand and burnout in nurses: does it depend on work engagement? Aim: The present study aimed to deepen the understanding of the relationships among job demands, control, social support, burnout and engagement in nurses. Background: Burnout is a prevalent phenomenon among nurses because of the interaction between high demands and low resources, according to the job demands-resources model. Methods: A descriptive, correlational design was used in a stratified random sample of 100 nurses recruited from two Spanish hospitals. Job demand, social support, control, engagement, and burnout were measured. Data were analysed by hierarchical regression analysis. Results: Social support is a significant predictor of nurses' engagement and demands is a predictor of nurses' burnout. Work engagement moderates the relationship between job demands and burnout. Conclusions: The process that leads to burnout and the process that leads to engagement are not isolated processes; engagement acts as a moderator of burnout. Implications For Nursing Management: The prevailing paradigm in combating burnout in nursing can be changed and could be based on the enhancement of nurses' strengths through increasing engagement. abstract_id: PUBMED:27275218 Work Demands-Burnout and Job Engagement-Job Satisfaction Relationships: Teamwork as a Mediator and Moderator. Background: Few studies have examined teamwork as mediator and moderator of work demands-burnout and job engagement-job satisfaction relationships in healthcare workers (HCWs) in South-East Europe. Aim: To assess mediation and moderation effect of teamwork on the relationship between independent (work demands or job engagement) and dependent (burnout or job satisfaction) variables. Methods: Work demands, burnout, job engagement, and job satisfaction were measured with Hospital Experience Scale, Maslach Burnout Inventory, Utrecht Work Engagement Scale, and Job Satisfaction Survey, respectively. Hospital Survey on Patient Safety Culture was used for assessment of teamwork. In order to examine role of teamwork as a mediating variable we fit series of regression models for burnout and job satisfaction. We also fit regression models predicting outcome (burnout or job satisfaction) from predictor (work demands or job engagement) and moderator (teamwork) variable. Results: Teamwork was partial mediator of work demands-burnout relationship and full mediator of job engagement-job satisfaction relationship. We found that only job engagement-job satisfaction relationship was moderated by teamwork. Conclusions: Occupational health services should target detection of burnout in HCWs and implementation of organizational interventions in hospitals, taking into account findings that teamwork predicted reduced burnout and higher job satisfaction. abstract_id: PUBMED:30923507 Work Values of Police Officers and Their Relationship With Job Burnout and Work Engagement. Values represent people's highest priorities and are cognitive representations of basic motivations. Work values determine what is important for employees in their work and what they want to achieve in their work. Past research shows that levels of both aspects of job-related well-being, job burnout and work engagement, are related to work values. The policing profession is associated with high engagement and a risk of burnout. There is a gap in the literature regarding the hierarchy of work values in police officers, how work values are associated with job burnout and work engagement in this group, and whether work values in police officers are sensitive to different levels of job burnout and work engagement. Therefore, the aim of our study was to examine the relationships between work values and job burnout and work engagement, in a group of experienced police officers. We investigated: (a) the hierarchy of work values based on Super's theory of career development, (b) relationships between work values and burnout and work engagement, and (c) differences between the work values in four groups (burned-out, strained, engaged, and relaxed). A group of 234 Polish police officers completed the Work Values Inventory (WVI) modeled upon Super's theory, the Oldenburg Burnout Inventory and the Utrecht Work Engagement Scale. The results show that police officers gave the highest priority to extrinsic work values. Job burnout was negatively correlated with the cognitive intrinsic work values (Creativity, Challenge, and Variety), while work engagement was positively correlated with the largest group of intrinsic work values (Creativity, Challenge, Variety, Altruism, and Achievement), as well as with the extrinsic work values (Prestige and Co-workers). The police officers showed significant differences, between levels of job burnout and work engagement, for intrinsic work values such as Variety, Challenge, and Creativity (large effects), and for Altruism and Prestige (moderate effects). The findings are discussed within the context of the Conservation of Resources theory, which explains how people invest and protect their personal resources, and how this is connected with preferred work values. We conclude that intrinsic work values are sensitive to different levels of burnout and engagement. abstract_id: PUBMED:37908133 The impact of job burnout on nurses' caring behaviors: Exploring the mediating role of work engagement and job motivation. Aim: To investigate the influence of job burnout on nurses' caring behaviors, with a specific focus on the serial mediating role of work engagement and job motivation. Background: Job burnout is a prevalent issue among nurses and poses significant challenges to their ability to deliver compassionate care. However, the underlying mechanisms that explain the relationship between job burnout and nurses' caring behaviors, which could provide valuable insights for developing targeted interventions and strategies to support nurse well-being and ensure the consistent provision of compassionate care, remain underexplored. Methods: A cross-sectional design was used to gather data from 520 clinical nurses working in various acute healthcare settings in the Philippines. Mediation analyses were conducted using the PROCESS Macro with Model 6. Results: Nurses exhibited high levels of caring behaviors (M = 4.219); however, they experienced moderate levels of job burnout (M = 2.342). Job burnout had a negative effect (β = -0.0603, p = 0.0482) on nurses' caring behaviors. Work engagement and job motivation sequentially mediated the relationship between job burnout and nurse caring behaviors (β = -0.0126, p &lt; 0.05). Conclusions: Higher levels of job burnout were associated with a decreased demonstration of nurses' caring behaviors. Job burnout among nurses led to a decrease in their level of work engagement and job motivation, ultimately impacting their ability to provide caring behaviors to patients. Implications For Nursing And Health Policy: Organizations should prioritize creating a positive work environment and implementing theory-driven strategies to address job burnout, which can foster work engagement and job motivation among nurses, thereby providing crucial support for them in delivering high-quality, compassionate care to patients. abstract_id: PUBMED:35928489 Job demands, resources, and task performance in Chinese social workers: Roles of burnout and work engagement. Social work is a rapidly developing occupation in China. In the early 2000s, there were merely a few hundred thousand social workers, but by 2020 there were over 1.5 million social workers in the field. However, research has indicated these social workers are also experiencing record high burnout and turnover rates. Thus, researchers have started to question the work engagement and task performance factors that could be contributing to these increasing rates. This study uses the Job Demands and Resources (JD-R) Theory to understand how 537 social workers from Guangzhou, China are impacted by burnout and how it influences work engagement and task performance. The results show JD-R directly affect task performance through burnout and work engagement via a dual process. First, job demands were associated with high burnout and low work engagement, which both were found to lead to low task performance. Second, job resources were related to low burnout rates and high work engagement, both of which were associated with high task performance. These findings call for healthcare interventions to reduce burnout and workplace policy changes to promote work engagement to support task performance in social workers in China. These factors can each have a crucial impact on the public health of both the affected social workers and the vulnerable clients these social workers serve. abstract_id: PUBMED:37321135 Personality moderators of the cross-sectional relationship between job demands and both burnout and work engagement in judges: The boosting effects of conscientiousness and introversion. The central question of this study is whether buffering, boosting and exacerbating effects of the Big Five personality factors extraversion, openness to experience, agreeableness, conscientiousness and neuroticism can be demonstrated in the relationship between two job demands (i.e., work pressure and working overtime) and both burnout and work engagement in 257 Dutch judges. It is important to better understand the interaction effects between various job demands (work pressure and working overtime) and personality on both burnout and work engagement in judges given their increased risk of burnout and lower work engagement due to cognitively and their emotionally demanding work. Three hypotheses were tested in a cross-sectional design study. Moderation analyses showed that, as expected, conscientiousness significantly boosted the relationship between working overtime and work engagement. Hence, high scorers on conscientiousness showed more work engagement when working overtime. Also, extraversion moderated the relation between working overtime and work engagement, but only at a low level of extraversion. Thus, contrary to expectations, introverts showed more work engagement when they work overtime. Also, significant main effects were found. Work pressure and neuroticism related positively to burnout, while extraversion and agreeableness related negatively to burnout. Moreover, extraversion, agreeableness and conscientiousness related positively to work engagement. In our study, conscientiousness, extraversion and agreeableness can be considered as personal resources for judges, in line with the Conservation of Resources (COR) theory. Especially conscientiousness can facilitate judges to cope with challenging working circumstances and introversion ensures that judges stay engaged despite working overtime. abstract_id: PUBMED:37063548 Work addiction, work engagement, job burnout, and perceived stress: A network analysis. Introduction: Recently, the network theory of mental disorders has been used to conceptualize work addiction as a dynamic system of symptoms in direct relationships. This study aimed to extend previous work by investigating the direct relationships of work addiction symptoms with dimensions of work engagement, job burnout, and perceived stress. Methods: These phenomena were measured with the Bergen Work Addiction Scale, the Utrecht Work Engagement Scale, the Maslach Burnout Inventory-General Survey, and the Perceived Stress Scale. The sample comprised 676 working Poles with a mean age of 36.12 years (SD = 11.23). The network analysis followed the guidelines for estimating psychological networks from cross-sectional data. Results: Work engagement and job burnout were more closely associated with each other than with work addiction which supports the notion that engagement and burnout represent polar opposites of the same construct and that work addiction is a separate phenomenon, related to both work engagement and job burnout via specific pathways. The symptoms of work addiction were connected with other phenomena through four direct relationships: (1) mood modification-absorption, (2) mood modification-stress, (3) withdrawal-absorption, and (4) problems-exhaustion. Discussion: These findings narrow down and specify hypotheses regarding potential mechanisms leading from engagement to addiction and from addiction to burnout. The possible mechanisms focus on the absorption component and mood modification related to efforts focused on alleviating chronic stress and negative emotional states. In turn, problems arising from work addiction may lead to exhaustion. Future studies investigating these mechanisms in detail may enable proper prevention programs and therapeutic interventions. abstract_id: PUBMED:36531194 How trait gratitude relates to teachers' burnout and work engagement: job demands and resources as mediators. The benefits of gratitude in the workplace, in general, and for teachers, in particular, are still understudied. In the present study, we investigated whether teachers' trait gratitude is linked to their work engagement and burnout. Moreover, we explored whether perceived job demands and job resources mediate the relation between employees' gratitude and these two outcomes. A sample of 312 Romanian teachers participated in the study. Participants filled out questionnaires assessing trait gratitude, perceived job characteristics, as well as burnout and work engagement. Results indicated that teachers' trait gratitude was positively associated with their work engagement and negatively with burnout. The relation between trait gratitude and work engagement was mediated by perceived job resources, whereas the link between trait gratitude and burnout was mediated by both job demands and job resources. Our findings suggest that educational institutions could cultivate teachers' gratitude in order to create a healthier and more motivated workforce. abstract_id: PUBMED:24029805 Work engagement and job burnout within the disability support worker population. The aim of this study was to explore work engagement and job burnout within the disability support worker (DSW) population, using the job demands-resources (JD-R) model as a guiding theory. The research measured a set of work-related demands and resources related to working within the disability sector in order to assess which demands/resources account for a significant portion of unique variance when used to model DSW engagement and burnout. This study sampled 258 DSWs from across Australia who completed an online or paper questionnaire that included measures of engagement, burnout and the demands/resources of interest. With regard to demands, role ambiguity was significantly associated with the three engagement scores and the three burnout scores. It also accounted for the most unique variance in the three engagement scores (vigour [VI], dedication [DE] and absorption [AB]), and the personal accomplishment (PA) burnout score. With regard to resources, job feedback was significantly associated with two of the engagement scores (VI and DE) and all three burnout scores. It accounted for the most unique variance in VI and DE, and PA. In conclusion, this research adds to the existing disability workforce literature as it represents one of the first comprehensive investigations of work engagement within this population. Improved job descriptions, on-the-job feedback and the creation of specialist support workers are offered as recommendations to improve the psychosocial health of DSWs. abstract_id: PUBMED:35692525 Self-Care Strategies and Job-Crafting Practices Among Behavior Analysts: Do They Predict Perceptions of Work-Life Balance, Work Engagement, and Burnout? Applied behavior analysis (ABA) practitioners report high levels of burnout, exhibited as exhaustion and disengagement. Turnover, a stressful and costly experience for individual practitioners and the human service organizations that employ them, is a potential consequence of burnout. Work-life balance and work engagement are associated with lower burnout and lower intention to quit. Research concerning behavioral predictors of work-life balance, work engagement, and burnout-all of which are associated with turnover intentions-among ABA service providers is scant. Therefore, the purpose of the current study was to explore whether and how the use of self-care strategies and job-crafting practices influences perceived levels of work-life balance, work engagement, and burnout among ABA practitioners who work in human service settings. In a sample of 826 ABA practitioners, 72% reported medium to high levels of burnout. Hierarchical regression analyses revealed that the use of both self-care strategies and job-crafting practices strongly predicted work-life balance, work engagement, and burnout above and beyond sociodemographic variables (gender and years of experience). Findings can inform the development of effective organizational/systems- and individual-level self-care and job-crafting interventions that support sustainable individual, organizational, and client-related outcomes. We contend that self-care and job-crafting interventions support a culture of well-being in graduate programs, training/supervision curricula, and mentor-mentee relationships. Answer: The relationship between job demand and burnout in nurses does indeed depend on work engagement. According to the study with PUBMED:27111251, work engagement moderates the relationship between job demands and burnout. This means that the process leading to burnout and the process leading to engagement are not isolated; engagement acts as a buffer against burnout. The study suggests that increasing nurses' engagement could be a strategy to combat burnout, shifting the paradigm from focusing solely on the negative aspects to enhancing nurses' strengths. Furthermore, the study with PUBMED:37908133 supports the notion that job burnout negatively affects nurses' caring behaviors, and this relationship is sequentially mediated by work engagement and job motivation. This implies that higher levels of job burnout lead to decreased work engagement and job motivation, which in turn impacts nurses' ability to provide caring behaviors to patients. Additionally, the study with PUBMED:35928489 indicates that job demands are associated with high burnout and low work engagement, leading to low task performance among social workers. Conversely, job resources relate to low burnout rates and high work engagement, which are associated with high task performance. This suggests that the relationship between job demands and burnout can be influenced by the level of work engagement, which also affects job performance. In summary, the relationship between job demand and burnout in nurses is influenced by work engagement. High job demands can lead to burnout, but if nurses are engaged in their work, this can mitigate the negative effects of job demands and reduce the likelihood of burnout. Therefore, strategies to enhance work engagement are crucial in managing the impact of job demands on burnout among nurses.
Instruction: Should we be worried? Abstracts: abstract_id: PUBMED:29223173 The Worried Well? Characteristics of Cognitively Normal Patients Presenting to a Rural and Remote Memory Clinic. Introduction: In an effort to better understand why cognitively normal patients were referred to a memory clinic, we sought to identify features of "worried well" patients to better identify those more likely to be cognitively normal. Methods: In total, 375 consecutive patients referred by primary care practitioners to a Rural and Remote Memory Clinic were categorized into two groups based on their neurologic diagnosis, "worried well" (cognitively normal, N=81) or "other" (patients with any neurologic diagnosis, N=294). Data collected included: age, sex, years of formal education, Mini-Mental Status Examination score from initial visit, Center for Epidemiologic Studies Depression Scale score, Self-Rating of Memory Scale, alcohol consumption, marital status, hours per week of work, past medical history, sleep concerns, and family history of memory concerns. The two groups were compared using t-tests and χ2 tests. The same comparison was done between the same set of "worried well" patients (cognitively normal, N=81) and the subgroup of patients with a diagnosis of Alzheimer's disease (N=146) from the "other" group. Results: Significant differences included younger age, more formal education, more frequently having previous psychiatric diagnosis and more self-reported alcohol consumption in the "worried well" group. The "worried well" and "Alzheimer's disease" comparison had the same significant differences as the "worried well" and "other" comparison. Conclusion: We observed a pattern of differences unfold between the "worried well" patients and those with cognitive disease. No one variable was pathognomonic of a "worried well" patient. However, taking all the above into account when evaluating a patient may help clinically. abstract_id: PUBMED:33067767 Feeling scared or worried self-report in children receiving cancer treatments using the Symptom Screening in Pediatrics Tool (SSPedi). Introduction: The objectives of this study were to describe reports of bother for feeling scared or worried among children with cancer and pediatric hematopoietic stem cell transplant (HSCT) recipients, and to identify factors associated with it. Methods: We included children receiving cancer treatments who were 8-18 years of age. Three patient types were enrolled: inpatients receiving active cancer treatment, outpatients receiving maintenance acute lymphoblastic leukemia chemotherapy, and outpatients in survivorship. Amount of bother due to feeling scared or worried yesterday or today was self-reported using the Symptom Screening in Pediatrics Tool (SSPedi) on a 0-4 scale. Risk factors were evaluated using logistic regression. Results: Among the 502 children included, 225 (45.0%) reported any degree of bother (score ≥ 1) and 29 (5.8%) reported severe bother (score ≥ 3) for feeling scared or worried. In multiple regression evaluating any bother, boys were less likely to be bothered (odds ratio (OR) 0.60, 95% confidence interval (CI) 0.41-0.87) and inpatients receiving active cancer treatment were more likely to be bothered compared to outpatients in survivorship (OR 3.58, 95% CI 2.00-6.52). The only factor associated with being severely bothered by feeling scared or worried was clinic visit or admission due to fever (OR 4.57, 95% CI 1.24-13.60). Discussion: We found 45% of children receiving cancer treatments reported being bothered by feeling scared or worried. Girls and inpatients receiving active treatment experienced more bother of any degree, while visiting the hospital due to fever was associated with being severely bothered. Future work should identify interventions to prevent or alleviate this symptom. abstract_id: PUBMED:33685531 What Happens to the Worried Well? Follow-Up of Subjective Cognitive Impairment. Background: Increasing concern around perceived neurocognitive decline is increasing the number of referrals to specialists and anxiety for patients. We aimed to explore the likelihood of the "worried well" experiencing neurocognitive decline and developing a neurological diagnosis. Methods: A total of 166 "worried well" patients who attended the Rural and Remote Memory Clinic (RRMC) between 2004 and 2019 were included in this study. Demographic, health, social, and behavioral factors were measured at the initial visit. Mini-Mental State Examination (MMSE), Center for Epidemiologic Studies Depression Scale (CESD), and Functional Activities Questionnaire (FAQ) scores were measured and compared at initial assessment and at 1-year follow-up. MMSE scores over time were assessed with a mean follow-up of 2.95 years (SD 2.87). Results: No statistically significant difference was seen in MMSE, CESD, or FAQ scores when comparing clinic day to 1-year follow-up, and no consistent pattern of MMSE score over time was seen. Of the 166 patients with subjective cognitive impairment (SCI) on initial assessment, 5 were diagnosed with Alzheimer's disease (AD) at 8.5, 3.5, 5, 3, and 1.75 years; 2 were diagnosed with MCI at 1 and 2 years; 1 was diagnosed with vascular cognitive impairment at 5 years; and 1 was diagnosed with frontotemporal dementia (FTD) at 0.5 years. Conclusion: The likelihood of a patient with SCI developing a neurological diagnosis is reassuringly low (9/166), but not irrelevant. This, along with the benefits of early diagnosis and treatment for dementia, leads us to believe that patients with SCI should still be seen in follow-up at least at the 1-year mark. abstract_id: PUBMED:33150221 All's not well with the "worried well": understanding health anxiety due to COVID-19. The novel corona virus (SARS- CoV2) pandemic has created an unprecedented public health problem and a mental health crisis looms ahead. The isolation, socio-economic disruption, uncertainty and fear of contagion have led to a spike of health anxiety in the general public. Individuals with health anxiety may get dismissed as the "worried well" in this pandemic due to disruption of mental health services and inability of healthcare systems to understand the psychosocial factors in the background. Education of general public, training of healthcare workers in cognitive behavioural model of health anxiety and timely referral to mental health professionals in severe cases is need of the hour. abstract_id: PUBMED:33776208 Reducing patient surge: community based social networks as first responders. A major challenge for health services worldwide is in providing adequate medical care during mass disasters. The ongoing COVID-19 pandemic highlights this difficulty. Patient surge, a consequence of most types of disasters that contribute to trauma experiences, is a primary factor in disrupting such care as it is composed of worried well persons and those experiencing psychosocial trauma that can severely disrupt and overwhelm effective acute hospital based health care. We review the literature and propose a potential solution framework to reduce such a surge that relies on exploiting community social networks as first responders. We utilize and integrate literature based evidence on patient surge, community disaster behaviors and community based informal social networks to examine reasons for patient surge to hospitals. We then propose that leveraging community based social networks as a potent deterrent for non-critically injured, especially those who have experienced psychosocial trauma or the worried well, from seeking hospital care during ongoing disasters. By emphasizing the social capital inherent in community based social networks, this perspective posits an alternative cost-effective means of reducing patient surge. abstract_id: PUBMED:26175817 Bloody semen, severe hypertension and a worried man. Haematospermia is often associated with severe uncontrolled hypertension. The bloody semen is often very worrisome for the patient and his sexual partner(s). In addition to anti-hypertensive, counselling and lifestyle modification are essential for management of the condition. abstract_id: PUBMED:9086943 Understanding discourses of the worried well. This paper undertakes a deconstruction of the discourses of the 'worried well' population, a population argued to be particularly affected by the biomedical discourse. This deconstruction highlights the 'naturalized, glossed-over, unarticulated presuppositions' supported by hegemonic discourses of medicine. The historical development of the biomedical framework, the assumptions underpinning scientific discourse, and in particular its treatment of women, are examined. Issues pertaining to the diagnosis and treatment of this population are also discussed. The paper concludes with the recognition that the impetus rests with nurses to be agents of change in the reconstruction of discourses surrounding the worried well, to view clients labelled within biomedical discourse as worried well in a new light. Clients' subjective experiences of health must be valued, recognized and contextualized, thereby converting the rhetoric of holistic care into reality. abstract_id: PUBMED:6202248 Mothers are easily worried. Mothers are easily worried and this cannot always be avoided. But mothers are often worried unnecessarily by insensitivity, unwise choice of words, failure to determine what they are really concerned about, by criticizing them for being over anxious and ignoring their fears, or by inadequate explanation and counselling. Much anxiety is caused by unwise remarks in an assessment clinic or by a doctor's failure to know the normal and abnormal variations in development, behaviour, or physical growth and physical features with resulting unnecessary medicines, surgery, or special investigations. It is easy to implant a feeling of guilt by putting the blame for a child's behaviour or handicap on the parents. Screening procedures in the newborn may cause much parental anxiety. abstract_id: PUBMED:1549946 Worried wellness: how meaningful is the concept in managing elderly patients? The concept of "worried wellness" is poorly defined for research purposes, and little is known about the worried-well elderly. To study the incidence of worried wellness in the elderly and define the clinical utility of the term, researchers conducted a structured interview with 733 independently living community elderly and 250 elderly patients from a family practice residency-teaching practice. The interview combined standard subjective health questions with specific additional measures of worry to form a "worry" scale. The worry data were cross-tabulated with "wellness" parameters that included symptomatology, presence of chronic medical conditions, use of medical equipment, and activities of daily living. A wide range of prevalences for worry were found, depending on the measure used. Wellness data were more consistent: though the vast majority of elderly were fully functional, 9 out of 10 subjects had at least one medical condition. Cross-tabulation based on individual items from the worry and wellness measures yielded a prevalence of one-third or less, depending on the measure used. However, composite analyses for both liberal and conservative measures of worry disclosed very few cases of worried wellness in the patient or community samples. This finding points to a very low incidence of the phenomenon among the elderly and raises significant doubt about the clinical utility of the term "worried wellness." abstract_id: PUBMED:6536499 Health beliefs and physician identified "worried well". Health belief variables of the frequent and inappropriate user of medical services, or "worried well," were examined in two studies. Study 1 showed the physician identified "worried well" to be concerned about both general and mental health issues and to perceive typically minor symptoms as more serious than other frequent appropriate users, infrequent users, and mental health comparison groups. Study 2, a prospective study, measured beliefs and subsequent use for one year. Ratings of inappropriateness of use and psychosomatic involvement made by medical providers were predicted by high degrees of self reported somatic symptomatology, mental health concern, belief in self responsibility for health care, belief that one should handle one's own problems, and a limited belief in the responsibility of powerful others for health care. Answer: The question "Should we be worried?" is broad and can be interpreted in various contexts. However, based on the provided abstracts, we can infer that the term "worried" often relates to individuals who are concerned about their health despite not having a clear medical diagnosis or reason for concern. These individuals are sometimes referred to as the "worried well." In the context of cognitive health, the "worried well" are individuals who present to memory clinics with concerns about their cognitive function but are found to be cognitively normal upon assessment (PUBMED:29223173). These patients may be younger, have more formal education, and have a history of psychiatric diagnoses compared to those with cognitive diseases such as Alzheimer's (PUBMED:29223173). A follow-up study of "worried well" patients with subjective cognitive impairment (SCI) found that the likelihood of developing a neurological diagnosis is low, but not negligible, suggesting that follow-up is still warranted (PUBMED:33685531). In pediatric oncology, a significant proportion of children receiving cancer treatments reported being bothered by feelings of being scared or worried, with girls and inpatients receiving active treatment experiencing more bother (PUBMED:33067767). This indicates that worry is a significant symptom in this patient population and interventions to alleviate it should be considered. During the COVID-19 pandemic, health anxiety increased among the general public, and the "worried well" may have been dismissed due to the disruption of mental health services (PUBMED:33150221). The pandemic also highlighted the issue of patient surge in hospitals, which includes "worried well" individuals seeking care, and proposed leveraging community-based social networks as first responders to reduce this surge (PUBMED:33776208). Historically, the "worried well" have been influenced by biomedical discourse, and there is a call for healthcare professionals to value and contextualize clients' subjective experiences of health (PUBMED:9086943). In the elderly population, the concept of "worried wellness" is not well-defined, and the incidence among the elderly appears to be low, questioning the clinical utility of the term (PUBMED:1549946). Additionally, health beliefs of the "worried well" have been examined, showing that they may perceive minor symptoms as more serious and have concerns about both general and mental health (PUBMED:6536499). In summary, whether or not we should be worried depends on the context. In healthcare, it is important to recognize and address the concerns of the "worried well" to ensure appropriate care and follow-up, alleviate unnecessary anxiety, and manage healthcare resources effectively.
Instruction: Is screening of wireless capsule endoscopies by non-physicians feasible? Abstracts: abstract_id: PUBMED:17059808 Is screening of wireless capsule endoscopies by non-physicians feasible? Introduction: Reading a wireless capsule endoscopy (WCE) may be time-consuming. In order to reduce the time needed by a physician to view a WCE, we investigated if other medical professions could preview the video and detect bowel pathology in advance. Materials And Methods: A specialist in gastroenterology and two laboratory technicians independently viewed 34 consecutive WCEs and noted all findings. Thereafter, two specialists reviewed the findings made by the technicians independently in order to make a diagnosis. The results were compared with the diagnosis made by the first specialist, who viewed the entire WCE himself. Results: The reviewing specialists were able to make a correct diagnosis based on the technicians' findings in 97% and 85% of the patients, respectively. Conclusion: The study indicates that it is safe and timesaving to use other medical professions than physicians to preview WCEs. abstract_id: PUBMED:32268557 A Fluorescence-Based Wireless Capsule Endoscopy System for Detecting Colorectal Cancer. Wireless capsule endoscopy (WCE) has been widely used in gastrointestinal (GI) diagnosis that allows the physicians to examine the interior wall of the human GI tract through a pain-free procedure. However, there are still several limitations of the technology, which limits its functionality, ultimately limiting its wide acceptance. Its counterpart, the wired endoscopic system is a painful procedure that demotivates patients from going through the procedure, and adversely affects early diagnosis. Furthermore, the current generation of capsules is unable to automate the detection of abnormality. As a result, physicians are required to spend longer hours to examine each image from the endoscopic capsule for abnormalities, which makes this technology tiresome and error-prone. Early detection of cancer is important to improve the survival rate in patients with colorectal cancer. Hence, a fluorescence-imaging-based endoscopic capsule that automates the detection process of colorectal cancer was designed and developed in our lab. The proof of concept of this endoscopic capsule was tested on porcine intestine and liquid phantom. The proposed WCE system offers great possibilities for future applicability in selective and specific detection of other fluorescently labelled cancers. abstract_id: PUBMED:25148269 Flexible and capsule endoscopy for screening, diagnosis and treatment. Endoscopy dates back to the 1860s, but many of the most significant advancements have been made within the past decade. With the integration of robotics, the ability to precisely steer and advance traditional flexible endoscopes has been realized, reducing patient pain and improving clinician ergonomics. Additionally, wireless capsule endoscopy, a revolutionary alternative to traditional scopes, enables inspection of the digestive system with minimal discomfort for the patient or the need for sedation, mitigating some of the risks of flexible endoscopy. This review presents a research update on robotic endoscopic systems, including both flexible scope and capsule technologies, detailing actuation methods and therapeutic capabilities. A future perspective on endoscopic potential for screening, diagnostic and therapeutic gastrointestinal procedures is also presented. abstract_id: PUBMED:27362990 An Innovative Wireless Endoscopic Capsule With Spherical Shape. This paper aims to contribute to the advancement of the Wireless Capsule Endoscopy (WCE) field for ColoRectal Cancer (CRC) screening, by developing all electronic circuits to build an innovative wireless endoscopic capsule with a spherical shape, conceived to reduce the friction during its locomotion and thus improving patient's acceptability. The proposed capsule embeds an image sensor with optics and Light Emitting Diodes (LEDs), a control unit with a telemetry module, an actuation system, a battery with a smart recharging circuit able to recharge in 20 minutes, a smart power-on circuit and a localization module. Everything is devised to fit in a small spherical shape with a diameter of 26 mm and a weight of 12.70 g. The authors present a description of the sub-modules involved in the capsule development, together with the firmware and hardware integration. In order to reduce the bandwidth for matching the specifications of the target commercial telemetry, the firmware interfacing of a custom encoder was performed, which is able to compress the incoming images with a negligible loss of information and occupying a number of Look Up-Tables (LUTs) less than 1780. As a preliminary work, a versatile Field Programmable Gate Arrays (FPGA) based demo-board system has been developed in order to test and optimize the functionalities and the performance of the single sub-modules and wireless vision chain system. This work allows to demonstrate the feasibility of a complex biomedical system, with severe constraints by highlighting the necessity to enhance the frame rate in the future. abstract_id: PUBMED:25815581 Dual-head wireless powered video capsule based on new type of receiving coils. Wireless capsule endoscopy (WCE) has been a great breakthrough in visually detecting the pathological changes of gastrointestinal (GI) wall, but the limit of viewing angle and power by batteries still hinder the wide application of WCE. In order to address these shortcomings, a dual-head video capsule system based on new type of receiving coils is presented. First, the dual-head video capsule system is designed, which could capture images of the whole GI tract in two channels, transforming the images into NTSC videos at a frame rate of 30 f s(-1) and transmitting the signals outside the body. Second, the wireless power transmission platform with new type of receiving coils is established to provide at least 108 mW of continuous, stable energy for the capsule. Then a prototype was fabricated and applied in animal experiments. The designed dual-head video capsule system is proved to be feasible and a potential solution for future clinical application. abstract_id: PUBMED:25782661 Wireless capsule endoscopy of the small intestine in children. Wireless capsule endoscopy (CE) for the diagnosis of small-bowel disease has been in clinical use for more than a decade, and is no longer an emerging technology, but rather one that has reached fruition. This noninvasive technology has been readily embraced by both physicians and patients. Used in the diagnosis of inflammatory bowel disease, for locating sources of obscure gastrointestinal bleeding, and for assessing small-bowel polyp burden in polyposis syndromes as well as for less common indications, CE has transformed the diagnostic algorithms of small-bowel investigations. Although already in widespread use, the technology incorporated into the various CE platforms continues to improve and expand. Here, we briefly review the indications, limitations, and advances in video capsule technology, with an emphasis on its use in pediatrics. abstract_id: PUBMED:25571262 Measurement of motion detection of wireless capsule endoscope inside large intestine. Wireless Capsule Endoscope (WCE) provides a noninvasive way to inspect the entire Gastrointestinal (GI) tract, including large intestine, where intestinal diseases most likely occur. As a critical component of capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of detected intestinal diseases. Knowing how the capsule moves inside the large intestine would greatly complement the existing wireless localization systems by providing the motion information. Since the most recently released WCE can take up to 6 frames per second, it's possible to estimate the movement of the capsule by processing the successive image sequence. In this paper, a computer vision based approach without utilizing any external device is proposed to estimate the motion of WCE inside the large intestine. The proposed approach estimate the displacement and rotation of the capsule by calculating entropy and mutual information between frames using Fibonacci method. The obtained results of this approach show its stability and better performance over other existing approaches of motion measurements. Meanwhile, findings of this paper lay a foundation for motion pattern of WCEs inside the large intestine, which will benefit other medical applications. abstract_id: PUBMED:27639097 Wireless Capsule Enteroscopy in Healthy Volunteers. Introduction: The aim of our prospective study was to define endoscopy appearance of the small bowel in healthy volunteers. Method: Forty-two healthy volunteers underwent wireless capsule endoscopy, clinical investigation, laboratory tests, and completed a health-status questionnaire. All subjects were available for a 36-month clinical follow-up. Results: Eleven subjects (26%) had fully normal endoscopy findings. Remaining 31 persons (74%), being asymptomatic, with normal laboratory results, had some minor findings at wireless capsule endoscopy. Most of those heterogeneous findings were detected in the small intestine (27/31; 87%), like erosions and/or multiple red spots, diminutive polyps and tiny vascular lesions. During a 36-month clinical follow-up, all these 42 healthy volunteers remained asymptomatic, with fully normal laboratory control. Conclusions: Significant part of healthy subjects had abnormal findings at wireless capsule endoscopy. These findings had no clinical relevance, as all these persons remained fully asymptomatic during a 36-month follow-up. Such an endoscopic appearance would be previously evaluated as "pathological". This is a principal report alerting that all findings of any control group of wireless capsule endoscopic studies must be evaluated with caution. abstract_id: PUBMED:27810622 Generic feature learning for wireless capsule endoscopy analysis. The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from the scratch. This makes the design of new CAD systems very time consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, it reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase). abstract_id: PUBMED:29203945 Is there an application for wireless capsule endoscopy in horses? This pilot study assessed wireless capsule endoscopy in horses. Image transmission was achieved with good image quality. Time to exit the stomach was variable and identified as one limitation, together with gaps in image transmission, capsule tumbling, and inability to accurately locate the capsule. Findings demonstrate usefulness and current limitations. Answer: Yes, screening of wireless capsule endoscopies (WCE) by non-physicians is feasible. A study indicated that it is safe and time-saving to use other medical professions than physicians to preview WCEs. In this study, a specialist in gastroenterology and two laboratory technicians independently viewed 34 consecutive WCEs and noted all findings. The reviewing specialists were able to make a correct diagnosis based on the technicians' findings in 97% and 85% of the patients, respectively (PUBMED:17059808). This suggests that non-physicians can effectively detect bowel pathology in advance, thus reducing the time needed by a physician to view a WCE.
Instruction: Illness severity measured by CRIB score: a product of changes in perinatal care? Abstracts: abstract_id: PUBMED:9462192 Illness severity measured by CRIB score: a product of changes in perinatal care? Aim: To determine the perinatal factors associated with initial illness severity (measured by the CRIB (clinical risk index for babies) score) and its relation to survival to discharge. Methods: A retrospective study was made of intensive care nursing records on 380 inborn babies, of less than 31 weeks gestation or 1501 g birthweight, admitted to one unit between 1984-6 and 1991-4. Results: Between the two time periods mean initial illness severity score increased significantly from 2.8 to 3.9. This was the result of an increase in the maximum appropriate inspired oxygen concentration in the first 12 hours. Risk adjusted survival did not improve over time after accounting for gestation but was significantly greater after accounting for CRIB score. Illness severity score was also significantly inversely associated with gestation and 1 and 5 minute Apgar scores, using multiple regression analysis. Between the two time periods there was also a 92% increase in the admission rate of babies under 31 weeks gestation, higher median 1 and 5 minute Apgar scores (6 vs 5 and 9 vs 8, respectively), more multiple births, and more caesarean section deliveries. Conclusions: The increase in illness severity score and admission rate may reflect changes in obstetric practice. The increase in illness severity score may also reflect changes in early neonatal care. However, after adjusting for CRIB score, risk adjusted mortality fell significantly, suggesting that neonatal care 12 hours from birth onwards had improved with time. abstract_id: PUBMED:25266790 Comparing CRIB-II and SNAPPE-II as mortality predictors for very preterm infants. Aims: This article compares the severity of illness scoring systems clinical risk index for babies (CRIB)-II and score for neonatal acute physiology with perinatal extension (SNAPPE)-II for discriminatory ability and goodness of fit in the same cohort of babies of less than 32 weeks gestation and aims to provide validation in the Australian population. Methods: CRIB-II and SNAPPE-II scores were collected on the same cohort of preterm infants born within a 2-year period, 2003 and 2004. The discriminatory ability of each score was assessed by the area under the receiver operator characteristic curve, and goodness of fit was assessed by the Hosmer-Lemeshow (HL) test. The outcome measure was in-hospital mortality. A multivariate logistic regression model was tested for perinatal variables that might add to the risk of in-hospital mortality. Results: Data for both scores were available for 1607 infants. Both scores had good discriminatory ability (CRIB-II area under the curve 0.913, standard error (SE) 0.014; SNAPPE-II area under the curve 0.907, SE 0.012) and adequate goodness of fit (HL χ2 = 11.384, 8 degrees of freedom, P = 0.183 for CRIB-II; HL χ2 = 4.319, 7 degrees of freedom, P = 0.742 for SNAPPE-II). The multivariate model did not reveal other significant variables. Conclusions: Both severity of illness scores are ascertained during the first 12 h of life and perform similarly. Both can facilitate risk-adjusted comparisons of mortality and quality of care after the first post-natal 12 h. CRIB-II scores have the advantage of being simpler to collect and calculate. abstract_id: PUBMED:16147834 Evaluating illness severity for very low birth weight infants: CRIB or CRIB-II? Background: Estimating the risk of in-hospital mortality provides essential information in the neonatal intensive care unit (NICU). The clinical risk index for babies (CRIB) is a widely used, risk-adjustment instrument to determine illness severity in infants of gestational age &lt;or=31 wks, or birth weight &lt;or=1,500 g, recently updated and simplified into a five-items scoring system (CRIB-II).Aim. The accuracy values of CRIB and CRIB-II scores in predicting in-hospital mortality were compared in a tertiary level, minimal intubation policy NICU setting. Methods: A total of 147 very low birth weight (VLBW) infants were examined. Both CRIB and CRIB-II scores were calculated for each newborn, and death before hospital discharge was selected as the outcome measure. Comparisons were performed by receiver-operating characteristic (ROC) curve analysis, and the area under the curve (AUC) was used as a measure of predictor accuracy. Results: Mean AUCs for CRIB, CRIB-II, gestational age and birth weight in identifying neonatal mortality in VLBW infants ranged from 0.924 (CRIB) to 0.869 (gestational age). No significant differences were found for the AUCs of CRIB versus CRIB-II, CRIB versus gestational age, CRIB versus birth weight, CRIB-II versus gestational age, or CRIB-II versus birth weight. Conclusions: Our findings show that; 1) CRIB and CRIB-II show similar accuracy values in predicting in-hospital neonatal mortality in VLBW infants; and 2) neither score offers an advantage in predicting mortality, as compared to gestational age or birth weight, thus suggesting that treatment modalities may modify predictive accuracy. abstract_id: PUBMED:35524143 Comparison of CRIB-II with SNAPPE-II for predicting survival and morbidities before hospital discharge in neonates with gestation ≤ 32 weeks: a prospective multicentric observational study. Various studies validated and compared Score for Neonatal Acute Physiology with Perinatal extension-II (SNAPPE-II) and Clinical Risk Index for Babies-II (CRIB-II) admission sickness severity scores for predicting survival, but very few studies compared them for predicting the morbidities in preterm infants. In this multicenter prospective observational study, SNAPPE-II and CRIB-II newborn illness severity scores were compared for predicting mortality and morbidities in infants with gestational age of ≤ 32 weeks. Major morbidities were classified as bronchopulmonary dysplasia, abnormal cranial ultrasound (presence of intraventricular hemorrhage grade III or more or periventricular leukomalacia grade II to IV), and retinopathy of prematurity requiring treatment. Combined adverse outcome was defined as death or any major morbidity. Comparison of the scoring systems was done by area under the curve (AUC) on receiver operating characteristics curve (ROC curve) analysis. A total of 419 neonates who were admitted to 5 participating NICUs were studied. The mortality rate in the study population was 8.8%. Both CRIB-II (AUC: 0.795) and SNAPPE-II (AUC: 0.78) had good predictive ability for in-hospital mortality. For predicting any one of the major morbidities and combined adverse outcome, CRIB-II had better predictive ability than SNAPPE-II with AUC of 0.83 vs. 0.70 and 0.85 vs. 0.74, respectively. Conclusion: In infants with gestational age of ≤ 32 weeks, both CRIB-II and SNAPPE-II are good scoring systems for predicting mortality. CRIB-II, being a simpler scoring system and having better predictive ability for major morbidities and combined adverse outcome, is preferable over SNAPPE-II. What Is Known: • SNAPPE-II and CRIB-II scores have good predictive ability on in-hospital mortality in preterm neonates. What Is New: • SNAPPE-II and CRIB-II both have good predictive ability for mortality, but CRIB-II has better ability for short-term morbidities related to the prematurity. abstract_id: PUBMED:10191793 Perinatal risk and severity of illness in newborns at 6 neonatal intensive care units. Objectives: This multisite study sought to identify (1) any differences in admission risk (defined by gestational age and illness severity) among neonatal intensive care units (NICUs) and (2) obstetric antecedents of newborn illness severity. Methods: Data on 1476 babies born at a gestational age of less than 32 weeks in 6 perinatal centers were abstracted prospectively. Newborn illness severity was measured with the Score for Neonatal Acute Physiology. Regression models were constructed to predict scores as a function of perinatal risk factors. Results: The sites differed by several obstetric case-mix characteristics. Of these, only gestational age, small for gestational age. White race, and severe congenital anomalies were associated with higher scores. Antenatal corticosteroids, low Apgar scores, and neonatal hypothermia also affected illness severity. At 2 sites, higher mean severity could not be explained by case mix. Conclusions: Obstetric events and perinatal practices affect newborn illness severity. These risk factors differ among perinatal centers and are associated with elevated illness severity at some sites. Outcomes of NICU care may be affected by antecedent events and perinatal practices. abstract_id: PUBMED:19302092 Severity of perinatal illness and cerebral cortical growth in preterm infants. Aim: We have shown previously that the degree of prematurity affects cortical surface area growth. We now addressed the question whether cortical surface area growth after preterm birth is predicted by the severity of peri- and postnatal illness. Methods: Cortical surface area was measured in 269 images from 111 infants born between 23 and 29 weeks and imaged at 23 to 48 weeks gestational age (GA). The severity of perinatal illness was assessed using the clinical risk index for babies score (CRIB I) and the severity of ongoing illness by the presence of chronic lung disease (CLD). The effects on cortical growth were modelled using generalized least-square regression for random effects with Bonferroni correction. To explore the results further we examined CRIB II, C-reactive protein (CRP) on the second day after birth, and time taken to achieve full enteral feeding. Results: Cortical surface area grew by 12.4% per week. Reduced cortical growth was predicted by adverse CRIB I (-0.15% per week per unit) and development of CLD (-1.18% per week). Secondary analysis showed that growth was related to adverse CRIB II (-0.36% per week per unit) and increasing CRP (-0.03% per week per mMol), but not by the time taken to achieve full enteral feeding. Conclusion: After very premature birth illness severity predicts reduced cortical growth. abstract_id: PUBMED:32098532 Predictive ability of neonatal illness severity scores for early death in extremely premature infants. Background: The predictive ability of neonatal illness severity scores for mortality or morbidity in extremely premature infants has not been extensively studied. We aimed to evaluate the ability of neonatal illness severity scores [Clinical Risk Index for Babies II (CRIB II), Score for Neonatal Acute Physiology II (SNAP-II), and SNAP-Perinatal Extension II (SNAPPE-II)] in predicting mortality and short-term morbidity of extremely premature infants. Methods: This retrospective study involved 171 infants with gestational age (GA) between 22 and 27 weeks who were admitted to the NICU during 2010-2017. Predictive ability of neonatal illness severity scores for mortality and short-term morbidity (bronchopulmonary dysplasia, retinopathy of prematurity, intraventricular hemorrhage, necrotizing enterocolitis, and gastrointestinal perforation) was assessed by comparing their area under the receiver operating characteristic curve. Results: The overall mortality rate was 11.1%. Mortality at 23 weeks' gestation was higher than that at 24-27 weeks' gestation (p &lt; .01, adjusted residual 4.5). Neonatal illness severity scores were significantly higher in infants who died than in those who survived (p &lt; .01). CRIB II (AUC 0.93, 95% CI 0.85-1.00), SNAP-II (AUC 0.90, 95% CI 0.76-1.00), and SNAPPE-II (AUC 0.95, 95% CI 0.91-0.99) appeared to be excellent predictors and were superior to birth weight (AUC 0.88, 95% CI 0.80-0.95) or GA (AUC 0.84, 95% CI 0.72-0.96) alone in predicting early death (died on &lt;28th postnatal day). CRIB II, SNAP-II, and SNAPPE-II were better predictors of early death than mortality in extremely premature infants. Neonatal illness severity score and short-term morbidity were not strongly associated. Conclusions: The neonatal illness severity scores were excellent predictors of early death in extremely premature infants and might be useful for selecting extremely preterm infants who need intervention. abstract_id: PUBMED:29684189 Correlation between Perfusion Index and CRIB Score in Sick Neonates Admitted to a Tertiary Center. Objective: The study was to determine the correlation of Perfusion Index (PI) and Clinical Risk Index for Babies (CRIB) score, in assessing the severity of illness in sick neonates. Methods: This was a cross-sectional study conducted at a tertiary care Neonatal Intensive Care Unit (NICU). All eligible neonates, both term and preterm, admitted to the high-dependency unit of the NICU were included, after parental consent. Relevant details of history and examination were collected with a structured proforma. Severity of illness was assessed using CRIB score within 12 h of admission. PI was recorded within 24 h of admission, and babies were examined for the presence or absence of shock and their outcome was documented. The correlation coefficient between PI and CRIB score was derived. Results: A total of 200 eligible newborns were enrolled. The mean gestational age of the neonates was 34 weeks. The median [interquartile range (IQR)] CRIB score was 1.00 (0.00, 3.00), and PI was 1.400 (0.93, 2.30). The Spearman's rank correlation coefficient between PI and CRIB score was -0.41 with p value &lt;0.05. The median PI of neonates with CRIB score ≤5, 6-10 and &gt;10 was 1.50, 0.74, 0.67, respectively (p value &lt;0.0001). The median (IQR) PI of babies with shock and without shock was 0.63 (0.43, 0.84) and 1.58 (1.19, 2.41), respectively, with p value &lt;0.001. Conclusion: PI has a negative correlation with CRIB score and can be used to assess the severity of illness in sick neonates. abstract_id: PUBMED:18173018 Using the CRIB as an early prognostic index for very low birthweight infants, treated in neonatal intensive care unites Unlabelled: The illness severity by admission in NICU reflects the intensity of the therapy, the prognosis for the newborn and the hospital costs. Using the CRIB (Clinical Risk Index for Babies) as an objective and easy method for measuring the illness severity in the first 12 h of life allows assessing the risk of death until discharge. Aim: To apply the CRIB for assessing the illness severity and to investigate its prognostic value for life and risk of permanent disabilities among very low birthweight (VLBW) and gestational age (VLGA) infants. Methods: The study includes the inborn babies in the Specialized Obstetrics &amp; Gynaecology Hospital "Maichin dom" with birthweight &lt; 1500g and gestational age &lt; 32 weeks of gestation, who are admitted in the NICU for 2 periods: I group--250 newborns in the period 01.2002-06.2004 and II group--186 newborns in the period 07.2004-06.2006. The CRIB is estimated based on data collected in the first 12 h of life. An analysis was made about the prognostic value of the CRIB score and the outcome. Criteria for outcome are: the in hospital mortality rate and disabilities such as severe intraventricular hemorrhages (IVH gr. III and IV), chronic lung disease (CLD) and retinopathy of prematurity (ROP). Results: We establish that the CRIB score is significantly higher among the infants who died--12.7%/13% in the I / II period compared with survivors--6.2% / 5.7% (P &lt; 0.001). The mortality rates raise progressively with the CRIB score: among the newborns with CRIB 0-5 they are 2.9% / 2.6% and reaches up to 78.6% / 90% when the CRIB is &gt; 15. The infants with permanent disabilities were with significantly higher CRIB scores too: 11.7 / 11.4% among the infants with IVH III-IV compared to 6.2 / 5.2 without; 9.6 / 10.1 among infants with CLD compared to 5.7 /5.2 without; 10.1 / 10 among infants with ROP compared to 5.5 / 5.1 without. Conclusions: The CRIB score is useful and easy to apply early and objective prognostic criterion about the risk of in hospital death and permanent disabilities among VLBW newborns. It can be used as a basis for comparing the results of the different NICUs too. abstract_id: PUBMED:15321961 Assessing mortality risk in very low birthweight infants: a comparison of CRIB, CRIB-II, and SNAPPE-II. Background: Illness severity scores are increasingly used for risk adjustment in clinical research and quality assessment. Recently, a simplified version of the score for neonatal acute physiology (SNAPPE-II) and a revised clinical risk index for babies (CRIB-II) score have been published. Aim: To compare the discriminatory ability and goodness of fit of CRIB, CRIB-II, and SNAPPE-II in a cohort of neonates &lt; 1500 g birth weight (VLBWI). Methods: Data from 720 VLBWI, admitted to 12 neonatal units in Lombardy (Northern Italy) participating in a regional network, were analysed. The discriminatory ability of the scores was assessed measuring the area under the receiver operating characteristic curve (AUC). Outcome measure was in-hospital death. Results: CRIB and CRIB-II showed greater discrimination than SNAPPE-II (AUC 0.90 and 0.91 v 0.84, p &lt; 0.0004), partly because of the poor quality of some of the data required for the SNAPPE-II calculation-for example, urine output-but also because of the relative weight given to some items. In addition to each score, several variables significantly influenced survival in logistic regression models. Antenatal steroid prophylaxis, singleton birth, absence of congenital anomalies, and gestational age were independent predictors of survival for all scores, in addition to caesarean section and not being small for gestation (for SNAPPE-II) and a five minute Apgar score of &gt; or = 7 (for SNAPPE-II and CRIB). Conclusions: CRIB and CRIB-II had greater discriminatory ability than SNAPPE-II. Risk adjustment using all scores is imperfect, and other perinatal factors significantly influence VLBWI survival. CRIB-II seems to be less confounded by these factors. Answer: The CRIB (clinical risk index for babies) score is a measure of initial illness severity in newborns, particularly those who are premature or have low birth weight. Research has indicated that changes in perinatal care may indeed influence the CRIB score. A retrospective study found that the mean initial illness severity score increased significantly over time, which was attributed to an increase in the maximum appropriate inspired oxygen concentration in the first 12 hours. This change may reflect modifications in obstetric practice and early neonatal care. However, after adjusting for CRIB score, risk-adjusted mortality fell significantly, suggesting that neonatal care from 12 hours after birth onwards had improved over time (PUBMED:9462192). Another study comparing CRIB-II and SNAPPE-II scores found that both severity of illness scores, ascertained during the first 12 hours of life, perform similarly and can facilitate risk-adjusted comparisons of mortality and quality of care after the first post-natal 12 hours. CRIB-II scores have the advantage of being simpler to collect and calculate (PUBMED:25266790). Further research evaluating the CRIB and CRIB-II scores in predicting in-hospital neonatal mortality in very low birth weight (VLBW) infants showed that both scores have similar accuracy values. Neither score offered an advantage in predicting mortality compared to gestational age or birth weight, suggesting that treatment modalities may modify predictive accuracy (PUBMED:16147834). A study comparing CRIB-II with SNAPPE-II for predicting survival and morbidities before hospital discharge in neonates with gestation ≤ 32 weeks found that both CRIB-II and SNAPPE-II are good scoring systems for predicting mortality. However, CRIB-II has better predictive ability for major morbidities and combined adverse outcomes, making it preferable over SNAPPE-II (PUBMED:35524143). In summary, the CRIB score, which measures illness severity in newborns, is influenced by changes in perinatal care practices. While the score itself has been shown to be a reliable indicator of initial illness severity and has improved over time, it is also clear that advancements in neonatal care have contributed to better outcomes, as evidenced by the decrease in risk-adjusted mortality when accounting for the CRIB score.
Instruction: Can FDG-PET assist in radiotherapy target volume definition of metastatic lymph nodes in head-and-neck cancer? Abstracts: abstract_id: PUBMED:19285354 Can FDG-PET assist in radiotherapy target volume definition of metastatic lymph nodes in head-and-neck cancer? Background And Purpose: The role of FDG-PET in radiotherapy target volume definition of the neck was evaluated by comparing eight methods of FDG-PET segmentation to the current CT-based practice of lymph node assessment in head-and-neck cancer patients. Materials And Methods: Seventy-eight head-and-neck cancer patients underwent coregistered CT- and FDG-PET scans. Lymph nodes were classified as "enlarged" if the shortest axial diameter on CT was 10mm, and as "marginally enlarged" if it was 7-10mm. Subsequently, lymph nodes were assessed on FDG-PET applying eight segmentation methods: visual interpretation (PET(VIS)), applying fixed thresholds at a standardized uptake value (SUV) of 2.5 and at 40% and 50% of the maximum signal intensity of the primary tumor (PET(SUV), PET(40%), PET(50%)) and applying a variable threshold based on the signal-to-background ratio (PET(SBR)). Finally, PET(40%N), PET(50%N) and PET(SBRN) were acquired using the signal of the lymph node as the threshold reference. Results: Of 108 nodes classified as "enlarged" on CT, 75% were also identified by PET(VIS), 59% by PET(40%), 43% by PET(50%) and 43% by PET(SBR). Of 100 nodes classified as "marginally enlarged", only a minority were visualized by FDG-PET. The respective numbers were 26%, 10%, 7% and 8% for PET(VIS), PET(40%), PET(50%) and PET(SBR). PET(40%N), PET(50%N) and PET(SBRN), respectively, identified 66%, 82% and 96% of the PET(VIS)-positive nodes. Conclusions: Many lymph nodes that are enlarged and considered metastatic by standard CT-based criteria appear to be negative on FDG-PET scan. Alternately, a small proportion of marginally enlarged nodes are positive on FDG-PET scan. However, the results are largely dependent on the PET segmentation tool used, and until proper validation FDG-PET is not recommended for target volume definition of metastatic lymph nodes in routine practice. abstract_id: PUBMED:17638170 FDG-PET-CT based imaging of retropharyngeal lymph nodes with its impact on radiotherapy Background: In about 20% there is an involvement of the retropharyngeal lymph nodes in patients with locally advanced carcinoma of the hypopharynx and cervical esophagus. A case report should demonstrate the diagnostic and therapeutic impact of FDG-PET/CT in the radiotherapeutic management of a locally advanced carcinoma of the hypopharynx in special consideration of the RPLN. Patient And Method: A pretherapeutic FDG-PET/CT was performed with the patient fixed in the radiotherapy mask in order to integrate the anatomic and metabolic information into the radiotherapy planning system by an exact matching of the data. Results: Only the FDG-PET could detect a retropharyngeal lymph node involvement (RPLN) by an intensive glucose utilisation with a consecutive modification of the target volume and dose increase in this region. Conclusion: This case report demonstrates that FDG-PET/CT facilitates the imaging of metabolic active and otherwise hardly detectable lymph nodes in locally advanced head and neck cancer with consequences on target volume definition and dose application in radiotherapy. abstract_id: PUBMED:24179702 FDG-PET, a Complementary Modality to Computed-Tomography in Radiotherapy Target Volume Delineation for Head and Neck Cancer. Objectives: The objective of the current review was to use published data to assess the role of [18F] fluorodeoxyglucose-positron emission tomography (FDG-PET) as a complementary modality to computed-tomography (CT) in radiotherapy target volume delineation for head and neck cancer (HNC). Methods: Studies were identified by searching PubMed electronic databases. Both prospective and retrospective studies were included. Information regarding the role of FDG-PET for radiotherapy target volume delineation for HNC was analyzed. Results: FDG-PET is a promising tool for improving radiotherapy target volume delineation by defining a metabolically active biological target volume (BTV). The use of novel PET tracers representing properties such as hypoxia, protein synthesis and proliferation remain to be better characterized. Conclusions: The role of FDG-PET for radiotherapy target volume delineation for patients with HNC is expanding and should be further evaluated in clinical trials. abstract_id: PUBMED:31439447 18F-FDG-PET/CT-based treatment planning for definitive (chemo)radiotherapy in patients with head and neck squamous cell carcinoma improves regional control and survival. Background And Purpose: Multimodality imaging including 18F-FDG-PET has improved the detection threshold of nodal metastases in head and neck squamous cell carcinoma (HNSCC). The aim of this retrospective analysis is to investigate the impact of FDG-PET/CT-based nodal target volume definition (FDG-PET/CT-based NTV) on radiotherapy outcomes, compared to conventional CT-based nodal target volume definition (CT-based NTV). Materials And Methods: Six-hundred-thirty-three patients treated for HNSCC with definitive (chemo)radiotherapy using IMRT/VMAT techniques between 2008 and 2017 were analyzed. FDG-PET/CT-based NTV was performed in 46% of the patients. The median follow-up was 31 months. Diagnostic imaging depicting the regional recurrence was co-registered with the initial CT-scan to reconstruct the exact site of the recurrence. Multivariate Cox regression analysis was performed to identify variables associated with radiotherapy outcome. Results: FDG-PET/CT-based NTV improved control of disease in the CTVelective-nodal (HR: 0.33, p = 0.026), overall regional control (HR: 0.62, p = 0.027) and overall survival (HR: 0.71, p = 0.033) compared to CT-based NTV. The risk for recurrence in the CTVelective-nodal was increased in case of synchronous local recurrence of the primary tumor (HR: 12.4, p &lt; 0.001). Conclusion: FDG-PET/CT-based NTV significantly improved control of disease in the CTVelective-nodal, overall regional control and overall survival compared to CT-based NTV. A significant proportion of CTVelective-nodal recurrences are potentially new nodal manifestations from a synchronous local recurrent primary tumor. These results support the concept of target volume transformation and give an indication of the potential of FDG-PET to guide gradual radiotherapy dose de-escalation in elective neck treatment in HNSCC. abstract_id: PUBMED:22865325 Tumor volume delineation in head and neck cancer with 18-fluor-fluorodeoxiglucose positron emission tomography: adaptive thresholding method applied to primary tumors and metastatic lymph nodes. Purpose: There are several potential advantages of using 18-fluor-fluorodeoxiglucose (18F-FDG) PET for target volume contouring, but before PET-based gross tumor volumes (GTVs) can reliably and reproducibly be incorporated into high-precision radiotherapy planning, operator-independent segmentation tools have to be developed and validated. The purpose of the present work was to apply the adaptive to the signal/background ratio (R(S/B)) thresholding method for head and neck tumor delineation, and compare these GTV(PET) to reference GTV(CT) volumes in order to assess discrepancies. Materials And Methods: A cohort of 19 patients (39 lesions) with a histological diagnosis of head and neck cancer who would undergo definitive concurrent radiochemotherapy or radical radiotherapy with intensity-modulated radiotherapy technique (IMRT), were enrolled in this prospective study. Contouring on PET images was accomplished through standardized uptake value (SUV)-threshold definition. The threshold value was adapted to R(S/B). To determine the relationship between the threshold and the R(S/B), we performed a phantom study. A discrepancy index (DI) between both imaging modalities, overlap fraction (OF) and mismatch fraction (MF) were calculated for each lesion and imaging modality. Results: The median DI value for lymph nodes was 2.67 and 1.76 for primary lesions. The OF values were larger for CT volumes than for PET volumes (p &lt; 0.001), for both types of lesions. The MF values were smaller for CT volumes than for PET volumes (p &lt; 0.001), for both types of lesions. The GTV(PET) coverage (OF(PET)) was strongly correlated with the lesion volume (GTV(CT)) for metastatic lymph nodes (Pearson correlation = 0.665; p &lt; 0.01). For smaller lesions, despite the GTV volumes were relatively larger on PET than in CT contours, the coverage was poorer. Accordingly, the MF(PET/CT) was negatively correlated with the lesion volume for metastatic lymph nodes. Conclusions: The present study highlights the considerable challenges involved in using FDG PET imaging for the delineation of GTV in head and neck neoplasms. The methods that rely mainly on SUV(max) for thresholding, as the RS/B method, are very sensitive to partial volume effects and may provide unreliable results when applied on small lesions. abstract_id: PUBMED:37871651 Contribution of FDG-PET in the diagnostic assessment of cervical lymph node metastasis in Oral Cavity Squamous Cell Carcinoma (OCSCC). Introduction: The main objective of this study was to evaluate the contribution of FDG-PET in the diagnostic assessment of cervical lymph node metastasis in Oral Cavity Squamous Cell Carcinoma (OCSCC) and to advance a diagnostic threshold value for SUVmax in carcinomatous cervical lymph node. Methods: 47 patients with OCSCC and suspicious cervical lymph node involvement (cN+) on FDG-PET were included in this retrospective study. The primary outcome was cervical lymph node SUVmax based on histological cervical metastatic disease (« gold standard »). Results: Among the 77 cervical lymph nodes considered suspicious on patients' FDG-PET, 50 were really metastatic on histological examination. The lymph node SUVmax with metastatic involvement on histological examination was 4.6 ± 3.9 [2.6 - 23.7] versus 3.6 ± 1.2 [2 - 7.3] without carcinomatous involvement (p = 0.004). The lymph node size was not statistically significant according to metastatic disease (p = 0.28). Discussion: A cervical lymph node SUVmax value of less than 2.6 on FDG-PET would suggest non-metastatic lymph node involvement. Supra Omohyoid Neck Dissection (SOHND) could therefore be performed in OCSCC when the SUVmax of the cervical lymph node is below this value in order to reduce the surgical morbidity of dissection of the lower internal jugular chain (Level IV). abstract_id: PUBMED:15073248 Prospective feasibility trial of radiotherapy target definition for head and neck cancer using 3-dimensional PET and CT imaging. Unlabelled: The aim of this investigation was to evaluate the influence and accuracy of (18)F-FDG PET in target volume definition as a complementary modality to CT for patients with head and neck cancer (HNC) using dedicated PET and CT scanners. Methods: Six HNC patients were custom fitted with head and neck and upper body immobilization devices, and conventional radiotherapy CT simulation was performed together with (18)F-FDG PET imaging. Gross target volume (GTV) and pathologic nodal volumes were first defined in the conventional manner based on CT. A segmentation and surface-rendering registration technique was then used to coregister the (18)F-FDG PET and CT planning image datasets. (18)F-FDG PET GTVs were determined and displayed simultaneously with the CT contours. CT GTVs were then modified based on the PET data to form final PET/CT treatment volumes. Five-field intensity-modulated radiation therapy (IMRT) was then used to demonstrate dose targeting to the CT GTV or the PET/CT GTV. Results: One patient was PET-negative after induction chemotherapy. The CT GTV was modified in all remaining patients based on (18)F-FDG PET data. The resulting PET/CT GTV was larger than the original CT volume by an average of 15%. In 5 cases, (18)F-FDG PET identified active lymph nodes that corresponded to lymph nodes contoured on CT. The pathologically enlarged CT lymph nodes were modified to create final lymph node volumes in 3 of 5 cases. In 1 of 6 patients, (18)F-FDG-avid lymph nodes were not identified as pathologic on CT. In 2 of 6 patients, registration of the independently acquired PET and CT data using segmentation and surface rendering resulted in a suboptimal alignment and, therefore, had to be repeated. Radiotherapy planning using IMRT demonstrated the capability of this technique to target anatomic or anatomic/physiologic target volumes. In this manner, metabolically active sites can be intensified to greater daily doses. Conclusion: Inclusion of (18)F-FDG PET data resulted in modified target volumes in radiotherapy planning for HNC. PET and CT data acquired on separate, dedicated scanners may be coregistered for therapy planning; however, dual-acquisition PET/CT systems may be considered to reduce the need for reregistrations. It is possible to use IMRT to target dose to metabolically active sites based on coregistered PET/CT data. abstract_id: PUBMED:33088355 The Role of 18F-FDG PET/CT for Evaluation of Cervical Metastatic Lymph Nodes in a Patient with Metallic Artifacts from Dental Prosthesis: a Case Report. Cervical metastasis of squamous cell carcinoma from an unknown primary site occurs in approximately 2-4% of head and neck tumor cases. Without identification of the primary site, proper management cannot be achieved, resulting in significant morbidity and a lower cure rate. For the primary site assessment, positron emission tomography/computed tomography (PET/CT) using the radiotracer 18F-fluorodeoxyglucose (FDG) is the imaging modality of choice and has been shown to be superior to CT alone. Here, we report a case of cervical metastasis of squamous cell carcinoma in an 82-year-old man. 18F-FDG PET/CT revealed a tongue-base lesion, although neck CT detected no lesions because of artifacts from dental prosthesis, which was suspected as the primary site. The final diagnosis was confirmed as tongue-base cancer with neck lymph node metastases through biopsy. These results suggest that 18F-FDG PET/CT is a valuable diagnostic tool in patients with cervical metastatic lymph nodes because it can demonstrate primary tumors in patients with dental prosthesis. abstract_id: PUBMED:30795884 Positron emission tomography with computed tomography imaging (PET/CT) for the radiotherapy planning definition of the biological target volume: PART 1. Aim: Functional and molecular imaging, including positron emission tomography with computed tomography imaging (PET/CT) is increasing for radiotherapy (RT) definition of the target volume. This expert review summarizes existing data of functional imaging modalities and RT management, in terms of target volume delineation, for the following anatomical districts: brain (for primary and secondary tumors), head/neck and lung. Materials And Methods: A collection of available published data was made, by PubMed a search. Only original articles were carefully and critically revised. Results: For primary and secondary brain tumors, amino acid PET radiotracers could be useful to identify microscopic residual areas and to differ between recurrence and treatment-related alterations in case of re-irradiation. As for head and neck neoplasms may benefit from precise PET/CT-based target delineation, due to the major capability to identify high-risk RT areas. In primary and secondary lung cancer, PET/CT could be useful both to delimit a tumor and collapsed lungs and as a predictive parameter of treatment response. Conclusion: Taken together, molecular and functional imaging approaches offer a major step to individualize radiotherapeutic care going forward. Nevertheless, several uncertainties remain on the standard method to properly assess the target volume definition including PET information for primary and secondary brain tumors. abstract_id: PUBMED:23942906 Pathology-based validation of FDG PET segmentation tools for volume assessment of lymph node metastases from head and neck cancer. Purpose: FDG PET is increasingly incorporated into radiation treatment planning of head and neck cancer. However, there are only limited data on the accuracy of radiotherapy target volume delineation by FDG PET. The purpose of this study was to validate FDG PET segmentation tools for volume assessment of lymph node metastases from head and neck cancer against the pathological method as the standard. Methods: Twelve patients with head and neck cancer and 28 metastatic lymph nodes eligible for therapeutic neck dissection underwent preoperative FDG PET/CT. The metastatic lymph nodes were delineated on CT (NodeCT) and ten PET segmentation tools were used to assess FDG PET-based nodal volumes: interpreting FDG PET visually (PETVIS), applying an isocontour at a standardized uptake value (SUV) of 2.5 (PETSUV), two segmentation tools with a fixed threshold of 40% and 50%, and two adaptive threshold based methods. The latter four tools were applied with the primary tumour as reference and also with the lymph node itself as reference. Nodal volumes were compared with the true volume as determined by pathological examination. Results: Both NodeCT and PETVIS showed good correlations with the pathological volume. PET segmentation tools using the metastatic node as reference all performed well but not better than PETVIS. The tools using the primary tumour as reference correlated poorly with pathology. PETSUV was unsatisfactory in 35% of the patients due to merging of the contours of adjacent nodes. Conclusion: FDG PET accurately estimates metastatic lymph node volume, but beyond the detection of lymph node metastases (staging), it has no added value over CT alone for the delineation of routine radiotherapy target volumes. If FDG PET is used in radiotherapy planning, treatment adaptation or response assessment, we recommend an automated segmentation method for purposes of reproducibility and interinstitutional comparison. Answer: Yes, FDG PET can assist in radiotherapy target volume definition of metastatic lymph nodes in head and neck cancer. Studies have shown that FDG-PET is a promising tool for improving radiotherapy target volume delineation by defining a metabolically active biological target volume (BTV) (PUBMED:24179702). FDG-PET/CT-based nodal target volume definition (FDG-PET/CT-based NTV) has been shown to improve control of disease in the CTVelective-nodal, overall regional control, and overall survival compared to conventional CT-based nodal target volume definition (CT-based NTV) (PUBMED:31439447). Additionally, FDG-PET/CT facilitates the imaging of metabolically active and otherwise hardly detectable lymph nodes in locally advanced head and neck cancer with consequences on target volume definition and dose application in radiotherapy (PUBMED:17638170). However, the accuracy of FDG PET in target volume delineation is dependent on the segmentation tool used. For instance, a study found that many lymph nodes that are enlarged and considered metastatic by standard CT-based criteria appear to be negative on FDG-PET scan, and the results are largely dependent on the PET segmentation tool used (PUBMED:19285354). Another study highlighted the challenges involved in using FDG PET imaging for the delineation of GTV in head and neck neoplasms, particularly for small lesions (PUBMED:22865325). Furthermore, FDG PET has been shown to accurately estimate metastatic lymph node volume, but beyond the detection of lymph node metastases (staging), it has no added value over CT alone for the delineation of routine radiotherapy target volumes (PUBMED:23942906). The role of FDG-PET for radiotherapy target volume delineation for patients with head and neck cancer is expanding and should be further evaluated in clinical trials (PUBMED:24179702). In conclusion, while FDG PET can assist in radiotherapy target volume definition of metastatic lymph nodes in head and neck cancer, its use should be carefully considered and is dependent on the segmentation method applied. It is recommended to use an automated segmentation method for reproducibility and interinstitutional comparison if FDG PET is used in radiotherapy planning, treatment adaptation, or response assessment (PUBMED:23942906).
Instruction: Does low level laser therapy relieve the pain caused by the placement of the orthodontic separators? Abstracts: abstract_id: PUBMED:26315965 Does low level laser therapy relieve the pain caused by the placement of the orthodontic separators?--A meta-analysis. Objective: Pain caused by orthodontic treatment can affect patient's compliance and even force them to terminate treatments. The aim of this meta-analysis is to evaluate of the analgesic effect of low level laser therapy (LLLT) after placement of the orthodontic separators. Methods: Five databases: PubMed, Embase, Cochrane library, China Biology Medicine disc (SinoMed CBM), China National Knowledge Infrastructure (CNKI) were searched for all the appropriate studies in June, 2014. Two reviewers screened the research results under our inclusion criteria and evaluated the risk of bias independently. Then the data of the included studies was extracted for quantitative analysis by the Review Manager 5.1 software. Results: Six studies were included in our meta-analysis finally. Comparing to the placebo group, the LLLT has good analgesic effect at 6 h, 1d, 2d, 3d after placement of separators which is of statistical significance. While at 2 h, 4d, 5d after the placement, the results tend to support LLLT, but not statistically significant. Conclusion: Based on current included studies, LLLT can reduce the pain caused by the placement of separators effectively. However, because of the high heterogeneity, well designed RCTs are required in the future. abstract_id: PUBMED:23334785 The effect of 810-nm low-level laser therapy on pain caused by orthodontic elastomeric separators. The purpose of this study was to assess the effect of 810-nm (DMC Equipamentos, Sao Carlos, Brazil) continuous wave low-level laser therapy (LLLT) on the pain caused by orthodontic elastomeric separators. Thirty-seven orthodontic patients (12 male and 25 female, aged 11-32 years, mean age = 24.97 years) participated in the study, including 20 subjects aged 18 years or more, and 17 under 18 years of age. Four elastomeric separators (Dentarum, Springen, Germany) were placed for the first permanent molars (distal and mesial), either for maxillary (22 patients) or mandibular (15 patients) arches; one quadrant was randomly selected and used as a placebo group (received no laser irradiation). After separator placement for each quadrant, patients received 10 doses (2 J/cm(2), 100 mW, 20 s) of laser irradiation on the buccal side (at the cervical third of the roots), for distal and mesial of the second premolars and first permanent molars, as well as distal of second permanent molars (five doses). The same procedure was repeated for the lingual or palatal side (five doses). After 24 h, patients returned to the clinic and received another 10 doses of laser irradiation on the same quadrant. Postseparation pain level recorded on a 10-cm visual analog scale for both jaws immediately (hour 0), and after 6, 24, 30 h, as well as on days 3, 4, 5, 6, and 7. Significant differences in the pain perception (PP) were found between the laser and placebo groups at 6, 24, 30 h, and day 3 of the experiment (P &lt; 0.05). Friedman's test of multiple comparisons revealed significant differences in the PP among various time intervals for laser (chi-square = 173.407, P = 0.000) and placebo (chi-square = 184.712, P = 0.000) groups. In both groups, pain was highest at 6 and 30 h after placing elastomeric separators. No gender differences were observed in both groups. More pain was recorded in the mandible (P &lt; 0.05) at 24 (laser group) and 30 h (both groups) after starting the experiment. The PP was significantly higher (P &lt; 0.05) for the group aged 18 years or more, only at days 3 [both groups] and 4 [laser group only] of the experiment. The 810-nm continuous wave LLLT significantly reduced the PP in the first 3 days after orthodontic separation. However, the mean postseparation PP in both groups was low and wide ranges of PP scores were observed. abstract_id: PUBMED:26154454 Low-level laser therapy effects on pain perception related to the use of orthodontic elastomeric separators. Introduction: Some patients refer to pre-banding orthodontic separation as a painful orthodontic procedure. Low-level laser therapy (LLLT) has been reported to have local analgesic effect. Objective: The aim of this single-blind study was to investigate the perception of pain caused by orthodontic elastomeric separators with and without a single LLLT application (6J). Methods: The sample comprised 79 individuals aged between 13 and 34 years old at orthodontic treatment onset. Elastomeric separators were placed in first maxillary molars at mesial and distal surfaces and kept in place for three days. The volunteers scored pain intensity on a visual analogue scale (VAS) after 6 and 12 hours, and after the first, second and third days. One third of patients received laser applications, whereas another third received placebo applications and the remaining ones were controls. Applications were performed in a split-mouth design. Thus, three groups (laser, placebo and control) were assessed. Results: No differences were found among groups considering pain perception in all periods observed. Conclusion: The use of a single-dose of LLLT did not cause significant reduction in orthodontic pain perception. Overall pain perception due to orthodontic separator placement varied widely and was usually mild. abstract_id: PUBMED:31828995 Low-level laser therapy protocols in dental movement acceleration and in pain management during orthodontic treatment. In recent years various studies about the biostimulatory effects of the laser therapy in orthodontics have been carried out. This study investigates the potential advantages obtainable using the Low-level Laser Therapy during orthodontic treatment and the most efficient clinical protocols. Recently published randomized controlled trials (RCTs) have been obtained through a search on electronic databases (Cochrane Library and Pubmed). Clinical studies in humans in which Low-level Laser Therapy was applied during orthodontic treatment were included. In conclusion, 14 relevant clinical studies were identified. This study shows the possibility to obtain an increase in tooth movement between 31% and 100% depending on the laser therapy considered and the time interval for measuring the value. In addition, there is a potential impact in reducing orthodontic pain limited to the day following the application of laser therapy when orthodontic therapy includes canine retraction, and during a period not exceeding five days from the placement of fixed orthodontic appliances in the others clinical cases.Low-level Laser Therapy is considered effective both to increase the movement of the dental elements and to reduce pain during orthodontic therapy. Different clinical protocols have been identified depending on the orthodontic cases considered. Both an LED device and an AlGaAs diode device can be used. In the future paying more attention to the therapeutic possibilities offered by laser devices with greater power is recommended. A greater energy density directed to the target tissues has been proven to provoke more significant therapeutic effects. abstract_id: PUBMED:29728552 The Efficacy of Low-level Laser Therapy on Pain caused by Placement of the First Orthodontic Archwire: A Clinical Study. Aim: This study was conducted to clinically evaluate the effect of low-level laser therapy (LLLT) as a method of reducing pain reported by patients after placement of their first orthodontic archwires. Materials and methods: A sample of 10 patients with an age group of 12 to 26 years with moderate-to-severe anterior crowding was selected. Each patient was assigned to an experimental group (left quadrant with laser therapy) and a control group (right quadrant with no laser therapy). Low-level laser therapy was given immediately after the placement of initial archwire. All patients were instructed to fill up a survey form at home over the next 7 days. Results: The results revealed that the average onset of pain in the experimental group (16.10 hours) was significantly reduced when compared with the control group (3.10 hours). The most painful day was similar for both the groups. The pain ceased much sooner in the experimental group than in the control group. The intensity of pain was lesser in the experimental group when compared with the control group. Conclusion: Low-level laser therapy was an effective and noninvasive method for controlling pain in orthodontic patients after receiving their first archwires. The duration and intensity of pain reduced with the application of LLLT Clinical significance: Pain reduction during orthodontic procedures. Keywords: Low-level laser therapy, Nickel-titanium wires, Orthodontic pain. abstract_id: PUBMED:34084734 Efficacy of Photobiomodulation Therapy for Orthodontic Pain Control Following the Placement of Elastomeric Separators: A Randomized Clinical Trial. Introduction: Controlling pain in orthodontic patients has gained special attention. This study assessed the efficacy of photobiomodulation therapy (PBMT) for pain control following the placement of elastomeric separators. Methods: This split-mouth single-blind randomized clinical trial evaluated 30 orthodontic patients who required posterior elastomeric separators. The two maxillary quadrants were randomized into the laser and control groups. In the laser quadrant, an 808 nm diode laser (400 mW, 15.60 J/cm2 , 11 seconds, continuous-wave, contact mode) was irradiated to the cervical third of the maxillary first molar roots 24 hours prior to the placement of separators. The control quadrant received placebo radiation by a light-curing unit. The patients received the second laser cycle right before the placement of separators 24 hours later. The level of self-perceived pain was recorded at 0, 2, 6, 24, and 72 hours and 5 days after the intervention in the laser and control quadrants using a visual analog scale (VAS). Data were analyzed using the analysis of variance (ANOVA) and paired-samples t test. Results: The trend of change in the pain score was similar in both groups. The pain score was significantly lower in the laser group than the control group at all-time points (P &lt;0.05) except at time 0. The pain score increased in the first 6 hours and reached its maximum level in 24 hours in both groups. Conclusion: PBMT by an 808 nm diode laser can effectively decrease pain following the placement of elastomeric separators. abstract_id: PUBMED:36162900 LOW-LEVEL LASER THERAPY MAY REDUCE PAIN ASSOCIATED WITH ORTHODONTIC SEPARATOR PLACEMENT. Article Title And Bibliographic Information: Farzan, A. K. Khaleghi (2021). "The Effectiveness of Low-Level Laser Therapy in Pain Induced by Orthodontic Separator Placement: A Systematic Review." J Lasers Med Sci 12: e29. Source Of Funding: None. Type Of Study/design: Systematic review. abstract_id: PUBMED:32143365 A Comparative Assessment of Pain Caused by the Placement of Banded Orthodontic Appliances with and without Low-Level Laser Therapy: A Randomized Controlled Prospective Study. Patients still refuse or discontinue orthodontic treatment due to related pain and discomfort. In this study, we investigate if low-level laser therapy (LLLT) can reduce pain caused by orthodontic bands. Sixty subjects who needed bands placed on the upper permanent first molars were assigned randomly to the LLLT group, placebo, and control groups. Inclusion criteria were: age range 10-14 years, fully erupted upper first molars in healthy condition, presence of tight mesial proximal contact. Exclusion criteria were: systemic or metabolic diseases, chronic pain or neurological or psychiatric disorders, use of pharmacological agents interfering with pain perception, previous orthodontic treatment or the simultaneous presence of other devices in the patient's mouth. The assessment of pain was performed by using a numeric rating scale (NRS) considering different time intervals, i.e., immediately after bands placement, 6 h, 24 h, and from day 2 to day 5. Differences in the maximum pain and in pain experienced at each time-point, among the three groups, was assessed by using the Kruskal-Wallis H. The final sample included 56 patients, 29 males, and 27 females, with a mean age of 12.03 ± 1.3 years. Patients were randomly allocated into three groups (tested, control, and placebo group) with each group consisting respectively of 19, 20, and 17 individuals. Subjects in the LLLT experienced less pain at each time interval as well as the maximum pain score being lower in the LLLT compared to control and placebo groups. These findings were all statistically significant (p &lt; 0.05). LLLT can alleviate the intensity of pain after the placement of orthodontic bands. abstract_id: PUBMED:38308275 The effectiveness of low-level laser therapy and low-intensity pulsed ultrasound in reducing pain induced by orthodontic separation: a randomized controlled trial. Background: The low-level laser therapy (LLLT) and low-intensity pulsed ultrasound (LIPUS) have been recently applied to control pain during orthodontic treatment. Objective: To evaluate and compare the effectiveness of LLLT and LIPUS in reducing pain induced by orthodontic separation. Study Design: A single-blinded randomized controlled trial. Methods: One hundred and fifty patients were randomly assigned into three groups; LLLT group, LIPUS group, and control group. After 5 min from the separators' placement, the first dose of the laser or the ultrasound was applied, the second dose was applied after 24 h, and the third dose was applied after 48 h on both maxillary and mandibular first molars. The exposure of laser was for 20 s at each point (maxillary and mandibular first molars), with an 810-nm aluminum-gallium-arsenide (AlGaAs) diode laser on continuous mode. The output power set at 150 mW, the energy density of 4 J/cm2, and a laser spot diameter of 7 mm were applied. Whereas the frequency of ultrasonic toothbrush was 1.6 MHz; and average output intensity was 0.2 W/cm2. The application was for 20 min (5 min on each first molar). The control group received the separators without another intervention. A Visual Analog Scale (VAS 100 mm) was used to assess pain intensity at several time intervals during the first four days after the separators' placement. Results: A total of 145 patients were assessed. There was a significant difference in pain perception among the three groups after 5 min (P = .002). The maximum pain level was reached after 24 h. However, the laser group and the ultrasound group showed a statistically significant decrease in pain scores compared to the control group at all the assessment time points (P &lt; .001). Whereas there was no difference between the laser group and the ultrasound group in reducing the pain scores (P &gt; .05). Conclusions: The LLLT and the LIPUS effectively reduce the separation pain when applied in multiple doses without differences between them. Trial Registration: This trial was registered with the German Clinical Trials Register (DRKS). ( https://www.drks.de/drks_web/navigate.do?navigationId=trial.HTML&amp;TRIAL_ID= DRKS00029991). Date of registration: 26/08/2022. abstract_id: PUBMED:26718379 Effect of a single dose of low-level laser therapy on spontaneous and chewing pain caused by elastomeric separators. Introduction: The aim of this study was to see the effect of a single dose of low-level laser therapy on spontaneous and chewing pain after the placement of elastomeric separators. Methods: Eighty-eight patients were randomly selected for this single-blind study. Elastomeric separators were placed mesial and distal to the permanent first molars in all quadrants. Both arches were divided into experimental and control sides. The experimental sides were treated with low-level laser therapy on 3 points on the buccal mucosa for 20 seconds each, with a 940-nm gallium-aluminum-arsenic diode laser on continuous mode and power set at 200 mW. The other side received placebo laser therapy without turning on the laser. A numeric rating scale was used to assess the intensity of spontaneous and chewing pain for the next 7 days. The independent sample t test and repeated-measures analysis of variance with the post hoc Tukey test was used to analyze the results. Results: Significant differences were found in spontaneous and chewing pain among both groups (P &lt;0.05). Conclusions: A single dose of low-level laser therapy can be an efficient modality to reduce the postoperative pain associated with the placement of elastomeric separators. Answer: Yes, low level laser therapy (LLLT) has been shown to relieve the pain caused by the placement of orthodontic separators. A meta-analysis evaluating the analgesic effect of LLLT after placement of orthodontic separators found that LLLT has a good analgesic effect at various time points after the placement of separators, with statistical significance observed at 6 hours, 1 day, 2 days, and 3 days post-placement (PUBMED:26315965). Another study reported that 810-nm continuous wave LLLT significantly reduced pain perception in the first 3 days after orthodontic separation (PUBMED:23334785). However, a single-blind study found no significant reduction in orthodontic pain perception with a single dose of LLLT (PUBMED:26154454). Further research indicated that LLLT is effective both in increasing the movement of dental elements and in reducing pain during orthodontic therapy, with potential advantages obtainable using LLLT during orthodontic treatment (PUBMED:31828995). A clinical study also supported the effectiveness of LLLT in reducing pain reported by patients after placement of their first orthodontic archwires, with the duration and intensity of pain reduced with the application of LLLT (PUBMED:29728552). A randomized clinical trial assessing the efficacy of photobiomodulation therapy (PBMT), a form of LLLT, for pain control following the placement of elastomeric separators, found that PBMT by an 808 nm diode laser effectively decreased pain following the placement of elastomeric separators (PUBMED:34084734). A systematic review also suggested that LLLT may reduce pain associated with orthodontic separator placement (PUBMED:36162900). Another randomized controlled prospective study showed that subjects in the LLLT group experienced less pain at each time interval, as well as a lower maximum pain score, compared to control and placebo groups, indicating that LLLT can alleviate the intensity of pain after the placement of orthodontic bands (PUBMED:32143365). Additionally, a randomized controlled trial comparing the effectiveness of LLLT and low-intensity pulsed ultrasound (LIPUS) in reducing pain induced by orthodontic separation found that both LLLT and LIPUS effectively reduced separation pain when applied in multiple doses (PUBMED:38308275).
Instruction: Clinical outcomes of nitinol and stainless steel uncovered metal stents for malignant biliary strictures: is there a difference? Abstracts: abstract_id: PUBMED:21111871 Clinical outcomes of nitinol and stainless steel uncovered metal stents for malignant biliary strictures: is there a difference? Background: Self-expandable metal stents (SEMSs) made from nitinol (N) were developed as a potentially more effective alternative to conventional stainless steel (SS) stents. Objective: To compare clinical outcomes of N versus SS stents in the management of malignant biliary strictures. Design: Retrospective study. Setting: Tertiary-care cancer center. Patients: All patients with first-time N (Flexxus) and SS (Wallstent) uncovered biliary SEMSs placed between January 2006 and October 2007. Interventions: SEMS placement. Results: A total of 81 N and 96 SS stents were placed. The most common cancer diagnosis was pancreatic (80.2% N; 62.5% SS; P = .06). The most frequent site of stricture was the common bile duct (85.2% N; 86.5% SS; P = .31). Biliary decompression was achieved in 93.8% of the N group and 86.4% of the SS group (P = .22). Immediate stent manipulation was required in 4 patients in each group. Subsequent intervention for poor drainage was performed in 17 N (21%) and 26 SS (27%) stents at mean times of 142.1 days (range, 5-541 days; median, 77 days) and 148.1 days (range, 14-375; median, 158.5), respectively (P = .17). The occlusion rate between N and SS stents was not significant (P = .42). The overall durations of stent patency in the N and SS group were similar (median 129 and 137 days, respectively; P = .61), including the subgroup analysis performed on patients with pancreatic cancer (P = .60) and common duct strictures (P = .77). Complication rates were low in both groups (early: 3.7% N, 6.3% SS; late: 2.5% N, 3.1% SS). Ninety percent underwent chemotherapy and 38% radiation therapy in each group. Limitations: Retrospective design. Conclusion: Similar outcomes were achieved with N and SS stents regarding efficacy, duration of stent patency, occlusion rates, and complications. Our results are most applicable to patients with common duct strictures and pancreatic cancer. abstract_id: PUBMED:34657395 Endoscopic removal of biliary uncovered self-expandable metal stents in one patient placed 15 years ago. Before: 51 W: We removed two biliary uncovered SEMSs from one patient using stent-in-stent technique with a fully covered biliary stent, one of which was placed for up to 15 years. That is rare and also meaningful. Maybe,we should have a new understanding of uncovered SEMS in management of benign biliary strictures. After: 50 W: Yu and colleagues removed two biliary uncovered self-expandable metal stents, one of which had been in place for 15 years, from one patient using the stent-in-stent technique with a fully covered biliary stent. A new understanding of uncovered self-expandable metal stents in the management of benign biliary strictures is needed. abstract_id: PUBMED:37229325 Efficacy and safety of covered versus uncovered self‑expandable metal stents for the palliative treatment of malignant distal biliary stricture: A long‑term retrospective study. Both covered self-expandable metal stents (CSEMSs) and uncovered self-expandable metal stents (USEMSs) have been tried in the palliation of malignant distal biliary strictures by means of endoscopic retrograde cholangiopancreatography (ERCP); however, the comparison of efficacy and safety between them remains contested. To the best of our knowledge, no similar studies have assessed this in the Chinese population. In the present study, the clinical and endoscopic data of 238 patients (CSEMSs, n=55; USEMSs, n=183) with malignant distal biliary strictures from 2014 to 2019 were collected. The efficacy indicated by mean stent patency, stent patency rate, mean patient survival time and survival rate, and the safety indicated by adverse events after CSEMS or USEMS placement were retrospectively analyzed and compared. The mean stent patency time was significantly longer in the CSEMSs group than that in the USEMSs group (262.8±195.3 days vs. 169.5±155.7 days, P=0.002). The mean patient survival time was significantly longer in the CSEMSs group than that in the USEMSs group (273.9±197.6 days vs. 184.9±167.6 days, P=0.003). The stent patency rate and patient survival rate were significantly higher in the CSEMSs group than those in the USEMSs group at 6 and 12 months, but not at 1 and 3 months. There was no significant difference in stent dysfunction and adverse events between the two groups, although post-ERCP pancreatitis (PEP) occurred more frequently in the CSEMSs group than in the USEMSs group (18.1% vs. 8.8%, P=0.049). In conclusion, CSEMSs were better than USEMSs for malignant distal biliary strictures in terms of stent patency time and patient survival time as well as stent patency rate and patient survival rate in the long term (&gt;6 months). Adverse events in the two groups occurred at a similar rate, although the incidence of PEP was higher in the CSEMSs group. abstract_id: PUBMED:23591331 Comparison of the utility of covered metal stents versus uncovered metal stents in the management of malignant biliary strictures in 749 patients. Background: Self-expandable metal stents (SEMSs) are used to relieve malignant biliary obstruction. Objective: To compare outcomes between covered self-expandable metal stents (CSEMSs) and uncovered self-expandable metal stents (USEMSs) in malignant biliary obstruction. Design: Retrospective cohort study. Setting: Tertiary cancer center. Patients: Patients with malignant biliary obstruction. Interventions: Placement of CSEMS or USEMS. Main Outcome Measurements: Time to recurrent biliary obstruction (TRO), overall survival (OS), and adverse events. Results: From January 2000 to June 2011, 749 patients received SEMSs: 171 CSEMSs and 578 USEMSs. At 1 year, there was no significant difference in the percentage of patients with recurrent obstruction (CSEMSs, 35% vs USEMSs, 38%) and survival (CSEMSs, 45% vs USEMSs, 49%). There was no significant difference in the median OS (CSEMSs, 10.4 months vs USEMSs, 11.8 months; P = .84) and the median TRO (CSEMSs, 15.4 months vs USEMSs, 26.3 months; P = .61). The adverse event rate was 27.5% for the CSEMS group and 27.7% for the USEMS group. Although tumor ingrowth with recurrent obstruction was more common in the USEMS group (76% vs 9%, P &lt; .001), stent migration (36% vs 2%, P &lt; .001) and acute pancreatitis (6% vs 1%, P &lt; .001) were more common in the CSEMS group. Limitations: Retrospective study. Conclusions: There was no significant difference in the patency rate or overall survival between CSEMSs and USEMSs for malignant distal biliary strictures. The CSEMS group had a significantly higher rate of migration and pancreatitis than the USEMS group. No significant SEMS-related adverse events were observed in patients undergoing neoadjuvant chemoradiation or surgical resection. abstract_id: PUBMED:2294541 Self-expandable stainless steel braided endoprosthesis for biliary strictures. A new type of biliary stent was inserted in 45 patients with symptomatic biliary strictures, 40 malignant and five benign. The stent is made of stainless steel woven into a tubular mesh. It is introduced in compressed form on a 7- or 9-F delivery catheter and released at the site of the stricture. Fully open, the stent has an internal diameter of 1 cm. The 30-day mortality was 7%. The early (less than 72 hours) complication rate was 16%. Recurrent jaundice occurred in 42%; late complications of sepsis without jaundice occurred in an additional 11%. None of the stents migrated. This new design allows a large-diameter stent to be inserted percutaneously without increased risk of complications. The frequency of recurrent jaundice, however, was not less than that observed with plastic stents, but this may have reflected the spectrum of patients treated, many with hilar strictures and previous interventions. abstract_id: PUBMED:19482279 Comparative performance of uncoated, self-expanding metal biliary stents of different designs in 2 diameters: final results of an international multicenter, randomized, controlled trial. Background: The Wallstent has remained the industry standard for biliary self-expanding metal stents (SEMSs). Recently, stents of differing designs, compositions, and diameters have been developed. Objective: To compare the new nitinol 6-mm and 10-mm Zilver stents with the 10-mm stainless steel Wallstent and determine the mechanism of obstruction. Design: Randomized, prospective, controlled study. Setting: Nine centers experienced in SEMS placement during ERCP. Patients: A total of 241 patients presenting between September 2003 and December 2005 with unresectable malignant biliary strictures at least 2 cm distal to the bifurcation. Main Outcome Measurement: Stent occlusions requiring reintervention and death. Results: At interim analysis, a significant increase in occlusions was noted in the 6-mm Zilver group at the P = .04 level, resulting in arm closure but continued follow-up. Final study arms were 64, 88, and 89 patients receiving a 6-mm Zilver, 10-mm Zilver, and 10-mm Wallstent, respectively. Stent occlusions occurred in 25 (39.1%) of the patients in the 6-mm Zilver arm, 21 (23.9%) of the patients in the 10-mm Zilver arm, and 19 (21.4%) of the patients in the 10-mm Wallstent arm (P = .02). The mean number of days of stent patency were 142.9, 185.8, and 186.7, respectively (P = .057). No differences were noted in secondary endpoints, and the study was ended at the 95% censored study endpoints. Biopsy specimens of ingrowth occlusive tissue revealed that 56% were caused by benign epithelial hyperplasia. Conclusions: SEMS occlusions were much more frequent with a 6-mm diameter SEMS and equivalent in the two 10-mm arms despite major differences in stent design, material, and expansion, suggesting that diameter is the critical feature. Malignant tumor ingrowth produced only a minority of the documented occlusions. abstract_id: PUBMED:27566620 Covered versus uncovered self-expandable metal stents for malignant biliary strictures: A meta-analysis and systematic review. Self-expandable metal stents (SEMS) are used for palliating inoperable malignant biliary strictures. It is unclear if covered metal stents are superior to uncovered metal stents in these patients. We compared clinical outcomes in patients with covered and uncovered stents. Studies using covered and uncovered metallic stents for palliation in patients with malignant biliary stricture were reviewed. Articles were searched in MEDLINE, PubMed, and Ovid journals. Fixed and random effects models were used to calculate the pooled proportions. Initial search identified 1436 reference articles, of which 132 were selected and reviewed. Thirteen studies (n = 2239) for covered and uncovered metallic stents which met the inclusion criteria were included in this analysis. Odds ratio for stent occlusion rates in covered vs. uncovered stents was 0.79 (95 % CI = 0.65 to 0.96). Survival benefit in patients with covered vs. uncovered stents showed the odds ratio to be 1.29 (95 % CI = 0.95 to 1.74). Pooled odds ratio for migration of covered vs. uncovered stents was 9.9 (95 % CI = 4.5 to 22.3). Covered stents seemed to have significantly lesser occlusion rates, increased odds of migration, and increased odds of pancreatitis compared to uncovered stents. There was no statistically significant difference in the survival benefit, overall adverse event rate, and patency period of covered vs. uncovered metal stents in patients with malignant biliary strictures. abstract_id: PUBMED:8565890 Results of the new nitinol self-expandable stents for distal biliary structures. Background And Study Aims: The nitinol stent is a self-expandable spiral stent made of nickel-titanium alloy. We performed a pilot study to evaluate the method of stent insertion and stent efficacy. Patients And Methods: Twenty-eight patients with irresectable malignancy had nitinol stents inserted for obstructive jaundice due to distal biliary strictures. They were followed prospectively for a median of ten weeks (range 1-48) until stent occlusion or death. All patients underwent sphincterotomy prior to stent insertion. Stent deployment was assisted by subsequent balloon dilation of the stent in two patients. Plastic stent insertion was used to temporarily brace the nitinol stent in another three patients. Results: Short-term relief of jaundice was achieved in 20 of the 28 patients (71%). The remaining eight patients had persistent jaundice due to failure of stent insertion (three patients) or early stent dysfunction (five patients). Long-term success was achieved in 12 of 28 patients (43%). Ten of the 12 patients died without complications after a median of 13.5 weeks (range 3-48). One of the 12 patients is still alive with the stent in place for 40 weeks, and one patient underwent elective bypass surgery, and the stent was removed surgically. Eight of the 28 patients had recurrent symptoms due to late stent dysfunction after a median of 19 weeks (range 6-32). Conclusions: Nitinol stents are technically difficult to insert. The problem of stent dysfunction (13 of 28, 46%) is not improved by the use of nitinol stents. Whether improvement of the insertion technique and enhancement of the expanding force of the stent may improve stent efficacy merits further evaluation. abstract_id: PUBMED:34720527 Fully covered metal biliary stents: A review of the literature. Fully covered self-expandable metal stents (FCSEMS) represent the latest advancement of metal biliary stents used to endoscopically treat a variety of obstructive biliary pathology. A large stent diameter and synthetic covering over the tubular mesh prolong stent patency and reduce risk for tissue hyperplasia and tumor ingrowth. Additionally, FCSEMS can be easily removed. All these features address issues faced by plastic and uncovered metal stents. The purpose of this paper is to comprehensively review the application of FCSEMS in benign and malignant biliary strictures, biliary leak, and post-sphincterotomy bleeding. abstract_id: PUBMED:18277888 Uncovered Hanaro Versus Luminex metal stents for palliation of malignant biliary strictures. Background: Endoscopic stent insertion is the optimum method of palliation for malignant biliary obstruction. Various types of self-expanding metal stents have been introduced in the market. Whether one type of stents is superior to the others in terms of stent patency remains undefined. Goals: This randomized trial compared 2 uncovered metal stents with similar technical characteristics, but significant cost difference, in the palliation of inoperable malignant biliary strictures. Study: Ninety-two patients with inoperable biliary obstruction were randomized to receive either a 10-mm diameter Hanaro or Luminex uncovered metal stent. The duration of stent patency, the overall patient survival, the mechanism of stent occlusion, and the adverse events were analyzed. Results: Eighty-nine patients were included in the analysis; 44 received Hanaro stents and 45 Luminex stents. The overall median patency rates between the 2 stents did not differ (328 d for the Hanaro vs. 289 d for the Luminex stent; P=0.815). Similarly, no difference was found between the overall median survival rates by the 2 stents (347 d for the Hanaro vs. 307 d for the Luminex stent; P=0.654). Two major procedure-related complications occurred, perforation (Hanaro stent) and proximal stent migration (Luminex stent). Stent occlusion requiring reintervention occurred in 25 patients (11 with the Hanaro vs. 14 with the Luminex stent; P=0.521). Conclusions: The 2 uncovered metal stents are comparable in terms of placement, occlusion rates, overall stent patency, and patient survival; Hanaro stent insertion, however, seems to be a cost-saving strategy at least in Greece. Answer: The clinical outcomes of nitinol (N) and stainless steel (SS) uncovered metal stents for malignant biliary strictures have been compared in a retrospective study (PUBMED:21111871). The study found that similar outcomes were achieved with both N and SS stents in terms of efficacy, duration of stent patency, occlusion rates, and complications. Specifically, biliary decompression was achieved in 93.8% of the N group and 86.4% of the SS group, with no significant difference in the occlusion rate between the two types of stents. The overall durations of stent patency were also similar, with a median of 129 days for N stents and 137 days for SS stents. The study concluded that the results are most applicable to patients with common duct strictures and pancreatic cancer. Another study comparing the performance of different designs and diameters of uncoated, self-expanding metal biliary stents found that stent occlusions were much more frequent with a 6-mm diameter SEMS and equivalent in the two 10-mm arms despite major differences in stent design, material, and expansion (PUBMED:19482279). This suggests that stent diameter is a critical feature for stent patency. A meta-analysis and systematic review comparing covered versus uncovered self-expandable metal stents for malignant biliary strictures found that covered stents had significantly lesser occlusion rates but increased odds of migration and pancreatitis compared to uncovered stents (PUBMED:27566620). There was no statistically significant difference in survival benefit, overall adverse event rate, and patency period between covered and uncovered metal stents. In conclusion, the available evidence suggests that there is no significant difference in clinical outcomes between nitinol and stainless steel uncovered metal stents for malignant biliary strictures, particularly in terms of efficacy, stent patency duration, and complication rates. Stent diameter appears to be an important factor in stent patency, and while covered stents may have different rates of occlusion and migration, the choice between covered and uncovered stents does not seem to affect overall survival or patency period significantly.
Instruction: Are modern contraceptives acceptable to people and where do they source them from across Nigeria? Abstracts: abstract_id: PUBMED:23343579 Are modern contraceptives acceptable to people and where do they source them from across Nigeria? Unlabelled: Background: Understanding the extent that different modern contraceptives are acceptable to different populations groups and where they get the commodities from will help in developing specific interventions that will help to scale-up the availability of the contraceptives. Methods: The study took place in urban and rural sites in six states across Nigeria. Data on acceptability and sources of the contraceptives was collected from at least 770 randomly selected mostly female householders from each state respectively using a questionnaire. Acceptability of the different contraceptives was scored by the respondents on a scale of 1 (lowest) to 10 (highest). The relationships between acceptability and sources of the contraceptives with socio-economic status and geographic location of the respondents were examined. Results: The use of modern contraceptives in general was acceptable to 87% of the respondents. Male condom was the most acceptable means of contraceptive with an average score of 5.0. It was followed by implants with and oral contraceptive pill with average scores of 4.0, whilst IUD was the least acceptable with an average score of 2.9. The private sector was the major source of contraceptives to different population groups. Both male and female condoms were mostly procured from patent medicine dealers (PMD) and pharmacy shops. Intra Uterine Devices (IUDs) and implants were mostly sourced from public and private hospitals in the urban areas, whilst injectibles were mostly sourced from private hospitals. Oral contraceptives were mostly sourced from pharmacy shops and patent medicine dealers. There were SES and geographic differences for both acceptability and sources of the contraceptives. Also, the sources of different contraceptives depended on the type of the contraceptive. Conclusion: The different contraceptives were acceptable to the respondents and the major source of the contraceptives was the private sector. Hence, public-private partnership arrangements should be explored so that universal coverage with contraceptives could be easily achieved. Interventions should be developed to eliminate the inequities in both acceptability and sources of different contraceptives. The acceptability of all the contraceptives should be enhanced with relevant behaviour change communication interventions especially in areas with the lowest levels of acceptability. abstract_id: PUBMED:36635738 Prevalence and associated factors of modern contraceptive discontinuation among sexually active married women in Nigeria. Background: Contraceptive discontinuation for reasons other than the desire for pregnancy is associated with a high rate of unintended pregnancies leading to unsafe abortions, maternal morbidity and mortality. In Nigeria, little is known about modern contraceptive discontinuation using the calendar data. Methods: A cross-sectional research design from the 2018 Nigeria Demographic and Health Surveys (NDHS) women's dataset was used to examine the prevalence and associated factors of modern contraceptive discontinuation among sexually active married women in Nigeria. A weighted sample size of 3,353 currently sexually active married or in union women who have ever used a modern contraceptive 5 years before the survey and with complete reproductive histories and are not sterilised or declared infecund was analysed. Data were analysed and displayed using frequency tables and charts, chi-square test, and binary logistic regression model at 5% level of significance. Results: The prevalence of modern contraceptive discontinuation was 35.8% (1199) with 45.8% (549) of the women discontinuing using modern contraceptives while at risk of pregnancy. The most modern method discontinued was Injectables (25.2%) while the commonest reason for modern method discontinuation was because they wanted to become pregnant (36.1%). Associated factors of modern contraceptive discontinuation among sexually active married women in Nigeria were: marital duration (aOR = 3.0; 95%CI: 1.5-6.2), visitation to a health facility in the last 12 months before the survey (aOR = 0.6; 95%CI: 0.4-0.8), education (aOR = 2.0; 95%CI: 1.2-3.4) and region of residence (aOR = 2.7; 95%CI: 1.6-4.7). Conclusion: Modern contraceptive discontinuation among the study respondents was high. Region of residence, health facility visitation and marital duration were significantly associated with modern contraceptive discontinuation. The study suggests that health care providers should address the discontinuation of contraception through counselling, particularly among women who reside in the region of high prevalence of contraceptive discontinuation, short-term users as well as strengthen the use of contraception among those who are still at risk of becoming pregnant. Governments and stakeholders should also partner with private sectors to make health care accessible to women by bring health facilities closer to them to improve facility visitation. abstract_id: PUBMED:37585125 Uptake of modern and traditional contraceptive methods in Nigeria: Lessons from a nationwide initiative on programming for results (2015-2018). In general, family planning uptake promotes healthy living among couples and their children, in addition to aiding national development. This study was a secondary analysis of data collected from two nationally representative data - 2015 and 2018 National Nutrition and Health Surveys (NNHS) - aimed at measuring the uptake of modern and traditional contraceptive methods among women of reproductive age in Nigeria. The data were analysed by presenting differentials in prevalence of modern and traditional contraceptives between 2015 and 2018. The results showed that during the periods modern contraceptive uptake in Nigeria ranged between 10% and 17%. By contrast, the prevalence of the traditional methods was 8.3% and 10.0%. Within four years (2015-2018), the average national modern contraceptive uptake among women increased by 7%, while the traditional contraceptive uptake reduced by 2%. The uptake of both modern and traditional contraceptive methods varied by ages group of women, geo-political regions, and State of residence. We conclude that the uptake of modern contraception is below expectation in all regions in Nigeria. The uptake is worse in the northern regions as compared to the southern regions. Government needs to invest more to increase access to and utilization of modern contraceptive methods. abstract_id: PUBMED:30373570 Gender norms and modern contraceptive use in urban Nigeria: a multilevel longitudinal study. Background: Evidence suggests that gender equality positively influences family planning. However, the evidence from urban Africa is sparse. This study aimed to examine the association between changes in gender norms and modern contraceptive use over time among women in urban Nigeria. Methods: Data were collected in 2010/2011 from 16,118 women aged 15-49 living in six cities in Nigeria (Abuja, Benin, Ibadan, Ilorin, Kaduna, and Zaria) and again in 2014 from 10,672 of the same women (34% attrition rate). The analytical sample included 9933 women living in 480 neighborhoods. A four-category outcome variable measured their change in modern contraceptive use within the study period. The exposure variables measured the changes in the level of gender-equitable attitudes towards: a) wife beating; b) household decision-making; c) couples' family planning decisions; and d) family planning self-efficacy. Multilevel multinomial logistic regression models estimated the associations between the exposure variables at the individual and neighborhood levels and modern contraceptive use controlling for the women's age, education, marital status, religion, parity, household wealth, and city of residence. Results: The proportion of women who reported current use of modern contraceptive methods increased from 21 to 32% during the four-year study period. At both surveys, 58% of the women did not report using modern contraceptives while 11% reported using modern contraceptives; 21% did not use in 2010/2011 but started using by 2014 while 10% used in 2010/2011 but discontinued use by 2014. A positive change in the gender-equitable attitudes towards household decision-making, couples' family planning decisions, and family planning self-efficacy at the individual and neighborhood levels were associated with increased relative probability of modern contraceptive use (adoption and continued use) and decreased relative probability of modern contraceptive discontinuation by 2014. No such associations were found between the individual and neighborhood attitudes towards wife beating and modern contraceptive use. Accounting for the individual and neighborhood gender-equitable attitudes and controlling for the women's demographic characteristics accounted for 55-61% of the variation between neighborhoods in the change in modern contraceptive use during the study period. Conclusion: Interventions that promote gender equality have the potential to increase modern contraceptive use in Nigerian cities. abstract_id: PUBMED:34077133 Effect of community health workers' visits on uptake of modern contraception among rural women of reproductive age in Nigeria. This paper assessed the effect of visits by Community Health Workers (CHW) in the prior 12 months on modern contraceptive use at the time of the survey using a national sample of women residing in rural communities in Nigeria. Cross-sectional data from 5072 rural women ages 15-49 years interviewed in the PMA2020 Survey in 6 states in Nigeria in 2018 were used. Descriptive analysis and generalized linear models were conducted in Stata 15.1 and average marginal effects calculated. Overall prevalence of modern contraceptive use was 14.8% (95% CI: 12.7%, 17.3%), varying from 2.1% in Kano to 22.7% in Nasarawa. Ten percent of women reported that they were visited by a community health worker in the 12-month period preceding the survey, ranging from 2.9% in Kano to 14.6% in Nasarawa. Women visited by a CHW had 50% higher odds of reporting modern contraceptive use, and these visits raised the probability of modern contraceptive use by an average of 6.4 percentage points overall. Local governments in rural Nigeria should invest in training, deploying and supervising CHWs in the provision of modern contraception through home visits to women who may otherwise have limited access to improve use. abstract_id: PUBMED:37585001 Susceptibility of Nigerian adolescents to pregnancy and use of modern contraceptives. Nearly half of pregnancies amongst adolescent girls between ages 15 and 19 are unplanned, one outcome of this is unsafe abortion. Nigerian adolescents aged 15-19 have higher proportion of unmet needs for contraception than those aged 20-24, raising pertinent questions on their perceived susceptibility to pregnancy. Using the Health Belief Model, this article examined the effect of perceived susceptibility to pregnancy on modern contraceptive use among adolescents in Nigeria. Weighted data for 983 sexually active unmarried adolescents aged 15-19 years was extracted from the 2018 Nigerian Demographic and Health Survey. Binomial logistic regression was modelled to test for this relationship. Results showed that there is no significant association between perceived susceptibility and modern contraceptive use. However, adolescents who make use of the internet (AOR=1.659, CI 1.046-2.630), and those who had a sexual partner (AOR=4.051, CI 1.960-8.639), more than one partner in the last 12 months (AOR=6.037, CI 2.292-15.902) were more likely to use modern contraceptive. Young adolescents in Nigeria needs to be sensitized about reproductive health and the importance of the use of contraceptive. abstract_id: PUBMED:32430893 The Examination of Diffusion Effects on Modern Contraceptive Use in Nigeria. This study uses data gathered for an evaluation of a Bill &amp; Melinda Gates Foundation-funded initiative designed to increase modern contraceptive use in select urban areas of Nigeria. When the initiative was conceived, the hope was that any positive momentum in the cities would diffuse to surrounding areas. Using a variety of statistical methods, we study three aspects of diffusion and their effects on modern contraceptive use: spread through mass communications, social learning, and social influence. Using a dynamic causal model, we find strong evidence of social multiplier effects through social learning. The results for social influence and spread through mass communications are promising, but we are unable to identify definitive causal impacts. abstract_id: PUBMED:37585013 Factors influencing the use of modern contraceptives amongst postpartum women in a rural tertiary hospital in South-South Nigeria. The postpartum period is a critical time to address the high unmet needs of family planning and reduce the risks associated with closely spaced pregnancies. Unintended pregnancies immediately following childbirth are associated with increased maternal mortality, morbidity, and poor pregnancy outcomes. Adoption of postpartum family planning is a cost-effective way that would reduce maternal and child morbidity and mortality. This study aimed to establish the prevalence, the clients' sociodemographic factors, and health care workers' influences on modern contraceptive uptake amongst postpartum women at Irrua Specialist Teaching Hospital (ISTH), Irrua. This descriptive cross-sectional study was carried out amongst postpartum mothers attending maternal and child health clinics in ISTH, Irrua, Edo State, Nigeria. The selection of participants was by simple random sampling technique. All consented participants selected through simple random sampling were administered questionnaires, and completed ones were coded and analysed while the level of significance was set at 5%. In this study, over 90% of the women were aware of a method of modern contraception. However, this did not translate to increased contraceptive uptake as only 17.8% of the respondents were currently on a method of postpartum contraception. The statistically significant sociodemographic factors influencing the utilization of modern contraceptives by postpartum women, as seen in this study, were: Age (p &lt;0.001), level of education (p &lt;0.001), religion (p=0.048), and parity (p=0.010). The women who have higher educational status, higher parity, and adequate information provided by healthcare providers were more likely to use at least a method of modern contraceptive postpartum. The uptake of modern contraceptives amongst postpartum women was low in this study. Strengthening family planning counseling services during prenatal care and emphasizing the girl child education, provision of adequate information by healthcare providers on the need for postpartum contraception could enhance modern contraceptive use for postpartum women. abstract_id: PUBMED:35720812 Differentials in Contraceptive Use Among Selected Minority Ethnic Groups in Nigeria. Ethnicity is one of the critical factors that shape contraceptive use in Nigeria. While there are growing disparities in contraceptive uptake among women of reproductive age in the three major ethnic groups (Hausa, Igbo and Yoruba), not much is known about differentials in contraceptive use among the minority ethnic groups. This study examined differentials in contraceptive use among a sample of 1,072 respondents comprising the Ebira (352), Igala (358) and Okun (361) ethnic groups in Nigeria. Questionnaire was administered to respondents proportionately in the selected minority ethnic groups with six key informant interviews and 12 focus group discussions to generate quantitative and qualitative data among ever-married women. Quantitative data were analyzed at bivariable and multivariable levels. The qualitative data were content-analyzed. Differentials in contraception are shaped by ethnic affiliations and socio-demographic characteristics of couples. The use of modern contraceptives was low among the Ebira (25.7%) and Igala (24.1%) ethnic groups, but high among Okun (67%) women of reproductive age. The odd of using a modern contraceptive is significantly higher among the Okun women (UOR = 5.618, 95% CI 4.068-7.759) than the Ebira and Igala. There is no significant difference between the Ebira and Igala minority ethnic groups on modern contraceptive use. Ethnicity as a factor is not a stand-alone predictor of the use of modern contraception among the study groups, other socio-economic variables such as residence, religion, income and marital status were significant predictors of modern contraceptive use among minority ethnic groups. We suggest introducing reproductive health intervention programmes targeted at sensitizing the minority ethnic groups on effective modern contraceptive use while addressing their specific modern contraceptive need in Nigeria. abstract_id: PUBMED:25114515 Peer-driven contraceptive choices and preferences for contraceptive methods among students of tertiary educational institutions in Enugu, Nigeria. Purpose: To describe the methods preferred for contraception, evaluate preferences and adherence to modern contraceptive methods, and determine the factors associated with contraceptive choices among tertiary students in South East Nigeria. Methods: A questionnaire-based cross-sectional study of sexual habits, knowledge of contraceptive methods, and patterns of contraceptive choices among a pooled sample of unmarried students from the three largest tertiary educational institutions in Enugu city, Nigeria was done. Statistical analysis involved descriptive and inferential statistics at the 95% level of confidence. Results: A total of 313 unmarried students were studied (194 males; 119 females). Their mean age was 22.5±5.1 years. Over 98% of males and 85% of females made their contraceptive choices based on information from peers. Preferences for contraceptive methods among female students were 49.2% for traditional methods of contraception, 28% for modern methods, 10% for nonpharmacological agents, and 8% for off-label drugs. Adherence to modern contraceptives among female students was 35%. Among male students, the preference for the male condom was 45.2% and the adherence to condom use was 21.7%. Multivariate analysis showed that receiving information from health personnel/media/workshops (odds ratio 9.54, 95% confidence interval 3.5-26.3), health science-related course of study (odds ratio 3.5, 95% confidence interval 1.3-9.6), and previous sexual exposure prior to university admission (odds ratio 3.48, 95% confidence interval 1.5-8.0) all increased the likelihood of adherence to modern contraceptive methods. Conclusion: An overwhelming reliance on peers for contraceptive information in the context of poor knowledge of modern methods of contraception among young people could have contributed to the low preferences and adherence to modern contraceptive methods among students in tertiary educational institutions. Programs to reduce risky sexual behavior among these students may need to focus on increasing the content and adequacy of contraceptive information held by people through regular health worker-led, on-campus workshops. Answer: Modern contraceptives are generally acceptable to people in Nigeria, with an acceptability rate of 87% among respondents in a study that took place in urban and rural sites across six states. The male condom was the most acceptable means of contraceptive with an average score of 5.0, followed by implants and oral contraceptive pills with average scores of 4.0, while the Intra Uterine Device (IUD) was the least acceptable with an average score of 2.9 (PUBMED:23343579). In terms of sourcing, the private sector was the major source of contraceptives to different population groups. Male and female condoms were mostly procured from patent medicine dealers (PMD) and pharmacy shops. IUDs and implants were primarily sourced from public and private hospitals in urban areas, injectables were mostly sourced from private hospitals, and oral contraceptives were mostly sourced from pharmacy shops and patent medicine dealers. There were socioeconomic status (SES) and geographic differences for both acceptability and sources of contraceptives, and the sources depended on the type of contraceptive (PUBMED:23343579). The study also suggests that public-private partnership arrangements should be explored to achieve universal coverage with contraceptives and that interventions should be developed to eliminate inequities in both acceptability and sources of different contraceptives. Additionally, behavior change communication interventions are recommended, especially in areas with the lowest levels of acceptability (PUBMED:23343579).
Instruction: Hybrid cysts of the eyelid with follicular and apocrine differentiation: an under-recognized entity? Abstracts: abstract_id: PUBMED:18356717 Hybrid cysts of the eyelid with follicular and apocrine differentiation: an under-recognized entity? Purpose: To describe 6 patients with hybrid cysts of the eyelid that demonstrated follicular and apocrine differentiation. Methods: Clinicopathologic series of 6 cases obtained from review of 1,099 cases of hidrocystoma diagnosed at the New York Eye and Ear Infirmary from 1990 to 2006. Results: Six patients, aged 38 years to 85 years (mean 66.5 years), was examined for cystic eyelid lesions. Histopathologic evaluation revealed cystic structures, lined by a combination of apocrine, infundibular (epidermoid), and trichilemmal-type epithelia. Luminal contents consisted of variable amounts of lamellated and compact keratin, and serous material. Contiguity with a hair follicle was demonstrated in 4 of 6 lesions. Immunohistochemical evaluation showed intense immunoreactivity of the entire cyst lining in all lesions for high molecular weight cytokeratin and variable immunoreactivity for low molecular weight cytokeratin. All cystic structures displayed immunoreactivity for carcinoembryonic antigen. Weak focal immunoreactivity for S-100 protein and gross cystic disease fluid protein-15 was observed in most lesions. Conclusions: Hybrid cysts can show apocrine, trichilemmal, and infundibular differentiation, suggestive of their origin at the junction of keratinizing squamous and glandular epithelia of the hair follicle. Although, to our knowledge, only 1 such lesion has been described in the eyelid skin, it is likely that hybrid cysts of the eyelid are not an infrequent finding. abstract_id: PUBMED:29351118 Multiple Eyelid Cysts (Apocrine and Eccrine Hidrocystomas, Trichilemmal Cyst, and Hybrid Cyst) in a Patient With a Prolactinoma. A 53-year-old man presented with smooth-domed, variegated cysts (polycystic disease) of all 4 eyelids, worse on the left side. Some of the cysts were clear, while others were creamy-white colored. In addition, multiple, very fine vesicopapules were noted along the eyelid margins. Histopathologic examination revealed a trichilemmal cyst, several pure apocrine hidrocystomas displaying multiple chambers, a hybrid cyst, and many small eccrine cysts of the deep dermis. The apocrine lesions, including the small ones at the eyelid margins, predominated. Smooth muscle actin sometimes positively stained outer myoepithelial cells in some of the apocrine cysts, which helped to distinguish them from eccrine cysts. Most noteworthy was the fact that the patient had been diagnosed with a prolactinoma 20 years earlier. There is only 1 previous report of multiple apocrine cysts and an antecedent prolactinoma in the dermatologic literature. This syndrome should be separated from that of Schöpf-Schulz-Passarge, which manifests multiple small eyelid apocrine cysts and other ectodermal dysplasias without any association with neoplasia, and from that of focal dermal hypoplasia (Goltz-Gorlin) syndrome with apocrine cysts but again without neoplasia. abstract_id: PUBMED:34563037 A New Case of Hybrid Epidermoid and Apocrine Cyst. We described a new case of a hybrid epidermoid and apocrine cyst, known to be a rare histopathological entity. The cyst was located in the axillary region and completely excised, without complication. The diagnosis was made at the histological analysis, where we found a cystic lesion in the dermis, lined with both epidermoid and apocrine epithelium. abstract_id: PUBMED:28121719 Eyelid Eccrine Cyst: An Exceptional Lesion Among Dominant Apocrine Cysts. A 71-year-old woman developed a small bluish lesion beneath the cilia of the left lower eyelid. Excision and microscopic examination disclosed a cyst with an intimately associated eccrine sweat gland. Immunohistochemistry demonstrated that the cyst's epithelium was strongly CK5/6, CK14, CK7 weakly positive, and gross cystic disease fluid protein-15 and smooth muscle actin negative. This is the first immunohistochemically proven eccrine cyst of the eyelid skin. Apocrine cysts develop only at the eyelid margin where the glands of Moll are located. They immunostain positively for cytoplasmic gross cystic disease fluid protein-15 in the adlumenal cells and smooth muscle actin in an outer myoepithelial (abluminal) layer. abstract_id: PUBMED:24247572 Pigmented eyelid cysts revisited: apocrine retention cyst chromhidrosis. A minority of eyelid hidrocystomas are pigmented containing brown-black contents. Chromhidrosis describes the excretion of colored secretions composed of lipofuscin pigments in apocrine gland-rich anatomic locations. The objective of this study is to evaluate the clinicopathologic features of pigmented eyelid cysts. A case-control study was conducted, examining consecutive pigmented and nonpigmented eyelid hidrocystoma excision specimens. Over a 4-year period, 9 pigmented eyelid hidrocystomas were identified, representing 13% (9/70) of all hidrocystoma excisions. Compared to controls (n = 14), no difference existed for age [mean age 59 (44-78 years) vs. 60 (42-82 years)] or size [mean diameter 2.3 (1-4 mm) vs. 2.7 (1-5 mm)] (pigmented vs. nonpigmented, respectively), but a trend for female, left side, and lower lid predominance for pigmented hidrocystomas existed: 8:1 versus 7:7 F:M; 7:2 versus 7:7 left:right; 8:1 versus 9:5 lower:upper eyelid (pigmented vs. nonpigmented, respectively). Clinically, the pigmented cysts' color varied from dark blue, brown, and to black, and on gross examination, they expressed dark brown to black granular liquid contents. Applying histologic criteria of Jakobiec and Zakka, 8 of 9 and 14 of 14 pigmented and nonpigmented hidrocystomas were of apocrine type. Seven of 9 (78%) pigmented cysts and 6 of 14 (43%) nonpigmented hidrocystomas contained granular eosinophilic cyst contents and/or intracellular cytoplasmic granular pigmented deposits by light microscopy. (The pigmented cyst contents did not survive processing in 2 cases.) By histochemistry (periodic acid Schiff with diastase, Sudan Black, and Fite acid-fast positive staining) and ultraviolet fluorescence, these sediments were determined to be lipofuscin pigments. No hidrocystomas had melanin deposits, and one case had hemosiderin deposits in a scarred cyst wall in addition to cyst lipofuscin pigments. In studies of chromhidrosis, both normal and chromhidrotic apocrine glands contain lipofuscin pigments; the sole difference lies in the amount of lipofuscin granules. Similarly, for eyelid apocrine hidrocystomas, lipofuscin pigments exist in both groups. Presumptively, the amount of lipofuscin and degree of its oxidation distinguish pigmented from nonpigmented apocrine hidrocystomas. abstract_id: PUBMED:21168812 A reappraisal of eyelid eccrine and apocrine hidrocystomas: microanatomic and immunohistochemical studies of 40 lesions. Purpose: To establish reliable criteria for diagnosing eyelid eccrine and apocrine hidrocystomas. Design: Retrospective clinicopathologic and immunohistochemical study. Methods: Twenty-two specimens of normal portions of eyelids were evaluated to establish the distribution and microanatomy of eccrine and apocrine glands. Immunostaining for cytokeratin 7 (CK7), gross cystic disease fluid protein-15 (GCDFP-15), alpha-smooth muscle actin (α-SMA), epithelial membrane antigen (EMA), and carcinoembryonic antigen (CEA) was performed on these tissues and on 40 lesions in 33 patients diagnosed with eccrine or apocrine hidrocystomas by unaided light microscopy. Results: Eccrine glands were not present in the eyelid margins, the lower half of the upper eyelid pretarsal skin, or the pretarsal lower eyelid skin. Apocrine glands were restricted to the eyelid margins and canthi where the cysts were located. GCDFP-15, CK7, and α-SMA immunoreacted with the eccrine secretory coils but not their ducts; apocrine gland secretory spirals also stained positively for these markers throughout their extended courses, but not their short terminal ducts. Positivity was found in 37 of 40 hidrocystomas for α-SMA and 19 for GCDFP-15; lesions tested for CK7 displayed positivity. Conclusions: Alpha-SMA-, CK7-, and/or GCDFP-15-positive apocrine hidrocystomas were the only type discovered in this series and arose from glandular secretory spirals within the marginal, perimarginal, or canthal skin. Three lesions did not stain for α-SMA, initially suggesting an absent myoepithelium and therefore an eccrine ductal origin; they manifested CK7 positivity, however, another characteristic of the apocrine secretory spiral but not ducts. Our findings disprove the contention that eccrine predominate over apocrine hidrocystomas in the eyelids. abstract_id: PUBMED:38086129 Peri-ocular proliferative apocrine hidrocystoma (cystadenoma): A clinicopathological case series. Introduction: Apocrine cystadenomas are rare, benign tumors that might arise in the periocular region from glands of Moll. They characteristically demonstrate proliferative features on histopathological examination, which differentiate them from simple hidrocystomas. Presentation Of Cases: We retrospectively identified 4 consecutive cases of apocrine cystadenomas in male patients with a mean age of 48.5 years (range 20-62). One of the cases was a recurrent lesion. The Preoperative clinical diagnosis was mostly hidrocystoma. All patients agreed on complete surgical excision of their cystic lesions. Histopathological review of the excised cysts confirmed the diagnosis of apocrine cystadenoma based on the presence of proliferative features with no atypia or infiltrative behavior. Cases are summarized in Table 1 and the histopathological appearance is demonstrated in the included figure. Discussion: Apocrine cystadenoma is rare. It occurs in the areas of skin with hair follicles, such as the axilla, neck, and trunk, and may be mistaken for other skin lesions, such as nevi or syringomas. In the eyelid region, they are commonly missed and frequently diagnosed as simple hidrocystoma due to the presence of bluish hue such as in our series. Recurrence is rare but was found in one of our patients at initial presentation. Our series included unique locations of this lesion in the medial canthus in one patient and near the eyelid tarsus in another. Conclusion: Ophthalmologist should be aware of the rare occurrence of this lesion in the periocular region. Further studies to explain the etiology of such proliferative nature in apocrine cysts would be interesting. abstract_id: PUBMED:30893188 Solid Variant of a Proliferative Apocrine Hidrocystoma. A cystic lesion of the eyelid margin in a 37-year-old man was diagnosed clinically as an apocrine hidrocystoma, a common lesion in that location. Histopathologic examination confirmed the presence of a hidrocystoma but also disclosed an extensive proliferation of bland epithelial cells arising from the inner layer of the cyst. The authors further characterized this unique occurrence on the spectrum of Moll gland eyelid tumors by immunohistochemical analysis. abstract_id: PUBMED:7235505 The infundibular adenoma: a follicular poroma with sebaceous and apocrine differentiation (author's transl) Tricholemmal adnexal pilar tumors may be classified according to the tricholemmal zone whose structure they reproduce and according to their degree of maturation or dedifferentiation. The pilar sheath acanthoma (dilated pore of Winer), the tumor of follicular infundibulum and the follicular poroma (inverted follicular keratosis) are related to the infundibulum and keratinize according to the epidermal pattern. The tricholemmal cysts (of simple or proliferating type) and the keratinizing tricholemmoma are related to the upper part of the isthmus of the anagen hair and keratinize according to the specific pattern of the outer root sheath at this isthmic level. The tricholemmoma is related to the lower tricholemma which does not keratinize and the cells of which are glycogen-rich. In this vertical didactic classification, there exist combinations and filiations of the different tumors. In a serial analysis of 68 skin adnexal tumors diagnosed as follicular poromas, we observed 6 cases with sebaceous and apocrine differentiation. For this organoid tumor issued from the infundibular epithelium we propose the term of "infundibular adenoma". abstract_id: PUBMED:17501654 Use of chemical ablation with trichloroacetic acid to treat eyelid apocrine hidrocystomas in a cat. Case Description: A 7-year-old Persian cat was evaluated for recurrence of multiple cystic periocular masses. A number of cyst-like lesions had been resected from the left eyelids 18 months earlier, with lesions recurring within 6 months after surgery. The cat had blepharospasm and signs of discomfort following rupture of the largest cyst the day prior to examination. Previous histologic examination of the cysts had revealed apocrine hidrocystomas. Clinical Findings: Multiple pigmented nodules were seen around the skin of the upper and lower left eyelids. The nodules were brownish to black, round, soft, and fluid-filled. Signs of pain were not evident during palpation of the nodules. Treatment And Outcome: The largest cyst on the upper eyelid was removed by means of a V-shaped full-thickness excision. Histologic and immunohistochemical examination of the excised tissue confirmed the diagnosis of apocrine hidrocystoma. The remaining periocular cysts were surgically debrided and then treated topically with 20% trichloroacetic acid. All lesions healed rapidly without any signs of discomfort. During a recheck examination 12 months later, the upper and lower left eyelids appeared morphologically normal, and there was no evidence of recurrence. Clinical Relevance: Findings suggested that chemical ablation with trichloroacetic acid may be a useful treatment for apocrine hidrocystomas in cats. Answer: Hybrid cysts of the eyelid, which demonstrate both follicular and apocrine differentiation, appear to be an under-recognized entity. These cysts are characterized by their histopathologic features, showing a combination of apocrine, infundibular (epidermoid), and trichilemmal-type epithelia lining the cystic structures (PUBMED:18356717). Although only a single lesion of this type had been described in the eyelid skin prior to the study by Jakobiec et al., the authors suggest that hybrid cysts of the eyelid may not be infrequent findings. The study reviewed 1,099 cases of hidrocystoma and identified six patients with such hybrid cysts, indicating that these lesions might be more common than previously thought (PUBMED:18356717). Moreover, the existence of hybrid cysts is supported by other case reports and series. For instance, a patient with a prolactinoma presented with multiple eyelid cysts, including a hybrid cyst, suggesting a possible association between such cysts and systemic conditions (PUBMED:29351118). Another case described a hybrid epidermoid and apocrine cyst located in the axillary region, further confirming the occurrence of these cysts in different anatomical locations (PUBMED:34563037). The recognition of hybrid cysts is important for differential diagnosis and appropriate management. These cysts can be mistaken for other types of lesions, such as eccrine hidrocystomas, which are less common in the eyelids (PUBMED:28121719). Additionally, the presence of proliferative features in some apocrine cysts, as seen in apocrine cystadenomas, can further complicate the diagnosis (PUBMED:38086129). In conclusion, hybrid cysts of the eyelid with follicular and apocrine differentiation seem to be an under-recognized entity. Awareness of these cysts among clinicians and pathologists is crucial for accurate diagnosis and treatment, as they may be more prevalent than the medical literature has previously indicated (PUBMED:18356717).
Instruction: Is there a delay in the antidepressant effect? Abstracts: abstract_id: PUBMED:7588175 Delay of the antidepressant effect: clinical studies The first controlled trials of antidepressants show up a delay of antidepressant effect to one or two weeks, with sedative or stimulant early effects. More recently, several controlled trials founded on more stringent criterions (DSM III, period of pre-therapeutic placebo of one minimal week, frequent quotations etc.) have showed up a delay of antidepressant effect assigned between the fourth and the sixth week, with all antidepressant drugs. Nevertheless, there are more early effects, either specific antidepressant but incomplete either non specific (sedative, stimulant). These early effects are more marked with the antidepressant drug than with the PBO, but they have not a predictive value of terminal response. Several factors have an influence on the delay of antidepressant effect. The clinical characteristics are not correlated with this delay. The intravenous administration does not reduce the length of this delay. The pulse loading doses with intravenous and oral antidepressant drug seem to reduce the length of this delay. The most important factor is the placebo effect (Quitkin et al.). Nevertheless, the differences of efficacy between antidepressant and PBO appear only towards the third week. There are some differences during the first two weeks. In fact the PBO effect is early and late longer than the antidepressant drug. With PBO, when the improvement is progressive and fasting, it is a spontaneous remission. These data have practical implications if we must know the delay of antidepressant drug. abstract_id: PUBMED:15704999 Is there a delay in the antidepressant effect? A meta-analysis. Objectives: It has long been thought that there is a delay of several weeks before a true antidepressant effect occurs, although this theory has increasingly come into question. The goals of this meta-analysis were to determine whether significant drug-placebo separation occurs during the first 2 weeks of treatment and to ascertain whether the timing of response to antidepressant medication and placebo is distinct. Data Sources: Seventy-six double-blind, placebo-controlled trials conducted between 1981 and 2000, included in a recently published meta-analysis that evaluated placebo response rates in depressed outpatients, were reviewed. In addition, each issue of 6 psychiatric journals from January 1992 through December 2001 was reviewed. Study Selection: Forty-seven studies that evaluated antidepressant medications with established efficacy, performed weekly or biweekly (every other week) evaluations, and presented the time course of improvement as measured by the Hamilton Rating Scale for Depression were included in our meta-analysis. Data Synthesis: The time course of improvement on active medication and placebo was nearly identical, as 60.2% and 61.6% of the improvement that occurred on active medication and placebo, respectively, took place during the first 2 weeks of treatment. Drug-placebo differences were not only present but were most pronounced during the first 2 weeks of treatment and diminished in a stepwise fashion thereafter. A series of subanalyses confirmed that this early drug-placebo separation was clinically observable and represented a true drug effect. Conclusion: These results challenge the notion that a delay exists before a true antidepressant effect occurs. abstract_id: PUBMED:31329067 Resonant Doppler effect in systems with variable delay. We demonstrate that a time-varying delay in nonlinear systems leads to a rich variety of dynamical behaviour, which cannot be observed in systems with constant delay. We show that the effect of the delay variation is similar to the Doppler effect with self-feedback. We distinguish between the non-resonant and the resonant Doppler effect corresponding to the dichotomy between conservative delays and dissipative delays. The non-resonant Doppler effect leads to a quasi-periodic frequency modulation of the signal, but the qualitative properties of the solution are the same as for constant delays. By contrast, the resonant Doppler effect leads to fundamentally different solutions characterized by low- and high-frequency phases with a clear separation between them. This is equivalent to time-multiplexed dynamics and can be used to design systems with well-defined multistable solutions or temporal switching between different chaotic and periodic dynamics. We systematically study chaotic dynamics in systems with large dissipative delay, which we call generalized laminar chaos. We derive a criterion for the occurrence of different orders of generalized laminar chaos, where the order is related to the dimension of the chaotic attractor. The recently found laminar chaos with constant plateaus in the low-frequency phases is the zeroth-order case with a very low dimension compared to the known high dimension of turbulent chaos in systems with conservative delay. This article is part of the theme issue 'Nonlinear dynamics of delay systems'. abstract_id: PUBMED:33005153 The Antidepressant Effect of Ketamine Is Dampened by Concomitant Benzodiazepine Medication. The rapid antidepressant effect of ketamine has become a breakthrough in the research and treatment of depression. Although predictive and modulating factors of the response to ketamine are broadly studied, little is known about optimal concurrent medication protocols. Concerning gamma-aminobutyric acid neurotransmission being a shared target for both ketamine and benzodiazepines (BZD), we evaluated the influence of BZD on the antidepressant effect of a single ketamine infusion in depressed patients. Data from 47 patients (27 females) with major depression (MADRS ≥ 20, ≥ 1 prior nonresponse to antidepressant treatment in current episode) who participated in two previous studies (EudraCT Number: 2009-010625-39 and 2013-000952-17) entered the analysis. All of the subjects were given an infusion of a subanesthetic dose of racemic ketamine (0.54 mg per kg) as an add-on medication to ongoing antidepressant treatment. Thirteen patients (28%) reached ≥ 50% reduction in MADRS within one week after ketamine administration. Nineteen (40%) patients took concomitant benzodiazepines on a daily basis. The doses of BZDs were significantly higher in nonresponders (p=0.007). ROC analysis distinguished responders from nonresponders by a criterion of &gt;8mg of diazepam equivalent dose (DZ equivalent) with a sensitivity of 80% and a specificity of 85% (p&lt;0.001). RM-ANOVA revealed a different time pattern of response to ketamine between the BZD+ (&gt;8mg of DZ equivalent) and BZD- (≤8mg of DZ equivalent) groups, with a significantly worse outcome in BZD+ on day 3 (p=0.04) and day 7 (p=0.02). The results of the study indicate that concomitant benzodiazepine treatment in higher doses may attenuate ketamine's antidepressant effect. The pathophysiological, clinical and methodological implications of this finding should be considered in future research and ketamine treatment. abstract_id: PUBMED:31819774 Antidepressant effect of the interaction of fluoxetine with granisetron. Selective serotonin reuptake inhibitors (SSRIs) may produce digestive side effects such as nausea and vomiting, diarrhoea and decreased appetite. These side effects are determined by the increase in serotonin availability at 5-HT3 receptors. Granisetron, a serotonin 5-HT3 receptor antagonist, is expected to antagonize the digestive adverse effects of serotonin reuptake inhibitors, but the question is to what extent granisetron influences the antidepressant effect of these substances. The aim of this study was to determine the dose of fluoxetine that has an antidepressant effect in the Porsolt test, and the interaction between fluoxetine and granisetron with respect to the antidepressant effect in this test. In experiment 1, fluoxetine was antidepressant only at 20 mg/kg body weight (bw). In experiment 2, granisetron 1 mg/kg bw had a statistically significant antidepressant effect vs. control. Fluoxetine 20 mg/kg bw associated with a small dose of granisetron (0.1 mg/kg bw) produced a significant antidepressant effect vs. control. This shows that low doses of granisetron associated to fluoxetine might produce a significant antidepressant effect, suggesting a potentiation between these two drugs used in sub-effective antidepressant doses. In conclusion, in our experimental conditions, we can assume that granisetron in low doses could be used to combat intestinal transit disorders produced by SSRI antidepressants. These low doses are preferred, because they increase the antidepressant effect of these SSRIs. abstract_id: PUBMED:23492865 Evaluation of Antidepressant-like Effect of Citrus Maxima Leaves in Animal Models of Depression. Objectives: This study planned to assess antidepressant like activity of aqueous extract from leaves of Citrus maxima Merr. (Rutaceae). Materials And Methods: Boiling was used for aqueous extraction. Acute toxicity study was performed in mice. Antidepressant activity was studied using locomotor activity test, modified forced swimming test (FST) and tail suspension test (TST). Three doses 100, 200 and 300 mg/kg of aqueous extract of leaves were selected for testing. Fluoxetine (20 mg/kg, i.p.) and imipramine (30 mg/kg, i.p.) were used as the standard drugs. Results: Aqueous extract of Citrus maxima leaves significantly reduced immobility time in both TST and FST. In locomotor activity testing it showed psychostimulant effect. Extract increased the climbing behavior in FST, which is similar to effect observed with imipramine. Conclusion: The results of this study suggest that antidepressant like effect of Citrus maxima seems to be mediated by an increase in norepinephrine level in synapses. abstract_id: PUBMED:28557969 Metabolic safety of antidepressant medicines Metabolic syndrome is a very serious health issue, not only from internal medicine's point of view. Patients suffering from overweight, arterial hypertension, lipids and carbohydrates metabolism disorders are also in the circle of interest of other areas of medicine, including psychiatry. Currently, one of key problems of pharmacotherapy is a comorbidity of metabolic syndrome and mental disorder. Depression is more common than schizophrenia. Despite the fact that in everyday clinical practice there are more patients with depression than schizophrenia, there is a bigger interest among scientists for metabolic syndrome after antipsychotic drugs than as an effect of use of antidepressant agents. Aim: The aim of an analysis was to review literature committed to influence of depression pharmacotherapy on development of metabolic syndrome. 169 results were provided, including 18 original publications. Final analysis consists of 9 that investigate correlation between antidepressive medicines use and metabolic syndrome development (but not its each individual component). Results: In general, antidepressant pharmacotherapy is associated not only with increased risk of metabolic syndrome occurrence but also their worsening. However, it needs to be emphasized that there is a difference between antidepressants groups - tricyclic antidepressive medicines are the most commonly associated with risk of developing metabolic disorders, but also SNRIs and SSRIs are mentioned as significant contributors. Mechanisms of aforementioned changes are still unclear. However, their influence on histamine and serotonin pathways, which take part in regulation of i.e. food intake, is suggested. The search for mechanisms that are precisely responsible for metabolic changes continues, in hope of finding a way to avoid adverse effects of antidepressant medicines use. abstract_id: PUBMED:32460411 Activation of astrocytic sigma-1 receptor exerts antidepressant-like effect via facilitating CD38-driven mitochondria transfer. Despite sigma-1 receptor (Sig-1R) is a promising therapeutic target in depression, little is known regarding the cellular mechanisms underlying its antidepressant responses. Here, we demonstrated that astrocyte can be a direct cellular target of Sig-1R exerting antidepressant-like effect. In multiple behavioral models including forced swimming test (FST), tail suspension test (TST), open field test (OFT), and chronic unpredictable mild stress (CUMS), inhibition of astrocyte function blocked pharmacological Sig-1R activation-induced antidepressant-like effect, while specific activation of astrocytc Sig-1R by adeno-associated virus (AAV) was sufficient to produce antidepressant-like effect. In depression-related cellular tests, Sig-1R agonist or lentivirus-stimulated astrocyte conditioned medium (ACM) promoted neuronal neurite outgrowth, dendritic branch, and survival. Mechanismly, stimulation of Sig-1R enhanced the expression of CD38 via activation of extracellular regulated protein kinases 1/2 (ERK1/2), resulting in facilitating mitochondrial transfer from astrocyte. Furthermore, blockage of CD38-driven astrocyte transferring mitochondria in vivo and in vitro reversed the antidepressant-like effect of pharmacological Sig-1R activation. Thus, this study sheds light on the cellular mechanism of Sig-1R activation producing antidepressant-like effect. These data present the first evidence that enhancement of Sig-1R action on astrocytes entirely exerts antidepressant-like effect, indicating that specific activation of astrocytic Sig-1R may provide a new approach for antidepressant drug development. abstract_id: PUBMED:38401135 The date/delay effect in intertemporal choice: A combined fMRI and eye-tracking study. Temporal discounting, the tendency to devalue future rewards as a function of delay until receipt, is influenced by time framing. Specifically, discount rates are shallower when the time at which the reward is received is presented as a date (date condition; e.g., June 8, 2023) rather than in delay units (delay condition; e.g., 30 days), which is commonly referred to as the date/delay effect. However, the cognitive and neural mechanisms of this effect are not well understood. Here, we examined the date/delay effect by analysing combined fMRI and eye-tracking data of N = 31 participants completing a temporal discounting task in both a delay and a date condition. The results confirmed the date/delay effect and revealed that the date condition led to higher fixation durations on time attributes and to higher activity in precuneus/PCC and angular gyrus, that is, areas previously associated with episodic thinking. Additionally, participants made more comparative eye movements in the date compared to the delay condition. A lower date/delay effect was associated with higher prefrontal activity in the date &gt; delay contrast, suggesting that higher control or arithmetic operations may reduce the date/delay effect. Our findings are in line with hypotheses positing that the date condition is associated with differential time estimation and the use of more comparative as opposed to integrative choice strategies. Specifically, higher activity in memory-related brain areas suggests that the date condition leads to higher perceived proximity of delayed rewards, while higher frontal activity (middle/superior frontal gyrus, posterior medial frontal cortex, cingulate) in participants with a lower date/delay effect suggests that the effect is particularly pronounced in participants avoiding complex arithmetic operations in the date condition. abstract_id: PUBMED:24769309 Atorvastatin evokes a serotonergic system-dependent antidepressant-like effect in mice. Atorvastatin is a statin largely used in the treatment of hypercholesterolemia and recently revealed as a neuroprotective agent. The antidepressant-like effect of acute atorvastatin treatment in mice has been previously demonstrated by our laboratory. The purpose of this study was to explore the contribution of the serotonergic system in the antidepressant-like effect of atorvastatin in mice. Data demonstrate that the serotonin (5-HT) depleting agent p-chlorophenylalanine methyl ester (PCPA, 100 mg/kg, i.p.) completely abolished atorvastatin (0.1 mg/kg, p.o.) antidepressant-like effect. Besides atorvastatin, fluoxetine (10 mg/kg, p.o.), a serotonin selective reuptake inhibitor (SSRI) was able to exert an antidepressant-like effect, but any of them changed 5-HT content in the hippocampus or frontal cortex. The 5H-T1A (WAY100635, 0.1 mg/kg, s.c) or the 5-HT2A/2C (ketanserin, 5 mg/kg, s.c.) receptor antagonists prevented atorvastatin antidepressant-like effect. In addition, a combinatory antidepressant-like effect was observed when mice received the co-administration of sub-effective doses of atorvastatin (0.01 mg/kg, p.o.) and the SSRI fluoxetine (5 mg/kg, p.o.), paroxetine (0.1 mg/kg, p.o.) or sertraline (1 mg/kg, p.o.). Taken together, these results indicate that the antidepressant-like effect of atorvastatin depends on the serotonergic system modulation. Answer: The question of whether there is a delay in the antidepressant effect has been a subject of debate and research. According to a meta-analysis that reviewed 76 double-blind, placebo-controlled trials conducted between 1981 and 2000, the time course of improvement on active medication and placebo was nearly identical, with most of the improvement occurring during the first 2 weeks of treatment. This study found that drug-placebo differences were most pronounced during the first 2 weeks and diminished thereafter, challenging the notion that there is a delay before a true antidepressant effect occurs (PUBMED:15704999). However, earlier clinical studies suggested that the antidepressant effect might be delayed, with effects typically assigned between the fourth and sixth week of treatment. These studies noted that while there might be early effects, either specific but incomplete or non-specific (sedative, stimulant), these do not have a predictive value of terminal response. Factors such as the placebo effect were found to influence the delay of the antidepressant effect, with placebo effects being early and lasting longer than those of the antidepressant drug (PUBMED:7588175). In summary, while traditional views and some clinical studies have suggested a delay in the antidepressant effect, more recent meta-analytic evidence challenges this notion, indicating that significant improvement can occur within the first two weeks of treatment, suggesting that the delay may not be as pronounced as previously thought.
Instruction: Is the apoE4 allele an independent predictor of coronary events? Abstracts: abstract_id: PUBMED:11152862 Is the apoE4 allele an independent predictor of coronary events? Purpose: Although the apolipoprotein E genotype epsilon4 (apoE4) has been associated with high cholesterol levels, whether it is an independent predictor of coronary events is not certain. Subjects And Methods: We measured apoE genotypes in 730 participants in the Baltimore Longitudinal Study of Aging (421 men and 309 women, mean [+/- SD] age of 52+/-17 years) who were free of preexisting coronary heart disease. A proportional hazards regression model was used to study the association between risk factors and the occurrence of coronary events, defined as angina pectoris, documented myocardial infarction by history or major Q waves on the electrocardiogram (Minnesota Code 1:1 or 1:2), or coronary death, adjusted for other risk factors, including total plasma cholesterol level. Results: The apoE4 allele was observed in 200 subjects (27%), including 183 heterozygotes and 17 homozygotes. Coronary risk factor profiles were similar in those with and without apoE4. Coronary events developed in 104 (14%) of the 730 subjects, including 77 (18%) of the 421 men during a mean follow-up of 20 years and 27 (9%) of the 309 women during a mean follow-up of 13 years. Coronary events occurred significantly more frequently in subjects with apoE4 (n = 40, 20%) than in those without this allele (64, 12%, P &lt;0.05). In a multivariate model, apoE4 was an independent predictor of coronary events in men (risk ratio [RR]= 2.9, 95% confidence interval [CI]: 1.8 to 4.5, P&lt;0.0001) but not in women (RR = 0.9, 95% CI: 0.4 to 1.9, P = 0.62). Conclusion: The apoE4 genotype is a strong independent risk factor for coronary events in men, but not women. The association does not appear to be mediated by differences in total cholesterol levels. abstract_id: PUBMED:12637336 LDL containing apolipoprotein CIII is an independent risk factor for coronary events in diabetic patients. Objective: Triglyceride-rich lipoproteins that contain apolipoprotein CIII (apoCIII) are prominent in diabetic dyslipidemia. We hypothesized that these lipoproteins increase coronary disease risk in diabetic patients beyond that caused by standard lipid risk factors. Methods And Results: Diabetic patients with previous myocardial infarction were followed for 5 years, and 121 who had a recurrent coronary event were matched to 121 who did not. VLDL and LDL that contained or did not contain apoCIII (CIII+ or CIII-) were prepared by immunoaffinity chromatography and ultracentrifugation. IDL was included in the LDL fraction. LDL CIII+, rich in cholesterol and triglyceride, was the strongest predictor of coronary events (relative risk [RR] 6.6, P&lt;0.0001, for 4th versus 1st quartile). LDL CIII+ comprised 10% of total LDL. The main type of LDL, LDL CIII-, was less strongly predictive (RR 2.2, P=0.07). The increased risk associated with LDL CIII+ was unaffected by adjustment for plasma lipids, apoB, non-HDL cholesterol, or the other VLDL and LDL types. For VLDL CIII+, RR 0.5, P=0.07; for VLDL CIII-, RR 2.3, P=0.046. The presence of apolipoprotein E with CIII on VLDL and LDL did not affect risk. Conclusions: LDL with apoCIII strongly predicts coronary events in diabetic patients independently of other lipids and may be an atherogenic remnant of triglyceride-rich VLDL metabolism. abstract_id: PUBMED:30130761 A 6-Month Follow-Up Study of the Relation between Apolipoprotein E Gene Polymorphism and Major Adverse Cardiovascular Events in Patients with Acute Coronary Syndrome. Objectives: This study aimed to investigate the relation between ApoE gene polymorphisms and major adverse cardiovascular events (MACE) in patients with acute coronary syndrome (ACS) during a 6-month follow-up. Methods: From October 2016 to July 2017, 211 patients were admitted to a cardiology clinic with a diagnosis of ACS. Blood samples were obtained from all patients on the first day. The primary end point was a 6-month incidence of MACE. ApoE gene polymorphism was genotyped by real-time PCR using TaqMan® SNP Genotyping Assay. Results: The patients with the E4 allele were associated with higher low-density lipoprotein (LDL) cholesterol and total cholesterol (TC) levels compared with the patients without the E4 allele (p = 0001 and p = 0.001). The patients with the E4 allele were associated with a higher rate of MACE compared with the patients without the E4 allele (ApoE4 allele(+) 23.1% vs. ApoE4 allele(-) 9.3%; p = 0.03). Multivariable analysis suggested that E4 allele carriers showed an 85% risk increment of 6-month MACE (odds ratio 2.48, 95% confidence interval 2.37-5.95; p = 0.01). Conclusions: The trial shows that E4 allele carriers were correlated with not only higher LDL cholesterol and TC levels, but also with a higher incidence of MACE during a 6-month follow-up. abstract_id: PUBMED:34418683 Serum apolipoprotein E levels predict residual cardiovascular risk in patients with chronic coronary syndrome undergoing first percutaneous coronary intervention and on-statin treatment. Background And Aims: Little is known about the long-term impact of apolipoprotein E (apoE) on residual cardiovascular risk in patients with chronic coronary syndrome (CCS) receiving statin treatment. Methods: A total of 1109 consecutive patients (mean age, 67 ± 10 years; 83% men) with CCS who underwent their first intervention between 2000 and 2016 were included in this study. All patients had achieved low-density lipoprotein cholesterol (LDL-C) &lt;100 mg/dL on statin treatment and were divided into two groups based on median serum apoE values. We evaluated the incidence of major adverse cardiovascular events (MACEs), including cardiovascular death, non-fatal acute coronary syndrome, and target vessel revascularization. Results: A total of 552 and 557 patients were categorized to the higher and lower apoE groups, respectively. There were significant relationships between apoE levels and total cholesterol levels, triglyceride levels, high-density lipoprotein cholesterol levels, and estimated remnant cholesterol, except for LDL-C levels. During the median follow-up period of 5.1 years, 195 patients (17.6%) developed MACEs. Kaplan-Meier analysis revealed that the cumulative incidence of MACEs in the higher apoE group was significantly higher than in the lower apoE group (29.5% vs.23.8% log-rank test, p = 0.019). Using multivariable Cox hazard analysis, serum apoE level (1-mg/dL increase) (hazard ratio 1.15; 95% confidence interval 1.03-1.29, p = 0.013) was the strongest independent predictor of MACEs. Conclusions: Serum apoE level could be a strong predictor of residual cardiovascular risk in patients with CCS long-term, even if LDL-C levels are controlled with statin treatment. abstract_id: PUBMED:11885754 The E4 allele of apolipoprotein E is associated with increased restenosis after coronary angioplasty. The aim of this study is to investigate the influence of the E4 allele of apolipoprotein E (apo E) on restenosis after percutaneous transluminal coronary angioplasty (PTCA). The subjects were 171 male patients with more than 75% luminal diameter stenotic lesions of the coronary artery who had undergone an elective initial PTCA. The PTCA was successful in 164 patients, 157 of whom completed a prospective 5 month coronary angiography (CAG) follow up to assess the degree of restenosis after their surgery. Patients with previous coronary artery bypass grafting surgery (CABG), 3 vessel disease, complete obstruction or calcified lesions of the coronary artery, cerebro-vascular disease (CVD), arteriosclerosis obliterans (ASO), and renal failure with hemodialysis were excluded, leaving 105 patients in the analysis. Subjects carrying the E4 allele (n = 22, Phenotype E4/2 = 2, E4/3 = 19, E4/4 = 1: E4 group) were well matched with non-carriers (n = 83, Phenotype E2/2 = 0, E3/2 = 4, E3/3 = 79: E3 group) for clinical, and pre-and post-PTCA angiographic features. The restenosis rates were significantly higher in the E4 group than in the E3 group (patient restenosis rate : 59.1 vs 33.7% p &lt; 0.05, lesion restenosis rate: 51.8 vs 30.9% p &lt; 0.05). These results suggest that the E4 allele is associated with a higher restenosis rate after PTCA. abstract_id: PUBMED:15186949 Apolipoprotein e4 allele is predictor of coronary artery disease death in elderly patients with type 2 diabetes mellitus. The presence of the apolipoprotein (Apo) e4 allele is reported to be associated with the increased risk of coronary artery disease (CAD), as well as the impairment of endothelium-dependent dilation in type 2 diabetes mellitus. Therefore, we hypothesized that the Apo e4 allele increases the death risk from coronary artery disease in type 2 diabetes. From January 1993 to December 1999, 36 type 2 diabetic patients with e4/4 or e4/3, 62 with e3/3 and 33 with e2/2 or e3/2 genotypes were recruited. All subjects were unrelated, elderly, type 2 diabetic patients with coronary artery disease, aged 60-84 years, with cardiac function at the Class I stage at time of enrollment. A follow-up study of 3-10 years was undergone. The results are as follows: At baseline, serum total cholesterol and low density lipoprotein (LDL) cholesterol concentrations were higher in subjects with e4/3 or e4/4 than in subjects with e2/2 or e3/2 (P = 0.026). Lipoprotein(a) concentration was lower in subjects with e2/2 or e3/2 than in subjects with e3/3 (P = 0.044) and e4/3 or e4/4 (P = 0.038). During the 3-10 years follow-up period, a total of 39 patients who died from CAD was recorded in this sample. Compared with patients with e3/3 (P = 0.030) and patients with e2/2 or e3/2 genotypes (P = 0.001), the mortality rate of CAD in patients with e4/3 or e4/4 genotypes was the highest (50%). Stepwise discriminant analysis revealed that in the diabetic population studied the Apo e4 allele was independently and significantly associated with CAD death (B = 0.64). However, the strength of the association decreased (B = 0.48) when total cholesterol, LDL-cholesterol and lipoprotein(a) were included in the model. Therefore, we concluded that Apo e4 allele increases the risk of CAD death in elderly patients with type 2 diabetes mellitus. abstract_id: PUBMED:11472462 The apolipoprotein epsilon2 allele and the severity of coronary artery disease in Type 2 diabetic patients. Aims: To examine the hypothesis that apolipoprotein E2 is associated with more severe coronary disease in Type 2 diabetic patients. Research Design And Methods: In this retrospective cohort study, 491 patients with angiographically assessed coronary disease were recruited from those attending a university hospital cardiology department. Participants completed detailed questionnaires, from which the presence or absence of diabetes was determined. Fasting blood samples were obtained for apolipoprotein E genotype and measurement of blood lipid parameters. Results: The prevalence of triple vessel disease was significantly lower in non-diabetic, epsilon2 allele carriers (39.3% vs. 16.2%; odds ratio (OR) 0.30 (0.12-0.71), P &lt; 0.03) compared with E3/3 carriers. In Type 2 diabetic patients, epsilon2 allele carriers had an excess of triple vessel disease compared with E3/3 genotypes (43.3 vs. 68.8%; OR 2.8 (1.07-7.30), P &lt; 0.05). The differences were independent of other variables. The apo E4 subgroup showed no significant differences in the frequency of triple vessel disease. Conclusions: Diabetic epsilon2 allele carriers had more severe coronary artery disease than diabetic patients with other apo E isoforms. In non-diabetic patients the epsilon2 allele appeared to protect against severe coronary disease. We hypothesize that interaction between the diabetic milieu and the epsilon2 allele accelerates plaque progression. It suggests that diabetic patients who are carriers of the epsilon2 allele, even in the heterozygous form, should be the focus of particular therapeutic attention. Diabet. Med. 18, 445-450 (2001) abstract_id: PUBMED:27629735 The APOE E4 Allele Confers Increased Risk of Ischemic Stroke Among Greek Carriers. Background: Although several studies in various countries have indicated that the presence of the E4 allele of the apolipoprotein-E (APOE) gene is a risk factor for ischemic cerebrovascular disease, the strength of this association still remains a matter of debate. Objectives: The aim of the study was to determine the frequency of the APOE E4 allele and various other gene polymorphisms in in a well-characterized sample of Greek patients and to evaluate the potential associations with the risk of ischemic stroke (IS) and coronary heart disease (CHD). Material And Methods: A total of nine gene variants/polymorphisms - F5 (Leiden - R5 06Q, rs6025), F2 (20210G &gt; A, rs1799963), F13A1 (V34L, rs5985), MTHFR (677C &gt; T - A222V, rs1801133), MTHFR (1298A &gt; C - E429A, rs1801131), FGB (-455G &gt; A -c.-463G &gt; A; rs1800790), SERPINE1 (PAI14G/5G - rs1799889), ACE (ACE I/D, rs1799752), ITGB3 (GPIIIa L33P, rs5918) and the APOE E2/E3/E4 alleles (rs7412, rs429358) - were genotyped in 200 newly diagnosed ischemic stroke (IS) patients, 165 patients with ischemic coronary heart disease (CHD) and 159 controls with no cerebroor cardiovascular disease (non-CVD). A statistical analysis was performed using univariate and multivariate logistic regression models. Results: No significant association was found regarding most gene polymorphisms and the presence of IS or CHD in the patient cohort. However, the APOE E4 allele frequency was significantly higher (p = 0.02) among patients with ischemic stroke (IS) or IS + CHD (12.7%) when compared to the controls (5.1%). More accurately, E4 carriers had 2.66 and 2.71 times greater likelihood of IS or IS + CHD than non-carriers, respectively (OR = 2.66, 95% CI 1.39-5.07, OR = 2.71, 95% CI 0.98-7.48). Conclusions: In contrast to some previous studies, these results support the role of the APOE E4 allele as an independent risk factor for ischemic stroke and ischemic coronary heart disease among Greek patients. abstract_id: PUBMED:10583930 Relevance of apolipoprotein E polymorphism for coronary artery disease in the Saudi population. Background: The apolipoprotein E alleles epsilon2 and epsilon4 have been reported as independent risk factors for coronary artery disease (CAD) and as predictors for the development of atherosclerosis. Methods And Results: We determined by polymerase chain reaction the distribution of apolipoprotein E polymorphism in 320 Saudi blood donors (BD), 96 CAD patients, and 40 control subjects who had undergone angiography. Compared to controls, only epsilon4 was elevated in CAD patients. More than 61% (P &lt;.0001) of the patients had angina, and 52.1% (P &lt;.05) were diabetic; both of these factors were strongly associated with the presence of allele epsilon2. The epsilon2 allele was also associated with hypertension, elevated serum triglycerides, and total cholesterol. On the other hand, the allele epsilon4 appeared to be associated with increased risk of CAD and was also associated with hypertension, 3-vessel disease, and restenosis. Conclusions: Accordingly, epsilon4 may be associated with increased risk of CAD, whereas epsilon2 appears to be a predictor of several risk factors for atherosclerosis. abstract_id: PUBMED:16118069 Apolipoprotein E polymorphism is not a predictor for repeated coronary artery bypass surgery. Objectives: Factors leading to the occlusion of coronary grafts are diverse and may at least partially be inherited. We aimed to study the possible genetic predisposition and especially the role of apoE epsilon4 allele as a risk factor for repeated coronary artery bypass grafting (CABG) in a case-control setting. Design: All patients (n=184) who underwent repeated CABG between 1990 and 1998 were identified in the computed registry of the Department of Cardiothoracic Surgery in Tampere University Hospital. Age, sex and operation date matched controls with first time CABG were selected from the same registry. DNA samples were collected by sample stick sent via the mail for buccal smear. The final analysis included 137 surviving matched pairs. Results: In patients &lt;62 years of age (median age), family history emerged as the only significant (OR=3.4; 95% CI=1.5-7.8, p=0.004) predictor for repeated surgery. Among older patients, repeated CABG was no longer predicted by family history but by hypercholesterolemia (OR=2.1; 95% CI=1.1-4.0, p=0.027), modified by apoE genotype. Conclusions: Our results suggest that medium-term survivors after redo CABG have a strong genetic predisposition unrelated to hypercholesterolemia or apoE genotype, leading to more severe coronary artery disease at earlier age. In the older age group, redo coronary artery bypass surgery is associated with hypercholesterolemia, which, although modified by apoE genotype, may mainly be due to other genetic or acquired factors. Answer: Yes, the apoE4 allele has been identified as an independent predictor of coronary events in several studies. In the Baltimore Longitudinal Study of Aging, the apoE4 genotype was found to be a strong independent risk factor for coronary events in men, but not in women, and this association was not mediated by differences in total cholesterol levels (PUBMED:11152862). Similarly, a 6-month follow-up study of patients with acute coronary syndrome showed that carriers of the E4 allele had not only higher LDL cholesterol and total cholesterol levels but also a higher incidence of major adverse cardiovascular events (MACE) (PUBMED:30130761). Additionally, serum apolipoprotein E levels were found to predict residual cardiovascular risk in patients with chronic coronary syndrome undergoing percutaneous coronary intervention and on-statin treatment, even when LDL-C levels were controlled (PUBMED:34418683). Furthermore, the presence of the E4 allele of apolipoprotein E was associated with increased restenosis after coronary angioplasty (PUBMED:11885754) and was a predictor of coronary artery disease death in elderly patients with type 2 diabetes mellitus (PUBMED:15186949). In contrast, one study found that the apolipoprotein E polymorphism was not a predictor for repeated coronary artery bypass surgery, suggesting that the association may vary depending on the specific coronary event or patient population (PUBMED:16118069). Overall, the evidence suggests that the apoE4 allele is an independent predictor of coronary events in certain populations and under specific conditions, although the strength and nature of this association may differ based on factors such as age, gender, and the presence of other conditions like diabetes.
Instruction: Do users of mental health services lack access to general practitioner services? Abstracts: abstract_id: PUBMED:20438420 Do users of mental health services lack access to general practitioner services? Objective: To compare rates of visits to a general practitioner between users and non-users of mental health services (MHS). Design, Participants And Setting: Population-based retrospective cohort study of 204 727 users and 294 076 matched non-users of MHS in Western Australia from 1 January 1990 to 30 June 2006, based on linked records of the use of MHS, hospital admissions, Medicare claims for GP and specialist services, electoral roll registration and deaths. Main Outcome Measures: Adjusted rate ratios (ARRs) for the number of visits to GPs by users of MHS relative to non-users, and for different categories of mental disorders. Results: Relative to non-users of MHS, the ARR of visits to GPs by users of MHS was 1.622 (95% CI, 1.613-1.631) overall, and was elevated in each separate category of mental illness. ARRs were highest for alcohol/drug disorders, schizophrenia and affective psychoses (2.404, 1.834 and 1.798, respectively). The results were not changed by location (metropolitan, rural or remote addresses). However, the 4% of MHS users with no fixed address had a very low ARR of visits to GPs (0.058; 95% CI, 0.057-0.060). Conclusions: Users of MHS visit GPs substantially more often than non-users, with the exception of those with no fixed address who seldom see a GP at all. abstract_id: PUBMED:30306674 Access to mental health services: The experiences of people with intellectual disabilities. Background: People with intellectual disability experience higher rates of mental health disorders than the rest of the population, and expert opinion holds that multiple barriers prevent people with intellectual disability from accessing appropriate services. Methods: A qualitative study was designed to explore the lived experience of barriers and enablers to access to mental health services among people with intellectual disability. Interviews and focus groups were conducted with people with intellectual disability, carers and service providers. Results: Barriers and enablers were identified across four key dimensions of access: utilization of services; service availability; relevance, effectiveness and access; and equity and access. These factors operated at both systemic and personal levels. Conclusions: The findings from this study provide empirical evidence of anecdotal experiences of access to mental health services and provide insight into the ways users, carers and service providers navigate an often hostile system and indicate further directions for research. abstract_id: PUBMED:38174076 Access and efficacy of university mental health services during the COVID-19 pandemic. Objective: This study sought to understand the mental health issues, mental health support and efficacy of that support among university students. Participants: All students enrolled in a College of Arts and Sciences at one mid-size university received an email that contained a link to an anonymous, online questionnaire developed and disseminated through PsychData. 162 students completed the questionnaire. Methods: Mixed methods: Data was summarized using descriptive analysis, testing for significance, testing for differences, and content analysis. Results: Participants reported high levels of anxiety (76%) and depression (65%). Results indicated that participant demographics were associated with types of mental access, and support. Unexpected results included lack of knowledge or information on cost, and how to access mental health services hindered access for participants, and although telehealth was the most widely used support, in contrast to other studies, participants indicated a preference for face-to-face mental health services. Conclusion: Results highlight the need for improving communication about and access to mental health services in higher education Recommendations and implications for policy and support services are provided. abstract_id: PUBMED:30989633 Income-based inequities in access to mental health services in Canada. Objectives: While mental health services provided by general practitioners and psychiatrists can be billed to public health insurance programs in Canada, services provided by psychologists, social workers and other non-physician providers cannot. This study assesses the extent to which access to mental health services varies by income after first taking into account the higher concentration of mental health needs at lower income levels. Method: Data from the Canadian Community Health Survey 2013-2014 are used to calculate need-standardized concentration indices for access to mental health services. Results: More pro-rich utilization of mental health services provided by non-physicians and more equitable utilization of physician services is found for psychologists and general practitioners, but not for social workers, nurses and psychiatrists. Unmet need for healthcare for mental health problems is found to be more pro-poor than unmet need for physical health problems. Conclusion: By standardizing for inequitable distribution of mental health need, this study provides strong evidence that income-based inequity in access to mental health services is an issue under Canada's two-tier system, particularly with regard to general practitioners and psychologists. For other types of providers, the results suggest that inequities in service utilization vary not just by Medicare coverage but also by service settings and target populations. Despite these variations, greater inequities in unmet need for mental health care than for physical health care suggest that inequity is the dominant reality for Canadians. The results provide a baseline that could be used to assess the equity impacts of policy reforms. abstract_id: PUBMED:11055448 Primary care satellite clinics and improved access to general and mental health services. Objectives: To evaluate the relationship between the implementation of community-based primary care clinics and improved access to general health care and/or mental health care, in both the general population and among people with disabling mental illness. Study Setting: The 69 new community-based primary care clinics in underserved areas, established by the Department of Veterans Affairs (VA) between the last quarter of FY 1995 and the second quarter of FY 1998, including the 21 new clinics with a specialty mental health care component. Data Sources: VA inpatient and outpatient workload files, 1990 U.S. Census data, and VA Compensation and Pension files were used to determine the proportion of all veterans, and the proportion of disabled veterans, living in each U.S. county who used VA general health care services and VA mental health services before and after these clinics began operation. Design: Analysis of covariance was used to compare changes, from late FY 1995 through early FY 1998, in access to VA services in counties in which new primary care clinics were located, in counties in which clinics that included specialized mental health components were located, and for comparison, in other U.S. counties, adjusting for potentially confounding factors. Key Findings: Counties in which new clinics were located showed a significant increase from the FY 1995-FY 1998 study dates in the proportion of veterans who used general VA health care services. This increase was almost twice as large as that observed in comparison counties (4.2% vs. 2.5%: F = 12.6, df = 1,3118, p = .0004). However, the introduction of these clinics was not associated with a greater use of specialty VA mental health services in the general veteran population, or of either general health care services or mental health services among veterans who received VA compensation for psychiatric disorders. In contrast, in counties with new clinics that included a mental health component the proportion of veterans who used VA mental health services increased to almost three times the proportion in comparison counties (0.87% vs. 0.31%: F = 8.3, df = 1,3091, p = .004). Conclusions: Community-based primary care clinics can improve access to general health care services, but a specialty mental health care component appears to be needed to improve access to mental health services. abstract_id: PUBMED:36476220 Barriers and facilitators of access and utilization of mental health services among forensic service users along the care pathway. Background: The verdict of Not Criminally Responsible on account of a Mental Disorder (NCRMD) is increasingly used to access specialized mental health services in Canada and elsewhere. This situation highlights the importance of ensuring timely access to services in the community to prevent violence and justice involvement. The objective of the present study is to identify individual and contextual barriers and facilitators of access to mental health services during the period preceding an offense leading to a verdict of NCRMD. Methods: The sample includes 753 people found NCRMD in Québec, Canada. All episodes of mental health hospitalizations and service use before the index offense were identified using provincial administrative health data, for an average period of 4.5 years. Access was conceptualized as a function of the possibility of seeking, reaching and receiving appropriate health care services, based on Lévesque and colleagues patient-centred model of access to care. Generalized linear models were computed to identify the individual and contextual predictors of: (1) seeking mental healthcare (at least one contact with any type of services for mental health reasons); (2) reaching psychiatric care (at least one contact with a psychiatrist); (3) receiving psychiatric care, operationalized as (3a) continuity and (3b) intensity. Factors associated with volume of emergency mental health services were examined as exploratory analysis. Results: Geographical considerations were highly important in determining who reached, and who received specialized mental health care - above and beyond individual factors related to need. Those who lived outside of major urban centres were 2.6 times as likely to reach psychiatric services as those who lived in major urban centres, and made greater use of emergency mental health services by 2.1 times. Living with family decreased the odds of seeking mental healthcare by half and the intensity of psychiatric care received, even when adjusting for level of need. Conclusions: Findings support efforts to engage with the family of service users and highlights the importance of providing resources to make family-centred services sustainable for health practitioners. Health policies should also focus on the implementation of outreach programs, such as Forensic Assertive Community Treatment teams as part of prevention initiatives. abstract_id: PUBMED:35063916 Qualitative study of candidacy and access to secondary mental health services during the COVID-19 pandemic. Candidacy, a construct describing how people's eligibility for care is negotiated between themselves and services, has received limited attention in the context of mental health care. In addition, candidacy research has only rarely studied the views of carers and health professionals. In this article, we use concepts relating to candidacy to enable a theoretically informed examination of experiences of access to secondary mental health services during the first wave of the COVID-19 pandemic in England. We report a qualitative study of the views and experiences of service users, carers, and healthcare professionals. Analysis of 65 in-depth interviews was based on the constant comparative method. We found that wide-ranging service changes designed to address the imperatives of the pandemic were highly consequential for people's candidacy. Macro-level changes, including increased emphasis on crisis and risk management and adapted risk assessment systems, produced effects that went far beyond restrictions in the availability of services: they profoundly re-structured service users' identification of their own candidacy, including perceptions of what counted as a problem worthy of attention and whether they as individuals needed, deserved, and were entitled to care. Services became less permeable, such that finding a point of entry to those services that remained open required more work of service users and carers. Healthcare professionals were routinely confronted by complex decisions and ethical dilemmas about provision of care, and their implicit judgements about access may have important implications for equity. Many of the challenges of access exposed by the pandemic related to pre-existing resource deficits and institutional weaknesses in care for people living with mental health difficulties. Overall, these findings affirm the value of the construct of candidacy for explaining access to mental healthcare, but also enable deepened understanding of the specific features of candidacy, offering enduring learning and implications for policy and practice. abstract_id: PUBMED:30472518 An investigation of the effect of accessibility to General Practitioner services on healthcare utilisation among older people. Equity in access to healthcare services is regarded as an important policy goal in the organisation of modern healthcare systems. Physical accessibility to healthcare services is recognised as a key component of access. Older people are more frequent and intensive users of healthcare, but reduced mobility and poorer access to transport may negatively influence patterns of utilisation. We investigate the extent to which supply-side factors in primary healthcare are associated with utilisation of General Practitioner (GP) services for over 50s in Ireland. We explore the effect of network distance on GP visits, and two novel access variables: an estimate of the number of addresses the nearest GP serves, and the number of providers within walking distance of a person's home. The results indicate that geographic accessibility to GP services does not in general explain differences in the utilisation of GP services in Ireland. However, we find that the effect of the number of GPs is significant for those who can exercise choice in selecting a GP, i.e., those without public health insurance. For these individuals, the number of GPs within walking distance exerts a positive and significant effect on the utilisation of GP services. abstract_id: PUBMED:36981681 Increasing Access to Mental Health Services: Videogame Players' Perspectives. Young men's mental health is at the forefront of global public health concerns. Young males, who have a high incidence of mental health disorders, are a population that accesses services at lower rates than females and makes up the majority of videogame players. By considering the unique perspectives of digitally connected individuals on mental health service delivery, interventions may be designed to address their needs with a higher likelihood of success. This study investigated international male videogamers' perspectives on how their access to mental health services could be improved via an open-ended survey question. From a total of 2515 completed surveys, 761 responded to the qualitative question. Of these, the 71 responses that discussed access to and provision of mental healthcare services are reported in this article. Results suggest that digital mental health services were a promising way to reach this group. Anonymity and confidentiality were found to be important factors when considering online mental health services. Male videogame players identified a preference for both online and in-person services that are delivered synchronously, one-on-one with an expert practitioner, and readily available in settings that individuals find comfortable. abstract_id: PUBMED:38419901 Shortcomings of services for persons with severe and persistent mental health challenges: a qualitative study of service users and family carers. Introduction: The opinions of service users and carers are crucial to identifying ways to innovate and implement system change. This study aims to explore the views and experiences of service users and carerson the services they have used for their mental health challenges and their suggestions for service reform. Methods: Twenty participants (15 carers and 5 service users) were interviewed for the study. Results: Eight categories emerged from the data. They were: Several gaps in the system, Barriers to accessing services, Services are not fit for purpose, Services operate in isolation, System is not person focused, Service users and carers are treated poorly, Services are overloaded and under resourced and Recommendations for service reform. Respondents reported that a persistent lack of funding and resources for mental health services was a main cause of these shortcomings. Respondents also noted that innovations were needed to re-orient services to enable continuity of care, and training of mental health professionals was needed for a better understanding of the needs of service users and their carers. Discussion: Additional research is needed with larger and more diverse samples to further explore these findings. Answer: Users of mental health services do not generally lack access to general practitioner (GP) services. In fact, a population-based retrospective cohort study in Western Australia found that users of mental health services visited GPs substantially more often than non-users, with adjusted rate ratios (ARRs) indicating higher visit rates across different categories of mental illness. The ARRs were particularly elevated for those with alcohol/drug disorders, schizophrenia, and affective psychoses. However, an exception was noted for the 4% of mental health service (MHS) users with no fixed address, who had a very low ARR of visits to GPs, indicating that this subgroup seldom sees a GP at all (PUBMED:20438420). This suggests that while access to GP services is generally higher among MHS users, there are specific vulnerable subgroups, such as those without a fixed address, who may face significant barriers to accessing GP services. Additionally, other studies have identified barriers to accessing mental health services, such as geographical considerations, which can affect the likelihood of reaching and receiving specialized mental health care, particularly for those living outside of major urban centers (PUBMED:36476220). These barriers can exist at both systemic and personal levels and can impact the utilization of services, service availability, and equity of access (PUBMED:30306674). In summary, while users of mental health services typically have higher rates of GP visits compared to non-users, there are notable exceptions and barriers that can impede access for certain groups within the mental health service user population.
Instruction: Are illness perceptions about coronary artery disease predictive of depression and quality of life outcomes? Abstracts: abstract_id: PUBMED:19232233 Are illness perceptions about coronary artery disease predictive of depression and quality of life outcomes? Objective: Depression occurs commonly in coronary artery disease (CAD) and is associated with substantial disability. Modifiable cognitive determinants of depression in this population have not been identified. We investigated the impact of potentially modifiable illness beliefs about CAD on depressive symptomatology. We also examined the association between these beliefs and health-related quality of life (HRQOL) and socio-demographic variations in illness beliefs. Methods: A prospective study of 193 recently hospitalized CAD patients was conducted. Data were collected from medical records and by self-report 3 and 9 months post-discharge. Socio-demographic differences were analysed with independent sample t-tests. Predictive models were tested in a series of hierarchical linear regression equations that controlled for known clinical, psychosocial, and demographic correlates of outcome. Results: Negative illness beliefs, particularly those associated with the consequences of CAD, were significantly predictive of higher levels of depressive symptomatology at 3 and 9 months. Positive illness perceptions were significantly associated with better HRQOL outcomes. Older and less socially advantaged patients demonstrated more negative illness beliefs. Conclusions: Illness beliefs are significantly associated with depressive symptomatology and HRQOL in CAD patients. These beliefs can be easily identified and constitute a meaningful and clinically accessible avenue for improving psychological morbidity and HRQOL in CAD patients. Older and more socially vulnerable patients may require heightened monitoring of their illness beliefs. Research needs to translate these and other predictive findings into interventions. abstract_id: PUBMED:36351004 Illness perceptions and health literacy are strongly associated with health-related quality of life, anxiety, and depression in patients with coronary heart disease: results from the EUROASPIRE V cross-sectional survey. Aims: To (1) Describe the sociodemographic and risk factor profiles of a sample of patients with coronary disease, (2) Explore associations between illness perceptions and health literacy with sociodemographic characteristics and risk factors, health-related quality of life (HRQoL), anxiety, and depression. Methods And Results: Conducted as part of the ESC Prevention of CVD Project and EUROASPIRE V survey, patients were consecutively and retrospectively identified 6 months to 2 years after an acute event or elective procedure from 12 countries and interviewed. Three thousand four hundred and eight participants (76% male, mean age 64 years) were recruited, 16% were smokers, 38% obese, 60% physically inactive, and 41% hypertensive. Forty percent had attended cardiac rehabilitation. More threatening illness perceptions were associated with female gender (P &lt; 0.0001), lower income (P &lt; 0.0001), lower education (P = 0.02), obesity (P &lt; 0.0001), sedentary behaviour (P &lt; 0.0001), and diabetes (P &lt; 0.0001). Poorer health literacy was associated with obesity (P = 0.02) and sedentary behaviour (P = 0.0001). Threatening illness perceptions were strongly associated with anxiety, depression, and poorer ratings of HRQoL after multivariable adjustment (all P &lt; 0.001). Poor health literacy was associated with anxiety and depression (P &lt; 0.0001) and poorer ratings of HRQoL (HeartQol scores P = 0.03). Results were consistent across regions of Europe, age, gender, and socio-economic strata. Conclusions: Interventions like cardiac rehabilitation should be targeted at vulnerable groups given the strong associations between more threatening illness perceptions, lower health literacy, lower HRQoL, and higher levels of anxiety and depression. The delivery and content of these interventions should be accessible for those with low health literacy. abstract_id: PUBMED:32046646 Longitudinal study of the relationship between patients' medication adherence and quality of life outcomes and illness perceptions and beliefs about cardiac rehabilitation. Background: Adherence to medication regimens is essential for preventing and reducing adverse outcomes among patients with coronary artery disease (CAD). Greater understanding of the relation between negative illness perceptions, beliefs about cardiac rehabilitation (CR) and medication adherence may help inform future approaches to improving medication adherence and quality of life (QoL) outcomes. The aims of the study are: 1) to compare changes in illness perceptions, beliefs about CR, medication adherence and QoL on entry to a CR programme and 6 months later; 2) to examine associations between patients' illness perceptions and beliefs about CR at baseline and medication adherence and QoL at 6 months. Methods: A longitudinal study of 40 patients with CAD recruited from one CR service in Scotland. Patients completed the Medication Adherence Report Scale, Brief Illness Perception Questionnaire, Beliefs about CR questionnaire and the Short-Form 12 Health Survey. Data were analysed using the Wilcoxon Signed Ranks test, Pearson Product Moment correlation and Bayesian multiple logistic regression. Results: Most patients were men (70%), aged 62.3 mean (SD 7.84) years. Small improvements in 'perceived suitability' of CR at baseline increased the odds of being fully adherent to medication by approximately 60% at 6 months. Being fully adherent at baseline increased the odds of staying so at 6 months by 13.5 times. 'Perceived necessity, concerns for exercise and practical barriers' were negatively associated with reductions in the probability of full medication adherence of 50, 10, and 50%. Small increases in concerns about exercise decreased the odds of better physical health at 6 months by about 50%; and increases in practical barriers decreased the odds of better physical health by about 60%. Patients perceived fewer consequences of their cardiac disease at 6 months. Conclusions: Patients' beliefs on entry to a CR programme are especially important to medication adherence at 6 months. Negative beliefs about CR should be identified early in CR to counteract any negative effects on QoL. Interventions to improve medication adherence and QoL outcomes should focus on improving patients' negative beliefs about CR and increasing understanding of the role of medication adherence in preventing a future cardiac event. abstract_id: PUBMED:32460825 Longitudinal evaluation of the effects of illness perceptions and beliefs about cardiac rehabilitation on quality of life of patients with coronary artery disease and their caregivers. Background: Patients' negative illness perceptions and beliefs about cardiac rehabilitation (CR) can influence uptake and adherence to CR. Little is known about the interpartner influence of these antecedent variables on quality of life of patients with coronary artery disease (CAD) and their family caregivers. The aims of the study were: 1) to assess differences in illness perceptions, beliefs about CR and quality of life between patients with CAD and their family caregivers upon entry to a CR programme and at 6 months follow-up; and 2) to examine whether patients' and caregivers' perceptions of the patient's illness and beliefs about CR at baseline predict their own and their partner's quality of life at 6 months. Methods: In this longitudinal study of 40 patient-caregiver dyads from one CR service, patients completed the Brief Illness Perception Questionnaire and Beliefs about Cardiac Rehabilitation Questionnaire at baseline and 6 months; and caregivers completed these questionnaires based on their views about the patient's illness and CR. The Short-Form 12 Health Survey was used to assess patients' and caregivers' perceived health status. Dyadic data were analysed using the Actor-Partner Interdependence Model. Results: Most patients (70%) were men, mean age 62.45 years; and most caregivers (70%) were women, mean age 59.55 years. Caregivers were more concerned about the patient's illness than the patients themselves; although they had similar scores for beliefs about CR. Patients had poorer physical health than caregivers, but their level of mental health was similar. Caregivers' poorer mental health at 6 months was predicted by the patient's perceptions of timeline and illness concern (i.e. partner effects). Patient's and caregiver's illness perceptions and beliefs about CR were associated with their own physical and mental health at 6 months (i.e. actor effects). Conclusions: Overall, the patients and caregivers had similar scores for illness perceptions and beliefs about CR. The actor and partner effect results indicate a need to focus on specific illness perceptions and beliefs about CR, targeting both the individual and the dyad, early in the rehabilitation process to help improve patients and caregivers physical and mental health (outcomes). abstract_id: PUBMED:36043780 Quality of Life and Depression Assessment in Patients with Acute Coronary Syndrome: A Cross-Sectional Study. Purpose: Acute Coronary Syndrome (ACS) is currently the leading cause of death in industrialized countries. Morbidity after ACS includes physical and mental disorders affecting the patient's whole life situation and Quality of Life (QoL). The main aim of the study was to assess QoL and depression among post-ACS patients. Methods: This was a cross-sectional observational study. A total of 112 patients who fulfilled the inclusion criteria were included in this study. A semi-structured questionnaire was administered to the patients to collect data from the patients. In this study, men and women aged 18 - 80 with ACS; patients diagnosed with Non-ST Segment Elevated Myocardial Infarction (NSTEMI) or STEMI or Angina Pectoris were included. Patients with severe mental, and physical illness and dementia were excluded from the study. QoL and depression assessment was done by RAND 36-Item Health Survey and Hamilton Depression Rating scale, respectively. Results: Among 112 post-ACS patients, 78 patients were males, and 38 patients were females. The mean age of the study population was 64.25 ± 9.029 and with most individuals in the category of 61 - 70 years. The majority of study populations were married (71.5%) and lived with their families (92.9%). In this study group, 42.9 % of the population reported at least high school education (SSLC) and 54.5% were full-time employees. Most of the patients (91.1%) were taking the medication regularly, while 55.4% of patients reported regular compliance with the follow-up. This study identified that, among various factors, older age, female gender, lower income, unemployment, low education status, poor compliance with medication, and depressive symptoms led to poor QoL. Conclusion: This study confirms a negative correlation between depressive symptoms and QoL. This study's results reveal the magnitude of depression that is prevalent in the primary health care clinic that goes undiagnosed and unmanaged. Hence, it is recommended to properly screen depressive symptoms in ACS patients. Therefore, concurrently, better QoL can be achieved by managing both depression and ACS. abstract_id: PUBMED:26277367 Depression in people with coronary heart disease: prognostic significance and mechanisms. Depression affects about 40 % of people with coronary heart disease (CHD). This group with depression and CHD have increased mortality and morbidity, worse health-related quality of life, use health services more frequently and consequently cost the health service and the national economy considerably more than their non-depressed counterparts. A number of characteristics of depression and plausible mechanisms have been proposed to explain this observed association, which could lead to improved understanding of the association and lead to new interventions to improve cardiac outcomes. This review summarises the evidence linking depression with worse cardiac outcomes, considers specific aspects of depression which may predict worse outcomes and reviews the mechanisms that could potentially explain the direct or indirect association of depression with coronary outcomes. abstract_id: PUBMED:27846850 Comorbidities and characteristics of coronary heart disease patients: their impact on health-related quality of life. Background: Patients with coronary heart disease (CHD) commonly present with more than one comorbid condition, contributing to poorer health-related quality of life (HRQoL). The aim of our study was to identify the associations between HRQoL and patient characteristics, vascular comorbidities and anxiety/depression disorders. Methods: This observational study was conducted in 36 family medicine practices selected by random stratified sampling from all regions of Slovenia. HRQoL was assessed using the European Quality of Life - 5 Dimensions (EQ-5D) questionnaire and EQ Visual Analogue Scale (EQ-VAS). The associations between HRQoL and patient characteristics stratified by demographics, vascular comorbidities, health services used, their assessment of chronic illness care, and anxiety/depression disorders were identified by ordinal logistic regression and linear regression models. Results: The final sample included 423 CHD patients with a mean age of 68.0 ± SD 10.8 years; 35.2% were female. Mean EQ-VAS score was 58.6 ± SD 19.9 (median: 60 with interquartile range of 45-75), and mean EQ-5D index was 0.60 ± SD 0.19 (median: 0.56 with interquartile range of 0.41-0.76). The statistically significant predictors of a lower EQ-VAS score were higher family physician visit frequency, heart failure (HF) and anxiety/depression disorders (R² 0.240; F = 17.368; p &lt; 0.001). The statistically significant predictor of better HRQoL, according to EQ-5D was higher patient education, whereas higher family physician visit frequency, HF and peripheral artery disease (PAD) were predictors of poorer HRQoL (Nagelkerke R 2 = 0.298; χ 2 = 148.151; p &lt; 0.001). Conclusions: Results of our study reveal that comorbid conditions (HF and PAD), family physician visit frequency and years in education are significant predictors of HRQoL in Slovenian CHD patients. abstract_id: PUBMED:35300440 Impact of Anti-Diabetic Medications on Quality of Life in Persons with Type 2 Diabetes Mellitus. Introduction: Type 2 diabetes mellitus (T2DM) has been found to be associated with poor quality of life (QOL). The aim of this study was to measure QOL in T2DM patients and examine if the patients' socio demographic, diabetes-related clinical characteristics and insulin usage are associated with better quality of life. Materials And Methods: This clinic based cross-sectional study analyzed data from outpatients with T2DM attending a referral clinic between January and June 2016. Association between Diabetes Attitudes, Wishes and Needs (DAWN) QOL and few demographic, socioeconomic, clinical and biochemical predictors were examined using multivariate logistic regression model. A total of 518 patients completed the interview. Results: The HbA1c level of insulin ± oral anti-diabetic (OAD) cohort was significantly lower (7.89 ± 1.98) than the OAD cohort (8.79 ± 1.96), P &lt; 0.001. Compared to their counterparts in the OAD cohort, patients on insulin were older with longer duration of diabetes mellitus. Co-morbid confounders like obesity, hypoglycemia, and blood pressure control or socio demographic confounders like income, education were almost similar in both the cohorts. The incidence of hypertension, coronary artery disease (CAD) and statin usage was significantly higher in the insulin cohort. The overall composite DAWN QOL scores of the insulin ± OAD cohort (25.42 ± 4.35) was marginally higher than that of the OAD cohort (23.62 ± 5.06) (P = 0.067). Analog insulin users were also found to have significantly higher composite DAWN QOL scores compared to human insulin users (25.77 ± 5.73 vs.24.13 ± 4.88, P = 0.037). Conclusions: The insulin cohort, despite being older and having longer duration of diabetes, had significantly higher diet compliance score, and enhanced QOL owing to better diabetes-related knowledge and treatment adherence characteristics than non-insulin users. Questionnaires-based evaluation of QOL can provide better understanding of the patient's experience of the illness, self-care, psychological and emotional functioning, and choice of therapeutic modality enhancing the quality of care. abstract_id: PUBMED:27186176 Quality of life in patients with coronary heart disease after myocardial infarction and with ischemic heart failure. Introduction: Quality of life measures are useful when interventions or treatments are indicated for several reasons such as improvement of physical functioning, pain relief, to estimate the effectiveness of therapies or to predict mortality. The aim of the current study was to describe quality of life in patients with stable coronary artery disease, myocardial infarction and heart failure and to evaluate the relationship between depression and health-related quality of life. Material And Methods: Patients after STEMI, with stable coronary artery disease, and heart failure (n = 332) completed the MacNew Heart Disease Health-related Quality of Life Questionnaire and the Hospital Anxiety and Depression Scale. Results: Patients with myocardial infarction had significantly higher scores than patients with stable coronary artery disease or heart failure on the MacNew global scale (p &lt; 0.001) and the physical (p &lt; 0.001), emotional (p &lt; 0.001) and social (p &lt; 0.001) subscales. The anxiety scores were significantly higher in the group of patients with stable coronary artery disease than in patients with myocardial infarction (p &lt; 0.05). The depression scores were significantly higher in patients with heart failure (p &lt; 0.01). Conclusions: In patients with stable CAD, anxiety correlated mainly with symptoms, i.e. angina, than with the history of MI. Patients with symptoms of angina react to the illness with anxiety more than depression, whereas patients with heart failure with dyspnea react to the illness with depressive symptoms more than anxiety. In patients after MI and with stable CAD, cognitive-behavioral techniques could be useful to quickly reduce the level of anxiety, while patients with heart failure require long-term support therapy to reduce the risk of depressive symptoms. abstract_id: PUBMED:14612467 Depression and health-related quality of life. N/A Answer: Yes, illness perceptions about coronary artery disease (CAD) are predictive of depression and quality of life outcomes. Studies have found that negative illness beliefs, particularly those associated with the consequences of CAD, are significantly predictive of higher levels of depressive symptomatology at various time points post-discharge (PUBMED:19232233). Additionally, positive illness perceptions are significantly associated with better health-related quality of life (HRQOL) outcomes (PUBMED:19232233). More threatening illness perceptions have been associated with anxiety, depression, and poorer ratings of HRQOL even after multivariable adjustment (PUBMED:36351004). Furthermore, older age, female gender, lower income, lower education, obesity, sedentary behavior, and diabetes are linked with more threatening illness perceptions (PUBMED:36351004). Patients' beliefs upon entry to a cardiac rehabilitation (CR) program are particularly important to medication adherence at 6 months, and negative beliefs about CR should be identified early to counteract any negative effects on quality of life (QoL) (PUBMED:32046646). Additionally, caregivers' mental health at 6 months can be predicted by the patient's perceptions of timeline and illness concern, indicating the interpartner influence of illness perceptions on QoL (PUBMED:32460825). Depression in people with CAD is associated with increased mortality and morbidity, worse health-related quality of life, and higher health service costs (PUBMED:26277367). Comorbid conditions such as heart failure and peripheral artery disease, as well as higher family physician visit frequency, are significant predictors of HRQoL in CHD patients (PUBMED:27846850). In summary, illness perceptions about CAD are indeed predictive of depression and quality of life outcomes, with more negative perceptions being associated with worse outcomes. These findings suggest that interventions targeting illness perceptions could potentially improve psychological morbidity and HRQOL in CAD patients.
Instruction: Does crew resource management training work? Abstracts: abstract_id: PUBMED:30097552 Tools for your stroke team: adapting crew-resource management for acute stroke care. Crew-resource management is an approach to work and training that focuses on non-technical skills and strategies to prevent human error in complex procedures. It was initially termed 'cockpit-resource management' and developed for aviation in the 1970s after several severe accidents; it has contributed to a measurable increase in flight safety. In recent years, this approach has been successfully implemented in other high-reliability environments; surgical disciplines have made particular use of crew-resource management strategies and training, with resulting reduced mortality rates. The stepwise implementation of different crew-resource management strategies in stroke care at our tertiary stroke centre has helped to speed up process times significantly, and to improve patient safety and staff satisfaction. Here, we summarise our experience in adapting different crew-resource management tools to acute stroke care, sharing specific tools that have proven valuable in our hands, and we encourage colleagues to implement such strategies in acute stroke care. abstract_id: PUBMED:24529091 The effects of Crew Resource Management (CRM) training on flight attendants' safety attitudes. Introduction: A number of well-known incidents and accidents had led the aviation industry to introduce Crew Resource Management (CRM) training designed specifically for flight attendants, and joint (pilot and flight attendant) CRM training as a way to improve teamwork and communication. The development of these new CRM training programs during the 1990s highlighted the growing need for programs to be evaluated using research tools that had been validated for the flight attendant population. Method: The FSAQ (Flight Safety Attitudes Questionnaire-Flight Attendants) was designed specifically to obtain safety attitude data from flight attendants working for an Asia-Pacific airline. Flight attendants volunteered to participate in a study before receiving CRM training (N=563) and again (N=526) after CRM training. Results: Almost half (13) of the items from the 36-item FSAQ showed highly significant changes following CRM training. Years of experience, crew position, seniority, leadership roles, flight attendant crew size, and length of route flown were all predictive of safety attitudes. Practical Applications: CRM training for flight attendants is a valuable tool for increasing positive teamwork behaviors between the flight attendant and pilot sub-groups. Joint training sessions, where flight attendants and pilots work together to find solutions to in-flight emergency scenarios, provide a particularly useful strategy in breaking down communication barriers between the two sub-groups. abstract_id: PUBMED:26770720 Does classroom-based Crew Resource Management training improve patient safety culture? A systematic review. Aim: To evaluate the evidence of the effectiveness of classroom-based Crew Resource Management training on safety culture by a systematic review of literature. Methods: Studies were identified in PubMed, Cochrane Library, PsycINFO, and Educational Resources Information Center up to 19 December 2012. The Methods Guide for Comparative Effectiveness Reviews was used to assess the risk of bias in the individual studies. Results: In total, 22 manuscripts were included for review. Training settings, study designs, and evaluation methods varied widely. Most studies reporting only a selection of culture dimensions found mainly positive results, whereas studies reporting all safety culture dimensions of the particular survey found mixed results. On average, studies were at moderate risk of bias. Conclusion: Evidence of the effectiveness of Crew Resource Management training in health care on safety culture is scarce and the validity of most studies is limited. The results underline the necessity of more valid study designs, preferably using triangulation methods. abstract_id: PUBMED:11142974 Does crew resource management training work? Introduction: Effective crew resource management (CRM) may play a major role in decreasing the number of accidents and increasing overall safety of air medical programs. The purpose of the study was to compare the responses of crew members, in a variety of situations, who received CRM training and those who did not. The numerical results of the responses were used to evaluate the effectiveness of CRM in increasing crew awareness and promoting team concepts. Method: During a 12-month period, crew members of randomly selected air medical programs (fixed- and rotor-wing) were surveyed by questionnaire. Participants were asked if they had received CRM, effective communication, and team-building training. The next 15 questions pertained to crew interaction/communication in everyday and emergent situations and were scored on a Likert scale (1 = strongly agree and 5 = strongly disagree). A lower total score indicated a greater understanding of team awareness and effective communication. Results: The mean score of crew members who received the three identified areas of training (initial CRM, team-building, effective communications) (37.96, SD +/- 7.67) was found to be significantly lower than the mean score of those who had received none of the training (44.13, SD +/- 5.0) and P &lt; .05. Conclusion: Based on the results, CRM training increases crew awareness and promotes team concepts in both everyday and emergent situations. abstract_id: PUBMED:27060788 Does classroom-based crew resource management training have an effect on attitudes between doctors and nurses? Objective: To evaluate participant reactions and attitudes to crew resource management teamwork classroom-based training by comparing Likert responses before and after the intervention and exploring potential differences in attitudes across the different healthcare professionals. Methods: Between 26 January and 27 March, 2015, a randomly selected sample of 240 frontline healthcare professionals offering direct patient care were recruited to undergo a 4-hour crew resource management classroom-based training programme. Participants were asked to complete a 22-item human factors attitude survey before and after training and a 10-item end-of-programme evaluation. Paired samples t-test was used to assess differences between the participants' pretest and posttest scores on each item. Results: A total of 167 (70%) from 17 different specialties underwent the training and 164 (68.3%) completed (139 nurses, 25 doctors) the survey. The nurses were of similar age to the doctors (38.2 vs 36.9, p=0.83) and were more likely to be women (75.6% vs 24.6%, p &lt;0.001). Human factors attitude survey findings indicated that nurses valued the experience highly compared to doctors. The responses among the nurses revealed significant attitude shifts (p &lt;0.05) in 20 of the 22 items whereas this was the case only for 9 items among the doctors. Conclusions: Overall, the crew resource management classroom-based training programme appeared to have a positive effect on frontline healthcare professionals' attitudes. The implementation of such programme is feasible and acceptable, especially for nurses, in a public hospital setting in Hong Kong. abstract_id: PUBMED:26878329 Participant evaluation of simulation training using crew resource management in a hospital setting in Hong Kong. Introduction: A simulation team-based crew resource management training programme was developed to provide a unique multidisciplinary learning experience for health care professionals in a regional hospital in Hong Kong. In this study, we evaluated how health care professionals perceive the programme. Methods: A cross-sectional questionnaire survey was conducted in the Multidisciplinary Simulation and Skills Centre at Queen Elizabeth Hospital in Hong Kong. A total of 55 individuals in the departments of Obstetrics and Gynaecology, Anaesthesiology and Operating Theatre Services, Intensive Care Unit, and Accident and Emergency participated in the study between June 2013 and December 2013. The course content was specially designed according to the needs of the clinical departments and comprised a lecture followed by scenarios and debriefing sessions. Principles of crew resource management were introduced and taught throughout the course by trained instructors. Upon completion of each course, the participants were surveyed using a 5-point Likert scale and open-ended questions. Results: The participant's responses to the survey were related to course organisation and satisfaction, realism, debriefing, and relevance to practice. The overall rating of the training programme was high, with mean Likert scale scores of 4.1 to 4.3. The key learning points were identified as closed-loop communication skills, assertiveness, decision making, and situational awareness. Conclusions: The use of a crew resource management simulation-based training programme is a valuable teaching tool for frontline health care staff. Concepts of crew resource management were relevant to clinical practice. It is a highly rated training programme and our results support its broader application in Hong Kong. abstract_id: PUBMED:34134934 Crew Resource Management Training for Surgical Teams, A Fragmented Landscape. Objective: Medical Crew Resource Management (CRM) training courses are designed to increase patient safety by reducing the effects of human errors. These training courses are most popular in surgery and a wide range of medical CRM training courses for surgical teams is now available. However, the effects of these CRM training courses on patient outcomes are inconclusive. Although surgical teams feel the need to be trained in team collaboration skills, they are often puzzled about what criteria to apply when choosing a medical CRM training course. This study aimed to compare CRM training courses on didactic components and simulation-exercises to explore if these courses are interchangeable. Methods: In this qualitative study, semi-structured interviews were conducted among 10 main CRM training providers of surgical teams in the Netherlands. Results: Although a large variety was found in the content of CRM training courses, the most substantial differences were found in the simulation-exercises. Nine out of 10 trainers stated that standard simulation-exercises would be a step forward to ensure quality in CRM trainings. According to the trainers, the implementation of medical CRM can reduce human errors and as a result, preventable patient complications. They suggested a quality standard for CRM trainers in the medical field to ensure the quality of medical team training as a way to reach this. Conclusions: Medical CRM training courses are diverse and noninterchangeable. Trainers expect that if CRM becomes part of surgical training and is embedded in operating theatre culture, it could be of great value for patients and professionals. abstract_id: PUBMED:30051761 The training and practice of crew resource management: recommendations from an inductive in vivo study of the flight deck. Crew resource management (CRM) is credited with saving 185 lives at Sioux City. While the theory behind CRM is well documented, there are few studies of how CRM manifests on the line. This inductive in vivo study had three objectives. First, to describe how CRM manifests. Secondly, to evaluate the efficacy of CRM vis-à-vis flight safety. Thirdly, to suggest improvements to the CRM training syllabus. The study produced five conclusions: First, CRM is durable under conditions of moderate strain. Secondly, crews embed and refine CRM through reflection and action. Thirdly, CRM facilitates and shapes social relations. Fourthly, mindlessness (Langer 1989 ) undermines CRM. Finally, the interruption of flight-deck routines by third-parties poses a threat to flight-safety. The paper recommends multi-profession CRM training as a means of improving communication and co-ordination in and around aircraft. The study's limitations include a monocultural flight-deck: flights were operated by pilots with European backgrounds. Mindful of Hofstede's ( 1980 ), Engle's ( 2000 ) and Helmreich and Merritt's ( 2001 ) examination of the relationship between culture and performance, the author suggests the study be repeated with carriers that employ pilots from a variety of cultures. Practitioner Summary: This in vivo study evaluates the efficacy of CRM vis-à-vis flight safety and supports a critique of the CRM syllabus. The author observed twenty sectors and attended a CRM training day. To improve safety and efficiency, it is recommended that airlines run multi-profession (inclusive) CRM training courses. abstract_id: PUBMED:28119369 Improving Pediatric Rapid Response Team Performance Through Crew Resource Management Training of Team Leaders. Background: Rapid response teams (RRTs) improve the detection of and response to deteriorating patients. Professional hierarchies and the multidisciplinary nature of RRTs hinder team performance. This study assessed whether an intervention involving crew resource management training of team leaders could improve team performance. Methods: In situ observations of RRT activations were performed pre- and post-training intervention. Team performance and dynamics were measured by observed adherence to an ideal task list and by the Team Emergency Assessment Measure tool, respectively. Multiple quartile (median) and logistic regression models were developed to evaluate change in performance scores or completion of specific tasks. Results: Team leader and team introductions (40% to 90%, P = .004; 7% to 45%, P = .03), floor team presentations in Situation Background Assessment Recommendation format (20% to 65%, P = .01), and confirmation of the plan (7% to 70%, P = .002) improved after training in patients transferred to the ICU (n = 35). The Team Emergency Assessment Measure metric was improved in all 4 categories: leadership (2.5 to 3.5, P &lt; .001), teamwork (2.7 to 3.7, P &lt; .001), task management (2.9 to 3.8, P &lt; .001), and global scores (6.0 to 9.0, P &lt; .001) for teams caring for patients who required transfer to the ICU. Conclusions: Targeted crew resource management training of the team leader resulted in improved team performance and dynamics for patients requiring transfer to the ICU. The intervention demonstrated that training the team leader improved behavior in RRT members who were not trained. abstract_id: PUBMED:33830287 Crew resource management in emergency centers The treatment of critically ill patients in the emergency room poses major challenges to the treatment teams. Good teamwork is essential for patient care and patient safety. Between 60 and 70% of all errors in high-risk areas-such as medicine-are assigned to the field of "human factors". In aviation, after several aircraft disasters, the concept of "Crew Resource Management" (CRM) was developed in the 1980s to avoid such errors and has since established itself in many high-security industries. In contrast to medicine, there has long been a legal obligation in aviation to conduct regular CRM training. Introduced into medicine by anesthesiologists in 1990 because of its potential, CRM training has so far found its way into emergency medicine especially, even without it being a legal obligation. For trauma room treatment of polytrauma patients, the disciplines involved already offer a specially developed training concept in which teaching of CRM principles is the main focus (HOTT®-Schockraumsimulation). In addition to dedicated private providers of CRM training and individual concepts developed at an institutional level, several common course concepts for the care of emergency patients also integrate CRM principles to varying degrees into their curricula and teaching methods. Level IA evidence for CRM training is still missing also due to systematic difficulties not only in medicine, but also in other high-risk areas. However, further implementation of regular CRM training in medicine should not be suspended for this very reason. Answer: Crew resource management (CRM) training is an approach initially developed for aviation to prevent human error in complex procedures and has since been adapted to other high-reliability environments, including healthcare. The evidence suggests that CRM training can be effective in various settings. In the context of acute stroke care, the implementation of CRM strategies at a tertiary stroke center has been reported to significantly speed up process times, improve patient safety, and increase staff satisfaction (PUBMED:30097552). Similarly, CRM training for flight attendants has been shown to increase positive teamwork behaviors between flight attendants and pilots, which is valuable for improving communication and teamwork during in-flight emergency scenarios (PUBMED:24529091). A systematic review aimed at evaluating the effectiveness of classroom-based CRM training on safety culture in healthcare found mixed results, with some studies reporting positive outcomes on selected culture dimensions, while others showed no significant changes across all dimensions. The review highlighted the need for more valid study designs to assess the impact of CRM training (PUBMED:26770720). Research on air medical programs indicated that CRM training increases crew awareness and promotes team concepts in both everyday and emergent situations (PUBMED:11142974). In a hospital setting in Hong Kong, CRM classroom-based training was found to have a positive effect on healthcare professionals' attitudes, particularly among nurses (PUBMED:27060788). A simulation team-based CRM training program for healthcare professionals also received high ratings, with participants recognizing the relevance of CRM concepts to clinical practice (PUBMED:26878329). Despite the diversity in CRM training courses for surgical teams, trainers believe that standardizing simulation exercises and implementing a quality standard for CRM trainers could enhance the effectiveness of these trainings and potentially reduce human errors and preventable patient complications (PUBMED:34134934). An in vivo study of CRM on the flight deck recommended multi-profession CRM training to improve communication and coordination, which could lead to enhanced safety and efficiency (PUBMED:30051761). Additionally, targeted CRM training of team leaders in pediatric rapid response teams resulted in improved team performance and dynamics (PUBMED:28119369). In emergency centers, CRM training is considered essential for patient care and safety, and while level IA evidence for its effectiveness is still lacking, the continued implementation of CRM training is advocated (PUBMED:33830287).
Instruction: Are patients with Hodgkin lymphoma and high-grade non-Hodgkin lymphoma in clinical therapy optimization protocols representative of these groups of patients in Germany? Abstracts: abstract_id: PUBMED:20423912 Are patients with Hodgkin lymphoma and high-grade non-Hodgkin lymphoma in clinical therapy optimization protocols representative of these groups of patients in Germany? Background: Improvement of lymphoma therapy is largely driven by clinical therapy optimization protocols (TOPs). It is unclear, however, whether the patients treated in clinical TOP are representative for all patients. Patients And Methods: TOP participants were compared with nonstudy patients in a population-based approach. The study included patients with Hodgkin lymphoma (HL) and high-grade non-Hodgkin lymphoma (hgNHL). Incident cases (N = 743) were ascertained in a large population-based epidemiologic survey. Each patient's status with respect to exclusion criteria of the pertinent TOP was abstracted from primary data sources. TOP participants were identified on the basis of the trial databases. Baseline characteristics and risk factor prevalence were compared between nonstudy and TOP patients. Results: Eligible for the respective TOPs were 64.1% of all incident HL patients and 29.6% of all hgNHL patients in the population. Main exclusion criterion was age (HL: 15.2%; hgNHL: 27.4%). Only 71 HL patients (23.0%) and 11 hgNHL patients (3.4%) had actually been enrolled in the respective TOPs. Conclusions: TOP participants do not represent all patients with hgNHL and HL in the population. TOP inclusion criteria caused considerable selection among the participants. Further investigation is required to clarify possible limitations for the application of the outcomes observed in TOP patients for all patients with these diseases. abstract_id: PUBMED:9728600 The role of high-dose chemotherapy in patients with Hodgkin's disease and non-Hodgkin's lymphoma. Many patients with Hodgkin's and non-Hodgkin's lymphoma (NHL) can be cured today with combination chemotherapy and/or radiotherapy. However, for patients with suboptimal responses to initial therapy or for patients with refractory or relapsed disease, salvage therapy alone is usually inadequate to achieve long-term survival. High-dose chemotherapy (HDC) with stem cell rescue has emerged as the treatment of choice for such patients as long-term disease-free survival can be obtained in a significant number of these patients. Dose-intensive treatment has been equivocally shown effective for certain patients with Hodgkin's and NHL, whether or not chemosensitivity is shown before transplant. However, HDC has yet to consistently yield durable responses in patients with indolent NHL. Additionally, perhaps the International Prognostic Index can now help identify "high-risk" NHL patients who may benefit from investigative approaches such as frontline HDC. abstract_id: PUBMED:9674847 Busulfan, cyclophosphamide and etoposide as high-dose conditioning therapy in patients with malignant lymphoma and prior dose-limiting radiation therapy. Relapse after transplant for malignant lymphomas remains the main cause of treatment failure. Most conditioning regimens contain total body irradiation (TBI). We investigated the toxicity and efficacy of an intensified chemotherapy conditioning regimen without TBI in patients with relapsed or high-risk malignant lymphoma who had received prior radiation therapy and were therefore not eligible for TBI. Twenty patients with a median age of 38 (18-56) and relapsed or high-risk malignant non-Hodgkin's lymphoma (NHL, n = 16) or Hodgkin's disease (HD, n = 4) underwent high-dose chemotherapy consisting of busulfan (16 mg/kg), cyclophosphamide (120 mg/kg) and etoposide 30 mg/kg (n = 8) or 45 mg/kg (n = 12) followed by peripheral stem cell support (n = 14), autologous bone marrow (n = 3), allogeneic (n = 2) or syngeneic (n = 1) transplantation. All but two had chemosensitive disease before high-dose chemotherapy. The main toxicity -- according to the Bearman score -- was mucositis II in 18 (90%) patients; five patients (25%) suffered a grade I hepatic toxicity. GI toxicity I occurred in three (15%) and renal toxicity I in two patients (10%). Sixty percent of the patients developed transient dermatitis with erythema and three of them (15%) had skin desquamation; one patient experienced asymptomatic pancreatitis. Toxicity was slightly higher in patients treated with 45 mg/kg etoposide. One patient (5%) died of treatment-related venoocclusive disease. After a median follow-up of 50 months (24-84) the disease-free and overall survival were 50% and 55%. One of the nine relapsing patients developed secondary AML 18 months after transplant. High-dose busulfan, cyclophosphamide and etoposide is an effective regimen resulting in long-term disease-free survival in 50% of patients with relapsed malignant lymphoma and prior radiation therapy. The toxicity is moderate with a low treatment-related mortality (5%). abstract_id: PUBMED:18182078 Blood stem cell mobilization and collection in patients with lymphoproliferative diseases: practical issues. More than 15,000 autologous stem cell transplants (ASCT) were reported to European Blood and Bone Marrow Transplantation registry in 2005, most commonly for multiple myeloma (MM) and lymphomas. In 98% of the cases high-dose therapy was supported by blood stem cells. Thus stem cell mobilization and collection are integral parts of ASCT protocols. We give here a practical approach to blood stem cell mobilization and collection in patients with various lymphoproliferative diseases. While mobilization is usually easy and straightforward in patients with MM, about 10-20% of patients with non-Hodgkin's lymphoma or Hodgkin's lymphoma are hard-to-mobilize. There seems to be even more disease-specific issues in blood stem cell mobilization in patients with chronic lymphocytic leukaemia and in patients with light chain amyloidosis. We also discuss options in hard-to-mobilize patients. abstract_id: PUBMED:3297283 Intensive cytoreductive therapy followed by autologous bone marrow transplantation for patients with hematologic malignancies or solid tumors. Fifty patients were studied. Twenty patients with non-Hodgkin's lymphomas (NHL) of high-grade malignancy and 21 patients with acute leukemia (AL) were treated with high-dose cyclophosphamide and total body irradiation, and three patients with Hodgkin's disease (HD) and six patients with solid tumors were treated with high-dose cyclophosphamide and VP16-213. Those procedures were followed by autologous bone marrow transplantation (ABMT). All patients had received conventional chemo(radio)therapy before the ABMT procedure. Although remissions were obtained in patients with cytotoxic drug-resistant diseases (lymphomas and solid tumors), none has become a long-term survivor, as occurred also in patients with solid tumors in partial remission (PR). Two of five patients with NHL in PR at the time of ABMT have become long-term disease-free survivors (28+, 56+ months). Ten patients with NHL were treated in complete remission (CR) and seven are in unmaintained CR; four with long follow-up (14+ to 59+ months). All patients with AL were treated in CR; two patients received ABMT in second CR, and both relapsed. Ten of nineteen patients in first CR relapsed; eight are alive in CR, five with long follow-up. Four deaths were therapy-related, all were patients in poor clinical condition. Intensive cytoreductive therapy followed by ABMT can produce prolonged disease-free survival (and probably cure) in a fair number of patients with poor risk NHL in CR and PR and probably also in patients with acute myeloblastic leukemia in first CR. This procedure was not successful in achieving long-term disease-free survival in patients with refractory lymphomas or solid tumors. abstract_id: PUBMED:24937122 Changes in the diagnosis and treatment of patients with low grade lymphoma in Germany: years 2006-2009. Today's treatment options for indolent lymphoma and chronic lymphocytic leukemia (CLL) range from watch &amp; wait, immunochemotherapy up to allogeneic transplantation. We describe changes in the diagnosis and treatment of indolent lymphoma and CLL in Germany between 2006 and 2009. Two nation-wide surveys in the fourth quarter of 2006 and 2009 included patients with indolent lymphoma and CLL. Data from 576 patients from 46 centers in Q4/2006 were compared with data from 521 patients from 57 centers in Q4/2009. The subpopulation of patients ≥ 70 years of age and the number of patients with comorbidities increased from 39% to 55% and 47% to 55%, respectively. Both in indolent lymphoma and CLL, Rituximab and R-based immunochemotherapy (50.6% vs. 64.4%) as well as bendamustine (4.8 % vs. 24%) were much more frequently applied. In contrast, high dose chemotherapy consolidation was almost abandoned in first line treatment. Supportive care is given more frequently, with exception of erythropoietin and immunoglobulins. Our national survey confirmed that scientific results were rapidly transferred into clinical care of indolent lymphoma. abstract_id: PUBMED:9037363 Intensive therapy and autotransplant for patients with an incomplete response to front-line therapy for lymphoma. Background: Patients with Hodgkin's disease (HD) and intermediate or high-grade non-Hodgkin's lymphoma (NHL) who fail to achieve a complete remission (CR) with standard induction therapy have a poor prognosis with conventional-dose salvage therapy alone. We examined the role of subsequent intensive therapy and autologous bone marrow transplantation (ABMT) in patients who demonstrated a response to conventional-dose therapy. Patients And Methods: Sixty-six patients with either HD (n = 30) or NHL (n = 36) underwent intensive therapy with etoposide (60 mg/kg), intravenous melphalan (160-180 mg/m2) followed by infusion of unpurged autologous bone marrow and/or blood cells. All patients had advanced stage or bulky disease at diagnosis and failed to achieve a CR after an anthracycline-containing front-line chemotherapy regimen (NHL) or ABVD or equivalent regimen (HD). Patients who achieved a CR after involved-field radiotherapy were excluded. All patients demonstrated sensitivity to conventional-dose salvage treatment before advancing to intensive therapy and ABMT. Results: The CR, partial response (PR) and overall response rate (RR) following ABMT for HD patients was 48%, 17% and 65%, respectively. At a median follow-up of 35 months, the predicted three-year overall survival (OS) is 51% (95% CI: 44%-60%) and event-free survival (EFS) is 34% (95% CI: 26%-54%). For patients with NHL, the CR, PR and RR were 68%, 9% and 77%, respectively. At a median follow-up of 28 months, the predicted three-year OS is 51% (95% CI: 35%-66%) and EFS is 39% (95% CI: 21%-57%). Conclusions: Intensive therapy with etoposide and melphalan followed by ABMT results in prolonged survival in selected patients with lymphoma who fail to achieve a complete remission to front-line chemotherapy. Based on our previous studies of outcome to conventional-dose salvage chemotherapy, we estimate that of all patients failing induction therapy, 28% with HD and 15% with NHL will be event-free at three years after ABMT. abstract_id: PUBMED:2569900 High-dose therapy and autologous peripheral blood stem cell transplantation for patients with lymphoma. Forty patients with refractory Hodgkin's disease (24 patients) or non-Hodgkin's lymphoma (16 patients) who were considered for high-dose therapy but not for autologous bone marrow transplantation (ABMT) due to BM metastases, previous pelvic irradiation, a history of marrow involvement by tumor or hypocellular marrow in conventional harvest sites received high-dose therapy and autologous peripheral blood (PB) hematopoietic stem cell transplantation. Disappearance of circulating neutrophils and development of RBC and platelet transfusion-dependence was followed, in the evaluable patients, by reappearance of 0.5 x 10(9)/L circulating granulocytes and sufficient platelets to obviate the need for platelet transfusions at a median of 25 days after transplantation. Twenty-three patients experienced a clinical complete remission (CR). The projected 2-year event-free survival was 24% for all 40 patients and 49% for the non-Hodgkin's lymphoma patients. The projected 18-month event-free survival for the Hodgkin's disease patients was 15%. PB stem cell transplantation provided an opportunity to administer high-dose salvage therapy to patients with refractory lymphoma who otherwise were not candidates for such therapy. For some of those patients, the high-dose therapy produced prolonged survival, free of tumor progression. abstract_id: PUBMED:7534744 Ifosfamide, epirubicin and etoposide (IEV) therapy in relapsed and refractory high-grade non-Hodgkin's lymphoma and Hodgkin's disease. Background: A fundamental principle in the therapeutic strategy for recurrent lymphomas is the employment of agents that are not part of the usual front line combination regimens. Ideally, the cytotoxic agents should lack complete cross resistance with those utilized up front. Patients And Methods: A three-drug combination of ifosfamide, epirubicin and etoposide (IEV) was used to treat 20 patients with relapsing or refractory high-grade non-Hodgkin's lymphoma (HG-NHL) or Hodgkin's disease (HD). Results: Of 14 patients with HG-NHL, 5 (36%) achieved a complete response (CR) and 4 partial remission (PR), giving an overall response rate of 64%. To date, all the complete responders are still in CR at +5, +5, +6, +7, and +9 months, respectively. Of 6 patients with HD, 4 (66%) obtained CR and 2 PR, giving an overall response rate of 100%. The 4 CRs are still in remission after +4, +5, +9, and +13 months, respectively. Clinical and hematologic toxic effects were moderate: neutropenia was responsible for delaying treatment for a week in 6 patients. Conclusions: These results confirm the efficacy of the IEV regimen in inducing a good remission rate with moderate side effects in relapsing/refractory HG-NHL and HD patients and they show that further investigations with this combination are warranted. abstract_id: PUBMED:19068463 Lymphomas in adolescents: are childhood lymphoma therapy protocols suitable for this patient group? The centres of the Hungarian Paediatric Oncology Network annually take care of 250-300 new patients with childhood cancer, every tenth of them suffering from lymphoma. The aim of our work was to analyse the data of the adolescents (14-19 years) with Hodgkin's lymphoma (HL) and non-Hodgkin's lymphoma (NHL), comparing their survival rates with younger patients under fourteen and with the international data. From January 1990 to December 2004 there were 281 children diagnosed with HL and 230 with NHL. Among the HL patients 107, while among the NHL patients 51 were older than 14 years old. In the group of HL the distribution of patients according to the stage was similar in younger and older patients. In the NHL group 55% of the children younger than 14, and 72% of the patients older than 14 years old had advanced stage disease (stage III or IV). In both groups the patients received chemotherapy according to the current paediatric protocols. The overall survival (OS) of the HL patients younger than 14 was 92.5+/-2% at 5 years and 90.3+/-2% at 10 years, and for the adolescents 93.4+/-2% and 90.7+/-3% at 5 and 10 years (n.s.). The OS of the younger children in the NHL group was 78.2+/-3% at 5 and 10 years, and 77.9+/-6% for the adolescents (n.s.). As a conclusion, survival rates of the adolescents do not differ significantly from the parameters of the patients under fourteen, so the therapy protocols used for childhood lymphomas are suitable for the treatment of the lymphomas appearing at the age of 14-19 years. Answer: No, patients with Hodgkin lymphoma (HL) and high-grade non-Hodgkin lymphoma (hgNHL) in clinical therapy optimization protocols (TOPs) are not fully representative of these groups of patients in Germany. A study comparing TOP participants with nonstudy patients in a population-based approach found that only a subset of all incident HL and hgNHL patients were eligible for the respective TOPs, with 64.1% of HL patients and 29.6% of hgNHL patients meeting the criteria. The main exclusion criterion was age, with 15.2% of HL patients and 27.4% of hgNHL patients being excluded on this basis. Furthermore, actual enrollment in the respective TOPs was quite low, with only 23.0% of HL patients and 3.4% of hgNHL patients participating in the trials. This indicates that TOP participants do not represent all patients with these diseases in the population, and the inclusion criteria of TOPs lead to considerable selection among participants (PUBMED:20423912).
Instruction: Do people with rheumatoid arthritis develop illness-related schemas? Abstracts: abstract_id: PUBMED:9313396 Do people with rheumatoid arthritis develop illness-related schemas? Objective: To assess, using both qualitative/inductive and quantitative data, whether people with rheumatoid arthritis (RA) have schemas related to arthritis. Methods: Themes identified from interview and focus group transcripts were used to develop 1) questionnaire items, and 2) statements participants were asked to recall during home interviews. Two hundred one people with RA completed questionnaires and recall tasks of the type used in cognitive research, followed 10 days later by another recall assessment by telephone. Results: Qualitative methods, item-level questionnaire data, and category-level recall data yielded convergent results supporting 4 final categories: mastery, support, loss of independence, and depression. Regression analyses indicated that category of earlier recollections predicted subsequent recollections assessed via phone. Conclusions: Results from widely different methods offer at least partial support for arthritis schemas and suggest that the concepts identified are meaningful to patients as well as to researchers. abstract_id: PUBMED:9212381 Illness self-schemas in depressed and nondepressed rheumatoid arthritis patients. This study examined the hypothesized illness self-schemas construct in persons with rheumatoid arthritis (RA). Biases in self-description, information processing, and schema-consistent illness behavior were examined in depressed and nondepressed persons with RA and compared with those of depressed and nondepressed controls. Major findings revealed that RA-depressed subjects exhibited pervasively negative self-description and biased processing of negative illness-related information. RA-nondepressed subjects demonstrated a bias for positive self-description and enhanced processing of positive illness-related information. Using regression analysis, the illness self-schema construct predicted unique variance in self-reported functional disability. Findings are reviewed in the context of previous research on self-schemas, chronic pain, and cognitive variables in chronic illness. Potential clinical implications and directions for future research are discussed. The illness self-schema construct has significant heuristic value which could guide further research on the psychosocial adjustment of individuals with chronic illnesses. abstract_id: PUBMED:34929227 Neuroinflammatory remodeling of the anterior cingulate cortex as a key driver of mood disorders in gastrointestinal disease and disorders. Most gastrointestinal diseases and disorders (GIDD) are associated with depression, anxiety, and cognitive dysfunction. This suggests that shared features of GIDD, particularly chronic pain and inflammation, affect specific neural targets. The critical review of clinical and animal research presented here reveals that anterior cingulate cortex (ACC) is a primary target. It is particularly sensitive to neuroinflammation, and its function accounts for altered mental function emergent in GIDD. We propose that peripherally-triggered neuroinflammation normally signals injury/illness to ACC, which increases threat assessment and pain sensitivity to cope with increased vulnerability. Chronic peripheral inflammation over-drives this process, leading to long-term ACC structural remodeling, and excessive threat signaling. This evokes anxiodepressive phenotypes even without direct evidence of threats because ACC utilizes schemas to infer affective outcomes (e.g. pain) based on complex contextual information. This activates the autonomic nervous system, exacerbates immune dysfunction, and promotes further gut pathology. This theory provides a mechanistic account of bidirectional interactions among gastrointestinal, immunological, and neural systems in GIDD, and is likely applicable to other chronic inflammatory conditions. abstract_id: PUBMED:19828917 Assessment of objective and subjective quality of life in people with rheumatoid arthritis - preliminary study. Background: Quality of life evaluation is nowadays an important element in the assessment of treatment and rehabilitation effectiveness in people with rheumatoid arthritis (RA). According to studies of patients with other diseases, objective indicators of life quality are not directly related to the level of subjective life satisfaction. Both dimensions should be taken into account in a reliable evaluation of the quality of life of any patient. The analysis of available literature reveals few publications concerned with evaluating quality of life in people with RA in both the objective and subjective dimensions. Material And Method: A total of 42 people with RA took part in the study. The Life Satisfaction Questionnaire was used to assess subjective quality of life, and the objective dimension was assessed with a questionnaire investigating education, employment, income etc., before and after the disease was diagnosed. Results: The study found that people with RA perceive their global life satisfaction as low. Only 38% of the participants were satisfied with their lives as a whole. Their vocational activity decreased by more than 45% in relation to the situation before developing RA. Watching TV was the most often stated leisure activity. People with RA were most satisfied with their family life and contact with friends, and the least satisfying life domains were their vocational and financial situations. The main predictors of global life satisfaction in people with RA were satisfaction with the financial situation and sexual life. Conclusion: RA decreases both the objective and subjective dimensions of quality of life. Taking into account the importance of patient quality of life evaluation in the assessment of effectiveness of treatment and rehabilitation in patients with RA, it seems necessary to include both dimensions, as only this guarantees achieving results that reliably reflect the real situation. abstract_id: PUBMED:19296831 The health-related quality of life in rheumatoid arthritis, ankylosing spondylitis, and psoriatic arthritis: a comparison with a selected sample of healthy people. Background: The health-related quality of life (HRQL) is an important indicator of the burden of musculoskeletal disease. The Medical Outcome Study Short-Term 36 (SF-36) is the most used tool that evaluates HRQL as a subjective perception about psychological and physical limitations due to an underlying illness. The purpose of this study was to compare the HRQL scores among patients with rheumatoid arthritis (RA), psoriatic arthritis (PsA) and ankylosing spondylitis (AS) and a selected sample of health people and determine their relationship with measures of clinical condition. Methods: 799 patients (469 with RA, 164 with AS, 65 with axial PsA and 101 with peripheral PsA) accepted the invitation to participate. 1579 healthy controls were used for the comparison. We calculated scores for the eight SF-36 subscales, the Physical Component Summary (PCS) score, and the Mental Component Summary (MCS) score, according to published algorithms. Disease-related characteristics included disease duration, comorbidity, a measure for disease activity and for radiographic damage. The presence of comorbidity was ascertained through patient's self-reports by the Self-Administered Comorbidity Questionnaire (SCQ). Comparison were performed with respect to sex and age, and s-scores were calculated for comparison with the norm. Multivariate analyses were used to assess the relationship between HRQL and radiographic damage, disease activity, and socio-demographic data. Results: The four inflammatory rheumatic diseases (IRD), compared to controls, significantly impaired all eight health concepts of the SF-36 (p &lt; 0.0001) in both component PCS and MCS scores (p &lt; 0.0001). Overall, the dimensions typically affected were physical functioning, limitations due to physical function, and bodily pain. The disease with the worst HRQL for those dimensions was RA. The multivariate analyses revealed that the physical component was influenced by a high disease activity and comorbidity. The severity of psoriatic lesions was associated with poor mental functioning in patients with PsA. Conclusion: Chronic IRD have a clearly detrimental effect on the HRQL in both sex and in age groups, and physical domain is more impaired than mental and social ones. abstract_id: PUBMED:10627433 Reduced incidence of alcohol related deaths in subjects with rheumatoid arthritis. Objectives: It has previously been shown that people with ankylosing spondylitis have an increased incidence of alcohol related deaths from accidents and violence. This study investigated alcohol related deaths in subjects with rheumatoid arthritis (RA). Methods: The study covered the subjects, 1666 in number, who had died in 1989 and had been entitled under the nationwide sickness insurance scheme to receive specially reimbursed medication for RA. Results: There were eight alcohol related deaths among the 480 men and three deaths among the 1186 women with RA. The standardised mortality ratios and their 95% confidence intervals (CI) were 0.40 (95% CI 0.20, 0.80) and 0.40 (95% CI 0.13, 1.26), respectively. Conclusion: Alcohol either protects from RA or, subjects with RA curtail their drinking after the manifestation of RA. abstract_id: PUBMED:10493687 Health related quality of life in a population sample with arthritis. Objective: To determine the prevalence and health related quality of life of a community sample of people with arthritis and compare it with persons with other chronic diseases and the healthy population in South Australia. Methods: A representative population survey by trained interviewers in autumn 1995 using a multi-stage, systematic, clustered area sample of 4200 urban and country households. There were 3001 (73.6%) respondents aged 15 or over. Subjects were asked, "Have you ever been told by a doctor that you have arthritis?" and "What type?", in addition to the Medical Outcome Survey Short Form-36 (SF-36) health status survey. Results: Medically confirmed arthritis was self-reported in 666 (22.1%) as osteoarthritis (OA) (8.6%), rheumatoid arthritis (RA) (4.0%), and other, or unspecified arthritis (9.6%). People with arthritis were more likely to be female, aged, and of lower socioeonomic status. SF-36 scores were compared to nonarthritic subjects and adjusted for differences in age, sex, and occupational status. Scores were significantly lower for respondents with arthritis, compared with the rest of the population across all subscales of the SF-36 (p&lt;0.05). This was most marked in the subscales measuring physical function and pain. Conclusion: Self-reported arthritis is common in the South Australian population, particularly in those aged over 65 years. Arthritis has a major impact on the health related quality of life in the community setting. abstract_id: PUBMED:38236426 Predicting comorbid mental health difficulties in people with autoimmune arthritis. Little is known about variables impacting the association between mental health difficulties and autoimmune conditions. This study investigates whether, age of onset, adverse childhood experiences (ACEs), and 'illness invisibility' predict comorbid mental health difficulties in people with autoimmune arthritis. Participants with autoimmune arthritis (N = 209) were recruited via social media platforms. Age of onset of arthritis and the temporal order of mental health difficulties (if applicable) were collected alongside a measure of personality and ACEs. A novel measure of illness invisibility was developed for this study. A cross-sectional mixed-subject design was utilised. 53.5% of the sample endorsed lifetime mental health difficulties. Logistic regression analyses revealed participants with a younger age of onset of arthritis had significantly higher odds of developing a mental health problem (OR 0.93, 95% CI 0.90-0.96). Independently, Illness Invisibility, endorsed by 89.9% of participants, significantly predicted postmorbid mental health difficulties (OR 1.08, 95% CI 1.01-1.19). Adverse Childhood Experiences were frequently endorsed within the sample with 37.8% reporting ≥ 3 cumulative ACEs. Every unit increase in ACEs increased the odds of having comorbid mental health difficulties (OR 1.27, 95% CI 1.09-1.47). Young people who are diagnosed with autoimmune arthritis maybe more likely to experience subsequent mental health difficulties. The 'invisibility' of their illness and exposure to ACEs also is associated with their risk for mental health complications. These findings highlight the importance of mental health screening for young people being investigated for arthritis and interdisciplinary care, especially for young people. abstract_id: PUBMED:26806436 How does self stigma differ across people with psychiatric diagnoses and rheumatoid arthritis, and how does it impact on self-esteem and empowerment? Self stigmatising attitudes have been found in people who have psychiatric diagnoses, however, research assessing self stigma in physical illnesses is rare. It is known that receiving a diagnosis of rheumatoid arthritis (RA) can affect a person's identity and self esteem. This study aimed to compare levels of self stigma, self esteem and empowerment between people diagnosed with psychiatric illnesses and people diagnosed with RA to establish whether self stigma, and specifically endorsement of negative stereotypes, is associated with self esteem and empowerment across these two groups. A total of 202 participants (psychiatric group n = 102; RA group n = 100) were interviewed using the Internalised Stigma of Mental Illness scale (ISMI), or the Internalized Stigma of Mental Illness scale- Rheumatoid Arthritis (ISMI-RA), the Index of Self Esteem (ISE) and the Mental Health Confidence Scale (MHCS). Overall, the psychiatric group had higher self stigma scores (2.5 vs. 2.2, p &lt; .01), lower self esteem (48.7 vs. 36.8, p &lt; .001) and lower empowerment scores (3.8 vs. 4.3, p &lt; .001) than the RA group. However, sizable proportions of both groups had high self stigma scores. ISMI/ISMI-RA was associated with the ISE and the MHCS. The stereotype endorsement subscale of the ISMI/ISMI-RA was not related to self esteem or empowerment in either group. Interventions that aim to decrease self stigma and increase self esteem could focus on alienation. abstract_id: PUBMED:3653877 Sjögren's syndrome-like illness associated with the acquired immunodeficiency syndrome-related complex. Three cases of Sjögren's syndrome-like illness occurring in patients with the acquired immunodeficiency syndrome (AIDS)-related complex (ARC) are described. All three patients were male. Positive serologic tests for the human immunodeficiency virus (HIV) were documented in two; the third patient was a prison inmate with a history of drug abuse. The lymphoid infiltrate seen in these cases resembled morphologically the features of persistent generalized lymphadenopathy. One patient complained of dry eyes and arthralgias. Autoimmune phenomena including lupus-like anticoagulant, immune thrombocytopenic purpura, and others have been reported in patients with AIDS and ARC. The occurrence of Sjögren's syndrome in ARC provides further evidence for autoimmune phenomena arising in the severely damaged immune system of ARC patients. Atypical Sjögren's syndrome now requires evaluation for ARC. Proposed criteria for identifying these patients are as follows: 1) young age (less than 40 years); 2) male sex (less than 10 per cent of non-ARC patients are male); 3) homosexuality or bisexuality, or other high-risk groups for AIDS; 4) generalized lymphadenopathy (also seen in rheumatoid arthritis); 5) negative test for rheumatoid factor despite generalized lymphadenopathy; 6) salivary gland lymphoid infiltrate showing features of persistent generalized lymphadenopathy. Patients with such features should be studied for HIV antibodies and other evidence of autoimmune phenomena in order to define more precisely the nature of this new Sjögren's-like illness. Answer: Yes, people with rheumatoid arthritis (RA) do develop illness-related schemas. A study aimed at assessing whether individuals with RA have schemas related to arthritis found that qualitative methods, item-level questionnaire data, and category-level recall data yielded convergent results supporting four final categories: mastery, support, loss of independence, and depression. Regression analyses indicated that the category of earlier recollections predicted subsequent recollections assessed via phone, offering at least partial support for arthritis schemas and suggesting that the concepts identified are meaningful to patients as well as to researchers (PUBMED:9313396). Another study examining the illness self-schemas construct in persons with RA found that RA-depressed subjects exhibited pervasively negative self-description and biased processing of negative illness-related information, while RA-nondepressed subjects demonstrated a bias for positive self-description and enhanced processing of positive illness-related information. The illness self-schema construct predicted unique variance in self-reported functional disability, indicating its significant heuristic value which could guide further research on the psychosocial adjustment of individuals with chronic illnesses (PUBMED:9212381). These findings suggest that people with RA do indeed develop cognitive frameworks or schemas that are related to their illness, which can influence their self-perception, information processing, and behavior in ways that are consistent with their emotional state, whether depressed or not.
Instruction: Do mixed infections matter? Abstracts: abstract_id: PUBMED:26446690 Age exacerbates HIV-associated white matter abnormalities. Both HIV disease and advanced age have been associated with alterations to cerebral white matter, as measured with white matter hyperintensities (WMH) on fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI), and more recently with diffusion tensor imaging (DTI). This study investigates the combined effects of age and HIV serostatus on WMH and DTI measures, as well as the relationships between these white matter measures, in 88 HIV seropositive (HIV+) and 49 seronegative (HIV-) individuals aged 23-79 years. A whole-brain volumetric measure of WMH was quantified from FLAIR images using a semi-automated process, while fractional anisotropy (FA) was calculated for 15 regions of a whole-brain white matter skeleton generated using tract-based spatial statistics (TBSS). An age by HIV interaction was found indicating a significant association between WMH and older age in HIV+ participants only. Similarly, significant age by HIV interactions were found indicating stronger associations between older age and decreased FA in the posterior limbs of the internal capsules, cerebral peduncles, and anterior corona radiata in HIV+ vs. HIV- participants. The interactive effects of HIV and age were stronger with respect to whole-brain WMH than for any of the FA measures. Among HIV+ participants, greater WMH and lower anterior corona radiata FA were associated with active hepatitis C virus infection, a history of AIDS, and higher current CD4 cell count. Results indicate that age exacerbates HIV-associated abnormalities of whole-brain WMH and fronto-subcortical white matter integrity. abstract_id: PUBMED:26689572 The impact of human immune deficiency virus and hepatitis C coinfection on white matter microstructural integrity. The purpose of the present study is to examine the integrity of white matter microstructure among individuals coinfected with HIV and HCV using diffusion tensor imaging (DTI). Twenty-five HIV+ patients, 21 HIV+/HCV+ patients, and 25 HIV- controls were included in this study. All HIV+ individuals were stable on combination antiretroviral therapy (cART; ≥3 months). All participants completed MRI and neuropsychological measures. Clinical variables including liver function, HIV-viral load, and CD4 count were collected from the patient groups. DTI metrics including mean diffusivity (MD), axial diffusivity (AD), radial diffusivity (RD), and fractional anisotropy (FA) from five subregions of the corpus callosum were compared across groups. The HIV+/HCV+ group and HIV+ group were similar in terms of HIV clinical variables. None of the participants met criteria for cirrhosis or fibrosis. Within the anterior corpus callosum, significant differences were observed between both HIV+ groups compared to HIV- controls on DTI measures. HIV+ and HIV+/HCV+ groups had significantly lower FA values and higher MD and RD values compared to HIV- controls; however, no differences were present between the HIV+ and HIV+/HCV+ groups. Duration of HIV infection was significantly related to DTI metrics in total corpus callosum FA only, but not other markers of HIV disease burden or neurocognitive function. Both HIV+ and HIV+/HCV+ individuals had significant alterations in white matter integrity within the corpus callosum; however, there was no evidence for an additive effect of HCV coinfection. The association between DTI metrics and duration of HIV infection suggests that HIV may continue to negatively impact white matter integrity even in well-controlled disease. abstract_id: PUBMED:33637630 Association of White Matter Hyperintensities With HIV Status and Vascular Risk Factors. Objective: To test the hypothesis that brain white matter hyperintensities (WMH) are more common in people living with HIV (PLWH), even in the setting of well-controlled infection, and to identify clinical measures that correlate with these abnormalities. Methods: Research brain MRI scans, acquired within longitudinal studies evaluating neurocognitive outcomes, were reviewed to determine WMH load using the Fazekas visual rating scale in PLWH with well-controlled infection (antiretroviral therapy for at least 1 year and plasma viral load &lt;200 copies/mL) and in sociodemographically matched controls without HIV (CWOH). The primary outcome measure of this cross-sectional analysis was increased WMH load, determined by total Fazekas score ≥2. Multiple logistic regression analysis was performed to evaluate the effect of HIV serostatus on WMH load and to identify MRI, CSF, and clinical variables that associate with WMH in the PLWH group. Results: The study included 203 PLWH and 58 CWOH who completed a brain MRI scan between April 2014 and March 2019. The multiple logistic regression analysis, with age and history of tobacco use as covariates, showed that the adjusted odds ratio of the PLWH group for increased WMH load is 3.7 (95% confidence interval 1.8-7.5; p = 0.0004). For the PLWH group, increased WMH load was associated with older age, male sex, tobacco use, hypertension, and hepatitis C virus coinfection, and also with the presence of measurable tumor necrosis factor α in CSF. Conclusion: Our results suggest that HIV serostatus affects the extent of brain WMH. This effect is mainly associated with aging and modifiable comorbidities. abstract_id: PUBMED:35115993 Characteristics of Brain White Matter Microstructure in HIV Male Patients With Primary Syphilis Co-Infection. Purpose: To investigate the effect of syphilis infection on the microstructure of white matter (WM) in HIV-infected male patients using diffusion tensor imaging (DTI). Methods: Twenty-seven HIV-infected male patients with current syphilis or a history of syphilis (HIV +/syphilis +), twenty-nine HIV-infected male patients without syphilis co-infection (HIV +/syphilis-), and twenty-nine healthy controls (HC) were enrolled. All participants received DTI, and all patients received comprehensive neuropsychological assessment. Tract-based spatial statistics (TBSS) was adopted to analyze the DTI measures: fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD). Correlation analysis was conducted to investigate the relationships between DTI measures and cognitive performance. Results: There were no significant differences in DTI measures between HIV+/syphilis- and HC. Compared with HC, lower FA was found in body of corpus callosum (BCC), splenium of corpus callosum (SCC), genu of corpus callosum (GCC), the bilateral anterior corona radiata (ACR), superior corona radiata (SCR), posterior corona radiata (PCR), and posterior thalamic radiation (PTR) in HIV+/syphilis+ (p &lt; 0.05). Higher RD was found in BCC and SCC (p &lt; 0.05). Compared with HIV+/syphilis-, lower scores were found in complex motor skills (CMS) in HIV+/syphilis+, lower FA was found in BCC, SCC, GCC, the bilateral ACR, SCR, PCR, PTR, cingulate gyrus (CGC), the right inferior fronto-occipital fasciculus (IFO), the retrolenticular part of internal capsule (RLIC), sagittal stratum (SS), external capsule (EC) in HIV+/syphilis+ (p &lt; 0.01). Correlation analysis uncorrected for multiple comparisons showed there was a positive correlation between FA in GCC and CMS, FA in BCC, and CMS in HIV+/syphilis+. Conclusions: Syphilis co-infection can have an additive or synergistic effect on the brain WM in HIV-infected subjects. HIV-infected patients without syphilis should be actively treated to avoid syphilis infection. abstract_id: PUBMED:23053772 Microbial keratitis following vegetative matter injury. The purpose of the present study was to analyze the microbiological profile of cases of keratitis following trauma with vegetative matter in a tertiary care center. A retrospective review of the medical records of 49 patients with keratitis following vegetative matter injury over a 3-month period was performed. All patients underwent corneal scraping for smears and inoculation onto various culture media. The microbiological profile was based on the smear and culture reports. For patients who were culture-negative, outcome after standard empirical antibacterial therapy as per hospital protocol was analyzed. Thirteen patients with corneal ulcers had fungal etiology, eight had bacterial etiology, and two had protozoal etiology, while 13 patients were polymicrobial and 13 were culture-negative. Polymicrobial infections were mainly bacterial (eight cases), and the remaining five cases had coexistent fungal and bacterial etiology. The treatment was directed to the specific organism and patients improved with medical or surgical therapy. Only a third of culture-negative cases showed fungal etiology on biopsy or histopathology after keratoplasty while a third showed improvement with therapy. Corneal infections following vegetative matter trauma show a varied etiological profile; however, bacterial and polymicrobial infections are more prevalent. Empirical anti-fungal therapy, as commonly practiced, must be avoided in cases with vegetative matter injury. abstract_id: PUBMED:25123171 Oxidative stress and reduced responsiveness of challenged circulating leukocytes following pulmonary instillation of metal-rich particulate matter in rats. Welding fume is an exposure that consists of a mixture of metal-rich particulate matter with gases (ozone, carbon monoxide) and/or vapors (VOCs). Data suggests that welders are immune compromised. Given the inability of pulmonary leukocytes to properly respond to a secondary infection in animal models, the question arose whether the dysfunction persisted systemically. Our aim was to evaluate the circulating leukocyte population in terms of cellular activation, presence of oxidative stress, and functionality after a secondary challenge, following welding fume exposure. Rats were intratracheally instilled (ITI) with PBS or 2 mg of welding fume collected from a stainless steel weld. Rats were sacrificed 4 and 24 h post-exposure and whole blood was collected. Whole blood was used for cellular differential counts, RNA isolation with subsequent microarray and Ingenuity Pathway Analysis, and secondary stimulation with LPS utilizing TruCulture technology. In addition, mononuclear cells were isolated 24 h post-exposure to measure oxidative stress by flow cytometry and confocal microscopy. Welding fume exposure had rapid effects on the circulating leukocyte population as identified by relative mRNA expression changes. Instillation of welding fume reduced inflammatory protein production of circulating leukocytes when challenged with the secondary stimulus LPS. The effects were not related to transcription, but were observed in conjunction with oxidative stress. These findings support previous studies of an inadequate pulmonary immune response following a metal-rich exposure and extend those findings showing leukocyte dysfunction occurs systemically. abstract_id: PUBMED:21965122 Clinical contributors to cerebral white matter integrity in HIV-infected individuals. HIV-infected people frequently exhibit brain dysfunction characterized by preferential damage to the cerebral white matter. Despite suppressed viral load and reconstituted immune function afforded by combination antiretroviral therapy (CART), brain dysfunction continues to be observed even in medically stable individuals. To provide insight into the etiology of HIV-associated brain dysfunction in the CART era, we examined the effects of HIV disease markers, antiretroviral treatment, hepatitis C (HCV) coinfection, and age on DTI measures of white matter integrity in a cohort of 85 individuals aged 23 to 65 years with chronic HIV infection. Fractional anisotropy and mean diffusivity were derived from 29 cerebral white matter regions, which were segmented on each individual brain using a high-resolution T1-weighted image and registered to diffusion images. Significant effects of clinical variables were found on white matter abnormalities in nearly all brain regions examined. Most notably, HCV coinfection and older age were associated with decreased anisotropy or increased diffusivity in the majority of brain regions. Individuals with higher current CD4 levels exhibited higher anisotropy in parietal lobe regions, while those undergoing antiretroviral treatment exhibited higher anisotropy in temporal lobe regions. The observed diffuse pattern of white matter injury suggests that future neuroimaging studies should employ methodologies that are not limited to circumscribed regions of interest. The current findings underline the multifactorial nature of HIV-associated brain dysfunction in the CART era, and the importance of examining the effects of HIV disease in the context of other comorbidities, in particular HCV coinfection and aging. abstract_id: PUBMED:18193498 HIV-associated alterations in normal-appearing white matter: a voxel-wise diffusion tensor imaging study. Objective: There are conflicting reports of adverse HIV-associated alterations in white matter integrity as measured by diffusion tensor imaging (DTI). We sought to address these conflicting reports by assessing, on a voxel-by-voxel basis, HIV-associated regional changes in radiologically defined normal-appearing white matter (NAWM) integrity using high-resolution DTI. Methods: 30 HIV-seropositive (SP) and 30 HIV-seronegative (SN) nondemented, community-dwelling participants underwent DTI to derive whole-brain measures of white matter integrity (fractional anisotropy [FA] and mean diffusivity [MD]). For each participant, the white matter T2 volume was thresholded to remove regions of abnormal signal, resulting in a NAWM mask, which was then applied to the FA and MD volumes to extract voxel-wise NAWM measures of white matter integrity. Voxel-wise group comparisons of FA and MD were conducted (P &lt; 0.005, extent threshold 5 voxels) while controlling for age and substance-abuse history. Results: There were no significant differences between the groups for demographic or cognitive performance variables. Summary whole-brain measures of FA and MD were equivalent between the SP and SN samples. Among the SP sample, history of substance abuse was associated with significantly increased whole-brain NAWM MD, and coinfection with hepatitis C virus (HCV) was associated with a trend for increased MD. Correlations between whole-brain NAWM FA and MD with cognitive performance measures were not significant. Regional analyses of DTI measures revealed variable differences in NAWM FA in the SP sample, with findings of both decreased and increased FA. Differences in NAWM MD were more consistent, with widespread increases noted in the SP sample compared to the SN sample. Eight of the 10 regions displaying significantly increased FA in the SP sample were also found to have significantly increased MD compared to the SN sample. Conclusions: Decreased white matter integrity is present even in radiologically defined NAWM in nondemented, community-dwelling patients with HIV. The decrease in NAWM integrity is best seen in increases in MD, a measure of generalized tissue breakdown. Indications of NAWM axonal integrity (FA) present a more complicated picture, with both decreased FA and increased FA in the SP sample. Our findings of variable HIV-associated FA changes in NAWM may account for previous conflicting reports of changes in DTI parameters in this population. The results of our study suggest that HIV infection contributes to variable changes in DTI values, reflecting both direct loss of axonal integrity and a loss of complexity to the underlying axonal matrix. abstract_id: PUBMED:28065414 High-level ambient particulate matter before influenza attack with increased incidence of Aspergillus antigenemia in Southern Taiwan, 2016. We found significant correlation between the incidence of severe influenza and Aspergillus antigenemia among medical intensive care unit patients for 7-month observation (coefficient γ=0.976, p&lt;0.001). High-level ambient pollution was noticed for 2 months before the epidemic, highlighting that influenza patients might coinfect with aspergillosis in the community. abstract_id: PUBMED:33080169 The Problem of Microbial Dark Matter in Neonatal Sepsis. Neonatal sepsis (NS) kills 750,000 infants every year. Effectively treating NS requires timely diagnosis and antimicrobial therapy matched to the causative pathogens, but most blood cultures for suspected NS do not recover a causative pathogen. We refer to these suspected but unidentified pathogens as microbial dark matter. Given these low culture recovery rates, many non-culture-based technologies are being explored to diagnose NS, including PCR, 16S amplicon sequencing, and whole metagenomic sequencing. However, few of these newer technologies are scalable or sustainable globally. To reduce worldwide deaths from NS, one possibility may be performing population-wide pathogen discovery. Because pathogen transmission patterns can vary across space and time, computational models can be built to predict the pathogens responsible for NS by region and season. This approach could help to optimally treat patients, decreasing deaths from NS and increasing antimicrobial stewardship until effective diagnostics that are scalable become available globally. Answer: Mixed infections can indeed matter, as they can complicate the clinical presentation, diagnosis, and treatment of diseases. For instance, in the context of HIV, co-infections with hepatitis C virus (HCV) or syphilis have been shown to exacerbate white matter abnormalities in the brain. HIV-infected individuals with HCV co-infection exhibited decreased anisotropy or increased diffusivity in many brain regions, suggesting an additive effect on white matter injury (PUBMED:21965122). Similarly, HIV-infected male patients with syphilis co-infection showed alterations in white matter microstructure compared to both healthy controls and HIV-infected patients without syphilis, indicating that syphilis co-infection can have an additive or synergistic effect on brain white matter (PUBMED:35115993). In the case of vegetative matter injury leading to keratitis, polymicrobial infections were observed, which required specific treatment directed at the identified organisms (PUBMED:23053772). This highlights the importance of identifying all pathogens involved in an infection to ensure appropriate treatment. Moreover, in the context of neonatal sepsis, the concept of "microbial dark matter" refers to suspected but unidentified pathogens that are not recovered in blood cultures. This poses a significant challenge for effective treatment and underscores the need for improved diagnostic methods (PUBMED:33080169). In environmental health, high levels of ambient particulate matter were correlated with an increased incidence of Aspergillus antigenemia among patients with severe influenza, suggesting that environmental factors can influence the risk of co-infections (PUBMED:28065414). Overall, these studies demonstrate that mixed infections can have significant clinical implications, affecting disease progression, treatment outcomes, and may require more complex management strategies.
Instruction: Muscle Viability Revisited: Are We Removing Normal Muscle? Abstracts: abstract_id: PUBMED:2053830 Normal and pathologic growth of arterial smooth muscle In addition to contractility, arterial smooth muscle cells, like all connective tissue cells, have a capacity to proliferate and synthesize extracellular matrix. Under different normal (development, aging) and abnormal circumstances (high blood pressure, atherosclerosis, balloon angioplasty), smooth muscle cells express these properties which characterize the dedifferentiated phenotype. This paper reviews the main mechanisms, modulations and possible therapeutic implications of dedifferentiation. abstract_id: PUBMED:26284438 Muscle Viability Revisited: Are We Removing Normal Muscle? A Critical Evaluation of Dogmatic Debridement. Purpose: Determination of muscle viability during debridement is a subjective process with significant consequences. Evaluating muscle color, consistency, contractility, and capacity to bleed (the 4 Cs) was established by a study performed half a century ago. This work reinvestigates the utility of the 4 Cs using current histopathologic techniques. Methods: After institutional review board approval, 36 biopsies were prospectively collected at a level-1 trauma center from 20 patients undergoing a debridement for open fracture (81%), compartment syndrome (11%), infection (5%), or crush injury (3%). Surgeons graded the biopsies using the 4 Cs, and provided their overall impression as healthy, borderline, or dead. Blinded pathological analysis was performed on each specimen. A correlation between the 4 Cs and surgeon impression with histopathological diagnosis was sought through a univariate statistical analysis. Results: The surgeon's impression was dead muscle in 25 specimens, borderline in 10, and healthy in 1. Pathological analysis of the 35 specimens considered as dead or borderline muscle by the surgeon demonstrated normal muscle or mild interstitial inflammation in 21 specimens (60%). Color (P = 0.07), consistency (P = 0.12), contractility (P = 0.51), capacity to bleed (P = 0.07), and surgeon impression (P = 0.50) were unable to predict histologic appearance. Conclusions: Neither the 4 Cs nor the surgeon's impression correlate with histological findings regarding muscle viability. In 72% of specimens, the treating surgeon's gross assessment differed from the histopathologic appearance. Although the fate of the debrided muscle remains unclear if left in situ, these results raise questions regarding current practices, including the possibility that surgeons are debriding potentially viable muscle. Level Of Evidence: Diagnostic Level II. See Instructions for Authors for a complete description of levels of evidence. abstract_id: PUBMED:28790931 Microtissues Enhance Smooth Muscle Differentiation and Cell Viability of hADSCs for Three Dimensional Bioprinting. Smooth muscle differentiated human adipose derived stem cells (hADSCs) provide a crucial stem cell source for urinary tissue engineering, but the induction of hADSCs for smooth muscle differentiation still has several issues to overcome, including a relatively long induction time and equipment dependence, which limits access to abundant stem cells within a short period of time for further application. Three-dimensional (3D) bioprinting holds great promise in regenerative medicine due to its controllable construction of a designed 3D structure. When evenly mixed with bioink, stem cells can be spatially distributed within a bioprinted 3D structure, thus avoiding drawbacks such as, stem cell detachment in a conventional cell-scaffold strategy. Notwithstanding the advantages mentioned above, cell viability is often compromised during 3D bioprinting, which is often due to pressure during the bioprinting process. The objective of our study was to improve the efficiency of hADSC smooth muscle differentiation and cell viability of a 3D bioprinted structure. Here, we employed the hanging-drop method to generate hADSC microtissues in a smooth muscle inductive medium containing human transforming growth factor β1 and bioprinted the induced microtissues onto a 3D structure. After 3 days of smooth muscle induction, the expression of α-smooth muscle actin and smoothelin was higher in microtissues than in their counterpart monolayer cultured hADSCs, as confirmed by immunofluorescence and western blotting analysis. The semi-quantitative assay showed that the expression of α-smooth muscle actin (α-SMA) was 0.218 ± 0.077 in MTs and 0.082 ± 0.007 in Controls; smoothelin expression was 0.319 ± 0.02 in MTs and 0.178 ± 0.06 in Controls. Induced MTs maintained their phenotype after the bioprinting process. Live/dead and cell count kit 8 assays showed that cell viability and cell proliferation in the 3D structure printed with microtissues were higher at all time points compared to the conventional single-cell bioprinting strategy (mean cell viability was 88.16 ± 3.98 vs. 61.76 ± 15% for microtissues and single-cells, respectively). These results provide a novel way to enhance the smooth muscle differentiation of hADSCs and a simple method to maintain better cell viability in 3D bioprinting. abstract_id: PUBMED:25580404 Zinc Restored the Decreased Vascular Smooth Muscle Cell Viability under Atherosclerotic Calcification Conditions. Zinc is considered to be involved in maintaining healthy vascular condition. Atherosclerotic calcification of vascular smooth muscle cells (VSMCs) occurs via the mechanism of cell death; therefore, cell viability is a critical factor for preventing VSMC calcification. In this study, we tested whether zinc affected VSMC viability under both normal physiological non-calcifying (0 mM P) and atherosclerotic calcifying conditions (3 and 5 mM P), since VSMC physiological characters change during the VSMC calcification process. The study results showed that an optimal zinc level (15 μM) restored the decreased VSMC viability which was induced under low zinc levels (0 and 1 μM) and calcifying conditions (3 and 5 mM P) at 9 and 15 days culture. This zinc-protecting effect for VSMC viability is more prominent under atherosclerotic calcifying condition (3 and 5 mM P) than normal condition (0 mM P). Also, the increased VSMC viability was consistent with the decreased Ca and P accumulation in VSMC cell layers. The results suggested that zinc could be an effective biomineral for preventing VSMC calcification under atherosclerotic calcifying conditions. abstract_id: PUBMED:1412131 Physiology of the digestive smooth muscle By comparison with striated muscle, the main features of digestive smooth muscle are richness, complexity and diversity specially with thanks to the enteric and prevertebral nervous neurons which add up their effects to those of the central nervous system. The morphological and molecular characteristics of the smooth muscle contractility begin to be unveiled, but a lot of basic knowledge is still needed for further pharmacological advances. abstract_id: PUBMED:10522302 Muscle, fatigue, sports and infection Purpose: Skeletal muscle can be considered as motors which convert chemical energy into mechanical energy. We can evaluate the intracellular pH and energy state of phosphate-containing metabolites in skeletal muscle of patients complaining fatigue or asthenia, using phosphorus MRS. Main Points: Acute infectious disease and extreme endurance exercise may induce a loss of oxidative capacity of muscle tissue. Muscle fatigue is not due only to an insufficient supply of ATP to the energy consuming mechanisms. Phosphorus MRS show a muscle production of toxic metabolites such as lactates, protons and ammonia. These metabolic features induced excessive intracellular acidosis of skeletal muscle and systemic hyperammonia, responsible of fatigue and asthenia. Perspectives: Reversal of the excessive acidosis and improvement of the capacity for oxidative ATP synthesis might help to relieve the symptoms of exhaustion/fatigue in these patients. abstract_id: PUBMED:24454750 Muscle fiber viability, a novel method for the fast detection of ischemic muscle injury in rats. Acute lower extremity ischemia is a limb- and life-threatening clinical problem. Rapid detection of the degree of injury is crucial, however at present there are no exact diagnostic tests available to achieve this purpose. Our goal was to examine a novel technique - which has the potential to accurately assess the degree of ischemic muscle injury within a short period of time - in a clinically relevant rodent model. Male Wistar rats were exposed to 4, 6, 8 and 9 hours of bilateral lower limb ischemia induced by the occlusion of the infrarenal aorta. Additional animals underwent 8 and 9 hours of ischemia followed by 2 hours of reperfusion to examine the effects of revascularization. Muscle samples were collected from the left anterior tibial muscle for viability assessment. The degree of muscle damage (muscle fiber viability) was assessed by morphometric evaluation of NADH-tetrazolium reductase reaction on frozen sections. Right hind limbs were perfusion-fixed with paraformaldehyde and glutaraldehyde for light and electron microscopic examinations. Muscle fiber viability decreased progressively over the time of ischemia, with significant differences found between the consecutive times. High correlation was detected between the length of ischemia and the values of muscle fiber viability. After reperfusion, viability showed significant reduction in the 8-hour-ischemia and 2-hour-reperfusion group compared to the 8-hour-ischemia-only group, and decreased further after 9 hours of ischemia and 2 hours of reperfusion. Light- and electron microscopic findings correlated strongly with the values of muscle fiber viability: lesser viability values represented higher degree of ultrastructural injury while similar viability results corresponded to similar morphological injury. Muscle fiber viability was capable of accurately determining the degree of muscle injury in our rat model. Our method might therefore be useful in clinical settings in the diagnostics of acute ischemic muscle injury. abstract_id: PUBMED:34915930 ViaFuse: Fiji macros to calculate skeletal muscle cell viability and fusion index. Background: Measuring biological features of skeletal muscle cells is difficult because of their unique morphology and multinucleate nature upon differentiation. Here, we developed a new Fiji macro package called ViaFuse (that stands for viability and fusion) to measure skeletal muscle cell viability and differentiation. To test ViaFuse, we utilized immunofluorescence images of differentiated myotubes where the capping actin protein of muscle z-line subunit beta (CAPZB) was depleted in comparison with control cells. Results: We compared the values achieved using the ViaFuse macros first with manual quantification performed by researchers and second with those obtained utilizing the MATLAB muscle-centric software MyoCount. We observed a high degree of correlation between all methods of quantification. Conclusions: ViaFuse can detect the borders of myotubes and identify nuclear clumps which have been limitations of previous muscle-centric imaging software. The ViaFuse macros require little computer power or space to run and user inputs to the ViaFuse macros are minimal, thereby automating the analysis process in a quick, easy, and accurate fashion. Additionally, the ViaFuse macros work with Fiji, an existing imaging software widely used by skeletal muscle researchers. Furthermore, ViaFuse is compatible with many computer systems, has a very intuitive interface, and does not require prior complex mathematical knowledge. Therefore, we propose ViaFuse as a robust and meticulous method to quantify skeletal muscle cell viability and differentiation. abstract_id: PUBMED:33427629 The muscle biological clock The biological clock plays an essential role in the control of muscle activity, by dissociating temporally the metabolic functions of skeletal muscle. Exercise capacity also displays a circadian rhythm. Alterations in biological rhythm, as in shift workers, alter muscle function and are associated with the development of sarcopenia. abstract_id: PUBMED:35743147 The Effect of High and Variable Glucose on the Viability of Endothelial Cells Co-Cultured with Smooth Muscle Cells. Diabetes mellitus causes endothelial dysfunction. The aim of this study was to investigate the effect of normal (5 mmol/L), high (20 mmol/L), and fluctuating (5 and 20 mmol/L changed every day) glucose concentration in the culture medium on the viability of human umbilical vein endothelial cells (HUVECs) co-cultured with human umbilical artery smooth muscle cells (HUASMCs). The cultures were conducted on semi-permeable flat polysulfone (PSU) fibronectin-coated membranes immobilized in self-made inserts. The insert contained either HUVECs on a single membrane or HUASMCs and HUVECs on two membranes close to each other. Cultures were conducted for 7 or 14 days. Apoptosis, mitochondrial potential, and the production of reactive oxygen species and lactate by HUVECs were investigated. The results indicate that fluctuations in glucose concentration have a stronger negative effect on HUVECs viability than constant high glucose concentration. High and fluctuating glucose concentrations slow down cell proliferation compared to the culture carried out in the medium with normal glucose concentration. In conclusion, HUASMCs affect the viability of HUVECs when both types of cells are co-cultured in medium with normal or variable glucose concentration. Answer: The study "Muscle Viability Revisited: Are We Removing Normal Muscle? A Critical Evaluation of Dogmatic Debridement" (PUBMED:26284438) critically evaluates the traditional method of determining muscle viability during debridement, which is based on the assessment of muscle color, consistency, contractility, and capacity to bleed (the 4 Cs). The study's purpose was to investigate the utility of the 4 Cs using current histopathologic techniques. The results showed that neither the 4 Cs nor the surgeon's impression correlated with histological findings regarding muscle viability. In fact, in 72% of specimens, the treating surgeon's gross assessment differed from the histopathologic appearance, suggesting that potentially viable muscle may be debrided based on these traditional criteria. This raises questions about current debridement practices and the possibility that surgeons might be removing normal muscle that could otherwise be viable.
Instruction: Treatment 'mismatch' in early prostate cancer: do treatment choices take patient quality of life into account? Abstracts: abstract_id: PUBMED:18040996 Treatment 'mismatch' in early prostate cancer: do treatment choices take patient quality of life into account? Background: Pretreatment urinary, bowel, and sexual dysfunction may increase the toxicity of prostate cancer treatments or preclude potential benefits. Using patient-reported baseline dysfunction from a prospective cohort study, we determined the proportion of patients receiving relatively contraindicated ('mismatched') treatments. Methods: Baseline obstructive uropathy and bowel dysfunction relatively contraindicate brachytherapy (BT) and external beam radiation therapy (EBRT), respectively, because they increase patients' vulnerability to treatment-related toxicity. Baseline sexual dysfunction renders moot the intended benefit of nerve-sparing radical prostatectomy (NSRP), which is to preserve sexual function. We categorized patients' clinical circumstances by increasing complexity and counted the mismatches in each, expecting weaker or multiple contraindications to increase mismatched treatments. Results: Of 438 eligible patients, 389 (89%) reported preexisting dysfunction, and more than one-third received mismatched treatments. Mismatches did not significantly increase with clinical complexity, and watchful waiting was very infrequent, even when all treatment options were contraindicated. Patient age and comorbidity, but not preexisting dysfunction, were associated with treatment choice. As expected, mismatched BT and EBRT led to worsened urinary and bowel symptoms, respectively, and NSRP did not improve outcomes after baseline sexual dysfunction. Conclusions: Pretreatment dysfunction does not appear to reliably influence treatment choices, and patients receiving mismatched treatments had worse outcomes. Further study is needed to determine why mismatched treatments were chosen, including the role of incomplete patient-physician communication of baseline dysfunction, and whether using a validated questionnaire before treatment decision-making would bypass this difficulty. Treatment mismatch may be a useful outcome indicator of the quality of patient-centered decisions. abstract_id: PUBMED:18704993 Predictors of patient preferences and treatment choices for localized prostate cancer. Background: Little is known regarding how patients select treatment for localized prostate cancer. This study examined determinants of patients' preferences for health states related to prostate cancer, and assessed whether preferences and/or other factors predict treatment choices. Methods: A survey of 167 patients with newly diagnosed localized prostate cancer was conducted in 4 academic medical practices from 2004 to 2007. The authors assessed demographic and health factors, and used a time-tradeoff method to elicit preferences in the form of quality-adjusted life years (QALYs) regarding health states related to prostate cancer. Linear regressions identified predictors of preferences (in QALYs) for erectile dysfunction (ED), urinary incontinence, rectal/bowel symptoms, and metastatic prostate cancer. Linear probability models identified predictors of treatment choice. Results: Patient preferences were affected by a range of behavioral, demographic, and health factors. For example, sexually active men reported significantly lower QALYs for living with ED, and men with family members who died of cancer reported lower QALYs for metastatic disease. The strongest predictor of treatment was the type of physician seen (radiation oncology vs urology) at the time of the survey. Age and tumor grade also were found to be strongly predictive of treatment. In general, QALYs were not found to predict treatment choice. Conclusions: Patient preferences, as reported in QALYs, are shaped by reasonable behavioral and demographic influences. However, actual treatment choices appear to bear little relation to these patient preferences, and instead demonstrate a strong association with clinician specialty. More attention to variation in preferences among patients, as well as the use of decision-support technologies, may enable physicians to facilitate more optimal individualized treatment choices for patients with prostate cancer. abstract_id: PUBMED:28560840 Long-term quality of life after definitive treatment for prostate cancer: patient-reported outcomes in the second posttreatment decade. Definitive treatment for prostate cancer includes radical prostatectomy (RP), external beam radiation therapy (EBRT), and brachytherapy (BT). The different side effect profiles of these options are crucial factors for patients and clinicians when deciding between treatments. This study reports long-term health-related quality of life (HRQOL) for patients in their second decade after treatment for prostate cancer. We used a validated survey to assess urinary, bowel, and sexual function and HRQOL in a prospective cohort of patients diagnosed with localized prostate cancer 14-18 years previously. We report and compare the outcomes of patients who were initially treated with RP, EBRT, or BT. Of 230 eligible patients, the response rate was 92% (n = 211) and median follow-up was 14.6 years. Compared to baseline, RP patients had significantly worse urinary incontinence and sexual function, EBRT patients had worse scores in all domains, and BT patients had worse urinary incontinence, urinary irritation/obstruction, and sexual function. When comparing treatment groups, RP patients underwent larger declines in urinary continence than did BT patients, and EBRT and BT patients experienced larger changes in urinary irritation/obstruction. Baseline functional status was significantly associated with long-term function for urinary obstruction and bowel function domains. This is one of the few prospective reports on quality of life for prostate cancer patients beyond 10 years, and adds information about the late consequences of treatment choices. These data may help patients make informed decisions regarding treatment choice based on symptoms they may experience in the decades ahead. abstract_id: PUBMED:15812351 Quality of life survey in patients with advanced prostate cancer. Prostate cancer is a major health concern for Western patients who are facing important decisions, alongside with the physicians, concerning their treatment. Additionally, the physician has to take into account the potential side effects of the disease and the treatment itself. Treatment for advanced prostate cancer produces problematic physical and psychosocial side effects. The patient should be asked by the physician for information concerning daily life activities, overall satisfaction voiding ability, and sexual activity in a standardized questionnaire assessing the quality of life (QoL) of the patient. Many such questionnaires and studies have been evaluated or are still in progress. Modern QoL questionnaires are considered to be more than a necessity nowadays for the improvement of the treatment administered from the physician and the performance and functional status of the patient. abstract_id: PUBMED:31411965 Patient-reported Outcomes Following Treatment of Localised Prostate Cancer and Their Association with Regret About Treatment Choices. Background: Well-documented reports of patients' experiences with different treatments are important for helping localised prostate cancer (LPC) patients choose among the available treatment options. Objective: To document differences in patient-reported outcomes (PROs) following radical prostatectomy (RP), external beam radiotherapy (EBRT), brachytherapy (BT), and active surveillance (AS), and to evaluate how these PROs and other factors are associated with treatment decision regret. Design, Setting, And Participants: A prospective, observational, multicentre study of men diagnosed with LPC (stage cT1-2) during 2014-2016. Outcome Measurements And Statistical Analysis: Patients completed validated PRO measures (Quality of Life Questionnaire Core 30 [QLQ-C30], Quality of Life Questionnaire prostate cancer-specific module [QLQ-PR25], Decision Regret Scale, and the Memorial Anxiety Scale for Prostate Cancer) before treatment and at 3, 6, and 12mo after treatment. Mixed-effect models were used to describe different PRO patterns. Results And Limitations: The analytic cohort included 434 men (AS=32%; RP=45%; EBRT=12%; BT=10%). Follow-up response rates were above 90%. At 1-yr follow-up, (1) men who had received RP reported significantly (p&lt;0.01) more urinary incontinence, sexual dysfunction, hormonal/masculinity-related symptoms, and less emotional distress; (2) those having received EBRT reported more sexual dysfunction, hormonal/masculinity-related symptoms, and physical distress; and (3) those having received BT reported more urinary obstruction and irritation symptoms, compared with patients under AS. Irrespective of the treatment modality, 23% of the patients reported clinically relevant treatment regret (99% confidence interval, 17-28%). Multivariate correlates of decision regret were hormonal/masculinity-related symptoms, educational level, and positive surgical margins. Conclusions: Post-treatment physical and psychosocial functioning was significantly associated with specific treatment modalities and pretreatment functioning. Regret was relatively frequently reported by patients who experienced unwanted physical, psychosocial, and oncological outcomes. Greater efforts should be made to understand whether carefully educating patients about the possible consequences and effectiveness of treatments may help limit the feeling of treatment regret. Patient Summary: In men with localised prostate cancer, regret about the treatment choice was more common among those who experienced more treatment-related symptoms during the year after treatment. abstract_id: PUBMED:27219634 Impact of Adherence to Quality Measures for Localized Prostate Cancer on Patient-reported Health-related Quality of Life Outcomes, Patient Satisfaction, and Treatment-related Complications. Background And Objective: Quality measures used in pay-for-performance systems are intended to address specific quality goals, such as safety, efficiency, effectiveness, timeliness, equity, and patient-centeredness. Given the small number of narrowly focused measures in prostate cancer care, we sought to determine whether adherence to any of the available payer-driven quality measures influences patient-centered outcomes, including health-related quality of life (HRQOL), patient satisfaction, and treatment-related complications. Methods: The Comparative Effectiveness Analysis of Surgery and Radiation study is a population-based, prospective cohort study that enrolled 3708 men with clinically localized prostate cancer during 2011 and 2012, of whom 2601 completed the 1-year survey and underwent complete chart abstraction. Compliance with 6 quality indicators endorsed by national consortia was assessed. Multivariable regression was used to determine the relationship between indicator compliance and Expanded Prostate Cancer Index Composite (EPIC-26) instrument summary scores, satisfaction scale scores (service satisfaction scale for cancer care), and treatment-related complications. Results: Overall rates of compliance with these quality measures ranged between 64% and 88%. Three of the 6 measures were weakly associated with 1-year sexual function and bowel function scores (β=-4.6, 1.69, and 2.93, respectively; P≤0.05), whereas the remaining measures had no significant relationship with patient-reported HRQOL outcomes. Satisfaction scores and treatment-related complications were not associated with quality measure compliance. Conclusions: Compliance with available nationally endorsed quality indicators, which were designed to incentivize effective and efficient care, was not associated with clinically important changes in patient-centered outcomes (HRQOL, satisfaction, or complications) within 1-year. abstract_id: PUBMED:38191022 Evaluating Policies of Expanding Versus Restricting First-Line Treatment Choices: A Cost-Effectiveness Analysis Framework. Objectives: Healthcare payers often implement coverage policies that restrict the utilization of costly new first-line treatments. Cost-effectiveness analysis can be conducted to inform these decisions by comparing the new treatment with an existing one. However, this approach may overlook important factors such as treatment effect heterogeneity and endogenous treatment selection, policy implementation costs, and diverse patient preferences across multiple treatment options. We aimed to develop a cost-effectiveness analysis framework that considers these real-world factors, facilitating the evaluation of alternative policies related to expanding or restricting first-line treatment choices. Methods: We introduced a metric of incremental cost-effectiveness ratio (ICER) that compares an expanded choice set (CS) including the new first-line treatment with a restricted CS excluding the new treatment. ICER(CS) accounts for treatment selection influenced by heterogeneous treatment effects and policy implementation costs. We examined a basic scenario with 2 standard first-line treatment choices and a more realistic scenario involving diverse preferences toward multiple choices. To illustrate the framework, we conducted a retrospective evaluation of including versus excluding abiraterone acetate plus prednisone (AAP) (androgen deprivation therapy [ADT] + AAP) as a first-line treatment for metastatic hormone-sensitive prostate cancer. Results: The traditional ICERs for ADT + AAP versus ADT alone and ADT+ docetaxel were $104 269 and $206 324/quality-adjusted life-year, respectively. The ICER(CS) for comparing an expanded CS with ADT + AAP with a restricted CS without ADT + AAP was $123 179/quality-adjusted life-year. Conclusions: The proposed framework provides decision makers with policy-relevant tools, enabling them to assess the cost-effectiveness of alternative policies of expanding versus restricting patients' and physicians' first-line treatment choices. abstract_id: PUBMED:30612936 Patient-reported Outcomes in Metastatic Castration-sensitive Prostate Cancer in the Adjuvant Setting. We review the importance of quality of life (QOL) data from patient-reported outcome measures (PROMs) among men treated with androgen deprivation therapy (ADT) for metastatic castration-sensitive prostate cancer (mCSPC) or localized prostate cancer treated with adjuvant therapy. This information is important for patients as they make treatment choices and for regulatory agencies approving drug therapies. Studies of treatments for mCSPC suggest that the improvements in survival associated with more intensive systemic treatment are accompanied by improvements in QOL. ADT prolongs survival among men with intermediate- or high-risk localized disease in combination with radiation, but the optimal duration is still being defined. For men with biochemical recurrence, starting ADT earlier rather than later had minimal adverse effects on QOL but may not prolong survival. We conclude that rigorous assessment of QOL with validated PROMs must be a priority for clinical trials of novel and more intensive approaches to treatment with ADT. PATIENT SUMMARY: Data on quality of life that are collected using patient-reported outcome measures are important for patients with prostate cancer as they make treatment choices and for regulatory agencies approving drug therapies. abstract_id: PUBMED:12642193 Reacting to the diagnosis of prostate cancer: patient learning in a community of practice. The diagnosis of prostate cancer brings with it a number of emotional issues for the patient. These may include: fear of cancer; confronting one's own mortality; quality of life issues such as incontinence and impotence; and the question "why me?" In addition, there is burden of coping with the emotional responses from friends and loved ones which, arising from their concern, can range from fear to seeking to take charge. Added to this is the need to make choices regarding treatment and obtaining information about these. This article is based on a contribution to a symposium on prostate cancer at the Fifth International Conference on Psycho-Oncology and describes a personal experience of confronting these issues. The concept of communities of practice is used to interpret the learning experiences undertaken by the cancer patient. abstract_id: PUBMED:12861739 Brachytherapy for prostate cancer: the patient's perspective. Prostate cancer is the most common malignancy, and the second most common cause of cancer among men in the United States. The treatment of localized prostate cancer is highly controversial, with many treatment options. Brachytherapy is the permanent implantation of radioisotopes directly into the tumor. Impotence, urinary and bowel dysfunction, as well as disease progression have frequently been utilized as disease-specific measurements of quality of life (QoL) among those with localized prostate cancer. The purpose of this qualitative study was to explore the impact brachytherapy has on QoL from the patient's perspective. Themes emerged from the data that surrounded the physical changes attributed to the treatment, treatment choices, knowledge of the disease, receiving the diagnosis, and use of medications to control symptoms post treatment. The findings from this study can be used for patient education and to assist patients in making appropriate treatment choices. Answer: The concept of treatment 'mismatch' in early prostate cancer refers to the situation where the chosen treatment may not align with the patient's preexisting conditions or quality of life considerations. Research indicates that treatment choices for early prostate cancer often do not adequately take patient quality of life into account. For instance, a study found that more than one-third of patients with preexisting dysfunction received treatments that were relatively contraindicated, leading to worsened outcomes. This suggests that pretreatment dysfunction does not reliably influence treatment choices, and mismatched treatments can result in poorer quality of life for patients (PUBMED:18040996). Another study highlighted that patient preferences, as measured by quality-adjusted life years (QALYs), are influenced by various factors such as sexual activity and family history of cancer. However, these preferences did not significantly predict the actual treatment choices, which were more strongly associated with the type of physician consulted and the patient's age and tumor grade. This indicates a disconnect between patient preferences related to quality of life and the treatments they ultimately receive (PUBMED:18704993). Long-term quality of life outcomes also vary significantly based on the type of treatment received. Radical prostatectomy, external beam radiation therapy, and brachytherapy all have different side effect profiles that can impact urinary, bowel, and sexual function for years after treatment. Understanding these long-term effects is crucial for patients to make informed decisions that align with their quality of life goals (PUBMED:28560840). Furthermore, patient-reported outcomes following treatment show that physical and psychosocial functioning are significantly associated with specific treatment modalities and pretreatment functioning. A considerable proportion of patients expressed regret about their treatment choices, especially when they experienced unwanted physical and psychosocial outcomes (PUBMED:31411965). In summary, while quality of life is a critical factor for patients facing treatment decisions for early prostate cancer, there is evidence that current treatment choices may not consistently reflect these considerations. This underscores the need for improved patient-physician communication and decision-support tools to ensure that treatment decisions are more patient-centered and aligned with individual quality of life preferences (PUBMED:18040996; PUBMED:18704993; PUBMED:28560840; PUBMED:31411965).
Instruction: Should treatment of (sub)acute low back pain be aimed at psychosocial prognostic factors? Abstracts: abstract_id: PUBMED:15967762 Should treatment of (sub)acute low back pain be aimed at psychosocial prognostic factors? Cluster randomised clinical trial in general practice. Objective: To compare the effects of a minimal intervention strategy aimed at assessment and modification of psychosocial prognostic factors and usual care for treatment of (sub)acute low back pain in general practice. Design: Cluster randomised clinical trial. Setting: 60 general practitioners in 41 general practices. Participants: 314 patients with non-specific low back pain of less than 12 weeks' duration, recruited by their general practitioner. Interventions: In the minimal intervention strategy group the general practitioner explored the presence of psychosocial prognostic factors, discussed these factors, set specific goals for reactivation, and provided an educational booklet. The consultation took about 20 minutes. Usual care was not standardised. Main Outcome Measures: Functional disability (Roland-Morris disability questionnaire), perceived recovery, and sick leave because of low back pain assessed at baseline and after 6, 13, 26, and 52 weeks. Results: The dropout rate was 8% in the minimal intervention strategy group and 9% in the usual care group. Multilevel analyses showed no significant differences between the groups on any outcome measure during 12 months of follow-up in the whole group or in relevant subgroups (patients with high scores on psychosocial measures at baseline or a history of frequent or prolonged low back pain). Conclusion: This study provides no evidence that (Dutch) general practitioners should adopt our new treatment strategy aimed at psychosocial prognostic factors in patients with (sub)acute low back pain. Further research should examine why our new strategy was not more effective than usual care. abstract_id: PUBMED:16289797 Why is a treatment aimed at psychosocial factors not effective in patients with (sub)acute low back pain? Psychosocial factors have been shown to play an important role in the development of chronic low back pain (LBP). In our recently completed cluster-randomized trial we found, however, no evidence of an effect of our minimal intervention strategy (MIS) aimed at psychosocial factors, over usual care (UC) in patients with (sub)acute LBP. To explore the reasons why, this paper presents an evaluation of the processes presumably underlying the effectiveness of MIS. General practitioner (GP) attitude was evaluated by the Pain Attitudes and Beliefs Scale and two additional questions. GP behaviour was evaluated by analysing treatment registration forms and patients' responses to items regarding treatment content. Patients also scored items on satisfaction and compliance. Modification of psychosocial measures was evaluated by analysing changes after 6 and 52 weeks on the Fear Avoidance and Beliefs Questionnaire, the Coping Strategies Questionnaire and the 4-Dimensional Symptom Questionnaire. A total of 60 GPs and 314 patients participated in the study. GPs in the MIS-group adopted a less biomedical orientated attitude than in the UC-group, but were only moderately successful in identification of psychosocial factors. Treatment contents as perceived by the patient and patient satisfaction differed significantly between both groups. Changes on psychosocial measures, however, did not differ between groups. The suboptimal identification of psychosocial factors in the MIS-group and the absence of a relevant impact on psychosocial factors may explain why MIS was not more effective than UC. abstract_id: PUBMED:34033963 Lack of Consensus Across Clinical Guidelines Regarding the Role of Psychosocial Factors Within Low Back Pain Care: A Systematic Review. It is widely accepted that psychosocial prognostic factors should be addressed by clinicians in their assessment and management of patient suffering from low back pain (LBP). On the other hand, an overview is missing how these factors are addressed in clinical LBP guidelines. Therefore, our objective was to summarize and compare recommendations regarding the assessment and management of psychosocial prognostic factors for LBP chronicity, as reported in clinical LBP guidelines. We performed a systematic search of clinical LBP guidelines (PROSPERO registration number 154730). This search consisted of a combination of previously published systematic review articles and a new systematic search in medical or guideline-related databases. From the included guidelines, we extracted recommendations regarding the assessment and management of LBP which addressed psychosocial prognostic factors (ie, psychological factors ["yellow flags"], perceptions about the relationship between work and health, ["blue flags"], system or contextual obstacles ["black flags") and psychiatric symptoms ["orange flags"]). In addition, we evaluated the level or quality of evidence of these recommendations. In total, we included 15 guidelines. Psychosocial prognostic factors were addressed in 13 of 15 guidelines regarding their assessment and in 14 of 15 guidelines regarding their management. Recommendations addressing psychosocial factors almost exclusively concerned "yellow" or "black flags," and varied widely across guidelines. The supporting evidence was generally of very low quality. We conclude that in general, clinical LBP guidelines do not provide clinicians with clear instructions about how to incorporate psychosocial factors in LBP care and should be optimized in this respect. More specifically, clinical guidelines vary widely in whether and how they address psychosocial factors, and recommendations regarding these factors generally require better evidence support. This emphasizes a need for a stronger evidence-base underlying the role of psychosocial risk factors within LBP care, and a need for uniformity in methodology and terminology across guidelines. PERSPECTIVE: This systematic review summarized clinical guidelines on low back pain (LBP) on how they addressed the identification and management of psychosocial factors. This review revealed a large amount of variety across guidelines in whether and how psychosocial factors were addressed. Moreover, recommendations generally lacked details and were based on low quality evidence. abstract_id: PUBMED:37298060 Consideration of Psychosocial Factors in Acute Low Back Pain by Physical Therapists. Clinical guidelines consistently recommend screening psychosocial (PS) factors in patients with low back pain (LBP), regardless of its mechanical nature, as recognized contributors to pain chronicity. However, the ability of physiotherapists (PTs) in identifying these factors remains controversial. This study aimed to assess the current identification of psychosocial risk factors by physical therapists (PTs) and which characteristics of PTs are associated with the identification of the main risk for chronicity (physical or psychosocial). A cross-sectional descriptive study surveying Spanish PTs in public and private health services was conducted, including questions on PT characteristics and three low back pain (LBP) patient vignettes with different biopsychosocial (BPS) clinical presentations. From 484 respondents, the majority of PTs agreed regarding the main risk for chronicity for each vignette (PS 95.7% for vignette A, PS and physical 83.5% for vignette B and PS 66% for vignette C). Female PTs were more likely to rate psychosocial compared with males (p &lt; 0.05). PTs with higher levels of social and emotional intelligence (both, p &lt; 0.05) were more likely to identify the main risk for chronicity. However, only gender and social information processing for vignette A (p = 0.024) and emotional clarity for vignette B (p = 0.006) were able to predict the identification of psychosocial and physical risk, respectively. The main risk for chronicity was correctly identified by a large majority of PTs through patient vignettes. Gender, social and emotional intelligence played a relevant role in the recognition of psychosocial risk and biopsychosocial factors. abstract_id: PUBMED:21629595 Psychosocial treatment techniques to augment the impact of physiotherapy interventions for low back pain. Purpose: The present study examined the profile of physical and psychosocial changes that occur in physiotherapy intervention when patients also participate in a psychosocial intervention. The psychosocial intervention, delivered by physiotherapists, was designed to target catastrophic thinking, fear of pain, perceived disability, and depression. Methods: The study sample consisted of 48 individuals referred for the rehabilitation treatment of disabling back pain. Half the sample was enrolled in a physiotherapy intervention only; the other half was enrolled in a psychosocial intervention in addition to receiving a physiotherapy intervention. Results: At post-treatment, the two treatment groups did not differ significantly on measures of pain severity, physical function, or self-reported disability. Patients who participated in the psychosocial intervention in addition to physiotherapy showed significantly greater reductions in pain catastrophizing, fear of movement, and depression than patients who received only the physiotherapy intervention. Reductions in psychosocial risk factors contributed to reduced use of the health care system, reduced use of pain medication, and improved return-to-work outcomes. Conclusions: The findings of the present study suggest that a psychosocial intervention provided by physiotherapists can lead to meaningful reductions in psychosocial risk factors for pain and disability and may contribute to more positive rehabilitation outcomes. abstract_id: PUBMED:26501062 Psychosocial Risk Factors, Interventions, and Comorbidity in Patients with Non-Specific Low Back Pain in Primary Care: Need for Comprehensive and Patient-Centered Care. Non-specific low back pain (LBP) affects many people and has major socio-economic consequences. Traditional therapeutic strategies, mainly focused on biomechanical factors, have had moderate and short-term impact. Certain psychosocial factors have been linked to poor prognosis of LBP and they are increasingly considered as promising targets for management of LBP. Primary health care providers (HCPs) are involved in most of the management of people with LBP and they are skilled in providing comprehensive care, including consideration of psychosocial dimensions. This review aims to discuss three pieces of recent research focusing on psychosocial issues in LBP patients in primary care. In the first systematic review, the patients' or HCPs' overall judgment about the likely evolution of LBP was the factor most strongly linked to poor outcome, with predictive validity similar to that of multidimensional scales. This result may be explained by the implicit aggregation of many prognostic factors underlying this judgment and suggests the relevance of considering the patients from biopsychosocial and longitudinal points of view. The second review showed that most of the interventions targeting psychosocial factors in LBP in primary care have to date focused on the cognitive-behavioral factors, resulting in little impact. It is unlikely that any intervention focusing on a single factor would ever fit the needs of most patients; interventions targeting determinants from several fields (mainly psychosocial, biomechanical, and occupational) may be more relevant. Should multiple stakeholders be involved in such interventions, enhanced interprofessional collaboration would be critical to ensure the delivery of coordinated care. Finally, in the third study, the prevalence of psychosocial comorbidity in chronic LBP patients was not found to be significantly higher than in other patients consulting in primary care. Rather than specifically screening for psychosocial conditions, this suggests taking into account any potential comorbidity in patients with chronic LBP, as in other patients. All these results support the adoption of a more comprehensive and patient-centered approach when dealing with patients with LBP in primary care. As this condition is illustrative of many situations encountered in primary care, the strategies proposed here may benefit most patients consulting in this setting. abstract_id: PUBMED:33981936 Prognostic factors for pain chronicity in low back pain: a systematic review. Low back pain is the leading cause for years lived in disability. Most people with acute low back pain improve rapidly, but 4% to 25% of patients become chronic. Since the previous systematic reviews on the subject, a large number of new studies have been conducted. The objective of this article was to review the evidence of the prognostic factors behind nonspecific chronic low back pain. A systematic literature search was performed without date limitation from the MEDLINE, Cochrane library, and Medic databases. Specific inclusion criteria were used, and risk factors before the onset of chronic symptoms were searched. Study quality was assessed by 2 independent reviewers. One hundred eleven full articles were read for potential inclusion, and 25 articles met all the inclusion criteria. One study was rated as good quality, 19 studies were rated as fair quality, and 5 articles were rated as poor quality. Higher pain intensity, higher body weight, carrying heavy loads at work, difficult working positions, and depression were the most frequently observed risk factors for chronic low back pain. Maladaptive behavior strategies, general anxiety, functional limitation during the episode, smoking, and particularly physical work were also explicitly predictive of chronicity. According to this systematic review, several prognostic factors from the biomechanical, psychological and psychosocial point of view are significant for chronicity in low back pain. abstract_id: PUBMED:37955077 Influence of radiological factors, psychosocial factors, and central sensitization-related symptoms on clinical symptoms in patients with lumbar spinal canal stenosis. Background: No study to date has concurrently evaluated the impact of radiological factors, psychosocial factors, and central sensitization (CS) related symptoms in a single lumbar spinal canal stenosis (LSS) patient cohort. Objective: To investigate the associations between these factors and clinical symptoms in LSS patients. Methods: We recruited 154 patients with LSS scheduled for surgery. Patient-reported outcome measures and imaging evaluation including clinical symptoms, psychosocial factors, CS-related symptoms, and radiological classifications. Spearman's rank correlation coefficient and multiple regression analyses were employed. Results: Spearman's correlation revealed CS-related symptoms positively correlated with low back pain (r= 0.25, p&lt; 0.01), leg pain (r= 0.26, p&lt; 0.01), and disability (r= 0.32, p&lt; 0.01). Pain catastrophizing positively correlated with leg pain (r= 0.23, p&lt; 0.01) and disability (r= 0.36, p&lt; 0.01). Regression analysis showed that pain catastrophizing was associated with disability (β= 0.24, 95%CI = 0.03-0.18), and CS-related symptoms with low back pain (β= 0.28, 95%CI = 0.01-0.09). Radiological classifications were not associated with clinical symptoms. Conclusion: Our findings suggest that psychosocial factors and CS-related symptoms, rather than radiological factors, seem to contribute to clinical symptoms in patients with LSS. abstract_id: PUBMED:21451093 Psychosocial influences on low back pain, disability, and response to treatment. Evidence suggests that psychosocial factors have an influence on the outcome of physical therapy treatment and that the extent of their influence differs considerably among patients. As a consequence, substantial research efforts are focused on developing new clinical tools designed to identify and highlight when psychosocial factors are present at a level relevant to decision making. The conceptual differences in the ways that psychosocial factors influence outcome are described, structured around 3 common research terms: (1) prognostic factors, (2) treatment effect modifiers or moderators, and (3) treatment mediators. Prognostic factors are those characteristics that help estimate a patient's likely outcome irrespective of the chosen management. Treatment effect modifiers or moderators are factors measured at baseline that influence the relationship between a specific intervention and outcome. Treatment mediators are factors that have an intermediary role in the link between treatment and outcome. How these different influences on outcome can be translated into useful and complementary tools that aim to reduce treatment variability in clinical practice is described. One approach is to use prognostic factors to develop screening tools that identify an individual's risk status, typically based on predictive psychosocial factors such as catastophizing and depression. Another approach is to identify specific treatment effect modifiers to derive treatment decision aids or prediction rules to help match a patient's characteristics to the interventions available. A third approach is to use treatment mediators (eg, self-efficacy) to develop monitoring tools to inform the clinician about which aspects of treatment to strengthen. abstract_id: PUBMED:29289414 A Qualitative Study of Doctors of Chiropractic in a Nova Scotian Practice-based Research Network: Barriers and Facilitators to the Screening and Management of Psychosocial Factors for Patients With Low Back Pain. Objectives: This study aimed to assess chiropractors' awareness of clinical practice guidelines for low back pain and to identify barriers and facilitators to the screening and management of psychosocial factors in patients with low back pain. Methods: This qualitative study used semi-structured interviews informed by the Theoretical Domains Framework with 10 Nova Scotian chiropractors who were members of a practice-based research network. Results: The participants correctly identified what the guidelines generally recommend and described the value of psychosocial factors; however, none of the participants could name specific clinical practice guidelines for low back pain. We identified 6 themes related to barriers and facilitators for chiropractors screening and managing psychosocial factors. The themes revolved around the participants' desire to fulfill patients' anatomy-focused treatment expectations and a perceived lack of training for managing psychosocial factors. Participants had concerns about going beyond the chiropractic scope of practice, and they perceived a lack of practical psychosocial screening and management resources. Social factors, such as the influence of other health care practitioners, were reported as both barriers and facilitators to screening and managing psychosocial factors. Conclusions: The participants in this study reported that they mostly treated with an anatomical and biomechanical focus and that they did not always address psychosocial factors identified in their patients with low back pain. Although these findings are limited to Nova Scotian chiropractors, the barriers identified appeared to be potentially modifiable and could be considered in other groups. Low-cost interventions, such as continuing education using evidence-informed behavior change techniques, could be considered to address these barriers. Answer: The treatment of subacute low back pain (LBP) should consider psychosocial prognostic factors, but the evidence on the effectiveness of interventions specifically targeting these factors is mixed. A cluster-randomized clinical trial found no significant differences in outcomes between a minimal intervention strategy aimed at psychosocial prognostic factors and usual care in general practice (PUBMED:15967762). The study suggested that Dutch general practitioners' new treatment strategy aimed at psychosocial prognostic factors was not more effective than usual care for patients with subacute low back pain. The reasons for this lack of effectiveness may include suboptimal identification of psychosocial factors and the absence of a relevant impact on these factors (PUBMED:16289797). Clinical guidelines on LBP vary widely in whether and how they address psychosocial factors, and recommendations generally lack details and are based on low-quality evidence (PUBMED:34033963). This suggests a need for more uniformity in methodology and terminology across guidelines and a stronger evidence base for the role of psychosocial risk factors in LBP care. Physical therapists (PTs) recognize the importance of psychosocial factors in LBP chronicity, and characteristics such as gender and social and emotional intelligence may influence their ability to identify these factors (PUBMED:37298060). Additionally, a psychosocial intervention provided by physiotherapists can lead to meaningful reductions in psychosocial risk factors for pain and disability and may contribute to more positive rehabilitation outcomes (PUBMED:21629595). Primary health care providers are skilled in providing comprehensive care, including consideration of psychosocial dimensions, and a more comprehensive and patient-centered approach is supported when dealing with patients with LBP in primary care (PUBMED:26501062). Several prognostic factors from biomechanical, psychological, and psychosocial perspectives are significant for chronicity in low back pain (PUBMED:33981936). In patients with lumbar spinal canal stenosis, psychosocial factors and central sensitization-related symptoms, rather than radiological factors, seem to contribute to clinical symptoms (PUBMED:37955077). Chiropractors recognize the value of psychosocial factors but report barriers to screening and managing these factors, including a perceived lack of training and practical resources (PUBMED:29289414).
Instruction: Does the type of endograft affect AAA volume change after endovascular aneurysm repair? Abstracts: abstract_id: PUBMED:29850426 Complications of endovascular aneurysm repair of the thoracic and abdominal aorta: evaluation and management. In recent decades, endovascular aneurysm repair or endovascular aortic repair (EVAR) has become an acceptable alternative to open surgery for the treatment of thoracic and abdominal aortic aneurysms and other aortic pathologies such as the acute aortic syndromes (e.g., penetrating aortic ulcer, intramural hematoma, dissection). Available data suggest that endovascular repair is associated with lower perioperative 30-day all-cause mortality as well as a significant reduction in perioperative morbidity when compared to open surgery. Additionally, EVAR leads to decreased blood loss, eliminates the need for cross-clamping the aorta and has shorter recovery periods than traditional surgery. It is currently the preferred mode of treatment of thoracic and abdominal aortic aneurysms in a subset of patients who meet certain anatomic criteria conducive to endovascular repair. The main disadvantage of EVAR procedures is the high rate of post-procedural complications that often require secondary re-intervention. As a result, most authorities recommend lifelong imaging surveillance following repair. Available surveillance modalities include conventional radiography, computed tomography, magnetic resonance angiography, ultrasonography, nuclear imaging and conventional angiography, with computed tomography currently considered to be the gold standard for surveillance by most experts. Following endovascular abdominal aortic aneurysm (AAA) repair, the rate of complications is estimated to range between 16% and 30%. The complication rate is higher following thoracic EVAR (TEVAR) and is estimated to be as high as 38%. Common complications include both those related to the endograft device and systemic complications. Device-related complications include endoleaks, endograft migration or collapse, kinking and/or stenosis of an endograft limb and graft infection. Post-procedural systemic complications include end-organ ischemia, cerebrovascular and cardiovascular events and post-implantation syndrome. Secondary re-interventions are required in approximately 19% to 24% of cases following endovascular abdominal and thoracic aortic aneurysm repair respectively. Typically, most secondary reinterventions involve the use of percutaneous techniques such as placement of cuff extension devices, additional endograft components or stents, enhancement of endograft fixation, treatment of certain endoleaks using various embolization techniques and embolic agents and thrombolysis of occluded endograft components. Less commonly, surgical conversion and/or open surgical modification are required. In this article, we provide an overview of the most common complications that may occur following endovascular repair of thoracic and AAAs. We also summarize the current surveillance recommendations for detecting and evaluating these complications and discuss various current secondary re-intervention approaches that may typically be employed for treatment. abstract_id: PUBMED:12932148 Does the type of endograft affect AAA volume change after endovascular aneurysm repair? Purpose: To compare thrombus volume changes in a longitudinal study over 2 years after endovascular aneurysm repair using the Ancure and Excluder endografts. Methods: In 2 institutions, all consecutive patients treated with a bifurcated Ancure or Excluder endograft were included in this retrospective comparison of computed tomographic angiography (CTA) data recorded and stored to disk postoperatively and at the 12 (12M) and 24-month (24M) follow-up examinations. In one institution, among 45 Ancure endograft patients, 35 (group A) had the 3 requisite scans available. In the second institution, 23 (group B) of 36 patients with the Excluder endograft were eligible for analysis. The proportional volume change was calculated at 12M and 24M and compared to the postoperative CT data. More than 10% shrinkage was considered significant. Results: In both groups, the median absolute volume changed significantly. In group A, significant shrinkage was found in 66% (23/35) at 12M and 74% (26/35) at 24M; in group B, 46% (10/23) had significant shrinkage at 12M as well as at 24M (p=0.027 for the difference between groups A and B at 24M). Statistical analysis of the proportional volume change showed a significant difference between the Ancure and the Excluder devices at 12M (p=0.009) and 24M (p=0.001). Multivariate analysis found aneurysm size (p&lt;0.012) and endograft type (p=0.026) to be independently predictive of the absolute volume change. Conclusions: Sac volume shrinkage after endovascular aneurysm repair is less pronounced and less frequent with the Excluder endoprosthesis than with the Ancure endograft. abstract_id: PUBMED:37398777 The Challenge of Endoleaks in Endovascular Aneurysm Repair (EVAR): A Review of Their Types and Management. An abdominal aortic aneurysm (AAA) is a dilatation of the abdominal aorta above 3 cm or 50% greater than the segment above. It is a dangerous condition accounting for a substantial number of deaths per year and increasing at an alarming rate. Various factors come into play in the development of AAAs, which this study has elaborated on, including smoking and old age, demographics, and comorbid conditions. Endovascular aneurysm repair (EVAR) is a newer treatment modality used for AAAs in which an endograft device is placed into the aorta, thereby creating a bypass tract from the aneurysm and generating flow mimicking that of the natural aorta. It is minimally invasive and associated with less postoperative mortality and reduced hospital stay. However, EVAR is also associated with significant postoperative complications, including endoleaks, which were reviewed in depth. Endoleaks are postprocedural leaks into the aneurysm sac that are usually identified immediately after graft placement and indicate treatment failure. They are of five subtypes, categorized according to their mechanism of development. The most common type is type II endoleaks, and the most dangerous is type I endoleaks. Each subtype has multiple management options with varying rates of success. Prompt identification along with appropriate treatment of endoleaks can lead to better postoperative outcomes and improved quality of life for patients. abstract_id: PUBMED:21326530 Devices used for endovascular aneurysm repair: past, present, and future. Since its first implementation in patients in 1991, endovascular aneurysm repair for abdominal aortic aneurysms has gained wide acceptance and availability. This development has been fueled in great part by advances in stent-graft design and by a variety of devices for various anatomic features. This commentary will summarize some of these endograft developments and indicate the path forward for novel endoprostheses. abstract_id: PUBMED:31289766 Type IIIB endoleak from stent suture lines of a thoracic endograft. We report a case of a patient with type IIIB endoleak after thoracic endovascular aortic repair that remained undetected by computed tomography and was first diagnosed during open conversion surgery. The aneurysm enlarged gradually from 60 to 78 mm without type I and type II endoleaks during 3 to 6 years after thoracic endovascular aortic repair. Type IIIB endoleaks from nitinol stent suture lines were detected, and the endograft was then explanted and replaced by a vascular graft. abstract_id: PUBMED:31193434 Novel open technique for repair of endograft migration. Widespread adoption of endovascular aneurysm repair has led to increased incidence of late complications, such as endograft migration. Treatment options have to be tailored to the patient's health, quality of proximal aorta, and extent of migration. Complete or partial endograft removal is associated with significant morbidity and mortality. We describe a case in which open repair with endograft preservation was employed, with the additional benefit of a sutured proximal anastomosis. abstract_id: PUBMED:32821562 A Case Report on Endovascular Aortic Repair Rupture. Endovascular repair of an abdominal aortic aneurysm (AAA) is a widely accepted alternative to open surgical AAA repair. A ruptured AAA is among the emergency surgeries with the highest risk of death, with an overall mortality rate close to90%. However, the classic symptom triad for ruptured AAAs of hypotension, a pulsatile mass, and abdominal/back pain is seen in only in 25% to 50% of affected patients. Thus, many present with symptoms and signs that suggest adifferent diagnosis. Recognizing uncommon presentations and limitations of imaging and interpretation, in addition to clinical gestalt, can save many lives. This report discusses an unusual case involving a previously repaired AAA presenting with acute rupture at the endograft site. abstract_id: PUBMED:34239642 Rupture of Abdominal Aortic Aneurysm Caused by Combined Type IIIb and Type Ia Endoleak with the Endurant II Endograft: A Case Report. We report a case of combined types IIIb and Ia endoleak that developed 6 years after endovascular aneurysm repair (EVAR) with the Endurant II® endograft for abdominal aortic aneurysm (AAA). The patient presented with post-EVAR AAA rupture and underwent emergency open repair. We observed types IIIb and Ia endoleak and successfully performed felt banding to preserve the stent graft. Type IIIb endoleak with the Endurant® endograft is rare, and treatments have not been fully established. We summarized the case reports regarding type IIIb endoleak with the Endurant® endograft and mainly discussed the treatments. abstract_id: PUBMED:38435054 Trends in AAA Repair in Australia Over Twenty Years. Background: The advent of endovascular techniques has revolutionised the treatment of abdominal aortic aneurysms (AAA). Many countries have seen a transition from open AAA repair (OAR) to endovascular AAA repair (EVAR) over the past 25 years. The only study done in Australia that describes this change was done in the private sector. Majority of healthcare in Australia is delivered through the public, universal healthcare system. The aim of this study was to evaluate the trends in AAA repair in the Australian public sector over the past two decades. Methods: The Australian Institute of Health and Welfare (AIHW) Procedures Data Cubes from the National Hospitals Data Collection was used to extract data pertaining to AAA repairs from 2000 to 2021. Population data from the Australian Bureau of Statistics was used to calculate incidence of each type of repair per 100,000 population. Results: There were 65,529 AAA repairs performed in the Australian public sector from 2000 to 2021. EVARs accounted for 64.4% (42,205) and OARs accounted for 35.6% (23, 324) of them. EVAR surpassed OAR as the preferred method of AAA repair in 2006. This trend was observed in both males and females and across all age groups. Conclusion: There was a consistent and steady transition from OAR to EVAR over the 21 year period with EVAR surpassing OAR as the preferred method of AAA repair relatively early in Australia compared to other countries. Further research that investigates medium- and long-term outcomes of newer stent grafts is needed to further ascertain the continued viability and effectiveness of this trend in AAA treatment. abstract_id: PUBMED:35816378 Endovascular Abdominal Aortic Aneurysm Repair With Ovation Alto Stent Graft: Protocol for the ALTAIR (ALTo endogrAft Italian Registry) Study. Background: Since 2010, the Ovation Abdominal Stent Graft System has offered an innovative sealing option for abdominal aortic aneurysm (AAA) by including a sealing ring filled with polymer 13 mm from the renal arteries. In August 2020, the redesigned Ovation Alto, with a sealing ring 6 mm closer to the top of the fabric, received CE Mark approval. Objective: This registry study aims to evaluate intraoperative, perioperative, and postoperative results in patients treated by the Alto stent graft (Endologix Inc.) for elective AAA repair in a multicentric consecutive experience. Methods: All consecutive eligible patients submitted to endovascular aneurysm repair (EVAR) by Alto Endovascular AAA implantation will be included in this analysis. Patients will be submitted to EVAR procedures based on their own preferences, anatomical features, and operators experience. An estimated number of 300 patients submitted to EVAR with Alto stent graft should be enrolled. It is estimated that the inclusion period will be 24 months. The follow-up period is set to be 5 years. Full data sets and cross-sectional images of contrast-enhanced computed tomography scan performed before EVAR, at the first postoperative month, at 24 or 36 months, and at 5-year follow-up interval will be reported in the central database for a centralized core laboratory review of morphological changes. The primary endpoint of the study is to evaluate the technical and clinical success of EVAR with the Alto stent graft in short- (90-day), mid- (1-year), and long-term (5-year) follow-up periods. The following secondary endpoints will be also addressed: operative time; intraoperative radiation exposure; contrast medium usage; AAA sac shrinkage at 12-month and 5-year follow-up; any potential role of patients' baseline characteristics, valuated on preoperative computed tomography angiographic study, and of device configuration (number of component) in the primary endpoint. Results: The study is currently in the recruitment phase and the final patient is expected to be treated by the end of 2023 and then followed up for 5 years. A total of 300 patients will be recruited. Analyses will focus on primary and secondary endpoints. Updated results will be shared at 1- and 3-5-year follow-ups. Conclusions: The results from this registry study could validate the safety and effectiveness of the new design of the Ovation Alto Stent Graft. The technical modifications to the endograft could allow for accommodation of a more comprehensive range of anatomies on-label. Trial Registration: ClinicalTrials.gov NCT05234892; https://clinicaltrials.gov/ct2/show/NCT05234892. International Registered Report Identifier (irrid): PRR1-10.2196/36995. Answer: Yes, the type of endograft can affect AAA volume change after endovascular aneurysm repair (EVAR). A study comparing thrombus volume changes over 2 years post-EVAR using Ancure and Excluder endografts found that sac volume shrinkage was less pronounced and less frequent with the Excluder endoprosthesis than with the Ancure endograft. The median absolute volume changed significantly in both groups, but the Ancure device showed a higher percentage of significant shrinkage at both 12 and 24 months compared to the Excluder device. Aneurysm size and endograft type were independently predictive of the absolute volume change (PUBMED:12932148).
Instruction: Do parents want to be present during invasive procedures performed on their children in the emergency department? Abstracts: abstract_id: PUBMED:10381997 Do parents want to be present during invasive procedures performed on their children in the emergency department? A survey of 400 parents. Study Objectives: No large study has addressed whether parents want to be present when invasive procedures are performed on their children in the emergency department. We conducted a survey to address this question. Methods: The study used a self-administered, written survey consisting of 5 pediatric scenarios with increasing level of procedural invasiveness. Parents in an urban, teaching hospital ED waiting area were asked to participate. Results: Of 407 persons asked to participate, 400 (98%) completed the survey. The number of parents expressing a desire to be present during a procedure performed on their child was 387 (97.5%) for venipuncture of the extremity, 375 (94.0%) for laceration repair, 341 (86.5%) for lumbar puncture, and 317 (80.9%) for endotracheal intubation. For a major resuscitation scenario, 316 (80.7%) wished to be present if their child were conscious during the resuscitation, 277 (71.4%) wanted to be present if their child were unconscious during the resuscitation, whereas 322 (83.4%) indicated a desire to be present if their child were likely to die during the resuscitation. Of the 400, 261 (65.3%) wished to be present for all 5 scenarios. Only 26 (6.5%) wanted the physician to determine parental presence in all 5 scenarios. Conclusion: Most parents surveyed would want to be present when invasive procedures are performed on their children. With increasing procedural invasiveness, parental desire to be present decreased. However, most parents would want to be in attendance if their child were likely to die, and nearly all parents want to participate in the decision about their presence. abstract_id: PUBMED:24629904 Has the presence of parents during invasive procedures in the emergency department increased in the last few years? Introduction: In the recent years, parents are playing an important role in Pediatric Emergency Department (PED), and wish to be present when invasive procedures (IP) are performed. Objectives: 1) To compare the presence of parents during IP in PED in Spain between 2008 and 2012. 2) To compare the arguments to restrict the presence and problems arising from this, and 3) To determine whether the views of health personnel have changed on this subject. Methodology: A descriptive multicenter comparative study was conducted in 2008 and 2012. A total of 42 questionaires were sent by email to PED managers, with the responding hospitals being included in the 2 periods of the study. Results: A total of 22 hospitals participated in the study. The presence of parents in the PED increased between 2008 and 2012 for all IP, significantly in the blood test and urine catheterization. In 2012, managers state that children are not so nervous, and anxiety of the parents and staff fear of a poorer performance, as an argument to restrict family presence. There were few problems during the 2 periods, with the poor behavior of the children decreasing. According to managers, the opinion of health personnel has not changed in the last four years. Conclusions: The presence of parents during the IP in the PED has increased in the last four years, although the presence is low for more invasive procedures. Managers argue the presence of fewer behavior problems to restrict family presence. The opinion of the staff has not changed in the last four years, although more studies are required on this issue. abstract_id: PUBMED:30121121 The Effect of Parental Presence on Pain and Anxiety Levels During Invasive Procedures in the Pediatric Emergency Department. Introduction: Parental presence during invasive procedures is important in family-centered-care. Family-centered-care is a basic principle of pediatric nursing. Methods: This randomized controlled trial included data from 111 cases in the Pediatric Emergency Department of Istanbul University, Istanbul Medical Faculty, during October 2016-January 2017 (parental involvement group, n = 40; parental presence group, n = 40; parental absence group, n = 31). Information form, State-Trait Anxiety Inventory, State-Trait Anxiety Inventory for Children and Visual Analogue Scale were used to collect data. Results: When children's pain levels before invasive procedures were compared, there was no significant difference between the groups (P &gt; 0.05). Significant difference was found between pain levels of the groups during the process (P &lt; 0.001). It was found that the pain levels of the children in the parental absence group (6.00 ± 2.88) were significantly higher than those in the parental involvement group (3.15 ± 2.79) and the parental presence group (3.70 ± 2.92) (P &lt; 0.05). There was a weak, positive, and significant relationship between parents' trait anxiety levels and the preprocedural pain and trait anxiety levels of children in all groups (P &lt; 0.05). Discussion: Parental involvement is effective in reducing the pain felt during invasive procedures. Moreover, anxiety levels of children during the procedure were not affected in all groups; however, the children of parents with high trait anxiety levels had higher preprocedural pain and trait anxiety levels. abstract_id: PUBMED:37685594 The Impact of Parental Presence on Invasive Procedures in the Pediatric Emergency Department: A Prospective Study. Parental presence during invasive pediatric procedures is controversial, and its benefits are under-researched. The objective of this study was to assess the effects of parental presence during invasive procedures on the parents themselves and the physician performing the procedure. This prospective study was conducted at a single tertiary center in Jerusalem, Israel. During 10 shifts, all physicians and the families of patients who underwent invasive procedures in the pediatric emergency department (PED) were asked to fill in questionnaires related to their experiences. A total of 98 parental questionnaires and 101 physician questionnaires were collected. The most commonly performed procedures were laceration repair (65%) and abscess drainage (18%). Sedation was required in 75% of cases. In total, 73% of the cited family members were present during these procedures. The main reason for refusing to allow family members access was the physicians' concern that the procedure would be hard for parents to watch. However, in more than 85% of cases, the physicians felt that the presence of a family member contributed to the success of the procedure, augmented the child's sense of safety and lessened the family members' feelings of anxiety. All parents who opted to be present during the procedure felt very satisfied, compared to 67% of parents who were not present (p &lt; 0.0001). When asked if, in retrospect, they would have made the same decision, 100% of the parents who were present during the procedure indicated that they would have made the same decision, compared to only 68% of the parents who were not present (p &lt; 0.001). Overall, these findings highlight the positive effects of parental presence during invasive procedures performed in the PED, even when procedures were performed under sedation. Encouraging parental attendance during invasive procedures may, thus, enhance family-centered practices in the PED. abstract_id: PUBMED:19409248 Parental presence during invasive procedures in the emergency department Over the last decades, health care has been moving away from the paternalistic model, to one of family-centered care. In the pediatric emergency department, this new trend has led to parents playing a larger role in the health care of children, particularly during invasive procedures and/or resuscitation. Despite the increased attention to family presence, it remains a controversial topic, especially among emergency medical staff. Several studies have been performed that look at the advantages and disadvantages of this new way of conceiving child health care. To our knowledge, there is only one Spanish article on this topic, which is the reason why we performed a literature review on family presence in the emergency department. abstract_id: PUBMED:11825843 Should parents be present during emergency department procedures on children, and who should make that decision? A survey of emergency physician and nurse attitudes. Objective: To survey physician and nurse attitudes regarding parental presence during painful procedures on children performed in the emergency department (ED) and who should make that decision. Methods: The design was an anonymous written survey consisting of six clinical scenarios distributed to all staff and resident physicians and nurses in ten EDs at institutions in the United States routinely caring for children. Participants were asked whether parents should remain with children undergoing intravenous (IV) placement, laceration repair, lumbar puncture, conscious sedation, major resuscitation, and major resuscitation with death. They also were asked who should make the decision. Results: The percentage of physicians who responded that parents should be present was 91.3% for peripheral IV start, 93.3% for laceration repair, 65.7% for lumbar puncture, 83.1% for conscious sedation, 31.9% for major resuscitation, and 35.6% for major resuscitation where death was likely. The percentage of nurses who responded that parents should be present was 86.8% for peripheral IV start, 89.6% for laceration repair, 55.0% for lumbar puncture, 74.9% for conscious sedation, 41.4% for major resuscitation, and 54.3% for major resuscitation where death was likely. In 64.8% of the completed surveys, the physicians indicated that they alone or in conjunction with a parent should make the decision. In 61.5% of the completed surveys, the nurses indicated that they should be involved in the decision. Conclusions: A majority of emergency physicians and nurses indicated parents should be present for some invasive pediatric procedures. However, as the invasiveness of the pediatric procedures increased, fewer physicians and nurses believed that parents should be present. abstract_id: PUBMED:19409240 Presence of parents in the emergency room during invasive procedures: do they prefer to be present? Introduction: Parents are not usually present during procedures in the paediatric emergency room (ER), although an increasing number of them would like to. Our goal was to find out how parents felt about them being present in ER during procedures. Material And Methods: This is an observational study. Questionnaires were distributed among parents of patients in the ER during January and February 2007. Data included demographic questions as well as the opinion regarding their preference on being present during venipuncture, stitching, lumbar puncture or cardiopulmonary resuscitation. They were also asked about who should take the decision whether to allow the family to be present or not. Results: A total of 98 questionnaires were analyzed. The median age of participants was 32 years-old, of which 84.5% would prefer to be present for venipuncture, 70.4% for stitching, 66.3% for lumbar puncture and 61.2% for cardiopulmonary resuscitation. Venipuncture was performed on 21% of the children, stitching on 4%, and lumbar puncture on 2%. There was no cardiopulmonary resuscitation. Parents considered that the decision about being present should be taken by the professional in 74% (venipuncture), 75% (stitching), 80% (lumbar puncture) and 81% (cardiopulmonary resuscitation). The mean age of the group that preferred to be present was lower (32 vs. 40 years; p = 0.039). Conclusions: Most parents surveyed preferred to be present during invasive procedures in ER. The more invasive the procedure is the higher reluctance from parents to be present. Most parents thought the decision should be taken by the health care professional, particularly when the procedure is more invasive. abstract_id: PUBMED:28669089 Patient experience in the pediatric emergency department: do parents and children feel the same? The objectives are to describe the experience of children and parents in a pediatric emergency service (PED) and to determine whether there are differences of opinion between the two groups. This was a descriptive study. A questionnaire was designed based on the Picker questionnaire on the patient experience. From July through December 2015, a survey was made of the children aged 8-18 treated in the PED, as well as of their parents. The proportion of dissatisfaction was determined. A total of 514 questionnaires (257 children, 257 parents) were completed. The most poorly rated aspects according to the children and parents were the entertainment activities (43.2%), the waiting time (23.7%), and the treatment for pain (10.5%). Differences were detected in the experiences of the children and the parents regarding the overlong waiting time (28.0% children vs 19.5% parents; p = 0.023), inadequate explanations (7.0 vs 1.6%, p = 0.002), inadequate treatment for pain (14.4 vs 6.6%; p = 0.004), and insufficient privacy (11.7 vs 2.7%; p &lt; 0.001). Conclusion: The entertainment activities during the wait and the waiting time are the two elements viewed most negatively by the children and the parents. The children tend to evaluate certain aspects of their experience as patients more negatively, which needs to be borne in mind in order to improve the attention provided. What is Known: • Studying the experience of patients is a key point in patient-centered medicine. • The experience of the pediatric patient has been little studied to date. The experience of the children is often inferred from that of their parents. Nevertheless, the experience of the children as patients might be different. What is New: • Overall, the experience of children and parents in the pediatric emergency department in the study is positive. Some aspects of the experience in emergency are poorly rated by the children, such as the waiting time, the information provided, treatment for pain, and privacy during the visit. • The experience of the children (and not merely that of their parents) needs to be studied in order to improve those areas that are rated poorly do as to enrich the experience in the emergency department. abstract_id: PUBMED:1336846 Parents' view on the care of their children at pediatric emergency department in 1991 A survey carried out by questionnaire was performed among parents accompanying their children to the emergency department of the Lenval's children hospital in order to determine their opinion on the quality of the reception and care. The questionnaire was distributed to 2,382 parents. A total of 719 questionnaires (30.2%) were returned. A large majority of the parents (99%) expressed their satisfaction with the care and reception in the emergency department. The necessary improvements following this survey concern the duration of waiting prior to the consultation and the quality of the reception in the radiological and admittance departments; it is hoped that the radiological and admittance areas will be included within the emergency department, in the future. abstract_id: PUBMED:15953226 Witnessing invasive paediatric procedures, including resuscitation, in the emergency department: a parental perspective. Objective: To determine whether parents prefer to be present during invasive procedures performed on their children in the ED. Methods: A prospective study using a written survey was carried out in the ED of a secondary level regional hospital in south-east Queensland. The survey conducted between August 2003 and November 2003 consisted of parental demographics, seven theoretical paediatric procedural scenarios with increasing level of procedural invasiveness (including resuscitation) and reasons for the decisions of parents to either stay with the child or leave the room. Parents of children with Australasian Triage Scale (ATS) triage category 3, 4 and 5 were surveyed. Results: Of 573 surveys collected, 553 (96.5%) were completed correctly. The number of parents expressing a desire to be present during a procedure performed on their child was 519 (93.9%) for phlebotomy or i.v. cannulation of an extremity, 485 (87.7%) for nasogastric tube insertion, 461 (83.4%) for lumbar puncture, 464 (83.9%) for urinary catheter insertion, 430 (77.8%) for suprapubic bladder aspiration, 519 (93.4%) during procedural sedation and 470 (85%) during a resuscitation where the possibility existed that their child may die. The most common reason for wanting to be present was to provide comfort to their child (542/98%). The most common reason for not wanting to be present was a parental concern of getting in the way (181/33%). Conclusion: Most parents surveyed would want to be present when invasive procedures are performed on their children in the ED. With increasing invasiveness, parental desire to be present decreased. However, the overwhelming majority of parents would want to be in attendance during procedural sedation or resuscitation. Answer: Yes, most parents do want to be present during invasive procedures performed on their children in the emergency department. A survey of 400 parents found that a high percentage of parents expressed a desire to be present during various procedures, with 97.5% for venipuncture, 94.0% for laceration repair, 86.5% for lumbar puncture, and 80.9% for endotracheal intubation. Additionally, if their child were likely to die during a resuscitation, 83.4% indicated a desire to be present (PUBMED:10381997). Another study showed that the presence of parents during invasive procedures in the pediatric emergency department has increased over the years, particularly for less invasive procedures like blood tests and urine catheterization (PUBMED:24629904). Furthermore, a randomized controlled trial found that parental involvement is effective in reducing the pain felt by children during invasive procedures (PUBMED:30121121). A prospective study also highlighted the positive effects of parental presence, with parents feeling very satisfied when present during the procedure (PUBMED:37685594). Despite the increased desire for parental presence, it remains a controversial topic among emergency medical staff, with varying opinions on who should make the decision for parental presence during such procedures (PUBMED:11825843). However, the trend is moving towards family-centered care, with parents playing a larger role in the health care of children during invasive procedures and resuscitations (PUBMED:19409248).
Instruction: Does the duration of smoking cessation have an impact on hospital admission and health-related quality of life amongst COPD patients? Abstracts: abstract_id: PUBMED:24868154 Does the duration of smoking cessation have an impact on hospital admission and health-related quality of life amongst COPD patients? Background: Lack of awareness among ex-smokers on the benefits of sustaining smoking cessation may be the main cause of their smoking relapse. This study explored health-related quality of life (HRQoL) and hospital admission amongst chronic obstructive pulmonary disease (COPD) patients according to the duration of smoking cessation. Materials And Methods: This study recruited COPD patients from a chest clinic who agreed to participate in a medication therapy-adherence program from January to June 2013. They were interviewed during their visits to obtain information regarding their smoking history and HRQoL. They were divided into three groups according to smoking status (sustained quitters, quit ≥5 years; quitters, quit &lt;5 years; and smokers, smoking at least one cigarette/day). The effects of the duration of cessation on HRQoL and hospital admission were analyzed using a multinomial logistic model. Results: A total of 117 participants with moderate COPD met the inclusion criteria, who were comprised of 41 sustained quitters, 40 quitters, and 36 smokers. Several features were similar across the groups. Most of them were married elderly men (aged &gt;64 years) with low-to-middle level of education, who smoked more than 33 cigarettes per day and had high levels of adherence to the medication regimen. The results showed that sustained quitters were less likely to have respiratory symptoms (cough, phlegm and dyspnea) than smokers (odds ratio 0.02, confidence interval 0-0.12; P&lt;0.001). The hospital admission rate per year was increased in quitters compared to smokers (odds ratio 4.5, confidence interval 1.91-10.59; P&lt;0.005). Conclusion: A longer duration of quitting smoking will increase the benefits to COPD patients, even if they experience increased episodic respiratory symptoms in the early period of the cessation. Thus, the findings of this study show the benefits of early smoking cessation. abstract_id: PUBMED:26121509 Smoking Cessation Strategies for Patients with COPD. Smoking cessation remains the priority intervention for patients suffering from chronic obstructive pulmonary disease (COPD) to slow progression of lung tissue damage, decrease symptoms, and enhance quality of life. This article highlights ways home healthcare clinicians can incorporate Prochaska's stages of change to individualize smoking cessation interventions for COPD patients. By assessing patients' readiness-to-quit and implementing evidence-based teaching techniques, including motivational interviewing, home healthcare clinicians can identify specific strategies to promote smoking cessation and enhance the health and quality of life for this population. abstract_id: PUBMED:36920468 The Impact of Digital Health on Smoking Cessation. Background: Smartphones have become useful tools for medicine, with the use of specific apps making it possible to bring health care closer to inaccessible areas, continuously monitor a patient's pathology at any time and place, promote healthy habits, and ultimately improve patients' quality of life and the efficiency of the health care system. Since 2020, the use of smartphones has reached unprecedented levels. There are more than 350,000 health apps, according to a 2021 IQVIA Institute report, that address, among other things, the management of patient appointments; communication among different services or professionals; the promotion of lifestyle changes related to adopting healthy habits; and the monitoring of different pathologies and chronic conditions, including smoking cessation. The number of mobile apps for quitting smoking is high. As early as 2017, a total of 177 unique smoking cessation-relevant apps were identified in the iPhone App Store, 139 were identified in Google Play, 70 were identified in the BlackBerry app store, and 55 were identified in the Windows Phone Store, but very few have adequate scientific support. It seems clear that efforts are needed to assess the quality of these apps, as well as their effectiveness in different population groups, to have tools that offer added value to standard practices. Objective: This viewpoint aims to highlight the benefits of mobile health (mHealth) and its potential as an adjuvant tool in health care. Methods: A review of literature and other data sources was performed in order to show the current status of mobile apps that can offer support for smoking cessation. For this purpose, the PubMed, Embase, and Cochrane databases were explored between May and November 2022. Results: In terms of smoking cessation, mHealth has become a powerful coadjuvant tool that allows health workers to perform exhaustive follow-ups for the process of quitting tobacco and provide support anytime and anywhere. mHealth tools are effective for different groups of smokers (eg, pregnant women, patients with chronic obstructive pulmonary disease, patients with mental illness, and the general population) and are cost-effective, generating savings for the health system. However, there are some patient characteristics that can predict the success of using mobile apps in the smoking cessation process, such as the lower age of patients, dependence on tobacco, the number of quit attempts, and the previous use of mobile apps, among others. Therefore, it is preferable to offer these tools to patients with a higher probability of quitting tobacco. Conclusions: mHealth is a promising tool for helping smokers in the smoking cessation process. There is a need for well-designed clinical studies and economic evaluations to jointly assess the effectiveness of new interventions in different population groups, as well as their impact on health care resources. abstract_id: PUBMED:21352544 Smoking cessation can improve quality of life among COPD patients: validation of the clinical COPD questionnaire into Greek. Background: Chronic obstructive pulmonary disease (COPD) remains a major public health problem that affects the quality of life of patients, however smoking cessation may emeliorate the functional effects of COPD and alter patient quality of life. OBJECTIVE-DESIGN: The aim of this study was to validate the Clinical COPD Questionnaire (CCQ) into Greek and with such to evaluate the quality of life in patients with different stages of COPD, as also assess their quality of life before and after smoking cessation. Results: The internal validity of questionnaire was high (Cronbach's a = 0.92). The reliability of equivalent types in 16 stabilized patients also was high (ICC = 0.99). In general the domains within the CCQ were strongly correlated with each other, while each domain in separate was strongly correlated with the overall CCQ score (r2 = 0.953, r2 = 0.915 and r2 = 0.842 in regards to the functional, symptomatic and mental domain, respectively). The CCQ scores were also correlated with FEV1, (r2 = -0.252, p &lt; 0.001), FEV1/FVC, (r2 = -0.135, p &lt; 0.001) as also with the quality of life questionnaire SF-12 (r2 = -0.384, p &lt; 0.001). Smoking cessation also lead to a significant reduction in CCQ score and increase in the SF-12 score. Conclusions: The self administered CCQ indicates satisfactory validity, reliability and responsiveness and may be used in clinical practice to assess patient quality of life. Moreover the CCQ indicated the health related quality of life gains attributable to smoking cessation among COPD patients, projecting smoking cessation as a key target in COPD patient management. abstract_id: PUBMED:30455124 COPD and smoking cessation: Patients' expectations and responses of health professionals The importance of smoking cessation in the management of COPD is well-established: the benefit of quitting smoking as regards morbidity and mortality outcomes in patients, is unquestioned. The smoking cessation in COPD patients is difficult: high levels of consumption, the duration of smoking, high dependence level, psychological co-morbidities such as anxiety and depression, lower socio-economic and intellectual level, constitute barriers. Studies have shown that patients often minimize the risks of smoking, that others do not believe in the benefits of quitting or doubt their ability to quit smoking. The patients' experience, and expectations with regard to smoking cessation are incompletely satisfied: are considered, the smoking characteristics of these patients, the understanding of the tobacco dependence, the beliefs and ideas of smokers, the knowledge of smoking cessation methods, the role of validated aids and alternative treatments, failure management. The answers of the health professionals can be in several directions: establishment of a better communication patient-doctor (empathy), more centered on the needs of the smoker, the role of the motivation and the place of the motivational interview, the understanding of the mechanisms of addiction, a better individualisation of therapeutics, the necessity of a extended follow-up, the contribution of modern technologies, the electronic cigarette, the smoking cessation in respiratory rehabilitation, guidelines that address smoking cessation treatment. abstract_id: PUBMED:11904863 The nurse's role in smoking cessation. Smoking cessation is an important component to improving patients' health and quality of life. This case study demonstrates how nurses can implement a successful smoking cessation program in outpatient settings. By applying this knowledge, adult-health nurses can increase the probability that hospitalized patients will be successful in stopping smoking. abstract_id: PUBMED:31312165 Smoking Status, Body Mass Index, Health-Related Quality of Life, and Acceptance of Life With Illness in Stable Outpatients With COPD. Smoking and obesity are important factors related to the etiology and clinical features of chronic obstructive pulmonary disease (COPD). The aim of this study was to carry out deeper analyses of the links between smoking status and body mass index as well as of the links between individual dimensions of health-related quality of life (QoL) and acceptance of life with disease in patients with COPD. Information about BMI, smoking status, clinical features of COPD, a COPD assessment test, and airflow obstruction indicators were obtained from 105 outpatients attending a pulmonary clinic. Analysis of covariance demonstrated that the subgroups of patients distinguished based on smoking status and BMI differed significantly in severity of Cough, Dyspnea, Limitation of daily activities, Lack of self-confidence, and Quality of sleep, independently of sociodemographic factors and clinical features of COPD. The results of our study suggest a certain path of the potential effects of obesity and smoking status on QoL. Risky health behaviors, such as smoking and at-risk body mass, only affect some aspects of health-related QoL. abstract_id: PUBMED:26451100 Budgetary impact analysis on funding smoking-cessation drugs in patients with COPD in Spain. The aim of the study was to assess the budgetary impact of funding smoking-cessation drugs in COPD patients in Spain. A hybrid model (cohort and Markov) was developed for a 5-year time horizon. Only approved cessation drugs (varenicline, bupropion, and nicotine replacement therapy) were considered. Irrespective of the drug, the model allowed for an initial cessation attempt, and up to three additional attempts in case of failure or smoking relapse during a 5-year period. Drug effectiveness was based on controlled clinical trials. National Health System perspective was applied; therefore, only medical resources were included. The pharmaceutical costs for smoking-cessation drugs, extra medical follow-up as a consequence of public reimbursement, and annual savings for health costs avoided due to stopping smoking were considered. The model estimated that 17,756 COPD patients would stop smoking if public funding was available, compared with 1,303 without reimbursement. In the reimbursement scenario, the savings accounted for a total of €48.0 million, compensating for expenditures on drugs and medical visits (€40.4 million). Accumulated total additional savings in 5 years (€4.3 million) compared with the scenario without reimbursement was shown. Sensitivity analyses supported the results robustness. Funding smoking-cessation drugs in COPD patients seems to be an efficient option and a National Health System drug reimbursement scheme would represent a cost-saving policy in Spain. abstract_id: PUBMED:30083305 Influences of patient education on exacerbations and hospital admissions in patients with COPD - a longitudinal national register study. Introduction: Chronic obstructive pulmonary disease (COPD) contributes to impaired health-related quality of life (HRQoL). Patient education and smoking cessation programs are recommended to reduce the number of exacerbations and hospitalizations, but the effects of such programs have yet to be explored in larger samples. Objective: The aim was to explore the longitudinal effects of patient education and smoking cessation programs on exacerbations and hospital admissions in patients with COPD. Design: This is a register study where data from the Swedish National Airway Register, including 20,666 patients with COPD, were used. Baseline measures of demographic, disease-related, and patient-reported variables were compared with a follow-up, 10-30 months after baseline. Descriptive statistics and changes between baseline and follow-up were calculated. Results: Comparing those not participating in education programs to those who did, HRQoL deteriorated significantly between baseline and follow-up in non-participants; there was no change in either exacerbations or hospitalizations in either group; there was a significant difference in baseline HRQoL between the two, and, when controlling for this, there was no significant change (p = 0.73). Patients who participated in smoking cessation programs were younger than the non-participants; mean 66.0 (standard deviations (SD) 7.8) vs. mean 68.1 (SD 8.8), p = 0.006. Among participants in smoking cessation programs, the proportion with continued smoking decreased significantly, from 76% to 66%, p &lt; 0.001. Exacerbations at follow-up were predicted by FEV1% of predicted value and exacerbations at baseline. Hospital admissions at follow-up were predicted by baseline FEV1% of predicted value and exacerbations at baseline. Conclusions: To prevent exacerbations and hospital admissions, treatment and prevention must be prioritized in COPD care. Patient education and smoking cessation programs are beneficial, but there is a need to combine them with other interventions. abstract_id: PUBMED:31192311 Using a Health Economic Framework to Prioritize Quality Indicators: An Example With Smoking Cessation in Chronic Obstructive Pulmonary Disease. Background. Health care performance monitoring is a major focus of the modern quality movement, resulting in widespread development of quality indicators and making prioritizations an increasing focus. Currently, few prioritization methods of performance measurements give serious consideration to the association of performance with expected health benefits and costs. We demonstrate a proof-of-concept application of using a health economic framework to prioritize quality indicators by expected variations in population health and costs, using smoking cessation in chronic obstructive pulmonary disease (COPD) as an example. Methods. We developed a health state transition, microsimulation model to represent smoking cessation practices for adults with COPD from the health care payer perspective in Ontario, Canada. Variations in life years, quality-adjusted life years (QALYs), and lifetime costs were associated with changes in performance. Incremental net health benefit (INHB) was used to represent the joint variation in mortality, morbidity, and costs associated with the performance of each quality indicator. Results. Using a value threshold of $50,000/QALY, the indicators monitoring assessment of smoking status and smoking cessation interventions were associated with the largest INHBs. Combined performance variations among groups of indicators showed that 81% of the maximum potential INHB could be represented by three out of the six process indicators. Conclusions. A health economic framework can be used to bring dimensions of population health and costs into explicit consideration when prioritizing quality indicators. However, this should not preclude policymakers from considering other dimensions of quality that are not part of this framework. Answer: Yes, the duration of smoking cessation does have an impact on hospital admission and health-related quality of life (HRQoL) amongst COPD patients. A study found that sustained quitters, defined as those who had quit smoking for 5 years or more, were less likely to have respiratory symptoms such as cough, phlegm, and dyspnea than current smokers. However, the hospital admission rate per year was increased in quitters compared to smokers, which might indicate increased episodic respiratory symptoms in the early period of cessation (PUBMED:24868154). Smoking cessation is known to slow the progression of lung tissue damage, decrease symptoms, and enhance the quality of life in COPD patients (PUBMED:26121509). Additionally, smoking cessation can lead to a significant reduction in the Clinical COPD Questionnaire (CCQ) score and an increase in the SF-12 score, indicating improvements in HRQoL (PUBMED:21352544). Furthermore, patient education and smoking cessation programs are recommended to reduce the number of exacerbations and hospitalizations, although the effects of such programs need to be combined with other interventions for optimal results (PUBMED:30083305). Overall, the evidence suggests that smoking cessation has a positive impact on HRQoL and can influence hospital admission rates among COPD patients, especially over the long term.
Instruction: Is endoscopic third ventriculostomy an internal shunt alone? Abstracts: abstract_id: PUBMED:31143252 Endoscopic Third Ventriculostomy in Children with Failed Ventriculoperitoneal Shunt. Context: Endoscopic third ventriculostomy (ETV) is an accepted procedure for the treatment of obstructive hydrocephalus. The role of endoscopic treatment in the management of shunt malfunction was not extensively evaluated. The aim of this study is to evaluate the success rate of ETV in pediatric patients formerly treated by ventriculoperitoneal (V-P) shunt implantation. Materials And Methods: Thirty-three patients with their first shunt failure and obstructive hydrocephalus in brain imaging between 2008 and 2014 were enrolled in this study. Results: The most common causes of hydrocephalus in these patients were aqueductal stenosis and myelomeningocele with or without associated shunt infection. Of these 33 cases, 20 ETV procedures were successful, and 13 cases needed shunt revision after ETV failure. There was no serious complication during ETV procedures. The follow-up period of patients with successful ETV was 6-50 months (mean 18 months). The time interval between ETV and new shunting subsequent to ETV failure was 24.4 days (10-95). Conclusions: ETV can be considered as an alternative treatment paradigm in patients with previous shunt or new shunt failure with an acceptable success rate of 6o%, although long-term follow-up is needed for these patients. abstract_id: PUBMED:23493502 Endoscopic third ventriculostomy for shunt malfunction: What to do with the shunt? Background: Endoscopic third ventriculostomy (ETV) is an effective surgical option for the treatment of shunt malfunction. The role of postoperative cerebrospinal fluid (CSF) diversion is not clearly understood at this time. We compare the effects of shunt-removal/ligation, shunt externalization or external ventricular drain placement, and no treatment to the indwelling shunt at the time of ETV. Methods: We retrospectively reviewed the records of 20 consecutive patients treated at our institution for shunt malfunction with ETV. Patient data were retrospectively evaluated for the effect that the fate of the shunt plays on ETV success rates. Results: In our series of 20 patients we had an overall success rate of 70% with using ETV for shunt malfunction. Patients who had their shunts ligated at the time of surgery had a success rate of 88%, in comparison to those whom the shunt was left untouched who had a success rate of 60%, or patients who had a perioperative external ventricular drain placed the success rate was 50%. Conclusions: This series of ETV for shunt malfunction performed at a single center by a single surgeon shows a success rate similar to the published literature range of 67 to 80 percent success whether the shunt is ligated or left undisturbed. It is not necessary to ligate the in situ shunt at the time of ETV; however, there may be a trend toward an improved success rate with shunt ligation. Further studies with a greater numbers of patients are warranted. abstract_id: PUBMED:28401404 Controversy about Management of Hydrocephalus - Shunt vs. Endoscopic Third Ventriculostomy. The best management of hydrocephalus is still controversial in the twenty-first century. Shunt treatment for hydrocephalus is the most common procedure performed in neurosurgical practice and is associated with the highest complications rate. But during the last 2 decades, the treatment of hydrocephalus has improved with better shunt devices available today, increased facilities for investigations and newer approaches like endoscopic third ventriculostomy. The recent advances in development of better endoscopes have provided the patient and treating doctor with an option for an alternative surgery for treatment of hydrocephalus. abstract_id: PUBMED:21607179 Endoscopic third ventriculostomy in patients with shunt malfunction. Objective: This paper presents data from a retrospective study of endoscopic third ventriculostomy (ETV) in patients with shunt malfunction and proposes a simple and reasonable post-operative protocol that can detect ETV failure. Methods: We enrolled 19 consecutive hydrocephalus patients (11 male and 8 female) who were treated with ETV between April 2001 and July 2010 after failure of previously placed shunts. We evaluated for correlations between the success rate of ETV and the following parameters : age at the time of surgery, etiology of hydrocephalus, number of shunt revisions, interval between the initial diagnosis of hydrocephalus or the last shunt placement and ETV, and the indwelling time of external ventricular drainage. Results: At the time of ETV after shunt failure, 14 of the 19 patients were in the pediatric age group and 5 were adults, with ages ranging from 14 months to 42 years (median age, 12 years). The patients had initially been diagnosed with hydrocephalus between the ages of 1 month 24 days and 32 years (median age, 6 years 3 months). The etiology of hydrocephalus was neoplasm in 7 patients; infection in 5; malformation, such as aqueductal stenosis or megacisterna magna in 3; trauma in 1; and unknown in 3. The overall success rate during the median follow-up duration of 1.4 years (9 days to 8.7 years) after secondary ETV was 68.4%. None of the possible contributing factors for successful ETV, including age (p=0.97) and the etiology of hydrocephalus (p=0.79), were statistically correlated with outcomes in our series. Conclusion: The use of ETV in patients with shunt malfunction resulted in shunt independence in 68.4% of cases. Age, etiology of hydrocephalus, and other contributing factors were not statistically correlated with ETV success. External ventricular drainage management during the immediate post-ETV period is a good means of detecting ETV failure. abstract_id: PUBMED:29483013 Endoscopic third ventriculostomy for shunt malfunction in children: A review. Endoscopic third ventriculostomy (ETV) is increasingly used in place of shunt revision for shunt malfunction (secondary ETV). This review provides a comprehensive overview of preoperative, operative and postoperative considerations for patients undergoing a secondary ETV. Preoperatively, patient selection is vital and there is evidence that secondary ETV is more effective than primary ETV in certain hydrocephalic aetiologies. Operative considerations include use of neuronavigation and consideration of surgeon technical experience due to anatomical differences that are likely to accompany chronic shunting, management of existing shunt hardware and the use of temporary external ventricular drains or short/long-term ventricular access devices. Postoperatively, there are varying institutional practices with regards to ICP monitoring and length of follow-up after discharge. Finally, this review examines the slit ventricle syndrome as a special case requiring a different approach. abstract_id: PUBMED:33708666 Endoscopic Third Ventriculostomy in Failed Ventriculoperitoneal Shunt in Pediatric Population. Introduction: Ventriculoperitoneal (VP) shunt malfunction is common in pediatric age group patients. There is a high complication rate and revision rate of VP shunt. Endoscopic third ventriculostomy (ETV) can alleviate these complications and can act as an effective alternative for the treatment of hydrocephalus in this age group of patients. Materials And Methods: The authors retrospectively reviewed the management and outcome of 36 failed VP shunts in pediatric patients for the treatment of hydrocephalus. The surgeries were performed between November 2010 and January 2016 in a tertiary care hospital. The minimal follow-up period was 3 months. We divided the patients into the following age groups: &lt;1 year (eight infants), 1-10 years (18 children), and 10-18 years (10 children). The success of the procedure was determined by age, sex, type of hydrocephalus, and the number of shunt revisions and malfunction before ETV. Children with different age (P = 0.839) and sex group did not show any significant data (P = 0.798). Children with communicating hydrocephalus had a success rate of 52.9% (17 patients), and children with noncommunicating hydrocephalus had a success rate of 84.2% (19 patients). The success rate in children with only one shunt malfunction was 57.1% (21 patients), whereas in 15 children with two or more shunt malfunctions, the success rate of ETV was 86.7%. Conclusions: The authors conclude that ETV is an effective alternative for the treatment of hydrocephalus in children. Age does not present a contraindication for ETV in failed VP shunt. abstract_id: PUBMED:31073683 Prior endoscopic third ventriculostomy does not increase ventriculoperitoneal shunt failure rate. Purpose: To determine whether prior endoscopic third ventriculostomy (ETV) influences the failure rate of subsequently placed ventriculoperitoneal (VP) shunts. Methods: Our institution's operative database and patient records were reviewed retrospectively to identify all paediatric patients who had undergone a first VP shunt or ETV at our institution between January 2012 and December 2015. Data was analysed using the Microsoft Excel, GraphPad Prism v7 and SPSS statistics. The literature on this topic to date was also reviewed. Results: Eighty-six children were included in the study: 61 patients had a primary VP shunt inserted during the study period and 25 had a VP shunt inserted following failed ETV. There was no significant difference in the underlying aetiology or age of the patients in each group. In the primary VP shunt group, 47.5% (29 patients) required shunt removal at an average of 274 days post-insertion (range 7 days to 3.4 years). The 1-year revision rate was 34.4%. In the shunt post-ETV group, 48% (12 patients) required shunt removal at an average of 207 days post-insertion (range 2 days to 2.7 years). The 1-year revision rate was 36%. The most common reason for revision in both groups was blockage. Conclusions: We found no significant difference in failure rate or pattern between primarily inserted VP shunts and those inserted following an endoscopic third ventriculostomy. On the basis of this study and the small number of previously reported studies, we would advocate a trial of ETV where feasible to allow a chance at shunt independence. abstract_id: PUBMED:22865970 Endoscopic third ventriculostomy. Endoscopic third ventriculostomy (ETV) is considered as a treatment of choice for obstructive hydrocephalus. It is indicated in hydrocephalus secondary to congenital aqueductal stenosis, posterior third ventricle tumor, cerebellar infarct, Dandy-Walker malformation, vein of Galen aneurism, syringomyelia with or without Chiari malformation type I, intraventricular hematoma, post infective, normal pressure hydrocephalus, myelomeningocele, multiloculated hydrocephalus, encephalocele, posterior fossa tumor and craniosynostosis. It is also indicated in block shunt or slit ventricle syndrome. Proper Pre-operative imaging for detailed assessment of the posterior communicating arteries distance from mid line, presence or absence of Liliequist membrane or other membranes, located in the prepontine cistern is useful. Measurement of lumbar elastance and resistance can predict patency of cranial subarachnoid space and complex hydrocephalus, which decides an ultimate outcome. Water jet dissection is an effective technique of ETV in thick floor. Ultrasonic contact probe can be useful in selected patients. Intra-operative ventriculo-stomography could help in confirming the adequacy of endoscopic procedure, thereby facilitating the need for shunt. Intraoperative observations of the patent aqueduct and prepontine cistern scarring are predictors of the risk of ETV failure. Such patients may be considered for shunt surgery. Magnetic resonance ventriculography and cine phase contrast magnetic resonance imaging are effective in assessing subarachnoid space and stoma patency after ETV. Proper case selection, post-operative care including monitoring of ICP and need for external ventricular drain, repeated lumbar puncture and CSF drainage, Ommaya reservoir in selected patients could help to increase success rate and reduce complications. Most of the complications develop in an early post-operative, but fatal complications can develop late which indicate an importance of long term follow up. abstract_id: PUBMED:35854929 Endoscopic third ventriculostomy for VP shunt malfunction during the third trimester of pregnancy: illustrative case. Background: Endoscopic third ventriculostomy (ETV) is a successful procedure for treating noncommunicating hydrocephalus as an alternative to initial ventriculoperitoneal (VP) shunt placement and as a salvage procedure when a VP shunt fails. Physiological changes of pregnancy can lead to VP shunt failure and complicate the management of shunt malfunction, particularly in the third trimester. Observations: The authors present a case in which an ETV was successfully used in the third trimester (31 weeks of gestation) of pregnancy for acute hydrocephalus due to VP shunt malfunction, and the patient went on to deliver a healthy baby at term; the patient remained well in the long-term follow-up. An English-language PubMed literature review revealed four cases of VP shunt failure successfully treated with an ETV in the first or second trimester but no such reports in the third trimester of pregnancy. Lessons: ETV appears to be a safe and effective alternative to VP shunt replacement in the late prenatal period of pregnancy. abstract_id: PUBMED:31158842 Role of Secondary Endoscopic Third Ventriculostomy in Children: Review of an Institutional Experience. Background: Endoscopic third ventriculostomy (ETV) has become a standard and safe procedure for obstructive hydrocephalus. ETV can also play an important role in children presenting with shunt malfunction with an added advantage of shunt independence. Secondary ETV can be defined as either a redo endoscopic ventriculostomy done after primary ETV stoma closure or that done in cases presenting with shunt malfunction. Objective: The aim of our study was to evaluate the role of secondary ETV in the pediatric age group patients. Methods: This is a retrospective analysis of 36 children (&lt;18 years) who underwent ETV after shunt malfunction and 4 children with ETV done after previous ETV stoma closure from 2004 until 2018. In all patients, the obstructive pattern suggesting aqueduct outflow obstruction was observed on MRI. Patients were followed up for a mean period of 4.25 years. Results: ETV was considered successful if the patient avoided a shunt insertion later on in their life. Considering this definition, a success rate of 72% was observed with secondary ETV for shunt malfunction whereas a success rate of 75% was observed after primary ETV failure without any major side effects in any of the patients. Conclusion: ETV can be considered a primary treatment modality in children with shunt malfunction and has a good success rate in cases presenting with closure of previously performed ETV stoma. Answer: Endoscopic third ventriculostomy (ETV) is not simply an internal shunt; it is a surgical procedure that creates an alternative pathway for cerebrospinal fluid (CSF) to flow within the brain's ventricular system. ETV is performed to treat obstructive hydrocephalus by making an opening in the floor of the third ventricle, which allows CSF to bypass the obstruction and flow directly into the subarachnoid space, where it can be absorbed (PUBMED:22865970). This procedure is considered a treatment of choice for obstructive hydrocephalus and is indicated in various conditions such as congenital aqueductal stenosis, posterior third ventricle tumor, and others (PUBMED:22865970). ETV is an alternative to shunt systems, which are devices implanted to divert CSF from the ventricles to another body cavity, typically the peritoneal cavity, where the fluid can be absorbed (PUBMED:31143252). Shunt systems can have high complication and revision rates, and ETV offers a chance for patients to become shunt-independent (PUBMED:33708666). ETV has been used successfully as a primary treatment modality for hydrocephalus and as a secondary option in cases of shunt malfunction or failure (PUBMED:31158842). The success rate of ETV varies, with some studies reporting success rates of around 60-70% for treating shunt malfunction (PUBMED:31143252; PUBMED:23493502; PUBMED:21607179). Factors such as the underlying cause of hydrocephalus, age at the time of surgery, and the number of previous shunt revisions may influence the success of ETV (PUBMED:33708666; PUBMED:29483013). In summary, ETV is not merely an internal shunt; it is a distinct surgical intervention that creates a new CSF flow pathway within the brain, potentially eliminating the need for an implanted shunt device and its associated complications.
Instruction: Ultrasound-Guided Placement of Central Venous Port Systems via the Right Internal Jugular Vein: Are Chest X-Ray and/or Fluoroscopy Needed to Confirm the Correct Placement of the Device? Abstracts: abstract_id: PUBMED:27216807 Ultrasound-Guided Placement of Central Venous Port Systems via the Right Internal Jugular Vein: Are Chest X-Ray and/or Fluoroscopy Needed to Confirm the Correct Placement of the Device? Background: Percutaneous central venous port (CVP) placement using ultrasound-guidance (USG) via right internal jugular vein is described as a safe and effective procedure. The aim of this study is to determine whether intraoperative fluoroscopy (IF) and/or postoperative chest X-ray (CXR) are required to confirm the correct position of the catheter. Methods: Between January 2012 and December 2014, 302 adult patients underwent elective CVP system placement under USG. The standard venous access site was the right internal jugular vein. The length of catheter was calculated based on the height of the patient. IF was always performed to confirm US findings. Results: 176 patients were men and 126 were women and average height was 176.2 cm (range 154-193 cm). The average length of the catheter was 16.4 cm (range 14-18). Catheter malposition and pneumothorax were observed in 4 (1.3 %) and 3 (1 %) patients, respectively. IF confirmed the correct position of the catheter in all cases. Catheter misplacement (4 cases) was previously identified and corrected on USG. Our rates of pneumothorax are in accordance with those of the literature (0.5-3 %). Conclusion: Ultrasonography has resulted in improved safety and effectiveness of port system implantation. The routine use of CXR and IF should be considered unnecessary. abstract_id: PUBMED:22563269 Ultrasound and fluoroscopy-guided placement of central venous ports via internal jugular vein: retrospective analysis of 1254 port implantations at a single center. Objective: To assess the technical success and complication rates of the radiologic placement of central venous ports via the internal jugular vein. Materials And Methods: We retrospectively reviewed 1254 central venous ports implanted at our institution between August 2002 and October 2009. All procedures were guided by using ultrasound and fluoroscopy. Catheter maintenance days, technical success rates, peri-procedural, as well as early and late complication rates were evaluated based on the interventional radiologic reports and patient medical records. Results: A total of 433386 catheter maintenance days (mean, 350 days; range 0-1165 days) were recorded. The technical success rate was 99.9% and a total of 61 complications occurred (5%), resulting in a post-procedural complication rate of 0.129 of 1000 catheter days. Among them, peri-procedural complications within 24 hours occurred in five patients (0.4%). There were 56 post-procedural complications including 24 (1.9%, 0.055 of 1000 catheter days) early and 32 (2.6%, 0.074 of 1000 catheter days) late complications including, infection (0.6%, 0.018 of 10000 catheter days), thrombotic malfunction (1.4%, 0.040 of 1000 catheter days), nonthrombotic malfunction (0.9%, 0.025 of 1000 catheter days), venous thrombosis (0.5%, 0.014 of 1000 catheter days), as well as wound problems (1.1%, 0.032 of 1000 catheter days). Thirty six CVPs (3%) were removed due to complications. Bloodstream infections and venous thrombosis were the two main adverse events prolonging hospitalization (mean 13 days and 5 days, respectively). Conclusion: Radiologic placement of a central venous port via the internal jugular vein is safe and efficient as evidenced by its high technical success rate and a very low complication rate. abstract_id: PUBMED:27896794 Efficacy and safety of ultrasound-guided placement of central venous port systems via the right internal jugular vein in elderly oncologic patients: our single-center experience and protocol. Background: Ultrasound-guidance has become the routine method for internal jugular vein (IJV) catheterization reducing dramatically failure and complication rates for central venous port (CVP) placement. Aims: The aim of this study was to determine the safety and efficacy of ultrasound-guided IJV CVP placement in elderly oncologic patients. Methods: Between January 2013 and December 2015, 101 elderly oncological patients underwent right IJV CVP placement under ultrasound-guidance. The length of catheter introduction ranged from 18 to 21 cm. Intraoperative fluoroscopy (IF) was always performed intraoperatively. Chest X-ray (CXR) was always performed 30 min after the end of the procedure. Results: The morbidity rate was 1.98%; two arterial punctures were reported with one self-limiting hematoma. Two patients (1.98%) had catheter misplacements, recognized by intraoperative IF. No patients (0%) experienced pneumothorax (PNX), confirmed at CXR. Patients were all discharged at maximum 6 h from the procedure. Discussion: The risk of catheter misplacement, PNX, and arterial/nerve puncture remains present with this technique. Lower rates of catheter misplacement have been reported after right IJV puncture, probably for its straight vertical course. Our results are in accordance with literature (1 counter-lateral subclavian vein and 1 counter-lateral internal jugular vein misplacements). All misplacements were detected intraoperatively. The PNX rates after cannulation of the IJV vary between 0.0 and 0.5%. We had no PNX occurrence. Conclusion: Ultrasonography (US) has improved safety and effectiveness of port system placements. While routine post-procedural CXR seems avoidable, IF should be considered mandatory. abstract_id: PUBMED:27555164 Locating the optimal internal jugular target site for central venous line placement. Study Objective: Historically, the placement of internal jugular central venous lines has been accomplished by using external landmarks to help identify target-rich locations in order to steer clear of dangerous structures. This paradigm is largely being displaced, as ultrasound has become routine practice, raising new considerations regarding target locations and risk mitigation. Most human anatomy texts depict the internal jugular vein as a straight columnar structure that exits the cranial vault the same size that it enters the thoracic cavity. We dispute the notion that the internal jugulars are cylindrical columns that symmetrically descend into the thoracic cavity, and purport that they are asymmetric conical structures. Design: The primary aim of this study was to evaluate 100 consecutive adult chest and neck computed tomography exams that were imaged at an inpatient hospital. We measured the internal jugular on the left and right sides at three different levels to look for differences in size as the internal jugular descends into the thoracic cavity. Main Results: We revealed that as the internal jugular descends into the thorax, the area of the vessel increases and geometrically resembles a conical structure. We also reconfirmed that the left internal jugular is smaller than the right internal jugular. Conclusions: Understanding that the largest target area for central venous line placement is the lower portion of the right internal jugular vein will help to better target vascular access for central line placement. This is the first study the authors are aware of that depicts the internal jugular as a conical structure as opposed to the commonly depicted symmetrical columnar structure frequently illustrated in anatomy textbooks. This target area does come with additional risk, as the closer you get to the thoracic cavity, the greater the chances for lung injury. abstract_id: PUBMED:25440481 Image-guided placement of long-term central venous catheters reduces complications and cost. Background: The goals of this study were to evaluate the complication rate for intraoperative placement of a long-term central venous catheter (CVC) using intraoperative ultrasound (US) and fluoroscopy and to examine the feasibility for eliminating routine postprocedure chest X-ray. Methods: Retrospective data pertaining to operative insertion of long-term CVC were collected and the rate of procedural complications was determined. Results: From January 2008 to August 2013, 351 CVCs were placed via the internal jugular vein using US. Of these, 93% had a single, successful internal jugular vein insertion. The complications included 4 arterial sticks (1.14%). Starting in October 2012, postprocedure chest radiography (CXR) was eliminated in 170 cases, with no complications. A total of $29,750 in charges were deferred by CXR elimination. Conclusions: This review supports the use of US for CVC placement with fluoroscopy in reducing the rate of procedural complications. Additionally, with fluoroscopic imaging, postprocedural CXR can be eliminated with associated healthcare savings. abstract_id: PUBMED:30469160 Is Intraoperative Fluoroscopy Necessary for Central Venous Port System Placement in Children? Introduction: Central venous port (CVP) placement is traditionally performed under fluoroscopy guidance. However, efforts for radiation dose reduction in children have allowed the introduction of ultrasound guidance (USG) and anatomic landmarks as an alternative technique for CVP placement. The aim of this study is to determine whether intraoperative fluoroscopy (IF) is required to confirm the correct position of the catheter tip in children. Patients And Methods: A prospective, single-center study was performed between July and December 2017. Standard venous access site was the right internal jugular vein under USG. Estimated catheter length (ECL) was measured using anatomic landmarks. Ideal catheter length (ICL) was measured after placement under fluoroscopy guidance in the same patient. Age, sex, radiation dose, and complications were also analyzed. A t-test for paired samples and intraclass correlation coefficient were performed to analyze results. Results: A total of 30 consecutive patients aged 7 ± 2 years underwent CVP placement. The mean ECL was 17.1 ± 1.8 cm, while the mean ICL was 17.7 ± 1.8 cm. The mean difference between measurements was 0.28 cm (95% confidence interval [CI], -0.29 to 0.86; p = 0.324). Intraclass correlation coefficient analysis showed an agreement of 0.95 (95% CI, 0.91-0.98) between measurements. Mean radiation exposure during the procedure was 1.060.78 mGym2 during 0.34 ± 0.6 minutes. There were no complications registered during CVP placement. Conclusion: The correlation between IF and USG and anatomically guided catheter tip placement is optimal. These results suggest that fluoroscopy and the radiation exposure it entails can be safely avoided in selected children. abstract_id: PUBMED:29627658 Routine chest X-ray is unnecessary after ultrasound-guided central venous line placement in the operating room. Background: Central venous catheters (CVC) can be useful for perioperative monitoring and insertion has low complication rates. However, routine post insertion chest X-rays have become standard of care and contribute to health care costs with limited impact on patient management. Methods: 200 patient charts who underwent pancreaticoduodenectomy with central line placement and early line removal were reviewed for clinical complications related to central line placement as well as radiographic evidence of malpositioning. A cost analysis was performed to estimate savings if CXR had not been performed across routine surgical procedures requiring central access. Results: In 200 central line placements for Whipple procedures, 198 lines were placed in the right internal jugular and 2 were placed in the subclavian. No cases of pneumothorax or hemothorax were identified and 30 (15.3%) of CVCs were improperly positioned. Only 1 (0.5%) of these was deemed clinically significant and repositioned after the CXR was performed. Conclusion: Routine CXR consumes valuable time and resources (≅$155,000 annually) and rarely affects management. Selection should be guided by clinical factors. abstract_id: PUBMED:33752914 Technical factors and outcomes in pediatric central venous port placement. Purpose While central venous port placement is common, there remains variation in placement technique and rates of technical complications. The aim of this study was to assess variability in techniques and identify predictors of complications for children undergoing port placement. Methods We retrospectively reviewed all 331 patients who underwent venous port placement at a single tertiary children's hospital from May 2018 to June 2020. The primary outcome was early revision or replacement (R/R) for complications occurring within 30 days. Secondary outcomes included radiation exposure and rate of intraoperative conversion to a secondary site. Results The median age was 7 years (Interquartile Range 3-13 years) and the most common diagnoses were leukemia (30.2%), solid tumors (27.8%), and brain tumors (16.9%). Initial approach for port placement was ultrasound-guided internal jugular (IJV) in 255 (147 by surgery and 108 by interventional radiology [IR]) and landmark subclavian vein (SCV) in 76 (all by surgery). Early R/R occurred in 5.1%, including 9.0% of patients with leukemia but 1.1% with solid tumors. Individual proceduralist volume ranged from 2 to 98 cases and was inversely correlated with early R/R (r = -0.12, p = 0.30). In univariate analysis, ports placed by IR had an increased rate of early R/R (9.3%, n = 10) compared to those placed by surgery (3.2%, n = 7, p = 0.036) but this was not significant in multivariable regression controlling for diagnosis and age (Hazard Ratio 2.04; p = 0.19). Mean fluoroscopy time was significantly longer for ports placed by IR (59.9 s) compared to those placed by surgery (15.1 s, p &lt; 0.001). Initial SCV access was associated with an increased (14.5 vs 0.4%) rate of conversion to a secondary site. Conclusions Though venous port placement is a largely safe procedure in children, a substantial minority of patients, particularly those with leukemia, require early R/R. Proceduralist volume and training may influence early R/R, fluoroscopy exposure, and anatomic site preferences. abstract_id: PUBMED:36284831 Horner Syndrome Caused by Central Venous Port Placement via the Internal Jugular Vein: A Case Report. Pneumothorax and unintended arterial puncture are well-known complications of central venous (CV) access via the internal jugular vein (IJV), whereas injury to nerves around the IJV is a relatively rare complication. We describe the case of a male patient in his 60s who developed Horner syndrome after CV port placement via the IJV. We also point out the anatomical nerve structures around the IJV that clinicians should be aware of in order to minimize the risk of nerve injury during CV access. Additionally, with a brief literature review, we describe other nerve injuries that can be caused by CV access. abstract_id: PUBMED:33640215 Use of Point-of-Care Ultrasound to Confirm Central Venous Catheter Placement and Evaluate for Postprocedural Complications. Background: Central venous catheter (CVC) placement is commonly performed in the emergency department (ED), but traditional confirmation of placement includes chest radiograph. Objective: This manuscript details the use of point-of-care ultrasound (POCUS) to confirm placement of a CVC and evaluate for postprocedural complications. Discussion: CVC access in the ED setting is an important procedure. Traditional confirmation includes chest radiograph. POCUS is a rapid, inexpensive, and accurate modality to confirm CVC placement and evaluate for postprocedural complications. POCUS after CVC can evaluate lung sliding for pneumothorax and the internal jugular vein for misdirected CVC. A bubble study with POCUS visualizing agitated saline microbubbles within the right heart can confirm venous placement. Conclusions: POCUS can rapidly and reliably confirm CVC placement, as well as evaluate for postprocedural complications. Knowledge of this technique can assist emergency clinicians. Answer: The use of ultrasound guidance for the placement of central venous port systems via the right internal jugular vein has been shown to be safe and effective, with some studies suggesting that routine use of chest X-ray (CXR) and intraoperative fluoroscopy (IF) may be unnecessary to confirm correct placement of the device. One study found that ultrasound guidance improved the safety and effectiveness of port system implantation, and that routine use of CXR and IF could be considered unnecessary, as IF confirmed the correct position of the catheter in all cases, and catheter misplacement was identified and corrected using ultrasound guidance (PUBMED:27216807). Another study supported the safety and efficiency of radiologic placement of central venous ports via the internal jugular vein, with a high technical success rate and a very low complication rate (PUBMED:22563269). In elderly oncologic patients, ultrasound guidance for right internal jugular vein central venous port placement was found to be safe and effective, with a low morbidity rate and no occurrences of pneumothorax. The study suggested that while routine post-procedural CXR seems avoidable, IF should be considered mandatory (PUBMED:27896794). A retrospective analysis of long-term central venous catheter placement using intraoperative ultrasound and fluoroscopy showed a low rate of procedural complications and suggested that postprocedural CXR could be eliminated, resulting in healthcare savings (PUBMED:27555164). In children, a study found that the correlation between IF and ultrasound guidance and anatomically guided catheter tip placement was optimal, suggesting that fluoroscopy and the associated radiation exposure could be safely avoided in selected cases (PUBMED:30469160). Another study concluded that routine CXR after ultrasound-guided central venous line placement in the operating room is unnecessary and rarely affects management, potentially saving significant healthcare costs (PUBMED:29627658). In summary, while some studies advocate for the routine use of IF to confirm catheter placement (PUBMED:27896794), others suggest that ultrasound guidance alone may be sufficient, and that routine post-procedural CXR or IF could be avoided, especially in selected patient populations such as children (PUBMED:30469160) and in cases where fluoroscopic imaging is used during the procedure (PUBMED:27555164).
Instruction: Does angiographic localization of bleeding affect the management and mortality in gastrointestinal hemorrhage of unknown origin? Abstracts: abstract_id: PUBMED:9592580 Does angiographic localization of bleeding affect the management and mortality in gastrointestinal hemorrhage of unknown origin? Purpose: To evaluate the effect of angiography on patient management and mortality in patients with GIB of unknown origin. Material And Methods: 88 angiographies were performed in 74 patients with GIB of unknown origin (18 upper gastrointestinal tract [GIT]), 35 lower GIT. 21 unknown localisation) and were evaluated retrospectively in regard to the influence on patient management and clinical outcome. Results: After unsuccessful endoscopic diagnosis, angiography shows a sensitivity of 60% in the acute phase of GIB. Once the GIB had stopped the sensitivity was 14%. Following angiographic localisation, patients were more commonly treated surgically (71% vs. 44.5%) and subsequently had a lower rate of persistent or recurring bleeding (15% vs. 37.5%) as well as a lower event related mortality (10.5% vs. 25%). Patients with angiographic localisation of the bleeding site had a better outcome than patients with unsuccessful bleeding localisation, with regard to both surgical (85% vs. 62.5%) and conservative (100% vs. 85%) treatment. Conclusion: Angiographic localisation should be attempted in all cases of unknown GI-bleeding after endoscopic methods have been unsuccessful or ambiguous, because such a procedure has a positive effect on patient management and outcome. Moreover, angiography also offers therapeutic options. abstract_id: PUBMED:29372125 Dilated Thoracic Esophagus Presenting with Painful Progressive Persistent Dysphagia and Leukocytosis of Unknown Origin. Esophageal cancer is the eighth-most common cause of cancer-related mortality worldwide. The most common presenting symptom in advanced distal esophageal cancer is the sensation of sticking food, but it may sometimes present with bleeding and related complications, or asymptomatic leukocytosis. We present the case of a 77-year-old afebrile man with chronic alcoholism and a dilated thoracic esophagus with painful, progressive, and persistent dysphagia and leukocytosis of unknown origin. A 77-year-old man with a past medical history of hypertension and colonic cancer status post right hemicolectomy (surveillance negative) presented to the emergency department with painful, progressive, persistent, and worsening dysphagia for the past three weeks. It was associated with an unintentional weight loss of ten pounds in one month and nausea with non-bilious and non-bloody vomiting for several days. He denied fever, diarrhea, hoarseness of voice, change in bowel movement, hematemesis, hematochezia, melena, orthopnea, dyspnea at rest, palpitation, and abdominal pain. A chest x-ray (lateral view) showed debris in a dilated thoracic esophagus with fluid. An esophagogram showed a 10 x 3 cm obstructive mass with irregular mucosa within the proximal esophagus from the thoracic vertebra levels four to ten. A computed tomography scan of the chest with contrast showed long segment dilatation of the upper and mid-thoracic esophagus with generalized circumferential thickening of the distal esophagus. He was empirically on cefazolin and metronidazole but later switched to piperacillin, tazobactam, and fluconazole. Cardiac risk stratification was done for an esophagogastroduodenoscopy. However, the patient and the family opted for palliative care and agreed to a do-not-resuscitate/do-not-intubate status. In esophageal cancers, tumor-related leukocytosis and neutrophilia are common presentations. However, there is no standardized routine screening test for esophageal cancers. Thus, when asymptomatic afebrile elderly patients present with leukocytosis of unknown origin, clinicians should have suspicions of occult malignancy such as esophageal cancers, gastric cancer, and pancreatic cancer. abstract_id: PUBMED:26142039 Management of Patients with Rebleeding. Peptic ulcer bleeding is a common emergency. Management of ulcer bleeding requires prompt risk stratification, initiation of pharmacotherapy, and timely evaluation for endoscopy. Although endoscopy can achieve primary hemostasis in more than 90% of peptic ulcer bleeding, rebleeding may occur in up to 15% of patients after therapeutic endoscopy and is associated with heightened mortality. Early identification of high-risk patients for rebleeding is important. Depending on bleeding severity and center availability, patients with rebleeding may be managed by second endoscopy, transarterial angiographic embolization, or surgery. This article reviews the current management of peptic ulcers with an emphasis on rebleeding. abstract_id: PUBMED:22666887 Fever of unknown origin and digestive hemorrhage N/A abstract_id: PUBMED:8935401 Presurgical angiographic localization of small bowel bleeding site with methylene blue injection. The source of gastrointestinal hemorrhage is found in 94-95% of patients who undergo radiologic and/or endoscopic examinations of the upper or lower bowel. However, the exact presurgical localization of small bowel bleeding sites, which account for the other 5-6%, remains diagnostically difficult. We describe a previously reported and uncommon but useful angiographic technique used in conjunction with surgery which assists in this localization. abstract_id: PUBMED:15476156 Clinical impact of push enteroscopy in patients with gastrointestinal bleeding of unknown origin. Background & Aims: Despite the frequent use of push enteroscopy (PE), little is known of its clinical impact. The aim of this study was to evaluate the impact of PE on diagnostic and therapeutic management of patients with gastrointestinal bleeding of unknown origin. Methods: Seventy-five consecutive patients (mean age, 56 y) referred for PE were included in this 1-year prospective study. Indications for PE were overt bleeding in 46 patients (61%) and iron-deficiency anemia in 29 patients (39%). Before the procedure, the prescribing physicians were asked to fill out a questionnaire listing the theoretical patient management plan (diagnostic procedures and/or treatment) as if PE were not available. One month after PE, the same physicians were asked about: (1) diagnostic procedures performed after PE, (2) the final diagnosis, and (3) their opinion of the usefulness of PE in clinical management. The clinical impact of PE on diagnostic and/or therapeutic management was assessed by 3 study physicians. Results: Responses to the 2 questionnaires were obtained for all patients. The diagnostic yield of PE was 32% (24 of 75). The clinical impact of PE (changes in diagnostic and/or therapeutic management) was 55% (n = 41). PE was perceived as helpful by the prescribing physicians in 55% of cases. PE tended to influence patient management more in cases of overt bleeding than in cases of occult bleeding (63% vs. 41%; P = 0.06). Conclusions: PE influenced the clinical management of more than half the patients with GI bleeding of unknown origin, and was particularly useful in patients with overt bleeding. abstract_id: PUBMED:6800315 Gastrointestinal tract bleeding of unknown origin. Gastrointestinal (GI) tract bleeding of unknown origin is a vexing clinical problem. In this review, we discuss those causes of GI tract hemorrhage most likely to escape detection by conventional diagnostic modalities and explain how newer techniques of flexible fiberoptic endoscopy, radionuclide scanning, and angiography may be used to establish a diagnosis. We reviewed the literature on the role of exploratory surgery in the diagnosis of occult GI tract bleeding and conclude that its diagnostic yield is small and its value limited. Finally, we present a diagnostic approach to the patient with GI tract bleeding of unknown origin. abstract_id: PUBMED:803239 A hypothetical case of gastrointestinal bleeding: incorporating a management self-test. In this case study of upper gastrointestinal bleeding, a problem-solving question-and-answer format is used and is accompained by referenced discussion. The clinical area covered is one in which endoscopic and angiographic tools are available for diagnosis and tamponading devices and angiographic perfusion techniques exist for mangement. Of course, many varibles affect diagnostic and therapeutic decisions, but logic often dictates directions in management. The availability of skilled endoscopists and angiographers is an important factor in the optimal management of the patient with upper gastrointestinal bleeding in whom the source of bleeding is unknown. abstract_id: PUBMED:22468082 Angiographic evaluation and management of acute gastrointestinal hemorrhage. Although most cases of acute nonvariceal gastrointestinal hemorrhage either spontaneously resolve or respond to medical management or endoscopic treatment, there are still a significant number of patients who require emergency angiography and transcatheter treatment. Evaluation with noninvasive imaging such as nuclear scintigraphy or computed tomography may localize the bleeding source and/or confirm active hemorrhage prior to angiography. Any angiographic evaluation should begin with selective catheterization of the artery supplying the most likely site of bleeding, as determined by the available clinical, endoscopic and imaging data. If a hemorrhage source is identified, superselective catheterization followed by transcatheter microcoil embolization is usually the most effective means of successfully controlling hemorrhage while minimizing potential complications. This is now well-recognized as a viable and safe alternative to emergency surgery. In selected situations transcatheter intra-arterial infusion of vasopressin may also be useful in controlling acute gastrointestinal bleeding. One must be aware of the various side effects and potential complications associated with this treatment, however, and recognize the high re-bleeding rate. In this article we review the current role of angiography, transcatheter arterial embolization and infusion therapy in the evaluation and management of nonvariceal gastrointestinal hemorrhage. abstract_id: PUBMED:3500520 Major lower intestinal haemorrhage. Angiographic localisation and current management. Thirty-four patients with major lower intestinal bleeding underwent emergency selective mesenteric angiography during a 6-year period. Angiography identified a bleeding site in 16 patients (47%). Diverticulosis, found in 22 patients (65%), and angiodysplasia, found in 4 (12%), were the most common causes of major colonic bleeding and originated more frequently from the right colon. Eight patients (24%) bled from less common sources. Radiological control of bleeding was unreliable with a significant complication rate. Fourteen of 16 patients with positive angiograms and 6 of 18 patients with negative angiograms required surgery for persistent major bleeding. Angiographic localisation of colonic bleeding allowed limited resection in 9 of 11 patients with control of haemorrhage in 8 (89%). Fourteen of 34 patients were managed non-operatively; of these 2 had minor recurrent bleeding. The overall mortality rate was 29%, the operative mortality rate 40% and the non-operative mortality rate 14%. A rational diagnostic approach is presented, emphasising the role of selective mesenteric angiography in the management and surgical strategy of major lower intestinal bleeding. Answer: Yes, angiographic localization of bleeding does affect the management and mortality in gastrointestinal hemorrhage of unknown origin. According to a retrospective evaluation of 88 angiographies performed in 74 patients with gastrointestinal bleeding of unknown origin, angiography demonstrated a sensitivity of 60% in the acute phase of gastrointestinal bleeding (GIB). When the bleeding had stopped, the sensitivity dropped to 14%. Importantly, following angiographic localization, patients were more commonly treated surgically (71% vs. 44.5%), and they had a lower rate of persistent or recurring bleeding (15% vs. 37.5%) as well as a lower event-related mortality (10.5% vs. 25%). The study concluded that patients with angiographic localization of the bleeding site had better outcomes than those without successful localization, in both surgical (85% vs. 62.5%) and conservative (100% vs. 85%) treatments. Therefore, angiographic localization should be attempted in all cases of unknown GI-bleeding after endoscopic methods have been unsuccessful or ambiguous, as it positively affects patient management and outcomes. Additionally, angiography offers therapeutic options (PUBMED:9592580). Other studies and reviews also support the role of angiography in the management of gastrointestinal bleeding. For instance, angiographic evaluation and management are crucial in cases of acute nonvariceal gastrointestinal hemorrhage that do not resolve spontaneously or respond to medical management or endoscopic treatment (PUBMED:22468082). Moreover, angiographic localization has been shown to allow for limited resection in patients with major lower intestinal bleeding, with control of hemorrhage in a high percentage of cases (PUBMED:3500520). In summary, angiographic localization is a valuable tool in the management of gastrointestinal hemorrhage of unknown origin, influencing both the therapeutic approach and the associated mortality.
Instruction: Are Tanzanian patients attending public facilities or private retailers more likely to adhere to artemisinin-based combination therapy? Abstracts: abstract_id: PUBMED:25889767 Are Tanzanian patients attending public facilities or private retailers more likely to adhere to artemisinin-based combination therapy? Background: Artemisinin combination therapy (ACT) is first-line treatment for malaria in most endemic countries and is increasingly available in the private sector. Most studies on ACT adherence have been conducted in the public sector, with minimal data from private retailers. Methods: Parallel studies were conducted in Tanzania, in which patients obtaining artemether-lumefantrine (AL) at 40 randomly selected public health facilities and 37 accredited drug dispensing outlets (ADDOs) were visited at home and questioned about doses taken. The effect of sector on adherence, controlling for potential confounders was assessed using logistic regression with a random effect for outlet. Results: Of 572 health facility patients and 450 ADDO patients, 74.5% (95% CI: 69.8, 78.8) and 69.8% (95% CI: 64.6, 74.5), respectively, completed treatment and 46.0% (95% CI: 40.9, 51.2) and 34.8% (95% CI: 30.1, 39.8) took each dose at the correct time ('timely completion'). ADDO patients were wealthier, more educated, older, sought care later in the day, and were less likely to test positive for malaria than health facility patients. Controlling for patient characteristics, the adjusted odds of completed treatment and of timely completion for ADDO patients were 0.65 (95% CI: 0.43, 1.00) and 0.69 (95% CI: 0.47, 1.01) times that of health facility patients. Higher socio-economic status was associated with both adherence measures. Higher education was associated with completed treatment (adjusted OR = 1.68, 95% CI: 1.20, 2.36); obtaining AL in the evening was associated with timely completion (adjusted OR = 0.35, 95% CI: 0.19, 0.64). Factors associated with adherence in each sector were examined separately. In both sectors, recalling correct instructions was positively associated with both adherence measures. In health facility patients, but not ADDO patients, taking the first dose of AL at the outlet was associated with timely completion (adjusted OR = 2.11, 95% CI: 1.46, 3.04). Conclusion: When controlling for patient characteristics, there was some evidence that the adjusted odds of adherence for ADDO patients was lower than that for public health facility patients. Better understanding is needed of which patient care aspects are most important for adherence, including the role of effective provision of advice. abstract_id: PUBMED:22098135 Malaria prevalence and treatment of febrile patients at health facilities and medicine retailers in Cameroon. Objective: To investigate the quality of malaria case management in Cameroon 5 years after the adoption of artemisinin-based combination therapy (ACT). Treatment patterns were examined in different types of facility, and the factors associated with being prescribed or receiving an ACT were investigated. Methods: A cross-sectional cluster survey was conducted among individuals of all ages who left public and private health facilities and medicine retailers in Cameroon and who reported seeking treatment for a fever. Prevalence of malaria was determined by rapid diagnostic tests (RDTs) in consenting patients attending the facilities and medicine retailers. Results: Among the patients, 73% were prescribed or received an antimalarial, and 51% were prescribed or received an ACT. Treatment provided to patients significantly differed by type of facility: 65% of patients at public facilities, 55% of patients at private facilities and 45% of patients at medicine retailers were prescribed or received an ACT (P = 0.023). The odds of a febrile patient being prescribed or receiving an ACT were significantly higher for patients who asked for an ACT (OR = 24.1, P &lt; 0.001), were examined by the health worker (OR = 1.88, P = 0.021), had not previously sought an antimalarial for the illness (OR = 2.29, P = 0.001) and sought treatment at a public (OR = 3.55) or private facility (OR = 1.99, P = 0.003). Malaria was confirmed in 29% of patients and 70% of patients with a negative result were prescribed or received an antimalarial. Conclusions: Malaria case management could be improved. Symptomatic diagnosis is inefficient because two-thirds of febrile patients do not have malaria. Government plans to extend malaria testing should promote rational use of ACT; though, the introduction of rapid diagnostic testing needs to be accompanied by updated clinical guidelines that provide clear guidance for the treatment of patients with negative test results. abstract_id: PUBMED:20712876 Examining appropriate diagnosis and treatment of malaria: availability and use of rapid diagnostic tests and artemisinin-based combination therapy in public and private health facilities in south east Nigeria. Background: Rapid diagnostic tests (RDTs) and Artemisinin-based combination therapy (ACT) have been widely advocated by government and the international community as cost-effective tools for diagnosis and treatment of malaria. ACTs are now the first line treatment drug for malaria in Nigeria and RDTs have been introduced by the government to bridge the existing gaps in proper diagnosis. However, it is not known how readily available these RDTs and ACTs are in public and private health facilities and whether health workers are actually using them. Hence, this study investigated the levels of availability and use of RDTs and ACTs in these facilities. Methods: The study was undertaken in Enugu state, southeast Nigeria in March 2009. Data was collected from heads of 74 public and private health facilities on the availability and use of RDTs and ACTs. Also, the availability of RDTs and the types of ACTs that were available in the facilities were documented. Results: Only 31.1% of the health facilities used RDTs to diagnose malaria. The majority used the syndromic approach. However, 61.1% of healthcare providers were aware of RDTs. RDTs were available in 53.3% of the facilities. Public health facilities and health facilities in the urban areas were using RDTs more and these were mainly bought from pharmacy shops and supplied by NGOs. The main reasons given for non use are unreliability of RDTs, supply issues, costs, preference for other methods of diagnosis and providers' ignorance. ACTs were the drug of choice for most public health facilities and the drugs were readily available in these facilities. Conclusion: Although many providers were knowledgeable about RDTs, not many facilities used it. ACTS were readily available and used in public but not private health facilities. However, the reported use of ACTs with limited proper diagnosis implies that there could be high incidence of inappropriate case management of malaria which can also increase the economic burden of illnesses. Government and donors should ensure constant availability of RDTs in both public and private facilities, so that every treatment with ACTs is accompanied with proper diagnosis. abstract_id: PUBMED:17480216 Antimalarial drug prescribing practice in private and public health facilities in South-east Nigeria: a descriptive study. Background: Nigeria's national standard has recently moved to artemisinin combination treatments for malaria. As clinicians in the private sector are responsible for attending a large proportion of the population ill with malaria, this study compared prescribing in the private and public sector in one State in Nigeria prior to promoting ACTs. Objective: To assess prescribing for uncomplicated malaria in government and private health facilities in Cross River State. Method: Audit of 665 patient records at six private and seven government health facilities in 2003. Results: Clinicians in the private sector were less likely to record history or physical examination than those in public facilities, but otherwise practice and prescribing were similar. Overall, 45% of patients had a diagnostic blood slides; 77% were prescribed monotherapy, either chloroquine (30.2%), sulphadoxine-pyrimethamine (22.7%) or artemisinin derivatives alone (15.8%). Some 20.8% were prescribed combination therapy; the commonest was chloroquine with sulphadoxine-pyrimethamine. A few patients (3.5%) were prescribed sulphadoxine-pyrimethamine-mefloquine in the private sector, and only 3.0% patients were prescribed artemisinin combination treatments. Conclusion: Malaria treatments were varied, but there were not large differences between the public and private sector. Very few are following current WHO guidelines. Monotherapy with artemisinin derivatives is relatively common. abstract_id: PUBMED:34784975 Finding the right balance: implementation of public-private partnership in artemisinin-based combination therapy provision in Manokwari, Indonesia. Background: Indonesia is the second country with the highest number of malaria cases in Southeast Asia. Private health providers including community pharmacies often become the first point of care for the population seeking malaria treatment; however, public-private partnerships for malaria control are not widely implemented. This paper explores the acceptability of a public-private partnership program on the provision of subsidized artemisinin-based combination therapies (ACTs) in community pharmacies from the perspectives of private health providers, patients, and program implementers. Methods: The study was conducted in Manokwari District in West Papua Province, one of the highest endemic districts in Indonesia. Qualitative methods using interviews and focus group discussions (FGDs) were employed to explore the following dimensions of acceptability: affective attitude, burden, ethicality, intervention coherence, opportunity cost, perceived effectiveness, and self-efficacy. Study participants were program implementers, private health providers, and pharmacy clients. Audio-recorded interviews were transcribed and analyzed using thematic analysis. Secondary data on malaria cases and the use of ACTs reported by community pharmacies were also recorded. Results: Only one-fourth of the total community pharmacies in Manokwari participated in the partnership, suggesting low coverage of the program. The proportion of malaria cases reported by community pharmacies increased from 6.9% in 2018 to 30.7% of cases. Most participants had a positive attitude towards the program, which might be associated with the perceived effectiveness of the partnership in improving access to ACTs. Despite the good understanding of the intervention by the participating pharmacies, limited involvement of private physicians often resulted in non-standardized treatment practices. The partnership also imposed a burden on private health providers in terms of human resources and time which entailed significant opportunity costs. A number of ethical issues might undermine the equity of access to ACTs. Conclusion: Despite the positive attitude to the partnership, the perceived burden might outweigh the tangible benefits, posing threats to scaling up the intervention and sustainability. Innovations to simplify the administrative procedures in combination with performance-based incentives are needed to improve implementation. Engagement of patients and physicians is needed to increase the effectiveness of the partnership. abstract_id: PUBMED:29534750 Fever case management at private health facilities and private pharmacies on the Kenyan coast: analysis of data from two rounds of client exit interviews and mystery client visits. Background: Private sector availability and use of malaria rapid diagnostic tests (RDTs) lags behind the public sector in Kenya. Increasing channels through which quality malaria diagnostic services are available can improve access to testing and help meet the target of universal diagnostic testing. Registered pharmacies are currently not permitted to perform blood tests, and evidence of whether malaria RDTs can be used by non-laboratory private providers in line with the national malaria control guidelines is required to inform ongoing policy discussions in Kenya. Methods: Two rounds of descriptive cross-sectional exit interviews and mystery client surveys were conducted at private health facilities and registered pharmacies in 2014 and 2015, 6 and 18 months into a multi-country project to prime the private sector market for the introduction of RDTs. Data were collected on reported RDT use, medicines received and prescribed, and case management of malaria test-negative mystery clients. Analysis compared outcomes at facilities and pharmacies independently for the two survey rounds. Results: Across two rounds, 534 and 633 clients (including patients) from 130 and 120 outlets were interviewed, and 214 and 250 mystery client visits were completed. Reported testing by any malaria diagnostic test was higher in private health facilities than registered pharmacies in both rounds (2014: 85.6% vs. 60.8%, p &lt; 0.001; 2015: 85.3% vs. 56.3%, p &lt; 0.001). In registered pharmacies, testing by RDT was 52.1% in 2014 and 56.3% in 2015. At least 75% of test-positive patients received artemisinin-based combination therapy (ACT) in both rounds, with no significant difference between outlet types in either round. Provision of any anti-malarial for test-negative patients ranged from 0 to 13.9% across outlet types and rounds. In 2015, mystery clients received the correct (negative) diagnosis and did not receive an anti-malarial in 75.5% of visits to private health facilities and in 78.4% of visits to registered pharmacies. Conclusions: Non-laboratory staff working in registered pharmacies in Kenya can follow national guidelines for diagnosis with RDTs when provided with the same level of training and supervision as private health facility staff. Performance and compliance to treatment recommendations are comparable to diagnostic testing outcomes recorded in private health facilities. abstract_id: PUBMED:21651787 Treatment of uncomplicated malaria at public health facilities and medicine retailers in south-eastern Nigeria. Background: At primary care facilities in Nigeria, national treatment guidelines state that malaria should be symptomatically diagnosed and treated with artemisinin-based combination therapy (ACT). Evidence from households and health care providers indicates that many patients do not receive the recommended treatment. This study sought to determine the extent of the problem by collecting data as patients and caregivers leave health facilities, and determine what influences the treatment received. Methods: A cross-sectional cluster survey of 2,039 respondents exiting public health centres, pharmacies and patent medicine dealers was undertaken in urban and rural settings in Enugu State, south-eastern Nigeria. Results: Although 79% of febrile patients received an anti-malarial, only 23% received an ACT. Many patients (38%) received sulphadoxine-pyrimethamine (SP). A further 13% of patients received an artemisinin-derivative as a monotherapy. An estimated 66% of ACT dispensed was in the correct dose. The odds of a patient receiving an ACT was highly associated with consumer demand (OR: 55.5, p &lt; 0.001). Conclusion: Few febrile patients attending public health facilities, pharmacies and patent medicine dealers received an ACT, and the use of artemisinin-monotherapy and less effective anti-malarials is concerning. The results emphasize the importance of addressing both demand and supply-side influences on malaria treatment and the need for interventions that target consumer preferences as well as seek to improve health service provision. abstract_id: PUBMED:28446236 The malaria testing and treatment landscape in Benin. Background: Since 2004, artemisinin-based combination therapy (ACT) has been the first-line treatment for uncomplicated malaria in Benin. In 2016, a medicine outlet survey was implemented to investigate the availability, price, and market share of anti-malarial treatment and malaria diagnostics. Results provide a timely and important benchmark to measure future interventions aimed at increasing access to quality malaria case management services. Methods: Between July 5th to August 6th 2016, a cross sectional, nationally-representative malaria outlet survey was conducted in Benin. A census of all public and private outlets with potential to distribute malaria testing and/or treatment was implemented among 30 clusters (arrondissements). Outlets were eligible for inclusion in the study if they met at least one of three study criteria: (1) one or more anti-malarials reportedly in stock on the day of the survey; (2) one or more anti-malarials reportedly in stock within the 3 months preceding the survey; and/or (3) provided malaria blood testing. An audit was completed for all anti-malarials, malaria rapid diagnostic tests (RDT) and microscopy. Results: 7260 outlets with the potential to sell or distribute anti-malarials were included in the census and 2966 were eligible and interviewed. A total of 17,669 anti-malarial and 494 RDT products were audited. Quality-assured ACT was available in 95.0% of all screened public health facilities and 59.4% of community health workers (CHW), and availability of malaria blood testing was 94.7 and 68.4% respectively. Sulfadoxine-pyrimethamine (SP) was available in 73.9% of public health facilities and not found among CHWs. Among private-sector outlets stocking at least one anti-malarial, non-artemisinin therapies were most commonly available (94.0% of outlets) as compared to quality-assured ACT (36.1%). 31.3% of the ACTs were marked with a "green leaf" logo, suggesting leakage of a co-paid ACT into Benin's unsubsidized ACT market from another country. 78.5% of the anti-malarials distributed were through the private sector, typically through general retailers (47.6% of all anti-malarial distribution). ACT comprised 44% of the private anti-malarial market share. Private-sector price of quality-assured ACT ($1.35) was three times more expensive than SP ($0.42) or chloroquine ($0.41). Non-artemisinin therapies were cited as the most effective treatment for uncomplicated malaria among general retailers and itinerant drug vendors. Conclusions: The ACTwatch data has shown the importance of the private sector in terms of access to malaria treatment for the majority of the population in Benin. These findings highlight the need for increased engagement with the private sector to improve malaria case management and an immediate need for a national ACT subsidy. abstract_id: PUBMED:29268789 The malaria testing and treatment landscape in Kenya: results from a nationally representative survey among the public and private sector in 2016. Background: Since 2004, Kenya's national malaria treatment guidelines have stipulated artemisinin-based combination therapy (ACT) as first-line treatment for uncomplicated malaria, and since 2014, confirmatory diagnosis of malaria in all cases before treatment has been recommended. A number of strategies to support national guidelines have been implemented in the public and private sectors in recent years. A nationally-representative malaria outlet survey, implemented across four epidemiological zones, was conducted between June and August 2016 to provide practical evidence to inform strategies and policies in Kenya towards achieving national malaria control goals. Results: A total of 17,852 outlets were screened and 2271 outlets were eligible and interviewed. 78.3% of all screened public health facilities stocked both malaria diagnostic testing and quality-assured ACT (QAACT). Sulfadoxine-pyrimethamine (SP) for intermittent preventive treatment in pregnancy was available in 70% of public health facilities in endemic areas where it is recommended for treatment. SP was rarely found in the public sector outside of the endemic areas (&lt; 0.5%). The anti-malaria stocking private sector had lower levels of QAACT (46.7%) and malaria blood testing (20.8%) availability but accounted for majority of anti-malarial distribution (70.6% of the national market share). More than 40% of anti-malarials were distributed by unregistered pharmacies (37.3%) and general retailers (7.1%). QAACT accounted for 58.2% of the total anti-malarial market share, while market share for non-QAACT was 15.8% and for SP, 24.8%. In endemic areas, 74.9% of anti-malarials distributed were QAACT. Elsewhere, QAACT market share was 49.4% in the endemic-prone areas, 33.2% in seasonal-transmission areas and 37.9% in low-risk areas. Conclusion: Although public sector availability of QAACT and malaria diagnosis is relatively high, there is a gap in availability of both testing and treatment that must be addressed. The private sector in Kenya, where the majority of anti-malarials are distributed, is also critical for achieving universal coverage with appropriate malaria case management. There is need for a renewed commitment and effective strategies to ensure access to affordable QAACT and confirmatory testing in the private sector, and should consider how to address malaria case management among informal providers responsible for a substantial proportion of the anti-malarial market share. abstract_id: PUBMED:29132370 Expanding malaria diagnosis and treatment in Lao PDR: lessons learned from a public-private mix initiative. Background: As in other countries of the Greater Mekong Sub-region (GMS), the private health sector constitutes a significant avenue where malaria services are provided and presents a unique opportunity for public-private collaboration. In September 2008, a public-private mix (PPM) strategy was launched initially in four northern and southern provinces in Lao PDR to increase access to rapid diagnostic tests (RDTs) and artemisinin-based combination therapy (ACT), improve quality of care, and collect routine malaria data from the private sector. Throughout the process, key stakeholders were involved in the planning, monitoring and supervision of project sites. Following an initial assessment in 2009, the PPM initiative expanded to an additional 14 district sites to a total of 245 private pharmacies and 16 clinics covering 8 provinces and 22 districts. By June 2016, a total of 317 pharmacies, 30 clinics in 32 districts of the 8 provinces were participating in the PPM network and reported monthly malaria case data. Methods: This descriptive study documented the process of initiating and maintaining the PPM network in Lao PDR. Epidemiological data reported through the routine surveillance system from January 2009 to June 2016 were analyzed to illustrate the contribution of case reporting from the private sector. Results: A total of 2,301,676 malaria tests were performed in the PPM districts, which included all the PPM pharmacies and clinics (176,224, 7.7%), proportion of patients tested from 14,102 (4.6%) in 2009 to 29,554 (10.4%) in 2015. Over the same period of 90 months, a total of 246,091 positive cases (10.7%) were detected in PPM pharmacies and clinics (33,565; 13.6%), in the same districts as the PPM sites. The results suggest that the PPM sites contributed to a significant increasing proportion of patients positive for malaria from 1687 (7.4%) in 2009 to 5697 (15.8%) in 2015. Conclusions: Ensuring adequate and timely supplies of RDTs and ACT to PPM sites is critical. Frequent refresher training is necessary to maintain data quality, motivation and feedback. In the context of malaria elimination, the PPM initiative should be expanded further to ensure that all febrile cases seen through the private sector in malaria transmission areas are tested for malaria and treated appropriately. Results from the PPM must be integrated into a centralized registry of malaria cases that should prompt required case and foci investigations and responses to be conducted as part of elimination efforts. Answer: Tanzanian patients attending public health facilities are more likely to adhere to artemisinin-based combination therapy (ACT) than those attending private retailers. A study conducted in Tanzania, which compared patients obtaining artemether-lumefantrine (AL) at public health facilities and accredited drug dispensing outlets (ADDOs), found that 74.5% of health facility patients completed treatment compared to 69.8% of ADDO patients. Additionally, 46.0% of health facility patients took each dose at the correct time ('timely completion') compared to 34.8% of ADDO patients. After controlling for patient characteristics, the adjusted odds of completed treatment and timely completion for ADDO patients were 0.65 and 0.69 times that of health facility patients, respectively (PUBMED:25889767).
Instruction: Does Glp-2 have a protective effect on cerebral ischemia/reperfusion model? Abstracts: abstract_id: PUBMED:34031841 Protective Effect of Piceatannol Against Cerebral Ischaemia-Reperfusion Injury Via Regulating Nrf2/HO-1 Pathway In Vivo and Vitro. Piceatannol is a natural plant-derived compound with protective effects against cardiovascular diseases. However, its effect on cerebral ischaemia-reperfusion injury (CIRI) induced by oxidative stress remains unclear. This study aimed to investigate piceatannol's antioxidation in CIRI. An in vitro oxygen-glucose deprivation followed by reoxygenation model was used and cell viability was measured. A middle cerebral artery occlusion followed by reperfusion model was used in vivo. Neurological function, encephalisation quotient, oedema, and volume of the cerebral infarction were then evaluated. The effects of piceatannol on histopathological findings, as well as the ultrastructure of the cortex, were analysed. The activity of superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), and lactate dehydrogenase (LDH) and the malondialdehyde (MDA) content was measured both in vitro and in vivo. Finally, the expression of nuclear factor erythroid-2-related factor 2 (Nrf2), hemeoxygenase-1 (HO-1), and nicotinamide adenine dinucleotide phosphate quinone oxidoreductase 1 (NQO1) in cerebral tissue was detected using reverse transcription quantitative polymerase chain reaction (RT-qPCR) and western blotting. Our results demonstrated that cell viability in the piceatannol groups was increased. The SOD, GSH-Px activities were increased as LDH activity and MDA content decreased in the piceatannol groups both in vitro and in vivo, reflecting a decrease in oxidative stress. The neurological severity score and infarction volume in the piceatannol groups at doses of 10 and 20 mg/kg were lower than those of the model group. Furthermore, the damage seen on histopathological examination was partially attenuated by piceatannol. RT-qPCR and western blot analysis indicated that the expression of Nrf2, HO-1, and NQO1 were significantly increased by piceatannol. The results of the study demonstrate that piceatannol exerts a protective effect against CIRI. abstract_id: PUBMED:31529230 Protective Effect of Mitogen- and Stress-Activated Protein Kinase on the Rats with Focal Ischemia-Reperfusion Injury. Mitogen- and stress-activated protein kinase (MSK) is a recently identified nuclear cAMP-regulated enhancer B (CREB) and histone H3 kinase that responds to both mitogen- and stress-activated protein kinases. This study was designed to investigate the protective effect of MSK on the rats with focal ischemia-reperfusion injury. The rat model was established by inserting thread into the middle cerebral artery. The protein expression was measured by immunoblotting. The localization of MSK was measured by immunofluorescence assay. Highly-differentiated pheochromocytoma 12 (PC12) is used as a sympathetic neuron-like cell line and treated with glutamate to induce neurotoxicity. MSK was knocked down and overexpressed by siRNA and MSK over-expressing vector, respectively. The cell viability was measured by cell counting kit (CCK-8) assay. The coronal sections were isolated and stained with 2, 3, 5-triphenyltetrazolium chloride (TTC) to determine infarct volume. Finally, astrocytes were separated from cerebral cortexes of normal rats to analyze the effects of MSK on inflammatory response. In the rats with focal ischemia-reperfusion injury, the expression of MSK was reduced, reaching the lowest level at 3 d after ischemia-reperfusion, and then recovered gradually. MSK was found mainly localized in neurons and astrocytes. The expression levels of caspase-3, caspase-8, caspase-9, and INOS showed the opposite trend with respect to MSK. Further analysis showed that overexpression of MSK exerted a protective effect on glutamate-induced neurotoxicity through inhibiting apoptosis of PC12 cells, as well as decreased the infarct size in rat with focal ischemia-reperfusion injury. On the contrary, knockdown of MSK showed opposite results. Finally, MSK suppressed LPS-induced inflammatory response by decreasing the expression of inducible nitric oxide synthase (INOS) and increasing the expression of interleukin-10 (IL-10) in astrocytes from cerebral cortexes of normal rats. In conclusion, MSK exerted a protective effect on rat with focal ischemia-reperfusion injury through its anti-apoptotic effect on neurons and anti-inflammatory effect on astrocytes. abstract_id: PUBMED:25204154 Protective effect of mailuoning injection on cerebral ischemia/reperfusion injury in rats and its mechanism Objective: To discuss the protective effect of Mailuoning injection on ischemia/reperfusion (I/R) injury in rats and its mechanism. Method: Healthy male adult Sprague-Dawley (SD) rats were randomly divided into the sham operation group, the model group, the edaravone (3 mg x kg(-1)) control group, and Mailuoning high, middle and low-dose groups (4, 2, 1 mL x kg(-1)), with 10 rats in each group, and administered with drugs through tail intravenous injection. The middle cerebral artery occlusion (MCAO) was adopted to establish the rat ischemia/reperfusion model. After the ischemia for 2 h and reperfusion for 24 h, the pathological changes in neurovascular units (NVU) of brain tissues at the ischemia side was observed by HE staining. The expressions of glialfibrillary acidic protein (GFAP) and ionized calcium-binding adaptor molecule 1 (Ibal) were detected by the immunohistochemical method. The expressions of tumor necrosis factor-alpha (TNF-alpha), interleukin 1beta (IL-1beta), vascular cell adhesion molecule-1 (VCAM-1) and intercellular adhesion molecule-1 (ICAM-1) were detected by the western blotting technique. Result: Mailuoning injection could significantly improve the pathological changes in cortical penumbra brain tissue UVN of (I/R) rats, reduce the number of GFAP and Ibal positive cells, and significantly decrease the expressions of TNF-alpha, IL-1beta, VCAM-1 and ICAM-1 of brain tissues of I/R rats. Conclusion: Mailuoning injection shows an obvious protective effect on UVN of I/R rats. Its mechanism may involve the inhibition of the activation of astrocyte and microglia and the secretion and expression of various inflammatory factors. abstract_id: PUBMED:34751118 Protective Effect of GSK-3β/Nrf2 Mediated by Dimethyl Fumarate in Middle Cerebral Artery Embolization Reperfusion Rat Model. Aim: This study investigated the protective effect of dimethyl fumarate (DMF) in rats by mediating glycogen synthase kinase 3β (GSK-3β)/Nrf2 using the middle cerebral artery embolization reperfusion (MCAO/R) rat model. Background: After an acute ischemic stroke (AIS), oxidative stress occurs. Dimethyl fumarate (DMF), a nuclear factor-E2-related factor 2 (Nrf2) activator, approved by the US Food and Drug Administration (FDA), was observed to regulate the Nrf2 pathway by acting as an anti-oxidative stress agent; however, whether this agent is involved in inhibiting GSK-3β remains to be established. Methods: DMF model was used to explore the effects of GSK-3β on Nrf2 expression level, Nrf2- ARE binding activity and Nrf2/ARE downstream expression level of anti-oxidant stress protein in Cerebral ischemia-reperfusion injury (CIRI). 60 rats were randomly divided into Sham group, MCAO/R group, solvent control group (DMSO group) and DMF treatment group, with 15 rats in each group. The MCAO/R, DMSO and DMF groups were considered in the MCAO/R model using the modified thread embolization method. In contrast, the Sham group was only anaesthetized and disinfected, and tissue muscle was dissected without inserting suture emboli. DMF group was gavaged with 45mg/kg per day of DMF, DMSO control group was gavaged with DMSO of equal volume, while MCAO/R group was only modeled without any intragastric treatment. The rats were treated seven days after the operation, and a neurological function Longa score was estimated. The rats were sacrificed seven days later, and the infarct volume was assessed by TTC staining. Hematoxylin- eosin (HE) staining was used to observe the pathological changes in rat brain tissue. Nissl staining was used to observe the expression of neurons in the infarcted cortex. Western blotting (WB) was used to observe the protein expression levels of GSK-3β, Nrf2, downstream heme oxygenase 1 (HO1) and NADPH quinone oxidoreductase 1 (NQO1) in four groups. The expression levels of GSK-3β and Nrf2 in the four groups were observed by immunohistochemistry and immunofluorescence. Results: (1) The Longa score of the MCAO/R, DMSO and DMF groups was found to be higher compared to the Sham group, indicating successful operation. The Longa score of the DMF group was lower than that of the other three groups 4-7 days after surgery (P&lt;0.05). (2) HE and Nissl staining showed that the DMF group had lower neuron necrosis and higher gliosis compared to the control groups. (3) TTC staining results showed that the infarct volume of the DMF group was significantly smaller than the MCAO/R and DMSO groups. (4) Protein results showed that the GSK-3β expression in the DMF group was lower than that in all groups, while the expression of Nrf2, HO1 and NQO1 was higher compared to other groups. Conclusion: DMF can reduce neurological deficits and infarct size in the MCAO/R model. The protective effect may be related to decreased GSK-3β expression and increased Nrf2 expression, which may play a role in anti-oxidative stress. abstract_id: PUBMED:25854559 Protective effect of paeoniflorin on the hippocampus in rats with cerebral ischemia-reperfusion through activating cannabinoid receptor 2 Objective: To investigate the protective effect of paeoniflorin on hippocampal neurons in rats subjected to cerebral ischemia and reperfusion through activating cannabinoid receptor 2 (CBR2). Methods: A total of 144 male SD rats were randomly divided into sham-operation group, cerebral ischemia-reperfusion model group, menstruum group, 10 and 40 mg/kg paeoniflorin groups, 3 mg/kg CBR2 selective antagonist AM630 group, 40 mg/kg paeoniflorin combined with 3 mg/kg AM630 group, and 3 mg/kg CBR2 selective agonist HU308 treatment group. Focal cerebral ischemia-reperfusion models were made by inserting a monofilament suture into internal carotid artery. The neurological scores, infarction volume and cerebral edema were detected carefully to find out the effect of paeoniflorin on neurons. Pathological changes were observed by HE staining. The expressions of caspase-3 and cyclooxygenase 2 (COX-2) in hippocampal CA1 region were determined by immunohistochemistry. Results: Paeoniflorin significantly decreased the neurological scores, infarction volume and cerebral edema. In addition, paeoniflorin relieved the pathological changes and inhibited the expressions of caspase-3 and COX-2 in hippocampus CA1 area. But injecting AM630 in advance obviously counteracted the neuroprotective effect of paeoniflorin. Conclusion: CBR2 may participate in the protective effect of paeoniflorin on hippocampal neurons of cerebral ischemia-reperfusion rat models. abstract_id: PUBMED:28579891 Protective effect of chlorogenic acid on the focal cerebral ischemia reperfusion rat models. Objective: The aim of the study was to investigate the protective characteristic of chlorogenic acid, a natural glucosyl xanthone found in Lonicera Japonica on the cerebral ischemia reperfusion injury and the underlying mechanism. Methods: Focal cerebral ischemia reperfusion model was built by blocking the left middle cerebral artery in rats by using the suture-occluded method. Before operation, the corresponding drugs were given for each group once a day for 7 days. After 1 h of final administration, the model was built, after operation, reperfusion was conducted for 22 h, Before the reperfusion 10 min tail vein injection of large, medium and small dose of chlorogenic acid and then mortality was calculated, and Neurological deficit score (NDS) was conducted, and serum was collected to measure the NSE level; a 2 mm thick brain slice located at the intersection of optic nerves was collected for TTC staining, and the percentage of cerebral infarction area was calculated; brain homogenate was collected to measure the ICAM-1, VCAM-1, EPO and HIF-1α levels in brain tissue of cerebral ischemia reperfusion rat models; NGF was detected using immunohistochemical method; the morphological changes in brain tissue was observed with HE staining. Results: All focal cerebral ischemia reperfusion rat models were duplicated successfully. Every chlorogenic acid group with different dosage can significantly reduce the mortality, NDS and cerebral infarction area of rats, and significantly increase the EPO, HIF-1α and NGF levels in brain tissue; significantly improve the pathological lesions of hippocampus and cortex in brain tissue. Conclusion: The results showed that chlorogenic acid could protect the focal cerebral ischemia reperfusion injury rat models by adjusting the inflammatory factor, hypoxia factor and nerve growth factor. abstract_id: PUBMED:25939745 The protective effect of polymerized porcine hemoglobin (pPolyHb) on transient focal cerebral ischemia/reperfusion injury. Glutaraldehyde-polymerized porcine hemoglobin (pPolyHb) is a hemoglobin-based oxygen carrier currently being developed as a potential red blood cell substitute. We assessed the protective effect of pPolyHb on transient focal cerebral ischemia/reperfusion (I/R) injury. Several outcomes were tested, including infarct size, neurological score, production of inflammatory factors, and markers of oxidative status. Our results show that pPolyHb can not only significantly reduce infarct volume and improve neurological score, but can also inhibit the expression of TNF-α and IL-1β. Furthermore, the generation of MDA and MPO was reduced by pPolyHb. We conclude that pPolyHb has a positive effect on transient focal cerebral I/R injury. abstract_id: PUBMED:11938928 Protective effect of hyperin against cerebral ischemia-reperfusion injury The protective effect of hyperin (Hyp) against cerebral ischemia-reperfusion injury was studied. On the cerebral ischemia-reperfusion model in mice, Hyp (50, 100 mg.kg-1, i.p.) was shown to markedly and dose-dependently inhibit the decrease of lactate dehydrogenase (LDH) in cerebrum and improve the learning and memory impairment on the step down test. On the four-vessel occlusion model in rats, Hyp(50 and 100 mg.kg-1, i.p.) significantly reduced the decreases of glutathione peroxidase(GSH-Px), superoxide dismutase(SOD) and LDH activities in the cerebrum. Hyp was also shown to inhibit the increase of nitric oxide (NO) and malondialdehyde (MDA) contents in the cerebrum and promote the recovery of EEG activities. These results suggest that Hyp has protective effect against cerebral ischemia-reperfusion injury via attenuating free radical and NO. abstract_id: PUBMED:28971689 Protective effect of 6-O-methyl-scutellarein on repeated cerebral ischemia/reperfusion in rats. Scutellarin (1) possesses protective effects against neuronal injury, while 6-O-methyl-scutellarein (3), as the main metabolite of scutellarin in vivo, has not been reported about its protective effects previously. The present study mainly investigated whether the neural injury caused by ischemia/reperfusion would be influenced by different doses of 6-O-methyl-scutellarein (3). The results of behavioral, neurological, and histological examinations indicated that 6-O-methyl-scutellarein (3) could improve neuronal injury, and exhibit significant difference among the various doses. More importantly, 6-O-methyl-scutellarein (3) had better protective effects than scutellarin in rat cerebral ischemia. abstract_id: PUBMED:30899173 Effect of Sargentodoxa cuneata total phenolic acids on focal cerebral ischemia reperfusion injury rats model. Objective: Explore the possible protective effect of Sargentodoxa cuneata total phenolic acids on cerebral ischemia reperfusion injury rats. Methods: Focal cerebral ischemia reperfusion rats model were established by linear thrombus. Nimodipine group, Naoluotong group, the high, middle and low dose of Sargentodoxa cuneata total phenolic acids groups were given related drugs via intragastric administration before operation for seven days, once a day. At the same time sham operation group, and ischemia reperfusion group were given the same volume of physiological saline. One hour after the last administration, establish focal cerebral ischemia- reperfusion model in rats by thread method, and the thread was taken out after 2 h ischemia to achieve cerebral ischemia reperfusion injury in rats. After reperfusion for 24 h, the rats were given neurologic deficit score. The brain tissue was taken to measure the levels of IL-6, IL-1β, TNF-α, Bcl-2, Bax, Casp-3 and ICAM-1; HE staining observed histopathological changes in the hippocampus and cortical areas of the brain; Immunohistochemistry was used to observe the expression of NGF and NF-KBp65. Result: Focal cerebral ischemia reperfusion rats model was copyed successed. Compared with model group, each dose group of Sargentodoxa cuneata total phenolic acids could decreased the neurologic deficit score (P &lt; 0.05 or P &lt; 0.01), decreased the levels of IL-6, IL-1β, ICAM-1, TNF-α, Bax and Caspase-3 in brain tissue (P &lt; 0.05 or P &lt; 0.01), increased the levels of IL-10, Bcl-2, NGF in brain tissue (P &lt; 0.05 or P &lt; 0.01), decreased the express of NF-KBp65 in brain (P &lt; 0.05 or P &lt; 0.01). Conclusion: Sargentodoxa cuneata total phenolic acids can improve focal cerebral ischemia reperfusion injury rats tissue inflammation, apoptosis pathway, increase nutrition factor to protect the neurons, reduce the apoptosis of nerve cells, activate brain cells self-protect, improve the histopathological changes in the hippocampus and cortical areas of the brain, reduce cerebral ischemia reperfusion injury. Answer: The provided abstracts do not mention Glp 2 (Glucagon-like peptide-2) or its effects on cerebral ischemia-reperfusion models. The abstracts discuss various other compounds and their protective effects against cerebral ischemia-reperfusion injury, such as piceatannol (PUBMED:34031841), mitogen- and stress-activated protein kinase (PUBMED:31529230), mailuoning injection (PUBMED:25204154), dimethyl fumarate (PUBMED:34751118), paeoniflorin (PUBMED:25854559), chlorogenic acid (PUBMED:28579891), polymerized porcine hemoglobin (PUBMED:25939745), hyperin (PUBMED:11938928), 6-O-methyl-scutellarein (PUBMED:28971689), and Sargentodoxa cuneata total phenolic acids (PUBMED:30899173). However, none of these abstracts provide information on Glp 2 or its potential neuroprotective effects in the context of cerebral ischemia-reperfusion injury. Therefore, based on the provided abstracts, it cannot be concluded that Glp 2 has a protective effect on cerebral ischemia-reperfusion models.
Instruction: Can you ever be too old to be too thin? Abstracts: abstract_id: PUBMED:37181529 Incidence of post-stroke cognitive impairment in patients with first-ever ischemic stroke: a multicenter cross-sectional study in China. Background: Post-stroke cognitive impairment (PSCI) is a common sequela after stroke. China has a large population of stroke survivors, but a large-scale survey on the incidence and risk factors for PSCI has not been undertaken. We aimed to calculate the incidence and risk factors for vascular cognitive symptoms among first-ever stroke survivors in China through a multicenter cross-sectional study. Methods: From May 1, 2019 to November 30, 2019, patients with a clinical diagnosis of first-ever ischemic stroke were recruited from 563 hospitalized-based stroke center networks in 30 provinces of China. Cognitive impairment was measured by 5-min National Institute of Neurological Disease and Stroke-Canadian Stroke Network (NINDS-CSN) at 3-6 months after the indexed stroke. Stepwise multivariate regression and stratified analysis were performed to assess the association between PSCI and demographic variables. Findings: A total of 24,055 first-ever ischemic stroke patients were enrolled, with an average age of 70.25 ± 9.88 years. The incidence of PSCI as per the 5-min NINDS-CSN was 78.7%. Age ≥75 years old (OR: 1.887, 95%CI: 1.391-2.559), western regional residence (OR: 1.620, 95%CI: 1.411-1.860) and lower education level were associated with increased PSCI risk. Hypertension might be related to non-PSCI (OR: 0.832, 95%CI: 0.779-0.888). For patients under 45 years old, unemployment was an independent risk factor for PSCI (OR: 6.097, 95%CI: 1.385-26.830). For patients who were residents of the southern region (OR: 1.490, 95%CI: 1.185-1.873) and non-manual workers (OR: 2.122, 95%CI: 1.188-3.792), diabetes was related to PSCI. Interpretation: PSCI is common in Chinese patients with first-ever stroke, and many risk factors are related to the occurrence of PSCI. Funding: The Beijing Hospitals Authority Youth Program (No. QMS20200801); Youth Program of the National Natural Science Foundation of China (No. 81801142); the Key Project of Science and Technology Development of China Railway Corporation (No. K2019Z005); The Capital Health Research and Development of Special (No. 2020-2-2014); Science and Technology Innovation 2030-Major Project (No. 2021ZD0201806). abstract_id: PUBMED:30841410 Effects of heat on first-ever strokes and the effect modification of atmospheric pressure: A time-series study in Shenzhen, China. Background: Stroke is a leading cause of death globally. Extreme temperatures may induce stroke, but evidence on the effects of heat on first-ever strokes is not clear. Low air pressure can lead to depression and an increase in blood pressure, and it may exacerbate the health impact of heat. In this study, we aimed to evaluate the effects of heat on first-ever strokes, the possible sensitive populations, and the effect of modification of atmospheric pressure. Methods: We collected data on 142,569 first-ever strokes during 2005-2016 in Shenzhen, a coastal city in southern China, with subtropical oceanic monsoon climate. We fitted a time-series Poisson model in our study, estimating the association between daily mean temperature and first-ever strokes in hot months, with a distributed lag non-linear model with 7 days of lag. We calculated strokes attributable to heat in various gender, age groups, household register types, stroke subtypes, and atmospheric pressure levels. Results: Heat had a significant cumulative association with first-ever strokes, and the risk of strokes increased with the rise in temperature after it was higher than 30 °C (the 85th percentile). In total, 1.95% (95% empirical CI 0.63-3.20%) of first-ever strokes were attributable to high temperature. The attributable fraction and attributable number of heat were statistically significant in male, female, middle-aged and old patients, immigrant patients, and CBI patients. The fraction attributable to heat was 3.33% in the low atmospheric pressure group, and the number of estimated daily attributable strokes at low atmospheric pressure levels was higher than that of medium and high atmospheric pressure levels (p &lt; 0.01). Conclusions: High temperatures in hot months may trigger first-ever strokes, and low atmospheric pressure may exacerbate the effect. We mainly found associations between heat and first-ever strokes for intracerebral hemorrhage, middle-aged and old patients, as well as immigrant patients. abstract_id: PUBMED:35300051 Sociodemographic, personal, peer, and familial predictors of e-cigarette ever use in ESPAD Ireland: A forward stepwise logistic regression model. Introduction: E-cigarette ever use has risen significantly in recent years in Ireland, similar to trends elsewhere in Europe, the United States, and Asia-Pacific region. Results from ESPAD Ireland (European School Survey Project on Alcohol and other Drugs) show teenage e-cigarette ever use increased from 18% (2015) to 37% (2019). Given this increase, our aim is to profile e-cigarette ever users and never users in this age group; to examine sociodemographic, personal, peer, and familial factors associated with e-cigarette ever use; and to suggest appropriate measures to reduce use. Methods: A nationally representative stratified random sample of 50 ESPAD schools was surveyed in 2019, with 3495 students aged 15-17 years. Bivariate and multivariable logistic regression analyses were performed using Stata version 16. Results: E-cigarette ever use was significantly associated with ever smoking (AOR=4.15; 95% CI: 1.29-13.41), ever cannabis use (AOR=2.21; 95% CI: 1.11-4.41) and ever inhalants use (AOR=2.51; 95% CI: 1.07-5.88). Children of university-educated mothers had significantly higher odds of e-cigarette ever use (AOR=3.46; 95% CI: 1.40-8.54). Associated with reduced AORs were reading books for enjoyment (AOR=0.32; 95% CI: 0.16-0.64), living in households where smoking was regulated (AOR=0.53; 95% CI: 0.30-0.94), and perceiving moderate risk in trying e-cigarettes once or twice (AOR=0.20; 95% CI: 0.07-0.67). Conclusions: E-cigarette ever use is part of a pattern of teenage polysubstance use including cigarette smoking, providing some support for the common liability theory. Regulation of smoking in the home, reading for enjoyment, and perceiving risk from e-cigarette use are associated with decreased likelihood of ever use, and higher parental education with increased likelihood. Thus, health education emphasizing the role of parents and risks of e-cigarette use is indicated to reduce the rise in e-cigarette ever use in teenagers. abstract_id: PUBMED:36879961 Modeling on number of children ever born and its determinants among married women of reproductive age in Ethiopia: A Poisson regression analysis. Background: One of the main components of population dynamics that determine the size, structure, and composition of a country's population is the number of ever-born children. Psychological, economic, social, and demographic factors all have a strong influence on and predict it. However, there is little information on its current status in Ethiopia. As a result, modeling the number of children ever born and its determinants is critical for the Ethiopian government to develop appropriate policies and programs. Methods: A total of 3260 eligible women were used as a study sample in this study to assess the number of children ever born and determinants among married reproductive age women in Ethiopia. Secondary data were culled from the 2019 Ethiopian Demography and Health Survey datasets. The factors associated with the number of children born were identified using a Poisson regression model (CEB). Results: The average number of children per mother was 6.09, with a standard deviation of 8.74. There were 2432 (74.6%) rural residents among the total respondents, 2402 (73.7%) have no formal education, and three out of five women are not currently working. The participants' average age was 41.66, with a standard deviation of 3.88. When compared to urban residents, the number of CEB for rural residents is 1.37 times higher. When compared to women with no education, the number of CEB for women with higher education was reduced by 48%. For every unit increase in respondents' current age, the percent change in the number of children ever born increases by 2.4%. For every unit increase in the family's wealth index status, the percent change in the number of children ever born decreases by 1.7%. Conclusion: When compared to the target of Ethiopia's health transformation plan, the average number of children born is higher. Improving the household wealth index, women's education, and employment status all contribute to a reduction in the number of CEB, which is important in balancing population growth with natural capacity and the country's economic development. abstract_id: PUBMED:30588293 Estimation of number of ever born children using zero truncated count model: evidence from Bangladesh Demographic and Health Survey. Fertility is an important demographic indicator for any country and there has always been a concern for number of ever born children to know the transition of fertility pattern intensively. Child ever born is the count response variable ranges from 1 to 15 and was originally collected by the Bangladesh Demographic and Health Survey (BDHS) considering the reproductive women who had given at least one birth. This study proposes zero truncated Poisson and zero truncated negative binomial regression models in order to find the best fitted model to estimate number of ever born children using BDHS 2014 dataset. Findings reveal that, the number of children increases with the increment of respondent's age but number of children declines if education status of respondents as well as their husbands' increases. Similarly, religion, wealth index and wanted last child have significantly influenced the number of child ever born. Surprisingly, the number of children ever born to a mother from rural area does not differ significantly from that of urban area in Bangladesh, though there exists a little fluctuation in the number of children ever born to a mother living in seven administrative divisions. Intension of contraceptive use has no influence on number of ever born children to a mother. abstract_id: PUBMED:37731567 Risk factors for stroke recurrence in young patients with first-ever ischemic stroke: A meta-analysis. Background: At present, the incidence rate of ischemic stroke in young people is increasing yearly, and the age of onset is increasingly young. Therefore, primary and secondary prevention of ischemic stroke in young people, especially secondary prevention, is particularly crucial. Aim: We aimed to comprehensively evaluate risk factors for stroke recurrence in first-ever young ischemic stroke (YIS) patients. Methods: The meta-analysis was used to quantitatively analyze the research results on risk factors for stroke recurrence in first-ever YIS patients both domestically and internationally. Stata12.0 software was used for heterogeneity testing, publication bias analysis, sensitivity analysis, and the calculation of combined odds ratios and 95% confidence intervals. Results: The odds ratio (OR) values of the relationship between hypertension and hyperlipidemia and recurrence of first-ever YIS were 1.54 (1.05-2.26) and 1.12 (1.00-1.25), respectively. The OR values of male sex, type 2 diabetes, smoking, drinking and YIS recurrence were 1.66 (0.98-2.79), 1.01 (0.64-1.59), 1.21 (0.83-1.76), and 1.28 (0.82-2.53), respectively. The relationship between male sex, type 2 diabetes, smoking, drinking and YIS recurrence was ambiguous. Conclusion: Hypertension and hyperlipidemia are important risk factors for stroke recurrence in first-ever YIS patients, and active intervention should be taken. abstract_id: PUBMED:27279814 The EVER genes - the genetic etiology of carcinogenesis in epidermodysplasia verruciformis and a possible role in non-epidermodysplasia verruciformis patients. In recent years, the two adjacent novel EVER1 and EVER2 genes have been identified, whose mutations are responsible for the development of epidermodysplasia verruciformis (EV). Epidermodysplasia verruciformis is a rare, autosomal recessive genodermatosis associated with increased risk of skin carcinoma. Up to now 7 mutations in the EVER1 gene and 5 mutations in the EVER2 gene have been identified only in EV. It was also determined that the EVER genes belong to a novel gene family, the transmembrane channel-like (TMC) family, and are responsible for properly functioning zinc homeostasis. These observations have given new insights into EV pathogenesis. abstract_id: PUBMED:33444167 Effects of diurnal temperature range on first-ever strokes in different seasons: a time-series study in Shenzhen, China. Objective: Diurnal temperature range (DTR) is an important meteorological indicator of global climate change; high values of DTR may induce stroke morbidity, while the related high-risk periods and sensitive populations are not clear. This study aims to evaluate the effects of DTR on first-ever strokes in different seasons and in relation to sensitive populations. Methods: We collected data on 142 569 first-ever strokes during 2005-2016 in Shenzhen. We fitted a time-series Poisson model in our study, estimating the associations between DTR and first-ever strokes, with a distributed lag non-linear model. Then, we calculated strokes attributable to high DTR in different genders, age groups, education levels and stroke subtypes. Results: High DTR had a significant association with first-ever strokes, and the risk of stroke increased with the rise of DTR in the summer and winter. In total, 3.65% (95% empirical CI (eCI) 1.81% to 5.53%) of first-ever strokes were attributable to high DTR (5.5°C and higher) in the summer, while 2.42% (95% eCI 0.05% to 4.42%) were attributable to high DTR (8°C and higher) in the winter. In the summer, attributable fraction (AF) was significant in both genders, middle-aged and old patients, patients with different levels of education, as well as patients with cerebral infarction (CBI); in the winter, AF was significant in middle-aged patients, patients with primary and lower education level, as well as patients with CBI. Conclusions: High DTR may trigger first-ever strokes in the summer and winter, and CBI is more sensitive than intracerebral haemorrhage to DTR. Most people are sensitive to high DTR in the summer, while middle-aged and low-education populations are sensitive in the winter. It is recommended that the DTR values be reported and emphasised in weather forecast services, together with the forecasts of heat and cold. abstract_id: PUBMED:34877411 Major epidemiological features of first-ever ischemic stroke in Tuzla Canton, Bosnia and Herzegovina. Background And Aims: Opacity of data on stroke for Bosnia and Herzegovina (B&amp;H) is mainly due to the lack of a unified national stroke registry. This article aims to present updated epidemiological data on the etiology and risk factors for first-ever ischemic stroke in Tuzla Canton, B&amp;H. Methods: This retrospective hospital-based study included all first-ever ischemic stroke patients admitted between January 1, 2018 and December 31, 2018 at the Neurology Department, University Clinical Center Tuzla. Results: First-ever ischemic stroke was diagnosed in 739 patients. Leading risk factors were hypertension (94%), diabetes mellitus (40.7%), and dyslipidemia (38.8%). The most common stroke subtypes were atherothrombotic (36.8%), cardioembolic (21.9%), and stroke of undetermined etiologies (19.2%). Mean NIHSS score at discharge was 13 (IQR 2-16), and favorable patient outcome (mRs ≤2) was recorded in 26.4% patients. Men (aOR 0.39; 95% CI 0.24-0.64) and younger patients (aOR 0.96; 95% CI 0.93-0.98) had significantly higher probability of having a favorable outcome at discharge. Dyslipidemia could be considered as a predictive factor for patient outcome (aOR 0.66; 95% CI 0.43-1.00). Conclusions: More than 92% of our patients had at least one modifiable risk factor, with hypertension and diabetes being at the forefront. One out of four patients had become functionally independent at discharge, while hospital mortality was lower than in other Eastern European countries. The overarching goal should be steered toward the development of a national stroke registry, which should be used as a reference for all further stroke management activities. abstract_id: PUBMED:32029977 Major Correlates and Socioeconomic Inequalities in Hysterectomy among Ever-Married Women in India. Purpose: This article focuses at the prevalence of hysterectomy, its major correlates, and the socioeconomic inequalities in the prevalence of hysterectomy among women in India. Methods: It used data from 527,865 ever-married women interviewed in 2015-2016 National Family Health Survey-4 from 601,000 households across 640 districts in the country. The economic inequalities in the prevalence of hysterectomy have been analyzed using poor-rich ratio and the concentration index (CI) in addition to the adjusted effects of major correlates through multiple logistic regression. Results: Unadjusted (9.3%) and adjusted prevalence of hysterectomy (odds ratio [OR] =7.3; P &lt; 0.001) are significantly higher among women aged 40 and above. Women from rural areas (OR = 1.2; P &lt; 0.001) and those who were formerly married (6%) were more likely to have undergone hysterectomy. Over two-thirds of hysterectomies were conducted in private health-care facilities, where 51% reported that excessive menstrual bleeding was the main reason for hysterectomy. The value of poor-rich ratio (0.79) and CI (0.121) clearly depicts that hysterectomy is more inclined to be concentrated among middle-to-richer class of women in India. Conclusions: Private health-care sector should have standard regulatory practices to deliver more efficient, accountable, and sustainable maternal health-care services. Answer: The provided abstracts do not directly address the question of whether one can be too old to be too thin. The abstracts focus on various topics such as post-stroke cognitive impairment, the effects of heat on first-ever strokes, e-cigarette use among teenagers, determinants of the number of children ever born, risk factors for stroke recurrence in young patients, the genetic etiology of carcinogenesis in epidermodysplasia verruciformis, the effects of diurnal temperature range on first-ever strokes, epidemiological features of first-ever ischemic stroke, and socioeconomic inequalities in hysterectomy among women in India. None of these studies discuss age in relation to being too thin or the health implications of low body weight in older adults. Therefore, based on the provided abstracts, it is not possible to answer the question about age and thinness. Additional information from geriatric health studies or research on the effects of low body mass index (BMI) in older populations would be required to provide an informed response to this question.
Instruction: Are diabetes risk scores useful for the prediction of cardiovascular diseases? Abstracts: abstract_id: PUBMED:23659777 Are diabetes risk scores useful for the prediction of cardiovascular diseases? Assessment of seven diabetes risk scores in the KORA S4/F4 cohort study. Aim: To evaluate the utility of diabetes prediction models for CVD prediction as stated in two earlier studies. Methods: 845 subjects from the population based German KORA (Cooperative Health Research in the Region of Augsburg) S4/F4 cohort study (aged 55 to 74 years, without diabetes, former stroke, and former myocardial infarction at baseline) were followed for up to ten years for incident stroke and myocardial infarction. Seven diabetes risk scores developed from four different studies were applied to the KORA cohort to assess their predictive ability for CVD. Results: Areas under the receiver-operating curve (AROCs) for the prediction of CVD ranged from 0.60 to 0.65 when diabetes risk scores were applied to the KORA cohort. When diabetes risk scores were used to predict CVD and type 2 diabetes, respectively, AROCs for the prediction of CVD were 0.09 to 0.24 lower than AROCs for the prediction of type 2 diabetes. Furthermore, we used KORA data to develop prediction models for either diabetes or CVD, and found that they differed widely in selected predictor variables. Conclusion: In the older population, diabetes risk scores are not useful for the prediction of CVD, and prediction models for diabetes and CVD, respectively, require different parameters. abstract_id: PUBMED:32122200 Underperformance of clinical risk scores in identifying vascular ultrasound-based high cardiovascular risk in systemic lupus erythematosus. Aims: The aim of this study was to assess the performance of eight clinical risk prediction scores to identify individuals with systemic lupus erythematosus (SLE) at high cardiovascular disease (CVD) risk, as defined by the presence of atherosclerotic plaques. Methods: CVD risk was estimated in 210 eligible SLE patients without prior CVD or diabetes mellitus (female: 93.3%, mean age: 44.8 ± 12 years) using five generic (Systematic Coronary Risk Evaluation (SCORE), Framingham Risk Score (FRS), Pooled Cohort Risk Equations (ASCVD), Globorisk, Prospective Cardiovascular Münster Study risk calculator (PROCAM)) and three 'SLE-adapted' (modified-SCORE, modified-FRS, QRESEARCH risk estimator, version 3 (QRISK3)) CVD risk scores, as well as ultrasound examination of the carotid and femoral arteries. Calibration, discrimination and classification measures to identify high CVD risk based on the presence of atherosclerotic plaques were assessed for all risk models. CVD risk reclassification was applied for all scores by incorporating ultrasound results. Results: Moderate calibration (p-value range from 0.38 to 0.63) and discrimination (area under the curve 0.73-0.84), and low-to-moderate sensitivity (8.3-71.4%) and classification ability (Matthews correlation coefficient (MCC) 0.25-0.47) were observed for all risk models to identify patients with plaques at any arterial site as high-risk. MCC was improved for modified-FRS versus FRS (0.43 vs 0.36), but not for modified-SCORE versus SCORE (0.25 vs 0.25). Based on plaque presence, CVD risk was upgraded to high-risk in 10%, 16.1%, 20.5%, 21.5%, 24%, 28.2% and 28.6% of cases classified as non-high-risk by QRISK3, modified-FRS, Globorisk, FRS/PROCAM, ASCVD, modified-SCORE and SCORE, respectively. Conclusions: Most of the five generic and three 'SLE-adapted' clinical risk scores underestimated high CVD risk defined by atherosclerotic plaque presence in patients with SLE. abstract_id: PUBMED:25987961 Assessment of cardiovascular risk in diabetes: Risk scores and provocative testing. Cardiovascular disease (CVD) is the leading cause of morbidity and mortality among patients with diabetes mellitus, who have a risk of cardiovascular mortality two to four times that of people without diabetes. An individualised approach to cardiovascular risk estimation and management is needed. Over the past decades, many risk scores have been developed to predict CVD. However, few have been externally validated in a diabetic population and limited studies have examined the impact of applying a prediction model in clinical practice. Currently, guidelines are focused on testing for CVD in symptomatic patients. Atypical symptoms or silent ischemia are more common in the diabetic population, and with additional markers of vascular disease such as erectile dysfunction and autonomic neuropathy, these guidelines can be difficult to interpret. We propose an algorithm incorporating cardiovascular risk scores in combination with typical and atypical signs and symptoms to alert clinicians to consider further investigation with provocative testing. The modalities for investigation of CVD are discussed. abstract_id: PUBMED:33891687 Underperformance of clinical risk scores in identifying vascular ultrasound-based high cardiovascular risk in systemic lupus erythematosus. Aims: The aim of this study was to assess the performance of eight clinical risk prediction scores to identify individuals with systemic lupus erythematosus (SLE) at high cardiovascular disease (CVD) risk, as defined by the presence of atherosclerotic plaques. Methods: CVD risk was estimated in 210 eligible SLE patients without prior CVD or diabetes mellitus (female: 93.3%, mean age: 44.8 ± 12 years) using five generic (Systematic Coronary Risk Evaluation (SCORE), Framingham Risk Score (FRS), Pooled Cohort Risk Equations (ASCVD), Globorisk, Prospective Cardiovascular Münster Study risk calculator (PROCAM)) and three 'SLE-adapted' (modified-SCORE, modified-FRS, QRESEARCH risk estimator, version 3 (QRISK3)) CVD risk scores, as well as ultrasound examination of the carotid and femoral arteries. Calibration, discrimination and classification measures to identify high CVD risk based on the presence of atherosclerotic plaques were assessed for all risk models. CVD risk reclassification was applied for all scores by incorporating ultrasound results. Results: Moderate calibration (p-value range from 0.38 to 0.63) and discrimination (area under the curve 0.73-0.84), and low-to-moderate sensitivity (8.3-71.4%) and classification ability (Matthews correlation coefficient (MCC) 0.25-0.47) were observed for all risk models to identify patients with plaques at any arterial site as high-risk. MCC was improved for modified-FRS versus FRS (0.43 vs 0.36), but not for modified-SCORE versus SCORE (0.25 vs 0.25). Based on plaque presence, CVD risk was upgraded to high-risk in 10%, 16.1%, 20.5%, 21.5%, 24%, 28.2% and 28.6% of cases classified as non-high-risk by QRISK3, modified-FRS, Globorisk, FRS/PROCAM, ASCVD, modified-SCORE and SCORE, respectively. Conclusions: Most of the five generic and three 'SLE-adapted' clinical risk scores underestimated high CVD risk defined by atherosclerotic plaque presence in patients with SLE. abstract_id: PUBMED:36006556 The comparison of cardiovascular disease risk prediction scores and evaluation of subclinical atherosclerosis in rheumatoid arthritis: a cross-sectional study. Objectives: Primary objectives estimated prevalence of traditional cardiovascular disease (CVD) risk factors and compared different CVD risk prediction algorithms in an Indian rheumatoid arthritis (RA) population. Secondary objectives evaluated associations between carotid intima-media thickness (CIMT) and subclinical atherosclerosis (SCA) with CVD risk factors and CVD risk scores. Methods: The presence of CVD risk factors were recorded, and 10-year CVD risk was predicted using Framingham risk scoring (FRS) using lipids (FRS-Lipids), FRS using body mass index (FRS-BMI), QRISK-2, SCORE, and the algorithm recommended by ACC/AHA (ASCVD). CIMT was measured on the far-wall of the common carotid artery. Subclinical atherosclerosis was defined as CIMT &gt; 0.9 mm or the presence of carotid plaque. Results: A total of 332 patents were enrolled, 12% had diabetes mellitus, 21.4% hypertension, and 6.9% were current/past smokers. Proportions of RA with predicted 10-year CVD risk &gt; 10% varied from 16.2 to 41.9% between scores. Highest magnitude of risk was predicted by FRS-BMI. Agreement between scores in predicting risk was moderate in general. Mean CIMT was 0.70 ± 0.15 mm. Age, male sex, and extra-articular manifestations associated with greater CIMT. All risk scores except SCORE moderately correlated with CIMT. About one-seventh had SCA defined as CIMT &gt; 0.9 mm or the presence of carotid plaques, associated with increasing age, male gender, or higher ratio of total cholesterol to high-density lipoprotein cholesterol. ASCVD and QRISK-2 scores had maximum area under curve for distinguishing SCA. Conclusion: Individual CVD risk scores predict 10-year CVD risk differently in Indian patients with RA, and require validation for predicting hard end points (CVD events, mortality). Key Points • Diabetes mellitus and hypertension are the most prevalent cardiovascular disease risk factors in Indian patients with RA. • Individual cardiovascular risk prediction scores predict risk differently in Indian patients with RA, highest risk being predicted by the FRS-BMI. • Carotid intima-media thickness in RA associated with increasing age, male sex and extra-articular manifestations. • 14% RA had subclinical atherosclerosis, associated with increasing age, male sex, and higher total cholesterol to HDL-C ratio, best distinguished by ASCVD and QRISK-2 scores. abstract_id: PUBMED:35032176 Cardiovascular risk prediction in type 2 diabetes: a comparison of 22 risk scores in primary care settings. Aims/hypothesis: We aimed to compare the performance of risk prediction scores for CVD (i.e., coronary heart disease and stroke), and a broader definition of CVD including atrial fibrillation and heart failure (CVD+), in individuals with type 2 diabetes. Methods: Scores were identified through a literature review and were included irrespective of the type of predicted cardiovascular outcome or the inclusion of individuals with type 2 diabetes. Performance was assessed in a contemporary, representative sample of 168,871 UK-based individuals with type 2 diabetes (age ≥18 years without pre-existing CVD+). Missing observations were addressed using multiple imputation. Results: We evaluated 22 scores: 13 derived in the general population and nine in individuals with type 2 diabetes. The Systemic Coronary Risk Evaluation (SCORE) CVD rule derived in the general population performed best for both CVD (C statistic 0.67 [95% CI 0.67, 0.67]) and CVD+ (C statistic 0.69 [95% CI 0.69, 0.70]). The C statistic of the remaining scores ranged from 0.62 to 0.67 for CVD, and from 0.64 to 0.69 for CVD+. Calibration slopes (1 indicates perfect calibration) ranged from 0.38 (95% CI 0.37, 0.39) to 0.74 (95% CI 0.72, 0.76) for CVD, and from 0.41 (95% CI 0.40, 0.42) to 0.88 (95% CI 0.86, 0.90) for CVD+. A simple recalibration process considerably improved the performance of the scores, with calibration slopes now ranging between 0.96 and 1.04 for CVD. Scores with more predictors did not outperform scores with fewer predictors: for CVD+, QRISK3 (19 variables) had a C statistic of 0.68 (95% CI 0.68, 0.69), compared with SCORE CVD (six variables) which had a C statistic of 0.69 (95% CI 0.69, 0.70). Scores specific to individuals with diabetes did not discriminate better than scores derived in the general population: the UK Prospective Diabetes Study (UKPDS) scores performed significantly worse than SCORE CVD (p value &lt;0.001). Conclusions/interpretation: CVD risk prediction scores could not accurately identify individuals with type 2 diabetes who experienced a CVD event in the 10 years of follow-up. All 22 evaluated models had a comparable and modest discriminative ability. abstract_id: PUBMED:37899208 Development of new scores for atherosclerotic cardiovascular disease using specific medical examination items: the Suita Study. Background: We previously developed risk models predicting stroke, coronary heart disease (CHD), and cardiovascular disease (CVD) among Japanese people from the Suita Study. Yet, applying these models at the national level was challenging because some of the included risk factors differed from those collected in the Japanese governmental health check-ups, such as Tokutei-Kenshin. We, therefore, conducted this study to develop new risk models for stroke, CHD, and atherosclerotic CVD (ASCVD), based on data from the Suita Study. The new models used traditional cardiovascular risk factors similar to those in the Japanese governmental health check-ups. Methods: We included 7,413 participants, aged 30-84 years, initially free from stroke and CHD. All participants received baseline health examinations, including a questionnaire assessing their lifestyle and medical history, medical examination, and blood and urine analysis. The risk factors of stroke, CHD, and ASCVD (cerebral infarction or CHD) were determined using the multivariable-adjusted Cox regression. The models' performance was assessed using the C-statistics for discrimination and the Hosmer-Lemeshow for calibration. We also developed three simple scores (zero to 100) that could predict the 10-year incidence of stroke, CHD, and ASCVD. Results: Within 110,428 person-years (median follow-up = 16.6 years), 410 stroke events, 288 CHD events, and 527 ASCVD events were diagnosed. Age, smoking, hypertension, and diabetes were associated with stroke, CHD, and ASCVD risk. Men and those with decreased high-density lipoproteins or increased low-density lipoproteins showed a higher risk of CHD and ASCVD. Urinary proteins were associated with an increased risk of stroke and ASCVD. The C-statistic values of the risk models were &gt;0.750 and the p-values of goodness-of-fit were &gt;0.30. The 10-year incidence of stroke, CVD, and ASCVD events was 3.8%, 3.5%, and 5.7% for scores 45-54, 10.3%, 11.8%, and 19.6% for scores 65-74, and 27.7%, 23.5%, and 60.5% for scores ≥85, respectively. Conclusions: We developed new Suita risk models for stroke, CHD, and ASCVD using variables similar to those in the Japanese governmental health check-ups. We also developed new risk scores to predict incident stroke, CHD, and ASCVD within 10 years. abstract_id: PUBMED:33951355 Clinical risk scores for the prediction of incident atrial fibrillation: a modernized review. Atrial fibrillation (AF) is considered the most common sustained arrhythmia. Major cardiovascular risk factors that have been identified to initiate and perpetuate AF include age, sex, arterial hypertension, heart failure, valvular heart disease and diabetes mellitus. In the literature, several studies aimed to formulate easily - applied and accurate risk stratification scores, based on antecedent cardiovascular events, comorbidities and biomarkers for the prediction of new-onset AF. The present narrative review addresses the most universally accepted and efficient clinical scores, with an extended applicability in different populations and ages, particularly scores derived from the Framingham Heart Study, the Atherosclerosis Risk in Communities, the Malmo Diet and Cancer Study, as well as the CHARGE-AF, the CHADS2, CHA2DS2-VASc, HATCH and CH2EST scores. Identification of incident AF can be challenging, thus dictating for utilization of validated clinical instruments in everyday clinical practice. abstract_id: PUBMED:33302846 Risk Models and Scores of Cardiovascular Disease in Patients with Diabetes Mellitus. Diabetes mellitus (DM) is an established risk factor for atherosclerotic cardiovascular disease (CVD), and patients with DM are at a two to four-fold higher cardiovascular risk, including myocardial infraction, unstable angina, stroke, and heart failure. All of the above have arisen interest in CVD preventive strategies by the use of non-invasive methods, such as risk scores. The most common approach is to consider DM as a CVD equivalent and, therefore, to treat patients with DM in a similar way to those who required secondary CVD prevention. However, this approach has been disputed as all patients with DM do not have the same risk for CVD, and since other potentially important factors within the context of DM, such as DM duration, presence of albuminuria, and comorbidities, should be taken into consideration. Thus, the second and third approach is the application of risk models that were either developed initially for the general population or designed specifically for patients with DM, respectively. This review summarizes the evidence and implications for clinical practice regarding these scores. Up to date, several models that can be applied to the diabetic population have been proposed. However, only a few meet the minimum requirement of adequate external validation. In addition, moderate discrimination and poor calibration, which might lead to inaccurate risk estimations in populations with different characteristics, have been reported. Therefore, future research is needed before recommending a specific risk model for universal clinical practice in the management of diabetes. abstract_id: PUBMED:33444330 Polygenic risk scores in cardiovascular risk prediction: A cohort study and modelling analyses. Background: Polygenic risk scores (PRSs) can stratify populations into cardiovascular disease (CVD) risk groups. We aimed to quantify the potential advantage of adding information on PRSs to conventional risk factors in the primary prevention of CVD. Methods And Findings: Using data from UK Biobank on 306,654 individuals without a history of CVD and not on lipid-lowering treatments (mean age [SD]: 56.0 [8.0] years; females: 57%; median follow-up: 8.1 years), we calculated measures of risk discrimination and reclassification upon addition of PRSs to risk factors in a conventional risk prediction model (i.e., age, sex, systolic blood pressure, smoking status, history of diabetes, and total and high-density lipoprotein cholesterol). We then modelled the implications of initiating guideline-recommended statin therapy in a primary care setting using incidence rates from 2.1 million individuals from the Clinical Practice Research Datalink. The C-index, a measure of risk discrimination, was 0.710 (95% CI 0.703-0.717) for a CVD prediction model containing conventional risk predictors alone. Addition of information on PRSs increased the C-index by 0.012 (95% CI 0.009-0.015), and resulted in continuous net reclassification improvements of about 10% and 12% in cases and non-cases, respectively. If a PRS were assessed in the entire UK primary care population aged 40-75 years, assuming that statin therapy would be initiated in accordance with the UK National Institute for Health and Care Excellence guidelines (i.e., for persons with a predicted risk of ≥10% and for those with certain other risk factors, such as diabetes, irrespective of their 10-year predicted risk), then it could help prevent 1 additional CVD event for approximately every 5,750 individuals screened. By contrast, targeted assessment only among people at intermediate (i.e., 5% to &lt;10%) 10-year CVD risk could help prevent 1 additional CVD event for approximately every 340 individuals screened. Such a targeted strategy could help prevent 7% more CVD events than conventional risk prediction alone. Potential gains afforded by assessment of PRSs on top of conventional risk factors would be about 1.5-fold greater than those provided by assessment of C-reactive protein, a plasma biomarker included in some risk prediction guidelines. Potential limitations of this study include its restriction to European ancestry participants and a lack of health economic evaluation. Conclusions: Our results suggest that addition of PRSs to conventional risk factors can modestly enhance prediction of first-onset CVD and could translate into population health benefits if used at scale. Answer: The utility of diabetes risk scores for predicting cardiovascular diseases (CVD) has been evaluated in several studies, with varying results. In the KORA S4/F4 cohort study, seven diabetes risk scores developed from four different studies were applied to assess their predictive ability for CVD. The results indicated that the areas under the receiver-operating curve (AROCs) for the prediction of CVD ranged from 0.60 to 0.65, which is relatively low. When these scores were used to predict CVD and type 2 diabetes respectively, AROCs for CVD prediction were 0.09 to 0.24 lower than those for type 2 diabetes prediction. The study concluded that in the older population, diabetes risk scores are not useful for the prediction of CVD, and that prediction models for diabetes and CVD require different parameters (PUBMED:23659777). Another study assessed the performance of eight clinical risk prediction scores in identifying individuals with systemic lupus erythematosus (SLE) at high CVD risk, as defined by the presence of atherosclerotic plaques. The study found that both generic and 'SLE-adapted' clinical risk scores generally underestimated high CVD risk in patients with SLE (PUBMED:32122200, PUBMED:33891687). A cross-sectional study in an Indian rheumatoid arthritis (RA) population compared different CVD risk prediction algorithms and evaluated associations between carotid intima-media thickness (CIMT) and subclinical atherosclerosis (SCA) with CVD risk factors and scores. The study found that individual CVD risk scores predicted 10-year CVD risk differently, and there was a need for validation for predicting hard endpoints (CVD events, mortality) (PUBMED:36006556). A comparison of 22 risk scores for CVD in individuals with type 2 diabetes in primary care settings revealed that all evaluated models had a comparable and modest discriminative ability and could not accurately identify individuals who experienced a CVD event in the 10 years of follow-up (PUBMED:35032176). In summary, while diabetes risk scores may have some predictive ability for CVD, their utility is limited, and they often do not perform well in predicting CVD events, especially in specific populations such as those with SLE or RA.
Instruction: Acetone-free nail polish removers: are they safe? Abstracts: abstract_id: PUBMED:34754632 Acetone Ingestion Resulting in Cardiac Arrest and Death. Acetone is a chemical found naturally in the environment, and acetone poisoning can occur through contact with household products that contain acetone, including nail polish removers, paint removers, some polishes, as well as certain detergents and cleaners. Acetone toxicity affects almost all body systems, including the nervous, respiratory, cardiovascular, and endocrine systems. The incidence of life-threatening medical problems from acetone exposure is very rare. However, in this report, we present a case of acetone toxicity resulting in death. Our patient presented with cardiopulmonary arrest, hypotension, tachycardia, hyperglycemia, acute kidney injury (AKI), thrombocytopenia, elevated liver enzymes, and extensive esophageal and gastric erosion with associated upper gastrointestinal bleeding. abstract_id: PUBMED:16035208 Acetone-free nail polish removers: are they safe? Introduction: The following case report highlights the dangers of acetone-free nail polish remover products which are widely regarded as safe. Case Report: A 15-month-old presented with coma and cardiorespiratory collapse after sucking on Boots Conditioning Nail Polish Remover Pads, which was complicated by features of upper airway obstruction, bilateral pneumothoraces, and pneumomediastinum. The child required intubation and ventilation for 5 days along with full supportive management before making a complete recovery. The clinical features in this case resemble those of poisoning due to gamma butyrolactone, the principal component of this nail polish remover. Gamma butyrolactone is rapidly metabolized to gamma hydroxybutyrate on ingestion, a compound whose toxicity is well recognized. Package labeling gave no indication of the potential toxicity of its components. Conclusion: We wish to draw attention to the potential toxicity of gamma butyrolactone containing nail polish removers and the need for appropriate labeling of product packaging. Since alternative products are available, these products should be removed from the market. abstract_id: PUBMED:30628153 The effect of various primers improving adhesiveness of gel polish hybrids on pH, TOWL and overall nail plates condition. Background: Hybrid manicure is now a popular method of nail care and nail art, which is associated with its durability. Unfortunately, it has an adverse effect on pH, TOWL and overall nail plates condition and has not been investigated in detail so far. Aims: The aim of this study was to compare the pH and TOWL of the hand nails after using different primers. The effect of application time, breaks between applications and the method of curing and removal on those parameters and overall nail plates condition were evaluated. Patients/methods: An evaluation survey was conducted among 116 women. pH and TOWL measurements of the nail plate in 35 women were taken using the Courage &amp; Khazaka. Clinical photos were made using the Fotomedicus system. Results: The measured pH values of the nail plate in the test group with gel polish hybrid on the nails were from 5.50 to 6.65, while after removal of the gel polish hybrid, the values ranged from 5.63 to 6.68. TOWL of the nail plate covered with gel polish hybrid ranged from 2.9 to 33.2 g/m2 /h, whereas after removal of hybrids-from 1.9 to 45.7 g/m2 /h. Conclusion: Different forms of acid-free primers maintain a lower pH of the nail plate covered with a gel polish hybrid, while the acidic primer maintains higher values of the nail plate pH. The significant decrease in the nail plate TOWL following the application of gel polish hybrid indicates a reduced loss of water from the surface of the nail to the atmosphere. abstract_id: PUBMED:24575856 An Initial lexicon of sensory properties for nail polish. Objective: The objective of this study was to develop an initial lexicon for sensory properties of nail polish and to validate this lexicon using a descriptive analysis study of selected samples. Methods: Seventeen commercial products from four categories (regular, flake-containing, water-based and gel) were used in this study. Descriptive sensory analysis was conducted in this study to characterize and evaluate application and removal properties of these nail polishes. Data was then processed by ANOVA, Principal Component Analysis (PCA) and Pearson's Correlation Coefficient analysis to explore the differences among samples and attributes. Results: A lexicon of 21 sensory attributes was developed to describe the application of nail polish. It included three initial texture attributes, thirteen initial appearance attributes and five aroma attributes. A lexicon of five attributes in five stages was developed to describe the removal of nail polish. The results from ANOVA and PCA showed that attributes in the lexicon separated the different product categories. Conclusion: The results of this study indicated that descriptive sensory analysis can be used to evaluate nail polish. The results of this study present scientists who are working on nail polish an additional tool to describe application and removal properties of nail polish. abstract_id: PUBMED:33445102 Exposures in nail salons to trace elements in nail polish from impurities or pigment ingredients - A pilot study. Nail polishes have evolved considerably. Toxic elements, such as lead, have been found in nail polish, and it is unclear if new finishes using metallic effect pigments may be contributing to metals exposure in nail technicians. We characterized concentrations of trace elements in 40 nail polishes, 9 technicians' urine, and 20 technicians' toenail clippings from 8 nail salons in the Boston area in 2017. We also collected 24 salon surface wipes from 3 of the salons. Antimony was not disclosed as a nail polish ingredient, yet concentrations (&lt;15 μg/g) were above existing cosmetics guidelines (0.5 μg/g) in five (13%) of the samples. Aluminum (&lt;11,450 μg/g), barium (&lt;11,250 μg/g), iron (&lt;3,270 μg/g), and magnesium (&lt;2375 μg/g) were disclosed as ingredients and were also found on salon surfaces where nail polish was stored or used. Heavy metal impurities in nail polish were not detected for cadmium. Lead and nickel were found at low concentrations (&lt;0.40 μg/g lead, &lt;0.67 μg/g nickel). Tin (p = 0.003) concentrations were higher in nail polish with finishes compared to without. Barium and strontium (both p = 0.0001) concentrations were higher for red nail polishes compared to all other colors. Of those elements in nail polish and salon surfaces, aluminum and iron were detected in toenails, manganese was detected in urine and toenails, and barium was detected in urine at comparable levels to the general population. Besides preventable antimony levels in nail polish, individual metals in nail polish did not appear to be from impurities but mainly from colorants (i.e., pigments) and not major contributors to nail technician exposure. It is unclear if low-level chronic metals mixtures in nail salons are of health concern. abstract_id: PUBMED:33135770 Side-effects Associated with Gel Nail Polish: A Self-questionnaire Study of 2,118 Respondents. Gel nail polish is commonly used in manicures; how-ever, research into the side-effects of gel nail polish is scarce and focusses mainly on allergic contact dermatitis. The aim of this study was to assess the frequency and characteristics of side-effects associated with use of gel nail polish. A self-questionnaire survey was conducted on a representative sample of individuals (n = 2,118, all female). Of these, 48.3% reported side-effects while applying gel nail polish, approximately 20% during wearing it, and more than 75% after removing the polish. Frequency of changes in the nail plates was significantly higher after removing the gel nail polish than when applying or wearing it (p &lt; 0.0001). Frequency of changes in the nail plates was associated with whether the procedure was performed by professionals or non-professionals. Education about the risk of side-effects and sensitization is crucial for people using gel nail polish. abstract_id: PUBMED:26095266 Sensory factors affecting female consumers' acceptability of nail polish. Objective: The objectives of this study were to determine what sensory factors impact consumers' acceptability of nail polishes, to explore how these sensory factors impact consumers' acceptability of nail polishes, to investigate whether there are any consumer segments according to their overall acceptability on different nail polishes and to scrutinize how the consumer segments are related to the sensory factors. Methods: Ninety-eight females participated in a nail polish consumer study at Kansas State University. Eight commercial products belonging to four categories - regular (REG), gel (GEL), flake (FLK) and water-based (WAT) - were evaluated. Each nail polish sample was evaluated twice by each participant in two different tasks - a task devoted to applying and evaluating the product and a task devoted to observing the appearance and evaluating the product. Pearson's correlation analysis, analysis of variance (ANOVA), external preference mapping, cluster analysis and internal preference mapping were applied for data analysis. Results: Participants' scores of overall liking of the nail polishes were similar in the application task and in the observation task. In general, participants liked the REG and GEL product samples more than the FLK and WAT samples. Among all the sensory attributes, appearance attributes were the major factors that affected participants' overall liking. Aroma seemed to be a minor factor to participants' overall liking. Some sensory attributes, such as runny, shininess, opacity, spreadability, smoothness, coverage and wet appearance, were found to drive participants' overall acceptability positively, whereas others such as pinhole, fatty-edges, blister, brushlines, pearl-like, flake-protrusion, glittery and initial-drag impacted participants' overall acceptability negatively. Four clusters of participants were identified according to their overall liking scores from both the application task and the observation task. Conclusions: Participants' acceptability, based on different sensory attributes, could help a nail polish manufacturer modify or improve their nail polish formulas. Nail polish manufacturers could use the consumer cluster information to improve their marketing strategies for specific categories of their products and to target their advertising on particular consumer groups. abstract_id: PUBMED:24054173 The effect of nail polish on pulse oximetry readings. Introduction: Pulse oximeters utilise the pulsatile nature of arterial blood flow to distinguish it from venous flow and estimate oxygen saturation in arterial blood. Pulse oximetry is primarily used in hospital wards, emergency rooms, intensive care units, operating rooms and home care. Aim: The objective of this study is to determine whether the use of nail polish of various colours have an effect on oximeter readings of oxygen saturation value. Method: The sample group of this study is comprised of 40 healthy women. In the first phase of the study, readings were taken on left and right hand fingers, with no nail polish, to determine any differences in oxygen saturation value. In the second phase of the study, 10 different colours of nail polish, namely dark red, yellow, dark blue, green, purple, brown, white, metallic, black and pink, of the same brand were applied. Readings were recorded once oxygen saturation values on the screen became stable. Number and percentage distributions along with Wilcoxon signed ranks and Friedman test were used in the analysis of data. Conclusion: Only red nail polish did not yield statistically significant reading results. We conclude that different nail polish colours cause a clinically significant change in pulse oximeter readings in healthy volunteers. abstract_id: PUBMED:36601139 Nail Polish Remover Induced Methemoglobinemia: An Uncommon Occurrence. Methemoglobinemia, defined as hemoglobin's impaired oxygen-carrying capacity due to oxidation from the ferrous (Fe2+) state to the ferric (Fe3+) state, has many well-documented etiologies. One example of an uncommon cause of acquired methemoglobinemia is the ingestion of nail polish remover, which can contain methemoglobin generators such as nitroethane, N,N-dimethyl-p-toluidine, and isobutyl nitrite. We present a case of methemoglobinemia in an 81-year-old male following accidental ingestion of isobutyl nitrite-containing nail polish remover, commonly used as a recreational inhalant. Furthermore, we review potentially toxic substances found in commercially available nail products. This case was designed to identify and efficiently treat a rather uncommon cause of methemoglobinemia induced in this case by a common household item, nail polish remover. abstract_id: PUBMED:30643774 Single-Center, Prospective, Blinded Study Comparing the Efficacy and Compatibility of Efinaconazole 10% Solution in Treating Onychomycosis with and without Concurrent Nail Polish Use. Background: Topical efinaconazole 10% solution is a promising new treatment for distal lateral subungual onychomycosis (DLSO). However, it is unknown whether this treatment is both compatible and efficacious in individuals wearing toenail polish. Materials And Methods: We evaluated the efficacy and compatibility of efinaconazole 10% solution with concurrent nail polish use in treating DLSO over 52 weeks. Efficacy was assessed using the onychomycosis severity index (OSI) and by measuring nail growth and thickness, while compatibility with nail polish was evaluated with questionnaires. Results: Eleven patients completed the study; 6 wore nail polish regularly and 5 abstained from polish. The efficacy of efinaconazole was not diminished by concurrent nail polish use as measured by OSI, nail growth, and thickness. However, this treatment produced undesirable cosmetic changes to the quality of nail polish over time. Conclusions: While efinaconazole 10% solution is an effective treatment of DLSO in patients wearing nail polish, this treatment may diminish the quality of the polish. Further research and development is needed to enhance the compatibility of topical onychomycosis treatments with nail polish use. Answer: Acetone-free nail polish removers are often perceived as safer alternatives to those containing acetone, but they can still pose significant health risks. A case report highlights the dangers associated with acetone-free nail polish remover products, which are widely regarded as safe. A 15-month-old child experienced coma and cardiorespiratory collapse after sucking on acetone-free nail polish remover pads, which contained gamma butyrolactone. This substance is rapidly metabolized to gamma hydroxybutyrate upon ingestion, a compound known for its toxicity. The child required intensive medical intervention, including intubation and ventilation for 5 days, and presented with complications such as upper airway obstruction, bilateral pneumothoraces, and pneumomediastinum. The packaging of the product did not indicate the potential toxicity of its components, leading to the conclusion that there is a need for appropriate labeling of product packaging and that such products should be removed from the market due to their potential toxicity (PUBMED:16035208). This case suggests that while acetone-free nail polish removers do not contain acetone, they can still contain other harmful chemicals that may lead to serious health issues, especially when ingested or when there is prolonged exposure. Therefore, it is important for consumers to be aware of the ingredients in nail polish removers and for manufacturers to provide clear warnings about the potential risks associated with their use.
Instruction: Does adrenal mass size really affect safety and effectiveness of laparoscopic adrenalectomy? Abstracts: abstract_id: PUBMED:12429302 Does adrenal mass size really affect safety and effectiveness of laparoscopic adrenalectomy? Objectives: To evaluate the effectiveness and safety of laparoscopic adrenalectomy with regard to adrenal mass size, as well as to consider its clinical and pathologic patterns. Laparoscopy is today considered the first-choice treatment of many adrenal diseases, although its use is still controversial for large adrenal masses and incidentally found adrenal cortical carcinoma. Methods: A total of 125 patients underwent lateral transperitoneal laparoscopic adrenalectomy. The indications were either functioning or nonfunctioning adrenal masses, without any radiologic evidence of involvement of the surrounding structures. The correlation between the size and the operative times, estimated blood loss, incidence of intraoperative and postoperative complications, and length of hospital stay were studied with Pearson's correlation coefficient, Fisher's exact test, and the chi-square test. The analysis of variance test was used to evaluate any possible correlation between the size and clinicopathologic features and the results. Results: A slight correlation was observed between the size and operative time (P = 0.004), but no correlation was observed between the size and the other parameters. Statistical analysis showed a significant correlation between the clinicopathologic patterns (nonfunctioning benign adrenal masses, Conn's adenoma, Cushing's adenoma, pheochromocytoma, adrenal cortical cancer, and other tumor metastasis) and the operative time (P = 0.011), but not with the other parameters. Conclusions: Laparoscopic adrenalectomy is also effective and safe for large lesions. The results of our series confirms that the risk of encountering an incidental adrenal cortical cancer is significantly increased for large lesions, and therefore, in these cases, additional attention is required to observe oncologic surgical principles. abstract_id: PUBMED:26709227 Laparoscopic adrenalectomy for large adrenal masses: Is it really more complicated? Laparoscopic treatment of large adrenal tumors is still questionable due to concern over the risk of malignancy as well as the technical difficulties. No exact dimensional cut-off has been described for laparoscopic adrenalectomy (LA). In this study, we reviewed our experience with LA for masses ≥ 8 cm and tried to determine the limitations of this surgery in this group of patients. Sixteen patients with adrenal mass ≥ 8 cm (Group 1) and 19 patients with adrenal mass &lt; 8 cm (Group 2) treated with transabdominal LA were included in this study. We analyzed operative time, intraoperative and postoperative complications and length of postoperative hospital stay with respect to tumor size and clinopathologic features. Mean maximum tumor diameters were 91.7 mm (range, 80-150 mm) and 52.4 mm (range, 35-73 mm) in Group 1 and Group 2, respectively. Operation time and blood loss were higher in Group 1 compared to Group 2, but these differences did not reach significant levels (p&gt;0.05). Conversion to an open procedure required in two patients, one from each group, because of the firm attachments of adrenal mass to the surrounding tissue. In conclusion, our study demonstrated that LA is a safe and feasible procedure for large lesions even up to 15 cm. The risk of finding incidental adrenal cortical cancer was significantly increased for large lesions in our series as in the literature; therefore, it is important to follow the strict oncological principles in these cases. abstract_id: PUBMED:26195876 Safety and feasibility of laparoscopic adrenalectomy: What is the role of tumour size? A single institution experience. Background: Although, there are studies in the literature having shown the feasibility and safety of laparoscopic adrenalectomy, there are still debates for tumour size and the requirement of the minimal invasive approach. Our hypothesis was that the use of laparoscopy facilitates minimally invasive resection of large adrenal tumours regardless of tumour size. Materials And Methods: Within 7 years, 149 patients underwent laparoscopic adrenalectomy at one institution. The patients were divided into two study groups according to tumour size. Group 1 included patients with adrenal tumours smaller than 5 cm and group 2 included larger than 5 cm. Patient demographics and clinical parameters, operative time, complications, hospital stay and final pathology were analysed. Statistical analyses of clinical and perioperative parameters were performed using Student's t-test and Chi-square tests. Results: There were 88 patients in group 1 and 70 in group 2. There were no significant differences between study groups regarding patient demographics, operative time, hospital stay, and complications. Estimated blood loss was significantly higher in group 2 (P = 0.002). The conversion to open rate was similar between study groups with 5.6% versus 4.2%, respectively. Pathology was similar between groups. Conclusion: Our study shows that the use laparoscopy for adrenal tumours larger than 5 cm is a safe and feasible technique. Laparoscopic adrenalectomy is our preferred minimally invasive surgical approach for removing large adrenal tumours. abstract_id: PUBMED:32098938 Laparoscopic adrenalectomy for giant adrenal tumours: Technical considerations and surgical outcome. Background: Giant adrenal tumours are tumours with size ≥6 cm. These are rare cancer associated with malignancy in 25% of cases. Patients And Methods: A retrospective review was conducted on the medical records of patients admitted to our high-volume centre of Careggi University Hospital with a giant adrenal tumour and submitted to adrenalectomy between January 2008 and December 2018. The group of patients who underwent to laparoscopic adrenalectomy was compared with a group of patients that was submitted to open adrenalectomy. Results: In the past 10 years, we performed about 245 adrenalectomies for benign and malignant adrenal tumours. Fifty (20.4%) of these were giant tumours. The medium size was 9.9 cm (7-22 cm). The mean age was 57 years (21-81 years). Thirty-four (68%) of these cancers were laparoscopically removed and 16 (32%) with an open approach. The surgical outcomes in these patients were optimal if compared to the group of patients submitted to open approach in terms of good pain control, hospital stay, mean operative time and bloodless. No difference was observed about post-operative complications in the two groups. The follow-up after 30 months for malignant tumours did not show local recurrences. Conclusion: Our results pinpoint the advantages of performing a laparoscopic adrenalectomy for giant adrenal tumours. The tumour size is only a predictive parameter of possible malignancy, and the laparoscopic approach is a safe and feasible method in terms of surgical and oncological, only if performed by expert surgeons and in high-volume centres. abstract_id: PUBMED:24862666 Laparoscopic adrenalectomy for large adrenal masses: single team experience. Introduction: Laparoscopic adrenalectomy is today considered the standard treatment for benign small adrenal tumors. An open question is the use of laparoscopy for large adrenal masses because of technical limitations and increased risk of malignancy. In this study we report our experience in laparoscopic adrenalectomy for adrenal masses larger than 6 cm. Methods: Between January 2010 and December 2013 we performed 41 laparoscopic adrenalectomy. Fourteen of 41 patients (34,1%) were submitted to laparoscopic adrenalectomy for lesion &gt;6 cm in size. All patients were submitted routinely to radiological and hormonal tests to indentify tumors characteristics. Results: The patients treated were 9 male and 5 female, the mean age was 55.6 years (range 38-74). The mean tumor size was 8.2 cm (range 6-14 cm) and the lesion were localized on right side in 8 patients and on the left side in 6 patients. The mean operative time was 181 min (range 145-240 min). Mean blood loss was 90 ml. No conversion to open surgery was required. Conclusion: Laparoscopic adrenalectomy offers better surgical outcomes than open adrenalectomy. Size criteria are, at the moment, the main subject discussed for the laparoscopic approach to adrenal tumors. In fact, size is an important variable in predicting malignancy. This experience and the results of literature suggest that laparoscopic approach is safe and feasible for adrenal masses larger than 6 cm with a longer operative time. In presence of local invasion or vascular infiltration laparoscopy is contraindicated. abstract_id: PUBMED:30560854 Are large adrenal neoplasms technical limitation for laparoscopic adrenalectomy? Aim: To study surgical features of laparoscopic adrenalectomy in patients with large adrenal neoplasms. Material And Methods: The results of 32 laparoscopic adrenalectomy procedures were analyzed in patients with adrenal neoplasms over 5 cm. The control group consisted of 67 patients with adrenal neoplasms up to 5 cm. Results: There were significant differences in duration of operations (96.3±13.44 min vs. 67.2±11.07 min; p&lt;0.05) and some postoperative variables. Postoperative morbidity was similar (9.4% vs. 7.5%; p&gt;0.05). Conclusion: Laparoscopic adrenalectomy for adrenal neoplasms from 5 to 8-9 cm is feasible, effective and safe surgical procedure. abstract_id: PUBMED:27576286 Giant adrenal myelolipoma. Right laparoscopic adrenalectomy Adrenal myelolipoma is a rare, benign, non-functioning tumor. It is composed by fat and hematopoietic tissues. We present the case of a 33-year-old woman with diagnosis of a 14 cm diameter non-functioning right adrenal incidentaloma, with imaging features suggestive of myelolipoma. Based on the benign nature of the tumor, laparoscopic resection was performed. Histopathology showed a myelolipoma, weighting 444 grams. The patient evolved with an uneventful postoperative period. Laparoscopic adrenalectomy for a giant myelolipoma was feasible and successful. abstract_id: PUBMED:36113368 Laparoscopic adrenalectomy for a giant adrenal teratoma: A case report and review of the literature. Introduction And Importance: Teratomas typically are benign gonadal neoplasms, arising from more than one embryonic germ layer. Extragonadal teratomas are rare and primary adrenal teratomas more so, with few documented cases. We present one such case, diagnosed via CT, resected via laparoscopic adrenalectomy, and confirmed on histology. To the best of our knowledge, this is the first case documented in the Caribbean. Case Presentation: A 38-year-old obese female with restrictive lung disease presented with right back/flank pain due to a non-functional 10.5 cm right adrenal mass on CT, likely a giant myelolipoma. Further radiologic review suggested this was instead a mature adrenal teratoma. She underwent a laparoscopic adrenalectomy and histology confirmed a mature adrenal teratoma. Clinical Discussion: Most adrenal tumours are incidentalomas and are usually benign adenomas. Primary adrenal teratomas account for 1 % of teratomas and 0.13 % of adrenal tumours. They may be mature or immature; the latter carries a greater risk of malignancy. Benign adrenal teratomas are typically non-functional and commonly mistaken for myelolipomas on imaging. Adrenalectomy is required due to the risk of malignant transformation. The laparoscopic approach depends on size, localized tissue invasion and technical considerations, but offers advantages for the patient if possible. Conclusions: Though uncommon, preoperative radiologic diagnosis of an adrenal teratoma is possible and should be completely resected after a functional workup. A laparoscopic adrenalectomy is preferred once this can be done safely, even when very large, with surgical and oncologic outcomes equivalent to an open approach combined with the known advantages of laparoscopic surgery. abstract_id: PUBMED:31754851 Risk factors for open conversion of lateral transperitoneal laparoscopic adrenalectomy: retrospective cohort study of the Spanish Adrenal Surgery Group (SASG). Background: Laparoscopic adrenalectomy has become the standard of care for treating adrenal tumors. Conversion from laparoscopic adrenalectomy to an open approach during surgery may be necessary in some cases. This study aimed to identify the risk factors for open conversion of laparoscopic adrenalectomy. Methods: Retrospective analysis of all consecutive patients undergoing lateral transperitoneal laparoscopic adrenalectomy in six endocrine surgery units of the Spanish Adrenal Surgery Group (SASG) between January 2005 and December 2017. Demographic, clinical, surgical, and histopathologic characteristics were recorded. Risk factors for conversion were assessed by logistic regression analysis. Results: Of a total of 865 patients included in the study, 58 (6.7%) required conversion to open surgery. In the univariate analysis, factors associated with conversion from laparoscopic to open adrenalectomy were body mass index (BMI) ≥ 30 kg/m2 (P = 0.002), previous abdominal surgery (P = 0.015), tumor size &gt; 5 cm (P = 0.001), and surgery for pheochromocytoma (P = 0.034). In the multivariate analysis, independent risk factors were BMI ≥ 30 kg/m2 [odds ratio (OR) 4.26, 95% confidence interval (CI) 2.81-8.75; P = 0.001], tumor size &gt; 5 cm (OR 10.15, 95% CI 4.24-28.31; P &lt; 0.001), and surgery for pheochromocytoma (OR 2.96, 95% CI 1.89-11.55; P = 0.015). Conclusions: Obesity, tumor size, and pheochromocytoma as the type of adrenal tumor were predictive factors for intraoperative conversion from laparoscopic to open adrenalectomy. Preoperative assessment of these characteristics should be valuable to clinicians in discussing conversion risk in patients and for surgical planning. abstract_id: PUBMED:25654914 Laparoscopic adrenalectomy of large adrenal lesions Introduction: In recent years, laparoscopic adrenalectomy has become the procedure of choice for small and benign adrenal lesions. With the experience gained in the laparoscopic approach, many surgeons have expanded the range of indications for laparoscopic adrenalectomy and nowadays, this approach is used for large adrenal lesions and in lesions suspicious for malignancy. However, great controversy still exists regarding the oncological safety of this approach and therefore, some surgeons still prefer using the open technique. Purpose: To assess the outcome and the oncological safety of Laparoscopic adrenalectomy of large adrenal lesions. Methods: A retrospective analysis was conducted of prospectively collected data of all consecutive patients who underwent laparoscopic adrenatectomy in our department for adrenal lesions of 6 cm or larger during the years 1995-2014. Results: A total of 50 patients, out of 230 who underwent laparoscopic adrenalectomy in our department, were included in the study. The main indications for surgery were pheochromocytoma, non-functional lesions and Cushing's syndrome. The surgical approach in all cases was the lateral transperitoneal approach. The final histopathological diagnoses in the majority of patients were adenoma and pheochromocytoma. Three patients were diagnosed with adrenocortical carcinoma. During a follow-up period of up to 5-years, no Local or distant recurrences were found. Conclusions: The laparoscopic approach for removal of adrenal lesions Larger than 6 cm is safe in respect of intra- and peri-operative morbidity as well as oncological safety. The true incidence of malignancy in large adrenal Lesions appears to be lower than that reputed in the Literature thus far. In high-volume centers the laparoscopic approach should be attempted for large adrenal lesions. Answer: The safety and effectiveness of laparoscopic adrenalectomy (LA) in relation to adrenal mass size have been evaluated in several studies, and the findings suggest that LA can be safely and effectively performed for large adrenal masses, although certain considerations must be taken into account. One study found a slight correlation between adrenal mass size and operative time, but no correlation with other parameters such as estimated blood loss, incidence of intraoperative and postoperative complications, and length of hospital stay. This suggests that while larger masses may require longer surgery times, they do not necessarily lead to increased complications or extended hospital stays (PUBMED:12429302). Another study supported these findings, demonstrating that LA is safe and feasible for large lesions up to 15 cm, with the caveat that the risk of incidental adrenal cortical cancer is higher for larger lesions, necessitating strict oncological principles during surgery (PUBMED:26709227). Further research indicated that LA for adrenal tumors larger than 5 cm is a safe and feasible technique, with no significant differences in operative time, hospital stay, and complications when compared to smaller tumors, although estimated blood loss was higher for larger tumors (PUBMED:26195876). Similarly, another study concluded that LA for giant adrenal tumors (≥6 cm) is advantageous in terms of pain control, hospital stay, operative time, and blood loss, with no observed difference in post-operative complications compared to an open approach (PUBMED:32098938). Additional studies have echoed these findings, suggesting that LA is safe and feasible for adrenal masses larger than 6 cm, with longer operative times being the main difference observed. However, in cases of local invasion or vascular infiltration, laparoscopy is contraindicated (PUBMED:24862666). Moreover, LA for adrenal neoplasms from 5 to 8-9 cm has been deemed feasible, effective, and safe, with similar postoperative morbidity to smaller neoplasms (PUBMED:30560854). In summary, while adrenal mass size may affect certain aspects of LA, such as operative time and estimated blood loss, the overall safety and effectiveness of the procedure are not significantly compromised for larger masses.
Instruction: Should anesthesia residents with a history of substance abuse be allowed to continue training in clinical anesthesia? Abstracts: abstract_id: PUBMED:20006259 Should anesthesia residents with a history of substance abuse be allowed to continue training in clinical anesthesia? The results of a survey of anesthesia residency program directors. Study Objective: To determine the experience, attitudes, and opinions of program directors regarding the reintroduction of residents in recovery from substance abuse into the clinical practice of anesthesiology. Design: Survey instrument. Setting: Anesthesia residency training programs in the United States. Measurements: After obtaining institutional review board approval, a list of current academic anesthesia residency programs in the United States was compiled. A survey was mailed to 131 program directors along with a self-addressed stamped return envelope to ensure anonymity. Returned surveys were reviewed and data compiled by hand, with categorical variables described as frequency and percentages. Main Results: A total of 91 (69%) surveys were returned, representing experience with 11,293 residents over the ten-year period from July of 1997 through June of 2007. Fifty-six (62%) program directors reported experience with at least one resident requiring treatment for substance abuse. For residents allowed to continue with anesthesia residency training after treatment, the relapse rate was 29%. For those residents, death was the initial presentation of relapse in 10% of the reported cases. 43% of the program directors surveyed believe residents in recovery from addiction should be allowed to attempt re-entry while 30% believe that residents in recovery from addiction should not. Conclusions: The practice of allowing residents who have undergone treatment for substance abuse to return to their training program in clinical anesthesia remains highly controversial. They are often lost to follow-up, making it difficult, if not impossible to determine if re-training in a different medical specialty decreases their risk for relapse. A comprehensive assessment of the outcomes associated with alternatives to re-entry into clinical anesthesia training programs is needed. abstract_id: PUBMED:3999300 Narcotics control in anesthesia training. Recent studies document that substance abuse is a significant problem among anesthesia personnel. We have developed a system to better control the accountability of narcotics and other potentially addictive drugs. The system consists of a three-phase approach: (1) an individual anesthesia cart/narcotics box; (2) computer analysis of drug usage; and (3) an anesthesia drug audit. A standard stock issue of drugs is maintained by each resident. Drugs are issued daily to those residents administering anesthesia. Each drug transaction is recorded by the resident according to the patient's name, hospital number, type and length of the surgical procedure, type and amount of drug used, and the amount of each drug discarded. A weekly computer-generated report shows individual usage trends for each drug and a summary of "high" users for that period. The computer does not "flag" an individual as a drug abuser, but monitors trends in controlled substance usage. Those residents having a significant alteration in their drug usage pattern that is not explained legitimately are comprehensively audited. abstract_id: PUBMED:19911617 The curse of discovery: pioneers of dental and medical anesthesia. The current practice of dental local anesthesia remains founded upon drugs and techniques that were developed over 100 years ago. 2009 marks the 125th anniversary of the pioneering use of cocaine as a topical anesthetic and the introduction of nerve block injections in the oral cavity and facial region. Four famous clinicians are recognized in this article, each for their unique contributions to "modern" local anesthesia (Carl Köller, Sigmund Freud and William Halsted) and general anesthesia (Horace Wells), with the recognition of the adverse impact of drug dependence on their personal lives, which fortunately did not preclude their positively revolutionizing the practices of dentists and physicians and their patients every since. abstract_id: PUBMED:18761254 One approach to the return to residency for anesthesia residents recovering from opioid addiction. Study Objective: There is a high rate of relapse among anesthesia residents attempting to re-enter clinical anesthesia training programs after completing treatment for opioid addiction. Individuals may return to clinical practice after a short period of treatment only to relapse into active addiction, and for the opioid addicted anesthesia resident, this often results in death. The objective of this study was to determine weather or not a period of time away from clinical practice after treatment would reduce the rate of relapse by allowing the individual to concentrate on recovery in the critical first year after treatment, during which the majority of relapses occur. Design: 5 residents identified as being addicted to a controlled substance were removed from residency training and offered treatment. Prior to returning to residency training they were required to complete a post-treatment program involving no less than 12 months of work in the anesthesia simulator, followed by a graded re-introduction into the clinical practice of anesthesia. Setting: Academic anesthesia practice in a large teaching hospital. Results: Of the 5 residents who participated in the program, 3 (60%) successfully completed their residency program and their 5 year monitoring contract, and entered the anesthesia workforce as attending anesthesiologists. Conclusions: The treatment of addicted physicians can be successful, and return of the highly motivated individual to the clinical practice of Anesthesiology is a realistic goal, but this reintroduction must be undertaken in a careful, stepwise fashion. A full understanding of the disease process, the potential for relapse, and the implications of too rapid a return to practice must be taken into careful consideration. abstract_id: PUBMED:31612693 Risk factors of inadequate emergence following general anesthesia with an emphasis on patients with substance dependence history. Background: This study aims to define the incidence and risk factors of both emergence agitation and hypoactive emergence in adult patients and substance-dependent patients following general anesthesia to elaborate on the risk factors and precise management of them. Methods: The study recruited 1,136 adult patients who received elective surgeries under general anesthesia for this prospective observational study. Inadequate emergence was determined according to the Richmond Agitation-Sedation Scale (RASS). Emergence agitation was defined as a RASS ≥ +1 point, and hypoactive emergence was defined as a RASS ≤ -2 points. Subgroup analyses were then conducted on patients with substance dependence. Results: Inadequate emergence in the post-anesthesia care unit (PACU) occurred in 20.3% of patients, including 13.9% with emergence agitation and 6.4% with hypoactive emergence. Ninety-five patients had a history of substance dependence. Compared to divorced patients, never-married and presently married patients, who underwent gynecological and thoracic surgeries, had a lower risk of agitation. Neurologic disorders, intraoperative blood loss, intraoperative morphine, and PACU analgesic drug administration were associated with increased agitation risk. Hypertension and psychological disorders, intraoperative opioids, and PACU Foley catheter fixation were associated with increased hypoactive emergence risk. Substance-dependent patients had higher risk for agitation (21.1%, P = 0.019) and hypoactive emergence (10.5%, P = 0.044). Conclusions: Inadequate emergence in PACU following general anesthesia is a significant problem correlated with several perioperative factors. Patients with a history of substance dependence appear to be more at risk of inadequate emergence than the general population. abstract_id: PUBMED:34529093 175 years of anesthesia and narcosis-Towards a "human right to unconsciousness" The Ether Day, a key moment in the history of mankind, commemorates its 175th anniversary on 16 October 2021. On that day the dentist William T. G. Morton successfully gave the first public ether anesthesia in Boston. From then on it was possible to save people from pain with justifiable risk and at the same time to protect them from psychological damage by inducing unconsciousness. The German philosopher Peter Sloterdijk, one of the most renowned and effective philosophers of our times, deduced that from then on humans, to some extent, had a right to unconsciousness when in psychophysical distress. This postulate unfolded from his concept of "anthropotechnics" developed around 1997, meaning the idea of treating human nature as an object of possible improvements. According to Sloterdijk, in favorable cases a synthesis of man and technology can result in a significant improvement of human capabilities in the sense of "enhancement", i.e. an increase, an improvement or even an expansion of intellectual, physical or psychological possibilities, as it were in a transgression of the human (so-called transhumanism). Man should go into vertical tension, i.e. strive for higher aims and exploit his inherent potential, he should not dwell in the horizontal. This is not meant as an appeal but as an imperative: "You must change your life!". In this context modern anesthesia may prove helpful: be operated on by others in order to undergo an enhancement. Or, in its most extreme form, the operation in the "auto-operational curved space", a person can even operate on himself as has been dramatically demonstrated by Rogozov, a young Russian physician and trainee surgeon who successfully performed a self-appendectomy under local anesthesia at the Novolazarevskaya Antarctic Station in 1961; however, the implementation of this idea is a long way off. On the one hand, many countries lack qualified personnel in sufficiently large numbers to perform even vital operations with patients under anesthesia. On the other hand, over the decades it has become clear that anesthesia is obviously beneficial for mankind in that it offers relief from pain and psychological stress but that it can also often show its dark side: substance abuse, use of anesthetics in torture and in executions. In addition, the role of anesthetics in resuscitation, palliative care, and allaying executions is unclear or controversial. Finally, the necessary formal legal steps to acknowledge a "human right to unconsciousness" have not yet been implemented. abstract_id: PUBMED:38106624 Family Anesthesia Experience: Improving Social Support of Residents Through Education of Their Family and Friends. Introduction: The prevalence of burnout among anesthesiology residents is 41%-51%. Burnout is associated with medical errors, physician turnover, and substance use disorder. Social support and wellness may reduce burnout, but a barrier is support persons' lack of understanding of an anesthesiologist's work demands. We developed the Family Anesthesia Experience (FAX) to help support persons best support their resident. Methods: FAX consisted of a 4-hour event with hands-on experience, didactics portion, and panel discussion. Participants learned about a typical day in the life of an anesthesiology resident, wellness, burnout, substance use disorder, and available support resources, and had hands-on experience with procedures. The panel discussion offered logistical information about anesthesiology residency and allowed support persons to ask panel members questions. A postevent survey collected feedback on the event. Results: Fifty-one participants (first-year anesthesiology residents and their support persons) attended the event. Eight of 11 residents (73%) and 32 of 40 support persons (80%) completed the survey. All enjoyed the event, would recommend it to other anesthesiology resident support persons, and felt the event would improve communication and support. Most learned a moderate (35%) to large amount (50%) from the event. Qualitative feedback suggested most support persons found the event helpful in improving their understanding of anesthesiology residents' work demands. Discussion: The FAX was well liked by participants. Although we did not assess specific knowledge gained and long-term effects of the 2022 event, evaluations of previous years' events suggest that the event improved participants' understanding of anesthesiology residents' work and stressors. abstract_id: PUBMED:16797432 Evidence of acute tolerance to remifentanil in intensive care but not in anesthesia. We report the case of a 19-year-old man with a drug abuse history, admitted to the intensive care unit for head and chest trauma, who experienced an acute tolerance to sedative and respiratory depression effects of remifentanil, which was given as the sole agent for sedation. He did not exhibit any signs of drug tolerance or intraoperative awareness during prolonged remifentanil-based anesthesia using propofol or sevoflurane as adjuvants. Several recent studies support the hypothesis of a possible involvement of N-methyl-d-aspartate glutamate receptors. The clinical relevance of this report is that if a patient with a previously acute tolerance to remifentanil during sedation undergoes long-term surgery, and propofol or sevoflurane is coadministered in a remifentanil-based anesthesia, the patient will not necessarily develop opioid tolerance. It is of interest for anesthesiologists, given the high frequency of patients with drug abuse history who are admitted to intensive care units, often sedated with remifentanil, who undergo anesthesia for emergency surgery. abstract_id: PUBMED:10504905 Illicit drugs and anesthesia. Substance abuse is the nation's number one health problem. With illicit drug use so prevalent, the anesthesia care team undoubtedly sees more people under the influence of illicit drugs. Cocaine, heroin, and marijuana are the drugs that are commonly used. Cocaine acts as an intense stimulant, heroin has profound sedative effects, and marijuana may cause various respiratory problems. Many times when drug users present for anesthesia, they will not admit to using illicit drugs, leaving the anesthetist to treat complications intraoperatively. This article discusses the history, street practices, pharmacodynamics, and anesthetic management of people using cocaine, heroin, and marijuana. This knowledge of how to treat patients abusing drugs will undoubtedly improve anesthetic outcomes. There are many illicit drugs used by people that anesthetists do not see frequently in the care of their patients, but the anesthetist should be aware of the common street drugs in use. abstract_id: PUBMED:35743754 Association of General Anesthesia and Neuraxial Anesthesia in Caesarean Section with Maternal Postpartum Depression: A Retrospective Nationwide Population-Based Cohort Study. Although cesarean section (CS) has become a common method of child delivery in recent decades, the choice between general anesthesia (GA) and neuraxial anesthesia (NA) for CS must be carefully considered. Depending on the type of anesthesia used in CS, a major outcome observed is the occurrence of postpartum depression (PPD). This study investigated the association between PPD risk and the anesthesia method used in CS by using data from three linked nationwide databases in Taiwan, namely, the National Health Insurance Research Database, the National Birth Reporting Database, and the National Death Index Database. After propensity score matching by baseline depressive disorders, maternal demographics, status at delivery, infant's health, maternal diseases during pregnancy, and age of partner, we included women who had natural births (n = 15,706), cesarean sections with GA (n = 15,706), and cesarean sections with NA (n = 15,706). A conditional logistic regression was used to estimate the odds ratios and 95% confidence intervals (CIs) of PPDs, including depression, sleep disorder, and medication with hypnotics or antidepressants, under anesthesia during CS. The prevalence rates of combined PPDs were 26.66%, 43.87%, and 36.30% in natural births, CS with GA, and CS with NA, respectively. In particular, the proportions of postpartum use of hypnotic drugs or antidepressants were 21.70%, 39.77%, and 31.84%, which were significantly different. The aORs (95% CIs) were 2.15 (2.05-2.25) for the included depressive disorders, 1.10 (1.00-1.21) for depression, 1.03 (0.96-1.11) for sleep disorder, and 2.38 (2.27-2.50) for medication with hypnotics or antidepressants in CS with GA compared with natural births. Women who underwent CS with GA had a significantly higher risk of depressive disorders and a higher need for antidepressants for sleep problems than those who underwent CS with NA. The risks of PPD were significantly associated with the anesthesia method, especially GA. Our results can assist physicians in carefully considering the appropriate anesthesia method for CS delivery, particularly with regard to postpartum drug abuse and drug safety. Answer: The question of whether anesthesia residents with a history of substance abuse should be allowed to continue training in clinical anesthesia is complex and controversial. According to a survey of anesthesia residency program directors, 43% believe residents in recovery from addiction should be allowed to attempt re-entry, while 30% do not support this idea (PUBMED:20006259). The relapse rate for residents allowed to continue training after treatment for substance abuse is reported to be 29%, with death being the initial presentation of relapse in 10% of cases (PUBMED:20006259). This highlights the significant risks associated with reintroducing recovering residents to the high-stress environment of clinical anesthesia, where access to controlled substances is part of the job. On the other hand, there is evidence that a structured, stepwise reintroduction to clinical practice can be successful. A study involving a post-treatment program that included at least 12 months of work in an anesthesia simulator, followed by a graded re-introduction into clinical practice, showed that 60% of the residents successfully completed their residency and entered the workforce as attending anesthesiologists (PUBMED:18761254). This suggests that with careful monitoring and support, some residents in recovery can successfully return to clinical anesthesia. The issue is further complicated by the fact that patients with a history of substance dependence may have different responses to anesthesia and pain management, which could potentially affect their performance as residents (PUBMED:31612693). Additionally, the high prevalence of burnout among anesthesiology residents, which is associated with substance use disorder, underscores the importance of social support and wellness programs to reduce the risk of relapse (PUBMED:38106624). In conclusion, while there are significant risks associated with allowing anesthesia residents with a history of substance abuse to continue training, there is also evidence that a carefully managed re-entry process can lead to successful outcomes. Each case should be evaluated individually, considering the resident's motivation, the support structures in place, and the potential impact on patient safety.
Instruction: Reduction of post-shift traffic injuries among gasoline station workers: Are they related to the reduction of occupational gasoline vapor exposure? Abstracts: abstract_id: PUBMED:15939386 Reduction of post-shift traffic injuries among gasoline station workers: Are they related to the reduction of occupational gasoline vapor exposure? Aims: Solvents exposure is known to impair psychomotor performances. With gasoline vapor recovery devices (GVRD) installed, this study should observe a decrease of traffic injuries after shift among gasoline workers. Methods: The occupational injury registry data of 1991-2000 in a petrochemical company containing 20,000 workers were analyzed. Cumulative injury rate (CIR) was calculated to estimate lifetime risk. The potential workday loss (PWDL) and the potential salary loss (PSL), which summed up the expected workdays and salary loss assuming a uniform retirement age of 65, were calculated for traffic injuries. Workers of gasoline sales were compared with those without gasoline exposure for commuting traffic injuries. The rate ratio (RR) was calculated based on a person-time data. Results: The CIR dropped 3.14 times during 1997-2000 for commuting accidents on the way back home compared with that during 1991-1992, and the PWDL and PSL dropped 25.2 and 21.5 times, respectively, which corresponded with the time of installments of GVRD. The RRs of commuting accidents on the way back home in the exposed group significantly decreased from 2.15 to 0.53. Conclusions: The decrease of commuting injuries on journey back home for gasoline workers were probably related to the effective reduction of solvent exposure. abstract_id: PUBMED:2050721 Gasoline-induced contact burns. Gasoline contact may cause significant full-thickness burn injuries. Systemic complications may result from the absorption of hydrocarbons through the skin. Regional neuromuscular absorption may produce transient or even permanent impairment. It is vital that the physician be aware of the possible full-thickness injuries and complications that may result from cutaneous exposure to gasoline. abstract_id: PUBMED:17357089 Short-term exposure of mice to gasoline vapor increases the metallothionein expression in the brain, lungs and kidney. Environmental airborne pollution has been repeatedly shown to affect multiple aspects of brain and cardiopulmonary function, leading to cognitive and behavioral changes and to the pronounced inflammatory response in the respiratory airways. Since in the cellular defense system the important role might have stress proteins-metallothionein (MT)-I and MT-II, which are involved in sequestration and dispersal of metal ions, regulation of the biosynthesis and activities of zinc-dependent transcription factors, as well as in cellular protection from reactive oxygen species, genotoxicity and apoptosis, in this study we investigated their expression in the brain, lungs and kidney, following intermittent exposure of mice to gasoline vapor. Control groups consisted of intact mice and of those closed in the metabolic chamber and ventilated with fresh air. The data obtained by immunohistochemistry showed that gasoline inhalation markedly upregulated the MTs expression in tissues which were directly or indirectly exposed to toxic components, significantly increasing the number of MT I+II positive cells in CNS (the entorhinal cortex, ependymal cells, astroglial cells in subventricular zone and inside the brain parenchyma, subgranular and CA1-CA3 zone of the dentate gyrus in hippocampus and macrophages-like cells in perivascular spaces), in the lungs (pneumocytes type I and type II) and in the kidneys (parietal wall of Bowman capsule, proximal and distal tubules). The data point to the protective and growth-regulatory effects of MT I + II on places of injuries, induced by inhalation of gasoline vapor. abstract_id: PUBMED:21094313 Gasoline prices and their relationship to drunk-driving crashes. This study investigates the relationship between changing gasoline prices and drunk-driving crashes. Specifically, we examine the effects of gasoline prices on drunk-driving crashes in Mississippi by several crash types and demographic groups at the monthly level from 2004 to 2008, a period experiencing great fluctuation in gasoline prices. An exploratory visualization by graphs shows that higher gasoline prices are generally associated with fewer drunk-driving crashes. Higher gasoline prices depress drunk-driving crashes among young and adult drivers, among male and female drivers, and among white and black drivers. Results from negative binomial regression models show that when gas prices are higher, there are fewer drunk-driving crashes, particularly among property-damage-only crashes. When alcohol consumption levels are higher, there are more drunk-driving crashes, particularly fatal and injury crashes. The effects of gasoline prices and alcohol consumption are stronger on drunk-driving crashes than on all crashes. The findings do not vary much across different demographic groups. Overall, gasoline prices have greater effects on less severe crashes and alcohol consumption has greater effects on more severe crashes. abstract_id: PUBMED:25511178 The relationship between gasoline price and patterns of motorcycle fatalities and injuries. Objective: Economic factors such as rising gasoline prices may contribute to the crash trends by shaping individuals' choices of transportation modalities. This study examines the relationship of gasoline prices with fatal and non-fatal motorcycle injuries. Methods: Data on fatal and non-fatal motorcycle injuries come from California's Statewide Integrated Traffic Records System for 2002-2011. Autoregressive integrated moving average (ARIMA) regressions were used to estimate the impact of inflation-adjusted gasoline price per gallon on trends of motorcycle injuries. Results: Motorcycle fatalities and severe and minor injuries in California were highly correlated with increasing gasoline prices from 2002 to 2011 (r=0.76, 0.88 and 0.85, respectively). In 2008, the number of fatalities and injuries reached 13,457--a 34% increase since 2002, a time period in which inflation-adjusted gasoline prices increased about $0.30 per gallon every year. The majority of motorcycle riders involved in crashes were male (92.5%), middle-aged (46.2%) and non-Hispanic white (67.9%). Using ARIMA modelling, we estimated that rising gasoline prices resulted in an additional 800 fatalities and 10,290 injuries from 2002 to 2011 in California. Conclusions: Our findings suggest that increasing gasoline prices led to more motorcycle riders on the roads and, consequently, more injuries. Aside from mandatory helmet laws and their enforcement, other strategies may include raising risk awareness of motorcyclists and investment in public transportation as an alternative transportation modality to motorcycling. In addition, universally mandated training courses and strict licensing tests of riding skills should be emphasised to help reduce the motorcycle fatal and non-fatal injuries. abstract_id: PUBMED:8630153 Severe burn injury from recreational gasoline use. This report describes the case of a 13-year-old boy who suffered severe burns with inhalation injury during the recreational use of gasoline. Gasoline sniffing is a popular form of solvent abuse for young children and adolescents in isolated, rural communities. In addition to the neurological and physiological complications of gasoline inhalation, gasoline sniffers face the significant threat of severe burn injury or death resulting from ignition of the gasoline vapor as well as from extensive physical contact with the constituents of liquid gasoline. abstract_id: PUBMED:15551950 Oxidative stress in station service workers Introduction: The aim of this study is to identify an oxidative stress in service station workers. Previous studies verified an increased incidence of leukemia and myeloma, however other authors haven't verified it. There are reports of nasal, pharyngeal, laryngeal, and lung cancer in service station workers. Our study wants to evaluate the oxidative balance in the fuel workers. Material And Methods: We studied 44 subjects with gasoline exposure and 29 control subjects. We determined the blood concentrations of Glutathione reduced and oxidized, Protein sulfhydrylic (PSH) Vitamine E, Vitamine C, Malondialdehyde, Protein oxidized (OX-PROT) and beta carotene. The t test was performed to analyze the differences between the means, the Chi square was used to evaluate the statistical significance of associations between variable categorical (redox index). The Anova test excluded the confusing effect of age, smoke and alcohol habit. Results: The mean age of the workers was 36.6 years, instead the control group was 38. In the workers Glutathione reduced, Vit. E and Beta carotene were lower than in the control subjects, this difference was statistically significant (p &lt; 0.01). The Malondialdehyde concentration was higher in the workers higher than in the control group, but this difference wasn't statistically significant. Discussion And Conclusions: Our data demonstrated Glutathione, Vit. E, and Beta carotene are useful to verify a reduction of the antioxidant activity. The only marker of the presence of oxidative injury that correlated to work exposure was the malondialdehyde. The redox index was surest marker. The limit of our study is the number of control group, it was little and lower than workers. Conclusively we believe it's useful to continue our studies and, if our results are going to be confirmed, we retain that stress oxidative determination would be verified in occupational medicine using these markers, especially to study exposure of the fuel workers who were investigated less and, in our opinion, would receive more attention. abstract_id: PUBMED:27116108 The Association of Gasoline Prices With Hospital Utilization and Costs for Motorcycle and Nonmotorcycle Motor Vehicle Injuries in the United States. Objectives: This study examined the association between gasoline prices and hospitalizations for motorcycle and nonmotorcycle motor vehicle crash (MVC) injuries. Methods: Data on inpatient hospitalizations were obtained from the 2001 to 2010 Nationwide Inpatient Sample. Panel feasible generalized least squares models were used to estimate the effects of monthly inflation-adjusted gasoline prices on hospitalization rates for MVC injuries and to predict the impact of increasing gasoline taxes. Results: On the basis of the available data, a $1.00 increase in the gasoline tax was associated with an estimated 8348 fewer annual hospitalizations for nonmotorcycle MVC injuries, and reduced hospital costs by $143 million. However, the increase in the gasoline tax was also associated with an estimated 3574 more annual hospitalizations for motorcycle crash injuries, and extended hospital costs by $73 million. Conclusions: This analysis of some existing data suggest that the increased utilization and costs of hospitalization from motorcycle crash injuries associated with an increase in the price of gasoline are likely to substantially offset reductions in nonmotorcycle MVC injuries. A policy decision to increase the gasoline tax could improve traffic safety if the increased tax is paired with public health interventions to improve motorcycle safety. abstract_id: PUBMED:9725684 Preventable burns associated with the misuse of gasoline. Gasoline is intended for use as a motor fuel, but the universal availability of gasoline in the home encourages misuse as a solvent, insecticide, accelerant or cleaning solution. The careless or inappropriate use of gasoline may result in burn injury. We examined the circumstance of gasoline-related injury in a population admitted to one burn centre to determine the potential for burn prevention efforts. A retrospective review of all burn admissions to one centre for the years 1978 to 1996 demonstrated hat 1011 of 4339 acute admissions (23.3%) were gasoline-related. This group had an average total burn size of 29.8% total body surface (TBSA) and an average full thickness injury of 14.4% TBSA. There were 144 fatalities resulting from gasoline-associated burn injury. Where such determination could be made, the use of gasoline was judged to be inappropriate or unsafe in 687 of 788 cases (87.1%). Ninety of 144 fatalities (62.5%) were associated with inappropriate or unsafe use of gasoline. The careless or inappropriate use of gasoline poses significant risk of burn injury. The indoor use of gasoline, as well as use of gasoline for purposes other than as a motor fuel, should be strongly discouraged. abstract_id: PUBMED:7443860 Gasoline immersion injury. Chemical burns and pulmonary complications are the most common problems encountered in the patient immersed in gasoline. Our patient demonstrated a 46-percent total-body-surface area, partial-thickness chemical burn. Although he did not develop bronchitis or pneumonitis, he did display persistent atelectasis, laryngeal edema, and subsequent upper airway obstruction. This had not previously been reported in gasoline inhalation injuries. Hydrocarbon hepatitis secondary to the vascular endothelial damage is apparently a reversible lesion with no reported long-term sequelae. Gasoline immersion injuries may be a series multisystem injury and require the burn surgeon to take a multisystem approach to its diagnosis and treatment. Answer: The reduction of post-shift traffic injuries among gasoline station workers appears to be related to the reduction of occupational gasoline vapor exposure. A study analyzing the occupational injury registry data of a petrochemical company with 20,000 workers from 1991-2000 found that the cumulative injury rate (CIR) for commuting accidents on the way back home dropped significantly after the installation of gasoline vapor recovery devices (GVRD). Specifically, the CIR decreased 3.14 times during 1997-2000 compared to 1991-1992, and the potential workday loss (PWDL) and potential salary loss (PSL) dropped 25.2 and 21.5 times, respectively. The rate ratios (RRs) of commuting accidents on the way back home in the exposed group also significantly decreased from 2.15 to 0.53. These findings suggest that the decrease in commuting injuries on the journey back home for gasoline workers was probably related to the effective reduction of solvent exposure (PUBMED:15939386).
Instruction: The end-of-life phase of high-grade glioma patients: dying with dignity? Abstracts: abstract_id: PUBMED:23335620 The end-of-life phase of high-grade glioma patients: dying with dignity? Background: In the end-of-life (EOL) phase, high-grade glioma (HGG) patients have a high symptom burden and often lose independence because of physical and cognitive dysfunction. This might affect the patient's personal dignity. We aimed to (a) assess the proportion of HGG patients dying with dignity as perceived by their relatives and (b) identify disease and care factors correlated with dying with dignity in HGG patients. Methods: We approached relatives of a cohort of 155 deceased HGG patients for the study. Participants completed a questionnaire concerning the EOL phase of the patient, covering several subthemes: (a) symptoms and signs, (b) health-related quality of life, (c) decision making, (d) place and quality of EOL care, and (e) dying with dignity. Results: Relatives of 81 patients participated and 75% indicated that the patient died with dignity. These patients had fewer communication deficits, experienced fewer transitions between health care settings in the EOL phase, and more frequently died at their preferred place of death. Relatives were more satisfied with the physician providing EOL care and reported that the physician adequately explained treatment options. Multivariate analysis identified satisfaction with the physician, the ability to communicate, and the absence of transitions between settings as most predictive of a dignified death. Conclusions: Physicians caring for HGG patients in the EOL phase should timely focus on explaining possible treatment options, because patients experience communication deficits toward death. Physicians should strive to allow patients to die at their preferred place and avoid transitions during the last month of life. abstract_id: PUBMED:30424921 End-of-Life Care for Patients with Glioma. Objective: To describe best practices and guidelines in symptom management at the end of life for adults with malignant glioma. Data Sources: Journal articles, evidence-based reviews, textbooks, and clinical guidelines. Conclusion: Symptom management is an essential element of end-of-life care that aims to preserve dignity and quality of life for patients with glioma and their family caregivers. Implications For Nursing Practice: Advance care planning using a holistic approach to the patient's symptoms experience and goals of care are necessary to develop, implement, and evaluate outcomes of an evidence-based plan of care tailored for each patient and family. abstract_id: PUBMED:31386028 Epilepsy in the end of life phase of brain tumor patients: a systematic review. Epileptic seizures are common in patients with primary or secondary malignant brain tumor. However, current knowledge on the occurrence of seizures during the end of life (EOL) phase of brain tumor patients is limited. Because symptom management with preservation of quality of life is of major importance for patients with a malignant brain tumor, particularly in the EOL, it is necessary to gain a deeper understanding of seizures and their management during this phase. We performed a systematic review of literature related to epilepsy in the EOL phase of brain tumor patients, based on the electronic resources PubMed, Embase, and Cinahl. The search yielded 442 unique records, of which 11 articles were eligible for further analysis after applying predefined inclusion criteria. Seizures occur relatively frequently in the EOL phase, particularly in patients with high-grade glioma. However, seizure management is often hampered by swallowing difficulties and impaired consciousness. Treatment decisions are largely dependent on expert opinion because a standardized approach for treating seizures in the terminal stage of brain tumor patients is still lacking. abstract_id: PUBMED:33898050 Caregiver perceptions of end-of-life care in patients with high-grade glioma. Background: Patients dying from high-grade gliomas (HGG) suffer from high symptom burden in the end-of-life (EoL) phase. Family caregivers are most informed about the patient's symptoms and disease course. The aim of this study is to assess caregiver perception on quality of EoL care of HGG patients. Methods: Caregivers prospectively participated in the Toolkit After-Death Bereaved Family Member Interview, part of the Toolkit of Instruments to Measure End-of-Life Care (TIME survey). This validated survey assesses EoL care in areas such as physical comfort and emotional support, advance care planning, focus on the individual, attention to family, and coordination of care. The quality of EoL care was measured by domain scores (0 = care was always optimal, 1 = care was always suboptimal) or with a 0-10 scale. Results: Of the 55 enrolled family caregivers, 44 completed the interview and rated the overall care high (8.90 ± 1.36/10), perceived that patients' wishes were respected (9.46 ± 0.95) and that they died in dignity (9.65 ± 0.98). Caregivers perceived high satisfaction with information and decision-making (0.18), advance care planning (0.19), focus on the individual (0.16), and care coordination (0.11). Attention to family (0.25) needed improvement. Only 41% of caregivers were confident that they knew what to do at the time of death and 46% felt that the healthcare team did not provide them with someone to turn to in distress. Conclusions: Caregivers reported high overall satisfaction with EoL HGG care, though attention to family and communication needed improvement. Focus should therefore be on improved caregiver communication to improve EoL care, caregiver burnout, and bereavement in HGG populations. abstract_id: PUBMED:26509292 Anaplastic gliomas: end-of-life care recommendations. Despite multimodal antitumor treatment, all patients with an anaplastic glioma will be confronted with incurability in due course and enter the end-of-life (EOL) phase; the period when the patients' condition declines and tumor-directed treatment is no longer effective. Although palliative care is important in all stages of the disease, it is of utmost importance in the EOL phase. The main goal of palliative care is to improve or maintain the quality of life of the patients and their relatives through the prevention and relief of suffering. This review focuses on different aspects of the EOL phase (symptoms and signs, EOL decision-making, advance care planning, organization of care and caregiver burden) and will provide recommendations to optimize palliative care. abstract_id: PUBMED:20511193 Symptoms and problems in the end-of-life phase of high-grade glioma patients. Despite multimodal treatment, it is not possible to cure high-grade glioma (HGG) patients. Therefore, the aim of treatment is not only to prolong life, but also to prevent deterioration of health-related quality of life as much as possible. When the patient's condition declines and no further tumor treatment seems realistic, patients in the Netherlands are often referred to a primary care physician for end-of-life care. This end-of-life phase has not been studied adequately yet. The purpose of this study was to explore specific problems and needs experienced in the end-of-life phase of patients with HGG. We retrospectively examined the files of 55 patients who received treatment in our outpatient clinic and died between January 2005 and August 2008. The clinical nurse specialist in neuro-oncology maintained contact on a regular basis with (relatives of) HGG patients once tumor treatment for recurrence was no longer given. She systematically asked for signs and symptoms. The majority of the patients experienced loss of consciousness and difficulty with swallowing, often arising in the week before death. Seizures occurred in nearly half of the patients in the end-of-life phase and more specifically in one-third of the patients in the week before dying. Other common symptoms reported in the end-of-life phase are progressive neurological deficits, incontinence, progressive cognitive deficits, and headache. Our study demonstrates that HGG patients, unlike the general cancer population, have specific symptoms in the end-of-life phase. Further research is needed in order to develop specific palliative care guidelines for these patients. abstract_id: PUBMED:26948363 Palliative care at the end-of-life in glioma patients. The end-of-life (EOL) phase of patients with a glioma starts when symptom prevalence increases and antitumor treatment is no longer effective. During the EOL phase, care is primarily aimed at reducing symptom burden while maintaining quality of life as long as possible without inappropriate prolongation of life. Palliative care during the EOL phase also involves complex medical decisions for the prevention and relief of suffering. We discuss the prevalence and treatment of the most common EOL symptoms, decision making in the EOL phase, the organization of EOL care, and the role of the patient's caregiver. Treating disease-specific symptoms, such as impaired consciousness, seizures, focal neurologic deficits and cognitive disturbances, is a major concern during the EOL phase, as these symptoms may interfere with EOL decision making. Advance care planning is aimed at reaching consensus about possible EOL decisions between all participants, respecting the values of patients and their informal caregivers. In order to prevent the possibility that the patient becomes incompetent to make informed decisions, we recommend initiating EOL conversations at a relatively early stage in the disease course. abstract_id: PUBMED:32232622 End-of-life care for glioma patients; the caregivers' perspective. Purpose: Gliomas are primary brain tumors with a life-limiting course of disease, and the last weeks of life are often characterized by neurological deficits that affect communication and personality. End-of-life treatment in this patient group therefore requires specific approaches. To date, little data is available on patients' and caregivers' needs and experiences in the last phase of the disease. Methods: In this observational study, relatives of patients treated at the University Hospital Zurich, Switzerland and deceased 2015-2017 due to glioma progression were contacted to complete a structured questionnaire assessing caregivers experience within the last weeks of the disease. Results: The survey was sent to 120 relatives of deceased patients with a glioma (WHO grades II-IV) (median patient age: 62 years; 73.8% male). Forty-three questionnaires were returned (37.7%). Approximately half of the patients were taken care of at home in the last 4 weeks of the disease, mainly with the assistance of in-home nursing care, of which eventually 14 patients (63.6%) died at home. While caregivers reported high satisfaction with medical and nursing care, psychological support was rated average to poor on a 10-point scale. Free comment fields were used widely, revealing open questions and needs of the relatives. Conclusions: This study illustrates the need for a more patient-centered end-of-life care including higher psychological support mechanisms, and a higher inclusion and consideration of relatives and caregivers into the care focus. Earlier discussion of end-of-life preferences could prevent hospitalizations in the last phase of life and could improve patients' and caregivers' quality of life. abstract_id: PUBMED:36271873 Palliative care and end-of-life care in adults with malignant brain tumors. Background: This systematic review provides updated insights, from the published literature in the past 5 years, based on the 2017 European Association of Neuro-Oncology (EANO) guidelines for palliative care in adults with malignant brain tumors. It provides an overview of palliative care options, including during the end-of-life phase for patients with malignant brain tumors. Methods: A systematic literature search was conducted from 2016 to 2021 focusing on four main topics: (1) symptom management, (2) caregiver needs, (3) early palliative care, and (4) care in the end-of-life phase. An international panel of palliative care experts in neuro-oncology synthesized the literature and reported the most relevant updates. A total of 140 articles were included. Results: New insights include that: Hippocampal avoidance and stereotactic radiosurgery results in a lower risk of neurocognitive decline in patients with brain metastases; levetiracetam is more efficacious in reducing seizures than valproic acid as first-line monotherapy antiseizure drug (ASD) in glioma patients; lacosamide and perampanel seem well-tolerated and efficacious add-on ASDs; and a comprehensive framework of palliative and supportive care for high-grade glioma patients and their caregivers was proposed. No pharmacological agents have been shown in randomized controlled trials to significantly improve fatigue or neurocognition. Conclusions: Since the 2017 EANO palliative care guidelines, new insights have been reported regarding symptom management and end-of-life care, however, most recommendations remain unchanged. Early palliative care interventions are essential to define goals of care and minimize symptom burden in a timely fashion. Interventional studies that address pain, fatigue, and psychiatric symptoms as well as (the timing of) early palliative care are urgently needed. abstract_id: PUBMED:26404860 Antiepileptic drug treatment in the end-of-life phase of glioma patients: a feasibility study. Background: During the end-of-life (EOL) phase of glioma patients, a rapid deterioration in neurological functioning may interfere with the oral intake of antiepileptic drugs (AEDs). We aimed to assess the feasibility of non-oral AED treatment in an out-of-hospital setting according to an expert-based guideline. Methods: Glioma patients with a history of epilepsy, in whom further antitumor therapy was considered to be no longer meaningful, were recruited at two Dutch hospitals. As soon as swallowing difficulties developed, the patient's caregiver administered prophylactic treatment with buccal clonazepam. Acute seizures were treated with intranasal midazolam. We evaluated the adherence to the study medication, seizure prevalence, and caregiver's satisfaction. Results: Of the 34 patients who were approached, 25 gave consent to participate and 23 had died at the end of the study. Thirteen of 19 patients (68.4 %) who had developed swallowing difficulties showed adherence to the study protocol. Thirteen patients used prophylactic buccal clonazepam, of which eight patients remained seizure-free until death. Six patients received treatment with intranasal midazolam at least once. In all patients, seizure control was reached. None of the patients needed to be transferred to the hospital due to recurrent seizures. All caregivers were to some degree satisfied with the use of the study medication. Conclusions: Our results demonstrate that it is feasible to treat seizures with a combination of non-oral benzodiazepines in the EOL phase of glioma patients, as it seems to provide an important level of comfort among caregivers to be able to manage seizures at home. Answer: The end-of-life (EOL) phase for high-grade glioma (HGG) patients is a critical period characterized by a high symptom burden and often a loss of independence due to physical and cognitive dysfunction, which can affect the patient's personal dignity (PUBMED:23335620). A study involving relatives of deceased HGG patients found that 75% of relatives felt the patient died with dignity, with factors such as fewer communication deficits, fewer transitions between health care settings, and dying at the preferred place being associated with a dignified death. Satisfaction with the physician providing EOL care and adequate explanation of treatment options were also predictive of a dignified death (PUBMED:23335620). Symptom management is an essential element of EOL care for glioma patients, aiming to preserve dignity and quality of life. Advance care planning using a holistic approach to the patient's symptoms and goals of care is necessary to develop and evaluate outcomes of an evidence-based plan of care tailored for each patient and family (PUBMED:30424921). Seizures are a common issue during the EOL phase for brain tumor patients, and their management can be challenging due to swallowing difficulties and impaired consciousness. A standardized approach for treating seizures in the terminal stage is lacking, and treatment decisions often rely on expert opinion (PUBMED:31386028). Caregivers' perceptions of EOL care in HGG patients indicate high overall satisfaction, with patients' wishes being respected and a sense of dying in dignity. However, areas such as attention to family and communication with caregivers need improvement, as only a minority of caregivers felt prepared for the patient's death or had someone to turn to in distress (PUBMED:33898050). Palliative care recommendations for anaplastic gliomas during the EOL phase focus on improving or maintaining quality of life through the prevention and relief of suffering, addressing symptoms, EOL decision-making, advance care planning, organization of care, and caregiver burden (PUBMED:26509292). In summary, the EOL phase for HGG patients can be managed in a way that respects dignity, with a focus on symptom management, clear communication, advance care planning, and support for both patients and caregivers. However, there is a need for improved caregiver communication and standardized approaches to certain medical issues, such as seizure management, to further enhance the quality of EOL care (PUBMED:23335620; PUBMED:30424921; PUBMED:31386028; PUBMED:33898050; PUBMED:26509292).
Instruction: Musculoskeletal disease research: should we analyze the joint or the person? Abstracts: abstract_id: PUBMED:8823682 Musculoskeletal disease research: should we analyze the joint or the person? Objective: To illustrate newly developed statistical methods in analysis of correlated binary outcome data in musculoskeletal (MSK) disease. Methods: We applied 3 alternative statistical approaches to evaluate the relation of several risk factors to presence of knee osteoarthritis using data from the Framingham Osteoarthritis Study. The methods were (1) an ordinary logistic regression model using each knee as an independent unit of observation; (2) an ordinary logistic regression model treating each person rather than the knee as the unit of analysis; and (3) generalized estimating equation (GEE) and polychotomous logistic regression (PCHLE) using each knee as the unit of analysis but accounting for the correlation between fellow knees. We discuss the advantages and disadvantages of each method with respect to validity, precision, and interpretability. Results: The GEE and PCHLE models had clear advantages. They simultaneously evaluated the effects of person specific and knee specific risk factors, increased precision, enhanced the interpretability of variables, and provided new insights about how risk factors act. Conclusion: While the choice of statistical approach depends critically on the scientific question of interest, the GEE and PCHLE approaches will often be optimal in assessments of factors associated with MSK conditions affecting multiple correlated sites within the body, especially when the interest of the study focuses on site specific risk factors. abstract_id: PUBMED:36086396 Estimation of Maximum Shoulder and Elbow Joint Torques Based on Demographics and Anthropometrics. Repetitive movements that involve a significant shift of the body's center of mass can lead to shoulder and elbow fatigue, which are linked to injury and musculoskeletal disorders if not addressed in time. Research has been conducted on the joint torque individuals can produce, a quantity that indicates the ability of the person to carry out such repetitive movements. Most of the studies surround gait analysis, rehabilitation, the assessment of athletic performance, and robotics. The aim of this study is to develop a model that estimates the maximum shoulder and elbow joint torque an individual can produce based on anthropometrics and demographics without taking a manual measurement with a force gauge (dynamometer). Nineteen subjects took part in the study which recorded maximum shoulder and elbow joint torques using a dynamometer. Sex, age, body composition parameters, and anthropometric data were recorded, and relevant parameters which significantly contributed to joint torque were identified using regression techniques. Of the parameters measured, body mass index and upper forearm volume predominantly contribute to maximum torque for shoulder and elbow joints; coefficient of determination values were between 0.6 and 0.7 for the independent variables and were significant for maximum shoulder joint torque (P&lt;0.001) and maximum elbow joint torque (P&lt;0.005) models. Two expressions illustrated the impact of the relevant independent variables on maximum shoulder joint torque and maximum elbow joint torque, using multiple linear regression. Coefficient of determination values for the models were between 0.6 and 0.7. The models developed enable joint torque estimation for individuals using measurements that are quick and easy to acquire, without the use of a dynamometer. This information is useful for those employing joint torque data in biomechanics in the areas of health, rehabilitation, ergonomics, occupational safety, and robotics. Clinical Relevance- The rapid estimation of arm joint torque without the direct force measurement can help occupational safety with the prevention of injury and musculoskeletal disorders in several working scenarios. abstract_id: PUBMED:15755408 Derangement, osteoarthritis, and rheumatoid arthritis of the temporomandibular joint: implications, diagnosis, and management. Temporomandibular joint (TMJ) dysfunction is often believed to bea young person's malady. However, geriatric patients also present with clinical findings of TMJ clicking, locking, crepitation, limited opening, and pain. With our aging population and the high prevalence of rheumatic and musculoskeletal diseases in the elderly, it is important to understand the etiopathogenesis, clinical presentation, and management of derangement, rheumatoid arthritis, and osteoarthritis of the TMJ. Although arthritis of the TMJ usually causes only mild-to-moderate dysfunction in older patients, they present challenges related to medication use and comorbidity. This article presents the most recent understanding and therapeutic protocols for patient diagnosis and management. abstract_id: PUBMED:14578061 Lower limb joint moment during walking in water. Purpose: Walking in water is a widely used rehabilitation method for patients with orthopedic disorders or arthritis, based on the belief that the reduction of weight in water makes it a safer medium and prevents secondary injuries of the lower-limb joints. To our knowledge, however, no experimental data on lower-limb joint moment during walking in water is available. The aim of this study was to quantify the joint moments of the ankle, knee, and hip during walking in water in comparison with those on land. Method: Eight healthy volunteers walked on land and in water at a speed comfortable for them. A video-motion analysis system and waterproof force platform were used to obtain kinematic data and to calculate the joint moments. Results: The hip joint moment was shown to be an extension moment almost throughout the stance phase during walking in water, while it changed from an extension- to flexion-direction during walking on land. The knee joint moment had two extension peaks during walking on land, whereas it had only one extension peak, a late one, during walking in water. The ankle joint moment during walking in water was considerably reduced but in the same direction, plantarflexion, as that during walking on land. Conclusions: The joint moments of the hip, knee, and ankle were not merely reduced during walking in water; rather, inter-joint coordination was totally changed. abstract_id: PUBMED:17013823 Impact of multiple joint problems on daily living tasks in people in the community over age fifty-five. Objective: To establish the prevalence of multiple joint problems and their impact on everyday tasks. Methods: A random sample of the UK population was assessed using a postal questionnaire. Data on overall joint pain, swelling and stiffness, and activities of daily living were obtained from 16,222 individuals &gt;55 years of age (86% response rate). Prevalence estimates of joint problems were established and the impact of multiple joint pathologies on common physical tasks was determined. Results: Single joint involvement was unusual (median joint involvement 4). Although the knee was the most frequently involved joint (220.30 per 1,000), isolated knee pathology accounted for only 1 in 11 patients with knee pain. Although single joint disorders increased the risk of experiencing functional difficulty, this risk was substantially increased with multiple joint problems: individuals with knee and feet problems were 14 times more likely to experience difficulty standing and walking than those without knee problems (odds ratio [OR] 14.50); knee and hip problems increased the risk &gt;12 times (OR 12.43) whereas knee, back, feet, and hip involvement increased the risk 60 times (OR 62.41). Conclusion: Multiple-site joint problems are much more common than single joint problems. Although individual joint problems have a considerable impact on a person's functional ability, this risk is substantially increased when other joints are involved. With the increasing burden associated with the aging population, it is essential that the management of joint pain be considered in light of the impact of multiple, rather than single, joint problems. abstract_id: PUBMED:32875774 Research advances in pathomechanism and treatment of joint contracture Joint contracture is one of the common musculoskeletal disorders. It has seriously disturbed patients' activities of daily living in various aspects. The pathogenesis of it is eager to explore to distinct degree. Nowadays the thickeness and fibrosis of joint capsular is redarded as the major reason to joint contracture. It is reported that excessive fibroblasts and myofibroblasts activity, collagen hyperplasia, and extracellular matrix (ECM) deposition in these fibrotic condtions lead to the contracture. In addition, upregulators of myofibroblast and collagen synthesis, transforming growth factor-beta 1 (TGF-β1), and connective tissue growth factor (CTGF) were shown to be increased. Altered levels of cytokines were also thought to play a role in this process as elevated levelsof tumor necrosis factor-α(TNF-α), matrix metalloproteinases(MMPs) and abnormal distribution tissue inhibitors of MMPs(TIMPs) were demonstrated in contracted capsules. At present, the methods for clinical treatment of joint contracture mainly include two major categories:stretching therapy, physical factor therapy, exercise therapy, botulinum toxin injection and other non-surgical treatments, arthroscopic lysis, open lysis, and other surgical treatments. Surgical treatment is performed when non-surgical treatment is difficult to achieve further improvement. It has a good effect on mild to moderate joint contracture, but it is difficult to completely restore joint activity for serious joint contracture. Although clinical treatment methods are diverse, the clinical effects are staggered and the effectiveness of their treatment is controversial. Joint contracture is an important challenge faced by orthopedics and rehabilitation physicians, therapists and patients. The review summarized the pathogenesisand treatment of joint contracture and provided a theoretical basis for clinical diagnosis and treatment. abstract_id: PUBMED:11394613 Athletic activity after joint replacement. The first decade of the 21st century has been declared the "Bone and Joint Decade" by 35 nations and 44 states in the United States as of March 2001. It is not surprising that Americans are interested in musculoskeletal disease and the treatment of bone and joint disorders because our population is aging, the prevalence of arthritic joints is increasing, and senior Americans are demonstrating a strong desire to stay active in activities of daily living and athletics. One of the most successful treatments for painful arthritic joints, which limit activity, is total joint replacement, which predictably relieves pain and improves function. Much has been written about the technical aspects of total joint arthroplasty. Less has been written about safe and appropriate activities for patients who have had joint replacement operations. This article evaluates athletic activity after joint replacement by reviewing the orthopaedic literature and surveying members of The Hip Society, The Knee Society, and The American Shoulder and Elbow Surgeons Society. The authors have developed consensus recommendations for appropriate athletic activity for patients who have had joint replacement operations. This article is intended to serve as a guide for orthopaedic surgeons and primary care physicians who give patients recommendations for athletic activity after joint replacement. This article is also intended to stimulate further research in the area of athletic activity after total joint arthroplasty. abstract_id: PUBMED:36031592 Extracorporeal shockwave therapy in an immobilized knee model in rats prevents progression of joint contracture. Joint immobilization, which ensures rest and accelerates tissue recovery in musculoskeletal disorders, often causes joint contracture, for which there is still no effective prevention. To address this, we investigated the effects of extracorporeal shockwave therapy (ESWT) in preventing joint contracture, in a unilaterally immobilized knee rat model. Under general anesthesia, ESWT (0.25 mJ/mm2 , 3000 shot, 4 Hz, 3 days/week) was administered from 1 day after immobilization up to 2, 4, and 6 weeks. The immobilized control group received general anesthesia without ESWT. We evaluated joint angle, tissue elasticity, and gene and protein expression related to fibrosis, inflammation, and angiogenesis in the joint capsule. Relative to the control, the ESWT group had greater joint angle at 4 and 6 weeks, and lower posterior-capsule elasticity at 6 weeks. In the ESWT group, at 6 weeks, gene expression of collagen type I (col1α1), connective tissue growth factor (CTGF), and α-smooth muscle actin (α-SMA) was significantly downregulated, whereas interleukin-6 (IL-6) and hypoxia-inducible factor-1α (HIF-1α) gene expression was upregulated, relative to that in the control. Compared with that in the control, at 4 and 6 weeks, the ratio of CTGF+ cells was significantly lower in the ESWT group; at 4 weeks, the ESWT group had significantly fewer CD68+ cells in the adhesion area, and at 6 weeks, significantly more blood vessels. Statement of Clinical Significance: In a rat model, ESWT counteracted fibrosis, suppressed macrophage infiltration, and promoted neovascularization, reducing elasticity, and increasing joint range-ofmotion. ESWT offers a potential new strategy to prevent progression in joint contracture. abstract_id: PUBMED:35124618 Analysis of maximum joint moment during infant lifting-up motion. Background: Infant care activities can induce musculoskeletal disease. However, little is known about the biomechanical joint load during lifting-up of an infant. Objective: The aim of this study was to investigate normalized maximum moment during lifting-up of infant dummies weighing 4.6 kg, 7.6 kg, and 9.8 kg. Methods: Six healthy young subjects participated in our study. All subjects performed lifting-up activities of dummies to shoulder height with their feet apart and natural postures in their comfortable speed. Three-dimensional reflective marker trajectories and ground reaction forces were used as input to calculate joint moments using a full body musculoskeletal model. Joint moments were normalized by each subject's body mass. Friedman's test was performed to compare mean differences of normalized joint moments for lifting up three dummy weights. Results: Lumbar joint had the greatest normalized joint moment. Lumbar and hip extension moments were significantly increased with dummy weight (P&lt; 0.05). In contrast, knee extension and ankle plantarflexion moment were not significantly affected by dummy weight (P&gt; 0.05). Conclusions: These results indicate that the lumbar joint plays the most important role in infant lifting-up motion and that the load of lumbar and hip joint should be reduced when lifting a heavier infant. These results could contribute to the development of an effective lifting strategy and an assisting device for lifting an infant. abstract_id: PUBMED:38324785 Effectiveness of Facet Joint Nerve Blocks in Managing Chronic Axial Spinal Pain of Facet Joint Origin: A Systematic Review and Meta-Analysis. Background: Chronic axial spinal pain is one of the major causes of disability. Literature shows that spending on low back and neck pain and musculoskeletal disorders continues to escalate, not only with disability, but also with increasing costs, accounting for the highest amount of various disease categories. Based on the current literature utilizing controlled diagnostic blocks, facet joints, nerve root dura, and sacroiliac joints have been shown as potential sources of spinal pain. Therapeutic facet joint interventional modalities of axial spinal pain include radiofrequency neurotomy, therapeutic facet joint nerve blocks, and therapeutic intraarticular injections. Objective: The objective of this systematic review and meta-analysis is to evaluate the effectiveness of facet joint nerve blocks as a therapeutic modality in managing chronic axial spinal pain of facet joint origin. Study Design: A systematic review and meta-analysis of randomized controlled trials (RCTs) and observational studies utilizing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Methods: The available literature on facet joint nerve blocks in axial spinal pain was reviewed. The quality assessment criteria utilized were the Cochrane review criteria to assess risk of bias, the Interventional Pain Management Techniques - Quality Appraisal of Reliability and Risk of Bias Assessment (IPM-QRB) for randomized therapeutic trials, and the Interventional Pain Management Techniques - Quality Appraisal of Reliability and Risk of Bias Assessment for Nonrandomized Studies (IPM-QRBNR) for nonrandomized studies. The evidence was graded according to Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) assessment criteria. The level of evidence was based on best evidence synthesis with modified grading of qualitative evidence from Level I to Level V. A comprehensive literature search of multiple databases from 1966 to July 2023, including manual searches of the bibliography of known review articles was performed. Quality assessment of the included studies and best evidence synthesis were incorporated into qualitative and quantitative evidence synthesis. Outcome Measures: The primary outcome measure was the proportion of patients with significant relief and functional improvement of greater than 50% of at least 3 months. Duration of relief was categorized as short-term (less than 6 months) and long-term (greater than 6 months). Results: This assessment identified 8 high-quality and one moderate quality RCTs and 8 high quality and 4 moderate quality non-randomized studies with application of spinal facet joint nerve blocks as therapeutic modalities. However, based on the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) assessment, only 3 of the 21 studies showed high levels of evidence and clinical applicability, with 11 studies showing moderate levels of GRADE evidence and clinical applicability. Limitations: Despite the availability of multiple studies, the paucity of literature is considered as the major drawback. Based on Grading of Recommendations, Assessment Development, and Evaluations (GRADE) assessment, only 3 of the 21 studies showed high levels of evidence and clinical applicability. Conclusion: Based on the present systematic review and meta-analysis with 9 RCTs and 12 non-randomized studies, the evidence is Level II with moderate to strong recommendation for therapeutic facet joint nerve blocks in managing spinal facet joint pain. Answer: Musculoskeletal disease research often faces the challenge of deciding whether to focus on analyzing individual joints or the person as a whole. This decision is critical as it can influence the validity, precision, and interpretability of the study results. In the context of knee osteoarthritis, three alternative statistical approaches were applied to evaluate the relation of several risk factors: (1) an ordinary logistic regression model using each knee as an independent unit of observation, (2) an ordinary logistic regression model treating each person rather than the knee as the unit of analysis, and (3) generalized estimating equation (GEE) and polychotomous logistic regression (PCHLE) using each knee as the unit of analysis but accounting for the correlation between fellow knees. The GEE and PCHLE models had clear advantages as they simultaneously evaluated the effects of person-specific and knee-specific risk factors, increased precision, enhanced the interpretability of variables, and provided new insights about how risk factors act (PUBMED:8823682). The choice of statistical approach depends critically on the scientific question of interest. However, the GEE and PCHLE approaches will often be optimal in assessments of factors associated with musculoskeletal conditions affecting multiple correlated sites within the body, especially when the interest of the study focuses on site-specific risk factors (PUBMED:8823682). This suggests that while individual joint analysis can be informative, considering the person as a whole, particularly when multiple joints are affected, can provide a more comprehensive understanding of musculoskeletal diseases and their impact on individuals.
Instruction: Is procedural sedation with propofol acceptable for complex imaging? Abstracts: abstract_id: PUBMED:23649207 Is procedural sedation with propofol acceptable for complex imaging? A comparison of short vs. prolonged sedations in children. Background: Concerns exist in the community of non-anesthesiologist sedation providers regarding the appropriateness of prolonged sedations using propofol for outpatient procedures. Objective: To investigate interventions required, completion rate and resource use in prolonged vs. short sedations using propofol in outpatients. Materials And Methods: We reviewed retrospectively 213 children sedated with propofol by a non-anesthesiologist sedation service. Cohorts were composed a priori of children sedated for ≥1 h and &lt;1 h. Comparisons were made regarding need for interventions, sedation duration, sedation completion to discharge time, and procedural completion rate. Results: Most sedations were for MRI (87.5% short vs. 94.5% prolonged) with no statistically significant difference in overall need for interventions (75.2% prolonged vs. 65.4% short) nor completion to discharge times (30.7 ± 11.5 min [prolonged] vs. 30.3 ± 11.7 min [short]) between both groups. One child failed to complete the intended scan. No one required endotracheal intubation or unplanned admission. Conclusion: Prolonged outpatient sedations with propofol conducted by appropriately trained non-anesthesiology sedation providers appears effective for imaging procedures with no increase in interventions or increased resource burden compared to short sedations. This information can assist all stakeholders in determining scope of practice and guidelines for moderately longer pediatric sedations with propofol. abstract_id: PUBMED:30375140 A retrospective comparison of propofol to dexmedetomidine for pediatric magnetic resonance imaging sedation in patients with mucopolysaccharidosis type II. Background: Mucopolysaccharidosis type II patients are reported to have an elevated incidence of difficult airway. Propofol is a commonly used sedative for magnetic resonance imaging in pediatric patients, but patients who receive it may exhibit dose-dependent upper airway obstruction and respiratory depression. Dexmedetomidine also provides adequate procedural sedation with a relatively low risk of airway obstruction. Accordingly, we introduced the use of dexmedetomidine in our practice to reduce the risk of airway obstruction during magnetic resonance imaging procedures. Aims: The aim of this study was to evaluate the incidence of artificial airway interventions in patients sedated with propofol and compare it to that in patients sedated with dexmedetomidine in patients with mucopolysaccharidosis type II during magnetic resonance imaging procedures. Methods: All mucopolysaccharidosis type II patients undergoing magnetic resonance imaging at our institution between April 2014 and February 2018 were included in this study. The patients were divided into two groups according to whether they were managed before and after the introduction of dexmedetomidine: those who were sedated with propofol (group P) and those who were sedated with dexmedetomidine (group D). Results: Forty-six sedations were performed in 27 patients. Artificial airway interventions were significantly more frequent during propofol-based than dexmedetomidine-based sedation: 14 of 32 (43.8%) in group P and 1 of 14 (7.1%) in group D (odds ratio, 10.11; 95% confidence interval, 1.18-86.85; P = 0.018). Time to awake and time to discharge were similar between groups. Changes in hemodynamic variables also did not significantly differ between groups. Conclusion: Dexmedetomidine provides an adequate level of sedation and is associated with lower rates of artificial airway interventions compared to propofol. Therefore, dexmedetomidine may offer advantages for preserving the native airway compared to propofol when administered during magnetic resonance imaging scans in patients with mucopolysaccharidosis type II. abstract_id: PUBMED:24918806 Comparison of midazolam and propofol for sedation in pediatric diagnostic imaging studies. Objective: This study aims to compare the efficacy of propofol and midazolam in terms of adverse effect potentials and to determine the appropriate strategy for pediatric procedural sedation. Methods: A total of 200 pediatric patients (aged &lt; 14 years) undergoing diagnostic procedures were recruited for this nonrandomized prospective controlled cohort study. The patients were assigned to 2 treatment arms: either propofol (Group 1: IV bolus dose of 2 mg/kg during a 2-minute period, IV maintenance dose of 100 mcg/kg/min) or midazolam (Group 2: IV bolus dose of 0.15 mg/kg during a period of 2 to 3 minutes) to achieve sedation. Demographic data, body weight, and clinical status of the patients were evaluated and recorded. The vital signs and sedation levels (ie, Ramsay sedation scale scores) were evaluated and recorded, as well as the complications detected and medications administered in 10-minute intervals throughout the sedation procedure. Findings between the study arms were compared. Results: Arterial blood pressures decreased significantly in both groups (P = 0.001). The patients in Group 1 experienced a greater difference in diastolic blood pressure (P = 0.001) than those in Group 2. Sedation scores in Group 1 were more favorable (P = 0.014) and reached the appropriate sedation level in a shorter time than those in Group 2 (P = 0.010). Likewise, recovery time of patients was shorter in Group 1 than in Group 2 (P = 0.010). Hypoxia was found to be more common in the propofol group, but the difference was not significant (P = 0.333). Conclusion: Propofol seems to be more effective, achieve the appropriate sedation level more quickly, and provide a faster onset of sedation than midazolam in pediatric procedural sedation and analgesia. Propofol is preferred for imaging studies (computed tomography and magnetic resonance imaging) to reduce the occurrence of undesired motion artefacts. Although both drugs are safe to use for sedation before pediatric imaging procedures, propofol is preferred with appropriate preparation. abstract_id: PUBMED:28488002 Experience with the use of propofol for radiologic imaging in infants younger than 6 months of age. Background: There is an increased risk associated with procedural sedation of infants younger than 6 months of age. The use of propofol for radiologic imaging of this age group is not well studied. Objective: We hypothesize that adverse events are higher in the infant population receiving propofol for radiologic imaging. Materials And Methods: A retrospective chart review was undertaken of 304 infants younger than 6 months old who received propofol for procedural sedation from October 2012 to February 2015. Patient demographics, propofol dosing, sedation-related adverse events and interventions were collected. Serious adverse events were defined as laryngospasm, aspiration, the need for admission, cardiac arrest or death. Results: Procedural sedation for radiologic imaging was successful in 301/304 (99%) of infants using propofol. Of these 304 patients, 130 (42.8%) patients were female, and 240 of the 304 (79%) were between 3 and 6 months of age. The majority of patients (172/304 [56.6%]) were American Society of Anesthesiologists-Physical Status Class II. There were 57 sedation-related, minor adverse events in 39 out of 304 (12.8%) patients. Thirteen of the 304 (4.3%) patients had 14 serious adverse events, with airway obstruction the most common. Eighty interventions were required in 56/304 (18.4%) patients. The most common interventions were continuous positive airway pressure (CPAP) in 25/304 patients (8.2%) and jaw thrust in 15/304 (4.9%). The median induction propofol dose was 4.7 mg/kg. A need for an increase in the propofol infusion rate during the procedure was noted in 162/304 (53.3%) infants. No significant predictors of sedation-related adverse events were detected. Conclusion: Propofol can be used for radiologic imaging of infants younger than 6 months of age with a high success rate. Practitioners should be mindful of significantly higher dosing requirements and a higher incidence of airway events, which can be easily identified and managed by a team of experienced sedation providers. abstract_id: PUBMED:29439610 Outpatient Procedural Sedation of Patients With Autism Spectrum Disorders for Magnetic Resonance Imaging of the Brain Using Propofol. Objective: To quantify the number of personnel, time to induce and complete sedation using propofol for outpatient magnetic resonance imaging (MRI) of the brain, and the frequency of serious adverse events (SAEs) in children with autism spectrum disorder (ASD) compared with children without ASD. Results: Baseline characteristics were the same between both groups. Overall sedation success was 99%. Although most children were sedated with ≤3 providers, 10% with ASD needed ≥4 providers (P = .005). The duration of sedation was less for the ASD group compared with the non-ASD group (49 minutes vs 56 minutes, P = .005). There was no difference in SAE frequency between groups (ASD 14% vs non-ASD 16%, P = .57). Conclusion: Children with ASD can be sedated for brain MRI using propofol with no increased frequency of SAEs compared with children without ASD. Sedation teams should anticipate that 10% of children with ASD may need additional personnel before propofol induction. abstract_id: PUBMED:22793966 The effect of "multiphase sedation" in the course of computed tomography and magnetic resonance imaging on children, parents and anesthesiologists. Background And Objectives: We aimed to investigate the effect on children undergoing Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), their parents and attending anesthesiologist of "multiphase sedation" which we define as "the intended sedation level achieved with one or more agents through the same or different routes with more than one administration". Material And Methods: One hundred children and their parents were randomly allocated to one of two study groups. In phase 1; in Group I the patients were given midazolam (0.5mg.kg(-1)) in 5 mL fruit juice, and the ones in control group (Group II) were given only fruit juice. After intravenous (iv) cannulation; in phase II, boluses of propofol were given to achieve the adequate sedation for imaging. Anxiety scores of children and their parents were recorded using Oucher scale and STAI, respectively, and parental satisfaction was evaluated by visual analogue scale (VAS). The number of attempts for iv cannulation, length of time for preparation, and amount of hypnotics were recorded. Results: Anxiety state of children was similar between groups before premedication, but later it was lower in Group I. Before procedure, STAI score of parents was similar and later it was lower in Group I. Parental satisfaction in Group I was higher. The number of attempts for iv cannulation and required propofol dose was less in Group I. Conclusion: "Multiphase sedation" procedure provides children to feel less pain and anxiety, and decreases parental anxiety while increasing their satisfaction. It supplies a comfortable and safe sedation, as it provides a short and problem-free preparation process for the attending anesthetist as well. abstract_id: PUBMED:27687378 Comparing the efficacy and safety between propofol and dexmedetomidine for sedation in claustrophobic adults undergoing magnetic resonance imaging (PADAM trial). Study Objective: To determine the efficacy of sedation with dexmedetomidine compared to propofol for claustrophobic adults undergoing magnetic resonance imaging (MRI) in our institution. Design: Randomized, prospective, double-blinded study. Setting: University-based tertiary referral center. Patients: Thirty claustrophobic adults with American Society of Anesthesiologists physical status I and II who were planned for MRI. Interventions: Patients were randomly assigned to target-controlled infusion propofol or dexmedetomidine loading followed by maintenance dose for procedural sedation. Measurements And Main Results: The primary end point was adequate reduction in patient anxiety levels to allow successful completion of the MRI sequence. Both methods of sedation adequately reduced anxiety levels in visual analog scale scores and Spielberger Strait Test Anxiety Inventory (P&lt;.001). Dexmedetomidine required a longer time to achieve anxiolysis, 7.36minutes (SD, 2.59), and required increasing maintenance dose to induce sleep compared to 10.71minutes (SD, 4.63) for propofol. In terms of image quality, 2 patients (16.67%) in the dexmedetomidine group were satisfactory, whereas all with propofol were graded as good to excellent. Adverse effects were seen in patients sedated with dexmedetomidine with number needed to harm 8 for hypotension and 15 for bradycardia compared to none recorded in the propofol arm. There was no significant difference in patient satisfaction scores or home readiness after the MRI. Conclusions: Both dexmedetomidine and propofol can effectively reduce anxiety levels of claustrophobic adults undergoing MRI, but dexmedetomidine takes longer to achieve adequate anxiolysis and sleep and may have an effect on image quality. Hypotension and bradycardia are common adverse effects observed with dexmedetomidine. abstract_id: PUBMED:16510620 Successful implementation of a radiology sedation service staffed exclusively by pediatric emergency physicians. Objective: As the number of diagnostic imaging studies performed has increased, the demand for sedation in support of these radiologic tests has also increased. Our objectives were to (1) assess the safety and efficacy of a radiology sedation service that is staffed exclusively by pediatric emergency medicine (PEM) physicians, (2) determine the frequency and the type of commonly performed pediatric imaging studies that require procedural sedation, and (3) assess the average duration of procedural sedation for commonly performed radiologic studies. Methods: We conducted a retrospective observational study of patient encounters in 2004 involving procedural sedation to facilitate diagnostic imaging. We are a university-affiliated group of PEM physicians that provide a radiology sedation service during weekdays at a freestanding urban children's hospital. Results: The sedation service participated in 1285 patient encounters during the study period. Deep sedation was provided to 1027 patients. Moderate sedation was administered to 258 patients. Procedural sedation times for the most frequently performed imaging studies ranged from 5 to 183 minutes. Agents that were used to provide deep sedation were pentobarbital (with midazolam, fentanyl, or both) in 65% of cases, propofol in 31%, and ketamine (with or without midazolam) in 4%. Moderate sedation was achieved with chloral hydrate in 86% and oral diazepam in 14% of the cases. A total of 99.1% of the imaging studies were completed successfully. Six imaging studies were aborted because of failed sedation or occurrence of adverse event. Five patients who were deemed high risk on their presedation evaluation were referred electively for general anesthesia. Conclusions: Our data suggest that a dedicated sedation team in support of diagnostic imaging services, staffed exclusively by PEM physicians, can be a successful clinical enterprise. The service consumes significant resources and physician time. abstract_id: PUBMED:24859263 Provision of deep procedural sedation by a pediatric sedation team at a freestanding imaging center. Background: Freestanding imaging centers are popular options for health care systems to offer services accessible to local communities. The provision of deep sedation at these centers could allow for flexibility in scheduling imaging for pediatric patients. Our Children's Sedation Services group, comprised of pediatric critical care medicine and pediatric emergency medicine physicians, has supplied such a service for 5 years. However, limited description of such off-site services exists. The site has resuscitation equipment and medications, yet limited staffing and no proximity to hospital support. Objective: To describe the experience of a cohort of pediatric patients undergoing sedation at a freestanding imaging center. Materials And Methods: A retrospective chart review of all sedations from January 2012 to December 2012. Study variables include general demographics, length of sedation, type of imaging, medications used, completion of imaging, adverse events based on those defined by the Pediatric Sedation Research Consortium database and need for transfer to a hospital for additional care. Results: Six hundred fifty-four consecutive sedations were analyzed. Most patients were low acuity American Society of Anesthesiologists physical class ≤ 2 (91.8%). Mean sedation time was 55 min (SD ± 24). The overwhelming majority of patients (95.7%) were sedated for MRI, 3.8% for CT and &lt;1% (three patients) for both modalities. Propofol was used in 98% of cases. Overall, 267 events requiring intervention occurred in 164 patient encounters (25.1%). However, after adjustment for changes from expected physiological response to the sedative, the rate of events was 10.2%. Seventy-five (11.5%) patients had desaturation requiring supplemental oxygen, nasopharyngeal tube or oral airway placement, continuous positive airway pressure or brief bag valve mask ventilation. Eleven (1.7%) had apnea requiring continuous positive airway pressure or bag valve mask ventilation briefly. One patient had bradycardia that resolved with nasopharyngeal tube placement and continuous positive airway pressure. Fifteen (2.3%) patients had hypotension requiring adjustment of the sedation drip but no fluid bolus. Overall, there were six failed sedations (0.9%), defined by the inability to complete the imaging study. There were no serious adverse events. There were no episodes of cardiac arrest or need for intubation. No patient required transfer to a hospital. Conclusion: Sedation provided at this freestanding imaging center resulted in no serious adverse events and few failed sedations. While this represents a limited cohort with sedations performed by predominately pediatric critical care medicine and pediatric emergency medicine physicians, these findings have implications for the design and potential scope of practice of outpatient pediatric sedation services to support community-based pediatric imaging. abstract_id: PUBMED:30858081 Procedural Sedation Outside the Operating Room and Potential Neurotoxicity: Analysis of an At-Risk Pediatric Population. Objectives: To determine the characteristics of children who met the risk criteria for potential neurotoxicity defined by the US Food and Drug Administration (FDA; 2016 warning) in a procedural sedation (PS) service. Study Design: A single-center retrospective review of all infants and children aged &lt;3 years receiving PS outside the operating room from 2014 to 2016. Demographics, duration of, and the reason for PS were analyzed. Results: A total of 2950 patients with 3653 sedation encounters were included. Median age was 19 (range, 11-26) months. Most PS (86.4%) were for magnetic resonance imaging (MRI). The median number of sedation procedures per patient was 1 (25th-75th: 1-7), and median duration of sedation was 72 minutes (25th-75th: 55-98 minutes). Forty patients (1.4%) required prolonged sedations &gt;3 hours, in a single encounter (median, [25th-75th] of 196 minutes [185-214 minutes]), and 298 patients (10.1%) had multiple sedation exposures during the study period. Overall, 327 patients, 11.1% (95% confidence interval, 10.0%-12.3%) required repeated and/or prolonged sedation. The most common reasons for repeated or prolonged sedation were MRI of the brain and neurologic concerns. Conclusions: Multiple and prolonged PS commonly occurs outside the operating room in this young and potentially vulnerable population. Although certain imaging cannot be avoided, other cases may have the potential to be delayed until the child is &gt;3 years old or to have alternate imaging that may not require prolonged PS. Family and provider awareness of the FDA warnings regarding potential neurotoxicity of sedation in all settings, both inside and outside the operating room, is critical. Answer: Yes, procedural sedation with propofol is acceptable for complex imaging. A study comparing short versus prolonged sedations in children found that prolonged outpatient sedations with propofol conducted by appropriately trained non-anesthesiology sedation providers were effective for imaging procedures with no increase in interventions or increased resource burden compared to short sedations (PUBMED:23649207). Additionally, propofol has been shown to be more effective, achieve the appropriate sedation level more quickly, and provide a faster onset of sedation than midazolam in pediatric procedural sedation and analgesia, making it preferred for imaging studies to reduce the occurrence of undesired motion artifacts (PUBMED:24918806). Furthermore, propofol can be used for radiologic imaging of infants younger than 6 months of age with a high success rate, although practitioners should be mindful of higher dosing requirements and a higher incidence of airway events, which can be managed by experienced sedation providers (PUBMED:28488002). Children with autism spectrum disorder (ASD) can also be sedated for brain MRI using propofol with no increased frequency of serious adverse events (SAEs) compared to children without ASD (PUBMED:29439610). However, it is important to note that while propofol is commonly used and generally effective, there are considerations to be made for certain patient populations. For instance, in patients with mucopolysaccharidosis type II, who have an elevated incidence of difficult airway, dexmedetomidine may offer advantages for preserving the native airway compared to propofol, as it is associated with lower rates of artificial airway interventions (PUBMED:30375140). In summary, procedural sedation with propofol is acceptable for complex imaging, provided that the sedation is conducted by trained providers and that patient-specific considerations, such as age and underlying conditions, are taken into account.
Instruction: Asian MODY: are we missing an important diagnosis? Abstracts: abstract_id: PUBMED:31360071 Maturity-onset diabetes of the young (MODY): current perspectives on diagnosis and treatment. Maturity-onset diabetes of the young (MODY) is characterized by autosomal dominant inheritance, onset before 25 years of age, absence of β-cell autoimmunity, and sustained pancreatic β-cell function. To date, mutations have been identified in at least 14 different genes, including six genes encoding proteins that, respectively, correspond to MODY subtypes 1-6: hepatocyte nuclear factor (HNF) 4α (HNF4α), glucokinase (GCK), HNF1α (HNF1α), pancreatic and duodenal homeobox 1 (PDX1), HNF1β (HNF1β), and neurogenic differentiation 1 (NEUROD1). Diagnostic tools based on currently available genetic tests can facilitate the correct diagnosis and appropriate treatment of patients with MODY. Candidates for genetic testing include nonobese subjects with hyperglycemia, no evidence of β-cell autoimmunity, sustained β-cell function, and a strong family history of similar-type diabetes among first-degree relatives. Moreover, identification of the MODY subtype is important, given the subtype-related differences in the age of onset, clinical course and progression, type of hyperglycemia, and response to treatment. This review discusses the current perspectives on the diagnosis and treatment of MODY, particularly with regard to the six major subtypes (MODY 1-6). abstract_id: PUBMED:17054605 Asian MODY: are we missing an important diagnosis? Aims: Maturity onset diabetes of the young (MODY) is a monogenic form of diabetes where correct diagnosis alters treatment, prognosis and genetic counselling. The first UK survey of childhood MODY identified 20 White, but no Asian children with MODY. We hypothesized that MODY causes diabetes in UK Asians, but is underdiagnosed. Methods: Children with dominant family histories of diabetes were recruited. Direct sequencing for mutations in the two most common MODY genes; HNF1A (TCF1) and GCK was performed in autoantibody-negative probands. We also compared MODY testing data for Asian and White cases from the Exeter MODY database, to 2001 UK census data. Results: We recruited 30 families and identified three Asian families with MODY gene mutations (two HNF1A, one GCK) and three White UK families (two HNF1A, one GCK). Heterozygous MODY phenotypes were similar in Asians and Whites. Only eight (0.5%) of 1369 UK referrals for MODY testing were known to be Asian, but in 2001 Asians represented 4% of the English/Welsh population and have a higher prevalence of diabetes. Conclusions: We identified three cases of childhood MODY in UK Asians and demonstrated reduced rates of MODY testing in Asians, which has negative implications for treatment. It is unclear why this is. MODY should be considered in autoantibody-negative Asian diabetes patients lacking evidence of insulin resistance. abstract_id: PUBMED:30259503 The importance of combined NGS and MLPA genetic tests for differential diagnosis of maturity onset diabetes of the young. Introduction: Maturity onset diabetes of the young (MODY) is a rare form of monogenic diabetes. Being clinically and genetically heterogeneous, it is often misdiagnosed as type 1 or type 2 diabetes, leading to inappropriate therapy. MODY is caused by a single gene mutation. Thirteen genes, defining 13 subtypes, have been identified to cause MODY. A correct diagnosis is important for the right therapy, prognosis, and genetic counselling. Material And Methods: Twenty-nine unrelated paediatric patients clinically suspected of having MODY diabetes were analysed using TruSight One panel for next-generation sequencing (NGS) and multiplex ligation-dependent probe amplification (MLPA) assay. Results: In this study we identified variants in MODY genes in 22 out of 29 patients (75.9%). Using two genetic tests, NGS and MLPA, we detected both single nucleotide variants and large deletions in patients. Most of the patients harboured a variant in the GCK gene (11/22), followed by HNF1B (5/22). The rest of the variants were found in the NEUROD1 and HNF1A genes. We identified one novel variant in the GCK gene: c.596T&gt;C, p.Val199Ala. The applied genetic tests excluded the suspected diagnosis of MODY in two patients and revealed variants in other genes possibly associated with the patient's clinical phenotype. Conclusions: In our group of MODY patients most variants were found in the GCK gene, followed by variants in HNF1B, NEUROD1, and HNF1A genes. The combined NGS and MLPA-based genetic tests presented a comprehensive approach for analysing patients with suspected MODY diabetes and provided a successful differential diagnosis of MODY subtypes. abstract_id: PUBMED:35533745 Monogenic diabetes in adults: A multi-ancestry study reveals strong disparities in diagnosis rates and clinical presentation. Aim: Identification of monogenic diabetes (MgD) conveys benefits for patients' care. Algorithms for selecting the patients to be genetically tested have been established in EuroCaucasians, but not in non-EuroCaucasian individuals. We assessed the diagnosis rate, the phenotype of MgD, and the relevance of selection criteria, according to ancestry in patients referred for a suspected MgD. Methods: Seven genes (GCK, HNF1A, HNF4A, HNF1B, ABCC8, KCNJ11, INS) were analyzed in 1975 adult probands (42% non-EuroCaucasians), selected on the absence of diabetes autoantibodies and ≥2 of the following criteria: age ≤40 years and body mass index &lt;30 kg/m2 at diagnosis, and a family history of diabetes in ≥2 generations. Results: Pathogenic/likely pathogenic variants were identified in 6.2% of non-EuroCaucasian and 23.6% of EuroCaucasian patients (OR 0.21, [0.16-0.29]). Diagnosis rate was low in all non-EuroCaucasian subgroups (4.1-11.8%). Common causes of MgD (GCK, HNF1A, HNF4A), but not rare causes, were less frequent in non-EuroCaucasians than in EuroCaucasians (4.1%, vs. 21.1%, OR 0.16 [0.11-0.23]). Using ethnicity-specific body mass index cutoffs increased the diagnosis rate in several non-EuroCaucasian subgroups. Conclusion: The diagnosis rate of MgD is low in non-EuroCaucasian patients, but may be improved by tailoring selection criteria according to patients'ancestry. abstract_id: PUBMED:11202217 Diagnosis of maturity-onset diabetes of the young in the pediatric diabetes clinic. The diagnosis of pediatric diabetes mellitus (DM) traditionally is not considered a diagnostic specialty. However, considerable heterogeneity in the etiology of DM is seen among children and adolescents, making the recognition of discrete subgroups of DM very important for determining prognosis and appropriate treatment. The subgroups that result in non-insulin-dependent DM in children are as follows: the 'honeymoon' phase of type 1 DM, type 2 DM, genetic syndromes accompanied by DM, and maturity-onset diabetes of the young (MODY). The relative prevalence of these different subgroups depends on the population being studied. In the UK, in pediatric clinics where most patients are Caucasian, MODY is over 10 times more prevalent than type 2 DM. However, type 2 DM would predominate in a clinic where most children are from populations with a high prevalence of this condition (e.g. Asian Indians). It should be emphasized that MODY comprises two discrete clinical syndromes: glucokinase diabetes and transcription factor diabetes, the latter of which results from mutations in the genes encoding hepatocyte nuclear factor (HNF)-1alpha, HNF-1beta, HNF-4alpha and insulin promoter factor-1. abstract_id: PUBMED:33572031 Treatment implications of a delayed diagnosis of maturity-onset diabetes of the young. Maturity-onset diabetes of the young (MODY) is a rare form of monogeneic diabetes that classically presents as non-insulin requiring diabetes with evidence of autosomal dominant inheritance in individuals who are typically young and lean. However, these criteria do not capture all cases and can also overlap with other types of diabetes. The hepatocyte nuclear factor-1 alpha (HNF1A) mutation is a common cause of MODY and is highly sensitive to sulphonylureas, which should be first-line therapy. Our case represents the diagnostic challenges of HNF1A MODY and the implications of a delayed diagnosis, which can lead to reduced success of sulphonylurea treatment. abstract_id: PUBMED:31704690 Absence of Islet Autoantibodies and Modestly Raised Glucose Values at Diabetes Diagnosis Should Lead to Testing for MODY: Lessons From a 5-Year Pediatric Swedish National Cohort Study. Objective: Identifying maturity-onset diabetes of the young (MODY) in pediatric populations close to diabetes diagnosis is difficult. Misdiagnosis and unnecessary insulin treatment are common. We aimed to identify the discriminatory clinical features at diabetes diagnosis of patients with glucokinase (GCK), hepatocyte nuclear factor-1A (HNF1A), and HNF4A MODY in the pediatric population. Research Design And Methods: Swedish patients (n = 3,933) aged 1-18 years, diagnosed with diabetes May 2005 to December 2010, were recruited from the national consecutive prospective cohort Better Diabetes Diagnosis. Clinical data, islet autoantibodies (GAD insulinoma antigen-2, zinc transporter 8, and insulin autoantibodies), HLA type, and C-peptide were collected at diagnosis. MODY was identified by sequencing GCK, HNF1A, and HNF4A, through either routine clinical or research testing. Results: The minimal prevalence of MODY was 1.2%. Discriminatory factors for MODY at diagnosis included four islet autoantibody negativity (100% vs. 11% not-known MODY; P = 2 × 10-44), HbA1c (7.0% vs. 10.7% [53 vs. 93 mmol/mol]; P = 1 × 10-20), plasma glucose (11.7 vs. 26.7 mmol/L; P = 3 × 10-19), parental diabetes (63% vs. 12%; P = 1 × 10-15), and diabetic ketoacidosis (0% vs. 15%; P = 0.001). Testing 303 autoantibody-negative patients identified 46 patients with MODY (detection rate 15%). Limiting testing to the 73 islet autoantibody-negative patients with HbA1c &lt;7.5% (58 mmol/mol) at diagnosis identified 36 out of 46 (78%) patients with MODY (detection rate 49%). On follow-up, the 46 patients with MODY had excellent glycemic control, with an HbA1c of 6.4% (47 mmol/mol), with 42 out of 46 (91%) patients not on insulin treatment. Conclusions: At diagnosis of pediatric diabetes, absence of all islet autoantibodies and modest hyperglycemia (HbA1c &lt;7.5% [58 mmol/mol]) should result in testing for GCK, HNF1A, and HNF4A MODY. Testing all 12% patients negative for four islet autoantibodies is an effective strategy for not missing MODY but will result in a lower detection rate. Identifying MODY results in excellent long-term glycemic control without insulin. abstract_id: PUBMED:35657281 Diagnosis and Management of Monogenic Diabetes in Pregnancy. Monogenic diabetes occurs in up to 3% of people with diabetes. Mutations in over 40 different genes are responsible. The most common genes affected are HNF1A, HNF4A, GCK, and HNF1B. Additionally, other types of diabetes with a genetic aetiology include neonatal diabetes and diabetes plus syndrome. Each of these genetic subtypes has a different phenotype and requires distinctive treatments. Due to the overlap of monogenic diabetes with type 1 and 2 diabetes and even gestational diabetes, they can often be misdiagnosed. During pregnancy, individual subtypes require treatment that is different from standard diabetes care, so recognition and prompt diagnosis of monogenic diabetes are important to avoid inadequate treatment. We describe the management of monogenic diabetes for the most significant subtypes, focussing on the impact on and management in pregnancy. A genetic diagnosis of diabetes can alter long-term treatment in those with diabetes. In pregnancy and the postnatal period, this can involve specific management changes determined by the gene affected and whether there is a fetal inheritance of the gene. Where inheritance of the genotype influences the outcomes, cell-free fetal testing will hopefully soon become a diagnostic tool for early recognition of fetal mutations. abstract_id: PUBMED:38162681 Whole-exome sequencing reveals novel variants of monogenic diabetes in Tunisia: impact on diagnosis and healthcare management. Introduction: Monogenic diabetes (MD) accounts for 3%-6% of all cases of diabetes. This prevalence is underestimated due to its overlapping clinical features with type 1 and type 2 diabetes. Hence, genetic testing is the most appropriate tool for obtaining an accurate diagnosis. In Tunisia, few cohorts of MD have been investigated until now. The aim of this study is to search for pathogenic variants among 11 patients suspected of having MD in Tunisia using whole-exome sequencing (WES). Materials and methods: WES was performed in 11 diabetic patients recruited from a collaborating medical center. The pathogenicity of genetic variation was assessed using combined filtering and bioinformatics prediction tools. The online ORVAL tool was used to predict the likelihood of combinations of pathogenic variations. Then, Sanger sequencing was carried out to confirm likely pathogenic predicted variants among patients and to check for familial segregation. Finally, for some variants, we performed structural modeling to study their impact on protein function. Results: We identified novel variants related to MD in Tunisia. Pathogenic variants are located in several MODY and non-MODY genes. We highlighted the presence of syndromic forms of diabetes, including the Bardet-Biedl syndrome, Alström syndrome, and severe insulin resistance, as well as the presence of isolated diabetes with significantly reduced penetrance for Wolfram syndrome-related features. Idiopathic type 1 diabetes was also identified in one patient. Conclusion: In this study, we emphasized the importance of genetic screening for MD in patients with a familial history of diabetes, mainly among admixed and under-represented populations living in low- and middle-income countries. An accurate diagnosis with molecular investigation of MD may improve the therapeutic choice for better management of patients and their families. Additional research and rigorous investigations are required to better understand the physiopathological mechanisms of MD and implement efficient therapies that take into account genomic context and other related factors. abstract_id: PUBMED:23878349 Maturity onset diabetes of the young: identification and diagnosis. Maturity-onset diabetes of the young (MODY) is a monogenic disorder that results in a familial, young-onset non-insulin dependent form of diabetes, typically presenting in lean young adults before 25 years. Approximately 1% of diabetes has a monogenic cause but this is frequently misdiagnosed as Type 1 or Type 2 diabetes. A correct genetic diagnosis is important as it often leads to improved treatment for those affected with diabetes and enables predictive genetic testing for their asymptomatic relatives. An early diagnosis together with appropriate treatment is essential for reducing the risk of diabetic complications in later life. Mutations in the GCK and HNF1A/4 A genes account for up to 80% of all MODY cases. Mutations in the GCK gene cause a mild, asymptomatic and non-progressive fasting hyperglycaemia from birth usually requiring no treatment. In contrast, mutations in the genes encoding the transcription factors HNF1A and HNF4A cause a progressive insulin secretory defect and hyperglycaemia that can lead to vascular complications. The diabetes in these patients is usually well controlled with sulphonylurea tablets although insulin treatment may be required in later life. In this review, we outline the key clinical and laboratory characteristics of the common and rarer causes of MODY with the aim of raising awareness of this condition amongst health-care scientists. Answer: The abstracts provided suggest that Maturity-onset diabetes of the young (MODY) may be underdiagnosed in Asian populations. MODY is a monogenic form of diabetes characterized by autosomal dominant inheritance, onset before 25 years of age, absence of β-cell autoimmunity, and sustained pancreatic β-cell function, with mutations identified in at least 14 different genes (PUBMED:31360071). Despite its distinct genetic and clinical features, MODY is often misdiagnosed as type 1 or type 2 diabetes, leading to inappropriate therapy (PUBMED:30259503). A study focusing on the UK Asian population hypothesized that MODY causes diabetes in UK Asians but is underdiagnosed. The study found that only 0.5% of UK referrals for MODY testing were known to be Asian, despite Asians representing 4% of the English/Welsh population and having a higher prevalence of diabetes. The study identified three cases of childhood MODY in UK Asians and demonstrated reduced rates of MODY testing in Asians, which has negative implications for treatment (PUBMED:17054605). Another study assessing the diagnosis rate and phenotype of monogenic diabetes (MgD) in adult probands found that pathogenic/likely pathogenic variants were identified in only 6.2% of non-EuroCaucasian patients compared to 23.6% of EuroCaucasian patients. The study suggested that the diagnosis rate of MgD is low in non-EuroCaucasian patients but may be improved by tailoring selection criteria according to patients' ancestry (PUBMED:35533745). These findings indicate that there is a need for increased awareness and testing for MODY in Asian populations to ensure accurate diagnosis and appropriate treatment. The underdiagnosis of MODY in Asians may be due to a lack of awareness among healthcare providers or a lack of access to genetic testing. It is important to consider MODY in autoantibody-negative Asian diabetes patients lacking evidence of insulin resistance (PUBMED:17054605). Moreover, the use of combined next-generation sequencing (NGS) and multiplex ligation-dependent probe amplification (MLPA) genetic tests can provide a comprehensive approach for analyzing patients with suspected MODY diabetes and aid in the differential diagnosis of MODY subtypes (PUBMED:30259503).